Schema (column name, type, value-length range):

dataset      string (length 0–82)
model_name   string (length 0–150)
paper_title  string (length 19–175)
paper_date   timestamp[ns]
paper_url    string (length 32–35)
code_links   list (length 1–1)
prompts      string (length 105–331)
answer       string (length 1–67)
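The schema above describes one row per reported benchmark result. As a minimal sketch of the row layout (field names taken from the schema; values copied from the first row in the listing below; the dict representation itself is an illustrative assumption, not the dataset's storage format), a single record could look like:

```python
# One row of this dataset, using the eight fields from the schema above.
# Values are copied verbatim from the first listed row (EVF-SAM on
# RefCOCOg-val).
record = {
    "dataset": "RefCOCOg-val",
    "model_name": "EVF-SAM",
    "paper_title": ("EVF-SAM: Early Vision-Language Fusion for "
                    "Text-Prompted Segment Anything Model"),
    "paper_date": "2024-06-28T00:00:00",
    "paper_url": "https://arxiv.org/abs/2406.20076v4",
    "code_links": ["https://github.com/hustvl/evf-sam"],
    "prompts": ("In the paper 'EVF-SAM: Early Vision-Language Fusion for "
                "Text-Prompted Segment Anything Model', what Overall IoU "
                "score did the EVF-SAM model get on the RefCOCOg-val dataset"),
    "answer": "76.8",
}

print(record["answer"])
```

Each row pairs a natural-language question about a benchmark number reported in a paper with its ground-truth answer string; note that answers keep their original formatting (percent signs, ± ranges, varying decimal precision), so any exact-match scorer would need to normalize strings first.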
RefCOCOg-val
EVF-SAM
EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
2024-06-28T00:00:00
https://arxiv.org/abs/2406.20076v4
[ "https://github.com/hustvl/evf-sam" ]
In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCOg-val dataset
76.8
CULane
FENetV1
FENet: Focusing Enhanced Network for Lane Detection
2023-12-28T00:00:00
https://arxiv.org/abs/2312.17163v6
[ "https://github.com/hanyangzhong/fenet" ]
In the paper 'FENet: Focusing Enhanced Network for Lane Detection', what F1 score did the FENetV1 model get on the CULane dataset
80.15
RSITMD
RemoteCLIP
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing
2023-06-19T00:00:00
https://arxiv.org/abs/2306.11029v4
[ "https://github.com/chendelong1999/remoteclip" ]
In the paper 'RemoteCLIP: A Vision Language Foundation Model for Remote Sensing', what Mean Recall score did the RemoteCLIP model get on the RSITMD dataset
50.52%
Inverse-Text
DeepSolo (ViTAEv2-S, TextOCR)
DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting
2023-05-31T00:00:00
https://arxiv.org/abs/2305.19957v2
[ "https://github.com/vitae-transformer/deepsolo" ]
In the paper 'DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting', what F-measure (%) - No Lexicon score did the DeepSolo (ViTAEv2-S, TextOCR) model get on the Inverse-Text dataset
68.8
AmsterTime
BoQ
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the AmsterTime dataset
63.0
MSCOCO
3SHNet
3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting
2024-04-26T00:00:00
https://arxiv.org/abs/2404.17273v1
[ "https://github.com/xurige1995/3shnet" ]
In the paper '3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial Self-Highlighting', what Image-to-text R@1 score did the 3SHNet model get on the MSCOCO dataset
85.8
Something-Something V2
TDS-CLIP-ViT-L/14(8frames)
TDS-CLIP: Temporal Difference Side Network for Image-to-Video Transfer Learning
2024-08-20T00:00:00
https://arxiv.org/abs/2408.10688v1
[ "https://github.com/BBYL9413/TDS-CLIP" ]
In the paper 'TDS-CLIP: Temporal Difference Side Network for Image-to-Video Transfer Learning', what Top-1 Accuracy score did the TDS-CLIP-ViT-L/14(8frames) model get on the Something-Something V2 dataset
73.4
DomainNet
VL2V-SD (CLIP, ViT-B/16)
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
2023-10-12T00:00:00
https://arxiv.org/abs/2310.08255v2
[ "https://github.com/val-iisc/VL2V-ADiP" ]
In the paper 'Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification', what Average Accuracy score did the VL2V-SD (CLIP, ViT-B/16) model get on the DomainNet dataset
62.79
Amazon Photo
GAT
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification
2024-06-13T00:00:00
https://arxiv.org/abs/2406.08993v2
[ "https://github.com/LUOyk1999/tunedGNN" ]
In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy score did the GAT model get on the Amazon Photo dataset
96.60 ± 0.33
MassSpecGym
SELFIES Transformer
MassSpecGym: A benchmark for the discovery and identification of molecules
2024-10-30T00:00:00
https://arxiv.org/abs/2410.23326v1
[ "https://github.com/pluskal-lab/massspecgym" ]
In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Top-1 MCES score did the SELFIES Transformer model get on the MassSpecGym dataset
33.28
SMAC MMM2_7m2M1M_vs_9m3M1M
DDN
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DDN model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset
90.34
ASQP
ChatGPT (gpt-3.5-turbo, zero-shot)
MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction
2023-05-22T00:00:00
https://arxiv.org/abs/2305.12627v1
[ "https://github.com/ZubinGou/multi-view-prompting" ]
In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R15) score did the ChatGPT (gpt-3.5-turbo, zero-shot) model get on the ASQP dataset
22.87
Human3.6M
SMPLer-L
SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation
2024-04-23T00:00:00
https://arxiv.org/abs/2404.15276v1
[ "https://github.com/xuxy09/smpler" ]
In the paper 'SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation', what Average MPJPE (mm) score did the SMPLer-L model get on the Human3.6M dataset
45.2
CAT2000
SUM
SUM: Saliency Unification through Mamba for Visual Attention Modeling
2024-06-25T00:00:00
https://arxiv.org/abs/2406.17815v2
[ "https://github.com/Arhosseini77/SUM" ]
In the paper 'SUM: Saliency Unification through Mamba for Visual Attention Modeling', what AUC score did the SUM model get on the CAT2000 dataset
0.888
ACMPS
EfficientNet-P
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the EfficientNet-P model get on the ACMPS dataset
0.9982
CIFAR-10
EDM-AOT
Improving Diffusion-Based Generative Models via Approximated Optimal Transport
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05069v1
[ "https://github.com/large-scale-kim/EDM-AOT" ]
In the paper 'Improving Diffusion-Based Generative Models via Approximated Optimal Transport', what FID score did the EDM-AOT model get on the CIFAR-10 dataset
1.73
CC3M-TagMask
TTD (TCL)
TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias
2024-03-30T00:00:00
https://arxiv.org/abs/2404.00384v2
[ "https://github.com/shjo-april/TTD" ]
In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (TCL) model get on the CC3M-TagMask dataset
65.5
MM-Vet
Dynamic-LLaVA-13B
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification
2024-12-01T00:00:00
https://arxiv.org/abs/2412.00876v2
[ "https://github.com/osilly/dynamic_llava" ]
In the paper 'Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification', what GPT-4 score did the Dynamic-LLaVA-13B model get on the MM-Vet dataset
37.3
OTB-2015
SAMURAI-L
SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
2024-11-18T00:00:00
https://arxiv.org/abs/2411.11922v2
[ "https://github.com/yangchris11/samurai" ]
In the paper 'SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory', what AUC score did the SAMURAI-L model get on the OTB-2015 dataset
0.715
RefCoCo val
HyperSeg
HyperSeg: Towards Universal Visual Segmentation with Large Language Model
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17606v2
[ "https://github.com/congvvc/HyperSeg" ]
In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what Overall IoU score did the HyperSeg model get on the RefCoCo val dataset
84.8
AISHELL-2
Paraformer
FunASR: A Fundamental End-to-End Speech Recognition Toolkit
2023-05-18T00:00:00
https://arxiv.org/abs/2305.11013v1
[ "https://github.com/alibaba-damo-academy/FunASR" ]
In the paper 'FunASR: A Fundamental End-to-End Speech Recognition Toolkit', what Word Error Rate (WER) score did the Paraformer model get on the AISHELL-2 dataset
5.73
GTSRB
SAG-ViT
SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers
2024-11-14T00:00:00
https://arxiv.org/abs/2411.09420v2
[ "https://github.com/shravan-18/SAG-ViT" ]
In the paper 'SAG-ViT: A Scale-Aware, High-Fidelity Patching Approach with Graph Attention for Vision Transformers', what F1 score did the SAG-ViT model get on the GTSRB dataset
99.58
CIFAR-10
TURTLE (CLIP + DINOv2)
Let Go of Your Labels with Unsupervised Transfer
2024-06-11T00:00:00
https://arxiv.org/abs/2406.07236v1
[ "https://github.com/mlbio-epfl/turtle" ]
In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the CIFAR-10 dataset
0.995
MAWPS
DeBERTa (PM + VM)
Math Word Problem Solving by Generating Linguistic Variants of Problem Statements
2023-06-24T00:00:00
https://arxiv.org/abs/2306.13899v1
[ "https://github.com/starscream-11813/variational-mathematical-reasoning" ]
In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the DeBERTa (PM + VM) model get on the MAWPS dataset
91.0
MATH
MetaMath 13B
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
2023-09-21T00:00:00
https://arxiv.org/abs/2309.12284v4
[ "https://github.com/meta-math/MetaMath" ]
In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 13B model get on the MATH dataset
22.5
Weather (96)
DiPE-Linear
Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting
2024-11-26T00:00:00
https://arxiv.org/abs/2411.17257v1
[ "https://github.com/wintertee/dipe-linear" ]
In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Weather (96) dataset
0.142
AISHELL-1
Paraformer
FunASR: A Fundamental End-to-End Speech Recognition Toolkit
2023-05-18T00:00:00
https://arxiv.org/abs/2305.11013v1
[ "https://github.com/alibaba-damo-academy/FunASR" ]
In the paper 'FunASR: A Fundamental End-to-End Speech Recognition Toolkit', what Word Error Rate (WER) score did the Paraformer model get on the AISHELL-1 dataset
4.95
LAVIB
FLAVR
LAVIB: A Large-scale Video Interpolation Benchmark
2024-06-14T00:00:00
https://arxiv.org/abs/2406.09754v2
[ "https://github.com/alexandrosstergiou/lavib" ]
In the paper 'LAVIB: A Large-scale Video Interpolation Benchmark', what PSNR score did the FLAVR model get on the LAVIB dataset
33.44
PubMed (48%/32%/20% fixed splits)
GESN
Addressing Heterophily in Node Classification with Graph Echo State Networks
2023-05-14T00:00:00
https://arxiv.org/abs/2305.08233v2
[ "https://github.com/dtortorella/addressing-heterophily-gesn" ]
In the paper 'Addressing Heterophily in Node Classification with Graph Echo State Networks', what 1:1 Accuracy score did the GESN model get on the PubMed (48%/32%/20% fixed splits) dataset
89.20 ± 0.34
ESOL
ChemBFN
A Bayesian Flow Network Framework for Chemistry Tasks
2024-07-28T00:00:00
https://arxiv.org/abs/2407.20294v1
[ "https://github.com/Augus1999/bayesian-flow-network-for-chemistry" ]
In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what RMSE score did the ChemBFN model get on the ESOL dataset
0.884
EgoExoLearn
Action anticipation baseline (co-training, with gaze)
EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World
2024-03-24T00:00:00
https://arxiv.org/abs/2403.16182v2
[ "https://github.com/opengvlab/egoexolearn" ]
In the paper 'EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World', what Accuracy score did the Action anticipation baseline (co-training, with gaze) model get on the EgoExoLearn dataset
45.45
Vid4 - 4x upscaling
EvTexture+
EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
2024-06-19T00:00:00
https://arxiv.org/abs/2406.13457v1
[ "https://github.com/dachunkai/evtexture" ]
In the paper 'EvTexture: Event-driven Texture Enhancement for Video Super-Resolution', what PSNR score did the EvTexture+ model get on the Vid4 - 4x upscaling dataset
29.78
3DPW
ZeDO (Cross Dataset)
Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation
2023-07-07T00:00:00
https://arxiv.org/abs/2307.03833v3
[ "https://github.com/ipl-uw/ZeDO-Release" ]
In the paper 'Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation', what PA-MPJPE score did the ZeDO (Cross Dataset) model get on the 3DPW dataset
42.6
RSTPReid
APTM
Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark
2023-06-05T00:00:00
https://arxiv.org/abs/2306.02898v4
[ "https://github.com/Shuyu-XJTU/APTM" ]
In the paper 'Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark', what R@1 score did the APTM model get on the RSTPReid dataset
67.50
SFCHD
RetinaNet
Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method
2023-06-03T00:00:00
https://arxiv.org/abs/2306.02098v2
[ "https://github.com/lijfrank-open/SFCHD-SCALE" ]
In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the RetinaNet model get on the SFCHD dataset
75.9
GSM8K
Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL)
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
2023-12-14T00:00:00
https://arxiv.org/abs/2312.08935v3
[ "https://huggingface.co/datasets/peiyi9979/Math-Shepherd" ]
In the paper 'Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations', what Accuracy score did the Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) model get on the GSM8K dataset
84.1
ETTm1 (96) Multivariate
PRformer
PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting
2024-08-20T00:00:00
https://arxiv.org/abs/2408.10483v1
[ "https://github.com/usualheart/prformer" ]
In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm1 (96) Multivariate dataset
0.278
SVT
CLIP4STR-H (DFN-5B)
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14014v3
[ "https://github.com/VamosC/CLIP4STR" ]
In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-H (DFN-5B) model get on the SVT dataset
99.1
Something-Something V1
TAdaFormer-L/14
Temporally-Adaptive Models for Efficient Video Understanding
2023-08-10T00:00:00
https://arxiv.org/abs/2308.05787v1
[ "https://github.com/alibaba-mmai-research/TAdaConv" ]
In the paper 'Temporally-Adaptive Models for Efficient Video Understanding', what Top 1 Accuracy score did the TAdaFormer-L/14 model get on the Something-Something V1 dataset
63.7
Wisconsin
CoED
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18T00:00:00
https://arxiv.org/abs/2410.14109v1
[ "https://github.com/hormoz-lab/coed-gnn" ]
In the paper 'Improving Graph Neural Networks by Learning Continuous Edge Directions', what Accuracy score did the CoED model get on the Wisconsin dataset
87.84 ± 3.70
REDS4- 4x upscaling
EvTexture+
EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
2024-06-19T00:00:00
https://arxiv.org/abs/2406.13457v1
[ "https://github.com/dachunkai/evtexture" ]
In the paper 'EvTexture: Event-driven Texture Enhancement for Video Super-Resolution', what PSNR score did the EvTexture+ model get on the REDS4- 4x upscaling dataset
32.93
PreCo
Maverick_incr
Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21489v1
[ "https://github.com/sapienzanlp/maverick-coref" ]
In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what F1 score did the Maverick_incr model get on the PreCo dataset
88.0
YouTube-VIS validation
UniVS(Swin-L)
UniVS: Unified and Universal Video Segmentation with Prompts as Queries
2024-02-28T00:00:00
https://arxiv.org/abs/2402.18115v2
[ "https://github.com/minghanli/univs" ]
In the paper 'UniVS: Unified and Universal Video Segmentation with Prompts as Queries', what mask AP score did the UniVS(Swin-L) model get on the YouTube-VIS validation dataset
60.0
Kvasir-SEG
PVT-GCASCADE
G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation
2023-10-24T00:00:00
https://arxiv.org/abs/2310.16175v1
[ "https://github.com/SLDGroup/G-CASCADE" ]
In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what mean Dice score did the PVT-GCASCADE model get on the Kvasir-SEG dataset
0.9274
TAO
GLEE-Plus
General Object Foundation Model for Images and Videos at Scale
2023-12-14T00:00:00
https://arxiv.org/abs/2312.09158v1
[ "https://github.com/FoundationVision/GLEE" ]
In the paper 'General Object Foundation Model for Images and Videos at Scale', what TETA score did the GLEE-Plus model get on the TAO dataset
41.5
LSUN Churches 256 x 256
BOSS
Bellman Optimal Stepsize Straightening of Flow-Matching Models
2023-12-27T00:00:00
https://arxiv.org/abs/2312.16414v3
[ "https://github.com/nguyenngocbaocmt02/boss" ]
In the paper 'Bellman Optimal Stepsize Straightening of Flow-Matching Models', what clean-FID score did the BOSS model get on the LSUN Churches 256 x 256 dataset
13.21
ACE 2005
GoLLIE
GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction
2023-10-05T00:00:00
https://arxiv.org/abs/2310.03668v5
[ "https://github.com/hitz-zentroa/gollie" ]
In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what RE Micro F1 score did the GoLLIE model get on the ACE 2005 dataset
70.1
VisDA2017
RCL
Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning
2024-05-28T00:00:00
https://arxiv.org/abs/2405.18376v1
[ "https://github.com/Dong-Jie-Chen/RCL" ]
In the paper 'Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning', what Accuracy score did the RCL model get on the VisDA2017 dataset
93.2
CIFAR-100-LT (ρ=100)
LIFT (ViT-B/16, ImageNet-21K pre-training)
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
2023-09-18T00:00:00
https://arxiv.org/abs/2309.10019v3
[ "https://github.com/shijxcs/lift" ]
In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Error Rate score did the LIFT (ViT-B/16, ImageNet-21K pre-training) model get on the CIFAR-100-LT (ρ=100) dataset
10.9
Electricity (192)
CycleNet
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
2024-09-27T00:00:00
https://arxiv.org/abs/2409.18479v2
[ "https://github.com/ACAT-SCUT/CycleNet" ]
In the paper 'CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns', what MSE score did the CycleNet model get on the Electricity (192) dataset
0.144
ENZYMES
R-GCN + PANDA
PANDA: Expanded Width-Aware Message Passing Beyond Rewiring
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03671v2
[ "https://github.com/jeongwhanchoi/panda" ]
In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the ENZYMES dataset
43.9
WDC Products-80%cc-seen-medium
Llama3.1_70B_structured_explanations
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the Llama3.1_70B_structured_explanations model get on the WDC Products-80%cc-seen-medium dataset
76.70
EARS-WHAM
Schrödinger Bridge (PESQ loss)
Investigating Training Objectives for Generative Speech Enhancement
2024-09-16T00:00:00
https://arxiv.org/abs/2409.10753v1
[ "https://github.com/sp-uhh/sgmse" ]
In the paper 'Investigating Training Objectives for Generative Speech Enhancement', what PESQ-WB score did the Schrödinger Bridge (PESQ loss) model get on the EARS-WHAM dataset
3.09
MVSEC-SEG
EventSAM
Segment Any Events via Weighted Adaptation of Pivotal Tokens
2023-12-24T00:00:00
https://arxiv.org/abs/2312.16222v1
[ "https://github.com/happychenpipi/eventsam" ]
In the paper 'Segment Any Events via Weighted Adaptation of Pivotal Tokens', what mIoU score did the EventSAM model get on the MVSEC-SEG dataset
0.40
Fashion IQ
SPRC
Sentence-level Prompts Benefit Composed Image Retrieval
2023-10-09T00:00:00
https://arxiv.org/abs/2310.05473v1
[ "https://github.com/chunmeifeng/sprc" ]
In the paper 'Sentence-level Prompts Benefit Composed Image Retrieval', what (Recall@10+Recall@50)/2 score did the SPRC model get on the Fashion IQ dataset
64.85
DELIVER
GeminiFusion
GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer
2024-06-03T00:00:00
https://arxiv.org/abs/2406.01210v2
[ "https://github.com/jiadingcn/geminifusion" ]
In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what mIoU score did the GeminiFusion model get on the DELIVER dataset
66.9
ImageNet
KD++(T:resnet50 S:resnet18)
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
2023-05-26T00:00:00
https://arxiv.org/abs/2305.17007v1
[ "https://github.com/wangyz1608/knowledge-distillation-via-nd" ]
In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the KD++(T:resnet50 S:resnet18) model get on the ImageNet dataset
72.53
ICFG-PEDES
PLIP-RN50
PLIP: Language-Image Pre-training for Person Representation Learning
2023-05-15T00:00:00
https://arxiv.org/abs/2305.08386v2
[ "https://github.com/zplusdragon/plip" ]
In the paper 'PLIP: Language-Image Pre-training for Person Representation Learning', what R@1 score did the PLIP-RN50 model get on the ICFG-PEDES dataset
64.25
Set14 - 4x upscaling
Extracter-rec
EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution
2023-10-02T00:00:00
https://arxiv.org/abs/2310.01379v1
[ "https://github.com/esteban-rs/extracter" ]
In the paper 'EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution', what PSNR score did the Extracter-rec model get on the Set14 - 4x upscaling dataset
28.09
ADE20K
CLIPSelf
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction
2023-10-02T00:00:00
https://arxiv.org/abs/2310.01403v2
[ "https://github.com/wusize/clipself" ]
In the paper 'CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction', what PQ score did the CLIPSelf model get on the ADE20K dataset
23.7
SIQA
phi-1.5-web 1.3B (zero-shot)
Textbooks Are All You Need II: phi-1.5 technical report
2023-09-11T00:00:00
https://arxiv.org/abs/2309.05463v1
[ "https://github.com/knowlab/bi-weekly-paper-presentation" ]
In the paper 'Textbooks Are All You Need II: phi-1.5 technical report', what Accuracy score did the phi-1.5-web 1.3B (zero-shot) model get on the SIQA dataset
53.0
CelebA 64x64
PDM+CS
Compensation Sampling for Improved Convergence in Diffusion Models
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06285v1
[ "https://github.com/hotfinda/Compensation-sampling" ]
In the paper 'Compensation Sampling for Improved Convergence in Diffusion Models', what FID score did the PDM+CS model get on the CelebA 64x64 dataset
1.38
Quora Question Pairs
SplitEE-S
SplitEE: Early Exit in Deep Neural Networks with Split Computing
2023-09-17T00:00:00
https://arxiv.org/abs/2309.09195v1
[ "https://github.com/Div290/SplitEE/blob/main/README.md" ]
In the paper 'SplitEE: Early Exit in Deep Neural Networks with Split Computing', what Accuracy score did the SplitEE-S model get on the Quora Question Pairs dataset
71.1
DeLiVER
StitchFusion (RGB-Event)
StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation
2024-08-02T00:00:00
https://arxiv.org/abs/2408.01343v1
[ "https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation" ]
In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-Event) model get on the DeLiVER dataset
57.44
AMZ Comp
GCN
Half-Hop: A graph upsampling approach for slowing down message passing
2023-08-17T00:00:00
https://arxiv.org/abs/2308.09198v1
[ "https://github.com/nerdslab/halfhop" ]
In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GCN model get on the AMZ Comp dataset
90.22%
MVBench
PLLaVA
PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16994v2
[ "https://github.com/magic-research/PLLaVA" ]
In the paper 'PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning', what Avg. score did the PLLaVA model get on the MVBench dataset
58.1
ImageNet
ReviewKD++(T:resnet50, S:mobilenet-v1)
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
2023-05-26T00:00:00
https://arxiv.org/abs/2305.17007v1
[ "https://github.com/wangyz1608/knowledge-distillation-via-nd" ]
In the paper 'Improving Knowledge Distillation via Regularizing Feature Norm and Direction', what Top-1 accuracy % score did the ReviewKD++(T:resnet50, S:mobilenet-v1) model get on the ImageNet dataset
72.96
ChEBI-20
TGM-DLM w/o corr
Text-Guided Molecule Generation with Diffusion Language Model
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13040v1
[ "https://github.com/deno-v/tgm-dlm" ]
In the paper 'Text-Guided Molecule Generation with Diffusion Language Model', what Text2Mol score did the TGM-DLM w/o corr model get on the ChEBI-20 dataset
58.9
DeLiVER
StitchFusion (RGB-D-LiDAR)
StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation
2024-08-02T00:00:00
https://arxiv.org/abs/2408.01343v1
[ "https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation" ]
In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-D-LiDAR) model get on the DeLiVER dataset
66.65
Event-Camera Dataset
HyperE2VID
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
2023-05-10T00:00:00
https://arxiv.org/abs/2305.06382v2
[ "https://github.com/ercanburak/HyperE2VID" ]
In the paper 'HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks', what Mean Squared Error score did the HyperE2VID model get on the Event-Camera Dataset dataset
0.033
PACS
VL2V-SD (CLIP, ViT-B/16)
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
2023-10-12T00:00:00
https://arxiv.org/abs/2310.08255v2
[ "https://github.com/val-iisc/VL2V-ADiP" ]
In the paper 'Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification', what Average Accuracy score did the VL2V-SD (CLIP, ViT-B/16) model get on the PACS dataset
96.68
MAWPS
GPT-3.5 turbo (175B)
Math Word Problem Solving by Generating Linguistic Variants of Problem Statements
2023-06-24T00:00:00
https://arxiv.org/abs/2306.13899v1
[ "https://github.com/starscream-11813/variational-mathematical-reasoning" ]
In the paper 'Math Word Problem Solving by Generating Linguistic Variants of Problem Statements', what Accuracy (%) score did the GPT-3.5 turbo (175B) model get on the MAWPS dataset
80.3
Abt-Buy
gpt-4o-2024-08-06
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-2024-08-06 model get on the Abt-Buy dataset
92.20
RealBlur-R
MLWNet
Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring
2023-12-29T00:00:00
https://arxiv.org/abs/2401.00027v2
[ "https://github.com/thqiu0419/mlwnet" ]
In the paper 'Efficient Multi-scale Network with Learnable Discrete Wavelet Transform for Blind Motion Deblurring', what PSNR (sRGB) score did the MLWNet model get on the RealBlur-R dataset
40.69
DEplain-web-sent
mBART (trained on DEplain-APA-sent)
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
2023-05-30T00:00:00
https://arxiv.org/abs/2305.18939v1
[ "https://github.com/rstodden/deplain" ]
In the paper 'DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification', what SARI (EASSE>=0.2.1) score did the mBART (trained on DEplain-APA-sent) model get on the DEplain-web-sent dataset
30.867
WHU Building Dataset
SGSLN/128
Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization
2023-11-19T00:00:00
https://arxiv.org/abs/2311.11302v1
[ "https://github.com/walking-shadow/Semantic-guidance-and-spatial-localization-network" ]
In the paper 'Exchanging Dual Encoder-Decoder: A New Strategy for Change Detection with Semantic Guidance and Spatial Localization', what F1-score did the SGSLN/128 model get on the WHU Building Dataset
0.9168
CAMELYON16
CAMIL
CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
2023-05-09T00:00:00
https://arxiv.org/abs/2305.05314v3
[ "https://github.com/olgarithmics/ICLR_CAMIL" ]
In the paper 'CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images', what AUC score did the CAMIL model get on the CAMELYON16 dataset
0.959
BC4CHEMD
UniNER-7B
UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition
2023-08-07T00:00:00
https://arxiv.org/abs/2308.03279v2
[ "https://github.com/emma1066/retrieval-augmented-it-openner" ]
In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the BC4CHEMD dataset
89.21
CUB
MSENet
Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms
2024-09-12T00:00:00
https://arxiv.org/abs/2409.07989v1
[ "https://github.com/FatemehAskari/MSENet" ]
In the paper 'Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms', what 5 shot score did the MSENet model get on the CUB dataset
71.59
ChEBI-20
MolCA, Galac125M
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter
2023-10-19T00:00:00
https://arxiv.org/abs/2310.12798v4
[ "https://github.com/acharkq/molca" ]
In the paper 'MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter', what BLEU-2 score did the MolCA, Galac125M model get on the ChEBI-20 dataset
61.6
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
yolov5s
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
2024-07-18T00:00:00
https://arxiv.org/abs/2407.13214v1
[ "https://github.com/lugan113/TXL-PBC_Dataset" ]
In the paper 'TXL-PBC: a freely accessible labeled peripheral blood cell dataset', what mAP50 score did the yolov5s model get on the TXL-PBC: a freely accessible labeled peripheral blood cell dataset
0.97
ogbl-ddi
GCN (node embedding)
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods
2024-11-22T00:00:00
https://arxiv.org/abs/2411.14711v1
[ "https://github.com/astroming/GNNHE" ]
In the paper 'Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods', what Test Hits@20 score did the GCN (node embedding) model get on the ogbl-ddi dataset
0.9549 ± 0.0073
LIDC-IDRI
MST
Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2
2024-11-24T00:00:00
https://arxiv.org/abs/2411.15802v1
[ "https://github.com/mueller-franzes/mst" ]
In the paper 'Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2', what AUC score did the MST model get on the LIDC-IDRI dataset
95
ETTh1 (336) Multivariate
AutoTimes
AutoTimes: Autoregressive Time Series Forecasters via Large Language Models
2024-02-04T00:00:00
https://arxiv.org/abs/2402.02370v4
[ "https://github.com/thuml/AutoTimes" ]
In the paper 'AutoTimes: Autoregressive Time Series Forecasters via Large Language Models', what MSE score did the AutoTimes model get on the ETTh1 (336) Multivariate dataset
0.401
CodeContests
MapCoder (GPT-4)
MapCoder: Multi-Agent Code Generation for Competitive Problem Solving
2024-05-18T00:00:00
https://arxiv.org/abs/2405.11403v1
[ "https://github.com/md-ashraful-pramanik/mapcoder" ]
In the paper 'MapCoder: Multi-Agent Code Generation for Competitive Problem Solving', what Test Set pass@1 score did the MapCoder (GPT-4) model get on the CodeContests dataset
28.5
LibriSpeech test-other
Zipformer+pruned transducer w/ CR-CTC (no external language model)
CR-CTC: Consistency regularization on CTC for improved speech recognition
2024-10-07T00:00:00
https://arxiv.org/abs/2410.05101v3
[ "https://github.com/k2-fsa/icefall" ]
In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer w/ CR-CTC (no external language model) model get on the LibriSpeech test-other dataset
3.95
RES-Q
QurrentOS-coder + GPT-4 Turbo
RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale
2024-06-24T00:00:00
https://arxiv.org/abs/2406.16801v2
[ "https://github.com/qurrent-ai/res-q" ]
In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + GPT-4 Turbo model get on the RES-Q dataset
37.0
MATH
MetaMath 7B
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
2023-09-21T00:00:00
https://arxiv.org/abs/2309.12284v4
[ "https://github.com/meta-math/MetaMath" ]
In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 7B model get on the MATH dataset
19.4
Split CIFAR-10
Model with negotiation paradigm
Negotiated Representations to Prevent Forgetting in Machine Learning Applications
2023-11-30T00:00:00
https://arxiv.org/abs/2312.00237v1
[ "https://github.com/nurikorhan/negotiated-representations-for-continual-learning" ]
In the paper 'Negotiated Representations to Prevent Forgetting in Machine Learning Applications', what Percentage Average accuracy - 5 tasks score did the Model with negotiation paradigm model get on the Split CIFAR-10 dataset
46.5
miniF2F-valid
LEGO-Prover ChatGPT
LEGO-Prover: Neural Theorem Proving with Growing Libraries
2023-10-01T00:00:00
https://arxiv.org/abs/2310.00656v3
[ "https://github.com/wiio12/LEGO-Prover" ]
In the paper 'LEGO-Prover: Neural Theorem Proving with Growing Libraries', what Pass@100 score did the LEGO-Prover ChatGPT model get on the miniF2F-valid dataset
57.0
Foggy Cityscapes
MILA
MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection
2023-11-20T00:00:00
https://arxiv.org/abs/2309.01086v1
[ "https://github.com/hitachi-rd-cv/MILA" ]
In the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection', what mAP score did the MILA model get on the Foggy Cityscapes dataset
50.6
SWDE
InstrucTE (zero-shot)
Schema-Driven Information Extraction from Heterogeneous Tables
2023-05-23T00:00:00
https://arxiv.org/abs/2305.14336v5
[ "https://github.com/bflashcp3f/schema-to-json" ]
In the paper 'Schema-Driven Information Extraction from Heterogeneous Tables', what Avg F1 score did the InstrucTE (zero-shot) model get on the SWDE dataset
95.7
ScanObjectNN
ULIP-2 + PointNeXt (no voting)
ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding
2023-05-14T00:00:00
https://arxiv.org/abs/2305.08275v4
[ "https://github.com/salesforce/ulip" ]
In the paper 'ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding', what Overall Accuracy score did the ULIP-2 + PointNeXt (no voting) model get on the ScanObjectNN dataset
90.8
MATH
ToRA-Code 34B (w/ code)
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17452v4
[ "https://github.com/microsoft/tora" ]
In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 34B (w/ code) model get on the MATH dataset
50.8
SPKL
CarNet
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the CarNet model get on the SPKL dataset
0.7131
MBPP
DeepSeek-Coder-Base 6.7B (few-shot)
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
2024-01-25T00:00:00
https://arxiv.org/abs/2401.14196v2
[ "https://github.com/deepseek-ai/DeepSeek-Coder" ]
In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the DeepSeek-Coder-Base 6.7B (few-shot) model get on the MBPP dataset
60.6
Winoground
KeyComp* (GPT-4)
Prompting Large Vision-Language Models for Compositional Reasoning
2024-01-20T00:00:00
https://arxiv.org/abs/2401.11337v1
[ "https://github.com/tossowski/keycomp" ]
In the paper 'Prompting Large Vision-Language Models for Compositional Reasoning', what Text Score score did the KeyComp* (GPT-4) model get on the Winoground dataset
43.5
Office-Home
VL2V-SD (CLIP, ViT-B/16)
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
2023-10-12T00:00:00
https://arxiv.org/abs/2310.08255v2
[ "https://github.com/val-iisc/VL2V-ADiP" ]
In the paper 'Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification', what Average Accuracy score did the VL2V-SD (CLIP, ViT-B/16) model get on the Office-Home dataset
87.38
CropHarvest - Kenya
Input Fusion with TAE
In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data
2024-03-25T00:00:00
https://arxiv.org/abs/2403.16582v2
[ "https://github.com/fmenat/optimal-multiview-crop-classifier" ]
In the paper 'In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data', what Average Accuracy score did the Input Fusion with TAE model get on the CropHarvest - Kenya dataset
0.673
PRCC
CAL+GEFF+DLCR
DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID
2024-11-11T00:00:00
https://arxiv.org/abs/2411.07205v2
[ "https://github.com/croitorualin/dlcr" ]
In the paper 'DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID', what Rank-1 score did the CAL+GEFF+DLCR model get on the PRCC dataset
84.6