Schema (column: type, with observed value-length range from the dataset viewer):

dataset: string (length 0–82)
model_name: string (length 0–150)
paper_title: string (length 19–175)
paper_date: timestamp[ns]
paper_url: string (length 32–35)
code_links: list (length 1)
prompts: string (length 105–331)
answer: string (length 1–67)
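Each record below is a flattened row following this schema. A minimal sketch of how one such row might be represented in Python — the `BenchmarkRecord` class is illustrative, not part of the dataset's own tooling; the field values are taken verbatim from the MetaMath row below:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Illustrative record type mirroring the schema above.
@dataclass
class BenchmarkRecord:
    dataset: str
    model_name: str
    paper_title: str
    paper_date: datetime      # parsed from the timestamp[ns] column
    paper_url: str
    code_links: List[str]     # list column, observed length 1
    prompts: str
    answer: str               # answers are strings, not floats

# One row from the dump, reconstructed field by field.
row = BenchmarkRecord(
    dataset="GSM8K",
    model_name="MetaMath 7B",
    paper_title=("MetaMath: Bootstrap Your Own Mathematical Questions "
                 "for Large Language Models"),
    paper_date=datetime.fromisoformat("2023-09-21T00:00:00"),
    paper_url="https://arxiv.org/abs/2309.12284v4",
    code_links=["https://github.com/meta-math/MetaMath"],
    prompts=("In the paper 'MetaMath: Bootstrap Your Own Mathematical "
             "Questions for Large Language Models', what Accuracy score "
             "did the MetaMath 7B model get on the GSM8K dataset"),
    answer="66.4",
)
print(row.dataset, row.answer)  # → GSM8K 66.4
```

Keeping `answer` as a string preserves values the dump contains that are not plain floats, such as "0.055 +/- 0.002" or "24.3%".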
MPI-INF-3DHP
MotionAGFormer-B (T=81)
MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network
2023-10-25T00:00:00
https://arxiv.org/abs/2310.16288v1
[ "https://github.com/taatiteam/motionagformer" ]
In the paper 'MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network', what AUC score did the MotionAGFormer-B (T=81) model get on the MPI-INF-3DHP dataset
84.2
GSM8K
MetaMath 7B
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
2023-09-21T00:00:00
https://arxiv.org/abs/2309.12284v4
[ "https://github.com/meta-math/MetaMath" ]
In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 7B model get on the GSM8K dataset
66.4
ETTh2 (720) Multivariate
RLinear
Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
2023-05-18T00:00:00
https://arxiv.org/abs/2305.10721v1
[ "https://github.com/plumprc/rtsf" ]
In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTh2 (720) Multivariate dataset
0.372
CHILI-3K
GraphSAGE
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GraphSAGE model get on the CHILI-3K dataset
0.055 +/- 0.002
St Lucia
BoQ (DINOv2)
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (DINOv2) model get on the St Lucia dataset
100.0
RealBlur-R
ID-Blau (Stripformer)
ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation
2023-12-18T00:00:00
https://arxiv.org/abs/2312.10998v2
[ "https://github.com/plusgood-steven/id-blau" ]
In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (Stripformer) model get on the RealBlur-R dataset
41.06
USNA-Cn2 (short-duration)
GBRT
Effective Benchmarks for Optical Turbulence Modeling
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03573v1
[ "https://github.com/cdjellen/otbench" ]
In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the GBRT model get on the USNA-Cn2 (short-duration) dataset
0.160
WikiOFGraph
T5-large
Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model
2024-09-11T00:00:00
https://arxiv.org/abs/2409.07088v1
[ "https://github.com/daehuikim/WikiOFGraph" ]
In the paper 'Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model', what BLEU score did the T5-large model get on the WikiOFGraph dataset
69.27
SF-XL test v2
ProGEO
ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization
2024-06-04T00:00:00
https://arxiv.org/abs/2406.01906v1
[ "https://github.com/chain-mao/progeo" ]
In the paper 'ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization', what Recall@1 score did the ProGEO model get on the SF-XL test v2 dataset
93.0
DRIVE
MERIT-GCASCADE
G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation
2023-10-24T00:00:00
https://arxiv.org/abs/2310.16175v1
[ "https://github.com/SLDGroup/G-CASCADE" ]
In the paper 'G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation', what F1 score did the MERIT-GCASCADE model get on the DRIVE dataset
0.8290
COCO-20i (5-shot)
MIANet (VGG-16)
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13864v1
[ "https://github.com/aldrich2y/mianet" ]
In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (VGG-16) model get on the COCO-20i (5-shot) dataset
51.03
Occ3D-nuScenes
HyDRa R50
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
2024-03-12T00:00:00
https://arxiv.org/abs/2403.07746v2
[ "https://github.com/phi-wol/hydra" ]
In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what mIoU score did the HyDRa R50 model get on the Occ3D-nuScenes dataset
44.4
EM
EMCAD
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
2024-05-11T00:00:00
https://arxiv.org/abs/2405.06880v1
[ "https://github.com/sldgroup/emcad" ]
In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what DSC score did the EMCAD model get on the EM dataset
95.53
CocoGlide
Late Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Late Fusion model get on the CocoGlide dataset
0.760
Pittsburgh-30k-test
SegVLAD-FineT (M)
Revisit Anything: Visual Place Recognition via Image Segment Retrieval
2024-09-26T00:00:00
https://arxiv.org/abs/2409.18049v1
[ "https://github.com/anyloc/revisit-anything" ]
In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-FineT (M) model get on the Pittsburgh-30k-test dataset
93.1
HumanEval
AFlow(GPT-4o-mini)
AFlow: Automating Agentic Workflow Generation
2024-10-14T00:00:00
https://arxiv.org/abs/2410.10762v1
[ "https://github.com/geekan/metagpt" ]
In the paper 'AFlow: Automating Agentic Workflow Generation', what Pass@1 score did the AFlow(GPT-4o-mini) model get on the HumanEval dataset
94.7
TpuGraphs Layout mean
TpuGraphs
TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs
2023-08-25T00:00:00
https://arxiv.org/abs/2308.13490v3
[ "https://github.com/google-research-datasets/tpu_graphs" ]
In the paper 'TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs', what Kendall's Tau score did the TpuGraphs model get on the TpuGraphs Layout mean dataset
0.298
E-commerce
DialMAE
Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04357v5
[ "https://github.com/suu990901/Dial-MAE" ]
In the paper 'Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems', what R10@1 score did the DialMAE model get on the E-commerce dataset
0.930
ETTh1 (336) Multivariate
TimesFM
A decoder-only foundation model for time-series forecasting
2023-10-14T00:00:00
https://arxiv.org/abs/2310.10688v4
[ "https://github.com/google-research/timesfm" ]
In the paper 'A decoder-only foundation model for time-series forecasting', what MAE score did the TimesFM model get on the ETTh1 (336) Multivariate dataset
0.436
ACOS
MvP (multi-task)
MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction
2023-05-22T00:00:00
https://arxiv.org/abs/2305.12627v1
[ "https://github.com/ZubinGou/multi-view-prompting" ]
In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (Laptop) score did the MvP (multi-task) model get on the ACOS dataset
43.84
rt-inod-bias
Gemma
Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations
2024-04-15T00:00:00
https://arxiv.org/abs/2404.09785v1
[ "https://github.com/innodatalabs/innodata-llm-safety" ]
In the paper 'Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations', what Best-of score did the Gemma model get on the rt-inod-bias dataset
0.41
SYNTHIA
Resnet50
MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation
2023-11-30T00:00:00
https://arxiv.org/abs/2311.18331v2
[ "https://github.com/airl-iisc/MRFP" ]
In the paper 'MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation', what mIoU score did the Resnet50 model get on the SYNTHIA dataset
25.84
BACE
elEmBERT-V1
Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties
2023-09-17T00:00:00
https://arxiv.org/abs/2309.09355v3
[ "https://github.com/dmamur/elembert" ]
In the paper 'Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties', what AUC score did the elEmBERT-V1 model get on the BACE dataset
0.856
OVIS validation
DVIS++(R50, Offline)
DVIS++: Improved Decoupled Framework for Universal Video Segmentation
2023-12-20T00:00:00
https://arxiv.org/abs/2312.13305v1
[ "https://github.com/zhang-tao-whu/DVIS_Plus" ]
In the paper 'DVIS++: Improved Decoupled Framework for Universal Video Segmentation', what mask AP score did the DVIS++(R50, Offline) model get on the OVIS validation dataset
41.2
MOSE
DEVA (with OVIS)
Tracking Anything with Decoupled Video Segmentation
2023-09-07T00:00:00
https://arxiv.org/abs/2309.03903v1
[ "https://github.com/hkchengrex/Tracking-Anything-with-DEVA" ]
In the paper 'Tracking Anything with Decoupled Video Segmentation', what J&F score did the DEVA (with OVIS) model get on the MOSE dataset
66.5
COCO test-dev
LeYOLO-nano@480
LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection
2024-06-20T00:00:00
https://arxiv.org/abs/2406.14239v1
[ "https://github.com/LilianHollard/LeYOLO" ]
In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what GFLOPs score did the LeYOLO-nano@480 model get on the COCO test-dev dataset
1.47
CULane
CLRKDNet (DLA-34)
CLRKDNet: Speeding up Lane Detection with Knowledge Distillation
2024-05-21T00:00:00
https://arxiv.org/abs/2405.12503v1
[ "https://github.com/weiqingq/CLRKDNet" ]
In the paper 'CLRKDNet: Speeding up Lane Detection with Knowledge Distillation', what F1 score did the CLRKDNet (DLA-34) model get on the CULane dataset
80.68
IC19-Art
MixNet
MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild
2023-08-23T00:00:00
https://arxiv.org/abs/2308.12817v2
[ "https://github.com/D641593/MixNet" ]
In the paper 'MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild', what H-Mean score did the MixNet model get on the IC19-Art dataset
79.7
GOT-10k
ARTrackV2-L
ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe
2023-12-28T00:00:00
https://arxiv.org/abs/2312.17133v3
[ "https://github.com/miv-xjtu/artrack" ]
In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what Average Overlap score did the ARTrackV2-L model get on the GOT-10k dataset
79.5
Food-101
ZLaP*
Label Propagation for Zero-shot Classification with Vision-Language Models
2024-04-05T00:00:00
https://arxiv.org/abs/2404.04072v1
[ "https://github.com/vladan-stojnic/zlap" ]
In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP* model get on the Food-101 dataset
87.9
SHD - Adding
LSTM
The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks
2023-06-14T00:00:00
https://arxiv.org/abs/2306.16922v3
[ "https://github.com/AaronSpieler/elmneuron" ]
In the paper 'The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks', what Accuracy (%) score did the LSTM model get on the SHD - Adding dataset
10
COVERAGE
Early Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1(Fixed threshold) score did the Early Fusion model get on the COVERAGE dataset
0.663
Weather2K850 (96)
MoLE-RLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the Weather2K850 (96) dataset
0.471
RealBlur-J
ID-Blau (Stripformer)
ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation
2023-12-18T00:00:00
https://arxiv.org/abs/2312.10998v2
[ "https://github.com/plusgood-steven/id-blau" ]
In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what SSIM (sRGB) score did the ID-Blau (Stripformer) model get on the RealBlur-J dataset
0.940
CommonsenseQA
Phi 3 3.8B
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles
2024-06-18T00:00:00
https://arxiv.org/abs/2406.12644v4
[ "https://github.com/devichand579/HPT" ]
In the paper 'Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles', what Accuracy score did the Phi 3 3.8B model get on the CommonsenseQA dataset
88.452
Set14 - 4x upscaling
HMA†
HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution
2024-05-08T00:00:00
https://arxiv.org/abs/2405.05001v1
[ "https://github.com/korouuuuu/hma" ]
In the paper 'HMANet: Hybrid Multi-Axis Aggregation Network for Image Super-Resolution', what PSNR score did the HMA† model get on the Set14 - 4x upscaling dataset
29.51
SUIM
DatUS^2
DatUS^2: Data-driven Unsupervised Semantic Segmentation with Pre-trained Self-supervised Vision Transformer
2024-01-23T00:00:00
https://arxiv.org/abs/2401.12820v1
[ "https://github.com/SonalKumar95/DatUS" ]
In the paper 'DatUS^2: Data-driven Unsupervised Semantic Segmentation with Pre-trained Self-supervised Vision Transformer', what Clustering [mIoU] score did the DatUS^2 model get on the SUIM dataset
34.02
KITTI Test (Online Methods)
IMM-JHSE
One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation
2024-09-04T00:00:00
https://arxiv.org/abs/2409.02562v2
[ "https://github.com/Paulkie99/imm-jhse" ]
In the paper 'One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation', what HOTA score did the IMM-JHSE model get on the KITTI Test (Online Methods) dataset
79.21
HumanML3D
MMM (gt length)
MMM: Generative Masked Motion Model
2023-12-06T00:00:00
https://arxiv.org/abs/2312.03596v2
[ "https://github.com/exitudio/MMM" ]
In the paper 'MMM: Generative Masked Motion Model', what FID score did the MMM (gt length) model get on the HumanML3D dataset
0.089
AG News
vONTSS
vONTSS: vMF based semi-supervised neural topic modeling with optimal transport
2023-07-03T00:00:00
https://arxiv.org/abs/2307.01226v2
[ "https://github.com/xuweijieshuai/vONTSS" ]
In the paper 'vONTSS: vMF based semi-supervised neural topic modeling with optimal transport', what C_v score did the vONTSS model get on the AG News dataset
0.49
Nature
RDNet
Reversible Decoupling Network for Single Image Reflection Removal
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08063v1
[ "https://github.com/lime-j/RDNet" ]
In the paper 'Reversible Decoupling Network for Single Image Reflection Removal', what PSNR score did the RDNet model get on the Nature dataset
26.21
MAS3K
SAM2-UNet
SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
2024-08-16T00:00:00
https://arxiv.org/abs/2408.08870v1
[ "https://github.com/wzh0120/sam2-unet" ]
In the paper 'SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation', what S-measure score did the SAM2-UNet model get on the MAS3K dataset
0.903
CHILI-3K
Most Frequent Class
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what F1-score (Weighted) score did the Most Frequent Class model get on the CHILI-3K dataset
0.461
RefCOCO+ testA
EVF-SAM
EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
2024-06-28T00:00:00
https://arxiv.org/abs/2406.20076v4
[ "https://github.com/hustvl/evf-sam" ]
In the paper 'EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model', what Overall IoU score did the EVF-SAM model get on the RefCOCO+ testA dataset
78.3
WPC
COPP-Net
No-Reference Point Cloud Quality Assessment via Weighted Patch Quality Prediction
2023-05-13T00:00:00
https://arxiv.org/abs/2305.07829v2
[ "https://github.com/philox12358/COPP-Net" ]
In the paper 'No-Reference Point Cloud Quality Assessment via Weighted Patch Quality Prediction', what PLCC score did the COPP-Net model get on the WPC dataset
0.9324
IllusionVQA
Gemini-Pro 4-shot+CoT
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models
2024-03-23T00:00:00
https://arxiv.org/abs/2403.15952v3
[ "https://github.com/csebuetnlp/illusionvqa" ]
In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the Gemini-Pro 4-shot+CoT model get on the IllusionVQA dataset
33.9
ETTh1 (336) Multivariate
SAMformer
SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10198v3
[ "https://github.com/romilbert/samformer" ]
In the paper 'SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention', what MSE score did the SAMformer model get on the ETTh1 (336) Multivariate dataset
0.423
MATH
OpenMath-CodeLlama-70B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-70B (w/ code) model get on the MATH dataset
50.7
DUTS-TE
BiRefNet (DUTS, UHRSD)
Bilateral Reference for High-Resolution Dichotomous Image Segmentation
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03407v6
[ "https://github.com/zhengpeng7/birefnet" ]
In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, UHRSD) model get on the DUTS-TE dataset
0.018
LaSOT-ext
ARTrackV2-L
ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe
2023-12-28T00:00:00
https://arxiv.org/abs/2312.17133v3
[ "https://github.com/miv-xjtu/artrack" ]
In the paper 'ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe', what AUC score did the ARTrackV2-L model get on the LaSOT-ext dataset
53.4
ETTh2 (336) Multivariate
MoLE-RLinear
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh2 (336) Multivariate dataset
0.371
fastMRI Knee Val 8x
PromptMR
Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction
2023-09-25T00:00:00
https://arxiv.org/abs/2309.13839v1
[ "https://github.com/hellopipu/promptmr" ]
In the paper 'Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction', what SSIM score did the PromptMR model get on the fastMRI Knee Val 8x dataset
0.8983
DSO-1
Early Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Early Fusion model get on the DSO-1 dataset
0.966
Aria Everyday Objects
Cube R-CNN
EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models
2024-06-14T00:00:00
https://arxiv.org/abs/2406.10224v1
[ "https://github.com/facebookresearch/efm3d" ]
In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what mAP score did the Cube R-CNN model get on the Aria Everyday Objects dataset
8
LRS3
RTFS-Net-4
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17189v4
[ "https://github.com/spkgyk/RTFS-Net" ]
In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-4 model get on the LRS3 dataset
15.5
Fashion-MNIST
CNN+ Wilson-Cowan model RNN
Learning in Wilson-Cowan model for metapopulation
2024-06-24T00:00:00
https://arxiv.org/abs/2406.16453v2
[ "https://github.com/raffaelemarino/learning_in_wilsoncowan" ]
In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the CNN+ Wilson-Cowan model RNN model get on the Fashion-MNIST dataset
91.35
Amazon-Google
gpt-4o-2024-08-06
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-2024-08-06 model get on the Amazon-Google dataset
63.45
Electricity (96)
CycleNet
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
2024-09-27T00:00:00
https://arxiv.org/abs/2409.18479v2
[ "https://github.com/ACAT-SCUT/CycleNet" ]
In the paper 'CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns', what MSE score did the CycleNet model get on the Electricity (96) dataset
0.126
GSM8K
MMOS-DeepSeekMath-7B(0-shot)
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
2024-02-23T00:00:00
https://arxiv.org/abs/2403.00799v1
[ "https://github.com/cyzhh/MMOS" ]
In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-DeepSeekMath-7B(0-shot) model get on the GSM8K dataset
80.5
MSD Heart
OneNete,4
OneNet: A Channel-Wise 1D Convolutional U-Net
2024-11-14T00:00:00
https://arxiv.org/abs/2411.09838v1
[ "https://github.com/shbyun080/onenet" ]
In the paper 'OneNet: A Channel-Wise 1D Convolutional U-Net', what mIoU score did the OneNete,4 model get on the MSD Heart dataset
6.6
CIFAR100-B0(50 tasks)-no-exemplars
SEED
Divide and not forget: Ensemble of selectively trained experts in Continual Learning
2024-01-18T00:00:00
https://arxiv.org/abs/2401.10191v3
[ "https://github.com/grypesc/seed" ]
In the paper 'Divide and not forget: Ensemble of selectively trained experts in Continual Learning', what Average Incremental Accuracy score did the SEED model get on the CIFAR100-B0(50 tasks)-no-exemplars dataset
42.6
Aff-Wild2
ARBEx
ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning
2023-05-02T00:00:00
https://arxiv.org/abs/2305.01486v5
[ "https://github.com/takihasan/arbex" ]
In the paper 'ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning', what Accuracy score did the ARBEx model get on the Aff-Wild2 dataset
72.48
VNHSGE-Literature
Bing Chat
VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models
2023-05-20T00:00:00
https://arxiv.org/abs/2305.12199v1
[ "https://github.com/xdao85/vnhsge" ]
In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the Bing Chat model get on the VNHSGE-Literature dataset
56.8
Occluded-DukeMTMC
CLIPReID-Baseline+UFFM+AMC
Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
2024-05-02T00:00:00
https://arxiv.org/abs/2405.01101v4
[ "https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC" ]
In the paper 'Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination', what mAP score did the CLIPReID-Baseline+UFFM+AMC model get on the Occluded-DukeMTMC dataset
61.9
Office-Home
PromptStyler (CLIP, ViT-L/14)
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization
2023-07-27T00:00:00
https://arxiv.org/abs/2307.15199v2
[ "https://github.com/zhanghr2001/promptta" ]
In the paper 'PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization', what Average Accuracy score did the PromptStyler (CLIP, ViT-L/14) model get on the Office-Home dataset
89.1
SID SonyA7S2 x300
LED
Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model
2023-08-07T00:00:00
https://arxiv.org/abs/2308.03448v2
[ "https://github.com/srameo/led" ]
In the paper 'Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model', what PSNR (Raw) score did the LED model get on the SID SonyA7S2 x300 dataset
36.67
Winoground
InstructBLIP
Compositional Chain-of-Thought Prompting for Large Multimodal Models
2023-11-27T00:00:00
https://arxiv.org/abs/2311.17076v3
[ "https://github.com/chancharikmitra/ccot" ]
In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score score did the InstructBLIP model get on the Winoground dataset
7.0
MATH
WizardMath-13B-V1.0
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09583v1
[ "https://github.com/nlpxucan/wizardlm" ]
In the paper 'WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct', what Accuracy score did the WizardMath-13B-V1.0 model get on the MATH dataset
14.0
ImageNet 64x64
SCT
Stable Consistency Tuning: Understanding and Improving Consistency Models
2024-10-24T00:00:00
https://arxiv.org/abs/2410.18958v3
[ "https://github.com/G-U-N/Stable-Consistency-Tuning" ]
In the paper 'Stable Consistency Tuning: Understanding and Improving Consistency Models', what FID score did the SCT model get on the ImageNet 64x64 dataset
1.47
ChEBI-20
BioT5
BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations
2023-10-11T00:00:00
https://arxiv.org/abs/2310.07276v3
[ "https://github.com/QizhiPei/BioT5" ]
In the paper 'BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations', what Text2Mol score did the BioT5 model get on the ChEBI-20 dataset
57.6
VNHSGE-Literature
ChatGPT
VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models
2023-05-20T00:00:00
https://arxiv.org/abs/2305.12199v1
[ "https://github.com/xdao85/vnhsge" ]
In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Literature dataset
68
STAC
Structured
Structured Dialogue Discourse Parsing
2023-06-26T00:00:00
https://arxiv.org/abs/2306.15103v1
[ "https://github.com/chijames/structured_dialogue_discourse_parsing" ]
In the paper 'Structured Dialogue Discourse Parsing', what Link F1 score did the Structured model get on the STAC dataset
74.4
ImageNet-LT
LIFT (ViT-B/16)
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
2023-09-18T00:00:00
https://arxiv.org/abs/2309.10019v3
[ "https://github.com/shijxcs/lift" ]
In the paper 'Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts', what Top-1 Accuracy score did the LIFT (ViT-B/16) model get on the ImageNet-LT dataset
78.3
Mini-Imagenet 5-way (5-shot)
PT+MAP+SF+BPA (transductive)
The Balanced-Pairwise-Affinities Feature Transform
2024-06-25T00:00:00
https://arxiv.org/abs/2407.01467v1
[ "https://github.com/danielshalam/bpa" ]
In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the Mini-Imagenet 5-way (5-shot) dataset
91.34
dbp15k ja-en
UMAEA (w/o surf & iter)
Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment
2023-07-30T00:00:00
https://arxiv.org/abs/2307.16210v2
[ "https://github.com/zjukg/umaea" ]
In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter) model get on the dbp15k ja-en dataset
0.801
VoxCeleb1
ReDimNet-B5-SF2-LM (9.2M)
Reshape Dimensions Network for Speaker Recognition
2024-07-25T00:00:00
https://arxiv.org/abs/2407.18223v2
[ "https://github.com/IDRnD/ReDimNet" ]
In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B5-SF2-LM (9.2M) model get on the VoxCeleb1 dataset
0.43
NExT-QA
LLaVA-OV(72B)
LLaVA-OneVision: Easy Visual Task Transfer
2024-08-06T00:00:00
https://arxiv.org/abs/2408.03326v3
[ "https://github.com/evolvinglmms-lab/lmms-eval" ]
In the paper 'LLaVA-OneVision: Easy Visual Task Transfer', what Accuracy score did the LLaVA-OV(72B) model get on the NExT-QA dataset
80.2
SID SonyA7S2 x100
LED
Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model
2023-08-07T00:00:00
https://arxiv.org/abs/2308.03448v2
[ "https://github.com/srameo/led" ]
In the paper 'Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model', what PSNR (Raw) score did the LED model get on the SID SonyA7S2 x100 dataset
41.98
Electricity (720)
MoLE-RMLP
Mixture-of-Linear-Experts for Long-term Time Series Forecasting
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06786v3
[ "https://github.com/rogerni/mole" ]
In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RMLP model get on the Electricity (720) dataset
0.178
PASCAL Context-59
TagAlign (trained with image-text pairs)
TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification
2023-12-21T00:00:00
https://arxiv.org/abs/2312.14149v4
[ "https://github.com/Qinying-Liu/TagAlign" ]
In the paper 'TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification', what mIoU score did the TagAlign (trained with image-text pairs) model get on the PASCAL Context-59 dataset
37.6
MCubeS (P)
MMSFormer (RGB-A-D)
MMSFormer: Multimodal Transformer for Material and Semantic Segmentation
2023-09-07T00:00:00
https://arxiv.org/abs/2309.04001v4
[ "https://github.com/csiplab/mmsformer" ]
In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-A-D) model get on the MCubeS (P) dataset
52.03
Id Pattern Dataset
Claude 3 Opus
Identification of Stone Deterioration Patterns with Large Multimodal Models
2024-06-05T00:00:00
https://arxiv.org/abs/2406.03207v1
[ "https://github.com/dcorradetti/redai_id_pattern" ]
In the paper 'Identification of Stone Deterioration Patterns with Large Multimodal Models', what Percentage correct score did the Claude 3 Opus model get on the Id Pattern Dataset dataset
24.3%
Deforming Plate
HCMT
Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer
2023-12-19T00:00:00
https://arxiv.org/abs/2312.12467v3
[ "https://github.com/yuyudeep/hcmt" ]
In the paper 'Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer', what Rollout RMSE-all [1e3] Position score did the HCMT model get on the Deforming Plate dataset
7.49±0.07
Fishyscapes L&F
FlowEneDet
Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow
2023-05-16T00:00:00
https://arxiv.org/abs/2305.09610v1
[ "https://github.com/gudovskiy/flowenedet" ]
In the paper 'Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow', what AP score did the FlowEneDet model get on the Fishyscapes L&F dataset
50.15
PASCAL VOC
GMTR
GMTR: Graph Matching Transformers
2023-11-14T00:00:00
https://arxiv.org/abs/2311.08141v2
[ "https://github.com/jp-guo/gm-transformer" ]
In the paper 'GMTR: Graph Matching Transformers', what matching accuracy score did the GMTR model get on the PASCAL VOC dataset
0.836
Near-OOD
SCALE (ResNet50)
Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement
2023-09-30T00:00:00
https://arxiv.org/abs/2310.00227v1
[ "https://github.com/kai422/scale" ]
In the paper 'Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement', what ID ACC score did the SCALE (ResNet50) model get on the Near-OOD dataset
76.18
Tanks and Temples
MVSFormer++
MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo
2024-01-22T00:00:00
https://arxiv.org/abs/2401.11673v1
[ "https://github.com/maybelx/mvsformerplusplus" ]
In the paper 'MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo', what Mean F1 (Intermediate) score did the MVSFormer++ model get on the Tanks and Temples dataset
67.03
DESED
MDFD-CRNN
Pushing the Limit of Sound Event Detection with Multi-Dilated Frequency Dynamic Convolution
2024-06-19T00:00:00
https://arxiv.org/abs/2406.13312v3
[ "https://github.com/frednam93/MDFD-SED" ]
In the paper 'Pushing the Limit of Sound Event Detection with Multi-Dilated Frequency Dynamic Convolution', what PSDS1 score did the MDFD-CRNN model get on the DESED dataset
0.485
TriviaQA
GaC (Qwen2-72B-Instruct + Llama-3-70B-Instruct)
Breaking the Ceiling of the LLM Community by Treating Token Generation as a Classification for Ensembling
2024-06-18T00:00:00
https://arxiv.org/abs/2406.12585v2
[ "https://github.com/yaoching0/gac" ]
In the paper 'Breaking the Ceiling of the LLM Community by Treating Token Generation as a Classification for Ensembling', what EM score did the GaC (Qwen2-72B-Instruct + Llama-3-70B-Instruct) model get on the TriviaQA dataset
79.29
ImageNet
Wave-ViT-S
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09372v3
[ "https://github.com/tobna/whattransformertofavor" ]
In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the Wave-ViT-S model get on the ImageNet dataset
83.61%
ScanNet200
OpenMask3D
OpenMask3D: Open-Vocabulary 3D Instance Segmentation
2023-06-23T00:00:00
https://arxiv.org/abs/2306.13631v2
[ "https://github.com/OpenMask3D/openmask3d" ]
In the paper 'OpenMask3D: Open-Vocabulary 3D Instance Segmentation', what mAP score did the OpenMask3D model get on the ScanNet200 dataset
15.4
GSO
Unique3D
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
2024-05-30T00:00:00
https://arxiv.org/abs/2405.20343v3
[ "https://github.com/AiuniAI/Unique3D" ]
In the paper 'Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image', what Chamfer Distance score did the Unique3D model get on the GSO dataset
0.0145
BoolQ
LLaMA2-7b
GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs
2024-08-27T00:00:00
https://arxiv.org/abs/2408.15300v1
[ "https://github.com/On-Point-RND/GIFT_SW" ]
In the paper 'GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs', what Accuracy (% ) score did the LLaMA2-7b model get on the BoolQ dataset
82.63
MS COCO
Kandinsky
Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion
2023-10-05T00:00:00
https://arxiv.org/abs/2310.03502v1
[ "https://github.com/ai-forever/Kandinsky-2" ]
In the paper 'Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion', what FID score did the Kandinsky model get on the MS COCO dataset
8.03
SVOX-Snow
BoQ (ResNet-50)
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the SVOX-Snow dataset
98.7
Atari 2600 Pong
ASL DDQN
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity
2023-05-07T00:00:00
https://arxiv.org/abs/2305.04180v3
[ "https://github.com/xinjinghao/color" ]
In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Pong dataset
21
TrackingNet
LoRAT-g-378
Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05231v2
[ "https://github.com/litinglin/lorat" ]
In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what Precision score did the LoRAT-g-378 model get on the TrackingNet dataset
86.1
WDC Products-80%cc-seen-medium
gpt-4o-mini-2024-07-18
Fine-tuning Large Language Models for Entity Matching
2024-09-12T00:00:00
https://arxiv.org/abs/2409.08185v1
[ "https://github.com/wbsg-uni-mannheim/tailormatch" ]
In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-mini-2024-07-18 model get on the WDC Products-80%cc-seen-medium dataset
81.61
WiC
OPT-1.3B
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
2024-05-24T00:00:00
https://arxiv.org/abs/2405.15861v3
[ "https://github.com/ZidongLiu/DeComFL" ]
In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the WiC dataset
56.14%
MVTec AD
ReConPatch Ensemble (+RefineNet)
ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection
2023-05-26T00:00:00
https://arxiv.org/abs/2305.16713v3
[ "https://github.com/travishsu/ReConPatch-TF" ]
In the paper 'ReConPatch : Contrastive Patch Representation Learning for Industrial Anomaly Detection', what Detection AUROC score did the ReConPatch Ensemble (+RefineNet) model get on the MVTec AD dataset
99.72