Schema (column name, type, min/max length):
  dataset      string      (length 0–82)
  model_name   string      (length 0–150)
  paper_title  string      (length 19–175)
  paper_date   timestamp[ns]
  paper_url    string      (length 32–35)
  code_links   list        (length 1–1)
  prompts      string      (length 105–331)
  answer       string      (length 1–67)
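Each record below is eight fields in the order given by the schema above. A minimal sketch of regrouping the flat value sequence into record dicts — the field names come from the schema, and the sample row is the first record in this dump (the `...` in the prompt is an abbreviation for this example, not part of the stored value):

```python
# Field order as declared in the schema header above.
FIELDS = ["dataset", "model_name", "paper_title", "paper_date",
          "paper_url", "code_links", "prompts", "answer"]

def rows_from_lines(lines):
    """Group a flat sequence of field values into record dicts,
    one dict per len(FIELDS) consecutive values."""
    step = len(FIELDS)
    return [dict(zip(FIELDS, lines[i:i + step]))
            for i in range(0, len(lines) - step + 1, step)]

# First record of the dump, abbreviated for illustration.
sample = [
    "MATH",
    "OpenMath-Mistral-7B (w/ code)",
    "OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset",
    "2024-02-15T00:00:00",
    "https://arxiv.org/abs/2402.10176v2",
    '[ "https://github.com/kipok/nemo-skills" ]',
    "In the paper 'OpenMathInstruct-1: ...', what Accuracy score ...",
    "44.5",
]

records = rows_from_lines(sample)
print(records[0]["dataset"], records[0]["answer"])  # MATH 44.5
```

Note that `answer` is stored as a string (some rows carry units like `%` or a `±` interval), so numeric comparisons require parsing per row.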
MATH
OpenMath-Mistral-7B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Mistral-7B (w/ code) model get on the MATH dataset
44.5
MathMC
GPT-4 (Teaching-Inspired)
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models
2024-10-10T00:00:00
https://arxiv.org/abs/2410.08068v1
[ "https://github.com/sallytan13/teaching-inspired-prompting" ]
In the paper 'Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models', what Accuracy score did the GPT-4 (Teaching-Inspired) model get on the MathMC dataset
92.2
MBPP
Branch-Train-Merge 4x7B (top-2)
Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM
2024-03-12T00:00:00
https://arxiv.org/abs/2403.07816v1
[ "https://github.com/Leeroo-AI/mergoo" ]
In the paper 'Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM', what Accuracy score did the Branch-Train-Merge 4x7B (top-2) model get on the MBPP dataset
42.6
NeedForSpeed
PiVOT-L
Improving Visual Object Tracking through Visual Prompting
2024-09-27T00:00:00
https://arxiv.org/abs/2409.18901v1
[ "https://github.com/chenshihfang/GOT" ]
In the paper 'Improving Visual Object Tracking through Visual Prompting', what AUC score did the PiVOT-L model get on the NeedForSpeed dataset
0.682
MM-Vet
LLaVA-1.5+CoS
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models
2024-03-19T00:00:00
https://arxiv.org/abs/2403.12966v2
[ "https://github.com/dongyh20/chain-of-spot" ]
In the paper 'Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models', what GPT-4 score score did the LLaVA-1.5+CoS model get on the MM-Vet dataset
37.6
H2O (2 Hands and Objects)
SHARP
SHARP: Segmentation of Hands and Arms by Range using Pseudo-Depth for Enhanced Egocentric 3D Hand Pose Estimation and Action Recognition
2024-08-19T00:00:00
https://arxiv.org/abs/2408.10037v1
[ "https://github.com/wiktormucha/SHARP" ]
In the paper 'SHARP: Segmentation of Hands and Arms by Range using Pseudo-Depth for Enhanced Egocentric 3D Hand Pose Estimation and Action Recognition', what Actions Top-1 score did the SHARP model get on the H2O (2 Hands and Objects) dataset
91.73
MM-Vet
LLaVA-1.5 + DenseFusion-1M (Vicuna-7B)
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception
2024-07-11T00:00:00
https://arxiv.org/abs/2407.08303v2
[ "https://github.com/baaivision/densefusion" ]
In the paper 'DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception', what GPT-4 score score did the LLaVA-1.5 + DenseFusion-1M (Vicuna-7B) model get on the MM-Vet dataset
37.8
MVTec AD Textures Domain Generalization
FABLE
FABLE : Fabric Anomaly Detection Automation Process
2023-06-16T00:00:00
https://arxiv.org/abs/2306.10089v1
[ "https://github.com/SimonThomine/FABLE" ]
In the paper 'FABLE : Fabric Anomaly Detection Automation Process', what Detection AUROC score did the FABLE model get on the MVTec AD Textures Domain Generalization dataset
97.5
Inside Out
SegVLAD-FineT (M)
Revisit Anything: Visual Place Recognition via Image Segment Retrieval
2024-09-26T00:00:00
https://arxiv.org/abs/2409.18049v1
[ "https://github.com/anyloc/revisit-anything" ]
In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-FineT (M) model get on the Inside Out dataset
7.2
HMDB51
MSQNet
Actor-agnostic Multi-label Action Recognition with Multi-modal Query
2023-07-20T00:00:00
https://arxiv.org/abs/2307.10763v3
[ "https://github.com/mondalanindya/msqnet" ]
In the paper 'Actor-agnostic Multi-label Action Recognition with Multi-modal Query', what Accuracy score did the MSQNet model get on the HMDB51 dataset
93.25
CLUSTER
TIGT
Topology-Informed Graph Transformer
2024-02-03T00:00:00
https://arxiv.org/abs/2402.02005v1
[ "https://github.com/leemingo/tigt" ]
In the paper 'Topology-Informed Graph Transformer', what Accuracy score did the TIGT model get on the CLUSTER dataset
78.033
MCubeS (P)
ShareCMP (B2 RGB-D)
ShareCMP: Polarization-Aware RGB-P Semantic Segmentation
2023-12-06T00:00:00
https://arxiv.org/abs/2312.03430v2
[ "https://github.com/lefteyex/sharecmp" ]
In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B2 RGB-D) model get on the MCubeS (P) dataset
50.55
MATH
PaLM 2 (few-shot, k=4, CoT)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=4, CoT) model get on the MATH dataset
34.3
CFC-DAOD
UMT (ResNet50-FPN)
Align and Distill: Unifying and Improving Domain Adaptive Object Detection
2024-03-18T00:00:00
https://arxiv.org/abs/2403.12029v2
[ "https://github.com/justinkay/aldi" ]
In the paper 'Align and Distill: Unifying and Improving Domain Adaptive Object Detection', what AP@0.5 score did the UMT (ResNet50-FPN) model get on the CFC-DAOD dataset
61.2
TNL2K
LoRAT-L-378
Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05231v2
[ "https://github.com/litinglin/lorat" ]
In the paper 'Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance', what precision score did the LoRAT-L-378 model get on the TNL2K dataset
67.0
VNHSGE-Physics
ChatGPT
VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models
2023-05-20T00:00:00
https://arxiv.org/abs/2305.12199v1
[ "https://github.com/xdao85/vnhsge" ]
In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-Physics dataset
61
SVAMP (1:N)
ATHENA (roberta-large)
ATHENA: Mathematical Reasoning with Thought Expansion
2023-11-02T00:00:00
https://arxiv.org/abs/2311.01036v1
[ "https://github.com/the-jb/athena-math" ]
In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-large) model get on the SVAMP (1:N) dataset
67.8
COCO-Stuff-27
CAUSE (DINOv2, ViT-B/14)
Causal Unsupervised Semantic Segmentation
2023-10-11T00:00:00
https://arxiv.org/abs/2310.07379v1
[ "https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation" ]
In the paper 'Causal Unsupervised Semantic Segmentation', what Accuracy score did the CAUSE (DINOv2, ViT-B/14) model get on the COCO-Stuff-27 dataset
78.0
RotKITTI Registration Benchmark
GeoTransformer
GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer
2023-07-25T00:00:00
https://arxiv.org/abs/2308.03768v1
[ "https://github.com/qinzheng93/geotransformer" ]
In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what RR@(1.5,0.3) score did the GeoTransformer model get on the RotKITTI Registration Benchmark dataset
78.5
CIFAR-100-LT (ρ=10)
GML (ResNet-32)
Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels
2023-05-02T00:00:00
https://arxiv.org/abs/2305.01160v3
[ "https://github.com/bluecdm/Long-tailed-recognition" ]
In the paper 'Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels', what Error Rate score did the GML (ResNet-32) model get on the CIFAR-100-LT (ρ=10) dataset
33.0
WebQuestions
PaLM 2-S (one-shot)
PaLM 2 Technical Report
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10403v3
[ "https://github.com/eternityyw/tram-benchmark" ]
In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-S (one-shot) model get on the WebQuestions dataset
21.8
WDC Products-80%cc-seen-medium
gpt4-0613_zeroshot
Entity Matching using Large Language Models
2023-10-17T00:00:00
https://arxiv.org/abs/2310.11244v4
[ "https://github.com/wbsg-uni-mannheim/matchgpt" ]
In the paper 'Entity Matching using Large Language Models', what F1 (%) score did the gpt4-0613_zeroshot model get on the WDC Products-80%cc-seen-medium dataset
89.61
nuScenes
HyDRa
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
2024-03-12T00:00:00
https://arxiv.org/abs/2403.07746v2
[ "https://github.com/phi-wol/hydra" ]
In the paper 'Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception', what NDS score did the HyDRa model get on the nuScenes dataset
0.64
MM-Vet
SoM-LLaVA-1.5
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16375v1
[ "https://github.com/zzxslp/som-llava" ]
In the paper 'List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs', what GPT-4 score score did the SoM-LLaVA-1.5 model get on the MM-Vet dataset
35.9
MCubeS
MMSFormer (RGB-A-D-N)
MMSFormer: Multimodal Transformer for Material and Semantic Segmentation
2023-09-07T00:00:00
https://arxiv.org/abs/2309.04001v4
[ "https://github.com/csiplab/mmsformer" ]
In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-A-D-N) model get on the MCubeS dataset
53.11%
STL-10, 40 Labels
ShrinkMatch
Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning
2023-08-13T00:00:00
https://arxiv.org/abs/2308.06777v1
[ "https://github.com/LiheYoung/ShrinkMatch" ]
In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Accuracy score did the ShrinkMatch model get on the STL-10, 40 Labels dataset
85.98
LVIS v1.0
OVMR
OVMR: Open-Vocabulary Recognition with Multi-Modal References
2024-06-07T00:00:00
https://arxiv.org/abs/2406.04675v1
[ "https://github.com/zehong-ma/ovmr" ]
In the paper 'OVMR: Open-Vocabulary Recognition with Multi-Modal References', what AP novel-LVIS base training score did the OVMR model get on the LVIS v1.0 dataset
34.4
GuitarSet
Beat This!
Beat this! Accurate beat tracking without DBN postprocessing
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21658v1
[ "https://github.com/CPJKU/beat_this" ]
In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the GuitarSet dataset
92.0
BIOSCAN_1M_Insect Dataset
BIOSCAN_1M_order_classifier
A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset
2023-07-19T00:00:00
https://arxiv.org/abs/2307.10455v3
[ "https://github.com/zahrag/BIOSCAN-1M" ]
In the paper 'A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset', what Macro F1 score did the BIOSCAN_1M_order_classifier model get on the BIOSCAN_1M_Insect Dataset dataset
92.65
MRR-Benchmark
GPT-4V
The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)
2023-09-29T00:00:00
https://arxiv.org/abs/2309.17421v2
[ "https://github.com/qi-zhangyang/gemini-vs-gpt4v" ]
In the paper 'The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)', what Total Column Score score did the GPT-4V model get on the MRR-Benchmark dataset
415
Amazon Fashion
ProxyRCA
Proxy-based Item Representation for Attribute and Context-aware Recommendation
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06145v1
[ "https://github.com/theeluwin/ProxyRCA" ]
In the paper 'Proxy-based Item Representation for Attribute and Context-aware Recommendation', what nDCG@10 (100 Neg. Samples) score did the ProxyRCA model get on the Amazon Fashion dataset
0.446
SMD
CARLA
CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection
2023-08-18T00:00:00
https://arxiv.org/abs/2308.09296v4
[ "https://github.com/zamanzadeh/CARLA" ]
In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the SMD dataset
0.4276
CocoGlide
Early Fusion
MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization
2023-12-04T00:00:00
https://arxiv.org/abs/2312.01790v2
[ "https://github.com/idt-iti/mmfusion-iml" ]
In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what Average Pixel F1(Fixed threshold) score did the Early Fusion model get on the CocoGlide dataset
0.553
EMOTIC
CAGE
CAGE: Circumplex Affect Guided Expression Inference
2024-04-23T00:00:00
https://arxiv.org/abs/2404.14975v1
[ "https://github.com/wagner-niklas/cage_expression_inference" ]
In the paper 'CAGE: Circumplex Affect Guided Expression Inference', what Top-3 Accuracy (%) score did the CAGE model get on the EMOTIC dataset
14.73
ISTD+
ShadowMaskFormer (arXiv 2024) (256x256)
ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal
2024-04-29T00:00:00
https://arxiv.org/abs/2404.18433v2
[ "https://github.com/lizhh268/shadowmaskformer" ]
In the paper 'ShadowMaskFormer: Mask Augmented Patch Embeddings for Shadow Removal', what RMSE score did the ShadowMaskFormer (arXiv 2024) (256x256) model get on the ISTD+ dataset
3.39
LingOly
GPT-4
LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages
2024-06-10T00:00:00
https://arxiv.org/abs/2406.06196v3
[ "https://github.com/am-bean/lingOly" ]
In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the GPT-4 model get on the LingOly dataset
33.4%
iSAID
AerialFormer-T
AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation
2023-06-12T00:00:00
https://arxiv.org/abs/2306.06842v2
[ "https://github.com/UARK-AICV/AerialFormer" ]
In the paper 'AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation', what mIoU score did the AerialFormer-T model get on the iSAID dataset
67.5
St Lucia
SelaVPR
Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition
2024-02-22T00:00:00
https://arxiv.org/abs/2402.14505v3
[ "https://github.com/Lu-Feng/SelaVPR" ]
In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the St Lucia dataset
99.8
SPEC-MTP
W-HMR
W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration
2023-11-29T00:00:00
https://arxiv.org/abs/2311.17460v6
[ "https://github.com/yw0208/W-HMR" ]
In the paper 'W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration', what W-MPJPE score did the W-HMR model get on the SPEC-MTP dataset
118.7
Action-Camera Parking
mAlexNet
Revising deep learning methods in parking lot occupancy detection
2023-06-07T00:00:00
https://arxiv.org/abs/2306.04288v3
[ "https://github.com/eighonet/parking-research" ]
In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the mAlexNet model get on the Action-Camera Parking dataset
0.8577
MIMIC-II
HP-CDE
Hawkes Process Based on Controlled Differential Equations
2023-05-09T00:00:00
https://arxiv.org/abs/2305.07031v2
[ "https://github.com/kookseungji/Hawkes-Process-Based-on-Controlled-Differential-Equations" ]
In the paper 'Hawkes Process Based on Controlled Differential Equations', what RMSE score did the HP-CDE model get on the MIMIC-II dataset
0.726±0.042
ETH/UCY
PPT
Progressive Pretext Task Learning for Human Trajectory Prediction
2024-07-16T00:00:00
https://arxiv.org/abs/2407.11588v1
[ "https://github.com/isee-laboratory/ppt" ]
In the paper 'Progressive Pretext Task Learning for Human Trajectory Prediction', what ADE-8/12 score did the PPT model get on the ETH/UCY dataset
0.20
SIM10K to Cityscapes
MILA
MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection
2023-09-03T00:00:00
https://arxiv.org/abs/2309.01086v1
[ "https://github.com/hitachi-rd-cv/MILA" ]
In the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection', what mAP@0.5 score did the MILA model get on the SIM10K to Cityscapes dataset
57.4
VehicleID Small
MBR-4B (without RK)
Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification
2023-10-02T00:00:00
https://arxiv.org/abs/2310.01129v1
[ "https://github.com/videturfortuna/vehicle_reid_itsc2023" ]
In the paper 'Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification', what mAP score did the MBR-4B (without RK) model get on the VehicleID Small dataset
92.5
VoxCeleb
ReDimNet-B3-LM (3.0M)
Reshape Dimensions Network for Speaker Recognition
2024-07-25T00:00:00
https://arxiv.org/abs/2407.18223v2
[ "https://github.com/IDRnD/ReDimNet" ]
In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B3-LM (3.0M) model get on the VoxCeleb dataset
0.5
Flowers (Tensorflow)
CNN+ Wilson-Cowan model RNN
Learning in Wilson-Cowan model for metapopulation
2024-06-24T00:00:00
https://arxiv.org/abs/2406.16453v2
[ "https://github.com/raffaelemarino/learning_in_wilsoncowan" ]
In the paper 'Learning in Wilson-Cowan model for metapopulation', what Accuracy score did the CNN+ Wilson-Cowan model RNN model get on the Flowers (Tensorflow) dataset
84.85
RSTPReid
RaSa
RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search
2023-05-23T00:00:00
https://arxiv.org/abs/2305.13653v1
[ "https://github.com/flame-chasers/rasa" ]
In the paper 'RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search', what R@1 score did the RaSa model get on the RSTPReid dataset
66.90
MVBench
HawkEye
HawkEye: Training Video-Text LLMs for Grounding Text in Videos
2024-03-15T00:00:00
https://arxiv.org/abs/2403.10228v1
[ "https://github.com/yellow-binary-tree/hawkeye" ]
In the paper 'HawkEye: Training Video-Text LLMs for Grounding Text in Videos', what Avg. score did the HawkEye model get on the MVBench dataset
47.55
ACDC Scribbles
ScribbleVC
ScribbleVC: Scribble-supervised Medical Image Segmentation with Vision-Class Embedding
2023-07-30T00:00:00
https://arxiv.org/abs/2307.16226v1
[ "https://github.com/huanglizi/scribblevc" ]
In the paper 'ScribbleVC: Scribble-supervised Medical Image Segmentation with Vision-Class Embedding', what Dice (Average) score did the ScribbleVC model get on the ACDC Scribbles dataset
88.4%
cb
OPT-1.3B
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
2024-05-24T00:00:00
https://arxiv.org/abs/2405.15861v3
[ "https://github.com/ZidongLiu/DeComFL" ]
In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the cb dataset
75.71%
GoPro
ID-Blau (Restormer)
ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation
2023-12-18T00:00:00
https://arxiv.org/abs/2312.10998v2
[ "https://github.com/plusgood-steven/id-blau" ]
In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR score did the ID-Blau (Restormer) model get on the GoPro dataset
33.51
One-class CIFAR-100
GeneralAD
GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features
2024-07-17T00:00:00
https://arxiv.org/abs/2407.12427v1
[ "https://github.com/LucStrater/GeneralAD" ]
In the paper 'GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features', what AUROC score did the GeneralAD model get on the One-class CIFAR-100 dataset
98.4
TAP-Vid-Kinetics-First
LocoTrack-B
Local All-Pair Correspondence for Point Tracking
2024-07-22T00:00:00
https://arxiv.org/abs/2407.15420v1
[ "https://github.com/ku-cvlab/locotrack" ]
In the paper 'Local All-Pair Correspondence for Point Tracking', what Average Jaccard score did the LocoTrack-B model get on the TAP-Vid-Kinetics-First dataset
52.3
S3DIS
Superpoint Transformer
Efficient 3D Semantic Segmentation with Superpoint Transformer
2023-06-13T00:00:00
https://arxiv.org/abs/2306.08045v2
[ "https://github.com/drprojects/superpoint_transformer" ]
In the paper 'Efficient 3D Semantic Segmentation with Superpoint Transformer', what mIoU score did the Superpoint Transformer model get on the S3DIS dataset
76.0
FDMSE-ISL
HWGAT
Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition
2024-07-19T00:00:00
https://arxiv.org/abs/2407.14224v2
[ "https://github.com/suvajit-patra/sl-hwgat" ]
In the paper 'Hierarchical Windowed Graph Attention Network and a Large Scale Dataset for Isolated Indian Sign Language Recognition', what Top-1 Accuracy score did the HWGAT model get on the FDMSE-ISL dataset
93.86
Vinoground
Gemini-1.5-Pro
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
2024-03-08T00:00:00
https://arxiv.org/abs/2403.05530v4
[ "https://github.com/dlvuldet/primevul" ]
In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what Text Score score did the Gemini-1.5-Pro model get on the Vinoground dataset
35.8
USNA-Cn2 (long-term)
Offshore Macro Meteorological
Effective Benchmarks for Optical Turbulence Modeling
2024-01-07T00:00:00
https://arxiv.org/abs/2401.03573v1
[ "https://github.com/cdjellen/otbench" ]
In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Offshore Macro Meteorological model get on the USNA-Cn2 (long-term) dataset
0.675
CLUSTER
GPTrans-Nano
Graph Propagation Transformer for Graph Representation Learning
2023-05-19T00:00:00
https://arxiv.org/abs/2305.11424v3
[ "https://github.com/czczup/gptrans" ]
In the paper 'Graph Propagation Transformer for Graph Representation Learning', what Accuracy score did the GPTrans-Nano model get on the CLUSTER dataset
78.07
ChEBI-20
Song et al.
Towards Cross-Modal Text-Molecule Retrieval with Better Modality Alignment
2024-10-31T00:00:00
https://arxiv.org/abs/2410.23715v1
[ "https://github.com/DeepLearnXMU/CMTMR" ]
In the paper 'Towards Cross-Modal Text-Molecule Retrieval with Better Modality Alignment', what Mean Rank score did the Song et al. model get on the ChEBI-20 dataset
12.66
Cityscapes val
DSNet(single-scale)
DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03702v1
[ "https://github.com/takaniwa/dsnet" ]
In the paper 'DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation', what mIoU score did the DSNet(single-scale) model get on the Cityscapes val dataset
80.4
CNN / Daily Mail
Fourier Transformer
Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator
2023-05-24T00:00:00
https://arxiv.org/abs/2305.15099v1
[ "https://github.com/lumia-group/fouriertransformer" ]
In the paper 'Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator', what ROUGE-1 score did the Fourier Transformer model get on the CNN / Daily Mail dataset
44.76
SMAC MMM2_7m2M1M_vs_8m4M1M
VDN
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
2023-06-04T00:00:00
https://arxiv.org/abs/2306.02430v1
[ "https://github.com/j3soon/dfac-extended" ]
In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the VDN model get on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset
13.35
CHILI-3K
GraphUNet
CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning
2024-02-20T00:00:00
https://arxiv.org/abs/2402.13221v2
[ "https://github.com/UlrikFriisJensen/CHILI" ]
In the paper 'CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning', what MSE score did the GraphUNet model get on the CHILI-3K dataset
0.055 +/- 0.001
VisA
D3AD
Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection
2024-01-09T00:00:00
https://arxiv.org/abs/2401.04463v2
[ "https://github.com/JustinTebbe/D3AD" ]
In the paper 'Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection', what Detection AUROC score did the D3AD model get on the VisA dataset
96.0
Amazon Beauty
ProxyRCA
Proxy-based Item Representation for Attribute and Context-aware Recommendation
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06145v1
[ "https://github.com/theeluwin/ProxyRCA" ]
In the paper 'Proxy-based Item Representation for Attribute and Context-aware Recommendation', what Hit@10 score did the ProxyRCA model get on the Amazon Beauty dataset
0.626
Domain-independent anomalies datasets
Spatial Embedding MLP (Wide-ResNet50-2)
Domain-independent detection of known anomalies
2024-07-03T00:00:00
https://arxiv.org/abs/2407.02910v1
[ "https://github.com/Jonas1302/anomalib" ]
In the paper 'Domain-independent detection of known anomalies', what Detection AUROC score did the Spatial Embedding MLP (Wide-ResNet50-2) model get on the Domain-independent anomalies datasets dataset
87.2
MM-Vet
Vary-base
Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models
2023-12-11T00:00:00
https://arxiv.org/abs/2312.06109v1
[ "https://github.com/Ucas-HaoranWei/Vary" ]
In the paper 'Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models', what GPT-4 score score did the Vary-base model get on the MM-Vet dataset
36.2
LibriSpeech 100h test-other
Branchformer + GFSA
Graph Convolutions Enrich the Self-Attention in Transformers!
2023-12-07T00:00:00
https://arxiv.org/abs/2312.04234v5
[ "https://github.com/jeongwhanchoi/gfsa" ]
In the paper 'Graph Convolutions Enrich the Self-Attention in Transformers!', what Word Error Rate (WER) score did the Branchformer + GFSA model get on the LibriSpeech 100h test-other dataset
22.25
GoPro
CAPTNet
Prompt-based Ingredient-Oriented All-in-One Image Restoration
2023-09-06T00:00:00
https://arxiv.org/abs/2309.03063v2
[ "https://github.com/Tombs98/CAPTNet" ]
In the paper 'Prompt-based Ingredient-Oriented All-in-One Image Restoration', what PSNR score did the CAPTNet model get on the GoPro dataset
33.74
KIT Motion-Language
MLP+GRU
Motion2Language, unsupervised learning of synchronized semantic motion segmentation
2023-10-16T00:00:00
https://arxiv.org/abs/2310.10594v2
[ "https://github.com/rd20karim/M2T-Segmentation" ]
In the paper 'Motion2Language, unsupervised learning of synchronized semantic motion segmentation', what BLEU-4 score did the MLP+GRU model get on the KIT Motion-Language dataset
25.4
NYUv2
SwinMTL
SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images
2024-03-15T00:00:00
https://arxiv.org/abs/2403.10662v1
[ "https://github.com/pardistaghavi/swinmtl" ]
In the paper 'SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images', what Mean IoU score did the SwinMTL model get on the NYUv2 dataset
58.14
Unpaired-abdomen-CT
CLIP+ViT
Spatially Covariant Image Registration with Text Prompts
2023-11-27T00:00:00
https://arxiv.org/abs/2311.15607v2
[ "https://github.com/tinymilky/NeRD" ]
In the paper 'Spatially Covariant Image Registration with Text Prompts', what DSC score did the CLIP+ViT model get on the Unpaired-abdomen-CT dataset
0.5933
Citeseer: fixed 20 node per class
ScaleNet
Scale Invariance of Graph Neural Networks
2024-11-28T00:00:00
https://arxiv.org/abs/2411.19392v2
[ "https://github.com/qin87/scalenet" ]
In the paper 'Scale Invariance of Graph Neural Networks', what Accuracy score did the ScaleNet model get on the Citeseer: fixed 20 node per class dataset
68.3±1.5
WHAMR!
TD-Conformer (L) + DM
On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments
2023-10-09T00:00:00
https://arxiv.org/abs/2310.06125v1
[ "https://github.com/jwr1995/pubsep" ]
In the paper 'On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments', what SI-SDRi score did the TD-Conformer (L) + DM model get on the WHAMR! dataset
13.4
Traffic (720)
TSMixer
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
2023-06-14T00:00:00
https://arxiv.org/abs/2306.09364v4
[ "https://github.com/ibm/tsfm" ]
In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Traffic (720) dataset
0.424
PACS
QT-DoG (ResNet-50)
QT-DoG: Quantization-aware Training for Domain Generalization
2024-10-08T00:00:00
https://arxiv.org/abs/2410.06020v1
[ "https://github.com/saqibjaved1/QT-DoG" ]
In the paper 'QT-DoG: Quantization-aware Training for Domain Generalization', what Average Accuracy score did the QT-DoG (ResNet-50) model get on the PACS dataset
87.89
Winoground
PaLI (ft SNLI-VE + Synthetic Data)
What You See is What You Read? Improving Text-Image Alignment Evaluation
2023-05-17T00:00:00
https://arxiv.org/abs/2305.10400v4
[ "https://github.com/yonatanbitton/wysiwyr" ]
In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score score did the PaLI (ft SNLI-VE + Synthetic Data) model get on the Winoground dataset
46.5
RoadTextVQA
GIT
Reading Between the Lanes: Text VideoQA on the Road
2023-07-08T00:00:00
https://arxiv.org/abs/2307.03948v1
[ "https://github.com/georg3tom/RoadTextVQA" ]
In the paper 'Reading Between the Lanes: Text VideoQA on the Road', what ACCURACY score did the GIT model get on the RoadTextVQA dataset
29.58
LingOly
Llama 2 70B
LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages
2024-06-10T00:00:00
https://arxiv.org/abs/2406.06196v3
[ "https://github.com/am-bean/lingOly" ]
In the paper 'LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages', what Exact Match Accuracy score did the Llama 2 70B model get on the LingOly dataset
6.4%
SOD4SB Public Test
E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet)
MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results
2023-07-18T00:00:00
https://arxiv.org/abs/2307.09143v1
[ "https://github.com/iim-ttij/mva2023smallobjectdetection4spottingbirds" ]
In the paper 'MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results', what AP50 score did the E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet) model get on the SOD4SB Public Test dataset
69.6
PEMS-BAY
STD-MAE
Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting
2023-12-01T00:00:00
https://arxiv.org/abs/2312.00516v3
[ "https://github.com/jimmy-7664/std-mae" ]
In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what MAE @ 12 step score did the STD-MAE model get on the PEMS-BAY dataset
1.77
iNaturalist
AIMv2-3B
Multimodal Autoregressive Pre-training of Large Vision Encoders
2024-11-21T00:00:00
https://arxiv.org/abs/2411.14402v1
[ "https://github.com/apple/ml-aim" ]
In the paper 'Multimodal Autoregressive Pre-training of Large Vision Encoders', what Top 1 Accuracy score did the AIMv2-3B model get on the iNaturalist dataset
81.5
TriviaQA
DPA-RAG
Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation
2024-06-26T00:00:00
https://arxiv.org/abs/2406.18676v2
[ "https://github.com/dongguanting/dpa-rag" ]
In the paper 'Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation', what F1 score did the DPA-RAG model get on the TriviaQA dataset
80.08
CIFAR-10
Transformer+SSA
The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles
2023-06-02T00:00:00
https://arxiv.org/abs/2306.01705v1
[ "https://github.com/shamim-hussain/ssa" ]
In the paper 'The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles', what bits/dimension score did the Transformer+SSA model get on the CIFAR-10 dataset
2.774
STAR Benchmark
GF(sup)
Glance and Focus: Memory Prompting for Multi-Event Video Question Answering
2024-01-03T00:00:00
https://arxiv.org/abs/2401.01529v1
[ "https://github.com/byz0e/glance-focus" ]
In the paper 'Glance and Focus: Memory Prompting for Multi-Event Video Question Answering', what Average Accuracy score did the GF(sup) model get on the STAR Benchmark dataset
53.94
CTCUG
D-DFFNet
Depth and DOF Cues Make A Better Defocus Blur Detector
2023-06-20T00:00:00
https://arxiv.org/abs/2306.11334v1
[ "https://github.com/yuxinjin-whu/d-dffnet" ]
In the paper 'Depth and DOF Cues Make A Better Defocus Blur Detector', what MAE score did the D-DFFNet model get on the CTCUG dataset
0.074
VideoInstruct
PPLLaVA-7B
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance
2024-11-04T00:00:00
https://arxiv.org/abs/2411.02327v2
[ "https://github.com/farewellthree/ppllava" ]
In the paper 'PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance', what gpt-score did the PPLLaVA-7B model get on the VideoInstruct dataset
3.85
Charades-STA
LLMEPET
Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval
2024-07-21T00:00:00
https://arxiv.org/abs/2407.15051v3
[ "https://github.com/fletcherjiang/llmepet" ]
In the paper 'Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval', what R@1 IoU=0.5 score did the LLMEPET model get on the Charades-STA dataset
58.31
AfriSenti
AfriBERTa
UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis
2023-06-01T00:00:00
https://arxiv.org/abs/2306.01093v1
[ "https://github.com/zerohd4869/sacl" ]
In the paper 'UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis', what weighted-F1 score did the AfriBERTa model get on the AfriSenti dataset
0.439
KITTI-360
SuperCluster
Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering
2024-01-12T00:00:00
https://arxiv.org/abs/2401.06704v2
[ "https://github.com/drprojects/superpoint_transformer" ]
In the paper 'Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering', what PQ score did the SuperCluster model get on the KITTI-360 dataset
48.3
Synthetic Dynamic Networks
Static Features
Learning the mechanisms of network growth
2024-03-31T00:00:00
https://arxiv.org/abs/2404.00793v3
[ "https://github.com/LourensT/DynamicNetworkSimulation" ]
In the paper 'Learning the mechanisms of network growth', what Accuracy score did the Static Features model get on the Synthetic Dynamic Networks dataset
92.81%
INRIA Aerial Image Labeling
UANet(VGG-16)
Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network
2023-07-23T00:00:00
https://arxiv.org/abs/2307.12309v1
[ "https://github.com/henryjiepanli/uncertainty-aware-network" ]
In the paper 'Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network', what IoU score did the UANet(VGG-16) model get on the INRIA Aerial Image Labeling dataset
83.08
Peptides-struct
ViT-PS
Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance
2023-06-05T00:00:00
https://arxiv.org/abs/2306.02866v3
[ "https://github.com/jw9730/lps" ]
In the paper 'Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance', what MAE score did the ViT-PS model get on the Peptides-struct dataset
0.2559
GSM8K
OpenMath-CodeLlama-13B (w/ code)
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
2024-02-15T00:00:00
https://arxiv.org/abs/2402.10176v2
[ "https://github.com/kipok/nemo-skills" ]
In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-13B (w/ code) model get on the GSM8K dataset
78.8
SPED
BoQ
BoQ: A Place is Worth a Bag of Learnable Queries
2024-05-12T00:00:00
https://arxiv.org/abs/2405.07364v3
[ "https://github.com/amaralibey/bag-of-queries" ]
In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ model get on the SPED dataset
92.5
ColonINST-v1 (Seen)
Bunny-v1.0-3B (w/ LoRA, w/ extra data)
Efficient Multimodal Learning from Data-centric Perspective
2024-02-18T00:00:00
https://arxiv.org/abs/2402.11530v3
[ "https://github.com/baai-dcai/bunny" ]
In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
92.47
FSC147
DAVE
DAVE -- A Detect-and-Verify Paradigm for Low-Shot Counting
2024-04-25T00:00:00
https://arxiv.org/abs/2404.16622v1
[ "https://github.com/jerpelhan/dave" ]
In the paper 'DAVE -- A Detect-and-Verify Paradigm for Low-Shot Counting', what MAE(val) score did the DAVE model get on the FSC147 dataset
8.91
StreetTryOn
Street TryOn
Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images
2023-11-27T00:00:00
https://arxiv.org/abs/2311.16094v3
[ "https://github.com/cuiaiyu/street-tryon-benchmark" ]
In the paper 'Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images', what FID score did the Street TryOn model get on the StreetTryOn dataset
33.039
PeMS07
PM-DMnet(R)
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
2024-08-12T00:00:00
https://arxiv.org/abs/2408.07100v1
[ "https://github.com/wengwenchao123/PM-DMNet" ]
In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what MAE@1h score did the PM-DMnet(R) model get on the PeMS07 dataset
19.18
MSVD-QA
vid-TLDR (UMT-L)
vid-TLDR: Training Free Token merging for Light-weight Video Transformer
2024-03-20T00:00:00
https://arxiv.org/abs/2403.13347v2
[ "https://github.com/mlvlab/vid-tldr" ]
In the paper 'vid-TLDR: Training Free Token merging for Light-weight Video Transformer', what Accuracy score did the vid-TLDR (UMT-L) model get on the MSVD-QA dataset
0.549