dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
SPKL | VGG-19 | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the VGG-19 model get on the SPKL dataset | 0.6801 |
HRSOD | BiRefNet (DUTS, HRSOD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, HRSOD) model get on the HRSOD dataset | 0.962 |
AVisT | PiVOT-L | Improving Visual Object Tracking through Visual Prompting | 2024-09-27 | https://arxiv.org/abs/2409.18901v1 | ["https://github.com/chenshihfang/GOT"] | In the paper 'Improving Visual Object Tracking through Visual Prompting', what Success Rate score did the PiVOT-L model get on the AVisT dataset | 62.2 |
ISTD | Resfusion | Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise | 2023-11-25 | https://arxiv.org/abs/2311.14900v4 | ["https://github.com/nkicsl/resfusion"] | In the paper 'Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise', what MAE score did the Resfusion model get on the ISTD dataset | 4.81 |
WHU-CD | T-UNet | T-UNet: Triplet UNet for Change Detection in High-Resolution Remote Sensing Images | 2023-08-04 | https://arxiv.org/abs/2308.02356v1 | ["https://github.com/pl-2000/t-unet"] | In the paper 'T-UNet: Triplet UNet for Change Detection in High-Resolution Remote Sensing Images', what F1 score did the T-UNet model get on the WHU-CD dataset | 91.77 |
CNRPark+EXT | ViT | Revising deep learning methods in parking lot occupancy detection | 2023-06-07 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the ViT model get on the CNRPark+EXT dataset | 0.9176 |
ImageNet | TinySaver(EfficientFormerV2_l, 0.01 Acc drop) | Tiny Models are the Computational Saver for Large Models | 2024-03-26 | https://arxiv.org/abs/2403.17726v3 | ["https://github.com/QingyuanWang/tinysaver"] | In the paper 'Tiny Models are the Computational Saver for Large Models', what Top 1 Accuracy score did the TinySaver(EfficientFormerV2_l, 0.01 Acc drop) model get on the ImageNet dataset | 83.52 |
AgeDB | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10 | https://arxiv.org/abs/2307.04616v2 | ["https://github.com/wildchlamydia/mivolo"] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what MAE score did the MiVOLO-D1 model get on the AgeDB dataset | 5.55 |
ShanghaiTech | TSGAD | An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction | 2024-04-29 | https://arxiv.org/abs/2406.15395v1 | ["https://github.com/tecsar-uncc/tsgad"] | In the paper 'An Exploratory Study on Human-Centric Video Anomaly Detection through Variational Autoencoders and Trajectory Prediction', what AUC score did the TSGAD model get on the ShanghaiTech dataset | 80.6% |
PeMS07 | STD-MAE | Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting | 2023-12-01 | https://arxiv.org/abs/2312.00516v3 | ["https://github.com/jimmy-7664/std-mae"] | In the paper 'Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting', what MAE@1h score did the STD-MAE model get on the PeMS07 dataset | 18.31 |
ReCoRD | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what F1 score did the PaLM 2-L (one-shot) model get on the ReCoRD dataset | 93.8 |
SBU / SBU-Refine | ISTA-Net | Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition | 2023-07-14 | https://arxiv.org/abs/2307.07469v1 | ["https://github.com/Necolizer/ISTA-Net"] | In the paper 'Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition', what Accuracy score did the ISTA-Net model get on the SBU / SBU-Refine dataset | 98.51±1.47 |
YouTube Highlights | SG-DETR | Saliency-Guided DETR for Moment Retrieval and Highlight Detection | 2024-10-02 | https://arxiv.org/abs/2410.01615v1 | ["https://github.com/ai-forever/sg-detr"] | In the paper 'Saliency-Guided DETR for Moment Retrieval and Highlight Detection', what mAP score did the SG-DETR model get on the YouTube Highlights dataset | 76.7 |
EMODB | VGG-optiVMD | An Extended Variational Mode Decomposition Algorithm Developed Speech Emotion Recognition Performance | 2023-12-18 | https://arxiv.org/abs/2312.10937v1 | ["https://github.com/DavidHason/VGG-optiVMD"] | In the paper 'An Extended Variational Mode Decomposition Algorithm Developed Speech Emotion Recognition Performance', what 1:1 Accuracy score did the VGG-optiVMD model get on the EMODB dataset | 96.09 |
CrowdPose | BUCTD-W48 | Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity | 2023-06-13 | https://arxiv.org/abs/2306.07879v2 | ["https://github.com/amathislab/BUCTD"] | In the paper 'Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity', what AP score did the BUCTD-W48 model get on the CrowdPose dataset | 72.9 |
LitBank | Maverick_incr | Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends | 2024-07-31 | https://arxiv.org/abs/2407.21489v1 | ["https://github.com/sapienzanlp/maverick-coref"] | In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what Avg F1 score did the Maverick_incr model get on the LitBank dataset | 78.3 |
QVHighlights | UnLoc-L | UnLoc: A Unified Framework for Video Localization Tasks | 2023-08-21 | https://arxiv.org/abs/2308.11062v1 | ["https://github.com/google-research/scenic"] | In the paper 'UnLoc: A Unified Framework for Video Localization Tasks', what R@1 IoU=0.5 score did the UnLoc-L model get on the QVHighlights dataset | 66.1 |
SST-2 | OPT-1.3B | Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | 2024-05-24 | https://arxiv.org/abs/2405.15861v3 | ["https://github.com/ZidongLiu/DeComFL"] | In the paper 'Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization', what Test Accuracy score did the OPT-1.3B model get on the SST-2 dataset | 90.78% |
ScanObjectNN | DeLA | Decoupled Local Aggregation for Point Cloud Learning | 2023-08-31 | https://arxiv.org/abs/2308.16532v1 | ["https://github.com/matrix-asc/dela"] | In the paper 'Decoupled Local Aggregation for Point Cloud Learning', what Overall Accuracy score did the DeLA model get on the ScanObjectNN dataset | 90.4 |
TrackingNet | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03 | https://arxiv.org/abs/2311.02072v2 | ["https://github.com/wenruicai/hiptrack"] | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what Precision score did the HIPTrack model get on the TrackingNet dataset | 83.8 |
COCO 2017 | DAT-T++ | DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | 2023-09-04 | https://arxiv.org/abs/2309.01430v1 | ["https://github.com/leaplabthu/dat"] | In the paper 'DAT++: Spatially Dynamic Vision Transformer with Deformable Attention', what AP score did the DAT-T++ model get on the COCO 2017 dataset | 49.2 |
DUT-OMRON | BiRefNet (DUTS, UHRSD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07 | https://arxiv.org/abs/2401.03407v6 | ["https://github.com/zhengpeng7/birefnet"] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS, UHRSD) model get on the DUT-OMRON dataset | 0.036 |
RSICD | HarMA (w/ GeoRSCLIP) | Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | 2024-04-28 | https://arxiv.org/abs/2404.18253v5 | ["https://github.com/seekerhuang/harma"] | In the paper 'Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment', what Mean Recall score did the HarMA (w/ GeoRSCLIP) model get on the RSICD dataset | 38.95% |
UCF101 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 2023-12-11 | https://arxiv.org/abs/2312.06323v1 | ["https://github.com/vill-lab/2024-aaai-hpt"] | In the paper 'Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models', what Harmonic mean score did the HPT model get on the UCF101 dataset | 83.16 |
Digits-five | ABA (LeNet) | Adversarial Bayesian Augmentation for Single-Source Domain Generalization | 2023-07-18 | https://arxiv.org/abs/2307.09520v2 | ["https://github.com/shengcheng/aba"] | In the paper 'Adversarial Bayesian Augmentation for Single-Source Domain Generalization', what Accuracy score did the ABA (LeNet) model get on the Digits-five dataset | 76.72 |
ETTh1 (192) Multivariate | SCNN | Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting | 2023-05-22 | https://arxiv.org/abs/2305.13036v3 | ["https://github.com/JLDeng/SCNN"] | In the paper 'Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting', what MSE score did the SCNN model get on the ETTh1 (192) Multivariate dataset | 0.379 |
VietMed | XLSR-53-Viet | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | 2024-04-08 | https://arxiv.org/abs/2404.05659v2 | ["https://github.com/leduckhai/multimed"] | In the paper 'VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain', what Dev WER score did the XLSR-53-Viet model get on the VietMed dataset | 26.8 |
MVBench | ST-LLM | ST-LLM: Large Language Models Are Effective Temporal Learners | 2024-03-30 | https://arxiv.org/abs/2404.00308v1 | ["https://github.com/TencentARC/ST-LLM"] | In the paper 'ST-LLM: Large Language Models Are Effective Temporal Learners', what Avg. score did the ST-LLM model get on the MVBench dataset | 54.9 |
SIDER | G-Tuning | Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns | 2023-12-21 | https://arxiv.org/abs/2312.13583v1 | ["https://github.com/zjunet/G-Tuning"] | In the paper 'Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns', what ROC-AUC score did the G-Tuning model get on the SIDER dataset | 61.40 |
PASCAL-5i (1-Shot) | BAM (DifFSS, ResNet-50) | DifFSS: Diffusion Model for Few-Shot Semantic Segmentation | 2023-07-03 | https://arxiv.org/abs/2307.00773v3 | ["https://github.com/TrinitialChan/DifFSS"] | In the paper 'DifFSS: Diffusion Model for Few-Shot Semantic Segmentation', what Mean IoU score did the BAM (DifFSS, ResNet-50) model get on the PASCAL-5i (1-Shot) dataset | 69.3 |
JAAH | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the JAAH dataset | 85.0 |
VibraVox (temple vibration pickup) | ECAPA2 | Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors | 2024-07-16 | https://arxiv.org/abs/2407.11828v2 | ["https://github.com/jhauret/vibravox"] | In the paper 'Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors', what Test EER score did the ECAPA2 model get on the VibraVox (temple vibration pickup) dataset | 0.08 |
Human3.6M | STAF | STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion | 2024-01-03 | https://arxiv.org/abs/2401.01730v1 | ["https://github.com/yw0208/STAF"] | In the paper 'STAF: 3D Human Mesh Recovery from Video with Spatio-Temporal Alignment Fusion', what Average MPJPE (mm) score did the STAF model get on the Human3.6M dataset | 70.4 |
MCubeS (P) | MMSFormer (RGB-A) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07 | https://arxiv.org/abs/2309.04001v4 | ["https://github.com/csiplab/mmsformer"] | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-A) model get on the MCubeS (P) dataset | 51.30 |
COCO-Stuff-27 | CAUSE (ViT-B/8) | Causal Unsupervised Semantic Segmentation | 2023-10-11 | https://arxiv.org/abs/2310.07379v1 | ["https://github.com/ByungKwanLee/Causal-Unsupervised-Segmentation"] | In the paper 'Causal Unsupervised Semantic Segmentation', what Accuracy score did the CAUSE (ViT-B/8) model get on the COCO-Stuff-27 dataset | 74.9 |
minesweeper | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13 | https://arxiv.org/abs/2406.08993v2 | ["https://github.com/LUOyk1999/tunedGNN"] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what AUCROC score did the GraphSAGE model get on the minesweeper dataset | 97.77 ± 0.62 |
CIFAR-10 | i-DODE | Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs | 2023-05-06 | https://arxiv.org/abs/2305.03935v4 | ["https://github.com/thu-ml/i-dode"] | In the paper 'Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs', what bits/dimension score did the i-DODE model get on the CIFAR-10 dataset | 2.42 |
SALMon | TWIST 1.3B | Textually Pretrained Speech Language Models | 2023-05-22 | https://arxiv.org/abs/2305.13009v3 | ["https://github.com/slp-rl/spokenstorycloze"] | In the paper 'Textually Pretrained Speech Language Models', what Speaker Consistency score did the TWIST 1.3B model get on the SALMon dataset | 69.0 |
miniF2F-test | COPRA + GPT-4 | An In-Context Learning Agent for Formal Theorem-Proving | 2023-10-06 | https://arxiv.org/abs/2310.04353v5 | ["https://github.com/trishullab/copra"] | In the paper 'An In-Context Learning Agent for Formal Theorem-Proving', what Pass@1 score did the COPRA + GPT-4 model get on the miniF2F-test dataset | 23.3 |
CUHK Avenue | AnomalyRuler | Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models | 2024-07-14 | https://arxiv.org/abs/2407.10299v2 | ["https://github.com/Yuchen413/AnomalyRuler"] | In the paper 'Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models', what AUC score did the AnomalyRuler model get on the CUHK Avenue dataset | 89.7% |
MLO-Cn2 | RNN | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07 | https://arxiv.org/abs/2401.03573v1 | ["https://github.com/cdjellen/otbench"] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the RNN model get on the MLO-Cn2 dataset | 0.581 |
TASD | ChatGPT (gpt-3.5-turbo, zero-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22 | https://arxiv.org/abs/2305.12627v1 | ["https://github.com/ZubinGou/multi-view-prompting"] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R16) score did the ChatGPT (gpt-3.5-turbo, zero-shot) model get on the TASD dataset | 34.08 |
ImageNet 512x512 | EDM2-XXL Autoguidance (M, T/3.5) | Guiding a Diffusion Model with a Bad Version of Itself | 2024-06-04 | https://arxiv.org/abs/2406.02507v2 | ["https://github.com/nvlabs/edm2"] | In the paper 'Guiding a Diffusion Model with a Bad Version of Itself', what FID score did the EDM2-XXL Autoguidance (M, T/3.5) model get on the ImageNet 512x512 dataset | 1.25 |
CIFAR-100 (400 Labels, ImageNet-100 Unlabeled) | UnMixMatch | Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | 2023-06-02 | https://arxiv.org/abs/2306.01222v2 | ["https://github.com/shuvenduroy/unmixmatch"] | In the paper 'Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data', what Accuracy score did the UnMixMatch model get on the CIFAR-100 (400 Labels, ImageNet-100 Unlabeled) dataset | 26.13 |
RefCoCo val | MaskRIS (Swin-B) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28 | https://arxiv.org/abs/2411.19067v1 | ["https://github.com/naver-ai/maskris"] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B) model get on the RefCoCo val dataset | 76.49 |
CACD | ResNet-50-Cross-Entropy | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-Cross-Entropy model get on the CACD dataset | 3.96 |
VideoInstruct | BT-Adapter | BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | 2023-09-27 | https://arxiv.org/abs/2309.15785v2 | ["https://github.com/farewellthree/BT-Adapter"] | In the paper 'BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning', what gpt-score did the BT-Adapter model get on the VideoInstruct dataset | 2.68 |
nuScenes (Distant PCR) | FCGF+APR(s) | APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction | 2023-05-04 | https://arxiv.org/abs/2305.02893v2 | ["https://github.com/liuquan98/apr"] | In the paper 'APR: Online Distant Point Cloud Registration Through Aggregated Point Cloud Reconstruction', what mRR @ Normal Criterion (1.5°&0.3m) score did the FCGF+APR(s) model get on the nuScenes (Distant PCR) dataset | 62.9 |
FP-O-E | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25 | https://arxiv.org/abs/2308.03768v1 | ["https://github.com/qinzheng93/geotransformer"] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-O-E dataset | 63.94 |
ImageNet 512x512 | TiTok-B-128 | An Image is Worth 32 Tokens for Reconstruction and Generation | 2024-06-11 | https://arxiv.org/abs/2406.07550v1 | ["https://github.com/bytedance/1d-tokenizer"] | In the paper 'An Image is Worth 32 Tokens for Reconstruction and Generation', what FID score did the TiTok-B-128 model get on the ImageNet 512x512 dataset | 2.13 |
SALMon | Spirit-LM (base) | Spirit LM: Interleaved Spoken and Written Language Model | 2024-02-08 | https://arxiv.org/abs/2402.05755v2 | ["https://github.com/facebookresearch/spiritlm"] | In the paper 'Spirit LM: Interleaved Spoken and Written Language Model', what Speaker Consistency score did the Spirit-LM (base) model get on the SALMon dataset | 69.5 |
WMT2016 Romanian-English | GenTranslate | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | 2024-02-10 | https://arxiv.org/abs/2402.06894v2 | ["https://github.com/yuchen005/gentranslate"] | In the paper 'GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators', what BLEU score did the GenTranslate model get on the WMT2016 Romanian-English dataset | 33.5 |
MPDD | GLAD | GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection | 2024-06-11 | https://arxiv.org/abs/2406.07487v3 | ["https://github.com/hyao1/glad"] | In the paper 'GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection', what Detection AUROC score did the GLAD model get on the MPDD dataset | 97.5 |
PascalVOC | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09 | https://arxiv.org/abs/2401.04403v2 | ["https://github.com/hahamyt/mst"] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@85 score did the ViT-B+MST+CL model get on the PascalVOC dataset | 1.69 |
EC-FUNSD | RORE (GeoLayoutLM) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29 | https://arxiv.org/abs/2409.19672v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what F1 score did the RORE (GeoLayoutLM) model get on the EC-FUNSD dataset | 84.34 |
ImageNet-R | FAN-L-Hybrid+STL | Fully Attentional Networks with Self-emerging Token Labeling | 2024-01-08 | https://arxiv.org/abs/2401.03844v1 | ["https://github.com/NVlabs/STL"] | In the paper 'Fully Attentional Networks with Self-emerging Token Labeling', what Top-1 Error Rate score did the FAN-L-Hybrid+STL model get on the ImageNet-R dataset | 43.4 |
CropHarvest - Kenya | Gated Fusion (Feature-level) | A Comparative Assessment of Multi-view fusion learning for Crop Classification | 2023-08-10 | https://arxiv.org/abs/2308.05407v1 | ["https://github.com/fmenat/multiviewcropclassification"] | In the paper 'A Comparative Assessment of Multi-view fusion learning for Crop Classification', what Average Accuracy score did the Gated Fusion (Feature-level) model get on the CropHarvest - Kenya dataset | 0.665 |
COCO 100% labeled data | MixPL | Mixed Pseudo Labels for Semi-Supervised Object Detection | 2023-12-12 | https://arxiv.org/abs/2312.07006v1 | ["https://github.com/czm369/mixpl"] | In the paper 'Mixed Pseudo Labels for Semi-Supervised Object Detection', what mAP score did the MixPL model get on the COCO 100% labeled data dataset | 55.2 |
VisA | RealNet | RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection | 2024-03-09 | https://arxiv.org/abs/2403.05897v1 | ["https://github.com/cnulab/realnet"] | In the paper 'RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection', what Detection AUROC score did the RealNet model get on the VisA dataset | 97.8 |
Chameleon | TE-GCNN | Transfer Entropy in Graph Convolutional Neural Networks | 2024-06-08 | https://arxiv.org/abs/2406.06632v1 | ["https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked"] | In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Chameleon dataset | 71.14 ± 1.84 |
STL-10 | TURTLE (CLIP + DINOv2) | Let Go of Your Labels with Unsupervised Transfer | 2024-06-11 | https://arxiv.org/abs/2406.07236v1 | ["https://github.com/mlbio-epfl/turtle"] | In the paper 'Let Go of Your Labels with Unsupervised Transfer', what Accuracy score did the TURTLE (CLIP + DINOv2) model get on the STL-10 dataset | 0.997 |
SQuAD1.1 dev | Blended RAG | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers | 2024-03-22 | https://arxiv.org/abs/2404.07220v2 | ["https://github.com/ibm-ecosystem-engineering/blended-rag"] | In the paper 'Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers', what EM score did the Blended RAG model get on the SQuAD1.1 dev dataset | 57.63 |
Youtube-VIS 2022 Validation | DVIS(Swin-L) | DVIS: Decoupled Video Instance Segmentation Framework | 2023-06-06 | https://arxiv.org/abs/2306.03413v3 | ["https://github.com/zhang-tao-whu/DVIS"] | In the paper 'DVIS: Decoupled Video Instance Segmentation Framework', what mAP_L score did the DVIS(Swin-L) model get on the Youtube-VIS 2022 Validation dataset | 45.9 |
wiki | A2DUG | A Simple and Scalable Graph Neural Network for Large Directed Graphs | 2023-06-14 | https://arxiv.org/abs/2306.08274v2 | ["https://github.com/seijimaekawa/a2dug"] | In the paper 'A Simple and Scalable Graph Neural Network for Large Directed Graphs', what ACCURACY score did the A2DUG model get on the wiki dataset | 65.13±0.07 |
BDD100K val | DSNet-Base | DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation | 2024-06-06 | https://arxiv.org/abs/2406.03702v1 | ["https://github.com/takaniwa/dsnet"] | In the paper 'DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation', what mIoU score did the DSNet-Base model get on the BDD100K val dataset | 64.6 |
GSM8K | ToRA-Code-34B (SC, k=50) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29 | https://arxiv.org/abs/2309.17452v4 | ["https://github.com/microsoft/tora"] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code-34B (SC, k=50) model get on the GSM8K dataset | 85.1 |
SVAMP | GPT-4 (Teaching-Inspired) | Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models | 2024-10-10 | https://arxiv.org/abs/2410.08068v1 | ["https://github.com/sallytan13/teaching-inspired-prompting"] | In the paper 'Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models', what Execution Accuracy score did the GPT-4 (Teaching-Inspired) model get on the SVAMP dataset | 93.9 |
S3DIS | OpenIns3D | OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | 2023-09-01 | https://arxiv.org/abs/2309.00616v5 | ["https://github.com/Pointcept/OpenIns3D"] | In the paper 'OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation', what AP50 Novel B8/N4 score did the OpenIns3D model get on the S3DIS dataset | 37.0 |
AmsterTime | SegVLAD-FineT (M) | Revisit Anything: Visual Place Recognition via Image Segment Retrieval | 2024-09-26 | https://arxiv.org/abs/2409.18049v1 | ["https://github.com/anyloc/revisit-anything"] | In the paper 'Revisit Anything: Visual Place Recognition via Image Segment Retrieval', what Recall@1 score did the SegVLAD-FineT (M) model get on the AmsterTime dataset | 60.2 |
PanNuke | NuLite-H | NuLite -- Lightweight and Fast Model for Nuclei Instance Segmentation and Classification | 2024-08-03 | https://arxiv.org/abs/2408.01797v2 | ["https://github.com/cosmoiknoslab/nulite"] | In the paper 'NuLite -- Lightweight and Fast Model for Nuclei Instance Segmentation and Classification', what PQ score did the NuLite-H model get on the PanNuke dataset | 49.81 |
FB15k-237 | MetaSD | Self-Distillation with Meta Learning for Knowledge Graph Completion | 2023-05-20 | https://arxiv.org/abs/2305.12209v1 | ["https://github.com/pldlgb/MetaSD"] | In the paper 'Self-Distillation with Meta Learning for Knowledge Graph Completion', what MRR score did the MetaSD model get on the FB15k-237 dataset | 0.391 |
VoxCeleb2 | IIANet | IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation | 2023-08-16 | https://arxiv.org/abs/2308.08143v3 | ["https://github.com/JusperLee/IIANet"] | In the paper 'IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation', what SI-SNRi score did the IIANet model get on the VoxCeleb2 dataset | 14.0 |
WenetSpeech | Paraformer-large | FunASR: A Fundamental End-to-End Speech Recognition Toolkit | 2023-05-18 | https://arxiv.org/abs/2305.11013v1 | ["https://github.com/alibaba-damo-academy/FunASR"] | In the paper 'FunASR: A Fundamental End-to-End Speech Recognition Toolkit', what Character Error Rate (CER) score did the Paraformer-large model get on the WenetSpeech dataset | 6.97 |
Turbulence | CodeLlama:7B-4bit-quantised | Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code | 2023-12-22 | https://arxiv.org/abs/2312.14856v2 | ["https://github.com/shahinhonarvar/turbulence-benchmark"] | In the paper 'Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code', what CorrSc score did the CodeLlama:7B-4bit-quantised model get on the Turbulence dataset | 0.289 |
CACD | ResNet-50-DLDL-v2 | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10 | https://arxiv.org/abs/2307.04570v3 | ["https://github.com/paplhjak/facial-age-estimation-benchmark"] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL-v2 model get on the CACD dataset | 3.96 |
ImageNet | TinySaver(ConvNeXtV2_h, 0.5 Acc drop) | Tiny Models are the Computational Saver for Large Models | 2024-03-26 | https://arxiv.org/abs/2403.17726v3 | ["https://github.com/QingyuanWang/tinysaver"] | In the paper 'Tiny Models are the Computational Saver for Large Models', what Top 1 Accuracy score did the TinySaver(ConvNeXtV2_h, 0.5 Acc drop) model get on the ImageNet dataset | 85.75 |
Occ3D-nuScenes | FB-OCC-H | FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation | 2023-07-04 | https://arxiv.org/abs/2307.01492v1 | ["https://github.com/nvlabs/fb-bev"] | In the paper 'FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation', what mIoU score did the FB-OCC-H model get on the Occ3D-nuScenes dataset | 42.06 |
Set14 - 4x upscaling | DRCT-L | DRCT: Saving Image Super-resolution away from Information Bottleneck | 2024-03-31 | https://arxiv.org/abs/2404.00722v5 | ["https://github.com/ming053l/drct"] | In the paper 'DRCT: Saving Image Super-resolution away from Information Bottleneck', what PSNR score did the DRCT-L model get on the Set14 - 4x upscaling dataset | 29.54 |
SVAMP | MsAT-DeductReasoner | Learning Multi-Step Reasoning by Solving Arithmetic Tasks | 2023-06-02 | https://arxiv.org/abs/2306.01707v3 | ["https://github.com/TianduoWang/MsAT"] | In the paper 'Learning Multi-Step Reasoning by Solving Arithmetic Tasks', what Execution Accuracy score did the MsAT-DeductReasoner model get on the SVAMP dataset | 48.9 |
AMZ Comp | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17 | https://arxiv.org/abs/2308.09198v1 | ["https://github.com/nerdslab/halfhop"] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the HH-GraphSAGE model get on the AMZ Comp dataset | 86.6% |
IIIT5k | DTrOCR 105M | DTrOCR: Decoder-only Transformer for Optical Character Recognition | 2023-08-30 | https://arxiv.org/abs/2308.15996v1 | ["https://github.com/arvindrajan92/DTrOCR"] | In the paper 'DTrOCR: Decoder-only Transformer for Optical Character Recognition', what Accuracy score did the DTrOCR 105M model get on the IIIT5k dataset | 99.6 |
Atari 2600 Tennis | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07 | https://arxiv.org/abs/2305.04180v3 | ["https://github.com/xinjinghao/color"] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score did the ASL DDQN model get on the Atari 2600 Tennis dataset | 22.3 |
GerMS-AT | mE5-large-SVM | Detecting Sexism in German Online Newspaper Comments with Open-Source Text Embeddings (Team GDA, GermEval2024 Shared Task 1: GerMS-Detect, Subtasks 1 and 2, Closed Track) | 2024-09-16 | https://arxiv.org/abs/2409.10341v2 | ["https://github.com/dslaborg/germeval2024"] | In the paper 'Detecting Sexism in German Online Newspaper Comments with Open-Source Text Embeddings (Team GDA, GermEval2024 Shared Task 1: GerMS-Detect, Subtasks 1 and 2, Closed Track)', what Macro F1 score did the mE5-large-SVM model get on the GerMS-AT dataset | 0.597 |
Occ3D-nuScenes | FB-OCC-G | FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation | 2023-07-04 | https://arxiv.org/abs/2307.01492v1 | ["https://github.com/nvlabs/fb-bev"] | In the paper 'FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation', what mIoU score did the FB-OCC-G model get on the Occ3D-nuScenes dataset | 40.69 |
10,000 People - Human Pose Recognition Data | 1 | DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning | 2024-02-28T00:00:00 | https://arxiv.org/abs/2402.18137v2 | [
"https://github.com/2toinf/DecisionNCE"
] | In the paper 'DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning', what 0..5sec score did the 1 model get on the 10,000 People - Human Pose Recognition Data dataset
| 1 |
MATH | AlphaLLM (with MCTS) | Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | 2024-04-18T00:00:00 | https://arxiv.org/abs/2404.12253v2 | [
"https://github.com/yetianjhu/alphallm"
] | In the paper 'Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing', what Accuracy score did the AlphaLLM (with MCTS) model get on the MATH dataset
| 51 |
MAESTRO | YourMT3+ (YPTF.MoE+M) | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04822v3 | [
"https://github.com/mimbres/yourmt3"
] | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what Onset F1 score did the YourMT3+ (YPTF.MoE+M) model get on the MAESTRO dataset
| 96.52 |
SUN-RGBD | TokenFusion (S) | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | [
"https://github.com/VCIP-RGBD/DFormer"
] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the TokenFusion (S) model get on the SUN-RGBD dataset
| 50.0% |
WDC Products-80%cc-seen-medium | gpt-4o-2024-08-06_fine_tuned_wdc_small | Fine-tuning Large Language Models for Entity Matching | 2024-09-12T00:00:00 | https://arxiv.org/abs/2409.08185v1 | [
"https://github.com/wbsg-uni-mannheim/tailormatch"
] | In the paper 'Fine-tuning Large Language Models for Entity Matching', what F1 (%) score did the gpt-4o-2024-08-06_fine_tuned_wdc_small model get on the WDC Products-80%cc-seen-medium dataset
| 87.10 |
MATH | MMOS-DeepSeekMath-7B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | [
"https://github.com/cyzhh/MMOS"
] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-DeepSeekMath-7B(0-shot) model get on the MATH dataset
| 55.0 |
ImageNet 32x32 | idode | Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs | 2023-05-06T00:00:00 | https://arxiv.org/abs/2305.03935v4 | [
"https://github.com/thu-ml/i-dode"
] | In the paper 'Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs', what NLL (bits/dim) score did the idode model get on the ImageNet 32x32 dataset
| 3.69 |
MATH | MuggleMATH-13B | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | 2023-10-09T00:00:00 | https://arxiv.org/abs/2310.05506v3 | [
"https://github.com/ofa-sys/gsm8k-screl"
] | In the paper 'MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning', what Accuracy score did the MuggleMATH-13B model get on the MATH dataset
| 30.7 |
UTKFace | FaRL+MLP | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the FaRL+MLP model get on the UTKFace dataset
| 3.87 |
MUSDB18 | TFC-TDF-UNet (v3) | Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3 | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09382v3 | [
"https://github.com/kuielab/sdx23"
] | In the paper 'Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3', what SDR (vocals) score did the TFC-TDF-UNet (v3) model get on the MUSDB18 dataset
| 9.59 |
Vinoground | Gemini-1.5-Pro (CoT) | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05530v4 | [
"https://github.com/dlvuldet/primevul"
] | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what Text Score score did the Gemini-1.5-Pro (CoT) model get on the Vinoground dataset
| 37 |
Pittsburgh-30k-test | SelaVPR | Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition | 2024-02-22T00:00:00 | https://arxiv.org/abs/2402.14505v3 | [
"https://github.com/Lu-Feng/SelaVPR"
] | In the paper 'Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition', what Recall@1 score did the SelaVPR model get on the Pittsburgh-30k-test dataset
| 92.8 |
OA-Mine - annotations | ft-GPT-3.5-json-val | ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12537v5 | [
"https://github.com/wbsg-uni-mannheim/extractgpt"
] | In the paper 'ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction', what F1-score score did the ft-GPT-3.5-json-val model get on the OA-Mine - annotations dataset
| 84.5 |
COCO test-dev | MoCaE | MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection | 2023-09-26T00:00:00 | https://arxiv.org/abs/2309.14976v4 | [
"https://github.com/fiveai/MoCaE"
] | In the paper 'MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection', what box mAP score did the MoCaE model get on the COCO test-dev dataset
| 65.1 |
HumanEva-I | RTPCA | Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation | 2023-09-04T00:00:00 | https://arxiv.org/abs/2309.01365v3 | [
"https://github.com/hbing-l/rtpca"
] | In the paper 'Refined Temporal Pyramidal Compression-and-Amplification Transformer for 3D Human Pose Estimation', what Mean Reconstruction Error (mm) score did the RTPCA model get on the HumanEva-I dataset
| 19.1 |
PATTERN | CKGCN | CKGConv: General Graph Convolution with Continuous Kernels | 2024-04-21T00:00:00 | https://arxiv.org/abs/2404.13604v2 | [
"https://github.com/networkslab/ckgconv"
] | In the paper 'CKGConv: General Graph Convolution with Continuous Kernels', what Accuracy score did the CKGCN model get on the PATTERN dataset
| 88.661 |