| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
| OLID | RoBERTa-large-ST | Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16609v1 | ["https://github.com/jaugusto97/offense-self-training"] | In the paper 'Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks', what Macro F1 score did the RoBERTa-large-ST model get on the OLID dataset | 80.7 |
| GigaSpeech TEST | Zipformer+pruned transducer (no external language model) | CR-CTC: Consistency regularization on CTC for improved speech recognition | 2024-10-07T00:00:00 | https://arxiv.org/abs/2410.05101v3 | ["https://github.com/k2-fsa/icefall"] | In the paper 'CR-CTC: Consistency regularization on CTC for improved speech recognition', what Word Error Rate (WER) score did the Zipformer+pruned transducer (no external language model) model get on the GigaSpeech TEST dataset | 10.2 |
| FP-R-E | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25T00:00:00 | https://arxiv.org/abs/2308.03768v1 | ["https://github.com/qinzheng93/geotransformer"] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-R-E dataset | 64.12 |
| ICFG-PEDES | RDE | Noisy-Correspondence Learning for Text-to-Image Person Re-identification | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09911v3 | ["https://github.com/QinYang79/RDE"] | In the paper 'Noisy-Correspondence Learning for Text-to-Image Person Re-identification', what Rank 1 score did the RDE model get on the ICFG-PEDES dataset | 66.54 |
| LVIS v1.0 val | GLEE-Pro | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what mask AP score did the GLEE-Pro model get on the LVIS v1.0 val dataset | 49.9 |
| COCO test-dev | GLEE-Lite | General Object Foundation Model for Images and Videos at Scale | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.09158v1 | ["https://github.com/FoundationVision/GLEE"] | In the paper 'General Object Foundation Model for Images and Videos at Scale', what box mAP score did the GLEE-Lite model get on the COCO test-dev dataset | 54.7 |
| JIGSAWS | RICA^2 (Deterministic) | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04T00:00:00 | https://arxiv.org/abs/2408.02138v2 | ["https://github.com/abrarmajeedi/rica2_aqa"] | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 (Deterministic) model get on the JIGSAWS dataset | 0.9 |
| Something-Something V1 | TDS-CLIP-ViT-L/14(8frames) | TDS-CLIP: Temporal Difference Side Network for Image-to-Video Transfer Learning | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10688v1 | ["https://github.com/BBYL9413/TDS-CLIP"] | In the paper 'TDS-CLIP: Temporal Difference Side Network for Image-to-Video Transfer Learning', what Top 1 Accuracy score did the TDS-CLIP-ViT-L/14(8frames) model get on the Something-Something V1 dataset | 63.0 |
| Winoground | BLIP2 (ft COCO) | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10400v4 | ["https://github.com/yonatanbitton/wysiwyr"] | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score did the BLIP2 (ft COCO) model get on the Winoground dataset | 44.00 |
| CIFAR-10 | BFN | Bayesian Flow Networks | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.07037v5 | ["https://github.com/nnaisense/bayesian-flow-networks"] | In the paper 'Bayesian Flow Networks', what bits/dimension score did the BFN model get on the CIFAR-10 dataset | 2.66 |
| BenchLMM | Sphinx-V2-1K | SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models | 2023-11-13T00:00:00 | https://arxiv.org/abs/2311.07575v1 | ["https://github.com/alpha-vllm/llama2-accessory"] | In the paper 'SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models', what GPT-3.5 score did the Sphinx-V2-1K model get on the BenchLMM dataset | 57.43 |
| ICBHI Respiratory Sound Database | Audio-CLAP | BTS: Bridging Text and Sound Modalities for Metadata-Aided Respiratory Sound Classification | 2024-06-10T00:00:00 | https://arxiv.org/abs/2406.06786v2 | ["https://github.com/kaen2891/bts"] | In the paper 'BTS: Bridging Text and Sound Modalities for Metadata-Aided Respiratory Sound Classification', what ICBHI Score did the Audio-CLAP model get on the ICBHI Respiratory Sound Database dataset | 62.56 |
| CropHarvest - Kenya | Feature-level fusion (sum) | A Comparative Assessment of Multi-view fusion learning for Crop Classification | 2023-08-10T00:00:00 | https://arxiv.org/abs/2308.05407v1 | ["https://github.com/fmenat/multiviewcropclassification"] | In the paper 'A Comparative Assessment of Multi-view fusion learning for Crop Classification', what Average Accuracy score did the Feature-level fusion (sum) model get on the CropHarvest - Kenya dataset | 0.630 |
| GSM8K | MMOS-CODE-34B(0-shot) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | ["https://github.com/cyzhh/MMOS"] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-CODE-34B(0-shot) model get on the GSM8K dataset | 80.4 |
| Electricity (720) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | ["https://github.com/rogerni/mole"] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Electricity (720) dataset | 0.18 |
| CACD | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02302v3 | ["https://github.com/wildchlamydia/mivolo"] | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what MAE score did the MiVOLO-V2 model get on the CACD dataset | 3.89 |
| ScanNet200 | PPT+SparseUNet | Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09718v2 | ["https://github.com/Pointcept/Pointcept"] | In the paper 'Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training', what val mIoU score did the PPT+SparseUNet model get on the ScanNet200 dataset | 31.9 |
| BIG-bench (Logic Grid Puzzle) | PaLM-540B (few-shot, k=5) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | ["https://github.com/eternityyw/tram-benchmark"] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM-540B (few-shot, k=5) model get on the BIG-bench (Logic Grid Puzzle) dataset | 42.4 |
| AI-TOD | YOLOv5+Inner-IoU | Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.02877v4 | ["https://github.com/malagoutou/Inner-IoU"] | In the paper 'Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box', what mAP50 score did the YOLOv5+Inner-IoU model get on the AI-TOD dataset | 43.77 |
| Tox21 | elEmBERT-V1 | Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties | 2023-09-17T00:00:00 | https://arxiv.org/abs/2309.09355v3 | ["https://github.com/dmamur/elembert"] | In the paper 'Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties', what AUC score did the elEmBERT-V1 model get on the Tox21 dataset | 0.961 |
| SciTail | SplitEE-S | SplitEE: Early Exit in Deep Neural Networks with Split Computing | 2023-09-17T00:00:00 | https://arxiv.org/abs/2309.09195v1 | ["https://github.com/Div290/SplitEE/blob/main/README.md"] | In the paper 'SplitEE: Early Exit in Deep Neural Networks with Split Computing', what Accuracy score did the SplitEE-S model get on the SciTail dataset | 78.9 |
| Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 | FaceDiffuser | FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11306v1 | ["https://github.com/uuembodiedsocialai/FaceDiffuser"] | In the paper 'FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion', what Lip Vertex Error score did the FaceDiffuser model get on the Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 dataset | 4.2985 |
| waymo vehicle | PillarNeXt | PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds | 2023-05-08T00:00:00 | https://arxiv.org/abs/2305.04925v1 | ["https://github.com/qcraftai/pillarnext"] | In the paper 'PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds', what APH/L2 score did the PillarNeXt model get on the waymo vehicle dataset | 75.76 |
| CoNLL-2014 Shared Task | GRECO (voting+ESC) | System Combination via Quality Estimation for Grammatical Error Correction | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14947v1 | ["https://github.com/nusnlp/greco"] | In the paper 'System Combination via Quality Estimation for Grammatical Error Correction', what F0.5 score did the GRECO (voting+ESC) model get on the CoNLL-2014 Shared Task dataset | 71.12 |
| ImageNet - 1% labeled data | SynCo (ResNet-50) 800ep | SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02401v5 | ["https://github.com/giakoumoglou/synco"] | In the paper 'SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations', what Top 5 Accuracy score did the SynCo (ResNet-50) 800ep model get on the ImageNet - 1% labeled data dataset | 77.5% |
| FLEURS X-eng | GenTranslateV2 | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | 2024-02-10T00:00:00 | https://arxiv.org/abs/2402.06894v2 | ["https://github.com/yuchen005/gentranslate"] | In the paper 'GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators', what ASR-BLEU score did the GenTranslateV2 model get on the FLEURS X-eng dataset | 32.3 |
| PCQM-Contact | ViT-PS | Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02866v3 | ["https://github.com/jw9730/lps"] | In the paper 'Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance', what Hits@1 score did the ViT-PS model get on the PCQM-Contact dataset | 0.3287 |
| CUHK | D-DFFNet | Depth and DOF Cues Make A Better Defocus Blur Detector | 2023-06-20T00:00:00 | https://arxiv.org/abs/2306.11334v1 | ["https://github.com/yuxinjin-whu/d-dffnet"] | In the paper 'Depth and DOF Cues Make A Better Defocus Blur Detector', what MAE score did the D-DFFNet model get on the CUHK dataset | 0.036 |
| PlantVillage | DenseNet | Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision | 2024-06-09T00:00:00 | https://arxiv.org/abs/2406.05612v2 | ["https://github.com/pranavphoenix/Backbones"] | In the paper 'Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision', what Accuracy score did the DenseNet model get on the PlantVillage dataset | 99.88 |
| Ego4D | SOIA-DOD | Short-term Object Interaction Anticipation with Disentangled Object Detection @ Ego4D Short Term Object Interaction Anticipation Challenge | 2024-07-08T00:00:00 | https://arxiv.org/abs/2407.05713v1 | ["https://github.com/keenyjin/soia-dod"] | In the paper 'Short-term Object Interaction Anticipation with Disentangled Object Detection @ Ego4D Short Term Object Interaction Anticipation Challenge', what Overall (Top5 mAP) score did the SOIA-DOD model get on the Ego4D dataset | 6.221 |
| NAS-Bench-201, CIFAR-10 | IS-DARTS | IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance | 2023-12-19T00:00:00 | https://arxiv.org/abs/2312.12648v1 | ["https://github.com/hy-he/is-darts"] | In the paper 'IS-DARTS: Stabilizing DARTS through Precise Measurement on Candidate Importance', what Accuracy (Test) score did the IS-DARTS model get on the NAS-Bench-201, CIFAR-10 dataset | 94.36 |
| AM | BoP | From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis | 2024-11-17T00:00:00 | https://arxiv.org/abs/2411.11149v1 | ["https://github.com/kbogas/PAM_BoP"] | In the paper 'From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis', what Accuracy score did the BoP model get on the AM dataset | 92.41 |
| gRefCOCO | SimVG-DB | SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion | 2024-09-26T00:00:00 | https://arxiv.org/abs/2409.17531v2 | ["https://github.com/dmmm1997/simvg"] | In the paper 'SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion', what Precision@(F1=1, IoU≥0.5) score did the SimVG-DB model get on the gRefCOCO dataset | 62.1 |
| STL-10 | RDUOT | A High-Quality Robust Diffusion Framework for Corrupted Dataset | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17101v2 | ["https://github.com/VinAIResearch/RDUOT"] | In the paper 'A High-Quality Robust Diffusion Framework for Corrupted Dataset', what FID score did the RDUOT model get on the STL-10 dataset | 11.5 |
| RETWEET | HP-CDE | Hawkes Process Based on Controlled Differential Equations | 2023-05-09T00:00:00 | https://arxiv.org/abs/2305.07031v2 | ["https://github.com/kookseungji/Hawkes-Process-Based-on-Controlled-Differential-Equations"] | In the paper 'Hawkes Process Based on Controlled Differential Equations', what Accuracy score did the HP-CDE model get on the RETWEET dataset | 0.552±0.009 |
| ActivityNet-QA | LocVLM-Vid-B+ | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | ["https://github.com/kahnchana/locvlm"] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-Vid-B+ model get on the ActivityNet-QA dataset | 38.2 |
| EQ-Bench | openchat/openchat 3.5 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | ["https://github.com/eq-bench/eq-bench"] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score did the openchat/openchat 3.5 model get on the EQ-Bench dataset | 37.08 |
| AGORA | W-HMR | W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.17460v6 | ["https://github.com/yw0208/W-HMR"] | In the paper 'W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration', what B-NMVE score did the W-HMR model get on the AGORA dataset | 70.4 |
| Aria Synthetic Environments | ImVoxelNet | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10224v1 | ["https://github.com/facebookresearch/efm3d"] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what MAP score did the ImVoxelNet model get on the Aria Synthetic Environments dataset | 64 |
| SAFIM | codegen-6B-multi | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | ["https://github.com/gonglinyuan/safim"] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the codegen-6B-multi model get on the SAFIM dataset | 23.60 |
| GSM8K | DART-Math-Llama3-70B-Uniform (0-shot CoT, w/o code) | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | 2024-06-18T00:00:00 | https://arxiv.org/abs/2407.13690v1 | ["https://github.com/hkust-nlp/dart-math"] | In the paper 'DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving', what Accuracy score did the DART-Math-Llama3-70B-Uniform (0-shot CoT, w/o code) model get on the GSM8K dataset | 90.4 |
| ROOR | LayoutLMv3-GlobalPointer (large) | Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding | 2024-09-29T00:00:00 | https://arxiv.org/abs/2409.19672v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding', what Segment-level F1 score did the LayoutLMv3-GlobalPointer (large) model get on the ROOR dataset | 82.38 |
| GuitarSet | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | ["https://github.com/CPJKU/beat_this"] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the GuitarSet dataset | 88.1 |
| UCR Anomaly Archive | TimeVQVAE-AD | Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling | 2023-11-21T00:00:00 | https://arxiv.org/abs/2311.12550v5 | ["https://github.com/ml4its/timevqvae-anomalydetection"] | In the paper 'Explainable Time Series Anomaly Detection using Masked Latent Generative Modeling', what accuracy score did the TimeVQVAE-AD model get on the UCR Anomaly Archive dataset | 0.708 |
| WildDESED | CRNN (WildDESED + Curriculum learning) | WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System | 2024-07-04T00:00:00 | https://arxiv.org/abs/2407.03656v3 | ["https://github.com/swagshaw/wilddesed"] | In the paper 'WildDESED: An LLM-Powered Dataset for Wild Domestic Environment Sound Event Detection System', what PSDS1 (-5dB) score did the CRNN (WildDESED + Curriculum learning) model get on the WildDESED dataset | 0.049 |
| EC-FUNSD | GeoLayoutLM | Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective | 2024-02-04T00:00:00 | https://arxiv.org/abs/2402.02379v1 | ["https://github.com/chongzhangFDU/ROOR"] | In the paper 'Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective', what F1 score did the GeoLayoutLM model get on the EC-FUNSD dataset | 83.62 |
| WMT2014 English-German | PartialFormer | PartialFormer: Modeling Part Instead of Whole for Machine Translation | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14921v2 | ["https://github.com/zhengkid/partialformer"] | In the paper 'PartialFormer: Modeling Part Instead of Whole for Machine Translation', what BLEU score did the PartialFormer model get on the WMT2014 English-German dataset | 29.56 |
| OntoNotes | ReCAT(pretrained on wikitext103) | Augmenting Transformers with Recursively Composed Multi-grained Representations | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16319v2 | ["https://github.com/ant-research/structuredlm_rtdt"] | In the paper 'Augmenting Transformers with Recursively Composed Multi-grained Representations', what F1 score did the ReCAT(pretrained on wikitext103) model get on the OntoNotes dataset | 88.0 |
| SUN-RGBD | GeminiFusion (MiT-B5) | GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer | 2024-06-03T00:00:00 | https://arxiv.org/abs/2406.01210v2 | ["https://github.com/jiadingcn/geminifusion"] | In the paper 'GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer', what Mean IoU score did the GeminiFusion (MiT-B5) model get on the SUN-RGBD dataset | 53.3 |
| Aria Everyday Objects | EVL | EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models | 2024-06-14T00:00:00 | https://arxiv.org/abs/2406.10224v1 | ["https://github.com/facebookresearch/efm3d"] | In the paper 'EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models', what mAP score did the EVL model get on the Aria Everyday Objects dataset | 22 |
| SOD4SB Private Test | GFL + Test Time Augmentation | BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation | 2023-07-21T00:00:00 | https://arxiv.org/abs/2307.11748v1 | ["https://github.com/shinya7y/UniverseNet"] | In the paper 'BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation', what AP50 score did the GFL + Test Time Augmentation model get on the SOD4SB Private Test dataset | 23.7 |
| DALES | Superpoint Transformer | Efficient 3D Semantic Segmentation with Superpoint Transformer | 2023-06-13T00:00:00 | https://arxiv.org/abs/2306.08045v2 | ["https://github.com/drprojects/superpoint_transformer"] | In the paper 'Efficient 3D Semantic Segmentation with Superpoint Transformer', what mIoU score did the Superpoint Transformer model get on the DALES dataset | 79.6 |
| ENZYMES | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | ["https://github.com/jeongwhanchoi/panda"] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the GCN + PANDA model get on the ENZYMES dataset | 31.55 |
| ImageNet-LT | MDCS (ResNeXt-50) | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition | 2023-08-19T00:00:00 | https://arxiv.org/abs/2308.09922v2 | ["https://github.com/fistyee/mdcs"] | In the paper 'MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition', what Top-1 Accuracy score did the MDCS (ResNeXt-50) model get on the ImageNet-LT dataset | 61.8 |
| SMAC corridor_2z_vs_24zg | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC corridor_2z_vs_24zg dataset | 3.12 |
| 3DSSG | SG-PGM | SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19474v1 | ["https://github.com/dfki-av/sg-pgm"] | In the paper 'SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks', what MRR score did the SG-PGM model get on the 3DSSG dataset | 98.6 |
| PASCAL-5i (5-Shot) | MSDNet (ResNet-101) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | ["https://github.com/amirrezafateh/msdnet"] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the PASCAL-5i (5-Shot) dataset | 70.8 |
| MVTec LOCO AD | ComAD+AST | Component-aware anomaly detection framework for adjustable and logical industrial visual inspection | 2023-05-15T00:00:00 | https://arxiv.org/abs/2305.08509v1 | ["https://github.com/liutongkun/comad"] | In the paper 'Component-aware anomaly detection framework for adjustable and logical industrial visual inspection', what Avg. Detection AUROC score did the ComAD+AST model get on the MVTec LOCO AD dataset | 89.8 |
| BDD100K val | ContrasTR | Contrastive Learning for Multi-Object Tracking with Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08043v1 | ["https://github.com/pfdp0/ContrasTR"] | In the paper 'Contrastive Learning for Multi-Object Tracking with Transformers', what mMOTA score did the ContrasTR model get on the BDD100K val dataset | 41.7 |
| ZINC | CIN++-small | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | ["https://github.com/twitter-research/cwn"] | In the paper 'CIN++: Enhancing Topological Message Passing', what MAE score did the CIN++-small model get on the ZINC dataset | 0.091 |
| NTU RGB+D 120 | ISTA-Net | Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition | 2023-07-14T00:00:00 | https://arxiv.org/abs/2307.07469v1 | ["https://github.com/Necolizer/ISTA-Net"] | In the paper 'Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition', what Accuracy (Cross-Subject) score did the ISTA-Net model get on the NTU RGB+D 120 dataset | 90.5 |
| UPLight | ShareCMP (B2 RGB-FP) | ShareCMP: Polarization-Aware RGB-P Semantic Segmentation | 2023-12-06T00:00:00 | https://arxiv.org/abs/2312.03430v2 | ["https://github.com/lefteyex/sharecmp"] | In the paper 'ShareCMP: Polarization-Aware RGB-P Semantic Segmentation', what mIoU score did the ShareCMP (B2 RGB-FP) model get on the UPLight dataset | 92.45 |
| BANKING77 | OCaTS (kNN-GPT-4) | Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models | 2023-10-20T00:00:00 | https://arxiv.org/abs/2310.13395v1 | ["https://github.com/stoyian/OCaTS"] | In the paper 'Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models', what Accuracy (%) score did the OCaTS (kNN-GPT-4) model get on the BANKING77 dataset | 82.7 |
| IMDB-Clean | MiVOLO-D1 | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | ["https://github.com/wildchlamydia/mivolo"] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what Average mean absolute error score did the MiVOLO-D1 model get on the IMDB-Clean dataset | 4.09 |
| Slakh2100 | YourMT3+ (YPTF.MoE+M) | YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04822v3 | ["https://github.com/mimbres/yourmt3"] | In the paper 'YourMT3+: Multi-instrument Music Transcription with Enhanced Transformer Architectures and Cross-dataset Stem Augmentation', what note-level F-measure-no-offset (Fno) score did the YourMT3+ (YPTF.MoE+M) model get on the Slakh2100 dataset | 0.8456 |
| NExT-QA | LinVT-Qwen2-VL (7B) | LinVT: Empower Your Image-level Large Language Model to Understand Videos | 2024-12-06T00:00:00 | https://arxiv.org/abs/2412.05185v2 | ["https://github.com/gls0425/linvt"] | In the paper 'LinVT: Empower Your Image-level Large Language Model to Understand Videos', what Accuracy score did the LinVT-Qwen2-VL (7B) model get on the NExT-QA dataset | 85.5 |
| Office-Home | EUDA | EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21311v1 | ["https://github.com/a-abedi/euda"] | In the paper 'EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer', what Accuracy score did the EUDA model get on the Office-Home dataset | 84.9 |
| SPKL | ViT | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | ["https://github.com/eighonet/parking-research"] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score did the ViT model get on the SPKL dataset | 0.7335 |
| GAMUS | TIMF | GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data | 2023-05-24T00:00:00 | https://arxiv.org/abs/2305.14914v1 | ["https://github.com/earthnets/rsi-mmsegmentation"] | In the paper 'GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data', what mIoU score did the TIMF model get on the GAMUS dataset | 76.38 |
| CIFAR-100 | ABNet-2G-R3 | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | ["https://github.com/dvssajay/New_World"] | In the paper 'ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities', what Percentage correct score did the ABNet-2G-R3 model get on the CIFAR-100 dataset | 80.830 |
| RES-Q | QurrentOS-coder + Gemini 1.5 Pro | RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale | 2024-06-24T00:00:00 | https://arxiv.org/abs/2406.16801v2 | ["https://github.com/qurrent-ai/res-q"] | In the paper 'RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale', what pass@1 score did the QurrentOS-coder + Gemini 1.5 Pro model get on the RES-Q dataset | 30.0 |
| SMAC MMM2 | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | ["https://github.com/j3soon/dfac-extended"] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC MMM2 dataset | 96.88 |
| Lyft Level 5 | PointBeV (ResNet-50) | PointBeV: A Sparse Approach to BeV Predictions | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00703v2 | ["https://github.com/valeoai/pointbev"] | In the paper 'PointBeV: A Sparse Approach to BeV Predictions', what IoU vehicle - 224x480 - Long score did the PointBeV (ResNet-50) model get on the Lyft Level 5 dataset | 44.5 |
| Charades-STA | video-mamba-suite | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09626v1 | ["https://github.com/opengvlab/video-mamba-suite"] | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what R@1 IoU=0.5 score did the video-mamba-suite model get on the Charades-STA dataset | 57.18 |
| UCF101 | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | ["https://github.com/mlvlab/rpo"] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the UCF101 dataset | 79.34 |
| VLCS | POEM | POEM: Polarization of Embeddings for Domain-Invariant Representations | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.13046v1 | ["https://github.com/josangyoung/official-poem"] | In the paper 'POEM: Polarization of Embeddings for Domain-Invariant Representations', what Average Accuracy score did the POEM model get on the VLCS dataset | 79.2 |
| BSD100 - 2x upscaling | WaveMixSR-V2 | WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency | 2024-09-16T00:00:00 | https://arxiv.org/abs/2409.10582v3 | ["https://github.com/pranavphoenix/WaveMixSR"] | In the paper 'WaveMixSR-V2: Enhancing Super-resolution with Higher Efficiency', what PSNR score did the WaveMixSR-V2 model get on the BSD100 - 2x upscaling dataset | 33.12 |
| Manga109 - 4x upscaling | DAT | Dual Aggregation Transformer for Image Super-Resolution | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03364v2 | ["https://github.com/zhengchen1999/dat"] | In the paper 'Dual Aggregation Transformer for Image Super-Resolution', what PSNR score did the DAT model get on the Manga109 - 4x upscaling dataset | 32.51 |
| DeLiVER | StitchFusion(RGB-D-E-LiDAR) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01343v1 | ["https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion(RGB-D-E-LiDAR) model get on the DeLiVER dataset | 68.18 |
| Texas | TE-GCNN | Transfer Entropy in Graph Convolutional Neural Networks | 2024-06-08T00:00:00 | https://arxiv.org/abs/2406.06632v1 | ["https://github.com/avmoldovan/Heterophily_and_oversmoothing-forked"] | In the paper 'Transfer Entropy in Graph Convolutional Neural Networks', what Accuracy score did the TE-GCNN model get on the Texas dataset | 84.86 ± 4.55 |
| Texas | ChebNet+Bregman | Bregman Graph Neural Network | 2023-09-12T00:00:00 | https://arxiv.org/abs/2309.06645v1 | ["https://github.com/jiayuzhai1207/bregmangnn"] | In the paper 'Bregman Graph Neural Network', what Accuracy score did the ChebNet+Bregman model get on the Texas dataset | 84.05 ± 5.47 |
VDS dataset: Multi exposure stack-based inverse tone mapping | CEVR | Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.03900v1 | [
"https://github.com/skchen1993/2023_CEVR"
] | In the paper 'Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction', what HDR-VDP-2 score did the CEVR model get on the VDS dataset: Multi exposure stack-based inverse tone mapping dataset
| 59.00 |
SMAC corridor | QPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the QPLEX model get on the SMAC corridor dataset
| 75.00 |
AnatEM | UniNER-7B | UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition | 2023-08-07T00:00:00 | https://arxiv.org/abs/2308.03279v2 | [
"https://github.com/emma1066/retrieval-augmented-it-openner"
] | In the paper 'UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition', what F1 score did the UniNER-7B model get on the AnatEM dataset
| 88.65 |
MedConceptsQA | PharMolix/BioMedGPT-LM-7B | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09442v2 | [
"https://github.com/pharmolix/openbiomed"
] | In the paper 'BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine', what Accuracy score did the PharMolix/BioMedGPT-LM-7B model get on the MedConceptsQA dataset
| 24.924 |
ETTh2 (336) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh2 (336) Multivariate dataset
| 0.35 |
ShanghaiTech | MULDE-object-centric-micro | MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14497v1 | [
"https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection"
] | In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-object-centric-micro model get on the ShanghaiTech dataset
| 86.7% |
Ballroom | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the Ballroom dataset
| 97.5 |
CrowdHuman (full body) | MMPedestron | When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10125v1 | [
"https://github.com/BubblyYi/MMPedestron"
] | In the paper 'When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset', what AP score did the MMPedestron model get on the CrowdHuman (full body) dataset
| 97.1 |
ASDiv-A | ATHENA (roberta-base) | ATHENA: Mathematical Reasoning with Thought Expansion | 2023-11-02T00:00:00 | https://arxiv.org/abs/2311.01036v1 | [
"https://github.com/the-jb/athena-math"
] | In the paper 'ATHENA: Mathematical Reasoning with Thought Expansion', what Execution Accuracy score did the ATHENA (roberta-base) model get on the ASDiv-A dataset
| 86.4 |
Mini-Imagenet 5-way (5-shot) | DiffKendall (Meta-Baseline, ResNet-12) | DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation | 2023-07-28T00:00:00 | https://arxiv.org/abs/2307.15317v2 | [
"https://github.com/kaipengm2/DiffKendall"
] | In the paper 'DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation', what Accuracy score did the DiffKendall (Meta-Baseline, ResNet-12) model get on the Mini-Imagenet 5-way (5-shot) dataset
| 80.79 |
nuScenes Camera Only | VCD-A | Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection | 2023-10-24T00:00:00 | https://arxiv.org/abs/2310.15670v1 | [
"https://github.com/opendrivelab/birds-eye-view-perception"
] | In the paper 'Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection', what NDS score did the VCD-A model get on the nuScenes Camera Only dataset
| 67.2 |
TVBench | VideoGPT+ | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.09418v1 | [
"https://github.com/mbzuai-oryx/videogpt-plus"
] | In the paper 'VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding', what Average Accuracy score did the VideoGPT+ model get on the TVBench dataset
| 41.7 |
SportsMOT | Deep-EIoU | Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports | 2023-06-22T00:00:00 | https://arxiv.org/abs/2306.13074v5 | [
"https://github.com/hsiangwei0903/Deep-EIoU"
] | In the paper 'Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports', what HOTA score did the Deep-EIoU model get on the SportsMOT dataset
| 77.2 |
RWTH-PHOENIX-Weather 2014 T | MSKA-SLR | Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05672v1 | [
"https://github.com/sutwangyan/MSKA"
] | In the paper 'Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation', what Word Error Rate (WER) score did the MSKA-SLR model get on the RWTH-PHOENIX-Weather 2014 T dataset
| 20.5 |
LRS3 | RTFS-Net-12 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17189v4 | [
"https://github.com/spkgyk/RTFS-Net"
] | In the paper 'RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation', what SI-SNRi score did the RTFS-Net-12 model get on the LRS3 dataset
| 17.5 |
ICDAR2013 | CLIP4STR-L | CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.14014v3 | [
"https://github.com/VamosC/CLIP4STR"
] | In the paper 'CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model', what Accuracy score did the CLIP4STR-L model get on the ICDAR2013 dataset
| 98.5 |
GSM8K | MetaMath 70B | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | 2023-09-21T00:00:00 | https://arxiv.org/abs/2309.12284v4 | [
"https://github.com/meta-math/MetaMath"
] | In the paper 'MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models', what Accuracy score did the MetaMath 70B model get on the GSM8K dataset
| 82.3 |
MM-Vet | LOVA$^3$ | LOVA3: Learning to Visual Question Answering, Asking and Assessment | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14974v2 | [
"https://github.com/showlab/lova3"
] | In the paper 'LOVA3: Learning to Visual Question Answering, Asking and Assessment', what GPT-4 score score did the LOVA$^3$ model get on the MM-Vet dataset
| 35.2 |
MBPP | DeepSeek-Coder-Instruct 33B (few-shot) | DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence | 2024-01-25T00:00:00 | https://arxiv.org/abs/2401.14196v2 | [
"https://github.com/deepseek-ai/DeepSeek-Coder"
] | In the paper 'DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence', what Accuracy score did the DeepSeek-Coder-Instruct 33B (few-shot) model get on the MBPP dataset
| 70 |