| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
VOT2020 | ODTrack-B | ODTrack: Online Dense Temporal Token Learning for Visual Tracking | 2024-01-03T00:00:00 | https://arxiv.org/abs/2401.01686v1 | [
"https://github.com/gxnu-zhonglab/odtrack"
] | In the paper 'ODTrack: Online Dense Temporal Token Learning for Visual Tracking', what EAO score did the ODTrack-B model get on the VOT2020 dataset
| 0.581 |
LVIS v1.0 | LaMI-DETR | LaMI-DETR: Open-Vocabulary Detection with Language Model Instruction | 2024-07-16T00:00:00 | https://arxiv.org/abs/2407.11335v2 | [
"https://github.com/eternaldolphin/lami-detr"
] | In the paper 'LaMI-DETR: Open-Vocabulary Detection with Language Model Instruction', what AP novel-LVIS base training score did the LaMI-DETR model get on the LVIS v1.0 dataset
| 43.4 |
SportsMOT | DeepEIoU + GTA | GTA: Global Tracklet Association for Multi-Object Tracking in Sports | 2024-11-12T00:00:00 | https://arxiv.org/abs/2411.08216v1 | [
"https://github.com/sjc042/gta-link"
] | In the paper 'GTA: Global Tracklet Association for Multi-Object Tracking in Sports', what HOTA score did the DeepEIoU + GTA model get on the SportsMOT dataset
| 81.0 |
SFCHD | TOOD | Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method | 2023-06-03T00:00:00 | https://arxiv.org/abs/2306.02098v2 | [
"https://github.com/lijfrank-open/SFCHD-SCALE"
] | In the paper 'Large, Complex, and Realistic Safety Clothing and Helmet Detection: Dataset and Method', what mAP@0.50 score did the TOOD model get on the SFCHD dataset
| 78.9 |
LEVIR+ | C2FNet | C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images | 2024-04-22T00:00:00 | https://arxiv.org/abs/2404.13838v1 | [
"https://github.com/chengxihan/c2f-semicd-and-c2f-cdnet"
] | In the paper 'C2F-SemiCD: A Coarse-to-Fine Semi-Supervised Change Detection Method Based on Consistency Regularization in High-Resolution Remote Sensing Images', what F1 score did the C2FNet model get on the LEVIR+ dataset
| 79.15 |
RefCoCo val | PSALM | PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14598v1 | [
"https://github.com/zamling/psalm"
] | In the paper 'PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model', what Overall IoU score did the PSALM model get on the RefCoCo val dataset
| 83.6 |
Atari 2600 Yars Revenge | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what score did the ASL DDQN model get on the Atari 2600 Yars Revenge dataset
| 29231.9 |
PASCAL Context-59 | MAFT+ | Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation | 2024-08-01T00:00:00 | https://arxiv.org/abs/2408.00744v2 | [
"https://github.com/jiaosiyu1999/MAFT-Plus"
] | In the paper 'Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation', what mIoU score did the MAFT+ model get on the PASCAL Context-59 dataset
| 59.4 |
GOT-10k | HIPTrack | HIPTrack: Visual Tracking with Historical Prompts | 2023-11-03T00:00:00 | https://arxiv.org/abs/2311.02072v2 | [
"https://github.com/wenruicai/hiptrack"
] | In the paper 'HIPTrack: Visual Tracking with Historical Prompts', what Average Overlap score did the HIPTrack model get on the GOT-10k dataset
| 77.4 |
OUMVLP | GaitSTR | GaitSTR: Gait Recognition with Sequential Two-stream Refinement | 2024-04-02T00:00:00 | https://arxiv.org/abs/2404.02345v1 | [
"https://github.com/ZoeyZheng0/GaitSTR"
] | In the paper 'GaitSTR: Gait Recognition with Sequential Two-stream Refinement', what Averaged rank-1 acc(%) score did the GaitSTR model get on the OUMVLP dataset
| 90.2 |
NExT-QA | mPLUG-Owl3(8B) | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models | 2024-08-09T00:00:00 | https://arxiv.org/abs/2408.04840v2 | [
"https://github.com/x-plug/mplug-owl"
] | In the paper 'mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models', what Accuracy score did the mPLUG-Owl3(8B) model get on the NExT-QA dataset
| 78.6 |
NAS-Bench-301 | DiNAS | Multi-conditioned Graph Diffusion for Neural Architecture Search | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.06020v2 | [
"https://github.com/rohanasthana/dinas"
] | In the paper 'Multi-conditioned Graph Diffusion for Neural Architecture Search', what Accuracy (Val) score did the DiNAS model get on the NAS-Bench-301 dataset
| 94.92 |
VideoInstruct | IG-VLM-GPT4v | An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM | 2024-03-27T00:00:00 | https://arxiv.org/abs/2403.18406v1 | [
"https://github.com/imagegridworth/IG-VLM"
] | In the paper 'An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM', what Correctness of Information score did the IG-VLM-GPT4v model get on the VideoInstruct dataset
| 3.40 |
RefCOCO testB | MaskRIS (Swin-B, combined DB) | MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19067v1 | [
"https://github.com/naver-ai/maskris"
] | In the paper 'MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation', what Overall IoU score did the MaskRIS (Swin-B, combined DB) model get on the RefCOCO testB dataset
| 75.1 |
VoxCeleb1 | ReDimNet-B3-LM (3.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B3-LM (3.0M) model get on the VoxCeleb1 dataset
| 0.5 |
COLLAB | R-GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GCN + PANDA model get on the COLLAB dataset
| 71.4% |
YouCook2 | HowToCaption | HowToCaption: Prompting LLMs to Transform Video Annotations at Scale | 2023-10-07T00:00:00 | https://arxiv.org/abs/2310.04900v2 | [
"https://github.com/ninatu/howtocaption"
] | In the paper 'HowToCaption: Prompting LLMs to Transform Video Annotations at Scale', what BLEU-4 score did the HowToCaption model get on the YouCook2 dataset
| 8.8 |
HIV dataset | ChemBFN | A Bayesian Flow Network Framework for Chemistry Tasks | 2024-07-28T00:00:00 | https://arxiv.org/abs/2407.20294v1 | [
"https://github.com/Augus1999/bayesian-flow-network-for-chemistry"
] | In the paper 'A Bayesian Flow Network Framework for Chemistry Tasks', what AUC score did the ChemBFN model get on the HIV dataset
| 0.794 |
HACS | RDFA-S6 (InternVideo2-6B) | Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13078v1 | [
"https://github.com/lsy0882/RDFA-S6"
] | In the paper 'Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism', what Average-mAP score did the RDFA-S6 (InternVideo2-6B) model get on the HACS dataset
| 45.8 |
ScreenSpot | OS-Atlas-Base-7B | OS-ATLAS: A Foundation Action Model for Generalist GUI Agents | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23218v1 | [
"https://github.com/OS-Copilot/OS-Atlas"
] | In the paper 'OS-ATLAS: A Foundation Action Model for Generalist GUI Agents', what Accuracy (%) score did the OS-Atlas-Base-7B model get on the ScreenSpot dataset
| 82.47 |
UCF101 | DePT | DePT: Decoupled Prompt Tuning | 2023-09-14T00:00:00 | https://arxiv.org/abs/2309.07439v2 | [
"https://github.com/koorye/dept"
] | In the paper 'DePT: Decoupled Prompt Tuning', what Harmonic mean score did the DePT model get on the UCF101 dataset
| 82.46 |
FineAction | DyFADet (VideoMAE v2-g) | DyFADet: Dynamic Feature Aggregation for Temporal Action Detection | 2024-07-03T00:00:00 | https://arxiv.org/abs/2407.03197v1 | [
"https://github.com/yangle15/DyFADet-pytorch"
] | In the paper 'DyFADet: Dynamic Feature Aggregation for Temporal Action Detection', what mAP score did the DyFADet (VideoMAE v2-g) model get on the FineAction dataset
| 23.8 |
SVT | CPPD | Context Perception Parallel Decoder for Scene Text Recognition | 2023-07-23T00:00:00 | https://arxiv.org/abs/2307.12270v2 | [
"https://github.com/PaddlePaddle/PaddleOCR"
] | In the paper 'Context Perception Parallel Decoder for Scene Text Recognition', what Accuracy score did the CPPD model get on the SVT dataset
| 98.5 |
CoNLL-2014 Shared Task (10 annotations) | GRECO (vote+ESC) | System Combination via Quality Estimation for Grammatical Error Correction | 2023-10-23T00:00:00 | https://arxiv.org/abs/2310.14947v1 | [
"https://github.com/nusnlp/greco"
] | In the paper 'System Combination via Quality Estimation for Grammatical Error Correction', what F0.5 score did the GRECO (vote+ESC) model get on the CoNLL-2014 Shared Task (10 annotations) dataset
| 85.21 |
VoxCeleb | ReDimNet-B3-LM-ASNorm (3.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B3-LM-ASNorm (3.0M) model get on the VoxCeleb dataset
| 0.47 |
Stanford2D3D Panoramic | SFSS-MMSI (RGB+HHA) | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09369v1 | [
"https://github.com/sguttikon/SFSS-MMSI"
] | In the paper 'Single Frame Semantic Segmentation Using Multi-Modal Spherical Images', what mIoU score did the SFSS-MMSI (RGB+HHA) model get on the Stanford2D3D Panoramic dataset
| 60.6% |
BIG-bench (Penguins In A Table) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Penguins In A Table) dataset
| 65.8 |
SUN-RGBD | DPLNet | Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning | 2023-12-01T00:00:00 | https://arxiv.org/abs/2312.00360v2 | [
"https://github.com/shaohuadong2021/dplnet"
] | In the paper 'Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning', what Mean IoU score did the DPLNet model get on the SUN-RGBD dataset
| 52.8% |
RealCQA | crct- 11th ep FineTune | RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic | 2023-08-03T00:00:00 | https://arxiv.org/abs/2308.01979v1 | [
"https://github.com/cse-ai-lab/RealCQA"
] | In the paper 'RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic', what 1:1 Accuracy score did the crct- 11th ep FineTune model get on the RealCQA dataset
| 0.239897973990427 |
PF-PASCAL | GeoAware-SC (Supervised, AP-10K P.T.) | Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.17034v2 | [
"https://github.com/Junyi42/geoaware-sc"
] | In the paper 'Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence', what PCK score did the GeoAware-SC (Supervised, AP-10K P.T.) model get on the PF-PASCAL dataset
| 95.7 |
ScanObjectNN | PointMLS | ModelNet-O: A Large-Scale Synthetic Dataset for Occlusion-Aware Point Cloud Classification | 2024-01-16T00:00:00 | https://arxiv.org/abs/2401.08210v1 | [
"https://github.com/fanglaosi/pointmls"
] | In the paper 'ModelNet-O: A Large-Scale Synthetic Dataset for Occlusion-Aware Point Cloud Classification', what Overall Accuracy score did the PointMLS model get on the ScanObjectNN dataset
| 86.6 |
MoCA-Mask | ZoomNeXt-PVTv2-B5 | ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection | 2023-10-31T00:00:00 | https://arxiv.org/abs/2310.20208v4 | [
"https://github.com/lartpang/zoomnext"
] | In the paper 'ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection', what S-measure score did the ZoomNeXt-PVTv2-B5 model get on the MoCA-Mask dataset
| 0.734 |
GSM8K | MMOS-DeepSeekMath-7B(0-shot,k=50) | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | 2024-02-23T00:00:00 | https://arxiv.org/abs/2403.00799v1 | [
"https://github.com/cyzhh/MMOS"
] | In the paper 'An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning', what Accuracy score did the MMOS-DeepSeekMath-7B(0-shot,k=50) model get on the GSM8K dataset
| 87.2 |
SPair-71k | GMTR | GMTR: Graph Matching Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08141v2 | [
"https://github.com/jp-guo/gm-transformer"
] | In the paper 'GMTR: Graph Matching Transformers', what matching accuracy score did the GMTR model get on the SPair-71k dataset
| 0.832 |
MUTAG | R-GIN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what Accuracy score did the R-GIN + PANDA model get on the MUTAG dataset
| 88.2% |
Squirrel | DJ-GNN | Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters | 2023-06-29T00:00:00 | https://arxiv.org/abs/2306.16976v1 | [
"https://github.com/AhmedBegggaUA/TFM"
] | In the paper 'Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters', what Accuracy score did the DJ-GNN model get on the Squirrel dataset
| 73.48±1.59 |
CHAMELEON | BiRefNet | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet model get on the CHAMELEON dataset
| 0.932 |
Weather2K114 (192) | MoLE-DLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-DLinear model get on the Weather2K114 (192) dataset
| 0.405 |
BIG-bench (Reasoning About Colored Objects) | PaLM 2 (few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, CoT) model get on the BIG-bench (Reasoning About Colored Objects) dataset
| 91.2 |
3DPW | HuManiFlow | HuManiFlow: Ancestor-Conditioned Normalising Flows on SO(3) Manifolds for Human Pose and Shape Distribution Estimation | 2023-05-11T00:00:00 | https://arxiv.org/abs/2305.06968v1 | [
"https://github.com/akashsengupta1997/humaniflow"
] | In the paper 'HuManiFlow: Ancestor-Conditioned Normalising Flows on SO(3) Manifolds for Human Pose and Shape Distribution Estimation', what PA-MPJPE score did the HuManiFlow model get on the 3DPW dataset
| 53.4 |
GTA-to-Avg(Cityscapes,BDD,Mapillary) | CLOUDS | Collaborating Foundation Models for Domain Generalized Semantic Segmentation | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09788v2 | [
"https://github.com/yasserben/clouds"
] | In the paper 'Collaborating Foundation Models for Domain Generalized Semantic Segmentation', what mIoU score did the CLOUDS model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset
| 61.5 |
HIDE (trained on GOPRO) | ID-Blau (FFTformer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (FFTformer) model get on the HIDE (trained on GOPRO) dataset
| 31.94 |
COCO-20i (1-shot) | MSDNet (ResNet-101) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | [
"https://github.com/amirrezafateh/msdnet"
] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-101) model get on the COCO-20i (1-shot) dataset
| 48.5 |
Squirrel | CoED | Improving Graph Neural Networks by Learning Continuous Edge Directions | 2024-10-18T00:00:00 | https://arxiv.org/abs/2410.14109v1 | [
"https://github.com/hormoz-lab/coed-gnn"
] | In the paper 'Improving Graph Neural Networks by Learning Continuous Edge Directions', what Accuracy score did the CoED model get on the Squirrel dataset
| 75.32±1.82 |
MedConceptsQA | HuggingFaceH4/zephyr-7b-beta | Zephyr: Direct Distillation of LM Alignment | 2023-10-25T00:00:00 | https://arxiv.org/abs/2310.16944v1 | [
"https://github.com/huggingface/alignment-handbook"
] | In the paper 'Zephyr: Direct Distillation of LM Alignment', what Accuracy score did the HuggingFaceH4/zephyr-7b-beta model get on the MedConceptsQA dataset
| 25.058 |
UBnormal | MULDE-frame-centric-micro-one-class-classification | MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14497v1 | [
"https://github.com/jakubmicorek/MULDE-Multiscale-Log-Density-Estimation-via-Denoising-Score-Matching-for-Video-Anomaly-Detection"
] | In the paper 'MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection', what AUC score did the MULDE-frame-centric-micro-one-class-classification model get on the UBnormal dataset
| 72.8% |
MSR-VTT | UCoFiA | Unified Coarse-to-Fine Alignment for Video-Text Retrieval | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.10091v1 | [
"https://github.com/ziyang412/ucofia"
] | In the paper 'Unified Coarse-to-Fine Alignment for Video-Text Retrieval', what text-to-video R@1 score did the UCoFiA model get on the MSR-VTT dataset
| 49.4 |
MCubeS | MMSFormer (RGB-A) | MMSFormer: Multimodal Transformer for Material and Semantic Segmentation | 2023-09-07T00:00:00 | https://arxiv.org/abs/2309.04001v4 | [
"https://github.com/csiplab/mmsformer"
] | In the paper 'MMSFormer: Multimodal Transformer for Material and Semantic Segmentation', what mIoU score did the MMSFormer (RGB-A) model get on the MCubeS dataset
| 51.30% |
Oxford 102 Flower | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | [
"https://github.com/mlvlab/rpo"
] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the Oxford 102 Flower dataset
| 84.50 |
CIFAR-FS 5-way (1-shot) | PT+MAP+SF+BPA (transductive) | The Balanced-Pairwise-Affinities Feature Transform | 2024-06-25T00:00:00 | https://arxiv.org/abs/2407.01467v1 | [
"https://github.com/danielshalam/bpa"
] | In the paper 'The Balanced-Pairwise-Affinities Feature Transform', what Accuracy score did the PT+MAP+SF+BPA (transductive) model get on the CIFAR-FS 5-way (1-shot) dataset
| 89.94 |
Electricity (96) | MoLE-RMLP | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RMLP model get on the Electricity (96) dataset
| 0.129 |
VoxCeleb1 | ReDimNet-B6-SF2-LM (15.0M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B6-SF2-LM (15.0M) model get on the VoxCeleb1 dataset
| 0.4 |
3D Object Detection on Argoverse2 Camera Only | Far3D | Far3D: Expanding the Horizon for Surround-view 3D Object Detection | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09616v2 | [
"https://github.com/megvii-research/far3d"
] | In the paper 'Far3D: Expanding the Horizon for Surround-view 3D Object Detection', what Average mAP score did the Far3D model get on the 3D Object Detection on Argoverse2 Camera Only dataset
| 24.4 |
AfriSenti | Random | UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.01093v1 | [
"https://github.com/zerohd4869/sacl"
] | In the paper 'UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis', what weighted-F1 score did the Random model get on the AfriSenti dataset
| 0.34 |
InvertedPendulum-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | [
"https://github.com/dee0512/Temporally-Layered-Architecture"
] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the InvertedPendulum-v2 dataset
| 1000 |
UVG | DiQP on AV1 with QP 255 | Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression | 2024-12-12T00:00:00 | https://arxiv.org/abs/2412.08912v1 | [
"https://github.com/alimd94/DiQP"
] | In the paper 'Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression', what Average PSNR (dB) score did the DiQP on AV1 with QP 255 model get on the UVG dataset
| 32.551 |
ImageNet 256x256 | VAR (Visual Autoregressive) | Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction | 2024-04-03T00:00:00 | https://arxiv.org/abs/2404.02905v2 | [
"https://github.com/FoundationVision/VAR"
] | In the paper 'Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction', what FID score did the VAR (Visual Autoregressive) model get on the ImageNet 256x256 dataset
| 1.73 |
CIFAR-10 | Blackout Diffusion | Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11089v1 | [
"https://github.com/lanl/blackout-diffusion"
] | In the paper 'Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces', what FID score did the Blackout Diffusion model get on the CIFAR-10 dataset
| 4.58 |
Amazon-Health | HetroFair | Heterophily-Aware Fair Recommendation using Graph Convolutional Networks | 2024-01-31T00:00:00 | https://arxiv.org/abs/2402.03365v2 | [
"https://github.com/nematgh/hetrofair"
] | In the paper 'Heterophily-Aware Fair Recommendation using Graph Convolutional Networks', what NDCG@20 score did the HetroFair model get on the Amazon-Health dataset
| 0.1334 |
Lipogram-e | GPT-2-fine-tuned-5-epochs | Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio | 2023-06-28T00:00:00 | https://arxiv.org/abs/2306.15926v1 | [
"https://github.com/hellisotherpeople/constrained-text-generation-studio"
] | In the paper 'Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio', what Ignored Constraint Error Rate score did the GPT-2-fine-tuned-5-epochs model get on the Lipogram-e dataset
| 0.5% |
ShanghaiTech | DAC(STG-NF + Jigsaw) | Divide and Conquer in Video Anomaly Detection: A Comprehensive Review and New Approach | 2023-09-26T00:00:00 | https://arxiv.org/abs/2309.14622v2 | [
"https://github.com/XiaoJian923/Divide-and-Conquer"
] | In the paper 'Divide and Conquer in Video Anomaly Detection: A Comprehensive Review and New Approach', what AUC score did the DAC(STG-NF + Jigsaw) model get on the ShanghaiTech dataset
| 87.72% |
ActivityNet-QA | LocVLM-Vid-B | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | [
"https://github.com/kahnchana/locvlm"
] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-Vid-B model get on the ActivityNet-QA dataset
| 37.4 |
SCUT-CTW1500 | MixNet | MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12817v2 | [
"https://github.com/D641593/MixNet"
] | In the paper 'MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild', what F-Measure score did the MixNet model get on the SCUT-CTW1500 dataset
| 89.8 |
ImageNet | GTP-ViT-L/P8 | GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.03035v2 | [
"https://github.com/ackesnal/gtp-vit"
] | In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-ViT-L/P8 model get on the ImageNet dataset
| 83.7% |
Synthetic | DyEdgeGAT | DyEdgeGAT: Dynamic Edge via Graph Attention for Early Fault Detection in IIoT Systems | 2023-07-07T00:00:00 | https://arxiv.org/abs/2307.03761v3 | [
"https://github.com/mengjiezhao/dyedgegat"
] | In the paper 'DyEdgeGAT: Dynamic Edge via Graph Attention for Early Fault Detection in IIoT Systems', what AUC score did the DyEdgeGAT model get on the Synthetic dataset
| 0.83 |
EuroSAT | RPO | Read-only Prompt Optimization for Vision-Language Few-shot Learning | 2023-08-29T00:00:00 | https://arxiv.org/abs/2308.14960v2 | [
"https://github.com/mlvlab/rpo"
] | In the paper 'Read-only Prompt Optimization for Vision-Language Few-shot Learning', what Harmonic mean score did the RPO model get on the EuroSAT dataset
| 76.79 |
DuoRC | Hybrid-RecallM | RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models | 2023-07-06T00:00:00 | https://arxiv.org/abs/2307.02738v3 | [
"https://github.com/cisco-open/DeepVision/tree/main/recallm"
] | In the paper 'RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models', what Accuracy score did the Hybrid-RecallM model get on the DuoRC dataset
| 52.68 |
Electricity (96) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Electricity (96) dataset
| 0.129 |
ACOS | ChatGPT (gpt-3.5-turbo, zero-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | [
"https://github.com/ZubinGou/multi-view-prompting"
] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (Restaurant) score did the ChatGPT (gpt-3.5-turbo, zero-shot) model get on the ACOS dataset
| 27.11 |
TACRED | LLM-QA4RE (XXLarge) | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.11159v1 | [
"https://github.com/osu-nlp-group/qa4re"
] | In the paper 'Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors', what F1 score did the LLM-QA4RE (XXLarge) model get on the TACRED dataset
| 52.2 |
Weather (720) | RLinear-CI | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear-CI model get on the Weather (720) dataset
| 0.314 |
GTA5-to-Cityscapes | CMFormer | Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00371v5 | [
"https://github.com/BiQiWHU/CMFormer"
] | In the paper 'Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation', what mIoU score did the CMFormer model get on the GTA5-to-Cityscapes dataset
| 55.31 |
PHEVA | TSGAD (Pose Branch) | PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset | 2024-08-26T00:00:00 | https://arxiv.org/abs/2408.14329v1 | [
"https://github.com/tecsar-uncc/pheva"
] | In the paper 'PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset', what AUC-ROC score did the TSGAD (Pose Branch) model get on the PHEVA dataset
| 68 |
MPI-INF-3DHP | ARTS (Resnet50 L=16) | ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos | 2024-10-21T00:00:00 | https://arxiv.org/abs/2410.15582v1 | [
"https://github.com/tangtao-pku/arts"
] | In the paper 'ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos', what MPJPE score did the ARTS (Resnet50 L=16) model get on the MPI-INF-3DHP dataset
| 71.8 |
Weather (96) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Weather (96) dataset
| 0.144 |
HMDB51 | OTI(ViT-L/14) | Orthogonal Temporal Interpolation for Zero-Shot Video Recognition | 2023-08-14T00:00:00 | https://arxiv.org/abs/2308.06897v1 | [
"https://github.com/sweetorangezhuyan/mm2023_oti"
] | In the paper 'Orthogonal Temporal Interpolation for Zero-Shot Video Recognition', what Top-1 Accuracy score did the OTI(ViT-L/14) model get on the HMDB51 dataset
| 64 |
CIFAR-10, 250 Labels | ShrinkMatch | Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning | 2023-08-13T00:00:00 | https://arxiv.org/abs/2308.06777v1 | [
"https://github.com/LiheYoung/ShrinkMatch"
] | In the paper 'Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning', what Percentage error score did the ShrinkMatch model get on the CIFAR-10, 250 Labels dataset
| 4.74 |
Office-Home | PDA (CLIP, ResNet-50) | Prompt-based Distribution Alignment for Unsupervised Domain Adaptation | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09553v2 | [
"https://github.com/baishuanghao/prompt-based-distribution-alignment"
] | In the paper 'Prompt-based Distribution Alignment for Unsupervised Domain Adaptation', what Accuracy score did the PDA (CLIP, ResNet-50) model get on the Office-Home dataset
| 75.3 |
COCO-20i (1-shot) | SCCAN (ResNet-50) | Self-Calibrated Cross Attention Network for Few-Shot Segmentation | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09294v1 | [
"https://github.com/sam1224/sccan"
] | In the paper 'Self-Calibrated Cross Attention Network for Few-Shot Segmentation', what Mean IoU score did the SCCAN (ResNet-50) model get on the COCO-20i (1-shot) dataset
| 46.3 |
HellaSwag | LLaMA3+MoSLoRA | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what Accuracy score did the LLaMA3+MoSLoRA model get on the HellaSwag dataset
| 95.0 |
BDD100K test | ContrasTR | Contrastive Learning for Multi-Object Tracking with Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08043v1 | [
"https://github.com/pfdp0/ContrasTR"
] | In the paper 'Contrastive Learning for Multi-Object Tracking with Transformers', what mMOTA score did the ContrasTR model get on the BDD100K test dataset
| 42.8 |
COVERAGE | Late Fusion | MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.01790v2 | [
"https://github.com/idt-iti/mmfusion-iml"
] | In the paper 'MMFusion: Combining Image Forensic Filters for Visual Manipulation Detection and Localization', what AUC score did the Late Fusion model get on the COVERAGE dataset
| 0.792 |
Electricity (192) | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the Electricity (192) dataset
| 0.148 |
BIG-bench (Date Understanding) | PaLM 2 (few-shot, k=3, Direct) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2 (few-shot, k=3, Direct) model get on the BIG-bench (Date Understanding) dataset
| 74.0 |
BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) | MAC-SQL + GPT-4 | MAC-SQL: A Multi-Agent Collaborative Framework for Text-to-SQL | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.11242v5 | [
"https://github.com/wbbeyourself/mac-sql"
] | In the paper 'MAC-SQL: A Multi-Agent Collaborative Framework for Text-to-SQL', what Execution Accuracy % (Test) score did the MAC-SQL + GPT-4 model get on the BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) dataset
| 59.59 |
Turbulence | GPT-4 | Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code | 2023-12-22T00:00:00 | https://arxiv.org/abs/2312.14856v2 | [
"https://github.com/shahinhonarvar/turbulence-benchmark"
] | In the paper 'Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code', what CorrSc score did the GPT-4 model get on the Turbulence dataset
| 0.848 |
Rope3D | CoBEV | CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity | 2023-10-04T00:00:00 | https://arxiv.org/abs/2310.02815v3 | [
"https://github.com/MasterHow/CoBEV"
] | In the paper 'CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity', what AP@0.7 score did the CoBEV model get on the Rope3D dataset
| 52.72 |
HellaSwag | PaLM 2-S (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (1-shot) model get on the HellaSwag dataset
| 85.6 |
Peptides-func | GCN + PANDA | PANDA: Expanded Width-Aware Message Passing Beyond Rewiring | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03671v2 | [
"https://github.com/jeongwhanchoi/panda"
] | In the paper 'PANDA: Expanded Width-Aware Message Passing Beyond Rewiring', what AP score did the GCN + PANDA model get on the Peptides-func dataset
| 0.6028±0.0031 |
LVIS v1.0 | RALF | Retrieval-Augmented Open-Vocabulary Object Detection | 2024-04-08T00:00:00 | https://arxiv.org/abs/2404.05687v1 | [
"https://github.com/mlvlab/RALF"
] | In the paper 'Retrieval-Augmented Open-Vocabulary Object Detection', what AP novel-LVIS base training score did the RALF model get on the LVIS v1.0 dataset
| 21.9 |
LDC2017T10 | LeakDistill | Incorporating Graph Information in Transformer-based AMR Parsing | 2023-06-23T00:00:00 | https://arxiv.org/abs/2306.13467v1 | [
"https://github.com/sapienzanlp/leakdistill"
] | In the paper 'Incorporating Graph Information in Transformer-based AMR Parsing', what Smatch score did the LeakDistill model get on the LDC2017T10 dataset
| 86.1 |
IllusionVQA | Gemini-Pro | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | 2024-03-23T00:00:00 | https://arxiv.org/abs/2403.15952v3 | [
"https://github.com/csebuetnlp/illusionvqa"
] | In the paper 'IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models', what Accuracy score did the Gemini-Pro model get on the IllusionVQA dataset
| 51.26 |
ColonINST-v1 (Unseen) | Bunny-v1.0-3B (w/ LoRA, w/o extra data) | Efficient Multimodal Learning from Data-centric Perspective | 2024-02-18T00:00:00 | https://arxiv.org/abs/2402.11530v3 | [
"https://github.com/baai-dcai/bunny"
] | In the paper 'Efficient Multimodal Learning from Data-centric Perspective', what Accuracy score did the Bunny-v1.0-3B (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset
| 75.50 |
MM-Vet v2 | IXC2-VL-7B | InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model | 2024-01-29T00:00:00 | https://arxiv.org/abs/2401.16420v1 | [
"https://github.com/internlm/internlm-xcomposer"
] | In the paper 'InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model', what GPT-4 score score did the IXC2-VL-7B model get on the MM-Vet v2 dataset
| 42.5±0.3 |
PCPNet | MSECNet | MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning | 2023-08-04T00:00:00 | https://arxiv.org/abs/2308.02237 | [
"https://github.com/martianxiu/MSECNet"
] | In the paper 'MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning', what RMSE score did the MSECNet model get on the PCPNet dataset
| 9.76 |
DID-MDN | OneRestore | OneRestore: A Universal Restoration Framework for Composite Degradation | 2024-07-05T00:00:00 | https://arxiv.org/abs/2407.04621v4 | [
"https://github.com/gy65896/onerestore"
] | In the paper 'OneRestore: A Universal Restoration Framework for Composite Degradation', what PSNR score did the OneRestore model get on the DID-MDN dataset
| 32.89 |
MATH | GPT-4-code model (CSV, w/ code) | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | 2023-08-15T00:00:00 | https://arxiv.org/abs/2308.07921v1 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification', what Accuracy score did the GPT-4-code model (CSV, w/ code) model get on the MATH dataset
| 73.5 |
TriviaQA | PaLM 2-L (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-L (one-shot) model get on the TriviaQA dataset
| 86.1 |
KPI | CARLA | CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09296v4 | [
"https://github.com/zamanzadeh/CARLA"
] | In the paper 'CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection', what precision score did the CARLA model get on the KPI dataset
| 0.195 |
GTA5-to-Cityscapes | DIFF | Diffusion Features to Bridge Domain Gap for Semantic Segmentation | 2024-06-02T00:00:00 | https://arxiv.org/abs/2406.00777v2 | [
"https://github.com/Yux1angJi/DIFF"
] | In the paper 'Diffusion Features to Bridge Domain Gap for Semantic Segmentation', what mIoU score did the DIFF model get on the GTA5-to-Cityscapes dataset
| 58.01 |