dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
COCO minival | HyperSeg (Swin-B) | HyperSeg: Towards Universal Visual Segmentation with Large Language Model | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17606v2 | [
"https://github.com/congvvc/HyperSeg"
] | In the paper 'HyperSeg: Towards Universal Visual Segmentation with Large Language Model', what PQ score did the HyperSeg (Swin-B) model get on the COCO minival dataset
| 61.2 |
ASAP | Beat This! | Beat this! Accurate beat tracking without DBN postprocessing | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21658v1 | [
"https://github.com/CPJKU/beat_this"
] | In the paper 'Beat this! Accurate beat tracking without DBN postprocessing', what F1 score did the Beat This! model get on the ASAP dataset
| 76.3 |
CropHarvest - Global | Ensemble strategy | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | 2024-03-21T00:00:00 | https://arxiv.org/abs/2403.14297v2 | [
"https://github.com/fmenat/missingviews-study-eo"
] | In the paper 'Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications', what Average Accuracy score did the Ensemble strategy model get on the CropHarvest - Global dataset
| 0.828 |
LEVIR+ | CGNet | Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery | 2024-04-14T00:00:00 | https://arxiv.org/abs/2404.09179v1 | [
"https://github.com/chengxihan/cgnet-cd"
] | In the paper 'Change Guiding Network: Incorporating Change Prior to Guide Change Detection in Remote Sensing Imagery', what F1 score did the CGNet model get on the LEVIR+ dataset
| 83.68 |
TED-LIUM | Whisper-LLaMa-7b | HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.15701v2 | [
"https://github.com/hypotheses-paradise/hypo2trans"
] | In the paper 'HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models', what Word Error Rate (WER) score did the Whisper-LLaMa-7b model get on the TED-LIUM dataset
| 4.6 |
Flickr8k-Expert | SoftSPICE | FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing | 2023-05-27T00:00:00 | https://arxiv.org/abs/2305.17497v2 | [
"https://github.com/zhuang-li/factual"
] | In the paper 'FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing', what Kendall's Tau-c score did the SoftSPICE model get on the Flickr8k-Expert dataset
| 54.2 |
ChestX | RFS+MLP | Improving Cross-domain Few-shot Classification with Multilayer Perceptron | 2023-12-15T00:00:00 | https://arxiv.org/abs/2312.09589v1 | [
"https://github.com/BaiShuanghao/CDFSC-MLP"
] | In the paper 'Improving Cross-domain Few-shot Classification with Multilayer Perceptron', what 5 shot score did the RFS+MLP model get on the ChestX dataset
| 26.00 |
BIG-bench (SNARKS) | PaLM 2(few-shot, k=3, CoT) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2(few-shot, k=3, CoT) model get on the BIG-bench (SNARKS) dataset
| 84.8 |
MOSE | Cutie+ (small, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie+ (small, MEGA) model get on the MOSE dataset
| 70.3 |
VNHSGE-English | ChatGPT | VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12199v1 | [
"https://github.com/xdao85/vnhsge"
] | In the paper 'VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models', what Accuracy score did the ChatGPT model get on the VNHSGE-English dataset
| 79.2 |
PubMed with Public Split: fixed 20 nodes per class | OGC | From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited | 2023-09-24T00:00:00 | https://arxiv.org/abs/2309.13599v2 | [
"https://github.com/zhengwang100/ogc_ggcm"
] | In the paper 'From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited', what Accuracy score did the OGC model get on the PubMed with Public Split: fixed 20 nodes per class dataset
| 83.4% |
ColonINST-v1 (Unseen) | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 2023-06-01T00:00:00 | https://arxiv.org/abs/2306.00890v1 | [
"https://github.com/microsoft/LLaVA-Med"
] | In the paper 'LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day', what Accuracy score did the LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) model get on the ColonINST-v1 (Unseen) dataset
| 73.05 |
UBnormal | AnomalyRuler | Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models | 2024-07-14T00:00:00 | https://arxiv.org/abs/2407.10299v2 | [
"https://github.com/Yuchen413/AnomalyRuler"
] | In the paper 'Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models', what AUC score did the AnomalyRuler model get on the UBnormal dataset
| 71.9% |
KITTI Test (Online Methods) | MCTrack | MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving | 2024-09-23T00:00:00 | https://arxiv.org/abs/2409.16149v2 | [
"https://github.com/megvii-research/mctrack"
] | In the paper 'MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving', what HOTA score did the MCTrack model get on the KITTI Test (Online Methods) dataset
| 81.07 |
Cityscapes val | DSNet-Base(single-scale) | DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03702v1 | [
"https://github.com/takaniwa/dsnet"
] | In the paper 'DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation', what mIoU score did the DSNet-Base(single-scale) model get on the Cityscapes val dataset
| 82.0 |
MATH | ToRA-Code 34B model (w/ code, SC, k=50) | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | 2023-09-29T00:00:00 | https://arxiv.org/abs/2309.17452v4 | [
"https://github.com/microsoft/tora"
] | In the paper 'ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving', what Accuracy score did the ToRA-Code 34B model (w/ code, SC, k=50) model get on the MATH dataset
| 60.0 |
HumanEval | AgentCoder (GPT-4) | AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation | 2023-12-20T00:00:00 | https://arxiv.org/abs/2312.13010v3 | [
"https://github.com/huangd1999/AgentCoder"
] | In the paper 'AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation', what Pass@1 score did the AgentCoder (GPT-4) model get on the HumanEval dataset
| 96.3 |
X4K1000FPS | VFIMamba | VFIMamba: Video Frame Interpolation with State Space Models | 2024-07-02T00:00:00 | https://arxiv.org/abs/2407.02315v2 | [
"https://github.com/mcg-nju/vfimamba"
] | In the paper 'VFIMamba: Video Frame Interpolation with State Space Models', what PSNR score did the VFIMamba model get on the X4K1000FPS dataset
| 32.15 |
COCO val2017 | SynCo (ResNet-50) 200ep | SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02401v5 | [
"https://github.com/giakoumoglou/synco"
] | In the paper 'SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations', what mask AP score did the SynCo (ResNet-50) 200ep model get on the COCO val2017 dataset
| 35.4 |
GSM8K | OpenMath-CodeLlama-7B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-CodeLlama-7B (w/ code, SC, k=50) model get on the GSM8K dataset
| 84.8 |
GTA-to-Avg(Cityscapes,BDD,Mapillary) | CMFormer | Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation | 2023-07-01T00:00:00 | https://arxiv.org/abs/2307.00371v5 | [
"https://github.com/BiQiWHU/CMFormer"
] | In the paper 'Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation', what mIoU score did the CMFormer model get on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset
| 51.10 |
STS16 | PromptEOL+CSE+OPT-13B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | [
"https://github.com/kongds/scaling_sentemb"
] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+OPT-13B model get on the STS16 dataset
| 0.8590 |
VisDA2017 | MLNet | MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.07871v4 | [
"https://github.com/YanzuoLu/MLNet"
] | In the paper 'MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation', what H-score score did the MLNet model get on the VisDA2017 dataset
| 69.9 |
MM-Vet | InternLM2+ViT (QMoSLoRA) | Mixture-of-Subspaces in Low-Rank Adaptation | 2024-06-16T00:00:00 | https://arxiv.org/abs/2406.11909v3 | [
"https://github.com/wutaiqiang/moslora"
] | In the paper 'Mixture-of-Subspaces in Low-Rank Adaptation', what GPT-4 score score did the InternLM2+ViT (QMoSLoRA) model get on the MM-Vet dataset
| 35.2 |
Waymo Open Dataset | VoxelKP | VoxelKP: A Voxel-based Network Architecture for Human Keypoint Estimation in LiDAR Data | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.08871v1 | [
"https://github.com/shijianjian/voxelkp"
] | In the paper 'VoxelKP: A Voxel-based Network Architecture for Human Keypoint Estimation in LiDAR Data', what MPJPE score did the VoxelKP model get on the Waymo Open Dataset dataset
| 8.87 |
HACS | ActionMamba(InternVideo2-6B) | Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09626v1 | [
"https://github.com/opengvlab/video-mamba-suite"
] | In the paper 'Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding', what Average-mAP score did the ActionMamba(InternVideo2-6B) model get on the HACS dataset
| 44.56 |
CoNLL 2012 | Maverick_mes | Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21489v1 | [
"https://github.com/sapienzanlp/maverick-coref"
] | In the paper 'Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends', what Avg F1 score did the Maverick_mes model get on the CoNLL 2012 dataset
| 83.6 |
ImageNet | GTP-EVA-L/P8 | GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.03035v2 | [
"https://github.com/ackesnal/gtp-vit"
] | In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-EVA-L/P8 model get on the ImageNet dataset
| 85.4% |
Winoground | KeyComp (GPT-3.5) | Prompting Large Vision-Language Models for Compositional Reasoning | 2024-01-20T00:00:00 | https://arxiv.org/abs/2401.11337v1 | [
"https://github.com/tossowski/keycomp"
] | In the paper 'Prompting Large Vision-Language Models for Compositional Reasoning', what Text Score score did the KeyComp (GPT-3.5) model get on the Winoground dataset
| 30.3 |
50 Salads | LTContext | How Much Temporal Long-Term Context is Needed for Action Segmentation? | 2023-08-22T00:00:00 | https://arxiv.org/abs/2308.11358v2 | [
"https://github.com/ltcontext/ltcontext"
] | In the paper 'How Much Temporal Long-Term Context is Needed for Action Segmentation?', what F1@10% score did the LTContext model get on the 50 Salads dataset
| 89.4 |
TACO -- Twitter Arguments from COnversations | TACO | TACO -- Twitter Arguments from COnversations | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00406v1 | [
"https://github.com/tomatenmarc/taco"
] | In the paper 'TACO -- Twitter Arguments from COnversations', what macro F1 score did the TACO model get on the TACO -- Twitter Arguments from COnversations dataset
| 85.06 |
InterHuman | in2IN | in2IN: Leveraging individual Information to Generate Human INteractions | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09988v1 | [
"https://github.com/pabloruizponce/in2IN"
] | In the paper 'in2IN: Leveraging individual Information to Generate Human INteractions', what FID score did the in2IN model get on the InterHuman dataset
| 5.177 |
Automatic Cardiac Diagnosis Challenge (ACDC) | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Avg DSC score did the EMCAD model get on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset
| 92.12 |
Mol-Instruction | BioT5+ | BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning | 2024-02-27T00:00:00 | https://arxiv.org/abs/2402.17810v2 | [
"https://github.com/QizhiPei/BioT5"
] | In the paper 'BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning', what Exact score did the BioT5+ model get on the Mol-Instruction dataset
| 0.864 |
Wiki-CS | GCN | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what Accuracy score did the GCN model get on the Wiki-CS dataset
| 81.93 |
MPDD | RealNet | RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection | 2024-03-09T00:00:00 | https://arxiv.org/abs/2403.05897v1 | [
"https://github.com/cnulab/realnet"
] | In the paper 'RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection', what Detection AUROC score did the RealNet model get on the MPDD dataset
| 96.3 |
SNLI | SplitEE-S | SplitEE: Early Exit in Deep Neural Networks with Split Computing | 2023-09-17T00:00:00 | https://arxiv.org/abs/2309.09195v1 | [
"https://github.com/Div290/SplitEE/blob/main/README.md"
] | In the paper 'SplitEE: Early Exit in Deep Neural Networks with Split Computing', what Accuracy score did the SplitEE-S model get on the SNLI dataset
| 79.0 |
MCubeS | StitchFusion (RGB-N) | StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation | 2024-08-02T00:00:00 | https://arxiv.org/abs/2408.01343v1 | [
"https://github.com/libingyu01/stitchfusion-stitchfusion-weaving-any-visual-modalities-to-enhance-multimodal-semantic-segmentation"
] | In the paper 'StitchFusion: Weaving Any Visual Modalities to Enhance Multimodal Semantic Segmentation', what mIoU score did the StitchFusion (RGB-N) model get on the MCubeS dataset
| 53.21 |
Walker2d-v2 | TLA | Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures | 2023-05-30T00:00:00 | https://arxiv.org/abs/2305.18701v3 | [
"https://github.com/dee0512/Temporally-Layered-Architecture"
] | In the paper 'Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures', what Mean Reward score did the TLA model get on the Walker2d-v2 dataset
| 3878.41 |
ImageNet - 1% labeled data | SemiReward | SemiReward: A General Reward Model for Semi-supervised Learning | 2023-10-04T00:00:00 | https://arxiv.org/abs/2310.03013v2 | [
"https://github.com/Westlake-AI/SemiReward"
] | In the paper 'SemiReward: A General Reward Model for Semi-supervised Learning', what Top 1 Accuracy score did the SemiReward model get on the ImageNet - 1% labeled data dataset
| 59.64% |
MassSpecGym | FFN Fingerprint | MassSpecGym: A benchmark for the discovery and identification of molecules | 2024-10-30T00:00:00 | https://arxiv.org/abs/2410.23326v1 | [
"https://github.com/pluskal-lab/massspecgym"
] | In the paper 'MassSpecGym: A benchmark for the discovery and identification of molecules', what Hit Rate @ 1 score did the FFN Fingerprint model get on the MassSpecGym dataset
| 7.62 |
ImageNet | DynamicViT-S | Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | 2023-08-18T00:00:00 | https://arxiv.org/abs/2308.09372v3 | [
"https://github.com/tobna/whattransformertofavor"
] | In the paper 'Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers', what Top 1 Accuracy score did the DynamicViT-S model get on the ImageNet dataset
| 81.09% |
THUMOS’14 | P-MIL | Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.17861v1 | [
"https://github.com/RenHuan1999/CVPR2023_P-MIL"
] | In the paper 'Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization', what mAP@0.5 score did the P-MIL model get on the THUMOS’14 dataset
| 40.0 |
RoadTextVQA | Singularity | Reading Between the Lanes: Text VideoQA on the Road | 2023-07-08T00:00:00 | https://arxiv.org/abs/2307.03948v1 | [
"https://github.com/georg3tom/RoadTextVQA"
] | In the paper 'Reading Between the Lanes: Text VideoQA on the Road', what ACCURACY score did the Singularity model get on the RoadTextVQA dataset
| 24.62 |
MATH | MathCoder-L-13B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | [
"https://github.com/mathllm/mathcoder"
] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-13B model get on the MATH dataset
| 29.9 |
Fashion-MNIST | VMM | The VampPrior Mixture Model | 2024-02-06T00:00:00 | https://arxiv.org/abs/2402.04412v2 | [
"https://github.com/astirn/vampprior-mixture-model"
] | In the paper 'The VampPrior Mixture Model', what Accuracy score did the VMM model get on the Fashion-MNIST dataset
| 0.716 |
CATH 4.2 | GVP | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | [
"https://github.com/A4Bio/OpenCPD"
] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the GVP model get on the CATH 4.2 dataset
| 39.47 |
PASCAL-5i (5-Shot) | MIANet (ResNet-50) | MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | 2023-05-23T00:00:00 | https://arxiv.org/abs/2305.13864v1 | [
"https://github.com/aldrich2y/mianet"
] | In the paper 'MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation', what Mean IoU score did the MIANet (ResNet-50) model get on the PASCAL-5i (5-Shot) dataset
| 71.59 |
YouTube Highlights | UVCOM | Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection | 2023-11-28T00:00:00 | https://arxiv.org/abs/2311.16464v1 | [
"https://github.com/easonxiao-888/uvcom"
] | In the paper 'Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection', what mAP score did the UVCOM model get on the YouTube Highlights dataset
| 77.4 |
ImageNet 256x256 | GIVT-Causal-L+A | GIVT: Generative Infinite-Vocabulary Transformers | 2023-12-04T00:00:00 | https://arxiv.org/abs/2312.02116v4 | [
"https://github.com/google-research/big_vision"
] | In the paper 'GIVT: Generative Infinite-Vocabulary Transformers', what FID score did the GIVT-Causal-L+A model get on the ImageNet 256x256 dataset
| 2.59 |
ImageNet - 1% labeled data | Meta Co-Training | Meta Co-Training: Two Views are Better than One | 2023-11-29T00:00:00 | https://arxiv.org/abs/2311.18083v4 | [
"https://github.com/jayrothenberger/meta-co-training"
] | In the paper 'Meta Co-Training: Two Views are Better than One', what Top 1 Accuracy score did the Meta Co-Training model get on the ImageNet - 1% labeled data dataset
| 80.7% |
Imagenet-dog-15 | DPAC | Deep Online Probability Aggregation Clustering | 2024-07-07T00:00:00 | https://arxiv.org/abs/2407.05246v2 | [
"https://github.com/aomandechenai/deep-probability-aggregation-clustering"
] | In the paper 'Deep Online Probability Aggregation Clustering', what Accuracy score did the DPAC model get on the Imagenet-dog-15 dataset
| 0.726 |
KIT Motion-Language | AttT2M | AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism | 2023-09-02T00:00:00 | https://arxiv.org/abs/2309.00796v1 | [
"https://github.com/zcymonkey/attt2m"
] | In the paper 'AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism', what FID score did the AttT2M model get on the KIT Motion-Language dataset
| 0.870 |
AudioCaps | GenAU-S | Taming Data and Transformers for Audio Generation | 2024-06-27T00:00:00 | https://arxiv.org/abs/2406.19388v2 | [
"https://github.com/snap-research/GenAU"
] | In the paper 'Taming Data and Transformers for Audio Generation', what FAD score did the GenAU-S model get on the AudioCaps dataset
| 1.07 |
DanceTrack | MOTIP (Deformable DETR, with DanceTrack val and CrowdHuman) | Multiple Object Tracking as ID Prediction | 2024-03-25T00:00:00 | https://arxiv.org/abs/2403.16848v1 | [
"https://github.com/MCG-NJU/MOTIP"
] | In the paper 'Multiple Object Tracking as ID Prediction', what HOTA score did the MOTIP (Deformable DETR, with DanceTrack val and CrowdHuman) model get on the DanceTrack dataset
| 73.7 |
MOT17 | IMM-JHSE | One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation | 2024-09-04T00:00:00 | https://arxiv.org/abs/2409.02562v2 | [
"https://github.com/Paulkie99/imm-jhse"
] | In the paper 'One Homography is All You Need: IMM-based Joint Homography and Multiple Object State Estimation', what MOTA score did the IMM-JHSE model get on the MOT17 dataset
| 79.54 |
TASD | ChatGPT (gpt-3.5-turbo, few-shot) | MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction | 2023-05-22T00:00:00 | https://arxiv.org/abs/2305.12627v1 | [
"https://github.com/ZubinGou/multi-view-prompting"
] | In the paper 'MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction', what F1 (R16) score did the ChatGPT (gpt-3.5-turbo, few-shot) model get on the TASD dataset
| 46.51 |
Replica | Open-YOLO 3D | Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation | 2024-06-04T00:00:00 | https://arxiv.org/abs/2406.02548v2 | [
"https://github.com/aminebdj/openyolo3d"
] | In the paper 'Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation', what mAP score did the Open-YOLO 3D model get on the Replica dataset
| 23.7 |
ReClor | Rational Reasoner / IDOL | IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning | 2023-06-27T00:00:00 | https://arxiv.org/abs/2306.15273v1 | [
"https://github.com/GeekDream-x/IDOL"
] | In the paper 'IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning', what Test score did the Rational Reasoner / IDOL model get on the ReClor dataset
| 80.6 |
CSL-Daily | MSKA-SLT | Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation | 2024-05-09T00:00:00 | https://arxiv.org/abs/2405.05672v1 | [
"https://github.com/sutwangyan/MSKA"
] | In the paper 'Multi-Stream Keypoint Attention Network for Sign Language Recognition and Translation', what BLEU-4 score did the MSKA-SLT model get on the CSL-Daily dataset
| 25.52 |
Wisconsin (60%/20%/20% random splits) | HH-GraphSAGE | Half-Hop: A graph upsampling approach for slowing down message passing | 2023-08-17T00:00:00 | https://arxiv.org/abs/2308.09198v1 | [
"https://github.com/nerdslab/halfhop"
] | In the paper 'Half-Hop: A graph upsampling approach for slowing down message passing', what 1:1 Accuracy score did the HH-GraphSAGE model get on the Wisconsin (60%/20%/20% random splits) dataset
| 85.88 ± 3.99 |
MOT17 | UCMCTrack | UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08952v2 | [
"https://github.com/corfyi/ucmctrack"
] | In the paper 'UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation', what MOTA score did the UCMCTrack model get on the MOT17 dataset
| 80.5 |
SAFIM | CodeLlama-34b-hf | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | 2024-03-07T00:00:00 | https://arxiv.org/abs/2403.04814v3 | [
"https://github.com/gonglinyuan/safim"
] | In the paper 'Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks', what Algorithmic score did the CodeLlama-34b-hf model get on the SAFIM dataset
| 38.55 |
MM-Vet v2 | Gemini 1.5 Pro | Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | 2024-03-08T00:00:00 | https://arxiv.org/abs/2403.05530v4 | [
"https://github.com/dlvuldet/primevul"
] | In the paper 'Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context', what GPT-4 score score did the Gemini 1.5 Pro model get on the MM-Vet v2 dataset
| 66.9±0.2 |
COCO test-dev | LeYOLO-Small@320 | LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14239v1 | [
"https://github.com/LilianHollard/LeYOLO"
] | In the paper 'LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection', what box mAP score did the LeYOLO-Small@320 model get on the COCO test-dev dataset
| 29 |
CommitmentBank | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-S (one-shot) model get on the CommitmentBank dataset
| 82.1 |
UCF-101 | FIFO-Diffusion | FIFO-Diffusion: Generating Infinite Videos from Text without Training | 2024-05-19T00:00:00 | https://arxiv.org/abs/2405.11473v4 | [
"https://github.com/jjihwan/FIFO-Diffusion_public"
] | In the paper 'FIFO-Diffusion: Generating Infinite Videos from Text without Training', what Inception Score score did the FIFO-Diffusion model get on the UCF-101 dataset
| 74.44 |
COCO-20i (1-shot) | QCLNet (ResNet-101) | Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation | 2023-05-12T00:00:00 | https://arxiv.org/abs/2305.07283v3 | [
"https://github.com/zwzheng98/qclnet"
] | In the paper 'Quaternion-valued Correlation Learning for Few-Shot Semantic Segmentation', what Mean IoU score did the QCLNet (ResNet-101) model get on the COCO-20i (1-shot) dataset
| 43.6 |
ETTm1 (720) Multivariate | RLinear | Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping | 2023-05-18T00:00:00 | https://arxiv.org/abs/2305.10721v1 | [
"https://github.com/plumprc/rtsf"
] | In the paper 'Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping', what MSE score did the RLinear model get on the ETTm1 (720) Multivariate dataset
| 0.425 |
GSM8K | OpenMath-Llama2-70B (w/ code, SC, k=50) | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | 2024-02-15T00:00:00 | https://arxiv.org/abs/2402.10176v2 | [
"https://github.com/kipok/nemo-skills"
] | In the paper 'OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset', what Accuracy score did the OpenMath-Llama2-70B (w/ code, SC, k=50) model get on the GSM8K dataset
| 90.1 |
S3DIS Area5 | PointHR | PointHR: Exploring High-Resolution Architectures for 3D Point Cloud Segmentation | 2023-10-11T00:00:00 | https://arxiv.org/abs/2310.07743v1 | [
"https://github.com/haibo-qiu/PointHR"
] | In the paper 'PointHR: Exploring High-Resolution Architectures for 3D Point Cloud Segmentation', what mIoU score did the PointHR model get on the S3DIS Area5 dataset
| 73.2 |
MVTec AD | DDAD | Anomaly Detection with Conditioned Denoising Diffusion Models | 2023-05-25T00:00:00 | https://arxiv.org/abs/2305.15956v2 | [
"https://github.com/arimousa/DDAD"
] | In the paper 'Anomaly Detection with Conditioned Denoising Diffusion Models', what Detection AUROC score did the DDAD model get on the MVTec AD dataset
| 99.8 |
Travel | Binary Diffusion | Tabular Data Generation using Binary Diffusion | 2024-09-20T00:00:00 | https://arxiv.org/abs/2409.13882v2 | [
"https://github.com/vkinakh/binary-diffusion-tabular"
] | In the paper 'Tabular Data Generation using Binary Diffusion', what LR Accuracy score did the Binary Diffusion model get on the Travel dataset
| 83.79 |
MUTAG | Graph-JEPA | Graph-level Representation Learning with Joint-Embedding Predictive Architectures | 2023-09-27T00:00:00 | https://arxiv.org/abs/2309.16014v2 | [
"https://github.com/geriskenderi/graph-jepa"
] | In the paper 'Graph-level Representation Learning with Joint-Embedding Predictive Architectures', what Accuracy score did the Graph-JEPA model get on the MUTAG dataset
| 91.25% |
TAP-Vid-RGB-Stacking | LocoTrack-B | Local All-Pair Correspondence for Point Tracking | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15420v1 | [
"https://github.com/ku-cvlab/locotrack"
] | In the paper 'Local All-Pair Correspondence for Point Tracking', what Average Jaccard score did the LocoTrack-B model get on the TAP-Vid-RGB-Stacking dataset
| 70.8 |
HalfCheetah-v4 | MEow | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | 2024-05-22T00:00:00 | https://arxiv.org/abs/2405.13629v2 | [
"https://github.com/ChienFeng-hub/meow"
] | In the paper 'Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow', what Average Return score did the MEow model get on the HalfCheetah-v4 dataset
| 10981.47 |
OCHuman | BBox-Mask-Pose 2x | Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle | 2024-12-02T00:00:00 | https://arxiv.org/abs/2412.01562v1 | [
"https://github.com/MiraPurkrabek/BBoxMaskPose"
] | In the paper 'Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle', what Test AP score did the BBox-Mask-Pose 2x model get on the OCHuman dataset
| 48.3 |
ETTm2 (96) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm2 (96) Multivariate dataset
| 0.162 |
SARDet-100K | MSFA (Deformable DETR) | SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection | 2024-03-11T00:00:00 | https://arxiv.org/abs/2403.06534v2 | [
"https://github.com/zcablii/sardet_100k"
] | In the paper 'SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection', what box mAP score did the MSFA (Deformable DETR) model get on the SARDet-100K dataset
| 51.3 |
MM-Vet | LLaVA-1.5-7B (CSR) | Calibrated Self-Rewarding Vision Language Models | 2024-05-23T00:00:00 | https://arxiv.org/abs/2405.14622v4 | [
"https://github.com/yiyangzhou/csr"
] | In the paper 'Calibrated Self-Rewarding Vision Language Models', what GPT-4 score score did the LLaVA-1.5-7B (CSR) model get on the MM-Vet dataset
| 33.9 |
UTKFace | ResNet-50-DLDL-v2 | A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04570v3 | [
"https://github.com/paplhjak/facial-age-estimation-benchmark"
] | In the paper 'A Call to Reflect on Evaluation Practices for Age Estimation: Comparative Analysis of the State-of-the-Art and a Unified Benchmark', what MAE score did the ResNet-50-DLDL-v2 model get on the UTKFace dataset
| 4.42 |
ScanObjectNN | PointGPT | PointGPT: Auto-regressively Generative Pre-training from Point Clouds | 2023-05-19T00:00:00 | https://arxiv.org/abs/2305.11487v2 | [
"https://github.com/CGuangyan-BIT/PointGPT"
] | In the paper 'PointGPT: Auto-regressively Generative Pre-training from Point Clouds', what Overall Accuracy score did the PointGPT model get on the ScanObjectNN dataset
| 93.4 |
NExT-QA | ViLA (3B) | ViLA: Efficient Video-Language Alignment for Video Question Answering | 2023-12-13T00:00:00 | https://arxiv.org/abs/2312.08367v4 | [
"https://github.com/xijun-cs/vila"
] | In the paper 'ViLA: Efficient Video-Language Alignment for Video Question Answering', what Accuracy score did the ViLA (3B) model get on the NExT-QA dataset
| 75.6 |
ETTh1 (336) Multivariate | MoLE-RLinear | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06786v3 | [
"https://github.com/rogerni/mole"
] | In the paper 'Mixture-of-Linear-Experts for Long-term Time Series Forecasting', what MSE score did the MoLE-RLinear model get on the ETTh1 (336) Multivariate dataset
| 0.43 |
VoxCeleb1 | ReDimNet-B1-LM (2.2M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B1-LM (2.2M) model get on the VoxCeleb1 dataset
| 0.85 |
SMAC MMM2_7m2M1M_vs_9m3M1M | DMIX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DMIX model get on the SMAC MMM2_7m2M1M_vs_9m3M1M dataset
| 92.33 |
PeMSD7 | PM-DMNet(R) | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | In the paper 'Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction', what 12 steps MAE score did the PM-DMNet(R) model get on the PeMSD7 dataset
| 19.18 |
MATH | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08935v3 | [
"https://huggingface.co/datasets/peiyi9979/Math-Shepherd"
] | In the paper 'Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations', what Accuracy score did the Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) model get on the MATH dataset
| 33.0 |
Adience Age | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02302v3 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what Accuracy (5-fold) score did the MiVOLO-V2 model get on the Adience Age dataset
| 69.43 |
MSRA-TD500 | MixNet | MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12817v2 | [
"https://github.com/D641593/MixNet"
] | In the paper 'MixNet: Toward Accurate Detection of Challenging Scene Text in the Wild', what Recall score did the MixNet model get on the MSRA-TD500 dataset
| 88.1 |
GSM8K | Shepherd+Mistral-7B (SFT on MetaMATH + PRM RL+ PRM rerank, k=256) | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | 2023-12-14T00:00:00 | https://arxiv.org/abs/2312.08935v3 | [
"https://huggingface.co/datasets/peiyi9979/Math-Shepherd"
] | In the paper 'Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations', what Accuracy score did the Shepherd+Mistral-7B (SFT on MetaMATH + PRM RL+ PRM rerank, k=256) model get on the GSM8K dataset
| 89.1 |
Natural Questions | PaLM 2-S (one-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what EM score did the PaLM 2-S (one-shot) model get on the Natural Questions dataset
| 25.3 |
Traffic (192) | TSMixer | TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting | 2023-06-14T00:00:00 | https://arxiv.org/abs/2306.09364v4 | [
"https://github.com/ibm/tsfm"
] | In the paper 'TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting', what MSE score did the TSMixer model get on the Traffic (192) dataset
| 0.377 |
IMDB-Clean | VOLO-D1 age&gender | MiVOLO: Multi-input Transformer for Age and Gender Estimation | 2023-07-10T00:00:00 | https://arxiv.org/abs/2307.04616v2 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'MiVOLO: Multi-input Transformer for Age and Gender Estimation', what Average mean absolute error score did the VOLO-D1 age&gender model get on the IMDB-Clean dataset
| 4.22 |
MM-Vet | FlashSloth-HD | FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04317v1 | [
"https://github.com/codefanw/flashsloth"
] | In the paper 'FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression', what GPT-4 score score did the FlashSloth-HD model get on the MM-Vet dataset
| 49.0 |
RealBlur-R | ID-Blau (FFTformer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what PSNR (sRGB) score did the ID-Blau (FFTformer) model get on the RealBlur-R dataset
| 40.45 |
ToolBench | Attention Bucket | Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use | 2023-12-07T00:00:00 | https://arxiv.org/abs/2312.04455v4 | [
"https://github.com/fiorina1212/attention-buckets"
] | In the paper 'Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use', what Win rate score did the Attention Bucket model get on the ToolBench dataset
| 71.5 |
ImageNet | GTP-LV-ViT-S/P8 | GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | 2023-11-06T00:00:00 | https://arxiv.org/abs/2311.03035v2 | [
"https://github.com/ackesnal/gtp-vit"
] | In the paper 'GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation', what Top 1 Accuracy score did the GTP-LV-ViT-S/P8 model get on the ImageNet dataset
| 81.9% |
DAVIS-S | BiRefNet (DUTS) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-measure score did the BiRefNet (DUTS) model get on the DAVIS-S dataset
| 0.967 |
Office-31 | GSDE | Gradual Source Domain Expansion for Unsupervised Domain Adaptation | 2023-11-16T00:00:00 | https://arxiv.org/abs/2311.09599v1 | [
"https://github.com/ThomasWestfechtel/GSDE"
] | In the paper 'Gradual Source Domain Expansion for Unsupervised Domain Adaptation', what Average Accuracy score did the GSDE model get on the Office-31 dataset
| 91.7 |