| dataset | model_name | paper_title | paper_date | paper_url | code_links | prompts | answer |
|---|---|---|---|---|---|---|---|
USNA-Cn2 (short-duration) | Macro Meteorological | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Macro Meteorological model get on the USNA-Cn2 (short-duration) dataset
| 0.864 |
HIV dataset | CIN++ | CIN++: Enhancing Topological Message Passing | 2023-06-06T00:00:00 | https://arxiv.org/abs/2306.03561v1 | [
"https://github.com/twitter-research/cwn"
] | In the paper 'CIN++: Enhancing Topological Message Passing', what ROC-AUC score did the CIN++ model get on the HIV dataset dataset
| 80.63 |
DAVIS | ViT-B+MST+CL | MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation | 2024-01-09T00:00:00 | https://arxiv.org/abs/2401.04403v2 | [
"https://github.com/hahamyt/mst"
] | In the paper 'MST: Adaptive Multi-Scale Tokens Guided Interactive Segmentation', what NoC@90 score did the ViT-B+MST+CL model get on the DAVIS dataset
| 4.55 |
Ubuntu Dialogue (v1, Ranking) | Dial-MAE | Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04357v5 | [
"https://github.com/suu990901/Dial-MAE"
] | In the paper 'Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems', what R10@1 score did the Dial-MAE model get on the Ubuntu Dialogue (v1, Ranking) dataset
| 0.918 |
AIST++ | RobustCap | Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture | 2023-09-01T00:00:00 | https://arxiv.org/abs/2309.00310v1 | [
"https://github.com/shaohua-pan/RobustCap"
] | In the paper 'Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture', what MPJPE score did the RobustCap model get on the AIST++ dataset
| 33.1 |
SVAMP | SYRELM (Vicuna 13B) | Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning | 2023-12-09T00:00:00 | https://arxiv.org/abs/2312.05571v2 | [
"https://github.com/joykirat18/syrelm"
] | In the paper 'Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning', what Execution Accuracy score did the SYRELM (Vicuna 13B) model get on the SVAMP dataset
| 56.65 |
Tiered ImageNet 5-way (1-shot) | CAML [Laion-2b] | Context-Aware Meta-Learning | 2023-10-17T00:00:00 | https://arxiv.org/abs/2310.10971v2 | [
"https://github.com/cfifty/CAML"
] | In the paper 'Context-Aware Meta-Learning', what Accuracy score did the CAML [Laion-2b] model get on the Tiered ImageNet 5-way (1-shot) dataset
| 96.8 |
OCHuman | BBox-Mask-Pose 2x | Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle | 2024-12-02T00:00:00 | https://arxiv.org/abs/2412.01562v1 | [
"https://github.com/MiraPurkrabek/BBoxMaskPose"
] | In the paper 'Detection, Pose Estimation and Segmentation for Multiple Bodies: Closing the Virtuous Circle', what Test AP score did the BBox-Mask-Pose 2x model get on the OCHuman dataset
| 48.3 |
BC5CDR | GoLLIE | GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03668v5 | [
"https://github.com/hitz-zentroa/gollie"
] | In the paper 'GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction', what F1 score did the GoLLIE model get on the BC5CDR dataset
| 88.4 |
MM-Vet | MG-LLaVA(34B) | MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17770v2 | [
"https://github.com/phoenixz810/mg-llava"
] | In the paper 'MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning', what GPT-4 score score did the MG-LLaVA(34B) model get on the MM-Vet dataset
| 48.5 |
rt-inod-bias | Baseline | Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations | 2024-04-15T00:00:00 | https://arxiv.org/abs/2404.09785v1 | [
"https://github.com/innodatalabs/innodata-llm-safety"
] | In the paper 'Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations', what Best-of score did the Baseline model get on the rt-inod-bias dataset
| 0.41 |
CATH 4.2 | GraphTrans | Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.15151v4 | [
"https://github.com/A4Bio/OpenCPD"
] | In the paper 'Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement', what Sequence Recovery %(All) score did the GraphTrans model get on the CATH 4.2 dataset
| 35.82 |
BKAI-IGH NeoPolyp-Small | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what Average Dice score did the EMCAD model get on the BKAI-IGH NeoPolyp-Small dataset
| 0.9296 |
ActivityNet | DMAE (ViT-B/32) | Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning | 2023-09-20T00:00:00 | https://arxiv.org/abs/2309.11082v3 | [
"https://github.com/alipay/Ant-Multi-Modal-Framework"
] | In the paper 'Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning', what text-to-video R@1 score did the DMAE (ViT-B/32) model get on the ActivityNet dataset
| 53.4 |
ogbn-proteins | GraphSAGE | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test ROC-AUC score did the GraphSAGE model get on the ogbn-proteins dataset
| 0.8221 ± 0.0032 |
ImageNet | MIRL(ViT-S-54) | Masked Image Residual Learning for Scaling Deeper Vision Transformers | 2023-09-25T00:00:00 | https://arxiv.org/abs/2309.14136v3 | [
"https://github.com/russellllaputa/MIRL"
] | In the paper 'Masked Image Residual Learning for Scaling Deeper Vision Transformers', what Top 1 Accuracy score did the MIRL(ViT-S-54) model get on the ImageNet dataset
| 84.8% |
nuScenes | FocalFormer3D-F | FocalFormer3D : Focusing on Hard Instance for 3D Object Detection | 2023-08-08T00:00:00 | https://arxiv.org/abs/2308.04556v1 | [
"https://github.com/NVlabs/FocalFormer3D"
] | In the paper 'FocalFormer3D : Focusing on Hard Instance for 3D Object Detection', what NDS score did the FocalFormer3D-F model get on the nuScenes dataset
| 0.75 |
GQA | LocVLM-L | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | 2024-04-11T00:00:00 | https://arxiv.org/abs/2404.07449v1 | [
"https://github.com/kahnchana/locvlm"
] | In the paper 'Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs', what Accuracy score did the LocVLM-L model get on the GQA dataset
| 50.2 |
Atari 2600 Berzerk | ASL DDQN | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | 2023-05-07T00:00:00 | https://arxiv.org/abs/2305.04180v3 | [
"https://github.com/xinjinghao/color"
] | In the paper 'Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity', what Score score did the ASL DDQN model get on the Atari 2600 Berzerk dataset
| 2597.2 |
Spike-X4K | SwinSF | SwinSF: Image Reconstruction from Spatial-Temporal Spike Streams | 2024-07-22T00:00:00 | https://arxiv.org/abs/2407.15708v2 | [
"https://github.com/bupt-ai-cz/SwinSF"
] | In the paper 'SwinSF: Image Reconstruction from Spatial-Temporal Spike Streams', what Average PSNR score did the SwinSF model get on the Spike-X4K dataset
| 39.61 |
Adience Gender | MiVOLO-V2 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | 2024-03-04T00:00:00 | https://arxiv.org/abs/2403.02302v3 | [
"https://github.com/wildchlamydia/mivolo"
] | In the paper 'Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation', what Accuracy (5-fold) score did the MiVOLO-V2 model get on the Adience Gender dataset
| 97.39 |
ZINC | NeuralWalker | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | In the paper 'Learning Long Range Dependencies on Graphs via Random Walks', what MAE score did the NeuralWalker model get on the ZINC dataset
| 0.065 ± 0.001 |
PASCAL Context-59 | TTD (MaskCLIP) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (MaskCLIP) model get on the PASCAL Context-59 dataset
| 31.0 |
FP-T-M | GeoTransformer | GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer | 2023-07-25T00:00:00 | https://arxiv.org/abs/2308.03768v1 | [
"https://github.com/qinzheng93/geotransformer"
] | In the paper 'GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer', what Recall (3cm, 10 degrees) score did the GeoTransformer model get on the FP-T-M dataset
| 64.29 |
SWiG | ClipSitu | ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition | 2023-07-02T00:00:00 | https://arxiv.org/abs/2307.00586v3 | [
"https://github.com/LUNAProject22/CLIPSitu"
] | In the paper 'ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition', what Top-1 Verb score did the ClipSitu model get on the SWiG dataset
| 58.19 |
MSMT17 | PCL-CLIP (L_pcl+L_id) | Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | 2023-10-26T00:00:00 | https://arxiv.org/abs/2310.17218v1 | [
"https://github.com/RikoLi/PCL-CLIP"
] | In the paper 'Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification', what Rank-1 score did the PCL-CLIP (L_pcl+L_id) model get on the MSMT17 dataset
| 89.8 |
CARLA | DriveAdapter | DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving | 2023-08-01T00:00:00 | https://arxiv.org/abs/2308.00398v2 | [
"https://github.com/opendrivelab/driveadapter"
] | In the paper 'DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving', what Driving Score score did the DriveAdapter model get on the CARLA dataset
| 59 |
miniF2F-test | LEGO-Prover ChatGPT | LEGO-Prover: Neural Theorem Proving with Growing Libraries | 2023-10-01T00:00:00 | https://arxiv.org/abs/2310.00656v3 | [
"https://github.com/wiio12/LEGO-Prover"
] | In the paper 'LEGO-Prover: Neural Theorem Proving with Growing Libraries', what Pass@100 score did the LEGO-Prover ChatGPT model get on the miniF2F-test dataset
| 47.1 |
Amazon Games | CARCA-Rotatory + Con. | Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation | 2024-05-16T00:00:00 | https://arxiv.org/abs/2405.10436v1 | [
"https://github.com/researcher1741/position_encoding_srs"
] | In the paper 'Positional encoding is not the same as context: A study on positional encoding for Sequential recommendation', what Hit@10 score did the CARCA-Rotatory + Con. model get on the Amazon Games dataset
| 0.8062 |
MTL-AQA | RICA^2 | RICA2: Rubric-Informed, Calibrated Assessment of Actions | 2024-08-04T00:00:00 | https://arxiv.org/abs/2408.02138v2 | [
"https://github.com/abrarmajeedi/rica2_aqa"
] | In the paper 'RICA2: Rubric-Informed, Calibrated Assessment of Actions', what Spearman Correlation score did the RICA^2 model get on the MTL-AQA dataset
| 95.94 |
CoLA | LM-CPPF RoBERTa-base | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | 2023-05-29T00:00:00 | https://arxiv.org/abs/2305.18169v3 | [
"https://github.com/amirabaskohi/lm-cppf"
] | In the paper 'LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning', what Accuracy score did the LM-CPPF RoBERTa-base model get on the CoLA dataset
| 14.1% |
Food-101 | PromptKD | PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02781v5 | [
"https://github.com/zhengli97/promptkd"
] | In the paper 'PromptKD: Unsupervised Prompt Distillation for Vision-Language Models', what Harmonic mean score did the PromptKD model get on the Food-101 dataset
| 93.05 |
Winoground | COCA ViT-L14 (f.t on COCO) | What You See is What You Read? Improving Text-Image Alignment Evaluation | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10400v4 | [
"https://github.com/yonatanbitton/wysiwyr"
] | In the paper 'What You See is What You Read? Improving Text-Image Alignment Evaluation', what Text Score score did the COCA ViT-L14 (f.t on COCO) model get on the Winoground dataset
| 28.25 |
MOSE | Cutie+ (base, MEGA) | Putting the Object Back into Video Object Segmentation | 2023-10-19T00:00:00 | https://arxiv.org/abs/2310.12982v2 | [
"https://github.com/hkchengrex/Cutie"
] | In the paper 'Putting the Object Back into Video Object Segmentation', what J&F score did the Cutie+ (base, MEGA) model get on the MOSE dataset
| 71.7 |
The Pile | Gemma-2 9B | Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs | 2024-10-10T00:00:00 | https://arxiv.org/abs/2410.08020v2 | [
"https://github.com/jonhue/activeft"
] | In the paper 'Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs', what Bits per byte score did the Gemma-2 9B model get on the The Pile dataset
| 0.670 |
VideoCube | RTracker-L | RTracker: Recoverable Tracking via PN Tree Structured Memory | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19242v1 | [
"https://github.com/norahgreen/rtracker"
] | In the paper 'RTracker: Recoverable Tracking via PN Tree Structured Memory', what Precision score did the RTracker-L model get on the VideoCube dataset
| 63.2 |
MovieLens | TF4CTR | TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation | 2024-05-06T00:00:00 | https://arxiv.org/abs/2405.03167v2 | [
"https://github.com/salmon1802/tf4ctr"
] | In the paper 'TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation', what AUC score did the TF4CTR model get on the MovieLens dataset
| 0.9746 |
CVC-ClinicDB | EMCAD | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | In the paper 'EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation', what mean Dice score did the EMCAD model get on the CVC-ClinicDB dataset
| 0.9521 |
Set14 - 4x upscaling | CFSR | Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach | 2024-01-11T00:00:00 | https://arxiv.org/abs/2401.05633v2 | [
"https://github.com/aitical/cfsr"
] | In the paper 'Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach', what PSNR score did the CFSR model get on the Set14 - 4x upscaling dataset
| 28.73 |
shape bias | Stable Diffusion | Intriguing properties of generative classifiers | 2023-09-28T00:00:00 | https://arxiv.org/abs/2309.16779v2 | [
"https://github.com/SamsungSAILMontreal/ForestDiffusion"
] | In the paper 'Intriguing properties of generative classifiers', what shape bias score did the Stable Diffusion model get on the shape bias dataset
| 92.7 |
ColonINST-v1 (Seen) | MobileVLM-1.7B (w/o LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 2023-12-28T00:00:00 | https://arxiv.org/abs/2312.16886v2 | [
"https://github.com/meituan-automl/mobilevlm"
] | In the paper 'MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices', what Accuracy score did the MobileVLM-1.7B (w/o LoRA, w/ extra data) model get on the ColonINST-v1 (Seen) dataset
| 93.02 |
USNA-Cn2 (short-duration) | Minute Climatology | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the Minute Climatology model get on the USNA-Cn2 (short-duration) dataset
| 0.453 |
Vid4 - 4x upscaling | CFD-PSRT | Collaborative Feedback Discriminative Propagation for Video Super-Resolution | 2024-04-06T00:00:00 | https://arxiv.org/abs/2404.04745v1 | [
"https://github.com/house-leo/cfdvsr"
] | In the paper 'Collaborative Feedback Discriminative Propagation for Video Super-Resolution', what PSNR score did the CFD-PSRT model get on the Vid4 - 4x upscaling dataset
| 28.18 |
FSS-1000 (5-shot) | GF-SAM | Bridge the Points: Graph-based Few-shot Segment Anything Semantically | 2024-10-09T00:00:00 | https://arxiv.org/abs/2410.06964v2 | [
"https://github.com/ANDYZAQ/GF-SAM"
] | In the paper 'Bridge the Points: Graph-based Few-shot Segment Anything Semantically', what Mean IoU score did the GF-SAM model get on the FSS-1000 (5-shot) dataset
| 88.9 |
Charades-STA | BAM-DETR | BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos | 2023-11-30T00:00:00 | https://arxiv.org/abs/2312.00083v2 | [
"https://github.com/Pilhyeon/BAM-DETR"
] | In the paper 'BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos', what R@1 IoU=0.5 score did the BAM-DETR model get on the Charades-STA dataset
| 59.95 |
FSC147 | GeCo | A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation | 2024-09-27T00:00:00 | https://arxiv.org/abs/2409.18686v2 | [
"https://github.com/jerpelhan/GeCo"
] | In the paper 'A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation', what MAE(val) score did the GeCo model get on the FSC147 dataset
| 9.52 |
RefCOCO+ test B | Florence-2-large-ft | Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks | 2023-11-10T00:00:00 | https://arxiv.org/abs/2311.06242v1 | [
"https://github.com/retkowsky/florence-2"
] | In the paper 'Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks', what Accuracy (%) score did the Florence-2-large-ft model get on the RefCOCO+ test B dataset
| 92.0 |
MM-Vet | FlashSloth | FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04317v1 | [
"https://github.com/codefanw/flashsloth"
] | In the paper 'FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression', what GPT-4 score score did the FlashSloth model get on the MM-Vet dataset
| 41.9 |
ETTm1 (192) Multivariate | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the ETTm1 (192) Multivariate dataset
| 0.324 |
ETTh2 (720) Multivariate | DiPE-Linear | Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting | 2024-11-26T00:00:00 | https://arxiv.org/abs/2411.17257v1 | [
"https://github.com/wintertee/dipe-linear"
] | In the paper 'Disentangled Interpretable Representation for Efficient Long-term Time Series Forecasting', what MSE score did the DiPE-Linear model get on the ETTh2 (720) Multivariate dataset
| 0.375 |
ImageNet 256x256 | DiGIT | Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | 2024-10-16T00:00:00 | https://arxiv.org/abs/2410.12490v2 | [
"https://github.com/DAMO-NLP-SG/DiGIT"
] | In the paper 'Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective', what FID score did the DiGIT model get on the ImageNet 256x256 dataset
| 3.39 |
EQ-Bench | Intel/neural-chat-7b-v3-1 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | 2023-12-11T00:00:00 | https://arxiv.org/abs/2312.06281v2 | [
"https://github.com/eq-bench/eq-bench"
] | In the paper 'EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models', what EQ-Bench Score score did the Intel/neural-chat-7b-v3-1 model get on the EQ-Bench dataset
| 43.61 |
Bongard-OpenWorld | ChatCaptioner + ChatGPT | Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | 2023-10-16T00:00:00 | https://arxiv.org/abs/2310.10207v5 | [
"https://github.com/joyjayng/Bongard-OpenWorld"
] | In the paper 'Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World', what 2-Class Accuracy score did the ChatCaptioner + ChatGPT model get on the Bongard-OpenWorld dataset
| 49.3 |
MM-Vet | SEAL (7B) | V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs | 2023-12-21T00:00:00 | https://arxiv.org/abs/2312.14135v2 | [
"https://github.com/penghao-wu/vstar"
] | In the paper 'V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs', what GPT-4 score score did the SEAL (7B) model get on the MM-Vet dataset
| 27.7 |
COCO-20i -> Pascal VOC (5-shot) | MSDNet (ResNet-50) | MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping | 2024-09-17T00:00:00 | https://arxiv.org/abs/2409.11316v1 | [
"https://github.com/amirrezafateh/msdnet"
] | In the paper 'MSDNet: Multi-Scale Decoder for Few-Shot Semantic Segmentation via Transformer-Guided Prototyping', what Mean IoU score did the MSDNet (ResNet-50) model get on the COCO-20i -> Pascal VOC (5-shot) dataset
| 74.2 |
STPLS3D | OPENINS3D | OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | 2023-09-01T00:00:00 | https://arxiv.org/abs/2309.00616v5 | [
"https://github.com/Pointcept/OpenIns3D"
] | In the paper 'OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation', what AP50 score did the OPENINS3D model get on the STPLS3D dataset
| 13.3 |
MM-Vet | SPHINX-2k | SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models | 2023-11-13T00:00:00 | https://arxiv.org/abs/2311.07575v1 | [
"https://github.com/alpha-vllm/llama2-accessory"
] | In the paper 'SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models', what GPT-4 score score did the SPHINX-2k model get on the MM-Vet dataset
| 40.2 |
ISPRS Potsdam | AerialFormer-B | AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation | 2023-06-12T00:00:00 | https://arxiv.org/abs/2306.06842v2 | [
"https://github.com/UARK-AICV/AerialFormer"
] | In the paper 'AerialFormer: Multi-resolution Transformer for Aerial Image Segmentation', what Overall Accuracy score did the AerialFormer-B model get on the ISPRS Potsdam dataset
| 93.9 |
SICK | PromptEOL+CSE+LLaMA-30B | Scaling Sentence Embeddings with Large Language Models | 2023-07-31T00:00:00 | https://arxiv.org/abs/2307.16645v1 | [
"https://github.com/kongds/scaling_sentemb"
] | In the paper 'Scaling Sentence Embeddings with Large Language Models', what Spearman Correlation score did the PromptEOL+CSE+LLaMA-30B model get on the SICK dataset
| 0.8238 |
BIG-bench (Logic Grid Puzzle) | PaLM-62B (few-shot, k=5) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM-62B (few-shot, k=5) model get on the BIG-bench (Logic Grid Puzzle) dataset
| 36.5 |
VoxCeleb | ReDimNet-B2-SF2-LM-ASNorm (4.7M) | Reshape Dimensions Network for Speaker Recognition | 2024-07-25T00:00:00 | https://arxiv.org/abs/2407.18223v2 | [
"https://github.com/IDRnD/ReDimNet"
] | In the paper 'Reshape Dimensions Network for Speaker Recognition', what EER score did the ReDimNet-B2-SF2-LM-ASNorm (4.7M) model get on the VoxCeleb dataset
| 0.52 |
DUT-OMRON | BiRefNet (DUTS) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what MAE score did the BiRefNet (DUTS) model get on the DUT-OMRON dataset
| 0.040 |
UMVM-dbp-ja-en | UMAEA (w/o surf & iter ) | Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment | 2023-07-30T00:00:00 | https://arxiv.org/abs/2307.16210v2 | [
"https://github.com/zjukg/umaea"
] | In the paper 'Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment', what Hits@1 score did the UMAEA (w/o surf & iter ) model get on the UMVM-dbp-ja-en dataset
| 0.801 |
Oxford-IIIT Pet Dataset | ProMetaR | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | [
"https://github.com/mlvlab/prometar"
] | In the paper 'Prompt Learning via Meta-Regularization', what Harmonic mean score did the ProMetaR model get on the Oxford-IIIT Pet Dataset dataset
| 96.49 |
QNLI | Prompt2Model (T5-base) | Prompt2Model: Generating Deployable Models from Natural Language Instructions | 2023-08-23T00:00:00 | https://arxiv.org/abs/2308.12261v1 | [
"https://github.com/neulab/prompt2model"
] | In the paper 'Prompt2Model: Generating Deployable Models from Natural Language Instructions', what Accuracy score did the Prompt2Model (T5-base) model get on the QNLI dataset
| 62.2 |
CBVS | UniCLIP | CBVS: A Large-Scale Chinese Image-Text Benchmark for Real-World Short Video Search Scenarios | 2024-01-19T00:00:00 | https://arxiv.org/abs/2401.10475v2 | [
"https://github.com/QQBrowserVideoSearch/CBVS-UniCLIP"
] | In the paper 'CBVS: A Large-Scale Chinese Image-Text Benchmark for Real-World Short Video Search Scenarios', what Recall@1 score did the UniCLIP model get on the CBVS dataset
| 0.503 |
METR-LA | DCGCN | Dynamic Causal Graph Convolutional Network for Traffic Prediction | 2023-06-12T00:00:00 | https://arxiv.org/abs/2306.07019v2 | [
"https://github.com/MonBG/DCGCN"
] | In the paper 'Dynamic Causal Graph Convolutional Network for Traffic Prediction', what MAE @ 12 step score did the DCGCN model get on the METR-LA dataset
| 3.48 |
ImageNet | WTTM (T: DeiT III-Small S:DeiT-Tiny) | Knowledge Distillation Based on Transformed Teacher Matching | 2024-02-17T00:00:00 | https://arxiv.org/abs/2402.11148v2 | [
"https://github.com/zkxufo/TTM"
] | In the paper 'Knowledge Distillation Based on Transformed Teacher Matching', what Top-1 accuracy % score did the WTTM (T: DeiT III-Small S:DeiT-Tiny) model get on the ImageNet dataset
| 77.03 |
Pittsburgh-30k-test | BoQ (ResNet-50) | BoQ: A Place is Worth a Bag of Learnable Queries | 2024-05-12T00:00:00 | https://arxiv.org/abs/2405.07364v3 | [
"https://github.com/amaralibey/bag-of-queries"
] | In the paper 'BoQ: A Place is Worth a Bag of Learnable Queries', what Recall@1 score did the BoQ (ResNet-50) model get on the Pittsburgh-30k-test dataset
| 92.4 |
Pittsburgh-250k-test | DINOv2 SALAD | Optimal Transport Aggregation for Visual Place Recognition | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.15937v2 | [
"https://github.com/serizba/salad"
] | In the paper 'Optimal Transport Aggregation for Visual Place Recognition', what Recall@1 score did the DINOv2 SALAD model get on the Pittsburgh-250k-test dataset
| 95.1 |
Refer-YouTube-VOS (2021 public validation) | HTR (Pre-training) | Temporally Consistent Referring Video Object Segmentation with Hybrid Memory | 2024-03-28T00:00:00 | https://arxiv.org/abs/2403.19407v2 | [
"https://github.com/bo-miao/HTR"
] | In the paper 'Temporally Consistent Referring Video Object Segmentation with Hybrid Memory', what J&F score did the HTR (Pre-training) model get on the Refer-YouTube-VOS (2021 public validation) dataset
| 67.1 |
Diabetes | Binary Diffusion | Tabular Data Generation using Binary Diffusion | 2024-09-20T00:00:00 | https://arxiv.org/abs/2409.13882v2 | [
"https://github.com/vkinakh/binary-diffusion-tabular"
] | In the paper 'Tabular Data Generation using Binary Diffusion', what LR Accuracy score did the Binary Diffusion model get on the Diabetes dataset
| 0.5775 |
V-COCO | DiffHOI | Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model | 2023-05-20T00:00:00 | https://arxiv.org/abs/2305.12252v1 | [
"https://github.com/IDEA-Research/DiffHOI"
] | In the paper 'Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model', what AP(S1) score did the DiffHOI model get on the V-COCO dataset
| 65.7 |
PIQA | PaLM 2-M (1-shot) | PaLM 2 Technical Report | 2023-05-17T00:00:00 | https://arxiv.org/abs/2305.10403v3 | [
"https://github.com/eternityyw/tram-benchmark"
] | In the paper 'PaLM 2 Technical Report', what Accuracy score did the PaLM 2-M (1-shot) model get on the PIQA dataset
| 83.2 |
UHRSD | BiRefNet (DUTS, HRSOD) | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03407v6 | [
"https://github.com/zhengpeng7/birefnet"
] | In the paper 'Bilateral Reference for High-Resolution Dichotomous Image Segmentation', what S-Measure score did the BiRefNet (DUTS, HRSOD) model get on the UHRSD dataset
| 0.937 |
Winoground | LLaVA-1.5-CCoT | Compositional Chain-of-Thought Prompting for Large Multimodal Models | 2023-11-27T00:00:00 | https://arxiv.org/abs/2311.17076v3 | [
"https://github.com/chancharikmitra/ccot"
] | In the paper 'Compositional Chain-of-Thought Prompting for Large Multimodal Models', what Text Score score did the LLaVA-1.5-CCoT model get on the Winoground dataset
| 42.0 |
CC3M-TagMask | TTD (MaskCLIP) | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | 2024-03-30T00:00:00 | https://arxiv.org/abs/2404.00384v2 | [
"https://github.com/shjo-april/TTD"
] | In the paper 'TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias', what mIoU score did the TTD (MaskCLIP) model get on the CC3M-TagMask dataset
| 50.2 |
arXiv Summarization Dataset | Claude Instant + SigExt | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02741v2 | [
"https://github.com/amazon-science/SigExt"
] | In the paper 'Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization', what ROUGE-1 score did the Claude Instant + SigExt model get on the arXiv Summarization Dataset dataset
| 45.2 |
Action-Camera Parking | CFEN | Revising deep learning methods in parking lot occupancy detection | 2023-06-07T00:00:00 | https://arxiv.org/abs/2306.04288v3 | [
"https://github.com/eighonet/parking-research"
] | In the paper 'Revising deep learning methods in parking lot occupancy detection', what F1-score score did the CFEN model get on the Action-Camera Parking dataset
| 0.8302 |
RealBlur-J | ID-Blau (FFTformer) | ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation | 2023-12-18T00:00:00 | https://arxiv.org/abs/2312.10998v2 | [
"https://github.com/plusgood-steven/id-blau"
] | In the paper 'ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation', what SSIM (sRGB) score did the ID-Blau (FFTformer) model get on the RealBlur-J dataset
| 0.934 |
amazon-ratings | GAT | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Accuracy (%) score did the GAT model get on the amazon-ratings dataset
| 55.54 ± 0.51 |
WSRD+ | ShadowRefiner | ShadowRefiner: Towards Mask-free Shadow Removal via Fast Fourier Transformer | 2024-04-18T00:00:00 | https://arxiv.org/abs/2406.02559v2 | [
"https://github.com/movingforward100/shadow_r"
] | In the paper 'ShadowRefiner: Towards Mask-free Shadow Removal via Fast Fourier Transformer', what PSNR score did the ShadowRefiner model get on the WSRD+ dataset
| 26.04 |
GSM8K | MathCoder-L-7B | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | 2023-10-05T00:00:00 | https://arxiv.org/abs/2310.03731v1 | [
"https://github.com/mathllm/mathcoder"
] | In the paper 'MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning', what Accuracy score did the MathCoder-L-7B model get on the GSM8K dataset
| 64.2 |
EuroSAT | ZLaP | Label Propagation for Zero-shot Classification with Vision-Language Models | 2024-04-05T00:00:00 | https://arxiv.org/abs/2404.04072v1 | [
"https://github.com/vladan-stojnic/zlap"
] | In the paper 'Label Propagation for Zero-shot Classification with Vision-Language Models', what Accuracy score did the ZLaP model get on the EuroSAT dataset
| 63.2 |
Traffic (336) | PRformer | PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting | 2024-08-20T00:00:00 | https://arxiv.org/abs/2408.10483v1 | [
"https://github.com/usualheart/prformer"
] | In the paper 'PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting', what MSE score did the PRformer model get on the Traffic (336) dataset
| 0.385 |
SVAMP | GPT-4 DUP | Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | 2024-04-23T00:00:00 | https://arxiv.org/abs/2404.14963v4 | [
"https://github.com/whu-zqh/dup"
] | In the paper 'Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems', what Accuracy score did the GPT-4 DUP model get on the SVAMP dataset
| 94.2 |
VATEX | COSA | COSA: Concatenated Sample Pretrained Vision-Language Foundation Model | 2023-06-15T00:00:00 | https://arxiv.org/abs/2306.09085v1 | [
"https://github.com/txh-mercury/cosa"
] | In the paper 'COSA: Concatenated Sample Pretrained Vision-Language Foundation Model', what BLEU-4 score did the COSA model get on the VATEX dataset
| 43.7 |
ogbn-proteins | GAT | Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | 2024-06-13T00:00:00 | https://arxiv.org/abs/2406.08993v2 | [
"https://github.com/LUOyk1999/tunedGNN"
] | In the paper 'Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification', what Test ROC-AUC score did the GAT model get on the ogbn-proteins dataset
| 0.8501 ± 0.0046 |
SMAC 3s5z_vs_3s6z | DPLEX | A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning | 2023-06-04T00:00:00 | https://arxiv.org/abs/2306.02430v1 | [
"https://github.com/j3soon/dfac-extended"
] | In the paper 'A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning', what Median Win Rate score did the DPLEX model get on the SMAC 3s5z_vs_3s6z dataset
| 90.62 |
MLO-Cn2 | RNN | Effective Benchmarks for Optical Turbulence Modeling | 2024-01-07T00:00:00 | https://arxiv.org/abs/2401.03573v1 | [
"https://github.com/cdjellen/otbench"
] | In the paper 'Effective Benchmarks for Optical Turbulence Modeling', what RMSE score did the RNN model get on the MLO-Cn2 dataset
| 0.336 |
Oxford 102 Flowers | RAT-Diffusion | Data Extrapolation for Text-to-image Generation on Small Datasets | 2024-10-02T00:00:00 | https://arxiv.org/abs/2410.01638v1 | [
"https://github.com/senmaoy/RAT-Diffusion"
] | In the paper 'Data Extrapolation for Text-to-image Generation on Small Datasets', what FID score did the RAT-Diffusion model get on the Oxford 102 Flowers dataset
| 9.52 |
NYU Depth v2 | DFormer-L | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | 2023-09-18T00:00:00 | https://arxiv.org/abs/2309.09668v2 | [
"https://github.com/VCIP-RGBD/DFormer"
] | In the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation', what Mean IoU score did the DFormer-L model get on the NYU Depth v2 dataset
| 57.2% |
DESED | MAT-SED | MAT-SED: A Masked Audio Transformer with Masked-Reconstruction Based Pre-training for Sound Event Detection | 2024-08-16T00:00:00 | https://arxiv.org/abs/2408.08673v2 | [
"https://github.com/cai525/transformer4sed"
] | In the paper 'MAT-SED: A Masked Audio Transformer with Masked-Reconstruction Based Pre-training for Sound Event Detection', what PSDS1 score did the MAT-SED model get on the DESED dataset
| 0.587 |
Bukva | MobileNetV2_TSM | Bukva: Russian Sign Language Alphabet | 2024-10-11T00:00:00 | https://arxiv.org/abs/2410.08675v1 | [
"https://github.com/ai-forever/bukva"
] | In the paper 'Bukva: Russian Sign Language Alphabet', what Accuracy (Top-1) score did the MobileNetV2_TSM model get on the Bukva dataset
| 83.6 |
SPOT-10 | ResNet50 Distiller | SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms | 2024-10-28T00:00:00 | https://arxiv.org/abs/2410.21044v1 | [
"https://github.com/amotica/spots-10"
] | In the paper 'SPOTS-10: Animal Pattern Benchmark Dataset for Machine Learning Algorithms', what Accuracy score did the ResNet50 Distiller model get on the SPOT-10 dataset
| 77.45 |
LIVE | UNIQA | You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment | 2023-10-14T00:00:00 | https://arxiv.org/abs/2310.09560v2 | [
"https://github.com/barcodereader/yoto"
] | In the paper 'You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment', what SRCC score did the UNIQA model get on the LIVE dataset
| 0.986 |
PASCAL VOC | GMT-BBGM | GMTR: Graph Matching Transformers | 2023-11-14T00:00:00 | https://arxiv.org/abs/2311.08141v2 | [
"https://github.com/jp-guo/gm-transformer"
] | In the paper 'GMTR: Graph Matching Transformers', what matching accuracy score did the GMT-BBGM model get on the PASCAL VOC dataset
| 0.8411 |
NTU RGB+D | SkateFormer | SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition | 2024-03-14T00:00:00 | https://arxiv.org/abs/2403.09508v3 | [
"https://github.com/KAIST-VICLab/SkateFormer"
] | In the paper 'SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition', what Accuracy (Cross-Subject) score did the SkateFormer model get on the NTU RGB+D dataset
| 97.1 |
CIFAR-10-LT (ρ=100) | SURE(ResNet-32) | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01T00:00:00 | https://arxiv.org/abs/2403.00543v1 | [
"https://github.com/YutingLi0606/SURE"
] | In the paper 'SURE: SUrvey REcipes for building reliable and robust deep networks', what Error Rate score did the SURE(ResNet-32) model get on the CIFAR-10-LT (ρ=100) dataset
| 13.07 |
ICFG-PEDES | APTM | Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark | 2023-06-05T00:00:00 | https://arxiv.org/abs/2306.02898v4 | [
"https://github.com/Shuyu-XJTU/APTM"
] | In the paper 'Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark', what mAP score did the APTM model get on the ICFG-PEDES dataset
| 41.22 |