Columns: `prompts` (string, 81–413 characters) and `metrics_response` (string, 0–371 characters).

| prompts | metrics_response |
|---|---|
What metrics were used to measure the CALIP model in the CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the PointCLIP model in the PointCLIP: Point Cloud Understanding by CLIP paper on the ModelNet40 dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the Point-NN model in the Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis paper on the ScanObjectNN dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the PointCLIP V2 model in the PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning paper on the ScanObjectNN dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the CLIP2Point model in the CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training paper on the ScanObjectNN dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the CALIP model in the CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention paper on the ScanObjectNN dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the PointCLIP model in the PointCLIP: Point Cloud Understanding by CLIP paper on the ScanObjectNN dataset? | Accuracy (%), Parameters, Need 3D Data? |
What metrics were used to measure the Point-NN model in the Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis paper on the ShapeNet-Part dataset? | mIoU, Need 3D Data?, Parameters |
What metrics were used to measure the PointCLIP V2 model in the PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning paper on the ShapeNet-Part dataset? | mIoU, Need 3D Data?, Parameters |
What metrics were used to measure the PointCLIP model in the PointCLIP: Point Cloud Understanding by CLIP paper on the ShapeNet-Part dataset? | mIoU, Need 3D Data?, Parameters |
What metrics were used to measure the Vid2Seq model in the Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the ADV-INF + Global model in the Global Object Proposals for Improving Multi-Sentence Video Descriptions paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the Bi-directional+intra captioning model in the Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the GVL model in the Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the TSRM-CMG-HRNN+SCST model in the Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020 paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the PDVC (TSP features, no SCST) model in the End-to-End Dense Video Captioning with Parallel Decoding paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the TSP model in the TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the BMT model in the A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the iPerceive (Chadha et al., 2020) model in the iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the MDVC model in the Multi-modal Dense Video Captioning paper on the ActivityNet Captions dataset? | METEOR, BLEU-3, BLEU-4, CIDEr, SODA, DIV-1, DIV-2, RE-4 |
What metrics were used to measure the Vid2Seq model in the VidChapters-7M: Video Chapters at Scale paper on the VidChapters-7M dataset? | CIDEr |
What metrics were used to measure the Vid2Seq (HowTo100M+VidChapters-7M PT) model in the VidChapters-7M: Video Chapters at Scale paper on the YouCook2 dataset? | CIDEr, METEOR, SODA, BLEU4, ROUGE-L |
What metrics were used to measure the Vid2Seq model in the Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning paper on the YouCook2 dataset? | CIDEr, METEOR, SODA, BLEU4, ROUGE-L |
What metrics were used to measure the GVL model in the Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos paper on the YouCook2 dataset? | CIDEr, METEOR, SODA, BLEU4, ROUGE-L |
What metrics were used to measure the PDVC (TSN features, no SCST) model in the End-to-End Dense Video Captioning with Parallel Decoding paper on the YouCook2 dataset? | CIDEr, METEOR, SODA, BLEU4, ROUGE-L |
What metrics were used to measure the E2vidD6-MASSalign-BiD model in the Multimodal Pretraining for Dense Video Captioning paper on the YouCook2 dataset? | CIDEr, METEOR, SODA, BLEU4, ROUGE-L |
What metrics were used to measure the Vid2Seq (VidChapters-7M PT) model in the VidChapters-7M: Video Chapters at Scale paper on the ViTT dataset? | SODA, CIDEr, METEOR |
What metrics were used to measure the Vid2Seq model in the Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning paper on the ViTT dataset? | SODA, CIDEr, METEOR |
What metrics were used to measure the E2ESG model in the End-to-end Dense Video Captioning as Sequence Generation paper on the ViTT dataset? | SODA, CIDEr, METEOR |
What metrics were used to measure the InstrucTE (zero-shot) model in the Schema-Driven Information Extraction from Heterogeneous Tables paper on the SWDE dataset? | Avg F1 |
What metrics were used to measure the DOM-LM model in the DOM-LM: Learning Generalizable Representations for HTML Documents paper on the SWDE dataset? | Avg F1 |
What metrics were used to measure the Afformer model in the Affordance Grounding from Demonstration Video to Target Image paper on the OPRA (28x28) dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the Demo2Vec model in the Demo2Vec: Reasoning Object Affordances From Online Videos paper on the OPRA (28x28) dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the HAG-Net (+Hand Box) model in the Learning Visual Affordance Grounding from Demonstration Videos paper on the OPRA (28x28) dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the Hotspot model in the Grounded Human-Object Interaction Hotspots from Video paper on the OPRA (28x28) dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the Afformer (ViTDet-B encoder) model in the Affordance Grounding from Demonstration Video to Target Image paper on the OPRA dataset? | KLD, Top-1 Action Accuracy |
What metrics were used to measure the Afformer (ResNet-50-FPN encoder) model in the Affordance Grounding from Demonstration Video to Target Image paper on the OPRA dataset? | KLD, Top-1 Action Accuracy |
What metrics were used to measure the Demo2Vec model in the Demo2Vec: Reasoning Object Affordances From Online Videos paper on the OPRA dataset? | KLD, Top-1 Action Accuracy |
What metrics were used to measure the Afformer model in the Affordance Grounding from Demonstration Video to Target Image paper on the EPIC-Hotspot dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the HAG-Net (+Hand Box) model in the Learning Visual Affordance Grounding from Demonstration Videos paper on the EPIC-Hotspot dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the Hotspot model in the Grounded Human-Object Interaction Hotspots from Video paper on the EPIC-Hotspot dataset? | KLD, SIM, AUC-J |
What metrics were used to measure the NeuralPCI model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the NL-Drive dataset? | CD, EMD |
What metrics were used to measure the PointINet model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the NL-Drive dataset? | CD, EMD |
What metrics were used to measure the PV-RAFT model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the NL-Drive dataset? | CD, EMD |
What metrics were used to measure the NSFP model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the NL-Drive dataset? | CD, EMD |
What metrics were used to measure the NeuralPCI model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the DHB Dataset dataset? | CD, EMD |
What metrics were used to measure the PV-RAFT model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the DHB Dataset dataset? | CD, EMD |
What metrics were used to measure the PointINet model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the DHB Dataset dataset? | CD, EMD |
What metrics were used to measure the IDEA-Net model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the DHB Dataset dataset? | CD, EMD |
What metrics were used to measure the NSFP model in the NeuralPCI: Spatio-temporal Neural Field for 3D Point Cloud Multi-frame Non-linear Interpolation paper on the DHB Dataset dataset? | CD, EMD |
What metrics were used to measure the ProlificDreamer model in the ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the Magic3D model in the Magic3D: High-Resolution Text-to-3D Content Creation paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the LatentNeRF model in the Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the Fantasia3D model in the Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the DreamFusion model in the DreamFusion: Text-to-3D using 2D Diffusion paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the SJC model in the Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation paper on the T$^3$Bench dataset? | Avg |
What metrics were used to measure the Augstatic model in the AugStatic - A Light-Weight Image Augmentation Library paper on the Intel Image Classification dataset? | Balanced Accuracy |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the vReLoc (Seq-05) dataset? | Median Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the Oxford Radar RobotCar (Full-9) dataset? | Mean Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the vReLoc (Seq-06) dataset? | Median Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the Oxford Radar RobotCar (Full-7) dataset? | Mean Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the Oxford Radar RobotCar (Full-8) dataset? | Mean Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the vReLoc (Seq-07) dataset? | Median Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the vReLoc (Seq-14) dataset? | Median Translation/Rotation Error (m/degree) |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the Oxford Radar RobotCar (Full-6) dataset? | Mean Translation/Rotation Error (m/degree) |
What metrics were used to measure the Extract-Classify-ACOS model in the Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions paper on the Restaurant-ACOS dataset? | F1 |
What metrics were used to measure the BART-ABSA model in the Aspect-Category-Opinion-Sentiment Extraction Using Generative Transformer Model paper on the Restaurant-ACOS dataset? | F1 |
What metrics were used to measure the Extract-Classify-ACOS model in the Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions paper on the Laptop-ACOS dataset? | F1 |
What metrics were used to measure the BART-ABSA model in the Aspect-Category-Opinion-Sentiment Extraction Using Generative Transformer Model paper on the Laptop-ACOS dataset? | F1 |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the LSTM model in the A speech corpus of Quechua Collao for automatic dimensional emotion recognition paper on the Quechua-SER dataset? | CCC (Arousal), CCC (Valence) |
What metrics were used to measure the Dusha baseline model in the Large Raw Emotional Dataset with Aggregation Mechanism paper on the Dusha Podcast dataset? | Macro F1, UA, WA |
What metrics were used to measure the VQ-MAE-S-12 (Frame) + Query2Emo model in the A vector quantized masked autoencoder for speech emotion recognition paper on the EmoDB Dataset dataset? | Accuracy, F1 |
What metrics were used to measure the w2v2-L-robust-12 model in the Dawn of the transformer era in speech emotion recognition: closing the valence gap paper on the MSP-Podcast (Valence) dataset? | CCC |
What metrics were used to measure the preCPC model in the Contrastive Unsupervised Learning for Speech Emotion Recognition paper on the MSP-Podcast (Valence) dataset? | CCC |
What metrics were used to measure the VQ-MAE-S-12 (Frame) + Query2Emo model in the A vector quantized masked autoencoder for speech emotion recognition paper on the RAVDESS dataset? | Accuracy, F1 Score, Precision, Recall, F1 |
What metrics were used to measure the CNN-X (Shallow CNN) model in the Shallow over Deep Neural Networks: A empirical analysis for human emotion classification using audio data paper on the RAVDESS dataset? | Accuracy, F1 Score, Precision, Recall, F1 |
What metrics were used to measure the xlsr-Wav2Vec2.0(FineTuning) model in the A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset paper on the RAVDESS dataset? | Accuracy, F1 Score, Precision, Recall, F1 |
What metrics were used to measure the CNN-14 (Fine-Tuning) model in the Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning paper on the RAVDESS dataset? | Accuracy, F1 Score, Precision, Recall, F1 |
What metrics were used to measure the AlexNet (FineTuning) model in the Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning paper on the RAVDESS dataset? | Accuracy, F1 Score, Precision, Recall, F1 |
What metrics were used to measure the w2v2-L-robust-12 model in the Dawn of the transformer era in speech emotion recognition: closing the valence gap paper on the MSP-Podcast (Activation) dataset? | CCC |
What metrics were used to measure the preCPC model in the Contrastive Unsupervised Learning for Speech Emotion Recognition paper on the MSP-Podcast (Activation) dataset? | CCC |
What metrics were used to measure the DANN model in the Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the MMER model in the MMER: Multimodal Multi-task Learning for Speech Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the TAP model in the Speaker Normalization for Self-supervised Speech Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the SYSCOMB: BLSTMATT with CSA model in the Empirical Interpretation of Speech Emotion Perception with Attention Based Model for Speech Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the Partially Fine-tuned HuBERT Large model in the A Fine-tuned Wav2vec 2.0/HuBERT Benchmark For Speech Emotion Recognition, Speaker Verification and Spoken Language Understanding paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the LSTM+FC model in the Speech Emotion Recognition Using Speech Feature and Word Embedding paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the TAP (5-fold) model in the Speaker Normalization for Self-supervised Speech Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the SER with MTL model in the Speech Emotion Recognition with Multi-Task Learning paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the Ensemble (Acoustic + Text)(Random Forests + Gradient Boosted Trees + Multi Layer Perceptron + Multinomial Naive Bayes + Logistic Regression) model in the Multimodal Speech Emotion Recognition and Ambiguity Resolution paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the CNN+LSTM model in the CNN+LSTM Architecture for Speech Emotion Recognition with Data Augmentation paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the TAP (Low Resource) model in the Speaker Normalization for Self-supervised Speech Emotion Recognition paper on the IEMOCAP dataset? | WA, UA, F1, AUC |
What metrics were used to measure the Speechformer++ model in the SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing paper on the LSSED dataset? | Unweighted Accuracy (UA) |
What metrics were used to measure the Transformer model in the Attention Is All You Need paper on the LSSED dataset? | Unweighted Accuracy (UA) |
What metrics were used to measure the PyResNet model in the LSSED: a large-scale dataset and benchmark for speech emotion recognition paper on the LSSED dataset? | Unweighted Accuracy (UA) |
What metrics were used to measure the CoordViT model in the CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate paper on the CREMA-D dataset? | Accuracy |
What metrics were used to measure the SepTr + LeRaC model in the LeRaC: Learning Rate Curriculum paper on the CREMA-D dataset? | Accuracy |
What metrics were used to measure the SepTr model in the SepTr: Separable Transformer for Audio Spectrogram Processing paper on the CREMA-D dataset? | Accuracy |
What metrics were used to measure the ResNet-18 + SPEL model in the Self-paced ensemble learning for speech and audio classification paper on the CREMA-D dataset? | Accuracy |
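Every prompt in this dataset follows the same template, `What metrics were used to measure the {model} model in the {paper} paper on the {dataset} dataset?`, so the model, paper, and dataset fields can be recovered programmatically. Below is a minimal sketch of loading the data with the Hugging Face `datasets` library and parsing a few rows; the repository id `your-org/metrics-qa` is a placeholder rather than the actual dataset path, and the comma-split of `metrics_response` assumes no metric name itself contains a comma.

```python
import re

from datasets import load_dataset

# Placeholder repository id: substitute the real dataset path.
ds = load_dataset("your-org/metrics-qa", split="train")

# Fixed prompt template used by every row in the table above.
PROMPT_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+) model "
    r"in the (?P<paper>.+) paper on the (?P<dataset>.+) dataset\?"
)

for row in ds.select(range(3)):
    match = PROMPT_RE.match(row["prompts"])
    if match is None:
        continue  # skip rows that deviate from the template
    # Assumes metric names contain no commas.
    metrics = [m.strip() for m in row["metrics_response"].split(",")]
    print(match["model"], "|", match["dataset"], "|", metrics)
```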