prompts | metrics_response |
|---|---|
What metrics were used to measure the T2M-GPT (τ = 0.5) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M-GPT (τ = 0) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M-GPT (τ ∈ U[0, 1]) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Fg-T2M model in the Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MLD model in the Executing your Commands via Motion Diffusion in Latent Space paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MDM model in the Human Motion Diffusion Model paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MotionDiffuse model in the MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MAA model in the Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M model in the Generating Diverse and Natural 3D Human Motions From Text paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the TM2T model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Text2Gesture model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Diffusion Motion model in the Diffusion Motion: Generate Text-Guided 3D Human Motion by Diffusion Model paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Language2Pose model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the HumanML3D dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the $\Delta$-interpolator model in the Motion Inbetweening via Deep $Δ$-Interpolator paper on the LaFAN1 dataset? | L2Q@5, L2Q@15, L2Q@30, L2P@5, L2P@15, L2P@30, NPSS@5, NPSS@15, NPSS@30 |
What metrics were used to measure the SSMCT model in the Single-Shot Motion Completion with Transformer paper on the LaFAN1 dataset? | L2Q@5, L2Q@15, L2Q@30, L2P@5, L2P@15, L2P@30, NPSS@5, NPSS@15, NPSS@30 |
What metrics were used to measure the TG-complete model in the Robust Motion In-betweening paper on the LaFAN1 dataset? | L2Q@5, L2Q@15, L2Q@30, L2P@5, L2P@15, L2P@30, NPSS@5, NPSS@15, NPSS@30 |
What metrics were used to measure the HM-VAE model in the Task-Generic Hierarchical Human Motion Prior using VAEs paper on the LaFAN1 dataset? | L2Q@5, L2Q@15, L2Q@30, L2P@5, L2P@15, L2P@30, NPSS@5, NPSS@15, NPSS@30 |
What metrics were used to measure the MLD model in the Executing your Commands via Motion Diffusion in Latent Space paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the DiverseMotion model in the DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MDM model in the Human Motion Diffusion Model paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M-GPT (τ ∈ U[0, 1]) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Fg-T2M model in the Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M-GPT (τ = 0.5) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M-GPT (τ = 0) model in the T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the AttT2M model in the AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the MotionDiffuse model in the MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the T2M model in the Generating Diverse and Natural 3D Human Motions From Text paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the TM2T model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the TEMOS model in the Executing your Commands via Motion Diffusion in Latent Space paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Language2Pose model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Text2Gesture model in the TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts paper on the KIT Motion-Language dataset? | FID, R Precision Top3, Diversity, Multimodality |
What metrics were used to measure the Dance Revolution model in the Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning paper on the BRACE dataset? | Frechet Inception Distance, Beat alignment score, Beat DTW cost, Footwork average, Powermove average, Toprock average |
What metrics were used to measure the AIST++ model in the AI Choreographer: Music Conditioned 3D Dance Generation with AIST++ paper on the BRACE dataset? | Frechet Inception Distance, Beat alignment score, Beat DTW cost, Footwork average, Powermove average, Toprock average |
What metrics were used to measure the Dancing 2 Music model in the Dancing to Music paper on the BRACE dataset? | Frechet Inception Distance, Beat alignment score, Beat DTW cost, Footwork average, Powermove average, Toprock average |
What metrics were used to measure the EDGE (w=1) model in the EDGE: Editable Dance Generation From Music paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the EDGE (w=2) model in the EDGE: Editable Dance Generation From Music paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the MoFusion model in the MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the Bailando model in the Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the AI Choreographer model in the AI Choreographer: Music Conditioned 3D Dance Generation with AIST++ paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the Dance Revolution model in the Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the DanceNet model in the Music2Dance: DanceNet for Music-driven Dance Generation paper on the AIST++ dataset? | Beat alignment score, FID |
What metrics were used to measure the MDM model in the Human Motion Diffusion Model paper on the HumanAct12 dataset? | Accuracy, FID, Multimodality |
What metrics were used to measure the MLD model in the Executing your Commands via Motion Diffusion in Latent Space paper on the HumanAct12 dataset? | Accuracy, FID, Multimodality |
What metrics were used to measure the Stacked Ensemble (CRF) model in the Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble paper on the WS160 dataset? | F1-Score |
What metrics were used to measure the LATTE (Linguistic units, lattices, PTMs, GNNs) model in the LATTE: Lattice ATTentive Encoding for Character-based Word Segmentation paper on the BEST-2010 dataset? | F1-Score |
What metrics were used to measure the Multiple Attentions (char-word-cc) model in the Character-based Thai Word Segmentation with Multiple Attentions paper on the BEST-2010 dataset? | F1-Score |
What metrics were used to measure the ThaiLMCut model in the ThaiLMCut: Unsupervised Pretraining for Thai Word Segmentation paper on the BEST-2010 dataset? | F1-Score |
What metrics were used to measure the AttaCut-SC model in the AttaCut: A Fast and Accurate Neural Thai Word Segmenter paper on the BEST-2010 dataset? | F1-Score |
What metrics were used to measure the Stacked Ensemble (CRF) model in the Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble paper on the BEST-2010 dataset? | F1-Score |
What metrics were used to measure the RCF (with Post-Processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the RCF (without Post-Processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the MOD model in the Motion-inductive Self-supervised Object Discovery in Videos paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the OCLR model in the Segmenting Moving Objects via an Object-Centric Layered Representation paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the GWM model in the Guess What Moves: Unsupervised Video and Image Segmentation by Anticipating Motion paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the EM model in the EM-driven unsupervised learning for efficient motion segmentation paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the MG model in the Self-supervised Video Object Segmentation by Motion Grouping paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the SIMO model in the paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the AMD model in the The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos paper on the DAVIS 2016 dataset? | J score |
What metrics were used to measure the GENESIS-V2 model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the Shelf&Tote Training Dataset dataset? | ARI |
What metrics were used to measure the MONET-G model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the Shelf&Tote Training Dataset dataset? | ARI |
What metrics were used to measure the GENESIS model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the Shelf&Tote Training Dataset dataset? | ARI |
What metrics were used to measure the SlotAttention model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the Shelf&Tote Training Dataset dataset? | ARI |
What metrics were used to measure the DeepCut model in the DeepCut: Unsupervised Segmentation using Graph Neural Networks Clustering paper on the ECSSD dataset? | mIoU |
What metrics were used to measure the AST model in the Unsupervised Multi-object Segmentation Using Attention and Soft-argmax paper on the ShapeStacks dataset? | ARI-FG |
What metrics were used to measure the GENESIS-V2 model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ShapeStacks dataset? | ARI-FG |
What metrics were used to measure the SlotAttention model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ShapeStacks dataset? | ARI-FG |
What metrics were used to measure the GENESIS model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ShapeStacks dataset? | ARI-FG |
What metrics were used to measure the MONET-G model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ShapeStacks dataset? | ARI-FG |
What metrics were used to measure the AST-Seg-B3-CT model in the Unsupervised Multi-object Segmentation Using Attention and Soft-argmax paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the AST model in the Unsupervised Multi-object Segmentation Using Attention and Soft-argmax paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the GNM model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the DTI model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the eMORL model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the IODINE model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the SA model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the MONet model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the MN model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the SPACE model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the GenV2 model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the SPAIR model in the ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation paper on the ClevrTex dataset? | mIoU, MSE |
What metrics were used to measure the AST model in the Unsupervised Multi-object Segmentation Using Attention and Soft-argmax paper on the ObjectsRoom dataset? | ARI-FG |
What metrics were used to measure the GENESIS-V2 model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ObjectsRoom dataset? | ARI-FG |
What metrics were used to measure the SlotAttention model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ObjectsRoom dataset? | ARI-FG |
What metrics were used to measure the GENESIS model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ObjectsRoom dataset? | ARI-FG |
What metrics were used to measure the MONET-G model in the GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement paper on the ObjectsRoom dataset? | ARI-FG |
What metrics were used to measure the RCF (with post-processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the RCF (without post-processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the OCLR model in the Segmenting Moving Objects via an Object-Centric Layered Representation paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the MOD model in the Motion-inductive Self-supervised Object Discovery in Videos paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the GWM model in the Guess What Moves: Unsupervised Video and Image Segmentation by Anticipating Motion paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the TokenCut model in the TokenCut: Segmenting Objects in Images and Videos with Self-supervised Transformer and Normalized Cut paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the AMD model in the The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos paper on the FBMS-59 dataset? | mIoU |
What metrics were used to measure the RCF (with post-processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the RCF (without post-processing) model in the Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the OCLR model in the Segmenting Moving Objects via an Object-Centric Layered Representation paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the GWM model in the Guess What Moves: Unsupervised Video and Image Segmentation by Anticipating Motion paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the MOD model in the Motion-inductive Self-supervised Object Discovery in Videos paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the SIMO model in the paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the TokenCut model in the TokenCut: Segmenting Objects in Images and Videos with Self-supervised Transformer and Normalized Cut paper on the SegTrack-v2 dataset? | mIoU |
What metrics were used to measure the AMD model in the The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos paper on the SegTrack-v2 dataset? | mIoU |