Title: IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model

URL Source: https://arxiv.org/html/2505.21146

Markdown Content:
###### Abstract.

Existing human motion generation methods that take trajectory and pose inputs apply global processing to both modalities, leading to suboptimal outputs. In this paper, we propose IKMo, an image-keyframed motion generation method based on a diffusion model in which trajectory and pose are decoupled. The trajectory and pose inputs pass through a two-stage conditioning framework. In the first stage, dedicated optimization modules refine the inputs. In the second stage, trajectory and pose are encoded in parallel by a Trajectory Encoder and a Pose Encoder. Then, motion with high spatial and semantic fidelity is guided by a motion ControlNet, which processes the fused trajectory and pose data. Experimental results on the HumanML3D and KIT-ML datasets demonstrate that the proposed method outperforms the state of the art on all metrics under trajectory-keyframe constraints. In addition, MLLM-based agents are implemented to pre-process model inputs. Given texts and keyframe images from users, the agents extract motion descriptions, keyframe poses, and trajectories as optimized inputs to the motion generation model. We conduct a user study with 10 participants. The results show that MLLM-based agent pre-processing makes the generated motion more in line with users’ expectations. We believe that the proposed method improves both the fidelity and controllability of diffusion-based motion generation.

![Image 1: Refer to caption](https://arxiv.org/html/2505.21146v1/x1.png)

Figure 1. We propose a human motion diffusion model that realizes simultaneous control via trajectory and keyframe pose constraints. Built upon this, we introduce IKMo, a novel MLLM-powered framework for image-keyframed motion generation. In the top-left, we visualize four motion trajectories arranged to form the abbreviation of our method, IKMo. In the bottom-left, we show the user input, consisting of keyframe images generated by Doubao and a text-based trajectory description. On the right, we provide a detailed view of the letter o, illustrating the generated motion. Purple-colored entity frames represent the keyframes, while gray frames denote the remaining frames. The order of frames in the motion sequence is indicated by their transparency, with the most transparent frame appearing first. The green curve on the ground represents a standard circular trajectory.

\Description

teaser

1. INTRODUCTION
---------------

Human motion generation is one of the essential tasks in computer animation. Virtual characters driven by human motion generation techniques support a wide range of applications in games, movies, and virtual reality. In recent years, the text-to-motion method has become popular, as it allows natural human motion generation based on language descriptions.

However, relying solely on textual descriptions poses critical limitations. Text is often ambiguous and imprecise, lacks hard constraints, and introduces high variability in generated results—even for identical prompts. As a result, the synthesized motion often fails to match users’ expectations and is difficult to deploy directly in applications requiring precise control. For example, a prompt like “the person walks in an arc and waves their hand” lacks clear specifications on the arc shape, timing, or type of gesture, and may not even result in an actual arc trajectory, leading to inconsistent or even unusable outputs.

Multi-modal control signals—such as images, poses, and trajectories—have proven essential in improving generation quality across domains. In image and video generation, ControlNet (Zhang et al., [2023b](https://arxiv.org/html/2505.21146v1#bib.bib65)), Animate Anyone (Hu, [2024](https://arxiv.org/html/2505.21146v1#bib.bib21)), Follow Your Pose (Ma et al., [2024a](https://arxiv.org/html/2505.21146v1#bib.bib37)), and MimicMotion (Zhang et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib69)) utilize pose-based conditions to guide controllable synthesis. MotionBridge (Tanveer et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib49)) combines keyframes and trajectories for joint video control. In motion generation, OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)) leverages spatial joint trajectories, while CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) uses keyframe poses for motion interpolation. Inspired by these works, we aim to control motion generation using both trajectories and keyframe poses. However, existing methods typically merge these constraints into a single representation and process them jointly through a unified strategy. Such designs often fail to fully exploit the unique characteristics of each constraint type, resulting in suboptimal control accuracy and flexibility in complex motion scenarios.

In this work, we propose a motion diffusion model that leverages both trajectory and keyframe pose constraints for controllable human motion generation. To enhance performance when simultaneously conditioned on both trajectories and keyframe poses, we decouple the two constraints and process them through parallel control pathways. By integrating both trajectory and pose constraints, our model enables fine-grained and flexible control over human motion generation based on 3D trajectory and keyframe pose inputs. Experiments on HumanML3D and KIT-ML show that our model outperforms SOTA baselines across all metrics under keyframe pose and trajectory conditions.

While multi-modal constraints enhance the performance of motion generation, it remains challenging for general users to design 3D trajectories or full-body poses manually, as this requires expertise with specialized tools, coordinate transformations, and anatomically accurate joint configurations.

To facilitate more intuitive user interaction, we further introduce a user-friendly framework that integrates our motion generation model with a multi-agent system powered by Multi-modal Large Language Models (MLLMs). Given user-provided images and text prompts, the system—composed of an interaction agent, a motion design agent, and a trajectory planning agent—conducts multi-turn dialogues to interpret user intent, extract 3D poses, and plan complete motion configurations. These structured controls are then provided to our conditioned diffusion model to synthesize motions that align with both visual cues and user instructions.

The main contributions are summarized as follows:

* We propose a novel pipeline for image-keyframed human motion generation with a diffusion model. MLLM-based agents are used to interpret users’ input, including texts and images, into model inputs. A user study shows that the generated motion is more in line with users’ expectations. To the best of our knowledge, this is the first framework to use human images as keyframe cues for motion control.
* We propose a decoupled control strategy that employs two-stage parallel modules to handle trajectory constraints and keyframe pose constraints separately, enabling more effective and fine-grained control. Extensive experiments on HumanML3D and KIT-ML demonstrate that our method outperforms state-of-the-art approaches across all metrics under the trajectory-keyframe pose condition.

2. RELATED WORK
---------------

### 2.1. Human Motion Generation

Human motion generation aims to synthesize natural motion sequences conditioned on various controls. Based on conditioning type, it can be categorized into subfields. Action-to-motion (Guo et al., [2020](https://arxiv.org/html/2505.21146v1#bib.bib18); Petrovich et al., [2021](https://arxiv.org/html/2505.21146v1#bib.bib39); Degardin et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib11); Cervantes et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib3); Lu et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib35)) generates motion sequences conditioned on action categories. Text-to-motion (Petrovich et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib40); Tevet et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib51), [2022](https://arxiv.org/html/2505.21146v1#bib.bib50); Guo et al., [2022b](https://arxiv.org/html/2505.21146v1#bib.bib17); Petrovich et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib41); Jiang et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib23); Kim et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib27); Guo et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib15); Zhang et al., [2023a](https://arxiv.org/html/2505.21146v1#bib.bib67), [2024a](https://arxiv.org/html/2505.21146v1#bib.bib66)) controls motion generation based on natural language descriptions. Speech-to-motion (Yi et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib63); Chen et al., [2024a](https://arxiv.org/html/2505.21146v1#bib.bib5); Chhatre et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib7); Liu et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib31)) generates human gestures conditioned on speech signals. Music-to-motion (Li et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib28); Tseng et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib52)) generates dance motions based on music. Sketch-to-motion (Wu et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib57); Wang et al., [2025](https://arxiv.org/html/2505.21146v1#bib.bib54)) generates motions conditioned on human pose sketches or hand-drawn stick figures. Trajectory-guided motion generation (Kaufmann et al., [2020](https://arxiv.org/html/2505.21146v1#bib.bib26); Karunratanakul et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib25); Rempe et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib44); Shafir et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib46)) uses predefined motion trajectories to guide the spatial path of the generated motions. Keyframe-guided motion generation (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59); Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) controls motion generation through spatial or pose constraints at keyframes. In addition, several works explore human motion generation involving interactions with 3D scenes (Hassan et al., [2021](https://arxiv.org/html/2505.21146v1#bib.bib19); Huang et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib22); Lim et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib30); Liu et al., [2024a](https://arxiv.org/html/2505.21146v1#bib.bib32); Xiao et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib58); Zhao et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib70); Jiang et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib24)), objects (Zhang et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib68); Xu et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib61); Gao et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib14); Diller and Dai, [2024](https://arxiv.org/html/2505.21146v1#bib.bib12); Dai et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib10)), or other humans (Tanaka and Fujiwara, [2023](https://arxiv.org/html/2505.21146v1#bib.bib48); Cai et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib2); Chopin et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib8); Liang et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib29)).

Although various modalities have been employed to control human motion generation, images—an intuitive modality for specifying motion cues—have not been thoroughly explored as keyframe constraints. In this work, we make an initial attempt by proposing IKMo, a framework that enables image-based keyframe control for human motion generation.

### 2.2. Trajectory and Keyframe Guided Motion Generation

Existing motion generation methods often involve considerable randomness in their outputs. While this encourages motion diversity, it may hinder the generation of motions that align with user intent, making controllability a key research focus. Prior works such as PriorMDM (Shafir et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib46)), GMD (Karunratanakul et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib25)), and OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)) enable control through joint trajectory constraints, allowing users to specify spatial paths that the motion should follow. Meanwhile, CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) utilizes keyframe poses as constraints to guide motion interpolation, enabling fine-grained temporal control over specific frames.

Although previous methods support motion generation conditioned on either trajectories or keyframes, jointly controlling both remains a significant challenge. Many existing approaches fuse these signals into a unified representation, entangling spatial and semantic constraints. However, trajectories represent absolute motion paths, while poses encode relative body configurations. Unifying them can distort pose structure and compromise both trajectory fidelity and control precision. This highlights the need for a decoupled approach that treats trajectories and poses as complementary yet independent control signals, enabling more accurate and flexible motion synthesis.

OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)) uses global spatial coordinates of joints to control motion trajectories and approximates full-body keyframe poses with the 3D positions of five joints (pelvis, wrists, ankles), applying the same control mechanism in both cases. While it supports trajectory control via specific joints (e.g., pelvis for body movement, wrists for arms), its inability to jointly control the full body limits its effectiveness in keyframe pose tasks. CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) conditions on joint rotations extracted from keyframe poses to perform motion in-betweening. However, since its control condition is based on joint rotations rather than spatial coordinates, and it employs a unified mechanism for both trajectory and pose control, it suffers from larger trajectory control errors and is similarly difficult to manually configure for effective guidance. In contrast, our approach decouples trajectory and keyframe pose signals and handles them with parallel modules, significantly reducing control errors in both trajectory following and keyframe pose alignment.

### 2.3. Controllable Diffusion-Based Generative Model In Video Generation

In the general domain of controllable video generation, several image-guided approaches have been explored. Image-conditioned methods (Wang et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib55); Xing et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib60)) typically use depth maps or sketches as keyframe conditions to guide video synthesis. Trajectory-conditioned generation (Yin et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib64); Ma et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib36); Wang et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib53), [a](https://arxiv.org/html/2505.21146v1#bib.bib56)) employs spatial trajectories to control object or camera movements. MotionBridge (Tanveer et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib49)) combines keyframe images and trajectories to enable more comprehensive control over the generation process. In the subfield of human motion video generation, approaches such as (Ma et al., [2024a](https://arxiv.org/html/2505.21146v1#bib.bib37); Hu, [2024](https://arxiv.org/html/2505.21146v1#bib.bib21); Chan et al., [2019](https://arxiv.org/html/2505.21146v1#bib.bib4); Xu et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib62); Zhang et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib69)) rely on pose stick figures extracted by pose detectors to indicate motion cues.

However, in the domain of human motion generation, the use of image-based keyframes and trajectory conditions remains underexplored. In this work, we conduct a preliminary investigation in this direction by developing a pipeline that enables human motion generation conditioned on both image-based keyframes and trajectory instructions.

![Image 2: Refer to caption](https://arxiv.org/html/2505.21146v1/x2.png)

Figure 2. (a) The overall pipeline of IKMo. Given an input image and textual requirement, our MLLM-based multi-agent system outputs a motion configuration consisting of a motion description, keyframe poses, and trajectory coordinates. This configuration is then fed into our Conditioned Motion Diffusion Model to generate the final human motion. (b) Details of the Conditioned Motion Diffusion Model. The model predicts a clean motion from a noised motion sequence and a text prompt, while being guided by keyframe poses and trajectory constraints. (c) Motion Optimization. Keyframe poses and trajectory constraints iteratively perturb the noised motion through gradient descent to better align with control signals. (d) Motion Control. Keyframe poses and trajectory inputs are encoded separately using a Pose Encoder and a Trajectory Encoder. The resulting features are fused and injected into the Motion ControlNet to guide motion generation.

3. BACKGROUND
-------------

Diffusion models have demonstrated remarkable performance across a variety of generative tasks (Sohl-Dickstein et al., [2015](https://arxiv.org/html/2505.21146v1#bib.bib47); Ho et al., [2020](https://arxiv.org/html/2505.21146v1#bib.bib20)), achieving especially notable success in text-to-image synthesis (Saharia et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib45); Ramesh et al., [2022](https://arxiv.org/html/2505.21146v1#bib.bib43)). More recently, these models have been extended to the domain of human motion generation (Tevet et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib51)), enabling the synthesis of temporally coherent and structurally plausible motion sequences conditioned on textual descriptions.

Let $\mathbf{x}_0 \in \mathbb{R}^{N \times D}$ denote the original human motion sequence, where $N$ is the number of frames and $D$ is the dimensionality of each pose. The diffusion process consists of two stages: a forward diffusion process and a reverse denoising process.

In the forward diffusion process, the original motion data is progressively corrupted with Gaussian noise using a predefined variance schedule $\{\beta_t\}_{t=1}^{T}$, producing a sequence of latent variables $\mathbf{x}_1, \dots, \mathbf{x}_T$. Each step in the process is defined as:

(1) $q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}\big(\mathbf{x}_t;\, \sqrt{1-\beta_t}\,\mathbf{x}_{t-1},\, \beta_t \mathbf{I}\big)$

In the reverse denoising process, the model seeks to recover clean motion sequences from noisy inputs by learning a reverse diffusion process, conditioned on a text prompt $\mathbf{p}$. The transition at each step is modeled as:

(2) $P_{\theta}(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{p}) = \mathcal{N}\big(\mu_t(\theta),\, (1-\alpha_t)\mathbf{I}\big)$

where $\mathbf{x}_t \in \mathbb{R}^{N \times D}$ denotes the motion at the $t^{\text{th}}$ noising step, and there are $T$ denoising steps in total. $\alpha_t \in (0, 1)$ are hyper-parameters that gradually decrease towards $0$ as $t$ increases. The term $\mathbf{p}$ represents the conditioning input used to guide the generation process. Unlike traditional noise-prediction methods, most human motion diffusion models directly predict the clean motion sequence $\mathbf{x}_0$, denoted as $G_{\theta}(\mathbf{x}_t, t, \mathbf{p})$. The mean of the reverse process is then given by:

(3) $\mu_t(\theta) = \dfrac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\,\mathbf{x}_0(\theta) + \dfrac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\,\mathbf{x}_t$

where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$.

Under this formulation, the training objective of the model is defined as:

(4) $\mathcal{L} := \mathbb{E}_{(\mathbf{x}_0, \mathbf{p}) \sim q,\ t \sim [1, T]}\left[\big\lVert \mathbf{x}_0 - G_{\theta}(\mathbf{x}_t, t, \mathbf{p}) \big\rVert_2^2\right]$

This objective minimizes the mean squared error between predicted and ground-truth motions, enhancing generation accuracy and consistency.

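To make the formulation concrete, the forward corruption and the $\mathbf{x}_0$-prediction objective can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the linear schedule endpoints are assumed, and the denoiser $G_\theta$ is left abstract (the loss simply compares its output array against the clean motion). The closed-form sampling of $\mathbf{x}_t$ given $\mathbf{x}_0$ follows from composing Eq. (1) over $t$ steps, a standard property of the forward process.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule {beta_t} and its cumulative products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas            # alpha_t = 1 - beta_t
    alpha_bars = np.cumprod(alphas) # alpha_bar_t = prod_{s<=t} alpha_s
    return betas, alphas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0): composing Eq. (1) over t steps gives
    x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def x0_prediction_loss(x0, x0_pred):
    """Eq. (4): mean squared error between the clean motion x_0 and the
    model's direct prediction G_theta(x_t, t, p)."""
    return float(np.mean((x0 - x0_pred) ** 2))
```

In practice `x0` would be a motion array of shape `(N, D)` (frames by pose features) and `x0_pred` the denoiser's output at a sampled timestep; here both are plain arrays so the two equations can be checked in isolation.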
4. METHOD
---------

In motion generation tasks jointly guided by trajectories and keyframe poses, the two inputs encode inherently distinct semantics: trajectories represent the absolute 3D path of the pelvis in space, denoted as $\mathbf{T} = \{\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_N\}$, $\mathbf{t}_i \in \mathbb{R}^3$, where $N$ is the number of frames. A pose encompasses the 3D spatial coordinates of all body joints, $\mathbf{P}_i = \{\mathbf{j}_i^{(1)}, \mathbf{j}_i^{(2)}, \ldots, \mathbf{j}_i^{(K)}\}$, $\mathbf{j}_i^{(k)} \in \mathbb{R}^3$, where $K$ is the number of joints. Our method explicitly decouples the processing of trajectories and poses, enabling trajectory control to focus on absolute spatial motion and pose control on local joint configuration. This separation leads to more accurate and interpretable control, as illustrated in [Figure 2](https://arxiv.org/html/2505.21146v1#S2.F2 "Figure 2 ‣ 2.3. Controllable Diffusion-Based Generative Model In Video Generation ‣ 2. RELATED WORK ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")(b–d). Notably, the decoupled design allows users to flexibly input poses from external sources without requiring consistent global positioning. Built on this foundation, we present a user-friendly framework—IKMo—where a multi-agent MLLM system interprets high-level image and text prompts into structured motion instructions. Our motion diffusion model then synthesizes the final animation under coordinated trajectory and pose guidance, as shown in [Figure 2](https://arxiv.org/html/2505.21146v1#S2.F2 "Figure 2 ‣ 2.3. Controllable Diffusion-Based Generative Model In Video Generation ‣ 2. RELATED WORK ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")(a).

In this section, we first describe the mechanisms of trajectory and keyframe pose guidance. Following this, we describe how trajectory and keyframe poses are synergistically leveraged within our denoising network to control motion generation, as illustrated in [Figure 2](https://arxiv.org/html/2505.21146v1#S2.F2 "Figure 2 ‣ 2.3. Controllable Diffusion-Based Generative Model In Video Generation ‣ 2. RELATED WORK ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")(b–d). Finally, we present the pipeline for generating motion from keyframe images.

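Under this decoupling, the two control signals can be carried as independent arrays with their own validity masks: a dense (or sparse) pelvis path for $\mathbf{T}$, and full-body joint coordinates $\mathbf{P}_i$ only at keyframes. The sketch below shows one plausible container layout; the frame count, joint count, and the straight-line example path are illustrative assumptions, not values from the paper.

```python
import numpy as np

N, K = 120, 22  # number of frames; number of body joints (illustrative)

# Trajectory constraint T: one absolute 3D pelvis position per frame,
# plus a binary mask marking the frames where the constraint is active.
traj = np.zeros((N, 3))
traj[:, 0] = np.linspace(0.0, 2.0, N)  # e.g. advance 2 m along +x
traj_mask = np.ones(N, dtype=bool)     # dense path: every frame constrained

# Keyframe pose constraints P_i: full-body joint coordinates (K x 3),
# supplied only at sparse keyframes; global placement comes from traj.
poses = np.zeros((N, K, 3))
pose_mask = np.zeros(N, dtype=bool)
for frame, joints in {0: np.zeros((K, 3)), N - 1: np.zeros((K, 3))}.items():
    poses[frame] = joints
    pose_mask[frame] = True
```

Keeping separate masks is what lets a pose be supplied "from an external source" without any global position: the pose array never has to agree with the trajectory array frame by frame.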
99
+ ### 4.1. Motion Generation With Trajectory Control
100
+
101
We adopt the two-stage trajectory guidance framework proposed in OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)), a SOTA motion diffusion model that achieves strong performance in trajectory-conditioned human motion generation. Its design offers a flexible yet effective mechanism for incorporating sparse trajectory constraints, making it a suitable foundation for our framework.

In the first stage, Trajectory Optimization, an analytical function $L_{\text{traj}}(x, c_{\text{traj}})$ measures the L2 distance between the motion joint positions and the trajectory constraints, and is used to optimize the generative process through gradient-based perturbation. In our study, only the pelvis joint is used as a control signal to constrain the global trajectory of the motion. The analytical function is defined as:

$$L_{\text{traj}}(x, c_{\text{traj}}) = \frac{\sum_{n} \sigma_{n;\text{traj}} \left\lVert c_{n;\text{traj}} - x_{n,\text{root}}^{g} \right\rVert_{2}}{\sum_{n} \sigma_{n;\text{traj}}}, \quad x^{g} = R(x), \tag{5}$$

where $\sigma_{n;\text{traj}}$ is a binary indicator that specifies whether the trajectory control signal $c_{n;\text{traj}}$ contains a valid value at frame $n$ for the root joint, and $R(\cdot)$ transforms local joint coordinates into global absolute positions.
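Concretely, Eq. (5) is a masked average of per-frame root distances. A minimal NumPy sketch (the function name and array shapes are our own illustration, not from the paper):

```python
import numpy as np

def traj_loss(x_root_g, c_traj, valid):
    """Masked trajectory loss in the spirit of Eq. (5).

    x_root_g : (frames, 3) global root (pelvis) positions of the motion
    c_traj   : (frames, 3) per-frame trajectory constraints
    valid    : (frames,)  binary mask sigma (1 where a constraint exists)
    """
    d = np.linalg.norm(c_traj - x_root_g, axis=-1)  # per-frame L2 distance
    return (valid * d).sum() / valid.sum()          # average over valid frames
```

Frames without a constraint contribute nothing to the loss, so the guidance gradient is nonzero only where the user actually specified a trajectory point.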
However, since this guidance strategy is applied to only a single joint, it fails to propagate spatial constraints to the rest of the body through backpropagation, often resulting in unrealistic or physically implausible full-body configurations. To mitigate this limitation, a second stage is introduced, consisting of a trajectory encoder and a trainable copy of the Transformer encoder (composed of stacked encoder blocks) originally used in the motion diffusion model. By conditioning on the spatial signals, this stage helps the generative model learn globally coherent and physically plausible full-body motion patterns.
### 4.2. Motion Generation With Keyframe Pose Control

Pose guidance ensures consistency between the generated motion and the user-constrained keyframe poses. Our framework adopts a two-stage design for this purpose.
In the Pose Optimization Module, we introduce a pose analysis function to quantify the alignment between generated and target poses:

$$L_{\text{pose}}(x, c_{\text{pose}}) = \frac{\sum_{n} \sigma_{n;\text{pose}} \left\lVert \text{Align}(x_{n}^{g}, c_{n;\text{pose}}, c_{n;\text{traj}}) - x_{n}^{g} \right\rVert_{2}}{\sum_{n} \sigma_{n;\text{pose}}}, \tag{6}$$

where $\sigma_{n;\text{pose}}$ is a binary indicator that specifies whether the pose control signal $c_{n;\text{pose}}$ provides a valid constraint at frame $n$. The function $L_{\text{pose}}(\cdot)$ evaluates the L2 distance between the motion pose and the aligned constraint pose.
To enhance the sensitivity of this guidance to relative pose structures, we apply strategic spatial adjustments to the constraint poses. Specifically, for frames with trajectory constraints, we translate the entire pose so that the root joint's position aligns with the position of the trajectory constraint. For frames without trajectory constraints, we align the root joint's projection to that of the corresponding frame in the motion sequence. This spatial alignment mechanism mitigates the influence of global displacement and allows the optimization process to focus on local pose differences.
The alignment function is denoted as:

$$\text{Align}(x^{g}_{n}, c_{n;\text{pose}}, c_{n;\text{traj}}) = c_{n;\text{pose}} + \Delta_{\text{align}}, \tag{7}$$

$$\Delta_{\text{align}} = \begin{cases} \text{Pos}(c_{n;\text{traj},\text{root}}) - \text{Pos}(c_{n;\text{pose},\text{root}}), & \text{if } \sigma_{n;\text{traj}} = 1 \\[4pt] \left( \text{Proj}_{x}(x^{g}_{n,\text{root}}) - \text{Proj}_{x}(c_{n;\text{pose},\text{root}}),\; 0,\; \text{Proj}_{z}(x^{g}_{n,\text{root}}) - \text{Proj}_{z}(c_{n;\text{pose},\text{root}}) \right), & \text{otherwise} \end{cases} \tag{8}$$

where $\text{Pos}(\cdot)$ returns the pelvis joint coordinates of the frame's motion, $\text{Proj}_{x}(\cdot)$ denotes the projection of the pelvis coordinates onto the x-axis, and $\text{Proj}_{z}(\cdot)$ denotes the projection onto the z-axis.
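The case split in Eq. (8) can be sketched in a few lines of NumPy, assuming a y-up coordinate system (as in HumanML3D) and per-frame 3D root positions; the function name and signature are illustrative, not from the paper:

```python
import numpy as np

def align_delta(x_root_g, c_pose_root, c_traj_root, has_traj):
    """Translation Delta_align of Eq. (8) applied to a constraint pose.

    x_root_g    : (3,) generated global root (pelvis) position at frame n
    c_pose_root : (3,) root position of the constraint pose
    c_traj_root : (3,) trajectory constraint for the root (used if has_traj)
    has_traj    : bool, whether sigma_traj = 1 at this frame
    """
    if has_traj:
        # Move the whole constraint pose onto the trajectory point.
        return c_traj_root - c_pose_root
    # No trajectory constraint: align only the ground-plane (x/z)
    # projection to the generated motion, leaving height (y) untouched.
    return np.array([x_root_g[0] - c_pose_root[0],
                     0.0,
                     x_root_g[2] - c_pose_root[2]])
```

Adding this delta to every joint of the constraint pose removes global displacement before the L2 comparison in Eq. (6), so the loss reacts to local pose differences rather than where the pose happens to stand.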
To further enhance motion style accuracy and temporal consistency of pose generation, we introduce a second stage within the denoising network, consisting of a pose encoder and a trainable copy of the Transformer encoder. It takes the keyframe poses as conditional inputs and runs in parallel with the diffusion model to provide continuous pose-level guidance throughout the generation process. Its core function is to improve intra-frame similarity between the generated motion and the keyframes, while also increasing the semantic alignment between the complete motion sequence and the specified key poses. Through this guidance, the system not only boosts frame-wise fidelity but also enforces structural coherence over the entire motion.
### 4.3. Synergistic Guidance via Trajectory and Keyframe Poses

We randomly sample trajectories and pose sequences along the temporal dimension. The trajectory is represented by a sequence of root joint coordinates, while the pose corresponds to a sequence of full-body spatial joint positions. To mitigate inconsistencies in global rotation between image-based 3D poses and real-world motions, we apply small-scale global random rotations to the sampled poses before training.
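This augmentation amounts to a small random yaw about the vertical axis. A sketch follows; the rotation range is an assumed placeholder, since the paper specifies only "small-scale" rotations:

```python
import numpy as np

def random_yaw(pose, max_deg=15.0, rng=None):
    """Apply a small random global rotation about the vertical (y) axis
    to a (joints, 3) pose. max_deg is an illustrative placeholder."""
    rng = rng or np.random.default_rng()
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])   # yaw rotation matrix
    return pose @ rot.T
```

Because the rotation is about the vertical axis, joint heights and limb lengths are preserved; only the facing direction varies between samples.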
In the first stage, Motion Optimization, we jointly use the pose analysis function and the trajectory analysis function to construct a gradient perturbation term that updates the noised motion $x_t$. The guidance process is shown in [Figure 2](https://arxiv.org/html/2505.21146v1#S2.F2 "Figure 2 ‣ 2.3. Controllable Diffusion-Based Generative Model In Video Generation ‣ 2. RELATED WORK ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")(c), where "Noised motion" denotes $x_t$. The perturbation is defined as:

$$x_{t} = x_{t} - \tau \nabla_{x_{t}} \left[ \alpha \cdot L_{\text{traj}}(x_{t}, c_{\text{traj}}) + (1 - \alpha) \cdot L_{\text{pose}}(x_{t}, c_{\text{pose}}) \right], \tag{9}$$

where $\tau$ denotes the guidance strength, and the weighting factor $\alpha$ is dynamically computed from the relative loss magnitudes:

$$\alpha = \frac{L_{\text{traj}}(x_{t}, c_{\text{traj}})}{L_{\text{traj}}(x_{t}, c_{\text{traj}}) + L_{\text{pose}}(x_{t}, c_{\text{pose}})}, \tag{10}$$

which reflects the relative contributions of trajectory and pose errors. This joint guidance allows the model to balance structural coherence and spatial alignment adaptively.
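The adaptive weighting of Eqs. (9) and (10) can be sketched as follows. In the full model the gradients flow through the masked losses of Eqs. (5) and (6) via autograd; here both losses are simplified to plain squared distances so the gradients are analytic, which makes the weighting behavior easy to inspect:

```python
import numpy as np

def guided_step(x_t, c_traj, c_pose, tau=0.1):
    """One guidance step in the spirit of Eqs. (9)-(10), with both
    losses simplified to plain squared distances for illustration."""
    l_traj = np.sum((x_t - c_traj) ** 2)
    l_pose = np.sum((x_t - c_pose) ** 2)
    alpha = l_traj / (l_traj + l_pose)   # dynamic weighting, Eq. (10)
    # Analytic gradient of the weighted loss (alpha held fixed):
    grad = alpha * 2 * (x_t - c_traj) + (1 - alpha) * 2 * (x_t - c_pose)
    return x_t - tau * grad              # perturbed noised motion, Eq. (9)
```

The weighting pushes the update toward whichever constraint is currently violated more, so neither trajectory nor pose error dominates the guidance by construction.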
In the second stage, we employ a Motion ControlNet, a trainable copy of the Transformer encoder used in the motion diffusion model. It incorporates trajectory and keyframe pose constraints, which are independently processed through dedicated trajectory and pose encoders. The encoded features are fused and injected into the ControlNet, allowing the integration of both spatial and semantic information into the denoising backbone. The guidance process is illustrated in [Figure 2](https://arxiv.org/html/2505.21146v1#S2.F2 "Figure 2 ‣ 2.3. Controllable Diffusion-Based Generative Model In Video Generation ‣ 2. RELATED WORK ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")(b,d).
### 4.4. Image Keyframe Controlled Motion Generation

To enable fine-grained, controllable motion generation from high-level visual and textual inputs, we propose a multi-agent system with three specialized agents: an Interaction Agent, a Motion Design Agent, and a Trajectory Planning Agent. Unlike single-agent LLMs, which struggle with multi-modal parsing and temporally structured planning, our collaborative framework decomposes the task into focused roles, improving accuracy, interpretability, and modularity. The Interaction Agent conducts multi-turn dialogue to extract user intent, including trajectory preferences and action semantics. The Motion Design Agent recovers 3D keyframe poses from user images using TokenHMR (Dwivedi et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib13)), a SOTA human mesh recovery model. Extracted SMPL (Loper et al., [2015](https://arxiv.org/html/2505.21146v1#bib.bib33)) poses are mapped to a canonical 22-joint format and transformed into a unified coordinate system compatible with HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)), ensuring consistency across diverse inputs and alignment with the trajectory frame. The Trajectory Planning Agent procedurally generates 3D paths using parameterized curves and supports complex multi-segment planning.

Together, these agents output a complete motion specification: a structured natural language prompt, aligned keyframe poses, and a global trajectory. These elements are seamlessly integrated into our diffusion-based generation model, enabling precise and flexible motion synthesis.
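As an illustration of the parameterized curves the Trajectory Planning Agent can emit, the sketch below samples an S-shaped ground path as per-frame root positions; the exact parameterization is our assumption, not specified in the paper:

```python
import numpy as np

def s_shaped_trajectory(n_frames=120, length=4.0, amplitude=1.0):
    """Sample per-frame root positions along a parameterized S-shaped
    path on the ground plane (y-up coordinates, as in HumanML3D)."""
    t = np.linspace(0.0, 1.0, n_frames)
    x = amplitude * np.sin(2.0 * np.pi * t)  # lateral sway: two half-turns
    y = np.zeros_like(t)                     # root height is left to the poses
    z = length * t                           # steady forward progress
    return np.stack([x, y, z], axis=1)       # (n_frames, 3)
```

Multi-segment paths can then be built by concatenating such curves and re-sampling to the target frame count before passing them to the diffusion model as the trajectory condition.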
Table 1. Quantitative results on the HumanML3D test set. Bold indicates the best result, underline indicates the second-best. ↑ means higher is better, ↓ means lower is better, and → denotes that values closer to the real data are preferred. Ours (w/ rotations) indicates training with random pose rotations; Ours (w/o rotations) indicates training without them.

| Method | Condition | FID ↓ | R-precision (Top-3) ↑ | Diversity → | Foot skating ratio ↓ | Traj. err. (50 cm) ↓ | Loc. err. (50 cm) ↓ | Avg. err. ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Real | - | 0.002 | 0.797 | 9.503 | 0.000 | 0.000 | 0.000 | 0.000 |
| MDM | Pelvis | 0.698 | 0.602 | 9.197 | 0.1019 | 0.4022 | 0.3076 | 0.5959 |
| PriorMDM | Pelvis | 0.475 | 0.583 | 9.156 | 0.0897 | 0.3457 | 0.2132 | 0.4417 |
| GMD | Pelvis | 0.576 | 0.665 | 9.206 | 0.1009 | 0.0931 | 0.0321 | 0.1439 |
| OmniControl | Pelvis | 0.322 | 0.691 | 9.545 | 0.0571 | 0.0404 | 0.0085 | 0.0367 |
| OmniControl | Pelvis+Keyframes | 1.420 | 0.652 | 9.268 | 0.1039 | 0.4457 | 0.0670 | 0.1371 |
| CondMDI | Pelvis+Keyframes | 0.266 | 0.678 | 9.198 | 0.0900 | 0.4671 | 0.2093 | 0.1661 |
| Ours (w/ rotations) | Pelvis+Keyframes | 0.239 | 0.678 | 9.686 | 0.0543 | 0.0246 | 0.0076 | 0.0250 |
| Ours (w/o rotations) | Pelvis+Keyframes | 0.177 | 0.672 | 9.616 | 0.0609 | 0.0176 | 0.0056 | 0.0212 |
Table 2. Quantitative results on the KIT-ML test set.

| Method | Condition | FID ↓ | R-precision (Top-3) ↑ | Diversity → | Traj. err. (50 cm) ↓ | Loc. err. (50 cm) ↓ | Avg. err. ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Real | - | 0.031 | 0.779 | 11.08 | 0.000 | 0.000 | 0.000 |
| PriorMDM | Pelvis | 0.851 | 0.397 | 10.518 | 0.3310 | 0.1400 | 0.2305 |
| GMD | Pelvis | 1.565 | 0.382 | 9.664 | 0.5443 | 0.3003 | 0.4070 |
| OmniControl | Pelvis | 1.237 | 0.365 | 10.784 | 0.1315 | 0.0384 | 0.0875 |
| OmniControl | Pelvis+Keyframes | 0.690 | 0.405 | 10.597 | 0.1226 | 0.0123 | 0.0673 |
| Ours | Pelvis+Keyframes | 0.531 | 0.413 | 10.748 | 0.0341 | 0.0084 | 0.0333 |
5. RESULTS
----------

![Image 3: Refer to caption](https://arxiv.org/html/2505.21146v1/x3.png)

Figure 3. Qualitative results. All input images are generated by Doubao. Colored frames represent keyframes, while gray frames represent the other frames. The transparency of the gray frames indicates their position in the motion sequence, with more transparent frames appearing earlier. The green trajectory on the ground represents a standard trajectory. To provide a clearer and more consistent view for comparison, we applied translation and rotation to some results. The "Origin" thumbnail shows the original version of the motion.
### 5.1. Datasets and Evaluation Metrics

We evaluate our model on the HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) dataset, which contains 14,646 text-annotated motion sequences sourced from AMASS (Mahmood et al., [2019](https://arxiv.org/html/2505.21146v1#bib.bib38)) and HumanAct12 (Guo et al., [2020](https://arxiv.org/html/2505.21146v1#bib.bib18)). Additionally, we evaluate on the KIT-ML (Plappert et al., [2016](https://arxiv.org/html/2505.21146v1#bib.bib42)) dataset, comprising 3,911 sequences.

We follow the evaluation protocol proposed in (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)), using FID to measure motion realism, R-Precision for text-motion alignment, and Diversity for intra-sample variation. To assess control accuracy, we report the Foot skating ratio, Trajectory Error, Location Error, and Average Error at keyframes (Karunratanakul et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib25)). To further quantify the accuracy of pose control, we introduce a new metric, Pose Dist, which evaluates the average Euclidean distance between pelvis-centered generated and reference poses at controlled frames. Full implementation details are provided in the supplementary material.
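A minimal sketch of the proposed Pose Dist metric, assuming (frames, joints, 3) joint-position arrays with the pelvis as joint 0 (consistent with the HumanML3D 22-joint layout); the function name and signature are ours:

```python
import numpy as np

def pose_dist(generated, reference, controlled_frames):
    """Average Euclidean distance between pelvis-centered generated and
    reference poses, evaluated only at the controlled frames."""
    gen = generated - generated[:, :1, :]   # center each pose on its pelvis
    ref = reference - reference[:, :1, :]
    d = np.linalg.norm(gen - ref, axis=-1)  # (frames, joints) distances
    return d[controlled_frames].mean()
```

Because both poses are pelvis-centered before comparison, the metric is invariant to global translation and measures only how well the body configuration itself matches the keyframe.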
### 5.2. Quantitative Evaluation

We compare our method against both mainstream and SOTA approaches on the HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) and KIT-ML (Plappert et al., [2016](https://arxiv.org/html/2505.21146v1#bib.bib42)) datasets. [Table 1](https://arxiv.org/html/2505.21146v1#S4.T1 "Table 1 ‣ 4.4. Image Keyframe Controlled Motion Generation ‣ 4. METHOD ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model") presents the performance comparison on HumanML3D, while [Table 2](https://arxiv.org/html/2505.21146v1#S4.T2 "Table 2 ‣ 4.4. Image Keyframe Controlled Motion Generation ‣ 4. METHOD ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model") shows the results on KIT-ML.

As shown in [Table 1](https://arxiv.org/html/2505.21146v1#S4.T1 "Table 1 ‣ 4.4. Image Keyframe Controlled Motion Generation ‣ 4. METHOD ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), MDM (Tevet et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib51)), PriorMDM (Shafir et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib46)), and GMD (Karunratanakul et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib25)) control only the pelvis joint and are therefore limited to trajectory-level control. OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)), trained on all body joints, supports both trajectory- and pose-level control by conditioning on keyframes. CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) is the SOTA keyframe-guided motion generation model capable of handling both trajectory and keyframe pose conditions. To enable a comprehensive comparison, we first report the performance of previous methods under pelvis-only (trajectory) control. As the table shows, OmniControl suffers a significant performance drop when keyframe conditions are added, indicating that using a single mechanism to simultaneously control both trajectory and pose is suboptimal.

We then compare our method with OmniControl and CondMDI under pelvis+keyframes (trajectory + keyframe pose) control, and also evaluate our method when trained with and without random pose rotations. Since our model is designed to handle external inputs that may include arbitrary global rotations, the version trained with random rotations (Ours (w/ rotations)) performs better under such conditions (as demonstrated in [Table 5](https://arxiv.org/html/2505.21146v1#S5.T5 "Table 5 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model")). Therefore, in all subsequent experiments, Ours refers to Ours (w/ rotations). Specifically, our method achieves a Trajectory Error that is only about 5% of that of both OmniControl and CondMDI. In terms of Location Error, our approach incurs just 11% of the error reported by OmniControl and 4% of that reported by CondMDI. For the Average Error, our method reduces the error to 18% of OmniControl's and 15% of CondMDI's. A similar comparison is conducted on the KIT-ML dataset. However, since CondMDI is trained on HumanML3D in global rotation format and does not support KIT-ML, we compare our method only with the SOTA OmniControl under pelvis+keyframes conditions on KIT-ML, as shown in [Table 2](https://arxiv.org/html/2505.21146v1#S4.T2 "Table 2 ‣ 4.4. Image Keyframe Controlled Motion Generation ‣ 4. METHOD ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"). Results demonstrate that our method achieves the best performance across all metrics under the pelvis+keyframes setting on both datasets.
Table 3. Pose Dist evaluation on the HumanML3D test set.

| Method | Condition | Pose Dist ↓ |
| --- | --- | --- |
| Real | - | 0.000 |
| OmniControl | Pelvis+Keyframes | 0.0373 |
| Ours | Pelvis+Keyframes | 0.0210 |

Since OmniControl is trained using full-body joint positions and adopts a pose representation similar to ours, it is compatible with our proposed Pose Dist metric. We evaluate this metric on the HumanML3D dataset, as presented in [Table 3](https://arxiv.org/html/2505.21146v1#S5.T3 "Table 3 ‣ 5.2. Quantitative Evaluation ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"). The results demonstrate that our method achieves better keyframe pose similarity.
### 5.3. Qualitative Results

[Figure 3](https://arxiv.org/html/2505.21146v1#S5.F3 "Figure 3 ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model") qualitatively compares our method against the baselines OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)) and MDM (Tevet et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib51)). For each comparison, we synthesized four pose images. These images, along with a specified circular or S-shaped trajectory, were input to our IKMo framework. Its MLLM-based multi-agent system then generated a motion configuration, comprising a textual description, 3D keyframe poses with temporal alignment, and full trajectory coordinates, which subsequently drove our model and OmniControl. For the text-only MDM, we augmented its textual input with descriptions of the keyframes and trajectory to serve as proxies for explicit conditioning.

For a fair comparison, we extract identical keyframes for all methods. Our model aligns well with both the keyframe poses and the predefined trajectory, preserving keyframe motion style and spatial accuracy. In contrast, OmniControl, which jointly processes absolute joint positions for pose and trajectory, ignores relative pose relationships, leading to overlapping keyframes, poor pose matching, and trajectory deviation. MDM fails to respond to the textual trajectory and keyframe prompts, producing motions that ignore both. In the second row, although MDM's leftmost motion visually resembles keyframe (a), it actually aligns with (d), revealing inaccurate keyframe following. Under the S-shaped trajectory, both OmniControl and MDM fail to respect the path and show significant motion overlap. To provide a clearer and more consistent view for comparison, we applied translation and rotation to their results; the original unmodified motion videos are provided in the supplementary material.
![Image 4: Refer to caption](https://arxiv.org/html/2505.21146v1/x4.png)

Figure 4. Qualitative results using video/text inputs. Both methods are given the same textual prompt: "The person is performing a dance routine involving a sequence of movements. These include gestures with the arms raised, swinging from side to side, and leg kicks."

Moreover, for motion types derived from videos, IKMo effectively captures the motion style by leveraging keyframes extracted from the video. [Figure 4](https://arxiv.org/html/2505.21146v1#S5.F4 "Figure 4 ‣ 5.3. Qualitative Results ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model") presents a qualitative comparison. Specifically, we select motion clips and their corresponding textual descriptions from the MoVid dataset provided by (Chen et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib6)). Our method uses video-extracted keyframes along with the textual description to generate motion sequences, whereas MDM relies solely on the text input. The results demonstrate that our approach better reflects the motion style of the reference video, while MDM generates only a basic arm-swinging motion. Unlike pose estimation methods, our framework supports flexible keyframe sparsity, allowing users to adjust the similarity to the reference video and enabling more diverse motion generation.
### 5.4. Ablation Study

We conduct ablation studies on the HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) dataset to validate the effectiveness of the proposed modules in our motion diffusion model. We summarize several key findings below.

Table 4. Ablation studies on the HumanML3D test set.

| Method | Traj. err. (50 cm) ↓ | Loc. err. (50 cm) ↓ | Avg. err. ↓ | Pose Dist ↓ |
| --- | --- | --- | --- | --- |
| w/o Motion Optimization | 0.3006 | 0.1982 | 0.3491 | 0.0618 |
| w/o Motion ControlNet | 0.1383 | 0.0336 | 0.0754 | 0.1622 |
| Ours | 0.0246 | 0.0076 | 0.0250 | 0.0210 |
![Image 5: Refer to caption](https://arxiv.org/html/2505.21146v1/x5.png)

Figure 5. Ablation results. All input images are generated by Doubao.

Motion Optimization Significantly Enhances Trajectory Accuracy. As reported in [Table 4](https://arxiv.org/html/2505.21146v1#S5.T4 "Table 4 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), enabling the Motion Optimization module reduces Traj. err., Loc. err., and Avg. err. by more than 90% compared to the variant without it (w/o Motion Optimization), highlighting its critical role in trajectory control. As shown on the left side of [Figure 5](https://arxiv.org/html/2505.21146v1#S5.F5 "Figure 5 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), the model without Motion Optimization fails to follow the trajectory constraints during motion generation.

Both Motion Optimization and Motion ControlNet Modules Contribute to Keyframe Pose Accuracy. As shown in [Table 4](https://arxiv.org/html/2505.21146v1#S5.T4 "Table 4 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), removing either module degrades alignment with the keyframe poses. Specifically, removing the Motion Optimization module worsens the Pose Dist metric by 2.94×, while removing the Motion ControlNet leads to a 7.72× degradation. These results demonstrate that both components are essential for accurate keyframe pose reconstruction, with Motion ControlNet playing a particularly critical role in keyframe pose alignment. As shown on the right side of [Figure 5](https://arxiv.org/html/2505.21146v1#S5.F5 "Figure 5 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), the ablated variants generate keyframe poses that do not match the poses depicted in the input images.
Applying Small Random Rotations During Training Enhances Robustness to External Inputs with Rotational Variations. As shown in [Table 5](https://arxiv.org/html/2505.21146v1#S5.T5 "Table 5 ‣ 5.4. Ablation Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), when random rotations are applied during evaluation, Ours (w/ rotations) maintains high motion quality under rotational perturbations, while Ours (w/o rotations) suffers a significant drop, indicating a lack of generalization to pose rotations.

Table 5. Ablation studies on rotation. Eval Rotations indicates whether random perturbation is applied to poses during evaluation: w/ Rotations applies small random rotations, while w/o Rotations uses the original poses.

| Method | Eval Rotations | FID ↓ | R-precision (Top-3) ↑ | Diversity → |
| --- | --- | --- | --- | --- |
| Ours (w/ rotations) | w/o Rotations | 0.239 | 0.678 | 9.686 |
| Ours (w/o rotations) | w/o Rotations | 0.177 | 0.672 | 9.616 |
| Ours (w/ rotations) | w/ Rotations | 0.238 | 0.671 | 9.642 |
| Ours (w/o rotations) | w/ Rotations | 0.594 | 0.635 | 9.032 |
### 5.5. User Study

To validate that IKMo generates motions better aligned with user expectations than prior works, we conducted a user study with 10 participants (M=24.8, Std=3.05). The SOTA keyframe-controlled method CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) employs a unique pose representation and lacks an interface for external pose inputs, relying solely on randomly sampled keyframe poses from HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)). To control variables, we simulated a CondMDI-style baseline using our Motion Diffusion Model and compared it against IKMo. From two MoVid (Chen et al., [2024b](https://arxiv.org/html/2505.21146v1#bib.bib6)) videos, users extracted 1 or 10 frames as image keyframes for IKMo. For the CondMDI-style baseline, we used the same text prompts (from IKMo) and randomly sampled 1 or 10 keyframe poses from HumanML3D to control the motion. Each participant completed a questionnaire using a 5-point Likert scale to rate the similarity between the generated motion and the original video motion; higher scores indicate greater similarity. As shown in [Table 6](https://arxiv.org/html/2505.21146v1#S5.T6 "Table 6 ‣ 5.5. User Study ‣ 5. RESULTS ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"), IKMo with 10 images achieved the highest similarity, demonstrating that it effectively captures the motion style of the original video. IKMo with 1 image slightly outperformed the baseline with 1 keyframe pose. The baseline with 10 keyframe poses performed worst, likely because stylistic shifts caused by random keyframe sampling reduced alignment with the original video.

These results demonstrate that IKMo generates motions more consistent with user expectations. Its use of image keyframes enables more accurate style preservation and intent alignment compared to pose-sampling-based baselines.
252
+
253
+ Table 6. Results of the user study.One Image and Ten Images: IKMo with 1 or 10 images as keyframes. One Pose and Ten Poses: CondMDI-style with 1 or 10 randomly sampled keyframe poses from HumanML3D dataset. Mean represents the average score, and Std represents the standard deviation.
254
+
+ | Video | Value | One Image | Ten Images | One Pose | Ten Poses |
+ | --- | --- | --- | --- | --- | --- |
+ | video1 | Mean | 2.8 | 4.0 | 2.1 | 1.0 |
+ | video1 | Std | 0.63 | 0.82 | 0.74 | 0.00 |
+ | video2 | Mean | 2.3 | 3.7 | 1.9 | 1.0 |
+ | video2 | Std | 0.82 | 1.06 | 0.88 | 0.00 |
+
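The Mean and Std columns in Table 6 are ordinary Likert-scale statistics; a minimal sketch of the aggregation, using hypothetical responses (illustrative only, not the study's raw data):

```python
import statistics

# Hypothetical 5-point Likert responses from 10 participants
# (an assumption for illustration; not the data behind Table 6)
responses = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4]

mean = statistics.mean(responses)
std = statistics.pstdev(responses)  # population Std; the paper may use the sample form

print(f"Mean={mean:.2f}, Std={std:.2f}")
```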
+ 6. CONCLUSION
+ -------------
+
+ We present a conditional motion generation model that decouples trajectory and keyframe pose inputs for two-stage parallel processing. Based on this model, we introduce IKMo, an intuitive motion generation framework that leverages a multi-agent MLLM system to translate high-level image and text inputs into structured motion configurations. Our two-stage conditioning design enhances controllability by processing trajectory and pose cues in parallel at each stage. Jointly leveraging these signals, our method generates realistic, coherent, and semantically aligned motions. Experiments on HumanML3D and KIT-ML show consistent improvements over SOTA baselines. The user study demonstrates the effectiveness of the IKMo framework.
+
+ #### Limitations and Future Work
+
+ While our model effectively captures motion style, it does not always exactly match the target keyframe poses. The lack of finger-joint annotations also leads to less accurate hand motions. Additionally, the image-to-motion pipeline relies on intermediate pose extraction, which may introduce errors. Future work will explore datasets with detailed hand labels and end-to-end models that generate motion directly from images, improving pose fidelity and reducing intermediate noise.
+
+ References
+ ----------
+
+ * Cai et al. (2024) Zhongang Cai, Jianping Jiang, Zhongfei Qing, Xinying Guo, Mingyuan Zhang, Zhengyu Lin, Haiyi Mei, Chen Wei, Ruisi Wang, Wanqi Yin, et al. 2024. Digital life project: Autonomous 3d characters with social intelligence. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 582–592.
+ * Cervantes et al. (2022) Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, and Koichi Shinoda. 2022. Implicit neural representations for variable length human motion generation. In _European Conference on Computer Vision_. Springer, 356–372.
+ * Chan et al. (2019) Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. 2019. Everybody dance now. In _Proceedings of the IEEE/CVF international conference on computer vision_. 5933–5942.
+ * Chen et al. (2024a) Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. 2024a. Enabling synergistic full-body control in prompt-based co-speech motion generation. In _Proceedings of the 32nd ACM International Conference on Multimedia_. 6774–6783.
+ * Chen et al. (2024b) Ling-Hao Chen, Shunlin Lu, Ailing Zeng, Hao Zhang, Benyou Wang, Ruimao Zhang, and Lei Zhang. 2024b. Motionllm: Understanding human behaviors from human motions and videos. _arXiv preprint arXiv:2405.20340_ (2024).
+ * Chhatre et al. (2024) Kiran Chhatre, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J Black, Timo Bolkart, et al. 2024. Emotional speech-driven 3d body animation via disentangled latent diffusion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1942–1953.
+ * Chopin et al. (2024) Baptiste Chopin, Hao Tang, and Mohamed Daoudi. 2024. Bipartite graph diffusion model for human interaction generation. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_. 5333–5342.
+ * Cohan et al. (2024) Setareh Cohan, Guy Tevet, Daniele Reda, Xue Bin Peng, and Michiel van de Panne. 2024. Flexible motion in-betweening with diffusion models. In _ACM SIGGRAPH 2024 Conference Papers_. 1–9.
+ * Dai et al. (2024) Sisi Dai, Wenhao Li, Haowen Sun, Haibin Huang, Chongyang Ma, Hui Huang, Kai Xu, and Ruizhen Hu. 2024. Interfusion: Text-driven generation of 3d human-object interaction. In _European Conference on Computer Vision_. Springer, 18–35.
+ * Degardin et al. (2022) Bruno Degardin, Joao Neves, Vasco Lopes, Joao Brito, Ehsan Yaghoubi, and Hugo Proença. 2022. Generative adversarial graph convolutional networks for human action synthesis. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_. 1150–1159.
+ * Diller and Dai (2024) Christian Diller and Angela Dai. 2024. Cg-hoi: Contact-guided 3d human-object interaction generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 19888–19901.
+ * Dwivedi et al. (2024) Sai Kumar Dwivedi, Yu Sun, Priyanka Patel, Yao Feng, and Michael J Black. 2024. Tokenhmr: Advancing human mesh recovery with a tokenized pose representation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1323–1333.
+ * Gao et al. (2024) Jiawei Gao, Ziqin Wang, Zeqi Xiao, Jingbo Wang, Tai Wang, Jinkun Cao, Xiaolin Hu, Si Liu, Jifeng Dai, and Jiangmiao Pang. 2024. Coohoi: Learning cooperative human-object interaction with manipulated object dynamics. _Advances in Neural Information Processing Systems_ 37 (2024), 79741–79763.
+ * Guo et al. (2024) Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. 2024. Momask: Generative masked modeling of 3d human motions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1900–1910.
+ * Guo et al. (2022a) Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. 2022a. Generating diverse and natural 3d human motions from text. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 5152–5161.
+ * Guo et al. (2022b) Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. 2022b. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In _European Conference on Computer Vision_. Springer, 580–597.
+ * Guo et al. (2020) Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. 2020. Action2motion: Conditioned generation of 3d human motions. In _Proceedings of the 28th ACM International Conference on Multimedia_. 2021–2029.
+ * Hassan et al. (2021) Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, and Michael J Black. 2021. Populating 3D scenes by learning human-scene interaction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 14708–14718.
+ * Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. _Advances in neural information processing systems_ 33 (2020), 6840–6851.
+ * Hu (2024) Li Hu. 2024. Animate anyone: Consistent and controllable image-to-video synthesis for character animation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 8153–8163.
+ * Huang et al. (2023) Siyuan Huang, Zan Wang, Puhao Li, Baoxiong Jia, Tengyu Liu, Yixin Zhu, Wei Liang, and Song-Chun Zhu. 2023. Diffusion-based generation, optimization, and planning in 3d scenes. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 16750–16761.
+ * Jiang et al. (2023) Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. Motiongpt: Human motion as a foreign language. _Advances in Neural Information Processing Systems_ 36 (2023), 20067–20079.
+ * Jiang et al. (2024) Nan Jiang, Zimo He, Zi Wang, Hongjie Li, Yixin Chen, Siyuan Huang, and Yixin Zhu. 2024. Autonomous character-scene interaction synthesis from text instruction. In _SIGGRAPH Asia 2024 Conference Papers_. 1–11.
+ * Karunratanakul et al. (2023) Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, and Siyu Tang. 2023. Guided motion diffusion for controllable human motion synthesis. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 2151–2162.
+ * Kaufmann et al. (2020) Manuel Kaufmann, Emre Aksan, Jie Song, Fabrizio Pece, Remo Ziegler, and Otmar Hilliges. 2020. Convolutional autoencoders for human motion infilling. In _2020 International Conference on 3D Vision (3DV)_. IEEE, 918–927.
+ * Kim et al. (2023) Jihoon Kim, Jiseob Kim, and Sungjoon Choi. 2023. Flame: Free-form language-based motion synthesis & editing. In _Proceedings of the AAAI Conference on Artificial Intelligence_, Vol.37. 8255–8263.
+ * Li et al. (2022) Buyu Li, Yongchi Zhao, Shi Zhelun, and Lu Sheng. 2022. Danceformer: Music conditioned 3d dance generation with parametric motion transformer. In _Proceedings of the AAAI Conference on Artificial Intelligence_, Vol.36. 1272–1279.
+ * Liang et al. (2024) Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, and Lan Xu. 2024. Intergen: Diffusion-based multi-human motion generation under complex interactions. _International Journal of Computer Vision_ 132, 9 (2024), 3463–3483.
+ * Lim et al. (2023) Donggeun Lim, Cheongi Jeong, and Young Min Kim. 2023. Mammos: Mapping multiple human motion with scene understanding and natural interactions. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 4278–4287.
+ * Liu et al. (2024b) Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. 2024b. EMAGE: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1144–1154.
+ * Liu et al. (2024a) Xinpeng Liu, Haowen Hou, Yanchao Yang, Yong-Lu Li, and Cewu Lu. 2024a. Revisit human-scene interaction via space occupancy. In _European Conference on Computer Vision_. Springer, 1–19.
+ * Loper et al. (2015) Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. 2015. SMPL: a skinned multi-person linear model. _ACM Transactions on Graphics (TOG)_ 34, 6 (2015), 1–16.
+ * Loshchilov and Hutter (2017) Ilya Loshchilov and Frank Hutter. 2017. Decoupled Weight Decay Regularization. In _International Conference on Learning Representations_.
+ * Lu et al. (2022) Qiujing Lu, Yipeng Zhang, Mingjian Lu, and Vwani Roychowdhury. 2022. Action-conditioned on-demand motion generation. In _Proceedings of the 30th ACM International Conference on Multimedia_. 2249–2257.
+ * Ma et al. (2024b) Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. 2024b. Trailblazer: Trajectory control for diffusion-based video generation. In _SIGGRAPH Asia 2024 Conference Papers_. 1–11.
+ * Ma et al. (2024a) Yue Ma, Yingqing He, Xiaodong Cun, Xintao Wang, Siran Chen, Xiu Li, and Qifeng Chen. 2024a. Follow your pose: Pose-guided text-to-video generation using pose-free videos. In _Proceedings of the AAAI Conference on Artificial Intelligence_, Vol.38. 4117–4125.
+ * Mahmood et al. (2019) Naureen Mahmood, Nima Ghorbani, Nikolaus F Troje, Gerard Pons-Moll, and Michael J Black. 2019. AMASS: Archive of motion capture as surface shapes. In _Proceedings of the IEEE/CVF international conference on computer vision_. 5442–5451.
+ * Petrovich et al. (2021) Mathis Petrovich, Michael J Black, and Gül Varol. 2021. Action-conditioned 3D human motion synthesis with transformer VAE. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 10985–10995.
+ * Petrovich et al. (2022) Mathis Petrovich, Michael J Black, and Gül Varol. 2022. Temos: Generating diverse human motions from textual descriptions. In _European Conference on Computer Vision_. Springer, 480–497.
+ * Petrovich et al. (2023) Mathis Petrovich, Michael J Black, and Gül Varol. 2023. Tmr: Text-to-motion retrieval using contrastive 3d human motion synthesis. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 9488–9497.
+ * Plappert et al. (2016) Matthias Plappert, Christian Mandery, and Tamim Asfour. 2016. The kit motion-language dataset. _Big data_ 4, 4 (2016), 236–252.
+ * Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_ 1, 2 (2022), 3.
+ * Rempe et al. (2023) Davis Rempe, Zhengyi Luo, Xue Bin Peng, Ye Yuan, Kris Kitani, Karsten Kreis, Sanja Fidler, and Or Litany. 2023. Trace and pace: Controllable pedestrian animation via guided trajectory diffusion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 13756–13766.
+ * Saharia et al. (2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in neural information processing systems_ 35 (2022), 36479–36494.
+ * Shafir et al. (2024) Yoni Shafir, Guy Tevet, Roy Kapon, and Amit Haim Bermano. 2024. Human Motion Diffusion as a Generative Prior. In _ICLR_.
+ * Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In _International conference on machine learning_. pmlr, 2256–2265.
+ * Tanaka and Fujiwara (2023) Mikihiro Tanaka and Kent Fujiwara. 2023. Role-aware interaction generation from textual description. In _Proceedings of the IEEE/CVF international conference on computer vision_. 15999–16009.
+ * Tanveer et al. (2024) Maham Tanveer, Yang Zhou, Simon Niklaus, Ali Mahdavi Amiri, Hao Zhang, Krishna Kumar Singh, and Nanxuan Zhao. 2024. MotionBridge: Dynamic Video Inbetweening with Flexible Controls. _arXiv preprint arXiv:2412.13190_ (2024).
+ * Tevet et al. (2022) Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. 2022. Motionclip: Exposing human motion generation to clip space. In _European Conference on Computer Vision_. Springer, 358–374.
+ * Tevet et al. (2023) Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. 2023. Human Motion Diffusion Model. In _The Eleventh International Conference on Learning Representations_.
+ * Tseng et al. (2023) Jonathan Tseng, Rodrigo Castellon, and Karen Liu. 2023. Edge: Editable dance generation from music. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 448–458.
+ * Wang et al. (2024b) Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. 2024b. Boximator: Generating rich and controllable motions for video synthesis. _arXiv preprint arXiv:2402.01566_ (2024).
+ * Wang et al. (2025) Tao Wang, Zhihua Wu, Qiaozhi He, Jiaming Chu, Ling Qian, Yu Cheng, Junliang Xing, Jian Zhao, and Lei Jin. 2025. StickMotion: Generating 3D Human Motions by Drawing a Stickman. _arXiv preprint arXiv:2503.04829_ (2025).
+ * Wang et al. (2023) Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. 2023. Videocomposer: Compositional video synthesis with motion controllability. _Advances in Neural Information Processing Systems_ 36 (2023), 7594–7611.
+ * Wang et al. (2024a) Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. 2024a. Motionctrl: A unified and flexible motion controller for video generation. In _ACM SIGGRAPH 2024 Conference Papers_. 1–11.
+ * Wu et al. (2024) Zizhao Wu, Qin Wang, Xinyang Zheng, Jianglei Ye, Ping Yang, Yunhai Wang, and Yigang Wang. 2024. Doodle Your Motion: Sketch-Guided Human Motion Generation. _IEEE Transactions on Visualization and Computer Graphics_ (2024).
+ * Xiao et al. (2024) Zeqi Xiao, Tai Wang, Jingbo Wang, Jinkun Cao, Wenwei Zhang, Bo Dai, Dahua Lin, and Jiangmiao Pang. 2024. Unified Human-Scene Interaction via Prompted Chain-of-Contacts. In _ICLR_.
+ * Xie et al. (2023) Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, and Huaizu Jiang. 2023. Omnicontrol: Control any joint at any time for human motion generation. _arXiv preprint arXiv:2310.08580_ (2023).
+ * Xing et al. (2024) Jinbo Xing, Hanyuan Liu, Menghan Xia, Yong Zhang, Xintao Wang, Ying Shan, and Tien-Tsin Wong. 2024. Tooncrafter: Generative cartoon interpolation. _ACM Transactions on Graphics (TOG)_ 43, 6 (2024), 1–11.
+ * Xu et al. (2023) Sirui Xu, Zhengyuan Li, Yu-Xiong Wang, and Liang-Yan Gui. 2023. Interdiff: Generating 3d human-object interactions with physics-informed diffusion. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 14928–14940.
+ * Xu et al. (2024) Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Jia-Wei Liu, Chenxu Zhang, Jiashi Feng, and Mike Zheng Shou. 2024. Magicanimate: Temporally consistent human image animation using diffusion model. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 1481–1490.
+ * Yi et al. (2023) Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. 2023. Generating holistic 3d human motion from speech. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 469–480.
+ * Yin et al. (2023) Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. 2023. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. _arXiv preprint arXiv:2308.08089_ (2023).
+ * Zhang et al. (2023b) Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023b. Adding conditional control to text-to-image diffusion models. In _Proceedings of the IEEE/CVF international conference on computer vision_. 3836–3847.
+ * Zhang et al. (2024a) Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. 2024a. Motiondiffuse: Text-driven human motion generation with diffusion model. _IEEE transactions on pattern analysis and machine intelligence_ 46, 6 (2024), 4115–4128.
+ * Zhang et al. (2023a) Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. 2023a. Remodiffuse: Retrieval-augmented motion diffusion model. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 364–373.
+ * Zhang et al. (2022) Xiaohan Zhang, Bharat Lal Bhatnagar, Sebastian Starke, Vladimir Guzov, and Gerard Pons-Moll. 2022. Couch: Towards controllable human-chair interactions. In _European Conference on Computer Vision_. Springer, 518–535.
+ * Zhang et al. (2024b) Yuang Zhang, Jiaxi Gu, Li-Wen Wang, Han Wang, Junqi Cheng, Yuefeng Zhu, and Fangyuan Zou. 2024b. Mimicmotion: High-quality human motion video generation with confidence-aware pose guidance. _arXiv preprint arXiv:2406.19680_ (2024).
+ * Zhao et al. (2023) Kaifeng Zhao, Yan Zhang, Shaofei Wang, Thabo Beeler, and Siyu Tang. 2023. Synthesizing diverse human motions in 3d indoor scenes. In _Proceedings of the IEEE/CVF international conference on computer vision_. 14738–14749.
+
+ Appendix A Multi-Agent System for Structured Motion Specification
+ -----------------------------------------------------------------
+
+ To facilitate controllable motion generation from user-provided visual and textual inputs, we design a multi-agent system composed of three specialized agents: (1) an Interaction Agent, (2) a Pose Extraction and Motion Design Agent, and (3) a Trajectory Planning Agent.
+
+ ### A.1. Interaction Agent.
+
+ This agent engages in multi-turn dialogue with the user to elicit structured intent, including high-level motion semantics and trajectory preferences. Given a set of input images and optional textual descriptions, it interprets the user’s goals and formulates a coarse motion plan.
+
+ ### A.2. Motion Design Agent.
+
+ This agent is responsible for three core tasks: (i) extracting 3D keyframe poses from user-provided images, (ii) generating fine-grained action descriptions for each pose, and (iii) synthesizing a unified motion prompt by integrating image-level descriptions with user-specified intent, while determining the appropriate temporal placement of each pose within the keyframe sequence.
+
+ To recover 3D human poses, we employ the pre-trained state-of-the-art model TokenHMR (Dwivedi et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib13)), which regresses SMPL (Loper et al., [2015](https://arxiv.org/html/2505.21146v1#bib.bib33)) parameters from monocular images. The SMPL representation is chosen for its compatibility with the skeletal structure adopted in the HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) dataset. From the SMPL parameters, we extract 3D joint coordinates, initially denoted as $\mathbf{Joints}_{0}\in\mathbb{R}^{64\times 3}$. Although TokenHMR provides joint estimates structurally aligned with HumanML3D, differences remain in joint count and coordinate-system conventions. To ensure compatibility, we map the joints to a canonical 22-joint representation and perform axis transformations as follows:
+
+ (11) $\mathbf{Joints}=\{(x_{i},\,-y_{i},\,-z_{i})\mid i=1,\ldots,22\}.$
+
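The remapping and axis flip of Eq. (11) can be sketched as follows; the assumption here is that the first 22 of the 64 estimated joints follow the canonical joint order, which the paper does not state explicitly:

```python
import numpy as np

def to_humanml3d_joints(joints0):
    """Map raw 3D joints (64 x 3) to the canonical 22-joint layout
    and negate the y and z axes, as in Eq. (11)."""
    joints = joints0[:22].copy()   # keep the first 22 joints (assumed order)
    joints[:, 1] *= -1.0           # y_i -> -y_i
    joints[:, 2] *= -1.0           # z_i -> -z_i
    return joints

raw = np.random.randn(64, 3)
mapped = to_humanml3d_joints(raw)
assert mapped.shape == (22, 3)
```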
+ Despite coordinate normalization, poses extracted from static images may exhibit camera-induced global rotations that are misaligned with the intended motion direction. This challenge is addressed in [Section 4.3](https://arxiv.org/html/2505.21146v1#S4.SS3 "4.3. Synergistic Guidance via Trajectory and Keyframe Poses ‣ 4. METHOD ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model").
+
+ ### A.3. Trajectory Planning Agent.
+
+ This agent is responsible for procedural generation of motion trajectories based on user intent. It synthesizes 3D trajectories using parameterized mathematical curves (e.g., linear, circular, arc) and supports multi-segment composition to accommodate complex motion patterns.
+
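Procedural synthesis from parameterized curves with multi-segment composition can be sketched as below; the function names and parameterization are our illustration, not the agent's actual implementation:

```python
import numpy as np

def line_segment(p0, p1, n):
    """Linear ground-plane trajectory from p0 to p1 (x, z coordinates)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def arc_segment(center, radius, a0, a1, n):
    """Circular arc swept from angle a0 to a1 (radians) around center."""
    a = np.linspace(a0, a1, n)
    return np.stack([center[0] + radius * np.cos(a),
                     center[1] + radius * np.sin(a)], axis=1)

def compose(*segments):
    """Chain segments into one trajectory, dropping duplicated junctions."""
    parts = [segments[0]] + [s[1:] for s in segments[1:]]
    return np.concatenate(parts, axis=0)

# A straight walk followed by a half circle that starts where the line ends
traj = compose(line_segment((0, 0), (2, 0), 50),
               arc_segment((2, 1), 1.0, -np.pi / 2, np.pi / 2, 50))
```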
+ ### A.4. System Output.
+
+ Upon completion, the multi-agent system outputs a structured motion specification consisting of (i) a natural language motion prompt, (ii) temporally aligned 3D keyframe poses, and (iii) a full trajectory configuration. These control signals are subsequently fed into our motion generation model to synthesize temporally coherent and semantically aligned motion sequences.
+
+ Table 7. Quantitative results on the HumanML3D test set.
+
+ | Method | Condition | FID↓ | R-precision (Top-3)↑ | Diversity→ | Foot skating ratio↓ | Traj. err. (50 cm)↓ | Loc. err. (50 cm)↓ | Avg. err.↓ |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | Real | - | 0.002 | 0.797 | 9.503 | 0.000 | 0.000 | 0.000 | 0.000 |
+ | OmniControl (original paper) | Pelvis | 0.322 | 0.691 | 9.545 | 0.0571 | 0.0404 | 0.0085 | 0.0367 |
+ | OmniControl (re-run on our RTX 4090) | Pelvis | 0.355 | 0.676 | 9.754 | 0.0522 | 0.0437 | 0.0102 | 0.0387 |
+
+ Appendix B More Implementation Details
+ --------------------------------------
+
+ ### B.1. Datasets
+
+ HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) consists of 14,646 human motion sequences paired with natural language descriptions. These sequences are sourced from the AMASS (Mahmood et al., [2019](https://arxiv.org/html/2505.21146v1#bib.bib38)) and HumanAct12 (Guo et al., [2020](https://arxiv.org/html/2505.21146v1#bib.bib18)) datasets. The motions vary in length and are zero-padded or truncated to 196 frames at 20 FPS (average duration: 7.1 s).
+
+ KIT-ML (Plappert et al., [2016](https://arxiv.org/html/2505.21146v1#bib.bib42)) includes 3,911 diverse human motion sequences. We follow the same preprocessing as for HumanML3D for fair comparison.
+
+ ### B.2. Evaluation Metrics
+
+ We follow the protocol proposed in (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) and use the following metrics:
+
+ * Fréchet Inception Distance (FID): evaluates the realism of the generated motion distribution.
+ * R-Precision: assesses the relevance of generated motion to the input text prompt using retrieval accuracy.
+ * Diversity: measures the average distance between motions to reflect generation variability.
+
+ To assess control accuracy, following (Karunratanakul et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib25)), we compute:
+
+ * Foot Skating Ratio: measures the proportion of frames with visible foot slippage (defined as lateral movement >2.5 cm while the foot height is <5 cm).
+ * Trajectory Error: fraction of sequences with any keyframe location error exceeding a predefined threshold.
+ * Location Error: fraction of individual keyframes that exceed the spatial error threshold.
+ * Average Error: mean Euclidean distance between generated joint positions and the reference positions at keyframe timestamps.
+
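Under these definitions, the metrics can be sketched for a single sequence as follows. The 50 cm threshold and the 2.5 cm/5 cm foot thresholds come from the text; the array shapes, argument names, and the per-frame sliding test are our assumptions:

```python
import numpy as np

def control_metrics(gen, ref, keyframes, joint=0, thresh=0.5):
    """gen, ref: (frames, joints, 3) motions; keyframes: controlled frame indices.
    Returns trajectory error, location error, and average error (metres)."""
    err = np.linalg.norm(gen[keyframes, joint] - ref[keyframes, joint], axis=-1)
    traj_err = float(np.any(err > thresh))   # sequence fails if any keyframe is off
    loc_err = float(np.mean(err > thresh))   # fraction of offending keyframes
    avg_err = float(err.mean())              # mean Euclidean distance
    return traj_err, loc_err, avg_err

def foot_skating_ratio(foot_pos, h_thresh=0.05, s_thresh=0.025):
    """foot_pos: (frames, 3) positions of one foot joint (y is up).
    A frame counts as skating if the foot is below 5 cm yet slides >2.5 cm."""
    slide = np.linalg.norm(np.diff(foot_pos[:, [0, 2]], axis=0), axis=-1)
    grounded = foot_pos[1:, 1] < h_thresh
    return float(np.mean(grounded & (slide > s_thresh)))
```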
+ ### B.3. Pose Dist (New Metric)
+
+ We propose Pose Dist to precisely evaluate the similarity between generated motions and the target keyframe poses. For each sample in the validation set, we first identify the frames where pose control signals are applied. At these frames, both the generated and reference poses are transformed into a pelvis-centered coordinate system by subtracting the root joint (pelvis) position from all joint coordinates. We then compute the average Euclidean distance between the corresponding controlled joints in the generated and reference poses. The final Pose Dist score is obtained by averaging these distances across all controlled frames and samples, providing a reliable measure of keyframe similarity.
+
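A minimal sketch of Pose Dist under this description (shapes and argument names are assumptions; joint 0 is taken as the pelvis):

```python
import numpy as np

def pose_dist(gen, ref, ctrl_frames, ctrl_joints):
    """gen, ref: (frames, joints, 3). At each controlled frame, re-centre both
    poses on the pelvis (joint 0), average the per-joint Euclidean distance
    over the controlled joints, then average over all controlled frames."""
    per_frame = []
    for f in ctrl_frames:
        g = gen[f] - gen[f, 0]            # pelvis-centred generated pose
        r = ref[f] - ref[f, 0]            # pelvis-centred reference pose
        d = np.linalg.norm(g[ctrl_joints] - r[ctrl_joints], axis=-1)
        per_frame.append(d.mean())
    return float(np.mean(per_frame))
```

The pelvis-centering makes the score invariant to global translation, so it measures body-pose similarity rather than trajectory accuracy.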
+ ### B.4. Architecture and Training Details
+
+ We adopt GPT-4o as the foundation vision-language model (VLM) for our multi-agent system, which is responsible for generating motion configurations, including motion prompts, keyframes, and trajectories. The system consists of an interaction agent, a pose-extraction and motion-design agent, and a trajectory-planning agent. For 3D pose extraction from images, we utilize the state-of-the-art pre-trained model TokenHMR (Dwivedi et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib13)). For visualization purposes, we open-source a Blender add-on that enables convenient rendering of the generated human motions within Blender.
+
+ For our baseline diffusion model, we adopt the motion diffusion framework from OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)). During training and inference, the diffusion process is configured with $T=1000$ noise steps. The diffusion models are trained on a single NVIDIA RTX 4090 GPU with a batch size of $b=64$. We use the AdamW optimizer (Loshchilov and Hutter, [2017](https://arxiv.org/html/2505.21146v1#bib.bib34)) with a learning rate of $1\times 10^{-5}$.
+
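For reference, the forward process of a DDPM with $T=1000$ steps can be sketched as below; the linear beta-schedule endpoints are a common choice, not values reported by the paper:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention per step

def q_sample(x0, t, noise):
    """Noise a clean motion x0 to diffusion step t (DDPM forward process)."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
```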
+ ### B.5. Evaluation Details
+
+ Following OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)), we train all models to generate motion sequences of 196 frames. For both keyframe poses and trajectories, we adopt five levels of control sparsity: 1, 2, 5, 49 (25% density), and 196 (100% density) controlled frames. Keyframe timestamps are randomly sampled. All reported metrics are averaged over all sparsity levels.
+
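Drawing keyframe timestamps at each sparsity level can be sketched as follows; the sampling routine is our illustration of "randomly sampled":

```python
import numpy as np

SEQ_LEN = 196
SPARSITY_LEVELS = (1, 2, 5, 49, 196)  # 49 = 25% density, 196 = 100% density

rng = np.random.default_rng(0)

def sample_keyframes(k, seq_len=SEQ_LEN):
    """Pick k distinct, sorted keyframe indices in [0, seq_len)."""
    return np.sort(rng.choice(seq_len, size=k, replace=False))

keyframe_sets = {k: sample_keyframes(k) for k in SPARSITY_LEVELS}
```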
416
+ Appendix C More Result Details
+ ------------------------------
+
+ ### C.1. More Quantitative Evaluation Details
+
+ In the quantitative comparison on the HumanML3D (Guo et al., [2022a](https://arxiv.org/html/2505.21146v1#bib.bib16)) and KIT-ML (Plappert et al., [2016](https://arxiv.org/html/2505.21146v1#bib.bib42)) datasets, for the Pelvis condition, some prior methods were evaluated under experimental settings different from ours, and the original OmniControl (Xie et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib59)) paper does not detail its evaluation procedure. Moreover, the Pelvis condition differs from our proposed Pelvis + Keyframe setting. Therefore, the reported results under the Pelvis condition are provided primarily for reference, rather than for direct comparison with our method.
+
+ Since our experimental setup follows that of OmniControl, and our other baseline CondMDI (Cohan et al., [2024](https://arxiv.org/html/2505.21146v1#bib.bib9)) also uses this part of the data, we adopt the Pelvis condition results as reported in the original OmniControl paper. Additionally, because we have access to the implementation details of OmniControl, we re-evaluated it in our experimental environment, as shown in [Table 7](https://arxiv.org/html/2505.21146v1#A1.T7 "Table 7 ‣ A.4. System Output. ‣ Appendix A Multi-Agent System for Structured Motion Specification ‣ IKMo: Image-Keyframed Motion Generation with Trajectory-Pose Conditioned Motion Diffusion Model"). In the main paper, we report the better-performing result (i.e., the one from the original OmniControl paper).
+
+ For the Pelvis + Keyframe condition, all methods were evaluated under our unified experimental setting to ensure fair comparison.
+
+ ### C.2. More Qualitative Evaluation Details
+
+ In MDM (Tevet et al., [2023](https://arxiv.org/html/2505.21146v1#bib.bib51)), text prompts are used as substitutes for keyframe and trajectory inputs. For the circular trajectory, the prompt is "Animate a character in a ready stance at frame 20, with a leg extended at frame 80, in a floor stretch at frame 160, kicking forward at frame 180, and then the character walks out in a circular curve."
+
+ For the S-curve, the prompt is "Fighting stance at frame 10, a low lunge at frame 50, a side kick at frame 100, a downward stretch at frame 190, and then the character walks out in an S-shaped curve."