Xiling6666 committed bb7e9d3 (verified; parent: e3df1b9): Create README.md

Files changed (1): README.md (+45, -0)
---
language:
- en
pipeline_tag: text-to-video
tags:
- video-generation
- world-model
- pytorch
- dit
library_name: pytorch
---

# HyDRA: Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models

This is the official Hugging Face model repository for **HyDRA** (Hybrid Memory for Dynamic Video World Models).

πŸ”— **GitHub Repository:** [H-EmbodVis/HyDRA](https://github.com/H-EmbodVis/HyDRA)
πŸ“„ **Project Page:** [Hybrid-Memory-in-Video-World-Models](https://kj-chen666.github.io/Hybrid-Memory-in-Video-World-Models/)

## πŸ” Overview

While recent video world models excel at simulating static environments, they share a critical blind spot: the physical world is dynamic. When moving subjects exit the camera's field of view and later re-emerge, current models often lose track of them.

To bridge this gap, we introduce **Hybrid Memory**, a paradigm that requires models to act simultaneously as precise archivists for static backgrounds and vigilant trackers for dynamic subjects. **HyDRA** is a specialized memory architecture that compresses context into memory tokens and retrieves them with a spatiotemporal relevance-driven mechanism.

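The relevance-driven retrieval idea can be illustrated with a minimal sketch. Note this is a toy illustration under assumed shapes and a plain cosine-similarity score, not the released HyDRA implementation, which also weighs spatiotemporal cues:

```python
import numpy as np

def retrieve_memory(query, memory_tokens, top_k=2):
    """Toy relevance-driven retrieval: score each stored memory token
    against the query by cosine similarity and return the top-k tokens.
    (Illustrative only; HyDRA's actual retrieval also incorporates
    spatiotemporal relevance, e.g. camera pose and temporal distance.)"""
    q = query / np.linalg.norm(query)
    m = memory_tokens / np.linalg.norm(memory_tokens, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per stored token
    idx = np.argsort(scores)[::-1][:top_k]  # indices of the top-k scores
    return memory_tokens[idx], scores[idx]

# Example: 4 stored memory tokens; retrieve the 2 most relevant to a query
# that is a lightly perturbed copy of token 2.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
query = memory[2] + 0.01 * rng.normal(size=8)
tokens, scores = retrieve_memory(query, memory, top_k=2)
```

Here the first retrieved token is the stored token most similar to the query, which is how a memory bank can re-surface a subject that left the field of view.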
## 🎯 Task & Capabilities

- **Task:** Text-to-Video Generation / Video World Modeling
- **Input:** Text prompts, camera poses, and initial video latents.
- **Output:** High-fidelity video sequences that maintain both the identity and motion continuity of dynamic subjects, even across out-of-view intervals.

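The input interface above can be sketched as a simple container. The field names, dtypes, and shapes below are illustrative assumptions for exposition, not the released API; consult the GitHub repository for the actual interface:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WorldModelInput:
    """Hypothetical container mirroring the inputs listed above."""
    prompt: str                # text prompt describing the scene
    camera_poses: np.ndarray   # assumed (T, 4, 4) camera extrinsics per frame
    init_latents: np.ndarray   # assumed (T0, C, H, W) initial video latents

    def __post_init__(self):
        # Sanity-check the assumed pose shape: one 4x4 matrix per frame.
        assert self.camera_poses.ndim == 3
        assert self.camera_poses.shape[1:] == (4, 4)

# Example instance: 16 identity poses and a 4-frame latent seed.
example = WorldModelInput(
    prompt="a dog runs out of frame and later returns",
    camera_poses=np.tile(np.eye(4), (16, 1, 1)),
    init_latents=np.zeros((4, 16, 32, 32)),
)
```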
## πŸš€ Usage

To use these weights, please refer to our GitHub repository: [H-EmbodVis/HyDRA](https://github.com/H-EmbodVis/HyDRA)

## πŸ“– Citation

If you find our work useful, please consider citing:

```bibtex
@article{chen2026out,
  title   = {Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models},
  author  = {Chen, Kaijin and Liang, Dingkang and Zhou, Xin and Ding, Yikang and Liu, Xiaoqiang and Wan, Pengfei and Bai, Xiang},
  journal = {arXiv preprint arXiv:2603.25716},
  year    = {2026}
}
```