---
language:
- en
pipeline_tag: text-to-video
tags:
- video-generation
- world-model
- pytorch
- dit
library_name: pytorch
---

# HyDRA: Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models

This is the official Hugging Face model repository for **HyDRA** (Hybrid Memory for Dynamic Video World Models). 

πŸ”— **GitHub Repository:** [H-EmbodVis/HyDRA](https://github.com/H-EmbodVis/HyDRA)
πŸ“„ **Project Page:** [Hybrid-Memory-in-Video-World-Models](https://kj-chen666.github.io/Hybrid-Memory-in-Video-World-Models/)

## πŸ” Overview

While recent video world models excel at simulating static environments, they share a critical blind spot: the physical world is dynamic. When moving subjects exit the camera's field of view and later re-emerge, current models often lose track of them. 

To bridge this gap, we introduce **Hybrid Memory**, a novel paradigm that requires models to simultaneously act as precise archivists for static backgrounds and vigilant trackers for dynamic subjects. **HyDRA** is a specialized memory architecture that compresses contexts into memory tokens and utilizes a spatiotemporal relevance-driven retrieval mechanism.
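
To make the retrieval idea concrete, below is a minimal, purely illustrative sketch of relevance-driven selection over compressed memory tokens. It is **not** the actual HyDRA implementation; the function name, tensor shapes, cosine-similarity scoring, and `top_k` parameter are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def retrieve_memory(query_tokens, memory_tokens, top_k=16):
    """Illustrative relevance-driven retrieval over a compressed memory bank.

    query_tokens:  (B, Nq, D) tokens from the current generation window
    memory_tokens: (B, Nm, D) compressed spatiotemporal memory tokens
    Returns the top_k most relevant memory tokens per batch element.
    """
    # Cosine-similarity relevance between each query token and each memory token
    q = F.normalize(query_tokens, dim=-1)
    m = F.normalize(memory_tokens, dim=-1)
    relevance = torch.einsum("bqd,bmd->bqm", q, m)   # (B, Nq, Nm)

    # Aggregate relevance over the query dimension, then keep the top-k memories
    scores = relevance.mean(dim=1)                   # (B, Nm)
    top_idx = scores.topk(top_k, dim=-1).indices     # (B, top_k)
    return torch.gather(
        memory_tokens, 1,
        top_idx.unsqueeze(-1).expand(-1, -1, memory_tokens.size(-1)),
    )
```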

## 🎯 Task & Capabilities
- **Task:** Text-to-Video Generation / Video World Modeling
- **Input:** Text prompts, camera poses, and initial video latents.
- **Output:** High-fidelity video sequences maintaining both identity and motion continuity of dynamic subjects, even during out-of-view intervals.

## πŸš€ Usage

To use these weights, please refer to our GitHub repository: [H-EmbodVis/HyDRA](https://github.com/H-EmbodVis/HyDRA)
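
As a minimal sketch, the checkpoint files can be fetched with `huggingface_hub`; the repo id below is an assumption, so substitute the id shown at the top of this page. Loading and inference follow the scripts in the GitHub repository.

```python
from huggingface_hub import snapshot_download

# Download the checkpoint files locally.
# NOTE: the repo_id is a placeholder assumption; use this model's actual Hub id.
local_dir = snapshot_download(repo_id="H-EmbodVis/HyDRA")
print(f"Weights downloaded to: {local_dir}")
```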


## πŸ“– Citation
If you find our work useful, please consider citing:

```bibtex
@article{chen2026out,
  title   = {Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models},
  author  = {Chen, Kaijin and Liang, Dingkang and Zhou, Xin and Ding, Yikang and Liu, Xiaoqiang and Wan, Pengfei and Bai, Xiang},
  journal = {arXiv preprint arXiv:2603.25716},
  year    = {2026}
}
```