Update README.md
README.md CHANGED

@@ -16,26 +16,18 @@ license_link: LICENSE
 <br>
 
 [](https://howe125.github.io/AnimeGamer.github.io/)
-[](https://arxiv.org/
+[](https://arxiv.org/pdf/2504.01014)
 [](https://github.com/TencentARC/SEED-Story)
 
 
-
-long stories consists of rich and coherent narrative texts, along with images that are consistent in characters and
-style. We also release the StoryStream Dataset for build this model.
+### Introduction
 
-
-We release the pretrained Tokenizer, the pretrained De-Tokenizer, the pre-trained foundation model **SEED-X-pretrained**,
-the StoryStream instruction-tuned MLLM **SEED-Story-George**, and the StoryStream tuned De-Tokenizer in **Detokenizer-George**
+**Experience the endless adventure of infinite anime life with AnimeGamer!** 🤩
 
-
-You also need to download [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [Qwen-VL-Chat](https://huggingface.co/Qwen/Qwen-VL-Chat), and save them under the folder `./pretrained`. Please use the following script to extract the weights of visual encoder in Qwen-VL-Chat.
-```bash
-python3 src/tools/reload_qwen_vit.py
-```
+AnimeGamer is a pioneering model designed for infinite anime life simulation. It leverages Multimodal Large Language Models (MLLMs) to generate dynamic animation shots that depict character movements and updates to character states. By incorporating historical visual context, AnimeGamer ensures contextual consistency and engaging gameplay. AnimeGamer uses novel action-aware multimodal representations and a video diffusion model to produce high-quality video clips, creating an immersive and ever-evolving gaming experience.
 
-
+### Citation
 If you find the work helpful, please consider citing:
 ```bash
 @article{yang2024seedstory,
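
The pipeline described in the new Introduction — an MLLM predicting the next animation shot and character-state update from the instruction plus historical context, then a video diffusion model decoding it into a clip — can be sketched as the loop below. Every class and function name here is a hypothetical placeholder for illustration, not the actual AnimeGamer API.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical stand-ins for AnimeGamer's components; the real project
# builds these from an MLLM and a video diffusion model.
@dataclass
class AnimationShot:
    action_repr: str           # action-aware multimodal representation (placeholder)
    character_state: Dict[str, int]  # updated character state (placeholder)

def mllm_predict(instruction: str, history: List[AnimationShot]) -> AnimationShot:
    """Stub: predict the next shot's representation, conditioned on the
    player instruction and the historical context for consistency."""
    stamina = 100 - 10 * len(history)  # toy state update for illustration
    return AnimationShot(action_repr=f"<act:{instruction}>",
                         character_state={"stamina": stamina})

def diffusion_decode(shot: AnimationShot) -> str:
    """Stub: a video diffusion model would decode the representation
    into an actual video clip; here we just return a label."""
    return f"clip({shot.action_repr})"

def play(instructions: List[str]) -> List[str]:
    history: List[AnimationShot] = []
    clips: List[str] = []
    for instr in instructions:
        shot = mllm_predict(instr, history)   # condition on full history
        clips.append(diffusion_decode(shot))  # render the animation shot
        history.append(shot)                  # keep context for the next turn
    return clips

print(play(["run", "jump"]))  # → ['clip(<act:run>)', 'clip(<act:jump>)']
```

The point of the sketch is the feedback loop: each generated shot is appended to the history that conditions the next prediction, which is how the description says contextual consistency is maintained.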