Improve model card with metadata and links
This PR enhances the model card by:
- Adding the `pipeline_tag: text-to-video` to improve discoverability on the Hugging Face model hub.
- Specifying the `library_name: transformers` given the model's configuration files.
- Including links to the paper, project website, and GitHub repository.
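The `pipeline_tag` and `library_name` fields live in the model card's YAML front matter (the block between the `---` markers at the top of README.md), which is what the Hub indexes for search and filtering. As a minimal sketch (the README string below mirrors the metadata this PR adds; the parsing helper is illustrative, not a Hub API):

```python
# Sketch: extracting the YAML front-matter metadata that this PR adds
# to README.md. Uses a naive key: value parser to stay stdlib-only;
# real tooling would use a YAML library.
readme = """---
license: mit
pipeline_tag: text-to-video
library_name: transformers
---

# VideoPhy: Evaluating Physical Commonsense in Video Generation
"""

meta = {}
in_front_matter = False
for line in readme.splitlines():
    if line.strip() == "---":
        if in_front_matter:
            break  # closing marker: front matter ends here
        in_front_matter = True
        continue
    if in_front_matter and ":" in line:
        key, value = line.split(":", 1)
        meta[key.strip()] = value.strip()

print(meta["pipeline_tag"])  # text-to-video
print(meta["library_name"])  # transformers
```

With `pipeline_tag` set, the model appears under the Hub's text-to-video task filter, which is the discoverability improvement described above.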
README.md CHANGED

```diff
@@ -1,5 +1,13 @@
 ---
 license: mit
+pipeline_tag: text-to-video
+library_name: transformers
 ---
 
-
+# VideoPhy: Evaluating Physical Commonsense in Video Generation
+
+This text-to-video model is part of the VideoPhy project, which benchmarks physical commonsense in video generation. It generates videos from text prompts, aiming to evaluate how well generated videos adhere to real-world physics.
+
+[Project Website](https://videophy2.github.io/) | [Paper](https://arxiv.org/abs/2406.03520) | [GitHub](https://github.com/Hritikbansal/videophy)
+
+For detailed use-case instructions, please refer to the project's GitHub repository.
```