Instructions to use videophysics/videocon_physics with libraries and notebooks.

- Libraries
  - Transformers

How to use videophysics/videocon_physics with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("videophysics/videocon_physics", dtype="auto")
```

- Notebooks
  - Google Colab
  - Kaggle
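The generated snippet above stops at loading the weights. As a minimal follow-up sketch (not part of the card), you would typically place the model on the best available device and switch it to inference mode before use:

```python
# A minimal sketch (not from the model card): move the loaded model to the
# best available device and disable training-only behavior for inference.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("videophysics/videocon_physics", dtype="auto")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)  # move weights to GPU when one is available
model.eval()      # disable dropout and other training-only behavior
```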
Improve model card with metadata and links (#2), opened by nielsr (HF Staff)
README.md CHANGED

```diff
@@ -1,5 +1,13 @@
 ---
 license: mit
+pipeline_tag: text-to-video
+library_name: transformers
 ---
 
-
+# VideoPhy: Evaluating Physical Commonsense in Video Generation
+
+This text-to-video model is part of the VideoPhy project, which benchmarks physical commonsense in video generation. It generates videos from text prompts, aiming to evaluate how well generated videos adhere to real-world physics.
+
+[Project Website](https://videophy2.github.io/) | [Paper](https://arxiv.org/abs/2406.03520) | [GitHub](https://github.com/Hritikbansal/videophy)
+
+For detailed use-case instructions, please refer to the project's GitHub repository.
```
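If you plan to run the scripts from that GitHub repository against a local copy of the weights, a minimal sketch using `huggingface_hub` may help; this pre-download step is an assumption on my part, not an instruction from the card:

```python
# Minimal sketch: cache the checkpoint locally so downstream scripts can
# point at a plain directory of model files.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="videophysics/videocon_physics")
print(local_dir)  # path to the downloaded snapshot
```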