text2video
README.md CHANGED

```diff
@@ -4,7 +4,7 @@
 colorFrom: indigo
 colorTo: blue
 sdk: gradio
-app_file:
+app_file: app.py
 pinned: false
 license: mit
 short_description: Neural video studio using latent diffusion & cross attention
@@ -23,7 +23,7 @@ # Zero-Shot Video Generation
 
 A zero-shot neural synthesis studio leveraging latent diffusion models and cross-frame attention to synthesize temporally consistent video sequences directly from unconstrained textual prompts.
 
-**[Source Code](Source
+**[Source Code](Source Code/)** · **[Project Report](https://github.com/Amey-Thakur/MACHINE--LEARNING/blob/main/ML%20Project/Zero-Shot%20Video%20Generation%20Project%20Report.pdf)** · **[Video Demo](https://youtu.be/za9hId6UPoY)** · **[Live Demo](https://huggingface.co/spaces/ameythakur/Zero-Shot-Video-Generation)**
 
 <br>
 
@@ -240,7 +240,7 @@ ## Usage Guidelines
 This repository is openly shared to support learning and knowledge exchange across the academic community.
 
 **For Students**
-Use this project as reference material for understanding **Neural Video Synthesis**, **Diffusion Models**, and **temporal latent interpolation**. The
+Use this project as reference material for understanding **Neural Video Synthesis**, **Diffusion Models**, and **temporal latent interpolation**. The Source Code is explicitly annotated to facilitate self-paced learning and exploration of **Python-based generative deep learning pipelines**.
 
 **For Educators**
 This project may serve as a practical lab example or supplementary teaching resource for **Machine Learning**, **Computer Vision**, and **Generative AI** courses. Attribution is appreciated when utilizing content.
@@ -310,3 +310,4 @@ ### 🎓 [MEng Computer Engineering Repository](https://github.com/Amey-Thakur
 *Semester-wise curriculum, laboratories, projects, and academic notes.*
 
 </div>
+
```
app.py CHANGED

```diff
@@ -8,8 +8,7 @@ sys.path.insert(0, source_dir)
 # Change the current working directory to 'Source Code' so relative files like style.css are found
 os.chdir(source_dir)
 
-#
-
-
-
-pass
+# Import the main app logic
+# In Source Code/app.py, the launch() method is automatically called
+# when 'on_huggingspace' is True. Simply importing it triggers the deployment.
+import app
```
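The app.py change relies on import side effects: the top-level wrapper adds the `Source Code` directory to `sys.path`, changes the working directory so relative assets like `style.css` resolve, then imports the inner module, whose top-level code launches the app. A minimal self-contained sketch of this wrapper pattern (using a temporary directory and a stand-in module named `inner_app`, both hypothetical, since the Space's actual files are not shown in full):

```python
import os
import sys
import tempfile

# Hypothetical stand-in for the repository layout: the real Space keeps its
# logic in a 'Source Code' subdirectory next to the top-level app.py.
workdir = tempfile.mkdtemp()
source_dir = os.path.join(workdir, "Source Code")
os.makedirs(source_dir)

# A stand-in for Source Code/app.py whose import "launches" the app;
# in the real Space this top-level code would call demo.launch().
with open(os.path.join(source_dir, "inner_app.py"), "w") as f:
    f.write("launched = True\n")

sys.path.insert(0, source_dir)  # make the inner module importable
os.chdir(source_dir)            # relative files like style.css now resolve

import inner_app                # importing runs the module's top-level code
print(inner_app.launched)       # -> True
```

Because the launch happens at import time, nothing else is needed in the wrapper; this is why the diff replaces the placeholder `pass` with a bare `import app`.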