Update README.md
README.md CHANGED

@@ -12,11 +12,11 @@ tags:
 - art
 ---
 
-# Model Card for
+# Model Card for AIdeaLabVideo JP
 
 
 
-
+AIdeaLabVideo JP is a text-to-video model learning from CC-BY, CC-0 like images.
 This model is supported by [GENIAC](https://www.meti.go.jp/english/policy/mono_info_service/geniac/index.html) (NEDO, METI).
 
 ## Model Details
@@ -26,9 +26,9 @@ This model is supported by [GENIAC](https://www.meti.go.jp/english/policy/mono_i
 At AIdeaLab, we develop AI technology through active dialogue with creators, aiming for mutual understanding and cooperation.
 We strive to solve challenges faced by creators and grow together.
 One of these challenges is that some creators and fans want to use video generation but can't, likely due to the lack of permission to use certain videos for training.
-To address this issue, we have developed
+To address this issue, we have developed AIdeaLabVideo JP.
 
-#### Features of
+#### Features of AIdeaLabVideo JP
 
 - Principally uses images with obtained learning permissions
 - Understands both Japanese and English text inputs directly
@@ -115,7 +115,7 @@ null_prompt_embeds = null_prompt_embeds.to(dtype=torch_dtype, device=device)
 del text_encoder
 
 transformer = CogVideoXTransformer3DModel.from_pretrained(
-    "aidealab/
+    "aidealab/AIdeaLabVideo-JP",
     torch_dtype=torch_dtype
 )
 transformer=transformer.to(device)
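The last hunk completes the repo id passed to `CogVideoXTransformer3DModel.from_pretrained`. A minimal sketch of using that id, assuming the `diffusers` API shown in the README's own snippet; the import guard and the `load_transformer` helper are illustrative additions, not part of the commit:

```python
# Sketch only: load the transformer named in the diff when diffusers is installed.
# MODEL_ID comes from the commit; the guard and helper are hypothetical.
import importlib.util

MODEL_ID = "aidealab/AIdeaLabVideo-JP"  # value filled in by this change

def load_transformer(torch_dtype=None):
    """Return the video transformer, or None when diffusers is unavailable."""
    if importlib.util.find_spec("diffusers") is None:
        return None  # keeps the sketch importable without the dependency
    from diffusers import CogVideoXTransformer3DModel
    return CogVideoXTransformer3DModel.from_pretrained(MODEL_ID, torch_dtype=torch_dtype)
```

The guard lets the snippet be read (and imported) without pulling model weights; in the README's full example the loaded module is then moved to the target device with `.to(device)`.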