See the [`CLIPImageProcessor` documentation](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPImageProcessor) for details.
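As a quick sketch of what that preprocessing does (the default `CLIPImageProcessor` settings; the placeholder image here is just for illustration):

```python
from PIL import Image
from transformers import CLIPImageProcessor

# Default CLIP preprocessing: resize, center-crop to 224x224, normalize
# with CLIP's image mean/std.
processor = CLIPImageProcessor()

image = Image.new("RGB", (640, 480), color=(128, 64, 32))  # placeholder image
inputs = processor(images=image, return_tensors="np")
print(inputs["pixel_values"].shape)  # (1, 3, 224, 224)
```

The processor returns a dict whose `pixel_values` entry is the batched, normalized tensor the model consumes.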
Also, despite the names `style vector` and `content vector`, I have noticed by visual inspection that both are basically equally good for style embedding. That is not supposed to happen: the model apparently did not fully disentangle style from content, and I don't know why. Maybe that's a question for a small research paper.
You can see this for yourself by changing the line `style_output = output["style_output"].squeeze(0)` to `style_output = output["content_output"].squeeze(0)` in the demo. The resulting t-SNE still clusters by style, to my eye about as well.
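The t-SNE step itself looks roughly like the sketch below. The embeddings here are random stand-ins for the vectors the demo collects (two fake "styles"); in the real demo they come from `output["style_output"]` or, after the one-line swap, `output["content_output"]`:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for the demo's embeddings: two synthetic style
# clusters of 512-d vectors, just to show the t-SNE projection step.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 512)),   # images of "style A"
    rng.normal(1.0, 0.1, size=(20, 512)),   # images of "style B"
])

# The swap described above is a one-line change when collecting embeddings:
# style_output = output["style_output"].squeeze(0)     # original
# style_output = output["content_output"].squeeze(0)   # swapped to content

coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (40, 2) -> 2-D points you can scatter-plot
```

Plot `coords` colored by style label and compare the two runs; if both vectors cluster by style equally well, you'll see it immediately.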
## How to use it