Instructions to use circlestone-labs/Anima with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusion Single File
How to use circlestone-labs/Anima with Diffusion Single File:
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
- Notebooks
- Google Colab
- Kaggle
Training Examples.
Can we get some training data examples from the dataset Anima is being trained on? I would like to understand the best way to structure my own training data for LoRA training.
I agree, even 10-20 examples would be awesome.
https://civitai.com/models/2536147/greg-rutkowski-style-anima
already updated
Thanks, I am using this workflow already. But it seems like Anima supports both tags and captions. Did they train on separate dataset batches that were tagged and captioned, respectively? Or did they have both tags and captions on the same image? That is more of what I am trying to understand here.
He put everything he used in the description and on the side information panel on Civitai, even the complete dataset (he primarily used NL only).
Correct, I understood this much. That wasn't what I was confused about. Purely the formatting for tags. Clearly Anima was also trained on Danbooru tags; I am just wondering about the tag interaction with the NL captions.
Am I doing [foo, bar, baz, a photo of foo holding bar while standing in baz]? Or do I have two subfolders, one containing tagged images and the other with NL-captioned images? This is the first model I have used that can understand both. I would like to train with both if that would give me better results.
That was the purpose of me creating this discussion. I understand the demo dataset only contains captions. Clearly the model also works with tags and I am trying to understand that relationship.
aah, my bad. I believe most people are training the same way you would prompt the model, following the model's tag order:
[quality/meta/year/safety tags] [1girl/1boy/1other etc] [character] [series] [artist] [general tags]
example: masterpiece, best quality, @big chungus. An anime girl with medium-length blonde hair is...
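The ordering above can be sketched as a small helper that assembles a caption from the suggested slots, skipping empty ones. This is just an illustration of the order described in this thread, not anything from the official training code; the field names are made up.

```python
# Hypothetical helper illustrating the tag order discussed above:
# [quality/meta/year/safety] [1girl/1boy/1other] [character] [series] [artist] [general],
# followed by the natural-language caption.
def build_caption(quality="", count="", character="", series="",
                  artist="", general="", nl_caption=""):
    # Keep only the slots that were actually filled in, in order.
    slots = [quality, count, character, series, artist, general]
    tags = ", ".join(s for s in slots if s)
    # Tags first, then the NL sentence, matching the example in this thread.
    return f"{tags}. {nl_caption}" if nl_caption else tags

print(build_caption(
    quality="masterpiece, best quality",
    count="1girl",
    artist="@big chungus",
    nl_caption="An anime girl with medium-length blonde hair is standing in a field.",
))
# → masterpiece, best quality, 1girl, @big chungus. An anime girl with medium-length blonde hair is standing in a field.
```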
I do the exact same, but I don't really use Danbooru tags, so I would presume they go after the artist tag and before the NL sentence? I'm not entirely sure it matters that much, as long as it's tagged. However, I'm also a newbie, so what would I know...
all tags on one line, then a newline, then the natural-language caption
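If that two-part format is right, writing per-image caption files might look something like the sketch below. The folder layout and `.txt`-beside-the-image convention are assumptions (that layout is common in LoRA trainers, but check your trainer's docs), and the paths are made up.

```python
# Sketch: write a caption .txt next to each image, in the format described above:
# one line of comma-separated tags, a newline, then the natural-language caption.
from pathlib import Path

def write_caption(image_path: str, tags: list[str], nl_caption: str) -> Path:
    # Same filename as the image, .txt extension — a common trainer convention.
    caption_path = Path(image_path).with_suffix(".txt")
    caption_path.parent.mkdir(parents=True, exist_ok=True)
    caption_path.write_text(", ".join(tags) + "\n" + nl_caption, encoding="utf-8")
    return caption_path

p = write_caption(
    "dataset/img_0001.png",  # hypothetical path
    ["masterpiece", "best quality", "1girl"],
    "An anime girl with medium-length blonde hair is standing in a field.",
)
print(p.read_text(encoding="utf-8"))
```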
