Tagging for LoRA training or finetuning?
#105
by CappyAdams - opened
I know how to train LoRAs in general, but I still want to ask this:
Should I use natural-language captions or Danbooru tags in my datasets? Or both?
I think the official way to do it is mixing, while keeping the format as close to the reference as possible, which is: [quality/meta/year/safety tags] [1girl/1boy/1other etc.] [character] [series] [artist] [general tags/natural language]
The author said they use tag dropout, so you should have that enabled. Other than that, keep to the order above, and then use a mix of caption formats: tags only, tags + natlang, natlang only, and natlang + tags. That's what I've found from my own training runs, and what others have told me to do.
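The scheme described above (fixed tag order, per-tag dropout, and mixing caption formats across the dataset) can be sketched roughly as below. This is a minimal illustration, not the model author's actual training code: the field names, dropout probability, and uniform format choice are all assumptions for the example.

```python
import random

# Reference tag order from the discussion:
# quality/meta/year/safety -> 1girl/1boy/1other -> character -> series -> artist -> general
TAG_ORDER = ["quality", "meta", "year", "safety", "count",
             "character", "series", "artist", "general"]

def build_caption(fields, natlang, dropout_p=0.1, rng=None):
    """Assemble one training caption.

    fields    -- dict mapping a TAG_ORDER key to a list of tags (hypothetical layout)
    natlang   -- natural-language description of the image
    dropout_p -- probability of dropping each individual tag (illustrative value)
    """
    rng = rng or random.Random()
    # Flatten tag groups in the reference order, dropping each tag with prob. dropout_p.
    tags = [tag
            for key in TAG_ORDER
            for tag in fields.get(key, [])
            if rng.random() >= dropout_p]
    tag_str = ", ".join(tags)
    # Mix the four caption formats mentioned in the thread.
    fmt = rng.choice(["tags", "tags+natlang", "natlang", "natlang+tags"])
    if fmt == "tags":
        return tag_str
    if fmt == "tags+natlang":
        return f"{tag_str}, {natlang}"
    if fmt == "natlang":
        return natlang
    return f"{natlang}, {tag_str}"
```

Calling `build_caption` once per image per epoch means each image is seen with several caption variants over training, which is the mixing effect the reply recommends.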
Gotcha! Thank you so much for your reply!!
CappyAdams changed discussion status to closed