{
  "base_model": "seawolf2357/test-bag4",
  "tree": [
    {
      "model_id": "seawolf2357/test-bag4",
      "gated": "False",
      "card": "---\ntags:\n- text-to-image\n- flux\n- lora\n- diffusers\n- template:sd-lora\n- ai-toolkit\nwidget:\n- text: a woman wearing a white shirt and black leggings, standing on a set of stairs\n with a black Chanel bag in her hand. The background of the image is a building.\n [trigger]\n output:\n url: samples/1728266939949__000001000_0.jpg\nbase_model: black-forest-labs/FLUX.1-dev\ninstance_prompt: handbag\nlicense: other\nlicense_name: flux-1-dev-non-commercial-license\nlicense_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md\n---\n\n# test-bag4\nModel trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)\n<Gallery />\n\n## Trigger words\n\nYou should use `handbag` to trigger the image generation.\n\n## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.\n\nWeights for this model are available in Safetensors format.\n\n[Download](/seawolf2357/test-bag4/tree/main) them in the Files & versions tab.\n\n## Use it with the [\ud83e\udde8 diffusers library](https://github.com/huggingface/diffusers)\n\n```py\nfrom diffusers import AutoPipelineForText2Image\nimport torch\n\npipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')\npipeline.load_lora_weights('seawolf2357/test-bag4', weight_name='test-bag4.safetensors')\nimage = pipeline('a woman wearing a white shirt and black leggings, standing on a set of stairs with a black Chanel bag in her hand. The background of the image is a building. [trigger]').images[0]\nimage.save(\"my_image.png\")\n```\n\nFor more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)\n\n",
      "metadata": "\"N/A\"",
      "depth": 0,
      "children": [],
      "children_count": 0,
      "adapters": [],
      "adapters_count": 0,
      "quantized": [],
      "quantized_count": 0,
      "merges": [],
      "merges_count": 0,
      "total_derivatives": 0,
      "spaces": [],
      "spaces_count": 0,
      "parents": [],
      "base_model": "seawolf2357/test-bag4",
      "base_model_relation": "base"
    }
  ]
}