Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- stable_diffusion/LICENSE +82 -0
- stable_diffusion/README.md +215 -0
- stable_diffusion/assets/modelfigure.png +0 -0
- stable_diffusion/assets/reconstruction2.png +3 -0
- stable_diffusion/assets/results.gif.REMOVED.git-id +1 -0
- stable_diffusion/assets/stable-samples/img2img/upscaling-out.png.REMOVED.git-id +1 -0
- stable_diffusion/assets/stable-samples/txt2img/merged-0005.png.REMOVED.git-id +1 -0
- stable_diffusion/assets/stable-samples/txt2img/merged-0006.png.REMOVED.git-id +1 -0
- stable_diffusion/assets/stable-samples/txt2img/merged-0007.png.REMOVED.git-id +1 -0
- stable_diffusion/assets/txt2img-preview.png.REMOVED.git-id +1 -0
- stable_diffusion/assets/v1-variants-scores.jpg +0 -0
- stable_diffusion/constants/__pycache__/const.cpython-38.pyc +0 -0
- stable_diffusion/constants/const.py +10 -0
- stable_diffusion/data/example_conditioning/superresolution/sample_0.jpg +0 -0
- stable_diffusion/data/example_conditioning/text_conditional/sample_0.txt +1 -0
- stable_diffusion/data/imagenet_clsidx_to_label.txt +1000 -0
- stable_diffusion/data/imagenet_train_hr_indices.p.REMOVED.git-id +1 -0
- stable_diffusion/data/index_synset.yaml +1000 -0
- stable_diffusion/data/inpainting_examples/6458524847_2f4c361183_k_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/8399166846_f6fb4e4b8e_k_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/alex-iby-G_Pk4D9rMLs_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/bench2_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/bertrand-gabioud-CpuFzIsHYJ0_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/billow926-12-Wc-Zgx6Y_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png +0 -0
- stable_diffusion/data/inpainting_examples/photo-1583445095369-9c651e7e5d34_mask.png +0 -0
- stable_diffusion/environment.yaml +31 -0
- stable_diffusion/exp_qualitative_eval.py +157 -0
- stable_diffusion/ldm/data/__init__.py +0 -0
- stable_diffusion/ldm/guaidance.py +96 -0
- stable_diffusion/ldm/lr_scheduler.py +101 -0
- stable_diffusion/ldm/modules/diffusionmodules/__pycache__/__init__.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/diffusionmodules/__pycache__/model.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/diffusionmodules/__pycache__/openaimodel.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/diffusionmodules/__pycache__/util.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/distributions/__pycache__/__init__.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/distributions/__pycache__/distributions.cpython-38.pyc +0 -0
- stable_diffusion/ldm/modules/distributions/distributions.py +125 -0
- stable_diffusion/ldm/modules/losses/__init__.py +1 -0
- stable_diffusion/ldm/modules/losses/contperceptual.py +111 -0
- stable_diffusion/ldm/modules/losses/vqperceptual.py +167 -0
- stable_diffusion/ldm/util.py +197 -0
- stable_diffusion/main.py +744 -0
- stable_diffusion/models/first_stage_models/kl-f16/config.yaml +44 -0
- stable_diffusion/models/first_stage_models/kl-f32/config.yaml +46 -0
- stable_diffusion/models/first_stage_models/kl-f4/config.yaml +41 -0
- stable_diffusion/models/first_stage_models/kl-f8/config.yaml +42 -0
- stable_diffusion/models/first_stage_models/vq-f16/config.yaml +49 -0
- stable_diffusion/models/first_stage_models/vq-f4-noattn/config.yaml +46 -0
- stable_diffusion/models/first_stage_models/vq-f4/config.yaml +45 -0
stable_diffusion/LICENSE
ADDED
@@ -0,0 +1,82 @@
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors

CreativeML Open RAIL-M
dated August 22, 2022

Section I: PREAMBLE

Multimodal generative models are being widely adopted and used, and have the potential to transform the way artists, among other individuals, conceive and benefit from AI or ML technologies as a tool for content creation.

Notwithstanding the current and potential benefits that these artifacts can bring to society at large, there are also concerns about potential misuses of them, either due to their technical limitations or ethical considerations.

In short, this license strives for both the open and responsible downstream use of the accompanying model. When it comes to the open character, we took inspiration from open source permissive licenses regarding the grant of IP rights. Referring to the downstream responsible use, we added use-based restrictions not permitting the use of the Model in very specific scenarios, in order for the licensor to be able to enforce the license in case potential misuses of the Model may occur. At the same time, we strive to promote open and responsible research on generative models for art and content generation.

Even though downstream derivative versions of the model could be released under different licensing terms, the latter will always have to include - at minimum - the same use-based restrictions as the ones in the original license (this license). We believe in the intersection between open and responsible AI development; thus, this License aims to strike a balance between both in order to enable responsible open-science in the field of AI.

This License governs the use of the model (and its derivatives) and is informed by the model card associated with the model.

NOW THEREFORE, You and Licensor agree as follows:

1. Definitions

- "License" means the terms and conditions for use, reproduction, and Distribution as defined in this document.
- "Data" means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
- "Output" means the results of operating a Model as embodied in informational content resulting therefrom.
- "Model" means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material.
- "Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model.
- "Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any.
- "Distribution" means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means - e.g. API-based or web access.
- "Licensor" means the copyright owner or entity authorized by the copyright owner that is granting the License, including the persons or entities that may have rights in the Model and/or distributing the Model.
- "You" (or "Your") means an individual or Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot, translator, image generator.
- "Third Parties" means individuals or legal entities that are not under common control with Licensor or You.
- "Contribution" means any work of authorship, including the original version of the Model and any modifications or additions to that Model or Derivatives of the Model thereof, that is intentionally submitted to Licensor for inclusion in the Model by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Model, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
- "Contributor" means Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Model.

Section II: INTELLECTUAL PROPERTY RIGHTS

Both copyright and patent grants apply to the Model, Derivatives of the Model and Complementary Material. The Model and Derivatives of the Model are subject to additional terms as described in Section III.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model.
3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and the Complementary Material, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Model to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model and/or Complementary Material or a Contribution incorporated within the Model and/or Complementary Material constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or Work shall terminate as of the date such litigation is asserted or filed.

Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION

4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions:
Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5. This provision does not apply to the use of Complementary Material.
You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License;
You must cause any modified files to carry prominent notices stating that You changed the files;
You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions - respecting paragraph 4.a. - for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License.
5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model and the Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. Use may include creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5).
6. The Output You Generate. Except as set forth herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License.

Section IV: OTHER PROVISIONS

7. Updates and Runtime Restrictions. To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License, update the Model through electronic means, or modify the Output of the Model based on updates. You shall undertake reasonable efforts to use the latest version of the Model.
8. Trademarks and related. Nothing in this License permits You to make use of Licensors’ trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by the Licensors.
9. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Model and the Complementary Material (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model, Derivatives of the Model, and the Complementary Material and assume any risks associated with Your exercise of permissions under this License.
10. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model and the Complementary Material (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
11. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model and the Complementary Material thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
12. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.

END OF TERMS AND CONDITIONS

Attachment A

Use Restrictions

You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
stable_diffusion/README.md
ADDED
@@ -0,0 +1,215 @@
# Stable Diffusion
*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*

[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>
[Robin Rombach](https://github.com/rromb)\*,
[Andreas Blattmann](https://github.com/ablattmann)\*,
[Dominik Lorenz](https://github.com/qp-qp),
[Patrick Esser](https://github.com/pesser),
[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>
_[CVPR '22 Oral](https://openaccess.thecvf.com/content/CVPR2022/html/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.html) |
[GitHub](https://github.com/CompVis/latent-diffusion) | [arXiv](https://arxiv.org/abs/2112.10752) | [Project page](https://ommer-lab.com/research/latent-diffusion-models/)_

[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion model.
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).

## Requirements
A suitable [conda](https://conda.io/) environment named `ldm` can be created and activated with:

```
conda env create -f environment.yaml
conda activate ldm
```

You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running

```
conda install pytorch torchvision -c pytorch
pip install transformers==4.19.2 diffusers invisible-watermark
pip install -e .
```

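As a quick sanity check that the environment is usable (this check is not part of the original instructions; it only imports packages installed by the steps above):

```py
# Verify that the core dependencies import and that a GPU is visible.
import torch
import transformers

print("torch", torch.__version__, "| transformers", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```
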
## Stable Diffusion v1

Stable Diffusion v1 refers to a specific configuration of the model architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet and a CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and then finetuned on 512x512 images.

*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present in its training data.
Details on the training procedure and data, as well as the intended use of the model, can be found in the corresponding [model card](Stable_Diffusion_v1_Model_Card.md).*

The weights are available via [the CompVis organization at Hugging Face](https://huggingface.co/CompVis) under [a license which contains specific use-based restrictions to prevent misuse and harm as informed by the model card, but otherwise remains permissive](LICENSE). While commercial use is permitted under the terms of the license, **we do not recommend using the provided weights for services or products without additional safety mechanisms and considerations**, since there are [known limitations and biases](Stable_Diffusion_v1_Model_Card.md#limitations-and-bias) of the weights, and research on safe and ethical deployment of general text-to-image models is an ongoing effort. **The weights are research artifacts and should be treated as such.**

[The CreativeML OpenRAIL M license](LICENSE) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.

### Weights

We currently provide the following checkpoints:

- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on [laion-aesthetics v2 5+](https://laion.ai/blog/laion-aesthetics/) (a subset of laion2B-en with an estimated aesthetics score `> 5.0`, additionally filtered to images with an original size `>= 512x512` and an estimated watermark probability `< 0.5`; the watermark estimate is from the [LAION-5B](https://laion.ai/blog/laion-5b/) metadata, and the aesthetics score is estimated using the [LAION-Aesthetics Predictor V2](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints:

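The "10\% dropping of the text-conditioning" used for `sd-v1-3.ckpt` and `sd-v1-4.ckpt` means that, during fine-tuning, the caption is replaced by the empty prompt for a random 10% of training examples, which is what later enables classifier-free guidance at sampling time. A minimal sketch of such caption dropout (illustrative only, not the repository's training code):

```py
import random

def maybe_drop_caption(caption: str, p_drop: float = 0.1) -> str:
    # With probability p_drop, train this example on the unconditional (empty) prompt.
    return "" if random.random() < p_drop else caption
```
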
### Text-to-Image with Stable Diffusion

Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder.
We provide a [reference script for sampling](#reference-sampling-script), but there also exists a [diffusers integration](#diffusers-integration), for which we expect to see more active community development.

#### Reference Sampling Script

We provide a reference sampling script, which incorporates

- a [Safety Checker Module](https://github.com/CompVis/stable-diffusion/pull/36), to reduce the probability of explicit outputs,
- an [invisible watermarking](https://github.com/ShieldMnt/invisible-watermark) of the outputs, to help viewers [identify the images as machine-generated](scripts/tests/test_watermark.py).

After [obtaining the `stable-diffusion-v1-*-original` weights](#weights), link them
```
mkdir -p models/ldm/stable-diffusion-v1/
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
```
and sample with
```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
```

By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler, and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).

```commandline
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA]
                  [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS] [--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT]
                  [--seed SEED] [--precision {full,autocast}]

optional arguments:
  -h, --help            show this help message and exit
  --prompt [PROMPT]     the prompt to render
  --outdir [OUTDIR]     dir to write results to
  --skip_grid           do not save a grid, only individual samples. Helpful when evaluating lots of samples
  --skip_save           do not save individual samples. For speed measurements.
  --ddim_steps DDIM_STEPS
                        number of ddim sampling steps
  --plms                use plms sampling
  --laion400m           uses the LAION400M model
  --fixed_code          if enabled, uses the same starting code across samples
  --ddim_eta DDIM_ETA   ddim eta (eta=0.0 corresponds to deterministic sampling)
  --n_iter N_ITER       sample this often
  --H H                 image height, in pixel space
  --W W                 image width, in pixel space
  --C C                 latent channels
  --f F                 downsampling factor
  --n_samples N_SAMPLES
                        how many samples to produce for each given prompt. A.k.a. batch size
  --n_rows N_ROWS       rows in the grid (default: n_samples)
  --scale SCALE         unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
  --from-file FROM_FILE
                        if specified, load prompts from this file
  --config CONFIG       path to config which constructs model
  --ckpt CKPT           path to checkpoint of model
  --seed SEED           the seed (for reproducible sampling)
  --precision {full,autocast}
                        evaluate at this precision
```
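The `--scale` argument implements classifier-free guidance: the unconditional and conditional noise predictions are combined as `eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))`, as stated in the help text above. A minimal sketch of that combination (function and variable names here are illustrative, not taken from the repository's scripts):

```py
def classifier_free_guidance(model, x, t, cond, uncond, scale=7.5):
    # Noise prediction for the empty prompt and for the actual prompt,
    # then extrapolate away from the unconditional prediction by `scale`.
    e_uncond = model(x, t, uncond)
    e_cond = model(x, t, cond)
    return e_uncond + scale * (e_cond - e_uncond)
```
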
Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints.
For this reason `use_ema=False` is set in the configuration, otherwise the code will try to switch from non-EMA to EMA weights. If you want to examine the effect of EMA vs. no EMA, we provide "full" checkpoints which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights.

#### Diffusers Integration

A simple way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers):
```py
# make sure you're logged in with `huggingface-cli login`
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")
```

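Note that the snippet above targets the `diffusers` API current at the time of release. With more recent `diffusers` versions the explicit `autocast` wrapper is typically unnecessary and the pipeline output exposes its images via an `images` attribute (an assumption to verify against your installed version):

```py
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```
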
### Image Modification with Stable Diffusion

By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different tasks such as text-guided image-to-image translation and upscaling. Similar to the txt2img sampling script, we provide a script to perform image modification with Stable Diffusion.

The following describes an example where a rough sketch made in [Pinta](https://www.pinta-project.com/) is converted into a detailed artwork.
```
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img <path-to-img.jpg> --strength 0.8
```
Here, `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values approaching 1.0 allow for many variations but will also produce images that are not semantically consistent with the input. See the following example.

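Before the visual example, here is a rough sketch of how `strength` is typically interpreted in SDEdit-style image-to-image sampling (variable names are illustrative, not taken from `scripts/img2img.py`): the init image is encoded, noised up to a timestep proportional to `strength`, and only that fraction of the schedule is denoised again.

```py
def steps_to_resample(strength: float, ddim_steps: int = 50) -> int:
    # strength == 0.0 keeps the input untouched; strength == 1.0 re-runs the full schedule.
    assert 0.0 <= strength <= 1.0
    return int(strength * ddim_steps)

print(steps_to_resample(0.8))  # 40 of 50 steps are re-sampled
```
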
**Input**

**Outputs**

This procedure can, for example, also be used to upscale samples from the base model.


## Comments

- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion) and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). Thanks for open-sourcing!

- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).


## BibTeX

```
@misc{rombach2021highresolution,
      title={High-Resolution Image Synthesis with Latent Diffusion Models},
      author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
      year={2021},
      eprint={2112.10752},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
stable_diffusion/assets/modelfigure.png
ADDED
stable_diffusion/assets/reconstruction2.png
ADDED
stable_diffusion/assets/results.gif.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
82b6590e670a32196093cc6333ea19e6547d07de
stable_diffusion/assets/stable-samples/img2img/upscaling-out.png.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
1c4bb25a779f34d86b2d90e584ac67af91bb1303
stable_diffusion/assets/stable-samples/txt2img/merged-0005.png.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
ca0a1af206555f0f208a1ab879e95efedc1b1c5b
stable_diffusion/assets/stable-samples/txt2img/merged-0006.png.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
999f3703230580e8c89e9081abd6a1f8f50896d4
stable_diffusion/assets/stable-samples/txt2img/merged-0007.png.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
af390acaf601283782d6f479d4cade4d78e30b26
stable_diffusion/assets/txt2img-preview.png.REMOVED.git-id
ADDED
@@ -0,0 +1 @@
51ee1c235dfdc63d4c41de7d303d03730e43c33c
stable_diffusion/assets/v1-variants-scores.jpg
ADDED
stable_diffusion/constants/__pycache__/const.cpython-38.pyc
ADDED
Binary file (1.5 kB)
stable_diffusion/constants/const.py
ADDED
@@ -0,0 +1,10 @@
style_list = "Abstractionism Bricks Cartoon Color_Fantasy Cubism Early_Autumn Fauvism Gorgeous_Love Impressionism Joy Meta_Physics Monet Pencil_Drawing Red_Blue_Ink Superstring Warm_Love Artist_Sketch Byzantine Cold_Warm Crayon Dadaism Expressionism Glowing_Sunset Greenfield Ink_Art Magic_Cube Meteor_Shower Neon_Lines Picasso Rust Sponge_Dabbed Van_Gogh1 Watercolor On_Fire Vibrant_Flow Mosaic Blossom_Season Warm_Smear"

theme_available = ["Abstractionism", "Bricks", "Cartoon", "Color_Fantasy", "Cubism", "Early_Autumn", "Fauvism",
                   "Gorgeous_Love", "Impressionism", "Joy", "Meta_Physics", "Monet", "Pencil_Drawing", "Red_Blue_Ink", "Superstring",
                   "Warm_Love", "Artist_Sketch", "Byzantine", "Cold_Warm", "Crayon", "Dadaism", "Expressionism", "Glowing_Sunset",
                   "Greenfield", "Ink_Art", "Magic_Cube", "Meteor_Shower", "Neon_Lines", "Picasso", "Rust", "Sponge_Dabbed",
                   "Van_Gogh1", "Watercolor", "On_Fire", "Vibrant_Flow", "Mosaic", "Blossom_Season", "Warm_Smear"]

class_available = ["Architectures", "Bears", "Birds", "Butterfly", "Cats", "Dogs", "Fishes", "Flame", "Flowers",
                   "Frogs", "Horses", "Human", "Jellyfish", "Rabbits", "Sandwiches", "Sea", "Statues", "Towers", "Trees", "Waterfalls"]
stable_diffusion/data/example_conditioning/superresolution/sample_0.jpg
ADDED
stable_diffusion/data/example_conditioning/text_conditional/sample_0.txt
ADDED
@@ -0,0 +1 @@
A basket of cherries
stable_diffusion/data/imagenet_clsidx_to_label.txt
ADDED
@@ -0,0 +1,1000 @@
| 1 |
+
0: 'tench, Tinca tinca',
|
| 2 |
+
1: 'goldfish, Carassius auratus',
|
| 3 |
+
2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
|
| 4 |
+
3: 'tiger shark, Galeocerdo cuvieri',
|
| 5 |
+
4: 'hammerhead, hammerhead shark',
|
| 6 |
+
5: 'electric ray, crampfish, numbfish, torpedo',
|
| 7 |
+
6: 'stingray',
|
| 8 |
+
7: 'cock',
|
| 9 |
+
8: 'hen',
|
| 10 |
+
9: 'ostrich, Struthio camelus',
|
| 11 |
+
10: 'brambling, Fringilla montifringilla',
|
| 12 |
+
11: 'goldfinch, Carduelis carduelis',
|
| 13 |
+
12: 'house finch, linnet, Carpodacus mexicanus',
|
| 14 |
+
13: 'junco, snowbird',
|
| 15 |
+
14: 'indigo bunting, indigo finch, indigo bird, Passerina cyanea',
|
| 16 |
+
15: 'robin, American robin, Turdus migratorius',
|
| 17 |
+
16: 'bulbul',
|
| 18 |
+
17: 'jay',
|
| 19 |
+
18: 'magpie',
|
| 20 |
+
19: 'chickadee',
|
| 21 |
+
20: 'water ouzel, dipper',
|
| 22 |
+
21: 'kite',
|
| 23 |
+
22: 'bald eagle, American eagle, Haliaeetus leucocephalus',
|
| 24 |
+
23: 'vulture',
|
| 25 |
+
24: 'great grey owl, great gray owl, Strix nebulosa',
|
| 26 |
+
25: 'European fire salamander, Salamandra salamandra',
|
| 27 |
+
26: 'common newt, Triturus vulgaris',
|
| 28 |
+
27: 'eft',
|
| 29 |
+
28: 'spotted salamander, Ambystoma maculatum',
|
| 30 |
+
29: 'axolotl, mud puppy, Ambystoma mexicanum',
|
| 31 |
+
30: 'bullfrog, Rana catesbeiana',
|
| 32 |
+
31: 'tree frog, tree-frog',
|
| 33 |
+
32: 'tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui',
|
| 34 |
+
33: 'loggerhead, loggerhead turtle, Caretta caretta',
|
| 35 |
+
34: 'leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea',
|
| 36 |
+
35: 'mud turtle',
|
| 37 |
+
36: 'terrapin',
|
| 38 |
+
37: 'box turtle, box tortoise',
|
| 39 |
+
38: 'banded gecko',
|
| 40 |
+
39: 'common iguana, iguana, Iguana iguana',
|
| 41 |
+
40: 'American chameleon, anole, Anolis carolinensis',
|
| 42 |
+
41: 'whiptail, whiptail lizard',
|
| 43 |
+
42: 'agama',
|
| 44 |
+
43: 'frilled lizard, Chlamydosaurus kingi',
|
| 45 |
+
44: 'alligator lizard',
|
| 46 |
+
45: 'Gila monster, Heloderma suspectum',
|
| 47 |
+
46: 'green lizard, Lacerta viridis',
|
| 48 |
+
47: 'African chameleon, Chamaeleo chamaeleon',
|
| 49 |
+
48: 'Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis',
|
| 50 |
+
49: 'African crocodile, Nile crocodile, Crocodylus niloticus',
|
| 51 |
+
50: 'American alligator, Alligator mississipiensis',
|
| 52 |
+
51: 'triceratops',
|
| 53 |
+
52: 'thunder snake, worm snake, Carphophis amoenus',
|
| 54 |
+
53: 'ringneck snake, ring-necked snake, ring snake',
|
| 55 |
+
54: 'hognose snake, puff adder, sand viper',
|
| 56 |
+
55: 'green snake, grass snake',
|
| 57 |
+
56: 'king snake, kingsnake',
|
| 58 |
+
57: 'garter snake, grass snake',
|
| 59 |
+
58: 'water snake',
|
| 60 |
+
59: 'vine snake',
|
| 61 |
+
60: 'night snake, Hypsiglena torquata',
|
| 62 |
+
61: 'boa constrictor, Constrictor constrictor',
|
| 63 |
+
62: 'rock python, rock snake, Python sebae',
|
| 64 |
+
63: 'Indian cobra, Naja naja',
|
| 65 |
+
64: 'green mamba',
|
| 66 |
+
65: 'sea snake',
|
| 67 |
+
66: 'horned viper, cerastes, sand viper, horned asp, Cerastes cornutus',
|
| 68 |
+
67: 'diamondback, diamondback rattlesnake, Crotalus adamanteus',
|
| 69 |
+
68: 'sidewinder, horned rattlesnake, Crotalus cerastes',
|
| 70 |
+
69: 'trilobite',
|
| 71 |
+
70: 'harvestman, daddy longlegs, Phalangium opilio',
|
| 72 |
+
71: 'scorpion',
|
| 73 |
+
72: 'black and gold garden spider, Argiope aurantia',
|
| 74 |
+
73: 'barn spider, Araneus cavaticus',
|
| 75 |
+
74: 'garden spider, Aranea diademata',
|
| 76 |
+
75: 'black widow, Latrodectus mactans',
|
| 77 |
+
76: 'tarantula',
|
| 78 |
+
77: 'wolf spider, hunting spider',
|
| 79 |
+
78: 'tick',
|
| 80 |
+
79: 'centipede',
|
| 81 |
+
80: 'black grouse',
|
| 82 |
+
81: 'ptarmigan',
|
| 83 |
+
82: 'ruffed grouse, partridge, Bonasa umbellus',
|
| 84 |
+
83: 'prairie chicken, prairie grouse, prairie fowl',
|
| 85 |
+
84: 'peacock',
|
| 86 |
+
85: 'quail',
|
| 87 |
+
86: 'partridge',
|
| 88 |
+
87: 'African grey, African gray, Psittacus erithacus',
|
| 89 |
+
88: 'macaw',
|
| 90 |
+
89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
|
| 91 |
+
90: 'lorikeet',
|
| 92 |
+
91: 'coucal',
|
| 93 |
+
92: 'bee eater',
|
| 94 |
+
93: 'hornbill',
|
| 95 |
+
94: 'hummingbird',
|
| 96 |
+
95: 'jacamar',
|
| 97 |
+
96: 'toucan',
|
| 98 |
+
97: 'drake',
|
| 99 |
+
98: 'red-breasted merganser, Mergus serrator',
|
| 100 |
+
99: 'goose',
|
| 101 |
+
100: 'black swan, Cygnus atratus',
|
| 102 |
+
101: 'tusker',
|
| 103 |
+
102: 'echidna, spiny anteater, anteater',
|
| 104 |
+
103: 'platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus',
|
| 105 |
+
104: 'wallaby, brush kangaroo',
|
| 106 |
+
105: 'koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus',
|
| 107 |
+
106: 'wombat',
|
| 108 |
+
107: 'jellyfish',
|
| 109 |
+
108: 'sea anemone, anemone',
|
| 110 |
+
109: 'brain coral',
|
| 111 |
+
110: 'flatworm, platyhelminth',
|
| 112 |
+
111: 'nematode, nematode worm, roundworm',
|
| 113 |
+
112: 'conch',
|
| 114 |
+
113: 'snail',
|
| 115 |
+
114: 'slug',
|
| 116 |
+
115: 'sea slug, nudibranch',
|
| 117 |
+
116: 'chiton, coat-of-mail shell, sea cradle, polyplacophore',
|
| 118 |
+
117: 'chambered nautilus, pearly nautilus, nautilus',
|
| 119 |
+
118: 'Dungeness crab, Cancer magister',
|
| 120 |
+
119: 'rock crab, Cancer irroratus',
|
| 121 |
+
120: 'fiddler crab',
|
| 122 |
+
121: 'king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica',
|
| 123 |
+
122: 'American lobster, Northern lobster, Maine lobster, Homarus americanus',
|
| 124 |
+
123: 'spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish',
|
| 125 |
+
124: 'crayfish, crawfish, crawdad, crawdaddy',
|
| 126 |
+
125: 'hermit crab',
|
| 127 |
+
126: 'isopod',
|
| 128 |
+
127: 'white stork, Ciconia ciconia',
|
| 129 |
+
128: 'black stork, Ciconia nigra',
|
| 130 |
+
129: 'spoonbill',
|
| 131 |
+
130: 'flamingo',
|
| 132 |
+
131: 'little blue heron, Egretta caerulea',
|
| 133 |
+
132: 'American egret, great white heron, Egretta albus',
|
| 134 |
+
133: 'bittern',
|
| 135 |
+
134: 'crane',
|
| 136 |
+
135: 'limpkin, Aramus pictus',
|
| 137 |
+
136: 'European gallinule, Porphyrio porphyrio',
|
| 138 |
+
137: 'American coot, marsh hen, mud hen, water hen, Fulica americana',
|
| 139 |
+
138: 'bustard',
|
| 140 |
+
139: 'ruddy turnstone, Arenaria interpres',
|
| 141 |
+
140: 'red-backed sandpiper, dunlin, Erolia alpina',
|
| 142 |
+
141: 'redshank, Tringa totanus',
|
| 143 |
+
142: 'dowitcher',
|
| 144 |
+
143: 'oystercatcher, oyster catcher',
|
| 145 |
+
144: 'pelican',
|
| 146 |
+
145: 'king penguin, Aptenodytes patagonica',
|
| 147 |
+
146: 'albatross, mollymawk',
|
| 148 |
+
147: 'grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus',
|
| 149 |
+
148: 'killer whale, killer, orca, grampus, sea wolf, Orcinus orca',
|
| 150 |
+
149: 'dugong, Dugong dugon',
|
| 151 |
+
150: 'sea lion',
|
| 152 |
+
151: 'Chihuahua',
|
| 153 |
+
152: 'Japanese spaniel',
|
| 154 |
+
153: 'Maltese dog, Maltese terrier, Maltese',
|
| 155 |
+
154: 'Pekinese, Pekingese, Peke',
|
| 156 |
+
155: 'Shih-Tzu',
|
| 157 |
+
156: 'Blenheim spaniel',
|
| 158 |
+
157: 'papillon',
|
| 159 |
+
158: 'toy terrier',
|
| 160 |
+
159: 'Rhodesian ridgeback',
|
| 161 |
+
160: 'Afghan hound, Afghan',
|
| 162 |
+
161: 'basset, basset hound',
|
| 163 |
+
162: 'beagle',
|
| 164 |
+
163: 'bloodhound, sleuthhound',
|
| 165 |
+
164: 'bluetick',
|
| 166 |
+
165: 'black-and-tan coonhound',
|
| 167 |
+
166: 'Walker hound, Walker foxhound',
|
| 168 |
+
167: 'English foxhound',
|
| 169 |
+
168: 'redbone',
|
| 170 |
+
169: 'borzoi, Russian wolfhound',
|
| 171 |
+
170: 'Irish wolfhound',
|
| 172 |
+
171: 'Italian greyhound',
|
| 173 |
+
172: 'whippet',
|
| 174 |
+
173: 'Ibizan hound, Ibizan Podenco',
|
| 175 |
+
174: 'Norwegian elkhound, elkhound',
|
| 176 |
+
175: 'otterhound, otter hound',
|
| 177 |
+
176: 'Saluki, gazelle hound',
|
| 178 |
+
177: 'Scottish deerhound, deerhound',
|
| 179 |
+
178: 'Weimaraner',
|
| 180 |
+
179: 'Staffordshire bullterrier, Staffordshire bull terrier',
|
| 181 |
+
180: 'American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier',
|
| 182 |
+
181: 'Bedlington terrier',
|
| 183 |
+
182: 'Border terrier',
|
| 184 |
+
183: 'Kerry blue terrier',
|
| 185 |
+
184: 'Irish terrier',
|
| 186 |
+
185: 'Norfolk terrier',
|
| 187 |
+
186: 'Norwich terrier',
|
| 188 |
+
187: 'Yorkshire terrier',
|
| 189 |
+
188: 'wire-haired fox terrier',
|
| 190 |
+
189: 'Lakeland terrier',
|
| 191 |
+
190: 'Sealyham terrier, Sealyham',
|
| 192 |
+
191: 'Airedale, Airedale terrier',
|
| 193 |
+
192: 'cairn, cairn terrier',
|
| 194 |
+
193: 'Australian terrier',
|
| 195 |
+
194: 'Dandie Dinmont, Dandie Dinmont terrier',
|
| 196 |
+
195: 'Boston bull, Boston terrier',
|
| 197 |
+
196: 'miniature schnauzer',
|
| 198 |
+
197: 'giant schnauzer',
|
| 199 |
+
198: 'standard schnauzer',
|
| 200 |
+
199: 'Scotch terrier, Scottish terrier, Scottie',
|
| 201 |
+
200: 'Tibetan terrier, chrysanthemum dog',
|
| 202 |
+
201: 'silky terrier, Sydney silky',
|
| 203 |
+
202: 'soft-coated wheaten terrier',
|
| 204 |
+
203: 'West Highland white terrier',
|
| 205 |
+
204: 'Lhasa, Lhasa apso',
|
| 206 |
+
205: 'flat-coated retriever',
|
| 207 |
+
206: 'curly-coated retriever',
|
| 208 |
+
207: 'golden retriever',
|
| 209 |
+
208: 'Labrador retriever',
|
| 210 |
+
209: 'Chesapeake Bay retriever',
|
| 211 |
+
210: 'German short-haired pointer',
|
| 212 |
+
211: 'vizsla, Hungarian pointer',
|
| 213 |
+
212: 'English setter',
|
| 214 |
+
213: 'Irish setter, red setter',
|
| 215 |
+
214: 'Gordon setter',
|
| 216 |
+
215: 'Brittany spaniel',
|
| 217 |
+
216: 'clumber, clumber spaniel',
|
| 218 |
+
217: 'English springer, English springer spaniel',
|
| 219 |
+
218: 'Welsh springer spaniel',
|
| 220 |
+
219: 'cocker spaniel, English cocker spaniel, cocker',
|
| 221 |
+
220: 'Sussex spaniel',
|
| 222 |
+
221: 'Irish water spaniel',
|
| 223 |
+
222: 'kuvasz',
|
| 224 |
+
223: 'schipperke',
|
| 225 |
+
224: 'groenendael',
|
| 226 |
+
225: 'malinois',
|
| 227 |
+
226: 'briard',
|
| 228 |
+
227: 'kelpie',
|
| 229 |
+
228: 'komondor',
|
| 230 |
+
229: 'Old English sheepdog, bobtail',
|
| 231 |
+
230: 'Shetland sheepdog, Shetland sheep dog, Shetland',
|
| 232 |
+
231: 'collie',
|
| 233 |
+
232: 'Border collie',
|
| 234 |
+
233: 'Bouvier des Flandres, Bouviers des Flandres',
|
| 235 |
+
234: 'Rottweiler',
|
| 236 |
+
235: 'German shepherd, German shepherd dog, German police dog, alsatian',
|
| 237 |
+
236: 'Doberman, Doberman pinscher',
|
| 238 |
+
237: 'miniature pinscher',
|
| 239 |
+
238: 'Greater Swiss Mountain dog',
|
| 240 |
+
239: 'Bernese mountain dog',
|
| 241 |
+
240: 'Appenzeller',
|
| 242 |
+
241: 'EntleBucher',
|
| 243 |
+
242: 'boxer',
|
| 244 |
+
243: 'bull mastiff',
|
| 245 |
+
244: 'Tibetan mastiff',
|
| 246 |
+
245: 'French bulldog',
|
| 247 |
+
246: 'Great Dane',
|
| 248 |
+
247: 'Saint Bernard, St Bernard',
|
| 249 |
+
248: 'Eskimo dog, husky',
|
| 250 |
+
249: 'malamute, malemute, Alaskan malamute',
|
| 251 |
+
250: 'Siberian husky',
|
| 252 |
+
251: 'dalmatian, coach dog, carriage dog',
|
| 253 |
+
252: 'affenpinscher, monkey pinscher, monkey dog',
|
| 254 |
+
253: 'basenji',
|
| 255 |
+
254: 'pug, pug-dog',
|
| 256 |
+
255: 'Leonberg',
|
| 257 |
+
256: 'Newfoundland, Newfoundland dog',
|
| 258 |
+
257: 'Great Pyrenees',
|
| 259 |
+
258: 'Samoyed, Samoyede',
|
| 260 |
+
259: 'Pomeranian',
|
| 261 |
+
260: 'chow, chow chow',
|
| 262 |
+
261: 'keeshond',
|
| 263 |
+
262: 'Brabancon griffon',
|
| 264 |
+
263: 'Pembroke, Pembroke Welsh corgi',
|
| 265 |
+
264: 'Cardigan, Cardigan Welsh corgi',
|
| 266 |
+
265: 'toy poodle',
|
| 267 |
+
266: 'miniature poodle',
|
| 268 |
+
267: 'standard poodle',
|
| 269 |
+
268: 'Mexican hairless',
|
| 270 |
+
269: 'timber wolf, grey wolf, gray wolf, Canis lupus',
|
| 271 |
+
270: 'white wolf, Arctic wolf, Canis lupus tundrarum',
|
| 272 |
+
271: 'red wolf, maned wolf, Canis rufus, Canis niger',
|
| 273 |
+
272: 'coyote, prairie wolf, brush wolf, Canis latrans',
|
| 274 |
+
273: 'dingo, warrigal, warragal, Canis dingo',
|
| 275 |
+
274: 'dhole, Cuon alpinus',
|
| 276 |
+
275: 'African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus',
|
| 277 |
+
276: 'hyena, hyaena',
|
| 278 |
+
277: 'red fox, Vulpes vulpes',
|
| 279 |
+
278: 'kit fox, Vulpes macrotis',
|
| 280 |
+
279: 'Arctic fox, white fox, Alopex lagopus',
|
| 281 |
+
280: 'grey fox, gray fox, Urocyon cinereoargenteus',
|
| 282 |
+
281: 'tabby, tabby cat',
|
| 283 |
+
282: 'tiger cat',
|
| 284 |
+
283: 'Persian cat',
|
| 285 |
+
284: 'Siamese cat, Siamese',
|
| 286 |
+
285: 'Egyptian cat',
|
| 287 |
+
286: 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor',
|
| 288 |
+
287: 'lynx, catamount',
|
| 289 |
+
288: 'leopard, Panthera pardus',
|
| 290 |
+
289: 'snow leopard, ounce, Panthera uncia',
|
| 291 |
+
290: 'jaguar, panther, Panthera onca, Felis onca',
|
| 292 |
+
291: 'lion, king of beasts, Panthera leo',
|
| 293 |
+
292: 'tiger, Panthera tigris',
|
| 294 |
+
293: 'cheetah, chetah, Acinonyx jubatus',
|
| 295 |
+
294: 'brown bear, bruin, Ursus arctos',
|
| 296 |
+
295: 'American black bear, black bear, Ursus americanus, Euarctos americanus',
|
| 297 |
+
296: 'ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus',
|
| 298 |
+
297: 'sloth bear, Melursus ursinus, Ursus ursinus',
|
| 299 |
+
298: 'mongoose',
|
| 300 |
+
299: 'meerkat, mierkat',
|
| 301 |
+
300: 'tiger beetle',
|
| 302 |
+
301: 'ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle',
|
| 303 |
+
302: 'ground beetle, carabid beetle',
|
| 304 |
+
303: 'long-horned beetle, longicorn, longicorn beetle',
|
| 305 |
+
304: 'leaf beetle, chrysomelid',
|
| 306 |
+
305: 'dung beetle',
|
| 307 |
+
306: 'rhinoceros beetle',
|
| 308 |
+
307: 'weevil',
|
| 309 |
+
308: 'fly',
|
| 310 |
+
309: 'bee',
|
| 311 |
+
310: 'ant, emmet, pismire',
|
| 312 |
+
311: 'grasshopper, hopper',
|
| 313 |
+
312: 'cricket',
|
| 314 |
+
313: 'walking stick, walkingstick, stick insect',
|
| 315 |
+
314: 'cockroach, roach',
|
| 316 |
+
315: 'mantis, mantid',
|
| 317 |
+
316: 'cicada, cicala',
|
| 318 |
+
317: 'leafhopper',
|
| 319 |
+
318: 'lacewing, lacewing fly',
|
| 320 |
+
319: "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
|
| 321 |
+
320: 'damselfly',
|
| 322 |
+
321: 'admiral',
|
| 323 |
+
322: 'ringlet, ringlet butterfly',
|
| 324 |
+
323: 'monarch, monarch butterfly, milkweed butterfly, Danaus plexippus',
|
| 325 |
+
324: 'cabbage butterfly',
|
| 326 |
+
325: 'sulphur butterfly, sulfur butterfly',
|
| 327 |
+
326: 'lycaenid, lycaenid butterfly',
|
| 328 |
+
327: 'starfish, sea star',
|
| 329 |
+
328: 'sea urchin',
|
| 330 |
+
329: 'sea cucumber, holothurian',
|
| 331 |
+
330: 'wood rabbit, cottontail, cottontail rabbit',
|
| 332 |
+
331: 'hare',
|
| 333 |
+
332: 'Angora, Angora rabbit',
|
| 334 |
+
333: 'hamster',
|
| 335 |
+
334: 'porcupine, hedgehog',
|
| 336 |
+
335: 'fox squirrel, eastern fox squirrel, Sciurus niger',
|
| 337 |
+
336: 'marmot',
|
| 338 |
+
337: 'beaver',
|
| 339 |
+
338: 'guinea pig, Cavia cobaya',
|
| 340 |
+
339: 'sorrel',
|
| 341 |
+
340: 'zebra',
|
| 342 |
+
341: 'hog, pig, grunter, squealer, Sus scrofa',
|
| 343 |
+
342: 'wild boar, boar, Sus scrofa',
|
| 344 |
+
343: 'warthog',
|
| 345 |
+
344: 'hippopotamus, hippo, river horse, Hippopotamus amphibius',
|
| 346 |
+
345: 'ox',
|
| 347 |
+
346: 'water buffalo, water ox, Asiatic buffalo, Bubalus bubalis',
|
| 348 |
+
347: 'bison',
|
| 349 |
+
348: 'ram, tup',
|
| 350 |
+
349: 'bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis',
|
| 351 |
+
350: 'ibex, Capra ibex',
|
| 352 |
+
351: 'hartebeest',
|
| 353 |
+
352: 'impala, Aepyceros melampus',
|
| 354 |
+
353: 'gazelle',
|
| 355 |
+
354: 'Arabian camel, dromedary, Camelus dromedarius',
|
| 356 |
+
355: 'llama',
|
| 357 |
+
356: 'weasel',
|
| 358 |
+
357: 'mink',
|
| 359 |
+
358: 'polecat, fitch, foulmart, foumart, Mustela putorius',
|
| 360 |
+
359: 'black-footed ferret, ferret, Mustela nigripes',
|
| 361 |
+
360: 'otter',
|
| 362 |
+
361: 'skunk, polecat, wood pussy',
|
| 363 |
+
362: 'badger',
|
| 364 |
+
363: 'armadillo',
|
| 365 |
+
364: 'three-toed sloth, ai, Bradypus tridactylus',
|
| 366 |
+
365: 'orangutan, orang, orangutang, Pongo pygmaeus',
|
| 367 |
+
366: 'gorilla, Gorilla gorilla',
|
| 368 |
+
367: 'chimpanzee, chimp, Pan troglodytes',
|
| 369 |
+
368: 'gibbon, Hylobates lar',
|
| 370 |
+
369: 'siamang, Hylobates syndactylus, Symphalangus syndactylus',
|
| 371 |
+
370: 'guenon, guenon monkey',
|
| 372 |
+
371: 'patas, hussar monkey, Erythrocebus patas',
|
| 373 |
+
372: 'baboon',
|
| 374 |
+
373: 'macaque',
|
| 375 |
+
374: 'langur',
|
| 376 |
+
375: 'colobus, colobus monkey',
|
| 377 |
+
376: 'proboscis monkey, Nasalis larvatus',
|
| 378 |
+
377: 'marmoset',
|
| 379 |
+
378: 'capuchin, ringtail, Cebus capucinus',
|
| 380 |
+
379: 'howler monkey, howler',
|
| 381 |
+
380: 'titi, titi monkey',
|
| 382 |
+
381: 'spider monkey, Ateles geoffroyi',
|
| 383 |
+
382: 'squirrel monkey, Saimiri sciureus',
|
| 384 |
+
383: 'Madagascar cat, ring-tailed lemur, Lemur catta',
|
| 385 |
+
384: 'indri, indris, Indri indri, Indri brevicaudatus',
|
| 386 |
+
385: 'Indian elephant, Elephas maximus',
|
| 387 |
+
386: 'African elephant, Loxodonta africana',
|
| 388 |
+
387: 'lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens',
|
| 389 |
+
388: 'giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca',
|
| 390 |
+
389: 'barracouta, snoek',
|
| 391 |
+
390: 'eel',
|
| 392 |
+
391: 'coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch',
|
| 393 |
+
392: 'rock beauty, Holocanthus tricolor',
|
| 394 |
+
393: 'anemone fish',
|
| 395 |
+
394: 'sturgeon',
|
| 396 |
+
395: 'gar, garfish, garpike, billfish, Lepisosteus osseus',
|
| 397 |
+
396: 'lionfish',
|
| 398 |
+
397: 'puffer, pufferfish, blowfish, globefish',
|
| 399 |
+
398: 'abacus',
|
| 400 |
+
399: 'abaya',
|
| 401 |
+
400: "academic gown, academic robe, judge's robe",
|
| 402 |
+
401: 'accordion, piano accordion, squeeze box',
|
| 403 |
+
402: 'acoustic guitar',
|
| 404 |
+
403: 'aircraft carrier, carrier, flattop, attack aircraft carrier',
|
| 405 |
+
404: 'airliner',
|
| 406 |
+
405: 'airship, dirigible',
|
| 407 |
+
406: 'altar',
|
| 408 |
+
407: 'ambulance',
|
| 409 |
+
408: 'amphibian, amphibious vehicle',
|
| 410 |
+
409: 'analog clock',
|
| 411 |
+
410: 'apiary, bee house',
|
| 412 |
+
411: 'apron',
|
| 413 |
+
412: 'ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin',
|
| 414 |
+
413: 'assault rifle, assault gun',
|
| 415 |
+
414: 'backpack, back pack, knapsack, packsack, rucksack, haversack',
|
| 416 |
+
415: 'bakery, bakeshop, bakehouse',
|
| 417 |
+
416: 'balance beam, beam',
|
| 418 |
+
417: 'balloon',
|
| 419 |
+
418: 'ballpoint, ballpoint pen, ballpen, Biro',
|
| 420 |
+
419: 'Band Aid',
|
| 421 |
+
420: 'banjo',
|
| 422 |
+
421: 'bannister, banister, balustrade, balusters, handrail',
|
| 423 |
+
422: 'barbell',
|
| 424 |
+
423: 'barber chair',
|
| 425 |
+
424: 'barbershop',
|
| 426 |
+
425: 'barn',
|
| 427 |
+
426: 'barometer',
|
| 428 |
+
427: 'barrel, cask',
|
| 429 |
+
428: 'barrow, garden cart, lawn cart, wheelbarrow',
|
| 430 |
+
429: 'baseball',
|
| 431 |
+
430: 'basketball',
|
| 432 |
+
431: 'bassinet',
|
| 433 |
+
432: 'bassoon',
|
| 434 |
+
433: 'bathing cap, swimming cap',
|
| 435 |
+
434: 'bath towel',
|
| 436 |
+
435: 'bathtub, bathing tub, bath, tub',
|
| 437 |
+
436: 'beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon',
|
| 438 |
+
437: 'beacon, lighthouse, beacon light, pharos',
|
| 439 |
+
438: 'beaker',
|
| 440 |
+
439: 'bearskin, busby, shako',
|
| 441 |
+
440: 'beer bottle',
|
| 442 |
+
441: 'beer glass',
|
| 443 |
+
442: 'bell cote, bell cot',
|
| 444 |
+
443: 'bib',
|
| 445 |
+
444: 'bicycle-built-for-two, tandem bicycle, tandem',
|
| 446 |
+
445: 'bikini, two-piece',
|
| 447 |
+
446: 'binder, ring-binder',
|
| 448 |
+
447: 'binoculars, field glasses, opera glasses',
|
| 449 |
+
448: 'birdhouse',
|
| 450 |
+
449: 'boathouse',
|
| 451 |
+
450: 'bobsled, bobsleigh, bob',
|
| 452 |
+
451: 'bolo tie, bolo, bola tie, bola',
|
| 453 |
+
452: 'bonnet, poke bonnet',
|
| 454 |
+
453: 'bookcase',
|
| 455 |
+
454: 'bookshop, bookstore, bookstall',
|
| 456 |
+
455: 'bottlecap',
|
| 457 |
+
456: 'bow',
|
| 458 |
+
457: 'bow tie, bow-tie, bowtie',
|
| 459 |
+
458: 'brass, memorial tablet, plaque',
|
| 460 |
+
459: 'brassiere, bra, bandeau',
|
| 461 |
+
460: 'breakwater, groin, groyne, mole, bulwark, seawall, jetty',
|
| 462 |
+
461: 'breastplate, aegis, egis',
|
| 463 |
+
462: 'broom',
|
| 464 |
+
463: 'bucket, pail',
|
| 465 |
+
464: 'buckle',
|
| 466 |
+
465: 'bulletproof vest',
|
| 467 |
+
466: 'bullet train, bullet',
|
| 468 |
+
467: 'butcher shop, meat market',
|
| 469 |
+
468: 'cab, hack, taxi, taxicab',
|
| 470 |
+
469: 'caldron, cauldron',
|
| 471 |
+
470: 'candle, taper, wax light',
|
| 472 |
+
471: 'cannon',
|
| 473 |
+
472: 'canoe',
|
| 474 |
+
473: 'can opener, tin opener',
|
| 475 |
+
474: 'cardigan',
|
| 476 |
+
475: 'car mirror',
|
| 477 |
+
476: 'carousel, carrousel, merry-go-round, roundabout, whirligig',
|
| 478 |
+
477: "carpenter's kit, tool kit",
|
| 479 |
+
478: 'carton',
|
| 480 |
+
479: 'car wheel',
|
| 481 |
+
480: 'cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM',
|
| 482 |
+
481: 'cassette',
|
| 483 |
+
482: 'cassette player',
|
| 484 |
+
483: 'castle',
|
| 485 |
+
484: 'catamaran',
|
| 486 |
+
485: 'CD player',
|
| 487 |
+
486: 'cello, violoncello',
|
| 488 |
+
487: 'cellular telephone, cellular phone, cellphone, cell, mobile phone',
|
| 489 |
+
488: 'chain',
|
| 490 |
+
489: 'chainlink fence',
|
| 491 |
+
490: 'chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour',
|
| 492 |
+
491: 'chain saw, chainsaw',
|
| 493 |
+
492: 'chest',
|
| 494 |
+
493: 'chiffonier, commode',
|
| 495 |
+
494: 'chime, bell, gong',
|
| 496 |
+
495: 'china cabinet, china closet',
|
| 497 |
+
496: 'Christmas stocking',
|
| 498 |
+
497: 'church, church building',
|
| 499 |
+
498: 'cinema, movie theater, movie theatre, movie house, picture palace',
|
| 500 |
+
499: 'cleaver, meat cleaver, chopper',
|
| 501 |
+
500: 'cliff dwelling',
|
| 502 |
+
501: 'cloak',
|
| 503 |
+
502: 'clog, geta, patten, sabot',
|
| 504 |
+
503: 'cocktail shaker',
|
| 505 |
+
504: 'coffee mug',
|
| 506 |
+
505: 'coffeepot',
|
| 507 |
+
506: 'coil, spiral, volute, whorl, helix',
|
| 508 |
+
507: 'combination lock',
|
| 509 |
+
508: 'computer keyboard, keypad',
|
| 510 |
+
509: 'confectionery, confectionary, candy store',
|
| 511 |
+
510: 'container ship, containership, container vessel',
|
| 512 |
+
511: 'convertible',
|
| 513 |
+
512: 'corkscrew, bottle screw',
|
| 514 |
+
513: 'cornet, horn, trumpet, trump',
|
| 515 |
+
514: 'cowboy boot',
|
| 516 |
+
515: 'cowboy hat, ten-gallon hat',
|
| 517 |
+
516: 'cradle',
|
| 518 |
+
517: 'crane',
|
| 519 |
+
518: 'crash helmet',
|
| 520 |
+
519: 'crate',
|
| 521 |
+
520: 'crib, cot',
|
| 522 |
+
521: 'Crock Pot',
|
| 523 |
+
522: 'croquet ball',
|
| 524 |
+
523: 'crutch',
|
| 525 |
+
524: 'cuirass',
|
| 526 |
+
525: 'dam, dike, dyke',
|
| 527 |
+
526: 'desk',
|
| 528 |
+
527: 'desktop computer',
|
| 529 |
+
528: 'dial telephone, dial phone',
|
| 530 |
+
529: 'diaper, nappy, napkin',
|
| 531 |
+
530: 'digital clock',
|
| 532 |
+
531: 'digital watch',
|
| 533 |
+
532: 'dining table, board',
|
| 534 |
+
533: 'dishrag, dishcloth',
|
| 535 |
+
534: 'dishwasher, dish washer, dishwashing machine',
|
| 536 |
+
535: 'disk brake, disc brake',
|
| 537 |
+
536: 'dock, dockage, docking facility',
|
| 538 |
+
537: 'dogsled, dog sled, dog sleigh',
|
| 539 |
+
538: 'dome',
|
| 540 |
+
539: 'doormat, welcome mat',
|
| 541 |
+
540: 'drilling platform, offshore rig',
|
| 542 |
+
541: 'drum, membranophone, tympan',
|
| 543 |
+
542: 'drumstick',
|
| 544 |
+
543: 'dumbbell',
|
| 545 |
+
544: 'Dutch oven',
|
| 546 |
+
545: 'electric fan, blower',
|
| 547 |
+
546: 'electric guitar',
|
| 548 |
+
547: 'electric locomotive',
|
| 549 |
+
548: 'entertainment center',
|
| 550 |
+
549: 'envelope',
|
| 551 |
+
550: 'espresso maker',
|
| 552 |
+
551: 'face powder',
|
| 553 |
+
552: 'feather boa, boa',
|
| 554 |
+
553: 'file, file cabinet, filing cabinet',
|
| 555 |
+
554: 'fireboat',
|
| 556 |
+
555: 'fire engine, fire truck',
|
| 557 |
+
556: 'fire screen, fireguard',
|
| 558 |
+
557: 'flagpole, flagstaff',
|
| 559 |
+
558: 'flute, transverse flute',
|
| 560 |
+
559: 'folding chair',
|
| 561 |
+
560: 'football helmet',
|
| 562 |
+
561: 'forklift',
|
| 563 |
+
562: 'fountain',
|
| 564 |
+
563: 'fountain pen',
|
| 565 |
+
564: 'four-poster',
|
| 566 |
+
565: 'freight car',
|
| 567 |
+
566: 'French horn, horn',
|
| 568 |
+
567: 'frying pan, frypan, skillet',
|
| 569 |
+
568: 'fur coat',
|
| 570 |
+
569: 'garbage truck, dustcart',
|
| 571 |
+
570: 'gasmask, respirator, gas helmet',
|
| 572 |
+
571: 'gas pump, gasoline pump, petrol pump, island dispenser',
|
| 573 |
+
572: 'goblet',
|
| 574 |
+
573: 'go-kart',
|
| 575 |
+
574: 'golf ball',
|
| 576 |
+
575: 'golfcart, golf cart',
|
| 577 |
+
576: 'gondola',
|
| 578 |
+
577: 'gong, tam-tam',
|
| 579 |
+
578: 'gown',
|
| 580 |
+
579: 'grand piano, grand',
|
| 581 |
+
580: 'greenhouse, nursery, glasshouse',
|
| 582 |
+
581: 'grille, radiator grille',
|
| 583 |
+
582: 'grocery store, grocery, food market, market',
|
| 584 |
+
583: 'guillotine',
|
| 585 |
+
584: 'hair slide',
|
| 586 |
+
585: 'hair spray',
|
| 587 |
+
586: 'half track',
|
| 588 |
+
587: 'hammer',
|
| 589 |
+
588: 'hamper',
|
| 590 |
+
589: 'hand blower, blow dryer, blow drier, hair dryer, hair drier',
|
| 591 |
+
590: 'hand-held computer, hand-held microcomputer',
|
| 592 |
+
591: 'handkerchief, hankie, hanky, hankey',
|
| 593 |
+
592: 'hard disc, hard disk, fixed disk',
|
| 594 |
+
593: 'harmonica, mouth organ, harp, mouth harp',
|
| 595 |
+
594: 'harp',
|
| 596 |
+
595: 'harvester, reaper',
|
| 597 |
+
596: 'hatchet',
|
| 598 |
+
597: 'holster',
|
| 599 |
+
598: 'home theater, home theatre',
|
| 600 |
+
599: 'honeycomb',
|
| 601 |
+
600: 'hook, claw',
|
| 602 |
+
601: 'hoopskirt, crinoline',
|
| 603 |
+
602: 'horizontal bar, high bar',
|
| 604 |
+
603: 'horse cart, horse-cart',
|
| 605 |
+
604: 'hourglass',
|
| 606 |
+
605: 'iPod',
|
| 607 |
+
606: 'iron, smoothing iron',
|
| 608 |
+
607: "jack-o'-lantern",
|
| 609 |
+
608: 'jean, blue jean, denim',
|
| 610 |
+
609: 'jeep, landrover',
|
| 611 |
+
610: 'jersey, T-shirt, tee shirt',
|
| 612 |
+
611: 'jigsaw puzzle',
|
| 613 |
+
612: 'jinrikisha, ricksha, rickshaw',
|
| 614 |
+
613: 'joystick',
|
| 615 |
+
614: 'kimono',
|
| 616 |
+
615: 'knee pad',
|
| 617 |
+
616: 'knot',
|
| 618 |
+
617: 'lab coat, laboratory coat',
|
| 619 |
+
618: 'ladle',
|
| 620 |
+
619: 'lampshade, lamp shade',
|
| 621 |
+
620: 'laptop, laptop computer',
|
| 622 |
+
621: 'lawn mower, mower',
|
| 623 |
+
622: 'lens cap, lens cover',
|
| 624 |
+
623: 'letter opener, paper knife, paperknife',
|
| 625 |
+
624: 'library',
|
| 626 |
+
625: 'lifeboat',
|
| 627 |
+
626: 'lighter, light, igniter, ignitor',
|
| 628 |
+
627: 'limousine, limo',
|
| 629 |
+
628: 'liner, ocean liner',
|
| 630 |
+
629: 'lipstick, lip rouge',
|
| 631 |
+
630: 'Loafer',
|
| 632 |
+
631: 'lotion',
|
| 633 |
+
632: 'loudspeaker, speaker, speaker unit, loudspeaker system, speaker system',
|
| 634 |
+
633: "loupe, jeweler's loupe",
|
| 635 |
+
634: 'lumbermill, sawmill',
|
| 636 |
+
635: 'magnetic compass',
|
| 637 |
+
636: 'mailbag, postbag',
|
| 638 |
+
637: 'mailbox, letter box',
|
| 639 |
+
638: 'maillot',
|
| 640 |
+
639: 'maillot, tank suit',
|
| 641 |
+
640: 'manhole cover',
|
| 642 |
+
641: 'maraca',
|
| 643 |
+
642: 'marimba, xylophone',
|
| 644 |
+
643: 'mask',
|
| 645 |
+
644: 'matchstick',
|
| 646 |
+
645: 'maypole',
|
| 647 |
+
646: 'maze, labyrinth',
|
| 648 |
+
647: 'measuring cup',
|
| 649 |
+
648: 'medicine chest, medicine cabinet',
|
| 650 |
+
649: 'megalith, megalithic structure',
|
| 651 |
+
650: 'microphone, mike',
|
| 652 |
+
651: 'microwave, microwave oven',
|
| 653 |
+
652: 'military uniform',
|
| 654 |
+
653: 'milk can',
|
| 655 |
+
654: 'minibus',
|
| 656 |
+
655: 'miniskirt, mini',
|
| 657 |
+
656: 'minivan',
|
| 658 |
+
657: 'missile',
|
| 659 |
+
658: 'mitten',
|
| 660 |
+
659: 'mixing bowl',
|
| 661 |
+
660: 'mobile home, manufactured home',
|
| 662 |
+
661: 'Model T',
|
| 663 |
+
662: 'modem',
|
| 664 |
+
663: 'monastery',
|
| 665 |
+
664: 'monitor',
|
| 666 |
+
665: 'moped',
|
| 667 |
+
666: 'mortar',
|
| 668 |
+
667: 'mortarboard',
|
| 669 |
+
668: 'mosque',
|
| 670 |
+
669: 'mosquito net',
|
| 671 |
+
670: 'motor scooter, scooter',
|
| 672 |
+
671: 'mountain bike, all-terrain bike, off-roader',
|
| 673 |
+
672: 'mountain tent',
|
| 674 |
+
673: 'mouse, computer mouse',
|
| 675 |
+
674: 'mousetrap',
|
| 676 |
+
675: 'moving van',
|
| 677 |
+
676: 'muzzle',
|
| 678 |
+
677: 'nail',
|
| 679 |
+
678: 'neck brace',
|
| 680 |
+
679: 'necklace',
|
| 681 |
+
680: 'nipple',
|
| 682 |
+
681: 'notebook, notebook computer',
|
| 683 |
+
682: 'obelisk',
|
| 684 |
+
683: 'oboe, hautboy, hautbois',
|
| 685 |
+
684: 'ocarina, sweet potato',
|
| 686 |
+
685: 'odometer, hodometer, mileometer, milometer',
|
| 687 |
+
686: 'oil filter',
|
| 688 |
+
687: 'organ, pipe organ',
|
| 689 |
+
688: 'oscilloscope, scope, cathode-ray oscilloscope, CRO',
|
| 690 |
+
689: 'overskirt',
|
| 691 |
+
690: 'oxcart',
|
| 692 |
+
691: 'oxygen mask',
|
| 693 |
+
692: 'packet',
|
| 694 |
+
693: 'paddle, boat paddle',
|
| 695 |
+
694: 'paddlewheel, paddle wheel',
|
| 696 |
+
695: 'padlock',
|
| 697 |
+
696: 'paintbrush',
|
| 698 |
+
697: "pajama, pyjama, pj's, jammies",
|
| 699 |
+
698: 'palace',
|
| 700 |
+
699: 'panpipe, pandean pipe, syrinx',
|
| 701 |
+
700: 'paper towel',
|
| 702 |
+
701: 'parachute, chute',
|
| 703 |
+
702: 'parallel bars, bars',
|
| 704 |
+
703: 'park bench',
|
| 705 |
+
704: 'parking meter',
|
| 706 |
+
705: 'passenger car, coach, carriage',
|
| 707 |
+
706: 'patio, terrace',
|
| 708 |
+
707: 'pay-phone, pay-station',
|
| 709 |
+
708: 'pedestal, plinth, footstall',
|
| 710 |
+
709: 'pencil box, pencil case',
|
| 711 |
+
710: 'pencil sharpener',
|
| 712 |
+
711: 'perfume, essence',
|
| 713 |
+
712: 'Petri dish',
|
| 714 |
+
713: 'photocopier',
|
| 715 |
+
714: 'pick, plectrum, plectron',
|
| 716 |
+
715: 'pickelhaube',
|
| 717 |
+
716: 'picket fence, paling',
|
| 718 |
+
717: 'pickup, pickup truck',
|
| 719 |
+
718: 'pier',
|
| 720 |
+
719: 'piggy bank, penny bank',
|
| 721 |
+
720: 'pill bottle',
|
| 722 |
+
721: 'pillow',
|
| 723 |
+
722: 'ping-pong ball',
|
| 724 |
+
723: 'pinwheel',
|
| 725 |
+
724: 'pirate, pirate ship',
|
| 726 |
+
725: 'pitcher, ewer',
|
| 727 |
+
726: "plane, carpenter's plane, woodworking plane",
|
| 728 |
+
727: 'planetarium',
|
| 729 |
+
728: 'plastic bag',
|
| 730 |
+
729: 'plate rack',
|
| 731 |
+
730: 'plow, plough',
|
| 732 |
+
731: "plunger, plumber's helper",
|
| 733 |
+
732: 'Polaroid camera, Polaroid Land camera',
|
| 734 |
+
733: 'pole',
|
| 735 |
+
734: 'police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria',
|
| 736 |
+
735: 'poncho',
|
| 737 |
+
736: 'pool table, billiard table, snooker table',
|
| 738 |
+
737: 'pop bottle, soda bottle',
|
| 739 |
+
738: 'pot, flowerpot',
|
| 740 |
+
739: "potter's wheel",
|
| 741 |
+
740: 'power drill',
|
| 742 |
+
741: 'prayer rug, prayer mat',
|
| 743 |
+
742: 'printer',
|
| 744 |
+
743: 'prison, prison house',
|
| 745 |
+
744: 'projectile, missile',
|
| 746 |
+
745: 'projector',
|
| 747 |
+
746: 'puck, hockey puck',
|
| 748 |
+
747: 'punching bag, punch bag, punching ball, punchball',
|
| 749 |
+
748: 'purse',
|
| 750 |
+
749: 'quill, quill pen',
|
| 751 |
+
750: 'quilt, comforter, comfort, puff',
|
| 752 |
+
751: 'racer, race car, racing car',
|
| 753 |
+
752: 'racket, racquet',
|
| 754 |
+
753: 'radiator',
|
| 755 |
+
754: 'radio, wireless',
|
| 756 |
+
755: 'radio telescope, radio reflector',
|
| 757 |
+
756: 'rain barrel',
|
| 758 |
+
757: 'recreational vehicle, RV, R.V.',
|
| 759 |
+
758: 'reel',
|
| 760 |
+
759: 'reflex camera',
|
| 761 |
+
760: 'refrigerator, icebox',
|
| 762 |
+
761: 'remote control, remote',
|
| 763 |
+
762: 'restaurant, eating house, eating place, eatery',
|
| 764 |
+
763: 'revolver, six-gun, six-shooter',
|
| 765 |
+
764: 'rifle',
|
| 766 |
+
765: 'rocking chair, rocker',
|
| 767 |
+
766: 'rotisserie',
|
| 768 |
+
767: 'rubber eraser, rubber, pencil eraser',
|
| 769 |
+
768: 'rugby ball',
|
| 770 |
+
769: 'rule, ruler',
|
| 771 |
+
770: 'running shoe',
|
| 772 |
+
771: 'safe',
|
| 773 |
+
772: 'safety pin',
|
| 774 |
+
773: 'saltshaker, salt shaker',
|
| 775 |
+
774: 'sandal',
|
| 776 |
+
775: 'sarong',
|
| 777 |
+
776: 'sax, saxophone',
|
| 778 |
+
777: 'scabbard',
|
| 779 |
+
778: 'scale, weighing machine',
|
| 780 |
+
779: 'school bus',
|
| 781 |
+
780: 'schooner',
|
| 782 |
+
781: 'scoreboard',
|
| 783 |
+
782: 'screen, CRT screen',
|
| 784 |
+
783: 'screw',
|
| 785 |
+
784: 'screwdriver',
|
| 786 |
+
785: 'seat belt, seatbelt',
|
| 787 |
+
786: 'sewing machine',
|
| 788 |
+
787: 'shield, buckler',
|
| 789 |
+
788: 'shoe shop, shoe-shop, shoe store',
|
| 790 |
+
789: 'shoji',
|
| 791 |
+
790: 'shopping basket',
|
| 792 |
+
791: 'shopping cart',
|
| 793 |
+
792: 'shovel',
|
| 794 |
+
793: 'shower cap',
|
| 795 |
+
794: 'shower curtain',
|
| 796 |
+
795: 'ski',
|
| 797 |
+
796: 'ski mask',
|
| 798 |
+
797: 'sleeping bag',
|
| 799 |
+
798: 'slide rule, slipstick',
|
| 800 |
+
799: 'sliding door',
|
| 801 |
+
800: 'slot, one-armed bandit',
|
| 802 |
+
801: 'snorkel',
|
| 803 |
+
802: 'snowmobile',
|
| 804 |
+
803: 'snowplow, snowplough',
|
| 805 |
+
804: 'soap dispenser',
|
| 806 |
+
805: 'soccer ball',
|
| 807 |
+
806: 'sock',
|
| 808 |
+
807: 'solar dish, solar collector, solar furnace',
|
| 809 |
+
808: 'sombrero',
|
| 810 |
+
809: 'soup bowl',
|
| 811 |
+
810: 'space bar',
|
| 812 |
+
811: 'space heater',
|
| 813 |
+
812: 'space shuttle',
|
| 814 |
+
813: 'spatula',
|
| 815 |
+
814: 'speedboat',
|
| 816 |
+
815: "spider web, spider's web",
|
| 817 |
+
816: 'spindle',
|
| 818 |
+
817: 'sports car, sport car',
|
| 819 |
+
818: 'spotlight, spot',
|
| 820 |
+
819: 'stage',
|
| 821 |
+
820: 'steam locomotive',
|
| 822 |
+
821: 'steel arch bridge',
|
| 823 |
+
822: 'steel drum',
|
| 824 |
+
823: 'stethoscope',
|
| 825 |
+
824: 'stole',
|
| 826 |
+
825: 'stone wall',
|
| 827 |
+
826: 'stopwatch, stop watch',
|
| 828 |
+
827: 'stove',
|
| 829 |
+
828: 'strainer',
|
| 830 |
+
829: 'streetcar, tram, tramcar, trolley, trolley car',
|
| 831 |
+
830: 'stretcher',
|
| 832 |
+
831: 'studio couch, day bed',
|
| 833 |
+
832: 'stupa, tope',
|
| 834 |
+
833: 'submarine, pigboat, sub, U-boat',
|
| 835 |
+
834: 'suit, suit of clothes',
|
| 836 |
+
835: 'sundial',
|
| 837 |
+
836: 'sunglass',
|
| 838 |
+
837: 'sunglasses, dark glasses, shades',
|
| 839 |
+
838: 'sunscreen, sunblock, sun blocker',
|
| 840 |
+
839: 'suspension bridge',
|
| 841 |
+
840: 'swab, swob, mop',
|
| 842 |
+
841: 'sweatshirt',
|
| 843 |
+
842: 'swimming trunks, bathing trunks',
|
| 844 |
+
843: 'swing',
|
| 845 |
+
844: 'switch, electric switch, electrical switch',
|
| 846 |
+
845: 'syringe',
|
| 847 |
+
846: 'table lamp',
|
| 848 |
+
847: 'tank, army tank, armored combat vehicle, armoured combat vehicle',
|
| 849 |
+
848: 'tape player',
|
| 850 |
+
849: 'teapot',
|
| 851 |
+
850: 'teddy, teddy bear',
|
| 852 |
+
851: 'television, television system',
|
| 853 |
+
852: 'tennis ball',
|
| 854 |
+
853: 'thatch, thatched roof',
|
| 855 |
+
854: 'theater curtain, theatre curtain',
|
| 856 |
+
855: 'thimble',
|
| 857 |
+
856: 'thresher, thrasher, threshing machine',
|
| 858 |
+
857: 'throne',
|
| 859 |
+
858: 'tile roof',
|
| 860 |
+
859: 'toaster',
|
| 861 |
+
860: 'tobacco shop, tobacconist shop, tobacconist',
|
| 862 |
+
861: 'toilet seat',
|
| 863 |
+
862: 'torch',
|
| 864 |
+
863: 'totem pole',
|
| 865 |
+
864: 'tow truck, tow car, wrecker',
|
| 866 |
+
865: 'toyshop',
|
| 867 |
+
866: 'tractor',
|
| 868 |
+
867: 'trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi',
|
| 869 |
+
868: 'tray',
|
| 870 |
+
869: 'trench coat',
|
| 871 |
+
870: 'tricycle, trike, velocipede',
|
| 872 |
+
871: 'trimaran',
|
| 873 |
+
872: 'tripod',
|
| 874 |
+
873: 'triumphal arch',
|
| 875 |
+
874: 'trolleybus, trolley coach, trackless trolley',
|
| 876 |
+
875: 'trombone',
|
| 877 |
+
876: 'tub, vat',
|
| 878 |
+
877: 'turnstile',
|
| 879 |
+
878: 'typewriter keyboard',
|
| 880 |
+
879: 'umbrella',
|
| 881 |
+
880: 'unicycle, monocycle',
|
| 882 |
+
881: 'upright, upright piano',
|
| 883 |
+
882: 'vacuum, vacuum cleaner',
|
| 884 |
+
883: 'vase',
|
| 885 |
+
884: 'vault',
|
| 886 |
+
885: 'velvet',
|
| 887 |
+
886: 'vending machine',
|
| 888 |
+
887: 'vestment',
|
| 889 |
+
888: 'viaduct',
|
| 890 |
+
889: 'violin, fiddle',
|
| 891 |
+
890: 'volleyball',
|
| 892 |
+
891: 'waffle iron',
|
| 893 |
+
892: 'wall clock',
|
| 894 |
+
893: 'wallet, billfold, notecase, pocketbook',
|
| 895 |
+
894: 'wardrobe, closet, press',
|
| 896 |
+
895: 'warplane, military plane',
|
| 897 |
+
896: 'washbasin, handbasin, washbowl, lavabo, wash-hand basin',
|
| 898 |
+
897: 'washer, automatic washer, washing machine',
|
| 899 |
+
898: 'water bottle',
|
| 900 |
+
899: 'water jug',
|
| 901 |
+
900: 'water tower',
|
| 902 |
+
901: 'whiskey jug',
|
| 903 |
+
902: 'whistle',
|
| 904 |
+
903: 'wig',
|
| 905 |
+
904: 'window screen',
|
| 906 |
+
905: 'window shade',
|
| 907 |
+
906: 'Windsor tie',
|
| 908 |
+
907: 'wine bottle',
|
| 909 |
+
908: 'wing',
|
| 910 |
+
909: 'wok',
|
| 911 |
+
910: 'wooden spoon',
|
| 912 |
+
911: 'wool, woolen, woollen',
|
| 913 |
+
912: 'worm fence, snake fence, snake-rail fence, Virginia fence',
|
| 914 |
+
913: 'wreck',
|
| 915 |
+
914: 'yawl',
|
| 916 |
+
915: 'yurt',
|
| 917 |
+
916: 'web site, website, internet site, site',
|
| 918 |
+
917: 'comic book',
|
| 919 |
+
918: 'crossword puzzle, crossword',
|
| 920 |
+
919: 'street sign',
|
| 921 |
+
920: 'traffic light, traffic signal, stoplight',
|
| 922 |
+
921: 'book jacket, dust cover, dust jacket, dust wrapper',
|
| 923 |
+
922: 'menu',
|
| 924 |
+
923: 'plate',
|
| 925 |
+
924: 'guacamole',
|
| 926 |
+
925: 'consomme',
|
| 927 |
+
926: 'hot pot, hotpot',
|
| 928 |
+
927: 'trifle',
|
| 929 |
+
928: 'ice cream, icecream',
|
| 930 |
+
929: 'ice lolly, lolly, lollipop, popsicle',
|
| 931 |
+
930: 'French loaf',
|
| 932 |
+
931: 'bagel, beigel',
|
| 933 |
+
932: 'pretzel',
|
| 934 |
+
933: 'cheeseburger',
|
| 935 |
+
934: 'hotdog, hot dog, red hot',
|
| 936 |
+
935: 'mashed potato',
|
| 937 |
+
936: 'head cabbage',
|
| 938 |
+
937: 'broccoli',
|
| 939 |
+
938: 'cauliflower',
|
| 940 |
+
939: 'zucchini, courgette',
|
| 941 |
+
940: 'spaghetti squash',
|
| 942 |
+
941: 'acorn squash',
|
| 943 |
+
942: 'butternut squash',
|
| 944 |
+
943: 'cucumber, cuke',
|
| 945 |
+
944: 'artichoke, globe artichoke',
|
| 946 |
+
945: 'bell pepper',
|
| 947 |
+
946: 'cardoon',
|
| 948 |
+
947: 'mushroom',
|
| 949 |
+
948: 'Granny Smith',
|
| 950 |
+
949: 'strawberry',
|
| 951 |
+
950: 'orange',
|
| 952 |
+
951: 'lemon',
|
| 953 |
+
952: 'fig',
|
| 954 |
+
953: 'pineapple, ananas',
|
| 955 |
+
954: 'banana',
|
| 956 |
+
955: 'jackfruit, jak, jack',
|
| 957 |
+
956: 'custard apple',
|
| 958 |
+
957: 'pomegranate',
|
| 959 |
+
958: 'hay',
|
| 960 |
+
959: 'carbonara',
|
| 961 |
+
960: 'chocolate sauce, chocolate syrup',
|
| 962 |
+
961: 'dough',
|
| 963 |
+
962: 'meat loaf, meatloaf',
|
| 964 |
+
963: 'pizza, pizza pie',
|
| 965 |
+
964: 'potpie',
|
| 966 |
+
965: 'burrito',
|
| 967 |
+
966: 'red wine',
|
| 968 |
+
967: 'espresso',
|
| 969 |
+
968: 'cup',
|
| 970 |
+
969: 'eggnog',
|
| 971 |
+
970: 'alp',
|
| 972 |
+
971: 'bubble',
|
| 973 |
+
972: 'cliff, drop, drop-off',
|
| 974 |
+
973: 'coral reef',
|
| 975 |
+
974: 'geyser',
|
| 976 |
+
975: 'lakeside, lakeshore',
|
| 977 |
+
976: 'promontory, headland, head, foreland',
|
| 978 |
+
977: 'sandbar, sand bar',
|
| 979 |
+
978: 'seashore, coast, seacoast, sea-coast',
|
| 980 |
+
979: 'valley, vale',
|
| 981 |
+
980: 'volcano',
|
| 982 |
+
981: 'ballplayer, baseball player',
|
| 983 |
+
982: 'groom, bridegroom',
|
| 984 |
+
983: 'scuba diver',
|
| 985 |
+
984: 'rapeseed',
|
| 986 |
+
985: 'daisy',
|
| 987 |
+
986: "yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum",
|
| 988 |
+
987: 'corn',
|
| 989 |
+
988: 'acorn',
|
| 990 |
+
989: 'hip, rose hip, rosehip',
|
| 991 |
+
990: 'buckeye, horse chestnut, conker',
|
| 992 |
+
991: 'coral fungus',
|
| 993 |
+
992: 'agaric',
|
| 994 |
+
993: 'gyromitra',
|
| 995 |
+
994: 'stinkhorn, carrion fungus',
|
| 996 |
+
995: 'earthstar',
|
| 997 |
+
996: 'hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa',
|
| 998 |
+
997: 'bolete',
|
| 999 |
+
998: 'ear, spike, capitulum',
|
| 1000 |
+
999: 'toilet tissue, toilet paper, bathroom tissue'
|
stable_diffusion/data/imagenet_train_hr_indices.p.REMOVED.git-id
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
b8d6d4689d2ecf32147e9cc2f5e6c50e072df26f
|
stable_diffusion/data/index_synset.yaml
ADDED
|
@@ -0,0 +1,1000 @@
|
| 1 |
+
0: n01440764
|
| 2 |
+
1: n01443537
|
| 3 |
+
2: n01484850
|
| 4 |
+
3: n01491361
|
| 5 |
+
4: n01494475
|
| 6 |
+
5: n01496331
|
| 7 |
+
6: n01498041
|
| 8 |
+
7: n01514668
|
| 9 |
+
8: n07646067
|
| 10 |
+
9: n01518878
|
| 11 |
+
10: n01530575
|
| 12 |
+
11: n01531178
|
| 13 |
+
12: n01532829
|
| 14 |
+
13: n01534433
|
| 15 |
+
14: n01537544
|
| 16 |
+
15: n01558993
|
| 17 |
+
16: n01560419
|
| 18 |
+
17: n01580077
|
| 19 |
+
18: n01582220
|
| 20 |
+
19: n01592084
|
| 21 |
+
20: n01601694
|
| 22 |
+
21: n13382471
|
| 23 |
+
22: n01614925
|
| 24 |
+
23: n01616318
|
| 25 |
+
24: n01622779
|
| 26 |
+
25: n01629819
|
| 27 |
+
26: n01630670
|
| 28 |
+
27: n01631663
|
| 29 |
+
28: n01632458
|
| 30 |
+
29: n01632777
|
| 31 |
+
30: n01641577
|
| 32 |
+
31: n01644373
|
| 33 |
+
32: n01644900
|
| 34 |
+
33: n01664065
|
| 35 |
+
34: n01665541
|
| 36 |
+
35: n01667114
|
| 37 |
+
36: n01667778
|
| 38 |
+
37: n01669191
|
| 39 |
+
38: n01675722
|
| 40 |
+
39: n01677366
|
| 41 |
+
40: n01682714
|
| 42 |
+
41: n01685808
|
| 43 |
+
42: n01687978
|
| 44 |
+
43: n01688243
|
| 45 |
+
44: n01689811
|
| 46 |
+
45: n01692333
|
| 47 |
+
46: n01693334
|
| 48 |
+
47: n01694178
|
| 49 |
+
48: n01695060
|
| 50 |
+
49: n01697457
|
| 51 |
+
50: n01698640
|
| 52 |
+
51: n01704323
|
| 53 |
+
52: n01728572
|
| 54 |
+
53: n01728920
|
| 55 |
+
54: n01729322
|
| 56 |
+
55: n01729977
|
| 57 |
+
56: n01734418
|
| 58 |
+
57: n01735189
|
| 59 |
+
58: n01737021
|
| 60 |
+
59: n01739381
|
| 61 |
+
60: n01740131
|
| 62 |
+
61: n01742172
|
| 63 |
+
62: n01744401
|
| 64 |
+
63: n01748264
|
| 65 |
+
64: n01749939
|
| 66 |
+
65: n01751748
|
| 67 |
+
66: n01753488
|
| 68 |
+
67: n01755581
|
| 69 |
+
68: n01756291
|
| 70 |
+
69: n01768244
|
| 71 |
+
70: n01770081
|
| 72 |
+
71: n01770393
|
| 73 |
+
72: n01773157
|
| 74 |
+
73: n01773549
|
| 75 |
+
74: n01773797
|
| 76 |
+
75: n01774384
|
| 77 |
+
76: n01774750
|
| 78 |
+
77: n01775062
|
| 79 |
+
78: n04432308
|
| 80 |
+
79: n01784675
|
| 81 |
+
80: n01795545
|
| 82 |
+
81: n01796340
|
| 83 |
+
82: n01797886
|
| 84 |
+
83: n01798484
|
| 85 |
+
84: n01806143
|
| 86 |
+
85: n07647321
|
| 87 |
+
86: n07647496
|
| 88 |
+
87: n01817953
|
| 89 |
+
88: n01818515
|
| 90 |
+
89: n01819313
|
| 91 |
+
90: n01820546
|
| 92 |
+
91: n01824575
|
| 93 |
+
92: n01828970
|
| 94 |
+
93: n01829413
|
| 95 |
+
94: n01833805
|
| 96 |
+
95: n01843065
|
| 97 |
+
96: n01843383
|
| 98 |
+
97: n01847000
|
| 99 |
+
98: n01855032
|
| 100 |
+
99: n07646821
|
| 101 |
+
100: n01860187
|
| 102 |
+
101: n01871265
|
| 103 |
+
102: n01872772
|
| 104 |
+
103: n01873310
|
| 105 |
+
104: n01877812
|
| 106 |
+
105: n01882714
|
| 107 |
+
106: n01883070
|
| 108 |
+
107: n01910747
|
| 109 |
+
108: n01914609
|
| 110 |
+
109: n01917289
|
| 111 |
+
110: n01924916
|
| 112 |
+
111: n01930112
|
| 113 |
+
112: n01943899
|
| 114 |
+
113: n01944390
|
| 115 |
+
114: n13719102
|
| 116 |
+
115: n01950731
|
| 117 |
+
116: n01955084
|
| 118 |
+
117: n01968897
|
| 119 |
+
118: n01978287
|
| 120 |
+
119: n01978455
|
| 121 |
+
120: n01980166
|
| 122 |
+
121: n01981276
|
| 123 |
+
122: n01983481
|
| 124 |
+
123: n01984695
|
| 125 |
+
124: n01985128
|
| 126 |
+
125: n01986214
|
| 127 |
+
126: n01990800
|
| 128 |
+
127: n02002556
|
| 129 |
+
128: n02002724
|
| 130 |
+
129: n02006656
|
| 131 |
+
130: n02007558
|
| 132 |
+
131: n02009229
|
| 133 |
+
132: n02009912
|
| 134 |
+
133: n02011460
|
| 135 |
+
134: n03126707
|
| 136 |
+
135: n02013706
|
| 137 |
+
136: n02017213
|
| 138 |
+
137: n02018207
|
| 139 |
+
138: n02018795
|
| 140 |
+
139: n02025239
|
| 141 |
+
140: n02027492
|
| 142 |
+
141: n02028035
|
| 143 |
+
142: n02033041
|
| 144 |
+
143: n02037110
|
| 145 |
+
144: n02051845
|
| 146 |
+
145: n02056570
|
| 147 |
+
146: n02058221
|
| 148 |
+
147: n02066245
|
| 149 |
+
148: n02071294
|
| 150 |
+
149: n02074367
|
| 151 |
+
150: n02077923
|
| 152 |
+
151: n08742578
|
| 153 |
+
152: n02085782
|
| 154 |
+
153: n02085936
|
| 155 |
+
154: n02086079
|
| 156 |
+
155: n02086240
|
| 157 |
+
156: n02086646
|
| 158 |
+
157: n02086910
|
| 159 |
+
158: n02087046
|
| 160 |
+
159: n02087394
|
| 161 |
+
160: n02088094
|
| 162 |
+
161: n02088238
|
| 163 |
+
162: n02088364
|
| 164 |
+
163: n02088466
|
| 165 |
+
164: n02088632
|
| 166 |
+
165: n02089078
|
| 167 |
+
166: n02089867
|
| 168 |
+
167: n02089973
|
| 169 |
+
168: n02090379
|
| 170 |
+
169: n02090622
|
| 171 |
+
170: n02090721
|
| 172 |
+
171: n02091032
|
| 173 |
+
172: n02091134
|
| 174 |
+
173: n02091244
|
| 175 |
+
174: n02091467
|
| 176 |
+
175: n02091635
|
| 177 |
+
176: n02091831
|
| 178 |
+
177: n02092002
|
| 179 |
+
178: n02092339
|
| 180 |
+
179: n02093256
|
| 181 |
+
180: n02093428
|
| 182 |
+
181: n02093647
|
| 183 |
+
182: n02093754
|
| 184 |
+
183: n02093859
|
| 185 |
+
184: n02093991
|
| 186 |
+
185: n02094114
|
| 187 |
+
186: n02094258
|
| 188 |
+
187: n02094433
|
| 189 |
+
188: n02095314
|
| 190 |
+
189: n02095570
|
| 191 |
+
190: n02095889
|
| 192 |
+
191: n02096051
|
| 193 |
+
192: n02096177
|
| 194 |
+
193: n02096294
|
| 195 |
+
194: n02096437
|
| 196 |
+
195: n02096585
|
| 197 |
+
196: n02097047
|
| 198 |
+
197: n02097130
|
| 199 |
+
198: n02097209
|
| 200 |
+
199: n02097298
|
| 201 |
+
200: n02097474
|
| 202 |
+
201: n02097658
|
| 203 |
+
202: n02098105
|
| 204 |
+
203: n02098286
|
| 205 |
+
204: n02098413
|
| 206 |
+
205: n02099267
|
| 207 |
+
206: n02099429
|
| 208 |
+
207: n02099601
|
| 209 |
+
208: n02099712
|
| 210 |
+
209: n02099849
|
| 211 |
+
210: n02100236
|
| 212 |
+
211: n02100583
|
| 213 |
+
212: n02100735
|
| 214 |
+
213: n02100877
|
| 215 |
+
214: n02101006
|
| 216 |
+
215: n02101388
|
| 217 |
+
216: n02101556
|
| 218 |
+
217: n02102040
|
| 219 |
+
218: n02102177
|
| 220 |
+
219: n02102318
|
| 221 |
+
220: n02102480
|
| 222 |
+
221: n02102973
|
| 223 |
+
222: n02104029
|
| 224 |
+
223: n02104365
|
| 225 |
+
224: n02105056
|
| 226 |
+
225: n02105162
|
| 227 |
+
226: n02105251
|
| 228 |
+
227: n02105412
|
| 229 |
+
228: n02105505
|
| 230 |
+
229: n02105641
|
| 231 |
+
230: n02105855
|
| 232 |
+
231: n02106030
|
| 233 |
+
232: n02106166
|
| 234 |
+
233: n02106382
|
| 235 |
+
234: n02106550
|
| 236 |
+
235: n02106662
|
| 237 |
+
236: n02107142
|
| 238 |
+
237: n02107312
|
| 239 |
+
238: n02107574
|
| 240 |
+
239: n02107683
|
| 241 |
+
240: n02107908
|
| 242 |
+
241: n02108000
|
| 243 |
+
242: n02108089
|
| 244 |
+
243: n02108422
|
| 245 |
+
244: n02108551
|
| 246 |
+
245: n02108915
|
| 247 |
+
246: n02109047
|
| 248 |
+
247: n02109525
|
| 249 |
+
248: n02109961
|
| 250 |
+
249: n02110063
|
| 251 |
+
250: n02110185
|
| 252 |
+
251: n02110341
|
| 253 |
+
252: n02110627
|
| 254 |
+
253: n02110806
|
| 255 |
+
254: n02110958
|
| 256 |
+
255: n02111129
|
| 257 |
+
256: n02111277
|
| 258 |
+
257: n02111500
|
| 259 |
+
258: n02111889
|
| 260 |
+
259: n02112018
|
| 261 |
+
260: n02112137
|
| 262 |
+
261: n02112350
|
| 263 |
+
262: n02112706
|
| 264 |
+
263: n02113023
|
| 265 |
+
264: n02113186
|
| 266 |
+
265: n02113624
|
| 267 |
+
266: n02113712
|
| 268 |
+
267: n02113799
|
| 269 |
+
268: n02113978
|
| 270 |
+
269: n02114367
|
| 271 |
+
270: n02114548
|
| 272 |
+
271: n02114712
|
| 273 |
+
272: n02114855
|
| 274 |
+
273: n02115641
|
| 275 |
+
274: n02115913
|
| 276 |
+
275: n02116738
|
| 277 |
+
276: n02117135
|
| 278 |
+
277: n02119022
|
| 279 |
+
278: n02119789
|
| 280 |
+
279: n02120079
|
| 281 |
+
280: n02120505
|
| 282 |
+
281: n02123045
|
| 283 |
+
282: n02123159
|
| 284 |
+
283: n02123394
|
| 285 |
+
284: n02123597
|
| 286 |
+
285: n02124075
|
| 287 |
+
286: n02125311
|
| 288 |
+
287: n02127052
|
| 289 |
+
288: n02128385
|
| 290 |
+
289: n02128757
|
| 291 |
+
290: n02128925
|
| 292 |
+
291: n02129165
|
| 293 |
+
292: n02129604
|
| 294 |
+
293: n02130308
|
| 295 |
+
294: n02132136
|
| 296 |
+
295: n02133161
|
| 297 |
+
296: n02134084
|
| 298 |
+
297: n02134418
|
| 299 |
+
298: n02137549
|
| 300 |
+
299: n02138441
|
| 301 |
+
300: n02165105
|
| 302 |
+
301: n02165456
|
| 303 |
+
302: n02167151
|
| 304 |
+
303: n02168699
|
| 305 |
+
304: n02169497
|
| 306 |
+
305: n02172182
|
| 307 |
+
306: n02174001
|
| 308 |
+
307: n02177972
|
| 309 |
+
308: n03373237
|
| 310 |
+
309: n07975909
|
| 311 |
+
310: n02219486
|
| 312 |
+
311: n02226429
|
| 313 |
+
312: n02229544
|
| 314 |
+
313: n02231487
|
| 315 |
+
314: n02233338
|
| 316 |
+
315: n02236044
|
| 317 |
+
316: n02256656
|
| 318 |
+
317: n02259212
|
| 319 |
+
318: n02264363
|
| 320 |
+
319: n02268443
|
| 321 |
+
320: n02268853
|
| 322 |
+
321: n02276258
|
| 323 |
+
322: n02277742
|
| 324 |
+
323: n02279972
|
| 325 |
+
324: n02280649
|
| 326 |
+
325: n02281406
|
| 327 |
+
326: n02281787
|
| 328 |
+
327: n02317335
|
| 329 |
+
328: n02319095
|
| 330 |
+
329: n02321529
|
| 331 |
+
330: n02325366
|
| 332 |
+
331: n02326432
|
| 333 |
+
332: n02328150
|
| 334 |
+
333: n02342885
|
| 335 |
+
334: n02346627
|
| 336 |
+
335: n02356798
|
| 337 |
+
336: n02361337
|
| 338 |
+
337: n05262120
|
| 339 |
+
338: n02364673
|
| 340 |
+
339: n02389026
|
| 341 |
+
340: n02391049
|
| 342 |
+
341: n02395406
|
| 343 |
+
342: n02396427
|
| 344 |
+
343: n02397096
|
| 345 |
+
344: n02398521
|
| 346 |
+
345: n02403003
|
| 347 |
+
346: n02408429
|
| 348 |
+
347: n02410509
|
| 349 |
+
348: n02412080
|
| 350 |
+
349: n02415577
|
| 351 |
+
350: n02417914
|
| 352 |
+
351: n02422106
|
| 353 |
+
352: n02422699
|
| 354 |
+
353: n02423022
|
| 355 |
+
354: n02437312
|
| 356 |
+
355: n02437616
|
| 357 |
+
356: n10771990
|
| 358 |
+
357: n14765497
|
| 359 |
+
358: n02443114
|
| 360 |
+
359: n02443484
|
| 361 |
+
360: n14765785
|
| 362 |
+
361: n02445715
|
| 363 |
+
362: n02447366
|
| 364 |
+
363: n02454379
|
| 365 |
+
364: n02457408
|
| 366 |
+
365: n02480495
|
| 367 |
+
366: n02480855
|
| 368 |
+
367: n02481823
|
| 369 |
+
368: n02483362
|
| 370 |
+
369: n02483708
|
| 371 |
+
370: n02484975
|
| 372 |
+
371: n02486261
|
| 373 |
+
372: n02486410
|
| 374 |
+
373: n02487347
|
| 375 |
+
374: n02488291
|
| 376 |
+
375: n02488702
|
| 377 |
+
376: n02489166
|
| 378 |
+
377: n02490219
|
| 379 |
+
378: n02492035
|
| 380 |
+
379: n02492660
|
| 381 |
+
380: n02493509
|
| 382 |
+
381: n02493793
|
| 383 |
+
382: n02494079
|
| 384 |
+
383: n02497673
|
| 385 |
+
384: n02500267
|
| 386 |
+
385: n02504013
|
| 387 |
+
386: n02504458
|
| 388 |
+
387: n02509815
|
| 389 |
+
388: n02510455
|
| 390 |
+
389: n02514041
|
| 391 |
+
390: n07783967
|
| 392 |
+
391: n02536864
|
| 393 |
+
392: n02606052
|
| 394 |
+
393: n02607072
|
| 395 |
+
394: n02640242
|
| 396 |
+
395: n02641379
|
| 397 |
+
396: n02643566
|
| 398 |
+
397: n02655020
|
| 399 |
+
398: n02666347
|
| 400 |
+
399: n02667093
|
| 401 |
+
400: n02669723
|
| 402 |
+
401: n02672831
|
| 403 |
+
402: n02676566
|
| 404 |
+
403: n02687172
|
| 405 |
+
404: n02690373
|
| 406 |
+
405: n02692877
|
| 407 |
+
406: n02699494
|
| 408 |
+
407: n02701002
|
| 409 |
+
408: n02704792
|
| 410 |
+
409: n02708093
|
| 411 |
+
410: n02727426
|
| 412 |
+
411: n08496334
|
| 413 |
+
412: n02747177
|
| 414 |
+
413: n02749479
|
| 415 |
+
414: n02769748
|
| 416 |
+
415: n02776631
|
| 417 |
+
416: n02777292
|
| 418 |
+
417: n02782329
|
| 419 |
+
418: n02783161
|
| 420 |
+
419: n02786058
|
| 421 |
+
420: n02787622
|
| 422 |
+
421: n02788148
|
| 423 |
+
422: n02790996
|
| 424 |
+
423: n02791124
|
| 425 |
+
424: n02791270
|
| 426 |
+
425: n02793495
|
| 427 |
+
426: n02794156
|
| 428 |
+
427: n02795169
|
| 429 |
+
428: n02797295
|
| 430 |
+
429: n02799071
|
| 431 |
+
430: n02802426
|
| 432 |
+
431: n02804515
|
| 433 |
+
432: n02804610
|
| 434 |
+
433: n02807133
|
| 435 |
+
434: n02808304
|
| 436 |
+
435: n02808440
|
| 437 |
+
436: n02814533
|
| 438 |
+
437: n02814860
|
| 439 |
+
438: n02815834
|
| 440 |
+
439: n02817516
|
| 441 |
+
440: n02823428
|
| 442 |
+
441: n02823750
|
| 443 |
+
442: n02825657
|
| 444 |
+
443: n02834397
|
| 445 |
+
444: n02835271
|
| 446 |
+
445: n02837789
|
| 447 |
+
446: n02840245
|
| 448 |
+
447: n02841315
|
| 449 |
+
448: n02843684
|
| 450 |
+
449: n02859443
|
| 451 |
+
450: n02860847
|
| 452 |
+
451: n02865351
|
| 453 |
+
452: n02869837
|
| 454 |
+
453: n02870880
|
| 455 |
+
454: n02871525
|
| 456 |
+
455: n02877765
|
| 457 |
+
456: n02880308
|
| 458 |
+
457: n02883205
|
| 459 |
+
458: n02892201
|
| 460 |
+
459: n02892767
|
| 461 |
+
460: n02894605
|
| 462 |
+
461: n02895154
|
| 463 |
+
462: n12520864
|
| 464 |
+
463: n02909870
|
| 465 |
+
464: n02910353
|
| 466 |
+
465: n02916936
|
| 467 |
+
466: n02917067
|
| 468 |
+
467: n02927161
|
| 469 |
+
468: n02930766
|
| 470 |
+
469: n02939185
|
| 471 |
+
470: n02948072
|
| 472 |
+
471: n02950826
|
| 473 |
+
472: n02951358
|
| 474 |
+
473: n02951585
|
| 475 |
+
474: n02963159
|
| 476 |
+
475: n02965783
|
| 477 |
+
476: n02966193
|
| 478 |
+
477: n02966687
|
| 479 |
+
478: n02971356
|
| 480 |
+
479: n02974003
|
| 481 |
+
480: n02977058
|
| 482 |
+
481: n02978881
|
| 483 |
+
482: n02979186
|
| 484 |
+
483: n02980441
|
| 485 |
+
484: n02981792
|
| 486 |
+
485: n02988304
|
| 487 |
+
486: n02992211
|
| 488 |
+
487: n02992529
|
| 489 |
+
488: n13652994
|
| 490 |
+
489: n03000134
|
| 491 |
+
490: n03000247
|
| 492 |
+
491: n03000684
|
| 493 |
+
492: n03014705
|
| 494 |
+
493: n03016953
|
| 495 |
+
494: n03017168
|
| 496 |
+
495: n03018349
|
| 497 |
+
496: n03026506
|
| 498 |
+
497: n03028079
|
| 499 |
+
498: n03032252
|
| 500 |
+
499: n03041632
|
| 501 |
+
500: n03042490
|
| 502 |
+
501: n03045698
|
| 503 |
+
502: n03047690
|
| 504 |
+
503: n03062245
|
| 505 |
+
504: n03063599
|
| 506 |
+
505: n03063689
|
| 507 |
+
506: n03065424
|
| 508 |
+
507: n03075370
|
| 509 |
+
508: n03085013
|
| 510 |
+
509: n03089624
|
| 511 |
+
510: n03095699
|
| 512 |
+
511: n03100240
|
| 513 |
+
512: n03109150
|
| 514 |
+
513: n03110669
|
| 515 |
+
514: n03124043
|
| 516 |
+
515: n03124170
|
| 517 |
+
516: n15142452
|
| 518 |
+
517: n03126707
|
| 519 |
+
518: n03127747
|
| 520 |
+
519: n03127925
|
| 521 |
+
520: n03131574
|
| 522 |
+
521: n03133878
|
| 523 |
+
522: n03134739
|
| 524 |
+
523: n03141823
|
| 525 |
+
524: n03146219
|
| 526 |
+
525: n03160309
|
| 527 |
+
526: n03179701
|
| 528 |
+
527: n03180011
|
| 529 |
+
528: n03187595
|
| 530 |
+
529: n03188531
|
| 531 |
+
530: n03196217
|
| 532 |
+
531: n03197337
|
| 533 |
+
532: n03201208
|
| 534 |
+
533: n03207743
|
| 535 |
+
534: n03207941
|
| 536 |
+
535: n03208938
|
| 537 |
+
536: n03216828
|
| 538 |
+
537: n03218198
|
| 539 |
+
538: n13872072
|
| 540 |
+
539: n03223299
|
| 541 |
+
540: n03240683
|
| 542 |
+
541: n03249569
|
| 543 |
+
542: n07647870
|
| 544 |
+
543: n03255030
|
| 545 |
+
544: n03259401
|
| 546 |
+
545: n03271574
|
| 547 |
+
546: n03272010
|
| 548 |
+
547: n03272562
|
| 549 |
+
548: n03290653
|
| 550 |
+
549: n13869788
|
| 551 |
+
550: n03297495
|
| 552 |
+
551: n03314780
|
| 553 |
+
552: n03325584
|
| 554 |
+
553: n03337140
|
| 555 |
+
554: n03344393
|
| 556 |
+
555: n03345487
|
| 557 |
+
556: n03347037
|
| 558 |
+
557: n03355925
|
| 559 |
+
558: n03372029
|
| 560 |
+
559: n03376595
|
| 561 |
+
560: n03379051
|
| 562 |
+
561: n03384352
|
| 563 |
+
562: n03388043
|
| 564 |
+
563: n03388183
|
| 565 |
+
564: n03388549
|
| 566 |
+
565: n03393912
|
| 567 |
+
566: n03394916
|
| 568 |
+
567: n03400231
|
| 569 |
+
568: n03404251
|
| 570 |
+
569: n03417042
|
| 571 |
+
570: n03424325
|
| 572 |
+
571: n03425413
|
| 573 |
+
572: n03443371
|
| 574 |
+
573: n03444034
|
| 575 |
+
574: n03445777
|
| 576 |
+
575: n03445924
|
| 577 |
+
576: n03447447
|
| 578 |
+
577: n03447721
|
| 579 |
+
578: n08286342
|
| 580 |
+
579: n03452741
|
| 581 |
+
580: n03457902
|
| 582 |
+
581: n03459775
|
| 583 |
+
582: n03461385
|
| 584 |
+
583: n03467068
|
| 585 |
+
584: n03476684
|
| 586 |
+
585: n03476991
|
| 587 |
+
586: n03478589
|
| 588 |
+
587: n03482001
|
| 589 |
+
588: n03482405
|
| 590 |
+
589: n03483316
|
| 591 |
+
590: n03485407
|
| 592 |
+
591: n03485794
|
| 593 |
+
592: n03492542
|
| 594 |
+
593: n03494278
|
| 595 |
+
594: n03495570
|
| 596 |
+
595: n10161363
|
| 597 |
+
596: n03498962
|
| 598 |
+
597: n03527565
|
| 599 |
+
598: n03529860
|
| 600 |
+
599: n09218315
|
| 601 |
+
600: n03532672
|
| 602 |
+
601: n03534580
|
| 603 |
+
602: n03535780
|
| 604 |
+
603: n03538406
|
| 605 |
+
604: n03544143
|
| 606 |
+
605: n03584254
|
| 607 |
+
606: n03584829
|
| 608 |
+
607: n03590841
|
| 609 |
+
608: n03594734
|
| 610 |
+
609: n03594945
|
| 611 |
+
610: n03595614
|
| 612 |
+
611: n03598930
|
| 613 |
+
612: n03599486
|
| 614 |
+
613: n03602883
|
| 615 |
+
614: n03617480
|
| 616 |
+
615: n03623198
|
| 617 |
+
616: n15102712
|
| 618 |
+
617: n03630383
|
| 619 |
+
618: n03633091
|
| 620 |
+
619: n03637318
|
| 621 |
+
620: n03642806
|
| 622 |
+
621: n03649909
|
| 623 |
+
622: n03657121
|
| 624 |
+
623: n03658185
|
| 625 |
+
624: n07977870
|
| 626 |
+
625: n03662601
|
| 627 |
+
626: n03666591
|
| 628 |
+
627: n03670208
|
| 629 |
+
628: n03673027
|
| 630 |
+
629: n03676483
|
| 631 |
+
630: n03680355
|
| 632 |
+
631: n03690938
|
| 633 |
+
632: n03691459
|
| 634 |
+
633: n03692522
|
| 635 |
+
634: n03697007
|
| 636 |
+
635: n03706229
|
| 637 |
+
636: n03709823
|
| 638 |
+
637: n03710193
|
| 639 |
+
638: n03710637
|
| 640 |
+
639: n03710721
|
| 641 |
+
640: n03717622
|
| 642 |
+
641: n03720891
|
| 643 |
+
642: n03721384
|
| 644 |
+
643: n03725035
|
| 645 |
+
644: n03729826
|
| 646 |
+
645: n03733131
|
| 647 |
+
646: n03733281
|
| 648 |
+
647: n03733805
|
| 649 |
+
648: n03742115
|
| 650 |
+
649: n03743016
|
| 651 |
+
650: n03759954
|
| 652 |
+
651: n03761084
|
| 653 |
+
652: n03763968
|
| 654 |
+
653: n03764736
|
| 655 |
+
654: n03769881
|
| 656 |
+
655: n03770439
|
| 657 |
+
656: n03770679
|
| 658 |
+
657: n03773504
|
| 659 |
+
658: n03775071
|
| 660 |
+
659: n03775546
|
| 661 |
+
660: n03776460
|
| 662 |
+
661: n03777568
|
| 663 |
+
662: n03777754
|
| 664 |
+
663: n03781244
|
| 665 |
+
664: n03782006
|
| 666 |
+
665: n03785016
|
| 667 |
+
666: n14955889
|
| 668 |
+
667: n03787032
|
| 669 |
+
668: n03788195
|
| 670 |
+
669: n03788365
|
| 671 |
+
670: n03791053
|
| 672 |
+
671: n03792782
|
| 673 |
+
672: n03792972
|
| 674 |
+
673: n03793489
|
| 675 |
+
674: n03794056
|
| 676 |
+
675: n03796401
|
| 677 |
+
676: n03803284
|
| 678 |
+
677: n13652335
|
| 679 |
+
678: n03814639
|
| 680 |
+
679: n03814906
|
| 681 |
+
680: n03825788
|
| 682 |
+
681: n03832673
|
| 683 |
+
682: n03837869
|
| 684 |
+
683: n03838899
|
| 685 |
+
684: n03840681
|
| 686 |
+
685: n03841143
|
| 687 |
+
686: n03843555
|
| 688 |
+
687: n03854065
|
| 689 |
+
688: n03857828
|
| 690 |
+
689: n03866082
|
| 691 |
+
690: n03868242
|
| 692 |
+
691: n03868863
|
| 693 |
+
692: n07281099
|
| 694 |
+
693: n03873416
|
| 695 |
+
694: n03874293
|
| 696 |
+
695: n03874599
|
| 697 |
+
696: n03876231
|
| 698 |
+
697: n03877472
|
| 699 |
+
698: n08053121
|
| 700 |
+
699: n03884397
|
| 701 |
+
700: n03887697
|
| 702 |
+
701: n03888257
|
| 703 |
+
702: n03888605
|
| 704 |
+
703: n03891251
|
| 705 |
+
704: n03891332
|
| 706 |
+
705: n03895866
|
| 707 |
+
706: n03899768
|
| 708 |
+
707: n03902125
|
| 709 |
+
708: n03903868
|
| 710 |
+
709: n03908618
|
| 711 |
+
710: n03908714
|
| 712 |
+
711: n03916031
|
| 713 |
+
712: n03920288
|
| 714 |
+
713: n03924679
|
| 715 |
+
714: n03929660
|
| 716 |
+
715: n03929855
|
| 717 |
+
716: n03930313
|
| 718 |
+
717: n03930630
|
| 719 |
+
718: n03934042
|
| 720 |
+
719: n03935335
|
| 721 |
+
720: n03937543
|
| 722 |
+
721: n03938244
|
| 723 |
+
722: n03942813
|
| 724 |
+
723: n03944341
|
| 725 |
+
724: n03947888
|
| 726 |
+
725: n03950228
|
| 727 |
+
726: n03954731
|
| 728 |
+
727: n03956157
|
| 729 |
+
728: n03958227
|
| 730 |
+
729: n03961711
|
| 731 |
+
730: n03967562
|
| 732 |
+
731: n03970156
|
| 733 |
+
732: n03976467
|
| 734 |
+
733: n08620881
|
| 735 |
+
734: n03977966
|
| 736 |
+
735: n03980874
|
| 737 |
+
736: n03982430
|
| 738 |
+
737: n03983396
|
| 739 |
+
738: n03991062
|
| 740 |
+
739: n03992509
|
| 741 |
+
740: n03995372
|
| 742 |
+
741: n03998194
|
| 743 |
+
742: n04004767
|
| 744 |
+
743: n13937284
|
| 745 |
+
744: n04008634
|
| 746 |
+
745: n04009801
|
| 747 |
+
746: n04019541
|
| 748 |
+
747: n04023962
|
| 749 |
+
748: n13413294
|
| 750 |
+
749: n04033901
|
| 751 |
+
750: n04033995
|
| 752 |
+
751: n04037443
|
| 753 |
+
752: n04039381
|
| 754 |
+
753: n09403211
|
| 755 |
+
754: n04041544
|
| 756 |
+
755: n04044716
|
| 757 |
+
756: n04049303
|
| 758 |
+
757: n04065272
|
| 759 |
+
758: n07056680
|
| 760 |
+
759: n04069434
|
| 761 |
+
760: n04070727
|
| 762 |
+
761: n04074963
|
| 763 |
+
762: n04081281
|
| 764 |
+
763: n04086273
|
| 765 |
+
764: n04090263
|
| 766 |
+
765: n04099969
|
| 767 |
+
766: n04111531
|
| 768 |
+
767: n04116512
|
| 769 |
+
768: n04118538
|
| 770 |
+
769: n04118776
|
| 771 |
+
770: n04120489
|
| 772 |
+
771: n04125116
|
| 773 |
+
772: n04127249
|
| 774 |
+
773: n04131690
|
| 775 |
+
774: n04133789
|
| 776 |
+
775: n04136333
|
| 777 |
+
776: n04141076
|
| 778 |
+
777: n04141327
|
| 779 |
+
778: n04141975
|
| 780 |
+
779: n04146614
|
| 781 |
+
780: n04147291
|
| 782 |
+
781: n04149813
|
| 783 |
+
782: n04152593
|
| 784 |
+
783: n04154340
|
| 785 |
+
784: n07917272
|
| 786 |
+
785: n04162706
|
| 787 |
+
786: n04179913
|
| 788 |
+
787: n04192698
|
| 789 |
+
788: n04200800
|
| 790 |
+
789: n04201297
|
| 791 |
+
790: n04204238
|
| 792 |
+
791: n04204347
|
| 793 |
+
792: n04208427
|
| 794 |
+
793: n04209133
|
| 795 |
+
794: n04209239
|
| 796 |
+
795: n04228054
|
| 797 |
+
796: n04229816
|
| 798 |
+
797: n04235860
|
| 799 |
+
798: n04238763
|
| 800 |
+
799: n04239074
|
| 801 |
+
800: n04243546
|
| 802 |
+
801: n04251144
|
| 803 |
+
802: n04252077
|
| 804 |
+
803: n04252225
|
| 805 |
+
804: n04254120
|
| 806 |
+
805: n04254680
|
| 807 |
+
806: n04254777
|
| 808 |
+
807: n04258138
|
| 809 |
+
808: n04259630
|
| 810 |
+
809: n04263257
|
| 811 |
+
810: n04264628
|
| 812 |
+
811: n04265275
|
| 813 |
+
812: n04266014
|
| 814 |
+
813: n04270147
|
| 815 |
+
814: n04273569
|
| 816 |
+
815: n04275363
|
| 817 |
+
816: n05605498
|
| 818 |
+
817: n04285008
|
| 819 |
+
818: n04286575
|
| 820 |
+
819: n08646566
|
| 821 |
+
820: n04310018
|
| 822 |
+
821: n04311004
|
| 823 |
+
822: n04311174
|
| 824 |
+
823: n04317175
|
| 825 |
+
824: n04325704
|
| 826 |
+
825: n04326547
|
| 827 |
+
826: n04328186
|
| 828 |
+
827: n04330267
|
| 829 |
+
828: n04332243
|
| 830 |
+
829: n04335435
|
| 831 |
+
830: n04337157
|
| 832 |
+
831: n04344873
|
| 833 |
+
832: n04346328
|
| 834 |
+
833: n04347754
|
| 835 |
+
834: n04350905
|
| 836 |
+
835: n04355338
|
| 837 |
+
836: n04355933
|
| 838 |
+
837: n04356056
|
| 839 |
+
838: n04357314
|
| 840 |
+
839: n04366367
|
| 841 |
+
840: n04367480
|
| 842 |
+
841: n04370456
|
| 843 |
+
842: n04371430
|
| 844 |
+
843: n14009946
|
| 845 |
+
844: n04372370
|
| 846 |
+
845: n04376876
|
| 847 |
+
846: n04380533
|
| 848 |
+
847: n04389033
|
| 849 |
+
848: n04392985
|
| 850 |
+
849: n04398044
|
| 851 |
+
850: n04399382
|
| 852 |
+
851: n04404412
|
| 853 |
+
852: n04409515
|
| 854 |
+
853: n04417672
|
| 855 |
+
854: n04418357
|
| 856 |
+
855: n04423845
|
| 857 |
+
856: n04428191
|
| 858 |
+
857: n04429376
|
| 859 |
+
858: n04435653
|
| 860 |
+
859: n04442312
|
| 861 |
+
860: n04443257
|
| 862 |
+
861: n04447861
|
| 863 |
+
862: n04456115
|
| 864 |
+
863: n04458633
|
| 865 |
+
864: n04461696
|
| 866 |
+
865: n04462240
|
| 867 |
+
866: n04465666
|
| 868 |
+
867: n04467665
|
| 869 |
+
868: n04476259
|
| 870 |
+
869: n04479046
|
| 871 |
+
870: n04482393
|
| 872 |
+
871: n04483307
|
| 873 |
+
872: n04485082
|
| 874 |
+
873: n04486054
|
| 875 |
+
874: n04487081
|
| 876 |
+
875: n04487394
|
| 877 |
+
876: n04493381
|
| 878 |
+
877: n04501370
|
| 879 |
+
878: n04505470
|
| 880 |
+
879: n04507155
|
| 881 |
+
880: n04509417
|
| 882 |
+
881: n04515003
|
| 883 |
+
882: n04517823
|
| 884 |
+
883: n04522168
|
| 885 |
+
884: n04523525
|
| 886 |
+
885: n04525038
|
| 887 |
+
886: n04525305
|
| 888 |
+
887: n04532106
|
| 889 |
+
888: n04532670
|
| 890 |
+
889: n04536866
|
| 891 |
+
890: n04540053
|
| 892 |
+
891: n04542943
|
| 893 |
+
892: n04548280
|
| 894 |
+
893: n04548362
|
| 895 |
+
894: n04550184
|
| 896 |
+
895: n04552348
|
| 897 |
+
896: n04553703
|
| 898 |
+
897: n04554684
|
| 899 |
+
898: n04557648
|
| 900 |
+
899: n04560804
|
| 901 |
+
900: n04562935
|
| 902 |
+
901: n04579145
|
| 903 |
+
902: n04579667
|
| 904 |
+
903: n04584207
|
| 905 |
+
904: n04589890
|
| 906 |
+
905: n04590129
|
| 907 |
+
906: n04591157
|
| 908 |
+
907: n04591713
|
| 909 |
+
908: n10782135
|
| 910 |
+
909: n04596742
|
| 911 |
+
910: n04598010
|
| 912 |
+
911: n04599235
|
| 913 |
+
912: n04604644
|
| 914 |
+
913: n14423870
|
| 915 |
+
914: n04612504
|
| 916 |
+
915: n04613696
|
| 917 |
+
916: n06359193
|
| 918 |
+
917: n06596364
|
| 919 |
+
918: n06785654
|
| 920 |
+
919: n06794110
|
| 921 |
+
920: n06874185
|
| 922 |
+
921: n07248320
|
| 923 |
+
922: n07565083
|
| 924 |
+
923: n07657664
|
| 925 |
+
924: n07583066
|
| 926 |
+
925: n07584110
|
| 927 |
+
926: n07590611
|
| 928 |
+
927: n07613480
|
| 929 |
+
928: n07614500
|
| 930 |
+
929: n07615774
|
| 931 |
+
930: n07684084
|
| 932 |
+
931: n07693725
|
| 933 |
+
932: n07695742
|
| 934 |
+
933: n07697313
|
| 935 |
+
934: n07697537
|
| 936 |
+
935: n07711569
|
| 937 |
+
936: n07714571
|
| 938 |
+
937: n07714990
|
| 939 |
+
938: n07715103
|
| 940 |
+
939: n12159804
|
| 941 |
+
940: n12160303
|
| 942 |
+
941: n12160857
|
| 943 |
+
942: n07717556
|
| 944 |
+
943: n07718472
|
| 945 |
+
944: n07718747
|
| 946 |
+
945: n07720875
|
| 947 |
+
946: n07730033
|
| 948 |
+
947: n13001041
|
| 949 |
+
948: n07742313
|
| 950 |
+
949: n12630144
|
| 951 |
+
950: n14991210
|
| 952 |
+
951: n07749582
|
| 953 |
+
952: n07753113
|
| 954 |
+
953: n07753275
|
| 955 |
+
954: n07753592
|
| 956 |
+
955: n07754684
|
| 957 |
+
956: n07760859
|
| 958 |
+
957: n07768694
|
| 959 |
+
958: n07802026
|
| 960 |
+
959: n07831146
|
| 961 |
+
960: n07836838
|
| 962 |
+
961: n07860988
|
| 963 |
+
962: n07871810
|
| 964 |
+
963: n07873807
|
| 965 |
+
964: n07875152
|
| 966 |
+
965: n07880968
|
| 967 |
+
966: n07892512
|
| 968 |
+
967: n07920052
|
| 969 |
+
968: n13904665
|
| 970 |
+
969: n07932039
|
| 971 |
+
970: n09193705
|
| 972 |
+
971: n09229709
|
| 973 |
+
972: n09246464
|
| 974 |
+
973: n09256479
|
| 975 |
+
974: n09288635
|
| 976 |
+
975: n09332890
|
| 977 |
+
976: n09399592
|
| 978 |
+
977: n09421951
|
| 979 |
+
978: n09428293
|
| 980 |
+
979: n09468604
|
| 981 |
+
980: n09472597
|
| 982 |
+
981: n09835506
|
| 983 |
+
982: n10148035
|
| 984 |
+
983: n10565667
|
| 985 |
+
984: n11879895
|
| 986 |
+
985: n11939491
|
| 987 |
+
986: n12057211
|
| 988 |
+
987: n12144580
|
| 989 |
+
988: n12267677
|
| 990 |
+
989: n12620546
|
| 991 |
+
990: n12768682
|
| 992 |
+
991: n12985857
|
| 993 |
+
992: n12998815
|
| 994 |
+
993: n13037406
|
| 995 |
+
994: n13040303
|
| 996 |
+
995: n13044778
|
| 997 |
+
996: n13052670
|
| 998 |
+
997: n13054560
|
| 999 |
+
998: n13133613
|
| 1000 |
+
999: n15075141
|
stable_diffusion/data/inpainting_examples/6458524847_2f4c361183_k_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/8399166846_f6fb4e4b8e_k_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/alex-iby-G_Pk4D9rMLs_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/bench2_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/bertrand-gabioud-CpuFzIsHYJ0_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/billow926-12-Wc-Zgx6Y_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png
ADDED
|
stable_diffusion/data/inpainting_examples/photo-1583445095369-9c651e7e5d34_mask.png
ADDED
|
stable_diffusion/environment.yaml
ADDED
|
@@ -0,0 +1,31 @@
|
| 1 |
+
name: ldm
|
| 2 |
+
channels:
|
| 3 |
+
- pytorch
|
| 4 |
+
- defaults
|
| 5 |
+
dependencies:
|
| 6 |
+
- python=3.8.5
|
| 7 |
+
- pip=20.3
|
| 8 |
+
- cudatoolkit=11.3
|
| 9 |
+
- pytorch=1.11.0
|
| 10 |
+
- torchvision=0.12.0
|
| 11 |
+
- numpy=1.19.2
|
| 12 |
+
- pip:
|
| 13 |
+
- albumentations==0.4.3
|
| 14 |
+
- diffusers
|
| 15 |
+
- opencv-python==4.1.2.30
|
| 16 |
+
- pudb==2019.2
|
| 17 |
+
- invisible-watermark
|
| 18 |
+
- imageio==2.9.0
|
| 19 |
+
- imageio-ffmpeg==0.4.2
|
| 20 |
+
- pytorch-lightning==1.4.2
|
| 21 |
+
- omegaconf==2.1.1
|
| 22 |
+
- test-tube>=0.7.5
|
| 23 |
+
- streamlit>=0.73.1
|
| 24 |
+
- einops==0.3.0
|
| 25 |
+
- torch-fidelity==0.3.0
|
| 26 |
+
- transformers==4.19.2
|
| 27 |
+
- torchmetrics==0.6.0
|
| 28 |
+
- kornia==0.6
|
| 29 |
+
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
|
| 30 |
+
- -e git+https://github.com/openai/CLIP.git@main#egg=clip
|
| 31 |
+
- -e .
|
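A note on the environment file above: it can typically be materialized with `conda env create -f environment.yaml` and activated with `conda activate ldm` (the `name:` field); the two `git+` entries and the trailing `-e .` are editable pip installs of taming-transformers, CLIP, and the repository itself.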
stable_diffusion/exp_qualitative_eval.py
ADDED
|
@@ -0,0 +1,157 @@
|
| 1 |
+
from __future__ import annotations
|
| 2 |
+
|
| 3 |
+
import json
|
| 4 |
+
import math
|
| 5 |
+
import os
|
| 6 |
+
import sys
|
| 7 |
+
from argparse import ArgumentParser
|
| 8 |
+
from pathlib import Path
|
| 9 |
+
import einops
|
| 10 |
+
import numpy as np
|
| 11 |
+
import torch
|
| 12 |
+
import torch.nn as nn
|
| 13 |
+
from einops import rearrange
|
| 14 |
+
from omegaconf import OmegaConf
|
| 15 |
+
from PIL import Image, ImageOps
|
| 16 |
+
from torch import autocast
|
| 17 |
+
|
| 18 |
+
from edm_sampler.external import CompVisDenoiser
|
| 19 |
+
from edm_sampler.sampling import sample_euler_ancestral
|
| 20 |
+
|
| 21 |
+
sys.path.append("./stable_diffusion")
|
| 22 |
+
|
| 23 |
+
from stable_diffusion.ldm.util import instantiate_from_config
|
| 24 |
+
|
| 25 |
+
|
| 26 |
+
class CFGDenoiser(nn.Module):
|
| 27 |
+
def __init__(self, model):
|
| 28 |
+
super().__init__()
|
| 29 |
+
self.inner_model = model
|
| 30 |
+
|
| 31 |
+
def forward(self, z, sigma, cond, uncond, text_cfg_scale, image_cfg_scale):
|
| 32 |
+
cfg_z = einops.repeat(z, "1 ... -> n ...", n=3)
|
| 33 |
+
cfg_sigma = einops.repeat(sigma, "1 ... -> n ...", n=3)
|
| 34 |
+
cfg_cond = {
|
| 35 |
+
"c_crossattn": [torch.cat([cond["c_crossattn"][0], uncond["c_crossattn"][0], uncond["c_crossattn"][0]])],
|
| 36 |
+
"c_concat": [torch.cat([cond["c_concat"][0], cond["c_concat"][0], uncond["c_concat"][0]])], }
|
| 37 |
+
out_cond, out_img_cond, out_uncond = self.inner_model(cfg_z, cfg_sigma, cond=cfg_cond).chunk(3)
|
| 38 |
+
return out_uncond + text_cfg_scale * (out_cond - out_img_cond) + image_cfg_scale * (out_img_cond - out_uncond)
|
| 39 |
+
|
| 40 |
+
|
| 41 |
+
def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
|
| 42 |
+
print(f"Loading model from {ckpt}")
|
| 43 |
+
pl_sd = torch.load(ckpt, map_location="cpu")
|
| 44 |
+
if "global_step" in pl_sd:
|
| 45 |
+
print(f"Global Step: {pl_sd['global_step']}")
|
| 46 |
+
sd = pl_sd["state_dict"]
|
| 47 |
+
if vae_ckpt is not None:
|
| 48 |
+
print(f"Loading VAE from {vae_ckpt}")
|
| 49 |
+
vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
|
| 50 |
+
sd = {k: vae_sd[k[len("first_stage_model."):]] if k.startswith("first_stage_model.") else v for k, v in
|
| 51 |
+
sd.items()}
|
| 52 |
+
model = instantiate_from_config(config.model)
|
| 53 |
+
m, u = model.load_state_dict(sd, strict=False)
|
| 54 |
+
if len(m) > 0 and verbose:
|
| 55 |
+
print("missing keys:")
|
| 56 |
+
print(m)
|
| 57 |
+
if len(u) > 0 and verbose:
|
| 58 |
+
print("unexpected keys:")
|
| 59 |
+
print(u)
|
| 60 |
+
return model
|
| 61 |
+
|
| 62 |
+
|
| 63 |
+
def main():
|
| 64 |
+
parser = ArgumentParser()
|
| 65 |
+
parser.add_argument("--data-path", default="../data/clip-filtered-dataset/", type=str)
|
| 66 |
+
parser.add_argument("--resolution", default=512, type=int)
|
| 67 |
+
parser.add_argument("--steps", default=100, type=int)
|
| 68 |
+
parser.add_argument("--config", default="configs/generate.yaml", type=str)
|
| 69 |
+
parser.add_argument("--ckpt", required=True, type=str)
|
| 70 |
+
parser.add_argument("--vae-ckpt", default=None, type=str)
|
| 71 |
+
parser.add_argument("--identifier", required=True, type=str)
|
| 72 |
+
parser.add_argument("--cfg-text-list", default=[3.5, 5.5, 7.5, 9.5, 11.5], type=float, nargs="+")
|
| 73 |
+
parser.add_argument("--cfg-image-list", default=[1.5], type=float, nargs="+")
|
| 74 |
+
parser.add_argument("--seed", type=int, default=10086)
|
| 75 |
+
parser.add_argument("--sample-num", type=int, default=200)
|
| 76 |
+
parser.add_argument("--eval-type", default="edit",
|
| 77 |
+
# choices=["edit", "depth", "hed", "seg", "depth_inv", "seg", "hed_inv"],
|
| 78 |
+
choices=["edit", "depth", "hed", "seg"], type=str)
|
| 79 |
+
args = parser.parse_args()
|
| 80 |
+
torch.manual_seed(args.seed)
|
| 81 |
+
|
| 82 |
+
config = OmegaConf.load(args.config)
|
| 83 |
+
model = load_model_from_config(config, args.ckpt, args.vae_ckpt)
|
| 84 |
+
model.eval().cuda()
|
| 85 |
+
model_wrap = CompVisDenoiser(model)
|
| 86 |
+
model_wrap_cfg = CFGDenoiser(model_wrap)
|
| 87 |
+
null_token = model.get_learned_conditioning([""])
|
| 88 |
+
|
| 89 |
+
with open(Path(args.data_path, "seeds.json")) as f:
|
| 90 |
+
seeds = json.load(f)
|
| 91 |
+
|
| 92 |
+
total = len(seeds)
|
| 93 |
+
i_start = int(total * 0.9)
|
| 94 |
+
i_end = i_start + args.sample_num
|
| 95 |
+
|
| 96 |
+
output_dir = f"imgs/qualitative/{args.identifier}"
|
| 97 |
+
os.makedirs(output_dir, exist_ok=True)
|
| 98 |
+
output_dir = os.path.join(output_dir, args.eval_type)
|
| 99 |
+
|
| 100 |
+
for i in range(i_start, i_end):
|
| 101 |
+
print(f"===========================> Processing {i}/{total} <===========================")
|
| 102 |
+
name, i_seeds = seeds[i]
|
| 103 |
+
output_sub_dir = os.path.join(output_dir, name)
|
| 104 |
+
os.makedirs(output_sub_dir, exist_ok=True)
|
| 105 |
+
propt_dir = Path(args.data_path, name)
|
| 106 |
+
if args.eval_type == "edit":
|
| 107 |
+
with open(propt_dir.joinpath("prompt.json")) as fp:
|
| 108 |
+
edit_instruction = json.load(fp)["edit"]
|
| 109 |
+
elif args.eval_type == "depth":
|
| 110 |
+
edit_instruction = "Transfer to a depth map"
|
| 111 |
+
elif args.eval_type == "hed":
|
| 112 |
+
edit_instruction = "Transfer to a hed map"
|
| 113 |
+
elif args.eval_type == "seg":
|
| 114 |
+
edit_instruction = "Transfer to a segmentation map"
|
| 115 |
+
else:
|
| 116 |
+
raise NotImplementedError
|
| 117 |
+
|
| 118 |
+
image_seed = i_seeds[0]
|
| 119 |
+
|
| 120 |
+
input_image_path = propt_dir.joinpath(f"{image_seed}_0.jpg")
|
| 121 |
+
input_image = Image.open(input_image_path).convert("RGB")
|
| 122 |
+
width, height = input_image.size
|
| 123 |
+
factor = args.resolution / max(width, height)
|
| 124 |
+
factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
|
| 125 |
+
width = int((width * factor) // 64) * 64
|
| 126 |
+
height = int((height * factor) // 64) * 64
|
| 127 |
+
input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)
|
| 128 |
+
|
| 129 |
+
for cfg_text in args.cfg_text_list:
|
| 130 |
+
for cfg_image in args.cfg_image_list:
|
| 131 |
+
output_image_path = os.path.join(output_sub_dir, f"{image_seed}_text{cfg_text}_image{cfg_image}.jpg")
|
| 132 |
+
with torch.no_grad(), autocast("cuda"), model.ema_scope():
|
| 133 |
+
cond = {}
|
| 134 |
+
cond["c_crossattn"] = [model.get_learned_conditioning([edit_instruction])]
|
| 135 |
+
new_input_image = 2 * torch.tensor(np.array(input_image)).float() / 255 - 1
|
| 136 |
+
new_input_image = rearrange(new_input_image, "h w c -> 1 c h w").to(model.device)
|
| 137 |
+
cond["c_concat"] = [model.encode_first_stage(new_input_image).mode()]
|
| 138 |
+
|
| 139 |
+
uncond = {}
|
| 140 |
+
uncond["c_crossattn"] = [null_token]
|
| 141 |
+
uncond["c_concat"] = [torch.zeros_like(cond["c_concat"][0])]
|
| 142 |
+
|
| 143 |
+
sigmas = model_wrap.get_sigmas(args.steps)
|
| 144 |
+
print(f"Editing the image {input_image_path}, with cfg_text={cfg_text}, cfg_image={cfg_image}")
|
| 145 |
+
extra_args = {"cond": cond, "uncond": uncond, "text_cfg_scale": cfg_text,
|
| 146 |
+
"image_cfg_scale": cfg_image, }
|
| 147 |
+
z = torch.randn_like(cond["c_concat"][0]) * sigmas[0]
|
| 148 |
+
z = sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
|
| 149 |
+
x = model.decode_first_stage(z)
|
| 150 |
+
x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)
|
| 151 |
+
x = 255.0 * rearrange(x, "1 c h w -> h w c")
|
| 152 |
+
edited_image = Image.fromarray(x.type(torch.uint8).cpu().numpy())
|
| 153 |
+
edited_image.save(output_image_path)
|
| 154 |
+
|
| 155 |
+
|
| 156 |
+
if __name__ == "__main__":
|
| 157 |
+
main()
|
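For readability, the dual classifier-free guidance mix computed at the end of `CFGDenoiser.forward` above can be written as a standalone function. This is only an illustrative sketch: the helper name and the dummy latent shapes below are made up here and are not part of the script.

```python
import torch

def combine_guidance(out_cond, out_img_cond, out_uncond,
                     text_cfg_scale, image_cfg_scale):
    # Text guidance moves the prediction from "image only" toward "image + text";
    # image guidance moves it from "unconditional" toward "image only".
    return (out_uncond
            + text_cfg_scale * (out_cond - out_img_cond)
            + image_cfg_scale * (out_img_cond - out_uncond))

# Dummy latents with the layout the sampler would pass in (batch, channels, h, w).
eps = [torch.randn(1, 4, 64, 64) for _ in range(3)]
print(combine_guidance(*eps, text_cfg_scale=7.5, image_cfg_scale=1.5).shape)
```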
stable_diffusion/ldm/data/__init__.py
ADDED
|
File without changes
|
stable_diffusion/ldm/guaidance.py
ADDED
|
@@ -0,0 +1,96 @@
|
| 1 |
+
|
| 2 |
+
from typing import List, Tuple
|
| 3 |
+
from scipy import interpolate
|
| 4 |
+
import numpy as np
|
| 5 |
+
import torch
|
| 6 |
+
import matplotlib.pyplot as plt
|
| 7 |
+
from IPython.display import clear_output
|
| 8 |
+
import abc
|
| 9 |
+
|
| 10 |
+
|
| 11 |
+
class GuideModel(torch.nn.Module, abc.ABC):
|
| 12 |
+
def __init__(self) -> None:
|
| 13 |
+
super().__init__()
|
| 14 |
+
|
| 15 |
+
@abc.abstractmethod
|
| 16 |
+
def preprocess(self, x_img):
|
| 17 |
+
pass
|
| 18 |
+
|
| 19 |
+
@abc.abstractmethod
|
| 20 |
+
def compute_loss(self, inp):
|
| 21 |
+
pass
|
| 22 |
+
|
| 23 |
+
|
| 24 |
+
class Guider(torch.nn.Module):
|
| 25 |
+
def __init__(self, sampler, guide_model, scale=1.0, verbose=False):
|
| 26 |
+
"""Apply classifier guidance
|
| 27 |
+
Specify a guidance scale either as a scalar,
|
| 28 |
+
or as a schedule: a list of (t, scale) tuples with t running from 0 to 1, e.g.
|
| 29 |
+
[(0, 10), (0.5, 20), (1, 50)]
|
| 30 |
+
"""
|
| 31 |
+
super().__init__()
|
| 32 |
+
self.sampler = sampler
|
| 33 |
+
self.index = 0
|
| 34 |
+
self.show = verbose
|
| 35 |
+
self.guide_model = guide_model
|
| 36 |
+
self.history = []
|
| 37 |
+
|
| 38 |
+
if isinstance(scale, (Tuple, List)):
|
| 39 |
+
times = np.array([x[0] for x in scale])
|
| 40 |
+
values = np.array([x[1] for x in scale])
|
| 41 |
+
self.scale_schedule = {"times": times, "values": values}
|
| 42 |
+
else:
|
| 43 |
+
self.scale_schedule = float(scale)
|
| 44 |
+
|
| 45 |
+
self.ddim_timesteps = sampler.ddim_timesteps
|
| 46 |
+
self.ddpm_num_timesteps = sampler.ddpm_num_timesteps
|
| 47 |
+
|
| 48 |
+
|
| 49 |
+
def get_scales(self):
|
| 50 |
+
if isinstance(self.scale_schedule, float):
|
| 51 |
+
return len(self.ddim_timesteps)*[self.scale_schedule]
|
| 52 |
+
|
| 53 |
+
interpolater = interpolate.interp1d(self.scale_schedule["times"], self.scale_schedule["values"])
|
| 54 |
+
fractional_steps = np.array(self.ddim_timesteps)/self.ddpm_num_timesteps
|
| 55 |
+
return interpolater(fractional_steps)
|
| 56 |
+
|
| 57 |
+
def modify_score(self, model, e_t, x, t, c):
|
| 58 |
+
|
| 59 |
+
# TODO look up index by t
|
| 60 |
+
scale = self.get_scales()[self.index]
|
| 61 |
+
|
| 62 |
+
if (scale == 0):
|
| 63 |
+
return e_t
|
| 64 |
+
|
| 65 |
+
sqrt_1ma = self.sampler.ddim_sqrt_one_minus_alphas[self.index].to(x.device)
|
| 66 |
+
with torch.enable_grad():
|
| 67 |
+
x_in = x.detach().requires_grad_(True)
|
| 68 |
+
pred_x0 = model.predict_start_from_noise(x_in, t=t, noise=e_t)
|
| 69 |
+
x_img = model.first_stage_model.decode((1/0.18215)*pred_x0)
|
| 70 |
+
|
| 71 |
+
inp = self.guide_model.preprocess(x_img)
|
| 72 |
+
loss = self.guide_model.compute_loss(inp)
|
| 73 |
+
grads = torch.autograd.grad(loss.sum(), x_in)[0]
|
| 74 |
+
correction = grads * scale
|
| 75 |
+
|
| 76 |
+
if self.show:
|
| 77 |
+
clear_output(wait=True)
|
| 78 |
+
print(loss.item(), scale, correction.abs().max().item(), e_t.abs().max().item())
|
| 79 |
+
self.history.append([loss.item(), scale, correction.min().item(), correction.max().item()])
|
| 80 |
+
plt.imshow((inp[0].detach().permute(1,2,0).clamp(-1,1).cpu()+1)/2)
|
| 81 |
+
plt.axis('off')
|
| 82 |
+
plt.show()
|
| 83 |
+
plt.imshow(correction[0][0].detach().cpu())
|
| 84 |
+
plt.axis('off')
|
| 85 |
+
plt.show()
|
| 86 |
+
|
| 87 |
+
|
| 88 |
+
e_t_mod = e_t - sqrt_1ma*correction
|
| 89 |
+
if self.show:
|
| 90 |
+
fig, axs = plt.subplots(1, 3)
|
| 91 |
+
axs[0].imshow(e_t[0][0].detach().cpu(), vmin=-2, vmax=+2)
|
| 92 |
+
axs[1].imshow(e_t_mod[0][0].detach().cpu(), vmin=-2, vmax=+2)
|
| 93 |
+
axs[2].imshow(correction[0][0].detach().cpu(), vmin=-2, vmax=+2)
|
| 94 |
+
plt.show()
|
| 95 |
+
self.index += 1
|
| 96 |
+
return e_t_mod
|
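The schedule interpolation inside `Guider.get_scales` above can be illustrated in isolation; the step counts below are hypothetical, only the interpolation logic mirrors the class.

```python
import numpy as np
from scipy import interpolate

schedule = [(0, 10), (0.5, 20), (1, 50)]   # (fraction of diffusion, scale)
ddim_timesteps = np.arange(0, 1000, 20)    # hypothetical DDIM steps
ddpm_num_timesteps = 1000                  # hypothetical training steps

interp = interpolate.interp1d([t for t, _ in schedule],
                              [s for _, s in schedule])
scales = interp(ddim_timesteps / ddpm_num_timesteps)
print(scales[:3], scales[-3:])  # one scale per DDIM step, ~10 early, ~50 late
```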
stable_diffusion/ldm/lr_scheduler.py
ADDED
|
@@ -0,0 +1,101 @@
|
| 1 |
+
import numpy as np
|
| 2 |
+
|
| 3 |
+
|
| 4 |
+
class LambdaWarmUpCosineScheduler:
|
| 5 |
+
"""
|
| 6 |
+
note: use with a base_lr of 1.0
|
| 7 |
+
"""
|
| 8 |
+
def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0):
|
| 9 |
+
self.lr_warm_up_steps = warm_up_steps
|
| 10 |
+
self.lr_start = lr_start
|
| 11 |
+
self.lr_min = lr_min
|
| 12 |
+
self.lr_max = lr_max
|
| 13 |
+
self.lr_max_decay_steps = max_decay_steps
|
| 14 |
+
self.last_lr = 0.
|
| 15 |
+
self.verbosity_interval = verbosity_interval
|
| 16 |
+
|
| 17 |
+
def schedule(self, n, **kwargs):
|
| 18 |
+
if self.verbosity_interval > 0:
|
| 19 |
+
if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}")
|
| 20 |
+
if n < self.lr_warm_up_steps:
|
| 21 |
+
lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start
|
| 22 |
+
self.last_lr = lr
|
| 23 |
+
return lr
|
| 24 |
+
else:
|
| 25 |
+
t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps)
|
| 26 |
+
t = min(t, 1.0)
|
| 27 |
+
lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * (
|
| 28 |
+
1 + np.cos(t * np.pi))
|
| 29 |
+
self.last_lr = lr
|
| 30 |
+
return lr
|
| 31 |
+
|
| 32 |
+
def __call__(self, n, **kwargs):
|
| 33 |
+
return self.schedule(n,**kwargs)
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
class LambdaWarmUpCosineScheduler2:
|
| 37 |
+
"""
|
| 38 |
+
supports repeated iterations, configurable via lists
|
| 39 |
+
note: use with a base_lr of 1.0.
|
| 40 |
+
"""
|
| 41 |
+
def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0):
|
| 42 |
+
assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths)
|
| 43 |
+
self.lr_warm_up_steps = warm_up_steps
|
| 44 |
+
self.f_start = f_start
|
| 45 |
+
self.f_min = f_min
|
| 46 |
+
self.f_max = f_max
|
| 47 |
+
self.cycle_lengths = cycle_lengths
|
| 48 |
+
self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths))
|
| 49 |
+
self.last_f = 0.
|
| 50 |
+
self.verbosity_interval = verbosity_interval
|
| 51 |
+
|
| 52 |
+
def find_in_interval(self, n):
|
| 53 |
+
interval = 0
|
| 54 |
+
for cl in self.cum_cycles[1:]:
|
| 55 |
+
if n <= cl:
|
| 56 |
+
return interval
|
| 57 |
+
interval += 1
|
| 58 |
+
|
| 59 |
+
def schedule(self, n, **kwargs):
|
| 60 |
+
cycle = self.find_in_interval(n)
|
| 61 |
+
n = n - self.cum_cycles[cycle]
|
| 62 |
+
if self.verbosity_interval > 0:
|
| 63 |
+
if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
|
| 64 |
+
f"current cycle {cycle}")
|
| 65 |
+
if n < self.lr_warm_up_steps[cycle]:
|
| 66 |
+
f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
|
| 67 |
+
self.last_f = f
|
| 68 |
+
return f
|
| 69 |
+
else:
|
| 70 |
+
t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
|
| 71 |
+
t = min(t, 1.0)
|
| 72 |
+
f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
|
| 73 |
+
1 + np.cos(t * np.pi))
|
| 74 |
+
self.last_f = f
|
| 75 |
+
return f
|
| 76 |
+
|
| 77 |
+
def __call__(self, n, **kwargs):
|
| 78 |
+
return self.schedule(n, **kwargs)
|
| 79 |
+
|
| 80 |
+
|
| 81 |
+
class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2):
|
| 82 |
+
"""
|
| 83 |
+
Like the cosine scheduler above, but after the per-cycle warm-up the multiplier decays linearly (rather than with a cosine) toward f_min over the rest of the cycle.
|
| 84 |
+
"""
|
| 85 |
+
|
| 86 |
+
def schedule(self, n, **kwargs):
|
| 87 |
+
cycle = self.find_in_interval(n)
|
| 88 |
+
n = n - self.cum_cycles[cycle]
|
| 89 |
+
if self.verbosity_interval > 0:
|
| 90 |
+
if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
|
| 91 |
+
f"current cycle {cycle}")
|
| 92 |
+
|
| 93 |
+
if n < self.lr_warm_up_steps[cycle]:
|
| 94 |
+
f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
|
| 95 |
+
self.last_f = f
|
| 96 |
+
return f
|
| 97 |
+
else:
|
| 98 |
+
f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
|
| 99 |
+
self.last_f = f
|
| 100 |
+
return f
|
| 101 |
+
|
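A minimal usage sketch of `LambdaWarmUpCosineScheduler` above, assuming the class is importable from `ldm.lr_scheduler` and using made-up hyperparameters; the returned value is a multiplier to apply on top of a base_lr of 1.0, as the docstring notes.

```python
sched = LambdaWarmUpCosineScheduler(warm_up_steps=100, lr_min=0.01, lr_max=1.0,
                                    lr_start=0.0, max_decay_steps=1000)
for step in (0, 50, 100, 500, 1000):
    print(step, round(sched(step), 4))
# Linear warm-up from 0.0 to 1.0 over the first 100 steps,
# then cosine decay down to lr_min=0.01 at step 1000.
```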
stable_diffusion/ldm/modules/diffusionmodules/__pycache__/__init__.cpython-38.pyc
ADDED
|
Binary file (215 Bytes).
|
|
|
stable_diffusion/ldm/modules/diffusionmodules/__pycache__/model.cpython-38.pyc
ADDED
|
Binary file (20.9 kB).
|
|
|
stable_diffusion/ldm/modules/diffusionmodules/__pycache__/openaimodel.cpython-38.pyc
ADDED
|
Binary file (24.1 kB).
|
|
|
stable_diffusion/ldm/modules/diffusionmodules/__pycache__/util.cpython-38.pyc
ADDED
|
Binary file (12 kB).
|
|
|
stable_diffusion/ldm/modules/distributions/__pycache__/__init__.cpython-38.pyc
ADDED
|
Binary file (212 Bytes).
|
|
|
stable_diffusion/ldm/modules/distributions/__pycache__/distributions.cpython-38.pyc
ADDED
|
Binary file (5.49 kB).
|
|
|
stable_diffusion/ldm/modules/distributions/distributions.py
ADDED
|
@@ -0,0 +1,125 @@
|
| 1 |
+
import torch
|
| 2 |
+
import numpy as np
|
| 3 |
+
|
| 4 |
+
|
| 5 |
+
class AbstractDistribution:
|
| 6 |
+
def sample(self):
|
| 7 |
+
raise NotImplementedError()
|
| 8 |
+
|
| 9 |
+
def mode(self):
|
| 10 |
+
raise NotImplementedError()
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
class DiracDistribution(AbstractDistribution):
|
| 14 |
+
def __init__(self, value):
|
| 15 |
+
self.value = value
|
| 16 |
+
|
| 17 |
+
def sample(self):
|
| 18 |
+
return self.value
|
| 19 |
+
|
| 20 |
+
def mode(self):
|
| 21 |
+
return self.value
|
| 22 |
+
|
| 23 |
+
|
| 24 |
+
class DiagonalGaussianDistribution(object):
|
| 25 |
+
def __init__(self, parameters, deterministic=False):
|
| 26 |
+
"""
|
| 27 |
+
parameters: input is expected to be a 2D tensor, where the first
|
| 28 |
+
half of the last dimension are the means and the second
|
| 29 |
+
half are the log-variances.
|
| 30 |
+
deterministic: if set to True, would mean that there is no randomness in the distribution
|
| 31 |
+
(i.e., variance and standard deviation are set to zero).
|
| 32 |
+
mathematical:
|
| 33 |
+
self.mean = µ
|
| 34 |
+
self.std = σ
|
| 35 |
+
self.var = σ^2
|
| 36 |
+
self.logvar = log(σ^2) = 2log(σ)
|
| 37 |
+
The logarithm of the variance (self.logvar) is also often used in formulas
|
| 38 |
+
in statistics. For example, the log-likelihood of a Gaussian distribution
|
| 39 |
+
involves the log of the variance. Therefore, working directly with the log
|
| 40 |
+
-variance can make the formulas simpler and more numerically stable.
|
| 41 |
+
"""
|
| 42 |
+
self.parameters = parameters
|
| 43 |
+
self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
|
| 44 |
+
self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
|
| 45 |
+
self.deterministic = deterministic
|
| 46 |
+
self.std = torch.exp(0.5 * self.logvar)
|
| 47 |
+
self.var = torch.exp(self.logvar)
|
| 48 |
+
if self.deterministic:
|
| 49 |
+
self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
|
| 50 |
+
|
| 51 |
+
def sample(self):
|
| 52 |
+
"""
|
| 53 |
+
Reparameterization:
|
| 54 |
+
if Z is a standard normal random variable (i.e., Gaussian distributed
|
| 55 |
+
with mean 0 and standard deviation 1), X = μ + σZ is a normal random
|
| 56 |
+
variable with mean μ and standard deviation σ.
|
| 57 |
+
"""
|
| 58 |
+
x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
|
| 59 |
+
return x
|
| 60 |
+
|
| 61 |
+
def kl(self, other=None):
|
| 62 |
+
"""
|
| 63 |
+
This function is to compute the KL-divergence of the current
|
| 64 |
+
Gaussian distribution with another one. If other is None, then
|
| 65 |
+
compute the KL-divergence with a standard distribution.
|
| 66 |
+
$ KL(P||Q) = log(σ_2 / σ_1) + \frac{σ_1^2 + (μ1 - μ2)^2}{2σ_2^2} - 0.5 $
|
| 67 |
+
"""
|
| 68 |
+
if self.deterministic:
|
| 69 |
+
return torch.Tensor([0.])
|
| 70 |
+
else:
|
| 71 |
+
if other is None:
|
| 72 |
+
return 0.5 * torch.sum(torch.pow(self.mean, 2)
|
| 73 |
+
+ self.var - 1.0 - self.logvar,
|
| 74 |
+
dim=[1, 2, 3])
|
| 75 |
+
else:
|
| 76 |
+
return 0.5 * torch.sum(
|
| 77 |
+
torch.pow(self.mean - other.mean, 2) / other.var
|
| 78 |
+
+ self.var / other.var - 1.0 - self.logvar + other.logvar,
|
| 79 |
+
dim=[1, 2, 3])
|
| 80 |
+
|
| 81 |
+
def nll(self, sample, dims=[1, 2, 3]):
|
| 82 |
+
"""
|
| 83 |
+
The negative log likelihood (NLL) of observing a sample x from a normal
|
| 84 |
+
distribution with mean μ and variance σ^2 is given by:
|
| 85 |
+
NLL = 0.5 * log(2πσ^2) + (1 / 2σ^2) * (x - μ)^2
|
| 86 |
+
"""
|
| 87 |
+
if self.deterministic:
|
| 88 |
+
return torch.Tensor([0.])
|
| 89 |
+
logtwopi = np.log(2.0 * np.pi)
|
| 90 |
+
return 0.5 * torch.sum(
|
| 91 |
+
logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
|
| 92 |
+
dim=dims)
|
| 93 |
+
|
| 94 |
+
def mode(self):
|
| 95 |
+
return self.mean
|
| 96 |
+
|
| 97 |
+
|
| 98 |
+
def normal_kl(mean1, logvar1, mean2, logvar2):
|
| 99 |
+
"""
|
| 100 |
+
source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
|
| 101 |
+
Compute the KL divergence between two Gaussians.
|
| 102 |
+
Shapes are automatically broadcasted, so batches can be compared to
|
| 103 |
+
scalars, among other use cases.
|
| 104 |
+
"""
|
| 105 |
+
tensor = None
|
| 106 |
+
for obj in (mean1, logvar1, mean2, logvar2):
|
| 107 |
+
if isinstance(obj, torch.Tensor):
|
| 108 |
+
tensor = obj
|
| 109 |
+
break
|
| 110 |
+
assert tensor is not None, "at least one argument must be a Tensor"
|
| 111 |
+
|
| 112 |
+
# Force variances to be Tensors. Broadcasting helps convert scalars to
|
| 113 |
+
# Tensors, but it does not work for torch.exp().
|
| 114 |
+
logvar1, logvar2 = [
|
| 115 |
+
x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
|
| 116 |
+
for x in (logvar1, logvar2)
|
| 117 |
+
]
|
| 118 |
+
|
| 119 |
+
return 0.5 * (
|
| 120 |
+
-1.0
|
| 121 |
+
+ logvar2
|
| 122 |
+
- logvar1
|
| 123 |
+
+ torch.exp(logvar1 - logvar2)
|
| 124 |
+
+ ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
|
| 125 |
+
)
|
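A minimal usage sketch of `DiagonalGaussianDistribution` above, assuming the module is importable; the tensor shapes are illustrative, the only requirement the class imposes is that means and log-variances are packed along dim=1.

```python
import torch

mean = torch.zeros(1, 4, 32, 32)
logvar = torch.zeros(1, 4, 32, 32)             # log-variance 0 -> variance 1
parameters = torch.cat([mean, logvar], dim=1)  # (1, 8, 32, 32), split by chunk()

posterior = DiagonalGaussianDistribution(parameters)
z = posterior.sample()                         # reparameterized draw, (1, 4, 32, 32)
print(posterior.kl())                          # KL to N(0, I); 0 for this posterior
print(torch.equal(posterior.mode(), mean))     # True: the mode is the mean
```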
stable_diffusion/ldm/modules/losses/__init__.py
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
from ldm.modules.losses.contperceptual import LPIPSWithDiscriminator
|
stable_diffusion/ldm/modules/losses/contperceptual.py
ADDED
|
@@ -0,0 +1,111 @@
|
| 1 |
+
import torch
|
| 2 |
+
import torch.nn as nn
|
| 3 |
+
|
| 4 |
+
from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no?
|
| 5 |
+
|
| 6 |
+
|
| 7 |
+
class LPIPSWithDiscriminator(nn.Module):
|
| 8 |
+
def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0,
|
| 9 |
+
disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
|
| 10 |
+
perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
|
| 11 |
+
disc_loss="hinge"):
|
| 12 |
+
|
| 13 |
+
super().__init__()
|
| 14 |
+
assert disc_loss in ["hinge", "vanilla"]
|
| 15 |
+
self.kl_weight = kl_weight
|
| 16 |
+
self.pixel_weight = pixelloss_weight
|
| 17 |
+
self.perceptual_loss = LPIPS().eval()
|
| 18 |
+
self.perceptual_weight = perceptual_weight
|
| 19 |
+
# output log variance
|
| 20 |
+
self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init)
|
| 21 |
+
|
| 22 |
+
self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
|
| 23 |
+
n_layers=disc_num_layers,
|
| 24 |
+
use_actnorm=use_actnorm
|
| 25 |
+
).apply(weights_init)
|
| 26 |
+
self.discriminator_iter_start = disc_start
|
| 27 |
+
self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss
|
| 28 |
+
self.disc_factor = disc_factor
|
| 29 |
+
self.discriminator_weight = disc_weight
|
| 30 |
+
self.disc_conditional = disc_conditional
|
| 31 |
+
|
| 32 |
+
def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
|
| 33 |
+
if last_layer is not None:
|
| 34 |
+
nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
|
| 35 |
+
g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
|
| 36 |
+
else:
|
| 37 |
+
nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
|
| 38 |
+
g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
|
| 39 |
+
|
| 40 |
+
d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
|
| 41 |
+
d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
|
| 42 |
+
d_weight = d_weight * self.discriminator_weight
|
| 43 |
+
return d_weight
|
| 44 |
+
|
| 45 |
+
def forward(self, inputs, reconstructions, posteriors, optimizer_idx,
|
| 46 |
+
global_step, last_layer=None, cond=None, split="train",
|
| 47 |
+
weights=None):
|
| 48 |
+
rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
|
| 49 |
+
if self.perceptual_weight > 0:
|
| 50 |
+
p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
|
| 51 |
+
rec_loss = rec_loss + self.perceptual_weight * p_loss
|
| 52 |
+
|
| 53 |
+
nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar
|
| 54 |
+
weighted_nll_loss = nll_loss
|
| 55 |
+
if weights is not None:
|
| 56 |
+
weighted_nll_loss = weights*nll_loss
|
| 57 |
+
weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0]
|
| 58 |
+
nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
|
| 59 |
+
kl_loss = posteriors.kl()
|
| 60 |
+
kl_loss = torch.sum(kl_loss) / kl_loss.shape[0]
|
| 61 |
+
|
| 62 |
+
# now the GAN part
|
| 63 |
+
if optimizer_idx == 0:
|
| 64 |
+
# generator update
|
| 65 |
+
if cond is None:
|
| 66 |
+
assert not self.disc_conditional
|
| 67 |
+
logits_fake = self.discriminator(reconstructions.contiguous())
|
| 68 |
+
else:
|
| 69 |
+
assert self.disc_conditional
|
| 70 |
+
logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
|
| 71 |
+
g_loss = -torch.mean(logits_fake)
|
| 72 |
+
|
| 73 |
+
if self.disc_factor > 0.0:
|
| 74 |
+
try:
|
| 75 |
+
d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
|
| 76 |
+
except RuntimeError:
|
| 77 |
+
assert not self.training
|
| 78 |
+
d_weight = torch.tensor(0.0)
|
| 79 |
+
else:
|
| 80 |
+
d_weight = torch.tensor(0.0)
|
| 81 |
+
|
| 82 |
+
disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
|
| 83 |
+
loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss
|
| 84 |
+
|
| 85 |
+
log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(),
|
| 86 |
+
"{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(),
|
| 87 |
+
"{}/rec_loss".format(split): rec_loss.detach().mean(),
|
| 88 |
+
"{}/d_weight".format(split): d_weight.detach(),
|
| 89 |
+
"{}/disc_factor".format(split): torch.tensor(disc_factor),
|
| 90 |
+
"{}/g_loss".format(split): g_loss.detach().mean(),
|
| 91 |
+
}
|
| 92 |
+
return loss, log
|
| 93 |
+
|
| 94 |
+
if optimizer_idx == 1:
|
| 95 |
+
# second pass for discriminator update
|
| 96 |
+
if cond is None:
|
| 97 |
+
logits_real = self.discriminator(inputs.contiguous().detach())
|
| 98 |
+
logits_fake = self.discriminator(reconstructions.contiguous().detach())
|
| 99 |
+
else:
|
| 100 |
+
logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
|
| 101 |
+
logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
|
| 102 |
+
|
| 103 |
+
disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
|
| 104 |
+
d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
|
| 105 |
+
|
| 106 |
+
log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
|
| 107 |
+
"{}/logits_real".format(split): logits_real.detach().mean(),
|
| 108 |
+
"{}/logits_fake".format(split): logits_fake.detach().mean()
|
| 109 |
+
}
|
| 110 |
+
return d_loss, log
|
| 111 |
+
|
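The adaptive discriminator weight in `calculate_adaptive_weight` above balances the GAN loss against the reconstruction loss by comparing their gradient norms at the decoder's last layer. A self-contained sketch on stand-in tensors (the two losses here are arbitrary placeholders, not the real objectives):

```python
import torch

last_layer = torch.randn(8, 8, requires_grad=True)
nll_loss = (last_layer ** 2).mean()   # placeholder reconstruction/NLL loss
g_loss = last_layer.sin().mean()      # placeholder generator (GAN) loss

nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]

d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
print(d_weight)  # scale applied to g_loss so both terms contribute comparably
```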
stable_diffusion/ldm/modules/losses/vqperceptual.py
ADDED
|
@@ -0,0 +1,167 @@
|
| 1 |
+
import torch
|
| 2 |
+
from torch import nn
|
| 3 |
+
import torch.nn.functional as F
|
| 4 |
+
from einops import repeat
|
| 5 |
+
|
| 6 |
+
from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
|
| 7 |
+
from taming.modules.losses.lpips import LPIPS
|
| 8 |
+
from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss
|
| 9 |
+
|
| 10 |
+
|
| 11 |
+
def hinge_d_loss_with_exemplar_weights(logits_real, logits_fake, weights):
|
| 12 |
+
assert weights.shape[0] == logits_real.shape[0] == logits_fake.shape[0]
|
| 13 |
+
loss_real = torch.mean(F.relu(1. - logits_real), dim=[1,2,3])
|
| 14 |
+
loss_fake = torch.mean(F.relu(1. + logits_fake), dim=[1,2,3])
|
| 15 |
+
loss_real = (weights * loss_real).sum() / weights.sum()
|
| 16 |
+
loss_fake = (weights * loss_fake).sum() / weights.sum()
|
| 17 |
+
d_loss = 0.5 * (loss_real + loss_fake)
|
| 18 |
+
return d_loss
|
| 19 |
+
|
| 20 |
+
def adopt_weight(weight, global_step, threshold=0, value=0.):
|
| 21 |
+
if global_step < threshold:
|
| 22 |
+
weight = value
|
| 23 |
+
return weight
|
| 24 |
+
|
| 25 |
+
|
| 26 |
+
def measure_perplexity(predicted_indices, n_embed):
|
| 27 |
+
# src: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py
|
| 28 |
+
# eval cluster perplexity. when perplexity == num_embeddings then all clusters are used exactly equally
|
| 29 |
+
encodings = F.one_hot(predicted_indices, n_embed).float().reshape(-1, n_embed)
|
| 30 |
+
avg_probs = encodings.mean(0)
|
| 31 |
+
perplexity = (-(avg_probs * torch.log(avg_probs + 1e-10)).sum()).exp()
|
| 32 |
+
cluster_use = torch.sum(avg_probs > 0)
|
| 33 |
+
return perplexity, cluster_use
|
| 34 |
+
|
| 35 |
+
def l1(x, y):
|
| 36 |
+
return torch.abs(x-y)
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
def l2(x, y):
|
| 40 |
+
return torch.pow((x-y), 2)
|
| 41 |
+
|
| 42 |
+
|
| 43 |
+
class VQLPIPSWithDiscriminator(nn.Module):
|
| 44 |
+
def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0,
|
| 45 |
+
disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
|
| 46 |
+
perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
|
| 47 |
+
disc_ndf=64, disc_loss="hinge", n_classes=None, perceptual_loss="lpips",
|
| 48 |
+
pixel_loss="l1"):
|
| 49 |
+
super().__init__()
|
| 50 |
+
assert disc_loss in ["hinge", "vanilla"]
|
| 51 |
+
assert perceptual_loss in ["lpips", "clips", "dists"]
|
| 52 |
+
assert pixel_loss in ["l1", "l2"]
|
| 53 |
+
self.codebook_weight = codebook_weight
|
| 54 |
+
self.pixel_weight = pixelloss_weight
|
| 55 |
+
if perceptual_loss == "lpips":
|
| 56 |
+
print(f"{self.__class__.__name__}: Running with LPIPS.")
|
| 57 |
+
self.perceptual_loss = LPIPS().eval()
|
| 58 |
+
else:
|
| 59 |
+
raise ValueError(f"Unknown perceptual loss: >> {perceptual_loss} <<")
|
| 60 |
+
self.perceptual_weight = perceptual_weight
|
| 61 |
+
|
| 62 |
+
if pixel_loss == "l1":
|
| 63 |
+
self.pixel_loss = l1
|
| 64 |
+
else:
|
| 65 |
+
self.pixel_loss = l2
|
| 66 |
+
|
| 67 |
+
self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
|
| 68 |
+
n_layers=disc_num_layers,
|
| 69 |
+
use_actnorm=use_actnorm,
|
| 70 |
+
ndf=disc_ndf
|
| 71 |
+
).apply(weights_init)
|
| 72 |
+
self.discriminator_iter_start = disc_start
|
| 73 |
+
if disc_loss == "hinge":
|
| 74 |
+
self.disc_loss = hinge_d_loss
|
| 75 |
+
elif disc_loss == "vanilla":
|
| 76 |
+
self.disc_loss = vanilla_d_loss
|
| 77 |
+
else:
|
| 78 |
+
raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
|
| 79 |
+
print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
|
| 80 |
+
self.disc_factor = disc_factor
|
| 81 |
+
self.discriminator_weight = disc_weight
|
| 82 |
+
self.disc_conditional = disc_conditional
|
| 83 |
+
self.n_classes = n_classes
|
| 84 |
+
|
| 85 |
+
def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
|
| 86 |
+
if last_layer is not None:
|
| 87 |
+
nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
|
| 88 |
+
g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
|
| 89 |
+
else:
|
| 90 |
+
nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
|
| 91 |
+
g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
|
| 92 |
+
|
| 93 |
+
d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
|
| 94 |
+
d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
|
| 95 |
+
d_weight = d_weight * self.discriminator_weight
|
| 96 |
+
return d_weight
|
| 97 |
+
|
| 98 |
+
def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx,
|
| 99 |
+
global_step, last_layer=None, cond=None, split="train", predicted_indices=None):
|
| 100 |
+
if not exists(codebook_loss):
|
| 101 |
+
codebook_loss = torch.tensor([0.]).to(inputs.device)
|
| 102 |
+
#rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
|
| 103 |
+
rec_loss = self.pixel_loss(inputs.contiguous(), reconstructions.contiguous())
|
| 104 |
+
if self.perceptual_weight > 0:
|
| 105 |
+
p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
|
| 106 |
+
rec_loss = rec_loss + self.perceptual_weight * p_loss
|
| 107 |
+
else:
|
| 108 |
+
p_loss = torch.tensor([0.0])
|
| 109 |
+
|
| 110 |
+
nll_loss = rec_loss
|
| 111 |
+
#nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
|
| 112 |
+
nll_loss = torch.mean(nll_loss)
|
| 113 |
+
|
| 114 |
+
# now the GAN part
|
| 115 |
+
if optimizer_idx == 0:
|
| 116 |
+
# generator update
|
| 117 |
+
if cond is None:
|
| 118 |
+
assert not self.disc_conditional
|
| 119 |
+
logits_fake = self.discriminator(reconstructions.contiguous())
|
| 120 |
+
else:
|
| 121 |
+
assert self.disc_conditional
|
| 122 |
+
logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
|
| 123 |
+
g_loss = -torch.mean(logits_fake)
|
| 124 |
+
|
| 125 |
+
try:
|
| 126 |
+
d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
|
| 127 |
+
except RuntimeError:
|
| 128 |
+
assert not self.training
|
| 129 |
+
d_weight = torch.tensor(0.0)
|
| 130 |
+
|
| 131 |
+
disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
|
| 132 |
+
loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean()
|
| 133 |
+
|
| 134 |
+
log = {"{}/total_loss".format(split): loss.clone().detach().mean(),
|
| 135 |
+
"{}/quant_loss".format(split): codebook_loss.detach().mean(),
|
| 136 |
+
"{}/nll_loss".format(split): nll_loss.detach().mean(),
|
| 137 |
+
"{}/rec_loss".format(split): rec_loss.detach().mean(),
|
| 138 |
+
"{}/p_loss".format(split): p_loss.detach().mean(),
|
| 139 |
+
"{}/d_weight".format(split): d_weight.detach(),
|
| 140 |
+
"{}/disc_factor".format(split): torch.tensor(disc_factor),
|
| 141 |
+
"{}/g_loss".format(split): g_loss.detach().mean(),
|
| 142 |
+
}
|
| 143 |
+
if predicted_indices is not None:
|
| 144 |
+
assert self.n_classes is not None
|
| 145 |
+
with torch.no_grad():
|
| 146 |
+
perplexity, cluster_usage = measure_perplexity(predicted_indices, self.n_classes)
|
| 147 |
+
log[f"{split}/perplexity"] = perplexity
|
| 148 |
+
log[f"{split}/cluster_usage"] = cluster_usage
|
| 149 |
+
return loss, log
|
| 150 |
+
|
| 151 |
+
if optimizer_idx == 1:
|
| 152 |
+
# second pass for discriminator update
|
| 153 |
+
if cond is None:
|
| 154 |
+
logits_real = self.discriminator(inputs.contiguous().detach())
|
| 155 |
+
logits_fake = self.discriminator(reconstructions.contiguous().detach())
|
| 156 |
+
else:
|
| 157 |
+
logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
|
| 158 |
+
logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
|
| 159 |
+
|
| 160 |
+
disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
|
| 161 |
+
d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
|
| 162 |
+
|
| 163 |
+
log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
|
| 164 |
+
"{}/logits_real".format(split): logits_real.detach().mean(),
|
| 165 |
+
"{}/logits_fake".format(split): logits_fake.detach().mean()
|
| 166 |
+
}
|
| 167 |
+
return d_loss, log
|
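The `measure_perplexity` helper above has a simple sanity check: with every codebook entry used equally the perplexity equals `n_embed`, while codebook collapse drives it toward 1. A minimal sketch, re-stating the helper so the snippet is self-contained:

```python
import torch
import torch.nn.functional as F

def measure_perplexity(predicted_indices, n_embed):
    encodings = F.one_hot(predicted_indices, n_embed).float().reshape(-1, n_embed)
    avg_probs = encodings.mean(0)
    perplexity = (-(avg_probs * torch.log(avg_probs + 1e-10)).sum()).exp()
    return perplexity, torch.sum(avg_probs > 0)

print(measure_perplexity(torch.arange(8), 8))                   # (~8.0, 8): uniform usage
print(measure_perplexity(torch.zeros(8, dtype=torch.long), 8))  # (~1.0, 1): collapsed
```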
stable_diffusion/ldm/util.py
ADDED
|
@@ -0,0 +1,197 @@
|
| 1 |
+
import importlib
|
| 2 |
+
|
| 3 |
+
import torch
|
| 4 |
+
from torch import optim
|
| 5 |
+
import numpy as np
|
| 6 |
+
|
| 7 |
+
from inspect import isfunction
|
| 8 |
+
from PIL import Image, ImageDraw, ImageFont
|
| 9 |
+
|
| 10 |
+
|
| 11 |
+
def log_txt_as_img(wh, xc, size=10):
|
| 12 |
+
# wh a tuple of (width, height)
|
| 13 |
+
# xc a list of captions to plot
|
| 14 |
+
b = len(xc)
|
| 15 |
+
txts = list()
|
| 16 |
+
for bi in range(b):
|
| 17 |
+
txt = Image.new("RGB", wh, color="white")
|
| 18 |
+
draw = ImageDraw.Draw(txt)
|
| 19 |
+
font = ImageFont.truetype('data/DejaVuSans.ttf', size=size)
|
| 20 |
+
nc = int(40 * (wh[0] / 256))
|
| 21 |
+
lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))
|
| 22 |
+
|
| 23 |
+
try:
|
| 24 |
+
draw.text((0, 0), lines, fill="black", font=font)
|
| 25 |
+
except UnicodeEncodeError:
|
| 26 |
+
print("Cant encode string for logging. Skipping.")
|
| 27 |
+
|
| 28 |
+
txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
|
| 29 |
+
txts.append(txt)
|
| 30 |
+
txts = np.stack(txts)
|
| 31 |
+
txts = torch.tensor(txts)
|
| 32 |
+
return txts
|
| 33 |
+
|
| 34 |
+
|
| 35 |
+
def ismap(x):
|
| 36 |
+
if not isinstance(x, torch.Tensor):
|
| 37 |
+
return False
|
| 38 |
+
return (len(x.shape) == 4) and (x.shape[1] > 3)
|
| 39 |
+
|
| 40 |
+
|
| 41 |
+
def isimage(x):
|
| 42 |
+
if not isinstance(x,torch.Tensor):
|
| 43 |
+
return False
|
| 44 |
+
return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
|
| 45 |
+
|
| 46 |
+
|
| 47 |
+
def exists(x):
|
| 48 |
+
return x is not None
|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
def default(val, d):
|
| 52 |
+
if exists(val):
|
| 53 |
+
return val
|
| 54 |
+
return d() if isfunction(d) else d
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
def mean_flat(tensor):
|
| 58 |
+
"""
|
| 59 |
+
https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86
|
| 60 |
+
Take the mean over all non-batch dimensions.
|
| 61 |
+
"""
|
| 62 |
+
return tensor.mean(dim=list(range(1, len(tensor.shape))))
|
| 63 |
+
|
| 64 |
+
|
| 65 |
+
def count_params(model, verbose=False):
|
| 66 |
+
total_params = sum(p.numel() for p in model.parameters())
|
| 67 |
+
if verbose:
|
| 68 |
+
print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.")
|
| 69 |
+
return total_params
|
| 70 |
+
|
| 71 |
+
|
| 72 |
+
def instantiate_from_config(config):
|
| 73 |
+
if not "target" in config:
|
| 74 |
+
if config == '__is_first_stage__':
|
| 75 |
+
return None
|
| 76 |
+
elif config == "__is_unconditional__":
|
| 77 |
+
return None
|
| 78 |
+
raise KeyError("Expected key `target` to instantiate.")
|
| 79 |
+
return get_obj_from_str(config["target"])(**config.get("params", dict()))
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
def get_obj_from_str(string, reload=False):
|
| 83 |
+
module, cls = string.rsplit(".", 1)
|
| 84 |
+
if reload:
|
| 85 |
+
module_imp = importlib.import_module(module)
|
| 86 |
+
importlib.reload(module_imp)
|
| 87 |
+
return getattr(importlib.import_module(module, package=None), cls)
|
| 88 |
+
|
| 89 |
+
|
| 90 |
+
class AdamWwithEMAandWings(optim.Optimizer):
|
| 91 |
+
# credit to https://gist.github.com/crowsonkb/65f7265353f403714fce3b2595e0b298
|
| 92 |
+
def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8, # TODO: check hyperparameters before using
|
| 93 |
+
weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999, # ema decay to match previous code
|
| 94 |
+
ema_power=1., param_names=()):
|
| 95 |
+
"""AdamW that saves EMA versions of the parameters."""
|
| 96 |
+
if not 0.0 <= lr:
|
| 97 |
+
raise ValueError("Invalid learning rate: {}".format(lr))
|
| 98 |
+
if not 0.0 <= eps:
|
| 99 |
+
raise ValueError("Invalid epsilon value: {}".format(eps))
|
| 100 |
+
if not 0.0 <= betas[0] < 1.0:
|
| 101 |
+
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
|
| 102 |
+
if not 0.0 <= betas[1] < 1.0:
|
| 103 |
+
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
|
| 104 |
+
if not 0.0 <= weight_decay:
|
| 105 |
+
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
|
| 106 |
+
if not 0.0 <= ema_decay <= 1.0:
|
| 107 |
+
raise ValueError("Invalid ema_decay value: {}".format(ema_decay))
|
| 108 |
+
defaults = dict(lr=lr, betas=betas, eps=eps,
|
| 109 |
+
weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay,
|
| 110 |
+
ema_power=ema_power, param_names=param_names)
|
| 111 |
+
super().__init__(params, defaults)
|
| 112 |
+
|
| 113 |
+
def __setstate__(self, state):
|
| 114 |
+
super().__setstate__(state)
|
| 115 |
+
for group in self.param_groups:
|
| 116 |
+
group.setdefault('amsgrad', False)
|
| 117 |
+
|
| 118 |
+
@torch.no_grad()
|
| 119 |
+
def step(self, closure=None):
|
| 120 |
+
"""Performs a single optimization step.
|
| 121 |
+
Args:
|
| 122 |
+
closure (callable, optional): A closure that reevaluates the model
|
| 123 |
+
and returns the loss.
|
| 124 |
+
"""
|
| 125 |
+
loss = None
|
| 126 |
+
if closure is not None:
|
| 127 |
+
with torch.enable_grad():
|
| 128 |
+
loss = closure()
|
| 129 |
+
|
| 130 |
+
for group in self.param_groups:
|
| 131 |
+
params_with_grad = []
|
| 132 |
+
grads = []
|
| 133 |
+
exp_avgs = []
|
| 134 |
+
exp_avg_sqs = []
|
| 135 |
+
ema_params_with_grad = []
|
| 136 |
+
state_sums = []
|
| 137 |
+
max_exp_avg_sqs = []
|
| 138 |
+
state_steps = []
|
| 139 |
+
amsgrad = group['amsgrad']
|
| 140 |
+
beta1, beta2 = group['betas']
|
| 141 |
+
ema_decay = group['ema_decay']
|
| 142 |
+
ema_power = group['ema_power']
|
| 143 |
+
|
| 144 |
+
for p in group['params']:
|
| 145 |
+
if p.grad is None:
|
| 146 |
+
continue
|
| 147 |
+
params_with_grad.append(p)
|
| 148 |
+
if p.grad.is_sparse:
|
| 149 |
+
raise RuntimeError('AdamW does not support sparse gradients')
|
| 150 |
+
grads.append(p.grad)
|
| 151 |
+
|
| 152 |
+
state = self.state[p]
|
| 153 |
+
|
| 154 |
+
# State initialization
|
| 155 |
+
if len(state) == 0:
|
| 156 |
+
state['step'] = 0
|
| 157 |
+
# Exponential moving average of gradient values
|
| 158 |
+
state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
|
| 159 |
+
# Exponential moving average of squared gradient values
|
| 160 |
+
state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
|
| 161 |
+
if amsgrad:
|
| 162 |
+
# Maintains max of all exp. moving avg. of sq. grad. values
|
| 163 |
+
state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
|
| 164 |
+
# Exponential moving average of parameter values
|
| 165 |
+
state['param_exp_avg'] = p.detach().float().clone()
|
| 166 |
+
|
| 167 |
+
exp_avgs.append(state['exp_avg'])
|
| 168 |
+
exp_avg_sqs.append(state['exp_avg_sq'])
|
| 169 |
+
ema_params_with_grad.append(state['param_exp_avg'])
|
| 170 |
+
|
| 171 |
+
if amsgrad:
|
| 172 |
+
max_exp_avg_sqs.append(state['max_exp_avg_sq'])
|
| 173 |
+
|
| 174 |
+
# update the steps for each param group update
|
| 175 |
+
state['step'] += 1
|
| 176 |
+
# record the step after step update
|
| 177 |
+
state_steps.append(state['step'])
|
| 178 |
+
|
| 179 |
+
optim._functional.adamw(params_with_grad,
|
| 180 |
+
grads,
|
| 181 |
+
exp_avgs,
|
| 182 |
+
exp_avg_sqs,
|
| 183 |
+
max_exp_avg_sqs,
|
| 184 |
+
state_steps,
|
| 185 |
+
amsgrad=amsgrad,
|
| 186 |
+
beta1=beta1,
|
| 187 |
+
beta2=beta2,
|
| 188 |
+
lr=group['lr'],
|
| 189 |
+
weight_decay=group['weight_decay'],
|
| 190 |
+
eps=group['eps'],
|
| 191 |
+
maximize=False)
|
| 192 |
+
|
| 193 |
+
cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power)
|
| 194 |
+
for param, ema_param in zip(params_with_grad, ema_params_with_grad):
|
| 195 |
+
ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay)
|
| 196 |
+
|
| 197 |
+
return loss
|
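A minimal usage sketch of `instantiate_from_config` / `get_obj_from_str` above, assuming the module is importable; the target class here is an arbitrary importable example, not one the repository actually configures this way.

```python
import torch

config = {
    "target": "torch.optim.SGD",  # any dotted, importable name resolves the same way
    "params": {"params": [torch.nn.Parameter(torch.zeros(1))], "lr": 0.1},
}
opt = instantiate_from_config(config)  # get_obj_from_str("torch.optim.SGD")(**params)
print(type(opt))                       # an SGD optimizer instance
```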
stable_diffusion/main.py
ADDED
|
@@ -0,0 +1,744 @@
|
import argparse, os, sys, datetime, glob, importlib, csv
import numpy as np
import time
import torch
import torchvision
import pytorch_lightning as pl

from packaging import version
from omegaconf import OmegaConf
from torch.utils.data import random_split, DataLoader, Dataset, Subset
from functools import partial
from PIL import Image

from pytorch_lightning import seed_everything
from pytorch_lightning.trainer import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor
from pytorch_lightning.utilities.distributed import rank_zero_only
from pytorch_lightning.utilities import rank_zero_info

from ldm.data.base import Txt2ImgIterableBaseDataset
from ldm.util import instantiate_from_config


def get_parser(**parser_kwargs):
    def str2bool(v):
        if isinstance(v, bool):
            return v
        if v.lower() in ("yes", "true", "t", "y", "1"):
            return True
        elif v.lower() in ("no", "false", "f", "n", "0"):
            return False
        else:
            raise argparse.ArgumentTypeError("Boolean value expected.")

    parser = argparse.ArgumentParser(**parser_kwargs)
    parser.add_argument(
        "-n",
        "--name",
        type=str,
        const=True,
        default="",
        nargs="?",
        help="postfix for logdir",
    )
    parser.add_argument(
        "-r",
        "--resume",
        type=str,
        const=True,
        default="",
        nargs="?",
        help="resume from logdir or checkpoint in logdir",
    )
    parser.add_argument(
        "-b",
        "--base",
        nargs="*",
        metavar="base_config.yaml",
        help="paths to base configs. Loaded from left-to-right. "
             "Parameters can be overwritten or added with command-line options of the form `--key value`.",
        default=list(),
    )
    parser.add_argument(
        "-t",
        "--train",
        type=str2bool,
        const=True,
        default=False,
        nargs="?",
        help="train",
    )
    parser.add_argument(
        "--no-test",
        type=str2bool,
        const=True,
        default=False,
        nargs="?",
        help="disable test",
    )
    parser.add_argument(
        "-p",
        "--project",
        help="name of new or path to existing project"
    )
    parser.add_argument(
        "-d",
        "--debug",
        type=str2bool,
        nargs="?",
        const=True,
        default=False,
        help="enable post-mortem debugging",
    )
    parser.add_argument(
        "-s",
        "--seed",
        type=int,
        default=23,
        help="seed for seed_everything",
    )
    parser.add_argument(
        "-f",
        "--postfix",
        type=str,
        default="",
        help="post-postfix for default name",
    )
    parser.add_argument(
        "-l",
        "--logdir",
        type=str,
        default="logs",
        help="directory for logging dat shit",
    )
    parser.add_argument(
        "--scale_lr",
        type=str2bool,
        nargs="?",
        const=True,
        default=True,
        help="scale base-lr by ngpu * batch_size * n_accumulate",
    )
    return parser


def nondefault_trainer_args(opt):
    parser = argparse.ArgumentParser()
    parser = Trainer.add_argparse_args(parser)
    args = parser.parse_args([])
    return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k))


class WrappedDataset(Dataset):
    """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset"""

    def __init__(self, dataset):
        self.data = dataset

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


def worker_init_fn(_):
    worker_info = torch.utils.data.get_worker_info()

    dataset = worker_info.dataset
    worker_id = worker_info.id

    if isinstance(dataset, Txt2ImgIterableBaseDataset):
        split_size = dataset.num_records // worker_info.num_workers
        # reset num_records to the true number to retain reliable length information
        dataset.sample_ids = dataset.valid_ids[worker_id * split_size:(worker_id + 1) * split_size]
        current_id = np.random.choice(len(np.random.get_state()[1]), 1)
        return np.random.seed(np.random.get_state()[1][current_id] + worker_id)
    else:
        return np.random.seed(np.random.get_state()[1][0] + worker_id)


class DataModuleFromConfig(pl.LightningDataModule):
    def __init__(self, batch_size, train=None, validation=None, test=None, predict=None,
                 wrap=False, num_workers=None, shuffle_test_loader=False, use_worker_init_fn=False,
                 shuffle_val_dataloader=False):
        super().__init__()
        self.batch_size = batch_size
        self.dataset_configs = dict()
        self.num_workers = num_workers if num_workers is not None else batch_size * 2
        self.use_worker_init_fn = use_worker_init_fn
        if train is not None:
            self.dataset_configs["train"] = train
            self.train_dataloader = self._train_dataloader
        if validation is not None:
            self.dataset_configs["validation"] = validation
            self.val_dataloader = partial(self._val_dataloader, shuffle=shuffle_val_dataloader)
        if test is not None:
            self.dataset_configs["test"] = test
            self.test_dataloader = partial(self._test_dataloader, shuffle=shuffle_test_loader)
        if predict is not None:
            self.dataset_configs["predict"] = predict
            self.predict_dataloader = self._predict_dataloader
        self.wrap = wrap

    def prepare_data(self):
        for data_cfg in self.dataset_configs.values():
            instantiate_from_config(data_cfg)

    def setup(self, stage=None):
        self.datasets = dict(
            (k, instantiate_from_config(self.dataset_configs[k]))
            for k in self.dataset_configs)
        if self.wrap:
            for k in self.datasets:
                self.datasets[k] = WrappedDataset(self.datasets[k])

    def _train_dataloader(self):
        is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)
        if is_iterable_dataset or self.use_worker_init_fn:
            init_fn = worker_init_fn
        else:
            init_fn = None
        return DataLoader(self.datasets["train"], batch_size=self.batch_size,
                          num_workers=self.num_workers, shuffle=False if is_iterable_dataset else True,
                          worker_init_fn=init_fn)

    def _val_dataloader(self, shuffle=False):
        if isinstance(self.datasets['validation'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:
            init_fn = worker_init_fn
        else:
            init_fn = None
        return DataLoader(self.datasets["validation"],
                          batch_size=self.batch_size,
                          num_workers=self.num_workers,
                          worker_init_fn=init_fn,
                          shuffle=shuffle)

    def _test_dataloader(self, shuffle=False):
        is_iterable_dataset = isinstance(self.datasets['train'], Txt2ImgIterableBaseDataset)
        if is_iterable_dataset or self.use_worker_init_fn:
            init_fn = worker_init_fn
        else:
            init_fn = None

        # do not shuffle dataloader for iterable dataset
        shuffle = shuffle and (not is_iterable_dataset)

        return DataLoader(self.datasets["test"], batch_size=self.batch_size,
                          num_workers=self.num_workers, worker_init_fn=init_fn, shuffle=shuffle)

    def _predict_dataloader(self, shuffle=False):
        if isinstance(self.datasets['predict'], Txt2ImgIterableBaseDataset) or self.use_worker_init_fn:
            init_fn = worker_init_fn
        else:
            init_fn = None
        return DataLoader(self.datasets["predict"], batch_size=self.batch_size,
                          num_workers=self.num_workers, worker_init_fn=init_fn)


class SetupCallback(Callback):
    def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config):
        super().__init__()
        self.resume = resume
        self.now = now
        self.logdir = logdir
        self.ckptdir = ckptdir
        self.cfgdir = cfgdir
        self.config = config
        self.lightning_config = lightning_config

    def on_keyboard_interrupt(self, trainer, pl_module):
        if trainer.global_rank == 0:
            print("Summoning checkpoint.")
            ckpt_path = os.path.join(self.ckptdir, "last.ckpt")
            trainer.save_checkpoint(ckpt_path)

    def on_pretrain_routine_start(self, trainer, pl_module):
        if trainer.global_rank == 0:
            # Create logdirs and save configs
            os.makedirs(self.logdir, exist_ok=True)
            os.makedirs(self.ckptdir, exist_ok=True)
            os.makedirs(self.cfgdir, exist_ok=True)

            if "callbacks" in self.lightning_config:
                if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']:
                    os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True)
            print("Project config")
            print(OmegaConf.to_yaml(self.config))
            OmegaConf.save(self.config,
                           os.path.join(self.cfgdir, "{}-project.yaml".format(self.now)))

            print("Lightning config")
            print(OmegaConf.to_yaml(self.lightning_config))
            OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}),
                           os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now)))

        else:
            # ModelCheckpoint callback created log directory --- remove it
            if not self.resume and os.path.exists(self.logdir):
                dst, name = os.path.split(self.logdir)
                dst = os.path.join(dst, "child_runs", name)
                os.makedirs(os.path.split(dst)[0], exist_ok=True)
                try:
                    os.rename(self.logdir, dst)
                except FileNotFoundError:
                    pass


class ImageLogger(Callback):
    def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True,
                 rescale=True, disabled=False, log_on_batch_idx=False, log_first_step=False,
                 log_images_kwargs=None):
        super().__init__()
        self.rescale = rescale
        self.batch_freq = batch_frequency
        self.max_images = max_images
        self.logger_log_images = {
            pl.loggers.TestTubeLogger: self._testtube,
        }
        self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)]
        if not increase_log_steps:
            self.log_steps = [self.batch_freq]
        self.clamp = clamp
        self.disabled = disabled
        self.log_on_batch_idx = log_on_batch_idx
        self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
        self.log_first_step = log_first_step

    @rank_zero_only
    def _testtube(self, pl_module, images, batch_idx, split):
        for k in images:
            grid = torchvision.utils.make_grid(images[k])
            grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w

            tag = f"{split}/{k}"
            pl_module.logger.experiment.add_image(
                tag, grid,
                global_step=pl_module.global_step)

    @rank_zero_only
    def log_local(self, save_dir, split, images,
                  global_step, current_epoch, batch_idx):
        root = os.path.join(save_dir, "images", split)
        for k in images:
            grid = torchvision.utils.make_grid(images[k], nrow=4)
            if self.rescale:
                grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w
            grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
            grid = grid.numpy()
            grid = (grid * 255).astype(np.uint8)
            filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(
                k,
                global_step,
                current_epoch,
                batch_idx)
            path = os.path.join(root, filename)
            os.makedirs(os.path.split(path)[0], exist_ok=True)
            Image.fromarray(grid).save(path)

    def log_img(self, pl_module, batch, batch_idx, split="train"):
        check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step
        if (self.check_frequency(check_idx) and  # batch_idx % self.batch_freq == 0
                hasattr(pl_module, "log_images") and
                callable(pl_module.log_images) and
                self.max_images > 0):
            logger = type(pl_module.logger)

            is_train = pl_module.training
            if is_train:
                pl_module.eval()

            with torch.no_grad():
                images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)

            for k in images:
                N = min(images[k].shape[0], self.max_images)
                images[k] = images[k][:N]
                if isinstance(images[k], torch.Tensor):
                    images[k] = images[k].detach().cpu()
                    if self.clamp:
                        images[k] = torch.clamp(images[k], -1., 1.)

            self.log_local(pl_module.logger.save_dir, split, images,
                           pl_module.global_step, pl_module.current_epoch, batch_idx)

            logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None)
            logger_log_images(pl_module, images, pl_module.global_step, split)

            if is_train:
                pl_module.train()

    def check_frequency(self, check_idx):
        if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and (
                check_idx > 0 or self.log_first_step):
            try:
                self.log_steps.pop(0)
            except IndexError as e:
                print(e)
                pass
            return True
        return False

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        if not self.disabled and (pl_module.global_step > 0 or self.log_first_step):
            self.log_img(pl_module, batch, batch_idx, split="train")

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        if not self.disabled and pl_module.global_step > 0:
            self.log_img(pl_module, batch, batch_idx, split="val")
        if hasattr(pl_module, 'calibrate_grad_norm'):
            if (pl_module.calibrate_grad_norm and batch_idx % 25 == 0) and batch_idx > 0:
                self.log_gradients(trainer, pl_module, batch_idx=batch_idx)


class CUDACallback(Callback):
    # see https://github.com/SeanNaren/minGPT/blob/master/mingpt/callback.py
    def on_train_epoch_start(self, trainer, pl_module):
        # Reset the memory use counter
        torch.cuda.reset_peak_memory_stats(trainer.root_gpu)
        torch.cuda.synchronize(trainer.root_gpu)
        self.start_time = time.time()

    def on_train_epoch_end(self, trainer, pl_module, outputs):
        torch.cuda.synchronize(trainer.root_gpu)
        max_memory = torch.cuda.max_memory_allocated(trainer.root_gpu) / 2 ** 20
        epoch_time = time.time() - self.start_time

        try:
            max_memory = trainer.training_type_plugin.reduce(max_memory)
            epoch_time = trainer.training_type_plugin.reduce(epoch_time)

            rank_zero_info(f"Average Epoch time: {epoch_time:.2f} seconds")
            rank_zero_info(f"Average Peak memory {max_memory:.2f}MiB")
        except AttributeError:
            pass


if __name__ == "__main__":
    # custom parser to specify config files, train, test and debug mode,
    # postfix, resume.
    # `--key value` arguments are interpreted as arguments to the trainer.
    # `nested.key=value` arguments are interpreted as config parameters.
    # configs are merged from left-to-right followed by command line parameters.

    # model:
    #   base_learning_rate: float
    #   target: path to lightning module
    #   params:
    #       key: value
    # data:
    #   target: main.DataModuleFromConfig
    #   params:
    #      batch_size: int
    #      wrap: bool
    #      train:
    #          target: path to train dataset
    #          params:
    #              key: value
    #      validation:
    #          target: path to validation dataset
    #          params:
    #              key: value
    #      test:
    #          target: path to test dataset
    #          params:
    #              key: value
    # lightning: (optional, has sane defaults and can be specified on cmdline)
    #   trainer:
    #       additional arguments to trainer
    #   logger:
    #       logger to instantiate
    #   modelcheckpoint:
    #       modelcheckpoint to instantiate
    #   callbacks:
    #       callback1:
    #           target: importpath
    #           params:
    #               key: value

    now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")

    # add cwd for convenience and to make classes in this file available when
    # running as `python main.py`
    # (in particular `main.DataModuleFromConfig`)
    sys.path.append(os.getcwd())

    parser = get_parser()
    parser = Trainer.add_argparse_args(parser)

    opt, unknown = parser.parse_known_args()
    if opt.name and opt.resume:
        raise ValueError(
            "-n/--name and -r/--resume cannot be specified both."
            "If you want to resume training in a new log folder, "
            "use -n/--name in combination with --resume_from_checkpoint"
        )
    if opt.resume:
        if not os.path.exists(opt.resume):
            raise ValueError("Cannot find {}".format(opt.resume))
        if os.path.isfile(opt.resume):
            paths = opt.resume.split("/")
            # idx = len(paths)-paths[::-1].index("logs")+1
            # logdir = "/".join(paths[:idx])
            logdir = "/".join(paths[:-2])
            ckpt = opt.resume
        else:
            assert os.path.isdir(opt.resume), opt.resume
            logdir = opt.resume.rstrip("/")
            ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")

        opt.resume_from_checkpoint = ckpt
        base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml")))
        opt.base = base_configs + opt.base
        _tmp = logdir.split("/")
        nowname = _tmp[-1]
    else:
        if opt.name:
            name = "_" + opt.name
        elif opt.base:
            cfg_fname = os.path.split(opt.base[0])[-1]
            cfg_name = os.path.splitext(cfg_fname)[0]
            name = "_" + cfg_name
        else:
            name = ""
        nowname = now + name + opt.postfix
        logdir = os.path.join(opt.logdir, nowname)

    ckptdir = os.path.join(logdir, "checkpoints")
    cfgdir = os.path.join(logdir, "configs")
    seed_everything(opt.seed)

    try:
        # init and save configs
        configs = [OmegaConf.load(cfg) for cfg in opt.base]
        cli = OmegaConf.from_dotlist(unknown)
        config = OmegaConf.merge(*configs, cli)
        lightning_config = config.pop("lightning", OmegaConf.create())
        # merge trainer cli with config
        trainer_config = lightning_config.get("trainer", OmegaConf.create())
        # default to ddp
        trainer_config["accelerator"] = "ddp"
        for k in nondefault_trainer_args(opt):
            trainer_config[k] = getattr(opt, k)
        if not "gpus" in trainer_config:
            del trainer_config["accelerator"]
            cpu = True
        else:
            gpuinfo = trainer_config["gpus"]
            print(f"Running on GPUs {gpuinfo}")
            cpu = False
        trainer_opt = argparse.Namespace(**trainer_config)
        lightning_config.trainer = trainer_config

        # model
        model = instantiate_from_config(config.model)

        # trainer and callbacks
        trainer_kwargs = dict()

        # default logger configs
        default_logger_cfgs = {
            "wandb": {
                "target": "pytorch_lightning.loggers.WandbLogger",
                "params": {
                    "name": nowname,
                    "save_dir": logdir,
                    "offline": opt.debug,
                    "id": nowname,
                }
            },
            "testtube": {
                "target": "pytorch_lightning.loggers.TestTubeLogger",
                "params": {
                    "name": "testtube",
                    "save_dir": logdir,
                }
            },
        }
        default_logger_cfg = default_logger_cfgs["testtube"]
        if "logger" in lightning_config:
            logger_cfg = lightning_config.logger
        else:
            logger_cfg = OmegaConf.create()
        logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg)
        trainer_kwargs["logger"] = instantiate_from_config(logger_cfg)

        # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to
        # specify which metric is used to determine best models
        default_modelckpt_cfg = {
            "target": "pytorch_lightning.callbacks.ModelCheckpoint",
            "params": {
                "dirpath": ckptdir,
                "filename": "{epoch:06}",
                "verbose": True,
                "save_last": True,
            }
        }
        if hasattr(model, "monitor"):
            print(f"Monitoring {model.monitor} as checkpoint metric.")
            default_modelckpt_cfg["params"]["monitor"] = model.monitor
            default_modelckpt_cfg["params"]["save_top_k"] = 3

        if "modelcheckpoint" in lightning_config:
            modelckpt_cfg = lightning_config.modelcheckpoint
        else:
            modelckpt_cfg = OmegaConf.create()
        modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg)
        print(f"Merged modelckpt-cfg: \n{modelckpt_cfg}")
        if version.parse(pl.__version__) < version.parse('1.4.0'):
            trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg)

        # add callback which sets up log directory
        default_callbacks_cfg = {
            "setup_callback": {
                "target": "main.SetupCallback",
                "params": {
                    "resume": opt.resume,
                    "now": now,
                    "logdir": logdir,
                    "ckptdir": ckptdir,
                    "cfgdir": cfgdir,
                    "config": config,
                    "lightning_config": lightning_config,
                }
            },
            "image_logger": {
                "target": "main.ImageLogger",
                "params": {
                    "batch_frequency": 750,
                    "max_images": 4,
                    "clamp": True
                }
            },
            "learning_rate_logger": {
                "target": "main.LearningRateMonitor",
                "params": {
                    "logging_interval": "step",
                    # "log_momentum": True
                }
            },
            "cuda_callback": {
                "target": "main.CUDACallback"
            },
        }
        if version.parse(pl.__version__) >= version.parse('1.4.0'):
            default_callbacks_cfg.update({'checkpoint_callback': modelckpt_cfg})

        if "callbacks" in lightning_config:
            callbacks_cfg = lightning_config.callbacks
        else:
            callbacks_cfg = OmegaConf.create()

        if 'metrics_over_trainsteps_checkpoint' in callbacks_cfg:
            print(
                'Caution: Saving checkpoints every n train steps without deleting. This might require some free space.')
            default_metrics_over_trainsteps_ckpt_dict = {
                'metrics_over_trainsteps_checkpoint':
                    {"target": 'pytorch_lightning.callbacks.ModelCheckpoint',
                     'params': {
                         "dirpath": os.path.join(ckptdir, 'trainstep_checkpoints'),
                         "filename": "{epoch:06}-{step:09}",
                         "verbose": True,
                         'save_top_k': -1,
                         'every_n_train_steps': 10000,
                         'save_weights_only': True
                     }
                     }
            }
            default_callbacks_cfg.update(default_metrics_over_trainsteps_ckpt_dict)

        callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)
        if 'ignore_keys_callback' in callbacks_cfg and hasattr(trainer_opt, 'resume_from_checkpoint'):
            callbacks_cfg.ignore_keys_callback.params['ckpt_path'] = trainer_opt.resume_from_checkpoint
        elif 'ignore_keys_callback' in callbacks_cfg:
            del callbacks_cfg['ignore_keys_callback']

        trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg]

        trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
        trainer.logdir = logdir  ###

        # data
        data = instantiate_from_config(config.data)
        # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
        # calling these ourselves should not be necessary but it is.
        # lightning still takes care of proper multiprocessing though
        data.prepare_data()
        data.setup()
        print("#### Data #####")
        for k in data.datasets:
            print(f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}")

        # configure learning rate
        bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate
        if not cpu:
            ngpu = len(lightning_config.trainer.gpus.strip(",").split(','))
        else:
            ngpu = 1
        if 'accumulate_grad_batches' in lightning_config.trainer:
            accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches
        else:
            accumulate_grad_batches = 1
        print(f"accumulate_grad_batches = {accumulate_grad_batches}")
        lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches
        if opt.scale_lr:
            model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr
            print(
                "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format(
                    model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr))
        else:
            model.learning_rate = base_lr
            print("++++ NOT USING LR SCALING ++++")
            print(f"Setting learning rate to {model.learning_rate:.2e}")


        # allow checkpointing via USR1
        def melk(*args, **kwargs):
            # run all checkpoint hooks
            if trainer.global_rank == 0:
                print("Summoning checkpoint.")
                ckpt_path = os.path.join(ckptdir, "last.ckpt")
                trainer.save_checkpoint(ckpt_path)


        def divein(*args, **kwargs):
            if trainer.global_rank == 0:
                import pudb;
                pudb.set_trace()


        import signal

        signal.signal(signal.SIGUSR1, melk)
        signal.signal(signal.SIGUSR2, divein)

        # run
        if opt.train:
            try:
                trainer.fit(model, data)
            except Exception:
                melk()
                raise
        if not opt.no_test and not trainer.interrupted:
            trainer.test(model, data)
    except Exception:
        if opt.debug and trainer.global_rank == 0:
            try:
                import pudb as debugger
            except ImportError:
                import pdb as debugger
            debugger.post_mortem()
        raise
    finally:
        # move newly created debug project to debug_runs
        if opt.debug and not opt.resume and trainer.global_rank == 0:
            dst, name = os.path.split(logdir)
            dst = os.path.join(dst, "debug_runs", name)
            os.makedirs(os.path.split(dst)[0], exist_ok=True)
            os.rename(logdir, dst)
        try:
            if trainer.global_rank == 0:
                print(trainer.profiler.summary())
        except:
            pass
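main.py resolves its configuration by loading the --base YAML files with OmegaConf, merging them left to right, and then layering on key=value overrides parsed from the leftover command-line arguments via OmegaConf.from_dotlist. Below is a small self-contained sketch of that merge order; the config keys and values are made up for illustration and the in-memory dicts stand in for the YAML files that main.py would load from disk.

from omegaconf import OmegaConf

# Stand-ins for two --base config files, loaded left to right (values are illustrative).
base = OmegaConf.create({"model": {"base_learning_rate": 4.5e-6, "params": {"embed_dim": 4}}})
override = OmegaConf.create({"model": {"params": {"embed_dim": 16}}})

# Unrecognized CLI arguments such as `model.base_learning_rate=0.0001` become a dotlist.
cli = OmegaConf.from_dotlist(["model.base_learning_rate=0.0001"])

# Later sources win, mirroring `OmegaConf.merge(*configs, cli)` in main.py.
config = OmegaConf.merge(base, override, cli)
print(OmegaConf.to_yaml(config))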
stable_diffusion/models/first_stage_models/kl-f16/config.yaml
ADDED
@@ -0,0 +1,44 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.AutoencoderKL
  params:
    monitor: val/rec_loss
    embed_dim: 16
    lossconfig:
      target: ldm.modules.losses.LPIPSWithDiscriminator
      params:
        disc_start: 50001
        kl_weight: 1.0e-06
        disc_weight: 0.5
    ddconfig:
      double_z: true
      z_channels: 16
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 1
      - 2
      - 2
      - 4
      num_res_blocks: 2
      attn_resolutions:
      - 16
      dropout: 0.0
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 6
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        size: 384
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        size: 384
        crop_size: 256
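Each of these first-stage configs is consumed through instantiate_from_config, which imports the class named by target and passes params as constructor keyword arguments, just as main.py does for config.model and config.data. A rough sketch of that mechanism follows; it is a simplification of ldm.util.instantiate_from_config, assumes the ldm package and its dependencies are importable, and assumes the working directory is the stable_diffusion folder so the relative config path resolves.

import importlib
from omegaconf import OmegaConf

def instantiate_from_config_sketch(config):
    # "target" is a dotted import path, "params" the constructor kwargs.
    module, cls = config["target"].rsplit(".", 1)
    return getattr(importlib.import_module(module), cls)(**config.get("params", {}))

cfg = OmegaConf.load("models/first_stage_models/kl-f16/config.yaml")
autoencoder = instantiate_from_config_sketch(cfg.model)  # builds AutoencoderKL(embed_dim=16, ...)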
stable_diffusion/models/first_stage_models/kl-f32/config.yaml
ADDED
@@ -0,0 +1,46 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.AutoencoderKL
  params:
    monitor: val/rec_loss
    embed_dim: 64
    lossconfig:
      target: ldm.modules.losses.LPIPSWithDiscriminator
      params:
        disc_start: 50001
        kl_weight: 1.0e-06
        disc_weight: 0.5
    ddconfig:
      double_z: true
      z_channels: 64
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 1
      - 2
      - 2
      - 4
      - 4
      num_res_blocks: 2
      attn_resolutions:
      - 16
      - 8
      dropout: 0.0
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 6
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        size: 384
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        size: 384
        crop_size: 256
stable_diffusion/models/first_stage_models/kl-f4/config.yaml
ADDED
@@ -0,0 +1,41 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.AutoencoderKL
  params:
    monitor: val/rec_loss
    embed_dim: 3
    lossconfig:
      target: ldm.modules.losses.LPIPSWithDiscriminator
      params:
        disc_start: 50001
        kl_weight: 1.0e-06
        disc_weight: 0.5
    ddconfig:
      double_z: true
      z_channels: 3
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 2
      - 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 10
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        size: 384
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        size: 384
        crop_size: 256
stable_diffusion/models/first_stage_models/kl-f8/config.yaml
ADDED
@@ -0,0 +1,42 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.AutoencoderKL
  params:
    monitor: val/rec_loss
    embed_dim: 4
    lossconfig:
      target: ldm.modules.losses.LPIPSWithDiscriminator
      params:
        disc_start: 50001
        kl_weight: 1.0e-06
        disc_weight: 0.5
    ddconfig:
      double_z: true
      z_channels: 4
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 2
      - 4
      - 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 4
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        size: 384
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        size: 384
        crop_size: 256
stable_diffusion/models/first_stage_models/vq-f16/config.yaml
ADDED
@@ -0,0 +1,49 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.VQModel
  params:
    embed_dim: 8
    n_embed: 16384
    ddconfig:
      double_z: false
      z_channels: 8
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 1
      - 2
      - 2
      - 4
      num_res_blocks: 2
      attn_resolutions:
      - 16
      dropout: 0.0
    lossconfig:
      target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
      params:
        disc_conditional: false
        disc_in_channels: 3
        disc_start: 250001
        disc_weight: 0.75
        disc_num_layers: 2
        codebook_weight: 1.0

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 14
    num_workers: 20
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        size: 384
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        size: 384
        crop_size: 256
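The f-number in each model directory (kl-f4 through kl-f32, vq-f4, vq-f16) matches the spatial downsampling factor implied by ddconfig: in the ldm autoencoder the encoder halves the resolution once per ch_mult level after the first, so the factor is 2 ** (len(ch_mult) - 1). A quick check of that relationship against the configs in this folder; the mapping below is simply transcribed from the files above.

# Downsampling factor implied by ddconfig.ch_mult for each first-stage config.
ch_mults = {
    "kl-f4": [1, 2, 4],
    "kl-f8": [1, 2, 4, 4],
    "kl-f16": [1, 1, 2, 2, 4],
    "kl-f32": [1, 1, 2, 2, 4, 4],
    "vq-f4": [1, 2, 4],
    "vq-f16": [1, 1, 2, 2, 4],
}
for name, mult in ch_mults.items():
    factor = 2 ** (len(mult) - 1)  # one downsampling step per level after the first
    print(f"{name}: 256 -> {256 // factor} latent resolution (f{factor})")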
stable_diffusion/models/first_stage_models/vq-f4-noattn/config.yaml
ADDED
@@ -0,0 +1,46 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.VQModel
  params:
    embed_dim: 3
    n_embed: 8192
    monitor: val/rec_loss

    ddconfig:
      attn_type: none
      double_z: false
      z_channels: 3
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 2
      - 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
    lossconfig:
      target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
      params:
        disc_conditional: false
        disc_in_channels: 3
        disc_start: 11
        disc_weight: 0.75
        codebook_weight: 1.0

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 8
    num_workers: 12
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        crop_size: 256
stable_diffusion/models/first_stage_models/vq-f4/config.yaml
ADDED
@@ -0,0 +1,45 @@
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.VQModel
  params:
    embed_dim: 3
    n_embed: 8192
    monitor: val/rec_loss

    ddconfig:
      double_z: false
      z_channels: 3
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
      - 1
      - 2
      - 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
    lossconfig:
      target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
      params:
        disc_conditional: false
        disc_in_channels: 3
        disc_start: 0
        disc_weight: 0.75
        codebook_weight: 1.0

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 8
    num_workers: 16
    wrap: true
    train:
      target: ldm.data.openimages.FullOpenImagesTrain
      params:
        crop_size: 256
    validation:
      target: ldm.data.openimages.FullOpenImagesValidation
      params:
        crop_size: 256