Zednation committed on
Commit 6852b98 · verified · 1 Parent(s): ac29b20

Add files using upload-large-folder tool

Files changed (50)
  1. .cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a +0 -0
  2. .cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a.body +0 -0
  3. .cache/pip/http-v2/3/5/a/0/6/35a0615e0411edd59492c90e8f5cd0a1a0554d6f7d1dec5be2b30560 +0 -0
  4. .cache/pip/http-v2/3/5/a/0/6/35a0615e0411edd59492c90e8f5cd0a1a0554d6f7d1dec5be2b30560.body +97 -0
  5. .cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b +0 -0
  6. .cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b.body +0 -0
  7. .cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d +0 -0
  8. .cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d.body +0 -0
  9. .cache/pip/http-v2/3/8/9/2/7/389270eb64d5d655add63491064e7e833611004a96c2fb8b19caa5fb +0 -0
  10. .cache/pip/http-v2/3/8/9/2/7/389270eb64d5d655add63491064e7e833611004a96c2fb8b19caa5fb.body +321 -0
  11. .cache/pip/http-v2/3/9/5/b/c/395bc73efd302f7d742a8b69a0d15de0af47c6a2b8a5accf08c79c9b +0 -0
  12. .cache/pip/http-v2/3/9/5/b/c/395bc73efd302f7d742a8b69a0d15de0af47c6a2b8a5accf08c79c9b.body +0 -0
  13. .cache/pip/http-v2/3/a/3/0/9/3a3094a7a3e575e1209e7823121d830fa26ecec395f096840b934033 +0 -0
  14. .cache/pip/http-v2/3/a/3/0/9/3a3094a7a3e575e1209e7823121d830fa26ecec395f096840b934033.body +574 -0
  15. .cache/pip/http-v2/5/9/1/a/0/591a0a7ea47d81cffb332bf5e1460e560ce743822558c6f345314d4b +0 -0
  16. .cache/pip/http-v2/5/9/1/a/0/591a0a7ea47d81cffb332bf5e1460e560ce743822558c6f345314d4b.body +0 -0
  17. .cache/pip/http-v2/5/d/0/b/4/5d0b459b3c4f2993a6258ed848f61436c9f7a018d2587028572840f2 +0 -0
  18. .cache/pip/http-v2/6/1/6/5/a/6165a4393aa66a360946f71adc9147b3870cebf86b7878337a7535a9 +0 -0
  19. .cache/pip/http-v2/6/4/c/f/c/64cfc03e83f9fad4049b1d2a1d785c9273270a4ab9788b538f5054e3 +0 -0
  20. .cache/pip/http-v2/6/5/1/e/5/651e58859e8db8c99b9e7068d03984cfd4577518ff0e021c717afbf4 +0 -0
  21. .cache/pip/http-v2/6/5/1/e/5/651e58859e8db8c99b9e7068d03984cfd4577518ff0e021c717afbf4.body +78 -0
  22. .cache/pip/http-v2/6/6/e/c/7/66ec76a7b6ed4081044f5c7821af293b63c17bc2ac523ff93d5ca7d5 +0 -0
  23. .cache/pip/http-v2/6/8/0/d/4/680d4dd80dc6a3d2df9b9478dfcc8e81e0e4f130e154a3268b98b877 +0 -0
  24. .cache/pip/http-v2/6/b/5/3/a/6b53a9dd0e4fce887cc28c1a921aa1befe8c1a82e6c213d2542d2acb +0 -0
  25. .cache/pip/http-v2/6/b/5/3/a/6b53a9dd0e4fce887cc28c1a921aa1befe8c1a82e6c213d2542d2acb.body +0 -0
  26. .cache/pip/http-v2/6/b/8/1/e/6b81e7b491d69713c085c9f59d6c9162e9c07ca91d4f2bb5b3cd4b8e +0 -0
  27. .cache/pip/http-v2/6/b/8/1/e/6b81e7b491d69713c085c9f59d6c9162e9c07ca91d4f2bb5b3cd4b8e.body +386 -0
  28. .cache/pip/http-v2/6/c/6/e/e/6c6eeaf6757edbde690577822daacaba826c2b12ce67b57b33e8021d +0 -0
  29. .cache/pip/http-v2/6/c/6/e/e/6c6eeaf6757edbde690577822daacaba826c2b12ce67b57b33e8021d.body +0 -0
  30. .cache/pip/http-v2/6/f/4/2/0/6f4201922ae9660b891766d0cd792260a5663fc66339ed1036f3be9b +0 -0
  31. .cache/pip/http-v2/7/2/8/a/2/728a2f33f382f4dacf08f6df77aad6f3d889f819ba4fa3efad5ec7e4 +0 -0
  32. .cache/pip/http-v2/7/7/3/7/4/77374f6555d766d1d452fe4918fb303c49f49a5a37a0986ea4f1b212 +0 -0
  33. .cache/pip/http-v2/7/7/3/7/4/77374f6555d766d1d452fe4918fb303c49f49a5a37a0986ea4f1b212.body +139 -0
  34. .cache/pip/http-v2/7/7/3/b/e/773be4e62f2a7f9be9d2b777b9be56e14e2b6c9666994e8793db52fd +0 -0
  35. .cache/pip/http-v2/7/7/7/e/1/777e1a90a003ce34ae211dad270679555faa9422154a8c9601ab0e1f +0 -0
  36. .cache/pip/http-v2/7/7/7/e/1/777e1a90a003ce34ae211dad270679555faa9422154a8c9601ab0e1f.body +0 -0
  37. .cache/pip/http-v2/7/9/2/1/a/7921ac3318a5cdb592026cc26a94f7a2c1e1f7d3a1dc1e3857fd49f1 +0 -0
  38. .cache/pip/http-v2/7/9/2/1/a/7921ac3318a5cdb592026cc26a94f7a2c1e1f7d3a1dc1e3857fd49f1.body +0 -0
  39. .cache/pip/http-v2/7/b/3/0/7/7b3075adb708114992fb27c6511ef6dfacffdcb852b8e8d037a10c4b +0 -0
  40. .cache/pip/http-v2/7/c/6/1/8/7c6183dd169526815070ce1b173d3f738b22aa5bc9caaa83eec98534 +0 -0
  41. .cache/pip/http-v2/d/0/1/5/f/d015fbc6aca5c05d98c5a5fd1b5a5da789d8a3e8323acf92db497bce +0 -0
  42. .cache/pip/http-v2/d/0/1/b/7/d01b75bf0e331a9e22160ebda15b330e26789777201136ee08c89b17 +0 -0
  43. .cache/pip/http-v2/d/0/1/b/7/d01b75bf0e331a9e22160ebda15b330e26789777201136ee08c89b17.body +0 -0
  44. .cache/pip/http-v2/d/3/2/9/d/d329d269473dcb26bc2fcf82e354619e8463ba8855d5a3b77637c124 +0 -0
  45. .cache/pip/http-v2/d/b/c/9/6/dbc96dffe61b94bcd3688bd8959baaf90ecc4c26bd760252ca8c1de1 +0 -0
  46. .cache/pip/http-v2/d/b/c/9/6/dbc96dffe61b94bcd3688bd8959baaf90ecc4c26bd760252ca8c1de1.body +87 -0
  47. .cache/pip/http-v2/d/d/2/0/6/dd206ee8d449d4bec512242e8f8ebadb5a808c682766018f3921f62b.body +51 -0
  48. .cache/uv/sdists-v9/.gitignore +0 -0
  49. .cache/uv/simple-v18/pypi/filelock.rkyv +0 -0
  50. .cache/uv/simple-v18/pypi/packaging.rkyv +0 -0
.cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a ADDED
Binary file (1.79 kB).

.cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a.body ADDED
Binary file (20.4 kB).

.cache/pip/http-v2/3/5/a/0/6/35a0615e0411edd59492c90e8f5cd0a1a0554d6f7d1dec5be2b30560 ADDED
Binary file (1.17 kB).
 
.cache/pip/http-v2/3/5/a/0/6/35a0615e0411edd59492c90e8f5cd0a1a0554d6f7d1dec5be2b30560.body ADDED
@@ -0,0 +1,97 @@
+ Metadata-Version: 2.4
+ Name: kiwisolver
+ Version: 1.5.0
+ Summary: A fast implementation of the Cassowary constraint solver
+ Author-email: The Nucleic Development Team <sccolbert@gmail.com>
+ Maintainer-email: "Matthieu C. Dartiailh" <m.dartiailh@gmail.com>
+ License: =========================
+ The Kiwi licensing terms
+ =========================
+ Kiwi is licensed under the terms of the Modified BSD License (also known as
+ New or Revised BSD), as follows:
+
+ Copyright (c) 2013-2026, Nucleic Development Team
+
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+ Redistributions in binary form must reproduce the above copyright notice, this
+ list of conditions and the following disclaimer in the documentation and/or
+ other materials provided with the distribution.
+
+ Neither the name of the Nucleic Development Team nor the names of its
+ contributors may be used to endorse or promote products derived from this
+ software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ About Kiwi
+ ----------
+ Chris Colbert began the Kiwi project in December 2013 in an effort to
+ create a blisteringly fast UI constraint solver. Chris is still the
+ project lead.
+
+ The Nucleic Development Team is the set of all contributors to the Nucleic
+ project and its subprojects.
+
+ The core team that coordinates development on GitHub can be found here:
+ http://github.com/nucleic. The current team consists of:
+
+ * Chris Colbert
+
+ Our Copyright Policy
+ --------------------
+ Nucleic uses a shared copyright model. Each contributor maintains copyright
+ over their contributions to Nucleic. But, it is important to note that these
+ contributions are typically only changes to the repositories. Thus, the Nucleic
+ source code, in its entirety is not the copyright of any single person or
+ institution. Instead, it is the collective copyright of the entire Nucleic
+ Development Team. If individual contributors want to maintain a record of what
+ changes/contributions they have specific copyright on, they should indicate
+ their copyright in the commit message of the change, when they commit the
+ change to one of the Nucleic repositories.
+
+ With this in mind, the following banner should be used in any source code file
+ to indicate the copyright and license terms:
+
+ #------------------------------------------------------------------------------
+ # Copyright (c) 2013-2026, Nucleic Development Team.
+ #
+ # Distributed under the terms of the Modified BSD License.
+ #
+ # The full license is in the file LICENSE, distributed with this software.
+ #------------------------------------------------------------------------------
+
+ Project-URL: homepage, https://github.com/nucleic/kiwi
+ Project-URL: documentation, https://kiwisolver.readthedocs.io/en/latest/
+ Project-URL: repository, https://github.com/nucleic/kiwi
+ Project-URL: changelog, https://github.com/nucleic/kiwi/blob/main/releasenotes.rst
+ Classifier: License :: OSI Approved :: BSD License
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Programming Language :: Python :: 3.14
+ Classifier: Programming Language :: Python :: Implementation :: CPython
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
+ Classifier: Programming Language :: Python :: Implementation :: GraalPy
+ Requires-Python: >=3.10
+ Description-Content-Type: text/x-rst
+ License-File: LICENSE
+ Dynamic: license-file
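The metadata above describes kiwisolver as a Cassowary constraint solver. For context, a minimal sketch of its public `Solver`/`Variable` API; the variables and constraints here are purely illustrative, not part of the cached file:

```python
from kiwisolver import Solver, Variable

# Two unknowns the solver will assign values to
x = Variable('x')
y = Variable('y')

solver = Solver()
solver.addConstraint(x + y == 100)          # required constraint
solver.addConstraint((x >= 20) | 'strong')  # weaker constraint at 'strong' strength

solver.updateVariables()                    # solve and write results back
print(x.value(), y.value())
```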
.cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b ADDED
Binary file (1.15 kB).

.cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b.body ADDED
Binary file (78.4 kB).

.cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d ADDED
Binary file (1.77 kB).

.cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d.body ADDED
Binary file (42.1 kB).

.cache/pip/http-v2/3/8/9/2/7/389270eb64d5d655add63491064e7e833611004a96c2fb8b19caa5fb ADDED
Binary file (1.2 kB).
 
.cache/pip/http-v2/3/8/9/2/7/389270eb64d5d655add63491064e7e833611004a96c2fb8b19caa5fb.body ADDED
@@ -0,0 +1,321 @@
+ Metadata-Version: 2.4
+ Name: colorlog
+ Version: 6.10.1
+ Summary: Add colours to the output of Python's logging module.
+ Home-page: https://github.com/borntyping/python-colorlog
+ Author: Sam Clements
+ Author-email: sam@borntyping.co.uk
+ License: MIT License
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.6
+ Classifier: Programming Language :: Python :: 3.7
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Terminals
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.6
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: colorama; sys_platform == "win32"
+ Provides-Extra: development
+ Requires-Dist: black; extra == "development"
+ Requires-Dist: flake8; extra == "development"
+ Requires-Dist: mypy; extra == "development"
+ Requires-Dist: pytest; extra == "development"
+ Requires-Dist: types-colorama; extra == "development"
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: license-file
+ Dynamic: provides-extra
+ Dynamic: requires-python
+ Dynamic: summary
+
+ # Log formatting with colors!
+
+ [![](https://img.shields.io/pypi/v/colorlog.svg)](https://pypi.org/project/colorlog/)
+ [![](https://img.shields.io/pypi/l/colorlog.svg)](https://pypi.org/project/colorlog/)
+
+ Add colours to the output of Python's `logging` module.
+
+ * [Source on GitHub](https://github.com/borntyping/python-colorlog)
+ * [Packages on PyPI](https://pypi.org/pypi/colorlog/)
+
+ ## Status
+
+ colorlog currently requires Python 3.6 or higher. Older versions (below 5.x.x)
+ support Python 2.6 and above.
+
+ * colorlog 6.x requires Python 3.6 or higher.
+ * colorlog 5.x is an interim version that will warn Python 2 users to downgrade.
+ * colorlog 4.x is the final version supporting Python 2.
+
+ [colorama] is included as a required dependency and initialised when using
+ colorlog on Windows.
+
+ This library is over a decade old and supported a wide set of Python versions
+ for most of its life, which has made it a difficult library to add new features
+ to. colorlog 6 may break backwards compatibility so that newer features
+ can be added more easily, but may still not accept all changes or feature
+ requests. colorlog 4 might accept essential bugfixes but should not be
+ considered actively maintained and will not accept any major changes or new
+ features.
+
+ ## Installation
+
+ Install from PyPI with:
+
+ ```bash
+ pip install colorlog
+ ```
+
+ Several Linux distributions provide official packages ([Debian], [Arch], [Fedora],
+ [Gentoo], [OpenSuse] and [Ubuntu]), and others have user provided packages
+ ([BSD ports], [Conda]).
+
+ ## Usage
+
+ ```python
+ import colorlog
+
+ handler = colorlog.StreamHandler()
+ handler.setFormatter(colorlog.ColoredFormatter(
+     '%(log_color)s%(levelname)s:%(name)s:%(message)s'))
+
+ logger = colorlog.getLogger('example')
+ logger.addHandler(handler)
+ ```
+
+ The `ColoredFormatter` class takes several arguments:
+
+ - `format`: The format string used to output the message (required).
+ - `datefmt`: An optional date format passed to the base class. See [`logging.Formatter`][Formatter].
+ - `reset`: Implicitly adds a color reset code to the message output, unless the output already ends with one. Defaults to `True`.
+ - `log_colors`: A mapping of record level names to color names. The defaults can be found in `colorlog.default_log_colors`, or the below example.
+ - `secondary_log_colors`: A mapping of names to `log_colors` style mappings, defining additional colors that can be used in format strings. See below for an example.
+ - `style`: Available on Python 3.2 and above. See [`logging.Formatter`][Formatter].
+
+ Color escape codes can be selected based on the log record's level, by adding
+ parameters to the format string:
+
+ - `log_color`: Return the color associated with the record's level.
+ - `<name>_log_color`: Return another color based on the record's level if the formatter has secondary colors configured (see `secondary_log_colors` below).
+
+ Multiple escape codes can be used at once by joining them with commas when
+ configuring the color for a log level (but can't be used directly in the format
+ string). For example, `black,bg_white` would use the escape codes for black
+ text on a white background.
+
+ The following escape codes are made available for use in the format string:
+
+ - `{color}`, `fg_{color}`, `bg_{color}`: Foreground and background colors.
+ - `bold`, `bold_{color}`, `fg_bold_{color}`, `bg_bold_{color}`: Bold/bright colors.
+ - `thin`, `thin_{color}`, `fg_thin_{color}`: Thin colors (terminal dependent).
+ - `reset`: Clear all formatting (both foreground and background colors).
+
+ The available color names are:
+
+ - `black`
+ - `red`
+ - `green`
+ - `yellow`
+ - `blue`
+ - `purple`
+ - `cyan`
+ - `white`
+
+ You can also use "bright" colors. These aren't standard ANSI codes, and
+ support for these varies wildly across different terminals.
+
+ - `light_black`
+ - `light_red`
+ - `light_green`
+ - `light_yellow`
+ - `light_blue`
+ - `light_purple`
+ - `light_cyan`
+ - `light_white`
+
+ ## Examples
+
+ ![Example output](docs/example.png)
+
+ The following code creates a `ColoredFormatter` for use in a logging setup,
+ using the default values for each argument.
+
+ ```python
+ from colorlog import ColoredFormatter
+
+ formatter = ColoredFormatter(
+     "%(log_color)s%(levelname)-8s%(reset)s %(blue)s%(message)s",
+     datefmt=None,
+     reset=True,
+     log_colors={
+         'DEBUG': 'cyan',
+         'INFO': 'green',
+         'WARNING': 'yellow',
+         'ERROR': 'red',
+         'CRITICAL': 'red,bg_white',
+     },
+     secondary_log_colors={},
+     style='%'
+ )
+ ```
+
+ ### Using `secondary_log_colors`
+
+ Secondary log colors are a way to have more than one color that is selected
+ based on the log level. Each key in `secondary_log_colors` adds an attribute
+ that can be used in format strings (`message` becomes `message_log_color`), and
+ has a corresponding value that is identical in format to the `log_colors`
+ argument.
+
+ The following example highlights the level name using the default log colors,
+ and highlights the message in red for `error` and `critical` level log messages.
+
+ ```python
+ from colorlog import ColoredFormatter
+
+ formatter = ColoredFormatter(
+     "%(log_color)s%(levelname)-8s%(reset)s %(message_log_color)s%(message)s",
+     secondary_log_colors={
+         'message': {
+             'ERROR': 'red',
+             'CRITICAL': 'red'
+         }
+     }
+ )
+ ```
+
+ ### With [`dictConfig`][dictConfig]
+
+ ```python
+ logging.config.dictConfig({
+     'formatters': {
+         'colored': {
+             '()': 'colorlog.ColoredFormatter',
+             'format': "%(log_color)s%(levelname)-8s%(reset)s %(blue)s%(message)s"
+         }
+     }
+ })
+ ```
+
+ A full example dictionary can be found in `tests/test_colorlog.py`.
+
+ ### With [`fileConfig`][fileConfig]
+
+ ```ini
+ ...
+
+ [formatters]
+ keys=color
+
+ [formatter_color]
+ class=colorlog.ColoredFormatter
+ format=%(log_color)s%(levelname)-8s%(reset)s %(bg_blue)s[%(name)s]%(reset)s %(message)s from fileConfig
+ datefmt=%m-%d %H:%M:%S
+ ```
+
+ An instance of ColoredFormatter created with those arguments will then be used
+ by any handlers that are configured to use the `color` formatter.
+
+ A full example configuration can be found in `tests/test_config.ini`.
+
+ ### With custom log levels
+
+ ColoredFormatter will work with custom log levels added with
+ [`logging.addLevelName`][addLevelName]:
+
+ ```python
+ import logging, colorlog
+ TRACE = 5
+ logging.addLevelName(TRACE, 'TRACE')
+ formatter = colorlog.ColoredFormatter(log_colors={'TRACE': 'yellow'})
+ handler = logging.StreamHandler()
+ handler.setFormatter(formatter)
+ logger = logging.getLogger('example')
+ logger.addHandler(handler)
+ logger.setLevel('TRACE')
+ logger.log(TRACE, 'a message using a custom level')
+ ```
+
+ ## Tests
+
+ Tests similar to the above examples are found in `tests/test_colorlog.py`.
+
+ ## Status
+
+ colorlog is in maintenance mode. I try and ensure bugfixes are published,
+ but compatibility with a wide set of Python versions makes this a difficult
+ codebase to add features to. Any changes that might break backwards
+ compatibility for existing users will not be considered.
+
+ ## Alternatives
+
+ There are some more modern libraries for improving Python logging you may
+ find useful.
+
+ - [structlog]
+ - [jsonlog]
+
+ ## Projects using colorlog
+
+ GitHub provides [a list of projects that depend on colorlog][dependents].
+
+ Some early adopters included [Errbot], [Pythran], and [zenlog].
+
+ ## Licence
+
+ Copyright (c) 2012-2025 Sam Clements <sam@borntyping.co.uk>
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of
+ this software and associated documentation files (the "Software"), to deal in
+ the Software without restriction, including without limitation the rights to
+ use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+ the Software, and to permit persons to whom the Software is furnished to do so,
+ subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+ COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+ IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ [dictConfig]: http://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
+ [fileConfig]: http://docs.python.org/3/library/logging.config.html#logging.config.fileConfig
+ [addLevelName]: https://docs.python.org/3/library/logging.html#logging.addLevelName
+ [Formatter]: http://docs.python.org/3/library/logging.html#logging.Formatter
+ [tox]: http://tox.readthedocs.org/
+ [Arch]: https://archlinux.org/packages/extra/any/python-colorlog/
+ [BSD ports]: https://www.freshports.org/devel/py-colorlog/
+ [colorama]: https://pypi.python.org/pypi/colorama
+ [Conda]: https://anaconda.org/conda-forge/colorlog
+ [Debian]: https://packages.debian.org/buster/python3-colorlog
+ [Errbot]: http://errbot.io/
+ [Fedora]: https://src.fedoraproject.org/rpms/python-colorlog
+ [Gentoo]: https://packages.gentoo.org/packages/dev-python/colorlog
+ [OpenSuse]: http://rpm.pbone.net/index.php3?stat=3&search=python-colorlog&srodzaj=3
+ [Pythran]: https://github.com/serge-sans-paille/pythran
+ [Ubuntu]: https://launchpad.net/python-colorlog
+ [zenlog]: https://github.com/ManufacturaInd/python-zenlog
+ [structlog]: https://www.structlog.org/en/stable/
+ [jsonlog]: https://github.com/borntyping/jsonlog
+ [dependents]: https://github.com/borntyping/python-colorlog/network/dependents?package_id=UGFja2FnZS01MDk3NDcyMQ%3D%3D
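Tying the usage pieces of the README above together, a minimal end-to-end sketch using the documented colorlog API; the logger name and messages are illustrative:

```python
import colorlog

handler = colorlog.StreamHandler()
handler.setFormatter(colorlog.ColoredFormatter(
    '%(log_color)s%(levelname)s:%(name)s:%(message)s'))

logger = colorlog.getLogger('example')
logger.addHandler(handler)
logger.setLevel('DEBUG')  # logging accepts level names as strings

logger.debug('colored cyan by the default log_colors')
logger.error('colored red by the default log_colors')
```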
.cache/pip/http-v2/3/9/5/b/c/395bc73efd302f7d742a8b69a0d15de0af47c6a2b8a5accf08c79c9b ADDED
Binary file (1.15 kB).

.cache/pip/http-v2/3/9/5/b/c/395bc73efd302f7d742a8b69a0d15de0af47c6a2b8a5accf08c79c9b.body ADDED
Binary file (56.1 kB).

.cache/pip/http-v2/3/a/3/0/9/3a3094a7a3e575e1209e7823121d830fa26ecec395f096840b934033 ADDED
Binary file (1.19 kB).
 
.cache/pip/http-v2/3/a/3/0/9/3a3094a7a3e575e1209e7823121d830fa26ecec395f096840b934033.body ADDED
@@ -0,0 +1,574 @@
1
+ Metadata-Version: 2.1
2
+ Name: timm
3
+ Version: 1.0.26
4
+ Summary: PyTorch Image Models
5
+ Keywords: pytorch,image-classification
6
+ Author-Email: Ross Wightman <ross@huggingface.co>
7
+ License: Apache-2.0
8
+ Classifier: Development Status :: 5 - Production/Stable
9
+ Classifier: Intended Audience :: Education
10
+ Classifier: Intended Audience :: Science/Research
11
+ Classifier: License :: OSI Approved :: Apache Software License
12
+ Classifier: Programming Language :: Python :: 3.8
13
+ Classifier: Programming Language :: Python :: 3.9
14
+ Classifier: Programming Language :: Python :: 3.10
15
+ Classifier: Programming Language :: Python :: 3.11
16
+ Classifier: Programming Language :: Python :: 3.12
17
+ Classifier: Topic :: Scientific/Engineering
18
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
+ Classifier: Topic :: Software Development
20
+ Classifier: Topic :: Software Development :: Libraries
21
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
22
+ Project-URL: homepage, https://github.com/huggingface/pytorch-image-models
23
+ Project-URL: documentation, https://huggingface.co/docs/timm/en/index
24
+ Project-URL: repository, https://github.com/huggingface/pytorch-image-models
25
+ Requires-Python: >=3.8
26
+ Requires-Dist: torch
27
+ Requires-Dist: torchvision
28
+ Requires-Dist: pyyaml
29
+ Requires-Dist: huggingface_hub
30
+ Requires-Dist: safetensors
31
+ Description-Content-Type: text/markdown
32
+
33
+ # PyTorch Image Models
34
+ - [What's New](#whats-new)
35
+ - [Introduction](#introduction)
36
+ - [Models](#models)
37
+ - [Features](#features)
38
+ - [Results](#results)
39
+ - [Getting Started (Documentation)](#getting-started-documentation)
40
+ - [Train, Validation, Inference Scripts](#train-validation-inference-scripts)
41
+ - [Awesome PyTorch Resources](#awesome-pytorch-resources)
42
+ - [Licenses](#licenses)
43
+ - [Citing](#citing)
44
+
45
+ ## What's New
46
+
47
+ ## March 23, 2026
48
+ * Improve pickle checkpoint handling security. Default all loading to `weights_only=True`, add safe_global for ArgParse.
49
+ * Improve attention mask handling for core ViT/EVA models & layers. Resolve bool masks, pass `is_causal` through for SSL tasks.
50
+ * Fix class & register token uses with ViT and no pos embed enabled.
51
+ * Add Patch Representation Refinement (PRR) as a pooling option in ViT. Thanks Sina (https://github.com/sinahmr).
52
+ * Improve consistency of output projection / MLP dimensions for attention pooling layers.
53
+ * Hiera model F.SDPA optimization to allow Flash Attention kernel use.
54
+ * Caution added to SGDP optimizer.
55
+ * Release 1.0.26. First maintenance release since my departure from Hugging Face.
56
+
57
+ ## Feb 23, 2026
58
+ * Add token distillation training support to distillation task wrappers
59
+ * Remove some torch.jit usage in prep for official deprecation
60
+ * Caution added to AdamP optimizer
61
+ * Call reset_parameters() even if meta-device init so that buffers get init w/ hacks like init_empty_weights
62
+ * Tweak Muon optimizer to work with DTensor/FSDP2 (clamp_ instead of clamp_min_, alternate NS branch for DTensor)
63
+ * Release 1.0.25
64
+
65
+ ## Jan 21, 2026
66
+ * **Compat Break**: Fix oversight w/ QKV vs MLP bias in `ParallelScalingBlock` (& `DiffParallelScalingBlock`)
67
+ * Does not impact any trained `timm` models but could impact downstream use.
68
+
69
+ ## Jan 5 & 6, 2026
70
+ * Release 1.0.24
71
+ * Add new benchmark result csv files for inference timing on all models w/ RTX Pro 6000, 5090, and 4090 cards w/ PyTorch 2.9.1
72
+ * Fix moved module error in deprecated timm.models.layers import path that impacts legacy imports
73
+ * Release 1.0.23
74
+
75
+ ## Dec 30, 2025
76
+ * Add better NAdaMuon trained `dpwee`, `dwee`, `dlittle` (differential) ViTs with a small boost over previous runs
77
+ * https://huggingface.co/timm/vit_dlittle_patch16_reg1_gap_256.sbb_nadamuon_in1k (83.24% top-1)
78
+ * https://huggingface.co/timm/vit_dwee_patch16_reg1_gap_256.sbb_nadamuon_in1k (81.80% top-1)
79
+ * https://huggingface.co/timm/vit_dpwee_patch16_reg1_gap_256.sbb_nadamuon_in1k (81.67% top-1)
80
+ * Add a ~21M param `timm` variant of the CSATv2 model at 512x512 & 640x640
81
+ * https://huggingface.co/timm/csatv2_21m.sw_r640_in1k (83.13% top-1)
82
+ * https://huggingface.co/timm/csatv2_21m.sw_r512_in1k (82.58% top-1)
83
+ * Factor non-persistent param init out of `__init__` into a common method that can be externally called via `init_non_persistent_buffers()` after meta-device init.
84
+
85
+ ## Dec 12, 2025
86
+ * Add CSATV2 model (thanks https://github.com/gusdlf93) -- a lightweight but high res model with DCT stem & spatial attention. https://huggingface.co/Hyunil/CSATv2
87
+ * Add AdaMuon and NAdaMuon optimizer support to existing `timm` Muon impl. Appears more competitive vs AdamW with familiar hparams for image tasks.
88
+ * End of year PR cleanup, merge aspects of several long open PR
89
+ * Merge differential attention (`DiffAttention`), add corresponding `DiffParallelScalingBlock` (for ViT), train some wee vits
90
+ * https://huggingface.co/timm/vit_dwee_patch16_reg1_gap_256.sbb_in1k
91
+ * https://huggingface.co/timm/vit_dpwee_patch16_reg1_gap_256.sbb_in1k
92
+ * Add a few pooling modules, `LsePlus` and `SimPool`
93
+ * Cleanup, optimize `DropBlock2d` (also add support to ByobNet based models)
94
+ * Bump unit tests to PyTorch 2.9.1 + Python 3.13 on upper end, lower still PyTorch 1.13 + Python 3.10
95
+
96
+ ## Dec 1, 2025
97
+ * Add lightweight task abstraction, add logits and feature distillation support to train script via new tasks.
98
+ * Remove old APEX AMP support
99
+
100
+ ## Nov 4, 2025
101
+ * Fix LayerScale / LayerScale2d init bug (init values ignored), introduced in 1.0.21. Thanks https://github.com/Ilya-Fradlin
102
+ * Release 1.0.22
103
+
104
+ ## Oct 31, 2025 🎃
105
+ * Update imagenet & OOD variant result csv files to include a few new models and verify correctness over several torch & timm versions
106
+ * EfficientNet-X and EfficientNet-H B5 model weights added as part of a hparam search for AdamW vs Muon (still iterating on Muon runs)
107
+
108
+ ## Oct 16-20, 2025
109
+ * Add an impl of the Muon optimizer (based on https://github.com/KellerJordan/Muon) with customizations
110
+ * extra flexibility and improved handling for conv weights and fallbacks for weight shapes not suited for orthogonalization
111
+ * small speedup for NS iterations by reducing allocs and using fused (b)add(b)mm ops
112
+ * by default uses AdamW (or NAdamW if `nesterov=True`) updates if muon not suitable for parameter shape (or excluded via param group flag)
113
+ * like torch impl, select from several LR scale adjustment fns via `adjust_lr_fn`
114
+ * select from several NS coefficient presets or specify your own via `ns_coefficients`
115
+ * First 2 steps of 'meta' device model initialization supported
116
+ * Fix several ops that were breaking creation under 'meta' device context
117
+ * Add device & dtype factory kwarg support to all models and modules (anything inherting from nn.Module) in `timm`
118
+ * License fields added to pretrained cfgs in code
119
+ * Release 1.0.21
120
+
121
+ ## Sept 21, 2025
122
+ * Remap DINOv3 ViT weight tags from `lvd_1689m` -> `lvd1689m` to match (same for `sat_493m` -> `sat493m`)
123
+ * Release 1.0.20
124
+
125
+ ## Sept 17, 2025
126
+ * DINOv3 (https://arxiv.org/abs/2508.10104) ConvNeXt and ViT models added. ConvNeXt models were mapped to existing `timm` model. ViT support done via the EVA base model w/ a new `RotaryEmbeddingDinoV3` to match the DINOv3 specific RoPE impl
127
+ * HuggingFace Hub: https://huggingface.co/collections/timm/timm-dinov3-68cb08bb0bee365973d52a4d
128
+ * MobileCLIP-2 (https://arxiv.org/abs/2508.20691) vision encoders. New MCI3/MCI4 FastViT variants added and weights mapped to existing FastViT and B, L/14 ViTs.
129
+ * MetaCLIP-2 Worldwide (https://arxiv.org/abs/2507.22062) ViT encoder weights added.
130
+ * SigLIP-2 (https://arxiv.org/abs/2502.14786) NaFlex ViT encoder weights added via timm NaFlexViT model.
131
+ * Misc fixes and contributions
132
+
133
+ ## July 23, 2025
134
+ * Add `set_input_size()` method to EVA models, used by OpenCLIP 3.0.0 to allow resizing for timm based encoder models.
135
+ * Release 1.0.18, needed for PE-Core S & T models in OpenCLIP 3.0.0
136
+ * Fix small typing issue that broke Python 3.9 compat. 1.0.19 patch release.
137
+
138
+ ## July 21, 2025
139
+ * ROPE support added to NaFlexViT. All models covered by the EVA base (`eva.py`) including EVA, EVA02, Meta PE ViT, `timm` SBB ViT w/ ROPE, and Naver ROPE-ViT can be now loaded in NaFlexViT when `use_naflex=True` passed at model creation time
140
+ * More Meta PE ViT encoders added, including small/tiny variants, lang variants w/ tiling, and more spatial variants.
141
+ * PatchDropout fixed with NaFlexViT and also w/ EVA models (regression after adding Naver ROPE-ViT)
142
+ * Fix XY order with grid_indexing='xy', impacted non-square image use in 'xy' mode (only ROPE-ViT and PE impacted).
143
+
144
+ ## July 7, 2025
145
+ * MobileNet-v5 backbone tweaks for improved Google Gemma 3n behaviour (to pair with updated official weights)
146
+ * Add stem bias (zero'd in updated weights, compat break with old weights)
147
+ * GELU -> GELU (tanh approx). A minor change to be closer to JAX
148
+ * Add two arguments to layer-decay support, a min scale clamp and 'no optimization' scale threshold
149
+ * Add 'Fp32' LayerNorm, RMSNorm, SimpleNorm variants that can be enabled to force computation of norm in float32
150
+ * Some typing, argument cleanup for norm, norm+act layers done with above
151
+ * Support Naver ROPE-ViT (https://github.com/naver-ai/rope-vit) in `eva.py`, add RotaryEmbeddingMixed module for mixed mode, weights on HuggingFace Hub
152
+
153
+ |model |img_size|top1 |top5 |param_count|
154
+ |--------------------------------------------------|--------|------|------|-----------|
155
+ |vit_large_patch16_rope_mixed_ape_224.naver_in1k |224 |84.84 |97.122|304.4 |
156
+ |vit_large_patch16_rope_mixed_224.naver_in1k |224 |84.828|97.116|304.2 |
157
+ |vit_large_patch16_rope_ape_224.naver_in1k |224 |84.65 |97.154|304.37 |
158
+ |vit_large_patch16_rope_224.naver_in1k |224 |84.648|97.122|304.17 |
159
+ |vit_base_patch16_rope_mixed_ape_224.naver_in1k |224 |83.894|96.754|86.59 |
160
+ |vit_base_patch16_rope_mixed_224.naver_in1k |224 |83.804|96.712|86.44 |
161
+ |vit_base_patch16_rope_ape_224.naver_in1k |224 |83.782|96.61 |86.59 |
162
+ |vit_base_patch16_rope_224.naver_in1k |224 |83.718|96.672|86.43 |
163
+ |vit_small_patch16_rope_224.naver_in1k |224 |81.23 |95.022|21.98 |
164
+ |vit_small_patch16_rope_mixed_224.naver_in1k |224 |81.216|95.022|21.99 |
165
+ |vit_small_patch16_rope_ape_224.naver_in1k |224 |81.004|95.016|22.06 |
166
+ |vit_small_patch16_rope_mixed_ape_224.naver_in1k |224 |80.986|94.976|22.06 |
167
+ * Some cleanup of ROPE modules, helpers, and FX tracing leaf registration
168
+ * Preparing version 1.0.17 release
169
+
170
+ ## June 26, 2025
171
+ * MobileNetV5 backbone (w/ encoder only variant) for [Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n#parameters) image encoder
172
+ * Version 1.0.16 released
173
+
174
+ ## June 23, 2025
175
+ * Add F.grid_sample based 2D and factorized pos embed resize to NaFlexViT. Faster when lots of different sizes (based on example by https://github.com/stas-sl).
176
+ * Further speed up patch embed resample by replacing vmap with matmul (based on snippet by https://github.com/stas-sl).
177
+ * Add 3 initial native aspect NaFlexViT checkpoints created while testing, ImageNet-1k and 3 different pos embed configs w/ same hparams.
178
+
179
+ | Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len |
180
+ |:---|:---:|:---:|:---:|:---:|
181
+ | [naflexvit_base_patch16_par_gap.e300_s576_in1k](https://hf.co/timm/naflexvit_base_patch16_par_gap.e300_s576_in1k) | 83.67 | 96.45 | 86.63 | 576 |
182
+ | [naflexvit_base_patch16_parfac_gap.e300_s576_in1k](https://hf.co/timm/naflexvit_base_patch16_parfac_gap.e300_s576_in1k) | 83.63 | 96.41 | 86.46 | 576 |
183
+ | [naflexvit_base_patch16_gap.e300_s576_in1k](https://hf.co/timm/naflexvit_base_patch16_gap.e300_s576_in1k) | 83.50 | 96.46 | 86.63 | 576 |
184
+ * Support gradient checkpointing for `forward_intermediates` and fix some checkpointing bugs. Thanks https://github.com/brianhou0208
185
+ * Add 'corrected weight decay' (https://arxiv.org/abs/2506.02285) as option to AdamW (legacy), Adopt, Kron, Adafactor (BV), Lamb, LaProp, Lion, NadamW, RmsPropTF, SGDW optimizers
186
+ * Switch PE (perception encoder) ViT models to use native timm weights instead of remapping on the fly
187
+ * Fix cuda stream bug in prefetch loader
188
+
189
+ ## June 5, 2025
190
+ * Initial NaFlexVit model code. NaFlexVit is a Vision Transformer with:
191
+ 1. Encapsulated embedding and position encoding in a single module
192
+ 2. Support for nn.Linear patch embedding on pre-patchified (dictionary) inputs
193
+ 3. Support for NaFlex variable aspect, variable resolution (SigLip-2: https://arxiv.org/abs/2502.14786)
194
+ 4. Support for FlexiViT variable patch size (https://arxiv.org/abs/2212.08013)
195
+ 5. Support for NaViT fractional/factorized position embedding (https://arxiv.org/abs/2307.06304)
196
+ * Existing vit models in `vision_transformer.py` can be loaded into the NaFlexVit model by adding the `use_naflex=True` flag to `create_model`
197
+ * Some native weights coming soon
198
+ * A full NaFlex data pipeline is available that allows training / fine-tuning / evaluating with variable aspect / size images
199
+ * To enable in `train.py` and `validate.py` add the `--naflex-loader` arg, must be used with a NaFlexVit
200
+ * To evaluate an existing (classic) ViT loaded in NaFlexVit model w/ NaFlex data pipe:
201
+ * `python validate.py /imagenet --amp -j 8 --model vit_base_patch16_224 --model-kwargs use_naflex=True --naflex-loader --naflex-max-seq-len 256`
202
+ * The training has some extra args features worth noting
203
+ * The `--naflex-train-seq-lens'` argument specifies which sequence lengths to randomly pick from per batch during training
204
+ * The `--naflex-max-seq-len` argument sets the target sequence length for validation
205
+ * Adding `--model-kwargs enable_patch_interpolator=True --naflex-patch-sizes 12 16 24` will enable random patch size selection per-batch w/ interpolation
206
+ * The `--naflex-loss-scale` arg changes loss scaling mode per batch relative to the batch size, `timm` NaFlex loading changes the batch size for each seq len
207
+
208
+ ## May 28, 2025
209
+ * Add a number of small/fast models thanks to https://github.com/brianhou0208
210
+ * SwiftFormer - [(ICCV2023) SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://github.com/Amshaker/SwiftFormer)
211
+ * FasterNet - [(CVPR2023) Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks](https://github.com/JierunChen/FasterNet)
212
+ * SHViT - [(CVPR2024) SHViT: Single-Head Vision Transformer with Memory Efficient](https://github.com/ysj9909/SHViT)
213
+ * StarNet - [(CVPR2024) Rewrite the Stars](https://github.com/ma-xu/Rewrite-the-Stars)
214
+ * GhostNet-V3 [GhostNetV3: Exploring the Training Strategies for Compact Models](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv3_pytorch)
215
+ * Update EVA ViT (closest match) to support Perception Encoder models (https://arxiv.org/abs/2504.13181) from Meta, loading Hub weights but I still need to push dedicated `timm` weights
216
+ * Add some flexibility to ROPE impl
217
+ * Big increase in number of models supporting `forward_intermediates()` and some additional fixes thanks to https://github.com/brianhou0208
218
+ * DaViT, EdgeNeXt, EfficientFormerV2, EfficientViT(MIT), EfficientViT(MSRA), FocalNet, GCViT, HGNet /V2, InceptionNeXt, Inception-V4, MambaOut, MetaFormer, NesT, Next-ViT, PiT, PVT V2, RepGhostNet, RepViT, ResNetV2, ReXNet, TinyViT, TResNet, VoV
219
+ * TNT model updated w/ new weights `forward_intermediates()` thanks to https://github.com/brianhou0208
220
+ * Add `local-dir:` pretrained schema, can use `local-dir:/path/to/model/folder` for model name to source model / pretrained cfg & weights Hugging Face Hub models (config.json + weights file) from a local folder.
221
+ * Fixes, improvements for onnx export
222
+
223
+ ## Feb 21, 2025
224
+ * SigLIP 2 ViT image encoders added (https://huggingface.co/collections/timm/siglip-2-67b8e72ba08b09dd97aecaf9)
225
+ * Variable resolution / aspect NaFlex versions are a WIP
226
+ * Add 'SO150M2' ViT weights trained with SBB recipes, great results, better for ImageNet than previous attempt w/ less training.
227
+ * `vit_so150m2_patch16_reg1_gap_448.sbb_e200_in12k_ft_in1k` - 88.1% top-1
228
+ * `vit_so150m2_patch16_reg1_gap_384.sbb_e200_in12k_ft_in1k` - 87.9% top-1
229
+ * `vit_so150m2_patch16_reg1_gap_256.sbb_e200_in12k_ft_in1k` - 87.3% top-1
230
+ * `vit_so150m2_patch16_reg4_gap_256.sbb_e200_in12k`
231
+ * Updated InternViT-300M '2.5' weights
232
+ * Release 1.0.15
233
+
234
+ ## Feb 1, 2025
235
+ * FYI PyTorch 2.6 & Python 3.13 are tested and working w/ current main and released version of `timm`
236
+
237
+ ## Jan 27, 2025
238
+ * Add Kron Optimizer (PSGD w/ Kronecker-factored preconditioner)
239
+ * Code from https://github.com/evanatyourservice/kron_torch
240
+ * See also https://sites.google.com/site/lixilinx/home/psgd
241
+
242
+ ## Jan 19, 2025
243
+ * Fix loading of LeViT safetensor weights, remove conversion code which should have been deactivated
244
+ * Add 'SO150M' ViT weights trained with SBB recipes, decent results, but not optimal shape for ImageNet-12k/1k pretrain/ft
245
+ * `vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k_ft_in1k` - 86.7% top-1
246
+ * `vit_so150m_patch16_reg4_gap_384.sbb_e250_in12k_ft_in1k` - 87.4% top-1
247
+ * `vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k`
248
+ * Misc typing, typo, etc. cleanup
249
+ * 1.0.14 release to get above LeViT fix out
250
+
251
+ ## Jan 9, 2025
252
+ * Add support to train and validate in pure `bfloat16` or `float16`
253
+ * `wandb` project name arg added by https://github.com/caojiaolong, use arg.experiment for name
254
+ * Fix old issue w/ checkpoint saving not working on filesystem w/o hard-link support (e.g. FUSE fs mounts)
255
+ * 1.0.13 release
256
+
257
+ ## Jan 6, 2025
258
+ * Add `torch.utils.checkpoint.checkpoint()` wrapper in `timm.models` that defaults `use_reentrant=False`, unless `TIMM_REENTRANT_CKPT=1` is set in env.
259
+
260
+ ## Dec 31, 2024
261
+ * `convnext_nano` 384x384 ImageNet-12k pretrain & fine-tune. https://huggingface.co/models?search=convnext_nano%20r384
262
+ * Add AIM-v2 encoders from https://github.com/apple/ml-aim, see on Hub: https://huggingface.co/models?search=timm%20aimv2
263
+ * Add PaliGemma2 encoders from https://github.com/google-research/big_vision to existing PaliGemma, see on Hub: https://huggingface.co/models?search=timm%20pali2
264
+ * Add missing L/14 DFN2B 39B CLIP ViT, `vit_large_patch14_clip_224.dfn2b_s39b`
265
+ * Fix existing `RmsNorm` layer & fn to match standard formulation, use PT 2.5 impl when possible. Move old impl to `SimpleNorm` layer, it's LN w/o centering or bias. There were only two `timm` models using it, and they have been updated.
266
+ * Allow override of `cache_dir` arg for model creation
267
+ * Pass through `trust_remote_code` for HF datasets wrapper
268
+ * `inception_next_atto` model added by creator
269
+ * Adan optimizer caution, and Lamb decoupled weight decay options
270
+ * Some feature_info metadata fixed by https://github.com/brianhou0208
271
+ * All OpenCLIP and JAX (CLIP, SigLIP, Pali, etc) model weights that used load time remapping were given their own HF Hub instances so that they work with `hf-hub:` based loading, and thus will work with new Transformers `TimmWrapperModel`
272
+
273
+ ## Introduction
274
+
275
+ Py**T**orch **Im**age **M**odels (`timm`) is a collection of image models, layers, utilities, optimizers, schedulers, data-loaders / augmentations, and reference training / validation scripts that aim to pull together a wide variety of SOTA models with ability to reproduce ImageNet training results.
276
+
277
+ The work of many others is present here. I've tried to make sure all source material is acknowledged via links to github, arxiv papers, etc in the README, documentation, and code docstrings. Please let me know if I missed anything.
278
+
279
+ ## Features
280
+
281
+ ### Models
282
+
283
+ All model architecture families include variants with pretrained weights. There are specific model variants without any weights, it is NOT a bug. Help training new or better weights is always appreciated.
284
+
285
+ * Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
286
+ * BEiT - https://arxiv.org/abs/2106.08254
287
+ * BEiT-V2 - https://arxiv.org/abs/2208.06366
288
+ * BEiT3 - https://arxiv.org/abs/2208.10442
289
+ * Big Transfer ResNetV2 (BiT) - https://arxiv.org/abs/1912.11370
290
+ * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
291
+ * CaiT (Class-Attention in Image Transformers) - https://arxiv.org/abs/2103.17239
292
+ * CoaT (Co-Scale Conv-Attentional Image Transformers) - https://arxiv.org/abs/2104.06399
293
+ * CoAtNet (Convolution and Attention) - https://arxiv.org/abs/2106.04803
294
+ * ConvNeXt - https://arxiv.org/abs/2201.03545
295
+ * ConvNeXt-V2 - http://arxiv.org/abs/2301.00808
296
+ * ConViT (Soft Convolutional Inductive Biases Vision Transformers)- https://arxiv.org/abs/2103.10697
297
+ * CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
298
+ * DeiT - https://arxiv.org/abs/2012.12877
299
+ * DeiT-III - https://arxiv.org/pdf/2204.07118.pdf
300
+ * DenseNet - https://arxiv.org/abs/1608.06993
301
+ * DLA - https://arxiv.org/abs/1707.06484
302
+ * DPN (Dual-Path Network) - https://arxiv.org/abs/1707.01629
303
+ * EdgeNeXt - https://arxiv.org/abs/2206.10589
304
+ * EfficientFormer - https://arxiv.org/abs/2206.01191
305
+ * EfficientFormer-V2 - https://arxiv.org/abs/2212.08059
306
+ * EfficientNet (MBConvNet Family)
307
+ * EfficientNet NoisyStudent (B0-B7, L2) - https://arxiv.org/abs/1911.04252
308
+ * EfficientNet AdvProp (B0-B8) - https://arxiv.org/abs/1911.09665
309
+ * EfficientNet (B0-B7) - https://arxiv.org/abs/1905.11946
310
+ * EfficientNet-EdgeTPU (S, M, L) - https://ai.googleblog.com/2019/08/efficientnet-edgetpu-creating.html
311
+ * EfficientNet V2 - https://arxiv.org/abs/2104.00298
312
+ * FBNet-C - https://arxiv.org/abs/1812.03443
313
+ * MixNet - https://arxiv.org/abs/1907.09595
314
+ * MNASNet B1, A1 (Squeeze-Excite), and Small - https://arxiv.org/abs/1807.11626
315
+ * MobileNet-V2 - https://arxiv.org/abs/1801.04381
316
+ * Single-Path NAS - https://arxiv.org/abs/1904.02877
317
+ * TinyNet - https://arxiv.org/abs/2010.14819
318
+ * EfficientViT (MIT) - https://arxiv.org/abs/2205.14756
319
+ * EfficientViT (MSRA) - https://arxiv.org/abs/2305.07027
320
+ * EVA - https://arxiv.org/abs/2211.07636
321
+ * EVA-02 - https://arxiv.org/abs/2303.11331
322
+ * FasterNet - https://arxiv.org/abs/2303.03667
323
+ * FastViT - https://arxiv.org/abs/2303.14189
324
+ * FlexiViT - https://arxiv.org/abs/2212.08013
325
+ * FocalNet (Focal Modulation Networks) - https://arxiv.org/abs/2203.11926
326
+ * GCViT (Global Context Vision Transformer) - https://arxiv.org/abs/2206.09959
327
+ * GhostNet - https://arxiv.org/abs/1911.11907
328
+ * GhostNet-V2 - https://arxiv.org/abs/2211.12905
329
+ * GhostNet-V3 - https://arxiv.org/abs/2404.11202
330
+ * gMLP - https://arxiv.org/abs/2105.08050
331
+ * GPU-Efficient Networks - https://arxiv.org/abs/2006.14090
332
+ * Halo Nets - https://arxiv.org/abs/2103.12731
333
+ * HGNet / HGNet-V2 - TBD
334
+ * HRNet - https://arxiv.org/abs/1908.07919
335
+ * InceptionNeXt - https://arxiv.org/abs/2303.16900
336
+ * Inception-V3 - https://arxiv.org/abs/1512.00567
337
+ * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
338
+ * Lambda Networks - https://arxiv.org/abs/2102.08602
339
+ * LeViT (Vision Transformer in ConvNet's Clothing) - https://arxiv.org/abs/2104.01136
340
+ * MambaOut - https://arxiv.org/abs/2405.07992
341
+ * MaxViT (Multi-Axis Vision Transformer) - https://arxiv.org/abs/2204.01697
342
+ * MetaFormer (PoolFormer-v2, ConvFormer, CAFormer) - https://arxiv.org/abs/2210.13452
343
+ * MLP-Mixer - https://arxiv.org/abs/2105.01601
344
+ * MobileCLIP - https://arxiv.org/abs/2311.17049
345
+ * MobileNet-V3 (MBConvNet w/ Efficient Head) - https://arxiv.org/abs/1905.02244
346
+ * FBNet-V3 - https://arxiv.org/abs/2006.02049
347
+ * HardCoRe-NAS - https://arxiv.org/abs/2102.11646
348
+ * LCNet - https://arxiv.org/abs/2109.15099
349
+ * MobileNetV4 - https://arxiv.org/abs/2404.10518
350
+ * MobileOne - https://arxiv.org/abs/2206.04040
351
+ * MobileViT - https://arxiv.org/abs/2110.02178
352
+ * MobileViT-V2 - https://arxiv.org/abs/2206.02680
353
+ * MViT-V2 (Improved Multiscale Vision Transformer) - https://arxiv.org/abs/2112.01526
354
+ * NASNet-A - https://arxiv.org/abs/1707.07012
355
+ * NesT - https://arxiv.org/abs/2105.12723
356
+ * Next-ViT - https://arxiv.org/abs/2207.05501
357
+ * NFNet-F - https://arxiv.org/abs/2102.06171
358
+ * NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
359
+ * PE (Perception Encoder) - https://arxiv.org/abs/2504.13181
360
+ * PNasNet - https://arxiv.org/abs/1712.00559
361
+ * PoolFormer (MetaFormer) - https://arxiv.org/abs/2111.11418
362
+ * Pooling-based Vision Transformer (PiT) - https://arxiv.org/abs/2103.16302
363
+ * PVT-V2 (Improved Pyramid Vision Transformer) - https://arxiv.org/abs/2106.13797
364
+ * RDNet (DenseNets Reloaded) - https://arxiv.org/abs/2403.19588
365
+ * RegNet - https://arxiv.org/abs/2003.13678
366
+ * RegNetZ - https://arxiv.org/abs/2103.06877
367
+ * RepVGG - https://arxiv.org/abs/2101.03697
368
+ * RepGhostNet - https://arxiv.org/abs/2211.06088
369
+ * RepViT - https://arxiv.org/abs/2307.09283
370
+ * ResMLP - https://arxiv.org/abs/2105.03404
371
+ * ResNet/ResNeXt
372
+ * ResNet (v1b/v1.5) - https://arxiv.org/abs/1512.03385
373
+ * ResNeXt - https://arxiv.org/abs/1611.05431
374
+ * 'Bag of Tricks' / Gluon C, D, E, S variations - https://arxiv.org/abs/1812.01187
375
+ * Weakly-supervised (WSL) Instagram pretrained / ImageNet tuned ResNeXt101 - https://arxiv.org/abs/1805.00932
376
+ * Semi-supervised (SSL) / Semi-weakly Supervised (SWSL) ResNet/ResNeXts - https://arxiv.org/abs/1905.00546
377
+ * ECA-Net (ECAResNet) - https://arxiv.org/abs/1910.03151v4
378
+ * Squeeze-and-Excitation Networks (SEResNet) - https://arxiv.org/abs/1709.01507
379
+ * ResNet-RS - https://arxiv.org/abs/2103.07579
380
+ * Res2Net - https://arxiv.org/abs/1904.01169
381
+ * ResNeSt - https://arxiv.org/abs/2004.08955
382
+ * ReXNet - https://arxiv.org/abs/2007.00992
383
+ * ROPE-ViT - https://arxiv.org/abs/2403.13298
384
+ * SelecSLS - https://arxiv.org/abs/1907.00837
385
+ * Selective Kernel Networks - https://arxiv.org/abs/1903.06586
386
+ * Sequencer2D - https://arxiv.org/abs/2205.01972
387
+ * SHViT - https://arxiv.org/abs/2401.16456
388
+ * SigLIP (image encoder) - https://arxiv.org/abs/2303.15343
389
+ * SigLIP 2 (image encoder) - https://arxiv.org/abs/2502.14786
390
+ * StarNet - https://arxiv.org/abs/2403.19967
391
+ * SwiftFormer - https://arxiv.org/pdf/2303.15446
392
+ * Swin S3 (AutoFormerV2) - https://arxiv.org/abs/2111.14725
393
+ * Swin Transformer - https://arxiv.org/abs/2103.14030
394
+ * Swin Transformer V2 - https://arxiv.org/abs/2111.09883
395
+ * TinyViT - https://arxiv.org/abs/2207.10666
396
+ * Transformer-iN-Transformer (TNT) - https://arxiv.org/abs/2103.00112
397
+ * TResNet - https://arxiv.org/abs/2003.13630
398
+ * Twins (Spatial Attention in Vision Transformers) - https://arxiv.org/pdf/2104.13840.pdf
399
+ * VGG - https://arxiv.org/abs/1409.1556
400
+ * Visformer - https://arxiv.org/abs/2104.12533
401
+ * Vision Transformer - https://arxiv.org/abs/2010.11929
402
+ * ViTamin - https://arxiv.org/abs/2404.02132
403
+ * VOLO (Vision Outlooker) - https://arxiv.org/abs/2106.13112
404
+ * VovNet V2 and V1 - https://arxiv.org/abs/1911.06667
405
+ * Xception - https://arxiv.org/abs/1610.02357
406
+ * Xception (Modified Aligned, Gluon) - https://arxiv.org/abs/1802.02611
407
+ * Xception (Modified Aligned, TF) - https://arxiv.org/abs/1802.02611
408
+ * XCiT (Cross-Covariance Image Transformers) - https://arxiv.org/abs/2106.09681
409
+
410
+ ### Optimizers
411
+ To see full list of optimizers w/ descriptions: `timm.optim.list_optimizers(with_description=True)`
412
+
413
+ Included optimizers available via `timm.optim.create_optimizer_v2` factory method:
414
+ * `adabelief` an implementation of AdaBelief adapted from https://github.com/juntang-zhuang/Adabelief-Optimizer - https://arxiv.org/abs/2010.07468
415
+ * `adafactor` adapted from [FAIRSeq impl](https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py) - https://arxiv.org/abs/1804.04235
416
+ * `adafactorbv` adapted from [Big Vision](https://github.com/google-research/big_vision/blob/main/big_vision/optax.py) - https://arxiv.org/abs/2106.04560
417
+ * `adahessian` by [David Samuel](https://github.com/davda54/ada-hessian) - https://arxiv.org/abs/2006.00719
418
+ * `adamp` and `sgdp` by [Naver ClovAI](https://github.com/clovaai) - https://arxiv.org/abs/2006.08217
419
+ * `adamuon` and `nadamuon` as per https://github.com/Chongjie-Si/AdaMuon - https://arxiv.org/abs/2507.11005
420
+ * `adan` an implementation of Adan adapted from https://github.com/sail-sg/Adan - https://arxiv.org/abs/2208.06677
421
+ * `adopt` ADOPT adapted from https://github.com/iShohei220/adopt - https://arxiv.org/abs/2411.02853
422
+ * `kron` PSGD w/ Kronecker-factored preconditioner from https://github.com/evanatyourservice/kron_torch - https://sites.google.com/site/lixilinx/home/psgd
423
+ * `lamb` an implementation of Lamb and LambC (w/ trust-clipping) cleaned up and modified to support use with XLA - https://arxiv.org/abs/1904.00962
424
+ * `laprop` optimizer from https://github.com/Z-T-WANG/LaProp-Optimizer - https://arxiv.org/abs/2002.04839
425
+ * `lars` an implementation of LARS and LARC (w/ trust-clipping) - https://arxiv.org/abs/1708.03888
426
+ * `lion` an implementation of Lion adapted from https://github.com/google/automl/tree/master/lion - https://arxiv.org/abs/2302.06675
427
+ * `lookahead` adapted from impl by [Liam](https://github.com/alphadl/lookahead.pytorch) - https://arxiv.org/abs/1907.08610
428
+ * `madgrad` an implementation of MADGRAD adapted from https://github.com/facebookresearch/madgrad - https://arxiv.org/abs/2101.11075
429
+ * `mars` MARS optimizer from https://github.com/AGI-Arena/MARS - https://arxiv.org/abs/2411.10438
430
+ * `muon` MUON optimizer from https://github.com/KellerJordan/Muon with numerous additions and improved non-transformer behaviour
431
+ * `nadam` an implementation of Adam w/ Nesterov momentum
432
+ * `nadamw` an implementation of AdamW (Adam w/ decoupled weight-decay) w/ Nesterov momentum. A simplified impl based on https://github.com/mlcommons/algorithmic-efficiency
433
+ * `novograd` by [Masashi Kimura](https://github.com/convergence-lab/novograd) - https://arxiv.org/abs/1905.11286
434
+ * `radam` by [Liyuan Liu](https://github.com/LiyuanLucasLiu/RAdam) - https://arxiv.org/abs/1908.03265
435
+ * `rmsprop_tf` adapted from PyTorch RMSProp by myself. Reproduces much improved Tensorflow RMSProp behaviour
436
+ * `sgdw` an implementation of SGD w/ decoupled weight-decay
437
+ * `fused<name>` optimizers by name with [NVIDIA Apex](https://github.com/NVIDIA/apex/tree/master/apex/optimizers) installed
438
+ * `bnb<name>` optimizers by name with [BitsAndBytes](https://github.com/TimDettmers/bitsandbytes) installed
439
+ * `cadamw`, `clion`, and more 'Cautious' optimizers from https://github.com/kyleliang919/C-Optim - https://arxiv.org/abs/2411.16085
440
+ * `adam`, `adamw`, `rmsprop`, `adadelta`, `adagrad`, and `sgd` pass through to `torch.optim` implementations
441
+ * `c` suffix (eg `adamc`, `nadamc`) to implement 'corrected weight decay' per https://arxiv.org/abs/2506.02285
442
+
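+ A minimal usage sketch of the factory (the model choice and hyperparameter values below are illustrative, not recommendations):
+
+ ```python
+ import timm
+ from timm.optim import create_optimizer_v2, list_optimizers
+
+ # Browse a few of the available optimizers w/ their short descriptions.
+ for name, desc in list_optimizers(with_description=True)[:5]:
+     print(f'{name}: {desc}')
+
+ model = timm.create_model('resnet50', pretrained=False)
+ # Passing the model (vs raw params) lets the factory build parameter groups,
+ # excluding bias/norm params from weight decay by default.
+ optimizer = create_optimizer_v2(model, opt='adamw', lr=1e-3, weight_decay=0.05)
+ ```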
443
+ ### Augmentations
444
+ * Random Erasing from [Zhun Zhong](https://github.com/zhunzhong07/Random-Erasing/blob/master/transforms.py) - https://arxiv.org/abs/1708.04896
445
+ * Mixup - https://arxiv.org/abs/1710.09412 (a Mixup/CutMix usage sketch follows this list)
446
+ * CutMix - https://arxiv.org/abs/1905.04899
447
+ * AutoAugment (https://arxiv.org/abs/1805.09501) and RandAugment (https://arxiv.org/abs/1909.13719) ImageNet configurations modeled after impl for EfficientNet training (https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/autoaugment.py)
448
+ * AugMix w/ JSD loss, JSD w/ clean + augmented mixing support works with AutoAugment and RandAugment as well - https://arxiv.org/abs/1912.02781
449
+ * SplitBatchNorm - allows splitting batch norm layers between clean and augmented (auxiliary batch norm) data
450
+
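+ A rough sketch of combining these, assuming the `timm.data.Mixup` helper and `timm.data.create_transform` (the policy string and shapes are illustrative):
+
+ ```python
+ import torch
+ from timm.data import Mixup, create_transform
+
+ # RandAugment policy: magnitude 9, magnitude noise std 0.5 (illustrative values).
+ # The transform is applied per-image, typically inside a Dataset.
+ train_transform = create_transform(input_size=224, is_training=True, auto_augment='rand-m9-mstd0.5')
+
+ # One helper handles both Mixup and CutMix, switching between them per batch.
+ mixup_fn = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, switch_prob=0.5, num_classes=1000)
+ images = torch.randn(8, 3, 224, 224)
+ targets = torch.randint(0, 1000, (8,))
+ images, soft_targets = mixup_fn(images, targets)  # targets become soft label vectors
+ ```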
451
+ ### Regularization
452
+ * DropPath aka "Stochastic Depth" - https://arxiv.org/abs/1603.09382 (see the sketch after this list)
453
+ * DropBlock - https://arxiv.org/abs/1810.12890
454
+ * Blur Pooling - https://arxiv.org/abs/1904.11486
455
+
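+ A small sketch of stochastic depth, assuming the `DropPath` layer exported from `timm.layers` (the rates are illustrative):
+
+ ```python
+ import timm
+ import torch
+ from timm.layers import DropPath
+
+ # Per-sample stochastic depth applied to a (toy) residual branch.
+ drop_path = DropPath(drop_prob=0.1)
+ x = torch.randn(2, 64, 56, 56)
+ out = x + drop_path(torch.relu(x))  # branch randomly dropped per sample in train mode
+
+ # Many models also accept a stochastic depth rate at creation time.
+ model = timm.create_model('resnet50', drop_path_rate=0.05)
+ ```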
456
+ ### Other
457
+
458
+ Several (less common) features that I often utilize in my projects are included. Many of these additions are the reason why I maintain my own set of models, instead of using others' via PIP:
459
+
460
+ * All models have a common default configuration interface and API for
461
+ * accessing/changing the classifier - `get_classifier` and `reset_classifier`
462
+ * doing a forward pass on just the features - `forward_features` (see [documentation](https://huggingface.co/docs/timm/feature_extraction))
463
+ * this makes it easy to write consistent network wrappers that work with any of the models
464
+ * All models support multi-scale feature map extraction (feature pyramids) via `create_model` (see the [documentation](https://huggingface.co/docs/timm/feature_extraction) and the sketch after this list)
465
+ * `create_model(name, features_only=True, out_indices=..., output_stride=...)`
466
+ * `out_indices` creation arg specifies which feature maps to return; these indices are 0-based and generally correspond to the `C(i + 1)` feature level.
467
+ * `output_stride` creation arg controls output stride of the network by using dilated convolutions. Most networks are stride 32 by default. Not all networks support this.
468
+ * feature map channel counts and reduction levels (strides) can be queried AFTER model creation via the `.feature_info` member
469
+ * All models have a consistent pretrained weight loader that adapts the final linear layer if necessary, and converts from 3-channel to 1-channel input if desired
470
+ * High performance [reference training, validation, and inference scripts](https://huggingface.co/docs/timm/training_script) that work in several process/GPU modes:
471
+ * NVIDIA DDP w/ a single GPU per process, multiple processes with APEX present (AMP mixed-precision optional)
472
+ * PyTorch DistributedDataParallel w/ multi-gpu, single process (AMP disabled as it crashes when enabled)
473
+ * PyTorch w/ single GPU single process (AMP optional)
474
+ * A dynamic global pool implementation that allows selecting from average pooling, max pooling, average + max, or concat([average, max]) at model creation. All global pooling is adaptive average by default and compatible with pretrained weights.
475
+ * A 'Test Time Pool' wrapper that can wrap any of the included models and usually provides improved performance doing inference with input images larger than the training size. Idea adapted from the original DPN implementation when I ported it (https://github.com/cypw/DPNs)
476
+ * Learning rate schedulers
477
+ * Ideas adopted from
478
+ * [AllenNLP schedulers](https://github.com/allenai/allennlp/tree/master/allennlp/training/learning_rate_schedulers)
479
+ * [FAIRseq lr_scheduler](https://github.com/pytorch/fairseq/tree/master/fairseq/optim/lr_scheduler)
480
+ * SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983)
481
+ * Schedulers include `step`, `cosine` w/ restarts, `tanh` w/ restarts, `plateau` (see the scheduler sketch after this list)
482
+ * Space-to-Depth by [mrT23](https://github.com/mrT23/TResNet/blob/master/src/models/tresnet/layers/space_to_depth.py) (https://arxiv.org/abs/1801.04590)
483
+ * Adaptive Gradient Clipping (https://arxiv.org/abs/2102.06171, https://github.com/deepmind/deepmind-research/tree/master/nfnets)
484
+ * An extensive selection of channel and/or spatial attention modules:
485
+ * Bottleneck Transformer - https://arxiv.org/abs/2101.11605
486
+ * CBAM - https://arxiv.org/abs/1807.06521
487
+ * Effective Squeeze-Excitation (ESE) - https://arxiv.org/abs/1911.06667
488
+ * Efficient Channel Attention (ECA) - https://arxiv.org/abs/1910.03151
489
+ * Gather-Excite (GE) - https://arxiv.org/abs/1810.12348
490
+ * Global Context (GC) - https://arxiv.org/abs/1904.11492
491
+ * Halo - https://arxiv.org/abs/2103.12731
492
+ * Involution - https://arxiv.org/abs/2103.06255
493
+ * Lambda Layer - https://arxiv.org/abs/2102.08602
494
+ * Non-Local (NL) - https://arxiv.org/abs/1711.07971
495
+ * Squeeze-and-Excitation (SE) - https://arxiv.org/abs/1709.01507
496
+ * Selective Kernel (SK) - https://arxiv.org/abs/1903.06586
497
+ * Split (SPLAT) - https://arxiv.org/abs/2004.08955
498
+ * Shifted Window (SWIN) - https://arxiv.org/abs/2103.14030
499
+
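+ A rough sketch of the feature extraction and classifier APIs above (the model name and indices are illustrative):
+
+ ```python
+ import timm
+ import torch
+
+ # Multi-scale feature maps; out_indices=(1, 2, 3, 4) selects the C2..C5 levels for resnet50.
+ backbone = timm.create_model('resnet50', features_only=True, out_indices=(1, 2, 3, 4))
+ feats = backbone(torch.randn(1, 3, 224, 224))
+ for f, ch, red in zip(feats, backbone.feature_info.channels(), backbone.feature_info.reduction()):
+     print(f.shape, ch, red)  # feature shape, channel count, reduction (stride)
+
+ # Common classifier interface on a standard model.
+ model = timm.create_model('resnet50', num_classes=10)
+ model.reset_classifier(0)  # remove the head; forward now returns pooled features
+ pooled = model(torch.randn(1, 3, 224, 224))
+ ```
+
+ And a scheduler sketch, assuming the cosine scheduler w/ warmup from `timm.scheduler` (values are illustrative):
+
+ ```python
+ import timm
+ import torch
+ from timm.scheduler import CosineLRScheduler
+
+ model = timm.create_model('resnet18')
+ optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
+ # 100-epoch cosine decay with a 5-epoch warmup.
+ scheduler = CosineLRScheduler(optimizer, t_initial=100, warmup_t=5, warmup_lr_init=1e-5)
+ for epoch in range(100):
+     ...  # train one epoch here
+     scheduler.step(epoch + 1)  # timm schedulers step on the epoch index
+ ```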
500
+ ## Results
501
+
502
+ Model validation results can be found in the [results tables](results/README.md).
503
+
504
+ ## Getting Started (Documentation)
505
+
506
+ The official documentation can be found at https://huggingface.co/docs/hub/timm. Documentation contributions are welcome.
507
+
508
+ [Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055-2/) by [Chris Hughes](https://github.com/Chris-hughes10) is an extensive blog post covering many aspects of `timm` in detail.
509
+
510
+ [timmdocs](http://timm.fast.ai/) is an alternate set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
511
+
512
+ [paperswithcode](https://paperswithcode.com/lib/timm) is a good resource for browsing the models within `timm`.
513
+
514
+ ## Train, Validation, Inference Scripts
515
+
516
+ The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See [documentation](https://huggingface.co/docs/timm/training_script).
517
+
518
+ ## Awesome PyTorch Resources
519
+
520
+ One of the greatest assets of PyTorch is the community and their contributions. A few of my favourite resources that pair well with the models and components here are listed below.
521
+
522
+ ### Object Detection, Instance and Semantic Segmentation
523
+ * Detectron2 - https://github.com/facebookresearch/detectron2
524
+ * Segmentation Models (Semantic) - https://github.com/qubvel/segmentation_models.pytorch
525
+ * EfficientDet (Obj Det, Semantic soon) - https://github.com/rwightman/efficientdet-pytorch
526
+
527
+ ### Computer Vision / Image Augmentation
528
+ * Albumentations - https://github.com/albumentations-team/albumentations
529
+ * Kornia - https://github.com/kornia/kornia
530
+
531
+ ### Knowledge Distillation
532
+ * RepDistiller - https://github.com/HobbitLong/RepDistiller
533
+ * torchdistill - https://github.com/yoshitomo-matsubara/torchdistill
534
+
535
+ ### Metric Learning
536
+ * PyTorch Metric Learning - https://github.com/KevinMusgrave/pytorch-metric-learning
537
+
538
+ ### Training / Frameworks
539
+ * fastai - https://github.com/fastai/fastai
540
+ * lightly_train - https://github.com/lightly-ai/lightly-train
541
+
542
+ ### Deployment
543
+ * timmx (Export timm models to ONNX, CoreML, LiteRT, TensorRT, and more) - https://github.com/Boulaouaney/timmx
544
+
545
+ ## Licenses
546
+
547
+ ### Code
548
+ The code here is licensed Apache 2.0. I've taken care to make sure any third party code included or adapted has compatible (permissive) licenses such as MIT, BSD, etc. I've made an effort to avoid any GPL / LGPL conflicts. That said, it is your responsibility to ensure you comply with licenses here and conditions of any dependent licenses. Where applicable, I've linked the sources/references for various components in docstrings. If you think I've missed anything please create an issue.
549
+
550
+ ### Pretrained Weights
551
+ So far all of the pretrained weights available here are pretrained on ImageNet with a select few that have some additional pretraining (see extra note below). ImageNet was released for non-commercial research purposes only (https://image-net.org/download). It's not clear what the implications of that are for the use of pretrained weights from that dataset. Any models I have trained with ImageNet are done for research purposes and one should assume that the original dataset license applies to the weights. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product.
552
+
553
+ #### Pretrained on more than ImageNet
554
+ Several weights included or referenced here were pretrained with proprietary datasets that I do not have access to. These include the Facebook WSL, SSL, SWSL ResNe(Xt) and the Google Noisy Student EfficientNet models. The Facebook models have an explicit non-commercial license (CC-BY-NC 4.0, https://github.com/facebookresearch/semi-supervised-ImageNet1K-models, https://github.com/facebookresearch/WSL-Images). The Google models do not appear to have any restriction beyond the Apache 2.0 license (and ImageNet concerns). In either case, you should contact Facebook or Google with any questions.
555
+
556
+ ## Citing
557
+
558
+ ### BibTeX
559
+
560
+ ```bibtex
561
+ @misc{rw2019timm,
562
+ author = {Ross Wightman},
563
+ title = {PyTorch Image Models},
564
+ year = {2019},
565
+ publisher = {GitHub},
566
+ journal = {GitHub repository},
567
+ doi = {10.5281/zenodo.4414861},
568
+ howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
569
+ }
570
+ ```
571
+
572
+ ### Latest DOI
573
+
574
+ [![DOI](https://zenodo.org/badge/168799526.svg)](https://zenodo.org/badge/latestdoi/168799526)
.cache/pip/http-v2/5/9/1/a/0/591a0a7ea47d81cffb332bf5e1460e560ce743822558c6f345314d4b ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/5/9/1/a/0/591a0a7ea47d81cffb332bf5e1460e560ce743822558c6f345314d4b.body ADDED
Binary file (18.1 kB). View file
 
.cache/pip/http-v2/5/d/0/b/4/5d0b459b3c4f2993a6258ed848f61436c9f7a018d2587028572840f2 ADDED
Binary file (1.13 kB). View file
 
.cache/pip/http-v2/6/1/6/5/a/6165a4393aa66a360946f71adc9147b3870cebf86b7878337a7535a9 ADDED
Binary file (1.12 kB). View file
 
.cache/pip/http-v2/6/4/c/f/c/64cfc03e83f9fad4049b1d2a1d785c9273270a4ab9788b538f5054e3 ADDED
Binary file (1.86 kB). View file
 
.cache/pip/http-v2/6/5/1/e/5/651e58859e8db8c99b9e7068d03984cfd4577518ff0e021c717afbf4 ADDED
Binary file (1.2 kB). View file
 
.cache/pip/http-v2/6/5/1/e/5/651e58859e8db8c99b9e7068d03984cfd4577518ff0e021c717afbf4.body ADDED
@@ -0,0 +1,78 @@
1
+ Metadata-Version: 2.1
2
+ Name: cycler
3
+ Version: 0.12.1
4
+ Summary: Composable style cycles
5
+ Author-email: Thomas A Caswell <matplotlib-users@python.org>
6
+ License: Copyright (c) 2015, matplotlib project
7
+ All rights reserved.
8
+
9
+ Redistribution and use in source and binary forms, with or without
10
+ modification, are permitted provided that the following conditions are met:
11
+
12
+ * Redistributions of source code must retain the above copyright notice, this
13
+ list of conditions and the following disclaimer.
14
+
15
+ * Redistributions in binary form must reproduce the above copyright notice,
16
+ this list of conditions and the following disclaimer in the documentation
17
+ and/or other materials provided with the distribution.
18
+
19
+ * Neither the name of the matplotlib project nor the names of its
20
+ contributors may be used to endorse or promote products derived from
21
+ this software without specific prior written permission.
22
+
23
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
24
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
25
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
26
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
27
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
28
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
29
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
30
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
31
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
32
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33
+ Project-URL: homepage, https://matplotlib.org/cycler/
34
+ Project-URL: repository, https://github.com/matplotlib/cycler
35
+ Keywords: cycle kwargs
36
+ Classifier: License :: OSI Approved :: BSD License
37
+ Classifier: Development Status :: 4 - Beta
38
+ Classifier: Programming Language :: Python :: 3
39
+ Classifier: Programming Language :: Python :: 3.8
40
+ Classifier: Programming Language :: Python :: 3.9
41
+ Classifier: Programming Language :: Python :: 3.10
42
+ Classifier: Programming Language :: Python :: 3.11
43
+ Classifier: Programming Language :: Python :: 3.12
44
+ Classifier: Programming Language :: Python :: 3 :: Only
45
+ Requires-Python: >=3.8
46
+ Description-Content-Type: text/x-rst
47
+ License-File: LICENSE
48
+ Provides-Extra: docs
49
+ Requires-Dist: ipython ; extra == 'docs'
50
+ Requires-Dist: matplotlib ; extra == 'docs'
51
+ Requires-Dist: numpydoc ; extra == 'docs'
52
+ Requires-Dist: sphinx ; extra == 'docs'
53
+ Provides-Extra: tests
54
+ Requires-Dist: pytest ; extra == 'tests'
55
+ Requires-Dist: pytest-cov ; extra == 'tests'
56
+ Requires-Dist: pytest-xdist ; extra == 'tests'
57
+
58
+ |PyPi|_ |Conda|_ |Supported Python versions|_ |GitHub Actions|_ |Codecov|_
59
+
60
+ .. |PyPi| image:: https://img.shields.io/pypi/v/cycler.svg?style=flat
61
+ .. _PyPi: https://pypi.python.org/pypi/cycler
62
+
63
+ .. |Conda| image:: https://img.shields.io/conda/v/conda-forge/cycler
64
+ .. _Conda: https://anaconda.org/conda-forge/cycler
65
+
66
+ .. |Supported Python versions| image:: https://img.shields.io/pypi/pyversions/cycler.svg
67
+ .. _Supported Python versions: https://pypi.python.org/pypi/cycler
68
+
69
+ .. |GitHub Actions| image:: https://github.com/matplotlib/cycler/actions/workflows/tests.yml/badge.svg
70
+ .. _GitHub Actions: https://github.com/matplotlib/cycler/actions
71
+
72
+ .. |Codecov| image:: https://codecov.io/github/matplotlib/cycler/badge.svg?branch=main&service=github
73
+ .. _Codecov: https://codecov.io/github/matplotlib/cycler?branch=main
74
+
75
+ cycler: composable cycles
76
+ =========================
77
+
78
+ Docs: https://matplotlib.org/cycler/
.cache/pip/http-v2/6/6/e/c/7/66ec76a7b6ed4081044f5c7821af293b63c17bc2ac523ff93d5ca7d5 ADDED
Binary file (1.86 kB). View file
 
.cache/pip/http-v2/6/8/0/d/4/680d4dd80dc6a3d2df9b9478dfcc8e81e0e4f130e154a3268b98b877 ADDED
Binary file (1.87 kB). View file
 
.cache/pip/http-v2/6/b/5/3/a/6b53a9dd0e4fce887cc28c1a921aa1befe8c1a82e6c213d2542d2acb ADDED
Binary file (1.85 kB). View file
 
.cache/pip/http-v2/6/b/5/3/a/6b53a9dd0e4fce887cc28c1a921aa1befe8c1a82e6c213d2542d2acb.body ADDED
Binary file (14.8 kB). View file
 
.cache/pip/http-v2/6/b/8/1/e/6b81e7b491d69713c085c9f59d6c9162e9c07ca91d4f2bb5b3cd4b8e ADDED
Binary file (1.16 kB). View file
 
.cache/pip/http-v2/6/b/8/1/e/6b81e7b491d69713c085c9f59d6c9162e9c07ca91d4f2bb5b3cd4b8e.body ADDED
@@ -0,0 +1,386 @@
1
+ Metadata-Version: 2.4
2
+ Name: optuna
3
+ Version: 4.8.0
4
+ Summary: A hyperparameter optimization framework
5
+ Author: Takuya Akiba
6
+ Project-URL: homepage, https://optuna.org/
7
+ Project-URL: repository, https://github.com/optuna/optuna
8
+ Project-URL: documentation, https://optuna.readthedocs.io
9
+ Project-URL: bugtracker, https://github.com/optuna/optuna/issues
10
+ Classifier: Development Status :: 5 - Production/Stable
11
+ Classifier: Intended Audience :: Science/Research
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: License :: OSI Approved :: MIT License
14
+ Classifier: Programming Language :: Python :: 3
15
+ Classifier: Programming Language :: Python :: 3.9
16
+ Classifier: Programming Language :: Python :: 3.10
17
+ Classifier: Programming Language :: Python :: 3.11
18
+ Classifier: Programming Language :: Python :: 3.12
19
+ Classifier: Programming Language :: Python :: 3.13
20
+ Classifier: Programming Language :: Python :: 3.14
21
+ Classifier: Programming Language :: Python :: 3 :: Only
22
+ Classifier: Topic :: Scientific/Engineering
23
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
24
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
25
+ Classifier: Topic :: Software Development
26
+ Classifier: Topic :: Software Development :: Libraries
27
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
28
+ Requires-Python: >=3.9
29
+ Description-Content-Type: text/markdown
30
+ License-File: LICENSE
31
+ License-File: LICENSE_THIRD_PARTY
32
+ Requires-Dist: alembic>=1.5.0
33
+ Requires-Dist: colorlog
34
+ Requires-Dist: numpy
35
+ Requires-Dist: packaging>=20.0
36
+ Requires-Dist: sqlalchemy>=1.4.2
37
+ Requires-Dist: tqdm
38
+ Requires-Dist: PyYAML
39
+ Provides-Extra: checking
40
+ Requires-Dist: mypy; extra == "checking"
41
+ Requires-Dist: mypy_boto3_s3; extra == "checking"
42
+ Requires-Dist: ruff; extra == "checking"
43
+ Requires-Dist: scipy-stubs; python_version >= "3.10" and extra == "checking"
44
+ Requires-Dist: types-PyYAML; extra == "checking"
45
+ Requires-Dist: types-redis; extra == "checking"
46
+ Requires-Dist: types-setuptools; extra == "checking"
47
+ Requires-Dist: types-tqdm; extra == "checking"
48
+ Requires-Dist: typing_extensions>=3.10.0.0; extra == "checking"
49
+ Provides-Extra: document
50
+ Requires-Dist: ase; extra == "document"
51
+ Requires-Dist: cmaes>=0.12.0; extra == "document"
52
+ Requires-Dist: fvcore; extra == "document"
53
+ Requires-Dist: kaleido<0.4; extra == "document"
54
+ Requires-Dist: lightgbm; extra == "document"
55
+ Requires-Dist: matplotlib!=3.6.0; extra == "document"
56
+ Requires-Dist: pandas; extra == "document"
57
+ Requires-Dist: pillow; extra == "document"
58
+ Requires-Dist: plotly>=4.9.0; extra == "document"
59
+ Requires-Dist: scikit-learn; extra == "document"
60
+ Requires-Dist: sphinx; extra == "document"
61
+ Requires-Dist: sphinx-copybutton; extra == "document"
62
+ Requires-Dist: sphinx-gallery; extra == "document"
63
+ Requires-Dist: sphinx-notfound-page; extra == "document"
64
+ Requires-Dist: sphinx_rtd_theme>=1.2.0; extra == "document"
65
+ Requires-Dist: torch; extra == "document"
66
+ Requires-Dist: torchvision; extra == "document"
67
+ Provides-Extra: optional
68
+ Requires-Dist: boto3; extra == "optional"
69
+ Requires-Dist: cmaes>=0.12.0; extra == "optional"
70
+ Requires-Dist: google-cloud-storage; extra == "optional"
71
+ Requires-Dist: matplotlib!=3.6.0; extra == "optional"
72
+ Requires-Dist: pandas; extra == "optional"
73
+ Requires-Dist: plotly>=4.9.0; extra == "optional"
74
+ Requires-Dist: redis; extra == "optional"
75
+ Requires-Dist: scikit-learn>=0.24.2; extra == "optional"
76
+ Requires-Dist: scipy; extra == "optional"
77
+ Requires-Dist: torch; extra == "optional"
78
+ Requires-Dist: greenlet; extra == "optional"
79
+ Requires-Dist: grpcio; extra == "optional"
80
+ Requires-Dist: protobuf>=5.28.1; extra == "optional"
81
+ Provides-Extra: test
82
+ Requires-Dist: fakeredis[lua]; extra == "test"
83
+ Requires-Dist: kaleido<0.4; extra == "test"
84
+ Requires-Dist: moto; extra == "test"
85
+ Requires-Dist: pytest; extra == "test"
86
+ Requires-Dist: pytest-xdist; extra == "test"
87
+ Requires-Dist: scipy>=1.9.2; extra == "test"
88
+ Requires-Dist: torch; extra == "test"
89
+ Requires-Dist: greenlet; extra == "test"
90
+ Requires-Dist: grpcio; extra == "test"
91
+ Requires-Dist: protobuf>=5.28.1; extra == "test"
92
+ Dynamic: license-file
93
+
94
+ <div align="center"><img src="https://raw.githubusercontent.com/optuna/optuna/master/docs/image/optuna-logo.png" width="800"/></div>
95
+
96
+ # Optuna: A hyperparameter optimization framework
97
+
98
+ [![Python](https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12%20%7C%203.13%20%7C%203.14-blue)](https://www.python.org)
99
+ [![pypi](https://img.shields.io/pypi/v/optuna.svg)](https://pypi.python.org/pypi/optuna)
100
+ [![conda](https://img.shields.io/conda/vn/conda-forge/optuna.svg)](https://anaconda.org/conda-forge/optuna)
101
+ [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/optuna/optuna)
102
+ [![Read the Docs](https://readthedocs.org/projects/optuna/badge/?version=stable)](https://optuna.readthedocs.io/en/stable/)
103
+
104
+ :link: [**Website**](https://optuna.org/)
105
+ | :page_with_curl: [**Docs**](https://optuna.readthedocs.io/en/stable/)
106
+ | :gear: [**Install Guide**](https://optuna.readthedocs.io/en/stable/installation.html)
107
+ | :pencil: [**Tutorial**](https://optuna.readthedocs.io/en/stable/tutorial/index.html)
108
+ | :bulb: [**Examples**](https://github.com/optuna/optuna-examples)
109
+ | [**Twitter**](https://twitter.com/OptunaAutoML)
110
+ | [**LinkedIn**](https://www.linkedin.com/showcase/optuna/)
111
+ | [**Medium**](https://medium.com/optuna)
112
+
113
+ *Optuna* is an automatic hyperparameter optimization software framework, particularly designed
114
+ for machine learning. It features an imperative, *define-by-run* style user API. Thanks to our
115
+ *define-by-run* API, the code written with Optuna enjoys high modularity, and the user of
116
+ Optuna can dynamically construct the search spaces for the hyperparameters.
117
+
118
+ ## :loudspeaker: News
119
+ Help us create the next version of Optuna!
120
+
121
+ Optuna 5.0 Roadmap published for review. Please take a look at [the planned improvements to Optuna](https://medium.com/optuna/optuna-v5-roadmap-ac7d6935a878), and share your feedback in [the github issues](https://github.com/optuna/optuna/labels/v5). PR contributions also welcome!
122
+
123
+ Please take a few minutes to fill in [this survey](https://forms.gle/wVwLCQ9g6st6AXuq9), and let us know how you use Optuna now and what improvements you'd like.🤔
124
+ All questions are optional. 🙇‍♂️
125
+
126
+ <!-- TODO: when you add a new line, please delete the oldest line -->
127
+ * **Jan 19, 2026**: Optuna 4.7.0 is out! Check out [the release note](https://github.com/optuna/optuna/releases/tag/v4.7.0) for details.
128
+ * **Nov 10, 2025**: A new article [Announcing Optuna 4.6](https://medium.com/optuna/announcing-optuna-4-6-a9e82183ab07) has been published.
129
+ * **Oct 28, 2025**: A new article [AutoSampler: Full Support for Multi-Objective & Constrained Optimization](https://medium.com/optuna/autosampler-full-support-for-multi-objective-constrained-optimization-c1c4fc957ba2) has been published.
130
+ * **Sep 22, 2025**: A new article [[Optuna v4.5] Gaussian Process-Based Sampler (GPSampler) Can Now Perform Constrained Multi-Objective Optimization](https://medium.com/optuna/optuna-v4-5-81e78d8e077a) has been published.
131
+ * **Jun 16, 2025**: Optuna 4.4.0 has been released! Check out [the release blog](https://medium.com/optuna/announcing-optuna-4-4-ece661493126).
132
+ * **May 26, 2025**: Optuna 5.0 roadmap has been published! See [the blog](https://medium.com/optuna/optuna-v5-roadmap-ac7d6935a878) for more details.
133
+
134
+ ## :fire: Key Features
135
+
136
+ Optuna has modern functionalities as follows:
137
+
138
+ - [Lightweight, versatile, and platform agnostic architecture](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/001_first.html)
139
+ - Handle a wide variety of tasks with a simple installation that has few requirements.
140
+ - [Pythonic search spaces](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html)
141
+ - Define search spaces using familiar Python syntax including conditionals and loops.
142
+ - [Efficient optimization algorithms](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/003_efficient_optimization_algorithms.html)
143
+ - Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
144
+ - [Easy parallelization](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/004_distributed.html)
145
+ - Scale studies to tens or hundreds of workers with little or no changes to the code.
146
+ - [Quick visualization](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/005_visualization.html)
147
+ - Inspect optimization histories from a variety of plotting functions.
148
+
149
+
150
+ ## Basic Concepts
151
+
152
+ We use the terms *study* and *trial* as follows:
153
+
154
+ - Study: optimization based on an objective function
155
+ - Trial: a single execution of the objective function
156
+
157
+ Please refer to the sample code below. The goal of a *study* is to find out the optimal set of
158
+ hyperparameter values (e.g., `regressor` and `svr_c`) through multiple *trials* (e.g.,
159
+ `n_trials=100`). Optuna is a framework designed for automation and acceleration of
160
+ optimization *studies*.
161
+
162
+ <details open>
163
+ <summary>Sample code with scikit-learn</summary>
164
+
165
+ [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://colab.research.google.com/github/optuna/optuna-examples/blob/main/quickstart.ipynb)
166
+
167
+ ```python
168
+ import optuna
169
+ import sklearn
170
+
171
+
172
+ # Define an objective function to be minimized.
173
+ def objective(trial):
174
+
175
+ # Invoke suggest methods of a Trial object to generate hyperparameters.
176
+ regressor_name = trial.suggest_categorical("regressor", ["SVR", "RandomForest"])
177
+ if regressor_name == "SVR":
178
+ svr_c = trial.suggest_float("svr_c", 1e-10, 1e10, log=True)
179
+ regressor_obj = sklearn.svm.SVR(C=svr_c)
180
+ else:
181
+ rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32)
182
+ regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)
183
+
184
+ X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)
185
+ X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)
186
+
187
+ regressor_obj.fit(X_train, y_train)
188
+ y_pred = regressor_obj.predict(X_val)
189
+
190
+ error = sklearn.metrics.mean_squared_error(y_val, y_pred)
191
+
192
+ return error # An objective value linked with the Trial object.
193
+
194
+
195
+ study = optuna.create_study() # Create a new study.
196
+ study.optimize(objective, n_trials=100) # Invoke optimization of the objective function.
197
+ ```
198
+ </details>
199
+
200
+ > [!NOTE]
201
+ > More examples can be found in [optuna/optuna-examples](https://github.com/optuna/optuna-examples).
202
+ >
203
+ > The examples cover diverse problem setups such as multi-objective optimization, constrained optimization, pruning, and distributed optimization.
204
+
205
+ ## Installation
206
+
207
+ Optuna is available at [the Python Package Index](https://pypi.org/project/optuna/) and on [Anaconda Cloud](https://anaconda.org/conda-forge/optuna).
208
+
209
+ ```bash
210
+ # PyPI
211
+ $ pip install optuna
212
+ ```
213
+
214
+ ```bash
215
+ # Anaconda Cloud
216
+ $ conda install -c conda-forge optuna
217
+ ```
218
+
219
+ > [!IMPORTANT]
220
+ > Optuna supports Python 3.9 or newer.
221
+ >
222
+ > Also, we provide Optuna docker images on [DockerHub](https://hub.docker.com/r/optuna/optuna).
223
+
224
+ ## Integrations
225
+
226
+ Optuna has integration features with various third-party libraries. Integrations can be found in [optuna/optuna-integration](https://github.com/optuna/optuna-integration) and the document is available [here](https://optuna-integration.readthedocs.io/en/stable/index.html).
227
+
228
+ <details>
229
+ <summary>Supported integration libraries</summary>
230
+
231
+ * [Catboost](https://github.com/optuna/optuna-examples/tree/main/catboost/catboost_pruning.py)
232
+ * [Dask](https://github.com/optuna/optuna-examples/tree/main/dask/dask_simple.py)
233
+ * [fastai](https://github.com/optuna/optuna-examples/tree/main/fastai/fastai_simple.py)
234
+ * [Keras](https://github.com/optuna/optuna-examples/tree/main/keras/keras_integration.py)
235
+ * [LightGBM](https://github.com/optuna/optuna-examples/tree/main/lightgbm/lightgbm_integration.py)
236
+ * [MLflow](https://github.com/optuna/optuna-examples/tree/main/mlflow/keras_mlflow.py)
237
+ * [PyTorch](https://github.com/optuna/optuna-examples/tree/main/pytorch/pytorch_simple.py)
238
+ * [PyTorch Ignite](https://github.com/optuna/optuna-examples/tree/main/pytorch/pytorch_ignite_simple.py)
239
+ * [PyTorch Lightning](https://github.com/optuna/optuna-examples/tree/main/pytorch/pytorch_lightning_simple.py)
240
+ * [TensorBoard](https://github.com/optuna/optuna-examples/tree/main/tensorboard/tensorboard_simple.py)
241
+ * [TensorFlow](https://github.com/optuna/optuna-examples/tree/main/tensorflow/tensorflow_estimator_integration.py)
242
+ * [tf.keras](https://github.com/optuna/optuna-examples/tree/main/tfkeras/tfkeras_integration.py)
243
+ * [Weights & Biases](https://github.com/optuna/optuna-examples/tree/main/wandb/wandb_integration.py)
244
+ * [XGBoost](https://github.com/optuna/optuna-examples/tree/main/xgboost/xgboost_integration.py)
245
+ </details>
246
+
247
+ ## Web Dashboard
248
+
249
+ [Optuna Dashboard](https://github.com/optuna/optuna-dashboard) is a real-time web dashboard for Optuna.
250
+ You can check the optimization history, hyperparameter importance, etc. in graphs and tables.
251
+ You don't need to create a Python script to call [Optuna's visualization](https://optuna.readthedocs.io/en/stable/reference/visualization/index.html) functions.
252
+ Feature requests and bug reports are welcome!
253
+
254
+ ![optuna-dashboard](https://user-images.githubusercontent.com/5564044/204975098-95c2cb8c-0fb5-4388-abc4-da32f56cb4e5.gif)
255
+
256
+ `optuna-dashboard` can be installed via pip:
257
+
258
+ ```shell
259
+ $ pip install optuna-dashboard
260
+ ```
261
+
262
+ > [!TIP]
263
+ > Please check out the convenience of Optuna Dashboard using the sample code below.
264
+
265
+ <details>
266
+ <summary>Sample code to launch Optuna Dashboard</summary>
267
+
268
+ Save the following code as `optimize_toy.py`.
269
+
270
+ ```python
271
+ import optuna
272
+
273
+
274
+ def objective(trial):
275
+ x1 = trial.suggest_float("x1", -100, 100)
276
+ x2 = trial.suggest_float("x2", -100, 100)
277
+ return x1**2 + 0.01 * x2**2
278
+
279
+
280
+ study = optuna.create_study(storage="sqlite:///db.sqlite3") # Create a new study with database.
281
+ study.optimize(objective, n_trials=100)
282
+ ```
283
+
284
+ Then try the commands below:
285
+
286
+ ```shell
287
+ # Run the study specified above
288
+ $ python optimize_toy.py
289
+
290
+ # Launch the dashboard based on the storage `sqlite:///db.sqlite3`
291
+ $ optuna-dashboard sqlite:///db.sqlite3
292
+ ...
293
+ Listening on http://localhost:8080/
294
+ Hit Ctrl-C to quit.
295
+ ```
296
+
297
+ </details>
298
+
299
+
300
+ ## OptunaHub
301
+
302
+ [OptunaHub](https://hub.optuna.org/) is a feature-sharing platform for Optuna.
303
+ You can use the registered features and publish your packages.
304
+
305
+ ### Use registered features
306
+
307
+ `optunahub` can be installed via pip:
308
+
309
+ ```shell
310
+ $ pip install optunahub
311
+ # Install AutoSampler dependencies (CPU only is sufficient for PyTorch)
312
+ $ pip install cmaes scipy torch --extra-index-url https://download.pytorch.org/whl/cpu
313
+ ```
314
+
315
+ You can load a registered module with `optunahub.load_module`.
316
+
317
+ ```python
318
+ import optuna
319
+ import optunahub
320
+
321
+
322
+ def objective(trial: optuna.Trial) -> float:
323
+ x = trial.suggest_float("x", -5, 5)
324
+ y = trial.suggest_float("y", -5, 5)
325
+ return x**2 + y**2
326
+
327
+
328
+ module = optunahub.load_module(package="samplers/auto_sampler")
329
+ study = optuna.create_study(sampler=module.AutoSampler())
330
+ study.optimize(objective, n_trials=10)
331
+
332
+ print(study.best_trial.value, study.best_trial.params)
333
+ ```
334
+
335
+ For more details, please refer to [the optunahub documentation](https://optuna.github.io/optunahub/).
336
+
337
+ ### Publish your packages
338
+
339
+ You can publish your package via [optunahub-registry](https://github.com/optuna/optunahub-registry).
340
+ See the [Tutorials for Contributors](https://optuna.github.io/optunahub/tutorials_for_contributors.html) in OptunaHub.
341
+
342
+
343
+ ## Communication
344
+
345
+ - [GitHub Discussions] for questions.
346
+ - [GitHub Issues] for bug reports and feature requests.
347
+
348
+ [GitHub Discussions]: https://github.com/optuna/optuna/discussions
349
+ [GitHub issues]: https://github.com/optuna/optuna/issues
350
+
351
+
352
+ ## Contribution
353
+
354
+ Any contributions to Optuna are more than welcome!
355
+
356
+ If you are new to Optuna, please check the [good first issues](https://github.com/optuna/optuna/labels/good%20first%20issue). They are relatively simple, well-defined, and often good starting points for you to get familiar with the contribution workflow and other developers.
357
+
358
+ If you already have contributed to Optuna, we recommend the other [contribution-welcome issues](https://github.com/optuna/optuna/labels/contribution-welcome).
359
+
360
+ For general guidelines on how to contribute to the project, take a look at [CONTRIBUTING.md](./CONTRIBUTING.md).
361
+
362
+
363
+ ## Reference
364
+
365
+ If you use Optuna in one of your research projects, please cite [our KDD paper](https://doi.org/10.1145/3292500.3330701) "Optuna: A Next-generation Hyperparameter Optimization Framework":
366
+
367
+ <details open>
368
+ <summary>BibTeX</summary>
369
+
370
+ ```bibtex
371
+ @inproceedings{akiba2019optuna,
372
+ title={{O}ptuna: A Next-Generation Hyperparameter Optimization Framework},
373
+ author={Akiba, Takuya and Sano, Shotaro and Yanase, Toshihiko and Ohta, Takeru and Koyama, Masanori},
374
+ booktitle={The 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
375
+ pages={2623--2631},
376
+ year={2019}
377
+ }
378
+ ```
379
+ </details>
380
+
381
+
382
+ ## License
383
+
384
+ MIT License (see [LICENSE](./LICENSE)).
385
+
386
+ Optuna uses the codes from SciPy and fdlibm projects (see [LICENSE_THIRD_PARTY](./LICENSE_THIRD_PARTY)).
.cache/pip/http-v2/6/c/6/e/e/6c6eeaf6757edbde690577822daacaba826c2b12ce67b57b33e8021d ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/6/c/6/e/e/6c6eeaf6757edbde690577822daacaba826c2b12ce67b57b33e8021d.body ADDED
Binary file (8.32 kB). View file
 
.cache/pip/http-v2/6/f/4/2/0/6f4201922ae9660b891766d0cd792260a5663fc66339ed1036f3be9b ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/7/2/8/a/2/728a2f33f382f4dacf08f6df77aad6f3d889f819ba4fa3efad5ec7e4 ADDED
Binary file (1.87 kB). View file
 
.cache/pip/http-v2/7/7/3/7/4/77374f6555d766d1d452fe4918fb303c49f49a5a37a0986ea4f1b212 ADDED
Binary file (1.22 kB). View file
 
.cache/pip/http-v2/7/7/3/7/4/77374f6555d766d1d452fe4918fb303c49f49a5a37a0986ea4f1b212.body ADDED
@@ -0,0 +1,139 @@
1
+ Metadata-Version: 2.4
2
+ Name: alembic
3
+ Version: 1.18.4
4
+ Summary: A database migration tool for SQLAlchemy.
5
+ Author-email: Mike Bayer <mike_mp@zzzcomputing.com>
6
+ License-Expression: MIT
7
+ Project-URL: Homepage, https://alembic.sqlalchemy.org
8
+ Project-URL: Documentation, https://alembic.sqlalchemy.org/en/latest/
9
+ Project-URL: Changelog, https://alembic.sqlalchemy.org/en/latest/changelog.html
10
+ Project-URL: Source, https://github.com/sqlalchemy/alembic/
11
+ Project-URL: Issue Tracker, https://github.com/sqlalchemy/alembic/issues/
12
+ Classifier: Development Status :: 5 - Production/Stable
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Environment :: Console
15
+ Classifier: Operating System :: OS Independent
16
+ Classifier: Programming Language :: Python
17
+ Classifier: Programming Language :: Python :: 3
18
+ Classifier: Programming Language :: Python :: 3.10
19
+ Classifier: Programming Language :: Python :: 3.11
20
+ Classifier: Programming Language :: Python :: 3.12
21
+ Classifier: Programming Language :: Python :: 3.13
22
+ Classifier: Programming Language :: Python :: Implementation :: CPython
23
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
24
+ Classifier: Topic :: Database :: Front-Ends
25
+ Requires-Python: >=3.10
26
+ Description-Content-Type: text/x-rst
27
+ License-File: LICENSE
28
+ Requires-Dist: SQLAlchemy>=1.4.23
29
+ Requires-Dist: Mako
30
+ Requires-Dist: typing-extensions>=4.12
31
+ Requires-Dist: tomli; python_version < "3.11"
32
+ Provides-Extra: tz
33
+ Requires-Dist: tzdata; extra == "tz"
34
+ Dynamic: license-file
35
+
36
+ Alembic is a database migrations tool written by the author
37
+ of `SQLAlchemy <http://www.sqlalchemy.org>`_. A migrations tool
38
+ offers the following functionality:
39
+
40
+ * Can emit ALTER statements to a database in order to change
41
+ the structure of tables and other constructs
42
+ * Provides a system whereby "migration scripts" may be constructed;
43
+ each script indicates a particular series of steps that can "upgrade" a
44
+ target database to a new version, and optionally a series of steps that can
45
+ "downgrade" similarly, doing the same steps in reverse.
46
+ * Allows the scripts to execute in some sequential manner.
47
+
48
+ The goals of Alembic are:
49
+
50
+ * Very open ended and transparent configuration and operation. A new
51
+ Alembic environment is generated from a set of templates which is selected
52
+ among a set of options when setup first occurs. The templates then deposit a
53
+ series of scripts that define fully how database connectivity is established
54
+ and how migration scripts are invoked; the migration scripts themselves are
55
+ generated from a template within that series of scripts. The scripts can
56
+ then be further customized to define exactly how databases will be
57
+ interacted with and what structure new migration files should take.
58
+ * Full support for transactional DDL. The default scripts ensure that all
59
+ migrations occur within a transaction - for those databases which support
60
+ this (Postgresql, Microsoft SQL Server), migrations can be tested with no
61
+ need to manually undo changes upon failure.
62
+ * Minimalist script construction. Basic operations like renaming
63
+ tables/columns, adding/removing columns, changing column attributes can be
64
+ performed through one line commands like alter_column(), rename_table(),
65
+ add_constraint(). There is no need to recreate full SQLAlchemy Table
66
+ structures for simple operations like these - the functions themselves
67
+ generate minimalist schema structures behind the scenes to achieve the given
68
+ DDL sequence.
69
+ * "auto generation" of migrations. While real world migrations are far more
70
+ complex than what can be automatically determined, Alembic can still
71
+ eliminate the initial grunt work in generating new migration directives
72
+ from an altered schema. The ``--autogenerate`` feature will inspect the
73
+ current status of a database using SQLAlchemy's schema inspection
74
+ capabilities, compare it to the current state of the database model as
75
+ specified in Python, and generate a series of "candidate" migrations,
76
+ rendering them into a new migration script as Python directives. The
77
+ developer then edits the new file, adding additional directives and data
78
+ migrations as needed, to produce a finished migration. Table and column
79
+ level changes can be detected, with constraints and indexes to follow as
80
+ well.
81
+ * Full support for migrations generated as SQL scripts. Those of us who
82
+ work in corporate environments know that direct access to DDL commands on a
83
+ production database is a rare privilege, and DBAs want textual SQL scripts.
84
+ Alembic's usage model and commands are oriented towards being able to run a
85
+ series of migrations into a textual output file as easily as it runs them
86
+ directly to a database. Care must be taken in this mode to not invoke other
87
+ operations that rely upon in-memory SELECTs of rows - Alembic tries to
88
+ provide helper constructs like bulk_insert() to help with data-oriented
89
+ operations that are compatible with script-based DDL.
90
+ * Non-linear, dependency-graph versioning. Scripts are given UUID
91
+ identifiers similarly to a DVCS, and the linkage of one script to the next
92
+ is achieved via human-editable markers within the scripts themselves.
93
+ The structure of a set of migration files is considered as a
94
+ directed-acyclic graph, meaning any migration file can be dependent
95
+ on any other arbitrary set of migration files, or none at
96
+ all. Through this open-ended system, migration files can be organized
97
+ into branches, multiple roots, and mergepoints, without restriction.
98
+ Commands are provided to produce new branches, roots, and merges of
99
+ branches automatically.
100
+ * Provide a library of ALTER constructs that can be used by any SQLAlchemy
101
+ application. The DDL constructs build upon SQLAlchemy's own DDLElement base
102
+ and can be used standalone by any application or script.
103
+ * At long last, bring SQLite and its inability to ALTER things into the fold,
104
+ but in such a way that SQLite's very special workflow needs are accommodated
105
+ in an explicit way that makes the most of a bad situation, through the
106
+ concept of a "batch" migration, where multiple changes to a table can
107
+ be batched together to form a series of instructions for a single, subsequent
108
+ "move-and-copy" workflow. You can even use "move-and-copy" workflow for
109
+ other databases, if you want to recreate a table in the background
110
+ on a busy system; a minimal sketch of a batch migration follows this list.
111
+
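+ As a minimal sketch of the batch ("move-and-copy") workflow above, with table and
+ column names assumed purely for illustration, a migration's ``upgrade()`` might read::
+
+     import sqlalchemy as sa
+     from alembic import op
+
+     def upgrade():
+         # Changes inside the block are replayed as a single
+         # move-and-copy of the table, which suits SQLite.
+         with op.batch_alter_table("accounts") as batch_op:
+             batch_op.add_column(sa.Column("last_login", sa.DateTime()))
+             batch_op.alter_column("name", new_column_name="username")
+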
112
+ Documentation and status of Alembic is at https://alembic.sqlalchemy.org/
113
+
114
+ The SQLAlchemy Project
115
+ ======================
116
+
117
+ Alembic is part of the `SQLAlchemy Project <https://www.sqlalchemy.org>`_ and
118
+ adheres to the same standards and conventions as the core project.
119
+
120
+ Development / Bug reporting / Pull requests
121
+ ___________________________________________
122
+
123
+ Please refer to the
124
+ `SQLAlchemy Community Guide <https://www.sqlalchemy.org/develop.html>`_ for
125
+ guidelines on coding and participating in this project.
126
+
127
+ Code of Conduct
128
+ _______________
129
+
130
+ Above all, SQLAlchemy places great emphasis on polite, thoughtful, and
131
+ constructive communication between users and developers.
132
+ Please see our current Code of Conduct at
133
+ `Code of Conduct <https://www.sqlalchemy.org/codeofconduct.html>`_.
134
+
135
+ License
136
+ =======
137
+
138
+ Alembic is distributed under the `MIT license
139
+ <https://opensource.org/licenses/MIT>`_.
.cache/pip/http-v2/7/7/3/b/e/773be4e62f2a7f9be9d2b777b9be56e14e2b6c9666994e8793db52fd ADDED
Binary file (1.14 kB). View file
 
.cache/pip/http-v2/7/7/7/e/1/777e1a90a003ce34ae211dad270679555faa9422154a8c9601ab0e1f ADDED
Binary file (1.87 kB). View file
 
.cache/pip/http-v2/7/7/7/e/1/777e1a90a003ce34ae211dad270679555faa9422154a8c9601ab0e1f.body ADDED
Binary file (8.96 kB). View file
 
.cache/pip/http-v2/7/9/2/1/a/7921ac3318a5cdb592026cc26a94f7a2c1e1f7d3a1dc1e3857fd49f1 ADDED
Binary file (1.85 kB). View file
 
.cache/pip/http-v2/7/9/2/1/a/7921ac3318a5cdb592026cc26a94f7a2c1e1f7d3a1dc1e3857fd49f1.body ADDED
Binary file (1.96 kB). View file
 
.cache/pip/http-v2/7/b/3/0/7/7b3075adb708114992fb27c6511ef6dfacffdcb852b8e8d037a10c4b ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/7/c/6/1/8/7c6183dd169526815070ce1b173d3f738b22aa5bc9caaa83eec98534 ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/d/0/1/5/f/d015fbc6aca5c05d98c5a5fd1b5a5da789d8a3e8323acf92db497bce ADDED
Binary file (1.15 kB). View file
 
.cache/pip/http-v2/d/0/1/b/7/d01b75bf0e331a9e22160ebda15b330e26789777201136ee08c89b17 ADDED
Binary file (1.77 kB). View file
 
.cache/pip/http-v2/d/0/1/b/7/d01b75bf0e331a9e22160ebda15b330e26789777201136ee08c89b17.body ADDED
Binary file (6.54 kB). View file
 
.cache/pip/http-v2/d/3/2/9/d/d329d269473dcb26bc2fcf82e354619e8463ba8855d5a3b77637c124 ADDED
Binary file (1.85 kB). View file
 
.cache/pip/http-v2/d/b/c/9/6/dbc96dffe61b94bcd3688bd8959baaf90ecc4c26bd760252ca8c1de1 ADDED
Binary file (1.23 kB). View file
 
.cache/pip/http-v2/d/b/c/9/6/dbc96dffe61b94bcd3688bd8959baaf90ecc4c26bd760252ca8c1de1.body ADDED
@@ -0,0 +1,87 @@
1
+ Metadata-Version: 2.4
2
+ Name: hf-xet
3
+ Version: 1.4.2
4
+ Classifier: Development Status :: 5 - Production/Stable
5
+ Classifier: License :: OSI Approved :: Apache Software License
6
+ Classifier: Programming Language :: Rust
7
+ Classifier: Programming Language :: Python :: Implementation :: CPython
8
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
9
+ Classifier: Programming Language :: Python :: 3
10
+ Classifier: Programming Language :: Python :: 3 :: Only
11
+ Classifier: Programming Language :: Python :: 3.8
12
+ Classifier: Programming Language :: Python :: 3.9
13
+ Classifier: Programming Language :: Python :: 3.10
14
+ Classifier: Programming Language :: Python :: 3.11
15
+ Classifier: Programming Language :: Python :: 3.12
16
+ Classifier: Programming Language :: Python :: 3.13
17
+ Classifier: Programming Language :: Python :: 3.14
18
+ Classifier: Programming Language :: Python :: Free Threading
19
+ Classifier: Programming Language :: Python :: Free Threading :: 2 - Beta
20
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
21
+ Requires-Dist: pytest ; extra == 'tests'
22
+ Provides-Extra: tests
23
+ License-File: LICENSE
24
+ Summary: Fast transfer of large files with the Hugging Face Hub.
25
+ Maintainer-email: Rajat Arya <rajat@rajatarya.com>, Jared Sulzdorf <j.sulzdorf@gmail.com>, Di Xiao <di@huggingface.co>, Assaf Vayner <assaf@huggingface.co>, Hoyt Koepke <hoytak@gmail.com>
26
+ License-Expression: Apache-2.0
27
+ Requires-Python: >=3.8
28
+ Description-Content-Type: text/markdown; charset=UTF-8; variant=GFM
29
+ Project-URL: Documentation, https://huggingface.co/docs/hub/xet/index
30
+ Project-URL: Homepage, https://github.com/huggingface/xet-core
31
+ Project-URL: Issues, https://github.com/huggingface/xet-core/issues
32
+ Project-URL: Repository, https://github.com/huggingface/xet-core.git
33
+
34
+ <!---
35
+ Copyright 2024 The HuggingFace Team. All rights reserved.
36
+
37
+ Licensed under the Apache License, Version 2.0 (the "License");
38
+ you may not use this file except in compliance with the License.
39
+ You may obtain a copy of the License at
40
+
41
+ http://www.apache.org/licenses/LICENSE-2.0
42
+
43
+ Unless required by applicable law or agreed to in writing, software
44
+ distributed under the License is distributed on an "AS IS" BASIS,
45
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
46
+ See the License for the specific language governing permissions and
47
+ limitations under the License.
48
+ -->
49
+ <p align="center">
50
+ <a href="https://github.com/huggingface/xet-core/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/xet-core.svg?color=blue"></a>
51
+ <a href="https://github.com/huggingface/xet-core/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/xet-core.svg"></a>
52
+ <a href="https://github.com/huggingface/xet-core/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
53
+ </p>
54
+
55
+ <h3 align="center">
56
+ <p>🤗 hf-xet - xet client tech, used in <a target="_blank" href="https://github.com/huggingface/huggingface_hub/">huggingface_hub</a></p>
57
+ </h3>
58
+
59
+ ## Welcome
60
+
61
+ `hf-xet` enables `huggingface_hub` to utilize xet storage for uploading and downloading to HF Hub. Xet storage provides chunk-based deduplication, efficient storage/retrieval with local disk caching, and backwards compatibility with Git LFS. This library is not meant to be used directly, and is instead intended to be used from [huggingface_hub](https://pypi.org/project/huggingface-hub).
62
+
63
+ ## Key features
64
+
65
+ ♻ **chunk-based deduplication implementation**: avoid transferring and storing chunks that are shared across binary files (models, datasets, etc).
66
+
67
+ 🤗 **Python bindings**: bindings for [huggingface_hub](https://github.com/huggingface/huggingface_hub/) package.
68
+
69
+ ↔ **network communications**: concurrent communication to HF Hub Xet backend services (CAS).
70
+
71
+ 🔖 **local disk caching**: chunk-based cache that sits alongside the existing [huggingface_hub disk cache](https://huggingface.co/docs/huggingface_hub/guides/manage-cache).
72
+
73
+ ## Installation
74
+
75
+ Install the `hf_xet` package with [pip](https://pypi.org/project/hf-xet/):
76
+
77
+ ```bash
78
+ pip install hf_xet
79
+ ```
80
+
81
+ ## Quick Start
82
+
83
+ `hf_xet` is not intended to be run independently as it is expected to be used from `huggingface_hub`, so to get started with `huggingface_hub` check out the documentation [here](https://hf.co/docs/huggingface_hub).
84
+
85
+ ## Contributions (feature requests, bugs, etc.) are encouraged & appreciated 💙💚💛💜🧡❤️
86
+
87
+ Please join us in making hf-xet better. We value everyone's contributions. Code is not the only way to help. Answering questions, helping each other, improving documentation, filing issues all help immensely. If you are interested in contributing (please do!), check out the [contribution guide](https://github.com/huggingface/xet-core/blob/main/CONTRIBUTING.md) for this repository.
.cache/pip/http-v2/d/d/2/0/6/dd206ee8d449d4bec512242e8f8ebadb5a808c682766018f3921f62b.body ADDED
@@ -0,0 +1,51 @@
1
+ Metadata-Version: 2.1
2
+ Name: et_xmlfile
3
+ Version: 2.0.0
4
+ Summary: An implementation of lxml.xmlfile for the standard library
5
+ Home-page: https://foss.heptapod.net/openpyxl/et_xmlfile
6
+ Author: See AUTHORS.txt
7
+ Author-email: charlie.clark@clark-consulting.eu
8
+ License: MIT
9
+ Project-URL: Documentation, https://openpyxl.pages.heptapod.net/et_xmlfile/
10
+ Project-URL: Source, https://foss.heptapod.net/openpyxl/et_xmlfile
11
+ Project-URL: Tracker, https://foss.heptapod.net/openpyxl/et_xmfile/-/issues
12
+ Classifier: Development Status :: 5 - Production/Stable
13
+ Classifier: Operating System :: MacOS :: MacOS X
14
+ Classifier: Operating System :: Microsoft :: Windows
15
+ Classifier: Operating System :: POSIX
16
+ Classifier: License :: OSI Approved :: MIT License
17
+ Classifier: Programming Language :: Python
18
+ Classifier: Programming Language :: Python :: 3.8
19
+ Classifier: Programming Language :: Python :: 3.9
20
+ Classifier: Programming Language :: Python :: 3.10
21
+ Classifier: Programming Language :: Python :: 3.11
22
+ Classifier: Programming Language :: Python :: 3.12
23
+ Classifier: Programming Language :: Python :: 3.13
24
+ Requires-Python: >=3.8
25
+ License-File: LICENCE.python
26
+ License-File: LICENCE.rst
27
+ License-File: AUTHORS.txt
28
+
29
+ .. image:: https://foss.heptapod.net/openpyxl/et_xmlfile/badges/branch/default/coverage.svg
30
+ :target: https://coveralls.io/bitbucket/openpyxl/et_xmlfile?branch=default
31
+ :alt: coverage status
32
+
33
+ et_xmlfile
34
+ ==========
35
+
36
+ XML can use lots of memory, and et_xmlfile is a low-memory library for creating large XML files.
37
+ Although the standard library already includes an incremental parser, `iterparse`, it has no equivalent for writing XML. Once an element has been added to the tree, it is written to
38
+ the file or stream and the memory is then cleared.
39
+
40
+ This module is based upon the `xmlfile module from lxml <http://lxml.de/api.html#incremental-xml-generation>`_ with the aim of allowing code to be developed that will work with both libraries.
41
+ It was developed initially for the openpyxl project, but is now a standalone module.
42
+
43
+ The code was written by Elias Rabel as part of the `Python Düsseldorf <http://pyddf.de>`_ openpyxl sprint in September 2014.
44
+
45
+ Proper support for incremental writing was provided by Daniel Hillier in 2024.
46
+
47
+ Note on performance
48
+ -------------------
49
+
50
+ The code was not developed with performance in mind, but it turned out to be faster than the existing SAX-based implementation, though generally slower than lxml's xmlfile.
51
+ There is one area where an optimisation for lxml may negatively affect the performance of et_xmlfile: using the `.element()` method on the xmlfile context manager. It is, therefore, recommended simply to create Elements and write these directly, as in the sketch below.
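+
+ A small sketch of that recommended pattern (the file name and element names are illustrative)::
+
+     from xml.etree.ElementTree import Element
+     from et_xmlfile import xmlfile
+
+     with open("large.xml", "wb") as out:
+         with xmlfile(out) as xf:
+             with xf.element("root"):
+                 for i in range(100_000):
+                     # Create each Element and write it directly; memory
+                     # is released once the element hits the stream.
+                     xf.write(Element("item", idx=str(i)))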
.cache/uv/sdists-v9/.gitignore ADDED
File without changes
.cache/uv/simple-v18/pypi/filelock.rkyv ADDED
Binary file (88.2 kB). View file
 
.cache/uv/simple-v18/pypi/packaging.rkyv ADDED
Binary file (62.6 kB). View file