ZTWHHH committed on
Commit d790110 · verified · 1 parent: c0ee639

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/INSTALLER +1 -0
  2. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/LICENSE.txt +26 -0
  3. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/METADATA +127 -0
  4. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/RECORD +10 -0
  5. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/REQUESTED +0 -0
  6. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/WHEEL +5 -0
  7. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/pbr.json +1 -0
  8. minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/top_level.txt +1 -0
  9. minigpt2/lib/python3.10/site-packages/more_itertools/__init__.py +6 -0
  10. minigpt2/lib/python3.10/site-packages/more_itertools/__init__.pyi +2 -0
  11. minigpt2/lib/python3.10/site-packages/more_itertools/__pycache__/__init__.cpython-310.pyc +0 -0
  12. minigpt2/lib/python3.10/site-packages/more_itertools/__pycache__/recipes.cpython-310.pyc +0 -0
  13. minigpt2/lib/python3.10/site-packages/more_itertools/more.py +0 -0
  14. minigpt2/lib/python3.10/site-packages/more_itertools/more.pyi +815 -0
  15. minigpt2/lib/python3.10/site-packages/more_itertools/py.typed +0 -0
  16. minigpt2/lib/python3.10/site-packages/more_itertools/recipes.py +1075 -0
  17. minigpt2/lib/python3.10/site-packages/more_itertools/recipes.pyi +136 -0
  18. minigpt2/lib/python3.10/site-packages/torchgen/__init__.py +10 -0
  19. minigpt2/lib/python3.10/site-packages/torchgen/context.py +130 -0
  20. minigpt2/lib/python3.10/site-packages/torchgen/gen_executorch.py +998 -0
  21. minigpt2/lib/python3.10/site-packages/torchgen/local.py +59 -0
  22. minigpt2/lib/python3.10/site-packages/tzdata/__pycache__/__init__.cpython-310.pyc +0 -0
  23. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/Longyearbyen +0 -0
  24. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/__init__.py +0 -0
  25. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/__pycache__/__init__.cpython-310.pyc +0 -0
  26. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Azores +0 -0
  27. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Bermuda +0 -0
  28. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Canary +0 -0
  29. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Faeroe +0 -0
  30. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Jan_Mayen +0 -0
  31. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/South_Georgia +0 -0
  32. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Stanley +0 -0
  33. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/__pycache__/__init__.cpython-310.pyc +0 -0
  34. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/Acre +0 -0
  35. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/DeNoronha +0 -0
  36. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/East +0 -0
  37. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/West +0 -0
  38. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/__init__.py +0 -0
  39. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/__pycache__/__init__.cpython-310.pyc +0 -0
  40. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/Continental +0 -0
  41. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/EasterIsland +0 -0
  42. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/__init__.py +0 -0
  43. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/__pycache__/__init__.cpython-310.pyc +0 -0
  44. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT +0 -0
  45. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+1 +0 -0
  46. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+10 +0 -0
  47. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+11 +0 -0
  48. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+12 +0 -0
  49. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+3 +0 -0
  50. minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+5 +0 -0
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
+pip
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/LICENSE.txt ADDED
@@ -0,0 +1,26 @@
+Copyright (c) 2005-2018, Michele Simionato
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+Redistributions in bytecode form must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in
+the documentation and/or other materials provided with the
+distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGE.
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/METADATA ADDED
@@ -0,0 +1,127 @@
+Metadata-Version: 2.1
+Name: decorator
+Version: 5.1.1
+Summary: Decorators for Humans
+Home-page: https://github.com/micheles/decorator
+Author: Michele Simionato
+Author-email: michele.simionato@gmail.com
+License: new BSD License
+Keywords: decorators generic utility
+Platform: All
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Natural Language :: English
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Topic :: Software Development :: Libraries
+Classifier: Topic :: Utilities
+Requires-Python: >=3.5
+
+Decorators for Humans
+=====================
+
+The goal of the decorator module is to make it easy to define
+signature-preserving function decorators and decorator factories.
+It also includes an implementation of multiple dispatch and other niceties
+(please check the docs). It is released under a two-clauses
+BSD license, i.e. basically you can do whatever you want with it but I am not
+responsible.
+
+Installation
+-------------
+
+If you are lazy, just perform
+
+``$ pip install decorator``
+
+which will install just the module on your system.
+
+If you prefer to install the full distribution from source, including
+the documentation, clone the `GitHub repo`_ or download the tarball_, unpack it and run
+
+``$ pip install .``
+
+in the main directory, possibly as superuser.
+
+.. _tarball: https://pypi.org/project/decorator/#files
+.. _GitHub repo: https://github.com/micheles/decorator
+
+Testing
+--------
+
+If you have the source code installation you can run the tests with
+
+`$ python src/tests/test.py -v`
+
+or (if you have setuptools installed)
+
+`$ python setup.py test`
+
+Notice that you may run into trouble if in your system there
+is an older version of the decorator module; in such a case remove the
+old version. It is safe even to copy the module `decorator.py` over
+an existing one, since we kept backward-compatibility for a long time.
+
+Repository
+---------------
+
+The project is hosted on GitHub. You can look at the source here:
+
+ https://github.com/micheles/decorator
+
+Documentation
+---------------
+
+The documentation has been moved to https://github.com/micheles/decorator/blob/master/docs/documentation.md
+
+From there you can get a PDF version by simply using the print
+functionality of your browser.
+
+Here is the documentation for previous versions of the module:
+
+https://github.com/micheles/decorator/blob/4.3.2/docs/tests.documentation.rst
+https://github.com/micheles/decorator/blob/4.2.1/docs/tests.documentation.rst
+https://github.com/micheles/decorator/blob/4.1.2/docs/tests.documentation.rst
+https://github.com/micheles/decorator/blob/4.0.0/documentation.rst
+https://github.com/micheles/decorator/blob/3.4.2/documentation.rst
+
+For the impatient
+-----------------
+
+Here is an example of how to define a family of decorators tracing slow
+operations:
+
+.. code-block:: python
+
+   from decorator import decorator
+
+   @decorator
+   def warn_slow(func, timelimit=60, *args, **kw):
+       t0 = time.time()
+       result = func(*args, **kw)
+       dt = time.time() - t0
+       if dt > timelimit:
+           logging.warn('%s took %d seconds', func.__name__, dt)
+       else:
+           logging.info('%s took %d seconds', func.__name__, dt)
+       return result
+
+   @warn_slow  # warn if it takes more than 1 minute
+   def preprocess_input_files(inputdir, tempdir):
+       ...
+
+   @warn_slow(timelimit=600)  # warn if it takes more than 10 minutes
+   def run_calculation(tempdir, outdir):
+       ...
+
+Enjoy!
+
+
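The `warn_slow` example in the METADATA above is quoted as-is from the package and omits its `time` and `logging` imports. A self-contained stdlib sketch of the same signature-preserving timing pattern is shown below; it uses `functools.wraps` as a stand-in for `decorator.decorator` (an assumption — the real package preserves the exact signature, and also supports the bare `@warn_slow` form, which this sketch does not):

```python
# Stdlib-only sketch of the timing-decorator pattern from the METADATA
# example. functools.wraps stands in for decorator.decorator here.
import functools
import logging
import time

def warn_slow(timelimit=60):
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kw):
            t0 = time.time()
            result = func(*args, **kw)
            dt = time.time() - t0
            # log at WARNING when the call exceeds the limit, INFO otherwise
            level = logging.WARNING if dt > timelimit else logging.INFO
            logging.log(level, '%s took %d seconds', func.__name__, dt)
            return result
        return inner
    return wrapper

@warn_slow(timelimit=600)  # warn if it takes more than 10 minutes
def run_calculation():
    return 42

print(run_calculation())         # the wrapped call still returns its result
print(run_calculation.__name__)  # metadata preserved: 'run_calculation'
```

Unlike `decorator.decorator`, `functools.wraps` copies `__name__`/`__doc__` but does not rewrite the wrapper's signature, which is exactly the gap the packaged module exists to close.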
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/RECORD ADDED
@@ -0,0 +1,10 @@
+__pycache__/decorator.cpython-310.pyc,,
+decorator-5.1.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+decorator-5.1.1.dist-info/LICENSE.txt,sha256=_RFmDKvwUyCCxFcGhi-vwpSQfsf44heBgkCkmZgGeC4,1309
+decorator-5.1.1.dist-info/METADATA,sha256=XAr2zbYpRxCkcPbsmg1oaiS5ea7mhTq-j-wb0XjuVho,3955
+decorator-5.1.1.dist-info/RECORD,,
+decorator-5.1.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+decorator-5.1.1.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92
+decorator-5.1.1.dist-info/pbr.json,sha256=AL84oUUWQHwkd8OCPhLRo2NJjU5MDdmXMqRHv-posqs,47
+decorator-5.1.1.dist-info/top_level.txt,sha256=Kn6eQjo83ctWxXVyBMOYt0_YpjRjBznKYVuNyuC_DSI,10
+decorator.py,sha256=el5cAEgoTEpRQN65tOxGhElue-CccMv0xol-J2MwOc0,16752
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/REQUESTED ADDED
File without changes
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.37.0)
+Root-Is-Purelib: true
+Tag: py3-none-any
+
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/pbr.json ADDED
@@ -0,0 +1 @@
+{"is_release": false, "git_version": "8608a46"}
minigpt2/lib/python3.10/site-packages/decorator-5.1.1.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+decorator
minigpt2/lib/python3.10/site-packages/more_itertools/__init__.py ADDED
@@ -0,0 +1,6 @@
+"""More routines for operating on iterables, beyond itertools"""
+
+from .more import *  # noqa
+from .recipes import *  # noqa
+
+__version__ = '10.5.0'
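The package's top-level `__init__.py` above simply re-exports everything from `more` and `recipes`. To illustrate the kind of routine being re-exported, here is a hypothetical stdlib-only re-implementation of `chunked` (one of the names declared in the `more.pyi` stubs in this commit) — a sketch for illustration, not the packaged code:

```python
# Hypothetical re-implementation of more_itertools.chunked using only
# the stdlib, to show the style of routine the package re-exports.
from itertools import islice

def chunked(iterable, n):
    """Yield successive lists of up to n items from iterable."""
    it = iter(iterable)
    # islice consumes up to n items per pass; stop when nothing is left
    while chunk := list(islice(it, n)):
        yield chunk

print(list(chunked(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

The real `more_itertools.chunked` additionally accepts `n=None` and a `strict` flag, as its stub signature indicates.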
minigpt2/lib/python3.10/site-packages/more_itertools/__init__.pyi ADDED
@@ -0,0 +1,2 @@
+from .more import *
+from .recipes import *
minigpt2/lib/python3.10/site-packages/more_itertools/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (310 Bytes)
minigpt2/lib/python3.10/site-packages/more_itertools/__pycache__/recipes.cpython-310.pyc ADDED
Binary file (29.7 kB)
minigpt2/lib/python3.10/site-packages/more_itertools/more.py ADDED
The diff for this file is too large to render.
minigpt2/lib/python3.10/site-packages/more_itertools/more.pyi ADDED
@@ -0,0 +1,815 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """Stubs for more_itertools.more"""
2
+
3
+ from __future__ import annotations
4
+
5
+ import sys
6
+ import types
7
+
8
+ from typing import (
9
+ Any,
10
+ Callable,
11
+ Container,
12
+ ContextManager,
13
+ Generic,
14
+ Hashable,
15
+ Mapping,
16
+ Iterable,
17
+ Iterator,
18
+ Mapping,
19
+ overload,
20
+ Reversible,
21
+ Sequence,
22
+ Sized,
23
+ Type,
24
+ TypeVar,
25
+ type_check_only,
26
+ )
27
+ from typing_extensions import Protocol
28
+
29
+ # Type and type variable definitions
30
+ _T = TypeVar('_T')
31
+ _T1 = TypeVar('_T1')
32
+ _T2 = TypeVar('_T2')
33
+ _T3 = TypeVar('_T3')
34
+ _T4 = TypeVar('_T4')
35
+ _T5 = TypeVar('_T5')
36
+ _U = TypeVar('_U')
37
+ _V = TypeVar('_V')
38
+ _W = TypeVar('_W')
39
+ _T_co = TypeVar('_T_co', covariant=True)
40
+ _GenFn = TypeVar('_GenFn', bound=Callable[..., Iterator[Any]])
41
+ _Raisable = BaseException | Type[BaseException]
42
+
43
+ # The type of isinstance's second argument (from typeshed builtins)
44
+ if sys.version_info >= (3, 10):
45
+ _ClassInfo = type | types.UnionType | tuple[_ClassInfo, ...]
46
+ else:
47
+ _ClassInfo = type | tuple[_ClassInfo, ...]
48
+
49
+ @type_check_only
50
+ class _SizedIterable(Protocol[_T_co], Sized, Iterable[_T_co]): ...
51
+
52
+ @type_check_only
53
+ class _SizedReversible(Protocol[_T_co], Sized, Reversible[_T_co]): ...
54
+
55
+ @type_check_only
56
+ class _SupportsSlicing(Protocol[_T_co]):
57
+ def __getitem__(self, __k: slice) -> _T_co: ...
58
+
59
+ def chunked(
60
+ iterable: Iterable[_T], n: int | None, strict: bool = ...
61
+ ) -> Iterator[list[_T]]: ...
62
+ @overload
63
+ def first(iterable: Iterable[_T]) -> _T: ...
64
+ @overload
65
+ def first(iterable: Iterable[_T], default: _U) -> _T | _U: ...
66
+ @overload
67
+ def last(iterable: Iterable[_T]) -> _T: ...
68
+ @overload
69
+ def last(iterable: Iterable[_T], default: _U) -> _T | _U: ...
70
+ @overload
71
+ def nth_or_last(iterable: Iterable[_T], n: int) -> _T: ...
72
+ @overload
73
+ def nth_or_last(iterable: Iterable[_T], n: int, default: _U) -> _T | _U: ...
74
+
75
+ class peekable(Generic[_T], Iterator[_T]):
76
+ def __init__(self, iterable: Iterable[_T]) -> None: ...
77
+ def __iter__(self) -> peekable[_T]: ...
78
+ def __bool__(self) -> bool: ...
79
+ @overload
80
+ def peek(self) -> _T: ...
81
+ @overload
82
+ def peek(self, default: _U) -> _T | _U: ...
83
+ def prepend(self, *items: _T) -> None: ...
84
+ def __next__(self) -> _T: ...
85
+ @overload
86
+ def __getitem__(self, index: int) -> _T: ...
87
+ @overload
88
+ def __getitem__(self, index: slice) -> list[_T]: ...
89
+
90
+ def consumer(func: _GenFn) -> _GenFn: ...
91
+ def ilen(iterable: Iterable[_T]) -> int: ...
92
+ def iterate(func: Callable[[_T], _T], start: _T) -> Iterator[_T]: ...
93
+ def with_iter(
94
+ context_manager: ContextManager[Iterable[_T]],
95
+ ) -> Iterator[_T]: ...
96
+ def one(
97
+ iterable: Iterable[_T],
98
+ too_short: _Raisable | None = ...,
99
+ too_long: _Raisable | None = ...,
100
+ ) -> _T: ...
101
+ def raise_(exception: _Raisable, *args: Any) -> None: ...
102
+ def strictly_n(
103
+ iterable: Iterable[_T],
104
+ n: int,
105
+ too_short: _GenFn | None = ...,
106
+ too_long: _GenFn | None = ...,
107
+ ) -> list[_T]: ...
108
+ def distinct_permutations(
109
+ iterable: Iterable[_T], r: int | None = ...
110
+ ) -> Iterator[tuple[_T, ...]]: ...
111
+ def intersperse(
112
+ e: _U, iterable: Iterable[_T], n: int = ...
113
+ ) -> Iterator[_T | _U]: ...
114
+ def unique_to_each(*iterables: Iterable[_T]) -> list[list[_T]]: ...
115
+ @overload
116
+ def windowed(
117
+ seq: Iterable[_T], n: int, *, step: int = ...
118
+ ) -> Iterator[tuple[_T | None, ...]]: ...
119
+ @overload
120
+ def windowed(
121
+ seq: Iterable[_T], n: int, fillvalue: _U, step: int = ...
122
+ ) -> Iterator[tuple[_T | _U, ...]]: ...
123
+ def substrings(iterable: Iterable[_T]) -> Iterator[tuple[_T, ...]]: ...
124
+ def substrings_indexes(
125
+ seq: Sequence[_T], reverse: bool = ...
126
+ ) -> Iterator[tuple[Sequence[_T], int, int]]: ...
127
+
128
+ class bucket(Generic[_T, _U], Container[_U]):
129
+ def __init__(
130
+ self,
131
+ iterable: Iterable[_T],
132
+ key: Callable[[_T], _U],
133
+ validator: Callable[[_U], object] | None = ...,
134
+ ) -> None: ...
135
+ def __contains__(self, value: object) -> bool: ...
136
+ def __iter__(self) -> Iterator[_U]: ...
137
+ def __getitem__(self, value: object) -> Iterator[_T]: ...
138
+
139
+ def spy(
140
+ iterable: Iterable[_T], n: int = ...
141
+ ) -> tuple[list[_T], Iterator[_T]]: ...
142
+ def interleave(*iterables: Iterable[_T]) -> Iterator[_T]: ...
143
+ def interleave_longest(*iterables: Iterable[_T]) -> Iterator[_T]: ...
144
+ def interleave_evenly(
145
+ iterables: list[Iterable[_T]], lengths: list[int] | None = ...
146
+ ) -> Iterator[_T]: ...
147
+ def collapse(
148
+ iterable: Iterable[Any],
149
+ base_type: _ClassInfo | None = ...,
150
+ levels: int | None = ...,
151
+ ) -> Iterator[Any]: ...
152
+ @overload
153
+ def side_effect(
154
+ func: Callable[[_T], object],
155
+ iterable: Iterable[_T],
156
+ chunk_size: None = ...,
157
+ before: Callable[[], object] | None = ...,
158
+ after: Callable[[], object] | None = ...,
159
+ ) -> Iterator[_T]: ...
160
+ @overload
161
+ def side_effect(
162
+ func: Callable[[list[_T]], object],
163
+ iterable: Iterable[_T],
164
+ chunk_size: int,
165
+ before: Callable[[], object] | None = ...,
166
+ after: Callable[[], object] | None = ...,
167
+ ) -> Iterator[_T]: ...
168
+ def sliced(
169
+ seq: _SupportsSlicing[_T], n: int, strict: bool = ...
170
+ ) -> Iterator[_T]: ...
171
+ def split_at(
172
+ iterable: Iterable[_T],
173
+ pred: Callable[[_T], object],
174
+ maxsplit: int = ...,
175
+ keep_separator: bool = ...,
176
+ ) -> Iterator[list[_T]]: ...
177
+ def split_before(
178
+ iterable: Iterable[_T], pred: Callable[[_T], object], maxsplit: int = ...
179
+ ) -> Iterator[list[_T]]: ...
180
+ def split_after(
181
+ iterable: Iterable[_T], pred: Callable[[_T], object], maxsplit: int = ...
182
+ ) -> Iterator[list[_T]]: ...
183
+ def split_when(
184
+ iterable: Iterable[_T],
185
+ pred: Callable[[_T, _T], object],
186
+ maxsplit: int = ...,
187
+ ) -> Iterator[list[_T]]: ...
188
+ def split_into(
189
+ iterable: Iterable[_T], sizes: Iterable[int | None]
190
+ ) -> Iterator[list[_T]]: ...
191
+ @overload
192
+ def padded(
193
+ iterable: Iterable[_T],
194
+ *,
195
+ n: int | None = ...,
196
+ next_multiple: bool = ...,
197
+ ) -> Iterator[_T | None]: ...
198
+ @overload
199
+ def padded(
200
+ iterable: Iterable[_T],
201
+ fillvalue: _U,
202
+ n: int | None = ...,
203
+ next_multiple: bool = ...,
204
+ ) -> Iterator[_T | _U]: ...
205
+ @overload
206
+ def repeat_last(iterable: Iterable[_T]) -> Iterator[_T]: ...
207
+ @overload
208
+ def repeat_last(iterable: Iterable[_T], default: _U) -> Iterator[_T | _U]: ...
209
+ def distribute(n: int, iterable: Iterable[_T]) -> list[Iterator[_T]]: ...
210
+ @overload
211
+ def stagger(
212
+ iterable: Iterable[_T],
213
+ offsets: _SizedIterable[int] = ...,
214
+ longest: bool = ...,
215
+ ) -> Iterator[tuple[_T | None, ...]]: ...
216
+ @overload
217
+ def stagger(
218
+ iterable: Iterable[_T],
219
+ offsets: _SizedIterable[int] = ...,
220
+ longest: bool = ...,
221
+ fillvalue: _U = ...,
222
+ ) -> Iterator[tuple[_T | _U, ...]]: ...
223
+
224
+ class UnequalIterablesError(ValueError):
225
+ def __init__(self, details: tuple[int, int, int] | None = ...) -> None: ...
226
+
227
+ # zip_equal
228
+ @overload
229
+ def zip_equal(__iter1: Iterable[_T1]) -> Iterator[tuple[_T1]]: ...
230
+ @overload
231
+ def zip_equal(
232
+ __iter1: Iterable[_T1], __iter2: Iterable[_T2]
233
+ ) -> Iterator[tuple[_T1, _T2]]: ...
234
+ @overload
235
+ def zip_equal(
236
+ __iter1: Iterable[_T1], __iter2: Iterable[_T2], __iter3: Iterable[_T3]
237
+ ) -> Iterator[tuple[_T1, _T2, _T3]]: ...
238
+ @overload
239
+ def zip_equal(
240
+ __iter1: Iterable[_T1],
241
+ __iter2: Iterable[_T2],
242
+ __iter3: Iterable[_T3],
243
+ __iter4: Iterable[_T4],
244
+ ) -> Iterator[tuple[_T1, _T2, _T3, _T4]]: ...
245
+ @overload
246
+ def zip_equal(
247
+ __iter1: Iterable[_T1],
248
+ __iter2: Iterable[_T2],
249
+ __iter3: Iterable[_T3],
250
+ __iter4: Iterable[_T4],
251
+ __iter5: Iterable[_T5],
252
+ ) -> Iterator[tuple[_T1, _T2, _T3, _T4, _T5]]: ...
253
+ @overload
254
+ def zip_equal(
255
+ __iter1: Iterable[Any],
256
+ __iter2: Iterable[Any],
257
+ __iter3: Iterable[Any],
258
+ __iter4: Iterable[Any],
259
+ __iter5: Iterable[Any],
260
+ __iter6: Iterable[Any],
261
+ *iterables: Iterable[Any],
262
+ ) -> Iterator[tuple[Any, ...]]: ...
263
+
264
+ # zip_offset
265
+ @overload
266
+ def zip_offset(
267
+ __iter1: Iterable[_T1],
268
+ *,
269
+ offsets: _SizedIterable[int],
270
+ longest: bool = ...,
271
+ fillvalue: None = None,
272
+ ) -> Iterator[tuple[_T1 | None]]: ...
273
+ @overload
274
+ def zip_offset(
275
+ __iter1: Iterable[_T1],
276
+ __iter2: Iterable[_T2],
277
+ *,
278
+ offsets: _SizedIterable[int],
279
+ longest: bool = ...,
280
+ fillvalue: None = None,
281
+ ) -> Iterator[tuple[_T1 | None, _T2 | None]]: ...
282
+ @overload
283
+ def zip_offset(
284
+ __iter1: Iterable[_T],
285
+ __iter2: Iterable[_T],
286
+ __iter3: Iterable[_T],
287
+ *iterables: Iterable[_T],
288
+ offsets: _SizedIterable[int],
289
+ longest: bool = ...,
290
+ fillvalue: None = None,
291
+ ) -> Iterator[tuple[_T | None, ...]]: ...
292
+ @overload
293
+ def zip_offset(
294
+ __iter1: Iterable[_T1],
295
+ *,
296
+ offsets: _SizedIterable[int],
297
+ longest: bool = ...,
298
+ fillvalue: _U,
299
+ ) -> Iterator[tuple[_T1 | _U]]: ...
300
+ @overload
301
+ def zip_offset(
302
+ __iter1: Iterable[_T1],
303
+ __iter2: Iterable[_T2],
304
+ *,
305
+ offsets: _SizedIterable[int],
306
+ longest: bool = ...,
307
+ fillvalue: _U,
308
+ ) -> Iterator[tuple[_T1 | _U, _T2 | _U]]: ...
309
+ @overload
310
+ def zip_offset(
311
+ __iter1: Iterable[_T],
312
+ __iter2: Iterable[_T],
313
+ __iter3: Iterable[_T],
314
+ *iterables: Iterable[_T],
315
+ offsets: _SizedIterable[int],
316
+ longest: bool = ...,
317
+ fillvalue: _U,
318
+ ) -> Iterator[tuple[_T | _U, ...]]: ...
319
+ def sort_together(
320
+ iterables: Iterable[Iterable[_T]],
321
+ key_list: Iterable[int] = ...,
322
+ key: Callable[..., Any] | None = ...,
323
+ reverse: bool = ...,
324
+ strict: bool = ...,
325
+ ) -> list[tuple[_T, ...]]: ...
326
+ def unzip(iterable: Iterable[Sequence[_T]]) -> tuple[Iterator[_T], ...]: ...
327
+ def divide(n: int, iterable: Iterable[_T]) -> list[Iterator[_T]]: ...
328
+ def always_iterable(
329
+ obj: object,
330
+ base_type: _ClassInfo | None = ...,
331
+ ) -> Iterator[Any]: ...
332
+ def adjacent(
333
+ predicate: Callable[[_T], bool],
334
+ iterable: Iterable[_T],
335
+ distance: int = ...,
336
+ ) -> Iterator[tuple[bool, _T]]: ...
337
+ @overload
338
+ def groupby_transform(
339
+ iterable: Iterable[_T],
340
+ keyfunc: None = None,
341
+ valuefunc: None = None,
342
+ reducefunc: None = None,
343
+ ) -> Iterator[tuple[_T, Iterator[_T]]]: ...
344
+ @overload
345
+ def groupby_transform(
346
+ iterable: Iterable[_T],
347
+ keyfunc: Callable[[_T], _U],
348
+ valuefunc: None,
349
+ reducefunc: None,
350
+ ) -> Iterator[tuple[_U, Iterator[_T]]]: ...
351
+ @overload
352
+ def groupby_transform(
353
+ iterable: Iterable[_T],
354
+ keyfunc: None,
355
+ valuefunc: Callable[[_T], _V],
356
+ reducefunc: None,
357
+ ) -> Iterable[tuple[_T, Iterable[_V]]]: ...
358
+ @overload
359
+ def groupby_transform(
360
+ iterable: Iterable[_T],
361
+ keyfunc: Callable[[_T], _U],
362
+ valuefunc: Callable[[_T], _V],
363
+ reducefunc: None,
364
+ ) -> Iterable[tuple[_U, Iterator[_V]]]: ...
365
+ @overload
366
+ def groupby_transform(
367
+ iterable: Iterable[_T],
368
+ keyfunc: None,
369
+ valuefunc: None,
370
+ reducefunc: Callable[[Iterator[_T]], _W],
371
+ ) -> Iterable[tuple[_T, _W]]: ...
372
+ @overload
373
+ def groupby_transform(
374
+ iterable: Iterable[_T],
375
+ keyfunc: Callable[[_T], _U],
376
+ valuefunc: None,
377
+ reducefunc: Callable[[Iterator[_T]], _W],
378
+ ) -> Iterable[tuple[_U, _W]]: ...
379
+ @overload
380
+ def groupby_transform(
381
+ iterable: Iterable[_T],
382
+ keyfunc: None,
383
+ valuefunc: Callable[[_T], _V],
384
+ reducefunc: Callable[[Iterable[_V]], _W],
385
+ ) -> Iterable[tuple[_T, _W]]: ...
386
+ @overload
387
+ def groupby_transform(
388
+ iterable: Iterable[_T],
389
+ keyfunc: Callable[[_T], _U],
390
+ valuefunc: Callable[[_T], _V],
391
+ reducefunc: Callable[[Iterable[_V]], _W],
392
+ ) -> Iterable[tuple[_U, _W]]: ...
393
+
394
+ class numeric_range(Generic[_T, _U], Sequence[_T], Hashable, Reversible[_T]):
395
+ @overload
396
+ def __init__(self, __stop: _T) -> None: ...
397
+ @overload
398
+ def __init__(self, __start: _T, __stop: _T) -> None: ...
399
+ @overload
400
+ def __init__(self, __start: _T, __stop: _T, __step: _U) -> None: ...
401
+ def __bool__(self) -> bool: ...
402
+ def __contains__(self, elem: object) -> bool: ...
403
+ def __eq__(self, other: object) -> bool: ...
404
+ @overload
405
+ def __getitem__(self, key: int) -> _T: ...
406
+ @overload
407
+ def __getitem__(self, key: slice) -> numeric_range[_T, _U]: ...
408
+ def __hash__(self) -> int: ...
409
+ def __iter__(self) -> Iterator[_T]: ...
410
+ def __len__(self) -> int: ...
411
+ def __reduce__(
412
+ self,
413
+ ) -> tuple[Type[numeric_range[_T, _U]], tuple[_T, _T, _U]]: ...
414
+ def __repr__(self) -> str: ...
415
+ def __reversed__(self) -> Iterator[_T]: ...
416
+ def count(self, value: _T) -> int: ...
417
+ def index(self, value: _T) -> int: ... # type: ignore
418
+
419
+ def count_cycle(
420
+ iterable: Iterable[_T], n: int | None = ...
421
+ ) -> Iterable[tuple[int, _T]]: ...
422
+ def mark_ends(
423
+ iterable: Iterable[_T],
424
+ ) -> Iterable[tuple[bool, bool, _T]]: ...
425
+ def locate(
426
+ iterable: Iterable[_T],
427
+ pred: Callable[..., Any] = ...,
428
+ window_size: int | None = ...,
429
+ ) -> Iterator[int]: ...
430
+ def lstrip(
431
+ iterable: Iterable[_T], pred: Callable[[_T], object]
432
+ ) -> Iterator[_T]: ...
433
+ def rstrip(
434
+ iterable: Iterable[_T], pred: Callable[[_T], object]
435
+ ) -> Iterator[_T]: ...
436
+ def strip(
437
+ iterable: Iterable[_T], pred: Callable[[_T], object]
438
+ ) -> Iterator[_T]: ...
439
+
440
+ class islice_extended(Generic[_T], Iterator[_T]):
441
+ def __init__(self, iterable: Iterable[_T], *args: int | None) -> None: ...
442
+ def __iter__(self) -> islice_extended[_T]: ...
443
+ def __next__(self) -> _T: ...
444
+ def __getitem__(self, index: slice) -> islice_extended[_T]: ...
445
+
446
+ def always_reversible(iterable: Iterable[_T]) -> Iterator[_T]: ...
447
+ def consecutive_groups(
448
+ iterable: Iterable[_T], ordering: Callable[[_T], int] = ...
449
+ ) -> Iterator[Iterator[_T]]: ...
450
+ @overload
451
+ def difference(
452
+ iterable: Iterable[_T],
453
+ func: Callable[[_T, _T], _U] = ...,
454
+ *,
455
+ initial: None = ...,
456
+ ) -> Iterator[_T | _U]: ...
457
+ @overload
458
+ def difference(
459
+ iterable: Iterable[_T], func: Callable[[_T, _T], _U] = ..., *, initial: _U
460
+ ) -> Iterator[_U]: ...
461
+
462
+ class SequenceView(Generic[_T], Sequence[_T]):
463
+ def __init__(self, target: Sequence[_T]) -> None: ...
464
+ @overload
465
+ def __getitem__(self, index: int) -> _T: ...
466
+ @overload
467
+ def __getitem__(self, index: slice) -> Sequence[_T]: ...
468
+ def __len__(self) -> int: ...
469
+
470
+ class seekable(Generic[_T], Iterator[_T]):
471
+ def __init__(
472
+ self, iterable: Iterable[_T], maxlen: int | None = ...
473
+ ) -> None: ...
474
+ def __iter__(self) -> seekable[_T]: ...
475
+ def __next__(self) -> _T: ...
476
+ def __bool__(self) -> bool: ...
477
+ @overload
478
+ def peek(self) -> _T: ...
479
+ @overload
480
+ def peek(self, default: _U) -> _T | _U: ...
481
+ def elements(self) -> SequenceView[_T]: ...
482
+ def seek(self, index: int) -> None: ...
483
+ def relative_seek(self, count: int) -> None: ...
484
+
485
+ class run_length:
486
+ @staticmethod
487
+ def encode(iterable: Iterable[_T]) -> Iterator[tuple[_T, int]]: ...
488
+ @staticmethod
489
+ def decode(iterable: Iterable[tuple[_T, int]]) -> Iterator[_T]: ...
490
+
491
+ def exactly_n(
492
+ iterable: Iterable[_T], n: int, predicate: Callable[[_T], object] = ...
493
+ ) -> bool: ...
494
+ def circular_shifts(
495
+ iterable: Iterable[_T], steps: int = 1
496
+ ) -> list[tuple[_T, ...]]: ...
497
+ def make_decorator(
498
+ wrapping_func: Callable[..., _U], result_index: int = ...
499
+ ) -> Callable[..., Callable[[Callable[..., Any]], Callable[..., _U]]]: ...
500
+ @overload
501
+ def map_reduce(
502
+ iterable: Iterable[_T],
503
+ keyfunc: Callable[[_T], _U],
504
+ valuefunc: None = ...,
505
+ reducefunc: None = ...,
506
+ ) -> dict[_U, list[_T]]: ...
507
+ @overload
508
+ def map_reduce(
509
+ iterable: Iterable[_T],
510
+ keyfunc: Callable[[_T], _U],
511
+ valuefunc: Callable[[_T], _V],
512
+ reducefunc: None = ...,
513
+ ) -> dict[_U, list[_V]]: ...
514
+ @overload
515
+ def map_reduce(
516
+ iterable: Iterable[_T],
517
+ keyfunc: Callable[[_T], _U],
518
+ valuefunc: None = ...,
519
+ reducefunc: Callable[[list[_T]], _W] = ...,
520
+ ) -> dict[_U, _W]: ...
521
+ @overload
522
+ def map_reduce(
523
+ iterable: Iterable[_T],
524
+ keyfunc: Callable[[_T], _U],
525
+ valuefunc: Callable[[_T], _V],
526
+ reducefunc: Callable[[list[_V]], _W],
527
+ ) -> dict[_U, _W]: ...
528
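The four `map_reduce` overloads above encode one rule: omitting `valuefunc` stores the items themselves, and omitting `reducefunc` leaves each group as a list. A rough functional sketch of that behavior (`map_reduce_sketch` is a hypothetical stand-in, not the library code):

```python
from collections import defaultdict

def map_reduce_sketch(iterable, keyfunc, valuefunc=None, reducefunc=None):
    # Group items by keyfunc; optionally transform each value and
    # optionally collapse each group with reducefunc.
    groups = defaultdict(list)
    for item in iterable:
        value = item if valuefunc is None else valuefunc(item)
        groups[keyfunc(item)].append(value)
    if reducefunc is None:
        return dict(groups)  # the dict[key, list[...]] overloads
    return {k: reducefunc(v) for k, v in groups.items()}  # dict[key, reduced]


by_parity = map_reduce_sketch(range(6), keyfunc=lambda x: x % 2)
parity_sums = map_reduce_sketch(range(6), keyfunc=lambda x: x % 2, reducefunc=sum)
```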
+ def rlocate(
529
+ iterable: Iterable[_T],
530
+ pred: Callable[..., object] = ...,
531
+ window_size: int | None = ...,
532
+ ) -> Iterator[int]: ...
533
+ def replace(
534
+ iterable: Iterable[_T],
535
+ pred: Callable[..., object],
536
+ substitutes: Iterable[_U],
537
+ count: int | None = ...,
538
+ window_size: int = ...,
539
+ ) -> Iterator[_T | _U]: ...
540
+ def partitions(iterable: Iterable[_T]) -> Iterator[list[list[_T]]]: ...
541
+ def set_partitions(
542
+ iterable: Iterable[_T],
543
+ k: int | None = ...,
544
+ min_size: int | None = ...,
545
+ max_size: int | None = ...,
546
+ ) -> Iterator[list[list[_T]]]: ...
547
+
548
+ class time_limited(Generic[_T], Iterator[_T]):
549
+ def __init__(
550
+ self, limit_seconds: float, iterable: Iterable[_T]
551
+ ) -> None: ...
552
+ def __iter__(self) -> islice_extended[_T]: ...
553
+ def __next__(self) -> _T: ...
554
+
555
+ @overload
556
+ def only(
557
+ iterable: Iterable[_T], *, too_long: _Raisable | None = ...
558
+ ) -> _T | None: ...
559
+ @overload
560
+ def only(
561
+ iterable: Iterable[_T], default: _U, too_long: _Raisable | None = ...
562
+ ) -> _T | _U: ...
563
+ def ichunked(iterable: Iterable[_T], n: int) -> Iterator[Iterator[_T]]: ...
564
+ def distinct_combinations(
565
+ iterable: Iterable[_T], r: int
566
+ ) -> Iterator[tuple[_T, ...]]: ...
567
+ def filter_except(
568
+ validator: Callable[[Any], object],
569
+ iterable: Iterable[_T],
570
+ *exceptions: Type[BaseException],
571
+ ) -> Iterator[_T]: ...
572
+ def map_except(
573
+ function: Callable[[Any], _U],
574
+ iterable: Iterable[_T],
575
+ *exceptions: Type[BaseException],
576
+ ) -> Iterator[_U]: ...
577
+ def map_if(
578
+ iterable: Iterable[Any],
579
+ pred: Callable[[Any], bool],
580
+ func: Callable[[Any], Any],
581
+ func_else: Callable[[Any], Any] | None = ...,
582
+ ) -> Iterator[Any]: ...
583
+ def _sample_unweighted(
584
+ iterator: Iterator[_T], k: int, strict: bool
585
+ ) -> list[_T]: ...
586
+ def _sample_counted(
587
+ population: Iterator[_T], k: int, counts: Iterable[int], strict: bool
588
+ ) -> list[_T]: ...
589
+ def _sample_weighted(
590
+ iterator: Iterator[_T], k: int, weights, strict
591
+ ) -> list[_T]: ...
592
+ def sample(
593
+ iterable: Iterable[_T],
594
+ k: int,
595
+ weights: Iterable[float] | None = ...,
596
+ *,
597
+ counts: Iterable[int] | None = ...,
598
+ strict: bool = False,
599
+ ) -> list[_T]: ...
600
+ def is_sorted(
601
+ iterable: Iterable[_T],
602
+ key: Callable[[_T], _U] | None = ...,
603
+ reverse: bool = False,
604
+ strict: bool = False,
605
+ ) -> bool: ...
606
+
607
+ class AbortThread(BaseException):
608
+ pass
609
+
610
+ class callback_iter(Generic[_T], Iterator[_T]):
611
+ def __init__(
612
+ self,
613
+ func: Callable[..., Any],
614
+ callback_kwd: str = ...,
615
+ wait_seconds: float = ...,
616
+ ) -> None: ...
617
+ def __enter__(self) -> callback_iter[_T]: ...
618
+ def __exit__(
619
+ self,
620
+ exc_type: Type[BaseException] | None,
621
+ exc_value: BaseException | None,
622
+ traceback: types.TracebackType | None,
623
+ ) -> bool | None: ...
624
+ def __iter__(self) -> callback_iter[_T]: ...
625
+ def __next__(self) -> _T: ...
626
+ def _reader(self) -> Iterator[_T]: ...
627
+ @property
628
+ def done(self) -> bool: ...
629
+ @property
630
+ def result(self) -> Any: ...
631
+
632
+ def windowed_complete(
633
+ iterable: Iterable[_T], n: int
634
+ ) -> Iterator[tuple[tuple[_T, ...], tuple[_T, ...], tuple[_T, ...]]]: ...
635
+ def all_unique(
636
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
637
+ ) -> bool: ...
638
+ def nth_product(index: int, *args: Iterable[_T]) -> tuple[_T, ...]: ...
639
+ def nth_combination_with_replacement(
640
+ iterable: Iterable[_T], r: int, index: int
641
+ ) -> tuple[_T, ...]: ...
642
+ def nth_permutation(
643
+ iterable: Iterable[_T], r: int, index: int
644
+ ) -> tuple[_T, ...]: ...
645
+ def value_chain(*args: _T | Iterable[_T]) -> Iterable[_T]: ...
646
+ def product_index(element: Iterable[_T], *args: Iterable[_T]) -> int: ...
647
+ def combination_index(
648
+ element: Iterable[_T], iterable: Iterable[_T]
649
+ ) -> int: ...
650
+ def combination_with_replacement_index(
651
+ element: Iterable[_T], iterable: Iterable[_T]
652
+ ) -> int: ...
653
+ def permutation_index(
654
+ element: Iterable[_T], iterable: Iterable[_T]
655
+ ) -> int: ...
656
+ def repeat_each(iterable: Iterable[_T], n: int = ...) -> Iterator[_T]: ...
657
+
658
+ class countable(Generic[_T], Iterator[_T]):
659
+ def __init__(self, iterable: Iterable[_T]) -> None: ...
660
+ def __iter__(self) -> countable[_T]: ...
661
+ def __next__(self) -> _T: ...
662
+ items_seen: int
663
+
664
+ def chunked_even(iterable: Iterable[_T], n: int) -> Iterator[list[_T]]: ...
665
+ @overload
666
+ def zip_broadcast(
667
+ __obj1: _T | Iterable[_T],
668
+ *,
669
+ scalar_types: _ClassInfo | None = ...,
670
+ strict: bool = ...,
671
+ ) -> Iterable[tuple[_T, ...]]: ...
672
+ @overload
673
+ def zip_broadcast(
674
+ __obj1: _T | Iterable[_T],
675
+ __obj2: _T | Iterable[_T],
676
+ *,
677
+ scalar_types: _ClassInfo | None = ...,
678
+ strict: bool = ...,
679
+ ) -> Iterable[tuple[_T, ...]]: ...
680
+ @overload
681
+ def zip_broadcast(
682
+ __obj1: _T | Iterable[_T],
683
+ __obj2: _T | Iterable[_T],
684
+ __obj3: _T | Iterable[_T],
685
+ *,
686
+ scalar_types: _ClassInfo | None = ...,
687
+ strict: bool = ...,
688
+ ) -> Iterable[tuple[_T, ...]]: ...
689
+ @overload
690
+ def zip_broadcast(
691
+ __obj1: _T | Iterable[_T],
692
+ __obj2: _T | Iterable[_T],
693
+ __obj3: _T | Iterable[_T],
694
+ __obj4: _T | Iterable[_T],
695
+ *,
696
+ scalar_types: _ClassInfo | None = ...,
697
+ strict: bool = ...,
698
+ ) -> Iterable[tuple[_T, ...]]: ...
699
+ @overload
700
+ def zip_broadcast(
701
+ __obj1: _T | Iterable[_T],
702
+ __obj2: _T | Iterable[_T],
703
+ __obj3: _T | Iterable[_T],
704
+ __obj4: _T | Iterable[_T],
705
+ __obj5: _T | Iterable[_T],
706
+ *,
707
+ scalar_types: _ClassInfo | None = ...,
708
+ strict: bool = ...,
709
+ ) -> Iterable[tuple[_T, ...]]: ...
710
+ @overload
711
+ def zip_broadcast(
712
+ __obj1: _T | Iterable[_T],
713
+ __obj2: _T | Iterable[_T],
714
+ __obj3: _T | Iterable[_T],
715
+ __obj4: _T | Iterable[_T],
716
+ __obj5: _T | Iterable[_T],
717
+ __obj6: _T | Iterable[_T],
718
+ *objects: _T | Iterable[_T],
719
+ scalar_types: _ClassInfo | None = ...,
720
+ strict: bool = ...,
721
+ ) -> Iterable[tuple[_T, ...]]: ...
722
+ def unique_in_window(
723
+ iterable: Iterable[_T], n: int, key: Callable[[_T], _U] | None = ...
724
+ ) -> Iterator[_T]: ...
725
+ def duplicates_everseen(
726
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
727
+ ) -> Iterator[_T]: ...
728
+ def duplicates_justseen(
729
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
730
+ ) -> Iterator[_T]: ...
731
+ def classify_unique(
732
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
733
+ ) -> Iterator[tuple[_T, bool, bool]]: ...
734
+
735
+ class _SupportsLessThan(Protocol):
736
+ def __lt__(self, __other: Any) -> bool: ...
737
+
738
+ _SupportsLessThanT = TypeVar("_SupportsLessThanT", bound=_SupportsLessThan)
739
+
740
+ @overload
741
+ def minmax(
742
+ iterable_or_value: Iterable[_SupportsLessThanT], *, key: None = None
743
+ ) -> tuple[_SupportsLessThanT, _SupportsLessThanT]: ...
744
+ @overload
745
+ def minmax(
746
+ iterable_or_value: Iterable[_T], *, key: Callable[[_T], _SupportsLessThan]
747
+ ) -> tuple[_T, _T]: ...
748
+ @overload
749
+ def minmax(
750
+ iterable_or_value: Iterable[_SupportsLessThanT],
751
+ *,
752
+ key: None = None,
753
+ default: _U,
754
+ ) -> _U | tuple[_SupportsLessThanT, _SupportsLessThanT]: ...
755
+ @overload
756
+ def minmax(
757
+ iterable_or_value: Iterable[_T],
758
+ *,
759
+ key: Callable[[_T], _SupportsLessThan],
760
+ default: _U,
761
+ ) -> _U | tuple[_T, _T]: ...
762
+ @overload
763
+ def minmax(
764
+ iterable_or_value: _SupportsLessThanT,
765
+ __other: _SupportsLessThanT,
766
+ *others: _SupportsLessThanT,
767
+ ) -> tuple[_SupportsLessThanT, _SupportsLessThanT]: ...
768
+ @overload
769
+ def minmax(
770
+ iterable_or_value: _T,
771
+ __other: _T,
772
+ *others: _T,
773
+ key: Callable[[_T], _SupportsLessThan],
774
+ ) -> tuple[_T, _T]: ...
775
+ def longest_common_prefix(
776
+ iterables: Iterable[Iterable[_T]],
777
+ ) -> Iterator[_T]: ...
778
+ def iequals(*iterables: Iterable[Any]) -> bool: ...
779
+ def constrained_batches(
780
+ iterable: Iterable[_T],
781
+ max_size: int,
782
+ max_count: int | None = ...,
783
+ get_len: Callable[[_T], object] = ...,
784
+ strict: bool = ...,
785
+ ) -> Iterator[tuple[_T]]: ...
786
+ def gray_product(*iterables: Iterable[_T]) -> Iterator[tuple[_T, ...]]: ...
787
+ def partial_product(*iterables: Iterable[_T]) -> Iterator[tuple[_T, ...]]: ...
788
+ def takewhile_inclusive(
789
+ predicate: Callable[[_T], bool], iterable: Iterable[_T]
790
+ ) -> Iterator[_T]: ...
791
+ def outer_product(
792
+ func: Callable[[_T, _U], _V],
793
+ xs: Iterable[_T],
794
+ ys: Iterable[_U],
795
+ *args: Any,
796
+ **kwargs: Any,
797
+ ) -> Iterator[tuple[_V, ...]]: ...
798
+ def iter_suppress(
799
+ iterable: Iterable[_T],
800
+ *exceptions: Type[BaseException],
801
+ ) -> Iterator[_T]: ...
802
+ def filter_map(
803
+ func: Callable[[_T], _V | None],
804
+ iterable: Iterable[_T],
805
+ ) -> Iterator[_V]: ...
806
+ def powerset_of_sets(iterable: Iterable[_T]) -> Iterator[set[_T]]: ...
807
+ def join_mappings(
808
+ **field_to_map: Mapping[_T, _V]
809
+ ) -> dict[_T, dict[str, _V]]: ...
810
+ def doublestarmap(
811
+ func: Callable[..., _T],
812
+ iterable: Iterable[Mapping[str, Any]],
813
+ ) -> Iterator[_T]: ...
814
+ def dft(xarr: Sequence[complex]) -> Iterator[complex]: ...
815
+ def idft(Xarr: Sequence[complex]) -> Iterator[complex]: ...
minigpt2/lib/python3.10/site-packages/more_itertools/py.typed ADDED
File without changes
minigpt2/lib/python3.10/site-packages/more_itertools/recipes.py ADDED
@@ -0,0 +1,1075 @@
1
+ """Imported from the recipes section of the itertools documentation.
2
+
3
+ All functions taken from the recipes section of the itertools library docs
4
+ [1]_.
5
+ Some backward-compatible usability improvements have been made.
6
+
7
+ .. [1] http://docs.python.org/library/itertools.html#recipes
8
+
9
+ """
10
+
11
+ import math
12
+ import operator
13
+
14
+ from collections import deque
15
+ from collections.abc import Sized
16
+ from functools import partial, reduce
17
+ from itertools import (
18
+ chain,
19
+ combinations,
20
+ compress,
21
+ count,
22
+ cycle,
23
+ groupby,
24
+ islice,
25
+ product,
26
+ repeat,
27
+ starmap,
28
+ tee,
29
+ zip_longest,
30
+ )
31
+ from random import randrange, sample, choice
32
+ from sys import hexversion
33
+
34
+ __all__ = [
35
+ 'all_equal',
36
+ 'batched',
37
+ 'before_and_after',
38
+ 'consume',
39
+ 'convolve',
40
+ 'dotproduct',
41
+ 'first_true',
42
+ 'factor',
43
+ 'flatten',
44
+ 'grouper',
45
+ 'iter_except',
46
+ 'iter_index',
47
+ 'matmul',
48
+ 'ncycles',
49
+ 'nth',
50
+ 'nth_combination',
51
+ 'padnone',
52
+ 'pad_none',
53
+ 'pairwise',
54
+ 'partition',
55
+ 'polynomial_eval',
56
+ 'polynomial_from_roots',
57
+ 'polynomial_derivative',
58
+ 'powerset',
59
+ 'prepend',
60
+ 'quantify',
61
+ 'reshape',
62
+ 'random_combination_with_replacement',
63
+ 'random_combination',
64
+ 'random_permutation',
65
+ 'random_product',
66
+ 'repeatfunc',
67
+ 'roundrobin',
68
+ 'sieve',
69
+ 'sliding_window',
70
+ 'subslices',
71
+ 'sum_of_squares',
72
+ 'tabulate',
73
+ 'tail',
74
+ 'take',
75
+ 'totient',
76
+ 'transpose',
77
+ 'triplewise',
78
+ 'unique',
79
+ 'unique_everseen',
80
+ 'unique_justseen',
81
+ ]
82
+
83
+ _marker = object()
84
+
85
+
86
+ # zip with strict is available for Python 3.10+
87
+ try:
88
+ zip(strict=True)
89
+ except TypeError:
90
+ _zip_strict = zip
91
+ else:
92
+ _zip_strict = partial(zip, strict=True)
93
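The probe above runs once at import time: calling `zip(strict=True)` with no iterables is a harmless no-op on Python 3.10+ and raises `TypeError` on older versions. The same try/except feature-detection pattern in isolation:

```python
from functools import partial

try:
    zip(strict=True)            # no-op probe; TypeError before 3.10
except TypeError:
    zip_strict = zip            # fallback: silently truncating zip
else:
    zip_strict = partial(zip, strict=True)

pairs = list(zip_strict([1, 2, 3], 'abc'))
```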
+
94
+ # math.sumprod is available for Python 3.12+
95
+ _sumprod = getattr(math, 'sumprod', lambda x, y: dotproduct(x, y))
96
+
97
+
98
+ def take(n, iterable):
99
+ """Return first *n* items of the iterable as a list.
100
+
101
+ >>> take(3, range(10))
102
+ [0, 1, 2]
103
+
104
+ If there are fewer than *n* items in the iterable, all of them are
105
+ returned.
106
+
107
+ >>> take(10, range(3))
108
+ [0, 1, 2]
109
+
110
+ """
111
+ return list(islice(iterable, n))
112
+
113
+
114
+ def tabulate(function, start=0):
115
+ """Return an iterator over the results of ``func(start)``,
116
+ ``func(start + 1)``, ``func(start + 2)``...
117
+
118
+ *func* should be a function that accepts one integer argument.
119
+
120
+ If *start* is not specified it defaults to 0. It will be incremented each
121
+ time the iterator is advanced.
122
+
123
+ >>> square = lambda x: x ** 2
124
+ >>> iterator = tabulate(square, -3)
125
+ >>> take(4, iterator)
126
+ [9, 4, 1, 0]
127
+
128
+ """
129
+ return map(function, count(start))
130
+
131
+
132
+ def tail(n, iterable):
133
+ """Return an iterator over the last *n* items of *iterable*.
134
+
135
+ >>> t = tail(3, 'ABCDEFG')
136
+ >>> list(t)
137
+ ['E', 'F', 'G']
138
+
139
+ """
140
+ # If the given iterable has a length, then we can use islice to get its
141
+ # final elements. Note that if the iterable is not actually Iterable,
142
+ # either islice or deque will throw a TypeError. This is why we don't
143
+ # check if it is Iterable.
144
+ if isinstance(iterable, Sized):
145
+ yield from islice(iterable, max(0, len(iterable) - n), None)
146
+ else:
147
+ yield from iter(deque(iterable, maxlen=n))
148
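`tail` picks its strategy at run time: `len`-aware inputs get an `islice` over the final window, while plain iterators fall back to a bounded `deque`. A compressed sketch (hypothetical `tail_sketch`) that uses the `TypeError` from `len` itself as the dispatch instead of an `isinstance` check:

```python
from collections import deque
from itertools import islice

def tail_sketch(n, iterable):
    try:
        start = max(0, len(iterable) - n)     # only Sized inputs have len()
    except TypeError:
        yield from deque(iterable, maxlen=n)  # bounded buffer keeps last n
    else:
        yield from islice(iterable, start, None)


from_sized = list(tail_sketch(3, 'ABCDEFG'))           # islice path
from_iterator = list(tail_sketch(3, iter('ABCDEFG')))  # deque path
```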
+
149
+
150
+ def consume(iterator, n=None):
151
+ """Advance *iterator* by *n* steps. If *n* is ``None``, consume it
152
+ entirely.
153
+
154
+ Efficiently exhausts an iterator without returning values. Defaults to
155
+ consuming the whole iterator, but an optional second argument may be
156
+ provided to limit consumption.
157
+
158
+ >>> i = (x for x in range(10))
159
+ >>> next(i)
160
+ 0
161
+ >>> consume(i, 3)
162
+ >>> next(i)
163
+ 4
164
+ >>> consume(i)
165
+ >>> next(i)
166
+ Traceback (most recent call last):
167
+ File "<stdin>", line 1, in <module>
168
+ StopIteration
169
+
170
+ If the iterator has fewer items remaining than the provided limit, the
171
+ whole iterator will be consumed.
172
+
173
+ >>> i = (x for x in range(3))
174
+ >>> consume(i, 5)
175
+ >>> next(i)
176
+ Traceback (most recent call last):
177
+ File "<stdin>", line 1, in <module>
178
+ StopIteration
179
+
180
+ """
181
+ # Use functions that consume iterators at C speed.
182
+ if n is None:
183
+ # feed the entire iterator into a zero-length deque
184
+ deque(iterator, maxlen=0)
185
+ else:
186
+ # advance to the empty slice starting at position n
187
+ next(islice(iterator, n, n), None)
188
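The `islice(iterator, n, n)` call above is an empty slice positioned at index *n*: producing it forces the first *n* items to be consumed at C speed, and `next(..., None)` finishes it off without risking `StopIteration`. In isolation:

```python
from itertools import islice

it = iter(range(10))
next(islice(it, 3, 3), None)  # consumes 0, 1, 2; yields nothing itself
remaining = list(it)
```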
+
189
+
190
+ def nth(iterable, n, default=None):
191
+ """Returns the nth item or a default value.
192
+
193
+ >>> l = range(10)
194
+ >>> nth(l, 3)
195
+ 3
196
+ >>> nth(l, 20, "zebra")
197
+ 'zebra'
198
+
199
+ """
200
+ return next(islice(iterable, n, None), default)
201
+
202
+
203
+ def all_equal(iterable, key=None):
204
+ """
205
+ Returns ``True`` if all the elements are equal to each other.
206
+
207
+ >>> all_equal('aaaa')
208
+ True
209
+ >>> all_equal('aaab')
210
+ False
211
+
212
+ A function that accepts a single argument and returns a transformed version
213
+ of each input item can be specified with *key*:
214
+
215
+ >>> all_equal('AaaA', key=str.casefold)
216
+ True
217
+ >>> all_equal([1, 2, 3], key=lambda x: x < 10)
218
+ True
219
+
220
+ """
221
+ iterator = groupby(iterable, key)
222
+ for first in iterator:
223
+ for second in iterator:
224
+ return False
225
+ return True
226
+ return True
227
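The `groupby` trick above works because `groupby` emits one group per run of equal (keyed) elements, so an all-equal input produces at most one group. The same idea written with explicit `next` calls (hypothetical `all_equal_sketch`):

```python
from itertools import groupby

def all_equal_sketch(iterable, key=None):
    groups = groupby(iterable, key)
    if next(groups, None) is None:
        return True                    # empty input is vacuously equal
    return next(groups, None) is None  # equal iff there is no second group
```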
+
228
+
229
+ def quantify(iterable, pred=bool):
230
+ """Return how many times the predicate is true.
231
+
232
+ >>> quantify([True, False, True])
233
+ 2
234
+
235
+ """
236
+ return sum(map(pred, iterable))
237
+
238
+
239
+ def pad_none(iterable):
240
+ """Returns the sequence of elements and then returns ``None`` indefinitely.
241
+
242
+ >>> take(5, pad_none(range(3)))
243
+ [0, 1, 2, None, None]
244
+
245
+ Useful for emulating the behavior of the built-in :func:`map` function.
246
+
247
+ See also :func:`padded`.
248
+
249
+ """
250
+ return chain(iterable, repeat(None))
251
+
252
+
253
+ padnone = pad_none
254
+
255
+
256
+ def ncycles(iterable, n):
257
+ """Returns the sequence elements *n* times
258
+
259
+ >>> list(ncycles(["a", "b"], 3))
260
+ ['a', 'b', 'a', 'b', 'a', 'b']
261
+
262
+ """
263
+ return chain.from_iterable(repeat(tuple(iterable), n))
264
+
265
+
266
+ def dotproduct(vec1, vec2):
267
+ """Returns the dot product of the two iterables.
268
+
269
+ >>> dotproduct([10, 10], [20, 20])
270
+ 400
271
+
272
+ """
273
+ return sum(map(operator.mul, vec1, vec2))
274
+
275
+
276
+ def flatten(listOfLists):
277
+ """Return an iterator flattening one level of nesting in a list of lists.
278
+
279
+ >>> list(flatten([[0, 1], [2, 3]]))
280
+ [0, 1, 2, 3]
281
+
282
+ See also :func:`collapse`, which can flatten multiple levels of nesting.
283
+
284
+ """
285
+ return chain.from_iterable(listOfLists)
286
+
287
+
288
+ def repeatfunc(func, times=None, *args):
289
+ """Call *func* with *args* repeatedly, returning an iterable over the
290
+ results.
291
+
292
+ If *times* is specified, the iterable will terminate after that many
293
+ repetitions:
294
+
295
+ >>> from operator import add
296
+ >>> times = 4
297
+ >>> args = 3, 5
298
+ >>> list(repeatfunc(add, times, *args))
299
+ [8, 8, 8, 8]
300
+
301
+ If *times* is ``None`` the iterable will not terminate:
302
+
303
+ >>> from random import randrange
304
+ >>> times = None
305
+ >>> args = 1, 11
306
+ >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP
307
+ [2, 4, 8, 1, 8, 4]
308
+
309
+ """
310
+ if times is None:
311
+ return starmap(func, repeat(args))
312
+ return starmap(func, repeat(args, times))
313
+
314
+
315
+ def _pairwise(iterable):
316
+ """Returns an iterator of paired items, overlapping, from the original
317
+
318
+ >>> take(4, pairwise(count()))
319
+ [(0, 1), (1, 2), (2, 3), (3, 4)]
320
+
321
+ On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`.
322
+
323
+ """
324
+ a, b = tee(iterable)
325
+ next(b, None)
326
+ return zip(a, b)
327
+
328
+
329
+ try:
330
+ from itertools import pairwise as itertools_pairwise
331
+ except ImportError:
332
+ pairwise = _pairwise
333
+ else:
334
+
335
+ def pairwise(iterable):
336
+ return itertools_pairwise(iterable)
337
+
338
+ pairwise.__doc__ = _pairwise.__doc__
339
+
340
+
341
+ class UnequalIterablesError(ValueError):
342
+ def __init__(self, details=None):
343
+ msg = 'Iterables have different lengths'
344
+ if details is not None:
345
+ msg += (': index 0 has length {}; index {} has length {}').format(
346
+ *details
347
+ )
348
+
349
+ super().__init__(msg)
350
+
351
+
352
+ def _zip_equal_generator(iterables):
353
+ for combo in zip_longest(*iterables, fillvalue=_marker):
354
+ for val in combo:
355
+ if val is _marker:
356
+ raise UnequalIterablesError()
357
+ yield combo
358
+
359
+
360
+ def _zip_equal(*iterables):
361
+ # Check whether the iterables are all the same size.
362
+ try:
363
+ first_size = len(iterables[0])
364
+ for i, it in enumerate(iterables[1:], 1):
365
+ size = len(it)
366
+ if size != first_size:
367
+ raise UnequalIterablesError(details=(first_size, i, size))
368
+ # All sizes are equal, we can use the built-in zip.
369
+ return zip(*iterables)
370
+ # If any one of the iterables didn't have a length, start reading
371
+ # them until one runs out.
372
+ except TypeError:
373
+ return _zip_equal_generator(iterables)
374
+
375
+
376
+ def grouper(iterable, n, incomplete='fill', fillvalue=None):
377
+ """Group elements from *iterable* into fixed-length groups of length *n*.
378
+
379
+ >>> list(grouper('ABCDEF', 3))
380
+ [('A', 'B', 'C'), ('D', 'E', 'F')]
381
+
382
+ The keyword arguments *incomplete* and *fillvalue* control what happens for
383
+ iterables whose length is not a multiple of *n*.
384
+
385
+ When *incomplete* is `'fill'`, the last group will contain instances of
386
+ *fillvalue*.
387
+
388
+ >>> list(grouper('ABCDEFG', 3, incomplete='fill', fillvalue='x'))
389
+ [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
390
+
391
+ When *incomplete* is `'ignore'`, the last group will not be emitted.
392
+
393
+ >>> list(grouper('ABCDEFG', 3, incomplete='ignore', fillvalue='x'))
394
+ [('A', 'B', 'C'), ('D', 'E', 'F')]
395
+
396
+ When *incomplete* is `'strict'`, a subclass of `ValueError` will be raised.
397
+
398
+ >>> it = grouper('ABCDEFG', 3, incomplete='strict')
399
+ >>> list(it) # doctest: +IGNORE_EXCEPTION_DETAIL
400
+ Traceback (most recent call last):
401
+ ...
402
+ UnequalIterablesError
403
+
404
+ """
405
+ args = [iter(iterable)] * n
406
+ if incomplete == 'fill':
407
+ return zip_longest(*args, fillvalue=fillvalue)
408
+ if incomplete == 'strict':
409
+ return _zip_equal(*args)
410
+ if incomplete == 'ignore':
411
+ return zip(*args)
412
+ else:
413
+ raise ValueError('Expected fill, strict, or ignore')
414
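`grouper`'s core trick is `[iter(iterable)] * n`: *n* references to one shared iterator, so each output tuple pulls *n* consecutive items. Demonstrated directly, using plain `zip` (the `'ignore'` behavior):

```python
it = iter('ABCDEFG')
args = [it] * 3              # three aliases of the SAME iterator
chunks = list(zip(*args))    # each tuple advances that iterator 3 times
# 'G' is dropped: zip stops once the shared iterator can't fill a tuple
```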
+
415
+
416
+ def roundrobin(*iterables):
417
+ """Yields an item from each iterable, alternating between them.
418
+
419
+ >>> list(roundrobin('ABC', 'D', 'EF'))
420
+ ['A', 'D', 'E', 'B', 'F', 'C']
421
+
422
+ This function produces the same output as :func:`interleave_longest`, but
423
+ may perform better for some inputs (in particular when the number of
424
+ iterables is small).
425
+
426
+ """
427
+ # Algorithm credited to George Sakkis
428
+ iterators = map(iter, iterables)
429
+ for num_active in range(len(iterables), 0, -1):
430
+ iterators = cycle(islice(iterators, num_active))
431
+ yield from map(next, iterators)
432
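As a point of comparison with the Sakkis shrinking-cycle version above, the same output can be produced with a rotating deque of live iterators, dropping each one as it is exhausted. This is an alternative sketch, not the module's implementation:

```python
from collections import deque

def roundrobin_deque(*iterables):
    active = deque(iter(it) for it in iterables)
    while active:
        iterator = active.popleft()
        try:
            item = next(iterator)
        except StopIteration:
            continue                 # exhausted: drop it from the rotation
        active.append(iterator)      # still live: move to back of the queue
        yield item


interleaved = list(roundrobin_deque('ABC', 'D', 'EF'))
```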
+
433
+
434
+ def partition(pred, iterable):
435
+ """
436
+ Returns a 2-tuple of iterables derived from the input iterable.
437
+ The first yields the items that have ``pred(item) == False``.
438
+ The second yields the items that have ``pred(item) == True``.
439
+
440
+ >>> is_odd = lambda x: x % 2 != 0
441
+ >>> iterable = range(10)
442
+ >>> even_items, odd_items = partition(is_odd, iterable)
443
+ >>> list(even_items), list(odd_items)
444
+ ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9])
445
+
446
+ If *pred* is None, :func:`bool` is used.
447
+
448
+ >>> iterable = [0, 1, False, True, '', ' ']
449
+ >>> false_items, true_items = partition(None, iterable)
450
+ >>> list(false_items), list(true_items)
451
+ ([0, False, ''], [1, True, ' '])
452
+
453
+ """
454
+ if pred is None:
455
+ pred = bool
456
+
457
+ t1, t2, p = tee(iterable, 3)
458
+ p1, p2 = tee(map(pred, p))
459
+ return (compress(t1, map(operator.not_, p1)), compress(t2, p2))
460
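`partition` leans on `itertools.compress`, which keeps data items wherever the matching selector is truthy; the predicate stream is computed once and `tee`'d so it can mask both outputs. `compress` on its own:

```python
import operator
from itertools import compress

data = range(10)
flags = [x % 2 for x in data]                        # 1 for odd, 0 for even
odds = list(compress(range(10), flags))
evens = list(compress(range(10), map(operator.not_, flags)))
```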
+
461
+
462
+ def powerset(iterable):
463
+ """Yields all possible subsets of the iterable.
464
+
465
+ >>> list(powerset([1, 2, 3]))
466
+ [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
467
+
468
+ :func:`powerset` will operate on iterables that aren't :class:`set`
469
+ instances, so repeated elements in the input will produce repeated elements
470
+ in the output.
471
+
472
+ >>> seq = [1, 1, 0]
473
+ >>> list(powerset(seq))
474
+ [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)]
475
+
476
+ For a variant that efficiently yields actual :class:`set` instances, see
477
+ :func:`powerset_of_sets`.
478
+ """
479
+ s = list(iterable)
480
+ return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
481
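The one-liner above chains `combinations(s, r)` for every size `r` from 0 through `len(s)`, which is why the output is ordered by subset size and has `2 ** len(s)` entries:

```python
from itertools import chain, combinations

s = [1, 2, 3]
subsets = list(
    chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
)
# 8 subsets of a 3-element input, from () up to (1, 2, 3)
```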
+
482
+
483
+ def unique_everseen(iterable, key=None):
484
+ """
485
+ Yield unique elements, preserving order.
486
+
487
+ >>> list(unique_everseen('AAAABBBCCDAABBB'))
488
+ ['A', 'B', 'C', 'D']
489
+ >>> list(unique_everseen('ABBCcAD', str.lower))
490
+ ['A', 'B', 'C', 'D']
491
+
492
+ Sequences with a mix of hashable and unhashable items can be used.
493
+ The function will be slower (i.e., `O(n^2)`) for unhashable items.
494
+
495
+ Remember that ``list`` objects are unhashable - you can use the *key*
496
+ parameter to transform the list to a tuple (which is hashable) to
497
+ avoid a slowdown.
498
+
499
+ >>> iterable = ([1, 2], [2, 3], [1, 2])
500
+ >>> list(unique_everseen(iterable)) # Slow
501
+ [[1, 2], [2, 3]]
502
+ >>> list(unique_everseen(iterable, key=tuple)) # Faster
503
+ [[1, 2], [2, 3]]
504
+
505
+ Similarly, you may want to convert unhashable ``set`` objects with
506
+ ``key=frozenset``. For ``dict`` objects,
507
+ ``key=lambda x: frozenset(x.items())`` can be used.
508
+
509
+ """
510
+ seenset = set()
511
+ seenset_add = seenset.add
512
+ seenlist = []
513
+ seenlist_add = seenlist.append
514
+ use_key = key is not None
515
+
516
+ for element in iterable:
517
+ k = key(element) if use_key else element
518
+ try:
519
+ if k not in seenset:
520
+ seenset_add(k)
521
+ yield element
522
+ except TypeError:
523
+ if k not in seenlist:
524
+ seenlist_add(k)
525
+ yield element
526
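The try/except in the loop above is what lets mixed hashable/unhashable inputs work: the set membership test raises `TypeError` for unhashable keys, which then take the slower linear list scan instead. A condensed sketch (hypothetical `unique_everseen_sketch`):

```python
def unique_everseen_sketch(iterable, key=None):
    # Dual bookkeeping: a set for hashable keys, a list fallback for the rest.
    seen_set, seen_list = set(), []
    for element in iterable:
        k = element if key is None else key(element)
        try:
            if k in seen_set:
                continue
            seen_set.add(k)
        except TypeError:           # unhashable key: O(n) membership test
            if k in seen_list:
                continue
            seen_list.append(k)
        yield element


deduped_lists = list(unique_everseen_sketch([[1, 2], [2, 3], [1, 2]]))  # list scan
deduped_chars = list(unique_everseen_sketch('AAAABBBCCDAABBB'))         # set path
```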
+
527
+
528
+ def unique_justseen(iterable, key=None):
529
+ """Yields elements in order, ignoring serial duplicates
530
+
531
+ >>> list(unique_justseen('AAAABBBCCDAABBB'))
532
+ ['A', 'B', 'C', 'D', 'A', 'B']
533
+ >>> list(unique_justseen('ABBCcAD', str.lower))
534
+ ['A', 'B', 'C', 'A', 'D']
535
+
536
+ """
537
+ if key is None:
538
+ return map(operator.itemgetter(0), groupby(iterable))
539
+
540
+ return map(next, map(operator.itemgetter(1), groupby(iterable, key)))
541
+
542
+
543
+ def unique(iterable, key=None, reverse=False):
544
+ """Yields unique elements in sorted order.
545
+
546
+ >>> list(unique([[1, 2], [3, 4], [1, 2]]))
547
+ [[1, 2], [3, 4]]
548
+
549
+ *key* and *reverse* are passed to :func:`sorted`.
550
+
551
+ >>> list(unique('ABBcCAD', str.casefold))
552
+ ['A', 'B', 'c', 'D']
553
+ >>> list(unique('ABBcCAD', str.casefold, reverse=True))
554
+ ['D', 'c', 'B', 'A']
555
+
556
+ The elements in *iterable* need not be hashable, but they must be
557
+ comparable for sorting to work.
558
+ """
559
+ return unique_justseen(sorted(iterable, key=key, reverse=reverse), key=key)
560
+
561
+
562
+ def iter_except(func, exception, first=None):
563
+ """Yields results from a function repeatedly until an exception is raised.
564
+
565
+ Converts a call-until-exception interface to an iterator interface.
566
+ Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel
567
+ to end the loop.
568
+
569
+ >>> l = [0, 1, 2]
570
+ >>> list(iter_except(l.pop, IndexError))
571
+ [2, 1, 0]
572
+
573
+ Multiple exceptions can be specified as a stopping condition:
574
+
575
+ >>> l = [1, 2, 3, '...', 4, 5, 6]
576
+ >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
577
+ [7, 6, 5]
578
+ >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
579
+ [4, 3, 2]
580
+ >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
581
+ []
582
+
583
+ """
584
+ try:
585
+ if first is not None:
586
+ yield first()
587
+ while 1:
588
+ yield func()
589
+ except exception:
590
+ pass
591
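A condensed sketch of the same control flow (hypothetical `iter_except_sketch`, without the *first* hook): the exception type plays the role that the sentinel value plays in `iter(func, sentinel)`:

```python
def iter_except_sketch(func, exception):
    try:
        while True:
            yield func()    # call until *exception* ends the loop
    except exception:
        return


stack = [0, 1, 2]
drained = list(iter_except_sketch(stack.pop, IndexError))  # LIFO order
```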
+
592
+
593
+ def first_true(iterable, default=None, pred=None):
594
+ """
595
+ Returns the first true value in the iterable.
596
+
597
+ If no true value is found, returns *default*
598
+
599
+ If *pred* is not None, returns the first item for which
600
+ ``pred(item) == True`` .
601
+
602
+ >>> first_true(range(10))
603
+ 1
604
+ >>> first_true(range(10), pred=lambda x: x > 5)
605
+ 6
606
+ >>> first_true(range(10), default='missing', pred=lambda x: x > 9)
607
+ 'missing'
608
+
609
+ """
610
+ return next(filter(pred, iterable), default)
611
+
612
+
613
+ def random_product(*args, repeat=1):
614
+ """Draw an item at random from each of the input iterables.
615
+
616
+ >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP
617
+ ('c', 3, 'Z')
618
+
619
+ If *repeat* is provided as a keyword argument, that many items will be
620
+ drawn from each iterable.
621
+
622
+ >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP
623
+ ('a', 2, 'd', 3)
624
+
625
+ This is equivalent to taking a random selection from
626
+ ``itertools.product(*args, **kwarg)``.
627
+
628
+ """
629
+ pools = [tuple(pool) for pool in args] * repeat
630
+ return tuple(choice(pool) for pool in pools)
631
+
632
+
633
+ def random_permutation(iterable, r=None):
634
+ """Return a random *r* length permutation of the elements in *iterable*.
635
+
636
+ If *r* is not specified or is ``None``, then *r* defaults to the length of
637
+ *iterable*.
638
+
639
+ >>> random_permutation(range(5)) # doctest:+SKIP
640
+ (3, 4, 0, 1, 2)
641
+
642
+ This is equivalent to taking a random selection from
643
+ ``itertools.permutations(iterable, r)``.
644
+
645
+ """
646
+ pool = tuple(iterable)
647
+ r = len(pool) if r is None else r
648
+ return tuple(sample(pool, r))
649
+
650
+
651
+ def random_combination(iterable, r):
652
+ """Return a random *r* length subsequence of the elements in *iterable*.
653
+
654
+ >>> random_combination(range(5), 3) # doctest:+SKIP
655
+ (2, 3, 4)
656
+
657
+ This is equivalent to taking a random selection from
658
+ ``itertools.combinations(iterable, r)``.
659
+
660
+ """
661
+ pool = tuple(iterable)
662
+ n = len(pool)
663
+ indices = sorted(sample(range(n), r))
664
+ return tuple(pool[i] for i in indices)
665
+
666
+
667
+ def random_combination_with_replacement(iterable, r):
668
+ """Return a random *r* length subsequence of elements in *iterable*,
669
+ allowing individual elements to be repeated.
670
+
671
+ >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP
672
+ (0, 0, 1, 2, 2)
673
+
674
+ This is equivalent to taking a random selection from
675
+ ``itertools.combinations_with_replacement(iterable, r)``.
676
+
677
+ """
678
+ pool = tuple(iterable)
679
+ n = len(pool)
680
+ indices = sorted(randrange(n) for i in range(r))
681
+ return tuple(pool[i] for i in indices)
682
+
683
+
684
+ def nth_combination(iterable, r, index):
685
+ """Equivalent to ``list(combinations(iterable, r))[index]``.
686
+
687
+ The subsequences of *iterable* that are of length *r* can be ordered
688
+ lexicographically. :func:`nth_combination` computes the subsequence at
689
+ sort position *index* directly, without computing the previous
690
+ subsequences.
691
+
692
+ >>> nth_combination(range(5), 3, 5)
693
+ (0, 3, 4)
694
+
695
+ ``ValueError`` will be raised if *r* is negative or greater than the length
696
+ of *iterable*.
697
+ ``IndexError`` will be raised if the given *index* is invalid.
698
+ """
699
+ pool = tuple(iterable)
700
+ n = len(pool)
701
+ if (r < 0) or (r > n):
702
+ raise ValueError
703
+
704
+ c = 1
705
+ k = min(r, n - r)
706
+ for i in range(1, k + 1):
707
+ c = c * (n - k + i) // i
708
+
709
+ if index < 0:
710
+ index += c
711
+
712
+ if (index < 0) or (index >= c):
713
+ raise IndexError
714
+
715
+ result = []
716
+ while r:
717
+ c, n, r = c * r // n, n - 1, r - 1
718
+ while index >= c:
719
+ index -= c
720
+ c, n = c * (n - r) // n, n - 1
721
+ result.append(pool[-1 - n])
722
+
723
+ return tuple(result)
724
+
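As a sanity check, the direct-indexing algorithm above can be compared against brute-force enumeration. The sketch below duplicates the recipe so it runs on its own:

```python
from itertools import combinations

def nth_combination(iterable, r, index):
    # Standalone copy of the recipe above, for demonstration.
    pool = tuple(iterable)
    n = len(pool)
    if (r < 0) or (r > n):
        raise ValueError
    c = 1
    k = min(r, n - r)
    for i in range(1, k + 1):
        c = c * (n - k + i) // i
    if index < 0:
        index += c
    if (index < 0) or (index >= c):
        raise IndexError
    result = []
    while r:
        c, n, r = c * r // n, n - 1, r - 1
        while index >= c:
            index -= c
            c, n = c * (n - r) // n, n - 1
        result.append(pool[-1 - n])
    return tuple(result)

# Every index agrees with materializing the full combination list.
all_combos = list(combinations(range(5), 3))
assert all(nth_combination(range(5), 3, i) == all_combos[i]
           for i in range(len(all_combos)))
```

Negative indices work too, counting back from the last combination.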
725
+
726
+ def prepend(value, iterator):
727
+ """Yield *value*, followed by the elements in *iterator*.
728
+
729
+ >>> value = '0'
730
+ >>> iterator = ['1', '2', '3']
731
+ >>> list(prepend(value, iterator))
732
+ ['0', '1', '2', '3']
733
+
734
+ To prepend multiple values, see :func:`itertools.chain`
735
+ or :func:`value_chain`.
736
+
737
+ """
738
+ return chain([value], iterator)
739
+
740
+
741
+ def convolve(signal, kernel):
742
+ """Convolve the iterable *signal* with the iterable *kernel*.
743
+
744
+ >>> signal = (1, 2, 3, 4, 5)
745
+ >>> kernel = [3, 2, 1]
746
+ >>> list(convolve(signal, kernel))
747
+ [3, 8, 14, 20, 26, 14, 5]
748
+
749
+ Note: the input arguments are not interchangeable, as the *kernel*
750
+ is immediately consumed and stored.
751
+
752
+ """
753
+ # This implementation intentionally doesn't match the one in the itertools
754
+ # documentation.
755
+ kernel = tuple(kernel)[::-1]
756
+ n = len(kernel)
757
+ window = deque([0], maxlen=n) * n
758
+ for x in chain(signal, repeat(0, n - 1)):
759
+ window.append(x)
760
+ yield _sumprod(kernel, window)
761
+
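A standalone sketch of the recipe (with `sum(map(mul, ...))` standing in for the `_sumprod` helper used above) shows one classic application: convolving coefficient sequences multiplies polynomials.

```python
from collections import deque
from itertools import chain, repeat
from operator import mul

def convolve(signal, kernel):
    # Standalone copy of the recipe; sum(map(mul, ...)) replaces the
    # _sumprod helper so the sketch has no external dependencies.
    kernel = tuple(kernel)[::-1]
    n = len(kernel)
    window = deque([0], maxlen=n) * n
    for x in chain(signal, repeat(0, n - 1)):
        window.append(x)
        yield sum(map(mul, kernel, window))

# Polynomial multiplication: (x^2 + 2x + 3)(x + 1) = x^3 + 3x^2 + 5x + 3
assert list(convolve([1, 2, 3], [1, 1])) == [1, 3, 5, 3]
```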
762
+
763
+ def before_and_after(predicate, it):
764
+ """A variant of :func:`takewhile` that allows complete access to the
765
+ remainder of the iterator.
766
+
767
+ >>> it = iter('ABCdEfGhI')
768
+ >>> all_upper, remainder = before_and_after(str.isupper, it)
769
+ >>> ''.join(all_upper)
770
+ 'ABC'
771
+ >>> ''.join(remainder) # takewhile() would lose the 'd'
772
+ 'dEfGhI'
773
+
774
+ Note that the first iterator must be fully consumed before the second
775
+ iterator can generate valid results.
776
+ """
777
+ it = iter(it)
778
+ transition = []
779
+
780
+ def true_iterator():
781
+ for elem in it:
782
+ if predicate(elem):
783
+ yield elem
784
+ else:
785
+ transition.append(elem)
786
+ return
787
+
788
+ # Note: this is different from itertools recipes to allow nesting
789
+ # before_and_after remainders into before_and_after again. See tests
790
+ # for an example.
791
+ remainder_iterator = chain(transition, it)
792
+
793
+ return true_iterator(), remainder_iterator
794
+
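A typical use is splitting a leading header block from the body of a stream without losing the first body line, as in this standalone sketch of the recipe:

```python
from itertools import chain

def before_and_after(predicate, it):
    # Standalone copy of the recipe above.
    it = iter(it)
    transition = []

    def true_iterator():
        for elem in it:
            if predicate(elem):
                yield elem
            else:
                transition.append(elem)
                return

    return true_iterator(), chain(transition, it)

lines = iter(['# a', '# b', 'payload', '# trailing comment'])
header, body = before_and_after(lambda s: s.startswith('#'), lines)
assert list(header) == ['# a', '# b']
assert list(body) == ['payload', '# trailing comment']
```

Note the order of consumption: `header` must be exhausted before `body` yields correct results, as the docstring warns.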
795
+
796
+ def triplewise(iterable):
797
+ """Return overlapping triplets from *iterable*.
798
+
799
+ >>> list(triplewise('ABCDE'))
800
+ [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')]
801
+
802
+ """
803
+ # This deviates from the itertools documentation recipe - see
804
+ # https://github.com/more-itertools/more-itertools/issues/889
805
+ t1, t2, t3 = tee(iterable, 3)
806
+ next(t3, None)
807
+ next(t3, None)
808
+ next(t2, None)
809
+ return zip(t1, t2, t3)
810
+
811
+
812
+ def _sliding_window_islice(iterable, n):
813
+ # Fast path for small, non-zero values of n.
814
+ iterators = tee(iterable, n)
815
+ for i, iterator in enumerate(iterators):
816
+ next(islice(iterator, i, i), None)
817
+ return zip(*iterators)
818
+
819
+
820
+ def _sliding_window_deque(iterable, n):
821
+ # Normal path for other values of n.
822
+ it = iter(iterable)
823
+ window = deque(islice(it, n - 1), maxlen=n)
824
+ for x in it:
825
+ window.append(x)
826
+ yield tuple(window)
827
+
828
+
829
+ def sliding_window(iterable, n):
830
+ """Return a sliding window of width *n* over *iterable*.
831
+
832
+ >>> list(sliding_window(range(6), 4))
833
+ [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
834
+
835
+ If *iterable* has fewer than *n* items, then nothing is yielded:
836
+
837
+ >>> list(sliding_window(range(3), 4))
838
+ []
839
+
840
+ For a variant with more features, see :func:`windowed`.
841
+ """
842
+ if n > 20:
843
+ return _sliding_window_deque(iterable, n)
844
+ elif n > 2:
845
+ return _sliding_window_islice(iterable, n)
846
+ elif n == 2:
847
+ return pairwise(iterable)
848
+ elif n == 1:
849
+ return zip(iterable)
850
+ else:
851
+ raise ValueError(f'n should be at least one, not {n}')
852
+
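A common use of sliding windows is computing running statistics. The sketch below uses only the deque path (the recipe above adds fast paths for small `n`):

```python
from collections import deque
from itertools import islice

def sliding_window(iterable, n):
    # Deque-based variant, equivalent in output to the recipe above
    # for n >= 1 (the recipe adds islice/pairwise fast paths).
    it = iter(iterable)
    window = deque(islice(it, n - 1), maxlen=n)
    for x in it:
        window.append(x)
        yield tuple(window)

# A 3-point moving average over a stream.
data = [1, 2, 3, 4, 5]
averages = [sum(w) / 3 for w in sliding_window(data, 3)]
assert averages == [2.0, 3.0, 4.0]
```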
853
+
854
+ def subslices(iterable):
855
+ """Return all contiguous non-empty subslices of *iterable*.
856
+
857
+ >>> list(subslices('ABC'))
858
+ [['A'], ['A', 'B'], ['A', 'B', 'C'], ['B'], ['B', 'C'], ['C']]
859
+
860
+ This is similar to :func:`substrings`, but emits items in a different
861
+ order.
862
+ """
863
+ seq = list(iterable)
864
+ slices = starmap(slice, combinations(range(len(seq) + 1), 2))
865
+ return map(operator.getitem, repeat(seq), slices)
866
+
867
+
868
+ def polynomial_from_roots(roots):
869
+ """Compute a polynomial's coefficients from its roots.
870
+
871
+ >>> roots = [5, -4, 3] # (x - 5) * (x + 4) * (x - 3)
872
+ >>> polynomial_from_roots(roots) # x^3 - 4 * x^2 - 17 * x + 60
873
+ [1, -4, -17, 60]
874
+ """
875
+ factors = zip(repeat(1), map(operator.neg, roots))
876
+ return list(reduce(convolve, factors, [1]))
877
+
878
+
879
+ def iter_index(iterable, value, start=0, stop=None):
880
+ """Yield the index of each place in *iterable* that *value* occurs,
881
+ beginning with index *start* and ending before index *stop*.
882
+
883
+
884
+ >>> list(iter_index('AABCADEAF', 'A'))
885
+ [0, 1, 4, 7]
886
+ >>> list(iter_index('AABCADEAF', 'A', 1)) # start index is inclusive
887
+ [1, 4, 7]
888
+ >>> list(iter_index('AABCADEAF', 'A', 1, 7)) # stop index is not inclusive
889
+ [1, 4]
890
+
891
+ The behavior for non-scalar *values* matches the built-in Python types.
892
+
893
+ >>> list(iter_index('ABCDABCD', 'AB'))
894
+ [0, 4]
895
+ >>> list(iter_index([0, 1, 2, 3, 0, 1, 2, 3], [0, 1]))
896
+ []
897
+ >>> list(iter_index([[0, 1], [2, 3], [0, 1], [2, 3]], [0, 1]))
898
+ [0, 2]
899
+
900
+ See :func:`locate` for a more general means of finding the indexes
901
+ associated with particular values.
902
+
903
+ """
904
+ seq_index = getattr(iterable, 'index', None)
905
+ if seq_index is None:
906
+ # Slow path for general iterables
907
+ it = islice(iterable, start, stop)
908
+ for i, element in enumerate(it, start):
909
+ if element is value or element == value:
910
+ yield i
911
+ else:
912
+ # Fast path for sequences
913
+ stop = len(iterable) if stop is None else stop
914
+ i = start - 1
915
+ try:
916
+ while True:
917
+ yield (i := seq_index(value, i + 1, stop))
918
+ except ValueError:
919
+ pass
920
+
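The sequence fast path is worth seeing on its own: it repeatedly calls the sequence's `.index()` method, resuming one past the previous hit. This sketch covers only that path (the recipe also handles general iterables):

```python
def iter_index(iterable, value, start=0, stop=None):
    # Sequence fast path only, for demonstration; the full recipe
    # falls back to enumerate() for objects without .index().
    stop = len(iterable) if stop is None else stop
    i = start - 1
    try:
        while True:
            i = iterable.index(value, i + 1, stop)
            yield i
    except ValueError:
        pass

assert list(iter_index('AABCADEAF', 'A')) == [0, 1, 4, 7]
# str.index matches substrings, so non-scalar values work on strings:
assert list(iter_index('ABCDABCD', 'AB')) == [0, 4]
```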
921
+
922
+ def sieve(n):
923
+ """Yield the primes less than n.
924
+
925
+ >>> list(sieve(30))
926
+ [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
927
+ """
928
+ if n > 2:
929
+ yield 2
930
+ start = 3
931
+ data = bytearray((0, 1)) * (n // 2)
932
+ limit = math.isqrt(n) + 1
933
+ for p in iter_index(data, 1, start, limit):
934
+ yield from iter_index(data, 1, start, p * p)
935
+ data[p * p : n : p + p] = bytes(len(range(p * p, n, p + p)))
936
+ start = p * p
937
+ yield from iter_index(data, 1, start)
938
+
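For comparison, an equivalent but more conventional bytearray sieve (a sketch, not the recipe's `iter_index`-based formulation) produces the same primes:

```python
import math

def sieve_primes(n):
    # Classic Sieve of Eratosthenes over a bytearray of flags;
    # equivalent in output to the recipe's sieve(n).
    if n <= 2:
        return []
    data = bytearray([1]) * n
    data[0] = data[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if data[p]:
            # Clear every multiple of p starting at p*p.
            data[p * p : n : p] = bytes(len(range(p * p, n, p)))
    return [i for i, flag in enumerate(data) if flag]

assert sieve_primes(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The recipe above additionally skips even numbers and strides by `p + p`, halving the work.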
939
+
940
+ def _batched(iterable, n, *, strict=False):
941
+ """Batch data into tuples of length *n*. If the number of items in
942
+ *iterable* is not divisible by *n*:
943
+ * The last batch will be shorter if *strict* is ``False``.
944
+ * :exc:`ValueError` will be raised if *strict* is ``True``.
945
+
946
+ >>> list(batched('ABCDEFG', 3))
947
+ [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
948
+
949
+ On Python 3.13 and above, this is an alias for :func:`itertools.batched`.
950
+ """
951
+ if n < 1:
952
+ raise ValueError('n must be at least one')
953
+ it = iter(iterable)
954
+ while batch := tuple(islice(it, n)):
955
+ if strict and len(batch) != n:
956
+ raise ValueError('batched(): incomplete batch')
957
+ yield batch
958
+
959
+
960
+ if hexversion >= 0x30D00A2:
961
+ from itertools import batched as itertools_batched
962
+
963
+ def batched(iterable, n, *, strict=False):
964
+ return itertools_batched(iterable, n, strict=strict)
965
+
966
+ else:
967
+ batched = _batched
968
+
969
+ batched.__doc__ = _batched.__doc__
970
+
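The pre-3.13 fallback is short enough to demonstrate standalone, including the `strict` behavior:

```python
from itertools import islice

def batched(iterable, n, *, strict=False):
    # Standalone copy of the pre-3.13 fallback above.
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        if strict and len(batch) != n:
            raise ValueError('batched(): incomplete batch')
        yield batch

assert list(batched('ABCDEFG', 3)) == [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
try:
    list(batched('ABCDEFG', 3, strict=True))
except ValueError:
    pass
else:
    raise AssertionError('strict=True should reject the short final batch')
```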
971
+
972
+ def transpose(it):
973
+ """Swap the rows and columns of the input matrix.
974
+
975
+ >>> list(transpose([(1, 2, 3), (11, 22, 33)]))
976
+ [(1, 11), (2, 22), (3, 33)]
977
+
978
+ The caller should ensure that the dimensions of the input are compatible.
979
+ If the input is empty, no output will be produced.
980
+ """
981
+ return _zip_strict(*it)
982
+
983
+
984
+ def reshape(matrix, cols):
985
+ """Reshape the 2-D input *matrix* to have a column count given by *cols*.
986
+
987
+ >>> matrix = [(0, 1), (2, 3), (4, 5)]
988
+ >>> cols = 3
989
+ >>> list(reshape(matrix, cols))
990
+ [(0, 1, 2), (3, 4, 5)]
991
+ """
992
+ return batched(chain.from_iterable(matrix), cols)
993
+
994
+
995
+ def matmul(m1, m2):
996
+ """Multiply two matrices.
997
+
998
+ >>> list(matmul([(7, 5), (3, 5)], [(2, 5), (7, 9)]))
999
+ [(49, 80), (41, 60)]
1000
+
1001
+ The caller should ensure that the dimensions of the input matrices are
1002
+ compatible with each other.
1003
+ """
1004
+ n = len(m2[0])
1005
+ return batched(starmap(_sumprod, product(m1, transpose(m2))), n)
1006
+
1007
+
1008
+ def factor(n):
1009
+ """Yield the prime factors of n.
1010
+
1011
+ >>> list(factor(360))
1012
+ [2, 2, 2, 3, 3, 5]
1013
+ """
1014
+ for prime in sieve(math.isqrt(n) + 1):
1015
+ while not n % prime:
1016
+ yield prime
1017
+ n //= prime
1018
+ if n == 1:
1019
+ return
1020
+ if n > 1:
1021
+ yield n
1022
+
1023
+
1024
+ def polynomial_eval(coefficients, x):
1025
+ """Evaluate a polynomial at a specific value.
1026
+
1027
+ Example: evaluating x^3 - 4 * x^2 - 17 * x + 60 at x = 2.5:
1028
+
1029
+ >>> coefficients = [1, -4, -17, 60]
1030
+ >>> x = 2.5
1031
+ >>> polynomial_eval(coefficients, x)
1032
+ 8.125
1033
+ """
1034
+ n = len(coefficients)
1035
+ if n == 0:
1036
+ return x * 0 # coerce zero to the type of x
1037
+ powers = map(pow, repeat(x), reversed(range(n)))
1038
+ return _sumprod(coefficients, powers)
1039
+
1040
+
1041
+ def sum_of_squares(it):
1042
+ """Return the sum of the squares of the input values.
1043
+
1044
+ >>> sum_of_squares([10, 20, 30])
1045
+ 1400
1046
+ """
1047
+ return _sumprod(*tee(it))
1048
+
1049
+
1050
+ def polynomial_derivative(coefficients):
1051
+ """Compute the first derivative of a polynomial.
1052
+
1053
+ Example: evaluating the derivative of x^3 - 4 * x^2 - 17 * x + 60
1054
+
1055
+ >>> coefficients = [1, -4, -17, 60]
1056
+ >>> derivative_coefficients = polynomial_derivative(coefficients)
1057
+ >>> derivative_coefficients
1058
+ [3, -8, -17]
1059
+ """
1060
+ n = len(coefficients)
1061
+ powers = reversed(range(1, n))
1062
+ return list(map(operator.mul, coefficients, powers))
1063
+
1064
+
1065
+ def totient(n):
1066
+ """Return the count of natural numbers up to *n* that are coprime with *n*.
1067
+
1068
+ >>> totient(9)
1069
+ 6
1070
+ >>> totient(12)
1071
+ 4
1072
+ """
1073
+ for prime in set(factor(n)):
1074
+ n -= n // prime
1075
+ return n
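The totient recipe leans on `factor`: subtracting `n // p` for each distinct prime `p` applies Euler's product formula. A standalone sketch, with plain trial division standing in for the sieve-based `factor` above:

```python
def factor(n):
    # Trial-division factorization; same output as the recipe's
    # sieve-based version, without the sieve dependency.
    d = 2
    while d * d <= n:
        while n % d == 0:
            yield d
            n //= d
        d += 1
    if n > 1:
        yield n

def totient(n):
    # Euler's product formula over the distinct prime factors:
    # phi(n) = n * prod(1 - 1/p), applied as n -= n // p per prime.
    for prime in set(factor(n)):
        n -= n // prime
    return n

assert list(factor(360)) == [2, 2, 2, 3, 3, 5]
assert totient(9) == 6 and totient(12) == 4
```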
minigpt2/lib/python3.10/site-packages/more_itertools/recipes.pyi ADDED
@@ -0,0 +1,136 @@
1
+ """Stubs for more_itertools.recipes"""
2
+
3
+ from __future__ import annotations
4
+
5
+ from typing import (
6
+ Any,
7
+ Callable,
8
+ Iterable,
9
+ Iterator,
10
+ overload,
11
+ Sequence,
12
+ Type,
13
+ TypeVar,
14
+ )
15
+
16
+ # Type and type variable definitions
17
+ _T = TypeVar('_T')
18
+ _T1 = TypeVar('_T1')
19
+ _T2 = TypeVar('_T2')
20
+ _U = TypeVar('_U')
21
+
22
+ def take(n: int, iterable: Iterable[_T]) -> list[_T]: ...
23
+ def tabulate(
24
+ function: Callable[[int], _T], start: int = ...
25
+ ) -> Iterator[_T]: ...
26
+ def tail(n: int, iterable: Iterable[_T]) -> Iterator[_T]: ...
27
+ def consume(iterator: Iterable[_T], n: int | None = ...) -> None: ...
28
+ @overload
29
+ def nth(iterable: Iterable[_T], n: int) -> _T | None: ...
30
+ @overload
31
+ def nth(iterable: Iterable[_T], n: int, default: _U) -> _T | _U: ...
32
+ def all_equal(
33
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
34
+ ) -> bool: ...
35
+ def quantify(
36
+ iterable: Iterable[_T], pred: Callable[[_T], bool] = ...
37
+ ) -> int: ...
38
+ def pad_none(iterable: Iterable[_T]) -> Iterator[_T | None]: ...
39
+ def padnone(iterable: Iterable[_T]) -> Iterator[_T | None]: ...
40
+ def ncycles(iterable: Iterable[_T], n: int) -> Iterator[_T]: ...
41
+ def dotproduct(vec1: Iterable[_T1], vec2: Iterable[_T2]) -> Any: ...
42
+ def flatten(listOfLists: Iterable[Iterable[_T]]) -> Iterator[_T]: ...
43
+ def repeatfunc(
44
+ func: Callable[..., _U], times: int | None = ..., *args: Any
45
+ ) -> Iterator[_U]: ...
46
+ def pairwise(iterable: Iterable[_T]) -> Iterator[tuple[_T, _T]]: ...
47
+ def grouper(
48
+ iterable: Iterable[_T],
49
+ n: int,
50
+ incomplete: str = ...,
51
+ fillvalue: _U = ...,
52
+ ) -> Iterator[tuple[_T | _U, ...]]: ...
53
+ def roundrobin(*iterables: Iterable[_T]) -> Iterator[_T]: ...
54
+ def partition(
55
+ pred: Callable[[_T], object] | None, iterable: Iterable[_T]
56
+ ) -> tuple[Iterator[_T], Iterator[_T]]: ...
57
+ def powerset(iterable: Iterable[_T]) -> Iterator[tuple[_T, ...]]: ...
58
+ def unique_everseen(
59
+ iterable: Iterable[_T], key: Callable[[_T], _U] | None = ...
60
+ ) -> Iterator[_T]: ...
61
+ def unique_justseen(
62
+ iterable: Iterable[_T], key: Callable[[_T], object] | None = ...
63
+ ) -> Iterator[_T]: ...
64
+ def unique(
65
+ iterable: Iterable[_T],
66
+ key: Callable[[_T], object] | None = ...,
67
+ reverse: bool = False,
68
+ ) -> Iterator[_T]: ...
69
+ @overload
70
+ def iter_except(
71
+ func: Callable[[], _T],
72
+ exception: Type[BaseException] | tuple[Type[BaseException], ...],
73
+ first: None = ...,
74
+ ) -> Iterator[_T]: ...
75
+ @overload
76
+ def iter_except(
77
+ func: Callable[[], _T],
78
+ exception: Type[BaseException] | tuple[Type[BaseException], ...],
79
+ first: Callable[[], _U],
80
+ ) -> Iterator[_T | _U]: ...
81
+ @overload
82
+ def first_true(
83
+ iterable: Iterable[_T], *, pred: Callable[[_T], object] | None = ...
84
+ ) -> _T | None: ...
85
+ @overload
86
+ def first_true(
87
+ iterable: Iterable[_T],
88
+ default: _U,
89
+ pred: Callable[[_T], object] | None = ...,
90
+ ) -> _T | _U: ...
91
+ def random_product(
92
+ *args: Iterable[_T], repeat: int = ...
93
+ ) -> tuple[_T, ...]: ...
94
+ def random_permutation(
95
+ iterable: Iterable[_T], r: int | None = ...
96
+ ) -> tuple[_T, ...]: ...
97
+ def random_combination(iterable: Iterable[_T], r: int) -> tuple[_T, ...]: ...
98
+ def random_combination_with_replacement(
99
+ iterable: Iterable[_T], r: int
100
+ ) -> tuple[_T, ...]: ...
101
+ def nth_combination(
102
+ iterable: Iterable[_T], r: int, index: int
103
+ ) -> tuple[_T, ...]: ...
104
+ def prepend(value: _T, iterator: Iterable[_U]) -> Iterator[_T | _U]: ...
105
+ def convolve(signal: Iterable[_T], kernel: Iterable[_T]) -> Iterator[_T]: ...
106
+ def before_and_after(
107
+ predicate: Callable[[_T], bool], it: Iterable[_T]
108
+ ) -> tuple[Iterator[_T], Iterator[_T]]: ...
109
+ def triplewise(iterable: Iterable[_T]) -> Iterator[tuple[_T, _T, _T]]: ...
110
+ def sliding_window(
111
+ iterable: Iterable[_T], n: int
112
+ ) -> Iterator[tuple[_T, ...]]: ...
113
+ def subslices(iterable: Iterable[_T]) -> Iterator[list[_T]]: ...
114
+ def polynomial_from_roots(roots: Sequence[_T]) -> list[_T]: ...
115
+ def iter_index(
116
+ iterable: Iterable[_T],
117
+ value: Any,
118
+ start: int | None = ...,
119
+ stop: int | None = ...,
120
+ ) -> Iterator[int]: ...
121
+ def sieve(n: int) -> Iterator[int]: ...
122
+ def batched(
123
+ iterable: Iterable[_T], n: int, *, strict: bool = False
124
+ ) -> Iterator[tuple[_T]]: ...
125
+ def transpose(
126
+ it: Iterable[Iterable[_T]],
127
+ ) -> Iterator[tuple[_T, ...]]: ...
128
+ def reshape(
129
+ matrix: Iterable[Iterable[_T]], cols: int
130
+ ) -> Iterator[tuple[_T, ...]]: ...
131
+ def matmul(m1: Sequence[_T], m2: Sequence[_T]) -> Iterator[tuple[_T]]: ...
132
+ def factor(n: int) -> Iterator[int]: ...
133
+ def polynomial_eval(coefficients: Sequence[_T], x: _U) -> _U: ...
134
+ def sum_of_squares(it: Iterable[_T]) -> _T: ...
135
+ def polynomial_derivative(coefficients: Sequence[_T]) -> list[_T]: ...
136
+ def totient(n: int) -> int: ...
minigpt2/lib/python3.10/site-packages/torchgen/__init__.py ADDED
@@ -0,0 +1,10 @@
1
+ """torchgen
2
+
3
+ This module contains codegeneration utilities for PyTorch. It is used to
4
+ build PyTorch from source, but may also be used for out-of-tree projects
5
+ that extend PyTorch.
6
+
7
+ Note well that we provide no BC guarantees for torchgen. If you're interested
8
+ in using torchgen and want the PyTorch team to be aware, please reach out
9
+ on GitHub.
10
+ """
minigpt2/lib/python3.10/site-packages/torchgen/context.py ADDED
@@ -0,0 +1,130 @@
1
+ from __future__ import annotations
2
+
3
+ import contextlib
4
+ import functools
5
+ from typing import Any, Callable, Iterator, List, Optional, Tuple, TypeVar, Union
6
+
7
+ import torchgen.local as local
8
+ from torchgen.model import (
9
+ BackendIndex,
10
+ DispatchKey,
11
+ NativeFunction,
12
+ NativeFunctionsGroup,
13
+ NativeFunctionsViewGroup,
14
+ )
15
+ from torchgen.utils import context, S, T
16
+
17
+
18
+ # Helper functions for defining generators on things in the model
19
+
20
+ F = TypeVar(
21
+ "F",
22
+ NativeFunction,
23
+ NativeFunctionsGroup,
24
+ NativeFunctionsViewGroup,
25
+ Union[NativeFunction, NativeFunctionsGroup],
26
+ Union[NativeFunction, NativeFunctionsViewGroup],
27
+ )
28
+
29
+ F2 = TypeVar(
30
+ "F2",
31
+ NativeFunction,
32
+ NativeFunctionsGroup,
33
+ Optional[NativeFunction],
34
+ bool,
35
+ str,
36
+ )
37
+
38
+ F3 = TypeVar("F3", Tuple[NativeFunction, Any], List[NativeFunction])
39
+
40
+
41
+ @contextlib.contextmanager
42
+ def native_function_manager(
43
+ g: NativeFunctionsGroup | NativeFunctionsViewGroup | NativeFunction,
44
+ ) -> Iterator[None]:
45
+ if isinstance(g, NativeFunctionsGroup):
46
+ # By default, we associate all errors with structured native functions
47
+ # with the out variant. In some cases, it might be better to have
48
+ # a more specific place to hang things; if so, use
49
+ # native_function_manager again on the inside
50
+ f = g.out
51
+ elif isinstance(g, NativeFunctionsViewGroup):
52
+ # We associate errors with the view operator
53
+ f = g.view
54
+ else:
55
+ f = g
56
+ with context(lambda: f"in native_functions.yaml line {f.loc}:\n {f.func}"):
57
+ with local.parametrize(
58
+ use_const_ref_for_mutable_tensors=f.use_const_ref_for_mutable_tensors,
59
+ use_ilistref_for_tensor_lists=f.part_of_structured_group,
60
+ ):
61
+ yield
62
+
63
+
64
+ # Given a function that operates on NativeFunction, wrap it into a new function
65
+ # that sets some appropriate context managers for that native function.
66
+ # YOU MUST WRAP FUNCTIONS IN THIS for calls to api modules to be sound
67
+ # (you will get an error if we try to access the local variables without having
68
+ # set them).
69
+ def with_native_function(func: Callable[[F], T]) -> Callable[[F], T]:
70
+ @functools.wraps(func)
71
+ def wrapper(f: F) -> T:
72
+ with native_function_manager(f):
73
+ return func(f)
74
+
75
+ return wrapper
76
+
77
+
78
+ def with_native_function_and(func: Callable[[F, F2], T]) -> Callable[[F, F2], T]:
79
+ @functools.wraps(func)
80
+ def wrapper(f: F, f2: F2) -> T:
81
+ # The first native_function is assumed to be the one with the appropriate context.
82
+ with native_function_manager(f):
83
+ return func(f, f2)
84
+
85
+ return wrapper
86
+
87
+
88
+ def method_with_native_function(func: Callable[[S, F], T]) -> Callable[[S, F], T]:
89
+ @functools.wraps(func)
90
+ def wrapper(slf: S, f: F) -> T:
91
+ with native_function_manager(f):
92
+ return func(slf, f)
93
+
94
+ return wrapper
95
+
96
+
97
+ def method_with_nested_native_function(
98
+ func: Callable[[S, F3], T]
99
+ ) -> Callable[[S, F3], T]:
100
+ @functools.wraps(func)
101
+ def wrapper(slf: S, f: F3) -> T:
102
+ with native_function_manager(f[0]):
103
+ return func(slf, f)
104
+
105
+ return wrapper
106
+
107
+
108
+ # Convenience decorator for functions that explicitly take in a BackendIndex,
109
+ # instead of indirectly taking one in as a closure
110
+ def with_native_function_and_index(
111
+ func: Callable[[F, BackendIndex], T]
112
+ ) -> Callable[[F, BackendIndex], T]:
113
+ @functools.wraps(func)
114
+ def wrapper(f: F, backend_index: BackendIndex) -> T:
115
+ with native_function_manager(f):
116
+ return func(f, backend_index)
117
+
118
+ return wrapper
119
+
120
+
121
+ # Convenience decorator for functions that explicitly take in a Dict of BackendIndices
122
+ def with_native_function_and_indices(
123
+ func: Callable[[F, dict[DispatchKey, BackendIndex]], T]
124
+ ) -> Callable[[F, dict[DispatchKey, BackendIndex]], T]:
125
+ @functools.wraps(func)
126
+ def wrapper(f: F, backend_indices: dict[DispatchKey, BackendIndex]) -> T:
127
+ with native_function_manager(f):
128
+ return func(f, backend_indices)
129
+
130
+ return wrapper
minigpt2/lib/python3.10/site-packages/torchgen/gen_executorch.py ADDED
@@ -0,0 +1,998 @@
1
+ from __future__ import annotations
2
+
3
+ import argparse
4
+ import os
5
+ from collections import defaultdict
6
+ from dataclasses import dataclass
7
+ from pathlib import Path
8
+ from typing import Any, Callable, Sequence, TextIO, TYPE_CHECKING
9
+
10
+ import yaml
11
+
12
+ # Parse native_functions.yaml into a sequence of NativeFunctions and Backend Indices.
13
+ from torchgen import dest
14
+ from torchgen.api import cpp as aten_cpp
15
+ from torchgen.api.types import CppSignature, CppSignatureGroup, CType, NamedCType
16
+ from torchgen.context import (
17
+ method_with_native_function,
18
+ method_with_nested_native_function,
19
+ with_native_function_and_index,
20
+ )
21
+ from torchgen.executorch.api import et_cpp
22
+ from torchgen.executorch.api.custom_ops import (
23
+ ComputeNativeFunctionStub,
24
+ gen_custom_ops_registration,
25
+ )
26
+ from torchgen.executorch.api.types import contextArg, ExecutorchCppSignature
27
+ from torchgen.executorch.api.unboxing import Unboxing
28
+ from torchgen.executorch.model import ETKernelIndex, ETKernelKey, ETParsedYaml
29
+ from torchgen.executorch.parse import ET_FIELDS, parse_et_yaml, parse_et_yaml_struct
30
+ from torchgen.gen import (
31
+ get_custom_build_selector,
32
+ get_native_function_declarations,
33
+ get_native_function_declarations_from_ns_grouped_kernels,
34
+ get_native_function_schema_registrations,
35
+ LineLoader,
36
+ parse_native_yaml,
37
+ )
38
+ from torchgen.model import (
39
+ BackendIndex,
40
+ BackendMetadata,
41
+ DEFAULT_KERNEL_NAMESPACE,
42
+ DispatchKey,
43
+ FunctionSchema,
44
+ Location,
45
+ NativeFunction,
46
+ NativeFunctionsGroup,
47
+ OperatorName,
48
+ Variant,
49
+ )
50
+ from torchgen.utils import (
51
+ context,
52
+ FileManager,
53
+ make_file_manager,
54
+ mapMaybe,
55
+ NamespaceHelper,
56
+ )
57
+
58
+
59
+ if TYPE_CHECKING:
60
+ from torchgen.selective_build.selector import SelectiveBuilder
61
+
62
+
63
+ def _sig_decl_wrapper(sig: CppSignature | ExecutorchCppSignature) -> str:
64
+ """
65
+ A wrapper function to basically get `sig.decl(include_context=True)`.
66
+ For ATen kernel, the codegen has no idea about ET contextArg, so we
67
+ use this wrapper to add it.
68
+ """
69
+ if isinstance(sig, ExecutorchCppSignature):
70
+ return sig.decl()
71
+
72
+ returns_type = aten_cpp.returns_type(sig.func.returns).cpp_type()
73
+ cpp_args = [a.decl() for a in sig.arguments()]
74
+ cpp_args_str = ", ".join([contextArg.decl()] + cpp_args)
75
+ sig_decl = f"{returns_type} {sig.name()}({cpp_args_str})"
76
+ return sig_decl
77
+
78
+
79
+ def static_dispatch(
80
+ sig: CppSignature | ExecutorchCppSignature,
81
+ f: NativeFunction,
82
+ backend_indices: list[BackendIndex],
83
+ ) -> str:
84
+ """
85
+ For a given `NativeFunction`, find out the corresponding native function and dispatch to it. If zero or more than one
86
+ native function exists, error out. A simplified version of register_dispatch_key.py
87
+ Arguments:
88
+ sig: A CppSignature for this native function we want to use.
89
+ f: NativeFunction to generate static dispatch.
90
+ backend_indices: All available backends.
91
+ Return:
92
+ C++ code to call backend-specific functions, e.g., "return at::native::add(self, other, scale);"
93
+ """
94
+ if len(backend_indices) == 0 or f.manual_kernel_registration:
95
+ return ""
96
+
97
+ backends = [b for b in backend_indices if b.has_kernel(f)]
98
+ static_block = None
99
+ if len(backends) == 1:
100
+ backend_metadata = backends[0].get_kernel(f)
101
+ if backend_metadata:
102
+ args = ", ".join(a.name for a in sig.arguments())
103
+ # Here we are assuming there's no difference between CppSignature and NativeSignature for Executorch.
104
+ static_block = f"return ::{backend_metadata.cpp_namespace}::{backend_metadata.kernel}({args});"
105
+ else:
106
+ static_block = f"""
107
+ ET_ASSERT_UNREACHABLE_MSG("The number of native function(s) binding to {f.func.name} is {len(backends)}.");
108
+ """
109
+ return f"""
110
+ // {f.namespace}::{f.func}
111
+ TORCH_API inline {_sig_decl_wrapper(sig)} {{
112
+ {static_block}
113
+ }}
114
+ """
115
+
116
+
117
+ # Generates Functions.h, which provides the functional public C++ API,
118
+ # and the scaffolding to call into the dispatcher from these functions.
119
+ @dataclass(frozen=True)
120
+ class ComputeFunction:
121
+ static_dispatch_backend_indices: list[BackendIndex]
122
+
123
+ selector: SelectiveBuilder
124
+
125
+ use_aten_lib: bool
126
+
127
+ is_custom_op: Callable[[NativeFunction], bool]
128
+
129
+ @method_with_native_function
130
+ def __call__(self, f: NativeFunction) -> str | None:
131
+ is_method_variant = False
132
+ if not self.selector.is_root_operator(f"{f.namespace}::{f.func.name}"):
133
+ return None
134
+
135
+ if Variant.function not in f.variants and Variant.method in f.variants:
136
+ is_method_variant = True
137
+
138
+ # only valid remaining case is only function is in f.variants
139
+ elif not (Variant.function in f.variants and Variant.method not in f.variants):
140
+ raise Exception( # noqa: TRY002
141
+ f"Can't handle native function {f.func} with the following variant specification {f.variants}."
142
+ )
143
+
144
+ sig: CppSignature | ExecutorchCppSignature = (
145
+ CppSignatureGroup.from_native_function(
146
+ f, method=False, fallback_binding=f.manual_cpp_binding
147
+ ).most_faithful_signature()
148
+ if self.use_aten_lib
149
+            else ExecutorchCppSignature.from_native_function(f)
+        )
+        if self.use_aten_lib and not self.is_custom_op(f):
+            comma = ", "
+
+            if is_method_variant:
+                return f"""
+// {f.namespace}::{f.func}
+TORCH_API inline {_sig_decl_wrapper(sig)} {{
+    return {sig.arguments()[0].name}.{sig.name()}({comma.join(e.name for e in sig.arguments()[1:])});
+}}
+"""
+            else:
+                return f"""
+// {f.namespace}::{f.func}
+TORCH_API inline {_sig_decl_wrapper(sig)} {{
+    return at::{sig.name()}({comma.join(e.name for e in sig.arguments())});
+}}
+"""
+
+        else:
+            return static_dispatch(
+                sig,
+                f,
+                backend_indices=self.static_dispatch_backend_indices,
+            )
+
+
+# Generates RegisterCodegenUnboxedKernels.cpp.
+@dataclass(frozen=True)
+class ComputeCodegenUnboxedKernels:
+    selector: SelectiveBuilder
+
+    use_aten_lib: bool
+
+    @method_with_nested_native_function
+    def __call__(
+        self,
+        unbox_kernel_entry: tuple[NativeFunction, tuple[ETKernelKey, BackendMetadata]],
+    ) -> str:
+        f: NativeFunction = unbox_kernel_entry[0]
+        kernel_key: ETKernelKey | list[ETKernelKey] = unbox_kernel_entry[1][0]
+        kernel_meta: BackendMetadata = unbox_kernel_entry[1][1]
+
+        op_name = f"{f.namespace}::{f.func.name}"
+        if not self.selector.is_root_operator(op_name):
+            return ""
+
+        if not isinstance(kernel_key, list):
+            kernel_key = [kernel_key]
+        used_kernel_keys = self.selector.et_get_selected_kernels(
+            op_name, [k.to_native_string() for k in kernel_key]
+        )
+        if not used_kernel_keys:
+            return ""
+        sig: CppSignature | ExecutorchCppSignature
+        argument_type_gen: Callable[..., NamedCType]
+        return_type_gen: Callable[..., CType]
+        if self.use_aten_lib:
+            sig = CppSignatureGroup.from_native_function(
+                f, method=False, fallback_binding=f.manual_cpp_binding
+            ).most_faithful_signature()
+            argument_type_gen = aten_cpp.argumenttype_type
+            return_type_gen = aten_cpp.returns_type
+            arguments = sig.arguments()
+            kernel_call = f"torch::executor::{f.namespace}::{sig.name()}"
+        else:
+            sig = ExecutorchCppSignature.from_native_function(f)
+            argument_type_gen = et_cpp.argumenttype_type
+            return_type_gen = et_cpp.returns_type
+            arguments = sig.arguments(include_context=False)
+            kernel_call = f"{kernel_meta.cpp_namespace}::{kernel_meta.kernel}"
+        # parse arguments into C++ code
+        binding_list, code_list = Unboxing(
+            argument_type_gen=argument_type_gen
+        ).convert_arguments(arguments)
+
+        # for each C++ argument, generate the conversion code
+        code_connector = "\n\t"
+        arg_connector = ", "
+
+        args_str = f"{arg_connector.join(e.name for e in binding_list)}"
+        event_tracer_output_logging = ""
+        output_ids = []
+
+        if len(f.func.returns) == 0:
+            if len(f.func.arguments.out) == 0:
+                raise Exception(  # noqa: TRY002
+                    f"Can't handle native function {f.func} with no returns and no out yet."
+                )
+            out = f.func.arguments.out[0]
+            return_assignment = f"""stack[{len(binding_list)}] = &{out.name};"""
+            ret_prefix = ""
+            output_ids = [len(binding_list)]
+        else:
+            if len(f.func.arguments.out) == 0:
+                return_assignment = (
+                    f"""*stack[{len(binding_list)}] = EValue(result_);"""
+                )
+                ret_prefix = return_type_gen(f.func.returns).cpp_type() + " result_ = "
+                output_ids = [len(binding_list)]
+            else:
+                return_assignment = ""
+                ret_prefix = ""
+                output_ids = [
+                    len(binding_list) - (i + 1)
+                    for i in reversed(range(len(f.func.arguments.out)))
+                ]
+
+        for output_id in output_ids:
+            event_tracer_output_logging += (
+                f"internal::event_tracer_log_evalue("
+                f"context.internal_event_tracer(), "
+                f"*stack[{output_id}]);\n"
+            )
+
+        newline = "\n    "
+        return "\n".join(
+            [
+                f"""
+Kernel(
+    "{f.namespace}::{f.func.name}",{newline + '"' + (k + '",') if k != 'default' else ''}
+    []({contextArg.defn()}, EValue** stack) {{
+        {code_connector.join(code_list)}
+
+        internal::EventTracerProfileScope event_tracer_scope(context.internal_event_tracer(), "native_call_{f.func.name}");
+        EXECUTORCH_SCOPE_PROF("native_call_{f.func.name}");
+        {ret_prefix}{kernel_call}(context, {args_str});
+        {event_tracer_output_logging}
+        {return_assignment}
+    }}
+),
+"""
+                for k in used_kernel_keys
+            ]
+        )
+
+
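The class above emits, for each selected kernel, a C++ lambda that unboxes arguments from an `EValue**` stack, calls the typed kernel, and writes the result back into the stack slot after the arguments. As a rough mental model of that wrapper shape (toy Python with invented names, not the real ExecuTorch runtime):

```python
# Toy sketch of what a generated unboxed-kernel wrapper does (names invented):
# unbox each argument from a stack of boxed values, call the typed kernel,
# then box the return value into the stack slot after the arguments.
def make_unboxed_kernel(typed_kernel, num_args):
    def wrapper(stack):
        args = [box["value"] for box in stack[:num_args]]  # unbox arguments
        result = typed_kernel(*args)                       # typed kernel call
        stack[num_args] = {"value": result}                # box the return value
    return wrapper

add_wrapper = make_unboxed_kernel(lambda a, b: a + b, num_args=2)
stack = [{"value": 2}, {"value": 3}, {"value": None}]
add_wrapper(stack)  # stack[2] now holds the boxed result
```

The generated C++ additionally threads a runtime context through every call and logs outputs to the event tracer, which this sketch omits.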
+def gen_unboxing(
+    *,
+    native_functions: Sequence[NativeFunction],
+    cpu_fm: FileManager,
+    selector: SelectiveBuilder,
+    use_aten_lib: bool,
+    kernel_index: ETKernelIndex,
+    manual_registration: bool,
+) -> None:
+    # Iterable type for write_sharded is a Tuple of (native_function, (kernel_key, metadata))
+    def key_func(
+        item: tuple[NativeFunction, tuple[ETKernelKey, BackendMetadata]]
+    ) -> str:
+        return item[0].root_name + ":" + item[1][0].to_native_string()
+
+    items: list[tuple[NativeFunction, tuple[ETKernelKey, BackendMetadata]]] = [
+        (native_function, (kernel_key, metadata))
+        for native_function in native_functions
+        for kernel_key, metadata in kernel_index.get_kernels(native_function).items()
+    ]
+
+    header = ["Functions.h" if use_aten_lib else "NativeFunctions.h"]
+    filename = (
+        "RegisterKernels.cpp"
+        if manual_registration
+        else "RegisterCodegenUnboxedKernels.cpp"
+    )
+    cpu_fm.write_sharded(
+        filename,
+        items,
+        key_fn=key_func,
+        env_callable=lambda unbox_kernel_entry: {
+            "unboxed_kernels": [
+                ComputeCodegenUnboxedKernels(selector, use_aten_lib)(unbox_kernel_entry)
+            ],
+            "fn_header": header
+            if unbox_kernel_entry == items[0]
+            else [],  # Only write header once
+        },
+        num_shards=1,
+        sharded_keys={"unboxed_kernels", "fn_header"},
+    )
+
+
+@with_native_function_and_index  # type: ignore[arg-type]
+def compute_native_function_declaration(
+    g: NativeFunctionsGroup | NativeFunction, kernel_index: ETKernelIndex
+) -> list[str]:
+    assert isinstance(g, NativeFunction)
+    sig = ExecutorchCppSignature.from_native_function(f=g)
+    metadata_list = kernel_index.get_kernels(g).values()
+    if metadata_list is None:
+        return []
+
+    # for kernels in lean mode, we declare two versions, one with context and one without.
+    # In the end we will cleanup the unused one.
+    def gen_decl(metadata: BackendMetadata, include_context: bool) -> str:
+        return f"{sig.decl(name=metadata.kernel, include_context=include_context)};"
+
+    return [
+        gen_decl(metadata, include_context)
+        for include_context in [False, True]
+        for metadata in metadata_list
+    ]
+
+
+def gen_functions_declarations(
+    *,
+    native_functions: Sequence[NativeFunction],
+    kernel_index: ETKernelIndex,
+    selector: SelectiveBuilder,
+    use_aten_lib: bool,
+    custom_ops_native_functions: Sequence[NativeFunction] | None = None,
+) -> str:
+    """
+    Generates namespace separated C++ function API inline declaration/definitions.
+    Native functions are grouped by namespaces and the generated code is wrapped inside
+    namespace blocks.
+
+    E.g., for `custom_1::foo.out` in yaml file we will generate a C++ API as a symbol
+    in `torch::executor::custom_1::foo_out`. This way we avoid symbol conflict when
+    the other `custom_2::foo.out` is available.
+    """
+
+    # convert kernel index to BackendIndex. This is because we can't handle ETKernelIndex yet.
+    # TODO larryliu: evaluate if this code is still needed. If yes let it handle ETKernelIndex.
+
+    backend_index = kernel_index._to_backend_index()
+
+    ns_grouped_functions = defaultdict(list)
+    for native_function in native_functions:
+        ns_grouped_functions[native_function.namespace].append(native_function)
+    functions_declarations = ""
+    newline = "\n"
+    for namespace in ns_grouped_functions:
+        ns_helper = NamespaceHelper(
+            namespace_str=namespace,
+            entity_name="",
+            max_level=3,
+        )
+        declarations = list(
+            mapMaybe(
+                ComputeFunction(
+                    static_dispatch_backend_indices=[backend_index],
+                    selector=selector,
+                    use_aten_lib=use_aten_lib,
+                    is_custom_op=lambda f: custom_ops_native_functions is not None
+                    and f in custom_ops_native_functions,
+                ),
+                ns_grouped_functions[namespace],
+            )
+        )
+        functions_declarations += f"""
+{ns_helper.prologue}
+{newline.join(declarations)}
+{ns_helper.epilogue}
+"""
+    return functions_declarations
+
+
+def get_ns_grouped_kernels(
+    *,
+    native_functions: Sequence[NativeFunction],
+    kernel_index: ETKernelIndex,
+    native_function_decl_gen: Callable[
+        [
+            NativeFunctionsGroup | NativeFunction,
+            ETKernelIndex,
+        ],
+        list[str],
+    ],
+) -> dict[str, list[str]]:
+    ns_grouped_kernels: dict[str, list[str]] = defaultdict(list)
+    for f in native_functions:
+        native_function_namespaces = set()
+        op_kernels = kernel_index.get_kernels(f)
+        for backend_metadata in op_kernels.values():
+            if backend_metadata:
+                namespace = backend_metadata.cpp_namespace
+                native_function_namespaces.add(namespace)
+            else:
+                namespace = DEFAULT_KERNEL_NAMESPACE
+            assert (
+                len(native_function_namespaces) <= 1
+            ), f"Codegen only supports one namespace per operator, got {native_function_namespaces}"
+            ns_grouped_kernels[namespace].extend(
+                native_function_decl_gen(f, kernel_index)
+            )
+    return ns_grouped_kernels
+
+
+def gen_headers(
+    *,
+    native_functions: Sequence[NativeFunction],
+    gen_custom_ops_header: bool,
+    custom_ops_native_functions: Sequence[NativeFunction],
+    selector: SelectiveBuilder,
+    kernel_index: ETKernelIndex,
+    cpu_fm: FileManager,
+    use_aten_lib: bool,
+) -> None:
+    """Generate headers.
+
+    Args:
+        native_functions (Sequence[NativeFunction]): a collection of NativeFunction for ATen ops.
+        gen_custom_ops_header (bool): whether we should generate CustomOpsNativeFunctions.h
+        custom_ops_native_functions (Sequence[NativeFunction]): a collection of NativeFunction for custom ops.
+        kernel_index (ETKernelIndex): kernel collection
+        cpu_fm (FileManager): file manager manages output stream
+        use_aten_lib (bool): whether we are generating for PyTorch types or Executorch types.
+    """
+    aten_headers = ["#include <ATen/Functions.h>"]
+    backend_indices = {DispatchKey.CPU: kernel_index._to_backend_index()}
+    if gen_custom_ops_header:
+        cpu_fm.write_with_template(
+            "CustomOpsNativeFunctions.h",
+            "NativeFunctions.h",
+            lambda: {
+                "nativeFunctions_declarations": get_native_function_declarations(
+                    grouped_native_functions=custom_ops_native_functions,
+                    backend_indices=backend_indices,
+                    native_function_decl_gen=dest.compute_native_function_declaration,
+                ),
+                "headers": [
+                    "#include <ATen/ATen.h>",
+                    "#include <torch/torch.h>",
+                ],
+            },
+        )
+        aten_headers.append('#include "CustomOpsNativeFunctions.h"')
+    cpu_fm.write(
+        "Functions.h",
+        lambda: {
+            "static_dispatch_extra_headers": aten_headers
+            if use_aten_lib
+            else ['#include "NativeFunctions.h"'],
+            "Functions_declarations": gen_functions_declarations(
+                native_functions=native_functions,
+                kernel_index=kernel_index,
+                selector=selector,
+                use_aten_lib=use_aten_lib,
+                custom_ops_native_functions=custom_ops_native_functions,
+            ),
+        },
+    )
+    cpu_fm.write(
+        "RegisterKernels.h",
+        lambda: {
+            "generated_comment": "@" + "generated by torchgen/gen_executorch.py",
+        },
+    )
+    headers = {
+        "headers": [
+            "#include <executorch/runtime/core/exec_aten/exec_aten.h> // at::Tensor etc.",
+            "#include <executorch/runtime/kernel/kernel_runtime_context.h>",
+        ],
+    }
+    if use_aten_lib:
+        headers["headers"].append("#include <executorch/codegen/macros.h> // TORCH_API")
+        cpu_fm.write(
+            "NativeFunctions.h",
+            lambda: dict(
+                {
+                    "nativeFunctions_declarations": get_native_function_declarations(
+                        grouped_native_functions=native_functions,
+                        backend_indices=backend_indices,
+                        native_function_decl_gen=dest.compute_native_function_declaration,
+                    ),
+                },
+                **headers,
+            ),
+        )
+    else:
+        ns_grouped_kernels = get_ns_grouped_kernels(
+            native_functions=native_functions,
+            kernel_index=kernel_index,
+            native_function_decl_gen=compute_native_function_declaration,  # type: ignore[arg-type]
+        )
+        cpu_fm.write(
+            "NativeFunctions.h",
+            lambda: dict(
+                {
+                    "nativeFunctions_declarations": get_native_function_declarations_from_ns_grouped_kernels(
+                        ns_grouped_kernels=ns_grouped_kernels,
+                    ),
+                },
+                **headers,
+            ),
+        )
+
+
+def gen_custom_ops(
+    *,
+    native_functions: Sequence[NativeFunction],
+    selector: SelectiveBuilder,
+    kernel_index: ETKernelIndex,
+    cpu_fm: FileManager,
+    rocm: bool,
+) -> None:
+    dispatch_key = DispatchKey.CPU
+    (
+        anonymous_definition,
+        static_init_dispatch_registrations,
+    ) = gen_custom_ops_registration(
+        native_functions=native_functions,
+        selector=selector,
+        kernel_index=kernel_index,
+        rocm=rocm,
+    )
+    cpu_fm.write_with_template(
+        f"Register{dispatch_key}CustomOps.cpp",
+        "RegisterDispatchKeyCustomOps.cpp",
+        lambda: {
+            "ops_headers": '#include "CustomOpsNativeFunctions.h"',
+            "DispatchKey": dispatch_key,
+            "dispatch_namespace": dispatch_key.lower(),
+            "dispatch_namespaced_definitions": "",
+            "dispatch_anonymous_definitions": anonymous_definition,
+            "static_init_dispatch_registrations": static_init_dispatch_registrations,
+        },
+    )
+    cpu_fm.write_with_template(
+        f"Register{dispatch_key}Stub.cpp",
+        "RegisterDispatchKeyCustomOps.cpp",
+        lambda: {
+            "ops_headers": "",
+            "DispatchKey": dispatch_key,
+            "dispatch_namespace": dispatch_key.lower(),
+            "dispatch_namespaced_definitions": "",
+            "dispatch_anonymous_definitions": list(
+                mapMaybe(ComputeNativeFunctionStub(), native_functions)
+            ),
+            "static_init_dispatch_registrations": static_init_dispatch_registrations,
+        },
+    )
+
+    (
+        aten_schema_registrations,
+        schema_registrations,
+    ) = get_native_function_schema_registrations(
+        native_functions=native_functions,
+        schema_selector=selector,
+    )
+    cpu_fm.write(
+        "RegisterSchema.cpp",
+        lambda: {
+            "schema_registrations": schema_registrations,
+            "aten_schema_registrations": aten_schema_registrations,
+        },
+    )
+
+
+def translate_native_yaml(
+    tags_yaml_path: str,
+    aten_yaml_path: str,
+    native_yaml_path: str | None,
+    use_aten_lib: bool,
+    out_file: TextIO,
+) -> None:
+    """Translates Executorch DSL dialect to use the same syntax as
+    native_functions.yaml. The major difference is that Executorch DSL dialect
+    supports "op" key, where it refers to the operator name in native_functions.yaml.
+
+    For example, a functions.yaml may have the following entry:
+
+    - op: add.out
+      ...
+
+    It needs to be translated to the following:
+
+    - func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
+      ...
+
+    We go in aten_yaml_path and find the operator schema for "add.out" and add it
+    to the original functions.yaml. We also add required field "variants", where for
+    Executorch it will always be "function".
+
+    For ATen mode we don't have to do the translation because native_yaml_path is
+    the same as native_functions.yaml.
+
+    Args:
+        tags_yaml_path: Path to a tags.yaml file to satisfy codegen parsing.
+            It is not optional.
+        aten_yaml_path: Path to ATen operator yaml file native_functions.yaml.
+        native_yaml_path: Path to a functions.yaml file to parse.
+            If the path does not exist in the filesystem, it is treated as an
+            empty file. If `custom_ops_yaml_path` exists, the contents of that
+            file are appended to the yaml input to be parsed.
+        use_aten_lib: We use this flag to determine if we want to generate native
+            functions. In ATen mode we should generate out= variants.
+        out_file: The IO object that we are writing into.
+    Returns:
+        None
+    """
+    if use_aten_lib:
+        with open(aten_yaml_path) as aten_yaml:
+            out_file.writelines(aten_yaml.readlines())
+        return
+
+    native_functions, persisted_fields = parse_et_yaml(
+        aten_yaml_path,
+        tags_yaml_path,
+        None,
+        skip_native_fns_gen=False,
+    )
+
+    func_to_scoped_name: dict[FunctionSchema, str] = {
+        f.func: f"{f.namespace}::{f.func.name}" for f in native_functions
+    }
+    op_to_scoped_name: dict[OperatorName, str] = {
+        func.name: name for func, name in func_to_scoped_name.items()
+    }
+
+    schema_dict = {name: str(func) for func, name in func_to_scoped_name.items()}
+    kernel_persist_dict: dict[str, dict[str, Any]] = {
+        op_to_scoped_name[op]: v for op, v in persisted_fields.items()
+    }
+
+    if (
+        not native_yaml_path
+        or not os.path.exists(native_yaml_path)
+        or os.stat(native_yaml_path).st_size == 0
+    ):
+        return
+    with open(native_yaml_path) as native_yaml:
+        native_es = yaml.load(native_yaml, Loader=LineLoader)
+    if not native_es:
+        return
+    for e in native_es:
+        assert isinstance(e.get("__line__"), int), e
+        loc = Location(native_yaml_path, e.pop("__line__"))
+        with context(lambda: f"in {loc}:\n  "):
+            if "variants" not in e:
+                e["variants"] = "function"
+            if "func" in e:
+                continue
+            assert isinstance(e.get("op"), str), e
+            opname = e.pop("op")
+            if "::" not in opname:
+                opname = "aten::" + opname
+            assert opname in schema_dict
+            e["func"] = schema_dict.get(opname)
+
+            # Write out persisted kernel information
+            if opname in kernel_persist_dict:
+                for k, v in kernel_persist_dict[opname].items():
+                    e[k] = v
+
+    yaml.dump(native_es, out_file, width=1000)
+
+
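At its core, the translation above replaces each Executorch-dialect `op:` entry with the full `func:` schema looked up from the ATen yaml, defaulting `variants` to `function`. A minimal standalone sketch of that substitution (`translate_entries` is an invented helper and the schema string is illustrative; this is not the torchgen API):

```python
def translate_entries(entries, schema_dict):
    """Rewrite Executorch-dialect entries ({"op": ...}) into
    native_functions.yaml-style entries ({"func": ...})."""
    out = []
    for e in entries:
        e = dict(e)  # avoid mutating the caller's data
        e.setdefault("variants", "function")
        if "func" not in e:
            opname = e.pop("op")
            if "::" not in opname:
                opname = "aten::" + opname  # default namespace, as in the codegen
            assert opname in schema_dict, f"unknown op: {opname}"
            e["func"] = schema_dict[opname]
        out.append(e)
    return out

schemas = {
    "aten::add.out": "add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)",
}
translated = translate_entries([{"op": "add.out"}], schemas)
```

The real function additionally tracks yaml line numbers for error reporting and re-attaches persisted kernel fields, which this sketch leaves out.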
+def parse_yaml(
+    path: str | None,
+    tags_yaml_path: str,
+    function_filter: Callable[[NativeFunction], bool],
+    skip_native_fns_gen: bool = False,
+) -> tuple[
+    list[NativeFunction],
+    dict[DispatchKey, dict[OperatorName, BackendMetadata]] | ETKernelIndex,
+]:
+    if path and os.path.exists(path) and os.stat(path).st_size > 0:
+        with open(path) as f:
+            es = yaml.load(f, Loader=LineLoader)
+
+        # Check for kernel index structure
+        kernel_index = (
+            parse_et_yaml_struct(es) if any("kernels" in e for e in es) else None
+        )
+
+        # Remove ET specific fields from entries for BC compatibility
+        for entry in es:
+            for field in ET_FIELDS:
+                entry.pop(field, None)
+
+        parsed_yaml = parse_native_yaml(
+            path,
+            tags_yaml_path,
+            None,
+            skip_native_fns_gen=skip_native_fns_gen,
+            loaded_yaml=es,
+        )
+        native_functions = list(filter(function_filter, parsed_yaml.native_functions))
+        op_names = [f.func.name for f in native_functions]
+
+        # (1) Return ETKernelIndex if kernel index is present
+        if kernel_index is not None:
+            filtered_index = {
+                op_name: kernel_mapping
+                for op_name, kernel_mapping in kernel_index.index.items()
+                if op_name in op_names
+            }
+            return native_functions, ETKernelIndex(index=filtered_index)
+
+        # (2) Return BackendIndices if kernel index is absent
+        def map_index(
+            m: dict[OperatorName, BackendMetadata]
+        ) -> dict[OperatorName, BackendMetadata]:
+            return {op: m[op] for op in m if op in op_names}
+
+        backend_indices = {
+            k: map_index(b.index) for (k, b) in parsed_yaml.backend_indices.items()
+        }
+
+        return native_functions, backend_indices
+    else:
+        return [], {}
+
+
+def parse_yaml_files(
+    tags_yaml_path: str,
+    aten_yaml_path: str,
+    native_yaml_path: str | None,
+    custom_ops_yaml_path: str | None,
+    selector: SelectiveBuilder,
+    use_aten_lib: bool,
+) -> tuple[ETParsedYaml, ETParsedYaml | None]:
+    """Parses functions.yaml and custom_ops.yaml files.
+
+    Args:
+        tags_yaml_path: Path to a tags.yaml file to satisfy codegen parsing.
+            It is not optional.
+        aten_yaml_path: Path to ATen operator yaml file native_functions.yaml.
+        native_yaml_path: Path to a functions.yaml file to parse.
+            If the path does not exist in the filesystem, it is treated as an
+            empty file. If `custom_ops_yaml_path` exists, the contents of that
+            file are appended to the yaml input to be parsed.
+        custom_ops_yaml_path: Path to a custom_ops.yaml file to parse. If
+            the path does not exist in the filesystem, it is ignored.
+        selector: For selective build.
+        use_aten_lib: We use this flag to determine if we want to generate native
+            functions. In ATen mode we should generate out= variants.
+    Returns:
+        A tuple with two elements:
+        [0]: The parsed results of concatenating the contents of
+             `native_yaml_path` and `custom_ops_yaml_path`.
+        [1]: The parsed results of the contents of `custom_ops_yaml_path`, if
+             present. If not present, None.
+    """
+    import tempfile
+
+    # only include selected ops; this is because we want to avoid generating
+    # code for operators that were not selected for this build
+    def function_filter(f: NativeFunction) -> bool:
+        return selector.is_native_function_selected(f)
+
+    with tempfile.TemporaryDirectory() as tmpdirname:
+        translated_yaml_path = os.path.join(tmpdirname, "translated.yaml")
+        with open(translated_yaml_path, "w") as translated:
+            translate_native_yaml(
+                tags_yaml_path,
+                aten_yaml_path,
+                native_yaml_path,
+                use_aten_lib,
+                translated,
+            )
+
+        translated_functions, translated_indices = parse_yaml(
+            translated_yaml_path, tags_yaml_path, function_filter, not use_aten_lib
+        )
+        custom_ops_functions, custom_ops_indices = parse_yaml(
+            custom_ops_yaml_path, tags_yaml_path, function_filter, True
+        )
+
+        # Convert BackendIndices to ETKernelIndex
+        if not isinstance(translated_indices, ETKernelIndex):
+            translated_indices = ETKernelIndex.from_backend_indices(translated_indices)
+        if not isinstance(custom_ops_indices, ETKernelIndex):
+            custom_ops_indices = ETKernelIndex.from_backend_indices(custom_ops_indices)
+
+        combined_functions = translated_functions + custom_ops_functions
+        combined_kernel_index = ETKernelIndex.merge_indices(
+            translated_indices, custom_ops_indices
+        )
+        combined_yaml = ETParsedYaml(combined_functions, combined_kernel_index)
+        custom_ops_parsed_yaml = ETParsedYaml(custom_ops_functions, custom_ops_indices)
+
+    return combined_yaml, custom_ops_parsed_yaml
+
+
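`parse_yaml_files` ends by merging the translated kernel index with the custom-ops index. Conceptually that merge is a two-level dict union where custom-ops entries extend each operator's kernel map. A toy sketch of such a merge (`merge_kernel_indices` is an invented helper with invented key/kernel names, not `ETKernelIndex.merge_indices` itself):

```python
def merge_kernel_indices(base, extra):
    """Two-level dict merge: {op_name: {kernel_key: kernel_name}}.
    Entries from `extra` are added to `base`'s per-op maps; on a key
    collision the `extra` entry wins (a simplifying assumption here)."""
    merged = {op: dict(kernels) for op, kernels in base.items()}  # copy inner maps
    for op, kernels in extra.items():
        merged.setdefault(op, {}).update(kernels)
    return merged

base = {"aten::add.out": {"v1/float32": "torch::executor::add_out"}}
extra = {"custom_1::foo.out": {"default": "custom::foo_out"}}
merged = merge_kernel_indices(base, extra)
```

Copying the inner maps keeps the merged index independent of later mutations to either input, which mirrors why the codegen builds a fresh combined index rather than updating one in place.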
+def main() -> None:
+    parser = argparse.ArgumentParser(description="Generate operator source files")
+    # Although we don't refer to --source-path directly, make_file_manager()
+    # expects it to point to a directory that contains a templates/ subdirectory
+    # containing the file templates.
+    parser.add_argument(
+        "-s",
+        "--source-path",
+        help="path to source directory for kernel templates",
+    )
+    parser.add_argument(
+        "--functions-yaml-path",
+        "--functions_yaml_path",
+        help="path to the functions.yaml file to use. Optional, but at least "
+        "one of --functions-yaml-path and --custom-ops-yaml-path must be "
+        "specified.",
+    )
+    parser.add_argument(
+        "--custom-ops-yaml-path",
+        "--custom_ops_yaml_path",
+        help="path to the custom_ops.yaml file to use. Optional, but at least "
+        "one of --functions-yaml-path and --custom-ops-yaml-path must be "
+        "specified.",
+    )
+    parser.add_argument(
+        "--aten-yaml-path",
+        "--aten_yaml_path",
+        help="path to native_functions.yaml file.",
+    )
+    # Note that make_file_manager() also looks at --install-dir.
+    parser.add_argument(
+        "-d",
+        "--install-dir",
+        "--install_dir",
+        help="output directory",
+        default="build/generated",
+    )
+    parser.add_argument(
+        "-o",
+        "--output-dependencies",
+        help="output a list of dependencies into the given file and exit",
+    )
+    # Although we don't refer to --dry-run directly, make_file_manager() looks
+    # for it.
+    parser.add_argument(
+        "--dry-run",
+        action="store_true",
+        help="run without writing any files (still updates outputs)",
+    )
+    parser.add_argument(
+        "--static-dispatch-backend",
+        "--static_dispatch_backend",
+        nargs="*",
+        help="generate static dispatch code for the specific backend (if set)",
+    )
+    parser.add_argument(
+        "--op-registration-whitelist",
+        "--op_registration_whitelist",
+        nargs="*",
+        help="filter op registrations by the whitelist (if set); "
+        "each item is `namespace`::`operator name` without overload name; "
+        "e.g.: aten::empty aten::conv2d ...",
+    )
+    parser.add_argument(
+        "--op-selection-yaml-path",
+        "--op_selection_yaml_path",
+        help="Provide a path to the operator selection (for custom build) YAML "
+        "that contains the information about the set of selected operators "
+        "and their categories (training, ...). Each operator is either a "
+        "full operator name with overload or just a bare operator name. "
+        "The operator names also contain the namespace prefix (e.g. aten::)",
+    )
+    parser.add_argument(
+        "--tags-path",
+        help="Path to tags.yaml. Required by yaml parsing in codegen system.",
+    )
+    parser.add_argument(
+        "--rocm",
+        action="store_true",
+        help="reinterpret CUDA as ROCm/HIP and adjust filepaths accordingly",
+    )
+    parser.add_argument(
+        "--use-aten-lib",
+        "--use_aten_lib",
+        action="store_true",
+        help="a boolean flag to indicate whether we use ATen kernels or not, in the future this flag will be per "
+        "operator",
+    )
+    parser.add_argument(
+        "--manual_registration",
+        "--manual-registration",
+        action="store_true",
+        help="a boolean flag to indicate whether we want to manually call "
+        "register_kernels() or rely on static init. ",
+    )
+    parser.add_argument(
+        "--generate",
+        type=str,
+        nargs="*",
+        choices=["headers", "sources"],
+        default=["headers", "sources"],
+        help="Generate only a subset of files",
+    )
+    options = parser.parse_args()
+    assert options.tags_path, "tags.yaml is required by codegen yaml parsing."
+
+    selector = get_custom_build_selector(
+        options.op_registration_whitelist,
+        options.op_selection_yaml_path,
+    )
+
+    parsed_yaml, custom_ops_parsed_yaml = parse_yaml_files(
+        aten_yaml_path=options.aten_yaml_path,
+        tags_yaml_path=options.tags_path,
+        native_yaml_path=options.functions_yaml_path,
+        custom_ops_yaml_path=options.custom_ops_yaml_path,
+        selector=selector,
+        use_aten_lib=options.use_aten_lib,
+    )
+    native_functions, kernel_index = (
+        parsed_yaml.native_functions,
+        parsed_yaml.kernel_index,
+    )
+    custom_ops_native_functions = (
+        custom_ops_parsed_yaml.native_functions if custom_ops_parsed_yaml else []
+    )
+
+    cpu_fm = make_file_manager(options=options)
+
+    if "headers" in options.generate:
+        # generate CustomOpsNativeFunctions.h when custom_ops.yaml is present, to match the build system.
+        gen_headers(
+            native_functions=native_functions,
+            gen_custom_ops_header=options.custom_ops_yaml_path,
+            custom_ops_native_functions=custom_ops_native_functions,
+            selector=selector,
+            kernel_index=kernel_index,
+            cpu_fm=cpu_fm,
+            use_aten_lib=options.use_aten_lib,
+        )
+
+    if "sources" in options.generate:
+        gen_unboxing(
+            native_functions=native_functions,
+            cpu_fm=cpu_fm,
+            selector=selector,
+            use_aten_lib=options.use_aten_lib,
+            kernel_index=kernel_index,
+            manual_registration=options.manual_registration,
+        )
+        if custom_ops_native_functions:
+            gen_custom_ops(
+                native_functions=custom_ops_native_functions,
+                selector=selector,
+                kernel_index=kernel_index,
+                cpu_fm=cpu_fm,
+                rocm=options.rocm,
+            )
+
+    if options.output_dependencies:
+        depfile_path = Path(options.output_dependencies).resolve()
+        depfile_name = depfile_path.name
+        depfile_stem = depfile_path.stem
+
+        for fm, prefix in [
+            (cpu_fm, ""),
+        ]:
+            varname = prefix + depfile_stem
+            path = depfile_path.parent / (prefix + depfile_name)
+            fm.write_outputs(varname, str(path))
+
+
+if __name__ == "__main__":
+    main()
minigpt2/lib/python3.10/site-packages/torchgen/local.py ADDED
@@ -0,0 +1,59 @@
+from __future__ import annotations
+
+import threading
+from contextlib import contextmanager
+from typing import Iterator
+
+
+# Simple dynamic scoping implementation. The name "parametrize" comes
+# from Racket.
+#
+# WARNING WARNING: LOOKING TO EDIT THIS FILE? Think carefully about
+# why you need to add a toggle to the global behavior of code
+# generation. The parameters here should really only be used
+# for "temporary" situations, where we need to temporarily change
+# the codegen in some cases because we cannot conveniently update
+# all call sites, and are slated to be eliminated once all call
+# sites are eliminated. If you don't have a plan for how to get there,
+# DON'T add a new entry here.
+
+
+class Locals(threading.local):
+    use_const_ref_for_mutable_tensors: bool | None = None
+    use_ilistref_for_tensor_lists: bool | None = None
+
+
+_locals = Locals()
+
+
+def use_const_ref_for_mutable_tensors() -> bool:
+    assert _locals.use_const_ref_for_mutable_tensors is not None, (
+        "need to initialize local.use_const_ref_for_mutable_tensors with "
+        "local.parametrize"
+    )
+    return _locals.use_const_ref_for_mutable_tensors
+
+
+def use_ilistref_for_tensor_lists() -> bool:
+    assert _locals.use_ilistref_for_tensor_lists is not None, (
+        "need to initialize local.use_ilistref_for_tensor_lists with "
+        "local.parametrize"
+    )
+    return _locals.use_ilistref_for_tensor_lists
+
+
+@contextmanager
+def parametrize(
+    *, use_const_ref_for_mutable_tensors: bool, use_ilistref_for_tensor_lists: bool
+) -> Iterator[None]:
+    old_use_const_ref_for_mutable_tensors = _locals.use_const_ref_for_mutable_tensors
+    old_use_ilistref_for_tensor_lists = _locals.use_ilistref_for_tensor_lists
+    try:
+        _locals.use_const_ref_for_mutable_tensors = use_const_ref_for_mutable_tensors
+        _locals.use_ilistref_for_tensor_lists = use_ilistref_for_tensor_lists
+        yield
+    finally:
+        _locals.use_const_ref_for_mutable_tensors = (
+            old_use_const_ref_for_mutable_tensors
+        )
+        _locals.use_ilistref_for_tensor_lists = old_use_ilistref_for_tensor_lists
minigpt2/lib/python3.10/site-packages/tzdata/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (209 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/Longyearbyen ADDED
Binary file (705 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/__init__.py ADDED
File without changes
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Arctic/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (176 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Azores ADDED
Binary file (1.4 kB).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Bermuda ADDED
Binary file (1.02 kB).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Canary ADDED
Binary file (478 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Faeroe ADDED
Binary file (441 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Jan_Mayen ADDED
Binary file (705 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/South_Georgia ADDED
Binary file (132 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/Stanley ADDED
Binary file (789 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Atlantic/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (178 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/Acre ADDED
Binary file (418 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/DeNoronha ADDED
Binary file (484 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/East ADDED
Binary file (952 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/West ADDED
Binary file (412 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/__init__.py ADDED
File without changes
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Brazil/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (176 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/Continental ADDED
Binary file (1.35 kB).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/EasterIsland ADDED
Binary file (1.17 kB).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/__init__.py ADDED
File without changes
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Chile/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (175 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT ADDED
Binary file (111 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+1 ADDED
Binary file (113 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+10 ADDED
Binary file (114 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+11 ADDED
Binary file (114 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+12 ADDED
Binary file (114 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+3 ADDED
Binary file (113 Bytes).
minigpt2/lib/python3.10/site-packages/tzdata/zoneinfo/Etc/GMT+5 ADDED
Binary file (113 Bytes).