Linksome committed on
Commit b2eaaa7 · verified · Parent: 46ac64a

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. .cache/pip/http-v2/0/3/b/7/e/03b7e31ef41a67b4d5d0212d71d27b02703ce649d01a9d3a622a0976 +0 -0
  2. .cache/pip/http-v2/0/3/b/7/e/03b7e31ef41a67b4d5d0212d71d27b02703ce649d01a9d3a622a0976.body +0 -0
  3. .cache/pip/http-v2/0/4/1/8/c/0418c83b80f7f7bfaec2738bfbbee53d2c1562196c0781702f6eddc8 +0 -0
  4. .cache/pip/http-v2/0/4/4/0/9/04409a64cbe9342d7e3b5728f6ad45c1cb35fb3ec830064d6f7f201a +0 -0
  5. .cache/pip/http-v2/0/4/4/0/9/04409a64cbe9342d7e3b5728f6ad45c1cb35fb3ec830064d6f7f201a.body +0 -0
  6. .cache/pip/http-v2/0/4/4/f/2/044f2f53cdcb47194b0e9c6df3c8720ef44d4e5b4d49bedd28cc5434 +0 -0
  7. .cache/pip/http-v2/0/4/6/5/1/04651185225d2f4f97326e1133d8157925fc8a7b89e811a1210f6558 +0 -0
  8. .cache/pip/http-v2/0/4/6/d/f/046df54530e19066b4add1db381002b195b15dbdabc6020b26a061ab +0 -0
  9. .cache/pip/http-v2/0/4/6/d/f/046df54530e19066b4add1db381002b195b15dbdabc6020b26a061ab.body +0 -0
  10. .cache/pip/http-v2/0/4/8/3/d/0483d39313b18dfb10c6853bf0a0161321faeeda4237e8ec3ac14929 +0 -0
  11. .cache/pip/http-v2/0/4/8/3/d/0483d39313b18dfb10c6853bf0a0161321faeeda4237e8ec3ac14929.body +0 -0
  12. .cache/pip/http-v2/0/7/3/c/3/073c319374cb73c3a72a13b4dc11439818d60565b30816a95031773f +0 -0
  13. .cache/pip/http-v2/0/7/3/c/3/073c319374cb73c3a72a13b4dc11439818d60565b30816a95031773f.body +127 -0
  14. .cache/pip/http-v2/0/c/1/9/0/0c1902a50947e5344575b4ef11e0b41b63cc4e3e15eb945e6b0cd91d +0 -0
  15. .cache/pip/http-v2/0/c/1/9/0/0c1902a50947e5344575b4ef11e0b41b63cc4e3e15eb945e6b0cd91d.body +0 -0
  16. .cache/pip/http-v2/0/c/f/6/e/0cf6e817e2c5554000c735ecab0f3cf492f7d33b50d5a474a801ba24 +0 -0
  17. .cache/pip/http-v2/0/c/f/6/e/0cf6e817e2c5554000c735ecab0f3cf492f7d33b50d5a474a801ba24.body +0 -0
  18. .cache/pip/http-v2/0/d/d/c/c/0ddcc4d907dd4d8c3307074b6869d260abb891f5178579de5abe7eff +0 -0
  19. .cache/pip/http-v2/0/d/d/c/c/0ddcc4d907dd4d8c3307074b6869d260abb891f5178579de5abe7eff.body +0 -0
  20. .cache/pip/http-v2/0/e/2/b/6/0e2b668f82a48b560e0865ba3099ebc225eb59d26d4badf227df001a +0 -0
  21. .cache/pip/http-v2/0/e/2/b/6/0e2b668f82a48b560e0865ba3099ebc225eb59d26d4badf227df001a.body +194 -0
  22. .cache/pip/http-v2/0/e/d/3/3/0ed336d749e009fe1267c844e36c442e5b3dc645d5eb9865c545b81a +0 -0
  23. .cache/pip/http-v2/0/e/d/3/3/0ed336d749e009fe1267c844e36c442e5b3dc645d5eb9865c545b81a.body +0 -0
  24. .cache/pip/http-v2/3/1/2/8/c/3128c2672ba9ca8952d2c175538dc592d0105b1a9f9a771148c3a28f +0 -0
  25. .cache/pip/http-v2/3/1/2/8/c/3128c2672ba9ca8952d2c175538dc592d0105b1a9f9a771148c3a28f.body +17 -0
  26. .cache/pip/http-v2/3/1/5/3/9/31539552f2303282ae40da968a51b4e53cadfeaff6b349f1c1a078d2 +0 -0
  27. .cache/pip/http-v2/3/2/4/d/4/324d452fb0049fb2bf10d8b96263e1f9fd84df3bc269e3e4a4b15750 +0 -0
  28. .cache/pip/http-v2/3/2/4/d/4/324d452fb0049fb2bf10d8b96263e1f9fd84df3bc269e3e4a4b15750.body +0 -0
  29. .cache/pip/http-v2/3/3/3/6/9/3336914bfed02afdf3c7792853e86090f0a58451ce670f4a4fb2138e +0 -0
  30. .cache/pip/http-v2/3/3/3/6/9/3336914bfed02afdf3c7792853e86090f0a58451ce670f4a4fb2138e.body +0 -0
  31. .cache/pip/http-v2/3/3/9/7/4/33974f84394d9a943f68359da08431dab4af9f86c33962982ea21b5f +0 -0
  32. .cache/pip/http-v2/3/3/9/7/4/33974f84394d9a943f68359da08431dab4af9f86c33962982ea21b5f.body +0 -0
  33. .cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a +0 -0
  34. .cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a.body +0 -0
  35. .cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b +0 -0
  36. .cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b.body +0 -0
  37. .cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d +0 -0
  38. .cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d.body +0 -0
  39. .cache/pip/http-v2/3/8/e/d/8/38ed89a6c68dd747f9bfdf6c4269891176f046dfcecb4f35f2564158 +0 -0
  40. .cache/pip/http-v2/3/8/e/d/8/38ed89a6c68dd747f9bfdf6c4269891176f046dfcecb4f35f2564158.body +394 -0
  41. .cache/pip/http-v2/3/a/3/4/c/3a34c42ebe642dc43046710b9f05bca097f79a9cf254a8dc21415fae +0 -0
  42. .cache/pip/http-v2/3/a/3/4/c/3a34c42ebe642dc43046710b9f05bca097f79a9cf254a8dc21415fae.body +0 -0
  43. .cache/pip/http-v2/3/a/c/3/6/3ac3678b5f9023a96ea9ecd57204cc7f36baa7f631c59f5e1b8ecedd +0 -0
  44. .cache/pip/http-v2/3/b/f/3/3/3bf332a483597076d042361a196dcaec374b1527b935d69fead02825 +0 -0
  45. .cache/pip/http-v2/3/c/e/7/7/3ce770df07ea321ab056bed31a67391a50e82aea9460d349fd057276 +0 -0
  46. .cache/pip/http-v2/3/c/e/7/7/3ce770df07ea321ab056bed31a67391a50e82aea9460d349fd057276.body +107 -0
  47. .cache/pip/http-v2/3/d/8/c/a/3d8cad774d82fa02cd6b57b61c7ae5f445e51121f6b8147871d094e2 +0 -0
  48. .cache/pip/http-v2/3/d/8/c/a/3d8cad774d82fa02cd6b57b61c7ae5f445e51121f6b8147871d094e2.body +251 -0
  49. .cache/pip/http-v2/3/d/a/4/6/3da464de30c7468304779380c74eab34e7778fd6716e1e47bbe62e37 +0 -0
  50. .cache/pip/http-v2/3/d/a/4/6/3da464de30c7468304779380c74eab34e7778fd6716e1e47bbe62e37.body +265 -0
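The paths above follow pip's `http-v2` cache layout: each entry is keyed by a 56-hex-character digest (SHA-224-sized), sharded into five single-character subdirectories taken from the digest's first five characters, with cache metadata stored under the bare key and the HTTP response payload under `<key>.body`. A minimal sketch of that path mapping — the helper name `http_v2_cache_paths` is ours, and how pip derives the digest itself is out of scope here:

```python
import os

def http_v2_cache_paths(cache_root: str, key: str) -> tuple[str, str]:
    """Reconstruct the on-disk paths pip's http-v2 cache uses for one key.

    The file listing shows each 56-hex-char key sharded into five
    single-character directories (its first five characters), with the
    metadata stored under the bare key and the payload under '<key>.body'.
    """
    shards = list(key[:5])  # e.g. '0', '3', 'b', '7', 'e'
    base = os.path.join(cache_root, *shards, key)
    return base, base + ".body"

meta, body = http_v2_cache_paths(
    ".cache/pip/http-v2",
    "03b7e31ef41a67b4d5d0212d71d27b02703ce649d01a9d3a622a0976",
)
```

Applied to the first key in the listing, this reproduces entries 1 and 2 of the file list.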
.cache/pip/http-v2/0/3/b/7/e/03b7e31ef41a67b4d5d0212d71d27b02703ce649d01a9d3a622a0976 ADDED
Binary file (1.12 kB).
 
.cache/pip/http-v2/0/3/b/7/e/03b7e31ef41a67b4d5d0212d71d27b02703ce649d01a9d3a622a0976.body ADDED
Binary file (9.76 kB).
 
.cache/pip/http-v2/0/4/1/8/c/0418c83b80f7f7bfaec2738bfbbee53d2c1562196c0781702f6eddc8 ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/4/4/0/9/04409a64cbe9342d7e3b5728f6ad45c1cb35fb3ec830064d6f7f201a ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/4/4/0/9/04409a64cbe9342d7e3b5728f6ad45c1cb35fb3ec830064d6f7f201a.body ADDED
Binary file (14.8 kB).
 
.cache/pip/http-v2/0/4/4/f/2/044f2f53cdcb47194b0e9c6df3c8720ef44d4e5b4d49bedd28cc5434 ADDED
Binary file (1.13 kB).
 
.cache/pip/http-v2/0/4/6/5/1/04651185225d2f4f97326e1133d8157925fc8a7b89e811a1210f6558 ADDED
Binary file (1.15 kB).
 
.cache/pip/http-v2/0/4/6/d/f/046df54530e19066b4add1db381002b195b15dbdabc6020b26a061ab ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/4/6/d/f/046df54530e19066b4add1db381002b195b15dbdabc6020b26a061ab.body ADDED
Binary file (71.2 kB).
 
.cache/pip/http-v2/0/4/8/3/d/0483d39313b18dfb10c6853bf0a0161321faeeda4237e8ec3ac14929 ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/4/8/3/d/0483d39313b18dfb10c6853bf0a0161321faeeda4237e8ec3ac14929.body ADDED
Binary file (30.5 kB).
 
.cache/pip/http-v2/0/7/3/c/3/073c319374cb73c3a72a13b4dc11439818d60565b30816a95031773f ADDED
Binary file (1.17 kB).
 
.cache/pip/http-v2/0/7/3/c/3/073c319374cb73c3a72a13b4dc11439818d60565b30816a95031773f.body ADDED
@@ -0,0 +1,127 @@
+ Metadata-Version: 2.4
+ Name: av
+ Version: 16.0.0
+ Summary: Pythonic bindings for FFmpeg's libraries.
+ Author-email: WyattBlue <wyattblue@auto-editor.com>, Jeremy Lainé <jeremy.laine@m4x.org>
+ License-Expression: BSD-3-Clause
+ Project-URL: Bug Tracker, https://github.com/PyAV-Org/PyAV/discussions/new?category=4-bugs
+ Project-URL: Source Code, https://github.com/PyAV-Org/PyAV
+ Project-URL: homepage, https://pyav.basswood-io.com
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Intended Audience :: Developers
+ Classifier: Natural Language :: English
+ Classifier: Operating System :: MacOS :: MacOS X
+ Classifier: Operating System :: POSIX
+ Classifier: Operating System :: Unix
+ Classifier: Operating System :: Microsoft :: Windows
+ Classifier: Programming Language :: Cython
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Programming Language :: Python :: 3.14
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Multimedia :: Sound/Audio
+ Classifier: Topic :: Multimedia :: Sound/Audio :: Conversion
+ Classifier: Topic :: Multimedia :: Video
+ Classifier: Topic :: Multimedia :: Video :: Conversion
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE.txt
+ License-File: AUTHORS.py
+ License-File: AUTHORS.rst
+ Dynamic: license-file
+
+ PyAV
+ ====
+
+ PyAV is a Pythonic binding for the [FFmpeg][ffmpeg] libraries. We aim to provide all of the power and control of the underlying library, but manage the gritty details as much as possible.
+
+ ---
+
+ [![GitHub Test Status][github-tests-badge]][github-tests] [![Documentation][docs-badge]][docs] [![Python Package Index][pypi-badge]][pypi] [![Conda Forge][conda-badge]][conda]
+
+ PyAV is for direct and precise access to your media via containers, streams, packets, codecs, and frames. It exposes a few transformations of that data, and helps you get your data to/from other packages (e.g. Numpy and Pillow).
+
+ This power does come with some responsibility as working with media is horrendously complicated and PyAV can't abstract it away or make all the best decisions for you. If the `ffmpeg` command does the job without you bending over backwards, PyAV is likely going to be more of a hindrance than a help.
+
+ But where you can't work without it, PyAV is a critical tool.
+
+
+ Installation
+ ------------
+
+ Binary wheels are provided on [PyPI][pypi] for Linux, MacOS and Windows linked against the latest stable version of ffmpeg. You can install these wheels by running:
+
+ ```bash
+ pip install av
+ ```
+
+ Another way of installing PyAV is via [conda-forge][conda-forge]:
+
+ ```bash
+ conda install av -c conda-forge
+ ```
+
+ See the [Conda install][conda-install] docs to get started with (mini)Conda.
+
+
+ Alternative installation methods
+ --------------------------------
+
+ Due to the complexity of the dependencies, PyAV is not always the easiest Python package to install from source. If you want to use your existing ffmpeg (must be the correct major version), the source version of PyAV is on [PyPI][pypi]:
+
+ > [!WARNING]
+ > You must be in a posix env, and have the correct version of ffmpeg installed on your system.
+
+ ```bash
+ pip install av --no-binary av
+ ```
+
+
+ Installing From Source
+ ----------------------
+
+ Here's how to build PyAV from source. You must use [MSYS2](https://www.msys2.org/) when using Windows.
+
+ ```bash
+ git clone https://github.com/PyAV-Org/PyAV.git
+ cd PyAV
+ source scripts/activate.sh
+
+ # Build ffmpeg from source. You can skip this step
+ # if ffmpeg is already installed.
+ ./scripts/build-deps
+
+ # Build PyAV
+ make
+
+ # Testing
+ make test
+
+ # Install globally
+ deactivate
+ pip install .
+ ```
+
+ ---
+
+ Have fun, [read the docs][docs], [come chat with us][discuss], and good luck!
+
+
+
+ [conda-badge]: https://img.shields.io/conda/vn/conda-forge/av.svg?colorB=CCB39A
+ [conda]: https://anaconda.org/conda-forge/av
+ [docs-badge]: https://img.shields.io/badge/docs-on%20pyav.basswood--io.com-blue.svg
+ [docs]: https://pyav.basswood-io.com
+ [pypi-badge]: https://img.shields.io/pypi/v/av.svg?colorB=CCB39A
+ [pypi]: https://pypi.org/project/av
+ [discuss]: https://github.com/PyAV-Org/PyAV/discussions
+
+ [github-tests-badge]: https://github.com/PyAV-Org/PyAV/workflows/tests/badge.svg
+ [github-tests]: https://github.com/PyAV-Org/PyAV/actions?workflow=tests
+ [github]: https://github.com/PyAV-Org/PyAV
+
+ [ffmpeg]: https://ffmpeg.org/
+ [conda-forge]: https://conda-forge.github.io/
+ [conda-install]: https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html
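Each text diff in this commit opens with a unified-diff hunk header such as `@@ -0,0 +1,127 @@`, meaning a brand-new file with 127 added lines. As a side note, the header format is easy to parse with the standard library; `parse_hunk_header` below is an illustrative helper, not part of any tool shown here:

```python
import re

# Unified-diff hunk header: @@ -<old_start>[,<old_count>] +<new_start>[,<new_count>] @@
# A count is omitted when it equals 1.
_HUNK_RE = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def parse_hunk_header(line: str) -> tuple[int, int, int, int]:
    """Return (old_start, old_count, new_start, new_count); counts default to 1."""
    m = _HUNK_RE.match(line)
    if m is None:
        raise ValueError(f"not a hunk header: {line!r}")
    old_start, old_count, new_start, new_count = m.groups()
    return (int(old_start), int(old_count or 1), int(new_start), int(new_count or 1))

# The PyAV diff above describes a new 127-line file:
print(parse_hunk_header("@@ -0,0 +1,127 @@"))  # (0, 0, 1, 127)
```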
.cache/pip/http-v2/0/c/1/9/0/0c1902a50947e5344575b4ef11e0b41b63cc4e3e15eb945e6b0cd91d ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/c/1/9/0/0c1902a50947e5344575b4ef11e0b41b63cc4e3e15eb945e6b0cd91d.body ADDED
Binary file (1.46 kB).
 
.cache/pip/http-v2/0/c/f/6/e/0cf6e817e2c5554000c735ecab0f3cf492f7d33b50d5a474a801ba24 ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/0/c/f/6/e/0cf6e817e2c5554000c735ecab0f3cf492f7d33b50d5a474a801ba24.body ADDED
Binary file (8.77 kB).
 
.cache/pip/http-v2/0/d/d/c/c/0ddcc4d907dd4d8c3307074b6869d260abb891f5178579de5abe7eff ADDED
Binary file (1.17 kB).
 
.cache/pip/http-v2/0/d/d/c/c/0ddcc4d907dd4d8c3307074b6869d260abb891f5178579de5abe7eff.body ADDED
Binary file (13.6 kB).
 
.cache/pip/http-v2/0/e/2/b/6/0e2b668f82a48b560e0865ba3099ebc225eb59d26d4badf227df001a ADDED
Binary file (1.17 kB).
 
.cache/pip/http-v2/0/e/2/b/6/0e2b668f82a48b560e0865ba3099ebc225eb59d26d4badf227df001a.body ADDED
@@ -0,0 +1,194 @@
+ Metadata-Version: 2.4
+ Name: shtab
+ Version: 1.8.0
+ Summary: Automagic shell tab completion for Python CLI applications
+ Author-email: Casper da Costa-Luis <casper.dcl@physics.org>
+ Maintainer-email: Iterative <support@iterative.ai>
+ License-Expression: Apache-2.0
+ Project-URL: documentation, https://docs.iterative.ai/shtab
+ Project-URL: repository, https://github.com/iterative/shtab
+ Project-URL: changelog, https://github.com/iterative/shtab/releases
+ Keywords: tab,complete,completion,shell,bash,zsh,argparse
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Environment :: Console
+ Classifier: Environment :: MacOS X
+ Classifier: Environment :: Other Environment
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Education
+ Classifier: Intended Audience :: End Users/Desktop
+ Classifier: Intended Audience :: Other Audience
+ Classifier: Intended Audience :: System Administrators
+ Classifier: Operating System :: MacOS
+ Classifier: Operating System :: MacOS :: MacOS X
+ Classifier: Operating System :: POSIX
+ Classifier: Operating System :: POSIX :: BSD
+ Classifier: Operating System :: POSIX :: BSD :: FreeBSD
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Operating System :: POSIX :: SunOS/Solaris
+ Classifier: Operating System :: Unix
+ Classifier: Programming Language :: Other Scripting Engines
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Programming Language :: Python :: Implementation
+ Classifier: Programming Language :: Python :: Implementation :: IronPython
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
+ Classifier: Programming Language :: Unix Shell
+ Classifier: Topic :: Desktop Environment
+ Classifier: Topic :: Education :: Computer Aided Instruction (CAI)
+ Classifier: Topic :: Education :: Testing
+ Classifier: Topic :: Office/Business
+ Classifier: Topic :: Other/Nonlisted Topic
+ Classifier: Topic :: Software Development
+ Classifier: Topic :: Software Development :: Build Tools
+ Classifier: Topic :: Software Development :: Libraries
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Software Development :: Pre-processors
+ Classifier: Topic :: Software Development :: User Interfaces
+ Classifier: Topic :: System
+ Classifier: Topic :: System :: Installation/Setup
+ Classifier: Topic :: System :: Shells
+ Classifier: Topic :: System :: System Shells
+ Classifier: Topic :: Terminals
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.9
+ Description-Content-Type: text/x-rst
+ License-File: LICENCE
+ Provides-Extra: dev
+ Requires-Dist: pytest>=6; extra == "dev"
+ Requires-Dist: pytest-cov; extra == "dev"
+ Requires-Dist: pytest-timeout; extra == "dev"
+ Dynamic: license-file
+
+ |Logo|
+
+ shtab
+ =====
+
+ |PyPI-Downloads| |Tests| |Coverage| |PyPI| |Conda|
+
+ - What: Automatically generate shell tab completion scripts for Python CLI apps
+ - Why: Speed & correctness. Alternatives like
+   `argcomplete <https://pypi.org/project/argcomplete>`_ and
+   `pyzshcomplete <https://pypi.org/project/pyzshcomplete>`_ are slow and have
+   side-effects
+ - How: ``shtab`` processes an ``argparse.ArgumentParser`` object to generate a
+   tab completion script for your shell
+
+ Features
+ --------
+
+ - Outputs tab completion scripts for
+
+   - ``bash``
+   - ``zsh``
+   - ``tcsh``
+
+ - Supports
+
+   - `argparse <https://docs.python.org/library/argparse>`_
+   - `docopt <https://pypi.org/project/docopt>`_ (via `argopt <https://pypi.org/project/argopt>`_)
+
+ - Supports arguments, options and subparsers
+ - Supports choices (e.g. ``--say={hello,goodbye}``)
+ - Supports file and directory path completion
+ - Supports custom path completion (e.g. ``--file={*.txt}``)
+
+ ------------------------------------------
+
+ .. contents:: Table of Contents
+    :backlinks: top
+
+ Installation
+ ------------
+
+ Choose one of:
+
+ - ``pip install shtab``, or
+ - ``conda install -c conda-forge shtab``
+
+ See `operating system-specific instructions in the docs <https://docs.iterative.ai/shtab/#installation>`_.
+
+ Usage
+ -----
+
+ There are two ways of using ``shtab``:
+
+ - `CLI Usage <https://docs.iterative.ai/shtab/use/#cli-usage>`_: ``shtab``'s own CLI interface for external applications
+
+   - may not require any code modifications whatsoever
+   - end-users execute ``shtab your_cli_app.your_parser_object``
+
+ - `Library Usage <https://docs.iterative.ai/shtab/use/#library-usage>`_: as a library integrated into your CLI application
+
+   - adds a couple of lines to your application
+   - argument mode: end-users execute ``your_cli_app --print-completion {bash,zsh,tcsh}``
+   - subparser mode: end-users execute ``your_cli_app completion {bash,zsh,tcsh}``
+
+ Examples
+ --------
+
+ See `the docs for usage examples <https://docs.iterative.ai/shtab/use/#main.py>`_.
+
+ FAQs
+ ----
+
+ Not working? Check out `frequently asked questions <https://docs.iterative.ai/shtab/#faqs>`_.
+
+ Alternatives
+ ------------
+
+ - `argcomplete <https://pypi.org/project/argcomplete>`_
+
+   - executes the underlying script *every* time ``<TAB>`` is pressed (slow and
+     has side-effects)
+
+ - `pyzshcomplete <https://pypi.org/project/pyzshcomplete>`_
+
+   - executes the underlying script *every* time ``<TAB>`` is pressed (slow and
+     has side-effects)
+   - only provides ``zsh`` completion
+
+ - `click <https://pypi.org/project/click>`_
+
+   - different framework completely replacing the builtin ``argparse``
+   - solves multiple problems (rather than POSIX-style "do one thing well")
+
+ Contributions
+ -------------
+
+ Please do open `issues <https://github.com/iterative/shtab/issues>`_ & `pull requests <https://github.com/iterative/shtab/pulls>`_! Some ideas:
+
+ - support ``fish`` (`#174 <https://github.com/iterative/shtab/pull/174>`_)
+ - support ``powershell``
+
+ See
+ `CONTRIBUTING.md <https://github.com/iterative/shtab/tree/main/CONTRIBUTING.md>`_
+ for more guidance.
+
+ |Hits|
+
+ .. |Logo| image:: https://github.com/iterative/shtab/raw/main/meta/logo.png
+ .. |Tests| image:: https://img.shields.io/github/actions/workflow/status/iterative/shtab/test.yml?logo=github&label=tests
+    :target: https://github.com/iterative/shtab/actions
+    :alt: Tests
+ .. |Coverage| image:: https://codecov.io/gh/iterative/shtab/branch/main/graph/badge.svg
+    :target: https://codecov.io/gh/iterative/shtab
+    :alt: Coverage
+ .. |Conda| image:: https://img.shields.io/conda/v/conda-forge/shtab.svg?label=conda&logo=conda-forge
+    :target: https://anaconda.org/conda-forge/shtab
+    :alt: conda-forge
+ .. |PyPI| image:: https://img.shields.io/pypi/v/shtab.svg?label=pip&logo=PyPI&logoColor=white
+    :target: https://pypi.org/project/shtab
+    :alt: PyPI
+ .. |PyPI-Downloads| image:: https://img.shields.io/pypi/dm/shtab.svg?label=pypi%20downloads&logo=PyPI&logoColor=white
+    :target: https://pepy.tech/project/shtab
+    :alt: Downloads
+ .. |Hits| image:: https://cgi.cdcl.ml/hits?q=shtab&style=social&r=https://github.com/iterative/shtab&a=hidden
+    :target: https://cgi.cdcl.ml/hits?q=shtab&a=plot&r=https://github.com/iterative/shtab&style=social
+    :alt: Hits
.cache/pip/http-v2/0/e/d/3/3/0ed336d749e009fe1267c844e36c442e5b3dc645d5eb9865c545b81a ADDED
Binary file (1.16 kB).
 
.cache/pip/http-v2/0/e/d/3/3/0ed336d749e009fe1267c844e36c442e5b3dc645d5eb9865c545b81a.body ADDED
Binary file (20.5 kB).
 
.cache/pip/http-v2/3/1/2/8/c/3128c2672ba9ca8952d2c175538dc592d0105b1a9f9a771148c3a28f ADDED
Binary file (1.22 kB).
 
.cache/pip/http-v2/3/1/2/8/c/3128c2672ba9ca8952d2c175538dc592d0105b1a9f9a771148c3a28f.body ADDED
@@ -0,0 +1,17 @@
+ Metadata-Version: 2.1
+ Name: protobuf
+ Author: protobuf@googlegroups.com
+ Author-email: protobuf@googlegroups.com
+ Home-page: https://developers.google.com/protocol-buffers/
+ License: 3-Clause BSD License
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.9
+ Version: 6.33.5
+
+ UNKNOWN
.cache/pip/http-v2/3/1/5/3/9/31539552f2303282ae40da968a51b4e53cadfeaff6b349f1c1a078d2 ADDED
Binary file (1.79 kB).
 
.cache/pip/http-v2/3/2/4/d/4/324d452fb0049fb2bf10d8b96263e1f9fd84df3bc269e3e4a4b15750 ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/3/2/4/d/4/324d452fb0049fb2bf10d8b96263e1f9fd84df3bc269e3e4a4b15750.body ADDED
Binary file (2.56 kB).
 
.cache/pip/http-v2/3/3/3/6/9/3336914bfed02afdf3c7792853e86090f0a58451ce670f4a4fb2138e ADDED
Binary file (1.16 kB).
 
.cache/pip/http-v2/3/3/3/6/9/3336914bfed02afdf3c7792853e86090f0a58451ce670f4a4fb2138e.body ADDED
Binary file (68.5 kB).
 
.cache/pip/http-v2/3/3/9/7/4/33974f84394d9a943f68359da08431dab4af9f86c33962982ea21b5f ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/3/3/9/7/4/33974f84394d9a943f68359da08431dab4af9f86c33962982ea21b5f.body ADDED
Binary file (6.94 kB).
 
.cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/3/4/e/4/e/34e4ed9f6da78ec378e04be04de734c69a68c889dbef86cb0f5f498a.body ADDED
Binary file (18.6 kB).
 
.cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b ADDED
Binary file (1.15 kB).
 
.cache/pip/http-v2/3/7/c/5/1/37c518ffee828550acec5a0fd158a5f1d852928eea13df9d077b428b.body ADDED
Binary file (78.4 kB).
 
.cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d ADDED
Binary file (1.78 kB).
 
.cache/pip/http-v2/3/8/6/0/e/3860e4de9ae53c79d2fd61419e9049df314ccc8b640782c02c6e2e2d.body ADDED
Binary file (42.1 kB).
 
.cache/pip/http-v2/3/8/e/d/8/38ed89a6c68dd747f9bfdf6c4269891176f046dfcecb4f35f2564158 ADDED
Binary file (1.22 kB).
 
.cache/pip/http-v2/3/8/e/d/8/38ed89a6c68dd747f9bfdf6c4269891176f046dfcecb4f35f2564158.body ADDED
@@ -0,0 +1,394 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Metadata-Version: 2.4
2
+ Name: einops
3
+ Version: 0.8.2
4
+ Summary: A new flavour of deep learning operations
5
+ Project-URL: Homepage, https://github.com/arogozhnikov/einops
6
+ Author: Alex Rogozhnikov
7
+ License: MIT
8
+ License-File: LICENSE
9
+ Keywords: deep learning,einops,machine learning,neural networks,scientific computations,tensor manipulation
10
+ Classifier: Intended Audience :: Science/Research
11
+ Classifier: License :: OSI Approved :: MIT License
12
+ Classifier: Programming Language :: Python :: 3
13
+ Requires-Python: >=3.9
14
+ Description-Content-Type: text/markdown
15
+
16
+
17
+ <!--
18
+ <a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4' >
19
+ <div align="center">
20
+ <img src="http://arogozhnikov.github.io/images/einops/einops_video.gif" alt="einops package examples" />
21
+ <br>
22
+ <small><a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4'>This video in high quality (mp4)</a></small>
23
+ <br><br>
24
+ </div>
25
+ </a>
26
+ -->
27
+
28
+ <!-- this link magically rendered as video on github readme, unfortunately not in docs -->
29
+
30
+ https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4
31
+
32
+
33
+
34
+
35
+ # einops
36
+ [![Run tests](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml/badge.svg)](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml)
37
+ [![PyPI version](https://badge.fury.io/py/einops.svg)](https://badge.fury.io/py/einops)
38
+ [![Documentation](https://img.shields.io/badge/documentation-link-blue.svg)](https://einops.rocks/)
39
+ ![Supported python versions](https://raw.githubusercontent.com/arogozhnikov/einops/main/docs/resources/python_badge.svg)
40
+
41
+
42
+ Flexible and powerful tensor operations for readable and reliable code. <br />
43
+ Supports numpy, pytorch, tensorflow, jax, and [others](#supported-frameworks).
44
+
45
+ ## Recent updates:
46
+
47
+ - 0.8.0: tinygrad backend added, small fixes
48
+ - 0.7.0: no-hassle `torch.compile`, support of [array api standard](https://data-apis.org/array-api/latest/API_specification/index.html) and more
49
+ - 10'000🎉: github reports that more than 10k project use einops
50
+ - einops 0.6.1: paddle backend added
51
+ - einops 0.6 introduces [packing and unpacking](https://github.com/arogozhnikov/einops/blob/main/docs/4-pack-and-unpack.ipynb)
52
+ - einops 0.5: einsum is now a part of einops
53
+ - [Einops paper](https://openreview.net/pdf?id=oapKSVM2bcj) is accepted for oral presentation at ICLR 2022 (yes, it worth reading).
54
+ Talk recordings are [available](https://iclr.cc/virtual/2022/oral/6603)
55
+
56
+
57
+ <details markdown="1">
58
+ <summary>Previous updates</summary>
59
+ - flax and oneflow backend added
60
+ - torch.jit.script is supported for pytorch layers
61
+ - powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/main/docs/3-einmix-layer.ipynb)
62
+ </details>
63
+
64
+ <!--<div align="center">
65
+ <img src="http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png"
66
+ alt="einops package logo" width="250" height="250" />
67
+ <br><br>
68
+ </div> -->
69
+
70
+
71
+ ## Tweets
72
+
73
+ > In case you need convincing arguments for setting aside time to learn about einsum and einops...
74
+ [Tim Rocktäschel](https://twitter.com/_rockt/status/1230818967205425152)
75
+
76
+ > Writing better code with PyTorch and einops 👌
77
+ [Andrej Karpathy](https://twitter.com/karpathy/status/1290826075916779520)
78
+
79
+ > Slowly but surely, einops is seeping in to every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life
80
+ [Nasim Rahaman](https://twitter.com/nasim_rahaman/status/1216022614755463169)
81
+
82
+ [More testimonials](https://einops.rocks/pages/testimonials/)
83
+
84
+
85
+ ## Contents
86
+
87
+ - [Installation](#Installation)
88
+ - [Documentation](https://einops.rocks/)
89
+ - [Tutorial](#Tutorials)
90
+ - [API micro-reference](#API)
91
+ - [Why use einops](#Why-use-einops-notation)
92
+ - [Supported frameworks](#Supported-frameworks)
93
+ - [Citing](#Citing)
94
+ - [Repository](https://github.com/arogozhnikov/einops) and [discussions](https://github.com/arogozhnikov/einops/discussions)
95
+
96
+ ## Installation <a name="Installation"></a>
97
+
98
+ Plain and simple:
99
+ ```bash
100
+ pip install einops
101
+ ```
102
+
103
+ (`uv pip install einops` works as well)
104
+
105
+ ## Tutorials <a name="Tutorials"></a>
106
+
107
+ Tutorials are the most convenient way to see `einops` in action
108
+
109
+ - part 1: [einops fundamentals](https://github.com/arogozhnikov/einops/blob/main/docs/1-einops-basics.ipynb)
110
+ - part 2: [einops for deep learning](https://github.com/arogozhnikov/einops/blob/main/docs/2-einops-for-deep-learning.ipynb)
111
+ - part 3: [packing and unpacking](https://github.com/arogozhnikov/einops/blob/main/docs/4-pack-and-unpack.ipynb)
112
+ - part 4: [improve pytorch code with einops](http://einops.rocks/pytorch-examples.html)
113
+
114
+ Kapil Sachdeva recorded a small [intro to einops](https://www.youtube.com/watch?v=xGy75Pjsqzo).
115
+
116
+ ## API <a name="API"></a>
117
+
118
+ `einops` has a minimalistic yet powerful API.
119
+
120
+ Three core operations provided ([einops tutorial](https://github.com/arogozhnikov/einops/blob/main/docs/)
121
+ shows those cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions)
122
+
123
+ ```python
124
+ from einops import rearrange, reduce, repeat
125
+ # rearrange elements according to the pattern
126
+ output_tensor = rearrange(input_tensor, 't b c -> b c t')
127
+ # combine rearrangement and reduction
128
+ output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
129
+ # copy along a new axis
130
+ output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
131
+ ```

Later additions to the family are the `pack` and `unpack` functions (better than stack/split/concatenate):

```python
from einops import pack, unpack
# pack and unpack allow reversibly 'packing' multiple tensors into one.
# Packed tensors may be of different dimensionality:
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
```
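The idea behind `pack`/`unpack` can be sketched in plain numpy: flatten the middle axes of each tensor, concatenate along the resulting `*` axis, and keep the original middle shapes so the operation is reversible. The helper names below (`pack_bc`, `unpack_bc`) are hypothetical, a toy version of the `'b * c'` pattern only:

```python
import numpy as np

def pack_bc(tensors):
    """Flatten the middle axes of each (b, ..., c) array and concatenate
    along the new '*' axis; a toy numpy analogue of pack(..., 'b * c')."""
    flat = [t.reshape(t.shape[0], -1, t.shape[-1]) for t in tensors]
    ps = [t.shape[1:-1] for t in tensors]   # middle shapes, needed to unpack
    return np.concatenate(flat, axis=1), ps

def unpack_bc(packed, ps):
    sizes = [int(np.prod(s)) if s else 1 for s in ps]
    splits = np.split(packed, np.cumsum(sizes)[:-1], axis=1)
    return [s.reshape(s.shape[0], *shape, s.shape[-1]) for s, shape in zip(splits, ps)]

b, c = 2, 8
class_token = np.zeros((b, c))          # no middle axes
image_tokens = np.zeros((b, 4, 4, c))   # h w
text_tokens = np.zeros((b, 7, c))       # t
packed, ps = pack_bc([class_token, image_tokens, text_tokens])
assert packed.shape == (b, 1 + 16 + 7, c)
outs = unpack_bc(packed, ps)
assert [o.shape for o in outs] == [(b, c), (b, 4, 4, c), (b, 7, c)]
```

The real `pack` generalizes this to arbitrary patterns and frameworks; the key point is that `ps` records exactly what is needed to restore each input's shape.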

Finally, einops provides `einsum` with support for multi-lettered axis names:

```python
from einops import einsum
# einsum is like ... einsum: a generic and flexible dot-product,
# but 1) axes can be multi-lettered 2) the pattern goes last 3) it works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
```
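The multi-lettered pattern corresponds directly to a single-letter `numpy.einsum` spec; a sketch with hypothetical shapes, verifying one entry against the explicit dot-product:

```python
import numpy as np

b, t1, t2, head, c = 2, 3, 4, 5, 6
A = np.random.default_rng(0).normal(size=(b, t1, head, c))
B = np.random.default_rng(1).normal(size=(b, t2, head, c))

# 'b t1 head c, b t2 head c -> b head t1 t2' maps to single-letter einsum:
C = np.einsum('bihc,bjhc->bhij', A, B)
assert C.shape == (b, head, t1, t2)
# spot-check one entry: C[b, h, i, j] = sum_c A[b, i, h, c] * B[b, j, h, c]
assert np.isclose(C[0, 0, 0, 0], A[0, 0, 0] @ B[0, 0, 0])
```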

### EinMix

`EinMix` is a generic linear layer, perfect for MLP Mixers and similar architectures.

### Layers

Einops provides layers (`einops` keeps a separate version for each framework) that mirror the corresponding functions:

```python
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
from einops.layers.flax import Rearrange, Reduce
from einops.layers.paddle import Rearrange, Reduce
```

<details markdown="1">
<summary>Example of using layers within a pytorch model</summary>

The example is given for pytorch, but code in other frameworks is almost identical:

```python
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    ...,
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without need to write forward
    Rearrange('b c h w -> b (c h w)'),
    Linear(16*5*5, 120),
    ReLU(),
    Linear(120, 10),
)
```

No more flatten needed!

Additionally, torch layers like these are script-able and compile-able.
Operations [are torch.compile-able](https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops),
but not script-able due to limitations of torch.jit.script.
</details>

## Naming <a name="Naming"></a>

`einops` stands for Einstein-Inspired Notation for operations
(though "Einstein operations" is more attractive and easier to remember).

The notation was loosely inspired by Einstein summation (in particular by the `numpy.einsum` operation).

## Why use `einops` notation?! <a name="Why-use-einops-notation"></a>

### Semantic information (being verbose in expectations)

```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```

While these two lines do the same job in *some* context,
the second one provides information about the input and output.
In other words, `einops` focuses on the interface: *what are the input and output*, not *how* the output is computed.

The next operation looks similar:

```python
y = rearrange(x, 'time c h w -> time (c h w)')
```

but it gives the reader a hint:
this is not an independent batch of images we are processing,
but rather a sequence (video).

Semantic information makes the code easier to read and maintain.

### Convenient checks

Reconsider the same example:

```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```

The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's opposed to just writing comments about shapes: comments don't prevent mistakes,
are not tested, and without code review tend to become outdated.

```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
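The kind of check einops performs here can be sketched in plain numpy: validate the declared shape before reshaping, so a wrong input fails loudly instead of silently producing garbage. The helper name `checked_flatten` is hypothetical, a stand-in for the `rearrange` call above:

```python
import numpy as np

def checked_flatten(x, c, h, w):
    # toy stand-in for rearrange(x, 'b c h w -> b (c h w)', c=..., h=..., w=...)
    if x.ndim != 4 or x.shape[1:] != (c, h, w):
        raise ValueError(f"expected (b, {c}, {h}, {w}), got {x.shape}")
    return x.reshape(x.shape[0], -1)

ok = checked_flatten(np.zeros((8, 256, 19, 19)), c=256, h=19, w=19)
assert ok.shape == (8, 256 * 19 * 19)

caught = False
try:
    checked_flatten(np.zeros((8, 256, 19, 18)), c=256, h=19, w=19)
except ValueError:
    caught = True                 # a bare .view() would have succeeded silently
assert caught
```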

### Result is strictly determined

Below we have at least two ways to define the depth-to-space operation:

```python
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```

There are at least four more ways to do it. Which one is used by the framework?

These details are ignored, since it *usually* makes no difference,
but it can make a big difference (e.g. if you use grouped convolutions in the next stage),
and you'd like to specify this in your code.
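To see that the two patterns really produce different layouts, here is a numpy sketch of both (hypothetical shapes): they contain the same values, but ordered differently along the output channel axis.

```python
import numpy as np

x = np.arange(1 * 4 * 4 * 4).reshape(1, 4, 4, 4)   # b c H W, with c=4, H=W=4

# 'b c (h h2) (w w2) -> b (c h2 w2) h w': channel index varies slowest
v1 = (x.reshape(1, 4, 2, 2, 2, 2)                  # b c h h2 w w2
       .transpose(0, 1, 3, 5, 2, 4)                # b c h2 w2 h w
       .reshape(1, 16, 2, 2))

# 'b c (h h2) (w w2) -> b (h2 w2 c) h w': channel index varies fastest
v2 = (x.reshape(1, 4, 2, 2, 2, 2)
       .transpose(0, 3, 5, 1, 2, 4)                # b h2 w2 c h w
       .reshape(1, 16, 2, 2))

# same multiset of values, different layout along the output channel axis
assert np.array_equal(np.sort(v1.ravel()), np.sort(v2.ravel()))
assert not np.array_equal(v1, v2)
```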

### Uniformity

```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```
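The pattern generalizes mechanically: split each spatial axis by its factor, then reduce over the factor axes. A numpy sketch of this (the helper name `max_pool` is hypothetical):

```python
import numpy as np

def max_pool(x, *factors):
    """Max-pool the trailing len(factors) axes of a (b, c, ...) array,
    mimicking reduce(x, 'b c (x dx) ... -> b c x ...', 'max', ...)."""
    b, c, *spatial = x.shape
    shape = [b, c]
    for size, f in zip(spatial, factors):
        assert size % f == 0, "spatial size must be divisible by the factor"
        shape += [size // f, f]
    split = x.reshape(shape)
    # reduce over every inserted factor axis (axes 3, 5, 7, ...)
    return split.max(axis=tuple(range(3, 3 + 2 * len(factors), 2)))

x1 = np.arange(2 * 3 * 8).reshape(2, 3, 8)
assert max_pool(x1, 2).shape == (2, 3, 4)          # 1d pooling

x2 = np.arange(2 * 3 * 8 * 6).reshape(2, 3, 8, 6)
assert max_pool(x2, 2, 3).shape == (2, 3, 4, 2)    # 2d pooling, same code
```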

These examples demonstrate that we don't use separate operations for 1d/2d/3d pooling;
they are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:

```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```

### Framework independent behavior

Even simple functions are defined differently in different frameworks:

```python
y = x.flatten()  # or flatten(x)
```

Suppose `x`'s shape was `(3, 4, 5)`, then `y` has shape ...

- numpy, pytorch, cupy, chainer, jax: `(60,)`
- keras, tensorflow.layers, gluon: `(3, 20)`

`einops` works the same way in all frameworks.
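Both conventions are one line of numpy; the point is that nothing in the word "flatten" tells you which one you get, while an einops pattern pins it down:

```python
import numpy as np

x = np.zeros((3, 4, 5))

# numpy/pytorch-style flatten: collapse everything
assert x.reshape(-1).shape == (60,)

# keras/gluon-style Flatten layer: keep the batch axis
assert x.reshape(x.shape[0], -1).shape == (3, 20)

# the einops pattern 'b h w -> b (h w)' states the second behaviour explicitly
```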

### Independence of framework terminology

Example: `tile` vs `repeat` causes lots of confusion. To copy an image along its width:

```python
np.tile(image, (1, 2))  # in numpy
image.repeat(1, 2)      # pytorch's repeat ~ numpy's tile
```

With einops you don't need to decipher which axis was repeated:

```python
repeat(image, 'h w -> h (tile w)', tile=2)  # in numpy
repeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2)  # in tf
repeat(image, 'h w -> h (tile w)', tile=2)  # in jax
repeat(image, 'h w -> h (tile w)', tile=2)  # in cupy
... (etc.)
```
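The pattern also disambiguates the two behaviours that `tile`/`repeat` conflate: `'h (tile w)'` repeats whole rows, `'h (w tile)'` repeats each element. In plain numpy:

```python
import numpy as np

image = np.array([[1, 2],
                  [3, 4]])

# 'h w -> h (tile w)': the whole row is repeated -> numpy's tile
assert np.array_equal(np.tile(image, (1, 2)),
                      np.array([[1, 2, 1, 2], [3, 4, 3, 4]]))

# 'h w -> h (w tile)': each element is repeated -> numpy's repeat
assert np.array_equal(np.repeat(image, 2, axis=1),
                      np.array([[1, 1, 2, 2], [3, 3, 4, 4]]))
```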

[Testimonials](https://einops.rocks/pages/testimonials/) provide users' perspective on the same question.

## Supported frameworks <a name="Supported-frameworks"></a>

Einops works with ...

- [numpy](http://www.numpy.org/)
- [pytorch](https://pytorch.org/)
- [tensorflow](https://www.tensorflow.org/)
- [jax](https://github.com/google/jax)
- [cupy](https://github.com/cupy/cupy)
- [flax](https://github.com/google/flax) (community)
- [paddle](https://github.com/PaddlePaddle/Paddle) (community)
- [oneflow](https://github.com/Oneflow-Inc/oneflow) (community)
- [tinygrad](https://github.com/tinygrad/tinygrad) (community)
- [pytensor](https://github.com/pymc-devs/pytensor) (community)

But actually it is even better: einops can be used with *any* framework that supports the
[Python array API standard](https://data-apis.org/array-api/latest/API_specification/index.html);
just change the import:

```python
from einops import rearrange
# becomes
from einops.array_api import rearrange
```

To name a few such frameworks:

- numpy >= 2.0
- [MLX](https://github.com/ml-explore/mlx)  # yes, einops works with apple's framework
- [pydata/sparse](https://github.com/pydata/sparse) >= 0.15  # and works with sparse tensors
- [cubed](https://github.com/cubed-dev/cubed)  # and with distributed tensors too
- [quantco/ndonnx](https://github.com/Quantco/ndonnx)
- jax
- cupy
- dask is supported via [array-api-compat](https://github.com/data-apis/array-api-compat)

## Development

A devcontainer is provided; this environment can be used locally, on your own server,
or within github codespaces.
To start with devcontainers in vs code, clone the repo and click 'Reopen in Devcontainer'.

Starting from einops 0.8.1, einops distributes tests as a part of the package.

```bash
# pip install einops pytest
python -m einops.tests.run_tests numpy pytorch jax --pip-install
```

`numpy pytorch jax` is an _example_; any subset of testable frameworks can be provided.
Every framework is tested against numpy, so numpy is a requirement for tests.

Specifying `--pip-install` will install requirements in the current virtualenv,
and should be omitted if dependencies are installed locally.

To build/test docs:

```bash
hatch run docs:serve  # Serving on http://localhost:8000/
```

## Citing einops <a name="Citing"></a>

Please use the following bibtex record:

```text
@inproceedings{
rogozhnikov2022einops,
title={Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation},
author={Alex Rogozhnikov},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=oapKSVM2bcj}
}
```

## Supported python versions

`einops` works with python 3.9 or later.
.cache/pip/http-v2/3/c/e/7/7/3ce770df07ea321ab056bed31a67391a50e82aea9460d349fd057276.body ADDED
Metadata-Version: 2.4
Name: packaging
Version: 26.0
Summary: Core utilities for Python packages
Author-email: Donald Stufft <donald@stufft.io>
Requires-Python: >=3.8
Description-Content-Type: text/x-rst
License-Expression: Apache-2.0 OR BSD-2-Clause
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Typing :: Typed
License-File: LICENSE
License-File: LICENSE.APACHE
License-File: LICENSE.BSD
Project-URL: Documentation, https://packaging.pypa.io/
Project-URL: Source, https://github.com/pypa/packaging

packaging
=========

.. start-intro

Reusable core utilities for various Python Packaging
`interoperability specifications <https://packaging.python.org/specifications/>`_.

This library provides utilities that implement the interoperability
specifications which have clearly one correct behaviour (e.g. :pep:`440`)
or benefit greatly from having a single shared implementation (e.g. :pep:`425`).

.. end-intro

The ``packaging`` project includes the following: version handling, specifiers,
markers, requirements, tags, metadata, lockfiles, utilities.

Documentation
-------------

The `documentation`_ provides information and the API for the following:

- Version Handling
- Specifiers
- Markers
- Requirements
- Tags
- Metadata
- Lockfiles
- Utilities

Installation
------------

Use ``pip`` to install these utilities::

    pip install packaging

The ``packaging`` library uses calendar-based versioning (``YY.N``).
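For instance, version handling and specifiers (two of the utilities listed above) look like this; a brief sketch using the public ``packaging.version`` and ``packaging.specifiers`` APIs:

```python
from packaging.version import Version
from packaging.specifiers import SpecifierSet

# PEP 440 ordering, including pre-releases
assert Version("1.0.0") < Version("1.0.1")
assert Version("2.0.0rc1") < Version("2.0.0")   # release candidate sorts first

# specifier matching as used by pip requirements
spec = SpecifierSet(">=1.0,<2.0")
assert spec.contains("1.5")
assert not spec.contains("2.1")
```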
69
+
70
+ Discussion
71
+ ----------
72
+
73
+ If you run into bugs, you can file them in our `issue tracker`_.
74
+
75
+ You can also join ``#pypa`` on Freenode to ask questions or get involved.
76
+
77
+
78
+ .. _`documentation`: https://packaging.pypa.io/
79
+ .. _`issue tracker`: https://github.com/pypa/packaging/issues
80
+
81
+
82
+ Code of Conduct
83
+ ---------------
84
+
85
+ Everyone interacting in the packaging project's codebases, issue trackers, chat
86
+ rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.
87
+
88
+ .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md
89
+
90
+ Contributing
91
+ ------------
92
+
93
+ The ``CONTRIBUTING.rst`` file outlines how to contribute to this project as
94
+ well as how to report a potential security issue. The documentation for this
95
+ project also covers information about `project development`_ and `security`_.
96
+
97
+ .. _`project development`: https://packaging.pypa.io/en/latest/development/
98
+ .. _`security`: https://packaging.pypa.io/en/latest/security/
99
+
100
+ Project History
101
+ ---------------
102
+
103
+ Please review the ``CHANGELOG.rst`` file or the `Changelog documentation`_ for
104
+ recent changes and project history.
105
+
106
+ .. _`Changelog documentation`: https://packaging.pypa.io/en/latest/changelog/
107
+
.cache/pip/http-v2/3/d/8/c/a/3d8cad774d82fa02cd6b57b61c7ae5f445e51121f6b8147871d094e2.body ADDED
Metadata-Version: 2.4
Name: sentencepiece
Version: 0.2.1
Summary: Unsupervised text tokenizer and detokenizer.
Author-email: Taku Kudo <taku@google.com>
Project-URL: Homepage, https://github.com/google/sentencepiece
Classifier: Programming Language :: Python :: 3
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Programming Language :: Python :: Free Threading :: 2 - Beta
Classifier: Topic :: Text Processing :: Linguistic
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Provides-Extra: testpaths
Requires-Dist: test; extra == "testpaths"

# SentencePiece Python Wrapper

Python wrapper for SentencePiece. This API offers encoding, decoding and training of SentencePiece models.

## Build and Install SentencePiece

For Linux (x64/i686), macOS, and Windows (win32/x64/arm64) environments, you can simply use the pip command to install the SentencePiece python module.

```
% pip install sentencepiece
```

Before building SentencePiece from source on Linux, ensure that the following dependencies are installed.

```
% sudo apt update
% sudo apt install -y cmake pkg-config libsentencepiece-dev
```

To build and install the Python wrapper from source, run the following commands to build and install the wheel package.

```
% git clone https://github.com/google/sentencepiece.git
% cd sentencepiece
% mkdir build
% cd build
% cmake .. -DSPM_ENABLE_SHARED=OFF -DCMAKE_INSTALL_PREFIX=./root -DSPM_DISABLE_EMBEDDED_DATA=ON
% make install
% cd ../python
% python setup.py bdist_wheel
% pip install dist/sentencepiece*.whl
```

If you don't have write permission to the global site-packages directory or don't want to install into it, please try:

```
% python setup.py install --user
```

For Windows users who want to build from source, you can build and install the Python wrapper using Visual Studio. First, you need to install `pwsh.exe` (PowerShell 7); use `winget install --id Microsoft.Powershell --source winget` to install it directly. Then open the `Developer PowerShell for VS 2022` and execute the following commands.

```
git clone https://github.com/google/sentencepiece.git
cd sentencepiece
mkdir build
cd build
cmake .. -DSPM_ENABLE_SHARED=OFF -DCMAKE_INSTALL_PREFIX=".\root" -DSPM_DISABLE_EMBEDDED_DATA=ON
cmake --build . --config Release --target install
cd ../python
pip install wheel
python setup.py bdist_wheel
Get-ChildItem .\dist\sentencepiece*.whl | ForEach-Object { pip install $_.FullName }
```

## Usage

See [this google colab page](https://github.com/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb) to run sentencepiece interactively.

### Segmentation

```
% python
>>> import sentencepiece as spm
>>> sp = spm.SentencePieceProcessor(model_file='test/test_model.model')

>>> sp.encode('This is a test')
[284, 47, 11, 4, 15, 400]

>>> sp.encode(['This is a test', 'Hello world'], out_type=int)
[[284, 47, 11, 4, 15, 400], [151, 88, 21, 887]]

>>> sp.encode_as_ids(['This is a test', 'Hello world'])
[[284, 47, 11, 4, 15, 400], [151, 88, 21, 887]]

>>> sp.encode('This is a test', out_type=str)
['▁This', '▁is', '▁a', '▁', 't', 'est']

>>> sp.encode(['This is a test', 'Hello world'], out_type=str)
[['▁This', '▁is', '▁a', '▁', 't', 'est'], ['▁He', 'll', 'o', '▁world']]

>>> sp.encode_as_pieces(['This is a test', 'Hello world'])
[['▁This', '▁is', '▁a', '▁', 't', 'est'], ['▁He', 'll', 'o', '▁world']]

>>> proto = sp.encode('This is a test', out_type='immutable_proto')
>>> for n in proto.pieces:
...     print('piece="{}" surface="{}" id={} begin={} end={}'.format(n.piece, n.surface, n.id, n.begin, n.end))
...
piece="▁This" surface="This" id=284 begin=0 end=4
piece="▁is" surface=" is" id=47 begin=4 end=7
piece="▁a" surface=" a" id=11 begin=7 end=9
piece="▁" surface=" " id=4 begin=9 end=10
piece="t" surface="t" id=15 begin=10 end=11
piece="est" surface="est" id=400 begin=11 end=14

>>> [[x.id for x in proto.pieces], [x.piece for x in proto.pieces], [x.begin for x in proto.pieces], [x.end for x in proto.pieces]]
[[284, 47, 11, 4, 15, 400], ['▁This', '▁is', '▁a', '▁', 't', 'est'], [0, 4, 7, 9, 10, 11], [4, 7, 9, 10, 11, 14]]

>>> proto2 = sp.encode_as_immutable_proto('This is a test')
>>> proto2 == proto
True

>>> for _ in range(10):
...     sp.encode('This is a test', out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1)
...
['▁', 'This', '▁', 'is', '▁a', '▁', 't', 'e', 'st']
['▁T', 'h', 'i', 's', '▁is', '▁a', '▁', 'te', 's', 't']
['▁T', 'h', 'is', '▁', 'is', '▁', 'a', '▁', 't', 'est']
['▁', 'This', '▁is', '▁', 'a', '▁', 't', 'e', 'st']
['▁', 'This', '▁', 'is', '▁', 'a', '▁', 't', 'e', 's', 't']
['▁This', '▁is', '▁a', '▁', 'te', 's', 't']
['▁This', '▁is', '▁', 'a', '▁', 't', 'e', 'st']
['▁', 'T', 'h', 'is', '▁', 'is', '▁', 'a', '▁', 'te', 'st']
['▁', 'This', '▁', 'i', 's', '▁a', '▁', 't', 'e', 'st']
['▁This', '▁', 'is', '▁a', '▁', 't', 'est']

>>> sp.nbest_encode('This is a test', nbest_size=5, out_type=str)
[['▁This', '▁is', '▁a', '▁', 't', 'est'],
 ['▁This', '▁is', '▁a', '▁', 'te', 'st'],
 ['▁This', '▁is', '▁a', '▁', 'te', 's', 't'],
 ['▁This', '▁is', '▁a', '▁', 't', 'e', 'st'],
 ['▁This', '▁is', '▁a', '▁', 't', 'es', 't']]

>>> sp.sample_encode_and_score('This is a test', num_samples=5, alpha=0.1, out_type=str, wor=True)
[(['▁This', '▁', 'i', 's', '▁a', '▁', 'te', 's', 't'], -3.043105125427246),
 (['▁This', '▁', 'i', 's', '▁a', '▁', 'te', 'st'], -2.8475849628448486),
 (['▁', 'This', '▁is', '▁', 'a', '▁', 'te', 'st'], -3.043248176574707),
 (['▁', 'This', '▁is', '▁a', '▁', 't', 'e', 'st'], -2.87727689743042),
 (['▁', 'This', '▁', 'i', 's', '▁', 'a', '▁', 't', 'est'], -3.6284031867980957)]

>>> sp.decode([284, 47, 11, 4, 15, 400])
'This is a test'

>>> sp.decode([[284, 47, 11, 4, 15, 400], [151, 88, 21, 887]])
['This is a test', 'Hello world']

>>> proto = sp.decode([284, 47, 11, 4, 15, 400], out_type='immutable_proto')
>>> proto.text
'This is a test'

>>> sp.decode(['▁', 'This', '▁', 'is', '▁a', '▁', 't', 'e', 'st'])
'This is a test'

>>> sp.decode([['▁This', '▁is', '▁a', '▁', 't', 'est'], ['▁He', 'll', 'o', '▁world']])
['This is a test', 'Hello world']

>>> sp.get_piece_size()
1000

>>> sp.id_to_piece(2)
'</s>'

>>> sp.id_to_piece([2, 3, 4])
['</s>', '\r', '▁']

>>> sp.piece_to_id('<s>')
1

>>> sp.piece_to_id(['</s>', '\r', '▁'])
[2, 3, 4]

>>> len(sp)
1000

>>> sp['</s>']
2
```

### Model Training

Training is performed by passing the parameters of [spm_train](https://github.com/google/sentencepiece#train-sentencepiece-model) to the SentencePieceTrainer.train() function.

```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='test/botchan.txt', model_prefix='m', vocab_size=1000, user_defined_symbols=['foo', 'bar'])
sentencepiece_trainer.cc(73) LOG(INFO) Starts training with :
trainer_spec {
  input: test/botchan.txt
  .. snip
unigram_model_trainer.cc(500) LOG(INFO) EM sub_iter=1 size=1188 obj=10.2839 num_tokens=32182 num_tokens/piece=27.0892
unigram_model_trainer.cc(500) LOG(INFO) EM sub_iter=0 size=1100 obj=10.4269 num_tokens=33001 num_tokens/piece=30.0009
unigram_model_trainer.cc(500) LOG(INFO) EM sub_iter=1 size=1100 obj=10.4069 num_tokens=33002 num_tokens/piece=30.0018
trainer_interface.cc(595) LOG(INFO) Saving model: m.model
trainer_interface.cc(619) LOG(INFO) Saving vocabs: m.vocab
>>>
```

### Training without local filesystem

The SentencePiece trainer can receive any iterable object to feed training sentences. You can also pass a file object (an instance with a write() method) to emit the output model to any device. These features are useful for running SentencePiece in environments that have limited access to the local file system (e.g., Google Colab).

```
import urllib.request
import io
import sentencepiece as spm

# Loads model from URL as iterator and stores the model to BytesIO.
model = io.BytesIO()
with urllib.request.urlopen(
    'https://raw.githubusercontent.com/google/sentencepiece/master/data/botchan.txt'
) as response:
  spm.SentencePieceTrainer.train(
      sentence_iterator=response, model_writer=model, vocab_size=1000)

# Serialize the model as file.
# with open('out.model', 'wb') as f:
#   f.write(model.getvalue())

# Directly load the model from serialized model.
sp = spm.SentencePieceProcessor(model_proto=model.getvalue())
print(sp.encode('this is test'))
```

### Free Threading support
Experimental support for no-GIL/free-threaded Python has been available since v0.2.1. For more details, please refer to [this page](https://py-free-threading.github.io/).
This operates similarly to how [NumPy](https://numpy.org/devdocs/reference/thread_safety.html#free-threaded-python) handles it.

The C++ library's const and static methods, e.g., encode(), decode() and train(), are designed to work in a non-GIL environment.
However, non-const methods, e.g., load(), may have potential data race issues, so please ensure you implement appropriate locks beforehand.

While this limitation might be removed in the future, please note that it's not a simple fix, as it would require additional shared locks in C++.
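The locking advice above can be sketched with a small wrapper: serialize the mutating call, leave the read-only path lock-free. Everything here (`GuardedProcessor`, `FakeProcessor`) is hypothetical illustration, not part of the sentencepiece API:

```python
import threading

class GuardedProcessor:
    """Wrap a processor so that mutating calls such as load() are
    serialized, while read-only encode() calls run without the lock."""
    def __init__(self, processor):
        self._p = processor
        self._lock = threading.Lock()

    def load(self, model):
        with self._lock:             # serialize the non-thread-safe mutation
            self._p.load(model)

    def encode(self, text):
        return self._p.encode(text)  # const-style method: safe to call concurrently

class FakeProcessor:                 # stand-in for a real SentencePieceProcessor
    def load(self, model):
        self.model = model
    def encode(self, text):
        return text.split()

gp = GuardedProcessor(FakeProcessor())
gp.load("m.model")
assert gp.encode("this is a test") == ["this", "is", "a", "test"]
```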
.cache/pip/http-v2/3/d/a/4/6/3da464de30c7468304779380c74eab34e7778fd6716e1e47bbe62e37.body ADDED
Metadata-Version: 2.4
Name: peft
Version: 0.18.1
Summary: Parameter-Efficient Fine-Tuning (PEFT)
Home-page: https://github.com/huggingface/peft
Author: The HuggingFace team
Author-email: benjamin@huggingface.co
License: Apache
Keywords: deep learning
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10.0
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.17
Requires-Dist: packaging>=20.0
Requires-Dist: psutil
Requires-Dist: pyyaml
Requires-Dist: torch>=1.13.0
Requires-Dist: transformers
Requires-Dist: tqdm
Requires-Dist: accelerate>=0.21.0
Requires-Dist: safetensors
Requires-Dist: huggingface_hub>=0.25.0
Provides-Extra: quality
Requires-Dist: black; extra == "quality"
Requires-Dist: hf-doc-builder; extra == "quality"
Requires-Dist: ruff~=0.12.8; extra == "quality"
Provides-Extra: docs-specific
Requires-Dist: black; extra == "docs-specific"
Requires-Dist: hf-doc-builder; extra == "docs-specific"
Provides-Extra: dev
Requires-Dist: black; extra == "dev"
Requires-Dist: hf-doc-builder; extra == "dev"
Requires-Dist: ruff~=0.12.8; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: hf-doc-builder; extra == "dev"
Provides-Extra: test
Requires-Dist: black; extra == "test"
Requires-Dist: hf-doc-builder; extra == "test"
Requires-Dist: ruff~=0.12.8; extra == "test"
Requires-Dist: black; extra == "test"
Requires-Dist: hf-doc-builder; extra == "test"
Requires-Dist: pytest; extra == "test"
Requires-Dist: pytest-cov; extra == "test"
Requires-Dist: pytest-xdist; extra == "test"
Requires-Dist: parameterized; extra == "test"
Requires-Dist: datasets; extra == "test"
Requires-Dist: diffusers; extra == "test"
Requires-Dist: scipy; extra == "test"
Requires-Dist: protobuf; extra == "test"
Requires-Dist: sentencepiece; extra == "test"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary
+ Dynamic: summary
76
+
77
+ <!---
78
+ Copyright 2023 The HuggingFace Team. All rights reserved.
79
+
80
+ Licensed under the Apache License, Version 2.0 (the "License");
81
+ you may not use this file except in compliance with the License.
82
+ You may obtain a copy of the License at
83
+
84
+ http://www.apache.org/licenses/LICENSE-2.0
85
+
86
+ Unless required by applicable law or agreed to in writing, software
87
+ distributed under the License is distributed on an "AS IS" BASIS,
88
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
89
+ See the License for the specific language governing permissions and
90
+ limitations under the License.
91
+ -->
92
+
93
+ <h1 align="center"> <p>🤗 PEFT</p></h1>
94
+ <h3 align="center">
95
+ <p>State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) methods</p>
96
+ </h3>
97
+
98
Fine-tuning large pretrained models is often prohibitively costly due to their scale. Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of large pretrained models to various downstream applications by only fine-tuning a small number of (extra) model parameters instead of all the model's parameters. This significantly decreases the computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to fully fine-tuned models.

PEFT is integrated with Transformers for easy model training and inference, Diffusers for conveniently managing different adapters, and Accelerate for distributed training and inference for really big models.

> [!TIP]
> Visit the [PEFT](https://huggingface.co/PEFT) organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks. Click the "Watch repos" button on the organization page to be notified of newly implemented methods and notebooks!

Check the PEFT Adapters API Reference section for a list of supported PEFT methods, and read the [Adapters](https://huggingface.co/docs/peft/en/conceptual_guides/adapter), [Soft prompts](https://huggingface.co/docs/peft/en/conceptual_guides/prompting), and [IA3](https://huggingface.co/docs/peft/en/conceptual_guides/ia3) conceptual guides to learn more about how these methods work.

## Quickstart

Install PEFT from pip:

```bash
pip install peft
```
Prepare a model for training with a PEFT method such as LoRA by wrapping the base model and PEFT configuration with `get_peft_model`. For the Qwen/Qwen2.5-3B-Instruct model below, you're only training about 0.12% of the parameters!

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_id = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type=TaskType.CAUSAL_LM,
    # target_modules=["q_proj", "v_proj", ...]  # optionally indicate target modules
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# prints: trainable params: 3,686,400 || all params: 3,089,625,088 || trainable%: 0.1193

# now perform training on your dataset, e.g. using transformers Trainer, then save the model
model.save_pretrained("qwen2.5-3b-lora")
```
To load a PEFT model for inference:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_id = "Qwen/Qwen2.5-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device)
model = PeftModel.from_pretrained(model, "qwen2.5-3b-lora")

inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")
outputs = model.generate(**inputs.to(device), max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# prints something like: Preheat the oven to 350 degrees and place the cookie dough in a baking dish [...]
```

## Why you should use PEFT

There are many benefits of using PEFT, but the main one is the huge savings in compute and storage, making PEFT applicable to many different use cases.

### High performance on consumer hardware

Consider the memory requirements for training the following models on the [ought/raft/twitter_complaints](https://huggingface.co/datasets/ought/raft/viewer/twitter_complaints) dataset with an A100 80GB GPU and more than 64GB of CPU RAM.

| Model | Full Finetuning | PEFT-LoRA PyTorch | PEFT-LoRA DeepSpeed with CPU Offloading |
| --------- | ---- | ---- | ---- |
| bigscience/T0_3B (3B params) | 47.14GB GPU / 2.96GB CPU | 14.4GB GPU / 2.96GB CPU | 9.8GB GPU / 17.8GB CPU |
| bigscience/mt0-xxl (12B params) | OOM GPU | 56GB GPU / 3GB CPU | 22GB GPU / 52GB CPU |
| bigscience/bloomz-7b1 (7B params) | OOM GPU | 32GB GPU / 3.8GB CPU | 18.1GB GPU / 35GB CPU |
With LoRA you can finetune a 12B parameter model that would've otherwise run out of memory on the 80GB GPU, and comfortably fit and train a 3B parameter model. The 3B parameter model's performance is comparable to a fully finetuned model at a fraction of the GPU memory.
| Submission Name | Accuracy |
| --------- | ---- |
| Human baseline (crowdsourced) | 0.897 |
| Flan-T5 | 0.892 |
| lora-t0-3b | 0.863 |

> [!TIP]
> The bigscience/T0_3B model performance isn't optimized in the table above. You can squeeze even more performance out of it by playing around with the input instruction templates, LoRA hyperparameters, and other training-related hyperparameters. The final checkpoint size of this model is just 19MB compared to 11GB of the full bigscience/T0_3B model. Learn more about the advantages of finetuning with PEFT in this [blog post](https://www.philschmid.de/fine-tune-flan-t5-peft).

### Quantization

Quantization is another method for reducing the memory requirements of a model by representing the data in a lower precision. It can be combined with PEFT methods to make it even easier to train and load LLMs for inference.

* Learn how to finetune [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) with QLoRA and the [TRL](https://huggingface.co/docs/trl/index) library on a 16GB GPU in the [Finetune LLMs on your own consumer hardware using tools from PyTorch and Hugging Face ecosystem](https://pytorch.org/blog/finetune-llms/) blog post.
* Learn how to finetune an [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) model for multilingual automatic speech recognition with LoRA and 8-bit quantization in this [notebook](https://colab.research.google.com/drive/1DOkD_5OUjFa0r5Ik3SgywJLJtEo2qLxO?usp=sharing) (see this [notebook](https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) instead for an example of streaming a dataset).
### Save compute and storage
PEFT can help you save storage by avoiding full finetuning of models on each downstream task or dataset. In many cases, you're only finetuning a very small fraction of a model's parameters and each checkpoint is only a few MBs in size (instead of GBs). These smaller PEFT adapters demonstrate performance comparable to a fully finetuned model. If you have many datasets, you can save a lot of storage with a PEFT model and not have to worry about catastrophic forgetting or overfitting the backbone or base model.
## PEFT integrations

PEFT is widely supported across the Hugging Face ecosystem because of the massive efficiency it brings to training and inference.

### Diffusers

The iterative diffusion process consumes a lot of memory, which can make it difficult to train. PEFT can help reduce the memory requirements and reduce the storage size of the final model checkpoint. For example, consider the memory required for training a Stable Diffusion model with LoRA on an A100 80GB GPU with more than 64GB of CPU RAM. The final model checkpoint size is only 8.8MB!

| Model | Full Finetuning | PEFT-LoRA | PEFT-LoRA with Gradient Checkpointing |
| --------- | ---- | ---- | ---- |
| CompVis/stable-diffusion-v1-4 | 27.5GB GPU / 3.97GB CPU | 15.5GB GPU / 3.84GB CPU | 8.12GB GPU / 3.77GB CPU |

> [!TIP]
> Take a look at the [examples/lora_dreambooth/train_dreambooth.py](examples/lora_dreambooth/train_dreambooth.py) training script to try training your own Stable Diffusion model with LoRA, and play around with the [smangrul/peft-lora-sd-dreambooth](https://huggingface.co/spaces/smangrul/peft-lora-sd-dreambooth) Space which is running on a T4 instance. Learn more about the PEFT integration in Diffusers in this [tutorial](https://huggingface.co/docs/peft/main/en/tutorial/peft_integrations#diffusers).

### Transformers

PEFT is directly integrated with [Transformers](https://huggingface.co/docs/transformers/main/en/peft). After loading a model, call `add_adapter` to add a new PEFT adapter to the model:
```python
from peft import LoraConfig

model = ...  # transformers model
peft_config = LoraConfig(...)
model.add_adapter(peft_config, adapter_name="lora_1")
```
To load a trained PEFT adapter, call `load_adapter`:

```python
model = ...  # transformers model
model.load_adapter(<path-to-adapter>, adapter_name="lora_1")
```

And to switch between different adapters, call `set_adapter`:

```python
model.set_adapter("lora_2")
```

The Transformers integration doesn't include all the functionalities offered in PEFT, such as methods for merging the adapter into the base model.
### Accelerate

[Accelerate](https://huggingface.co/docs/accelerate/index) is a library for distributed training and inference on various training setups and hardware (GPUs, TPUs, Apple Silicon, etc.). PEFT models work with Accelerate out of the box, making it convenient to train really large models or use them for inference on consumer hardware with limited resources.

### TRL

PEFT can also be applied to training LLMs with RLHF components such as the ranker and policy. Get started by reading:

* [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) with PEFT and the [TRL](https://huggingface.co/docs/trl/index) library to learn more about the Direct Preference Optimization (DPO) method and how to apply it to an LLM.
* [Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU](https://huggingface.co/blog/trl-peft) with PEFT and the [TRL](https://huggingface.co/docs/trl/index) library, and then try out the [gpt2-sentiment_peft.ipynb](https://github.com/huggingface/trl/blob/main/examples/notebooks/gpt2-sentiment.ipynb) notebook to optimize GPT2 to generate positive movie reviews.
* [StackLLaMA: A hands-on guide to train LLaMA with RLHF](https://huggingface.co/blog/stackllama) with PEFT, and then try out the [stack_llama/scripts](https://github.com/huggingface/trl/tree/main/examples/research_projects/stack_llama/scripts) for supervised finetuning, reward modeling, and RL finetuning.

## Model support

Use this [Space](https://stevhliu-peft-methods.hf.space) or check out the [docs](https://huggingface.co/docs/peft/main/en/index) to find which models officially support a PEFT method out of the box. Even if you don't see a model listed there, you can manually configure the model config to enable PEFT for a model. Read the [New transformers architecture](https://huggingface.co/docs/peft/main/en/developer_guides/custom_models#new-transformers-architectures) guide to learn how.

## Contribute

If you would like to contribute to PEFT, please check out our [contribution guide](https://huggingface.co/docs/peft/developer_guides/contributing).

## Citing 🤗 PEFT

To use 🤗 PEFT in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{peft,
  title =        {{PEFT}: State-of-the-art Parameter-Efficient Fine-Tuning methods},
  author =       {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/peft}},
  year =         {2022}
}
```