Column types (from the dataset viewer summary): `content` string (1–103k chars, nullable), `path` string (8–216 chars), `filename` string (2–179 chars), `language` string (15 classes), `size_bytes` int64 (2–189k), `quality_score` float64 (0.5–0.95), `complexity` float64 (0–1), `documentation_ratio` float64 (0–1), `repository` string (5 classes), `stars` int64 (0–1k), `created_date` date (2023-07-10 19:21:08 to 2025-07-09 19:11:45), `license` string (4 classes), `is_test` bool (2 classes), `file_hash` string (32 chars).

| content | path | filename | language | size_bytes | quality_score | complexity | documentation_ratio | repository | stars | created_date | license | is_test | file_hash |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\n\n | .venv\Lib\site-packages\pandas\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 8,333 | 0.95 | 0.053191 | 0.011111 | vue-tools | 559 | 2024-08-06T14:19:38.974945 | Apache-2.0 | false | 66c570c7e5ac7a1c5643ce887f79907a |
Version: 1.10.1\nArguments: ['C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-hufamb51\\cp313-win_amd64\\build\\venv\\Scripts\\delvewheel', 'repair', '-w', 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-hufamb51\\cp313-win_amd64\\repaired_wheel', 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\cibw-run-hufamb51\\cp313-win_amd64\\built_wheel\\pandas-2.3.0-cp313-cp313-win_amd64.whl']\n | .venv\Lib\site-packages\pandas-2.3.0.dist-info\DELVEWHEEL | DELVEWHEEL | Other | 399 | 0.7 | 0 | 0 | python-kit | 516 | 2024-07-21T19:20:09.826311 | Apache-2.0 | false | 530fad543bffb64119aba8034a346145 |
[pandas_plotting_backends]\nmatplotlib = pandas:plotting._matplotlib\n\n | .venv\Lib\site-packages\pandas-2.3.0.dist-info\entry_points.txt | entry_points.txt | Other | 69 | 0.5 | 0 | 0 | node-utils | 650 | 2023-11-28T21:24:01.322905 | Apache-2.0 | false | e8418c0f0046a96d85fe5baf46f25a8c |
pip\n | .venv\Lib\site-packages\pandas-2.3.0.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | node-utils | 919 | 2023-08-14T01:21:31.976439 | Apache-2.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
BSD 3-Clause License\n\nCopyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team\nAll rights reserved.\n\nCopyright (c) 2011-2023, Open source contributors.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n | .venv\Lib\site-packages\pandas-2.3.0.dist-info\LICENSE | LICENSE | Other | 1,634 | 0.7 | 0 | 0.125 | node-utils | 564 | 2024-04-02T00:12:27.769927 | BSD-3-Clause | false | cb819092901ddb13a7d0a4f5e05f098a |
Metadata-Version: 2.1\nName: pandas\nVersion: 2.3.0\nSummary: Powerful data structures for data analysis, time series, and statistics\nAuthor-Email: The Pandas Development Team <pandas-dev@python.org>\nLicense: BSD 3-Clause License\n \n Copyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team\n All rights reserved.\n \n Copyright (c) 2011-2023, Open source contributors.\n \n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n \n * Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n \n * Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n \n * Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n \n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n \nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Environment :: Console\nClassifier: Intended Audience :: Science/Research\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Cython\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Topic :: Scientific/Engineering\nProject-URL: homepage, https://pandas.pydata.org\nProject-URL: documentation, https://pandas.pydata.org/docs/\nProject-URL: repository, https://github.com/pandas-dev/pandas\nRequires-Python: >=3.9\nRequires-Dist: numpy>=1.22.4; python_version < "3.11"\nRequires-Dist: numpy>=1.23.2; python_version == "3.11"\nRequires-Dist: numpy>=1.26.0; python_version >= "3.12"\nRequires-Dist: python-dateutil>=2.8.2\nRequires-Dist: pytz>=2020.1\nRequires-Dist: tzdata>=2022.7\nProvides-Extra: test\nRequires-Dist: hypothesis>=6.46.1; extra == "test"\nRequires-Dist: pytest>=7.3.2; extra == "test"\nRequires-Dist: pytest-xdist>=2.2.0; extra == "test"\nProvides-Extra: pyarrow\nRequires-Dist: pyarrow>=10.0.1; extra == 
"pyarrow"\nProvides-Extra: performance\nRequires-Dist: bottleneck>=1.3.6; extra == "performance"\nRequires-Dist: numba>=0.56.4; extra == "performance"\nRequires-Dist: numexpr>=2.8.4; extra == "performance"\nProvides-Extra: computation\nRequires-Dist: scipy>=1.10.0; extra == "computation"\nRequires-Dist: xarray>=2022.12.0; extra == "computation"\nProvides-Extra: fss\nRequires-Dist: fsspec>=2022.11.0; extra == "fss"\nProvides-Extra: aws\nRequires-Dist: s3fs>=2022.11.0; extra == "aws"\nProvides-Extra: gcp\nRequires-Dist: gcsfs>=2022.11.0; extra == "gcp"\nRequires-Dist: pandas-gbq>=0.19.0; extra == "gcp"\nProvides-Extra: excel\nRequires-Dist: odfpy>=1.4.1; extra == "excel"\nRequires-Dist: openpyxl>=3.1.0; extra == "excel"\nRequires-Dist: python-calamine>=0.1.7; extra == "excel"\nRequires-Dist: pyxlsb>=1.0.10; extra == "excel"\nRequires-Dist: xlrd>=2.0.1; extra == "excel"\nRequires-Dist: xlsxwriter>=3.0.5; extra == "excel"\nProvides-Extra: parquet\nRequires-Dist: pyarrow>=10.0.1; extra == "parquet"\nProvides-Extra: feather\nRequires-Dist: pyarrow>=10.0.1; extra == "feather"\nProvides-Extra: hdf5\nRequires-Dist: tables>=3.8.0; extra == "hdf5"\nProvides-Extra: spss\nRequires-Dist: pyreadstat>=1.2.0; extra == "spss"\nProvides-Extra: postgresql\nRequires-Dist: SQLAlchemy>=2.0.0; extra == "postgresql"\nRequires-Dist: psycopg2>=2.9.6; extra == "postgresql"\nRequires-Dist: adbc-driver-postgresql>=0.8.0; extra == "postgresql"\nProvides-Extra: mysql\nRequires-Dist: SQLAlchemy>=2.0.0; extra == "mysql"\nRequires-Dist: pymysql>=1.0.2; extra == "mysql"\nProvides-Extra: sql-other\nRequires-Dist: SQLAlchemy>=2.0.0; extra == "sql-other"\nRequires-Dist: adbc-driver-postgresql>=0.8.0; extra == "sql-other"\nRequires-Dist: adbc-driver-sqlite>=0.8.0; extra == "sql-other"\nProvides-Extra: html\nRequires-Dist: beautifulsoup4>=4.11.2; extra == "html"\nRequires-Dist: html5lib>=1.1; extra == "html"\nRequires-Dist: lxml>=4.9.2; extra == "html"\nProvides-Extra: xml\nRequires-Dist: lxml>=4.9.2; 
extra == "xml"\nProvides-Extra: plot\nRequires-Dist: matplotlib>=3.6.3; extra == "plot"\nProvides-Extra: output-formatting\nRequires-Dist: jinja2>=3.1.2; extra == "output-formatting"\nRequires-Dist: tabulate>=0.9.0; extra == "output-formatting"\nProvides-Extra: clipboard\nRequires-Dist: PyQt5>=5.15.9; extra == "clipboard"\nRequires-Dist: qtpy>=2.3.0; extra == "clipboard"\nProvides-Extra: compression\nRequires-Dist: zstandard>=0.19.0; extra == "compression"\nProvides-Extra: consortium-standard\nRequires-Dist: dataframe-api-compat>=0.1.7; extra == "consortium-standard"\nProvides-Extra: all\nRequires-Dist: adbc-driver-postgresql>=0.8.0; extra == "all"\nRequires-Dist: adbc-driver-sqlite>=0.8.0; extra == "all"\nRequires-Dist: beautifulsoup4>=4.11.2; extra == "all"\nRequires-Dist: bottleneck>=1.3.6; extra == "all"\nRequires-Dist: dataframe-api-compat>=0.1.7; extra == "all"\nRequires-Dist: fastparquet>=2022.12.0; extra == "all"\nRequires-Dist: fsspec>=2022.11.0; extra == "all"\nRequires-Dist: gcsfs>=2022.11.0; extra == "all"\nRequires-Dist: html5lib>=1.1; extra == "all"\nRequires-Dist: hypothesis>=6.46.1; extra == "all"\nRequires-Dist: jinja2>=3.1.2; extra == "all"\nRequires-Dist: lxml>=4.9.2; extra == "all"\nRequires-Dist: matplotlib>=3.6.3; extra == "all"\nRequires-Dist: numba>=0.56.4; extra == "all"\nRequires-Dist: numexpr>=2.8.4; extra == "all"\nRequires-Dist: odfpy>=1.4.1; extra == "all"\nRequires-Dist: openpyxl>=3.1.0; extra == "all"\nRequires-Dist: pandas-gbq>=0.19.0; extra == "all"\nRequires-Dist: psycopg2>=2.9.6; extra == "all"\nRequires-Dist: pyarrow>=10.0.1; extra == "all"\nRequires-Dist: pymysql>=1.0.2; extra == "all"\nRequires-Dist: PyQt5>=5.15.9; extra == "all"\nRequires-Dist: pyreadstat>=1.2.0; extra == "all"\nRequires-Dist: pytest>=7.3.2; extra == "all"\nRequires-Dist: pytest-xdist>=2.2.0; extra == "all"\nRequires-Dist: python-calamine>=0.1.7; extra == "all"\nRequires-Dist: pyxlsb>=1.0.10; extra == "all"\nRequires-Dist: qtpy>=2.3.0; extra == 
"all"\nRequires-Dist: scipy>=1.10.0; extra == "all"\nRequires-Dist: s3fs>=2022.11.0; extra == "all"\nRequires-Dist: SQLAlchemy>=2.0.0; extra == "all"\nRequires-Dist: tables>=3.8.0; extra == "all"\nRequires-Dist: tabulate>=0.9.0; extra == "all"\nRequires-Dist: xarray>=2022.12.0; extra == "all"\nRequires-Dist: xlrd>=2.0.1; extra == "all"\nRequires-Dist: xlsxwriter>=3.0.5; extra == "all"\nRequires-Dist: zstandard>=0.19.0; extra == "all"\nDescription-Content-Type: text/markdown\n\n<div align="center">\n <img src="https://pandas.pydata.org/static/img/pandas.svg"><br>\n</div>\n\n-----------------\n\n# pandas: powerful Python data analysis toolkit\n\n| | |\n| --- | --- |\n| Testing | [](https://github.com/pandas-dev/pandas/actions/workflows/unit-tests.yml) [](https://codecov.io/gh/pandas-dev/pandas) |\n| Package | [](https://pypi.org/project/pandas/) [](https://pypi.org/project/pandas/) [](https://anaconda.org/conda-forge/pandas) [](https://anaconda.org/conda-forge/pandas) |\n| Meta | [](https://numfocus.org) [](https://doi.org/10.5281/zenodo.3509134) [](https://github.com/pandas-dev/pandas/blob/main/LICENSE) [](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack) |\n\n\n## What is it?\n\n**pandas** is a Python package that provides fast, flexible, and expressive data\nstructures designed to make working with "relational" or "labeled" data both\neasy and intuitive. It aims to be the fundamental high-level building block for\ndoing practical, **real world** data analysis in Python. Additionally, it has\nthe broader goal of becoming **the most powerful and flexible open source data\nanalysis / manipulation tool available in any language**. 
It is already well on\nits way towards this goal.\n\n## Table of Contents\n\n- [Main Features](#main-features)\n- [Where to get it](#where-to-get-it)\n- [Dependencies](#dependencies)\n- [Installation from sources](#installation-from-sources)\n- [License](#license)\n- [Documentation](#documentation)\n- [Background](#background)\n- [Getting Help](#getting-help)\n- [Discussion and Development](#discussion-and-development)\n- [Contributing to pandas](#contributing-to-pandas)\n\n## Main Features\nHere are just a few of the things that pandas does well:\n\n - Easy handling of [**missing data**][missing-data] (represented as\n `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data\n - Size mutability: columns can be [**inserted and\n deleted**][insertion-deletion] from DataFrame and higher dimensional\n objects\n - Automatic and explicit [**data alignment**][alignment]: objects can\n be explicitly aligned to a set of labels, or the user can simply\n ignore the labels and let `Series`, `DataFrame`, etc. 
automatically\n align the data for you in computations\n - Powerful, flexible [**group by**][groupby] functionality to perform\n split-apply-combine operations on data sets, for both aggregating\n and transforming data\n - Make it [**easy to convert**][conversion] ragged,\n differently-indexed data in other Python and NumPy data structures\n into DataFrame objects\n - Intelligent label-based [**slicing**][slicing], [**fancy\n indexing**][fancy-indexing], and [**subsetting**][subsetting] of\n large data sets\n - Intuitive [**merging**][merging] and [**joining**][joining] data\n sets\n - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of\n data sets\n - [**Hierarchical**][mi] labeling of axes (possible to have multiple\n labels per tick)\n - Robust IO tools for loading data from [**flat files**][flat-files]\n (CSV and delimited), [**Excel files**][excel], [**databases**][db],\n and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]\n - [**Time series**][timeseries]-specific functionality: date range\n generation and frequency conversion, moving window statistics,\n date shifting and lagging\n\n\n [missing-data]: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html\n [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#column-selection-addition-deletion\n [alignment]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html?highlight=alignment#intro-to-data-structures\n [groupby]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine\n [conversion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe\n [slicing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-ranges\n [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced\n [subsetting]: 
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing\n [merging]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging\n [joining]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#joining-on-index\n [reshape]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html\n [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html\n [mi]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#hierarchical-indexing-multiindex\n [flat-files]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files\n [excel]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#excel-files\n [db]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#sql-queries\n [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables\n [timeseries]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-series-date-functionality\n\n## Where to get it\nThe source code is currently hosted on GitHub at:\nhttps://github.com/pandas-dev/pandas\n\nBinary installers for the latest released version are available at the [Python\nPackage Index (PyPI)](https://pypi.org/project/pandas) and on [Conda](https://docs.conda.io/en/latest/).\n\n```sh\n# conda\nconda install -c conda-forge pandas\n```\n\n```sh\n# or PyPI\npip install pandas\n```\n\nThe list of changes to pandas between each release can be found\n[here](https://pandas.pydata.org/pandas-docs/stable/whatsnew/index.html). 
For full\ndetails, see the commit logs at https://github.com/pandas-dev/pandas.\n\n## Dependencies\n- [NumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays](https://www.numpy.org)\n- [python-dateutil - Provides powerful extensions to the standard datetime module](https://dateutil.readthedocs.io/en/stable/index.html)\n- [pytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations](https://github.com/stub42/pytz)\n\nSee the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.\n\n## Installation from sources\nTo install pandas from source you need [Cython](https://cython.org/) in addition to the normal\ndependencies above. Cython can be installed from PyPI:\n\n```sh\npip install cython\n```\n\nIn the `pandas` directory (same one where you found this file after\ncloning the git repo), execute:\n\n```sh\npip install .\n```\n\nor for installing in [development mode](https://pip.pypa.io/en/latest/cli/pip_install/#install-editable):\n\n\n```sh\npython -m pip install -ve . 
--no-build-isolation --config-settings=editable-verbose=true\n```\n\nSee the full instructions for [installing from source](https://pandas.pydata.org/docs/dev/development/contributing_environment.html).\n\n## License\n[BSD 3](LICENSE)\n\n## Documentation\nThe official documentation is hosted on [PyData.org](https://pandas.pydata.org/pandas-docs/stable/).\n\n## Background\nWork on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and\nhas been under active development since then.\n\n## Getting Help\n\nFor usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).\nFurther, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).\n\n## Discussion and Development\nMost development discussions take place on GitHub in this repo, via the [GitHub issue tracker](https://github.com/pandas-dev/pandas/issues).\n\nFurther, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Slack channel](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack) is available for quick development related questions.\n\nThere are also frequent [community meetings](https://pandas.pydata.org/docs/dev/development/community.html#community-meeting) for project maintainers open to the community as well as monthly [new contributor meetings](https://pandas.pydata.org/docs/dev/development/community.html#new-contributor-meeting) to help support new contributors.\n\nAdditional information on the communication channels can be found on the [contributor community](https://pandas.pydata.org/docs/development/community.html) page.\n\n## Contributing to pandas\n\n[](https://www.codetriage.com/pandas-dev/pandas)\n\nAll contributions, bug reports, bug fixes, documentation improvements, enhancements, and 
ideas are welcome.\n\nA detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**.\n\nIf you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.\n\nYou can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).\n\nOr maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!\n\nFeel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Slack](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack).\n\nAs contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/.github/blob/master/CODE_OF_CONDUCT.md)\n\n<hr>\n\n[Go to Top](#table-of-contents)\n | .venv\Lib\site-packages\pandas-2.3.0.dist-info\METADATA | METADATA | Other | 19,437 | 0.95 | 0.042254 | 0.063545 | awesome-app | 428 | 2024-06-15T09:02:55.870040 | MIT | false | cd2d8cc36185a6d609e202ca3825352c |
Wheel-Version: 1.0\nGenerator: meson\nRoot-Is-Purelib: false\nTag: cp313-cp313-win_amd64 | .venv\Lib\site-packages\pandas-2.3.0.dist-info\WHEEL | WHEEL | Other | 85 | 0.5 | 0 | 0 | python-kit | 254 | 2024-07-29T04:02:23.121792 | GPL-3.0 | false | 51337c97620c3b1e0d781ad8efe86cea |
pip\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 113 | 2025-01-25T03:33:37.487812 | BSD-3-Clause | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Copyright (c) 2013, John MacFarlane\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n - Redistributions of source code must retain the above copyright notice,\n this list of conditions and the following disclaimer.\n\n - Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n - Neither the name of John Macfarlane nor the names of its contributors may\n be used to endorse or promote products derived from this software without\n specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\LICENSE | LICENSE | Other | 1,496 | 0.7 | 0 | 0 | awesome-app | 650 | 2023-09-01T20:53:34.085890 | GPL-3.0 | false | 28b59ac864caa776edcbdb77a8a57267 |
Metadata-Version: 2.1\nName: pandocfilters\nVersion: 1.5.1\nSummary: Utilities for writing pandoc filters in python\nHome-page: http://github.com/jgm/pandocfilters\nAuthor: John MacFarlane\nAuthor-email: fiddlosopher@gmail.com\nLicense: BSD-3-Clause\nKeywords: pandoc\nClassifier: Development Status :: 3 - Alpha\nClassifier: Environment :: Console\nClassifier: Intended Audience :: End Users/Desktop\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Topic :: Text Processing :: Filters\nClassifier: Programming Language :: Python :: 2\nClassifier: Programming Language :: Python :: 2.7\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.4\nClassifier: Programming Language :: Python :: 3.5\nClassifier: Programming Language :: Python :: 3.6\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nRequires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*\nLicense-File: LICENSE\n\npandocfilters\n=============\n\nA python module for writing `pandoc <http://pandoc.org/>`_ filters\n\nWhat are pandoc filters?\n--------------------------\nPandoc filters\nare pipes that read a JSON serialization of the Pandoc AST\nfrom stdin, transform it in some way, and write it to stdout.\nThey can be used with pandoc (>= 1.12) either using pipes ::\n\n pandoc -t json -s | ./caps.py | pandoc -f json\n\nor using the ``--filter`` (or ``-F``) command-line option. 
::\n\n pandoc --filter ./caps.py -s\n\nFor more on pandoc filters, see the pandoc documentation under ``--filter``\nand `the tutorial on writing filters`__.\n\n__ http://johnmacfarlane.net/pandoc/scripting.html\n\nFor an alternative library for writing pandoc filters, with\na more "Pythonic" design, see `panflute`__.\n\n__ https://github.com/sergiocorreia/panflute\n\nCompatibility\n----------------\nPandoc 1.16 introduced link and image `attributes` to the existing\n`caption` and `target` arguments, requiring a change in pandocfilters\nthat breaks backwards compatibility. Consequently, you should use:\n\n- pandocfilters version <= 1.2.4 for pandoc versions 1.12--1.15, and\n- pandocfilters version >= 1.3.0 for pandoc versions >= 1.16.\n\nPandoc 1.17.3 (pandoc-types 1.17.*) introduced a new JSON format.\npandocfilters 1.4.0 should work with both the old and the new\nformat.\n\nInstalling\n--------------\nRun this inside the present directory::\n\n python setup.py install\n\nOr install from PyPI::\n\n pip install pandocfilters\n\nAvailable functions\n----------------------\nThe main functions ``pandocfilters`` exports are\n\n- ``walk(x, action, format, meta)``\n\n Walk a tree, applying an action to every object. Returns a modified\n tree. An action is a function of the form\n ``action(key, value, format, meta)``, where:\n\n - ``key`` is the type of the pandoc object (e.g. 'Str', 'Para')\n - ``value`` is the contents of the object (e.g. 
a string for 'Str', a list of\n inline elements for 'Para')\n - ``format`` is the target output format (as supplied by the\n ``format`` argument of ``walk``)\n - ``meta`` is the document's metadata\n\n The return of an action is either:\n\n - ``None``: this means that the object should remain unchanged\n - a pandoc object: this will replace the original object\n - a list of pandoc objects: these will replace the original object;\n the list is merged with the neighbors of the original objects\n (spliced into the list the original object belongs to); returning\n an empty list deletes the object\n\n- ``toJSONFilter(action)``\n\n Like ``toJSONFilters``, but takes a single action as argument.\n\n- ``toJSONFilters(actions)``\n\n Generate a JSON-to-JSON filter from stdin to stdout\n\n The filter:\n\n - reads a JSON-formatted pandoc document from stdin\n - transforms it by walking the tree and performing the actions\n - returns a new JSON-formatted pandoc document to stdout\n\n The argument ``actions`` is a list of functions of the form\n ``action(key, value, format, meta)``, as described in more detail\n under ``walk``.\n\n This function calls ``applyJSONFilters``, with the ``format``\n argument provided by the first command-line argument, if present.\n (Pandoc sets this by default when calling filters.)\n\n- ``applyJSONFilters(actions, source, format="")``\n\n Walk through JSON structure and apply filters\n\n This:\n\n - reads a JSON-formatted pandoc document from a source string\n - transforms it by walking the tree and performing the actions\n - returns a new JSON-formatted pandoc document as a string\n\n The ``actions`` argument is a list of functions (see ``walk`` for a\n full description).\n\n The argument ``source`` is a string encoded JSON object.\n\n The argument ``format`` is a string describing the output format.\n\n Returns a new JSON-formatted pandoc document.\n\n- ``stringify(x)``\n\n Walks the tree x and returns concatenated string content, leaving out\n 
all formatting.\n\n- ``attributes(attrs)``\n\n Returns an attribute list, constructed from the dictionary attrs.\n\nHow to use\n----------\nMost users will only need ``toJSONFilter``. Here is a simple example\nof its use::\n\n #!/usr/bin/env python\n\n """\n Pandoc filter to convert all regular text to uppercase.\n Code, link URLs, etc. are not affected.\n """\n\n from pandocfilters import toJSONFilter, Str\n\n def caps(key, value, format, meta):\n if key == 'Str':\n return Str(value.upper())\n\n if __name__ == "__main__":\n toJSONFilter(caps)\n\nExamples\n--------\n\nThe examples subdirectory in the source repository contains the\nfollowing filters. These filters should provide a useful starting point\nfor developing your own pandocfilters.\n\n``abc.py``\n Pandoc filter to process code blocks with class ``abc`` containing ABC\n notation into images. Assumes that abcm2ps and ImageMagick's convert\n are in the path. Images are put in the abc-images directory.\n\n``caps.py``\n Pandoc filter to convert all regular text to uppercase. Code, link\n URLs, etc. are not affected.\n\n``blockdiag.py``\n Pandoc filter to process code blocks with class "blockdiag" into\n generated images. 
Needs utils from http://blockdiag.com.\n\n``comments.py``\n Pandoc filter that causes everything between\n ``<!-- BEGIN COMMENT -->`` and ``<!-- END COMMENT -->`` to be ignored.\n The comment lines must appear on lines by themselves, with blank\n lines surrounding\n\n``deemph.py``\n Pandoc filter that causes emphasized text to be displayed in ALL\n CAPS.\n\n``deflists.py``\n Pandoc filter to convert definition lists to bullet lists with the\n defined terms in strong emphasis (for compatibility with standard\n markdown).\n\n``gabc.py``\n Pandoc filter to convert code blocks with class "gabc" to LaTeX\n \\gabcsnippet commands in LaTeX output, and to images in HTML output.\n\n``graphviz.py``\n Pandoc filter to process code blocks with class ``graphviz`` into\n graphviz-generated images.\n\n``lilypond.py``\n Pandoc filter to process code blocks with class "ly" containing\n Lilypond notation.\n\n``metavars.py``\n Pandoc filter to allow interpolation of metadata fields into a\n document. ``%{fields}`` will be replaced by the field's value, assuming\n it is of the type ``MetaInlines`` or ``MetaString``.\n\n``myemph.py``\n Pandoc filter that causes emphasis to be rendered using the custom\n macro ``\myemph{...}`` rather than ``\emph{...}`` in latex. Other output\n formats are unaffected.\n\n``plantuml.py``\n Pandoc filter to process code blocks with class ``plantuml`` to images.\n Needs `plantuml.jar` from http://plantuml.com/.\n\n``ditaa.py``\n Pandoc filter to process code blocks with class ``ditaa`` to images.\n Needs `ditaa.jar` from http://ditaa.sourceforge.net/.\n\n``theorem.py``\n Pandoc filter to convert divs with ``class="theorem"`` to LaTeX theorem\n environments in LaTeX output, and to numbered theorems in HTML\n output.\n\n``tikz.py``\n Pandoc filter to process raw latex tikz environments into images.\n Assumes that pdflatex is in the path, and that the standalone\n package is available. Also assumes that ImageMagick's convert is in\n the path. 
Images are put in the ``tikz-images`` directory.\n\nAPI documentation\n-----------------\n\nBy default most filters use ``get_filename4code`` to\ncreate a directory ``...-images`` to save temporary\nfiles. This directory doesn't get removed as it can be used as a cache so that\nlater pandoc runs don't have to recreate files if they already exist. The\ndirectory is generated in the current directory.\n\nIf you prefer to have a clean directory after running pandoc filters, you\ncan set an environment variable ``PANDOCFILTER_CLEANUP`` to any non-empty value such as `1`\nwhich forces the code to create a temporary directory that will be removed\nby the end of execution.\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\METADATA | METADATA | Other | 8,978 | 0.95 | 0.093985 | 0.005076 | react-lib | 672 | 2025-06-07T17:15:29.422675 | GPL-3.0 | false | 4383a662cb8d8318a526adc08a8b6a32 |
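All of the filters listed above follow the same mechanic: walk the pandoc JSON AST and return replacement nodes. The traversal itself is easy to sketch. The following is a simplified stand-in for the library's ``walk`` (not pandocfilters' actual implementation), operating on pandoc's ``{"t": ..., "c": ...}`` node shape, with the ``caps`` filter from the usage section applied to it:

```python
import json

def walk(x, action, fmt="", meta=None):
    """Recursively apply `action` to every {'t': ..., 'c': ...} node.

    `action(key, value, fmt, meta)` may return None (keep the node),
    a replacement node (substitute it), or a list of nodes (splice in).
    """
    if isinstance(x, list):
        out = []
        for item in x:
            if isinstance(item, dict) and "t" in item:
                res = action(item["t"], item.get("c"), fmt, meta)
                if res is None:
                    out.append(walk(item, action, fmt, meta))
                elif isinstance(res, list):
                    out.extend(walk(res, action, fmt, meta))
                else:
                    out.append(walk(res, action, fmt, meta))
            else:
                out.append(walk(item, action, fmt, meta))
        return out
    if isinstance(x, dict):
        return {k: walk(v, action, fmt, meta) for k, v in x.items()}
    return x  # plain strings/numbers pass through untouched

def caps(key, value, fmt, meta):
    # Same filter as in the usage example: uppercase all Str nodes.
    if key == "Str":
        return {"t": "Str", "c": value.upper()}

doc = [{"t": "Para", "c": [{"t": "Str", "c": "hello"}]}]
print(json.dumps(walk(doc, caps)))
```

A real filter would read the JSON document from stdin and write the transformed tree to stdout, which is exactly what ``toJSONFilter`` wraps.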
__pycache__/pandocfilters.cpython-313.pyc,,\npandocfilters-1.5.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\npandocfilters-1.5.1.dist-info/LICENSE,sha256=VnZs6ZcqUz_PZzMPGrvrQU4VPH5Ywn3FREG8D6-oFJI,1496\npandocfilters-1.5.1.dist-info/METADATA,sha256=w--HaNuiYN1GMR4azDpnhjoOVnGp0j_NLhp7claqtS4,8978\npandocfilters-1.5.1.dist-info/RECORD,,\npandocfilters-1.5.1.dist-info/WHEEL,sha256=-G_t0oGuE7UD0DrSpVZnq1hHMBV9DD2XkS5v7XpmTnk,110\npandocfilters-1.5.1.dist-info/top_level.txt,sha256=h7g1ToRWOmlqrfjEEXEnAlRitBAFDoq8mI-rNOCSY_8,14\npandocfilters.py,sha256=dqDT_lIkshPybYCXQZNnXKuXrsCBx3X8Io9NzZvc8CY,9070\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\RECORD | RECORD | Other | 635 | 0.7 | 0 | 0 | awesome-app | 506 | 2023-09-23T17:21:29.813930 | BSD-3-Clause | false | 73197ea5f0d851e74c82ec3d0c1502b1 |
pandocfilters\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\top_level.txt | top_level.txt | Other | 14 | 0.5 | 0 | 0 | react-lib | 146 | 2025-01-31T14:22:44.739941 | GPL-3.0 | false | daecf3ee0686942a84dde5ecdacda755 |
Wheel-Version: 1.0\nGenerator: bdist_wheel (0.42.0)\nRoot-Is-Purelib: true\nTag: py2-none-any\nTag: py3-none-any\n\n | .venv\Lib\site-packages\pandocfilters-1.5.1.dist-info\WHEEL | WHEEL | Other | 110 | 0.7 | 0 | 0 | python-kit | 896 | 2025-05-21T20:39:12.040029 | GPL-3.0 | false | 2313aa2f22b437eec79847eb5836f034 |
import time\nimport os\nimport sys\nimport hashlib\nimport gc\nimport shutil\nimport platform\nimport logging\nimport warnings\nimport pickle\nfrom pathlib import Path\nfrom typing import Dict, Any\n\nLOG = logging.getLogger(__name__)\n\n_CACHED_FILE_MINIMUM_SURVIVAL = 60 * 10 # 10 minutes\n"""\nCached files should survive at least a few minutes.\n"""\n\n_CACHED_FILE_MAXIMUM_SURVIVAL = 60 * 60 * 24 * 30\n"""\nMaximum time for a cached file to survive if it is not\naccessed within.\n"""\n\n_CACHED_SIZE_TRIGGER = 600\n"""\nThis setting limits the amount of cached files. It's basically a way to start\ngarbage collection.\n\nThe reasoning for this limit being as big as it is, is the following:\n\nNumpy, Pandas, Matplotlib and Tensorflow together use about 500 files. This\nmakes Jedi use ~500mb of memory. Since we might want a bit more than those few\nlibraries, we just increase it a bit.\n"""\n\n_PICKLE_VERSION = 33\n"""\nVersion number (integer) for file system cache.\n\nIncrement this number when there are any incompatible changes in\nthe parser tree classes. 
For example, the following changes\nare regarded as incompatible.\n\n- A class name is changed.\n- A class is moved to another module.\n- A __slot__ of a class is changed.\n"""\n\n_VERSION_TAG = '%s-%s%s-%s' % (\n platform.python_implementation(),\n sys.version_info[0],\n sys.version_info[1],\n _PICKLE_VERSION\n)\n"""\nShort name to distinguish Python implementations and versions.\n\nIt's a bit similar to `sys.implementation.cache_tag`.\nSee: http://docs.python.org/3/library/sys.html#sys.implementation\n"""\n\n\ndef _get_default_cache_path():\n if platform.system().lower() == 'windows':\n dir_ = Path(os.getenv('LOCALAPPDATA') or '~', 'Parso', 'Parso')\n elif platform.system().lower() == 'darwin':\n dir_ = Path('~', 'Library', 'Caches', 'Parso')\n else:\n dir_ = Path(os.getenv('XDG_CACHE_HOME') or '~/.cache', 'parso')\n return dir_.expanduser()\n\n\n_default_cache_path = _get_default_cache_path()\n"""\nThe path where the cache is stored.\n\nOn Linux, this defaults to ``~/.cache/parso/``, on OS X to\n``~/Library/Caches/Parso/`` and on Windows to ``%LOCALAPPDATA%\\Parso\\Parso\\``.\nOn Linux, if environment variable ``$XDG_CACHE_HOME`` is set,\n``$XDG_CACHE_HOME/parso`` is used instead of the default one.\n"""\n\n_CACHE_CLEAR_THRESHOLD = 60 * 60 * 24\n\n\ndef _get_cache_clear_lock_path(cache_path=None):\n """\n The path where the cache lock is stored.\n\n The cache lock prevents continuous cache clearing and only allows garbage\n collection once a day (can be configured in _CACHE_CLEAR_THRESHOLD).\n """\n cache_path = cache_path or _default_cache_path\n return cache_path.joinpath("PARSO-CACHE-LOCK")\n\n\nparser_cache: Dict[str, Any] = {}\n\n\nclass _NodeCacheItem:\n def __init__(self, node, lines, change_time=None):\n self.node = node\n self.lines = lines\n if change_time is None:\n change_time = time.time()\n self.change_time = change_time\n self.last_used = change_time\n\n\ndef load_module(hashed_grammar, file_io, cache_path=None):\n """\n Returns a module or None, 
if it fails.\n """\n p_time = file_io.get_last_modified()\n if p_time is None:\n return None\n\n try:\n module_cache_item = parser_cache[hashed_grammar][file_io.path]\n if p_time <= module_cache_item.change_time:\n module_cache_item.last_used = time.time()\n return module_cache_item.node\n except KeyError:\n return _load_from_file_system(\n hashed_grammar,\n file_io.path,\n p_time,\n cache_path=cache_path\n )\n\n\ndef _load_from_file_system(hashed_grammar, path, p_time, cache_path=None):\n cache_path = _get_hashed_path(hashed_grammar, path, cache_path=cache_path)\n try:\n if p_time > os.path.getmtime(cache_path):\n # Cache is outdated\n return None\n\n with open(cache_path, 'rb') as f:\n gc.disable()\n try:\n module_cache_item = pickle.load(f)\n finally:\n gc.enable()\n except FileNotFoundError:\n return None\n else:\n _set_cache_item(hashed_grammar, path, module_cache_item)\n LOG.debug('pickle loaded: %s', path)\n return module_cache_item.node\n\n\ndef _set_cache_item(hashed_grammar, path, module_cache_item):\n if sum(len(v) for v in parser_cache.values()) >= _CACHED_SIZE_TRIGGER:\n # Garbage collection of old cache files.\n # We are basically throwing everything away that hasn't been accessed\n # in 10 minutes.\n cutoff_time = time.time() - _CACHED_FILE_MINIMUM_SURVIVAL\n for key, path_to_item_map in parser_cache.items():\n parser_cache[key] = {\n path: node_item\n for path, node_item in path_to_item_map.items()\n if node_item.last_used > cutoff_time\n }\n\n parser_cache.setdefault(hashed_grammar, {})[path] = module_cache_item\n\n\ndef try_to_save_module(hashed_grammar, file_io, module, lines, pickling=True, cache_path=None):\n path = file_io.path\n try:\n p_time = None if path is None else file_io.get_last_modified()\n except OSError:\n p_time = None\n pickling = False\n\n item = _NodeCacheItem(module, lines, p_time)\n _set_cache_item(hashed_grammar, path, item)\n if pickling and path is not None:\n try:\n _save_to_file_system(hashed_grammar, path, item, 
cache_path=cache_path)\n except PermissionError:\n # It's not really a big issue if the cache cannot be saved to the\n # file system. It's still in RAM in that case. However we should\n # still warn the user that this is happening.\n warnings.warn(\n 'Tried to save a file to %s, but got permission denied.' % path,\n Warning\n )\n else:\n _remove_cache_and_update_lock(cache_path=cache_path)\n\n\ndef _save_to_file_system(hashed_grammar, path, item, cache_path=None):\n with open(_get_hashed_path(hashed_grammar, path, cache_path=cache_path), 'wb') as f:\n pickle.dump(item, f, pickle.HIGHEST_PROTOCOL)\n\n\ndef clear_cache(cache_path=None):\n if cache_path is None:\n cache_path = _default_cache_path\n shutil.rmtree(cache_path)\n parser_cache.clear()\n\n\ndef clear_inactive_cache(\n cache_path=None,\n inactivity_threshold=_CACHED_FILE_MAXIMUM_SURVIVAL,\n):\n if cache_path is None:\n cache_path = _default_cache_path\n if not cache_path.exists():\n return False\n for dirname in os.listdir(cache_path):\n version_path = cache_path.joinpath(dirname)\n if not version_path.is_dir():\n continue\n for file in os.scandir(version_path):\n if file.stat().st_atime + _CACHED_FILE_MAXIMUM_SURVIVAL <= time.time():\n try:\n os.remove(file.path)\n except OSError: # silently ignore all failures\n continue\n else:\n return True\n\n\ndef _touch(path):\n try:\n os.utime(path, None)\n except FileNotFoundError:\n try:\n file = open(path, 'a')\n file.close()\n except (OSError, IOError): # TODO Maybe log this?\n return False\n return True\n\n\ndef _remove_cache_and_update_lock(cache_path=None):\n lock_path = _get_cache_clear_lock_path(cache_path=cache_path)\n try:\n clear_lock_time = os.path.getmtime(lock_path)\n except FileNotFoundError:\n clear_lock_time = None\n if (\n clear_lock_time is None # first time\n or clear_lock_time + _CACHE_CLEAR_THRESHOLD <= time.time()\n ):\n if not _touch(lock_path):\n # First make sure that as few as possible other cleanup jobs also\n # get started. 
There is still a race condition but it's probably\n # not a big problem.\n return False\n\n clear_inactive_cache(cache_path=cache_path)\n\n\ndef _get_hashed_path(hashed_grammar, path, cache_path=None):\n directory = _get_cache_directory_path(cache_path=cache_path)\n\n file_hash = hashlib.sha256(str(path).encode("utf-8")).hexdigest()\n return os.path.join(directory, '%s-%s.pkl' % (hashed_grammar, file_hash))\n\n\ndef _get_cache_directory_path(cache_path=None):\n if cache_path is None:\n cache_path = _default_cache_path\n directory = cache_path.joinpath(_VERSION_TAG)\n if not directory.exists():\n os.makedirs(directory)\n return directory\n | .venv\Lib\site-packages\parso\cache.py | cache.py | Python | 8,452 | 0.95 | 0.210909 | 0.044843 | node-utils | 733 | 2023-08-31T17:11:53.518600 | Apache-2.0 | false | d911d877331437158cdda9a605888a11 |
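Two details of the cache layout above are worth isolating: pickle files live under a per-interpreter version directory (`_VERSION_TAG`), and each file is named after the grammar hash plus a SHA-256 of the source path. A self-contained sketch of that naming scheme (mirroring `_VERSION_TAG` and `_get_hashed_path`; the grammar hash used below is a made-up placeholder, not a real parso hash):

```python
import hashlib
import platform
import sys

_PICKLE_VERSION = 33

# Mirrors parso's _VERSION_TAG: implementation, major/minor version,
# and the pickle format version, e.g. "CPython-313-33".
version_tag = "%s-%s%s-%s" % (
    platform.python_implementation(),
    sys.version_info[0],
    sys.version_info[1],
    _PICKLE_VERSION,
)

def hashed_pickle_name(hashed_grammar, path):
    # One pickle file per (grammar, source path) pair. Hashing the path
    # keeps arbitrary file names from escaping the cache directory.
    file_hash = hashlib.sha256(str(path).encode("utf-8")).hexdigest()
    return "%s-%s.pkl" % (hashed_grammar, file_hash)

print(version_tag)
print(hashed_pickle_name("abc123", "/tmp/example.py"))
```

Bumping `_PICKLE_VERSION` changes the directory name, so old incompatible pickles are simply never looked at again and eventually age out via `clear_inactive_cache`.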
import os\nfrom pathlib import Path\nfrom typing import Union\n\n\nclass FileIO:\n def __init__(self, path: Union[os.PathLike, str]):\n if isinstance(path, str):\n path = Path(path)\n self.path = path\n\n def read(self): # Returns bytes/str\n # We would like to read unicode here, but we cannot, because we are not\n # sure if it is a valid unicode file. Therefore just read whatever is\n # here.\n with open(self.path, 'rb') as f:\n return f.read()\n\n def get_last_modified(self):\n """\n Returns float - timestamp or None, if path doesn't exist.\n """\n try:\n return os.path.getmtime(self.path)\n except FileNotFoundError:\n return None\n\n def __repr__(self):\n return '%s(%s)' % (self.__class__.__name__, self.path)\n\n\nclass KnownContentFileIO(FileIO):\n def __init__(self, path, content):\n super().__init__(path)\n self._content = content\n\n def read(self):\n return self._content\n | .venv\Lib\site-packages\parso\file_io.py | file_io.py | Python | 1,023 | 0.95 | 0.315789 | 0.1 | awesome-app | 227 | 2024-10-03T12:28:05.365338 | BSD-3-Clause | false | b407158fa08cab3c2bad8446ff13cdf5 |
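`FileIO.read` deliberately returns bytes because the encoding is unknown at this layer; decoding happens later (via `python_bytes_to_unicode` in `grammar.py`). A quick usage sketch against a temporary file, with the class re-declared in trimmed form so the snippet stands alone:

```python
import os
import tempfile
from pathlib import Path

class FileIO:
    # Trimmed copy of parso.file_io.FileIO, for illustration only.
    def __init__(self, path):
        self.path = Path(path) if isinstance(path, str) else path

    def read(self):
        # Bytes on purpose: the file may not be valid UTF-8.
        with open(self.path, 'rb') as f:
            return f.read()

    def get_last_modified(self):
        try:
            return os.path.getmtime(self.path)
        except FileNotFoundError:
            return None

with tempfile.NamedTemporaryFile('wb', suffix='.py', delete=False) as f:
    f.write(b"x = 1\n")
    name = f.name

io = FileIO(name)
print(io.read())                          # raw bytes
print(io.get_last_modified() is not None)
os.unlink(name)
print(FileIO(name).get_last_modified())   # None once the file is gone
```

The `None` return for a missing file is what lets `load_module` in `cache.py` bail out early instead of raising.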
import hashlib\nimport os\nfrom typing import Generic, TypeVar, Union, Dict, Optional, Any\nfrom pathlib import Path\n\nfrom parso._compatibility import is_pypy\nfrom parso.pgen2 import generate_grammar\nfrom parso.utils import split_lines, python_bytes_to_unicode, \\n PythonVersionInfo, parse_version_string\nfrom parso.python.diff import DiffParser\nfrom parso.python.tokenize import tokenize_lines, tokenize\nfrom parso.python.token import PythonTokenTypes\nfrom parso.cache import parser_cache, load_module, try_to_save_module\nfrom parso.parser import BaseParser\nfrom parso.python.parser import Parser as PythonParser\nfrom parso.python.errors import ErrorFinderConfig\nfrom parso.python import pep8\nfrom parso.file_io import FileIO, KnownContentFileIO\nfrom parso.normalizer import RefactoringNormalizer, NormalizerConfig\n\n_loaded_grammars: Dict[str, 'Grammar'] = {}\n\n_NodeT = TypeVar("_NodeT")\n\n\nclass Grammar(Generic[_NodeT]):\n """\n :py:func:`parso.load_grammar` returns instances of this class.\n\n Creating custom non-Python grammars by calling this is not supported, yet.\n\n :param text: A BNF representation of your grammar.\n """\n _start_nonterminal: str\n _error_normalizer_config: Optional[ErrorFinderConfig] = None\n _token_namespace: Any = None\n _default_normalizer_config: NormalizerConfig = pep8.PEP8NormalizerConfig()\n\n def __init__(self, text: str, *, tokenizer, parser=BaseParser, diff_parser=None):\n self._pgen_grammar = generate_grammar(\n text,\n token_namespace=self._get_token_namespace()\n )\n self._parser = parser\n self._tokenizer = tokenizer\n self._diff_parser = diff_parser\n self._hashed = hashlib.sha256(text.encode("utf-8")).hexdigest()\n\n def parse(self,\n code: Union[str, bytes] = None,\n *,\n error_recovery=True,\n path: Union[os.PathLike, str] = None,\n start_symbol: str = None,\n cache=False,\n diff_cache=False,\n cache_path: Union[os.PathLike, str] = None,\n file_io: FileIO = None) -> _NodeT:\n """\n If you want to parse a Python 
file, you most likely want to start here.\n\n If you need finer-grained control over the parsed instance, there will be\n other ways to access it.\n\n :param str code: A unicode or bytes string. When it's not possible to\n decode bytes to a string, raises a\n :py:class:`UnicodeDecodeError`.\n :param bool error_recovery: If enabled, any code will be returned. If\n it is invalid, it will be returned as an error node. If disabled,\n you will get a ParseError when encountering syntax errors in your\n code.\n :param str start_symbol: The grammar rule (nonterminal) that you want\n to parse. Only allowed to be used when error_recovery is False.\n :param str path: The path to the file you want to open. Only needed for caching.\n :param bool cache: Keeps a copy of the parser tree in RAM and on disk\n if a path is given. Returns the cached trees if the corresponding\n files on disk have not changed. Note that this stores pickle files\n on your file system (e.g. for Linux in ``~/.cache/parso/``).\n :param bool diff_cache: Diffs the cached python module against the new\n code and tries to parse only the parts that have changed. Returns\n the same (changed) module that is found in cache. Using this option\n requires you to not do anything anymore with the cached modules\n under that path, because the contents of it might change. This\n option is still somewhat experimental. If you want stability,\n please don't use it.\n :param str cache_path: If given, saves the parso cache in this\n directory. If not given, defaults to the default cache location on\n each platform.\n\n :return: A subclass of :py:class:`parso.tree.NodeOrLeaf`. 
Typically a\n :py:class:`parso.python.tree.Module`.\n """\n if code is None and path is None and file_io is None:\n raise TypeError("Please provide either code or a path.")\n\n if isinstance(path, str):\n path = Path(path)\n if isinstance(cache_path, str):\n cache_path = Path(cache_path)\n\n if start_symbol is None:\n start_symbol = self._start_nonterminal\n\n if error_recovery and start_symbol != 'file_input':\n raise NotImplementedError("This is currently not implemented.")\n\n if file_io is None:\n if code is None:\n file_io = FileIO(path) # type: ignore[arg-type]\n else:\n file_io = KnownContentFileIO(path, code)\n\n if cache and file_io.path is not None:\n module_node = load_module(self._hashed, file_io, cache_path=cache_path)\n if module_node is not None:\n return module_node # type: ignore[no-any-return]\n\n if code is None:\n code = file_io.read()\n code = python_bytes_to_unicode(code)\n\n lines = split_lines(code, keepends=True)\n if diff_cache:\n if self._diff_parser is None:\n raise TypeError("You have to define a diff parser to be able "\n "to use this option.")\n try:\n module_cache_item = parser_cache[self._hashed][file_io.path]\n except KeyError:\n pass\n else:\n module_node = module_cache_item.node\n old_lines = module_cache_item.lines\n if old_lines == lines:\n return module_node # type: ignore[no-any-return]\n\n new_node = self._diff_parser(\n self._pgen_grammar, self._tokenizer, module_node\n ).update(\n old_lines=old_lines,\n new_lines=lines\n )\n try_to_save_module(self._hashed, file_io, new_node, lines,\n # Never pickle in pypy, it's slow as hell.\n pickling=cache and not is_pypy,\n cache_path=cache_path)\n return new_node # type: ignore[no-any-return]\n\n tokens = self._tokenizer(lines)\n\n p = self._parser(\n self._pgen_grammar,\n error_recovery=error_recovery,\n start_nonterminal=start_symbol\n )\n root_node = p.parse(tokens=tokens)\n\n if cache or diff_cache:\n try_to_save_module(self._hashed, file_io, root_node, lines,\n # Never pickle in 
pypy, it's slow as hell.\n pickling=cache and not is_pypy,\n cache_path=cache_path)\n return root_node # type: ignore[no-any-return]\n\n def _get_token_namespace(self):\n ns = self._token_namespace\n if ns is None:\n raise ValueError("The token namespace should be set.")\n return ns\n\n def iter_errors(self, node):\n """\n Given a :py:class:`parso.tree.NodeOrLeaf` returns a generator of\n :py:class:`parso.normalizer.Issue` objects. For Python this is\n a list of syntax/indentation errors.\n """\n if self._error_normalizer_config is None:\n raise ValueError("No error normalizer specified for this grammar.")\n\n return self._get_normalizer_issues(node, self._error_normalizer_config)\n\n def refactor(self, base_node, node_to_str_map):\n return RefactoringNormalizer(node_to_str_map).walk(base_node)\n\n def _get_normalizer(self, normalizer_config):\n if normalizer_config is None:\n normalizer_config = self._default_normalizer_config\n if normalizer_config is None:\n raise ValueError("You need to specify a normalizer, because "\n "there's no default normalizer for this tree.")\n return normalizer_config.create_normalizer(self)\n\n def _normalize(self, node, normalizer_config=None):\n """\n TODO this is not public, yet.\n The returned code will be normalized, e.g. 
PEP8 for Python.\n """\n normalizer = self._get_normalizer(normalizer_config)\n return normalizer.walk(node)\n\n def _get_normalizer_issues(self, node, normalizer_config=None):\n normalizer = self._get_normalizer(normalizer_config)\n normalizer.walk(node)\n return normalizer.issues\n\n def __repr__(self):\n nonterminals = self._pgen_grammar.nonterminal_to_dfas.keys()\n txt = ' '.join(list(nonterminals)[:3]) + ' ...'\n return '<%s:%s>' % (self.__class__.__name__, txt)\n\n\nclass PythonGrammar(Grammar):\n _error_normalizer_config = ErrorFinderConfig()\n _token_namespace = PythonTokenTypes\n _start_nonterminal = 'file_input'\n\n def __init__(self, version_info: PythonVersionInfo, bnf_text: str):\n super().__init__(\n bnf_text,\n tokenizer=self._tokenize_lines,\n parser=PythonParser,\n diff_parser=DiffParser\n )\n self.version_info = version_info\n\n def _tokenize_lines(self, lines, **kwargs):\n return tokenize_lines(lines, version_info=self.version_info, **kwargs)\n\n def _tokenize(self, code):\n # Used by Jedi.\n return tokenize(code, version_info=self.version_info)\n\n\ndef load_grammar(*, version: str = None, path: str = None):\n """\n Loads a :py:class:`parso.Grammar`. The default version is the current Python\n version.\n\n :param str version: A python version string, e.g. ``version='3.8'``.\n :param str path: A path to a grammar file\n """\n version_info = parse_version_string(version)\n\n file = path or os.path.join(\n 'python',\n 'grammar%s%s.txt' % (version_info.major, version_info.minor)\n )\n\n global _loaded_grammars\n path = os.path.join(os.path.dirname(__file__), file)\n try:\n return _loaded_grammars[path]\n except KeyError:\n try:\n with open(path) as f:\n bnf_text = f.read()\n\n grammar = PythonGrammar(version_info, bnf_text)\n return _loaded_grammars.setdefault(path, grammar)\n except FileNotFoundError:\n message = "Python version %s.%s is currently not supported." 
% (\n version_info.major, version_info.minor\n )\n raise NotImplementedError(message)\n | .venv\Lib\site-packages\parso\grammar.py | grammar.py | Python | 10,553 | 0.95 | 0.189394 | 0.018018 | python-kit | 872 | 2023-11-09T22:33:56.813414 | MIT | false | 64dc06089fa7ab893efa0fdbc88d0c18 |
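`load_grammar` memoizes grammars by absolute grammar-file path in `_loaded_grammars`, so repeated calls for the same version return the identical `Grammar` object. The pattern is just a module-level dict plus `setdefault`; a stripped-down sketch (with a stand-in `parse_grammar_file` in place of the expensive `PythonGrammar` construction, not parso's API):

```python
_loaded = {}

def parse_grammar_file(path):
    # Stand-in for the expensive grammar construction step.
    return {"path": path}

def load_grammar(path):
    try:
        return _loaded[path]
    except KeyError:
        # setdefault guards against a second call having filled the
        # slot between the failed lookup above and the store here.
        return _loaded.setdefault(path, parse_grammar_file(path))

a = load_grammar("python/grammar313.txt")
b = load_grammar("python/grammar313.txt")
print(a is b)  # the second call hits the cache
```

This is why mutating a loaded grammar is unsafe: every caller asking for the same version shares the one cached instance.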
from contextlib import contextmanager\nfrom typing import Dict, List\n\n\nclass _NormalizerMeta(type):\n def __new__(cls, name, bases, dct):\n new_cls = type.__new__(cls, name, bases, dct)\n new_cls.rule_value_classes = {}\n new_cls.rule_type_classes = {}\n return new_cls\n\n\nclass Normalizer(metaclass=_NormalizerMeta):\n _rule_type_instances: Dict[str, List[type]] = {}\n _rule_value_instances: Dict[str, List[type]] = {}\n\n def __init__(self, grammar, config):\n self.grammar = grammar\n self._config = config\n self.issues = []\n\n self._rule_type_instances = self._instantiate_rules('rule_type_classes')\n self._rule_value_instances = self._instantiate_rules('rule_value_classes')\n\n def _instantiate_rules(self, attr):\n dct = {}\n for base in type(self).mro():\n rules_map = getattr(base, attr, {})\n for type_, rule_classes in rules_map.items():\n new = [rule_cls(self) for rule_cls in rule_classes]\n dct.setdefault(type_, []).extend(new)\n return dct\n\n def walk(self, node):\n self.initialize(node)\n value = self.visit(node)\n self.finalize()\n return value\n\n def visit(self, node):\n try:\n children = node.children\n except AttributeError:\n return self.visit_leaf(node)\n else:\n with self.visit_node(node):\n return ''.join(self.visit(child) for child in children)\n\n @contextmanager\n def visit_node(self, node):\n self._check_type_rules(node)\n yield\n\n def _check_type_rules(self, node):\n for rule in self._rule_type_instances.get(node.type, []):\n rule.feed_node(node)\n\n def visit_leaf(self, leaf):\n self._check_type_rules(leaf)\n\n for rule in self._rule_value_instances.get(leaf.value, []):\n rule.feed_node(leaf)\n\n return leaf.prefix + leaf.value\n\n def initialize(self, node):\n pass\n\n def finalize(self):\n pass\n\n def add_issue(self, node, code, message):\n issue = Issue(node, code, message)\n if issue not in self.issues:\n self.issues.append(issue)\n return True\n\n @classmethod\n def register_rule(cls, *, value=None, values=(), type=None, 
types=()):\n """\n Use it as a class decorator::\n\n normalizer = Normalizer('grammar', 'config')\n @normalizer.register_rule(value='foo')\n class MyRule(Rule):\n error_code = 42\n """\n values = list(values)\n types = list(types)\n if value is not None:\n values.append(value)\n if type is not None:\n types.append(type)\n\n if not values and not types:\n raise ValueError("You must register at least something.")\n\n def decorator(rule_cls):\n for v in values:\n cls.rule_value_classes.setdefault(v, []).append(rule_cls)\n for t in types:\n cls.rule_type_classes.setdefault(t, []).append(rule_cls)\n return rule_cls\n\n return decorator\n\n\nclass NormalizerConfig:\n normalizer_class = Normalizer\n\n def create_normalizer(self, grammar):\n if self.normalizer_class is None:\n return None\n\n return self.normalizer_class(grammar, self)\n\n\nclass Issue:\n def __init__(self, node, code, message):\n self.code = code\n """\n An integer code that stands for the type of error.\n """\n self.message = message\n """\n A message (string) for the issue.\n """\n self.start_pos = node.start_pos\n """\n The start position of the error as a tuple (line, column). 
As\n always in |parso| the first line is 1 and the first column 0.\n """\n self.end_pos = node.end_pos\n\n def __eq__(self, other):\n return self.start_pos == other.start_pos and self.code == other.code\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __hash__(self):\n return hash((self.code, self.start_pos))\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.code)\n\n\nclass Rule:\n code: int\n message: str\n\n def __init__(self, normalizer):\n self._normalizer = normalizer\n\n def is_issue(self, node):\n raise NotImplementedError()\n\n def get_node(self, node):\n return node\n\n def _get_message(self, message, node):\n if message is None:\n message = self.message\n if message is None:\n raise ValueError("The message on the class is not set.")\n return message\n\n def add_issue(self, node, code=None, message=None):\n if code is None:\n code = self.code\n if code is None:\n raise ValueError("The error code on the class is not set.")\n\n message = self._get_message(message, node)\n\n self._normalizer.add_issue(node, code, message)\n\n def feed_node(self, node):\n if self.is_issue(node):\n issue_node = self.get_node(node)\n self.add_issue(issue_node)\n\n\nclass RefactoringNormalizer(Normalizer):\n def __init__(self, node_to_str_map):\n self._node_to_str_map = node_to_str_map\n\n def visit(self, node):\n try:\n return self._node_to_str_map[node]\n except KeyError:\n return super().visit(node)\n\n def visit_leaf(self, leaf):\n try:\n return self._node_to_str_map[leaf]\n except KeyError:\n return super().visit_leaf(leaf)\n | .venv\Lib\site-packages\parso\normalizer.py | normalizer.py | Python | 5,597 | 0.85 | 0.308081 | 0 | python-kit | 465 | 2024-03-30T05:34:49.945606 | MIT | false | 4b7a8c1c562a084850ed54a62e258c64 |
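The `register_rule` decorator above simply files rule classes into per-class dicts keyed by leaf value or node type, which `_instantiate_rules` later collects by walking the MRO. The registry half of that machinery in isolation (a simplified sketch, not importable parso code; the plural `values=`/`types=` variants are omitted):

```python
class Registry:
    # In parso these dicts are created per subclass by _NormalizerMeta;
    # a single class is enough for this sketch.
    rule_value_classes = {}
    rule_type_classes = {}

    @classmethod
    def register_rule(cls, *, value=None, type=None):
        if value is None and type is None:
            raise ValueError("You must register at least something.")

        def decorator(rule_cls):
            if value is not None:
                cls.rule_value_classes.setdefault(value, []).append(rule_cls)
            if type is not None:
                cls.rule_type_classes.setdefault(type, []).append(rule_cls)
            return rule_cls  # returned unchanged, as class decorators should be

        return decorator

@Registry.register_rule(value='foo')
class FooRule:
    code = 42

print(Registry.rule_value_classes)
```

Because registration only appends to a dict of lists, several rules can watch the same value or type, and a single rule class can be registered for both.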
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.\n# Licensed to PSF under a Contributor Agreement.\n\n# Modifications:\n# Copyright David Halter and Contributors\n# Modifications are dual-licensed: MIT and PSF.\n# 99% of the code is different from pgen2, now.\n\n"""\nThe ``Parser`` tries to convert the available Python code into an easy-to-read\nformat, something like an abstract syntax tree. The classes that represent this\ntree are sitting in the :mod:`parso.tree` module.\n\nThe Python module ``tokenize`` is a very important part of the ``Parser``,\nbecause it splits the code into different words (tokens). Sometimes it looks a\nbit messy. Sorry for that! You might ask now: "Why didn't you use the ``ast``\nmodule for this?" Well, ``ast`` does a very good job understanding proper Python\ncode, but fails to work as soon as there's a single line of broken code.\n\nThere's one important optimization that needs to be known: Statements are not\nbeing parsed completely. ``Statement`` is just a representation of the tokens\nwithin the statement. This lowers memory usage and CPU time and reduces the\ncomplexity of the ``Parser`` (there's another parser sitting inside\n``Statement``, which produces ``Array`` and ``Call``).\n"""\nfrom typing import Dict, Type\n\nfrom parso import tree\nfrom parso.pgen2.generator import ReservedString\n\n\nclass ParserSyntaxError(Exception):\n """\n Contains error information about the parser tree.\n\n May be raised as an exception.\n """\n def __init__(self, message, error_leaf):\n self.message = message\n self.error_leaf = error_leaf\n\n\nclass InternalParseError(Exception):\n """\n Exception to signal the parser is stuck and error recovery didn't help.\n Basically this shouldn't happen. 
It's a sign that something is really\n wrong.\n """\n\n def __init__(self, msg, type_, value, start_pos):\n Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" %\n (msg, type_.name, value, start_pos))\n self.msg = msg\n self.type = type_\n self.value = value\n self.start_pos = start_pos\n\n\nclass Stack(list):\n def _allowed_transition_names_and_token_types(self):\n def iterate():\n # An API just for Jedi.\n for stack_node in reversed(self):\n for transition in stack_node.dfa.transitions:\n if isinstance(transition, ReservedString):\n yield transition.value\n else:\n yield transition # A token type\n\n if not stack_node.dfa.is_final:\n break\n\n return list(iterate())\n\n\nclass StackNode:\n def __init__(self, dfa):\n self.dfa = dfa\n self.nodes = []\n\n @property\n def nonterminal(self):\n return self.dfa.from_rule\n\n def __repr__(self):\n return '%s(%s, %s)' % (self.__class__.__name__, self.dfa, self.nodes)\n\n\ndef _token_to_transition(grammar, type_, value):\n # Map from token to label\n if type_.value.contains_syntax:\n # Check for reserved words (keywords)\n try:\n return grammar.reserved_syntax_strings[value]\n except KeyError:\n pass\n\n return type_\n\n\nclass BaseParser:\n """Parser engine.\n\n A Parser instance contains state pertaining to the current token\n sequence, and should not be used concurrently by different threads\n to parse separate token sequences.\n\n See python/tokenize.py for how to get input tokens by a string.\n\n When a syntax error occurs, error_recovery() is called.\n """\n\n node_map: Dict[str, Type[tree.BaseNode]] = {}\n default_node = tree.Node\n\n leaf_map: Dict[str, Type[tree.Leaf]] = {}\n default_leaf = tree.Leaf\n\n def __init__(self, pgen_grammar, start_nonterminal='file_input', error_recovery=False):\n self._pgen_grammar = pgen_grammar\n self._start_nonterminal = start_nonterminal\n self._error_recovery = error_recovery\n\n def parse(self, tokens):\n first_dfa = 
self._pgen_grammar.nonterminal_to_dfas[self._start_nonterminal][0]\n self.stack = Stack([StackNode(first_dfa)])\n\n for token in tokens:\n self._add_token(token)\n\n while True:\n tos = self.stack[-1]\n if not tos.dfa.is_final:\n # We never broke out -- EOF is too soon -- Unfinished statement.\n # However, the error recovery might have added the token again, if\n # the stack is empty, we're fine.\n raise InternalParseError(\n "incomplete input", token.type, token.string, token.start_pos\n )\n\n if len(self.stack) > 1:\n self._pop()\n else:\n return self.convert_node(tos.nonterminal, tos.nodes)\n\n def error_recovery(self, token):\n if self._error_recovery:\n raise NotImplementedError("Error Recovery is not implemented")\n else:\n type_, value, start_pos, prefix = token\n error_leaf = tree.ErrorLeaf(type_, value, start_pos, prefix)\n raise ParserSyntaxError('SyntaxError: invalid syntax', error_leaf)\n\n def convert_node(self, nonterminal, children):\n try:\n node = self.node_map[nonterminal](children)\n except KeyError:\n node = self.default_node(nonterminal, children)\n return node\n\n def convert_leaf(self, type_, value, prefix, start_pos):\n try:\n return self.leaf_map[type_](value, start_pos, prefix)\n except KeyError:\n return self.default_leaf(value, start_pos, prefix)\n\n def _add_token(self, token):\n """\n This is the only core function for parsing. Here happens basically\n everything. 
Everything is well prepared by the parser generator and we\n only apply the necessary steps here.\n """\n grammar = self._pgen_grammar\n stack = self.stack\n type_, value, start_pos, prefix = token\n transition = _token_to_transition(grammar, type_, value)\n\n while True:\n try:\n plan = stack[-1].dfa.transitions[transition]\n break\n except KeyError:\n if stack[-1].dfa.is_final:\n self._pop()\n else:\n self.error_recovery(token)\n return\n except IndexError:\n raise InternalParseError("too much input", type_, value, start_pos)\n\n stack[-1].dfa = plan.next_dfa\n\n for push in plan.dfa_pushes:\n stack.append(StackNode(push))\n\n leaf = self.convert_leaf(type_, value, prefix, start_pos)\n stack[-1].nodes.append(leaf)\n\n def _pop(self):\n tos = self.stack.pop()\n # If there's exactly one child, return that child instead of\n # creating a new node. We still create expr_stmt and\n # file_input though, because a lot of Jedi depends on its\n # logic.\n if len(tos.nodes) == 1:\n new_node = tos.nodes[0]\n else:\n new_node = self.convert_node(tos.dfa.from_rule, tos.nodes)\n\n self.stack[-1].nodes.append(new_node)\n | .venv\Lib\site-packages\parso\parser.py | parser.py | Python | 7,182 | 0.95 | 0.219048 | 0.096386 | python-kit | 997 | 2024-10-24T12:53:07.573059 | MIT | false | da1dc8d136a089879031d9eca2c8b5da |
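The `_pop` logic deserves a close look: when a nonterminal completes with exactly one child, that child is kept in place of a new node, which is why parso trees stay shallow for expressions like a bare name. A toy reduction showing the effect (simplified; real parso nodes also carry positions and prefixes, and `expr_stmt`/`file_input` are exempt from collapsing):

```python
class Node:
    # Minimal stand-in for parso.tree.Node.
    def __init__(self, nonterminal, children):
        self.type = nonterminal
        self.children = children

def reduce_nonterminal(nonterminal, children):
    # Mirrors the core of BaseParser._pop: a single child replaces
    # the node that would otherwise wrap it.
    if len(children) == 1:
        return children[0]
    return Node(nonterminal, children)

leaf = "x"
collapsed = reduce_nonterminal("atom", [leaf])
kept = reduce_nonterminal("arith_expr", ["1", "+", "2"])
print(collapsed)                    # just the leaf, no 'atom' wrapper
print(kept.type, kept.children)
```

Without this collapse, a single name would be wrapped in a chain of one-child nodes for every grammar level it passes through on the way up the stack.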
from abc import abstractmethod, abstractproperty\nfrom typing import List, Optional, Tuple, Union\n\nfrom parso.utils import split_lines\n\n\ndef search_ancestor(node: 'NodeOrLeaf', *node_types: str) -> 'Optional[BaseNode]':\n """\n Recursively looks at the parents of a node and returns the first found node\n that matches ``node_types``. Returns ``None`` if no matching node is found.\n\n This function is deprecated, use :meth:`NodeOrLeaf.search_ancestor` instead.\n\n :param node: The ancestors of this node will be checked.\n :param node_types: type names that are searched for.\n """\n n = node.parent\n while n is not None:\n if n.type in node_types:\n return n\n n = n.parent\n return None\n\n\nclass NodeOrLeaf:\n """\n The base class for nodes and leaves.\n """\n __slots__ = ('parent',)\n type: str\n '''\n The type is a string that typically matches the types of the grammar file.\n '''\n parent: 'Optional[BaseNode]'\n '''\n The parent :class:`BaseNode` of this node or leaf.\n None if this is the root node.\n '''\n\n def get_root_node(self):\n """\n Returns the root node of a parser tree. The returned node doesn't have\n a parent node like all the other nodes/leaves.\n """\n scope = self\n while scope.parent is not None:\n scope = scope.parent\n return scope\n\n def get_next_sibling(self):\n """\n Returns the node immediately following this node in this parent's\n children list. If this node does not have a next sibling, it is None\n """\n parent = self.parent\n if parent is None:\n return None\n\n # Can't use index(); we need to test by identity\n for i, child in enumerate(parent.children):\n if child is self:\n try:\n return self.parent.children[i + 1]\n except IndexError:\n return None\n\n def get_previous_sibling(self):\n """\n Returns the node immediately preceding this node in this parent's\n children list. 
If this node does not have a previous sibling, it is\n None.\n """\n parent = self.parent\n if parent is None:\n return None\n\n # Can't use index(); we need to test by identity\n for i, child in enumerate(parent.children):\n if child is self:\n if i == 0:\n return None\n return self.parent.children[i - 1]\n\n def get_previous_leaf(self):\n """\n Returns the previous leaf in the parser tree.\n Returns `None` if this is the first element in the parser tree.\n """\n if self.parent is None:\n return None\n\n node = self\n while True:\n c = node.parent.children\n i = c.index(node)\n if i == 0:\n node = node.parent\n if node.parent is None:\n return None\n else:\n node = c[i - 1]\n break\n\n while True:\n try:\n node = node.children[-1]\n except AttributeError: # A Leaf doesn't have children.\n return node\n\n def get_next_leaf(self):\n """\n Returns the next leaf in the parser tree.\n Returns None if this is the last element in the parser tree.\n """\n if self.parent is None:\n return None\n\n node = self\n while True:\n c = node.parent.children\n i = c.index(node)\n if i == len(c) - 1:\n node = node.parent\n if node.parent is None:\n return None\n else:\n node = c[i + 1]\n break\n\n while True:\n try:\n node = node.children[0]\n except AttributeError: # A Leaf doesn't have children.\n return node\n\n @abstractproperty\n def start_pos(self) -> Tuple[int, int]:\n """\n Returns the starting position of the prefix as a tuple, e.g. `(3, 4)`.\n\n :return tuple of int: (line, column)\n """\n\n @abstractproperty\n def end_pos(self) -> Tuple[int, int]:\n """\n Returns the end position of the prefix as a tuple, e.g. `(3, 4)`.\n\n :return tuple of int: (line, column)\n """\n\n @abstractmethod\n def get_start_pos_of_prefix(self):\n """\n Returns the start_pos of the prefix. This means basically it returns\n the end_pos of the last prefix. 
The `get_start_pos_of_prefix()` of the\n prefix `+` in `2 + 1` would be `(1, 1)`, while the start_pos is\n `(1, 2)`.\n\n :return tuple of int: (line, column)\n """\n\n @abstractmethod\n def get_first_leaf(self):\n """\n Returns the first leaf of a node or itself if this is a leaf.\n """\n\n @abstractmethod\n def get_last_leaf(self):\n """\n Returns the last leaf of a node or itself if this is a leaf.\n """\n\n @abstractmethod\n def get_code(self, include_prefix=True):\n """\n Returns the code that was the input for the parser for this node.\n\n :param include_prefix: Removes the prefix (whitespace and comments) of\n e.g. a statement.\n """\n\n def search_ancestor(self, *node_types: str) -> 'Optional[BaseNode]':\n """\n Recursively looks at the parents of this node or leaf and returns the\n first found node that matches ``node_types``. Returns ``None`` if no\n matching node is found.\n\n :param node_types: type names that are searched for.\n """\n node = self.parent\n while node is not None:\n if node.type in node_types:\n return node\n node = node.parent\n return None\n\n def dump(self, *, indent: Optional[Union[int, str]] = 4) -> str:\n """\n Returns a formatted dump of the parser tree rooted at this node or leaf. This is\n mainly useful for debugging purposes.\n\n The ``indent`` parameter is interpreted in a similar way as :py:func:`ast.dump`.\n If ``indent`` is a non-negative integer or string, then the tree will be\n pretty-printed with that indent level. An indent level of 0, negative, or ``""``\n will only insert newlines. ``None`` selects the single line representation.\n Using a positive integer indent indents that many spaces per level. If\n ``indent`` is a string (such as ``"\\t"``), that string is used to indent each\n level.\n\n :param indent: Indentation style as described above. 
The default indentation is\n 4 spaces, which yields a pretty-printed dump.\n\n >>> import parso\n >>> print(parso.parse("lambda x, y: x + y").dump())\n Module([\n Lambda([\n Keyword('lambda', (1, 0)),\n Param([\n Name('x', (1, 7), prefix=' '),\n Operator(',', (1, 8)),\n ]),\n Param([\n Name('y', (1, 10), prefix=' '),\n ]),\n Operator(':', (1, 11)),\n PythonNode('arith_expr', [\n Name('x', (1, 13), prefix=' '),\n Operator('+', (1, 15), prefix=' '),\n Name('y', (1, 17), prefix=' '),\n ]),\n ]),\n EndMarker('', (1, 18)),\n ])\n """\n if indent is None:\n newline = False\n indent_string = ''\n elif isinstance(indent, int):\n newline = True\n indent_string = ' ' * indent\n elif isinstance(indent, str):\n newline = True\n indent_string = indent\n else:\n raise TypeError(f"expect 'indent' to be int, str or None, got {indent!r}")\n\n def _format_dump(node: NodeOrLeaf, indent: str = '', top_level: bool = True) -> str:\n result = ''\n node_type = type(node).__name__\n if isinstance(node, Leaf):\n result += f'{indent}{node_type}('\n if isinstance(node, ErrorLeaf):\n result += f'{node.token_type!r}, '\n elif isinstance(node, TypedLeaf):\n result += f'{node.type!r}, '\n result += f'{node.value!r}, {node.start_pos!r}'\n if node.prefix:\n result += f', prefix={node.prefix!r}'\n result += ')'\n elif isinstance(node, BaseNode):\n result += f'{indent}{node_type}('\n if isinstance(node, Node):\n result += f'{node.type!r}, '\n result += '['\n if newline:\n result += '\n'\n for child in node.children:\n result += _format_dump(child, indent=indent + indent_string, top_level=False)\n result += f'{indent}])'\n else: # pragma: no cover\n # We shouldn't ever reach here, unless:\n # - `NodeOrLeaf` is incorrectly subclassed else where\n # - or a node's children list contains invalid nodes or leafs\n # Both are unexpected internal errors.\n raise TypeError(f'unsupported node encountered: {node!r}')\n if not top_level:\n if newline:\n result += ',\n'\n else:\n result += ', '\n return result\n\n 
return _format_dump(self)\n\n\nclass Leaf(NodeOrLeaf):\n '''\n Leafs are basically tokens with a better API. Leafs exactly know where they\n were defined and what text precedes them.\n '''\n __slots__ = ('value', 'line', 'column', 'prefix')\n prefix: str\n\n def __init__(self, value: str, start_pos: Tuple[int, int], prefix: str = '') -> None:\n self.value = value\n '''\n :py:func:`str` The value of the current token.\n '''\n self.start_pos = start_pos\n self.prefix = prefix\n '''\n :py:func:`str` Typically a mixture of whitespace and comments. Stuff\n that is syntactically irrelevant for the syntax tree.\n '''\n self.parent: Optional[BaseNode] = None\n '''\n The parent :class:`BaseNode` of this leaf.\n '''\n\n @property\n def start_pos(self) -> Tuple[int, int]:\n return self.line, self.column\n\n @start_pos.setter\n def start_pos(self, value: Tuple[int, int]) -> None:\n self.line = value[0]\n self.column = value[1]\n\n def get_start_pos_of_prefix(self):\n previous_leaf = self.get_previous_leaf()\n if previous_leaf is None:\n lines = split_lines(self.prefix)\n # + 1 is needed because split_lines always returns at least [''].\n return self.line - len(lines) + 1, 0 # It's the first leaf.\n return previous_leaf.end_pos\n\n def get_first_leaf(self):\n return self\n\n def get_last_leaf(self):\n return self\n\n def get_code(self, include_prefix=True):\n if include_prefix:\n return self.prefix + self.value\n else:\n return self.value\n\n @property\n def end_pos(self) -> Tuple[int, int]:\n lines = split_lines(self.value)\n end_pos_line = self.line + len(lines) - 1\n # Check for multiline token\n if self.line == end_pos_line:\n end_pos_column = self.column + len(lines[-1])\n else:\n end_pos_column = len(lines[-1])\n return end_pos_line, end_pos_column\n\n def __repr__(self):\n value = self.value\n if not value:\n value = self.type\n return "<%s: %s>" % (type(self).__name__, value)\n\n\nclass TypedLeaf(Leaf):\n __slots__ = ('type',)\n\n def __init__(self, type, value, 
start_pos, prefix=''):\n super().__init__(value, start_pos, prefix)\n self.type = type\n\n\nclass BaseNode(NodeOrLeaf):\n """\n The super class for all nodes.\n A node has children, a type and possibly a parent node.\n """\n __slots__ = ('children',)\n\n def __init__(self, children: List[NodeOrLeaf]) -> None:\n self.children = children\n """\n A list of :class:`NodeOrLeaf` child nodes.\n """\n self.parent: Optional[BaseNode] = None\n '''\n The parent :class:`BaseNode` of this node.\n None if this is the root node.\n '''\n for child in children:\n child.parent = self\n\n @property\n def start_pos(self) -> Tuple[int, int]:\n return self.children[0].start_pos\n\n def get_start_pos_of_prefix(self):\n return self.children[0].get_start_pos_of_prefix()\n\n @property\n def end_pos(self) -> Tuple[int, int]:\n return self.children[-1].end_pos\n\n def _get_code_for_children(self, children, include_prefix):\n if include_prefix:\n return "".join(c.get_code() for c in children)\n else:\n first = children[0].get_code(include_prefix=False)\n return first + "".join(c.get_code() for c in children[1:])\n\n def get_code(self, include_prefix=True):\n return self._get_code_for_children(self.children, include_prefix)\n\n def get_leaf_for_position(self, position, include_prefixes=False):\n """\n Get the :py:class:`parso.tree.Leaf` at ``position``\n\n :param tuple position: A position tuple, row, column. 
Rows start from 1\n :param bool include_prefixes: If ``False``, ``None`` will be returned if ``position`` falls\n on whitespace or comments before a leaf\n :return: :py:class:`parso.tree.Leaf` at ``position``, or ``None``\n """\n def binary_search(lower, upper):\n if lower == upper:\n element = self.children[lower]\n if not include_prefixes and position < element.start_pos:\n # We're on a prefix.\n return None\n # In case we have prefixes, a leaf always matches\n try:\n return element.get_leaf_for_position(position, include_prefixes)\n except AttributeError:\n return element\n\n index = int((lower + upper) / 2)\n element = self.children[index]\n if position <= element.end_pos:\n return binary_search(lower, index)\n else:\n return binary_search(index + 1, upper)\n\n if not ((1, 0) <= position <= self.children[-1].end_pos):\n raise ValueError('Please provide a position that exists within this node.')\n return binary_search(0, len(self.children) - 1)\n\n def get_first_leaf(self):\n return self.children[0].get_first_leaf()\n\n def get_last_leaf(self):\n return self.children[-1].get_last_leaf()\n\n def __repr__(self):\n code = self.get_code().replace('\n', ' ').replace('\r', ' ').strip()\n return "<%s: %s@%s,%s>" % \\n (type(self).__name__, code, self.start_pos[0], self.start_pos[1])\n\n\nclass Node(BaseNode):\n """Concrete implementation for interior nodes."""\n __slots__ = ('type',)\n\n def __init__(self, type, children):\n super().__init__(children)\n self.type = type\n\n def __repr__(self):\n return "%s(%s, %r)" % (self.__class__.__name__, self.type, self.children)\n\n\nclass ErrorNode(BaseNode):\n """\n A node that contains valid nodes/leaves that are followed by a token that\n was invalid. 
This basically means that the leaf after this node is where\n Python would mark a syntax error.\n """\n __slots__ = ()\n type = 'error_node'\n\n\nclass ErrorLeaf(Leaf):\n """\n A leaf that is either completely invalid in a language (like `$` in Python)\n or is invalid at that position. Like the star in `1 +* 1`.\n """\n __slots__ = ('token_type',)\n type = 'error_leaf'\n\n def __init__(self, token_type, value, start_pos, prefix=''):\n super().__init__(value, start_pos, prefix)\n self.token_type = token_type\n\n def __repr__(self):\n return "<%s: %s:%s, %s>" % \\n (type(self).__name__, self.token_type, repr(self.value), self.start_pos)\n | .venv\Lib\site-packages\parso\tree.py | tree.py | Python | 16,153 | 0.95 | 0.252049 | 0.024213 | awesome-app | 555 | 2024-05-10T18:52:59.616292 | Apache-2.0 | false | d9b5161e482914e4d2be0e70fcda12c5 |
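The "Can't use index(); we need to test by identity" comments in the sibling lookups above matter as soon as two leaves compare equal. A minimal stand-in (these toy `Leaf`/`Node` classes are not parso's, and are given an `__eq__` purely to provoke the problem) shows the difference:

```python
class Leaf:
    # Minimal stand-in for parso's Leaf: two leaves with the same text
    # compare equal, which is exactly why list.index() is unusable here.
    def __init__(self, value):
        self.value = value
        self.parent = None

    def __eq__(self, other):
        return isinstance(other, Leaf) and self.value == other.value

    def __hash__(self):
        return hash(self.value)

class Node:
    def __init__(self, children):
        self.children = children
        for child in children:
            child.parent = self

def get_next_sibling(leaf):
    # Same identity scan as NodeOrLeaf.get_next_sibling.
    children = leaf.parent.children
    for i, child in enumerate(children):
        if child is leaf:
            return children[i + 1] if i + 1 < len(children) else None

first_a, plus, second_a = Leaf('a'), Leaf('+'), Leaf('a')
tree = Node([first_a, plus, second_a])

# index() finds the *first* equal leaf, so it would report '+' as the
# sibling of both 'a' leaves; the identity scan tells them apart.
assert tree.children.index(second_a) == 0
assert get_next_sibling(first_a) is plus
assert get_next_sibling(second_a) is None
```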
import re\nimport sys\nfrom ast import literal_eval\nfrom functools import total_ordering\nfrom typing import NamedTuple, Sequence, Union\n\n# The following is a list in Python that are line breaks in str.splitlines, but\n# not in Python. In Python only \r (Carriage Return, 0xD) and \n (Line Feed,\n# 0xA) are allowed to split lines.\n_NON_LINE_BREAKS = (\n '\v', # Vertical Tabulation 0xB\n '\f', # Form Feed 0xC\n '\x1C', # File Separator\n '\x1D', # Group Separator\n '\x1E', # Record Separator\n '\x85', # Next Line (NEL - Equivalent to CR+LF.\n # Used to mark end-of-line on some IBM mainframes.)\n '\u2028', # Line Separator\n '\u2029', # Paragraph Separator\n)\n\n\nclass Version(NamedTuple):\n major: int\n minor: int\n micro: int\n\n\ndef split_lines(string: str, keepends: bool = False) -> Sequence[str]:\n r"""\n Intended for Python code. In contrast to Python's :py:meth:`str.splitlines`,\n looks at form feeds and other special characters as normal text. Just\n splits ``\n`` and ``\r\n``.\n Also different: Returns ``[""]`` for an empty string input.\n\n In Python 2.7 form feeds are used as normal characters when using\n str.splitlines. However in Python 3 somewhere there was a decision to split\n also on form feeds.\n """\n if keepends:\n lst = string.splitlines(True)\n\n # We have to merge lines that were broken by form feed characters.\n merge = []\n for i, line in enumerate(lst):\n try:\n last_chr = line[-1]\n except IndexError:\n pass\n else:\n if last_chr in _NON_LINE_BREAKS:\n merge.append(i)\n\n for index in reversed(merge):\n try:\n lst[index] = lst[index] + lst[index + 1]\n del lst[index + 1]\n except IndexError:\n # index + 1 can be empty and therefore there's no need to\n # merge.\n pass\n\n # The stdlib's implementation of the end is inconsistent when calling\n # it with/without keepends. 
One time there's an empty string in the\n # end, one time there's none.\n if string.endswith('\n') or string.endswith('\r') or string == '':\n lst.append('')\n return lst\n else:\n return re.split(r'\n|\r\n|\r', string)\n\n\ndef python_bytes_to_unicode(\n source: Union[str, bytes], encoding: str = 'utf-8', errors: str = 'strict'\n) -> str:\n """\n Checks for unicode BOMs and PEP 263 encoding declarations. Then returns a\n unicode object like in :py:meth:`bytes.decode`.\n\n :param encoding: See :py:meth:`bytes.decode` documentation.\n :param errors: See :py:meth:`bytes.decode` documentation. ``errors`` can be\n ``'strict'``, ``'replace'`` or ``'ignore'``.\n """\n def detect_encoding():\n """\n For the implementation of encoding definitions in Python, look at:\n - http://www.python.org/dev/peps/pep-0263/\n - http://docs.python.org/2/reference/lexical_analysis.html#encoding-declarations\n """\n byte_mark = literal_eval(r"b'\xef\xbb\xbf'")\n if source.startswith(byte_mark):\n # UTF-8 byte-order mark\n return 'utf-8'\n\n first_two_lines = re.match(br'(?:[^\r\n]*(?:\r\n|\r|\n)){0,2}', source).group(0)\n possible_encoding = re.search(br"coding[=:]\s*([-\w.]+)",\n first_two_lines)\n if possible_encoding:\n e = possible_encoding.group(1)\n if not isinstance(e, str):\n e = str(e, 'ascii', 'replace')\n return e\n else:\n # the default if nothing else has been set -> PEP 263\n return encoding\n\n if isinstance(source, str):\n # only cast str/bytes\n return source\n\n encoding = detect_encoding()\n try:\n # Cast to unicode\n return str(source, encoding, errors)\n except LookupError:\n if errors == 'replace':\n # This is a weird case that can happen if the given encoding is not\n # a valid encoding. 
This usually shouldn't happen with provided\n # encodings, but can happen if somebody uses encoding declarations\n # like `# coding: foo-8`.\n return str(source, 'utf-8', errors)\n raise\n\n\ndef version_info() -> Version:\n """\n Returns a namedtuple of parso's version, similar to Python's\n ``sys.version_info``.\n """\n from parso import __version__\n tupl = re.findall(r'[a-z]+|\d+', __version__)\n return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)])\n\n\nclass _PythonVersionInfo(NamedTuple):\n major: int\n minor: int\n\n\n@total_ordering\nclass PythonVersionInfo(_PythonVersionInfo):\n def __gt__(self, other):\n if isinstance(other, tuple):\n if len(other) != 2:\n raise ValueError("Can only compare to tuples of length 2.")\n return (self.major, self.minor) > other\n return super().__gt__(other)\n\n def __eq__(self, other):\n if isinstance(other, tuple):\n if len(other) != 2:\n raise ValueError("Can only compare to tuples of length 2.")\n return (self.major, self.minor) == other\n return super().__eq__(other)\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n\ndef _parse_version(version) -> PythonVersionInfo:\n match = re.match(r'(\d+)(?:\.(\d{1,2})(?:\.\d+)?)?((a|b|rc)\d)?$', version)\n if match is None:\n raise ValueError('The given version is not in the right format. '\n 'Use something like "3.8" or "3".')\n\n major = int(match.group(1))\n minor = match.group(2)\n if minor is None:\n # Use the latest Python in case it's not exactly defined, because the\n # grammars are typically backwards compatible?\n if major == 2:\n minor = "7"\n elif major == 3:\n minor = "6"\n else:\n raise NotImplementedError("Sorry, no support yet for those fancy new/old versions.")\n minor = int(minor)\n return PythonVersionInfo(major, minor)\n\n\ndef parse_version_string(version: str = None) -> PythonVersionInfo:\n """\n Checks for a valid version number (e.g. 
`3.8` or `3.10.1` or `3`) and\n returns the corresponding version info, which always contains just the major\n and minor version.\n """\n if version is None:\n version = '%s.%s' % sys.version_info[:2]\n if not isinstance(version, str):\n raise TypeError('version must be a string like "3.8"')\n\n return _parse_version(version)\n | .venv\Lib\site-packages\parso\utils.py | utils.py | Python | 6,620 | 0.95 | 0.226804 | 0.121951 | awesome-app | 180 | 2024-08-08T00:13:52.831313 | GPL-3.0 | false | ac92e9c6e61307163c3e24e9b4d69a39 |
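The difference between `split_lines` and `str.splitlines` that the docstring describes can be demonstrated directly. This is a sketch of the non-`keepends` branch only (a plain `re.split`, not parso's full implementation):

```python
import re

# Stand-in for parso.utils.split_lines without keepends: only \n, \r\n
# and \r terminate a line; form feeds and similar characters stay
# inside the line.
def split_lines(string):
    return re.split(r'\n|\r\n|\r', string)

# str.splitlines also breaks on form feed (0xC); split_lines does not.
assert 'a\x0cb\nc'.splitlines() == ['a', 'b', 'c']
assert split_lines('a\x0cb\nc') == ['a\x0cb', 'c']

# Unlike str.splitlines, an empty input yields [''] and a trailing
# newline yields a trailing empty line.
assert split_lines('') == ['']
assert split_lines('a\n') == ['a', '']
```

The trailing-empty-line behavior is what makes position arithmetic over a file's last line uniform.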
import platform\n\nis_pypy = platform.python_implementation() == 'PyPy'\n | .venv\Lib\site-packages\parso\_compatibility.py | _compatibility.py | Python | 70 | 0.65 | 0 | 0 | react-lib | 254 | 2024-04-05T04:24:04.305341 | GPL-3.0 | false | 85e4b6e663ac9ea873bc8a1d4cad25cb |
r"""\nParso is a Python parser that supports error recovery and round-trip parsing\nfor different Python versions (in multiple Python versions). Parso is also able\nto list multiple syntax errors in your python file.\n\nParso has been battle-tested by jedi_. It was pulled out of jedi to be useful\nfor other projects as well.\n\nParso consists of a small API to parse Python and analyse the syntax tree.\n\n.. _jedi: https://github.com/davidhalter/jedi\n\nA simple example:\n\n>>> import parso\n>>> module = parso.parse('hello + 1', version="3.9")\n>>> expr = module.children[0]\n>>> expr\nPythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])\n>>> print(expr.get_code())\nhello + 1\n>>> name = expr.children[0]\n>>> name\n<Name: hello@1,0>\n>>> name.end_pos\n(1, 5)\n>>> expr.end_pos\n(1, 9)\n\nTo list multiple issues:\n\n>>> grammar = parso.load_grammar()\n>>> module = grammar.parse('foo +\nbar\ncontinue')\n>>> error1, error2 = grammar.iter_errors(module)\n>>> error1.message\n'SyntaxError: invalid syntax'\n>>> error2.message\n"SyntaxError: 'continue' not properly in loop"\n"""\n\nfrom parso.parser import ParserSyntaxError\nfrom parso.grammar import Grammar, load_grammar\nfrom parso.utils import split_lines, python_bytes_to_unicode\n\n\n__version__ = '0.8.4'\n\n\ndef parse(code=None, **kwargs):\n """\n A utility function to avoid loading grammars.\n Params are documented in :py:meth:`parso.Grammar.parse`.\n\n :param str version: The version used by :py:func:`parso.load_grammar`.\n """\n version = kwargs.pop('version', None)\n grammar = load_grammar(version=version)\n return grammar.parse(code, **kwargs)\n | .venv\Lib\site-packages\parso\__init__.py | __init__.py | Python | 1,607 | 0.95 | 0.068966 | 0 | react-lib | 124 | 2023-07-21T13:50:19.925788 | Apache-2.0 | false | 99953f122b9e183cac3a95ad1f424995 |
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.\n# Licensed to PSF under a Contributor Agreement.\n\n# Modifications:\n# Copyright David Halter and Contributors\n# Modifications are dual-licensed: MIT and PSF.\n\n"""\nThis module defines the data structures used to represent a grammar.\n\nSpecifying grammars in pgen is possible with this grammar::\n\n grammar: (NEWLINE | rule)* ENDMARKER\n rule: NAME ':' rhs NEWLINE\n rhs: items ('|' items)*\n items: item+\n item: '[' rhs ']' | atom ['+' | '*']\n atom: '(' rhs ')' | NAME | STRING\n\nThis grammar is self-referencing.\n\nThis parser generator (pgen2) was created by Guido van Rossum and used for\nlib2to3. Most of the code has been refactored to make it more Pythonic. Since\nthis was a "copy" of the CPython parser generator "pgen", some work was needed\nto make it more readable. It should also be slightly faster than the original\npgen2, because we made some optimizations.\n"""\n\nfrom ast import literal_eval\nfrom typing import TypeVar, Generic, Mapping, Sequence, Set, Union\n\nfrom parso.pgen2.grammar_parser import GrammarParser, NFAState\n\n_TokenTypeT = TypeVar("_TokenTypeT")\n\n\nclass Grammar(Generic[_TokenTypeT]):\n """\n Once initialized, this class supplies the grammar tables for the\n parsing engine implemented by parse.py. 
The parsing engine\n accesses the instance variables directly.\n\n The only important parts of this parser are the dfas and the transitions\n between dfas.\n """\n\n def __init__(self,\n start_nonterminal: str,\n rule_to_dfas: Mapping[str, Sequence['DFAState[_TokenTypeT]']],\n reserved_syntax_strings: Mapping[str, 'ReservedString']):\n self.nonterminal_to_dfas = rule_to_dfas\n self.reserved_syntax_strings = reserved_syntax_strings\n self.start_nonterminal = start_nonterminal\n\n\nclass DFAPlan:\n """\n Plans are used for the parser to create stack nodes and do the proper\n DFA state transitions.\n """\n def __init__(self, next_dfa: 'DFAState', dfa_pushes: Sequence['DFAState'] = []):\n self.next_dfa = next_dfa\n self.dfa_pushes = dfa_pushes\n\n def __repr__(self):\n return '%s(%s, %s)' % (self.__class__.__name__, self.next_dfa, self.dfa_pushes)\n\n\nclass DFAState(Generic[_TokenTypeT]):\n """\n The DFAState object is the core class for pretty much anything. DFAStates\n are the vertices of an ordered graph while arcs and transitions are the\n edges.\n\n Arcs are the initial edges, where most DFAStates are not connected and\n transitions are then calculated to connect the DFA state machines that have\n different nonterminals.\n """\n def __init__(self, from_rule: str, nfa_set: Set[NFAState], final: NFAState):\n assert isinstance(nfa_set, set)\n assert isinstance(next(iter(nfa_set)), NFAState)\n assert isinstance(final, NFAState)\n self.from_rule = from_rule\n self.nfa_set = nfa_set\n # map from terminals/nonterminals to DFAState\n self.arcs: Mapping[str, DFAState] = {}\n # In an intermediary step we set these nonterminal arcs (which have the\n # same structure as arcs). These don't contain terminals anymore.\n self.nonterminal_arcs: Mapping[str, DFAState] = {}\n\n # Transitions are basically the only thing that the parser is using\n # with is_final. 
Everything else is purely here to create a parser.\n self.transitions: Mapping[Union[_TokenTypeT, ReservedString], DFAPlan] = {}\n self.is_final = final in nfa_set\n\n def add_arc(self, next_, label):\n assert isinstance(label, str)\n assert label not in self.arcs\n assert isinstance(next_, DFAState)\n self.arcs[label] = next_\n\n def unifystate(self, old, new):\n for label, next_ in self.arcs.items():\n if next_ is old:\n self.arcs[label] = new\n\n def __eq__(self, other):\n # Equality test -- ignore the nfa_set instance variable\n assert isinstance(other, DFAState)\n if self.is_final != other.is_final:\n return False\n # Can't just return self.arcs == other.arcs, because that\n # would invoke this method recursively, with cycles...\n if len(self.arcs) != len(other.arcs):\n return False\n for label, next_ in self.arcs.items():\n if next_ is not other.arcs.get(label):\n return False\n return True\n\n def __repr__(self):\n return '<%s: %s is_final=%s>' % (\n self.__class__.__name__, self.from_rule, self.is_final\n )\n\n\nclass ReservedString:\n """\n Most grammars will have certain keywords and operators that are mentioned\n in the grammar as strings (e.g. "if") and not token types (e.g. 
NUMBER).\n This class basically is the former.\n """\n\n def __init__(self, value: str):\n self.value = value\n\n def __repr__(self):\n return '%s(%s)' % (self.__class__.__name__, self.value)\n\n\ndef _simplify_dfas(dfas):\n """\n This is not theoretically optimal, but works well enough.\n Algorithm: repeatedly look for two states that have the same\n set of arcs (same labels pointing to the same nodes) and\n unify them, until things stop changing.\n\n dfas is a list of DFAState instances\n """\n changes = True\n while changes:\n changes = False\n for i, state_i in enumerate(dfas):\n for j in range(i + 1, len(dfas)):\n state_j = dfas[j]\n if state_i == state_j:\n del dfas[j]\n for state in dfas:\n state.unifystate(state_j, state_i)\n changes = True\n break\n\n\ndef _make_dfas(start, finish):\n """\n Uses the powerset construction algorithm to create DFA states from sets of\n NFA states.\n\n Also does state reduction if some states are not needed.\n """\n # To turn an NFA into a DFA, we define the states of the DFA\n # to correspond to *sets* of states of the NFA. Then do some\n # state reduction.\n assert isinstance(start, NFAState)\n assert isinstance(finish, NFAState)\n\n def addclosure(nfa_state, base_nfa_set):\n assert isinstance(nfa_state, NFAState)\n if nfa_state in base_nfa_set:\n return\n base_nfa_set.add(nfa_state)\n for nfa_arc in nfa_state.arcs:\n if nfa_arc.nonterminal_or_string is None:\n addclosure(nfa_arc.next, base_nfa_set)\n\n base_nfa_set = set()\n addclosure(start, base_nfa_set)\n states = [DFAState(start.from_rule, base_nfa_set, finish)]\n for state in states: # NB states grows while we're iterating\n arcs = {}\n # Find state transitions and store them in arcs.\n for nfa_state in state.nfa_set:\n for nfa_arc in nfa_state.arcs:\n if nfa_arc.nonterminal_or_string is not None:\n nfa_set = arcs.setdefault(nfa_arc.nonterminal_or_string, set())\n addclosure(nfa_arc.next, nfa_set)\n\n # Now create the dfa's with no None's in arcs anymore. 
All Nones have\n # been eliminated and state transitions (arcs) are properly defined, we\n # just need to create the dfa's.\n for nonterminal_or_string, nfa_set in arcs.items():\n for nested_state in states:\n if nested_state.nfa_set == nfa_set:\n # The DFA state already exists for this rule.\n break\n else:\n nested_state = DFAState(start.from_rule, nfa_set, finish)\n states.append(nested_state)\n\n state.add_arc(nested_state, nonterminal_or_string)\n return states # List of DFAState instances; first one is start\n\n\ndef _dump_nfa(start, finish):\n print("Dump of NFA for", start.from_rule)\n todo = [start]\n for i, state in enumerate(todo):\n print(" State", i, state is finish and "(final)" or "")\n for arc in state.arcs:\n label, next_ = arc.nonterminal_or_string, arc.next\n if next_ in todo:\n j = todo.index(next_)\n else:\n j = len(todo)\n todo.append(next_)\n if label is None:\n print(" -> %d" % j)\n else:\n print(" %s -> %d" % (label, j))\n\n\ndef _dump_dfas(dfas):\n print("Dump of DFA for", dfas[0].from_rule)\n for i, state in enumerate(dfas):\n print(" State", i, state.is_final and "(final)" or "")\n for nonterminal, next_ in state.arcs.items():\n print(" %s -> %d" % (nonterminal, dfas.index(next_)))\n\n\ndef generate_grammar(bnf_grammar: str, token_namespace) -> Grammar:\n """\n ``bnf_text`` is a grammar in extended BNF (using * for repetition, + for\n at-least-once repetition, [] for optional parts, | for alternatives and ()\n for grouping).\n\n It's not EBNF according to ISO/IEC 14977. 
It's a dialect Python uses in its\n own parser.\n """\n rule_to_dfas = {}\n start_nonterminal = None\n for nfa_a, nfa_z in GrammarParser(bnf_grammar).parse():\n # _dump_nfa(nfa_a, nfa_z)\n dfas = _make_dfas(nfa_a, nfa_z)\n # _dump_dfas(dfas)\n # oldlen = len(dfas)\n _simplify_dfas(dfas)\n # newlen = len(dfas)\n rule_to_dfas[nfa_a.from_rule] = dfas\n # print(nfa_a.from_rule, oldlen, newlen)\n\n if start_nonterminal is None:\n start_nonterminal = nfa_a.from_rule\n\n reserved_strings: Mapping[str, ReservedString] = {}\n for nonterminal, dfas in rule_to_dfas.items():\n for dfa_state in dfas:\n for terminal_or_nonterminal, next_dfa in dfa_state.arcs.items():\n if terminal_or_nonterminal in rule_to_dfas:\n dfa_state.nonterminal_arcs[terminal_or_nonterminal] = next_dfa\n else:\n transition = _make_transition(\n token_namespace,\n reserved_strings,\n terminal_or_nonterminal\n )\n dfa_state.transitions[transition] = DFAPlan(next_dfa)\n\n _calculate_tree_traversal(rule_to_dfas)\n return Grammar(start_nonterminal, rule_to_dfas, reserved_strings) # type: ignore[arg-type]\n\n\ndef _make_transition(token_namespace, reserved_syntax_strings, label):\n """\n Creates a reserved string ("if", "for", "*", ...) or returns the token type\n (NUMBER, STRING, ...) for a given grammar terminal.\n """\n if label[0].isalpha():\n # A named token (e.g. 
NAME, NUMBER, STRING)\n return getattr(token_namespace, label)\n else:\n # Either a keyword or an operator\n assert label[0] in ('"', "'"), label\n assert not label.startswith('"""') and not label.startswith("'''")\n value = literal_eval(label)\n try:\n return reserved_syntax_strings[value]\n except KeyError:\n r = reserved_syntax_strings[value] = ReservedString(value)\n return r\n\n\ndef _calculate_tree_traversal(nonterminal_to_dfas):\n """\n By this point we know how dfas can move around within a stack node, but we\n don't know how we can add a new stack node (nonterminal transitions).\n """\n # Map from grammar rule (nonterminal) name to a set of tokens.\n first_plans = {}\n\n nonterminals = list(nonterminal_to_dfas.keys())\n nonterminals.sort()\n for nonterminal in nonterminals:\n if nonterminal not in first_plans:\n _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal)\n\n # Now that we have calculated the first terminals, we are sure that\n # there is no left recursion.\n\n for dfas in nonterminal_to_dfas.values():\n for dfa_state in dfas:\n transitions = dfa_state.transitions\n for nonterminal, next_dfa in dfa_state.nonterminal_arcs.items():\n for transition, pushes in first_plans[nonterminal].items():\n if transition in transitions:\n prev_plan = transitions[transition]\n # Make sure these are sorted so that error messages are\n # at least deterministic\n choices = sorted([\n (\n prev_plan.dfa_pushes[0].from_rule\n if prev_plan.dfa_pushes\n else prev_plan.next_dfa.from_rule\n ),\n (\n pushes[0].from_rule\n if pushes else next_dfa.from_rule\n ),\n ])\n raise ValueError(\n "Rule %s is ambiguous; given a %s token, we "\n "can't determine if we should evaluate %s or %s."\n % (\n (\n dfa_state.from_rule,\n transition,\n ) + tuple(choices)\n )\n )\n transitions[transition] = DFAPlan(next_dfa, pushes)\n\n\ndef _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal):\n """\n Calculates the first plan in the first_plans dictionary for 
every given\n nonterminal. This is going to be used to know when to create stack nodes.\n """\n dfas = nonterminal_to_dfas[nonterminal]\n new_first_plans = {}\n first_plans[nonterminal] = None # dummy to detect left recursion\n # We only need to check the first dfa. All the following ones are not\n # interesting to find first terminals.\n state = dfas[0]\n for transition, next_ in state.transitions.items():\n # It's a string. We have finally found a possible first token.\n new_first_plans[transition] = [next_.next_dfa]\n\n for nonterminal2, next_ in state.nonterminal_arcs.items():\n # It's a nonterminal and we have either a left recursion issue\n # in the grammar or we have to recurse.\n try:\n first_plans2 = first_plans[nonterminal2]\n except KeyError:\n first_plans2 = _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal2)\n else:\n if first_plans2 is None:\n raise ValueError("left recursion for rule %r" % nonterminal)\n\n for t, pushes in first_plans2.items():\n new_first_plans[t] = [next_] + pushes\n\n first_plans[nonterminal] = new_first_plans\n return new_first_plans\n | .venv\Lib\site-packages\parso\pgen2\generator.py | generator.py | Python | 14,580 | 0.95 | 0.256545 | 0.118012 | node-utils | 286 | 2023-08-26T01:59:19.945636 | Apache-2.0 | false | f3a479823a75364cf6524a3cb51de3ad |
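The epsilon-closure step that `_make_dfas` builds on can be sketched on a toy NFA. The integer-state dict below is an assumption for illustration (parso uses `NFAState` objects, not ints), but the closure logic mirrors `addclosure()` above:

```python
# Toy NFA: integer states with (label, next) arcs, where a label of
# None is an epsilon transition -- the same shape addclosure() walks.
nfa = {
    0: [(None, 1), ('a', 2)],
    1: [('b', 2)],
    2: [],
}

def addclosure(state, base_nfa_set):
    # Mirror of addclosure() in _make_dfas: collect every state
    # reachable through epsilon arcs only.
    if state in base_nfa_set:
        return
    base_nfa_set.add(state)
    for label, next_ in nfa[state]:
        if label is None:
            addclosure(next_, base_nfa_set)

start_set = set()
addclosure(0, start_set)
assert start_set == {0, 1}          # 1 is reachable via the epsilon arc

# One step of the powerset construction: group the labelled arcs of the
# whole DFA state, closing over epsilons on each target.
arcs = {}
for state in start_set:
    for label, next_ in nfa[state]:
        if label is not None:
            addclosure(next_, arcs.setdefault(label, set()))
assert arcs == {'a': {2}, 'b': {2}}
```

Each resulting set of NFA states becomes one `DFAState`, exactly as in the `states` loop of `_make_dfas`.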
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.\n# Licensed to PSF under a Contributor Agreement.\n\n# Modifications:\n# Copyright David Halter and Contributors\n# Modifications are dual-licensed: MIT and PSF.\nfrom typing import Optional, Iterator, Tuple, List\n\nfrom parso.python.tokenize import tokenize\nfrom parso.utils import parse_version_string\nfrom parso.python.token import PythonTokenTypes\n\n\nclass NFAArc:\n def __init__(self, next_: 'NFAState', nonterminal_or_string: Optional[str]):\n self.next: NFAState = next_\n self.nonterminal_or_string: Optional[str] = nonterminal_or_string\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.nonterminal_or_string)\n\n\nclass NFAState:\n def __init__(self, from_rule: str):\n self.from_rule: str = from_rule\n self.arcs: List[NFAArc] = []\n\n def add_arc(self, next_, nonterminal_or_string=None):\n assert nonterminal_or_string is None or isinstance(nonterminal_or_string, str)\n assert isinstance(next_, NFAState)\n self.arcs.append(NFAArc(next_, nonterminal_or_string))\n\n def __repr__(self):\n return '<%s: from %s>' % (self.__class__.__name__, self.from_rule)\n\n\nclass GrammarParser:\n """\n The parser for Python grammar files.\n """\n def __init__(self, bnf_grammar: str):\n self._bnf_grammar = bnf_grammar\n self.generator = tokenize(\n bnf_grammar,\n version_info=parse_version_string('3.9')\n )\n self._gettoken() # Initialize lookahead\n\n def parse(self) -> Iterator[Tuple[NFAState, NFAState]]:\n # grammar: (NEWLINE | rule)* ENDMARKER\n while self.type != PythonTokenTypes.ENDMARKER:\n while self.type == PythonTokenTypes.NEWLINE:\n self._gettoken()\n\n # rule: NAME ':' rhs NEWLINE\n self._current_rule_name = self._expect(PythonTokenTypes.NAME)\n self._expect(PythonTokenTypes.OP, ':')\n\n a, z = self._parse_rhs()\n self._expect(PythonTokenTypes.NEWLINE)\n\n yield a, z\n\n def _parse_rhs(self):\n # rhs: items ('|' items)*\n a, z = self._parse_items()\n if self.value != 
"|":\n return a, z\n else:\n aa = NFAState(self._current_rule_name)\n zz = NFAState(self._current_rule_name)\n while True:\n # Add the possibility to go into the state of a and come back\n # to finish.\n aa.add_arc(a)\n z.add_arc(zz)\n if self.value != "|":\n break\n\n self._gettoken()\n a, z = self._parse_items()\n return aa, zz\n\n def _parse_items(self):\n # items: item+\n a, b = self._parse_item()\n while self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING) \\n or self.value in ('(', '['):\n c, d = self._parse_item()\n # Need to end on the next item.\n b.add_arc(c)\n b = d\n return a, b\n\n def _parse_item(self):\n # item: '[' rhs ']' | atom ['+' | '*']\n if self.value == "[":\n self._gettoken()\n a, z = self._parse_rhs()\n self._expect(PythonTokenTypes.OP, ']')\n # Make it also possible that there is no token and change the\n # state.\n a.add_arc(z)\n return a, z\n else:\n a, z = self._parse_atom()\n value = self.value\n if value not in ("+", "*"):\n return a, z\n self._gettoken()\n # Make it clear that we can go back to the old state and repeat.\n z.add_arc(a)\n if value == "+":\n return a, z\n else:\n # The end state is the same as the beginning, nothing must\n # change.\n return a, a\n\n def _parse_atom(self):\n # atom: '(' rhs ')' | NAME | STRING\n if self.value == "(":\n self._gettoken()\n a, z = self._parse_rhs()\n self._expect(PythonTokenTypes.OP, ')')\n return a, z\n elif self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING):\n a = NFAState(self._current_rule_name)\n z = NFAState(self._current_rule_name)\n # Make it clear that the state transition requires that value.\n a.add_arc(z, self.value)\n self._gettoken()\n return a, z\n else:\n self._raise_error("expected (...) 
or NAME or STRING, got %s/%s",\n self.type, self.value)\n\n def _expect(self, type_, value=None):\n if self.type != type_:\n self._raise_error("expected %s, got %s [%s]",\n type_, self.type, self.value)\n if value is not None and self.value != value:\n self._raise_error("expected %s, got %s", value, self.value)\n value = self.value\n self._gettoken()\n return value\n\n def _gettoken(self):\n tup = next(self.generator)\n self.type, self.value, self.begin, prefix = tup\n\n def _raise_error(self, msg, *args):\n if args:\n try:\n msg = msg % args\n except:\n msg = " ".join([msg] + list(map(str, args)))\n line = self._bnf_grammar.splitlines()[self.begin[0] - 1]\n raise SyntaxError(msg, ('<grammar>', self.begin[0],\n self.begin[1], line))\n | .venv\Lib\site-packages\parso\pgen2\grammar_parser.py | grammar_parser.py | Python | 5,515 | 0.95 | 0.2 | 0.145985 | react-lib | 273 | 2024-02-14T03:52:58.336779 | MIT | false | d3e30e5734f60b952ebd641df96aab97 |
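`GrammarParser` in the row above turns each EBNF construct into a small NFA fragment: `|` adds parallel epsilon arcs, `*` loops back and reuses the start state as the end, `+` loops back but keeps a distinct end, and `[...]` adds an epsilon arc straight from start to end. A toy version of those constructions — the `match()` simulator is my addition for demonstration; parso never simulates the NFA directly but compiles it into DFAs in `pgen2.generator`:

```python
# Epsilon-NFA sketch in the spirit of parso's NFAState/NFAArc: an arc
# labelled None is an epsilon transition; a string label consumes one token.

class NFAState:
    def __init__(self):
        self.arcs = []  # list of (label or None, next_state)

    def add_arc(self, next_, label=None):
        self.arcs.append((label, next_))

def epsilon_closure(states):
    # All states reachable from `states` without consuming a token.
    stack, seen = list(states), set(states)
    while stack:
        for label, nxt in stack.pop().arcs:
            if label is None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def match(start, end, tokens):
    current = epsilon_closure({start})
    for token in tokens:
        moved = {nxt for state in current
                 for label, nxt in state.arcs if label == token}
        current = epsilon_closure(moved)
    return end in current

# Fragment for the toy rule  a b* [c] :
# 'b*' loops back with the end state equal to the start state (the '*'
# branch of _parse_item), and '[c]' gets an extra epsilon arc (the '['
# branch, which "makes it also possible that there is no token").
s0, s1, s2 = NFAState(), NFAState(), NFAState()
s0.add_arc(s1, 'a')
s1.add_arc(s1, 'b')   # b*
s1.add_arc(s2, 'c')   # [c] ...
s1.add_arc(s2)        # ... or skip it entirely (epsilon)

print(match(s0, s2, ['a']))            # True (c skipped)
print(match(s0, s2, ['a', 'b', 'c']))  # True
print(match(s0, s2, ['c']))            # False
```

The loop-back arcs are exactly why the generator must later check for ambiguity: two fragments can end up sharing a first token.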
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.\n# Licensed to PSF under a Contributor Agreement.\n\n# Modifications:\n# Copyright 2006 Google, Inc. All Rights Reserved.\n# Licensed to PSF under a Contributor Agreement.\n# Copyright 2014 David Halter and Contributors\n# Modifications are dual-licensed: MIT and PSF.\n\nfrom parso.pgen2.generator import generate_grammar\n | .venv\Lib\site-packages\parso\pgen2\__init__.py | __init__.py | Python | 382 | 0.95 | 0 | 0.875 | node-utils | 846 | 2025-05-19T14:02:41.352124 | MIT | false | dbb91eb5576eb7192da6ab9eaa669c55 |
\n\n | .venv\Lib\site-packages\parso\pgen2\__pycache__\generator.cpython-313.pyc | generator.cpython-313.pyc | Other | 15,442 | 0.95 | 0.146497 | 0.013793 | vue-tools | 562 | 2023-08-16T17:40:11.372439 | BSD-3-Clause | false | 439f9b85c50409880f439c3a6d83515e |
\n\n | .venv\Lib\site-packages\parso\pgen2\__pycache__\grammar_parser.cpython-313.pyc | grammar_parser.cpython-313.pyc | Other | 8,437 | 0.8 | 0.016949 | 0 | vue-tools | 458 | 2024-01-22T00:09:11.353261 | Apache-2.0 | false | 8631979b29772a10e42d4dfc8e14d389 |
\n\n | .venv\Lib\site-packages\parso\pgen2\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 253 | 0.7 | 0 | 0 | node-utils | 876 | 2024-02-11T18:37:46.630684 | MIT | false | 0e7b547aeb01fc7281dcd8d83f4c4169 |
"""\nThe diff parser is trying to be a faster version of the normal parser by trying\nto reuse the nodes of a previous pass over the same file. This is also called\nincremental parsing in parser literature. The difference is mostly that with\nincremental parsing you get a range that needs to be reparsed. Here we\ncalculate that range ourselves by using difflib. After that it's essentially\nincremental parsing.\n\nThe biggest issue of this approach is that we reuse nodes in a mutable way. The\nintial design and idea is quite problematic for this parser, but it is also\npretty fast. Measurements showed that just copying nodes in Python is simply\nquite a bit slower (especially for big files >3 kLOC). Therefore we did not\nwant to get rid of the mutable nodes, since this is usually not an issue.\n\nThis is by far the hardest software I ever wrote, exactly because the initial\ndesign is crappy. When you have to account for a lot of mutable state, it\ncreates a ton of issues that you would otherwise not have. This file took\nprobably 3-6 months to write, which is insane for a parser.\n\nThere is a fuzzer in that helps test this whole thing. Please use it if you\nmake changes here. If you run the fuzzer like::\n\n test/fuzz_diff_parser.py random -n 100000\n\nyou can be pretty sure that everything is still fine. 
I sometimes run the\nfuzzer up to 24h to make sure everything is still ok.\n"""\nimport re\nimport difflib\nfrom collections import namedtuple\nimport logging\n\nfrom parso.utils import split_lines\nfrom parso.python.parser import Parser\nfrom parso.python.tree import EndMarker\nfrom parso.python.tokenize import PythonToken, BOM_UTF8_STRING\nfrom parso.python.token import PythonTokenTypes\n\nLOG = logging.getLogger(__name__)\nDEBUG_DIFF_PARSER = False\n\n_INDENTATION_TOKENS = 'INDENT', 'ERROR_DEDENT', 'DEDENT'\n\nNEWLINE = PythonTokenTypes.NEWLINE\nDEDENT = PythonTokenTypes.DEDENT\nNAME = PythonTokenTypes.NAME\nERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT\nENDMARKER = PythonTokenTypes.ENDMARKER\n\n\ndef _is_indentation_error_leaf(node):\n return node.type == 'error_leaf' and node.token_type in _INDENTATION_TOKENS\n\n\ndef _get_previous_leaf_if_indentation(leaf):\n while leaf and _is_indentation_error_leaf(leaf):\n leaf = leaf.get_previous_leaf()\n return leaf\n\n\ndef _get_next_leaf_if_indentation(leaf):\n while leaf and _is_indentation_error_leaf(leaf):\n leaf = leaf.get_next_leaf()\n return leaf\n\n\ndef _get_suite_indentation(tree_node):\n return _get_indentation(tree_node.children[1])\n\n\ndef _get_indentation(tree_node):\n return tree_node.start_pos[1]\n\n\ndef _assert_valid_graph(node):\n """\n Checks if the parent/children relationship is correct.\n\n This is a check that only runs during debugging/testing.\n """\n try:\n children = node.children\n except AttributeError:\n # Ignore INDENT is necessary, because indent/dedent tokens don't\n # contain value/prefix and are just around, because of the tokenizer.\n if node.type == 'error_leaf' and node.token_type in _INDENTATION_TOKENS:\n assert not node.value\n assert not node.prefix\n return\n\n # Calculate the content between two start positions.\n previous_leaf = _get_previous_leaf_if_indentation(node.get_previous_leaf())\n if previous_leaf is None:\n content = node.prefix\n previous_start_pos = 1, 0\n else:\n 
assert previous_leaf.end_pos <= node.start_pos, \\n (previous_leaf, node)\n\n content = previous_leaf.value + node.prefix\n previous_start_pos = previous_leaf.start_pos\n\n if '\n' in content or '\r' in content:\n splitted = split_lines(content)\n line = previous_start_pos[0] + len(splitted) - 1\n actual = line, len(splitted[-1])\n else:\n actual = previous_start_pos[0], previous_start_pos[1] + len(content)\n if content.startswith(BOM_UTF8_STRING) \\n and node.get_start_pos_of_prefix() == (1, 0):\n # Remove the byte order mark\n actual = actual[0], actual[1] - 1\n\n assert node.start_pos == actual, (node.start_pos, actual)\n else:\n for child in children:\n assert child.parent == node, (node, child)\n _assert_valid_graph(child)\n\n\ndef _assert_nodes_are_equal(node1, node2):\n try:\n children1 = node1.children\n except AttributeError:\n assert not hasattr(node2, 'children'), (node1, node2)\n assert node1.value == node2.value, (node1, node2)\n assert node1.type == node2.type, (node1, node2)\n assert node1.prefix == node2.prefix, (node1, node2)\n assert node1.start_pos == node2.start_pos, (node1, node2)\n return\n else:\n try:\n children2 = node2.children\n except AttributeError:\n assert False, (node1, node2)\n for n1, n2 in zip(children1, children2):\n _assert_nodes_are_equal(n1, n2)\n assert len(children1) == len(children2), '\n' + repr(children1) + '\n' + repr(children2)\n\n\ndef _get_debug_error_message(module, old_lines, new_lines):\n current_lines = split_lines(module.get_code(), keepends=True)\n current_diff = difflib.unified_diff(new_lines, current_lines)\n old_new_diff = difflib.unified_diff(old_lines, new_lines)\n import parso\n return (\n "There's an issue with the diff parser. 
Please "\n "report (parso v%s) - Old/New:\n%s\nActual Diff (May be empty):\n%s"\n % (parso.__version__, ''.join(old_new_diff), ''.join(current_diff))\n )\n\n\ndef _get_last_line(node_or_leaf):\n last_leaf = node_or_leaf.get_last_leaf()\n if _ends_with_newline(last_leaf):\n return last_leaf.start_pos[0]\n else:\n n = last_leaf.get_next_leaf()\n if n.type == 'endmarker' and '\n' in n.prefix:\n # This is a very special case and has to do with error recovery in\n # Parso. The problem is basically that there's no newline leaf at\n # the end sometimes (it's required in the grammar, but not needed\n # actually before endmarker, CPython just adds a newline to make\n # source code pass the parser, to account for that Parso error\n # recovery allows small_stmt instead of simple_stmt).\n return last_leaf.end_pos[0] + 1\n return last_leaf.end_pos[0]\n\n\ndef _skip_dedent_error_leaves(leaf):\n while leaf is not None and leaf.type == 'error_leaf' and leaf.token_type == 'DEDENT':\n leaf = leaf.get_previous_leaf()\n return leaf\n\n\ndef _ends_with_newline(leaf, suffix=''):\n leaf = _skip_dedent_error_leaves(leaf)\n\n if leaf.type == 'error_leaf':\n typ = leaf.token_type.lower()\n else:\n typ = leaf.type\n\n return typ == 'newline' or suffix.endswith('\n') or suffix.endswith('\r')\n\n\ndef _flows_finished(pgen_grammar, stack):\n """\n if, while, for and try might not be finished, because another part might\n still be parsed.\n """\n for stack_node in stack:\n if stack_node.nonterminal in ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt'):\n return False\n return True\n\n\ndef _func_or_class_has_suite(node):\n if node.type == 'decorated':\n node = node.children[-1]\n if node.type in ('async_funcdef', 'async_stmt'):\n node = node.children[-1]\n return node.type in ('classdef', 'funcdef') and node.children[-1].type == 'suite'\n\n\ndef _suite_or_file_input_is_valid(pgen_grammar, stack):\n if not _flows_finished(pgen_grammar, stack):\n return False\n\n for stack_node in 
reversed(stack):\n if stack_node.nonterminal == 'decorator':\n # A decorator is only valid with the upcoming function.\n return False\n\n if stack_node.nonterminal == 'suite':\n # If only newline is in the suite, the suite is not valid, yet.\n return len(stack_node.nodes) > 1\n # Not reaching a suite means that we're dealing with file_input levels\n # where there's no need for a valid statement in it. It can also be empty.\n return True\n\n\ndef _is_flow_node(node):\n if node.type == 'async_stmt':\n node = node.children[1]\n try:\n value = node.children[0].value\n except AttributeError:\n return False\n return value in ('if', 'for', 'while', 'try', 'with')\n\n\nclass _PositionUpdatingFinished(Exception):\n pass\n\n\ndef _update_positions(nodes, line_offset, last_leaf):\n for node in nodes:\n try:\n children = node.children\n except AttributeError:\n # Is a leaf\n node.line += line_offset\n if node is last_leaf:\n raise _PositionUpdatingFinished\n else:\n _update_positions(children, line_offset, last_leaf)\n\n\nclass DiffParser:\n """\n An advanced form of parsing a file faster. Unfortunately comes with huge\n side effects. It changes the given module.\n """\n def __init__(self, pgen_grammar, tokenizer, module):\n self._pgen_grammar = pgen_grammar\n self._tokenizer = tokenizer\n self._module = module\n\n def _reset(self):\n self._copy_count = 0\n self._parser_count = 0\n\n self._nodes_tree = _NodesTree(self._module)\n\n def update(self, old_lines, new_lines):\n '''\n The algorithm works as follows:\n\n Equal:\n - Assure that the start is a newline, otherwise parse until we get\n one.\n - Copy from parsed_until_line + 1 to max(i2 + 1)\n - Make sure that the indentation is correct (e.g. 
add DEDENT)\n - Add old and change positions\n Insert:\n - Parse from parsed_until_line + 1 to min(j2 + 1), hopefully not\n much more.\n\n Returns the new module node.\n '''\n LOG.debug('diff parser start')\n # Reset the used names cache so they get regenerated.\n self._module._used_names = None\n\n self._parser_lines_new = new_lines\n\n self._reset()\n\n line_length = len(new_lines)\n sm = difflib.SequenceMatcher(None, old_lines, self._parser_lines_new)\n opcodes = sm.get_opcodes()\n LOG.debug('line_lengths old: %s; new: %s' % (len(old_lines), line_length))\n\n for operation, i1, i2, j1, j2 in opcodes:\n LOG.debug('-> code[%s] old[%s:%s] new[%s:%s]',\n operation, i1 + 1, i2, j1 + 1, j2)\n\n if j2 == line_length and new_lines[-1] == '':\n # The empty part after the last newline is not relevant.\n j2 -= 1\n\n if operation == 'equal':\n line_offset = j1 - i1\n self._copy_from_old_parser(line_offset, i1 + 1, i2, j2)\n elif operation == 'replace':\n self._parse(until_line=j2)\n elif operation == 'insert':\n self._parse(until_line=j2)\n else:\n assert operation == 'delete'\n\n # With this action all change will finally be applied and we have a\n # changed module.\n self._nodes_tree.close()\n\n if DEBUG_DIFF_PARSER:\n # If there is reasonable suspicion that the diff parser is not\n # behaving well, this should be enabled.\n try:\n code = ''.join(new_lines)\n assert self._module.get_code() == code\n _assert_valid_graph(self._module)\n without_diff_parser_module = Parser(\n self._pgen_grammar,\n error_recovery=True\n ).parse(self._tokenizer(new_lines))\n _assert_nodes_are_equal(self._module, without_diff_parser_module)\n except AssertionError:\n print(_get_debug_error_message(self._module, old_lines, new_lines))\n raise\n\n last_pos = self._module.end_pos[0]\n if last_pos != line_length:\n raise Exception(\n ('(%s != %s) ' % (last_pos, line_length))\n + _get_debug_error_message(self._module, old_lines, new_lines)\n )\n LOG.debug('diff parser end')\n return self._module\n\n 
def _enabled_debugging(self, old_lines, lines_new):\n if self._module.get_code() != ''.join(lines_new):\n LOG.warning('parser issue:\n%s\n%s', ''.join(old_lines), ''.join(lines_new))\n\n def _copy_from_old_parser(self, line_offset, start_line_old, until_line_old, until_line_new):\n last_until_line = -1\n while until_line_new > self._nodes_tree.parsed_until_line:\n parsed_until_line_old = self._nodes_tree.parsed_until_line - line_offset\n line_stmt = self._get_old_line_stmt(parsed_until_line_old + 1)\n if line_stmt is None:\n # Parse 1 line at least. We don't need more, because we just\n # want to get into a state where the old parser has statements\n # again that can be copied (e.g. not lines within parentheses).\n self._parse(self._nodes_tree.parsed_until_line + 1)\n else:\n p_children = line_stmt.parent.children\n index = p_children.index(line_stmt)\n\n if start_line_old == 1 \\n and p_children[0].get_first_leaf().prefix.startswith(BOM_UTF8_STRING):\n # If there's a BOM in the beginning, just reparse. It's too\n # complicated to account for it otherwise.\n copied_nodes = []\n else:\n from_ = self._nodes_tree.parsed_until_line + 1\n copied_nodes = self._nodes_tree.copy_nodes(\n p_children[index:],\n until_line_old,\n line_offset\n )\n # Match all the nodes that are in the wanted range.\n if copied_nodes:\n self._copy_count += 1\n\n to = self._nodes_tree.parsed_until_line\n\n LOG.debug('copy old[%s:%s] new[%s:%s]',\n copied_nodes[0].start_pos[0],\n copied_nodes[-1].end_pos[0] - 1, from_, to)\n else:\n # We have copied as much as possible (but definitely not too\n # much). 
Therefore we just parse a bit more.\n self._parse(self._nodes_tree.parsed_until_line + 1)\n # Since there are potential bugs that might loop here endlessly, we\n # just stop here.\n assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line\n last_until_line = self._nodes_tree.parsed_until_line\n\n def _get_old_line_stmt(self, old_line):\n leaf = self._module.get_leaf_for_position((old_line, 0), include_prefixes=True)\n\n if _ends_with_newline(leaf):\n leaf = leaf.get_next_leaf()\n if leaf.get_start_pos_of_prefix()[0] == old_line:\n node = leaf\n while node.parent.type not in ('file_input', 'suite'):\n node = node.parent\n\n # Make sure that if only the `else:` line of an if statement is\n # copied that not the whole thing is going to be copied.\n if node.start_pos[0] >= old_line:\n return node\n # Must be on the same line. Otherwise we need to parse that bit.\n return None\n\n def _parse(self, until_line):\n """\n Parses at least until the given line, but might just parse more until a\n valid state is reached.\n """\n last_until_line = 0\n while until_line > self._nodes_tree.parsed_until_line:\n node = self._try_parse_part(until_line)\n nodes = node.children\n\n self._nodes_tree.add_parsed_nodes(nodes, self._keyword_token_indents)\n if self._replace_tos_indent is not None:\n self._nodes_tree.indents[-1] = self._replace_tos_indent\n\n LOG.debug(\n 'parse_part from %s to %s (to %s in part parser)',\n nodes[0].get_start_pos_of_prefix()[0],\n self._nodes_tree.parsed_until_line,\n node.end_pos[0] - 1\n )\n # Since the tokenizer sometimes has bugs, we cannot be sure that\n # this loop terminates. 
Therefore assert that there's always a\n # change.\n assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line\n last_until_line = self._nodes_tree.parsed_until_line\n\n def _try_parse_part(self, until_line):\n """\n Sets up a normal parser that uses a specialized tokenizer to only parse\n until a certain position (or a bit longer if the statement hasn't\n ended).\n """\n self._parser_count += 1\n # TODO speed up, shouldn't copy the whole list all the time.\n # memoryview?\n parsed_until_line = self._nodes_tree.parsed_until_line\n lines_after = self._parser_lines_new[parsed_until_line:]\n tokens = self._diff_tokenize(\n lines_after,\n until_line,\n line_offset=parsed_until_line\n )\n self._active_parser = Parser(\n self._pgen_grammar,\n error_recovery=True\n )\n return self._active_parser.parse(tokens=tokens)\n\n def _diff_tokenize(self, lines, until_line, line_offset=0):\n was_newline = False\n indents = self._nodes_tree.indents\n initial_indentation_count = len(indents)\n\n tokens = self._tokenizer(\n lines,\n start_pos=(line_offset + 1, 0),\n indents=indents,\n is_first_token=line_offset == 0,\n )\n stack = self._active_parser.stack\n self._replace_tos_indent = None\n self._keyword_token_indents = {}\n # print('start', line_offset + 1, indents)\n for token in tokens:\n # print(token, indents)\n typ = token.type\n if typ == DEDENT:\n if len(indents) < initial_indentation_count:\n # We are done here, only thing that can come now is an\n # endmarker or another dedented code block.\n while True:\n typ, string, start_pos, prefix = token = next(tokens)\n if typ in (DEDENT, ERROR_DEDENT):\n if typ == ERROR_DEDENT:\n # We want to force an error dedent in the next\n # parser/pass. 
To make this possible we just\n # increase the location by one.\n self._replace_tos_indent = start_pos[1] + 1\n pass\n else:\n break\n\n if '\n' in prefix or '\r' in prefix:\n prefix = re.sub(r'[^\n\r]+\Z', '', prefix)\n else:\n assert start_pos[1] >= len(prefix), repr(prefix)\n if start_pos[1] - len(prefix) == 0:\n prefix = ''\n yield PythonToken(\n ENDMARKER, '',\n start_pos,\n prefix\n )\n break\n elif typ == NEWLINE and token.start_pos[0] >= until_line:\n was_newline = True\n elif was_newline:\n was_newline = False\n if len(indents) == initial_indentation_count:\n # Check if the parser is actually in a valid suite state.\n if _suite_or_file_input_is_valid(self._pgen_grammar, stack):\n yield PythonToken(ENDMARKER, '', token.start_pos, '')\n break\n\n if typ == NAME and token.string in ('class', 'def'):\n self._keyword_token_indents[token.start_pos] = list(indents)\n\n yield token\n\n\nclass _NodesTreeNode:\n _ChildrenGroup = namedtuple(\n '_ChildrenGroup',\n 'prefix children line_offset last_line_offset_leaf')\n\n def __init__(self, tree_node, parent=None, indentation=0):\n self.tree_node = tree_node\n self._children_groups = []\n self.parent = parent\n self._node_children = []\n self.indentation = indentation\n\n def finish(self):\n children = []\n for prefix, children_part, line_offset, last_line_offset_leaf in self._children_groups:\n first_leaf = _get_next_leaf_if_indentation(\n children_part[0].get_first_leaf()\n )\n\n first_leaf.prefix = prefix + first_leaf.prefix\n if line_offset != 0:\n try:\n _update_positions(\n children_part, line_offset, last_line_offset_leaf)\n except _PositionUpdatingFinished:\n pass\n children += children_part\n self.tree_node.children = children\n # Reset the parents\n for node in children:\n node.parent = self.tree_node\n\n for node_child in self._node_children:\n node_child.finish()\n\n def add_child_node(self, child_node):\n self._node_children.append(child_node)\n\n def add_tree_nodes(self, prefix, children, line_offset=0,\n 
last_line_offset_leaf=None):\n if last_line_offset_leaf is None:\n last_line_offset_leaf = children[-1].get_last_leaf()\n group = self._ChildrenGroup(\n prefix, children, line_offset, last_line_offset_leaf\n )\n self._children_groups.append(group)\n\n def get_last_line(self, suffix):\n line = 0\n if self._children_groups:\n children_group = self._children_groups[-1]\n last_leaf = _get_previous_leaf_if_indentation(\n children_group.last_line_offset_leaf\n )\n\n line = last_leaf.end_pos[0] + children_group.line_offset\n\n # Newlines end on the next line, which means that they would cover\n # the next line. That line is not fully parsed at this point.\n if _ends_with_newline(last_leaf, suffix):\n line -= 1\n line += len(split_lines(suffix)) - 1\n\n if suffix and not suffix.endswith('\n') and not suffix.endswith('\r'):\n # This is the end of a file (that doesn't end with a newline).\n line += 1\n\n if self._node_children:\n return max(line, self._node_children[-1].get_last_line(suffix))\n return line\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.tree_node)\n\n\nclass _NodesTree:\n def __init__(self, module):\n self._base_node = _NodesTreeNode(module)\n self._working_stack = [self._base_node]\n self._module = module\n self._prefix_remainder = ''\n self.prefix = ''\n self.indents = [0]\n\n @property\n def parsed_until_line(self):\n return self._working_stack[-1].get_last_line(self.prefix)\n\n def _update_insertion_node(self, indentation):\n for node in reversed(list(self._working_stack)):\n if node.indentation < indentation or node is self._working_stack[0]:\n return node\n self._working_stack.pop()\n\n def add_parsed_nodes(self, tree_nodes, keyword_token_indents):\n old_prefix = self.prefix\n tree_nodes = self._remove_endmarker(tree_nodes)\n if not tree_nodes:\n self.prefix = old_prefix + self.prefix\n return\n\n assert tree_nodes[0].type != 'newline'\n\n node = self._update_insertion_node(tree_nodes[0].start_pos[1])\n assert 
node.tree_node.type in ('suite', 'file_input')\n node.add_tree_nodes(old_prefix, tree_nodes)\n # tos = Top of stack\n self._update_parsed_node_tos(tree_nodes[-1], keyword_token_indents)\n\n def _update_parsed_node_tos(self, tree_node, keyword_token_indents):\n if tree_node.type == 'suite':\n def_leaf = tree_node.parent.children[0]\n new_tos = _NodesTreeNode(\n tree_node,\n indentation=keyword_token_indents[def_leaf.start_pos][-1],\n )\n new_tos.add_tree_nodes('', list(tree_node.children))\n\n self._working_stack[-1].add_child_node(new_tos)\n self._working_stack.append(new_tos)\n\n self._update_parsed_node_tos(tree_node.children[-1], keyword_token_indents)\n elif _func_or_class_has_suite(tree_node):\n self._update_parsed_node_tos(tree_node.children[-1], keyword_token_indents)\n\n def _remove_endmarker(self, tree_nodes):\n """\n Helps cleaning up the tree nodes that get inserted.\n """\n last_leaf = tree_nodes[-1].get_last_leaf()\n is_endmarker = last_leaf.type == 'endmarker'\n self._prefix_remainder = ''\n if is_endmarker:\n prefix = last_leaf.prefix\n separation = max(prefix.rfind('\n'), prefix.rfind('\r'))\n if separation > -1:\n # Remove the whitespace part of the prefix after a newline.\n # That is not relevant if parentheses were opened. Always parse\n # until the end of a line.\n last_leaf.prefix, self._prefix_remainder = \\n last_leaf.prefix[:separation + 1], last_leaf.prefix[separation + 1:]\n\n self.prefix = ''\n\n if is_endmarker:\n self.prefix = last_leaf.prefix\n\n tree_nodes = tree_nodes[:-1]\n return tree_nodes\n\n def _get_matching_indent_nodes(self, tree_nodes, is_new_suite):\n # There might be a random dedent where we have to stop copying.\n # Invalid indents are ok, because the parser handled that\n # properly before. 
An invalid dedent can happen, because a few\n # lines above there was an invalid indent.\n node_iterator = iter(tree_nodes)\n if is_new_suite:\n yield next(node_iterator)\n\n first_node = next(node_iterator)\n indent = _get_indentation(first_node)\n if not is_new_suite and indent not in self.indents:\n return\n yield first_node\n\n for n in node_iterator:\n if _get_indentation(n) != indent:\n return\n yield n\n\n def copy_nodes(self, tree_nodes, until_line, line_offset):\n """\n Copies tree nodes from the old parser tree.\n\n Returns the number of tree nodes that were copied.\n """\n if tree_nodes[0].type in ('error_leaf', 'error_node'):\n # Avoid copying errors in the beginning. Can lead to a lot of\n # issues.\n return []\n\n indentation = _get_indentation(tree_nodes[0])\n old_working_stack = list(self._working_stack)\n old_prefix = self.prefix\n old_indents = self.indents\n self.indents = [i for i in self.indents if i <= indentation]\n\n self._update_insertion_node(indentation)\n\n new_nodes, self._working_stack, self.prefix, added_indents = self._copy_nodes(\n list(self._working_stack),\n tree_nodes,\n until_line,\n line_offset,\n self.prefix,\n )\n if new_nodes:\n self.indents += added_indents\n else:\n self._working_stack = old_working_stack\n self.prefix = old_prefix\n self.indents = old_indents\n return new_nodes\n\n def _copy_nodes(self, working_stack, nodes, until_line, line_offset,\n prefix='', is_nested=False):\n new_nodes = []\n added_indents = []\n\n nodes = list(self._get_matching_indent_nodes(\n nodes,\n is_new_suite=is_nested,\n ))\n\n new_prefix = ''\n for node in nodes:\n if node.start_pos[0] > until_line:\n break\n\n if node.type == 'endmarker':\n break\n\n if node.type == 'error_leaf' and node.token_type in ('DEDENT', 'ERROR_DEDENT'):\n break\n # TODO this check might take a bit of time for large files. 
We\n # might want to change this to do more intelligent guessing or\n # binary search.\n if _get_last_line(node) > until_line:\n # We can split up functions and classes later.\n if _func_or_class_has_suite(node):\n new_nodes.append(node)\n break\n try:\n c = node.children\n except AttributeError:\n pass\n else:\n # This case basically appears with error recovery of one line\n # suites like `def foo(): bar.-`. In this case we might not\n # include a newline in the statement and we need to take care\n # of that.\n n = node\n if n.type == 'decorated':\n n = n.children[-1]\n if n.type in ('async_funcdef', 'async_stmt'):\n n = n.children[-1]\n if n.type in ('classdef', 'funcdef'):\n suite_node = n.children[-1]\n else:\n suite_node = c[-1]\n\n if suite_node.type in ('error_leaf', 'error_node'):\n break\n\n new_nodes.append(node)\n\n # Pop error nodes at the end from the list\n if new_nodes:\n while new_nodes:\n last_node = new_nodes[-1]\n if (last_node.type in ('error_leaf', 'error_node')\n or _is_flow_node(new_nodes[-1])):\n # Error leafs/nodes don't have a defined start/end. Error\n # nodes might not end with a newline (e.g. if there's an\n # open `(`). 
Therefore ignore all of them unless they are\n # succeeded with valid parser state.\n # If we copy flows at the end, they might be continued\n # after the copy limit (in the new parser).\n # In this while loop we try to remove until we find a newline.\n new_prefix = ''\n new_nodes.pop()\n while new_nodes:\n last_node = new_nodes[-1]\n if last_node.get_last_leaf().type == 'newline':\n break\n new_nodes.pop()\n continue\n if len(new_nodes) > 1 and new_nodes[-2].type == 'error_node':\n # The problem here is that Parso error recovery sometimes\n # influences nodes before this node.\n # Since the new last node is an error node this will get\n # cleaned up in the next while iteration.\n new_nodes.pop()\n continue\n break\n\n if not new_nodes:\n return [], working_stack, prefix, added_indents\n\n tos = working_stack[-1]\n last_node = new_nodes[-1]\n had_valid_suite_last = False\n # Pop incomplete suites from the list\n if _func_or_class_has_suite(last_node):\n suite = last_node\n while suite.type != 'suite':\n suite = suite.children[-1]\n\n indent = _get_suite_indentation(suite)\n added_indents.append(indent)\n\n suite_tos = _NodesTreeNode(suite, indentation=_get_indentation(last_node))\n # Don't need to pass line_offset here, it's already done by the\n # parent.\n suite_nodes, new_working_stack, new_prefix, ai = self._copy_nodes(\n working_stack + [suite_tos], suite.children, until_line, line_offset,\n is_nested=True,\n )\n added_indents += ai\n if len(suite_nodes) < 2:\n # A suite only with newline is not valid.\n new_nodes.pop()\n new_prefix = ''\n else:\n assert new_nodes\n tos.add_child_node(suite_tos)\n working_stack = new_working_stack\n had_valid_suite_last = True\n\n if new_nodes:\n if not _ends_with_newline(new_nodes[-1].get_last_leaf()) and not had_valid_suite_last:\n p = new_nodes[-1].get_next_leaf().prefix\n # We are not allowed to remove the newline at the end of the\n # line, otherwise it's going to be missing. 
This happens e.g.\n # if a bracket is around before that moves newlines to\n # prefixes.\n new_prefix = split_lines(p, keepends=True)[0]\n\n if had_valid_suite_last:\n last = new_nodes[-1]\n if last.type == 'decorated':\n last = last.children[-1]\n if last.type in ('async_funcdef', 'async_stmt'):\n last = last.children[-1]\n last_line_offset_leaf = last.children[-2].get_last_leaf()\n assert last_line_offset_leaf == ':'\n else:\n last_line_offset_leaf = new_nodes[-1].get_last_leaf()\n tos.add_tree_nodes(\n prefix, new_nodes, line_offset, last_line_offset_leaf,\n )\n prefix = new_prefix\n self._prefix_remainder = ''\n\n return new_nodes, working_stack, prefix, added_indents\n\n def close(self):\n self._base_node.finish()\n\n # Add an endmarker.\n try:\n last_leaf = self._module.get_last_leaf()\n except IndexError:\n end_pos = [1, 0]\n else:\n last_leaf = _skip_dedent_error_leaves(last_leaf)\n end_pos = list(last_leaf.end_pos)\n lines = split_lines(self.prefix)\n assert len(lines) > 0\n if len(lines) == 1:\n if lines[0].startswith(BOM_UTF8_STRING) and end_pos == [1, 0]:\n end_pos[1] -= 1\n end_pos[1] += len(lines[0])\n else:\n end_pos[0] += len(lines) - 1\n end_pos[1] = len(lines[-1])\n\n endmarker = EndMarker('', tuple(end_pos), self.prefix + self._prefix_remainder)\n endmarker.parent = self._module\n self._module.children.append(endmarker)\n | .venv\Lib\site-packages\parso\python\diff.py | diff.py | Python | 34,206 | 0.95 | 0.211061 | 0.120482 | awesome-app | 510 | 2024-09-13T06:40:54.137067 | MIT | false | 6bafb523703262e9fc392c9a7921032f |
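The `close()` method above finishes the tree by appending an `EndMarker` whose prefix absorbs any trailing comments and whitespace, with its `end_pos` computed from the last leaf plus that prefix. A minimal sketch of observing this through parso's public `parse` API (assuming parso is installed):

```python
import parso

# A trailing comment never becomes a token of its own; it ends up in the
# prefix of the endmarker that close() appends to the module.
module = parso.parse("x = 1\n# trailing comment\n")
endmarker = module.children[-1]

assert endmarker.type == 'endmarker'
print(repr(endmarker.prefix))   # the absorbed trailing comment
print(endmarker.end_pos)        # position just past the whole source
```

The `end_pos` lands on the line after the comment, matching the `split_lines` arithmetic in `close()`: the prefix contributes one extra line, so the marker sits at column 0 of the following line.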
# -*- coding: utf-8 -*-\nimport codecs\nimport sys\nimport warnings\nimport re\nfrom contextlib import contextmanager\n\nfrom parso.normalizer import Normalizer, NormalizerConfig, Issue, Rule\nfrom parso.python.tokenize import _get_token_collection\n\n_BLOCK_STMTS = ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt')\n_STAR_EXPR_PARENTS = ('testlist_star_expr', 'testlist_comp', 'exprlist')\n# This is the maximal block size given by python.\n_MAX_BLOCK_SIZE = 20\n_MAX_INDENT_COUNT = 100\nALLOWED_FUTURES = (\n 'nested_scopes', 'generators', 'division', 'absolute_import',\n 'with_statement', 'print_function', 'unicode_literals', 'generator_stop',\n)\n_COMP_FOR_TYPES = ('comp_for', 'sync_comp_for')\n\n\ndef _get_rhs_name(node, version):\n type_ = node.type\n if type_ == "lambdef":\n return "lambda"\n elif type_ == "atom":\n comprehension = _get_comprehension_type(node)\n first, second = node.children[:2]\n if comprehension is not None:\n return comprehension\n elif second.type == "dictorsetmaker":\n if version < (3, 8):\n return "literal"\n else:\n if second.children[1] == ":" or second.children[0] == "**":\n if version < (3, 10):\n return "dict display"\n else:\n return "dict literal"\n else:\n return "set display"\n elif (\n first == "("\n and (second == ")"\n or (len(node.children) == 3 and node.children[1].type == "testlist_comp"))\n ):\n return "tuple"\n elif first == "(":\n return _get_rhs_name(_remove_parens(node), version=version)\n elif first == "[":\n return "list"\n elif first == "{" and second == "}":\n if version < (3, 10):\n return "dict display"\n else:\n return "dict literal"\n elif first == "{" and len(node.children) > 2:\n return "set display"\n elif type_ == "keyword":\n if "yield" in node.value:\n return "yield expression"\n if version < (3, 8):\n return "keyword"\n else:\n return str(node.value)\n elif type_ == "operator" and node.value == "...":\n if version < (3, 10):\n return "Ellipsis"\n else:\n return "ellipsis"\n elif type_ == 
"comparison":\n return "comparison"\n elif type_ in ("string", "number", "strings"):\n return "literal"\n elif type_ == "yield_expr":\n return "yield expression"\n elif type_ == "test":\n return "conditional expression"\n elif type_ in ("atom_expr", "power"):\n if node.children[0] == "await":\n return "await expression"\n elif node.children[-1].type == "trailer":\n trailer = node.children[-1]\n if trailer.children[0] == "(":\n return "function call"\n elif trailer.children[0] == "[":\n return "subscript"\n elif trailer.children[0] == ".":\n return "attribute"\n elif (\n ("expr" in type_ and "star_expr" not in type_) # is a substring\n or "_test" in type_\n or type_ in ("term", "factor")\n ):\n if version < (3, 10):\n return "operator"\n else:\n return "expression"\n elif type_ == "star_expr":\n return "starred"\n elif type_ == "testlist_star_expr":\n return "tuple"\n elif type_ == "fstring":\n return "f-string expression"\n return type_ # shouldn't reach here\n\n\ndef _iter_stmts(scope):\n """\n Iterates over all statements and splits up simple_stmt.\n """\n for child in scope.children:\n if child.type == 'simple_stmt':\n for child2 in child.children:\n if child2.type == 'newline' or child2 == ';':\n continue\n yield child2\n else:\n yield child\n\n\ndef _get_comprehension_type(atom):\n first, second = atom.children[:2]\n if second.type == 'testlist_comp' and second.children[1].type in _COMP_FOR_TYPES:\n if first == '[':\n return 'list comprehension'\n else:\n return 'generator expression'\n elif second.type == 'dictorsetmaker' and second.children[-1].type in _COMP_FOR_TYPES:\n if second.children[1] == ':':\n return 'dict comprehension'\n else:\n return 'set comprehension'\n return None\n\n\ndef _is_future_import(import_from):\n # It looks like a __future__ import that is relative is still a future\n # import. 
That feels kind of odd, but whatever.\n # if import_from.level != 0:\n # return False\n from_names = import_from.get_from_names()\n return [n.value for n in from_names] == ['__future__']\n\n\ndef _remove_parens(atom):\n """\n Returns the inner part of an expression like `(foo)`. Also removes nested\n parens.\n """\n try:\n children = atom.children\n except AttributeError:\n pass\n else:\n if len(children) == 3 and children[0] == '(':\n return _remove_parens(atom.children[1])\n return atom\n\n\ndef _skip_parens_bottom_up(node):\n """\n Returns an ancestor node of an expression, skipping all levels of parens\n bottom-up.\n """\n while node.parent is not None:\n node = node.parent\n if node.type != 'atom' or node.children[0] != '(':\n return node\n return None\n\n\ndef _iter_params(parent_node):\n return (n for n in parent_node.children if n.type == 'param' or n.type == 'operator')\n\n\ndef _is_future_import_first(import_from):\n """\n Checks if the import is the first statement of a file.\n """\n found_docstring = False\n for stmt in _iter_stmts(import_from.get_root_node()):\n if stmt.type == 'string' and not found_docstring:\n continue\n found_docstring = True\n\n if stmt == import_from:\n return True\n if stmt.type == 'import_from' and _is_future_import(stmt):\n continue\n return False\n\n\ndef _iter_definition_exprs_from_lists(exprlist):\n def check_expr(child):\n if child.type == 'atom':\n if child.children[0] == '(':\n testlist_comp = child.children[1]\n if testlist_comp.type == 'testlist_comp':\n yield from _iter_definition_exprs_from_lists(testlist_comp)\n return\n else:\n # It's a paren that doesn't do anything, like 1 + (1)\n yield from check_expr(testlist_comp)\n return\n elif child.children[0] == '[':\n yield testlist_comp\n return\n yield child\n\n if exprlist.type in _STAR_EXPR_PARENTS:\n for child in exprlist.children[::2]:\n yield from check_expr(child)\n else:\n yield from check_expr(exprlist)\n\n\ndef _get_expr_stmt_definition_exprs(expr_stmt):\n 
exprs = []\n for list_ in expr_stmt.children[:-2:2]:\n if list_.type in ('testlist_star_expr', 'testlist'):\n exprs += _iter_definition_exprs_from_lists(list_)\n else:\n exprs.append(list_)\n return exprs\n\n\ndef _get_for_stmt_definition_exprs(for_stmt):\n exprlist = for_stmt.children[1]\n return list(_iter_definition_exprs_from_lists(exprlist))\n\n\ndef _is_argument_comprehension(argument):\n return argument.children[1].type in _COMP_FOR_TYPES\n\n\ndef _any_fstring_error(version, node):\n if version < (3, 9) or node is None:\n return False\n if node.type == "error_node":\n return any(child.type == "fstring_start" for child in node.children)\n elif node.type == "fstring":\n return True\n else:\n return node.search_ancestor("fstring")\n\n\nclass _Context:\n def __init__(self, node, add_syntax_error, parent_context=None):\n self.node = node\n self.blocks = []\n self.parent_context = parent_context\n self._used_name_dict = {}\n self._global_names = []\n self._local_params_names = []\n self._nonlocal_names = []\n self._nonlocal_names_in_subscopes = []\n self._add_syntax_error = add_syntax_error\n\n def is_async_funcdef(self):\n # Stupidly enough async funcdefs can have two different forms,\n # depending if a decorator is used or not.\n return self.is_function() \\n and self.node.parent.type in ('async_funcdef', 'async_stmt')\n\n def is_function(self):\n return self.node.type == 'funcdef'\n\n def add_name(self, name):\n parent_type = name.parent.type\n if parent_type == 'trailer':\n # We are only interested in first level names.\n return\n\n if parent_type == 'global_stmt':\n self._global_names.append(name)\n elif parent_type == 'nonlocal_stmt':\n self._nonlocal_names.append(name)\n elif parent_type == 'funcdef':\n self._local_params_names.extend(\n [param.name.value for param in name.parent.get_params()]\n )\n else:\n self._used_name_dict.setdefault(name.value, []).append(name)\n\n def finalize(self):\n """\n Returns a list of nonlocal names that need to be part of 
that scope.\n """\n self._analyze_names(self._global_names, 'global')\n self._analyze_names(self._nonlocal_names, 'nonlocal')\n\n global_name_strs = {n.value: n for n in self._global_names}\n for nonlocal_name in self._nonlocal_names:\n try:\n global_name = global_name_strs[nonlocal_name.value]\n except KeyError:\n continue\n\n message = "name '%s' is nonlocal and global" % global_name.value\n if global_name.start_pos < nonlocal_name.start_pos:\n error_name = global_name\n else:\n error_name = nonlocal_name\n self._add_syntax_error(error_name, message)\n\n nonlocals_not_handled = []\n for nonlocal_name in self._nonlocal_names_in_subscopes:\n search = nonlocal_name.value\n if search in self._local_params_names:\n continue\n if search in global_name_strs or self.parent_context is None:\n message = "no binding for nonlocal '%s' found" % nonlocal_name.value\n self._add_syntax_error(nonlocal_name, message)\n elif not self.is_function() or \\n nonlocal_name.value not in self._used_name_dict:\n nonlocals_not_handled.append(nonlocal_name)\n return self._nonlocal_names + nonlocals_not_handled\n\n def _analyze_names(self, globals_or_nonlocals, type_):\n def raise_(message):\n self._add_syntax_error(base_name, message % (base_name.value, type_))\n\n params = []\n if self.node.type == 'funcdef':\n params = self.node.get_params()\n\n for base_name in globals_or_nonlocals:\n found_global_or_nonlocal = False\n # Somehow Python does it the reversed way.\n for name in reversed(self._used_name_dict.get(base_name.value, [])):\n if name.start_pos > base_name.start_pos:\n # All following names don't have to be checked.\n found_global_or_nonlocal = True\n\n parent = name.parent\n if parent.type == 'param' and parent.name == name:\n # Skip those here, these definitions belong to the next\n # scope.\n continue\n\n if name.is_definition():\n if parent.type == 'expr_stmt' \\n and parent.children[1].type == 'annassign':\n if found_global_or_nonlocal:\n # If it's after the global the error 
seems to be\n                            # placed there.\n                            base_name = name\n                        raise_("annotated name '%s' can't be %s")\n                        break\n                    else:\n                        message = "name '%s' is assigned to before %s declaration"\n                else:\n                    message = "name '%s' is used prior to %s declaration"\n\n                if not found_global_or_nonlocal:\n                    raise_(message)\n                    # Only add an error for the first occurrence.\n                    break\n\n            for param in params:\n                if param.name.value == base_name.value:\n                    raise_("name '%s' is parameter and %s")\n\n    @contextmanager\n    def add_block(self, node):\n        self.blocks.append(node)\n        yield\n        self.blocks.pop()\n\n    def add_context(self, node):\n        return _Context(node, self._add_syntax_error, parent_context=self)\n\n    def close_child_context(self, child_context):\n        self._nonlocal_names_in_subscopes += child_context.finalize()\n\n\nclass ErrorFinder(Normalizer):\n    """\n    Searches for errors in the syntax tree.\n    """\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self._error_dict = {}\n        self.version = self.grammar.version_info\n\n    def initialize(self, node):\n        def create_context(node):\n            if node is None:\n                return None\n\n            parent_context = create_context(node.parent)\n            if node.type in ('classdef', 'funcdef', 'file_input'):\n                return _Context(node, self._add_syntax_error, parent_context)\n            return parent_context\n\n        self.context = create_context(node) or _Context(node, self._add_syntax_error)\n        self._indentation_count = 0\n\n    def visit(self, node):\n        if node.type == 'error_node':\n            with self.visit_node(node):\n                # Don't need to investigate the inners of an error node. 
We\n # might find errors in there that should be ignored, because\n # the error node itself already shows that there's an issue.\n return ''\n return super().visit(node)\n\n @contextmanager\n def visit_node(self, node):\n self._check_type_rules(node)\n\n if node.type in _BLOCK_STMTS:\n with self.context.add_block(node):\n if len(self.context.blocks) == _MAX_BLOCK_SIZE:\n self._add_syntax_error(node, "too many statically nested blocks")\n yield\n return\n elif node.type == 'suite':\n self._indentation_count += 1\n if self._indentation_count == _MAX_INDENT_COUNT:\n self._add_indentation_error(node.children[1], "too many levels of indentation")\n\n yield\n\n if node.type == 'suite':\n self._indentation_count -= 1\n elif node.type in ('classdef', 'funcdef'):\n context = self.context\n self.context = context.parent_context\n self.context.close_child_context(context)\n\n def visit_leaf(self, leaf):\n if leaf.type == 'error_leaf':\n if leaf.token_type in ('INDENT', 'ERROR_DEDENT'):\n # Indents/Dedents itself never have a prefix. 
They are just\n                # "pseudo" tokens that get removed by the syntax tree later.\n                # Therefore in case of an error we also have to check for this.\n                spacing = list(leaf.get_next_leaf()._split_prefix())[-1]\n                if leaf.token_type == 'INDENT':\n                    message = 'unexpected indent'\n                else:\n                    message = 'unindent does not match any outer indentation level'\n                self._add_indentation_error(spacing, message)\n            else:\n                if leaf.value.startswith('\\'):\n                    message = 'unexpected character after line continuation character'\n                else:\n                    match = re.match('\\w{,2}("{1,3}|\'{1,3})', leaf.value)\n                    if match is None:\n                        message = 'invalid syntax'\n                        if (\n                            self.version >= (3, 9)\n                            and leaf.value in _get_token_collection(\n                                self.version\n                            ).always_break_tokens\n                        ):\n                            message = "f-string: " + message\n                    else:\n                        if len(match.group(1)) == 1:\n                            message = 'EOL while scanning string literal'\n                        else:\n                            message = 'EOF while scanning triple-quoted string literal'\n                self._add_syntax_error(leaf, message)\n            return ''\n        elif leaf.value == ':':\n            parent = leaf.parent\n            if parent.type in ('classdef', 'funcdef'):\n                self.context = self.context.add_context(parent)\n\n        # The rest is rule based.\n        return super().visit_leaf(leaf)\n\n    def _add_indentation_error(self, spacing, message):\n        self.add_issue(spacing, 903, "IndentationError: " + message)\n\n    def _add_syntax_error(self, node, message):\n        self.add_issue(node, 901, "SyntaxError: " + message)\n\n    def add_issue(self, node, code, message):\n        # Overwrite the default behavior.\n        # Check if the issues are on the same line.\n        line = node.start_pos[0]\n        args = (code, message, node)\n        self._error_dict.setdefault(line, args)\n\n    def finalize(self):\n        self.context.finalize()\n\n        for code, message, node in self._error_dict.values():\n            self.issues.append(Issue(node, code, message))\n\n\nclass IndentationRule(Rule):\n    code = 903\n\n    def _get_message(self, message, node):\n        message = super()._get_message(message, node)\n        return "IndentationError: " + 
message\n\n\n@ErrorFinder.register_rule(type='error_node')\nclass _ExpectIndentedBlock(IndentationRule):\n message = 'expected an indented block'\n\n def get_node(self, node):\n leaf = node.get_next_leaf()\n return list(leaf._split_prefix())[-1]\n\n def is_issue(self, node):\n # This is the beginning of a suite that is not indented.\n return node.children[-1].type == 'newline'\n\n\nclass ErrorFinderConfig(NormalizerConfig):\n normalizer_class = ErrorFinder\n\n\nclass SyntaxRule(Rule):\n code = 901\n\n def _get_message(self, message, node):\n message = super()._get_message(message, node)\n if (\n "f-string" not in message\n and _any_fstring_error(self._normalizer.version, node)\n ):\n message = "f-string: " + message\n return "SyntaxError: " + message\n\n\n@ErrorFinder.register_rule(type='error_node')\nclass _InvalidSyntaxRule(SyntaxRule):\n message = "invalid syntax"\n fstring_message = "f-string: invalid syntax"\n\n def get_node(self, node):\n return node.get_next_leaf()\n\n def is_issue(self, node):\n error = node.get_next_leaf().type != 'error_leaf'\n if (\n error\n and _any_fstring_error(self._normalizer.version, node)\n ):\n self.add_issue(node, message=self.fstring_message)\n else:\n # Error leafs will be added later as an error.\n return error\n\n\n@ErrorFinder.register_rule(value='await')\nclass _AwaitOutsideAsync(SyntaxRule):\n message = "'await' outside async function"\n\n def is_issue(self, leaf):\n return not self._normalizer.context.is_async_funcdef()\n\n def get_error_node(self, node):\n # Return the whole await statement.\n return node.parent\n\n\n@ErrorFinder.register_rule(value='break')\nclass _BreakOutsideLoop(SyntaxRule):\n message = "'break' outside loop"\n\n def is_issue(self, leaf):\n in_loop = False\n for block in self._normalizer.context.blocks:\n if block.type in ('for_stmt', 'while_stmt'):\n in_loop = True\n return not in_loop\n\n\n@ErrorFinder.register_rule(value='continue')\nclass _ContinueChecks(SyntaxRule):\n message = "'continue' not 
properly in loop"\n message_in_finally = "'continue' not supported inside 'finally' clause"\n\n def is_issue(self, leaf):\n in_loop = False\n for block in self._normalizer.context.blocks:\n if block.type in ('for_stmt', 'while_stmt'):\n in_loop = True\n if block.type == 'try_stmt':\n last_block = block.children[-3]\n if (\n last_block == "finally"\n and leaf.start_pos > last_block.start_pos\n and self._normalizer.version < (3, 8)\n ):\n self.add_issue(leaf, message=self.message_in_finally)\n return False # Error already added\n if not in_loop:\n return True\n\n\n@ErrorFinder.register_rule(value='from')\nclass _YieldFromCheck(SyntaxRule):\n message = "'yield from' inside async function"\n\n def get_node(self, leaf):\n return leaf.parent.parent # This is the actual yield statement.\n\n def is_issue(self, leaf):\n return leaf.parent.type == 'yield_arg' \\n and self._normalizer.context.is_async_funcdef()\n\n\n@ErrorFinder.register_rule(type='name')\nclass _NameChecks(SyntaxRule):\n message = 'cannot assign to __debug__'\n message_none = 'cannot assign to None'\n\n def is_issue(self, leaf):\n self._normalizer.context.add_name(leaf)\n\n if leaf.value == '__debug__' and leaf.is_definition():\n return True\n\n\n@ErrorFinder.register_rule(type='string')\nclass _StringChecks(SyntaxRule):\n if sys.version_info < (3, 10):\n message = "bytes can only contain ASCII literal characters."\n else:\n message = "bytes can only contain ASCII literal characters"\n\n def is_issue(self, leaf):\n string_prefix = leaf.string_prefix.lower()\n if 'b' in string_prefix \\n and any(c for c in leaf.value if ord(c) > 127):\n # b'ä'\n return True\n\n if 'r' not in string_prefix:\n # Raw strings don't need to be checked if they have proper\n # escaping.\n\n payload = leaf._get_payload()\n if 'b' in string_prefix:\n payload = payload.encode('utf-8')\n func = codecs.escape_decode\n else:\n func = codecs.unicode_escape_decode\n\n try:\n with warnings.catch_warnings():\n # The warnings from parsing 
strings are not relevant.\n warnings.filterwarnings('ignore')\n func(payload)\n except UnicodeDecodeError as e:\n self.add_issue(leaf, message='(unicode error) ' + str(e))\n except ValueError as e:\n self.add_issue(leaf, message='(value error) ' + str(e))\n\n\n@ErrorFinder.register_rule(value='*')\nclass _StarCheck(SyntaxRule):\n message = "named arguments must follow bare *"\n\n def is_issue(self, leaf):\n params = leaf.parent\n if params.type == 'parameters' and params:\n after = params.children[params.children.index(leaf) + 1:]\n after = [child for child in after\n if child not in (',', ')') and not child.star_count]\n return len(after) == 0\n\n\n@ErrorFinder.register_rule(value='**')\nclass _StarStarCheck(SyntaxRule):\n # e.g. {**{} for a in [1]}\n # TODO this should probably get a better end_pos including\n # the next sibling of leaf.\n message = "dict unpacking cannot be used in dict comprehension"\n\n def is_issue(self, leaf):\n if leaf.parent.type == 'dictorsetmaker':\n comp_for = leaf.get_next_sibling().get_next_sibling()\n return comp_for is not None and comp_for.type in _COMP_FOR_TYPES\n\n\n@ErrorFinder.register_rule(value='yield')\n@ErrorFinder.register_rule(value='return')\nclass _ReturnAndYieldChecks(SyntaxRule):\n message = "'return' with value in async generator"\n message_async_yield = "'yield' inside async function"\n\n def get_node(self, leaf):\n return leaf.parent\n\n def is_issue(self, leaf):\n if self._normalizer.context.node.type != 'funcdef':\n self.add_issue(self.get_node(leaf), message="'%s' outside function" % leaf.value)\n elif self._normalizer.context.is_async_funcdef() \\n and any(self._normalizer.context.node.iter_yield_exprs()):\n if leaf.value == 'return' and leaf.parent.type == 'return_stmt':\n return True\n\n\n@ErrorFinder.register_rule(type='strings')\nclass _BytesAndStringMix(SyntaxRule):\n # e.g. 
's' b''\n message = "cannot mix bytes and nonbytes literals"\n\n def _is_bytes_literal(self, string):\n if string.type == 'fstring':\n return False\n return 'b' in string.string_prefix.lower()\n\n def is_issue(self, node):\n first = node.children[0]\n first_is_bytes = self._is_bytes_literal(first)\n for string in node.children[1:]:\n if first_is_bytes != self._is_bytes_literal(string):\n return True\n\n\n@ErrorFinder.register_rule(type='import_as_names')\nclass _TrailingImportComma(SyntaxRule):\n # e.g. from foo import a,\n message = "trailing comma not allowed without surrounding parentheses"\n\n def is_issue(self, node):\n if node.children[-1] == ',' and node.parent.children[-1] != ')':\n return True\n\n\n@ErrorFinder.register_rule(type='import_from')\nclass _ImportStarInFunction(SyntaxRule):\n message = "import * only allowed at module level"\n\n def is_issue(self, node):\n return node.is_star_import() and self._normalizer.context.parent_context is not None\n\n\n@ErrorFinder.register_rule(type='import_from')\nclass _FutureImportRule(SyntaxRule):\n message = "from __future__ imports must occur at the beginning of the file"\n\n def is_issue(self, node):\n if _is_future_import(node):\n if not _is_future_import_first(node):\n return True\n\n for from_name, future_name in node.get_paths():\n name = future_name.value\n allowed_futures = list(ALLOWED_FUTURES)\n if self._normalizer.version >= (3, 7):\n allowed_futures.append('annotations')\n if name == 'braces':\n self.add_issue(node, message="not a chance")\n elif name == 'barry_as_FLUFL':\n m = "Seriously I'm not implementing this :) ~ Dave"\n self.add_issue(node, message=m)\n elif name not in allowed_futures:\n message = "future feature %s is not defined" % name\n self.add_issue(node, message=message)\n\n\n@ErrorFinder.register_rule(type='star_expr')\nclass _StarExprRule(SyntaxRule):\n message_iterable_unpacking = "iterable unpacking cannot be used in comprehension"\n\n def is_issue(self, node):\n def 
check_delete_starred(node):\n while node.parent is not None:\n node = node.parent\n if node.type == 'del_stmt':\n return True\n if node.type not in (*_STAR_EXPR_PARENTS, 'atom'):\n return False\n return False\n\n if self._normalizer.version >= (3, 9):\n ancestor = node.parent\n else:\n ancestor = _skip_parens_bottom_up(node)\n # starred expression not in tuple/list/set\n if ancestor.type not in (*_STAR_EXPR_PARENTS, 'dictorsetmaker') \\n and not (ancestor.type == 'atom' and ancestor.children[0] != '('):\n self.add_issue(node, message="can't use starred expression here")\n return\n\n if check_delete_starred(node):\n if self._normalizer.version >= (3, 9):\n self.add_issue(node, message="cannot delete starred")\n else:\n self.add_issue(node, message="can't use starred expression here")\n return\n\n if node.parent.type == 'testlist_comp':\n # [*[] for a in [1]]\n if node.parent.children[1].type in _COMP_FOR_TYPES:\n self.add_issue(node, message=self.message_iterable_unpacking)\n\n\n@ErrorFinder.register_rule(types=_STAR_EXPR_PARENTS)\nclass _StarExprParentRule(SyntaxRule):\n def is_issue(self, node):\n def is_definition(node, ancestor):\n if ancestor is None:\n return False\n\n type_ = ancestor.type\n if type_ == 'trailer':\n return False\n\n if type_ == 'expr_stmt':\n return node.start_pos < ancestor.children[-1].start_pos\n\n return is_definition(node, ancestor.parent)\n\n if is_definition(node, node.parent):\n args = [c for c in node.children if c != ',']\n starred = [c for c in args if c.type == 'star_expr']\n if len(starred) > 1:\n if self._normalizer.version < (3, 9):\n message = "two starred expressions in assignment"\n else:\n message = "multiple starred expressions in assignment"\n self.add_issue(starred[1], message=message)\n elif starred:\n count = args.index(starred[0])\n if count >= 256:\n message = "too many expressions in star-unpacking assignment"\n self.add_issue(starred[0], message=message)\n\n\n@ErrorFinder.register_rule(type='annassign')\nclass 
_AnnotatorRule(SyntaxRule):\n # True: int\n # {}: float\n message = "illegal target for annotation"\n\n def get_node(self, node):\n return node.parent\n\n def is_issue(self, node):\n type_ = None\n lhs = node.parent.children[0]\n lhs = _remove_parens(lhs)\n try:\n children = lhs.children\n except AttributeError:\n pass\n else:\n if ',' in children or lhs.type == 'atom' and children[0] == '(':\n type_ = 'tuple'\n elif lhs.type == 'atom' and children[0] == '[':\n type_ = 'list'\n trailer = children[-1]\n\n if type_ is None:\n if not (lhs.type == 'name'\n # subscript/attributes are allowed\n or lhs.type in ('atom_expr', 'power')\n and trailer.type == 'trailer'\n and trailer.children[0] != '('):\n return True\n else:\n # x, y: str\n message = "only single target (not %s) can be annotated"\n self.add_issue(lhs.parent, message=message % type_)\n\n\n@ErrorFinder.register_rule(type='argument')\nclass _ArgumentRule(SyntaxRule):\n def is_issue(self, node):\n first = node.children[0]\n if self._normalizer.version < (3, 8):\n # a((b)=c) is valid in <3.8\n first = _remove_parens(first)\n if node.children[1] == '=' and first.type != 'name':\n if first.type == 'lambdef':\n # f(lambda: 1=1)\n if self._normalizer.version < (3, 8):\n message = "lambda cannot contain assignment"\n else:\n message = 'expression cannot contain assignment, perhaps you meant "=="?'\n else:\n # f(+x=1)\n if self._normalizer.version < (3, 8):\n message = "keyword can't be an expression"\n else:\n message = 'expression cannot contain assignment, perhaps you meant "=="?'\n self.add_issue(first, message=message)\n\n if _is_argument_comprehension(node) and node.parent.type == 'classdef':\n self.add_issue(node, message='invalid syntax')\n\n\n@ErrorFinder.register_rule(type='nonlocal_stmt')\nclass _NonlocalModuleLevelRule(SyntaxRule):\n message = "nonlocal declaration not allowed at module level"\n\n def is_issue(self, node):\n return self._normalizer.context.parent_context is 
None\n\n\n@ErrorFinder.register_rule(type='arglist')\nclass _ArglistRule(SyntaxRule):\n @property\n def message(self):\n if self._normalizer.version < (3, 7):\n return "Generator expression must be parenthesized if not sole argument"\n else:\n return "Generator expression must be parenthesized"\n\n def is_issue(self, node):\n arg_set = set()\n kw_only = False\n kw_unpacking_only = False\n for argument in node.children:\n if argument == ',':\n continue\n\n if argument.type == 'argument':\n first = argument.children[0]\n if _is_argument_comprehension(argument) and len(node.children) >= 2:\n # a(a, b for b in c)\n return True\n\n if first in ('*', '**'):\n if first == '*':\n if kw_unpacking_only:\n # foo(**kwargs, *args)\n message = "iterable argument unpacking " \\n "follows keyword argument unpacking"\n self.add_issue(argument, message=message)\n else:\n kw_unpacking_only = True\n else: # Is a keyword argument.\n kw_only = True\n if first.type == 'name':\n if first.value in arg_set:\n # f(x=1, x=2)\n message = "keyword argument repeated"\n if self._normalizer.version >= (3, 9):\n message += ": {}".format(first.value)\n self.add_issue(first, message=message)\n else:\n arg_set.add(first.value)\n else:\n if kw_unpacking_only:\n # f(**x, y)\n message = "positional argument follows keyword argument unpacking"\n self.add_issue(argument, message=message)\n elif kw_only:\n # f(x=2, y)\n message = "positional argument follows keyword argument"\n self.add_issue(argument, message=message)\n\n\n@ErrorFinder.register_rule(type='parameters')\n@ErrorFinder.register_rule(type='lambdef')\nclass _ParameterRule(SyntaxRule):\n # def f(x=3, y): pass\n message = "non-default argument follows default argument"\n\n def is_issue(self, node):\n param_names = set()\n default_only = False\n star_seen = False\n for p in _iter_params(node):\n if p.type == 'operator':\n if p.value == '*':\n star_seen = True\n default_only = False\n continue\n\n if p.name.value in param_names:\n message = 
"duplicate argument '%s' in function definition"\n                self.add_issue(p.name, message=message % p.name.value)\n            param_names.add(p.name.value)\n\n            if not star_seen:\n                if p.default is None and not p.star_count:\n                    if default_only:\n                        return True\n                elif p.star_count:\n                    star_seen = True\n                    default_only = False\n                else:\n                    default_only = True\n\n\n@ErrorFinder.register_rule(type='try_stmt')\nclass _TryStmtRule(SyntaxRule):\n    message = "default 'except:' must be last"\n\n    def is_issue(self, try_stmt):\n        default_except = None\n        for except_clause in try_stmt.children[3::3]:\n            if except_clause in ('else', 'finally'):\n                break\n            if except_clause == 'except':\n                default_except = except_clause\n            elif default_except is not None:\n                self.add_issue(default_except, message=self.message)\n\n\n@ErrorFinder.register_rule(type='fstring')\nclass _FStringRule(SyntaxRule):\n    _fstring_grammar = None\n    message_expr = "f-string expression part cannot include a backslash"\n    message_nested = "f-string: expressions nested too deeply"\n    message_conversion = "f-string: invalid conversion character: expected 's', 'r', or 'a'"\n\n    def _check_format_spec(self, format_spec, depth):\n        self._check_fstring_contents(format_spec.children[1:], depth)\n\n    def _check_fstring_expr(self, fstring_expr, depth):\n        if depth >= 2:\n            self.add_issue(fstring_expr, message=self.message_nested)\n\n        expr = fstring_expr.children[1]\n        if '\\' in expr.get_code():\n            self.add_issue(expr, message=self.message_expr)\n\n        children_2 = fstring_expr.children[2]\n        if children_2.type == 'operator' and children_2.value == '=':\n            conversion = fstring_expr.children[3]\n        else:\n            conversion = children_2\n        if conversion.type == 'fstring_conversion':\n            name = conversion.children[1]\n            if name.value not in ('s', 'r', 'a'):\n                self.add_issue(name, message=self.message_conversion)\n\n        format_spec = fstring_expr.children[-2]\n        if format_spec.type == 'fstring_format_spec':\n            self._check_format_spec(format_spec, depth + 1)\n\n    def is_issue(self, fstring):\n        
self._check_fstring_contents(fstring.children[1:-1])\n\n def _check_fstring_contents(self, children, depth=0):\n for fstring_content in children:\n if fstring_content.type == 'fstring_expr':\n self._check_fstring_expr(fstring_content, depth)\n\n\nclass _CheckAssignmentRule(SyntaxRule):\n def _check_assignment(self, node, is_deletion=False, is_namedexpr=False, is_aug_assign=False):\n error = None\n type_ = node.type\n if type_ == 'lambdef':\n error = 'lambda'\n elif type_ == 'atom':\n first, second = node.children[:2]\n error = _get_comprehension_type(node)\n if error is None:\n if second.type == 'dictorsetmaker':\n if self._normalizer.version < (3, 8):\n error = 'literal'\n else:\n if second.children[1] == ':':\n if self._normalizer.version < (3, 10):\n error = 'dict display'\n else:\n error = 'dict literal'\n else:\n error = 'set display'\n elif first == "{" and second == "}":\n if self._normalizer.version < (3, 8):\n error = 'literal'\n else:\n if self._normalizer.version < (3, 10):\n error = "dict display"\n else:\n error = "dict literal"\n elif first == "{" and len(node.children) > 2:\n if self._normalizer.version < (3, 8):\n error = 'literal'\n else:\n error = "set display"\n elif first in ('(', '['):\n if second.type == 'yield_expr':\n error = 'yield expression'\n elif second.type == 'testlist_comp':\n # ([a, b] := [1, 2])\n # ((a, b) := [1, 2])\n if is_namedexpr:\n if first == '(':\n error = 'tuple'\n elif first == '[':\n error = 'list'\n\n # This is not a comprehension, they were handled\n # further above.\n for child in second.children[::2]:\n self._check_assignment(child, is_deletion, is_namedexpr, is_aug_assign)\n else: # Everything handled, must be useless brackets.\n self._check_assignment(second, is_deletion, is_namedexpr, is_aug_assign)\n elif type_ == 'keyword':\n if node.value == "yield":\n error = "yield expression"\n elif self._normalizer.version < (3, 8):\n error = 'keyword'\n else:\n error = str(node.value)\n elif type_ == 'operator':\n if 
node.value == '...':\n if self._normalizer.version < (3, 10):\n error = 'Ellipsis'\n else:\n error = 'ellipsis'\n elif type_ == 'comparison':\n error = 'comparison'\n elif type_ in ('string', 'number', 'strings'):\n error = 'literal'\n elif type_ == 'yield_expr':\n # This one seems to be a slightly different warning in Python.\n message = 'assignment to yield expression not possible'\n self.add_issue(node, message=message)\n elif type_ == 'test':\n error = 'conditional expression'\n elif type_ in ('atom_expr', 'power'):\n if node.children[0] == 'await':\n error = 'await expression'\n elif node.children[-2] == '**':\n if self._normalizer.version < (3, 10):\n error = 'operator'\n else:\n error = 'expression'\n else:\n # Has a trailer\n trailer = node.children[-1]\n assert trailer.type == 'trailer'\n if trailer.children[0] == '(':\n error = 'function call'\n elif is_namedexpr and trailer.children[0] == '[':\n error = 'subscript'\n elif is_namedexpr and trailer.children[0] == '.':\n error = 'attribute'\n elif type_ == "fstring":\n if self._normalizer.version < (3, 8):\n error = 'literal'\n else:\n error = "f-string expression"\n elif type_ in ('testlist_star_expr', 'exprlist', 'testlist'):\n for child in node.children[::2]:\n self._check_assignment(child, is_deletion, is_namedexpr, is_aug_assign)\n elif ('expr' in type_ and type_ != 'star_expr' # is a substring\n or '_test' in type_\n or type_ in ('term', 'factor')):\n if self._normalizer.version < (3, 10):\n error = 'operator'\n else:\n error = 'expression'\n elif type_ == "star_expr":\n if is_deletion:\n if self._normalizer.version >= (3, 9):\n error = "starred"\n else:\n self.add_issue(node, message="can't use starred expression here")\n else:\n if self._normalizer.version >= (3, 9):\n ancestor = node.parent\n else:\n ancestor = _skip_parens_bottom_up(node)\n if ancestor.type not in _STAR_EXPR_PARENTS and not is_aug_assign \\n and not (ancestor.type == 'atom' and ancestor.children[0] == '['):\n message = "starred 
assignment target must be in a list or tuple"\n self.add_issue(node, message=message)\n\n self._check_assignment(node.children[1])\n\n if error is not None:\n if is_namedexpr:\n message = 'cannot use assignment expressions with %s' % error\n else:\n cannot = "can't" if self._normalizer.version < (3, 8) else "cannot"\n message = ' '.join([cannot, "delete" if is_deletion else "assign to", error])\n self.add_issue(node, message=message)\n\n\n@ErrorFinder.register_rule(type='sync_comp_for')\nclass _CompForRule(_CheckAssignmentRule):\n message = "asynchronous comprehension outside of an asynchronous function"\n\n def is_issue(self, node):\n expr_list = node.children[1]\n if expr_list.type != 'expr_list': # Already handled.\n self._check_assignment(expr_list)\n\n return node.parent.children[0] == 'async' \\n and not self._normalizer.context.is_async_funcdef()\n\n\n@ErrorFinder.register_rule(type='expr_stmt')\nclass _ExprStmtRule(_CheckAssignmentRule):\n message = "illegal expression for augmented assignment"\n extended_message = "'{target}' is an " + message\n\n def is_issue(self, node):\n augassign = node.children[1]\n is_aug_assign = augassign != '=' and augassign.type != 'annassign'\n\n if self._normalizer.version <= (3, 8) or not is_aug_assign:\n for before_equal in node.children[:-2:2]:\n self._check_assignment(before_equal, is_aug_assign=is_aug_assign)\n\n if is_aug_assign:\n target = _remove_parens(node.children[0])\n # a, a[b], a.b\n\n if target.type == "name" or (\n target.type in ("atom_expr", "power")\n and target.children[1].type == "trailer"\n and target.children[-1].children[0] != "("\n ):\n return False\n\n if self._normalizer.version <= (3, 8):\n return True\n else:\n self.add_issue(\n node,\n message=self.extended_message.format(\n target=_get_rhs_name(node.children[0], self._normalizer.version)\n ),\n )\n\n\n@ErrorFinder.register_rule(type='with_item')\nclass _WithItemRule(_CheckAssignmentRule):\n def is_issue(self, with_item):\n 
self._check_assignment(with_item.children[2])\n\n\n@ErrorFinder.register_rule(type='del_stmt')\nclass _DelStmtRule(_CheckAssignmentRule):\n def is_issue(self, del_stmt):\n child = del_stmt.children[1]\n\n if child.type != 'expr_list': # Already handled.\n self._check_assignment(child, is_deletion=True)\n\n\n@ErrorFinder.register_rule(type='expr_list')\nclass _ExprListRule(_CheckAssignmentRule):\n def is_issue(self, expr_list):\n for expr in expr_list.children[::2]:\n self._check_assignment(expr)\n\n\n@ErrorFinder.register_rule(type='for_stmt')\nclass _ForStmtRule(_CheckAssignmentRule):\n def is_issue(self, for_stmt):\n # Some of the nodes here are already used, so no else if\n expr_list = for_stmt.children[1]\n if expr_list.type != 'expr_list': # Already handled.\n self._check_assignment(expr_list)\n\n\n@ErrorFinder.register_rule(type='namedexpr_test')\nclass _NamedExprRule(_CheckAssignmentRule):\n # namedexpr_test: test [':=' test]\n\n def is_issue(self, namedexpr_test):\n # assigned name\n first = namedexpr_test.children[0]\n\n def search_namedexpr_in_comp_for(node):\n while True:\n parent = node.parent\n if parent is None:\n return parent\n if parent.type == 'sync_comp_for' and parent.children[3] == node:\n return parent\n node = parent\n\n if search_namedexpr_in_comp_for(namedexpr_test):\n # [i+1 for i in (i := range(5))]\n # [i+1 for i in (j := range(5))]\n # [i+1 for i in (lambda: (j := range(5)))()]\n message = 'assignment expression cannot be used in a comprehension iterable expression'\n self.add_issue(namedexpr_test, message=message)\n\n # defined names\n exprlist = list()\n\n def process_comp_for(comp_for):\n if comp_for.type == 'sync_comp_for':\n comp = comp_for\n elif comp_for.type == 'comp_for':\n comp = comp_for.children[1]\n exprlist.extend(_get_for_stmt_definition_exprs(comp))\n\n def search_all_comp_ancestors(node):\n has_ancestors = False\n while True:\n node = node.search_ancestor('testlist_comp', 'dictorsetmaker')\n if node is None:\n break\n 
for child in node.children:\n if child.type in _COMP_FOR_TYPES:\n process_comp_for(child)\n has_ancestors = True\n break\n return has_ancestors\n\n # check assignment expressions in comprehensions\n search_all = search_all_comp_ancestors(namedexpr_test)\n if search_all:\n if self._normalizer.context.node.type == 'classdef':\n message = 'assignment expression within a comprehension ' \\n 'cannot be used in a class body'\n self.add_issue(namedexpr_test, message=message)\n\n namelist = [expr.value for expr in exprlist if expr.type == 'name']\n if first.type == 'name' and first.value in namelist:\n # [i := 0 for i, j in range(5)]\n # [[(i := i) for j in range(5)] for i in range(5)]\n # [i for i, j in range(5) if True or (i := 1)]\n # [False and (i := 0) for i, j in range(5)]\n message = 'assignment expression cannot rebind ' \\n 'comprehension iteration variable %r' % first.value\n self.add_issue(namedexpr_test, message=message)\n\n self._check_assignment(first, is_namedexpr=True)\n | .venv\Lib\site-packages\parso\python\errors.py | errors.py | Python | 49,113 | 0.95 | 0.286576 | 0.064632 | vue-tools | 326 | 2024-08-02T04:55:08.224400 | GPL-3.0 | false | f30f1af77f31273c0f88d17418ed2a91 |
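The `errors.py` row above re-implements CPython's compile-time diagnostics (duplicate parameter names, a default `except:` that is not last, assignment to literals) so parso can report them without compiling. As a sanity check, the same messages can be observed from CPython itself via `compile()`; this is an illustrative sketch, not part of the packaged file, and the exact wording of the assignment message varies by Python version ("dict display" before 3.10, "dict literal" from 3.10 on — the version branches in `_check_assignment` mirror exactly that).

```python
def syntax_error_message(source):
    """Compile a snippet and return CPython's SyntaxError message, or None."""
    try:
        compile(source, "<snippet>", "exec")
    except SyntaxError as exc:
        return exc.msg
    return None

# Mirrors the "duplicate argument '%s' in function definition" check.
print(syntax_error_message("def f(a, a): pass"))
# duplicate argument 'a' in function definition

# Mirrors _TryStmtRule.
print(syntax_error_message(
    "try:\n    pass\nexcept:\n    pass\nexcept ValueError:\n    pass"
))
# default 'except:' must be last

# Mirrors _CheckAssignmentRule; wording is version-dependent.
print(syntax_error_message("{1: 2} = x"))
```

Because parso evaluates the same rules against `self._normalizer.version`, it can produce the historically correct message for a Python version other than the one it runs under.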
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' namedexpr_test NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: 
small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\nlambdef: 'lambda' [varargslist] ':' test\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' 
| 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test [':=' test] | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test [':=' test] | star_expr)\n (comp_for | (',' (test [':=' test] | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' or_test [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' 
NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar310.txt | grammar310.txt | Other | 7,511 | 0.95 | 0.076923 | 0.149351 | vue-tools | 801 | 2024-09-15T21:17:48.333157 | BSD-3-Clause | false | 663bd8e6c3008a6849caeb04b084aed8 |
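The `typedargslist` production in the grammar file above encodes PEP 570 positional-only parameters: everything before the `'/'` marker may not be passed by keyword, and everything after a bare `'*'` is keyword-only. A quick illustration (not part of the grammar file) of how those grammar positions surface at runtime, using the stdlib `inspect` module:

```python
import inspect

# tfpdef names before '/' are positional-only; after '*' keyword-only.
def f(a, b, /, c, *, d):
    return a + b + c + d

params = inspect.signature(f).parameters
kinds = [p.kind.name for p in params.values()]
print(kinds)
# ['POSITIONAL_ONLY', 'POSITIONAL_ONLY', 'POSITIONAL_OR_KEYWORD', 'KEYWORD_ONLY']
```

Note that `varargslist` (the `lambda` parameter rule) accepts the same `'/'` and `'*'` markers but, per `vfpdef: NAME`, no annotations.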
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' namedexpr_test NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: 
small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\nlambdef: 'lambda' [varargslist] ':' test\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' 
| 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test [':=' test] | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test [':=' test] | star_expr)\n (comp_for | (',' (test [':=' test] | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' or_test [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' 
NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar311.txt | grammar311.txt | Other | 7,511 | 0.95 | 0.076923 | 0.149351 | awesome-app | 777 | 2024-07-31T04:23:20.360433 | BSD-3-Clause | false | 663bd8e6c3008a6849caeb04b084aed8 |
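The expression productions in the grammar — `arith_expr: term (('+'|'-') term)*` and `term: factor (('*'|'@'|'/'|'%'|'//') factor)*` — encode precedence and left-associativity through nesting and repetition. A minimal hand-rolled sketch (assumed, simplified to integers and four operators; the real parser is generated from the grammar, not written like this) shows how each `( ... )*` repetition maps to a loop in a recursive-descent parser:

```python
import re

TOKEN = re.compile(r"\d+|[+\-*/]")

def tokenize(s):
    return TOKEN.findall(s)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    # arith_expr: term (('+'|'-') term)*  -> loop gives left-associativity
    def arith_expr(self):
        value = self.term()
        while self.peek() in ('+', '-'):
            op, rhs = self.next(), self.term()
            value = value + rhs if op == '+' else value - rhs
        return value

    # term: factor (('*'|'/') factor)*  -> binds tighter because it sits lower
    def term(self):
        value = self.factor()
        while self.peek() in ('*', '/'):
            op, rhs = self.next(), self.factor()
            value = value * rhs if op == '*' else value // rhs
        return value

    # factor: NUMBER  (unary +/- and power omitted for brevity)
    def factor(self):
        return int(self.next())

print(Parser(tokenize("2+3*4")).arith_expr())  # 14
```

Because `term` is invoked from inside `arith_expr`, `*` groups before `+` without any explicit precedence table — the grammar's layering does the work.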
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' namedexpr_test NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: 
small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\nlambdef: 'lambda' [varargslist] ':' test\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' 
| 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test [':=' test] | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test [':=' test] | star_expr)\n (comp_for | (',' (test [':=' test] | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' or_test [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' 
NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar312.txt | grammar312.txt | Other | 7,511 | 0.95 | 0.076923 | 0.149351 | vue-tools | 768 | 2024-12-22T02:52:23.367506 | BSD-3-Clause | false | 663bd8e6c3008a6849caeb04b084aed8 |
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' namedexpr_test NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: 
small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\nlambdef: 'lambda' [varargslist] ':' test\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' 
| 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test [':=' test] | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test [':=' test] | star_expr)\n (comp_for | (',' (test [':=' test] | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' or_test [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' 
NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar313.txt | grammar313.txt | Other | 7,511 | 0.95 | 0.076923 | 0.149351 | vue-tools | 781 | 2024-06-29T03:15:57.214751 | GPL-3.0 | false | 663bd8e6c3008a6849caeb04b084aed8 |
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://docs.python.org/devguide/grammar.html\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\ndecorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\n# NOTE: Francisco Souza/Reinoud Elhorst, using ASYNC/'await' keywords instead of\n# skipping python3.5+ compatibility, in favour of 3.7 solution\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\ntfpdef: NAME [':' test]\nvarargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' test]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' 
| '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\ntest: or_test ['if' or_test 'else' test] | lambdef\ntest_nocond: or_test | lambdef_nocond\nlambdef: 'lambda' [varargslist] ':' test\nlambdef_nocond: 'lambda' [varargslist] ':' test_nocond\nor_test: 
and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')\ntestlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test | star_expr)\n (comp_for | (',' (test | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. 
Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' test_nocond [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' NAME\nfstring_expr: '{' (testlist_comp | yield_expr) [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar36.txt | grammar36.txt | Other | 6,948 | 0.95 | 0.082278 | 0.173611 | python-kit | 396 | 2023-12-23T15:43:28.613949 | MIT | false | 065d68f87edf462b782306109ea1a127 |
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://docs.python.org/devguide/grammar.html\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\ndecorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\ntfpdef: NAME [':' test]\nvarargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' test]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the 
interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\ntest: or_test ['if' or_test 'else' test] | lambdef\ntest_nocond: or_test | lambdef_nocond\nlambdef: 'lambda' [varargslist] ':' test\nlambdef_nocond: 'lambda' [varargslist] ':' test_nocond\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op 
expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')\ntestlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test | star_expr)\n (comp_for | (',' (test | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. 
Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' test_nocond [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' NAME\nfstring_expr: '{' (testlist_comp | yield_expr) [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar37.txt | grammar37.txt | Other | 6,804 | 0.95 | 0.083333 | 0.161972 | node-utils | 651 | 2023-09-23T16:46:14.291255 | GPL-3.0 | false | 917d407e58d440f68b98578b37813b16 |
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | 
NEWLINE\nsimple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\ntest_nocond: or_test | lambdef_nocond\nlambdef: 'lambda' [varargslist] ':' test\nlambdef_nocond: 'lambda' [varargslist] ':' test_nocond\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. 
It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test | star_expr)\n (comp_for | (',' (test | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. 
Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' test_nocond [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar38.txt | grammar38.txt | Other | 7,591 | 0.95 | 0.076023 | 0.147436 | python-kit | 994 | 2025-02-26T01:45:46.754005 | GPL-3.0 | false | 374da5df6c0eb76e07ab1edbef5136e4 |
# Grammar for Python\n\n# NOTE WELL: You should also follow all the steps listed at\n# https://devguide.python.org/grammar/\n\n# Start symbols for the grammar:\n# single_input is a single interactive statement;\n# file_input is a module or sequence of commands read from an input file;\n# eval_input is the input for the eval() functions.\n# NB: compound_stmt in single_input is followed by extra NEWLINE!\nsingle_input: NEWLINE | simple_stmt | compound_stmt NEWLINE\nfile_input: stmt* ENDMARKER\neval_input: testlist NEWLINE* ENDMARKER\n\ndecorator: '@' namedexpr_test NEWLINE\ndecorators: decorator+\ndecorated: decorators (classdef | funcdef | async_funcdef)\n\nasync_funcdef: 'async' funcdef\nfuncdef: 'def' NAME parameters ['->' test] ':' suite\n\nparameters: '(' [typedargslist] ')'\ntypedargslist: (\n (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (\n ',' tfpdef ['=' test])* ([',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]])\n | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])\n | '**' tfpdef [',']]] )\n| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [\n '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [',']]]\n | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]\n | '**' tfpdef [','])\n)\ntfpdef: NAME [':' test]\nvarargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [\n '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']]]\n | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]\n | '**' vfpdef [',']\n)\nvfpdef: NAME\n\nstmt: simple_stmt | compound_stmt | NEWLINE\nsimple_stmt: 
small_stmt (';' small_stmt)* [';'] NEWLINE\nsmall_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |\n import_stmt | global_stmt | nonlocal_stmt | assert_stmt)\nexpr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |\n ('=' (yield_expr|testlist_star_expr))*)\nannassign: ':' test ['=' (yield_expr|testlist_star_expr)]\ntestlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']\naugassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |\n '<<=' | '>>=' | '**=' | '//=')\n# For normal and annotated assignments, additional restrictions enforced by the interpreter\ndel_stmt: 'del' exprlist\npass_stmt: 'pass'\nflow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt\nbreak_stmt: 'break'\ncontinue_stmt: 'continue'\nreturn_stmt: 'return' [testlist_star_expr]\nyield_stmt: yield_expr\nraise_stmt: 'raise' [test ['from' test]]\nimport_stmt: import_name | import_from\nimport_name: 'import' dotted_as_names\n# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS\nimport_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)\n 'import' ('*' | '(' import_as_names ')' | import_as_names))\nimport_as_name: NAME ['as' NAME]\ndotted_as_name: dotted_name ['as' NAME]\nimport_as_names: import_as_name (',' import_as_name)* [',']\ndotted_as_names: dotted_as_name (',' dotted_as_name)*\ndotted_name: NAME ('.' 
NAME)*\nglobal_stmt: 'global' NAME (',' NAME)*\nnonlocal_stmt: 'nonlocal' NAME (',' NAME)*\nassert_stmt: 'assert' test [',' test]\n\ncompound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt\nasync_stmt: 'async' (funcdef | with_stmt | for_stmt)\nif_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]\nwhile_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]\nfor_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]\ntry_stmt: ('try' ':' suite\n ((except_clause ':' suite)+\n ['else' ':' suite]\n ['finally' ':' suite] |\n 'finally' ':' suite))\nwith_stmt: 'with' with_item (',' with_item)* ':' suite\nwith_item: test ['as' expr]\n# NB compile.c makes sure that the default except clause is last\nexcept_clause: 'except' [test ['as' NAME]]\nsuite: simple_stmt | NEWLINE INDENT stmt+ DEDENT\n\nnamedexpr_test: test [':=' test]\ntest: or_test ['if' or_test 'else' test] | lambdef\nlambdef: 'lambda' [varargslist] ':' test\nor_test: and_test ('or' and_test)*\nand_test: not_test ('and' not_test)*\nnot_test: 'not' not_test | comparison\ncomparison: expr (comp_op expr)*\n# <> isn't actually a valid comparison operator in Python. It's here for the\n# sake of a __future__ import described in PEP 401 (which really works :-)\ncomp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'\nstar_expr: '*' expr\nexpr: xor_expr ('|' xor_expr)*\nxor_expr: and_expr ('^' and_expr)*\nand_expr: shift_expr ('&' shift_expr)*\nshift_expr: arith_expr (('<<'|'>>') arith_expr)*\narith_expr: term (('+'|'-') term)*\nterm: factor (('*'|'@'|'/'|'%'|'//') factor)*\nfactor: ('+'|'-'|'~') factor | power\npower: atom_expr ['**' factor]\natom_expr: ['await'] atom trailer*\natom: ('(' [yield_expr|testlist_comp] ')' |\n '[' [testlist_comp] ']' |\n '{' [dictorsetmaker] '}' |\n NAME | NUMBER | strings | '...' 
| 'None' | 'True' | 'False')\ntestlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )\ntrailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME\nsubscriptlist: subscript (',' subscript)* [',']\nsubscript: test | [test] ':' [test] [sliceop]\nsliceop: ':' [test]\nexprlist: (expr|star_expr) (',' (expr|star_expr))* [',']\ntestlist: test (',' test)* [',']\ndictorsetmaker: ( ((test ':' test | '**' expr)\n (comp_for | (',' (test ':' test | '**' expr))* [','])) |\n ((test [':=' test] | star_expr)\n (comp_for | (',' (test [':=' test] | star_expr))* [','])) )\n\nclassdef: 'class' NAME ['(' [arglist] ')'] ':' suite\n\narglist: argument (',' argument)* [',']\n\n# The reason that keywords are test nodes instead of NAME is that using NAME\n# results in an ambiguity. ast.c makes sure it's a NAME.\n# "test '=' test" is really "keyword '=' test", but we have no such token.\n# These need to be in a single rule to avoid grammar that is ambiguous\n# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,\n# we explicitly match '*' here, too, to give it proper precedence.\n# Illegal combinations and orderings are blocked in ast.c:\n# multiple (test comp_for) arguments are blocked; keyword unpackings\n# that precede iterable unpackings are blocked; etc.\nargument: ( test [comp_for] |\n test ':=' test |\n test '=' test |\n '**' test |\n '*' test )\n\ncomp_iter: comp_for | comp_if\nsync_comp_for: 'for' exprlist 'in' or_test [comp_iter]\ncomp_for: ['async'] sync_comp_for\ncomp_if: 'if' or_test [comp_iter]\n\n# not used in grammar, but may appear in "node" passed from Parser to Compiler\nencoding_decl: NAME\n\nyield_expr: 'yield' [yield_arg]\nyield_arg: 'from' test | testlist_star_expr\n\nstrings: (STRING | fstring)+\nfstring: FSTRING_START fstring_content* FSTRING_END\nfstring_content: FSTRING_STRING | fstring_expr\nfstring_conversion: '!' 
NAME\nfstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'\nfstring_format_spec: ':' fstring_content*\n | .venv\Lib\site-packages\parso\python\grammar39.txt | grammar39.txt | Other | 7,499 | 0.95 | 0.076923 | 0.149351 | react-lib | 844 | 2024-03-24T03:50:32.159055 | MIT | false | fbbad176c79cc8670f9c2b4a0078b4fe |
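The `comp_op` rule in the grammar above is one of the few places that needs two-token alternatives (`'not' 'in'` and `'is' 'not'`). A minimal stdlib-only sketch of that alternation — the function name and the token-tuple representation are illustrative, not part of parso:

```python
# Single-token comp_op alternatives, including the historical '<>'
# kept only for the PEP 401 __future__ import mentioned in the grammar.
_SINGLE = {'<', '>', '==', '>=', '<=', '<>', '!=', 'in', 'is'}
# Two-token alternatives; order matters ('in not' is not an operator).
_PAIRS = {('not', 'in'), ('is', 'not')}

def is_comp_op(tokens):
    """Return True if the token sequence matches one comp_op alternative."""
    tokens = tuple(tokens)
    if len(tokens) == 1:
        return tokens[0] in _SINGLE
    return tokens in _PAIRS
```

For example, `('is', 'not')` is accepted while the reversed pair `('not', 'is')` is not.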
from parso.python import tree\nfrom parso.python.token import PythonTokenTypes\nfrom parso.parser import BaseParser\n\n\nNAME = PythonTokenTypes.NAME\nINDENT = PythonTokenTypes.INDENT\nDEDENT = PythonTokenTypes.DEDENT\n\n\nclass Parser(BaseParser):\n    """\n    This class is used to parse a Python file; it then divides it into a\n    class structure of different scopes.\n\n    :param pgen_grammar: The grammar object of pgen2. Loaded by load_grammar.\n    """\n\n    node_map = {\n        'expr_stmt': tree.ExprStmt,\n        'classdef': tree.Class,\n        'funcdef': tree.Function,\n        'file_input': tree.Module,\n        'import_name': tree.ImportName,\n        'import_from': tree.ImportFrom,\n        'break_stmt': tree.KeywordStatement,\n        'continue_stmt': tree.KeywordStatement,\n        'return_stmt': tree.ReturnStmt,\n        'raise_stmt': tree.KeywordStatement,\n        'yield_expr': tree.YieldExpr,\n        'del_stmt': tree.KeywordStatement,\n        'pass_stmt': tree.KeywordStatement,\n        'global_stmt': tree.GlobalStmt,\n        'nonlocal_stmt': tree.KeywordStatement,\n        'print_stmt': tree.KeywordStatement,\n        'assert_stmt': tree.AssertStmt,\n        'if_stmt': tree.IfStmt,\n        'with_stmt': tree.WithStmt,\n        'for_stmt': tree.ForStmt,\n        'while_stmt': tree.WhileStmt,\n        'try_stmt': tree.TryStmt,\n        'sync_comp_for': tree.SyncCompFor,\n        # Not sure if this is the best idea, but IMO it's the easiest way to\n        # avoid extreme amounts of work around the subtle difference of 2/3\n        # grammar in list comprehensions.\n        'decorator': tree.Decorator,\n        'lambdef': tree.Lambda,\n        'lambdef_nocond': tree.Lambda,\n        'namedexpr_test': tree.NamedExpr,\n    }\n    default_node = tree.PythonNode\n\n    # Names/Keywords are handled separately\n    _leaf_map = {\n        PythonTokenTypes.STRING: tree.String,\n        PythonTokenTypes.NUMBER: tree.Number,\n        PythonTokenTypes.NEWLINE: tree.Newline,\n        PythonTokenTypes.ENDMARKER: tree.EndMarker,\n        PythonTokenTypes.FSTRING_STRING: tree.FStringString,\n        PythonTokenTypes.FSTRING_START: tree.FStringStart,\n        PythonTokenTypes.FSTRING_END: tree.FStringEnd,\n    }\n\n    def __init__(self, 
pgen_grammar, error_recovery=True, start_nonterminal='file_input'):\n super().__init__(pgen_grammar, start_nonterminal,\n error_recovery=error_recovery)\n\n self.syntax_errors = []\n self._omit_dedent_list = []\n self._indent_counter = 0\n\n def parse(self, tokens):\n if self._error_recovery:\n if self._start_nonterminal != 'file_input':\n raise NotImplementedError\n\n tokens = self._recovery_tokenize(tokens)\n\n return super().parse(tokens)\n\n def convert_node(self, nonterminal, children):\n """\n Convert raw node information to a PythonBaseNode instance.\n\n This is passed to the parser driver which calls it whenever a reduction of a\n grammar rule produces a new complete node, so that the tree is build\n strictly bottom-up.\n """\n try:\n node = self.node_map[nonterminal](children)\n except KeyError:\n if nonterminal == 'suite':\n # We don't want the INDENT/DEDENT in our parser tree. Those\n # leaves are just cancer. They are virtual leaves and not real\n # ones and therefore have pseudo start/end positions and no\n # prefixes. Just ignore them.\n children = [children[0]] + children[2:-1]\n node = self.default_node(nonterminal, children)\n return node\n\n def convert_leaf(self, type, value, prefix, start_pos):\n # print('leaf', repr(value), token.tok_name[type])\n if type == NAME:\n if value in self._pgen_grammar.reserved_syntax_strings:\n return tree.Keyword(value, start_pos, prefix)\n else:\n return tree.Name(value, start_pos, prefix)\n\n return self._leaf_map.get(type, tree.Operator)(value, start_pos, prefix)\n\n def error_recovery(self, token):\n tos_nodes = self.stack[-1].nodes\n if tos_nodes:\n last_leaf = tos_nodes[-1].get_last_leaf()\n else:\n last_leaf = None\n\n if self._start_nonterminal == 'file_input' and \\n (token.type == PythonTokenTypes.ENDMARKER\n or token.type == DEDENT and not last_leaf.value.endswith('\n')\n and not last_leaf.value.endswith('\r')):\n # In Python statements need to end with a newline. 
But since it's\n # possible (and valid in Python) that there's no newline at the\n # end of a file, we have to recover even if the user doesn't want\n # error recovery.\n if self.stack[-1].dfa.from_rule == 'simple_stmt':\n try:\n plan = self.stack[-1].dfa.transitions[PythonTokenTypes.NEWLINE]\n except KeyError:\n pass\n else:\n if plan.next_dfa.is_final and not plan.dfa_pushes:\n # We are ignoring here that the newline would be\n # required for a simple_stmt.\n self.stack[-1].dfa = plan.next_dfa\n self._add_token(token)\n return\n\n if not self._error_recovery:\n return super().error_recovery(token)\n\n def current_suite(stack):\n # For now just discard everything that is not a suite or\n # file_input, if we detect an error.\n for until_index, stack_node in reversed(list(enumerate(stack))):\n # `suite` can sometimes be only simple_stmt, not stmt.\n if stack_node.nonterminal == 'file_input':\n break\n elif stack_node.nonterminal == 'suite':\n # In the case where we just have a newline we don't want to\n # do error recovery here. In all other cases, we want to do\n # error recovery.\n if len(stack_node.nodes) != 1:\n break\n return until_index\n\n until_index = current_suite(self.stack)\n\n if self._stack_removal(until_index + 1):\n self._add_token(token)\n else:\n typ, value, start_pos, prefix = token\n if typ == INDENT:\n # For every deleted INDENT we have to delete a DEDENT as well.\n # Otherwise the parser will get into trouble and DEDENT too early.\n self._omit_dedent_list.append(self._indent_counter)\n\n error_leaf = tree.PythonErrorLeaf(typ.name, value, start_pos, prefix)\n self.stack[-1].nodes.append(error_leaf)\n\n tos = self.stack[-1]\n if tos.nonterminal == 'suite':\n # Need at least one statement in the suite. 
This happened with the\n            # error recovery above.\n            try:\n                tos.dfa = tos.dfa.arcs['stmt']\n            except KeyError:\n                # We're already in a final state.\n                pass\n\n    def _stack_removal(self, start_index):\n        all_nodes = [node for stack_node in self.stack[start_index:] for node in stack_node.nodes]\n\n        if all_nodes:\n            node = tree.PythonErrorNode(all_nodes)\n            self.stack[start_index - 1].nodes.append(node)\n\n        self.stack[start_index:] = []\n        return bool(all_nodes)\n\n    def _recovery_tokenize(self, tokens):\n        for token in tokens:\n            typ = token[0]\n            if typ == DEDENT:\n                # We need to count indents, because if we just omit any DEDENT,\n                # we might omit them in the wrong place.\n                o = self._omit_dedent_list\n                if o and o[-1] == self._indent_counter:\n                    o.pop()\n                    self._indent_counter -= 1\n                    continue\n\n                self._indent_counter -= 1\n            elif typ == INDENT:\n                self._indent_counter += 1\n            yield token\n | .venv\Lib\site-packages\parso\python\parser.py | parser.py | Python | 8,108 | 0.95 | 0.199029 | 0.158192 | awesome-app | 816 | 2024-10-13T00:52:19.802086 | Apache-2.0 | false | bc3cf7de2946f351c106cd9f45674edb
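The `node_map`/`default_node` dispatch in `convert_node` above can be illustrated with a self-contained sketch; the stand-in classes below are illustrative only — parso's real node classes live in `parso.python.tree`:

```python
class DefaultNode:
    """Stand-in for parso's generic PythonNode."""
    def __init__(self, nonterminal, children):
        self.type = nonterminal
        self.children = children

class ReturnStmt(DefaultNode):
    """Stand-in for a dedicated node class from the map."""
    def __init__(self, children):
        super().__init__('return_stmt', children)

NODE_MAP = {'return_stmt': ReturnStmt}

def convert_node(nonterminal, children):
    # Known nonterminals get a dedicated class; everything else
    # falls through to the default node type.
    try:
        return NODE_MAP[nonterminal](children)
    except KeyError:
        if nonterminal == 'suite':
            # Drop the virtual INDENT/DEDENT leaves, as parser.py does:
            # keep the leading NEWLINE and the statements in between.
            children = [children[0]] + children[2:-1]
        return DefaultNode(nonterminal, children)
```

Applied to a `suite` of `[NEWLINE, INDENT, stmt, DEDENT]`, only the newline and the statement survive, matching the comment in `convert_node`.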
import re\nfrom contextlib import contextmanager\nfrom typing import Tuple\n\nfrom parso.python.errors import ErrorFinder, ErrorFinderConfig\nfrom parso.normalizer import Rule\nfrom parso.python.tree import Flow, Scope\n\n\n_IMPORT_TYPES = ('import_name', 'import_from')\n_SUITE_INTRODUCERS = ('classdef', 'funcdef', 'if_stmt', 'while_stmt',\n 'for_stmt', 'try_stmt', 'with_stmt')\n_NON_STAR_TYPES = ('term', 'import_from', 'power')\n_OPENING_BRACKETS = '(', '[', '{'\n_CLOSING_BRACKETS = ')', ']', '}'\n_FACTOR = '+', '-', '~'\n_ALLOW_SPACE = '*', '+', '-', '**', '/', '//', '@'\n_BITWISE_OPERATOR = '<<', '>>', '|', '&', '^'\n_NEEDS_SPACE: Tuple[str, ...] = (\n '=', '%', '->',\n '<', '>', '==', '>=', '<=', '<>', '!=',\n '+=', '-=', '*=', '@=', '/=', '%=', '&=', '|=', '^=', '<<=',\n '>>=', '**=', '//=')\n_NEEDS_SPACE += _BITWISE_OPERATOR\n_IMPLICIT_INDENTATION_TYPES = ('dictorsetmaker', 'argument')\n_POSSIBLE_SLICE_PARENTS = ('subscript', 'subscriptlist', 'sliceop')\n\n\nclass IndentationTypes:\n VERTICAL_BRACKET = object()\n HANGING_BRACKET = object()\n BACKSLASH = object()\n SUITE = object()\n IMPLICIT = object()\n\n\nclass IndentationNode(object):\n type = IndentationTypes.SUITE\n\n def __init__(self, config, indentation, parent=None):\n self.bracket_indentation = self.indentation = indentation\n self.parent = parent\n\n def __repr__(self):\n return '<%s>' % self.__class__.__name__\n\n def get_latest_suite_node(self):\n n = self\n while n is not None:\n if n.type == IndentationTypes.SUITE:\n return n\n\n n = n.parent\n\n\nclass BracketNode(IndentationNode):\n def __init__(self, config, leaf, parent, in_suite_introducer=False):\n self.leaf = leaf\n\n # Figure out here what the indentation is. 
For chained brackets\n # we can basically use the previous indentation.\n previous_leaf = leaf\n n = parent\n if n.type == IndentationTypes.IMPLICIT:\n n = n.parent\n while True:\n if hasattr(n, 'leaf') and previous_leaf.line != n.leaf.line:\n break\n\n previous_leaf = previous_leaf.get_previous_leaf()\n if not isinstance(n, BracketNode) or previous_leaf != n.leaf:\n break\n n = n.parent\n parent_indentation = n.indentation\n\n next_leaf = leaf.get_next_leaf()\n if '\n' in next_leaf.prefix or '\r' in next_leaf.prefix:\n # This implies code like:\n # foobarbaz(\n # a,\n # b,\n # )\n self.bracket_indentation = parent_indentation \\n + config.closing_bracket_hanging_indentation\n self.indentation = parent_indentation + config.indentation\n self.type = IndentationTypes.HANGING_BRACKET\n else:\n # Implies code like:\n # foobarbaz(\n # a,\n # b,\n # )\n expected_end_indent = leaf.end_pos[1]\n if '\t' in config.indentation:\n self.indentation = None\n else:\n self.indentation = ' ' * expected_end_indent\n self.bracket_indentation = self.indentation\n self.type = IndentationTypes.VERTICAL_BRACKET\n\n if in_suite_introducer and parent.type == IndentationTypes.SUITE \\n and self.indentation == parent_indentation + config.indentation:\n self.indentation += config.indentation\n # The closing bracket should have the same indentation.\n self.bracket_indentation = self.indentation\n self.parent = parent\n\n\nclass ImplicitNode(BracketNode):\n """\n Implicit indentation after keyword arguments, default arguments,\n annotations and dict values.\n """\n def __init__(self, config, leaf, parent):\n super().__init__(config, leaf, parent)\n self.type = IndentationTypes.IMPLICIT\n\n next_leaf = leaf.get_next_leaf()\n if leaf == ':' and '\n' not in next_leaf.prefix and '\r' not in next_leaf.prefix:\n self.indentation += ' '\n\n\nclass BackslashNode(IndentationNode):\n type = IndentationTypes.BACKSLASH\n\n def __init__(self, config, parent_indentation, containing_leaf, spacing, 
parent=None):\n expr_stmt = containing_leaf.search_ancestor('expr_stmt')\n if expr_stmt is not None:\n equals = expr_stmt.children[-2]\n\n if '\t' in config.indentation:\n # TODO unite with the code of BracketNode\n self.indentation = None\n else:\n # If the backslash follows the equals, use normal indentation\n # otherwise it should align with the equals.\n if equals.end_pos == spacing.start_pos:\n self.indentation = parent_indentation + config.indentation\n else:\n # +1 because there is a space.\n self.indentation = ' ' * (equals.end_pos[1] + 1)\n else:\n self.indentation = parent_indentation + config.indentation\n self.bracket_indentation = self.indentation\n self.parent = parent\n\n\ndef _is_magic_name(name):\n return name.value.startswith('__') and name.value.endswith('__')\n\n\nclass PEP8Normalizer(ErrorFinder):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._previous_part = None\n self._previous_leaf = None\n self._on_newline = True\n self._newline_count = 0\n self._wanted_newline_count = None\n self._max_new_lines_in_prefix = 0\n self._new_statement = True\n self._implicit_indentation_possible = False\n # The top of stack of the indentation nodes.\n self._indentation_tos = self._last_indentation_tos = \\n IndentationNode(self._config, indentation='')\n self._in_suite_introducer = False\n\n if ' ' in self._config.indentation:\n self._indentation_type = 'spaces'\n self._wrong_indentation_char = '\t'\n else:\n self._indentation_type = 'tabs'\n self._wrong_indentation_char = ' '\n\n @contextmanager\n def visit_node(self, node):\n with super().visit_node(node):\n with self._visit_node(node):\n yield\n\n @contextmanager\n def _visit_node(self, node):\n typ = node.type\n\n if typ in 'import_name':\n names = node.get_defined_names()\n if len(names) > 1:\n for name in names[:1]:\n self.add_issue(name, 401, 'Multiple imports on one line')\n elif typ == 'lambdef':\n expr_stmt = node.parent\n # Check if it's simply defining a single 
name, not something like\n # foo.bar or x[1], where using a lambda could make more sense.\n if expr_stmt.type == 'expr_stmt' and any(n.type == 'name'\n for n in expr_stmt.children[:-2:2]):\n self.add_issue(node, 731, 'Do not assign a lambda expression, use a def')\n elif typ == 'try_stmt':\n for child in node.children:\n # Here we can simply check if it's an except, because otherwise\n # it would be an except_clause.\n if child.type == 'keyword' and child.value == 'except':\n self.add_issue(child, 722, 'Do not use bare except, specify exception instead')\n elif typ == 'comparison':\n for child in node.children:\n if child.type not in ('atom_expr', 'power'):\n continue\n if len(child.children) > 2:\n continue\n trailer = child.children[1]\n atom = child.children[0]\n if trailer.type == 'trailer' and atom.type == 'name' \\n and atom.value == 'type':\n self.add_issue(node, 721, "Do not compare types, use 'isinstance()")\n break\n elif typ == 'file_input':\n endmarker = node.children[-1]\n prev = endmarker.get_previous_leaf()\n prefix = endmarker.prefix\n if (not prefix.endswith('\n') and not prefix.endswith('\r') and (\n prefix or prev is None or prev.value not in {'\n', '\r\n', '\r'})):\n self.add_issue(endmarker, 292, "No newline at end of file")\n\n if typ in _IMPORT_TYPES:\n simple_stmt = node.parent\n module = simple_stmt.parent\n if module.type == 'file_input':\n index = module.children.index(simple_stmt)\n for child in module.children[:index]:\n children = [child]\n if child.type == 'simple_stmt':\n # Remove the newline.\n children = child.children[:-1]\n\n found_docstring = False\n for c in children:\n if c.type == 'string' and not found_docstring:\n continue\n found_docstring = True\n\n if c.type == 'expr_stmt' and \\n all(_is_magic_name(n) for n in c.get_defined_names()):\n continue\n\n if c.type in _IMPORT_TYPES or isinstance(c, Flow):\n continue\n\n self.add_issue(node, 402, 'Module level import not at top of file')\n break\n else:\n continue\n break\n\n 
implicit_indentation_possible = typ in _IMPLICIT_INDENTATION_TYPES\n in_introducer = typ in _SUITE_INTRODUCERS\n if in_introducer:\n self._in_suite_introducer = True\n elif typ == 'suite':\n if self._indentation_tos.type == IndentationTypes.BACKSLASH:\n self._indentation_tos = self._indentation_tos.parent\n\n self._indentation_tos = IndentationNode(\n self._config,\n self._indentation_tos.indentation + self._config.indentation,\n parent=self._indentation_tos\n )\n elif implicit_indentation_possible:\n self._implicit_indentation_possible = True\n yield\n if typ == 'suite':\n assert self._indentation_tos.type == IndentationTypes.SUITE\n self._indentation_tos = self._indentation_tos.parent\n # If we dedent, no lines are needed anymore.\n self._wanted_newline_count = None\n elif implicit_indentation_possible:\n self._implicit_indentation_possible = False\n if self._indentation_tos.type == IndentationTypes.IMPLICIT:\n self._indentation_tos = self._indentation_tos.parent\n elif in_introducer:\n self._in_suite_introducer = False\n if typ in ('classdef', 'funcdef'):\n self._wanted_newline_count = self._get_wanted_blank_lines_count()\n\n def _check_tabs_spaces(self, spacing):\n if self._wrong_indentation_char in spacing.value:\n self.add_issue(spacing, 101, 'Indentation contains ' + self._indentation_type)\n return True\n return False\n\n def _get_wanted_blank_lines_count(self):\n suite_node = self._indentation_tos.get_latest_suite_node()\n return int(suite_node.parent is None) + 1\n\n def _reset_newlines(self, spacing, leaf, is_comment=False):\n self._max_new_lines_in_prefix = \\n max(self._max_new_lines_in_prefix, self._newline_count)\n\n wanted = self._wanted_newline_count\n if wanted is not None:\n # Need to substract one\n blank_lines = self._newline_count - 1\n if wanted > blank_lines and leaf.type != 'endmarker':\n # In case of a comment we don't need to add the issue, yet.\n if not is_comment:\n # TODO end_pos wrong.\n code = 302 if wanted == 2 else 301\n message = 
"expected %s blank line, found %s" \\n % (wanted, blank_lines)\n self.add_issue(spacing, code, message)\n self._wanted_newline_count = None\n else:\n self._wanted_newline_count = None\n\n if not is_comment:\n wanted = self._get_wanted_blank_lines_count()\n actual = self._max_new_lines_in_prefix - 1\n\n val = leaf.value\n needs_lines = (\n val == '@' and leaf.parent.type == 'decorator'\n or (\n val == 'class'\n or val == 'async' and leaf.get_next_leaf() == 'def'\n or val == 'def' and self._previous_leaf != 'async'\n ) and leaf.parent.parent.type != 'decorated'\n )\n if needs_lines and actual < wanted:\n func_or_cls = leaf.parent\n suite = func_or_cls.parent\n if suite.type == 'decorated':\n suite = suite.parent\n\n # The first leaf of a file or a suite should not need blank\n # lines.\n if suite.children[int(suite.type == 'suite')] != func_or_cls:\n code = 302 if wanted == 2 else 301\n message = "expected %s blank line, found %s" \\n % (wanted, actual)\n self.add_issue(spacing, code, message)\n\n self._max_new_lines_in_prefix = 0\n\n self._newline_count = 0\n\n def visit_leaf(self, leaf):\n super().visit_leaf(leaf)\n for part in leaf._split_prefix():\n if part.type == 'spacing':\n # This part is used for the part call after for.\n break\n self._visit_part(part, part.create_spacing_part(), leaf)\n\n self._analyse_non_prefix(leaf)\n self._visit_part(leaf, part, leaf)\n\n # Cleanup\n self._last_indentation_tos = self._indentation_tos\n\n self._new_statement = leaf.type == 'newline'\n\n # TODO does this work? 
with brackets and stuff?\n if leaf.type == 'newline' and \\n self._indentation_tos.type == IndentationTypes.BACKSLASH:\n self._indentation_tos = self._indentation_tos.parent\n\n if leaf.value == ':' and leaf.parent.type in _SUITE_INTRODUCERS:\n self._in_suite_introducer = False\n elif leaf.value == 'elif':\n self._in_suite_introducer = True\n\n if not self._new_statement:\n self._reset_newlines(part, leaf)\n self._max_blank_lines = 0\n\n self._previous_leaf = leaf\n\n return leaf.value\n\n def _visit_part(self, part, spacing, leaf):\n value = part.value\n type_ = part.type\n if type_ == 'error_leaf':\n return\n\n if value == ',' and part.parent.type == 'dictorsetmaker':\n self._indentation_tos = self._indentation_tos.parent\n\n node = self._indentation_tos\n\n if type_ == 'comment':\n if value.startswith('##'):\n # Whole blocks of # should not raise an error.\n if value.lstrip('#'):\n self.add_issue(part, 266, "Too many leading '#' for block comment.")\n elif self._on_newline:\n if not re.match(r'#:? ', value) and not value == '#' \\n and not (value.startswith('#!') and part.start_pos == (1, 0)):\n self.add_issue(part, 265, "Block comment should start with '# '")\n else:\n if not re.match(r'#:? [^ ]', value):\n self.add_issue(part, 262, "Inline comment should start with '# '")\n\n self._reset_newlines(spacing, leaf, is_comment=True)\n elif type_ == 'newline':\n if self._newline_count > self._get_wanted_blank_lines_count():\n self.add_issue(part, 303, "Too many blank lines (%s)" % self._newline_count)\n elif leaf in ('def', 'class') \\n and leaf.parent.parent.type == 'decorated':\n self.add_issue(part, 304, "Blank lines found after function decorator")\n\n self._newline_count += 1\n\n if type_ == 'backslash':\n # TODO is this enough checking? 
What about ==?\n if node.type != IndentationTypes.BACKSLASH:\n if node.type != IndentationTypes.SUITE:\n self.add_issue(part, 502, 'The backslash is redundant between brackets')\n else:\n indentation = node.indentation\n if self._in_suite_introducer and node.type == IndentationTypes.SUITE:\n indentation += self._config.indentation\n\n self._indentation_tos = BackslashNode(\n self._config,\n indentation,\n part,\n spacing,\n parent=self._indentation_tos\n )\n elif self._on_newline:\n indentation = spacing.value\n if node.type == IndentationTypes.BACKSLASH \\n and self._previous_part.type == 'newline':\n self._indentation_tos = self._indentation_tos.parent\n\n if not self._check_tabs_spaces(spacing):\n should_be_indentation = node.indentation\n if type_ == 'comment':\n # Comments can be dedented. So we have to care for that.\n n = self._last_indentation_tos\n while True:\n if len(indentation) > len(n.indentation):\n break\n\n should_be_indentation = n.indentation\n\n self._last_indentation_tos = n\n if n == node:\n break\n n = n.parent\n\n if self._new_statement:\n if type_ == 'newline':\n if indentation:\n self.add_issue(spacing, 291, 'Trailing whitespace')\n elif indentation != should_be_indentation:\n s = '%s %s' % (len(self._config.indentation), self._indentation_type)\n self.add_issue(part, 111, 'Indentation is not a multiple of ' + s)\n else:\n if value in '])}':\n should_be_indentation = node.bracket_indentation\n else:\n should_be_indentation = node.indentation\n if self._in_suite_introducer and indentation == \\n node.get_latest_suite_node().indentation \\n + self._config.indentation:\n self.add_issue(part, 129, "Line with same indent as next logical block")\n elif indentation != should_be_indentation:\n if not self._check_tabs_spaces(spacing) and part.value not in \\n {'\n', '\r\n', '\r'}:\n if value in '])}':\n if node.type == IndentationTypes.VERTICAL_BRACKET:\n self.add_issue(\n part,\n 124,\n "Closing bracket does not match visual indentation"\n )\n 
else:\n self.add_issue(\n part,\n 123,\n "Losing bracket does not match "\n "indentation of opening bracket's line"\n )\n else:\n if len(indentation) < len(should_be_indentation):\n if node.type == IndentationTypes.VERTICAL_BRACKET:\n self.add_issue(\n part,\n 128,\n 'Continuation line under-indented for visual indent'\n )\n elif node.type == IndentationTypes.BACKSLASH:\n self.add_issue(\n part,\n 122,\n 'Continuation line missing indentation or outdented'\n )\n elif node.type == IndentationTypes.IMPLICIT:\n self.add_issue(part, 135, 'xxx')\n else:\n self.add_issue(\n part,\n 121,\n 'Continuation line under-indented for hanging indent'\n )\n else:\n if node.type == IndentationTypes.VERTICAL_BRACKET:\n self.add_issue(\n part,\n 127,\n 'Continuation line over-indented for visual indent'\n )\n elif node.type == IndentationTypes.IMPLICIT:\n self.add_issue(part, 136, 'xxx')\n else:\n self.add_issue(\n part,\n 126,\n 'Continuation line over-indented for hanging indent'\n )\n else:\n self._check_spacing(part, spacing)\n\n self._check_line_length(part, spacing)\n # -------------------------------\n # Finalizing. 
Updating the state.\n # -------------------------------\n if value and value in '()[]{}' and type_ != 'error_leaf' \\n and part.parent.type != 'error_node':\n if value in _OPENING_BRACKETS:\n self._indentation_tos = BracketNode(\n self._config, part,\n parent=self._indentation_tos,\n in_suite_introducer=self._in_suite_introducer\n )\n else:\n assert node.type != IndentationTypes.IMPLICIT\n self._indentation_tos = self._indentation_tos.parent\n elif value in ('=', ':') and self._implicit_indentation_possible \\n and part.parent.type in _IMPLICIT_INDENTATION_TYPES:\n indentation = node.indentation\n self._indentation_tos = ImplicitNode(\n self._config, part, parent=self._indentation_tos\n )\n\n self._on_newline = type_ in ('newline', 'backslash', 'bom')\n\n self._previous_part = part\n self._previous_spacing = spacing\n\n def _check_line_length(self, part, spacing):\n if part.type == 'backslash':\n last_column = part.start_pos[1] + 1\n else:\n last_column = part.end_pos[1]\n if last_column > self._config.max_characters \\n and spacing.start_pos[1] <= self._config.max_characters:\n # Special case for long URLs in multi-line docstrings or comments,\n # but still report the error when the 72 first chars are whitespaces.\n report = True\n if part.type == 'comment':\n splitted = part.value[1:].split()\n if len(splitted) == 1 \\n and (part.end_pos[1] - len(splitted[0])) < 72:\n report = False\n if report:\n self.add_issue(\n part,\n 501,\n 'Line too long (%s > %s characters)' %\n (last_column, self._config.max_characters),\n )\n\n def _check_spacing(self, part, spacing):\n def add_if_spaces(*args):\n if spaces:\n return self.add_issue(*args)\n\n def add_not_spaces(*args):\n if not spaces:\n return self.add_issue(*args)\n\n spaces = spacing.value\n prev = self._previous_part\n if prev is not None and prev.type == 'error_leaf' or part.type == 'error_leaf':\n return\n\n type_ = part.type\n if '\t' in spaces:\n self.add_issue(spacing, 223, 'Used tab to separate tokens')\n elif 
type_ == 'comment':\n if len(spaces) < self._config.spaces_before_comment:\n self.add_issue(spacing, 261, 'At least two spaces before inline comment')\n elif type_ == 'newline':\n add_if_spaces(spacing, 291, 'Trailing whitespace')\n elif len(spaces) > 1:\n self.add_issue(spacing, 221, 'Multiple spaces used')\n else:\n if prev in _OPENING_BRACKETS:\n message = "Whitespace after '%s'" % part.value\n add_if_spaces(spacing, 201, message)\n elif part in _CLOSING_BRACKETS:\n message = "Whitespace before '%s'" % part.value\n add_if_spaces(spacing, 202, message)\n elif part in (',', ';') or part == ':' \\n and part.parent.type not in _POSSIBLE_SLICE_PARENTS:\n message = "Whitespace before '%s'" % part.value\n add_if_spaces(spacing, 203, message)\n elif prev == ':' and prev.parent.type in _POSSIBLE_SLICE_PARENTS:\n pass # TODO\n elif prev in (',', ';', ':'):\n add_not_spaces(spacing, 231, "missing whitespace after '%s'")\n elif part == ':': # Is a subscript\n # TODO\n pass\n elif part in ('*', '**') and part.parent.type not in _NON_STAR_TYPES \\n or prev in ('*', '**') \\n and prev.parent.type not in _NON_STAR_TYPES:\n # TODO\n pass\n elif prev in _FACTOR and prev.parent.type == 'factor':\n pass\n elif prev == '@' and prev.parent.type == 'decorator':\n pass # TODO should probably raise an error if there's a space here\n elif part in _NEEDS_SPACE or prev in _NEEDS_SPACE:\n if part == '=' and part.parent.type in ('argument', 'param') \\n or prev == '=' and prev.parent.type in ('argument', 'param'):\n if part == '=':\n param = part.parent\n else:\n param = prev.parent\n if param.type == 'param' and param.annotation:\n add_not_spaces(spacing, 252, 'Expected spaces around annotation equals')\n else:\n add_if_spaces(\n spacing,\n 251,\n 'Unexpected spaces around keyword / parameter equals'\n )\n elif part in _BITWISE_OPERATOR or prev in _BITWISE_OPERATOR:\n add_not_spaces(\n spacing,\n 227,\n 'Missing whitespace around bitwise or shift operator'\n )\n elif part == '%' or prev == 
'%':\n add_not_spaces(spacing, 228, 'Missing whitespace around modulo operator')\n else:\n message_225 = 'Missing whitespace between tokens'\n add_not_spaces(spacing, 225, message_225)\n elif type_ == 'keyword' or prev.type == 'keyword':\n add_not_spaces(spacing, 275, 'Missing whitespace around keyword')\n else:\n prev_spacing = self._previous_spacing\n if prev in _ALLOW_SPACE and spaces != prev_spacing.value \\n and '\n' not in self._previous_leaf.prefix \\n and '\r' not in self._previous_leaf.prefix:\n message = "Whitespace before operator doesn't match with whitespace after"\n self.add_issue(spacing, 229, message)\n\n if spaces and part not in _ALLOW_SPACE and prev not in _ALLOW_SPACE:\n message_225 = 'Missing whitespace between tokens'\n # self.add_issue(spacing, 225, message_225)\n # TODO why only brackets?\n if part in _OPENING_BRACKETS:\n message = "Whitespace before '%s'" % part.value\n add_if_spaces(spacing, 211, message)\n\n def _analyse_non_prefix(self, leaf):\n typ = leaf.type\n if typ == 'name' and leaf.value in ('l', 'O', 'I'):\n if leaf.is_definition():\n message = "Do not define %s named 'l', 'O', or 'I' one line"\n if leaf.parent.type == 'class' and leaf.parent.name == leaf:\n self.add_issue(leaf, 742, message % 'classes')\n elif leaf.parent.type == 'function' and leaf.parent.name == leaf:\n self.add_issue(leaf, 743, message % 'function')\n else:\n self.add_issuadd_issue(741, message % 'variables', leaf)\n elif leaf.value == ':':\n if isinstance(leaf.parent, (Flow, Scope)) and leaf.parent.type != 'lambdef':\n next_leaf = leaf.get_next_leaf()\n if next_leaf.type != 'newline':\n if leaf.parent.type == 'funcdef':\n self.add_issue(next_leaf, 704, 'Multiple statements on one line (def)')\n else:\n self.add_issue(next_leaf, 701, 'Multiple statements on one line (colon)')\n elif leaf.value == ';':\n if leaf.get_next_leaf().type in ('newline', 'endmarker'):\n self.add_issue(leaf, 703, 'Statement ends with a semicolon')\n else:\n self.add_issue(leaf, 702, 
'Multiple statements on one line (semicolon)')\n elif leaf.value in ('==', '!='):\n comparison = leaf.parent\n index = comparison.children.index(leaf)\n left = comparison.children[index - 1]\n right = comparison.children[index + 1]\n for node in left, right:\n if node.type == 'keyword' or node.type == 'name':\n if node.value == 'None':\n message = "comparison to None should be 'if cond is None:'"\n self.add_issue(leaf, 711, message)\n break\n elif node.value in ('True', 'False'):\n message = "comparison to False/True should be " \\n "'if cond is True:' or 'if cond:'"\n self.add_issue(leaf, 712, message)\n break\n elif leaf.value in ('in', 'is'):\n comparison = leaf.parent\n if comparison.type == 'comparison' and comparison.parent.type == 'not_test':\n if leaf.value == 'in':\n self.add_issue(leaf, 713, "test for membership should be 'not in'")\n else:\n self.add_issue(leaf, 714, "test for object identity should be 'is not'")\n elif typ == 'string':\n # Checking multiline strings\n for i, line in enumerate(leaf.value.splitlines()[1:]):\n indentation = re.match(r'[ \t]*', line).group(0)\n start_pos = leaf.line + i, len(indentation)\n # TODO check multiline indentation.\n start_pos\n elif typ == 'endmarker':\n if self._newline_count >= 2:\n self.add_issue(leaf, 391, 'Blank line at end of file')\n\n def add_issue(self, node, code, message):\n if self._previous_leaf is not None:\n if self._previous_leaf.search_ancestor('error_node') is not None:\n return\n if self._previous_leaf.type == 'error_leaf':\n return\n if node.search_ancestor('error_node') is not None:\n return\n if code in (901, 903):\n # 901 and 903 are raised by the ErrorFinder.\n super().add_issue(node, code, message)\n else:\n # Skip ErrorFinder here, because it has custom behavior.\n super(ErrorFinder, self).add_issue(node, code, message)\n\n\nclass PEP8NormalizerConfig(ErrorFinderConfig):\n normalizer_class = PEP8Normalizer\n """\n Normalizing to PEP8. 
Not really implemented, yet.\n """\n def __init__(self, indentation=' ' * 4, hanging_indentation=None,\n max_characters=79, spaces_before_comment=2):\n self.indentation = indentation\n if hanging_indentation is None:\n hanging_indentation = indentation\n self.hanging_indentation = hanging_indentation\n self.closing_bracket_hanging_indentation = ''\n self.break_after_binary = False\n self.max_characters = max_characters\n self.spaces_before_comment = spaces_before_comment\n\n\n# TODO this is not yet ready.\n# @PEP8Normalizer.register_rule(type='endmarker')\nclass BlankLineAtEnd(Rule):\n code = 392\n message = 'Blank line at end of file'\n\n def is_issue(self, leaf):\n return self._newline_count >= 2\n | .venv\Lib\site-packages\parso\python\pep8.py | pep8.py | Python | 33,779 | 0.95 | 0.237288 | 0.073314 | vue-tools | 439 | 2024-12-18T18:14:27.971245 | MIT | false | 0553e156d2a1cb51287dacdeb076d69b |
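The `PEP8NormalizerConfig` above is how the normalizer gets wired into a grammar. A minimal usage sketch, assuming an installed `parso`; note that `_get_normalizer_issues` is an internal `Grammar` helper (there is no public PEP8 API yet, as the docstring says):

```python
import parso
from parso.python.pep8 import PEP8NormalizerConfig

grammar = parso.load_grammar()
module = grammar.parse("x=1\n")  # missing whitespace around '='
# Internal helper: runs the PEP8Normalizer over the tree and collects issues.
issues = grammar._get_normalizer_issues(module, PEP8NormalizerConfig())
for issue in issues:
    print(issue.code, issue.start_pos)
```

Each issue carries a pycodestyle-style numeric code (e.g. 225 for missing whitespace between tokens) and a `(line, column)` position.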
import re\nfrom codecs import BOM_UTF8\nfrom typing import Tuple\n\nfrom parso.python.tokenize import group\n\nunicode_bom = BOM_UTF8.decode('utf-8')\n\n\nclass PrefixPart:\n def __init__(self, leaf, typ, value, spacing='', start_pos=None):\n assert start_pos is not None\n self.parent = leaf\n self.type = typ\n self.value = value\n self.spacing = spacing\n self.start_pos: Tuple[int, int] = start_pos\n\n @property\n def end_pos(self) -> Tuple[int, int]:\n if self.value.endswith('\n') or self.value.endswith('\r'):\n return self.start_pos[0] + 1, 0\n if self.value == unicode_bom:\n # The bom doesn't have a length at the start of a Python file.\n return self.start_pos\n return self.start_pos[0], self.start_pos[1] + len(self.value)\n\n def create_spacing_part(self):\n column = self.start_pos[1] - len(self.spacing)\n return PrefixPart(\n self.parent, 'spacing', self.spacing,\n start_pos=(self.start_pos[0], column)\n )\n\n def __repr__(self):\n return '%s(%s, %s, %s)' % (\n self.__class__.__name__,\n self.type,\n repr(self.value),\n self.start_pos\n )\n\n def search_ancestor(self, *node_types):\n node = self.parent\n while node is not None:\n if node.type in node_types:\n return node\n node = node.parent\n return None\n\n\n_comment = r'#[^\n\r\f]*'\n_backslash = r'\\\r?\n|\\\r'\n_newline = r'\r?\n|\r'\n_form_feed = r'\f'\n_only_spacing = '$'\n_spacing = r'[ \t]*'\n_bom = unicode_bom\n\n_regex = group(\n _comment, _backslash, _newline, _form_feed, _only_spacing, _bom,\n capture=True\n)\n_regex = re.compile(group(_spacing, capture=True) + _regex)\n\n\n_types = {\n '#': 'comment',\n '\\': 'backslash',\n '\f': 'formfeed',\n '\n': 'newline',\n '\r': 'newline',\n unicode_bom: 'bom'\n}\n\n\ndef split_prefix(leaf, start_pos):\n line, column = start_pos\n start = 0\n value = spacing = ''\n bom = False\n while start != len(leaf.prefix):\n match = _regex.match(leaf.prefix, start)\n spacing = match.group(1)\n value = match.group(2)\n if not value:\n break\n type_ = _types[value[0]]\n 
yield PrefixPart(\n leaf, type_, value, spacing,\n start_pos=(line, column + start - int(bom) + len(spacing))\n )\n if type_ == 'bom':\n bom = True\n\n start = match.end(0)\n if value.endswith('\n') or value.endswith('\r'):\n line += 1\n column = -start\n\n if value:\n spacing = ''\n yield PrefixPart(\n leaf, 'spacing', spacing,\n start_pos=(line, column + start)\n )\n | .venv\Lib\site-packages\parso\python\prefix.py | prefix.py | Python | 2,743 | 0.95 | 0.150943 | 0.011236 | awesome-app | 569 | 2025-04-10T14:13:05.475903 | Apache-2.0 | false | 891a4af1bd162658e962bd465cef624c |
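`split_prefix` is what turns a leaf's raw `prefix` string into the typed `PrefixPart` objects defined above. A small sketch, assuming `parso` is importable; `_split_prefix()` is the internal leaf helper that calls into this module:

```python
import parso

module = parso.parse("# a comment\nx = 1")
leaf = module.get_first_leaf()   # the Name 'x'; the comment lives in its prefix
print(repr(leaf.prefix))         # "# a comment\n"
# Internal helper: yields one PrefixPart per comment/newline/spacing chunk.
for part in leaf._split_prefix():
    print(part.type, repr(part.value), part.start_pos)
```

The final part is always a trailing `'spacing'` part, which is why consumers like the PEP8 normalizer can rely on one spacing part per prefix.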
from __future__ import absolute_import\n\nfrom enum import Enum\n\n\nclass TokenType:\n name: str\n contains_syntax: bool\n\n def __init__(self, name: str, contains_syntax: bool = False):\n self.name = name\n self.contains_syntax = contains_syntax\n\n def __repr__(self):\n return '%s(%s)' % (self.__class__.__name__, self.name)\n\n\nclass PythonTokenTypes(Enum):\n STRING = TokenType('STRING')\n NUMBER = TokenType('NUMBER')\n NAME = TokenType('NAME', contains_syntax=True)\n ERRORTOKEN = TokenType('ERRORTOKEN')\n NEWLINE = TokenType('NEWLINE')\n INDENT = TokenType('INDENT')\n DEDENT = TokenType('DEDENT')\n ERROR_DEDENT = TokenType('ERROR_DEDENT')\n FSTRING_STRING = TokenType('FSTRING_STRING')\n FSTRING_START = TokenType('FSTRING_START')\n FSTRING_END = TokenType('FSTRING_END')\n OP = TokenType('OP', contains_syntax=True)\n ENDMARKER = TokenType('ENDMARKER')\n | .venv\Lib\site-packages\parso\python\token.py | token.py | Python | 909 | 0.85 | 0.129032 | 0 | node-utils | 339 | 2023-11-04T06:30:29.785847 | BSD-3-Clause | false | 455755453691804e5b3e6db29e4d4313 |
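A quick sketch of how these token types behave: each enum member wraps a `TokenType`, and `contains_syntax` marks the two types (`NAME`, `OP`) whose string values may carry grammar meaning (keywords, operators):

```python
from parso.python.token import PythonTokenTypes

name_type = PythonTokenTypes.NAME
print(name_type.value.name)             # 'NAME'
print(name_type.value.contains_syntax)  # True: a NAME token may be a keyword
print(PythonTokenTypes.STRING.value.contains_syntax)  # False
print(repr(name_type.value))            # TokenType(NAME)
```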
# -*- coding: utf-8 -*-\n"""\nThis tokenizer has been copied from the ``tokenize.py`` standard library\ntokenizer. The reason was simple: The standard library tokenizer fails\nif the indentation is not right. To make it possible to do error recovery the\n tokenizer needed to be rewritten.\n\nBasically this is a stripped down version of the standard library module, so\nyou can read the documentation there. Additionally we included some speed and\nmemory optimizations here.\n"""\nfrom __future__ import absolute_import\n\nimport sys\nimport re\nimport itertools as _itertools\nfrom codecs import BOM_UTF8\nfrom typing import NamedTuple, Tuple, Iterator, Iterable, List, Dict, \\n Pattern, Set\n\nfrom parso.python.token import PythonTokenTypes\nfrom parso.utils import split_lines, PythonVersionInfo, parse_version_string\n\n\n# Maximum code point of Unicode 6.0: 0x10ffff (1,114,111)\nMAX_UNICODE = '\U0010ffff'\n\nSTRING = PythonTokenTypes.STRING\nNAME = PythonTokenTypes.NAME\nNUMBER = PythonTokenTypes.NUMBER\nOP = PythonTokenTypes.OP\nNEWLINE = PythonTokenTypes.NEWLINE\nINDENT = PythonTokenTypes.INDENT\nDEDENT = PythonTokenTypes.DEDENT\nENDMARKER = PythonTokenTypes.ENDMARKER\nERRORTOKEN = PythonTokenTypes.ERRORTOKEN\nERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT\nFSTRING_START = PythonTokenTypes.FSTRING_START\nFSTRING_STRING = PythonTokenTypes.FSTRING_STRING\nFSTRING_END = PythonTokenTypes.FSTRING_END\n\n\nclass TokenCollection(NamedTuple):\n pseudo_token: Pattern\n single_quoted: Set[str]\n triple_quoted: Set[str]\n endpats: Dict[str, Pattern]\n whitespace: Pattern\n fstring_pattern_map: Dict[str, str]\n always_break_tokens: Tuple[str]\n\n\nBOM_UTF8_STRING = BOM_UTF8.decode('utf-8')\n\n_token_collection_cache: Dict[PythonVersionInfo, TokenCollection] = {}\n\n\ndef group(*choices, capture=False, **kwargs):\n assert not kwargs\n\n start = '('\n if not capture:\n start += '?:'\n return start + '|'.join(choices) + ')'\n\n\ndef maybe(*choices):\n return group(*choices) + 
'?'\n\n\n# Return the empty string, plus all of the valid string prefixes.\ndef _all_string_prefixes(*, include_fstring=False, only_fstring=False):\n def different_case_versions(prefix):\n for s in _itertools.product(*[(c, c.upper()) for c in prefix]):\n yield ''.join(s)\n # The valid string prefixes. Only contain the lower case versions,\n # and don't contain any permuations (include 'fr', but not\n # 'rf'). The various permutations will be generated.\n valid_string_prefixes = ['b', 'r', 'u', 'br']\n\n result = {''}\n if include_fstring:\n f = ['f', 'fr']\n if only_fstring:\n valid_string_prefixes = f\n result = set()\n else:\n valid_string_prefixes += f\n elif only_fstring:\n return set()\n\n # if we add binary f-strings, add: ['fb', 'fbr']\n for prefix in valid_string_prefixes:\n for t in _itertools.permutations(prefix):\n # create a list with upper and lower versions of each\n # character\n result.update(different_case_versions(t))\n return result\n\n\ndef _compile(expr):\n return re.compile(expr, re.UNICODE)\n\n\ndef _get_token_collection(version_info):\n try:\n return _token_collection_cache[tuple(version_info)]\n except KeyError:\n _token_collection_cache[tuple(version_info)] = result = \\n _create_token_collection(version_info)\n return result\n\n\nunicode_character_name = r'[A-Za-z0-9\-]+(?: [A-Za-z0-9\-]+)*'\nfstring_string_single_line = _compile(\n r'(?:\{\{|\}\}|\\N\{' + unicode_character_name\n + r'\}|\\(?:\r\n?|\n)|\\[^\r\nN]|[^{}\r\n\\])+'\n)\nfstring_string_multi_line = _compile(\n r'(?:\{\{|\}\}|\\N\{' + unicode_character_name + r'\}|\\[^N]|[^{}\\])+'\n)\nfstring_format_spec_single_line = _compile(r'(?:\\(?:\r\n?|\n)|[^{}\r\n])+')\nfstring_format_spec_multi_line = _compile(r'[^{}]+')\n\n\ndef _create_token_collection(version_info):\n # Note: we use unicode matching for names ("\w") but ascii matching for\n # number literals.\n Whitespace = r'[ \f\t]*'\n whitespace = _compile(Whitespace)\n Comment = r'#[^\r\n]*'\n Name = '([A-Za-z_0-9\u0080-' + 
MAX_UNICODE + ']+)'\n\n Hexnumber = r'0[xX](?:_?[0-9a-fA-F])+'\n Binnumber = r'0[bB](?:_?[01])+'\n Octnumber = r'0[oO](?:_?[0-7])+'\n Decnumber = r'(?:0(?:_?0)*|[1-9](?:_?[0-9])*)'\n Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)\n Exponent = r'[eE][-+]?[0-9](?:_?[0-9])*'\n Pointfloat = group(r'[0-9](?:_?[0-9])*\.(?:[0-9](?:_?[0-9])*)?',\n r'\.[0-9](?:_?[0-9])*') + maybe(Exponent)\n Expfloat = r'[0-9](?:_?[0-9])*' + Exponent\n Floatnumber = group(Pointfloat, Expfloat)\n Imagnumber = group(r'[0-9](?:_?[0-9])*[jJ]', Floatnumber + r'[jJ]')\n Number = group(Imagnumber, Floatnumber, Intnumber)\n\n # Note that since _all_string_prefixes includes the empty string,\n # StringPrefix can be the empty string (making it optional).\n possible_prefixes = _all_string_prefixes()\n StringPrefix = group(*possible_prefixes)\n StringPrefixWithF = group(*_all_string_prefixes(include_fstring=True))\n fstring_prefixes = _all_string_prefixes(include_fstring=True, only_fstring=True)\n FStringStart = group(*fstring_prefixes)\n\n # Tail end of ' string.\n Single = r"(?:\\.|[^'\\])*'"\n # Tail end of " string.\n Double = r'(?:\\.|[^"\\])*"'\n # Tail end of ''' string.\n Single3 = r"(?:\\.|'(?!'')|[^'\\])*'''"\n # Tail end of """ string.\n Double3 = r'(?:\\.|"(?!"")|[^"\\])*"""'\n Triple = group(StringPrefixWithF + "'''", StringPrefixWithF + '"""')\n\n # Because of leftmost-then-longest match semantics, be sure to put the\n # longest operators first (e.g., if = came before ==, == would get\n # recognized as two instances of =).\n Operator = group(r"\*\*=?", r">>=?", r"<<=?",\n r"//=?", r"->",\n r"[+\-*/%&@`|^!=<>]=?",\n r"~")\n\n Bracket = '[][(){}]'\n\n special_args = [r'\.\.\.', r'\r\n?', r'\n', r'[;.,@]']\n if version_info >= (3, 8):\n special_args.insert(0, ":=?")\n else:\n special_args.insert(0, ":")\n Special = group(*special_args)\n\n Funny = group(Operator, Bracket, Special)\n\n # First (or only) line of ' or " string.\n ContStr = group(StringPrefix + 
r"'[^\r\n'\\]*(?:\\.[^\r\n'\\]*)*"\n + group("'", r'\\(?:\r\n?|\n)'),\n StringPrefix + r'"[^\r\n"\\]*(?:\\.[^\r\n"\\]*)*'\n + group('"', r'\\(?:\r\n?|\n)'))\n pseudo_extra_pool = [Comment, Triple]\n all_quotes = '"', "'", '"""', "'''"\n if fstring_prefixes:\n pseudo_extra_pool.append(FStringStart + group(*all_quotes))\n\n PseudoExtras = group(r'\\(?:\r\n?|\n)|\Z', *pseudo_extra_pool)\n PseudoToken = group(Whitespace, capture=True) + \\n group(PseudoExtras, Number, Funny, ContStr, Name, capture=True)\n\n # For a given string prefix plus quotes, endpats maps it to a regex\n # to match the remainder of that string. _prefix can be empty, for\n # a normal single or triple quoted string (with no prefix).\n endpats = {}\n for _prefix in possible_prefixes:\n endpats[_prefix + "'"] = _compile(Single)\n endpats[_prefix + '"'] = _compile(Double)\n endpats[_prefix + "'''"] = _compile(Single3)\n endpats[_prefix + '"""'] = _compile(Double3)\n\n # A set of all of the single and triple quoted string prefixes,\n # including the opening quotes.\n single_quoted = set()\n triple_quoted = set()\n fstring_pattern_map = {}\n for t in possible_prefixes:\n for quote in '"', "'":\n single_quoted.add(t + quote)\n\n for quote in '"""', "'''":\n triple_quoted.add(t + quote)\n\n for t in fstring_prefixes:\n for quote in all_quotes:\n fstring_pattern_map[t + quote] = quote\n\n ALWAYS_BREAK_TOKENS = (';', 'import', 'class', 'def', 'try', 'except',\n 'finally', 'while', 'with', 'return', 'continue',\n 'break', 'del', 'pass', 'global', 'assert', 'nonlocal')\n pseudo_token_compiled = _compile(PseudoToken)\n return TokenCollection(\n pseudo_token_compiled, single_quoted, triple_quoted, endpats,\n whitespace, fstring_pattern_map, set(ALWAYS_BREAK_TOKENS)\n )\n\n\nclass Token(NamedTuple):\n type: PythonTokenTypes\n string: str\n start_pos: Tuple[int, int]\n prefix: str\n\n @property\n def end_pos(self) -> Tuple[int, int]:\n lines = split_lines(self.string)\n if len(lines) > 1:\n return 
self.start_pos[0] + len(lines) - 1, 0\n else:\n return self.start_pos[0], self.start_pos[1] + len(self.string)\n\n\nclass PythonToken(Token):\n def __repr__(self):\n return ('TokenInfo(type=%s, string=%r, start_pos=%r, prefix=%r)' %\n self._replace(type=self.type.name))\n\n\nclass FStringNode:\n def __init__(self, quote):\n self.quote = quote\n self.parentheses_count = 0\n self.previous_lines = ''\n self.last_string_start_pos = None\n # In the syntax there can be multiple format_spec's nested:\n # {x:{y:3}}\n self.format_spec_count = 0\n\n def open_parentheses(self, character):\n self.parentheses_count += 1\n\n def close_parentheses(self, character):\n self.parentheses_count -= 1\n if self.parentheses_count == 0:\n # No parentheses means that the format spec is also finished.\n self.format_spec_count = 0\n\n def allow_multiline(self):\n return len(self.quote) == 3\n\n def is_in_expr(self):\n return self.parentheses_count > self.format_spec_count\n\n def is_in_format_spec(self):\n return not self.is_in_expr() and self.format_spec_count\n\n\ndef _close_fstring_if_necessary(fstring_stack, string, line_nr, column, additional_prefix):\n for fstring_stack_index, node in enumerate(fstring_stack):\n lstripped_string = string.lstrip()\n len_lstrip = len(string) - len(lstripped_string)\n if lstripped_string.startswith(node.quote):\n token = PythonToken(\n FSTRING_END,\n node.quote,\n (line_nr, column + len_lstrip),\n prefix=additional_prefix+string[:len_lstrip],\n )\n additional_prefix = ''\n assert not node.previous_lines\n del fstring_stack[fstring_stack_index:]\n return token, '', len(node.quote) + len_lstrip\n return None, additional_prefix, 0\n\n\ndef _find_fstring_string(endpats, fstring_stack, line, lnum, pos):\n tos = fstring_stack[-1]\n allow_multiline = tos.allow_multiline()\n if tos.is_in_format_spec():\n if allow_multiline:\n regex = fstring_format_spec_multi_line\n else:\n regex = fstring_format_spec_single_line\n else:\n if allow_multiline:\n regex = 
fstring_string_multi_line\n else:\n regex = fstring_string_single_line\n\n match = regex.match(line, pos)\n if match is None:\n return tos.previous_lines, pos\n\n if not tos.previous_lines:\n tos.last_string_start_pos = (lnum, pos)\n\n string = match.group(0)\n for fstring_stack_node in fstring_stack:\n end_match = endpats[fstring_stack_node.quote].match(string)\n if end_match is not None:\n string = end_match.group(0)[:-len(fstring_stack_node.quote)]\n\n new_pos = pos\n new_pos += len(string)\n # even if allow_multiline is False, we still need to check for trailing\n # newlines, because a single-line f-string can contain line continuations\n if string.endswith('\n') or string.endswith('\r'):\n tos.previous_lines += string\n string = ''\n else:\n string = tos.previous_lines + string\n\n return string, new_pos\n\n\ndef tokenize(\n code: str, *, version_info: PythonVersionInfo, start_pos: Tuple[int, int] = (1, 0)\n) -> Iterator[PythonToken]:\n """Generate tokens from a the source code (string)."""\n lines = split_lines(code, keepends=True)\n return tokenize_lines(lines, version_info=version_info, start_pos=start_pos)\n\n\ndef _print_tokens(func):\n """\n A small helper function to help debug the tokenize_lines function.\n """\n def wrapper(*args, **kwargs):\n for token in func(*args, **kwargs):\n print(token) # This print is intentional for debugging!\n yield token\n\n return wrapper\n\n\n# @_print_tokens\ndef tokenize_lines(\n lines: Iterable[str],\n *,\n version_info: PythonVersionInfo,\n indents: List[int] = None,\n start_pos: Tuple[int, int] = (1, 0),\n is_first_token=True,\n) -> Iterator[PythonToken]:\n """\n A heavily modified Python standard library tokenizer.\n\n Additionally to the default information, yields also the prefix of each\n token. This idea comes from lib2to3. 
The prefix contains all information\n that is irrelevant for the parser like newlines in parentheses or comments.\n """\n def dedent_if_necessary(start):\n while start < indents[-1]:\n if start > indents[-2]:\n yield PythonToken(ERROR_DEDENT, '', (lnum, start), '')\n indents[-1] = start\n break\n indents.pop()\n yield PythonToken(DEDENT, '', spos, '')\n\n pseudo_token, single_quoted, triple_quoted, endpats, whitespace, \\n fstring_pattern_map, always_break_tokens, = \\n _get_token_collection(version_info)\n paren_level = 0 # count parentheses\n if indents is None:\n indents = [0]\n max_ = 0\n numchars = '0123456789'\n contstr = ''\n contline: str\n contstr_start: Tuple[int, int]\n endprog: Pattern\n # We start with a newline. This makes indent at the first position\n # possible. It's not valid Python, but still better than an INDENT in the\n # second line (and not in the first). This makes quite a few things in\n # Jedi's fast parser possible.\n new_line = True\n prefix = '' # Should never be required, but here for safety\n additional_prefix = ''\n lnum = start_pos[0] - 1\n fstring_stack: List[FStringNode] = []\n for line in lines: # loop over lines in stream\n lnum += 1\n pos = 0\n max_ = len(line)\n if is_first_token:\n if line.startswith(BOM_UTF8_STRING):\n additional_prefix = BOM_UTF8_STRING\n line = line[1:]\n max_ = len(line)\n\n # Fake that the part before was already parsed.\n line = '^' * start_pos[1] + line\n pos = start_pos[1]\n max_ += start_pos[1]\n\n is_first_token = False\n\n if contstr: # continued string\n endmatch = endprog.match(line) # noqa: F821\n if endmatch:\n pos = endmatch.end(0)\n yield PythonToken(\n STRING, contstr + line[:pos],\n contstr_start, prefix) # noqa: F821\n contstr = ''\n contline = ''\n else:\n contstr = contstr + line\n contline = contline + line\n continue\n\n while pos < max_:\n if fstring_stack:\n tos = fstring_stack[-1]\n if not tos.is_in_expr():\n string, pos = _find_fstring_string(endpats, fstring_stack, line, lnum, 
pos)\n if string:\n yield PythonToken(\n FSTRING_STRING, string,\n tos.last_string_start_pos,\n # Never has a prefix because it can start anywhere and\n # include whitespace.\n prefix=''\n )\n tos.previous_lines = ''\n continue\n if pos == max_:\n break\n\n rest = line[pos:]\n fstring_end_token, additional_prefix, quote_length = _close_fstring_if_necessary(\n fstring_stack,\n rest,\n lnum,\n pos,\n additional_prefix,\n )\n pos += quote_length\n if fstring_end_token is not None:\n yield fstring_end_token\n continue\n\n # in an f-string, match until the end of the string\n if fstring_stack:\n string_line = line\n for fstring_stack_node in fstring_stack:\n quote = fstring_stack_node.quote\n end_match = endpats[quote].match(line, pos)\n if end_match is not None:\n end_match_string = end_match.group(0)\n if len(end_match_string) - len(quote) + pos < len(string_line):\n string_line = line[:pos] + end_match_string[:-len(quote)]\n pseudomatch = pseudo_token.match(string_line, pos)\n else:\n pseudomatch = pseudo_token.match(line, pos)\n\n if pseudomatch:\n prefix = additional_prefix + pseudomatch.group(1)\n additional_prefix = ''\n start, pos = pseudomatch.span(2)\n spos = (lnum, start)\n token = pseudomatch.group(2)\n if token == '':\n assert prefix\n additional_prefix = prefix\n # This means that we have a line with whitespace/comments at\n # the end, which just results in an endmarker.\n break\n initial = token[0]\n else:\n match = whitespace.match(line, pos)\n initial = line[match.end()]\n start = match.end()\n spos = (lnum, start)\n\n if new_line and initial not in '\r\n#' and (initial != '\\' or pseudomatch is None):\n new_line = False\n if paren_level == 0 and not fstring_stack:\n indent_start = start\n if indent_start > indents[-1]:\n yield PythonToken(INDENT, '', spos, '')\n indents.append(indent_start)\n yield from dedent_if_necessary(indent_start)\n\n if not pseudomatch: # scan for tokens\n match = whitespace.match(line, pos)\n if new_line and paren_level == 0 
and not fstring_stack:\n yield from dedent_if_necessary(match.end())\n pos = match.end()\n new_line = False\n yield PythonToken(\n ERRORTOKEN, line[pos], (lnum, pos),\n additional_prefix + match.group(0)\n )\n additional_prefix = ''\n pos += 1\n continue\n\n if (initial in numchars # ordinary number\n or (initial == '.' and token != '.' and token != '...')):\n yield PythonToken(NUMBER, token, spos, prefix)\n elif pseudomatch.group(3) is not None: # ordinary name\n if token in always_break_tokens and (fstring_stack or paren_level):\n fstring_stack[:] = []\n paren_level = 0\n # We only want to dedent if the token is on a new line.\n m = re.match(r'[ \f\t]*$', line[:start])\n if m is not None:\n yield from dedent_if_necessary(m.end())\n if token.isidentifier():\n yield PythonToken(NAME, token, spos, prefix)\n else:\n yield from _split_illegal_unicode_name(token, spos, prefix)\n elif initial in '\r\n':\n if any(not f.allow_multiline() for f in fstring_stack):\n fstring_stack.clear()\n\n if not new_line and paren_level == 0 and not fstring_stack:\n yield PythonToken(NEWLINE, token, spos, prefix)\n else:\n additional_prefix = prefix + token\n new_line = True\n elif initial == '#': # Comments\n assert not token.endswith("\n") and not token.endswith("\r")\n if fstring_stack and fstring_stack[-1].is_in_expr():\n # `#` is not allowed in f-string expressions\n yield PythonToken(ERRORTOKEN, initial, spos, prefix)\n pos = start + 1\n else:\n additional_prefix = prefix + token\n elif token in triple_quoted:\n endprog = endpats[token]\n endmatch = endprog.match(line, pos)\n if endmatch: # all on one line\n pos = endmatch.end(0)\n token = line[start:pos]\n yield PythonToken(STRING, token, spos, prefix)\n else:\n contstr_start = spos # multiple lines\n contstr = line[start:]\n contline = line\n break\n\n # Check up to the first 3 chars of the token to see if\n # they're in the single_quoted set. 
If so, they start\n # a string.\n # We're using the first 3, because we're looking for\n # "rb'" (for example) at the start of the token. If\n # we switch to longer prefixes, this needs to be\n # adjusted.\n # Note that initial == token[:1].\n # Also note that single quote checking must come after\n # triple quote checking (above).\n elif initial in single_quoted or \\n token[:2] in single_quoted or \\n token[:3] in single_quoted:\n if token[-1] in '\r\n': # continued string\n # This means that a single quoted string ends with a\n # backslash and is continued.\n contstr_start = lnum, start\n endprog = (endpats.get(initial) or endpats.get(token[1])\n or endpats.get(token[2]))\n contstr = line[start:]\n contline = line\n break\n else: # ordinary string\n yield PythonToken(STRING, token, spos, prefix)\n elif token in fstring_pattern_map: # The start of an fstring.\n fstring_stack.append(FStringNode(fstring_pattern_map[token]))\n yield PythonToken(FSTRING_START, token, spos, prefix)\n elif initial == '\\' and line[start:] in ('\\\n', '\\\r\n', '\\\r'): # continued stmt\n additional_prefix += prefix + line[start:]\n break\n else:\n if token in '([{':\n if fstring_stack:\n fstring_stack[-1].open_parentheses(token)\n else:\n paren_level += 1\n elif token in ')]}':\n if fstring_stack:\n fstring_stack[-1].close_parentheses(token)\n else:\n if paren_level:\n paren_level -= 1\n elif token.startswith(':') and fstring_stack \\n and fstring_stack[-1].parentheses_count \\n - fstring_stack[-1].format_spec_count == 1:\n # `:` and `:=` both count\n fstring_stack[-1].format_spec_count += 1\n token = ':'\n pos = start + 1\n\n yield PythonToken(OP, token, spos, prefix)\n\n if contstr:\n yield PythonToken(ERRORTOKEN, contstr, contstr_start, prefix)\n if contstr.endswith('\n') or contstr.endswith('\r'):\n new_line = True\n\n if fstring_stack:\n tos = fstring_stack[-1]\n if tos.previous_lines:\n yield PythonToken(\n FSTRING_STRING, tos.previous_lines,\n tos.last_string_start_pos,\n # 
Never has a prefix because it can start anywhere and\n # include whitespace.\n prefix=''\n )\n\n end_pos = lnum, max_\n # As the last position we just take the maximally possible position. We\n # remove -1 for the last new line.\n for indent in indents[1:]:\n indents.pop()\n yield PythonToken(DEDENT, '', end_pos, '')\n yield PythonToken(ENDMARKER, '', end_pos, additional_prefix)\n\n\ndef _split_illegal_unicode_name(token, start_pos, prefix):\n def create_token():\n return PythonToken(ERRORTOKEN if is_illegal else NAME, found, pos, prefix)\n\n found = ''\n is_illegal = False\n pos = start_pos\n for i, char in enumerate(token):\n if is_illegal:\n if char.isidentifier():\n yield create_token()\n found = char\n is_illegal = False\n prefix = ''\n pos = start_pos[0], start_pos[1] + i\n else:\n found += char\n else:\n new_found = found + char\n if new_found.isidentifier():\n found = new_found\n else:\n if found:\n yield create_token()\n prefix = ''\n pos = start_pos[0], start_pos[1] + i\n found = char\n is_illegal = True\n\n if found:\n yield create_token()\n\n\nif __name__ == "__main__":\n path = sys.argv[1]\n with open(path) as f:\n code = f.read()\n\n for token in tokenize(code, version_info=parse_version_string('3.10')):\n print(token)\n | .venv\Lib\site-packages\parso\python\tokenize.py | tokenize.py | Python | 25,795 | 0.95 | 0.195051 | 0.10473 | vue-tools | 752 | 2024-08-16T08:46:48.307106 | BSD-3-Clause | false | eb647a07127538ef425ada2fe81764e5 |
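The `__main__` block above shows the intended entry point. A minimal sketch of calling the tokenizer directly, assuming `parso` is importable; the token order follows from the generator's logic:

```python
from parso.python.tokenize import tokenize
from parso.utils import parse_version_string

# Each PythonToken carries its type, string, (line, column) start and prefix.
for token in tokenize("x = 1\n", version_info=parse_version_string('3.8')):
    print(token.type.name, repr(token.string), token.start_pos, repr(token.prefix))
```

Unlike the standard library tokenizer, whitespace and comments are not separate tokens here; they travel in the `prefix` of the following token, which is what makes lossless round-tripping possible.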
"""\nThis is the syntax tree for Python 3 syntaxes. The classes represent\nsyntax elements like functions and imports.\n\nAll of the nodes can be traced back to the `Python grammar file\n<https://docs.python.org/3/reference/grammar.html>`_. If you want to know how\na tree is structured, just analyse that file (for each Python version it's a\nbit different).\n\nThere's a lot of logic here that makes it easier for Jedi (and other libraries)\nto deal with a Python syntax tree.\n\nBy using :py:meth:`parso.tree.NodeOrLeaf.get_code` on a module, you can get\nback the 1-to-1 representation of the input given to the parser. This is\nimportant if you want to refactor a parser tree.\n\n>>> from parso import parse\n>>> parser = parse('import os')\n>>> module = parser.get_root_node()\n>>> module\n<Module: @1-1>\n\nAny subclasses of :class:`Scope`, including :class:`Module` has an attribute\n:attr:`iter_imports <Scope.iter_imports>`:\n\n>>> list(module.iter_imports())\n[<ImportName: import os@1,0>]\n\nChanges to the Python Grammar\n-----------------------------\n\nA few things have changed when looking at Python grammar files:\n\n- :class:`Param` does not exist in Python grammar files. It is essentially a\n part of a ``parameters`` node. |parso| splits it up to make it easier to\n analyse parameters. 
However this just makes it easier to deal with the syntax\n tree, it doesn't actually change the valid syntax.\n- A few nodes like `lambdef` and `lambdef_nocond` have been merged in the\n syntax tree to make it easier to do deal with them.\n\nParser Tree Classes\n-------------------\n"""\n\nimport re\ntry:\n from collections.abc import Mapping\nexcept ImportError:\n from collections import Mapping\nfrom typing import Tuple\n\nfrom parso.tree import Node, BaseNode, Leaf, ErrorNode, ErrorLeaf, search_ancestor # noqa\nfrom parso.python.prefix import split_prefix\nfrom parso.utils import split_lines\n\n_FLOW_CONTAINERS = set(['if_stmt', 'while_stmt', 'for_stmt', 'try_stmt',\n 'with_stmt', 'async_stmt', 'suite'])\n_RETURN_STMT_CONTAINERS = set(['suite', 'simple_stmt']) | _FLOW_CONTAINERS\n\n_FUNC_CONTAINERS = set(\n ['suite', 'simple_stmt', 'decorated', 'async_funcdef']\n) | _FLOW_CONTAINERS\n\n_GET_DEFINITION_TYPES = set([\n 'expr_stmt', 'sync_comp_for', 'with_stmt', 'for_stmt', 'import_name',\n 'import_from', 'param', 'del_stmt', 'namedexpr_test',\n])\n_IMPORTS = set(['import_name', 'import_from'])\n\n\nclass DocstringMixin:\n __slots__ = ()\n\n def get_doc_node(self):\n """\n Returns the string leaf of a docstring. e.g. 
``r'''foo'''``.\n """\n if self.type == 'file_input':\n node = self.children[0]\n elif self.type in ('funcdef', 'classdef'):\n node = self.children[self.children.index(':') + 1]\n if node.type == 'suite': # Normally a suite\n node = node.children[1] # -> NEWLINE stmt\n else: # ExprStmt\n simple_stmt = self.parent\n c = simple_stmt.parent.children\n index = c.index(simple_stmt)\n if not index:\n return None\n node = c[index - 1]\n\n if node.type == 'simple_stmt':\n node = node.children[0]\n if node.type == 'string':\n return node\n return None\n\n\nclass PythonMixin:\n """\n Some Python specific utilities.\n """\n __slots__ = ()\n\n def get_name_of_position(self, position):\n """\n Given a (line, column) tuple, returns a :py:class:`Name` or ``None`` if\n there is no name at that position.\n """\n for c in self.children:\n if isinstance(c, Leaf):\n if c.type == 'name' and c.start_pos <= position <= c.end_pos:\n return c\n else:\n result = c.get_name_of_position(position)\n if result is not None:\n return result\n return None\n\n\nclass PythonLeaf(PythonMixin, Leaf):\n __slots__ = ()\n\n def _split_prefix(self):\n return split_prefix(self, self.get_start_pos_of_prefix())\n\n def get_start_pos_of_prefix(self):\n """\n Basically calls :py:meth:`parso.tree.NodeOrLeaf.get_start_pos_of_prefix`.\n """\n # TODO it is really ugly that we have to override it. Maybe change\n # indent error leafs somehow? 
No idea how, though.\n previous_leaf = self.get_previous_leaf()\n if previous_leaf is not None and previous_leaf.type == 'error_leaf' \\n and previous_leaf.token_type in ('INDENT', 'DEDENT', 'ERROR_DEDENT'):\n previous_leaf = previous_leaf.get_previous_leaf()\n\n if previous_leaf is None: # It's the first leaf.\n lines = split_lines(self.prefix)\n # + 1 is needed because split_lines always returns at least [''].\n return self.line - len(lines) + 1, 0 # It's the first leaf.\n return previous_leaf.end_pos\n\n\nclass _LeafWithoutNewlines(PythonLeaf):\n """\n Simply here to optimize performance.\n """\n __slots__ = ()\n\n @property\n def end_pos(self) -> Tuple[int, int]:\n return self.line, self.column + len(self.value)\n\n\n# Python base classes\nclass PythonBaseNode(PythonMixin, BaseNode):\n __slots__ = ()\n\n\nclass PythonNode(PythonMixin, Node):\n __slots__ = ()\n\n\nclass PythonErrorNode(PythonMixin, ErrorNode):\n __slots__ = ()\n\n\nclass PythonErrorLeaf(ErrorLeaf, PythonLeaf):\n __slots__ = ()\n\n\nclass EndMarker(_LeafWithoutNewlines):\n __slots__ = ()\n type = 'endmarker'\n\n def __repr__(self):\n return "<%s: prefix=%s end_pos=%s>" % (\n type(self).__name__, repr(self.prefix), self.end_pos\n )\n\n\nclass Newline(PythonLeaf):\n """Contains NEWLINE and ENDMARKER tokens."""\n __slots__ = ()\n type = 'newline'\n\n def __repr__(self):\n return "<%s: %s>" % (type(self).__name__, repr(self.value))\n\n\nclass Name(_LeafWithoutNewlines):\n """\n A string. 
Sometimes it is important to know if the string belongs to a name\n or not.\n """\n type = 'name'\n __slots__ = ()\n\n def __repr__(self):\n return "<%s: %s@%s,%s>" % (type(self).__name__, self.value,\n self.line, self.column)\n\n def is_definition(self, include_setitem=False):\n """\n Returns True if the name is being defined.\n """\n return self.get_definition(include_setitem=include_setitem) is not None\n\n def get_definition(self, import_name_always=False, include_setitem=False):\n """\n Returns None if there's no definition for a name.\n\n :param import_name_always: Specifies if an import name is always a\n definition. Normally foo in `from foo import bar` is not a\n definition.\n """\n node = self.parent\n type_ = node.type\n\n if type_ in ('funcdef', 'classdef'):\n if self == node.name:\n return node\n return None\n\n if type_ == 'except_clause':\n if self.get_previous_sibling() == 'as':\n return node.parent # The try_stmt.\n return None\n\n while node is not None:\n if node.type == 'suite':\n return None\n if node.type in _GET_DEFINITION_TYPES:\n if self in node.get_defined_names(include_setitem):\n return node\n if import_name_always and node.type in _IMPORTS:\n return node\n return None\n node = node.parent\n return None\n\n\nclass Literal(PythonLeaf):\n __slots__ = ()\n\n\nclass Number(Literal):\n type = 'number'\n __slots__ = ()\n\n\nclass String(Literal):\n type = 'string'\n __slots__ = ()\n\n @property\n def string_prefix(self):\n return re.match(r'\w*(?=[\'"])', self.value).group(0)\n\n def _get_payload(self):\n match = re.search(\n r'''('{3}|"{3}|'|")(.*)$''',\n self.value,\n flags=re.DOTALL\n )\n return match.group(2)[:-len(match.group(1))]\n\n\nclass FStringString(PythonLeaf):\n """\n f-strings contain f-string expressions and normal python strings. 
These are\n the string parts of f-strings.\n """\n type = 'fstring_string'\n __slots__ = ()\n\n\nclass FStringStart(PythonLeaf):\n """\n f-strings contain f-string expressions and normal python strings. These are\n the string parts of f-strings.\n """\n type = 'fstring_start'\n __slots__ = ()\n\n\nclass FStringEnd(PythonLeaf):\n """\n f-strings contain f-string expressions and normal python strings. These are\n the string parts of f-strings.\n """\n type = 'fstring_end'\n __slots__ = ()\n\n\nclass _StringComparisonMixin:\n __slots__ = ()\n\n def __eq__(self, other):\n """\n Make comparisons with strings easy.\n Improves the readability of the parser.\n """\n if isinstance(other, str):\n return self.value == other\n\n return self is other\n\n def __hash__(self):\n return hash(self.value)\n\n\nclass Operator(_LeafWithoutNewlines, _StringComparisonMixin):\n type = 'operator'\n __slots__ = ()\n\n\nclass Keyword(_LeafWithoutNewlines, _StringComparisonMixin):\n type = 'keyword'\n __slots__ = ()\n\n\nclass Scope(PythonBaseNode, DocstringMixin):\n """\n Super class for the parser tree, which represents the state of a python\n text file.\n A Scope is either a function, class or lambda.\n """\n __slots__ = ()\n\n def __init__(self, children):\n super().__init__(children)\n\n def iter_funcdefs(self):\n """\n Returns a generator of `funcdef` nodes.\n """\n return self._search_in_scope('funcdef')\n\n def iter_classdefs(self):\n """\n Returns a generator of `classdef` nodes.\n """\n return self._search_in_scope('classdef')\n\n def iter_imports(self):\n """\n Returns a generator of `import_name` and `import_from` nodes.\n """\n return self._search_in_scope('import_name', 'import_from')\n\n def _search_in_scope(self, *names):\n def scan(children):\n for element in children:\n if element.type in names:\n yield element\n if element.type in _FUNC_CONTAINERS:\n yield from scan(element.children)\n\n return scan(self.children)\n\n def get_suite(self):\n """\n Returns the part that is 
executed by the function.\n """\n return self.children[-1]\n\n def __repr__(self):\n try:\n name = self.name.value\n except AttributeError:\n name = ''\n\n return "<%s: %s@%s-%s>" % (type(self).__name__, name,\n self.start_pos[0], self.end_pos[0])\n\n\nclass Module(Scope):\n """\n The top scope, which is always a module.\n Depending on the underlying parser this may be a full module or just a part\n of a module.\n """\n __slots__ = ('_used_names',)\n type = 'file_input'\n\n def __init__(self, children):\n super().__init__(children)\n self._used_names = None\n\n def _iter_future_import_names(self):\n """\n :return: A list of future import names.\n :rtype: list of str\n """\n # In Python it's not allowed to use future imports after the first\n # actual (non-future) statement. However this is not a linter here,\n # just return all future imports. If people want to scan for issues\n # they should use the API.\n for imp in self.iter_imports():\n if imp.type == 'import_from' and imp.level == 0:\n for path in imp.get_paths():\n names = [name.value for name in path]\n if len(names) == 2 and names[0] == '__future__':\n yield names[1]\n\n def get_used_names(self):\n """\n Returns all the :class:`Name` leafs that exist in this module. 
This\n includes both definitions and references of names.\n """\n if self._used_names is None:\n # Don't directly use self._used_names to eliminate a lookup.\n dct = {}\n\n def recurse(node):\n try:\n children = node.children\n except AttributeError:\n if node.type == 'name':\n arr = dct.setdefault(node.value, [])\n arr.append(node)\n else:\n for child in children:\n recurse(child)\n\n recurse(self)\n self._used_names = UsedNamesMapping(dct)\n return self._used_names\n\n\nclass Decorator(PythonBaseNode):\n type = 'decorator'\n __slots__ = ()\n\n\nclass ClassOrFunc(Scope):\n __slots__ = ()\n\n @property\n def name(self):\n """\n Returns the `Name` leaf that defines the function or class name.\n """\n return self.children[1]\n\n def get_decorators(self):\n """\n :rtype: list of :class:`Decorator`\n """\n decorated = self.parent\n if decorated.type == 'async_funcdef':\n decorated = decorated.parent\n\n if decorated.type == 'decorated':\n if decorated.children[0].type == 'decorators':\n return decorated.children[0].children\n else:\n return decorated.children[:1]\n else:\n return []\n\n\nclass Class(ClassOrFunc):\n """\n Used to store the parsed contents of a python class.\n """\n type = 'classdef'\n __slots__ = ()\n\n def __init__(self, children):\n super().__init__(children)\n\n def get_super_arglist(self):\n """\n Returns the `arglist` node that defines the super classes. It returns\n None if there are no arguments.\n """\n if self.children[2] != '(': # Has no parentheses\n return None\n else:\n if self.children[3] == ')': # Empty parentheses\n return None\n else:\n return self.children[3]\n\n\ndef _create_params(parent, argslist_list):\n """\n `argslist_list` is a list that can contain an argslist as a first item, but\n most not. It's basically the items between the parameter brackets (which is\n at most one item).\n This function modifies the parser structure. It generates `Param` objects\n from the normal ast. 
Those param objects do not exist in a normal ast, but\n make the evaluation of the ast tree so much easier.\n You could also say that this function replaces the argslist node with a\n list of Param objects.\n """\n try:\n first = argslist_list[0]\n except IndexError:\n return []\n\n if first.type in ('name', 'fpdef'):\n return [Param([first], parent)]\n elif first == '*':\n return [first]\n else: # argslist is a `typedargslist` or a `varargslist`.\n if first.type == 'tfpdef':\n children = [first]\n else:\n children = first.children\n new_children = []\n start = 0\n # Start with offset 1, because the end is higher.\n for end, child in enumerate(children + [None], 1):\n if child is None or child == ',':\n param_children = children[start:end]\n if param_children: # Could as well be comma and then end.\n if param_children[0] == '*' \\n and (len(param_children) == 1\n or param_children[1] == ',') \\n or param_children[0] == '/':\n for p in param_children:\n p.parent = parent\n new_children += param_children\n else:\n new_children.append(Param(param_children, parent))\n start = end\n return new_children\n\n\nclass Function(ClassOrFunc):\n """\n Used to store the parsed contents of a python function.\n\n Children::\n\n 0. <Keyword: def>\n 1. <Name>\n 2. parameter list (including open-paren and close-paren <Operator>s)\n 3. or 5. <Operator: :>\n 4. or 6. Node() representing function body\n 3. -> (if annotation is also present)\n 4. 
annotation (if present)\n """\n type = 'funcdef'\n __slots__ = ()\n\n def __init__(self, children):\n super().__init__(children)\n parameters = self.children[2] # After `def foo`\n parameters_children = parameters.children[1:-1]\n # If input parameters list already has Param objects, keep it as is;\n # otherwise, convert it to a list of Param objects.\n if not any(isinstance(child, Param) for child in parameters_children):\n parameters.children[1:-1] = _create_params(parameters, parameters_children)\n\n def _get_param_nodes(self):\n return self.children[2].children\n\n def get_params(self):\n """\n Returns a list of `Param()`.\n """\n return [p for p in self._get_param_nodes() if p.type == 'param']\n\n @property\n def name(self):\n return self.children[1] # First token after `def`\n\n def iter_yield_exprs(self):\n """\n Returns a generator of `yield_expr`.\n """\n def scan(children):\n for element in children:\n if element.type in ('classdef', 'funcdef', 'lambdef'):\n continue\n\n try:\n nested_children = element.children\n except AttributeError:\n if element.value == 'yield':\n if element.parent.type == 'yield_expr':\n yield element.parent\n else:\n yield element\n else:\n yield from scan(nested_children)\n\n return scan(self.children)\n\n def iter_return_stmts(self):\n """\n Returns a generator of `return_stmt`.\n """\n def scan(children):\n for element in children:\n if element.type == 'return_stmt' \\n or element.type == 'keyword' and element.value == 'return':\n yield element\n if element.type in _RETURN_STMT_CONTAINERS:\n yield from scan(element.children)\n\n return scan(self.children)\n\n def iter_raise_stmts(self):\n """\n Returns a generator of `raise_stmt`. 
Includes raise statements inside try-except blocks\n """\n def scan(children):\n for element in children:\n if element.type == 'raise_stmt' \\n or element.type == 'keyword' and element.value == 'raise':\n yield element\n if element.type in _RETURN_STMT_CONTAINERS:\n yield from scan(element.children)\n\n return scan(self.children)\n\n def is_generator(self):\n """\n :return bool: Checks if a function is a generator or not.\n """\n return next(self.iter_yield_exprs(), None) is not None\n\n @property\n def annotation(self):\n """\n Returns the test node after `->` or `None` if there is no annotation.\n """\n try:\n if self.children[3] == "->":\n return self.children[4]\n assert self.children[3] == ":"\n return None\n except IndexError:\n return None\n\n\nclass Lambda(Function):\n """\n Lambdas are basically trimmed functions, so give it the same interface.\n\n Children::\n\n 0. <Keyword: lambda>\n *. <Param x> for each argument x\n -2. <Operator: :>\n -1. Node() representing body\n """\n type = 'lambdef'\n __slots__ = ()\n\n def __init__(self, children):\n # We don't want to call the Function constructor, call its parent.\n super(Function, self).__init__(children)\n # Everything between `lambda` and the `:` operator is a parameter.\n parameters_children = self.children[1:-2]\n # If input children list already has Param objects, keep it as is;\n # otherwise, convert it to a list of Param objects.\n if not any(isinstance(child, Param) for child in parameters_children):\n self.children[1:-2] = _create_params(self, parameters_children)\n\n @property\n def name(self):\n """\n Raises an AttributeError. 
Lambdas don't have a defined name.\n """\n raise AttributeError("lambda is not named.")\n\n def _get_param_nodes(self):\n return self.children[1:-2]\n\n @property\n def annotation(self):\n """\n Returns `None`, lambdas don't have annotations.\n """\n return None\n\n def __repr__(self):\n return "<%s@%s>" % (self.__class__.__name__, self.start_pos)\n\n\nclass Flow(PythonBaseNode):\n __slots__ = ()\n\n\nclass IfStmt(Flow):\n type = 'if_stmt'\n __slots__ = ()\n\n def get_test_nodes(self):\n """\n E.g. returns all the `test` nodes that are named as x, below:\n\n if x:\n pass\n elif x:\n pass\n """\n for i, c in enumerate(self.children):\n if c in ('elif', 'if'):\n yield self.children[i + 1]\n\n def get_corresponding_test_node(self, node):\n """\n Searches for the branch in which the node is and returns the\n corresponding test node (see function above). However if the node is in\n the test node itself and not in the suite return None.\n """\n start_pos = node.start_pos\n for check_node in reversed(list(self.get_test_nodes())):\n if check_node.start_pos < start_pos:\n if start_pos < check_node.end_pos:\n return None\n # In this case the node is within the check_node itself,\n # not in the suite\n else:\n return check_node\n\n def is_node_after_else(self, node):\n """\n Checks if a node is defined after `else`.\n """\n for c in self.children:\n if c == 'else':\n if node.start_pos > c.start_pos:\n return True\n else:\n return False\n\n\nclass WhileStmt(Flow):\n type = 'while_stmt'\n __slots__ = ()\n\n\nclass ForStmt(Flow):\n type = 'for_stmt'\n __slots__ = ()\n\n def get_testlist(self):\n """\n Returns the input node ``y`` from: ``for x in y:``.\n """\n return self.children[3]\n\n def get_defined_names(self, include_setitem=False):\n return _defined_names(self.children[1], include_setitem)\n\n\nclass TryStmt(Flow):\n type = 'try_stmt'\n __slots__ = ()\n\n def get_except_clause_tests(self):\n """\n Returns the ``test`` nodes found in ``except_clause`` nodes.\n Returns 
``[None]`` for except clauses without an exception given.\n """\n for node in self.children:\n if node.type == 'except_clause':\n yield node.children[1]\n elif node == 'except':\n yield None\n\n\nclass WithStmt(Flow):\n type = 'with_stmt'\n __slots__ = ()\n\n def get_defined_names(self, include_setitem=False):\n """\n Returns the a list of `Name` that the with statement defines. The\n defined names are set after `as`.\n """\n names = []\n for with_item in self.children[1:-2:2]:\n # Check with items for 'as' names.\n if with_item.type == 'with_item':\n names += _defined_names(with_item.children[2], include_setitem)\n return names\n\n def get_test_node_from_name(self, name):\n node = name.search_ancestor("with_item")\n if node is None:\n raise ValueError('The name is not actually part of a with statement.')\n return node.children[0]\n\n\nclass Import(PythonBaseNode):\n __slots__ = ()\n\n def get_path_for_name(self, name):\n """\n The path is the list of names that leads to the searched name.\n\n :return list of Name:\n """\n try:\n # The name may be an alias. If it is, just map it back to the name.\n name = self._aliases()[name]\n except KeyError:\n pass\n\n for path in self.get_paths():\n if name in path:\n return path[:path.index(name) + 1]\n raise ValueError('Name should be defined in the import itself')\n\n def is_nested(self):\n return False # By default, sub classes may overwrite this behavior\n\n def is_star_import(self):\n return self.children[-1] == '*'\n\n\nclass ImportFrom(Import):\n type = 'import_from'\n __slots__ = ()\n\n def get_defined_names(self, include_setitem=False):\n """\n Returns the a list of `Name` that the import defines. 
The\n defined names are set after `import` or in case an alias - `as` - is\n present that name is returned.\n """\n return [alias or name for name, alias in self._as_name_tuples()]\n\n def _aliases(self):\n """Mapping from alias to its corresponding name."""\n return dict((alias, name) for name, alias in self._as_name_tuples()\n if alias is not None)\n\n def get_from_names(self):\n for n in self.children[1:]:\n if n not in ('.', '...'):\n break\n if n.type == 'dotted_name': # from x.y import\n return n.children[::2]\n elif n == 'import': # from . import\n return []\n else: # from x import\n return [n]\n\n @property\n def level(self):\n """The level parameter of ``__import__``."""\n level = 0\n for n in self.children[1:]:\n if n in ('.', '...'):\n level += len(n.value)\n else:\n break\n return level\n\n def _as_name_tuples(self):\n last = self.children[-1]\n if last == ')':\n last = self.children[-2]\n elif last == '*':\n return # No names defined directly.\n\n if last.type == 'import_as_names':\n as_names = last.children[::2]\n else:\n as_names = [last]\n for as_name in as_names:\n if as_name.type == 'name':\n yield as_name, None\n else:\n yield as_name.children[::2] # yields x, y -> ``x as y``\n\n def get_paths(self):\n """\n The import paths defined in an import statement. Typically an array\n like this: ``[<Name: datetime>, <Name: date>]``.\n\n :return list of list of Name:\n """\n dotted = self.get_from_names()\n\n if self.children[-1] == '*':\n return [dotted]\n return [dotted + [name] for name, alias in self._as_name_tuples()]\n\n\nclass ImportName(Import):\n """For ``import_name`` nodes. Covers normal imports without ``from``."""\n type = 'import_name'\n __slots__ = ()\n\n def get_defined_names(self, include_setitem=False):\n """\n Returns the a list of `Name` that the import defines. 
The defined names\n is always the first name after `import` or in case an alias - `as` - is\n present that name is returned.\n """\n return [alias or path[0] for path, alias in self._dotted_as_names()]\n\n @property\n def level(self):\n """The level parameter of ``__import__``."""\n return 0 # Obviously 0 for imports without from.\n\n def get_paths(self):\n return [path for path, alias in self._dotted_as_names()]\n\n def _dotted_as_names(self):\n """Generator of (list(path), alias) where alias may be None."""\n dotted_as_names = self.children[1]\n if dotted_as_names.type == 'dotted_as_names':\n as_names = dotted_as_names.children[::2]\n else:\n as_names = [dotted_as_names]\n\n for as_name in as_names:\n if as_name.type == 'dotted_as_name':\n alias = as_name.children[2]\n as_name = as_name.children[0]\n else:\n alias = None\n if as_name.type == 'name':\n yield [as_name], alias\n else:\n # dotted_names\n yield as_name.children[::2], alias\n\n def is_nested(self):\n """\n This checks for the special case of nested imports, without aliases and\n from statement::\n\n import foo.bar\n """\n return bool([1 for path, alias in self._dotted_as_names()\n if alias is None and len(path) > 1])\n\n def _aliases(self):\n """\n :return list of Name: Returns all the alias\n """\n return dict((alias, path[-1]) for path, alias in self._dotted_as_names()\n if alias is not None)\n\n\nclass KeywordStatement(PythonBaseNode):\n """\n For the following statements: `assert`, `del`, `global`, `nonlocal`,\n `raise`, `return`, `yield`.\n\n `pass`, `continue` and `break` are not in there, because they are just\n simple keywords and the parser reduces it to a keyword.\n """\n __slots__ = ()\n\n @property\n def type(self):\n """\n Keyword statements start with the keyword and end with `_stmt`. 
You can\n crosscheck this with the Python grammar.\n """\n return '%s_stmt' % self.keyword\n\n @property\n def keyword(self):\n return self.children[0].value\n\n def get_defined_names(self, include_setitem=False):\n keyword = self.keyword\n if keyword == 'del':\n return _defined_names(self.children[1], include_setitem)\n if keyword in ('global', 'nonlocal'):\n return self.children[1::2]\n return []\n\n\nclass AssertStmt(KeywordStatement):\n __slots__ = ()\n\n @property\n def assertion(self):\n return self.children[1]\n\n\nclass GlobalStmt(KeywordStatement):\n __slots__ = ()\n\n def get_global_names(self):\n return self.children[1::2]\n\n\nclass ReturnStmt(KeywordStatement):\n __slots__ = ()\n\n\nclass YieldExpr(PythonBaseNode):\n type = 'yield_expr'\n __slots__ = ()\n\n\ndef _defined_names(current, include_setitem):\n """\n A helper function to find the defined names in statements, for loops and\n list comprehensions.\n """\n names = []\n if current.type in ('testlist_star_expr', 'testlist_comp', 'exprlist', 'testlist'):\n for child in current.children[::2]:\n names += _defined_names(child, include_setitem)\n elif current.type in ('atom', 'star_expr'):\n names += _defined_names(current.children[1], include_setitem)\n elif current.type in ('power', 'atom_expr'):\n if current.children[-2] != '**': # Just if there's no operation\n trailer = current.children[-1]\n if trailer.children[0] == '.':\n names.append(trailer.children[1])\n elif trailer.children[0] == '[' and include_setitem:\n for node in current.children[-2::-1]:\n if node.type == 'trailer':\n names.append(node.children[1])\n break\n if node.type == 'name':\n names.append(node)\n break\n else:\n names.append(current)\n return names\n\n\nclass ExprStmt(PythonBaseNode, DocstringMixin):\n type = 'expr_stmt'\n __slots__ = ()\n\n def get_defined_names(self, include_setitem=False):\n """\n Returns a list of `Name` defined before the `=` sign.\n """\n names = []\n if self.children[1].type == 'annassign':\n names = 
_defined_names(self.children[0], include_setitem)\n return [\n name\n for i in range(0, len(self.children) - 2, 2)\n if '=' in self.children[i + 1].value\n for name in _defined_names(self.children[i], include_setitem)\n ] + names\n\n def get_rhs(self):\n """Returns the right-hand-side of the equals."""\n node = self.children[-1]\n if node.type == 'annassign':\n if len(node.children) == 4:\n node = node.children[3]\n else:\n node = node.children[1]\n return node\n\n def yield_operators(self):\n """\n Returns a generator of `+=`, `=`, etc. or None if there is no operation.\n """\n first = self.children[1]\n if first.type == 'annassign':\n if len(first.children) <= 2:\n return # No operator is available, it's just PEP 484.\n\n first = first.children[2]\n yield first\n\n yield from self.children[3::2]\n\n\nclass NamedExpr(PythonBaseNode):\n type = 'namedexpr_test'\n\n def get_defined_names(self, include_setitem=False):\n return _defined_names(self.children[0], include_setitem)\n\n\nclass Param(PythonBaseNode):\n """\n It's a helper class that makes business logic with params much easier. The\n Python grammar defines no ``param`` node. It defines it in a different way\n that is not really suited to working with parameters.\n """\n type = 'param'\n\n def __init__(self, children, parent=None):\n super().__init__(children)\n self.parent = parent\n\n @property\n def star_count(self):\n """\n Is `0` in case of `foo`, `1` in case of `*foo` or `2` in case of\n `**foo`.\n """\n first = self.children[0]\n if first in ('*', '**'):\n return len(first.value)\n return 0\n\n @property\n def default(self):\n """\n The default is the test node that appears after the `=`. Is `None` in\n case no default is present.\n """\n has_comma = self.children[-1] == ','\n try:\n if self.children[-2 - int(has_comma)] == '=':\n return self.children[-1 - int(has_comma)]\n except IndexError:\n return None\n\n @property\n def annotation(self):\n """\n The default is the test node that appears after `:`. 
Is `None` in case\n no annotation is present.\n """\n tfpdef = self._tfpdef()\n if tfpdef.type == 'tfpdef':\n assert tfpdef.children[1] == ":"\n assert len(tfpdef.children) == 3\n annotation = tfpdef.children[2]\n return annotation\n else:\n return None\n\n def _tfpdef(self):\n """\n tfpdef: see e.g. grammar36.txt.\n """\n offset = int(self.children[0] in ('*', '**'))\n return self.children[offset]\n\n @property\n def name(self):\n """\n The `Name` leaf of the param.\n """\n if self._tfpdef().type == 'tfpdef':\n return self._tfpdef().children[0]\n else:\n return self._tfpdef()\n\n def get_defined_names(self, include_setitem=False):\n return [self.name]\n\n @property\n def position_index(self):\n """\n Property for the positional index of a paramter.\n """\n index = self.parent.children.index(self)\n try:\n keyword_only_index = self.parent.children.index('*')\n if index > keyword_only_index:\n # Skip the ` *, `\n index -= 2\n except ValueError:\n pass\n try:\n keyword_only_index = self.parent.children.index('/')\n if index > keyword_only_index:\n # Skip the ` /, `\n index -= 2\n except ValueError:\n pass\n return index - 1\n\n def get_parent_function(self):\n """\n Returns the function/lambda of a parameter.\n """\n return self.search_ancestor('funcdef', 'lambdef')\n\n def get_code(self, include_prefix=True, include_comma=True):\n """\n Like all the other get_code functions, but includes the param\n `include_comma`.\n\n :param include_comma bool: If enabled includes the comma in the string output.\n """\n if include_comma:\n return super().get_code(include_prefix)\n\n children = self.children\n if children[-1] == ',':\n children = children[:-1]\n return self._get_code_for_children(\n children,\n include_prefix=include_prefix\n )\n\n def __repr__(self):\n default = '' if self.default is None else '=%s' % self.default.get_code()\n return '<%s: %s>' % (type(self).__name__, str(self._tfpdef()) + default)\n\n\nclass SyncCompFor(PythonBaseNode):\n type = 'sync_comp_for'\n 
__slots__ = ()\n\n def get_defined_names(self, include_setitem=False):\n """\n Returns a list of `Name` that the comprehension defines.\n """\n # allow async for\n return _defined_names(self.children[1], include_setitem)\n\n\n# This is simply here so an older Jedi version can work with this new parso\n# version. Can be deleted in the next release.\nCompFor = SyncCompFor\n\n\nclass UsedNamesMapping(Mapping):\n """\n This class exists for the sole purpose of creating an immutable dict.\n """\n def __init__(self, dct):\n self._dict = dct\n\n def __getitem__(self, key):\n return self._dict[key]\n\n def __len__(self):\n return len(self._dict)\n\n def __iter__(self):\n return iter(self._dict)\n\n def __hash__(self):\n return id(self)\n\n def __eq__(self, other):\n # Comparing these dicts does not make sense.\n return self is other\n | .venv\Lib\site-packages\parso\python\tree.py | tree.py | Python | 37,226 | 0.95 | 0.274699 | 0.027805 | awesome-app | 404 | 2024-09-22T15:52:43.065107 | GPL-3.0 | false | 311e3c1790fb7362032fb0618f0374cb |
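The `tree.py` row above defines parso's Python node classes (`Module`, `Function`, `Param`, `Name`, `UsedNamesMapping`, …). A minimal sketch of how those APIs fit together, assuming the archived `parso` package (0.8.x) is importable; it only uses methods shown in the source above (`iter_funcdefs`, `get_params`, `Param.default`, `is_generator`, `get_used_names`):

```python
import parso

# Parse a small function and navigate the tree with the classes defined above.
source = "def add(x, y=1):\n    return x + y\n"
module = parso.parse(source)         # a Module ('file_input') node

func = next(module.iter_funcdefs())  # the Function ('funcdef') node
params = func.get_params()           # Param helper nodes built by _create_params

print(func.name.value)                     # add
print([p.name.value for p in params])      # ['x', 'y']
print(params[1].default.get_code())        # 1
print(func.is_generator())                 # False
print(sorted(module.get_used_names()))     # ['add', 'x', 'y']
```

`get_params()` works because `Function.__init__` rewrites the raw parameter children into `Param` helper nodes via `_create_params`; `get_used_names()` returns the immutable `UsedNamesMapping` over every `Name` leaf in the module, definitions and references alike.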
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\diff.cpython-313.pyc | diff.cpython-313.pyc | Other | 36,288 | 0.8 | 0.050898 | 0 | python-kit | 184 | 2023-07-20T14:21:58.845595 | MIT | false | b01e12dc04fe2f590dfa4d46a74261df |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\errors.cpython-313.pyc | errors.cpython-313.pyc | Other | 63,675 | 0.75 | 0.028351 | 0.008021 | python-kit | 678 | 2024-11-04T03:39:39.151959 | BSD-3-Clause | false | 07e5a61b2ef4d081b6fdc376ff3e6a72 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\parser.cpython-313.pyc | parser.cpython-313.pyc | Other | 9,581 | 0.95 | 0.024096 | 0 | node-utils | 651 | 2023-07-18T06:37:27.291236 | Apache-2.0 | false | 18c25f753812ba5807f26a69027d0bd2 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\pep8.cpython-313.pyc | pep8.cpython-313.pyc | Other | 35,811 | 0.95 | 0.074257 | 0.005102 | awesome-app | 769 | 2025-03-29T14:54:37.472640 | BSD-3-Clause | false | f8c05e1b8f012ce6fbb1483af31b41f4 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\prefix.cpython-313.pyc | prefix.cpython-313.pyc | Other | 4,441 | 0.8 | 0 | 0.030769 | react-lib | 640 | 2024-07-11T00:39:13.909932 | GPL-3.0 | false | a9e1da43a76c68e6ddf8cc0c0565f638 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\token.cpython-313.pyc | token.cpython-313.pyc | Other | 1,780 | 0.95 | 0 | 0 | python-kit | 676 | 2025-01-31T04:44:08.349215 | BSD-3-Clause | false | 756ba2ab8185e93ff5a4d1d581dc966f |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\tokenize.cpython-313.pyc | tokenize.cpython-313.pyc | Other | 24,975 | 0.95 | 0.033473 | 0.004329 | node-utils | 93 | 2024-07-27T22:58:24.050558 | GPL-3.0 | false | 0e330f32697a25f9386b3c2735d3fcb3 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\tree.cpython-313.pyc | tree.cpython-313.pyc | Other | 54,595 | 0.75 | 0.109959 | 0.004425 | python-kit | 556 | 2025-06-11T16:03:24.523164 | MIT | false | 4c279c5e8c83901a9ae0c4398eb4a139 |
\n\n | .venv\Lib\site-packages\parso\python\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 187 | 0.7 | 0 | 0 | node-utils | 382 | 2024-02-26T15:20:37.041192 | BSD-3-Clause | false | 0c4a961f2dd342d5232beee877bb883d |
\n\n | .venv\Lib\site-packages\parso\__pycache__\cache.cpython-313.pyc | cache.cpython-313.pyc | Other | 10,123 | 0.8 | 0.010753 | 0.033333 | python-kit | 403 | 2024-11-18T09:20:50.288275 | MIT | false | 0a8094a04c60be2338055aa3c80b9e90 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\file_io.cpython-313.pyc | file_io.cpython-313.pyc | Other | 2,392 | 0.8 | 0.045455 | 0 | python-kit | 993 | 2024-12-19T10:03:32.006793 | GPL-3.0 | false | d22e04f8cd3b2da0bf72b22cffd7fa21 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\grammar.cpython-313.pyc | grammar.cpython-313.pyc | Other | 12,554 | 0.95 | 0.113821 | 0 | vue-tools | 925 | 2023-08-12T20:08:41.417157 | Apache-2.0 | false | f4958b38d518b1017b024ed6a172b769 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\normalizer.cpython-313.pyc | normalizer.cpython-313.pyc | Other | 10,708 | 0.95 | 0.050633 | 0 | react-lib | 99 | 2023-09-13T11:28:19.540164 | BSD-3-Clause | false | 1887c18aed56e7d338b8f48d196ea84f |
\n\n | .venv\Lib\site-packages\parso\__pycache__\parser.cpython-313.pyc | parser.cpython-313.pyc | Other | 10,188 | 0.95 | 0.050505 | 0 | python-kit | 865 | 2023-10-28T20:55:30.297345 | MIT | false | 2aaab05ce17afd79513cd17dbc6b07dd |
\n\n | .venv\Lib\site-packages\parso\__pycache__\tree.cpython-313.pyc | tree.cpython-313.pyc | Other | 21,042 | 0.95 | 0.083004 | 0 | node-utils | 228 | 2025-05-17T04:45:51.118040 | MIT | false | f9cc70ef52bbdd63b8e6a74c6662ced1 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\utils.cpython-313.pyc | utils.cpython-313.pyc | Other | 7,799 | 0.8 | 0.053191 | 0 | vue-tools | 525 | 2024-08-02T15:14:31.360889 | MIT | false | fba6ab0390c6a2184b679921340463b3 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\_compatibility.cpython-313.pyc | _compatibility.cpython-313.pyc | Other | 306 | 0.7 | 0 | 0.142857 | vue-tools | 258 | 2024-07-08T17:17:33.259228 | BSD-3-Clause | false | af67ece39fd123099973095deea73ef1 |
\n\n | .venv\Lib\site-packages\parso\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 1,980 | 0.95 | 0.06383 | 0 | python-kit | 529 | 2025-02-16T06:16:25.147363 | BSD-3-Clause | false | d01fd7a69c816afe164d9b628f870a4d |
Main Authors\n============\n\nDavid Halter (@davidhalter) <davidhalter88@gmail.com>\n\nCode Contributors\n=================\nAlisdair Robertson (@robodair)\nBryan Forbes (@bryanforbes) <bryan@reigndropsfall.net>\n\n\nCode Contributors (to Jedi and therefore possibly to this library)\n==================================================================\n\nTakafumi Arakaki (@tkf) <aka.tkf@gmail.com>\nDanilo Bargen (@dbrgn) <mail@dbrgn.ch>\nLaurens Van Houtven (@lvh) <_@lvh.cc>\nAldo Stracquadanio (@Astrac) <aldo.strac@gmail.com>\nJean-Louis Fuchs (@ganwell) <ganwell@fangorn.ch>\ntek (@tek)\nYasha Borevich (@jjay) <j.borevich@gmail.com>\nAaron Griffin <aaronmgriffin@gmail.com>\nandviro (@andviro)\nMike Gilbert (@floppym) <floppym@gentoo.org>\nAaron Meurer (@asmeurer) <asmeurer@gmail.com>\nLubos Trilety <ltrilety@redhat.com>\nAkinori Hattori (@hattya) <hattya@gmail.com>\nsrusskih (@srusskih)\nSteven Silvester (@blink1073)\nColin Duquesnoy (@ColinDuquesnoy) <colin.duquesnoy@gmail.com>\nJorgen Schaefer (@jorgenschaefer) <contact@jorgenschaefer.de>\nFredrik Bergroth (@fbergroth)\nMathias Fußenegger (@mfussenegger)\nSyohei Yoshida (@syohex) <syohex@gmail.com>\nppalucky (@ppalucky)\nimmerrr (@immerrr) immerrr@gmail.com\nAlbertas Agejevas (@alga)\nSavor d'Isavano (@KenetJervet) <newelevenken@163.com>\nPhillip Berndt (@phillipberndt) <phillip.berndt@gmail.com>\nIan Lee (@IanLee1521) <IanLee1521@gmail.com>\nFarkhad Khatamov (@hatamov) <comsgn@gmail.com>\nKevin Kelley (@kelleyk) <kelleyk@kelleyk.net>\nSid Shanker (@squidarth) <sid.p.shanker@gmail.com>\nReinoud Elhorst (@reinhrst)\nGuido van Rossum (@gvanrossum) <guido@python.org>\nDmytro Sadovnychyi (@sadovnychyi) <jedi@dmit.ro>\nCristi Burcă (@scribu)\nbstaint (@bstaint)\nMathias Rav (@Mortal) <rav@cs.au.dk>\nDaniel Fiterman (@dfit99) <fitermandaniel2@gmail.com>\nSimon Ruggier (@sruggier)\nÉlie Gouzien (@ElieGouzien)\nTim Gates (@timgates42) <tim.gates@iress.com>\nBatuhan Taskaya (@isidentical) <isidentical@gmail.com>\nJocelyn 
Boullier (@Kazy) <jocelyn@boullier.bzh>\n\n\nNote: (@user) means a github user name.\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\AUTHORS.txt | AUTHORS.txt | Other | 2,029 | 0.7 | 0 | 0 | awesome-app | 500 | 2024-07-25T20:50:08.899001 | Apache-2.0 | false | 85fd2d33cea11749324d7bf4667ba405 |
pip\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 387 | 2025-01-17T09:50:19.784746 | GPL-3.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
All contributions towards parso are MIT licensed.\n\nSome Python files have been taken from the standard library and are therefore\nPSF licensed. Modifications on these files are dual licensed (both MIT and\nPSF). These files are:\n\n- parso/pgen2/*\n- parso/tokenize.py\n- parso/token.py\n- test/test_pgen2.py\n\nAlso some test files under test/normalizer_issue_files have been copied from\nhttps://github.com/PyCQA/pycodestyle (Expat License == MIT License).\n\n-------------------------------------------------------------------------------\nThe MIT License (MIT)\n\nCopyright (c) <2013-2017> <David Halter and others, see AUTHORS.txt>\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n-------------------------------------------------------------------------------\n\nPYTHON SOFTWARE FOUNDATION LICENSE VERSION 2\n--------------------------------------------\n\n1. 
This LICENSE AGREEMENT is between the Python Software Foundation\n("PSF"), and the Individual or Organization ("Licensee") accessing and\notherwise using this software ("Python") in source or binary form and\nits associated documentation.\n\n2. Subject to the terms and conditions of this License Agreement, PSF hereby\ngrants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,\nanalyze, test, perform and/or display publicly, prepare derivative works,\ndistribute, and otherwise use Python alone or in any derivative version,\nprovided, however, that PSF's License Agreement and PSF's notice of copyright,\ni.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,\n2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved"\nare retained in Python alone or in any derivative version prepared by Licensee.\n\n3. In the event Licensee prepares a derivative work that is based on\nor incorporates Python or any part thereof, and wants to make\nthe derivative work available to others as provided herein, then\nLicensee hereby agrees to include in any such work a brief summary of\nthe changes made to Python.\n\n4. PSF is making Python available to Licensee on an "AS IS"\nbasis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR\nIMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND\nDISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS\nFOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT\nINFRINGE ANY THIRD PARTY RIGHTS.\n\n5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON\nFOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS\nA RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,\nOR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.\n\n6. This License Agreement will automatically terminate upon a material\nbreach of its terms and conditions.\n\n7. 
Nothing in this License Agreement shall be deemed to create any\nrelationship of agency, partnership, or joint venture between PSF and\nLicensee. This License Agreement does not grant permission to use PSF\ntrademarks or trade name in a trademark sense to endorse or promote\nproducts or services of Licensee, or any third party.\n\n8. By copying, installing or otherwise using Python, Licensee\nagrees to be bound by the terms and conditions of this License\nAgreement.\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\LICENSE.txt | LICENSE.txt | Other | 4,176 | 0.8 | 0 | 0 | python-kit | 646 | 2024-12-24T08:33:47.488104 | GPL-3.0 | false | cbaa2675b2424d771451332a7a69503f |
Metadata-Version: 2.1\nName: parso\nVersion: 0.8.4\nSummary: A Python Parser\nHome-page: https://github.com/davidhalter/parso\nAuthor: David Halter\nAuthor-email: davidhalter88@gmail.com\nMaintainer: David Halter\nMaintainer-email: davidhalter88@gmail.com\nLicense: MIT\nKeywords: python parser parsing\nPlatform: any\nClassifier: Development Status :: 4 - Beta\nClassifier: Environment :: Plugins\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: MIT License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.6\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Topic :: Software Development :: Libraries :: Python Modules\nClassifier: Topic :: Text Editors :: Integrated Development Environments (IDE)\nClassifier: Topic :: Utilities\nClassifier: Typing :: Typed\nRequires-Python: >=3.6\nProvides-Extra: qa\nRequires-Dist: flake8 (==5.0.4) ; extra == 'qa'\nRequires-Dist: mypy (==0.971) ; extra == 'qa'\nRequires-Dist: types-setuptools (==67.2.0.1) ; extra == 'qa'\nProvides-Extra: testing\nRequires-Dist: docopt ; extra == 'testing'\nRequires-Dist: pytest ; extra == 'testing'\n\n###################################################################\nparso - A Python Parser\n###################################################################\n\n\n.. image:: https://github.com/davidhalter/parso/workflows/Build/badge.svg?branch=master\n :target: https://github.com/davidhalter/parso/actions\n :alt: GitHub Actions build status\n\n.. image:: https://coveralls.io/repos/github/davidhalter/parso/badge.svg?branch=master\n :target: https://coveralls.io/github/davidhalter/parso?branch=master\n :alt: Coverage Status\n\n.. image:: https://pepy.tech/badge/parso\n :target: https://pepy.tech/project/parso\n :alt: PyPI Downloads\n\n.. 
image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png\n\nParso is a Python parser that supports error recovery and round-trip parsing\nfor different Python versions (in multiple Python versions). Parso is also able\nto list multiple syntax errors in your python file.\n\nParso has been battle-tested by jedi_. It was pulled out of jedi to be useful\nfor other projects as well.\n\nParso consists of a small API to parse Python and analyse the syntax tree.\n\nA simple example:\n\n.. code-block:: python\n\n >>> import parso\n >>> module = parso.parse('hello + 1', version="3.9")\n >>> expr = module.children[0]\n >>> expr\n PythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])\n >>> print(expr.get_code())\n hello + 1\n >>> name = expr.children[0]\n >>> name\n <Name: hello@1,0>\n >>> name.end_pos\n (1, 5)\n >>> expr.end_pos\n (1, 9)\n\nTo list multiple issues:\n\n.. code-block:: python\n\n >>> grammar = parso.load_grammar()\n >>> module = grammar.parse('foo +\nbar\ncontinue')\n >>> error1, error2 = grammar.iter_errors(module)\n >>> error1.message\n 'SyntaxError: invalid syntax'\n >>> error2.message\n "SyntaxError: 'continue' not properly in loop"\n\nResources\n=========\n\n- `Testing <https://parso.readthedocs.io/en/latest/docs/development.html#testing>`_\n- `PyPI <https://pypi.python.org/pypi/parso>`_\n- `Docs <https://parso.readthedocs.org/en/latest/>`_\n- Uses `semantic versioning <https://semver.org/>`_\n\nInstallation\n============\n\n pip install parso\n\nFuture\n======\n\n- There will be better support for refactoring and comments. Stay tuned.\n- There's a WIP PEP8 validator. 
It's however not in a good shape, yet.\n\nKnown Issues\n============\n\n- `async`/`await` are already used as keywords in Python3.6.\n- `from __future__ import print_function` is not ignored.\n\n\nAcknowledgements\n================\n\n- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2\n (originally used in lib2to3).\n- `Salome Schneider <https://www.crepes-schnaegg.ch/cr%C3%AApes-schn%C3%A4gg/kunst-f%C3%BCrs-cr%C3%AApes-mobil/>`_\n for the extremely awesome parso logo.\n\n\n.. _jedi: https://github.com/davidhalter/jedi\n\n\n.. :changelog:\n\nChangelog\n---------\n\nUnreleased\n++++++++++\n\n0.8.4 (2024-04-05)\n++++++++++++++++++\n\n- Add basic support for Python 3.13\n\n0.8.3 (2021-11-30)\n++++++++++++++++++\n\n- Add basic support for Python 3.11 and 3.12\n\n0.8.2 (2021-03-30)\n++++++++++++++++++\n\n- Various small bugfixes\n\n0.8.1 (2020-12-10)\n++++++++++++++++++\n\n- Various small bugfixes\n\n0.8.0 (2020-08-05)\n++++++++++++++++++\n\n- Dropped Support for Python 2.7, 3.4, 3.5\n- It's possible to use ``pathlib.Path`` objects now in the API\n- The stubs are gone, we are now using annotations\n- ``namedexpr_test`` nodes are now a proper class called ``NamedExpr``\n- A lot of smaller refactorings\n\n0.7.1 (2020-07-24)\n++++++++++++++++++\n\n- Fixed a couple of smaller bugs (mostly syntax error detection in\n ``Grammar.iter_errors``)\n\nThis is going to be the last release that supports Python 2.7, 3.4 and 3.5.\n\n0.7.0 (2020-04-13)\n++++++++++++++++++\n\n- Fix a lot of annoying bugs in the diff parser. The fuzzer did not find\n issues anymore even after running it for more than 24 hours (500k tests).\n- Small grammar change: suites can now contain newlines even after a newline.\n This should really not matter if you don't use error recovery. 
It allows for\n nicer error recovery.\n\n0.6.2 (2020-02-27)\n++++++++++++++++++\n\n- Bugfixes\n- Add Grammar.refactor (might still be subject to change until 0.7.0)\n\n0.6.1 (2020-02-03)\n++++++++++++++++++\n\n- Add ``parso.normalizer.Issue.end_pos`` to make it possible to know where an\n issue ends\n\n0.6.0 (2020-01-26)\n++++++++++++++++++\n\n- Dropped Python 2.6/Python 3.3 support\n- del_stmt names are now considered as a definition\n (for ``name.is_definition()``)\n- Bugfixes\n\n0.5.2 (2019-12-15)\n++++++++++++++++++\n\n- Add include_setitem to get_definition/is_definition and get_defined_names (#66)\n- Fix named expression error listing (#89, #90)\n- Fix some f-string tokenizer issues (#93)\n\n0.5.1 (2019-07-13)\n++++++++++++++++++\n\n- Fix: Some unicode identifiers were not correctly tokenized\n- Fix: Line continuations in f-strings are now working\n\n0.5.0 (2019-06-20)\n++++++++++++++++++\n\n- **Breaking Change** comp_for is now called sync_comp_for for all Python\n versions to be compatible with the Python 3.8 Grammar\n- Added .pyi stubs for a lot of the parso API\n- Small FileIO changes\n\n0.4.0 (2019-04-05)\n++++++++++++++++++\n\n- Python 3.8 support\n- FileIO support, it's now possible to use abstract file IO, support is alpha\n\n0.3.4 (2019-02-13)\n+++++++++++++++++++\n\n- Fix an f-string tokenizer error\n\n0.3.3 (2019-02-06)\n+++++++++++++++++++\n\n- Fix async errors in the diff parser\n- A fix in iter_errors\n- This is a very small bugfix release\n\n0.3.2 (2019-01-24)\n+++++++++++++++++++\n\n- 20+ bugfixes in the diff parser and 3 in the tokenizer\n- A fuzzer for the diff parser, to give confidence that the diff parser is in a\n good shape.\n- Some bugfixes for f-string\n\n0.3.1 (2018-07-09)\n+++++++++++++++++++\n\n- Bugfixes in the diff parser and keyword-only arguments\n\n0.3.0 (2018-06-30)\n+++++++++++++++++++\n\n- Rewrote the pgen2 parser generator.\n\n0.2.1 (2018-05-21)\n+++++++++++++++++++\n\n- A bugfix for the diff parser.\n- Grammar files can 
now be loaded from a specific path.\n\n0.2.0 (2018-04-15)\n+++++++++++++++++++\n\n- f-strings are now parsed as a part of the normal Python grammar. This makes\n it way easier to deal with them.\n\n0.1.1 (2017-11-05)\n+++++++++++++++++++\n\n- Fixed a few bugs in the caching layer\n- Added support for Python 3.7\n\n0.1.0 (2017-09-04)\n+++++++++++++++++++\n\n- Pulling the library out of Jedi. Some APIs will definitely change.\n\n\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\METADATA | METADATA | Other | 7,674 | 0.95 | 0.066202 | 0.009709 | node-utils | 50 | 2024-02-08T00:36:46.674599 | GPL-3.0 | false | 139d4209de5041f6236c9ea18cba846f |
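The parso README in the row above highlights round-trip parsing: the syntax tree keeps whitespace and comments, so `get_code()` reproduces the source exactly. A minimal stdlib sketch of the same round-trip property, using `tokenize` instead of parso (the `round_trip` helper is illustrative, not part of parso's API):

```python
import io
import tokenize

def round_trip(source: str) -> str:
    # Tokenize with full position info, then reassemble; given complete
    # 5-tuples, untokenize reconstructs the original spacing, mirroring
    # the lossless round-trip that parso's get_code() provides.
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return tokenize.untokenize(tokens)

print(round_trip("hello + 1\n") == "hello + 1\n")  # True
```

parso goes further than this sketch: it also recovers from syntax errors and reports several of them at once, which plain `tokenize` cannot do.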
parso-0.8.4.dist-info/AUTHORS.txt,sha256=SDCgu8hXlBBcjPPyUT-SKW20_IM2MxW-95hKFaRIqyI,2029\nparso-0.8.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nparso-0.8.4.dist-info/LICENSE.txt,sha256=-meXMHN1PRdiTK-GhNXugW1wyJ2RLFvKfKDwjnsVDts,4176\nparso-0.8.4.dist-info/METADATA,sha256=Cddv2ow5rQXI88dE0LdPJiGhqps1pykizCUr-Pi1a64,7674\nparso-0.8.4.dist-info/RECORD,,\nparso-0.8.4.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110\nparso-0.8.4.dist-info/top_level.txt,sha256=GOOKQCPcnr0_7IRArxyI0CX5LLu4WLlzIRAVWS-vJ4s,6\nparso/__init__.py,sha256=GYriQ4JgyDKw6GeT2fr6iuaIRuVwCekAmlZmnrVnybM,1607\nparso/__pycache__/__init__.cpython-313.pyc,,\nparso/__pycache__/_compatibility.cpython-313.pyc,,\nparso/__pycache__/cache.cpython-313.pyc,,\nparso/__pycache__/file_io.cpython-313.pyc,,\nparso/__pycache__/grammar.cpython-313.pyc,,\nparso/__pycache__/normalizer.cpython-313.pyc,,\nparso/__pycache__/parser.cpython-313.pyc,,\nparso/__pycache__/tree.cpython-313.pyc,,\nparso/__pycache__/utils.cpython-313.pyc,,\nparso/_compatibility.py,sha256=y-fATJ1dyaoVry175CMDBA088IGTxChkCKD2dUAnsrU,70\nparso/cache.py,sha256=KyQBZdTuBXhDjLmwTSLOgyQoq4NLt_wNr1882DTkOW4,8452\nparso/file_io.py,sha256=2SbXQuMpjAaQ0OYvxZXOgl-oU945-CrIei3eEamWWmk,1023\nparso/grammar.py,sha256=K7HvV0YV6wcjA-ImfE3hrajXIS0VMcRZkJPaT9-K3rI,10553\nparso/normalizer.py,sha256=geYG9UZQ6ZpafTc_CiXQoBt8VImdBsiNw6_GJLeSGbg,5597\nparso/parser.py,sha256=qlIrRikSxAccfsC6B6Y9sPWyEhR0HIBaCbNveV1OcAE,7182\nparso/pgen2/__init__.py,sha256=kFfRZsSReM49V0YIJ_cG0_TMTew2t4IMbG95KO2BI8E,382\nparso/pgen2/__pycache__/__init__.cpython-313.pyc,,\nparso/pgen2/__pycache__/generator.cpython-313.pyc,,\nparso/pgen2/__pycache__/grammar_parser.cpython-313.pyc,,\nparso/pgen2/generator.py,sha256=PHjCpx7QM2duGZqGw5GOQQIgc6RE3jcKV7IwXcOgJhw,14580\nparso/pgen2/grammar_parser.py,sha256=knJh3a40_JxUkb0HePG78ZZoqjpPNk3uZwNOz2EkkV4,5515\nparso/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nparso/python/__init__.p
y,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nparso/python/__pycache__/__init__.cpython-313.pyc,,\nparso/python/__pycache__/diff.cpython-313.pyc,,\nparso/python/__pycache__/errors.cpython-313.pyc,,\nparso/python/__pycache__/parser.cpython-313.pyc,,\nparso/python/__pycache__/pep8.cpython-313.pyc,,\nparso/python/__pycache__/prefix.cpython-313.pyc,,\nparso/python/__pycache__/token.cpython-313.pyc,,\nparso/python/__pycache__/tokenize.cpython-313.pyc,,\nparso/python/__pycache__/tree.cpython-313.pyc,,\nparso/python/diff.py,sha256=jyrqWRKklyPPezZRKRHxoKbhkywiCUoGH_K1HcRSyMA,34206\nparso/python/errors.py,sha256=Vlmxc0MLUNTYnVNMEWVO68JTpkmXzLEG_sh7rkEt6dA,49113\nparso/python/grammar310.txt,sha256=QwXaHqJcJ_zgi9FAAbdv1U_kKgcku9UWjHZoClbtpb4,7511\nparso/python/grammar311.txt,sha256=QwXaHqJcJ_zgi9FAAbdv1U_kKgcku9UWjHZoClbtpb4,7511\nparso/python/grammar312.txt,sha256=QwXaHqJcJ_zgi9FAAbdv1U_kKgcku9UWjHZoClbtpb4,7511\nparso/python/grammar313.txt,sha256=QwXaHqJcJ_zgi9FAAbdv1U_kKgcku9UWjHZoClbtpb4,7511\nparso/python/grammar36.txt,sha256=ezjXEeLpG9BBMrN0rbM3Z77mcn0XESxSlAaZEy2er-k,6948\nparso/python/grammar37.txt,sha256=Ke73_sTcivtBt2rkJaoNYiXa_zLenhCr96HOVPpZB_E,6804\nparso/python/grammar38.txt,sha256=OhPReVYqhsX2RWyVryca3RUGcvLb-R1dcbwdbgPIvBI,7591\nparso/python/grammar39.txt,sha256=cVrVbF9Pg5UJLFi2tvLetPkG-BOAkpqDa9hqslNjSHU,7499\nparso/python/parser.py,sha256=5OMU32ybPF6kcKUdbcfNNkDOK8hJy0B7fqi6b-Gfwqw,8108\nparso/python/pep8.py,sha256=tsuRslXZvfio8LTBIAbfExjBIT1f3Xjx3igt28fm3G4,33779\nparso/python/prefix.py,sha256=BM93VenBA1Vs-qk2AJSLBMJNn5BDbyVZLIZ5ScT4FIU,2743\nparso/python/token.py,sha256=0dzmQf6L59bEJb9MXYbrDtq3bAHNdTuk-PmOhox81G4,909\nparso/python/tokenize.py,sha256=kqmG8SEdkbLG3Gf6gQyeQZerp4yhzzXX14FCYOZJ5mI,25795\nparso/python/tree.py,sha256=bwJ54y4Nt_ebUqXiFE7QZbBplNmicL1JDvy-8o-Av7o,37226\nparso/tree.py,sha256=deZ68uAq0jodEeumJpBYWXWciIXcBYfpsLpe1f1WLO8,16153\nparso/utils.py,sha256=qW8kJuw9pyK8WaIi37FX44kNjAaIgu15EGv7V_R4PmE,6620\n | 
.venv\Lib\site-packages\parso-0.8.4.dist-info\RECORD | RECORD | Other | 3,971 | 0.7 | 0 | 0 | react-lib | 694 | 2025-06-10T09:27:43.819644 | Apache-2.0 | false | 74f0b8fcc0f11060822be1232c01694f |
parso\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\top_level.txt | top_level.txt | Other | 6 | 0.5 | 0 | 0 | node-utils | 934 | 2025-02-15T23:36:39.045127 | Apache-2.0 | false | 8d335af1fc02682b477f653138b332cc |
Wheel-Version: 1.0\nGenerator: bdist_wheel (0.34.2)\nRoot-Is-Purelib: true\nTag: py2-none-any\nTag: py3-none-any\n\n | .venv\Lib\site-packages\parso-0.8.4.dist-info\WHEEL | WHEEL | Other | 110 | 0.7 | 0 | 0 | awesome-app | 771 | 2025-04-20T09:18:10.508138 | BSD-3-Clause | false | d2a91f104288b412dbc67b54de94e3ac |
from __future__ import annotations\n\nimport os\nfrom io import BytesIO\nfrom typing import IO\n\nfrom . import ExifTags, Image, ImageFile\n\ntry:\n from . import _avif\n\n SUPPORTED = True\nexcept ImportError:\n SUPPORTED = False\n\n# Decoder options as module globals, until there is a way to pass parameters\n# to Image.open (see https://github.com/python-pillow/Pillow/issues/569)\nDECODE_CODEC_CHOICE = "auto"\nDEFAULT_MAX_THREADS = 0\n\n\ndef get_codec_version(codec_name: str) -> str | None:\n versions = _avif.codec_versions()\n for version in versions.split(", "):\n if version.split(" [")[0] == codec_name:\n return version.split(":")[-1].split(" ")[0]\n return None\n\n\ndef _accept(prefix: bytes) -> bool | str:\n if prefix[4:8] != b"ftyp":\n return False\n major_brand = prefix[8:12]\n if major_brand in (\n # coding brands\n b"avif",\n b"avis",\n # We accept files with AVIF container brands; we can't yet know if\n # the ftyp box has the correct compatible brands, but if it doesn't\n # then the plugin will raise a SyntaxError which Pillow will catch\n # before moving on to the next plugin that accepts the file.\n #\n # Also, because this file might not actually be an AVIF file, we\n # don't raise an error if AVIF support isn't properly compiled.\n b"mif1",\n b"msf1",\n ):\n if not SUPPORTED:\n return (\n "image file could not be identified because AVIF support not installed"\n )\n return True\n return False\n\n\ndef _get_default_max_threads() -> int:\n if DEFAULT_MAX_THREADS:\n return DEFAULT_MAX_THREADS\n if hasattr(os, "sched_getaffinity"):\n return len(os.sched_getaffinity(0))\n else:\n return os.cpu_count() or 1\n\n\nclass AvifImageFile(ImageFile.ImageFile):\n format = "AVIF"\n format_description = "AVIF image"\n __frame = -1\n\n def _open(self) -> None:\n if not SUPPORTED:\n msg = "image file could not be opened because AVIF support not installed"\n raise SyntaxError(msg)\n\n if DECODE_CODEC_CHOICE != "auto" and not _avif.decoder_codec_available(\n 
DECODE_CODEC_CHOICE\n ):\n msg = "Invalid opening codec"\n raise ValueError(msg)\n self._decoder = _avif.AvifDecoder(\n self.fp.read(),\n DECODE_CODEC_CHOICE,\n _get_default_max_threads(),\n )\n\n # Get info from decoder\n self._size, self.n_frames, self._mode, icc, exif, exif_orientation, xmp = (\n self._decoder.get_info()\n )\n self.is_animated = self.n_frames > 1\n\n if icc:\n self.info["icc_profile"] = icc\n if xmp:\n self.info["xmp"] = xmp\n\n if exif_orientation != 1 or exif:\n exif_data = Image.Exif()\n if exif:\n exif_data.load(exif)\n original_orientation = exif_data.get(ExifTags.Base.Orientation, 1)\n else:\n original_orientation = 1\n if exif_orientation != original_orientation:\n exif_data[ExifTags.Base.Orientation] = exif_orientation\n exif = exif_data.tobytes()\n if exif:\n self.info["exif"] = exif\n self.seek(0)\n\n def seek(self, frame: int) -> None:\n if not self._seek_check(frame):\n return\n\n # Set tile\n self.__frame = frame\n self.tile = [ImageFile._Tile("raw", (0, 0) + self.size, 0, self.mode)]\n\n def load(self) -> Image.core.PixelAccess | None:\n if self.tile:\n # We need to load the image data for this frame\n data, timescale, pts_in_timescales, duration_in_timescales = (\n self._decoder.get_frame(self.__frame)\n )\n self.info["timestamp"] = round(1000 * (pts_in_timescales / timescale))\n self.info["duration"] = round(1000 * (duration_in_timescales / timescale))\n\n if self.fp and self._exclusive_fp:\n self.fp.close()\n self.fp = BytesIO(data)\n\n return super().load()\n\n def load_seek(self, pos: int) -> None:\n pass\n\n def tell(self) -> int:\n return self.__frame\n\n\ndef _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n _save(im, fp, filename, save_all=True)\n\n\ndef _save(\n im: Image.Image, fp: IO[bytes], filename: str | bytes, save_all: bool = False\n) -> None:\n info = im.encoderinfo.copy()\n if save_all:\n append_images = list(info.get("append_images", []))\n else:\n append_images = []\n\n total = 0\n for 
ims in [im] + append_images:\n total += getattr(ims, "n_frames", 1)\n\n quality = info.get("quality", 75)\n if not isinstance(quality, int) or quality < 0 or quality > 100:\n msg = "Invalid quality setting"\n raise ValueError(msg)\n\n duration = info.get("duration", 0)\n subsampling = info.get("subsampling", "4:2:0")\n speed = info.get("speed", 6)\n max_threads = info.get("max_threads", _get_default_max_threads())\n codec = info.get("codec", "auto")\n if codec != "auto" and not _avif.encoder_codec_available(codec):\n msg = "Invalid saving codec"\n raise ValueError(msg)\n range_ = info.get("range", "full")\n tile_rows_log2 = info.get("tile_rows", 0)\n tile_cols_log2 = info.get("tile_cols", 0)\n alpha_premultiplied = bool(info.get("alpha_premultiplied", False))\n autotiling = bool(info.get("autotiling", tile_rows_log2 == tile_cols_log2 == 0))\n\n icc_profile = info.get("icc_profile", im.info.get("icc_profile"))\n exif_orientation = 1\n if exif := info.get("exif"):\n if isinstance(exif, Image.Exif):\n exif_data = exif\n else:\n exif_data = Image.Exif()\n exif_data.load(exif)\n if ExifTags.Base.Orientation in exif_data:\n exif_orientation = exif_data.pop(ExifTags.Base.Orientation)\n exif = exif_data.tobytes() if exif_data else b""\n elif isinstance(exif, Image.Exif):\n exif = exif_data.tobytes()\n\n xmp = info.get("xmp")\n\n if isinstance(xmp, str):\n xmp = xmp.encode("utf-8")\n\n advanced = info.get("advanced")\n if advanced is not None:\n if isinstance(advanced, dict):\n advanced = advanced.items()\n try:\n advanced = tuple(advanced)\n except TypeError:\n invalid = True\n else:\n invalid = any(not isinstance(v, tuple) or len(v) != 2 for v in advanced)\n if invalid:\n msg = (\n "advanced codec options must be a dict of key-value string "\n "pairs or a series of key-value two-tuples"\n )\n raise ValueError(msg)\n\n # Setup the AVIF encoder\n enc = _avif.AvifEncoder(\n im.size,\n subsampling,\n quality,\n speed,\n max_threads,\n codec,\n range_,\n tile_rows_log2,\n 
tile_cols_log2,\n alpha_premultiplied,\n autotiling,\n icc_profile or b"",\n exif or b"",\n exif_orientation,\n xmp or b"",\n advanced,\n )\n\n # Add each frame\n frame_idx = 0\n frame_duration = 0\n cur_idx = im.tell()\n is_single_frame = total == 1\n try:\n for ims in [im] + append_images:\n # Get number of frames in this image\n nfr = getattr(ims, "n_frames", 1)\n\n for idx in range(nfr):\n ims.seek(idx)\n\n # Make sure image mode is supported\n frame = ims\n rawmode = ims.mode\n if ims.mode not in {"RGB", "RGBA"}:\n rawmode = "RGBA" if ims.has_transparency_data else "RGB"\n frame = ims.convert(rawmode)\n\n # Update frame duration\n if isinstance(duration, (list, tuple)):\n frame_duration = duration[frame_idx]\n else:\n frame_duration = duration\n\n # Append the frame to the animation encoder\n enc.add(\n frame.tobytes("raw", rawmode),\n frame_duration,\n frame.size,\n rawmode,\n is_single_frame,\n )\n\n # Update frame index\n frame_idx += 1\n\n if not save_all:\n break\n\n finally:\n im.seek(cur_idx)\n\n # Get the final output from the encoder\n data = enc.finish()\n if data is None:\n msg = "cannot write file as AVIF (encoder returned None)"\n raise OSError(msg)\n\n fp.write(data)\n\n\nImage.register_open(AvifImageFile.format, AvifImageFile, _accept)\nif SUPPORTED:\n Image.register_save(AvifImageFile.format, _save)\n Image.register_save_all(AvifImageFile.format, _save_all)\n Image.register_extensions(AvifImageFile.format, [".avif", ".avifs"])\n Image.register_mime(AvifImageFile.format, "image/avif")\n | .venv\Lib\site-packages\PIL\AvifImagePlugin.py | AvifImagePlugin.py | Python | 9,285 | 0.95 | 0.199313 | 0.086777 | node-utils | 553 | 2024-02-19T14:23:15.814564 | Apache-2.0 | false | 265a4619d837b044b0189ce01860ed30 |
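The `_accept` hook in AvifImagePlugin above sniffs the ISO BMFF `ftyp` box before committing to the AVIF decoder: bytes 4:8 must be `ftyp`, and bytes 8:12 must be one of the accepted brands. A standalone sketch of that check (the function name is illustrative; the plugin additionally returns an explanatory string when AVIF support is not compiled in):

```python
def looks_like_avif(prefix: bytes) -> bool:
    # An ISO BMFF file opens with a 4-byte box size, the box type
    # "ftyp", then the 4-byte major brand; the plugin accepts the
    # AVIF coding brands plus the container brands mif1/msf1.
    return prefix[4:8] == b"ftyp" and prefix[8:12] in (
        b"avif", b"avis", b"mif1", b"msf1",
    )

print(looks_like_avif(b"\x00\x00\x00\x1cftypavif\x00\x00\x00\x00"))  # True
print(looks_like_avif(b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01"))      # False
```

Accepting the container brands `mif1`/`msf1` is deliberately permissive: the file might not actually be AVIF, so the plugin defers the real validation (and any `SyntaxError`) to `_open`.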
#\n# The Python Imaging Library\n# $Id$\n#\n# bitmap distribution font (bdf) file parser\n#\n# history:\n# 1996-05-16 fl created (as bdf2pil)\n# 1997-08-25 fl converted to FontFile driver\n# 2001-05-25 fl removed bogus __init__ call\n# 2002-11-20 fl robustification (from Kevin Cazabon, Dmitry Vasiliev)\n# 2003-04-22 fl more robustification (from Graham Dumpleton)\n#\n# Copyright (c) 1997-2003 by Secret Labs AB.\n# Copyright (c) 1997-2003 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n"""\nParse X Bitmap Distribution Format (BDF)\n"""\nfrom __future__ import annotations\n\nfrom typing import BinaryIO\n\nfrom . import FontFile, Image\n\n\ndef bdf_char(\n f: BinaryIO,\n) -> (\n tuple[\n str,\n int,\n tuple[tuple[int, int], tuple[int, int, int, int], tuple[int, int, int, int]],\n Image.Image,\n ]\n | None\n):\n # skip to STARTCHAR\n while True:\n s = f.readline()\n if not s:\n return None\n if s.startswith(b"STARTCHAR"):\n break\n id = s[9:].strip().decode("ascii")\n\n # load symbol properties\n props = {}\n while True:\n s = f.readline()\n if not s or s.startswith(b"BITMAP"):\n break\n i = s.find(b" ")\n props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii")\n\n # load bitmap\n bitmap = bytearray()\n while True:\n s = f.readline()\n if not s or s.startswith(b"ENDCHAR"):\n break\n bitmap += s[:-1]\n\n # The word BBX\n # followed by the width in x (BBw), height in y (BBh),\n # and x and y displacement (BBxoff0, BByoff0)\n # of the lower left corner from the origin of the character.\n width, height, x_disp, y_disp = (int(p) for p in props["BBX"].split())\n\n # The word DWIDTH\n # followed by the width in x and y of the character in device pixels.\n dwx, dwy = (int(p) for p in props["DWIDTH"].split())\n\n bbox = (\n (dwx, dwy),\n (x_disp, -y_disp - height, width + x_disp, -y_disp),\n (0, 0, width, height),\n )\n\n try:\n im = Image.frombytes("1", (width, height), bitmap, "hex", "1")\n except ValueError:\n # deal with 
zero-width characters\n im = Image.new("1", (width, height))\n\n return id, int(props["ENCODING"]), bbox, im\n\n\nclass BdfFontFile(FontFile.FontFile):\n """Font file plugin for the X11 BDF format."""\n\n def __init__(self, fp: BinaryIO) -> None:\n super().__init__()\n\n s = fp.readline()\n if not s.startswith(b"STARTFONT 2.1"):\n msg = "not a valid BDF file"\n raise SyntaxError(msg)\n\n props = {}\n comments = []\n\n while True:\n s = fp.readline()\n if not s or s.startswith(b"ENDPROPERTIES"):\n break\n i = s.find(b" ")\n props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii")\n if s[:i] in [b"COMMENT", b"COPYRIGHT"]:\n if s.find(b"LogicalFontDescription") < 0:\n comments.append(s[i + 1 : -1].decode("ascii"))\n\n while True:\n c = bdf_char(fp)\n if not c:\n break\n id, ch, (xy, dst, src), im = c\n if 0 <= ch < len(self.glyph):\n self.glyph[ch] = xy, dst, src, im\n | .venv\Lib\site-packages\PIL\BdfFontFile.py | BdfFontFile.py | Python | 3,407 | 0.95 | 0.188525 | 0.271845 | vue-tools | 747 | 2025-01-25T23:50:41.273171 | BSD-3-Clause | false | 925fb46dcfebe525b5ed378859677f50 |
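`bdf_char` in BdfFontFile above derives each glyph's metrics from the `BBX` (bounding box and lower-left displacement) and `DWIDTH` (device-pixel advance) properties. A small sketch of that metric computation on a hypothetical property dict, following the same formulas as the parser:

```python
def bdf_bbox(props: dict[str, str]) -> tuple:
    # BBX: glyph width, height, and x/y displacement of the lower-left
    # corner from the origin; DWIDTH: advance in device pixels.
    width, height, x_disp, y_disp = (int(p) for p in props["BBX"].split())
    dwx, dwy = (int(p) for p in props["DWIDTH"].split())
    # Same (advance, destination box, source box) triple bdf_char builds.
    return (
        (dwx, dwy),
        (x_disp, -y_disp - height, width + x_disp, -y_disp),
        (0, 0, width, height),
    )

# An 8x16 glyph whose baseline sits 2 pixels above its bottom edge:
print(bdf_bbox({"BBX": "8 16 0 -2", "DWIDTH": "8 0"}))
```

Note how a negative y displacement (descender depth) flips sign in the destination box, because the destination rectangle is expressed relative to the baseline with y growing downward.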
"""\nBlizzard Mipmap Format (.blp)\nJerome Leclanche <jerome@leclan.ch>\n\nThe contents of this file are hereby released in the public domain (CC0)\nFull text of the CC0 license:\n https://creativecommons.org/publicdomain/zero/1.0/\n\nBLP1 files, used mostly in Warcraft III, are not fully supported.\nAll types of BLP2 files used in World of Warcraft are supported.\n\nThe BLP file structure consists of a header, up to 16 mipmaps of the\ntexture\n\nTexture sizes must be powers of two, though the two dimensions do\nnot have to be equal; 512x256 is valid, but 512x200 is not.\nThe first mipmap (mipmap #0) is the full size image; each subsequent\nmipmap halves both dimensions. The final mipmap should be 1x1.\n\nBLP files come in many different flavours:\n* JPEG-compressed (type == 0) - only supported for BLP1.\n* RAW images (type == 1, encoding == 1). Each mipmap is stored as an\n array of 8-bit values, one per pixel, left to right, top to bottom.\n Each value is an index to the palette.\n* DXT-compressed (type == 1, encoding == 2):\n- DXT1 compression is used if alpha_encoding == 0.\n - An additional alpha bit is used if alpha_depth == 1.\n - DXT3 compression is used if alpha_encoding == 1.\n - DXT5 compression is used if alpha_encoding == 7.\n"""\n\nfrom __future__ import annotations\n\nimport abc\nimport os\nimport struct\nfrom enum import IntEnum\nfrom io import BytesIO\nfrom typing import IO\n\nfrom . import Image, ImageFile\n\n\nclass Format(IntEnum):\n JPEG = 0\n\n\nclass Encoding(IntEnum):\n UNCOMPRESSED = 1\n DXT = 2\n UNCOMPRESSED_RAW_BGRA = 3\n\n\nclass AlphaEncoding(IntEnum):\n DXT1 = 0\n DXT3 = 1\n DXT5 = 7\n\n\ndef unpack_565(i: int) -> tuple[int, int, int]:\n return ((i >> 11) & 0x1F) << 3, ((i >> 5) & 0x3F) << 2, (i & 0x1F) << 3\n\n\ndef decode_dxt1(\n data: bytes, alpha: bool = False\n) -> tuple[bytearray, bytearray, bytearray, bytearray]:\n """\n input: one "row" of data (i.e. 
will produce 4*width pixels)\n """\n\n blocks = len(data) // 8 # number of blocks in row\n ret = (bytearray(), bytearray(), bytearray(), bytearray())\n\n for block_index in range(blocks):\n # Decode next 8-byte block.\n idx = block_index * 8\n color0, color1, bits = struct.unpack_from("<HHI", data, idx)\n\n r0, g0, b0 = unpack_565(color0)\n r1, g1, b1 = unpack_565(color1)\n\n # Decode this block into 4x4 pixels\n # Accumulate the results onto our 4 row accumulators\n for j in range(4):\n for i in range(4):\n # get next control op and generate a pixel\n\n control = bits & 3\n bits = bits >> 2\n\n a = 0xFF\n if control == 0:\n r, g, b = r0, g0, b0\n elif control == 1:\n r, g, b = r1, g1, b1\n elif control == 2:\n if color0 > color1:\n r = (2 * r0 + r1) // 3\n g = (2 * g0 + g1) // 3\n b = (2 * b0 + b1) // 3\n else:\n r = (r0 + r1) // 2\n g = (g0 + g1) // 2\n b = (b0 + b1) // 2\n elif control == 3:\n if color0 > color1:\n r = (2 * r1 + r0) // 3\n g = (2 * g1 + g0) // 3\n b = (2 * b1 + b0) // 3\n else:\n r, g, b, a = 0, 0, 0, 0\n\n if alpha:\n ret[j].extend([r, g, b, a])\n else:\n ret[j].extend([r, g, b])\n\n return ret\n\n\ndef decode_dxt3(data: bytes) -> tuple[bytearray, bytearray, bytearray, bytearray]:\n """\n input: one "row" of data (i.e. 
will produce 4*width pixels)\n """\n\n blocks = len(data) // 16 # number of blocks in row\n ret = (bytearray(), bytearray(), bytearray(), bytearray())\n\n for block_index in range(blocks):\n idx = block_index * 16\n block = data[idx : idx + 16]\n # Decode next 16-byte block.\n bits = struct.unpack_from("<8B", block)\n color0, color1 = struct.unpack_from("<HH", block, 8)\n\n (code,) = struct.unpack_from("<I", block, 12)\n\n r0, g0, b0 = unpack_565(color0)\n r1, g1, b1 = unpack_565(color1)\n\n for j in range(4):\n high = False # Do we want the higher bits?\n for i in range(4):\n alphacode_index = (4 * j + i) // 2\n a = bits[alphacode_index]\n if high:\n high = False\n a >>= 4\n else:\n high = True\n a &= 0xF\n a *= 17 # We get a value between 0 and 15\n\n color_code = (code >> 2 * (4 * j + i)) & 0x03\n\n if color_code == 0:\n r, g, b = r0, g0, b0\n elif color_code == 1:\n r, g, b = r1, g1, b1\n elif color_code == 2:\n r = (2 * r0 + r1) // 3\n g = (2 * g0 + g1) // 3\n b = (2 * b0 + b1) // 3\n elif color_code == 3:\n r = (2 * r1 + r0) // 3\n g = (2 * g1 + g0) // 3\n b = (2 * b1 + b0) // 3\n\n ret[j].extend([r, g, b, a])\n\n return ret\n\n\ndef decode_dxt5(data: bytes) -> tuple[bytearray, bytearray, bytearray, bytearray]:\n """\n input: one "row" of data (i.e. 
will produce 4 * width pixels)\n """\n\n blocks = len(data) // 16 # number of blocks in row\n ret = (bytearray(), bytearray(), bytearray(), bytearray())\n\n for block_index in range(blocks):\n idx = block_index * 16\n block = data[idx : idx + 16]\n # Decode next 16-byte block.\n a0, a1 = struct.unpack_from("<BB", block)\n\n bits = struct.unpack_from("<6B", block, 2)\n alphacode1 = bits[2] | (bits[3] << 8) | (bits[4] << 16) | (bits[5] << 24)\n alphacode2 = bits[0] | (bits[1] << 8)\n\n color0, color1 = struct.unpack_from("<HH", block, 8)\n\n (code,) = struct.unpack_from("<I", block, 12)\n\n r0, g0, b0 = unpack_565(color0)\n r1, g1, b1 = unpack_565(color1)\n\n for j in range(4):\n for i in range(4):\n # get next control op and generate a pixel\n alphacode_index = 3 * (4 * j + i)\n\n if alphacode_index <= 12:\n alphacode = (alphacode2 >> alphacode_index) & 0x07\n elif alphacode_index == 15:\n alphacode = (alphacode2 >> 15) | ((alphacode1 << 1) & 0x06)\n else: # alphacode_index >= 18 and alphacode_index <= 45\n alphacode = (alphacode1 >> (alphacode_index - 16)) & 0x07\n\n if alphacode == 0:\n a = a0\n elif alphacode == 1:\n a = a1\n elif a0 > a1:\n a = ((8 - alphacode) * a0 + (alphacode - 1) * a1) // 7\n elif alphacode == 6:\n a = 0\n elif alphacode == 7:\n a = 255\n else:\n a = ((6 - alphacode) * a0 + (alphacode - 1) * a1) // 5\n\n color_code = (code >> 2 * (4 * j + i)) & 0x03\n\n if color_code == 0:\n r, g, b = r0, g0, b0\n elif color_code == 1:\n r, g, b = r1, g1, b1\n elif color_code == 2:\n r = (2 * r0 + r1) // 3\n g = (2 * g0 + g1) // 3\n b = (2 * b0 + b1) // 3\n elif color_code == 3:\n r = (2 * r1 + r0) // 3\n g = (2 * g1 + g0) // 3\n b = (2 * b1 + b0) // 3\n\n ret[j].extend([r, g, b, a])\n\n return ret\n\n\nclass BLPFormatError(NotImplementedError):\n pass\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith((b"BLP1", b"BLP2"))\n\n\nclass BlpImageFile(ImageFile.ImageFile):\n """\n Blizzard Mipmap Format\n """\n\n format = "BLP"\n 
format_description = "Blizzard Mipmap Format"\n\n def _open(self) -> None:\n self.magic = self.fp.read(4)\n if not _accept(self.magic):\n msg = f"Bad BLP magic {repr(self.magic)}"\n raise BLPFormatError(msg)\n\n compression = struct.unpack("<i", self.fp.read(4))[0]\n if self.magic == b"BLP1":\n alpha = struct.unpack("<I", self.fp.read(4))[0] != 0\n else:\n encoding = struct.unpack("<b", self.fp.read(1))[0]\n alpha = struct.unpack("<b", self.fp.read(1))[0] != 0\n alpha_encoding = struct.unpack("<b", self.fp.read(1))[0]\n self.fp.seek(1, os.SEEK_CUR) # mips\n\n self._size = struct.unpack("<II", self.fp.read(8))\n\n args: tuple[int, int, bool] | tuple[int, int, bool, int]\n if self.magic == b"BLP1":\n encoding = struct.unpack("<i", self.fp.read(4))[0]\n self.fp.seek(4, os.SEEK_CUR) # subtype\n\n args = (compression, encoding, alpha)\n offset = 28\n else:\n args = (compression, encoding, alpha, alpha_encoding)\n offset = 20\n\n decoder = self.magic.decode()\n\n self._mode = "RGBA" if alpha else "RGB"\n self.tile = [ImageFile._Tile(decoder, (0, 0) + self.size, offset, args)]\n\n\nclass _BLPBaseDecoder(abc.ABC, ImageFile.PyDecoder):\n _pulls_fd = True\n\n def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:\n try:\n self._read_header()\n self._load()\n except struct.error as e:\n msg = "Truncated BLP file"\n raise OSError(msg) from e\n return -1, 0\n\n @abc.abstractmethod\n def _load(self) -> None:\n pass\n\n def _read_header(self) -> None:\n self._offsets = struct.unpack("<16I", self._safe_read(16 * 4))\n self._lengths = struct.unpack("<16I", self._safe_read(16 * 4))\n\n def _safe_read(self, length: int) -> bytes:\n assert self.fd is not None\n return ImageFile._safe_read(self.fd, length)\n\n def _read_palette(self) -> list[tuple[int, int, int, int]]:\n ret = []\n for i in range(256):\n try:\n b, g, r, a = struct.unpack("<4B", self._safe_read(4))\n except struct.error:\n break\n ret.append((b, g, r, a))\n return ret\n\n def _read_bgra(\n 
self, palette: list[tuple[int, int, int, int]], alpha: bool\n ) -> bytearray:\n data = bytearray()\n _data = BytesIO(self._safe_read(self._lengths[0]))\n while True:\n try:\n (offset,) = struct.unpack("<B", _data.read(1))\n except struct.error:\n break\n b, g, r, a = palette[offset]\n d: tuple[int, ...] = (r, g, b)\n if alpha:\n d += (a,)\n data.extend(d)\n return data\n\n\nclass BLP1Decoder(_BLPBaseDecoder):\n def _load(self) -> None:\n self._compression, self._encoding, alpha = self.args\n\n if self._compression == Format.JPEG:\n self._decode_jpeg_stream()\n\n elif self._compression == 1:\n if self._encoding in (4, 5):\n palette = self._read_palette()\n data = self._read_bgra(palette, alpha)\n self.set_as_raw(data)\n else:\n msg = f"Unsupported BLP encoding {repr(self._encoding)}"\n raise BLPFormatError(msg)\n else:\n msg = f"Unsupported BLP compression {repr(self._encoding)}"\n raise BLPFormatError(msg)\n\n def _decode_jpeg_stream(self) -> None:\n from .JpegImagePlugin import JpegImageFile\n\n (jpeg_header_size,) = struct.unpack("<I", self._safe_read(4))\n jpeg_header = self._safe_read(jpeg_header_size)\n assert self.fd is not None\n self._safe_read(self._offsets[0] - self.fd.tell()) # What IS this?\n data = self._safe_read(self._lengths[0])\n data = jpeg_header + data\n image = JpegImageFile(BytesIO(data))\n Image._decompression_bomb_check(image.size)\n if image.mode == "CMYK":\n args = image.tile[0].args\n assert isinstance(args, tuple)\n image.tile = [image.tile[0]._replace(args=(args[0], "CMYK"))]\n self.set_as_raw(image.convert("RGB").tobytes(), "BGR")\n\n\nclass BLP2Decoder(_BLPBaseDecoder):\n def _load(self) -> None:\n self._compression, self._encoding, alpha, self._alpha_encoding = self.args\n\n palette = self._read_palette()\n\n assert self.fd is not None\n self.fd.seek(self._offsets[0])\n\n if self._compression == 1:\n # Uncompressed or DirectX compression\n\n if self._encoding == Encoding.UNCOMPRESSED:\n data = self._read_bgra(palette, alpha)\n\n elif 
self._encoding == Encoding.DXT:\n data = bytearray()\n if self._alpha_encoding == AlphaEncoding.DXT1:\n linesize = (self.state.xsize + 3) // 4 * 8\n for yb in range((self.state.ysize + 3) // 4):\n for d in decode_dxt1(self._safe_read(linesize), alpha):\n data += d\n\n elif self._alpha_encoding == AlphaEncoding.DXT3:\n linesize = (self.state.xsize + 3) // 4 * 16\n for yb in range((self.state.ysize + 3) // 4):\n for d in decode_dxt3(self._safe_read(linesize)):\n data += d\n\n elif self._alpha_encoding == AlphaEncoding.DXT5:\n linesize = (self.state.xsize + 3) // 4 * 16\n for yb in range((self.state.ysize + 3) // 4):\n for d in decode_dxt5(self._safe_read(linesize)):\n data += d\n else:\n msg = f"Unsupported alpha encoding {repr(self._alpha_encoding)}"\n raise BLPFormatError(msg)\n else:\n msg = f"Unknown BLP encoding {repr(self._encoding)}"\n raise BLPFormatError(msg)\n\n else:\n msg = f"Unknown BLP compression {repr(self._compression)}"\n raise BLPFormatError(msg)\n\n self.set_as_raw(data)\n\n\nclass BLPEncoder(ImageFile.PyEncoder):\n _pushes_fd = True\n\n def _write_palette(self) -> bytes:\n data = b""\n assert self.im is not None\n palette = self.im.getpalette("RGBA", "RGBA")\n for i in range(len(palette) // 4):\n r, g, b, a = palette[i * 4 : (i + 1) * 4]\n data += struct.pack("<4B", b, g, r, a)\n while len(data) < 256 * 4:\n data += b"\x00" * 4\n return data\n\n def encode(self, bufsize: int) -> tuple[int, int, bytes]:\n palette_data = self._write_palette()\n\n offset = 20 + 16 * 4 * 2 + len(palette_data)\n data = struct.pack("<16I", offset, *((0,) * 15))\n\n assert self.im is not None\n w, h = self.im.size\n data += struct.pack("<16I", w * h, *((0,) * 15))\n\n data += palette_data\n\n for y in range(h):\n for x in range(w):\n data += struct.pack("<B", self.im.getpixel((x, y)))\n\n return len(data), 0, data\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n if im.mode != "P":\n msg = "Unsupported BLP image mode"\n raise 
ValueError(msg)\n\n magic = b"BLP1" if im.encoderinfo.get("blp_version") == "BLP1" else b"BLP2"\n fp.write(magic)\n\n assert im.palette is not None\n fp.write(struct.pack("<i", 1)) # Uncompressed or DirectX compression\n\n alpha_depth = 1 if im.palette.mode == "RGBA" else 0\n if magic == b"BLP1":\n fp.write(struct.pack("<L", alpha_depth))\n else:\n fp.write(struct.pack("<b", Encoding.UNCOMPRESSED))\n fp.write(struct.pack("<b", alpha_depth))\n fp.write(struct.pack("<b", 0)) # alpha encoding\n fp.write(struct.pack("<b", 0)) # mips\n fp.write(struct.pack("<II", *im.size))\n if magic == b"BLP1":\n fp.write(struct.pack("<i", 5))\n fp.write(struct.pack("<i", 0))\n\n ImageFile._save(im, fp, [ImageFile._Tile("BLP", (0, 0) + im.size, 0, im.mode)])\n\n\nImage.register_open(BlpImageFile.format, BlpImageFile, _accept)\nImage.register_extension(BlpImageFile.format, ".blp")\nImage.register_decoder("BLP1", BLP1Decoder)\nImage.register_decoder("BLP2", BLP2Decoder)\n\nImage.register_save(BlpImageFile.format, _save)\nImage.register_encoder("BLP", BLPEncoder)\n | .venv\Lib\site-packages\PIL\BlpImagePlugin.py | BlpImagePlugin.py | Python | 17,030 | 0.95 | 0.162978 | 0.028351 | vue-tools | 491 | 2024-03-17T14:28:00.079344 | BSD-3-Clause | false | ac9172058b93e25b1ab4be79c479eee5 |
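The DXT3/DXT5 decoders in the BlpImagePlugin row above lean on two pieces of arithmetic: `unpack_565` (defined earlier in that module, outside this excerpt) and the `(2 * c0 + c1) // 3` color blending for color code 2. A minimal standalone sketch of both follows; note the bit-replication widening used here is one common convention and an assumption on my part — the module's own `unpack_565` may simply shift the channels up.

```python
import struct

def unpack_565(color: int) -> tuple[int, int, int]:
    # Split a packed 5-6-5 value into 8-bit R, G, B.
    # Bit replication ((r << 3) | (r >> 2)) widens 5/6-bit channels so that
    # all-ones maps to 255; this is an assumed convention, not Pillow's exact code.
    r = (color >> 11) & 0x1F
    g = (color >> 5) & 0x3F
    b = color & 0x1F
    return (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)

def blend_third(c0: int, c1: int) -> int:
    # Color code 2 in a DXT block: two thirds of color0 plus one third of
    # color1, matching the (2 * c0 + c1) // 3 arithmetic in the decoders above.
    return (2 * c0 + c1) // 3

# color0/color1 are read little-endian from offset 8 of each 16-byte block.
color0, color1 = struct.unpack_from("<HH", b"\x00\xf8\xe0\x07", 0)
```

With this layout, `0xF800` decodes to pure red and `0x07E0` to pure green, which is a quick sanity check on the channel order.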
#\n# The Python Imaging Library.\n# $Id$\n#\n# BMP file handler\n#\n# Windows (and OS/2) native bitmap storage format.\n#\n# history:\n# 1995-09-01 fl Created\n# 1996-04-30 fl Added save\n# 1997-08-27 fl Fixed save of 1-bit images\n# 1998-03-06 fl Load P images as L where possible\n# 1998-07-03 fl Load P images as 1 where possible\n# 1998-12-29 fl Handle small palettes\n# 2002-12-30 fl Fixed load of 1-bit palette images\n# 2003-04-21 fl Fixed load of 1-bit monochrome images\n# 2003-04-23 fl Added limited support for BI_BITFIELDS compression\n#\n# Copyright (c) 1997-2003 by Secret Labs AB\n# Copyright (c) 1995-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\nfrom typing import IO, Any\n\nfrom . import Image, ImageFile, ImagePalette\nfrom ._binary import i16le as i16\nfrom ._binary import i32le as i32\nfrom ._binary import o8\nfrom ._binary import o16le as o16\nfrom ._binary import o32le as o32\n\n#\n# --------------------------------------------------------------------\n# Read BMP file\n\nBIT2MODE = {\n # bits => mode, rawmode\n 1: ("P", "P;1"),\n 4: ("P", "P;4"),\n 8: ("P", "P"),\n 16: ("RGB", "BGR;15"),\n 24: ("RGB", "BGR"),\n 32: ("RGB", "BGRX"),\n}\n\nUSE_RAW_ALPHA = False\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"BM")\n\n\ndef _dib_accept(prefix: bytes) -> bool:\n return i32(prefix) in [12, 40, 52, 56, 64, 108, 124]\n\n\n# =============================================================================\n# Image plugin for the Windows BMP format.\n# =============================================================================\nclass BmpImageFile(ImageFile.ImageFile):\n """Image plugin for the Windows Bitmap format (BMP)"""\n\n # ------------------------------------------------------------- Description\n format_description = "Windows Bitmap"\n format = "BMP"\n\n # -------------------------------------------------- BMP Compression values\n 
COMPRESSIONS = {"RAW": 0, "RLE8": 1, "RLE4": 2, "BITFIELDS": 3, "JPEG": 4, "PNG": 5}\n for k, v in COMPRESSIONS.items():\n vars()[k] = v\n\n def _bitmap(self, header: int = 0, offset: int = 0) -> None:\n """Read relevant info about the BMP"""\n read, seek = self.fp.read, self.fp.seek\n if header:\n seek(header)\n # read bmp header size @offset 14 (this is part of the header size)\n file_info: dict[str, bool | int | tuple[int, ...]] = {\n "header_size": i32(read(4)),\n "direction": -1,\n }\n\n # -------------------- If requested, read header at a specific position\n # read the rest of the bmp header, without its size\n assert isinstance(file_info["header_size"], int)\n header_data = ImageFile._safe_read(self.fp, file_info["header_size"] - 4)\n\n # ------------------------------- Windows Bitmap v2, IBM OS/2 Bitmap v1\n # ----- This format has different offsets because of width/height types\n # 12: BITMAPCOREHEADER/OS21XBITMAPHEADER\n if file_info["header_size"] == 12:\n file_info["width"] = i16(header_data, 0)\n file_info["height"] = i16(header_data, 2)\n file_info["planes"] = i16(header_data, 4)\n file_info["bits"] = i16(header_data, 6)\n file_info["compression"] = self.COMPRESSIONS["RAW"]\n file_info["palette_padding"] = 3\n\n # --------------------------------------------- Windows Bitmap v3 to v5\n # 40: BITMAPINFOHEADER\n # 52: BITMAPV2HEADER\n # 56: BITMAPV3HEADER\n # 64: BITMAPCOREHEADER2/OS22XBITMAPHEADER\n # 108: BITMAPV4HEADER\n # 124: BITMAPV5HEADER\n elif file_info["header_size"] in (40, 52, 56, 64, 108, 124):\n file_info["y_flip"] = header_data[7] == 0xFF\n file_info["direction"] = 1 if file_info["y_flip"] else -1\n file_info["width"] = i32(header_data, 0)\n file_info["height"] = (\n i32(header_data, 4)\n if not file_info["y_flip"]\n else 2**32 - i32(header_data, 4)\n )\n file_info["planes"] = i16(header_data, 8)\n file_info["bits"] = i16(header_data, 10)\n file_info["compression"] = i32(header_data, 12)\n # byte size of pixel data\n 
file_info["data_size"] = i32(header_data, 16)\n file_info["pixels_per_meter"] = (\n i32(header_data, 20),\n i32(header_data, 24),\n )\n file_info["colors"] = i32(header_data, 28)\n file_info["palette_padding"] = 4\n assert isinstance(file_info["pixels_per_meter"], tuple)\n self.info["dpi"] = tuple(x / 39.3701 for x in file_info["pixels_per_meter"])\n if file_info["compression"] == self.COMPRESSIONS["BITFIELDS"]:\n masks = ["r_mask", "g_mask", "b_mask"]\n if len(header_data) >= 48:\n if len(header_data) >= 52:\n masks.append("a_mask")\n else:\n file_info["a_mask"] = 0x0\n for idx, mask in enumerate(masks):\n file_info[mask] = i32(header_data, 36 + idx * 4)\n else:\n # 40 byte headers only have the three components in the\n # bitfields masks, ref:\n # https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx\n # See also\n # https://github.com/python-pillow/Pillow/issues/1293\n # There is a 4th component in the RGBQuad, in the alpha\n # location, but it is listed as a reserved component,\n # and it is not generally an alpha channel\n file_info["a_mask"] = 0x0\n for mask in masks:\n file_info[mask] = i32(read(4))\n assert isinstance(file_info["r_mask"], int)\n assert isinstance(file_info["g_mask"], int)\n assert isinstance(file_info["b_mask"], int)\n assert isinstance(file_info["a_mask"], int)\n file_info["rgb_mask"] = (\n file_info["r_mask"],\n file_info["g_mask"],\n file_info["b_mask"],\n )\n file_info["rgba_mask"] = (\n file_info["r_mask"],\n file_info["g_mask"],\n file_info["b_mask"],\n file_info["a_mask"],\n )\n else:\n msg = f"Unsupported BMP header type ({file_info['header_size']})"\n raise OSError(msg)\n\n # ------------------ Special case : header is reported 40, which\n # ---------------------- is shorter than real size for bpp >= 16\n assert isinstance(file_info["width"], int)\n assert isinstance(file_info["height"], int)\n self._size = file_info["width"], file_info["height"]\n\n # ------- If color count was not found in the header, 
compute from bits\n assert isinstance(file_info["bits"], int)\n file_info["colors"] = (\n file_info["colors"]\n if file_info.get("colors", 0)\n else (1 << file_info["bits"])\n )\n assert isinstance(file_info["colors"], int)\n if offset == 14 + file_info["header_size"] and file_info["bits"] <= 8:\n offset += 4 * file_info["colors"]\n\n # ---------------------- Check bit depth for unusual unsupported values\n self._mode, raw_mode = BIT2MODE.get(file_info["bits"], ("", ""))\n if not self.mode:\n msg = f"Unsupported BMP pixel depth ({file_info['bits']})"\n raise OSError(msg)\n\n # ---------------- Process BMP with Bitfields compression (not palette)\n decoder_name = "raw"\n if file_info["compression"] == self.COMPRESSIONS["BITFIELDS"]:\n SUPPORTED: dict[int, list[tuple[int, ...]]] = {\n 32: [\n (0xFF0000, 0xFF00, 0xFF, 0x0),\n (0xFF000000, 0xFF0000, 0xFF00, 0x0),\n (0xFF000000, 0xFF00, 0xFF, 0x0),\n (0xFF000000, 0xFF0000, 0xFF00, 0xFF),\n (0xFF, 0xFF00, 0xFF0000, 0xFF000000),\n (0xFF0000, 0xFF00, 0xFF, 0xFF000000),\n (0xFF000000, 0xFF00, 0xFF, 0xFF0000),\n (0x0, 0x0, 0x0, 0x0),\n ],\n 24: [(0xFF0000, 0xFF00, 0xFF)],\n 16: [(0xF800, 0x7E0, 0x1F), (0x7C00, 0x3E0, 0x1F)],\n }\n MASK_MODES = {\n (32, (0xFF0000, 0xFF00, 0xFF, 0x0)): "BGRX",\n (32, (0xFF000000, 0xFF0000, 0xFF00, 0x0)): "XBGR",\n (32, (0xFF000000, 0xFF00, 0xFF, 0x0)): "BGXR",\n (32, (0xFF000000, 0xFF0000, 0xFF00, 0xFF)): "ABGR",\n (32, (0xFF, 0xFF00, 0xFF0000, 0xFF000000)): "RGBA",\n (32, (0xFF0000, 0xFF00, 0xFF, 0xFF000000)): "BGRA",\n (32, (0xFF000000, 0xFF00, 0xFF, 0xFF0000)): "BGAR",\n (32, (0x0, 0x0, 0x0, 0x0)): "BGRA",\n (24, (0xFF0000, 0xFF00, 0xFF)): "BGR",\n (16, (0xF800, 0x7E0, 0x1F)): "BGR;16",\n (16, (0x7C00, 0x3E0, 0x1F)): "BGR;15",\n }\n if file_info["bits"] in SUPPORTED:\n if (\n file_info["bits"] == 32\n and file_info["rgba_mask"] in SUPPORTED[file_info["bits"]]\n ):\n assert isinstance(file_info["rgba_mask"], tuple)\n raw_mode = MASK_MODES[(file_info["bits"], file_info["rgba_mask"])]\n 
self._mode = "RGBA" if "A" in raw_mode else self.mode\n elif (\n file_info["bits"] in (24, 16)\n and file_info["rgb_mask"] in SUPPORTED[file_info["bits"]]\n ):\n assert isinstance(file_info["rgb_mask"], tuple)\n raw_mode = MASK_MODES[(file_info["bits"], file_info["rgb_mask"])]\n else:\n msg = "Unsupported BMP bitfields layout"\n raise OSError(msg)\n else:\n msg = "Unsupported BMP bitfields layout"\n raise OSError(msg)\n elif file_info["compression"] == self.COMPRESSIONS["RAW"]:\n if file_info["bits"] == 32 and (\n header == 22 or USE_RAW_ALPHA # 32-bit .cur offset\n ):\n raw_mode, self._mode = "BGRA", "RGBA"\n elif file_info["compression"] in (\n self.COMPRESSIONS["RLE8"],\n self.COMPRESSIONS["RLE4"],\n ):\n decoder_name = "bmp_rle"\n else:\n msg = f"Unsupported BMP compression ({file_info['compression']})"\n raise OSError(msg)\n\n # --------------- Once the header is processed, process the palette/LUT\n if self.mode == "P": # Paletted for 1, 4 and 8 bit images\n # ---------------------------------------------------- 1-bit images\n if not (0 < file_info["colors"] <= 65536):\n msg = f"Unsupported BMP Palette size ({file_info['colors']})"\n raise OSError(msg)\n else:\n assert isinstance(file_info["palette_padding"], int)\n padding = file_info["palette_padding"]\n palette = read(padding * file_info["colors"])\n grayscale = True\n indices = (\n (0, 255)\n if file_info["colors"] == 2\n else list(range(file_info["colors"]))\n )\n\n # ----------------- Check if grayscale and ignore palette if so\n for ind, val in enumerate(indices):\n rgb = palette[ind * padding : ind * padding + 3]\n if rgb != o8(val) * 3:\n grayscale = False\n\n # ------- If all colors are gray, white or black, ditch palette\n if grayscale:\n self._mode = "1" if file_info["colors"] == 2 else "L"\n raw_mode = self.mode\n else:\n self._mode = "P"\n self.palette = ImagePalette.raw(\n "BGRX" if padding == 4 else "BGR", palette\n )\n\n # ---------------------------- Finally set the tile data for the plugin\n 
self.info["compression"] = file_info["compression"]\n args: list[Any] = [raw_mode]\n if decoder_name == "bmp_rle":\n args.append(file_info["compression"] == self.COMPRESSIONS["RLE4"])\n else:\n assert isinstance(file_info["width"], int)\n args.append(((file_info["width"] * file_info["bits"] + 31) >> 3) & (~3))\n args.append(file_info["direction"])\n self.tile = [\n ImageFile._Tile(\n decoder_name,\n (0, 0, file_info["width"], file_info["height"]),\n offset or self.fp.tell(),\n tuple(args),\n )\n ]\n\n def _open(self) -> None:\n """Open file, check magic number and read header"""\n # read 14 bytes: magic number, filesize, reserved, header final offset\n head_data = self.fp.read(14)\n # choke if the file does not have the required magic bytes\n if not _accept(head_data):\n msg = "Not a BMP file"\n raise SyntaxError(msg)\n # read the start position of the BMP image data (u32)\n offset = i32(head_data, 10)\n # load bitmap information (offset=raster info)\n self._bitmap(offset=offset)\n\n\nclass BmpRleDecoder(ImageFile.PyDecoder):\n _pulls_fd = True\n\n def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:\n assert self.fd is not None\n rle4 = self.args[1]\n data = bytearray()\n x = 0\n dest_length = self.state.xsize * self.state.ysize\n while len(data) < dest_length:\n pixels = self.fd.read(1)\n byte = self.fd.read(1)\n if not pixels or not byte:\n break\n num_pixels = pixels[0]\n if num_pixels:\n # encoded mode\n if x + num_pixels > self.state.xsize:\n # Too much data for row\n num_pixels = max(0, self.state.xsize - x)\n if rle4:\n first_pixel = o8(byte[0] >> 4)\n second_pixel = o8(byte[0] & 0x0F)\n for index in range(num_pixels):\n if index % 2 == 0:\n data += first_pixel\n else:\n data += second_pixel\n else:\n data += byte * num_pixels\n x += num_pixels\n else:\n if byte[0] == 0:\n # end of line\n while len(data) % self.state.xsize != 0:\n data += b"\x00"\n x = 0\n elif byte[0] == 1:\n # end of bitmap\n break\n elif byte[0] == 2:\n # 
delta\n bytes_read = self.fd.read(2)\n if len(bytes_read) < 2:\n break\n right, up = bytes_read\n data += b"\x00" * (right + up * self.state.xsize)\n x = len(data) % self.state.xsize\n else:\n # absolute mode\n if rle4:\n # 2 pixels per byte\n byte_count = byte[0] // 2\n bytes_read = self.fd.read(byte_count)\n for byte_read in bytes_read:\n data += o8(byte_read >> 4)\n data += o8(byte_read & 0x0F)\n else:\n byte_count = byte[0]\n bytes_read = self.fd.read(byte_count)\n data += bytes_read\n if len(bytes_read) < byte_count:\n break\n x += byte[0]\n\n # align to 16-bit word boundary\n if self.fd.tell() % 2 != 0:\n self.fd.seek(1, os.SEEK_CUR)\n rawmode = "L" if self.mode == "L" else "P"\n self.set_as_raw(bytes(data), rawmode, (0, self.args[-1]))\n return -1, 0\n\n\n# =============================================================================\n# Image plugin for the DIB format (BMP alias)\n# =============================================================================\nclass DibImageFile(BmpImageFile):\n format = "DIB"\n format_description = "Windows Bitmap"\n\n def _open(self) -> None:\n self._bitmap()\n\n\n#\n# --------------------------------------------------------------------\n# Write BMP file\n\n\nSAVE = {\n "1": ("1", 1, 2),\n "L": ("L", 8, 256),\n "P": ("P", 8, 256),\n "RGB": ("BGR", 24, 0),\n "RGBA": ("BGRA", 32, 0),\n}\n\n\ndef _dib_save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n _save(im, fp, filename, False)\n\n\ndef _save(\n im: Image.Image, fp: IO[bytes], filename: str | bytes, bitmap_header: bool = True\n) -> None:\n try:\n rawmode, bits, colors = SAVE[im.mode]\n except KeyError as e:\n msg = f"cannot write mode {im.mode} as BMP"\n raise OSError(msg) from e\n\n info = im.encoderinfo\n\n dpi = info.get("dpi", (96, 96))\n\n # 1 meter == 39.3701 inches\n ppm = tuple(int(x * 39.3701 + 0.5) for x in dpi)\n\n stride = ((im.size[0] * bits + 7) // 8 + 3) & (~3)\n header = 40 # or 64 for OS/2 version 2\n image = stride * 
im.size[1]\n\n if im.mode == "1":\n palette = b"".join(o8(i) * 3 + b"\x00" for i in (0, 255))\n elif im.mode == "L":\n palette = b"".join(o8(i) * 3 + b"\x00" for i in range(256))\n elif im.mode == "P":\n palette = im.im.getpalette("RGB", "BGRX")\n colors = len(palette) // 4\n else:\n palette = None\n\n # bitmap header\n if bitmap_header:\n offset = 14 + header + colors * 4\n file_size = offset + image\n if file_size > 2**32 - 1:\n msg = "File size is too large for the BMP format"\n raise ValueError(msg)\n fp.write(\n b"BM" # file type (magic)\n + o32(file_size) # file size\n + o32(0) # reserved\n + o32(offset) # image data offset\n )\n\n # bitmap info header\n fp.write(\n o32(header) # info header size\n + o32(im.size[0]) # width\n + o32(im.size[1]) # height\n + o16(1) # planes\n + o16(bits) # depth\n + o32(0) # compression (0=uncompressed)\n + o32(image) # size of bitmap\n + o32(ppm[0]) # resolution\n + o32(ppm[1]) # resolution\n + o32(colors) # colors used\n + o32(colors) # colors important\n )\n\n fp.write(b"\0" * (header - 40)) # padding (for OS/2 format)\n\n if palette:\n fp.write(palette)\n\n ImageFile._save(\n im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, stride, -1))]\n )\n\n\n#\n# --------------------------------------------------------------------\n# Registry\n\n\nImage.register_open(BmpImageFile.format, BmpImageFile, _accept)\nImage.register_save(BmpImageFile.format, _save)\n\nImage.register_extension(BmpImageFile.format, ".bmp")\n\nImage.register_mime(BmpImageFile.format, "image/bmp")\n\nImage.register_decoder("bmp_rle", BmpRleDecoder)\n\nImage.register_open(DibImageFile.format, DibImageFile, _dib_accept)\nImage.register_save(DibImageFile.format, _dib_save)\n\nImage.register_extension(DibImageFile.format, ".dib")\n\nImage.register_mime(DibImageFile.format, "image/bmp")\n | .venv\Lib\site-packages\PIL\BmpImagePlugin.py | BmpImagePlugin.py | Python | 20,370 | 0.95 | 0.153398 | 0.196035 | awesome-app | 958 | 2025-05-23T19:21:48.777107 | 
Apache-2.0 | false | 7c59d17362e82214d6bfa195711246e3 |
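Two small computations in the BmpImagePlugin row above are easy to get wrong when reimplementing: the 4-byte-aligned row stride and the dpi-to-pixels-per-meter conversion used by `_save`. The same two formulas, pulled out as a standalone sketch for checking:

```python
def bmp_stride(width: int, bits: int) -> int:
    # Each BMP pixel row is padded up to a multiple of 4 bytes, exactly as in
    # BmpImagePlugin._save: ((w * bits + 7) // 8 + 3) & ~3.
    return ((width * bits + 7) // 8 + 3) & ~3

def dpi_to_ppm(dpi: float) -> int:
    # BMP headers store resolution in pixels per meter; 1 meter == 39.3701
    # inches, rounded to the nearest integer as in _save.
    return int(dpi * 39.3701 + 0.5)
```

For example, a 1-pixel-wide 24-bit row occupies 3 payload bytes but a 4-byte stride, and the common 96 dpi maps to the well-known 3780 pixels per meter seen in many BMP headers.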
#\n# The Python Imaging Library\n# $Id$\n#\n# BUFR stub adapter\n#\n# Copyright (c) 1996-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\nfrom typing import IO\n\nfrom . import Image, ImageFile\n\n_handler = None\n\n\ndef register_handler(handler: ImageFile.StubHandler | None) -> None:\n """\n Install application-specific BUFR image handler.\n\n :param handler: Handler object.\n """\n global _handler\n _handler = handler\n\n\n# --------------------------------------------------------------------\n# Image adapter\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith((b"BUFR", b"ZCZC"))\n\n\nclass BufrStubImageFile(ImageFile.StubImageFile):\n format = "BUFR"\n format_description = "BUFR"\n\n def _open(self) -> None:\n if not _accept(self.fp.read(4)):\n msg = "Not a BUFR file"\n raise SyntaxError(msg)\n\n self.fp.seek(-4, os.SEEK_CUR)\n\n # make something up\n self._mode = "F"\n self._size = 1, 1\n\n loader = self._load()\n if loader:\n loader.open(self)\n\n def _load(self) -> ImageFile.StubHandler | None:\n return _handler\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n if _handler is None or not hasattr(_handler, "save"):\n msg = "BUFR save handler not installed"\n raise OSError(msg)\n _handler.save(im, fp, filename)\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(BufrStubImageFile.format, BufrStubImageFile, _accept)\nImage.register_save(BufrStubImageFile.format, _save)\n\nImage.register_extension(BufrStubImageFile.format, ".bufr")\n | .venv\Lib\site-packages\PIL\BufrStubImagePlugin.py | BufrStubImagePlugin.py | Python | 1,805 | 0.95 | 0.133333 | 0.288462 | vue-tools | 940 | 2025-03-06T09:33:01.597903 | BSD-3-Clause | false | 273df80e108fb8f146c51da832a96eb2 |
#\n# The Python Imaging Library.\n# $Id$\n#\n# a class to read from a container file\n#\n# History:\n# 1995-06-18 fl Created\n# 1995-09-07 fl Added readline(), readlines()\n#\n# Copyright (c) 1997-2001 by Secret Labs AB\n# Copyright (c) 1995 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport io\nfrom collections.abc import Iterable\nfrom typing import IO, AnyStr, NoReturn\n\n\nclass ContainerIO(IO[AnyStr]):\n """\n A file object that provides read access to a part of an existing\n file (for example a TAR file).\n """\n\n def __init__(self, file: IO[AnyStr], offset: int, length: int) -> None:\n """\n Create file object.\n\n :param file: Existing file.\n :param offset: Start of region, in bytes.\n :param length: Size of region, in bytes.\n """\n self.fh: IO[AnyStr] = file\n self.pos = 0\n self.offset = offset\n self.length = length\n self.fh.seek(offset)\n\n ##\n # Always false.\n\n def isatty(self) -> bool:\n return False\n\n def seekable(self) -> bool:\n return True\n\n def seek(self, offset: int, mode: int = io.SEEK_SET) -> int:\n """\n Move file pointer.\n\n :param offset: Offset in bytes.\n :param mode: Starting position. Use 0 for beginning of region, 1\n for current offset, and 2 for end of region. You cannot move\n the pointer outside the defined region.\n :returns: Offset from start of region, in bytes.\n """\n if mode == 1:\n self.pos = self.pos + offset\n elif mode == 2:\n self.pos = self.length + offset\n else:\n self.pos = offset\n # clamp\n self.pos = max(0, min(self.pos, self.length))\n self.fh.seek(self.offset + self.pos)\n return self.pos\n\n def tell(self) -> int:\n """\n Get current file pointer.\n\n :returns: Offset from start of region, in bytes.\n """\n return self.pos\n\n def readable(self) -> bool:\n return True\n\n def read(self, n: int = -1) -> AnyStr:\n """\n Read data.\n\n :param n: Number of bytes to read. 
If omitted, zero or negative,\n read until end of region.\n :returns: An 8-bit string.\n """\n if n > 0:\n n = min(n, self.length - self.pos)\n else:\n n = self.length - self.pos\n if n <= 0: # EOF\n return b"" if "b" in self.fh.mode else "" # type: ignore[return-value]\n self.pos = self.pos + n\n return self.fh.read(n)\n\n def readline(self, n: int = -1) -> AnyStr:\n """\n Read a line of text.\n\n :param n: Number of bytes to read. If omitted, zero or negative,\n read until end of line.\n :returns: An 8-bit string.\n """\n s: AnyStr = b"" if "b" in self.fh.mode else "" # type: ignore[assignment]\n newline_character = b"\n" if "b" in self.fh.mode else "\n"\n while True:\n c = self.read(1)\n if not c:\n break\n s = s + c\n if c == newline_character or len(s) == n:\n break\n return s\n\n def readlines(self, n: int | None = -1) -> list[AnyStr]:\n """\n Read multiple lines of text.\n\n :param n: Number of lines to read. If omitted, zero, negative or None,\n read until end of region.\n :returns: A list of 8-bit strings.\n """\n lines = []\n while True:\n s = self.readline()\n if not s:\n break\n lines.append(s)\n if len(lines) == n:\n break\n return lines\n\n def writable(self) -> bool:\n return False\n\n def write(self, b: AnyStr) -> NoReturn:\n raise NotImplementedError()\n\n def writelines(self, lines: Iterable[AnyStr]) -> NoReturn:\n raise NotImplementedError()\n\n def truncate(self, size: int | None = None) -> int:\n raise NotImplementedError()\n\n def __enter__(self) -> ContainerIO[AnyStr]:\n return self\n\n def __exit__(self, *args: object) -> None:\n self.close()\n\n def __iter__(self) -> ContainerIO[AnyStr]:\n return self\n\n def __next__(self) -> AnyStr:\n line = self.readline()\n if not line:\n msg = "end of region"\n raise StopIteration(msg)\n return line\n\n def fileno(self) -> int:\n return self.fh.fileno()\n\n def flush(self) -> None:\n self.fh.flush()\n\n def close(self) -> None:\n self.fh.close()\n | .venv\Lib\site-packages\PIL\ContainerIO.py | 
ContainerIO.py | Python | 4,777 | 0.95 | 0.231214 | 0.125874 | node-utils | 459 | 2024-04-21T21:05:53.171796 | Apache-2.0 | false | 9fd45bfe9eb231523eed4f3f73db41a7 |
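The key behavior in `ContainerIO.seek` above is that the region pointer is clamped to `[0, length]` before being translated by the region's absolute offset, so reads can never escape the window. A minimal self-contained sketch of that read-window idea over an in-memory stream — the class and method names here are illustrative, not part of Pillow's API:

```python
import io

class RegionReader:
    """Sketch of ContainerIO's clamped read window over a larger stream."""

    def __init__(self, fh: io.BytesIO, offset: int, length: int) -> None:
        self.fh, self.offset, self.length, self.pos = fh, offset, length, 0
        fh.seek(offset)

    def seek(self, offset: int, mode: int = io.SEEK_SET) -> int:
        if mode == io.SEEK_CUR:
            self.pos += offset
        elif mode == io.SEEK_END:
            self.pos = self.length + offset
        else:
            self.pos = offset
        self.pos = max(0, min(self.pos, self.length))  # clamp to the region
        self.fh.seek(self.offset + self.pos)
        return self.pos

    def read(self, n: int = -1) -> bytes:
        # Negative n means "rest of region", as in ContainerIO.read.
        n = self.length - self.pos if n < 0 else min(n, self.length - self.pos)
        self.pos += n
        return self.fh.read(n)
```

Seeking past either edge of the region silently stops at the boundary rather than raising, which mirrors the "You cannot move the pointer outside the defined region" note in the docstring above.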
#\n# The Python Imaging Library.\n# $Id$\n#\n# Windows Cursor support for PIL\n#\n# notes:\n# uses BmpImagePlugin.py to read the bitmap data.\n#\n# history:\n# 96-05-27 fl Created\n#\n# Copyright (c) Secret Labs AB 1997.\n# Copyright (c) Fredrik Lundh 1996.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nfrom . import BmpImagePlugin, Image, ImageFile\nfrom ._binary import i16le as i16\nfrom ._binary import i32le as i32\n\n#\n# --------------------------------------------------------------------\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"\0\0\2\0")\n\n\n##\n# Image plugin for Windows Cursor files.\n\n\nclass CurImageFile(BmpImagePlugin.BmpImageFile):\n format = "CUR"\n format_description = "Windows Cursor"\n\n def _open(self) -> None:\n offset = self.fp.tell()\n\n # check magic\n s = self.fp.read(6)\n if not _accept(s):\n msg = "not a CUR file"\n raise SyntaxError(msg)\n\n # pick the largest cursor in the file\n m = b""\n for i in range(i16(s, 4)):\n s = self.fp.read(16)\n if not m:\n m = s\n elif s[0] > m[0] and s[1] > m[1]:\n m = s\n if not m:\n msg = "No cursors were found"\n raise TypeError(msg)\n\n # load as bitmap\n self._bitmap(i32(m, 12) + offset)\n\n # patch up the bitmap height\n self._size = self.size[0], self.size[1] // 2\n d, e, o, a = self.tile[0]\n self.tile[0] = ImageFile._Tile(d, (0, 0) + self.size, o, a)\n\n\n#\n# --------------------------------------------------------------------\n\nImage.register_open(CurImageFile.format, CurImageFile, _accept)\n\nImage.register_extension(CurImageFile.format, ".cur")\n | .venv\Lib\site-packages\PIL\CurImagePlugin.py | CurImagePlugin.py | Python | 1,872 | 0.95 | 0.133333 | 0.465517 | vue-tools | 847 | 2024-12-14T17:39:12.064083 | GPL-3.0 | false | 6f6d11fe5aa15827e87291185a15596b |
#\n# The Python Imaging Library.\n# $Id$\n#\n# DCX file handling\n#\n# DCX is a container file format defined by Intel, commonly used\n# for fax applications. Each DCX file consists of a directory\n# (a list of file offsets) followed by a set of (usually 1-bit)\n# PCX files.\n#\n# History:\n# 1995-09-09 fl Created\n# 1996-03-20 fl Properly derived from PcxImageFile.\n# 1998-07-15 fl Renamed offset attribute to avoid name clash\n# 2002-07-30 fl Fixed file handling\n#\n# Copyright (c) 1997-98 by Secret Labs AB.\n# Copyright (c) 1995-96 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nfrom . import Image\nfrom ._binary import i32le as i32\nfrom ._util import DeferredError\nfrom .PcxImagePlugin import PcxImageFile\n\nMAGIC = 0x3ADE68B1 # QUIZ: what's this value, then?\n\n\ndef _accept(prefix: bytes) -> bool:\n return len(prefix) >= 4 and i32(prefix) == MAGIC\n\n\n##\n# Image plugin for the Intel DCX format.\n\n\nclass DcxImageFile(PcxImageFile):\n format = "DCX"\n format_description = "Intel DCX"\n _close_exclusive_fp_after_loading = False\n\n def _open(self) -> None:\n # Header\n s = self.fp.read(4)\n if not _accept(s):\n msg = "not a DCX file"\n raise SyntaxError(msg)\n\n # Component directory\n self._offset = []\n for i in range(1024):\n offset = i32(self.fp.read(4))\n if not offset:\n break\n self._offset.append(offset)\n\n self._fp = self.fp\n self.frame = -1\n self.n_frames = len(self._offset)\n self.is_animated = self.n_frames > 1\n self.seek(0)\n\n def seek(self, frame: int) -> None:\n if not self._seek_check(frame):\n return\n if isinstance(self._fp, DeferredError):\n raise self._fp.ex\n self.frame = frame\n self.fp = self._fp\n self.fp.seek(self._offset[frame])\n PcxImageFile._open(self)\n\n def tell(self) -> int:\n return self.frame\n\n\nImage.register_open(DcxImageFile.format, DcxImageFile, _accept)\n\nImage.register_extension(DcxImageFile.format, ".dcx")\n | 
.venv\Lib\site-packages\PIL\DcxImagePlugin.py | DcxImagePlugin.py | Python | 2,228 | 0.95 | 0.156627 | 0.38806 | awesome-app | 924 | 2025-06-18T07:49:23.938752 | BSD-3-Clause | false | 813add9dbb79d31dc316f8ee2443c64b |
"""\nA Pillow plugin for .dds files (S3TC-compressed aka DXTC)\nJerome Leclanche <jerome@leclan.ch>\n\nDocumentation:\nhttps://web.archive.org/web/20170802060935/http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_compression_s3tc.txt\n\nThe contents of this file are hereby released in the public domain (CC0)\nFull text of the CC0 license:\nhttps://creativecommons.org/publicdomain/zero/1.0/\n"""\n\nfrom __future__ import annotations\n\nimport io\nimport struct\nimport sys\nfrom enum import IntEnum, IntFlag\nfrom typing import IO\n\nfrom . import Image, ImageFile, ImagePalette\nfrom ._binary import i32le as i32\nfrom ._binary import o8\nfrom ._binary import o32le as o32\n\n# Magic ("DDS ")\nDDS_MAGIC = 0x20534444\n\n\n# DDS flags\nclass DDSD(IntFlag):\n CAPS = 0x1\n HEIGHT = 0x2\n WIDTH = 0x4\n PITCH = 0x8\n PIXELFORMAT = 0x1000\n MIPMAPCOUNT = 0x20000\n LINEARSIZE = 0x80000\n DEPTH = 0x800000\n\n\n# DDS caps\nclass DDSCAPS(IntFlag):\n COMPLEX = 0x8\n TEXTURE = 0x1000\n MIPMAP = 0x400000\n\n\nclass DDSCAPS2(IntFlag):\n CUBEMAP = 0x200\n CUBEMAP_POSITIVEX = 0x400\n CUBEMAP_NEGATIVEX = 0x800\n CUBEMAP_POSITIVEY = 0x1000\n CUBEMAP_NEGATIVEY = 0x2000\n CUBEMAP_POSITIVEZ = 0x4000\n CUBEMAP_NEGATIVEZ = 0x8000\n VOLUME = 0x200000\n\n\n# Pixel Format\nclass DDPF(IntFlag):\n ALPHAPIXELS = 0x1\n ALPHA = 0x2\n FOURCC = 0x4\n PALETTEINDEXED8 = 0x20\n RGB = 0x40\n LUMINANCE = 0x20000\n\n\n# dxgiformat.h\nclass DXGI_FORMAT(IntEnum):\n UNKNOWN = 0\n R32G32B32A32_TYPELESS = 1\n R32G32B32A32_FLOAT = 2\n R32G32B32A32_UINT = 3\n R32G32B32A32_SINT = 4\n R32G32B32_TYPELESS = 5\n R32G32B32_FLOAT = 6\n R32G32B32_UINT = 7\n R32G32B32_SINT = 8\n R16G16B16A16_TYPELESS = 9\n R16G16B16A16_FLOAT = 10\n R16G16B16A16_UNORM = 11\n R16G16B16A16_UINT = 12\n R16G16B16A16_SNORM = 13\n R16G16B16A16_SINT = 14\n R32G32_TYPELESS = 15\n R32G32_FLOAT = 16\n R32G32_UINT = 17\n R32G32_SINT = 18\n R32G8X24_TYPELESS = 19\n D32_FLOAT_S8X24_UINT = 20\n R32_FLOAT_X8X24_TYPELESS = 21\n 
X32_TYPELESS_G8X24_UINT = 22\n R10G10B10A2_TYPELESS = 23\n R10G10B10A2_UNORM = 24\n R10G10B10A2_UINT = 25\n R11G11B10_FLOAT = 26\n R8G8B8A8_TYPELESS = 27\n R8G8B8A8_UNORM = 28\n R8G8B8A8_UNORM_SRGB = 29\n R8G8B8A8_UINT = 30\n R8G8B8A8_SNORM = 31\n R8G8B8A8_SINT = 32\n R16G16_TYPELESS = 33\n R16G16_FLOAT = 34\n R16G16_UNORM = 35\n R16G16_UINT = 36\n R16G16_SNORM = 37\n R16G16_SINT = 38\n R32_TYPELESS = 39\n D32_FLOAT = 40\n R32_FLOAT = 41\n R32_UINT = 42\n R32_SINT = 43\n R24G8_TYPELESS = 44\n D24_UNORM_S8_UINT = 45\n R24_UNORM_X8_TYPELESS = 46\n X24_TYPELESS_G8_UINT = 47\n R8G8_TYPELESS = 48\n R8G8_UNORM = 49\n R8G8_UINT = 50\n R8G8_SNORM = 51\n R8G8_SINT = 52\n R16_TYPELESS = 53\n R16_FLOAT = 54\n D16_UNORM = 55\n R16_UNORM = 56\n R16_UINT = 57\n R16_SNORM = 58\n R16_SINT = 59\n R8_TYPELESS = 60\n R8_UNORM = 61\n R8_UINT = 62\n R8_SNORM = 63\n R8_SINT = 64\n A8_UNORM = 65\n R1_UNORM = 66\n R9G9B9E5_SHAREDEXP = 67\n R8G8_B8G8_UNORM = 68\n G8R8_G8B8_UNORM = 69\n BC1_TYPELESS = 70\n BC1_UNORM = 71\n BC1_UNORM_SRGB = 72\n BC2_TYPELESS = 73\n BC2_UNORM = 74\n BC2_UNORM_SRGB = 75\n BC3_TYPELESS = 76\n BC3_UNORM = 77\n BC3_UNORM_SRGB = 78\n BC4_TYPELESS = 79\n BC4_UNORM = 80\n BC4_SNORM = 81\n BC5_TYPELESS = 82\n BC5_UNORM = 83\n BC5_SNORM = 84\n B5G6R5_UNORM = 85\n B5G5R5A1_UNORM = 86\n B8G8R8A8_UNORM = 87\n B8G8R8X8_UNORM = 88\n R10G10B10_XR_BIAS_A2_UNORM = 89\n B8G8R8A8_TYPELESS = 90\n B8G8R8A8_UNORM_SRGB = 91\n B8G8R8X8_TYPELESS = 92\n B8G8R8X8_UNORM_SRGB = 93\n BC6H_TYPELESS = 94\n BC6H_UF16 = 95\n BC6H_SF16 = 96\n BC7_TYPELESS = 97\n BC7_UNORM = 98\n BC7_UNORM_SRGB = 99\n AYUV = 100\n Y410 = 101\n Y416 = 102\n NV12 = 103\n P010 = 104\n P016 = 105\n OPAQUE_420 = 106\n YUY2 = 107\n Y210 = 108\n Y216 = 109\n NV11 = 110\n AI44 = 111\n IA44 = 112\n P8 = 113\n A8P8 = 114\n B4G4R4A4_UNORM = 115\n P208 = 130\n V208 = 131\n V408 = 132\n SAMPLER_FEEDBACK_MIN_MIP_OPAQUE = 189\n SAMPLER_FEEDBACK_MIP_REGION_USED_OPAQUE = 190\n\n\nclass D3DFMT(IntEnum):\n UNKNOWN = 0\n R8G8B8 = 
20\n A8R8G8B8 = 21\n X8R8G8B8 = 22\n R5G6B5 = 23\n X1R5G5B5 = 24\n A1R5G5B5 = 25\n A4R4G4B4 = 26\n R3G3B2 = 27\n A8 = 28\n A8R3G3B2 = 29\n X4R4G4B4 = 30\n A2B10G10R10 = 31\n A8B8G8R8 = 32\n X8B8G8R8 = 33\n G16R16 = 34\n A2R10G10B10 = 35\n A16B16G16R16 = 36\n A8P8 = 40\n P8 = 41\n L8 = 50\n A8L8 = 51\n A4L4 = 52\n V8U8 = 60\n L6V5U5 = 61\n X8L8V8U8 = 62\n Q8W8V8U8 = 63\n V16U16 = 64\n A2W10V10U10 = 67\n D16_LOCKABLE = 70\n D32 = 71\n D15S1 = 73\n D24S8 = 75\n D24X8 = 77\n D24X4S4 = 79\n D16 = 80\n D32F_LOCKABLE = 82\n D24FS8 = 83\n D32_LOCKABLE = 84\n S8_LOCKABLE = 85\n L16 = 81\n VERTEXDATA = 100\n INDEX16 = 101\n INDEX32 = 102\n Q16W16V16U16 = 110\n R16F = 111\n G16R16F = 112\n A16B16G16R16F = 113\n R32F = 114\n G32R32F = 115\n A32B32G32R32F = 116\n CxV8U8 = 117\n A1 = 118\n A2B10G10R10_XR_BIAS = 119\n BINARYBUFFER = 199\n\n UYVY = i32(b"UYVY")\n R8G8_B8G8 = i32(b"RGBG")\n YUY2 = i32(b"YUY2")\n G8R8_G8B8 = i32(b"GRGB")\n DXT1 = i32(b"DXT1")\n DXT2 = i32(b"DXT2")\n DXT3 = i32(b"DXT3")\n DXT4 = i32(b"DXT4")\n DXT5 = i32(b"DXT5")\n DX10 = i32(b"DX10")\n BC4S = i32(b"BC4S")\n BC4U = i32(b"BC4U")\n BC5S = i32(b"BC5S")\n BC5U = i32(b"BC5U")\n ATI1 = i32(b"ATI1")\n ATI2 = i32(b"ATI2")\n MULTI2_ARGB8 = i32(b"MET1")\n\n\n# Backward compatibility layer\nmodule = sys.modules[__name__]\nfor item in DDSD:\n assert item.name is not None\n setattr(module, f"DDSD_{item.name}", item.value)\nfor item1 in DDSCAPS:\n assert item1.name is not None\n setattr(module, f"DDSCAPS_{item1.name}", item1.value)\nfor item2 in DDSCAPS2:\n assert item2.name is not None\n setattr(module, f"DDSCAPS2_{item2.name}", item2.value)\nfor item3 in DDPF:\n assert item3.name is not None\n setattr(module, f"DDPF_{item3.name}", item3.value)\n\nDDS_FOURCC = DDPF.FOURCC\nDDS_RGB = DDPF.RGB\nDDS_RGBA = DDPF.RGB | DDPF.ALPHAPIXELS\nDDS_LUMINANCE = DDPF.LUMINANCE\nDDS_LUMINANCEA = DDPF.LUMINANCE | DDPF.ALPHAPIXELS\nDDS_ALPHA = DDPF.ALPHA\nDDS_PAL8 = DDPF.PALETTEINDEXED8\n\nDDS_HEADER_FLAGS_TEXTURE = DDSD.CAPS | 
DDSD.HEIGHT | DDSD.WIDTH | DDSD.PIXELFORMAT\nDDS_HEADER_FLAGS_MIPMAP = DDSD.MIPMAPCOUNT\nDDS_HEADER_FLAGS_VOLUME = DDSD.DEPTH\nDDS_HEADER_FLAGS_PITCH = DDSD.PITCH\nDDS_HEADER_FLAGS_LINEARSIZE = DDSD.LINEARSIZE\n\nDDS_HEIGHT = DDSD.HEIGHT\nDDS_WIDTH = DDSD.WIDTH\n\nDDS_SURFACE_FLAGS_TEXTURE = DDSCAPS.TEXTURE\nDDS_SURFACE_FLAGS_MIPMAP = DDSCAPS.COMPLEX | DDSCAPS.MIPMAP\nDDS_SURFACE_FLAGS_CUBEMAP = DDSCAPS.COMPLEX\n\nDDS_CUBEMAP_POSITIVEX = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEX\nDDS_CUBEMAP_NEGATIVEX = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEX\nDDS_CUBEMAP_POSITIVEY = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEY\nDDS_CUBEMAP_NEGATIVEY = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEY\nDDS_CUBEMAP_POSITIVEZ = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEZ\nDDS_CUBEMAP_NEGATIVEZ = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEZ\n\nDXT1_FOURCC = D3DFMT.DXT1\nDXT3_FOURCC = D3DFMT.DXT3\nDXT5_FOURCC = D3DFMT.DXT5\n\nDXGI_FORMAT_R8G8B8A8_TYPELESS = DXGI_FORMAT.R8G8B8A8_TYPELESS\nDXGI_FORMAT_R8G8B8A8_UNORM = DXGI_FORMAT.R8G8B8A8_UNORM\nDXGI_FORMAT_R8G8B8A8_UNORM_SRGB = DXGI_FORMAT.R8G8B8A8_UNORM_SRGB\nDXGI_FORMAT_BC5_TYPELESS = DXGI_FORMAT.BC5_TYPELESS\nDXGI_FORMAT_BC5_UNORM = DXGI_FORMAT.BC5_UNORM\nDXGI_FORMAT_BC5_SNORM = DXGI_FORMAT.BC5_SNORM\nDXGI_FORMAT_BC6H_UF16 = DXGI_FORMAT.BC6H_UF16\nDXGI_FORMAT_BC6H_SF16 = DXGI_FORMAT.BC6H_SF16\nDXGI_FORMAT_BC7_TYPELESS = DXGI_FORMAT.BC7_TYPELESS\nDXGI_FORMAT_BC7_UNORM = DXGI_FORMAT.BC7_UNORM\nDXGI_FORMAT_BC7_UNORM_SRGB = DXGI_FORMAT.BC7_UNORM_SRGB\n\n\nclass DdsImageFile(ImageFile.ImageFile):\n format = "DDS"\n format_description = "DirectDraw Surface"\n\n def _open(self) -> None:\n if not _accept(self.fp.read(4)):\n msg = "not a DDS file"\n raise SyntaxError(msg)\n (header_size,) = struct.unpack("<I", self.fp.read(4))\n if header_size != 124:\n msg = f"Unsupported header size {repr(header_size)}"\n raise OSError(msg)\n header_bytes = self.fp.read(header_size - 4)\n if len(header_bytes) != 120:\n msg = f"Incomplete header: 
{len(header_bytes)} bytes"\n raise OSError(msg)\n header = io.BytesIO(header_bytes)\n\n flags, height, width = struct.unpack("<3I", header.read(12))\n self._size = (width, height)\n extents = (0, 0) + self.size\n\n pitch, depth, mipmaps = struct.unpack("<3I", header.read(12))\n struct.unpack("<11I", header.read(44)) # reserved\n\n # pixel format\n pfsize, pfflags, fourcc, bitcount = struct.unpack("<4I", header.read(16))\n n = 0\n rawmode = None\n if pfflags & DDPF.RGB:\n # Texture contains uncompressed RGB data\n if pfflags & DDPF.ALPHAPIXELS:\n self._mode = "RGBA"\n mask_count = 4\n else:\n self._mode = "RGB"\n mask_count = 3\n\n masks = struct.unpack(f"<{mask_count}I", header.read(mask_count * 4))\n self.tile = [ImageFile._Tile("dds_rgb", extents, 0, (bitcount, masks))]\n return\n elif pfflags & DDPF.LUMINANCE:\n if bitcount == 8:\n self._mode = "L"\n elif bitcount == 16 and pfflags & DDPF.ALPHAPIXELS:\n self._mode = "LA"\n else:\n msg = f"Unsupported bitcount {bitcount} for {pfflags}"\n raise OSError(msg)\n elif pfflags & DDPF.PALETTEINDEXED8:\n self._mode = "P"\n self.palette = ImagePalette.raw("RGBA", self.fp.read(1024))\n self.palette.mode = "RGBA"\n elif pfflags & DDPF.FOURCC:\n offset = header_size + 4\n if fourcc == D3DFMT.DXT1:\n self._mode = "RGBA"\n self.pixel_format = "DXT1"\n n = 1\n elif fourcc == D3DFMT.DXT3:\n self._mode = "RGBA"\n self.pixel_format = "DXT3"\n n = 2\n elif fourcc == D3DFMT.DXT5:\n self._mode = "RGBA"\n self.pixel_format = "DXT5"\n n = 3\n elif fourcc in (D3DFMT.BC4U, D3DFMT.ATI1):\n self._mode = "L"\n self.pixel_format = "BC4"\n n = 4\n elif fourcc == D3DFMT.BC5S:\n self._mode = "RGB"\n self.pixel_format = "BC5S"\n n = 5\n elif fourcc in (D3DFMT.BC5U, D3DFMT.ATI2):\n self._mode = "RGB"\n self.pixel_format = "BC5"\n n = 5\n elif fourcc == D3DFMT.DX10:\n offset += 20\n # ignoring flags which pertain to volume textures and cubemaps\n (dxgi_format,) = struct.unpack("<I", self.fp.read(4))\n self.fp.read(16)\n if dxgi_format in (\n 
DXGI_FORMAT.BC1_UNORM,\n DXGI_FORMAT.BC1_TYPELESS,\n ):\n self._mode = "RGBA"\n self.pixel_format = "BC1"\n n = 1\n elif dxgi_format in (DXGI_FORMAT.BC2_TYPELESS, DXGI_FORMAT.BC2_UNORM):\n self._mode = "RGBA"\n self.pixel_format = "BC2"\n n = 2\n elif dxgi_format in (DXGI_FORMAT.BC3_TYPELESS, DXGI_FORMAT.BC3_UNORM):\n self._mode = "RGBA"\n self.pixel_format = "BC3"\n n = 3\n elif dxgi_format in (DXGI_FORMAT.BC4_TYPELESS, DXGI_FORMAT.BC4_UNORM):\n self._mode = "L"\n self.pixel_format = "BC4"\n n = 4\n elif dxgi_format in (DXGI_FORMAT.BC5_TYPELESS, DXGI_FORMAT.BC5_UNORM):\n self._mode = "RGB"\n self.pixel_format = "BC5"\n n = 5\n elif dxgi_format == DXGI_FORMAT.BC5_SNORM:\n self._mode = "RGB"\n self.pixel_format = "BC5S"\n n = 5\n elif dxgi_format == DXGI_FORMAT.BC6H_UF16:\n self._mode = "RGB"\n self.pixel_format = "BC6H"\n n = 6\n elif dxgi_format == DXGI_FORMAT.BC6H_SF16:\n self._mode = "RGB"\n self.pixel_format = "BC6HS"\n n = 6\n elif dxgi_format in (\n DXGI_FORMAT.BC7_TYPELESS,\n DXGI_FORMAT.BC7_UNORM,\n DXGI_FORMAT.BC7_UNORM_SRGB,\n ):\n self._mode = "RGBA"\n self.pixel_format = "BC7"\n n = 7\n if dxgi_format == DXGI_FORMAT.BC7_UNORM_SRGB:\n self.info["gamma"] = 1 / 2.2\n elif dxgi_format in (\n DXGI_FORMAT.R8G8B8A8_TYPELESS,\n DXGI_FORMAT.R8G8B8A8_UNORM,\n DXGI_FORMAT.R8G8B8A8_UNORM_SRGB,\n ):\n self._mode = "RGBA"\n if dxgi_format == DXGI_FORMAT.R8G8B8A8_UNORM_SRGB:\n self.info["gamma"] = 1 / 2.2\n else:\n msg = f"Unimplemented DXGI format {dxgi_format}"\n raise NotImplementedError(msg)\n else:\n msg = f"Unimplemented pixel format {repr(fourcc)}"\n raise NotImplementedError(msg)\n else:\n msg = f"Unknown pixel format flags {pfflags}"\n raise NotImplementedError(msg)\n\n if n:\n self.tile = [\n ImageFile._Tile("bcn", extents, offset, (n, self.pixel_format))\n ]\n else:\n self.tile = [ImageFile._Tile("raw", extents, 0, rawmode or self.mode)]\n\n def load_seek(self, pos: int) -> None:\n pass\n\n\nclass DdsRgbDecoder(ImageFile.PyDecoder):\n _pulls_fd = True\n\n 
def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:\n assert self.fd is not None\n bitcount, masks = self.args\n\n # Some masks will be padded with zeros, e.g. R 0b11 G 0b1100\n # Calculate how many zeros each mask is padded with\n mask_offsets = []\n # And the maximum value of each channel without the padding\n mask_totals = []\n for mask in masks:\n offset = 0\n if mask != 0:\n while mask >> (offset + 1) << (offset + 1) == mask:\n offset += 1\n mask_offsets.append(offset)\n mask_totals.append(mask >> offset)\n\n data = bytearray()\n bytecount = bitcount // 8\n dest_length = self.state.xsize * self.state.ysize * len(masks)\n while len(data) < dest_length:\n value = int.from_bytes(self.fd.read(bytecount), "little")\n for i, mask in enumerate(masks):\n masked_value = value & mask\n # Remove the zero padding, and scale it to 8 bits\n data += o8(\n int(((masked_value >> mask_offsets[i]) / mask_totals[i]) * 255)\n )\n self.set_as_raw(data)\n return -1, 0\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n if im.mode not in ("RGB", "RGBA", "L", "LA"):\n msg = f"cannot write mode {im.mode} as DDS"\n raise OSError(msg)\n\n flags = DDSD.CAPS | DDSD.HEIGHT | DDSD.WIDTH | DDSD.PIXELFORMAT\n bitcount = len(im.getbands()) * 8\n pixel_format = im.encoderinfo.get("pixel_format")\n args: tuple[int] | str\n if pixel_format:\n codec_name = "bcn"\n flags |= DDSD.LINEARSIZE\n pitch = (im.width + 3) * 4\n rgba_mask = [0, 0, 0, 0]\n pixel_flags = DDPF.FOURCC\n if pixel_format == "DXT1":\n fourcc = D3DFMT.DXT1\n args = (1,)\n elif pixel_format == "DXT3":\n fourcc = D3DFMT.DXT3\n args = (2,)\n elif pixel_format == "DXT5":\n fourcc = D3DFMT.DXT5\n args = (3,)\n else:\n fourcc = D3DFMT.DX10\n if pixel_format == "BC2":\n args = (2,)\n dxgi_format = DXGI_FORMAT.BC2_TYPELESS\n elif pixel_format == "BC3":\n args = (3,)\n dxgi_format = DXGI_FORMAT.BC3_TYPELESS\n elif pixel_format == "BC5":\n args = (5,)\n dxgi_format = 
DXGI_FORMAT.BC5_TYPELESS\n if im.mode != "RGB":\n msg = "only RGB mode can be written as BC5"\n raise OSError(msg)\n else:\n msg = f"cannot write pixel format {pixel_format}"\n raise OSError(msg)\n else:\n codec_name = "raw"\n flags |= DDSD.PITCH\n pitch = (im.width * bitcount + 7) // 8\n\n alpha = im.mode[-1] == "A"\n if im.mode[0] == "L":\n pixel_flags = DDPF.LUMINANCE\n args = im.mode\n if alpha:\n rgba_mask = [0x000000FF, 0x000000FF, 0x000000FF]\n else:\n rgba_mask = [0xFF000000, 0xFF000000, 0xFF000000]\n else:\n pixel_flags = DDPF.RGB\n args = im.mode[::-1]\n rgba_mask = [0x00FF0000, 0x0000FF00, 0x000000FF]\n\n if alpha:\n r, g, b, a = im.split()\n im = Image.merge("RGBA", (a, r, g, b))\n if alpha:\n pixel_flags |= DDPF.ALPHAPIXELS\n rgba_mask.append(0xFF000000 if alpha else 0)\n\n fourcc = D3DFMT.UNKNOWN\n fp.write(\n o32(DDS_MAGIC)\n + struct.pack(\n "<7I",\n 124, # header size\n flags, # flags\n im.height,\n im.width,\n pitch,\n 0, # depth\n 0, # mipmaps\n )\n + struct.pack("11I", *((0,) * 11)) # reserved\n # pfsize, pfflags, fourcc, bitcount\n + struct.pack("<4I", 32, pixel_flags, fourcc, bitcount)\n + struct.pack("<4I", *rgba_mask) # dwRGBABitMask\n + struct.pack("<5I", DDSCAPS.TEXTURE, 0, 0, 0, 0)\n )\n if fourcc == D3DFMT.DX10:\n fp.write(\n # dxgi_format, 2D resource, misc, array size, straight alpha\n struct.pack("<5I", dxgi_format, 3, 0, 0, 1)\n )\n ImageFile._save(im, fp, [ImageFile._Tile(codec_name, (0, 0) + im.size, 0, args)])\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"DDS ")\n\n\nImage.register_open(DdsImageFile.format, DdsImageFile, _accept)\nImage.register_decoder("dds_rgb", DdsRgbDecoder)\nImage.register_save(DdsImageFile.format, _save)\nImage.register_extension(DdsImageFile.format, ".dds")\n | .venv\Lib\site-packages\PIL\DdsImagePlugin.py | DdsImagePlugin.py | Python | 19,530 | 0.95 | 0.073718 | 0.026224 | react-lib | 955 | 2023-10-28T16:59:37.844105 | Apache-2.0 | false | 4d93d5bdd03f5c8c8ccac54ad2c9cae9 |
#\n# The Python Imaging Library.\n# $Id$\n#\n# EPS file handling\n#\n# History:\n# 1995-09-01 fl Created (0.1)\n# 1996-05-18 fl Don't choke on "atend" fields, Ghostscript interface (0.2)\n# 1996-08-22 fl Don't choke on floating point BoundingBox values\n# 1996-08-23 fl Handle files from Macintosh (0.3)\n# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.4)\n# 2003-09-07 fl Check gs.close status (from Federico Di Gregorio) (0.5)\n# 2014-05-07 e Handling of EPS with binary preview and fixed resolution\n# resizing\n#\n# Copyright (c) 1997-2003 by Secret Labs AB.\n# Copyright (c) 1995-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport io\nimport os\nimport re\nimport subprocess\nimport sys\nimport tempfile\nfrom typing import IO\n\nfrom . import Image, ImageFile\nfrom ._binary import i32le as i32\n\n# --------------------------------------------------------------------\n\n\nsplit = re.compile(r"^%%([^:]*):[ \t]*(.*)[ \t]*$")\nfield = re.compile(r"^%[%!\w]([^:]*)[ \t]*$")\n\ngs_binary: str | bool | None = None\ngs_windows_binary = None\n\n\ndef has_ghostscript() -> bool:\n global gs_binary, gs_windows_binary\n if gs_binary is None:\n if sys.platform.startswith("win"):\n if gs_windows_binary is None:\n import shutil\n\n for binary in ("gswin32c", "gswin64c", "gs"):\n if shutil.which(binary) is not None:\n gs_windows_binary = binary\n break\n else:\n gs_windows_binary = False\n gs_binary = gs_windows_binary\n else:\n try:\n subprocess.check_call(["gs", "--version"], stdout=subprocess.DEVNULL)\n gs_binary = "gs"\n except OSError:\n gs_binary = False\n return gs_binary is not False\n\n\ndef Ghostscript(\n tile: list[ImageFile._Tile],\n size: tuple[int, int],\n fp: IO[bytes],\n scale: int = 1,\n transparency: bool = False,\n) -> Image.core.ImagingCore:\n """Render an image using Ghostscript"""\n global gs_binary\n if not has_ghostscript():\n msg = "Unable to locate Ghostscript 
on paths"\n raise OSError(msg)\n assert isinstance(gs_binary, str)\n\n # Unpack decoder tile\n args = tile[0].args\n assert isinstance(args, tuple)\n length, bbox = args\n\n # Hack to support hi-res rendering\n scale = int(scale) or 1\n width = size[0] * scale\n height = size[1] * scale\n # resolution is dependent on bbox and size\n res_x = 72.0 * width / (bbox[2] - bbox[0])\n res_y = 72.0 * height / (bbox[3] - bbox[1])\n\n out_fd, outfile = tempfile.mkstemp()\n os.close(out_fd)\n\n infile_temp = None\n if hasattr(fp, "name") and os.path.exists(fp.name):\n infile = fp.name\n else:\n in_fd, infile_temp = tempfile.mkstemp()\n os.close(in_fd)\n infile = infile_temp\n\n # Ignore length and offset!\n # Ghostscript can read it\n # Copy whole file to read in Ghostscript\n with open(infile_temp, "wb") as f:\n # fetch length of fp\n fp.seek(0, io.SEEK_END)\n fsize = fp.tell()\n # ensure start position\n # go back\n fp.seek(0)\n lengthfile = fsize\n while lengthfile > 0:\n s = fp.read(min(lengthfile, 100 * 1024))\n if not s:\n break\n lengthfile -= len(s)\n f.write(s)\n\n if transparency:\n # "RGBA"\n device = "pngalpha"\n else:\n # "pnmraw" automatically chooses between\n # PBM ("1"), PGM ("L"), and PPM ("RGB").\n device = "pnmraw"\n\n # Build Ghostscript command\n command = [\n gs_binary,\n "-q", # quiet mode\n f"-g{width:d}x{height:d}", # set output geometry (pixels)\n f"-r{res_x:f}x{res_y:f}", # set input DPI (dots per inch)\n "-dBATCH", # exit after processing\n "-dNOPAUSE", # don't pause between pages\n "-dSAFER", # safe mode\n f"-sDEVICE={device}",\n f"-sOutputFile={outfile}", # output file\n # adjust for image origin\n "-c",\n f"{-bbox[0]} {-bbox[1]} translate",\n "-f",\n infile, # input file\n # showpage (see https://bugs.ghostscript.com/show_bug.cgi?id=698272)\n "-c",\n "showpage",\n ]\n\n # push data through Ghostscript\n try:\n startupinfo = None\n if sys.platform.startswith("win"):\n startupinfo = subprocess.STARTUPINFO()\n startupinfo.dwFlags |= 
subprocess.STARTF_USESHOWWINDOW\n subprocess.check_call(command, startupinfo=startupinfo)\n with Image.open(outfile) as out_im:\n out_im.load()\n return out_im.im.copy()\n finally:\n try:\n os.unlink(outfile)\n if infile_temp:\n os.unlink(infile_temp)\n except OSError:\n pass\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"%!PS") or (\n len(prefix) >= 4 and i32(prefix) == 0xC6D3D0C5\n )\n\n\n##\n# Image plugin for Encapsulated PostScript. This plugin supports only\n# a few variants of this format.\n\n\nclass EpsImageFile(ImageFile.ImageFile):\n """EPS File Parser for the Python Imaging Library"""\n\n format = "EPS"\n format_description = "Encapsulated Postscript"\n\n mode_map = {1: "L", 2: "LAB", 3: "RGB", 4: "CMYK"}\n\n def _open(self) -> None:\n (length, offset) = self._find_offset(self.fp)\n\n # go to offset - start of "%!PS"\n self.fp.seek(offset)\n\n self._mode = "RGB"\n\n # When reading header comments, the first comment is used.\n # When reading trailer comments, the last comment is used.\n bounding_box: list[int] | None = None\n imagedata_size: tuple[int, int] | None = None\n\n byte_arr = bytearray(255)\n bytes_mv = memoryview(byte_arr)\n bytes_read = 0\n reading_header_comments = True\n reading_trailer_comments = False\n trailer_reached = False\n\n def check_required_header_comments() -> None:\n """\n The EPS specification requires that some headers exist.\n This should be checked when the header comments formally end,\n when image data starts, or when the file ends, whichever comes first.\n """\n if "PS-Adobe" not in self.info:\n msg = 'EPS header missing "%!PS-Adobe" comment'\n raise SyntaxError(msg)\n if "BoundingBox" not in self.info:\n msg = 'EPS header missing "%%BoundingBox" comment'\n raise SyntaxError(msg)\n\n def read_comment(s: str) -> bool:\n nonlocal bounding_box, reading_trailer_comments\n try:\n m = split.match(s)\n except re.error as e:\n msg = "not an EPS file"\n raise SyntaxError(msg) from e\n\n if not m:\n return 
False\n\n k, v = m.group(1, 2)\n self.info[k] = v\n if k == "BoundingBox":\n if v == "(atend)":\n reading_trailer_comments = True\n elif not bounding_box or (trailer_reached and reading_trailer_comments):\n try:\n # Note: The DSC spec says that BoundingBox\n # fields should be integers, but some drivers\n # put floating point values there anyway.\n bounding_box = [int(float(i)) for i in v.split()]\n except Exception:\n pass\n return True\n\n while True:\n byte = self.fp.read(1)\n if byte == b"":\n # if we didn't read a byte we must be at the end of the file\n if bytes_read == 0:\n if reading_header_comments:\n check_required_header_comments()\n break\n elif byte in b"\r\n":\n # if we read a line ending character, ignore it and parse what\n # we have already read. if we haven't read any other characters,\n # continue reading\n if bytes_read == 0:\n continue\n else:\n # ASCII/hexadecimal lines in an EPS file must not exceed\n # 255 characters, not including line ending characters\n if bytes_read >= 255:\n # only enforce this for lines starting with a "%",\n # otherwise assume it's binary data\n if byte_arr[0] == ord("%"):\n msg = "not an EPS file"\n raise SyntaxError(msg)\n else:\n if reading_header_comments:\n check_required_header_comments()\n reading_header_comments = False\n # reset bytes_read so we can keep reading\n # data until the end of the line\n bytes_read = 0\n byte_arr[bytes_read] = byte[0]\n bytes_read += 1\n continue\n\n if reading_header_comments:\n # Load EPS header\n\n # if this line doesn't start with a "%",\n # or does start with "%%EndComments",\n # then we've reached the end of the header/comments\n if byte_arr[0] != ord("%") or bytes_mv[:13] == b"%%EndComments":\n check_required_header_comments()\n reading_header_comments = False\n continue\n\n s = str(bytes_mv[:bytes_read], "latin-1")\n if not read_comment(s):\n m = field.match(s)\n if m:\n k = m.group(1)\n if k.startswith("PS-Adobe"):\n self.info["PS-Adobe"] = k[9:]\n else:\n self.info[k] = 
""\n elif s[0] == "%":\n # handle non-DSC PostScript comments that some\n # tools mistakenly put in the Comments section\n pass\n else:\n msg = "bad EPS header"\n raise OSError(msg)\n elif bytes_mv[:11] == b"%ImageData:":\n # Check for an "ImageData" descriptor\n # https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577413_pgfId-1035096\n\n # If we've already read an "ImageData" descriptor,\n # don't read another one.\n if imagedata_size:\n bytes_read = 0\n continue\n\n # Values:\n # columns\n # rows\n # bit depth (1 or 8)\n # mode (1: L, 2: LAB, 3: RGB, 4: CMYK)\n # number of padding channels\n # block size (number of bytes per row per channel)\n # binary/ascii (1: binary, 2: ascii)\n # data start identifier (the image data follows after a single line\n # consisting only of this quoted value)\n image_data_values = byte_arr[11:bytes_read].split(None, 7)\n columns, rows, bit_depth, mode_id = (\n int(value) for value in image_data_values[:4]\n )\n\n if bit_depth == 1:\n self._mode = "1"\n elif bit_depth == 8:\n try:\n self._mode = self.mode_map[mode_id]\n except ValueError:\n break\n else:\n break\n\n # Parse the columns and rows after checking the bit depth and mode\n # in case the bit depth and/or mode are invalid.\n imagedata_size = columns, rows\n elif bytes_mv[:5] == b"%%EOF":\n break\n elif trailer_reached and reading_trailer_comments:\n # Load EPS trailer\n s = str(bytes_mv[:bytes_read], "latin-1")\n read_comment(s)\n elif bytes_mv[:9] == b"%%Trailer":\n trailer_reached = True\n bytes_read = 0\n\n # A "BoundingBox" is always required,\n # even if an "ImageData" descriptor size exists.\n if not bounding_box:\n msg = "cannot determine EPS bounding box"\n raise OSError(msg)\n\n # An "ImageData" size takes precedence over the "BoundingBox".\n self._size = imagedata_size or (\n bounding_box[2] - bounding_box[0],\n bounding_box[3] - bounding_box[1],\n )\n\n self.tile = [\n ImageFile._Tile("eps", (0, 0) + self.size, offset, (length, bounding_box))\n ]\n\n 
def _find_offset(self, fp: IO[bytes]) -> tuple[int, int]:\n s = fp.read(4)\n\n if s == b"%!PS":\n # for HEAD without binary preview\n fp.seek(0, io.SEEK_END)\n length = fp.tell()\n offset = 0\n elif i32(s) == 0xC6D3D0C5:\n # FIX for: Some EPS file not handled correctly / issue #302\n # EPS can contain binary data\n # or start directly with latin coding\n # more info see:\n # https://web.archive.org/web/20160528181353/http://partners.adobe.com/public/developer/en/ps/5002.EPSF_Spec.pdf\n s = fp.read(8)\n offset = i32(s)\n length = i32(s, 4)\n else:\n msg = "not an EPS file"\n raise SyntaxError(msg)\n\n return length, offset\n\n def load(\n self, scale: int = 1, transparency: bool = False\n ) -> Image.core.PixelAccess | None:\n # Load EPS via Ghostscript\n if self.tile:\n self.im = Ghostscript(self.tile, self.size, self.fp, scale, transparency)\n self._mode = self.im.mode\n self._size = self.im.size\n self.tile = []\n return Image.Image.load(self)\n\n def load_seek(self, pos: int) -> None:\n # we can't incrementally load, so force ImageFile.parser to\n # use our custom load method by defining this method.\n pass\n\n\n# --------------------------------------------------------------------\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes, eps: int = 1) -> None:\n """EPS Writer for the Python Imaging Library."""\n\n # make sure image data is available\n im.load()\n\n # determine PostScript image mode\n if im.mode == "L":\n operator = (8, 1, b"image")\n elif im.mode == "RGB":\n operator = (8, 3, b"false 3 colorimage")\n elif im.mode == "CMYK":\n operator = (8, 4, b"false 4 colorimage")\n else:\n msg = "image mode is not supported"\n raise ValueError(msg)\n\n if eps:\n # write EPS header\n fp.write(b"%!PS-Adobe-3.0 EPSF-3.0\n")\n fp.write(b"%%Creator: PIL 0.1 EpsEncode\n")\n # fp.write("%%CreationDate: %s"...)\n fp.write(b"%%%%BoundingBox: 0 0 %d %d\n" % im.size)\n fp.write(b"%%Pages: 1\n")\n fp.write(b"%%EndComments\n")\n fp.write(b"%%Page: 1 1\n")\n 
fp.write(b"%%ImageData: %d %d " % im.size)\n fp.write(b'%d %d 0 1 1 "%s"\n' % operator)\n\n # image header\n fp.write(b"gsave\n")\n fp.write(b"10 dict begin\n")\n fp.write(b"/buf %d string def\n" % (im.size[0] * operator[1]))\n fp.write(b"%d %d scale\n" % im.size)\n fp.write(b"%d %d 8\n" % im.size) # <= bits\n fp.write(b"[%d 0 0 -%d 0 %d]\n" % (im.size[0], im.size[1], im.size[1]))\n fp.write(b"{ currentfile buf readhexstring pop } bind\n")\n fp.write(operator[2] + b"\n")\n if hasattr(fp, "flush"):\n fp.flush()\n\n ImageFile._save(im, fp, [ImageFile._Tile("eps", (0, 0) + im.size)])\n\n fp.write(b"\n%%%%EndBinary\n")\n fp.write(b"grestore end\n")\n if hasattr(fp, "flush"):\n fp.flush()\n\n\n# --------------------------------------------------------------------\n\n\nImage.register_open(EpsImageFile.format, EpsImageFile, _accept)\n\nImage.register_save(EpsImageFile.format, _save)\n\nImage.register_extensions(EpsImageFile.format, [".ps", ".eps"])\n\nImage.register_mime(EpsImageFile.format, "application/postscript")\n | .venv\Lib\site-packages\PIL\EpsImagePlugin.py | EpsImagePlugin.py | Python | 16,865 | 0.95 | 0.153361 | 0.243243 | python-kit | 137 | 2025-02-22T23:44:03.505627 | BSD-3-Clause | false | 2c55b65f4f7f23db11833e110f0cbc1b |
#\n# The Python Imaging Library.\n# $Id$\n#\n# EXIF tags\n#\n# Copyright (c) 2003 by Secret Labs AB\n#\n# See the README file for information on usage and redistribution.\n#\n\n"""\nThis module provides constants and clear-text names for various\nwell-known EXIF tags.\n"""\nfrom __future__ import annotations\n\nfrom enum import IntEnum\n\n\nclass Base(IntEnum):\n # possibly incomplete\n InteropIndex = 0x0001\n ProcessingSoftware = 0x000B\n NewSubfileType = 0x00FE\n SubfileType = 0x00FF\n ImageWidth = 0x0100\n ImageLength = 0x0101\n BitsPerSample = 0x0102\n Compression = 0x0103\n PhotometricInterpretation = 0x0106\n Thresholding = 0x0107\n CellWidth = 0x0108\n CellLength = 0x0109\n FillOrder = 0x010A\n DocumentName = 0x010D\n ImageDescription = 0x010E\n Make = 0x010F\n Model = 0x0110\n StripOffsets = 0x0111\n Orientation = 0x0112\n SamplesPerPixel = 0x0115\n RowsPerStrip = 0x0116\n StripByteCounts = 0x0117\n MinSampleValue = 0x0118\n MaxSampleValue = 0x0119\n XResolution = 0x011A\n YResolution = 0x011B\n PlanarConfiguration = 0x011C\n PageName = 0x011D\n FreeOffsets = 0x0120\n FreeByteCounts = 0x0121\n GrayResponseUnit = 0x0122\n GrayResponseCurve = 0x0123\n T4Options = 0x0124\n T6Options = 0x0125\n ResolutionUnit = 0x0128\n PageNumber = 0x0129\n TransferFunction = 0x012D\n Software = 0x0131\n DateTime = 0x0132\n Artist = 0x013B\n HostComputer = 0x013C\n Predictor = 0x013D\n WhitePoint = 0x013E\n PrimaryChromaticities = 0x013F\n ColorMap = 0x0140\n HalftoneHints = 0x0141\n TileWidth = 0x0142\n TileLength = 0x0143\n TileOffsets = 0x0144\n TileByteCounts = 0x0145\n SubIFDs = 0x014A\n InkSet = 0x014C\n InkNames = 0x014D\n NumberOfInks = 0x014E\n DotRange = 0x0150\n TargetPrinter = 0x0151\n ExtraSamples = 0x0152\n SampleFormat = 0x0153\n SMinSampleValue = 0x0154\n SMaxSampleValue = 0x0155\n TransferRange = 0x0156\n ClipPath = 0x0157\n XClipPathUnits = 0x0158\n YClipPathUnits = 0x0159\n Indexed = 0x015A\n JPEGTables = 0x015B\n OPIProxy = 0x015F\n JPEGProc = 0x0200\n 
JpegIFOffset = 0x0201\n JpegIFByteCount = 0x0202\n JpegRestartInterval = 0x0203\n JpegLosslessPredictors = 0x0205\n JpegPointTransforms = 0x0206\n JpegQTables = 0x0207\n JpegDCTables = 0x0208\n JpegACTables = 0x0209\n YCbCrCoefficients = 0x0211\n YCbCrSubSampling = 0x0212\n YCbCrPositioning = 0x0213\n ReferenceBlackWhite = 0x0214\n XMLPacket = 0x02BC\n RelatedImageFileFormat = 0x1000\n RelatedImageWidth = 0x1001\n RelatedImageLength = 0x1002\n Rating = 0x4746\n RatingPercent = 0x4749\n ImageID = 0x800D\n CFARepeatPatternDim = 0x828D\n BatteryLevel = 0x828F\n Copyright = 0x8298\n ExposureTime = 0x829A\n FNumber = 0x829D\n IPTCNAA = 0x83BB\n ImageResources = 0x8649\n ExifOffset = 0x8769\n InterColorProfile = 0x8773\n ExposureProgram = 0x8822\n SpectralSensitivity = 0x8824\n GPSInfo = 0x8825\n ISOSpeedRatings = 0x8827\n OECF = 0x8828\n Interlace = 0x8829\n TimeZoneOffset = 0x882A\n SelfTimerMode = 0x882B\n SensitivityType = 0x8830\n StandardOutputSensitivity = 0x8831\n RecommendedExposureIndex = 0x8832\n ISOSpeed = 0x8833\n ISOSpeedLatitudeyyy = 0x8834\n ISOSpeedLatitudezzz = 0x8835\n ExifVersion = 0x9000\n DateTimeOriginal = 0x9003\n DateTimeDigitized = 0x9004\n OffsetTime = 0x9010\n OffsetTimeOriginal = 0x9011\n OffsetTimeDigitized = 0x9012\n ComponentsConfiguration = 0x9101\n CompressedBitsPerPixel = 0x9102\n ShutterSpeedValue = 0x9201\n ApertureValue = 0x9202\n BrightnessValue = 0x9203\n ExposureBiasValue = 0x9204\n MaxApertureValue = 0x9205\n SubjectDistance = 0x9206\n MeteringMode = 0x9207\n LightSource = 0x9208\n Flash = 0x9209\n FocalLength = 0x920A\n Noise = 0x920D\n ImageNumber = 0x9211\n SecurityClassification = 0x9212\n ImageHistory = 0x9213\n TIFFEPStandardID = 0x9216\n MakerNote = 0x927C\n UserComment = 0x9286\n SubsecTime = 0x9290\n SubsecTimeOriginal = 0x9291\n SubsecTimeDigitized = 0x9292\n AmbientTemperature = 0x9400\n Humidity = 0x9401\n Pressure = 0x9402\n WaterDepth = 0x9403\n Acceleration = 0x9404\n CameraElevationAngle = 0x9405\n XPTitle = 
0x9C9B\n XPComment = 0x9C9C\n XPAuthor = 0x9C9D\n XPKeywords = 0x9C9E\n XPSubject = 0x9C9F\n FlashPixVersion = 0xA000\n ColorSpace = 0xA001\n ExifImageWidth = 0xA002\n ExifImageHeight = 0xA003\n RelatedSoundFile = 0xA004\n ExifInteroperabilityOffset = 0xA005\n FlashEnergy = 0xA20B\n SpatialFrequencyResponse = 0xA20C\n FocalPlaneXResolution = 0xA20E\n FocalPlaneYResolution = 0xA20F\n FocalPlaneResolutionUnit = 0xA210\n SubjectLocation = 0xA214\n ExposureIndex = 0xA215\n SensingMethod = 0xA217\n FileSource = 0xA300\n SceneType = 0xA301\n CFAPattern = 0xA302\n CustomRendered = 0xA401\n ExposureMode = 0xA402\n WhiteBalance = 0xA403\n DigitalZoomRatio = 0xA404\n FocalLengthIn35mmFilm = 0xA405\n SceneCaptureType = 0xA406\n GainControl = 0xA407\n Contrast = 0xA408\n Saturation = 0xA409\n Sharpness = 0xA40A\n DeviceSettingDescription = 0xA40B\n SubjectDistanceRange = 0xA40C\n ImageUniqueID = 0xA420\n CameraOwnerName = 0xA430\n BodySerialNumber = 0xA431\n LensSpecification = 0xA432\n LensMake = 0xA433\n LensModel = 0xA434\n LensSerialNumber = 0xA435\n CompositeImage = 0xA460\n CompositeImageCount = 0xA461\n CompositeImageExposureTimes = 0xA462\n Gamma = 0xA500\n PrintImageMatching = 0xC4A5\n DNGVersion = 0xC612\n DNGBackwardVersion = 0xC613\n UniqueCameraModel = 0xC614\n LocalizedCameraModel = 0xC615\n CFAPlaneColor = 0xC616\n CFALayout = 0xC617\n LinearizationTable = 0xC618\n BlackLevelRepeatDim = 0xC619\n BlackLevel = 0xC61A\n BlackLevelDeltaH = 0xC61B\n BlackLevelDeltaV = 0xC61C\n WhiteLevel = 0xC61D\n DefaultScale = 0xC61E\n DefaultCropOrigin = 0xC61F\n DefaultCropSize = 0xC620\n ColorMatrix1 = 0xC621\n ColorMatrix2 = 0xC622\n CameraCalibration1 = 0xC623\n CameraCalibration2 = 0xC624\n ReductionMatrix1 = 0xC625\n ReductionMatrix2 = 0xC626\n AnalogBalance = 0xC627\n AsShotNeutral = 0xC628\n AsShotWhiteXY = 0xC629\n BaselineExposure = 0xC62A\n BaselineNoise = 0xC62B\n BaselineSharpness = 0xC62C\n BayerGreenSplit = 0xC62D\n LinearResponseLimit = 0xC62E\n CameraSerialNumber 
= 0xC62F\n LensInfo = 0xC630\n ChromaBlurRadius = 0xC631\n AntiAliasStrength = 0xC632\n ShadowScale = 0xC633\n DNGPrivateData = 0xC634\n MakerNoteSafety = 0xC635\n CalibrationIlluminant1 = 0xC65A\n CalibrationIlluminant2 = 0xC65B\n BestQualityScale = 0xC65C\n RawDataUniqueID = 0xC65D\n OriginalRawFileName = 0xC68B\n OriginalRawFileData = 0xC68C\n ActiveArea = 0xC68D\n MaskedAreas = 0xC68E\n AsShotICCProfile = 0xC68F\n AsShotPreProfileMatrix = 0xC690\n CurrentICCProfile = 0xC691\n CurrentPreProfileMatrix = 0xC692\n ColorimetricReference = 0xC6BF\n CameraCalibrationSignature = 0xC6F3\n ProfileCalibrationSignature = 0xC6F4\n AsShotProfileName = 0xC6F6\n NoiseReductionApplied = 0xC6F7\n ProfileName = 0xC6F8\n ProfileHueSatMapDims = 0xC6F9\n ProfileHueSatMapData1 = 0xC6FA\n ProfileHueSatMapData2 = 0xC6FB\n ProfileToneCurve = 0xC6FC\n ProfileEmbedPolicy = 0xC6FD\n ProfileCopyright = 0xC6FE\n ForwardMatrix1 = 0xC714\n ForwardMatrix2 = 0xC715\n PreviewApplicationName = 0xC716\n PreviewApplicationVersion = 0xC717\n PreviewSettingsName = 0xC718\n PreviewSettingsDigest = 0xC719\n PreviewColorSpace = 0xC71A\n PreviewDateTime = 0xC71B\n RawImageDigest = 0xC71C\n OriginalRawFileDigest = 0xC71D\n SubTileBlockSize = 0xC71E\n RowInterleaveFactor = 0xC71F\n ProfileLookTableDims = 0xC725\n ProfileLookTableData = 0xC726\n OpcodeList1 = 0xC740\n OpcodeList2 = 0xC741\n OpcodeList3 = 0xC74E\n NoiseProfile = 0xC761\n\n\n"""Maps EXIF tags to tag names."""\nTAGS = {\n **{i.value: i.name for i in Base},\n 0x920C: "SpatialFrequencyResponse",\n 0x9214: "SubjectLocation",\n 0x9215: "ExposureIndex",\n 0x828E: "CFAPattern",\n 0x920B: "FlashEnergy",\n 0x9216: "TIFF/EPStandardID",\n}\n\n\nclass GPS(IntEnum):\n GPSVersionID = 0x00\n GPSLatitudeRef = 0x01\n GPSLatitude = 0x02\n GPSLongitudeRef = 0x03\n GPSLongitude = 0x04\n GPSAltitudeRef = 0x05\n GPSAltitude = 0x06\n GPSTimeStamp = 0x07\n GPSSatellites = 0x08\n GPSStatus = 0x09\n GPSMeasureMode = 0x0A\n GPSDOP = 0x0B\n GPSSpeedRef = 0x0C\n GPSSpeed 
= 0x0D\n GPSTrackRef = 0x0E\n GPSTrack = 0x0F\n GPSImgDirectionRef = 0x10\n GPSImgDirection = 0x11\n GPSMapDatum = 0x12\n GPSDestLatitudeRef = 0x13\n GPSDestLatitude = 0x14\n GPSDestLongitudeRef = 0x15\n GPSDestLongitude = 0x16\n GPSDestBearingRef = 0x17\n GPSDestBearing = 0x18\n GPSDestDistanceRef = 0x19\n GPSDestDistance = 0x1A\n GPSProcessingMethod = 0x1B\n GPSAreaInformation = 0x1C\n GPSDateStamp = 0x1D\n GPSDifferential = 0x1E\n GPSHPositioningError = 0x1F\n\n\n"""Maps EXIF GPS tags to tag names."""\nGPSTAGS = {i.value: i.name for i in GPS}\n\n\nclass Interop(IntEnum):\n InteropIndex = 0x0001\n InteropVersion = 0x0002\n RelatedImageFileFormat = 0x1000\n RelatedImageWidth = 0x1001\n RelatedImageHeight = 0x1002\n\n\nclass IFD(IntEnum):\n Exif = 0x8769\n GPSInfo = 0x8825\n MakerNote = 0x927C\n Makernote = 0x927C # Deprecated\n Interop = 0xA005\n IFD1 = -1\n\n\nclass LightSource(IntEnum):\n Unknown = 0x00\n Daylight = 0x01\n Fluorescent = 0x02\n Tungsten = 0x03\n Flash = 0x04\n Fine = 0x09\n Cloudy = 0x0A\n Shade = 0x0B\n DaylightFluorescent = 0x0C\n DayWhiteFluorescent = 0x0D\n CoolWhiteFluorescent = 0x0E\n WhiteFluorescent = 0x0F\n StandardLightA = 0x11\n StandardLightB = 0x12\n StandardLightC = 0x13\n D55 = 0x14\n D65 = 0x15\n D75 = 0x16\n D50 = 0x17\n ISO = 0x18\n Other = 0xFF\n | .venv\Lib\site-packages\PIL\ExifTags.py | ExifTags.py | Python | 10,313 | 0.95 | 0.02356 | 0.032787 | react-lib | 758 | 2024-02-10T19:28:02.709223 | GPL-3.0 | false | fa70778bea25d60a94f882eb17151bb1 |
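The row above carries `PIL/ExifTags.py`, which builds its `TAGS` lookup by merging the `IntEnum` member names with a handful of extra tag IDs. A minimal, stdlib-only sketch of that enum-to-name pattern (tag values copied from the file above; no Pillow import required):

```python
from enum import IntEnum

class Base(IntEnum):
    # A few well-known EXIF tag IDs, copied from PIL.ExifTags.Base above.
    ImageWidth = 0x0100
    Orientation = 0x0112
    DateTime = 0x0132

# Same construction as PIL.ExifTags.TAGS: enum members plus extra aliases
# for IDs that are not represented in the enum itself.
TAGS = {
    **{i.value: i.name for i in Base},
    0x920C: "SpatialFrequencyResponse",
}

print(TAGS[0x0112])  # -> Orientation
```

The dict-unpacking form keeps the enum as the single source of truth while still allowing stray numeric IDs to resolve to readable names.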
from __future__ import annotations\n\nimport collections\nimport os\nimport sys\nimport warnings\nfrom typing import IO\n\nimport PIL\n\nfrom . import Image\nfrom ._deprecate import deprecate\n\nmodules = {\n "pil": ("PIL._imaging", "PILLOW_VERSION"),\n "tkinter": ("PIL._tkinter_finder", "tk_version"),\n "freetype2": ("PIL._imagingft", "freetype2_version"),\n "littlecms2": ("PIL._imagingcms", "littlecms_version"),\n "webp": ("PIL._webp", "webpdecoder_version"),\n "avif": ("PIL._avif", "libavif_version"),\n}\n\n\ndef check_module(feature: str) -> bool:\n """\n Checks if a module is available.\n\n :param feature: The module to check for.\n :returns: ``True`` if available, ``False`` otherwise.\n :raises ValueError: If the module is not defined in this version of Pillow.\n """\n if feature not in modules:\n msg = f"Unknown module {feature}"\n raise ValueError(msg)\n\n module, ver = modules[feature]\n\n try:\n __import__(module)\n return True\n except ModuleNotFoundError:\n return False\n except ImportError as ex:\n warnings.warn(str(ex))\n return False\n\n\ndef version_module(feature: str) -> str | None:\n """\n :param feature: The module to check for.\n :returns:\n The loaded version number as a string, or ``None`` if unknown or not available.\n :raises ValueError: If the module is not defined in this version of Pillow.\n """\n if not check_module(feature):\n return None\n\n module, ver = modules[feature]\n\n return getattr(__import__(module, fromlist=[ver]), ver)\n\n\ndef get_supported_modules() -> list[str]:\n """\n :returns: A list of all supported modules.\n """\n return [f for f in modules if check_module(f)]\n\n\ncodecs = {\n "jpg": ("jpeg", "jpeglib"),\n "jpg_2000": ("jpeg2k", "jp2klib"),\n "zlib": ("zip", "zlib"),\n "libtiff": ("libtiff", "libtiff"),\n}\n\n\ndef check_codec(feature: str) -> bool:\n """\n Checks if a codec is available.\n\n :param feature: The codec to check for.\n :returns: ``True`` if available, ``False`` otherwise.\n :raises ValueError: If 
the codec is not defined in this version of Pillow.\n """\n if feature not in codecs:\n msg = f"Unknown codec {feature}"\n raise ValueError(msg)\n\n codec, lib = codecs[feature]\n\n return f"{codec}_encoder" in dir(Image.core)\n\n\ndef version_codec(feature: str) -> str | None:\n """\n :param feature: The codec to check for.\n :returns:\n The version number as a string, or ``None`` if not available.\n Checked at compile time for ``jpg``, run-time otherwise.\n :raises ValueError: If the codec is not defined in this version of Pillow.\n """\n if not check_codec(feature):\n return None\n\n codec, lib = codecs[feature]\n\n version = getattr(Image.core, f"{lib}_version")\n\n if feature == "libtiff":\n return version.split("\n")[0].split("Version ")[1]\n\n return version\n\n\ndef get_supported_codecs() -> list[str]:\n """\n :returns: A list of all supported codecs.\n """\n return [f for f in codecs if check_codec(f)]\n\n\nfeatures: dict[str, tuple[str, str | bool, str | None]] = {\n "webp_anim": ("PIL._webp", True, None),\n "webp_mux": ("PIL._webp", True, None),\n "transp_webp": ("PIL._webp", True, None),\n "raqm": ("PIL._imagingft", "HAVE_RAQM", "raqm_version"),\n "fribidi": ("PIL._imagingft", "HAVE_FRIBIDI", "fribidi_version"),\n "harfbuzz": ("PIL._imagingft", "HAVE_HARFBUZZ", "harfbuzz_version"),\n "libjpeg_turbo": ("PIL._imaging", "HAVE_LIBJPEGTURBO", "libjpeg_turbo_version"),\n "mozjpeg": ("PIL._imaging", "HAVE_MOZJPEG", "libjpeg_turbo_version"),\n "zlib_ng": ("PIL._imaging", "HAVE_ZLIBNG", "zlib_ng_version"),\n "libimagequant": ("PIL._imaging", "HAVE_LIBIMAGEQUANT", "imagequant_version"),\n "xcb": ("PIL._imaging", "HAVE_XCB", None),\n}\n\n\ndef check_feature(feature: str) -> bool | None:\n """\n Checks if a feature is available.\n\n :param feature: The feature to check for.\n :returns: ``True`` if available, ``False`` if unavailable, ``None`` if unknown.\n :raises ValueError: If the feature is not defined in this version of Pillow.\n """\n if feature not in 
features:\n msg = f"Unknown feature {feature}"\n raise ValueError(msg)\n\n module, flag, ver = features[feature]\n\n if isinstance(flag, bool):\n deprecate(f'check_feature("{feature}")', 12)\n try:\n imported_module = __import__(module, fromlist=["PIL"])\n if isinstance(flag, bool):\n return flag\n return getattr(imported_module, flag)\n except ModuleNotFoundError:\n return None\n except ImportError as ex:\n warnings.warn(str(ex))\n return None\n\n\ndef version_feature(feature: str) -> str | None:\n """\n :param feature: The feature to check for.\n :returns: The version number as a string, or ``None`` if not available.\n :raises ValueError: If the feature is not defined in this version of Pillow.\n """\n if not check_feature(feature):\n return None\n\n module, flag, ver = features[feature]\n\n if ver is None:\n return None\n\n return getattr(__import__(module, fromlist=[ver]), ver)\n\n\ndef get_supported_features() -> list[str]:\n """\n :returns: A list of all supported features.\n """\n supported_features = []\n for f, (module, flag, _) in features.items():\n if flag is True:\n for feature, (feature_module, _) in modules.items():\n if feature_module == module:\n if check_module(feature):\n supported_features.append(f)\n break\n elif check_feature(f):\n supported_features.append(f)\n return supported_features\n\n\ndef check(feature: str) -> bool | None:\n """\n :param feature: A module, codec, or feature name.\n :returns:\n ``True`` if the module, codec, or feature is available,\n ``False`` or ``None`` otherwise.\n """\n\n if feature in modules:\n return check_module(feature)\n if feature in codecs:\n return check_codec(feature)\n if feature in features:\n return check_feature(feature)\n warnings.warn(f"Unknown feature '{feature}'.", stacklevel=2)\n return False\n\n\ndef version(feature: str) -> str | None:\n """\n :param feature:\n The module, codec, or feature to check for.\n :returns:\n The version number as a string, or ``None`` if unknown or not available.\n 
"""\n if feature in modules:\n return version_module(feature)\n if feature in codecs:\n return version_codec(feature)\n if feature in features:\n return version_feature(feature)\n return None\n\n\ndef get_supported() -> list[str]:\n """\n :returns: A list of all supported modules, features, and codecs.\n """\n\n ret = get_supported_modules()\n ret.extend(get_supported_features())\n ret.extend(get_supported_codecs())\n return ret\n\n\ndef pilinfo(out: IO[str] | None = None, supported_formats: bool = True) -> None:\n """\n Prints information about this installation of Pillow.\n This function can be called with ``python3 -m PIL``.\n It can also be called with ``python3 -m PIL.report`` or ``python3 -m PIL --report``\n to have "supported_formats" set to ``False``, omitting the list of all supported\n image file formats.\n\n :param out:\n The output stream to print to. Defaults to ``sys.stdout`` if ``None``.\n :param supported_formats:\n If ``True``, a list of all supported image file formats will be printed.\n """\n\n if out is None:\n out = sys.stdout\n\n Image.init()\n\n print("-" * 68, file=out)\n print(f"Pillow {PIL.__version__}", file=out)\n py_version_lines = sys.version.splitlines()\n print(f"Python {py_version_lines[0].strip()}", file=out)\n for py_version in py_version_lines[1:]:\n print(f" {py_version.strip()}", file=out)\n print("-" * 68, file=out)\n print(f"Python executable is {sys.executable or 'unknown'}", file=out)\n if sys.prefix != sys.base_prefix:\n print(f"Environment Python files loaded from {sys.prefix}", file=out)\n print(f"System Python files loaded from {sys.base_prefix}", file=out)\n print("-" * 68, file=out)\n print(\n f"Python Pillow modules loaded from {os.path.dirname(Image.__file__)}",\n file=out,\n )\n print(\n f"Binary Pillow modules loaded from {os.path.dirname(Image.core.__file__)}",\n file=out,\n )\n print("-" * 68, file=out)\n\n for name, feature in [\n ("pil", "PIL CORE"),\n ("tkinter", "TKINTER"),\n ("freetype2", "FREETYPE2"),\n 
("littlecms2", "LITTLECMS2"),\n ("webp", "WEBP"),\n ("avif", "AVIF"),\n ("jpg", "JPEG"),\n ("jpg_2000", "OPENJPEG (JPEG2000)"),\n ("zlib", "ZLIB (PNG/ZIP)"),\n ("libtiff", "LIBTIFF"),\n ("raqm", "RAQM (Bidirectional Text)"),\n ("libimagequant", "LIBIMAGEQUANT (Quantization method)"),\n ("xcb", "XCB (X protocol)"),\n ]:\n if check(name):\n v: str | None = None\n if name == "jpg":\n libjpeg_turbo_version = version_feature("libjpeg_turbo")\n if libjpeg_turbo_version is not None:\n v = "mozjpeg" if check_feature("mozjpeg") else "libjpeg-turbo"\n v += " " + libjpeg_turbo_version\n if v is None:\n v = version(name)\n if v is not None:\n version_static = name in ("pil", "jpg")\n if name == "littlecms2":\n # this check is also in src/_imagingcms.c:setup_module()\n version_static = tuple(int(x) for x in v.split(".")) < (2, 7)\n t = "compiled for" if version_static else "loaded"\n if name == "zlib":\n zlib_ng_version = version_feature("zlib_ng")\n if zlib_ng_version is not None:\n v += ", compiled for zlib-ng " + zlib_ng_version\n elif name == "raqm":\n for f in ("fribidi", "harfbuzz"):\n v2 = version_feature(f)\n if v2 is not None:\n v += f", {f} {v2}"\n print("---", feature, "support ok,", t, v, file=out)\n else:\n print("---", feature, "support ok", file=out)\n else:\n print("***", feature, "support not installed", file=out)\n print("-" * 68, file=out)\n\n if supported_formats:\n extensions = collections.defaultdict(list)\n for ext, i in Image.EXTENSION.items():\n extensions[i].append(ext)\n\n for i in sorted(Image.ID):\n line = f"{i}"\n if i in Image.MIME:\n line = f"{line} {Image.MIME[i]}"\n print(line, file=out)\n\n if i in extensions:\n print(\n "Extensions: {}".format(", ".join(sorted(extensions[i]))), file=out\n )\n\n features = []\n if i in Image.OPEN:\n features.append("open")\n if i in Image.SAVE:\n features.append("save")\n if i in Image.SAVE_ALL:\n features.append("save_all")\n if i in Image.DECODERS:\n features.append("decode")\n if i in Image.ENCODERS:\n 
features.append("encode")\n\n print("Features: {}".format(", ".join(features)), file=out)\n print("-" * 68, file=out)\n | .venv\Lib\site-packages\PIL\features.py | features.py | Python | 11,840 | 0.95 | 0.254848 | 0.003367 | vue-tools | 701 | 2024-06-09T20:56:19.888990 | MIT | false | 57573723012b46c53b7debe7baa707f1 |
#\n# The Python Imaging Library\n# $Id$\n#\n# FITS file handling\n#\n# Copyright (c) 1998-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport gzip\nimport math\n\nfrom . import Image, ImageFile\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"SIMPLE")\n\n\nclass FitsImageFile(ImageFile.ImageFile):\n format = "FITS"\n format_description = "FITS"\n\n def _open(self) -> None:\n assert self.fp is not None\n\n headers: dict[bytes, bytes] = {}\n header_in_progress = False\n decoder_name = ""\n while True:\n header = self.fp.read(80)\n if not header:\n msg = "Truncated FITS file"\n raise OSError(msg)\n keyword = header[:8].strip()\n if keyword in (b"SIMPLE", b"XTENSION"):\n header_in_progress = True\n elif headers and not header_in_progress:\n # This is now a data unit\n break\n elif keyword == b"END":\n # Seek to the end of the header unit\n self.fp.seek(math.ceil(self.fp.tell() / 2880) * 2880)\n if not decoder_name:\n decoder_name, offset, args = self._parse_headers(headers)\n\n header_in_progress = False\n continue\n\n if decoder_name:\n # Keep going to read past the headers\n continue\n\n value = header[8:].split(b"/")[0].strip()\n if value.startswith(b"="):\n value = value[1:].strip()\n if not headers and (not _accept(keyword) or value != b"T"):\n msg = "Not a FITS file"\n raise SyntaxError(msg)\n headers[keyword] = value\n\n if not decoder_name:\n msg = "No image data"\n raise ValueError(msg)\n\n offset += self.fp.tell() - 80\n self.tile = [ImageFile._Tile(decoder_name, (0, 0) + self.size, offset, args)]\n\n def _get_size(\n self, headers: dict[bytes, bytes], prefix: bytes\n ) -> tuple[int, int] | None:\n naxis = int(headers[prefix + b"NAXIS"])\n if naxis == 0:\n return None\n\n if naxis == 1:\n return 1, int(headers[prefix + b"NAXIS1"])\n else:\n return int(headers[prefix + b"NAXIS1"]), int(headers[prefix + b"NAXIS2"])\n\n def _parse_headers(\n self, 
headers: dict[bytes, bytes]\n ) -> tuple[str, int, tuple[str | int, ...]]:\n prefix = b""\n decoder_name = "raw"\n offset = 0\n if (\n headers.get(b"XTENSION") == b"'BINTABLE'"\n and headers.get(b"ZIMAGE") == b"T"\n and headers[b"ZCMPTYPE"] == b"'GZIP_1 '"\n ):\n no_prefix_size = self._get_size(headers, prefix) or (0, 0)\n number_of_bits = int(headers[b"BITPIX"])\n offset = no_prefix_size[0] * no_prefix_size[1] * (number_of_bits // 8)\n\n prefix = b"Z"\n decoder_name = "fits_gzip"\n\n size = self._get_size(headers, prefix)\n if not size:\n return "", 0, ()\n\n self._size = size\n\n number_of_bits = int(headers[prefix + b"BITPIX"])\n if number_of_bits == 8:\n self._mode = "L"\n elif number_of_bits == 16:\n self._mode = "I;16"\n elif number_of_bits == 32:\n self._mode = "I"\n elif number_of_bits in (-32, -64):\n self._mode = "F"\n\n args: tuple[str | int, ...]\n if decoder_name == "raw":\n args = (self.mode, 0, -1)\n else:\n args = (number_of_bits,)\n return decoder_name, offset, args\n\n\nclass FitsGzipDecoder(ImageFile.PyDecoder):\n _pulls_fd = True\n\n def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:\n assert self.fd is not None\n value = gzip.decompress(self.fd.read())\n\n rows = []\n offset = 0\n number_of_bits = min(self.args[0] // 8, 4)\n for y in range(self.state.ysize):\n row = bytearray()\n for x in range(self.state.xsize):\n row += value[offset + (4 - number_of_bits) : offset + 4]\n offset += 4\n rows.append(row)\n self.set_as_raw(bytes([pixel for row in rows[::-1] for pixel in row]))\n return -1, 0\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(FitsImageFile.format, FitsImageFile, _accept)\nImage.register_decoder("fits_gzip", FitsGzipDecoder)\n\nImage.register_extensions(FitsImageFile.format, [".fit", ".fits"])\n | .venv\Lib\site-packages\PIL\FitsImagePlugin.py | FitsImagePlugin.py | Python | 4,796 | 0.95 | 0.171053 | 0.121951 | python-kit | 539 | 
2023-11-26T22:55:45.968121 | Apache-2.0 | false | eaa5745fa6539c1205ebcf6be7dc5728 |
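The FITS plugin above reads the header as 80-byte "cards" and, on `END`, seeks past the padding to the next 2880-byte block boundary. A self-contained sketch of that card parser against a synthetic header (the card values are made up for illustration):

```python
import io
import math

# A tiny synthetic header unit: three 80-byte cards, padded to 2880 bytes.
cards = [
    b"SIMPLE  =                    T",
    b"BITPIX  =                    8",
    b"END",
]
raw = b"".join(c.ljust(80) for c in cards)
raw = raw.ljust(math.ceil(len(raw) / 2880) * 2880)  # pad to a full block

fp = io.BytesIO(raw)
headers: dict[bytes, bytes] = {}
while True:
    card = fp.read(80)
    keyword = card[:8].strip()
    if keyword == b"END":
        # Skip the padding to reach the data unit, as FitsImageFile._open does.
        fp.seek(math.ceil(fp.tell() / 2880) * 2880)
        break
    # Value is everything after column 8, with any "/ comment" suffix dropped.
    value = card[8:].split(b"/")[0].strip()
    if value.startswith(b"="):
        value = value[1:].strip()
    headers[keyword] = value

print(headers)    # -> {b'SIMPLE': b'T', b'BITPIX': b'8'}
print(fp.tell())  # -> 2880, aligned to the block boundary
```

The `math.ceil(tell / 2880) * 2880` rounding is the same expression the plugin uses to jump from the padded header unit to the start of the data unit.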
#\n# The Python Imaging Library.\n# $Id$\n#\n# FLI/FLC file handling.\n#\n# History:\n# 95-09-01 fl Created\n# 97-01-03 fl Fixed parser, setup decoder tile\n# 98-07-15 fl Renamed offset attribute to avoid name clash\n#\n# Copyright (c) Secret Labs AB 1997-98.\n# Copyright (c) Fredrik Lundh 1995-97.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\n\nfrom . import Image, ImageFile, ImagePalette\nfrom ._binary import i16le as i16\nfrom ._binary import i32le as i32\nfrom ._binary import o8\nfrom ._util import DeferredError\n\n#\n# decoder\n\n\ndef _accept(prefix: bytes) -> bool:\n return (\n len(prefix) >= 6\n and i16(prefix, 4) in [0xAF11, 0xAF12]\n and i16(prefix, 14) in [0, 3] # flags\n )\n\n\n##\n# Image plugin for the FLI/FLC animation format. Use the <b>seek</b>\n# method to load individual frames.\n\n\nclass FliImageFile(ImageFile.ImageFile):\n format = "FLI"\n format_description = "Autodesk FLI/FLC Animation"\n _close_exclusive_fp_after_loading = False\n\n def _open(self) -> None:\n # HEAD\n s = self.fp.read(128)\n if not (_accept(s) and s[20:22] == b"\x00\x00"):\n msg = "not an FLI/FLC file"\n raise SyntaxError(msg)\n\n # frames\n self.n_frames = i16(s, 6)\n self.is_animated = self.n_frames > 1\n\n # image characteristics\n self._mode = "P"\n self._size = i16(s, 8), i16(s, 10)\n\n # animation speed\n duration = i32(s, 16)\n magic = i16(s, 4)\n if magic == 0xAF11:\n duration = (duration * 1000) // 70\n self.info["duration"] = duration\n\n # look for palette\n palette = [(a, a, a) for a in range(256)]\n\n s = self.fp.read(16)\n\n self.__offset = 128\n\n if i16(s, 4) == 0xF100:\n # prefix chunk; ignore it\n self.__offset = self.__offset + i32(s)\n self.fp.seek(self.__offset)\n s = self.fp.read(16)\n\n if i16(s, 4) == 0xF1FA:\n # look for palette chunk\n number_of_subchunks = i16(s, 6)\n chunk_size: int | None = None\n for _ in range(number_of_subchunks):\n if chunk_size is not None:\n 
self.fp.seek(chunk_size - 6, os.SEEK_CUR)\n s = self.fp.read(6)\n chunk_type = i16(s, 4)\n if chunk_type in (4, 11):\n self._palette(palette, 2 if chunk_type == 11 else 0)\n break\n chunk_size = i32(s)\n if not chunk_size:\n break\n\n self.palette = ImagePalette.raw(\n "RGB", b"".join(o8(r) + o8(g) + o8(b) for (r, g, b) in palette)\n )\n\n # set things up to decode first frame\n self.__frame = -1\n self._fp = self.fp\n self.__rewind = self.fp.tell()\n self.seek(0)\n\n def _palette(self, palette: list[tuple[int, int, int]], shift: int) -> None:\n # load palette\n\n i = 0\n for e in range(i16(self.fp.read(2))):\n s = self.fp.read(2)\n i = i + s[0]\n n = s[1]\n if n == 0:\n n = 256\n s = self.fp.read(n * 3)\n for n in range(0, len(s), 3):\n r = s[n] << shift\n g = s[n + 1] << shift\n b = s[n + 2] << shift\n palette[i] = (r, g, b)\n i += 1\n\n def seek(self, frame: int) -> None:\n if not self._seek_check(frame):\n return\n if frame < self.__frame:\n self._seek(0)\n\n for f in range(self.__frame + 1, frame + 1):\n self._seek(f)\n\n def _seek(self, frame: int) -> None:\n if isinstance(self._fp, DeferredError):\n raise self._fp.ex\n if frame == 0:\n self.__frame = -1\n self._fp.seek(self.__rewind)\n self.__offset = 128\n else:\n # ensure that the previous frame was loaded\n self.load()\n\n if frame != self.__frame + 1:\n msg = f"cannot seek to frame {frame}"\n raise ValueError(msg)\n self.__frame = frame\n\n # move to next frame\n self.fp = self._fp\n self.fp.seek(self.__offset)\n\n s = self.fp.read(4)\n if not s:\n msg = "missing frame size"\n raise EOFError(msg)\n\n framesize = i32(s)\n\n self.decodermaxblock = framesize\n self.tile = [ImageFile._Tile("fli", (0, 0) + self.size, self.__offset)]\n\n self.__offset += framesize\n\n def tell(self) -> int:\n return self.__frame\n\n\n#\n# registry\n\nImage.register_open(FliImageFile.format, FliImageFile, _accept)\n\nImage.register_extensions(FliImageFile.format, [".fli", ".flc"])\n | 
.venv\Lib\site-packages\PIL\FliImagePlugin.py | FliImagePlugin.py | Python | 4,964 | 0.95 | 0.179775 | 0.239437 | python-kit | 445 | 2024-04-27T08:43:06.063878 | GPL-3.0 | false | 3a0a403d2415f863e24ddfc90599f38b |
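The FLI plugin above identifies files by two little-endian 16-bit fields: a magic at byte offset 4 (0xAF11 for FLI, 0xAF12 for FLC) and a flags word at offset 14. A stdlib sketch of that check — the header fields are packed per the FLI layout the plugin reads (size, magic, frames, width, height, depth, flags), and the sample values are invented:

```python
import struct

def i16le(data: bytes, offset: int = 0) -> int:
    """Read an unsigned little-endian 16-bit value (like PIL._binary.i16le)."""
    return struct.unpack_from("<H", data, offset)[0]

def accept(prefix: bytes) -> bool:
    # Mirrors FliImagePlugin._accept: magic at byte 4, flags at byte 14
    # (with a stricter length guard so both reads are always in range).
    return (
        len(prefix) >= 16
        and i16le(prefix, 4) in (0xAF11, 0xAF12)
        and i16le(prefix, 14) in (0, 3)
    )

# Hypothetical 16-byte prefix: size, FLC magic, frames, width, height, depth, flags.
header = struct.pack("<IHHHHHH", 128, 0xAF12, 1, 320, 200, 8, 3)
print(accept(header))        # -> True
print(accept(b"\x00" * 16))  # -> False
```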
#\n# The Python Imaging Library\n# $Id$\n#\n# base class for raster font file parsers\n#\n# history:\n# 1997-06-05 fl created\n# 1997-08-19 fl restrict image width\n#\n# Copyright (c) 1997-1998 by Secret Labs AB\n# Copyright (c) 1997-1998 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\nfrom typing import BinaryIO\n\nfrom . import Image, _binary\n\nWIDTH = 800\n\n\ndef puti16(\n fp: BinaryIO, values: tuple[int, int, int, int, int, int, int, int, int, int]\n) -> None:\n """Write network order (big-endian) 16-bit sequence"""\n for v in values:\n if v < 0:\n v += 65536\n fp.write(_binary.o16be(v))\n\n\nclass FontFile:\n """Base class for raster font file handlers."""\n\n bitmap: Image.Image | None = None\n\n def __init__(self) -> None:\n self.info: dict[bytes, bytes | int] = {}\n self.glyph: list[\n tuple[\n tuple[int, int],\n tuple[int, int, int, int],\n tuple[int, int, int, int],\n Image.Image,\n ]\n | None\n ] = [None] * 256\n\n def __getitem__(self, ix: int) -> (\n tuple[\n tuple[int, int],\n tuple[int, int, int, int],\n tuple[int, int, int, int],\n Image.Image,\n ]\n | None\n ):\n return self.glyph[ix]\n\n def compile(self) -> None:\n """Create metrics and bitmap"""\n\n if self.bitmap:\n return\n\n # create bitmap large enough to hold all data\n h = w = maxwidth = 0\n lines = 1\n for glyph in self.glyph:\n if glyph:\n d, dst, src, im = glyph\n h = max(h, src[3] - src[1])\n w = w + (src[2] - src[0])\n if w > WIDTH:\n lines += 1\n w = src[2] - src[0]\n maxwidth = max(maxwidth, w)\n\n xsize = maxwidth\n ysize = lines * h\n\n if xsize == 0 and ysize == 0:\n return\n\n self.ysize = h\n\n # paste glyphs into bitmap\n self.bitmap = Image.new("1", (xsize, ysize))\n self.metrics: list[\n tuple[tuple[int, int], tuple[int, int, int, int], tuple[int, int, int, int]]\n | None\n ] = [None] * 256\n x = y = 0\n for i in range(256):\n glyph = self[i]\n if glyph:\n d, dst, src, im = glyph\n 
xx = src[2] - src[0]\n x0, y0 = x, y\n x = x + xx\n if x > WIDTH:\n x, y = 0, y + h\n x0, y0 = x, y\n x = xx\n s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0\n self.bitmap.paste(im.crop(src), s)\n self.metrics[i] = d, dst, s\n\n def save(self, filename: str) -> None:\n """Save font"""\n\n self.compile()\n\n # font data\n if not self.bitmap:\n msg = "No bitmap created"\n raise ValueError(msg)\n self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")\n\n # font metrics\n with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:\n fp.write(b"PILfont\n")\n fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!!\n fp.write(b"DATA\n")\n for id in range(256):\n m = self.metrics[id]\n if not m:\n puti16(fp, (0,) * 10)\n else:\n puti16(fp, m[0] + m[1] + m[2])\n | .venv\Lib\site-packages\PIL\FontFile.py | FontFile.py | Python | 3,711 | 0.95 | 0.179104 | 0.168142 | python-kit | 588 | 2025-04-24T16:50:00.714975 | BSD-3-Clause | false | e358e96fc7a5dda546b0574df4e5de6c |
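`FontFile.puti16` above writes big-endian 16-bit metrics and folds negative values into the unsigned range by adding 65536. A small sketch of that wraparound (simplified to accept any iterable rather than the exact 10-tuple the real function takes):

```python
import io
import struct

def o16be(v: int) -> bytes:
    """Pack an unsigned big-endian 16-bit value (like PIL._binary.o16be)."""
    return struct.pack(">H", v)

def puti16(fp, values) -> None:
    # Same trick as FontFile.puti16: negatives wrap into 0..65535
    # so they round-trip as two's-complement when read back as signed.
    for v in values:
        if v < 0:
            v += 65536
        fp.write(o16be(v))

buf = io.BytesIO()
puti16(buf, (0, 1, -1))
print(buf.getvalue().hex())  # -> 00000001ffff
```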
#\n# THIS IS WORK IN PROGRESS\n#\n# The Python Imaging Library.\n# $Id$\n#\n# FlashPix support for PIL\n#\n# History:\n# 97-01-25 fl Created (reads uncompressed RGB images only)\n#\n# Copyright (c) Secret Labs AB 1997.\n# Copyright (c) Fredrik Lundh 1997.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport olefile\n\nfrom . import Image, ImageFile\nfrom ._binary import i32le as i32\n\n# we map from colour field tuples to (mode, rawmode) descriptors\nMODES = {\n # opacity\n (0x00007FFE,): ("A", "L"),\n # monochrome\n (0x00010000,): ("L", "L"),\n (0x00018000, 0x00017FFE): ("RGBA", "LA"),\n # photo YCC\n (0x00020000, 0x00020001, 0x00020002): ("RGB", "YCC;P"),\n (0x00028000, 0x00028001, 0x00028002, 0x00027FFE): ("RGBA", "YCCA;P"),\n # standard RGB (NIFRGB)\n (0x00030000, 0x00030001, 0x00030002): ("RGB", "RGB"),\n (0x00038000, 0x00038001, 0x00038002, 0x00037FFE): ("RGBA", "RGBA"),\n}\n\n\n#\n# --------------------------------------------------------------------\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(olefile.MAGIC)\n\n\n##\n# Image plugin for the FlashPix images.\n\n\nclass FpxImageFile(ImageFile.ImageFile):\n format = "FPX"\n format_description = "FlashPix"\n\n def _open(self) -> None:\n #\n # read the OLE directory and see if this is a likely\n # to be a FlashPix file\n\n try:\n self.ole = olefile.OleFileIO(self.fp)\n except OSError as e:\n msg = "not an FPX file; invalid OLE file"\n raise SyntaxError(msg) from e\n\n root = self.ole.root\n if not root or root.clsid != "56616700-C154-11CE-8553-00AA00A1F95B":\n msg = "not an FPX file; bad root CLSID"\n raise SyntaxError(msg)\n\n self._open_index(1)\n\n def _open_index(self, index: int = 1) -> None:\n #\n # get the Image Contents Property Set\n\n prop = self.ole.getproperties(\n [f"Data Object Store {index:06d}", "\005Image Contents"]\n )\n\n # size (highest resolution)\n\n assert isinstance(prop[0x1000002], int)\n assert 
isinstance(prop[0x1000003], int)\n self._size = prop[0x1000002], prop[0x1000003]\n\n size = max(self.size)\n i = 1\n while size > 64:\n size = size // 2\n i += 1\n self.maxid = i - 1\n\n # mode. instead of using a single field for this, flashpix\n # requires you to specify the mode for each channel in each\n # resolution subimage, and leaves it to the decoder to make\n # sure that they all match. for now, we'll cheat and assume\n # that this is always the case.\n\n id = self.maxid << 16\n\n s = prop[0x2000002 | id]\n\n if not isinstance(s, bytes) or (bands := i32(s, 4)) > 4:\n msg = "Invalid number of bands"\n raise OSError(msg)\n\n # note: for now, we ignore the "uncalibrated" flag\n colors = tuple(i32(s, 8 + i * 4) & 0x7FFFFFFF for i in range(bands))\n\n self._mode, self.rawmode = MODES[colors]\n\n # load JPEG tables, if any\n self.jpeg = {}\n for i in range(256):\n id = 0x3000001 | (i << 16)\n if id in prop:\n self.jpeg[i] = prop[id]\n\n self._open_subimage(1, self.maxid)\n\n def _open_subimage(self, index: int = 1, subimage: int = 0) -> None:\n #\n # setup tile descriptors for a given subimage\n\n stream = [\n f"Data Object Store {index:06d}",\n f"Resolution {subimage:04d}",\n "Subimage 0000 Header",\n ]\n\n fp = self.ole.openstream(stream)\n\n # skip prefix\n fp.read(28)\n\n # header stream\n s = fp.read(36)\n\n size = i32(s, 4), i32(s, 8)\n # tilecount = i32(s, 12)\n tilesize = i32(s, 16), i32(s, 20)\n # channels = i32(s, 24)\n offset = i32(s, 28)\n length = i32(s, 32)\n\n if size != self.size:\n msg = "subimage mismatch"\n raise OSError(msg)\n\n # get tile descriptors\n fp.seek(28 + offset)\n s = fp.read(i32(s, 12) * length)\n\n x = y = 0\n xsize, ysize = size\n xtile, ytile = tilesize\n self.tile = []\n\n for i in range(0, len(s), length):\n x1 = min(xsize, x + xtile)\n y1 = min(ysize, y + ytile)\n\n compression = i32(s, i + 8)\n\n if compression == 0:\n self.tile.append(\n ImageFile._Tile(\n "raw",\n (x, y, x1, y1),\n i32(s, i) + 28,\n self.rawmode,\n )\n 
)\n\n elif compression == 1:\n # FIXME: the fill decoder is not implemented\n self.tile.append(\n ImageFile._Tile(\n "fill",\n (x, y, x1, y1),\n i32(s, i) + 28,\n (self.rawmode, s[12:16]),\n )\n )\n\n elif compression == 2:\n internal_color_conversion = s[14]\n jpeg_tables = s[15]\n rawmode = self.rawmode\n\n if internal_color_conversion:\n # The image is stored as usual (usually YCbCr).\n if rawmode == "RGBA":\n # For "RGBA", data is stored as YCbCrA based on\n # negative RGB. The following trick works around\n # this problem :\n jpegmode, rawmode = "YCbCrK", "CMYK"\n else:\n jpegmode = None # let the decoder decide\n\n else:\n # The image is stored as defined by rawmode\n jpegmode = rawmode\n\n self.tile.append(\n ImageFile._Tile(\n "jpeg",\n (x, y, x1, y1),\n i32(s, i) + 28,\n (rawmode, jpegmode),\n )\n )\n\n # FIXME: jpeg tables are tile dependent; the prefix\n # data must be placed in the tile descriptor itself!\n\n if jpeg_tables:\n self.tile_prefix = self.jpeg[jpeg_tables]\n\n else:\n msg = "unknown/invalid compression"\n raise OSError(msg)\n\n x = x + xtile\n if x >= xsize:\n x, y = 0, y + ytile\n if y >= ysize:\n break # isn't really required\n\n self.stream = stream\n self._fp = self.fp\n self.fp = None\n\n def load(self) -> Image.core.PixelAccess | None:\n if not self.fp:\n self.fp = self.ole.openstream(self.stream[:2] + ["Subimage 0000 Data"])\n\n return ImageFile.ImageFile.load(self)\n\n def close(self) -> None:\n self.ole.close()\n super().close()\n\n def __exit__(self, *args: object) -> None:\n self.ole.close()\n super().__exit__()\n\n\n#\n# --------------------------------------------------------------------\n\n\nImage.register_open(FpxImageFile.format, FpxImageFile, _accept)\n\nImage.register_extension(FpxImageFile.format, ".fpx")\n | .venv\Lib\site-packages\PIL\FpxImagePlugin.py | FpxImagePlugin.py | Python | 7,550 | 0.95 | 0.132296 | 0.277778 | python-kit | 91 | 2023-07-30T09:22:15.103761 | MIT | false | 894ce07c21a84130de3a211113c308fd |
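The FPX plugin above walks tile descriptors left-to-right, top-to-bottom, clamping each tile's far edge to the image bounds with `min` and wrapping to the next row when `x` passes the image width. That traversal, extracted into a stdlib generator (a sketch of the loop structure, not the plugin's actual API):

```python
def tile_boxes(xsize: int, ysize: int, xtile: int, ytile: int):
    """Yield (x0, y0, x1, y1) tile rectangles in the order FpxImagePlugin walks them."""
    x = y = 0
    while y < ysize:
        x1 = min(xsize, x + xtile)  # clamp right edge at the image boundary
        y1 = min(ysize, y + ytile)  # clamp bottom edge at the image boundary
        yield (x, y, x1, y1)
        x += xtile
        if x >= xsize:
            x, y = 0, y + ytile

# A 100x70 image with 64x64 tiles: edge tiles are clamped to the image bounds.
print(list(tile_boxes(100, 70, 64, 64)))
# -> [(0, 0, 64, 64), (64, 0, 100, 64), (0, 64, 64, 70), (64, 64, 100, 70)]
```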
"""\nA Pillow loader for .ftc and .ftu files (FTEX)\nJerome Leclanche <jerome@leclan.ch>\n\nThe contents of this file are hereby released in the public domain (CC0)\nFull text of the CC0 license:\n https://creativecommons.org/publicdomain/zero/1.0/\n\nIndependence War 2: Edge Of Chaos - Texture File Format - 16 October 2001\n\nThe textures used for 3D objects in Independence War 2: Edge Of Chaos are in a\npacked custom format called FTEX. This file format uses file extensions FTC\nand FTU.\n* FTC files are compressed textures (using standard texture compression).\n* FTU files are not compressed.\nTexture File Format\nThe FTC and FTU texture files both use the same format. This\nhas the following structure:\n{header}\n{format_directory}\n{data}\nWhere:\n{header} = {\n u32:magic,\n u32:version,\n u32:width,\n u32:height,\n u32:mipmap_count,\n u32:format_count\n}\n\n* The "magic" number is "FTEX".\n* "width" and "height" are the dimensions of the texture.\n* "mipmap_count" is the number of mipmaps in the texture.\n* "format_count" is the number of texture formats (different versions of the\nsame texture) in this file.\n\n{format_directory} = format_count * { u32:format, u32:where }\n\nThe format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB\nuncompressed textures.\nThe texture data for a format starts at the position "where" in the file.\n\nEach set of texture data in the file has the following structure:\n{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } }\n* "mipmap_size" is the number of bytes in that mip level. For compressed\ntextures this is the size of the texture data compressed with DXT1. For 24 bit\nuncompressed textures, this is 3 * width * height. Following this are the image\nbytes for that mipmap level.\n\nNote: All data is stored in little-Endian (Intel) byte order.\n"""\n\nfrom __future__ import annotations\n\nimport struct\nfrom enum import IntEnum\nfrom io import BytesIO\n\nfrom . 
import Image, ImageFile\n\nMAGIC = b"FTEX"\n\n\nclass Format(IntEnum):\n DXT1 = 0\n UNCOMPRESSED = 1\n\n\nclass FtexImageFile(ImageFile.ImageFile):\n format = "FTEX"\n format_description = "Texture File Format (IW2:EOC)"\n\n def _open(self) -> None:\n if not _accept(self.fp.read(4)):\n msg = "not an FTEX file"\n raise SyntaxError(msg)\n struct.unpack("<i", self.fp.read(4)) # version\n self._size = struct.unpack("<2i", self.fp.read(8))\n mipmap_count, format_count = struct.unpack("<2i", self.fp.read(8))\n\n # Only support single-format files.\n # I don't know of any multi-format file.\n assert format_count == 1\n\n format, where = struct.unpack("<2i", self.fp.read(8))\n self.fp.seek(where)\n (mipmap_size,) = struct.unpack("<i", self.fp.read(4))\n\n data = self.fp.read(mipmap_size)\n\n if format == Format.DXT1:\n self._mode = "RGBA"\n self.tile = [ImageFile._Tile("bcn", (0, 0) + self.size, 0, (1,))]\n elif format == Format.UNCOMPRESSED:\n self._mode = "RGB"\n self.tile = [ImageFile._Tile("raw", (0, 0) + self.size, 0, "RGB")]\n else:\n msg = f"Invalid texture compression format: {repr(format)}"\n raise ValueError(msg)\n\n self.fp.close()\n self.fp = BytesIO(data)\n\n def load_seek(self, pos: int) -> None:\n pass\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(MAGIC)\n\n\nImage.register_open(FtexImageFile.format, FtexImageFile, _accept)\nImage.register_extensions(FtexImageFile.format, [".ftc", ".ftu"])\n | .venv\Lib\site-packages\PIL\FtexImagePlugin.py | FtexImagePlugin.py | Python | 3,649 | 0.95 | 0.114035 | 0.103448 | awesome-app | 121 | 2023-07-27T12:34:25.358614 | BSD-3-Clause | false | 6053c0dd4526054a30c58f2e24c23710 |
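The FTEX header layout spelled out in the docstring above (six little-endian u32 fields: magic, version, width, height, mipmap_count, format_count) can be parsed with `struct` alone. A minimal sketch, assuming only the documented 24-byte fixed header; `parse_ftex_header` is a hypothetical helper, not Pillow API:

```python
import struct


def parse_ftex_header(data: bytes) -> dict:
    """Parse the fixed 24-byte FTEX header: u32 magic ("FTEX"),
    version, width, height, mipmap_count, format_count,
    all stored little-endian."""
    magic, version, width, height, mipmaps, formats = struct.unpack_from(
        "<4s5I", data, 0
    )
    if magic != b"FTEX":
        raise ValueError("not an FTEX file")
    return {
        "version": version,
        "size": (width, height),
        "mipmap_count": mipmaps,
        "format_count": formats,
    }


# Synthetic header: version 1, 64x32 texture, 1 mipmap, 1 format.
header = struct.pack("<4s5I", b"FTEX", 1, 64, 32, 1, 1)
```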
#\n# The Python Imaging Library\n#\n# load a GIMP brush file\n#\n# History:\n# 96-03-14 fl Created\n# 16-01-08 es Version 2\n#\n# Copyright (c) Secret Labs AB 1997.\n# Copyright (c) Fredrik Lundh 1996.\n# Copyright (c) Eric Soroos 2016.\n#\n# See the README file for information on usage and redistribution.\n#\n#\n# See https://github.com/GNOME/gimp/blob/mainline/devel-docs/gbr.txt for\n# format documentation.\n#\n# This code Interprets version 1 and 2 .gbr files.\n# Version 1 files are obsolete, and should not be used for new\n# brushes.\n# Version 2 files are saved by GIMP v2.8 (at least)\n# Version 3 files have a format specifier of 18 for 16bit floats in\n# the color depth field. This is currently unsupported by Pillow.\nfrom __future__ import annotations\n\nfrom . import Image, ImageFile\nfrom ._binary import i32be as i32\n\n\ndef _accept(prefix: bytes) -> bool:\n return len(prefix) >= 8 and i32(prefix, 0) >= 20 and i32(prefix, 4) in (1, 2)\n\n\n##\n# Image plugin for the GIMP brush format.\n\n\nclass GbrImageFile(ImageFile.ImageFile):\n format = "GBR"\n format_description = "GIMP brush file"\n\n def _open(self) -> None:\n header_size = i32(self.fp.read(4))\n if header_size < 20:\n msg = "not a GIMP brush"\n raise SyntaxError(msg)\n version = i32(self.fp.read(4))\n if version not in (1, 2):\n msg = f"Unsupported GIMP brush version: {version}"\n raise SyntaxError(msg)\n\n width = i32(self.fp.read(4))\n height = i32(self.fp.read(4))\n color_depth = i32(self.fp.read(4))\n if width <= 0 or height <= 0:\n msg = "not a GIMP brush"\n raise SyntaxError(msg)\n if color_depth not in (1, 4):\n msg = f"Unsupported GIMP brush color depth: {color_depth}"\n raise SyntaxError(msg)\n\n if version == 1:\n comment_length = header_size - 20\n else:\n comment_length = header_size - 28\n magic_number = self.fp.read(4)\n if magic_number != b"GIMP":\n msg = "not a GIMP brush, bad magic number"\n raise SyntaxError(msg)\n self.info["spacing"] = i32(self.fp.read(4))\n\n comment = 
self.fp.read(comment_length)[:-1]\n\n if color_depth == 1:\n self._mode = "L"\n else:\n self._mode = "RGBA"\n\n self._size = width, height\n\n self.info["comment"] = comment\n\n # Image might not be small\n Image._decompression_bomb_check(self.size)\n\n # Data is an uncompressed block of w * h * bytes/pixel\n self._data_size = width * height * color_depth\n\n def load(self) -> Image.core.PixelAccess | None:\n if self._im is None:\n self.im = Image.core.new(self.mode, self.size)\n self.frombytes(self.fp.read(self._data_size))\n return Image.Image.load(self)\n\n\n#\n# registry\n\n\nImage.register_open(GbrImageFile.format, GbrImageFile, _accept)\nImage.register_extension(GbrImageFile.format, ".gbr")\n | .venv\Lib\site-packages\PIL\GbrImagePlugin.py | GbrImagePlugin.py | Python | 3,109 | 0.95 | 0.165049 | 0.378049 | awesome-app | 189 | 2025-02-28T05:34:36.227731 | Apache-2.0 | false | 3686cf4f97971c39d20dfafc3a6f4bcb |
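The version-2 header walk performed in `GbrImageFile._open` above (five big-endian u32 fields, a `b"GIMP"` magic, a u32 spacing, then a NUL-terminated comment filling out `header_size`) can be sketched against a synthetic buffer. The layout is inferred from the plugin code; `parse_gbr_v2_header` is a hypothetical helper:

```python
import struct


def parse_gbr_v2_header(data: bytes) -> dict:
    """Sketch of a version-2 GIMP brush header read: big-endian u32
    header_size, version, width, height, color_depth, then b"GIMP",
    u32 spacing, and a NUL-terminated comment up to header_size."""
    header_size, version, width, height, depth = struct.unpack_from(">5I", data, 0)
    if header_size < 28 or version != 2:
        raise ValueError("not a version-2 GIMP brush")
    if data[20:24] != b"GIMP":
        raise ValueError("not a GIMP brush, bad magic number")
    (spacing,) = struct.unpack_from(">I", data, 24)
    comment = data[28:header_size].rstrip(b"\x00")
    return {
        "size": (width, height),
        "mode": "L" if depth == 1 else "RGBA",
        "spacing": spacing,
        "comment": comment,
    }


# Synthetic header: 16x16 greyscale brush, spacing 10, comment "dot".
comment = b"dot\x00"
header = (
    struct.pack(">5I", 28 + len(comment), 2, 16, 16, 1)
    + b"GIMP"
    + struct.pack(">I", 10)
    + comment
)
```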
#\n# The Python Imaging Library.\n# $Id$\n#\n# GD file handling\n#\n# History:\n# 1996-04-12 fl Created\n#\n# Copyright (c) 1997 by Secret Labs AB.\n# Copyright (c) 1996 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n\n"""\n.. note::\n This format cannot be automatically recognized, so the\n class is not registered for use with :py:func:`PIL.Image.open()`. To open a\n gd file, use the :py:func:`PIL.GdImageFile.open()` function instead.\n\n.. warning::\n THE GD FORMAT IS NOT DESIGNED FOR DATA INTERCHANGE. This\n implementation is provided for convenience and demonstrational\n purposes only.\n"""\nfrom __future__ import annotations\n\nfrom typing import IO\n\nfrom . import ImageFile, ImagePalette, UnidentifiedImageError\nfrom ._binary import i16be as i16\nfrom ._binary import i32be as i32\nfrom ._typing import StrOrBytesPath\n\n\nclass GdImageFile(ImageFile.ImageFile):\n """\n Image plugin for the GD uncompressed format. Note that this format\n is not supported by the standard :py:func:`PIL.Image.open()` function. 
To use\n this plugin, you have to import the :py:mod:`PIL.GdImageFile` module and\n use the :py:func:`PIL.GdImageFile.open()` function.\n """\n\n format = "GD"\n format_description = "GD uncompressed images"\n\n def _open(self) -> None:\n # Header\n assert self.fp is not None\n\n s = self.fp.read(1037)\n\n if i16(s) not in [65534, 65535]:\n msg = "Not a valid GD 2.x .gd file"\n raise SyntaxError(msg)\n\n self._mode = "P"\n self._size = i16(s, 2), i16(s, 4)\n\n true_color = s[6]\n true_color_offset = 2 if true_color else 0\n\n # transparency index\n tindex = i32(s, 7 + true_color_offset)\n if tindex < 256:\n self.info["transparency"] = tindex\n\n self.palette = ImagePalette.raw(\n "RGBX", s[7 + true_color_offset + 6 : 7 + true_color_offset + 6 + 256 * 4]\n )\n\n self.tile = [\n ImageFile._Tile(\n "raw",\n (0, 0) + self.size,\n 7 + true_color_offset + 6 + 256 * 4,\n "L",\n )\n ]\n\n\ndef open(fp: StrOrBytesPath | IO[bytes], mode: str = "r") -> GdImageFile:\n """\n Load texture from a GD image file.\n\n :param fp: GD file name, or an opened file handle.\n :param mode: Optional mode. In this version, if the mode argument\n is given, it must be "r".\n :returns: An image instance.\n :raises OSError: If the image could not be read.\n """\n if mode != "r":\n msg = "bad mode"\n raise ValueError(msg)\n\n try:\n return GdImageFile(fp)\n except SyntaxError as e:\n msg = "cannot identify this image file"\n raise UnidentifiedImageError(msg) from e\n | .venv\Lib\site-packages\PIL\GdImageFile.py | GdImageFile.py | Python | 2,890 | 0.95 | 0.166667 | 0.195122 | python-kit | 743 | 2024-03-13T01:11:47.590296 | BSD-3-Clause | false | 85c8900775ba6d2a0142f88aa80e5856 |
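The start of `GdImageFile._open` above reads a big-endian u16 magic (accepted values 65534 and 65535), u16 width and height, and a true-colour flag byte. That prefix check can be sketched without Pillow; `parse_gd2_header` is a hypothetical helper operating on a synthetic buffer:

```python
import struct


def parse_gd2_header(data: bytes) -> tuple[tuple[int, int], bool]:
    """Sketch of the GD 2.x prefix read: big-endian u16 magic
    (must be 65534 or 65535), u16 width, u16 height, then a
    true-colour flag byte at offset 6."""
    magic, width, height = struct.unpack_from(">3H", data, 0)
    if magic not in (65534, 65535):
        raise ValueError("Not a valid GD 2.x .gd file")
    true_color = bool(data[6])
    return (width, height), true_color


# Synthetic 8-byte prefix: valid magic, 100x50 image, flag byte 0.
prefix = struct.pack(">3H", 65534, 100, 50) + b"\x00\x00"
```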
#\n# The Python Imaging Library.\n# $Id$\n#\n# GIF file handling\n#\n# History:\n# 1995-09-01 fl Created\n# 1996-12-14 fl Added interlace support\n# 1996-12-30 fl Added animation support\n# 1997-01-05 fl Added write support, fixed local colour map bug\n# 1997-02-23 fl Make sure to load raster data in getdata()\n# 1997-07-05 fl Support external decoder (0.4)\n# 1998-07-09 fl Handle all modes when saving (0.5)\n# 1998-07-15 fl Renamed offset attribute to avoid name clash\n# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6)\n# 2001-04-17 fl Added palette optimization (0.7)\n# 2002-06-06 fl Added transparency support for save (0.8)\n# 2004-02-24 fl Disable interlacing for small images\n#\n# Copyright (c) 1997-2004 by Secret Labs AB\n# Copyright (c) 1995-2004 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport itertools\nimport math\nimport os\nimport subprocess\nfrom enum import IntEnum\nfrom functools import cached_property\nfrom typing import IO, Any, Literal, NamedTuple, Union, cast\n\nfrom . import (\n Image,\n ImageChops,\n ImageFile,\n ImageMath,\n ImageOps,\n ImagePalette,\n ImageSequence,\n)\nfrom ._binary import i16le as i16\nfrom ._binary import o8\nfrom ._binary import o16le as o16\nfrom ._util import DeferredError\n\nTYPE_CHECKING = False\nif TYPE_CHECKING:\n from . import _imaging\n from ._typing import Buffer\n\n\nclass LoadingStrategy(IntEnum):\n """.. versionadded:: 9.1.0"""\n\n RGB_AFTER_FIRST = 0\n RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1\n RGB_ALWAYS = 2\n\n\n#: .. versionadded:: 9.1.0\nLOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST\n\n# --------------------------------------------------------------------\n# Identify/read GIF files\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith((b"GIF87a", b"GIF89a"))\n\n\n##\n# Image plugin for GIF images. 
This plugin supports both GIF87 and\n# GIF89 images.\n\n\nclass GifImageFile(ImageFile.ImageFile):\n format = "GIF"\n format_description = "Compuserve GIF"\n _close_exclusive_fp_after_loading = False\n\n global_palette = None\n\n def data(self) -> bytes | None:\n s = self.fp.read(1)\n if s and s[0]:\n return self.fp.read(s[0])\n return None\n\n def _is_palette_needed(self, p: bytes) -> bool:\n for i in range(0, len(p), 3):\n if not (i // 3 == p[i] == p[i + 1] == p[i + 2]):\n return True\n return False\n\n def _open(self) -> None:\n # Screen\n s = self.fp.read(13)\n if not _accept(s):\n msg = "not a GIF file"\n raise SyntaxError(msg)\n\n self.info["version"] = s[:6]\n self._size = i16(s, 6), i16(s, 8)\n flags = s[10]\n bits = (flags & 7) + 1\n\n if flags & 128:\n # get global palette\n self.info["background"] = s[11]\n # check if palette contains colour indices\n p = self.fp.read(3 << bits)\n if self._is_palette_needed(p):\n p = ImagePalette.raw("RGB", p)\n self.global_palette = self.palette = p\n\n self._fp = self.fp # FIXME: hack\n self.__rewind = self.fp.tell()\n self._n_frames: int | None = None\n self._seek(0) # get ready to read first frame\n\n @property\n def n_frames(self) -> int:\n if self._n_frames is None:\n current = self.tell()\n try:\n while True:\n self._seek(self.tell() + 1, False)\n except EOFError:\n self._n_frames = self.tell() + 1\n self.seek(current)\n return self._n_frames\n\n @cached_property\n def is_animated(self) -> bool:\n if self._n_frames is not None:\n return self._n_frames != 1\n\n current = self.tell()\n if current:\n return True\n\n try:\n self._seek(1, False)\n is_animated = True\n except EOFError:\n is_animated = False\n\n self.seek(current)\n return is_animated\n\n def seek(self, frame: int) -> None:\n if not self._seek_check(frame):\n return\n if frame < self.__frame:\n self._im = None\n self._seek(0)\n\n last_frame = self.__frame\n for f in range(self.__frame + 1, frame + 1):\n try:\n self._seek(f)\n except EOFError as e:\n 
self.seek(last_frame)\n msg = "no more images in GIF file"\n raise EOFError(msg) from e\n\n def _seek(self, frame: int, update_image: bool = True) -> None:\n if isinstance(self._fp, DeferredError):\n raise self._fp.ex\n if frame == 0:\n # rewind\n self.__offset = 0\n self.dispose: _imaging.ImagingCore | None = None\n self.__frame = -1\n self._fp.seek(self.__rewind)\n self.disposal_method = 0\n if "comment" in self.info:\n del self.info["comment"]\n else:\n # ensure that the previous frame was loaded\n if self.tile and update_image:\n self.load()\n\n if frame != self.__frame + 1:\n msg = f"cannot seek to frame {frame}"\n raise ValueError(msg)\n\n self.fp = self._fp\n if self.__offset:\n # backup to last frame\n self.fp.seek(self.__offset)\n while self.data():\n pass\n self.__offset = 0\n\n s = self.fp.read(1)\n if not s or s == b";":\n msg = "no more images in GIF file"\n raise EOFError(msg)\n\n palette: ImagePalette.ImagePalette | Literal[False] | None = None\n\n info: dict[str, Any] = {}\n frame_transparency = None\n interlace = None\n frame_dispose_extent = None\n while True:\n if not s:\n s = self.fp.read(1)\n if not s or s == b";":\n break\n\n elif s == b"!":\n #\n # extensions\n #\n s = self.fp.read(1)\n block = self.data()\n if s[0] == 249 and block is not None:\n #\n # graphic control extension\n #\n flags = block[0]\n if flags & 1:\n frame_transparency = block[3]\n info["duration"] = i16(block, 1) * 10\n\n # disposal method - find the value of bits 4 - 6\n dispose_bits = 0b00011100 & flags\n dispose_bits = dispose_bits >> 2\n if dispose_bits:\n # only set the dispose if it is not\n # unspecified. 
I'm not sure if this is\n # correct, but it seems to prevent the last\n # frame from looking odd for some animations\n self.disposal_method = dispose_bits\n elif s[0] == 254:\n #\n # comment extension\n #\n comment = b""\n\n # Read this comment block\n while block:\n comment += block\n block = self.data()\n\n if "comment" in info:\n # If multiple comment blocks in frame, separate with \n\n info["comment"] += b"\n" + comment\n else:\n info["comment"] = comment\n s = None\n continue\n elif s[0] == 255 and frame == 0 and block is not None:\n #\n # application extension\n #\n info["extension"] = block, self.fp.tell()\n if block.startswith(b"NETSCAPE2.0"):\n block = self.data()\n if block and len(block) >= 3 and block[0] == 1:\n self.info["loop"] = i16(block, 1)\n while self.data():\n pass\n\n elif s == b",":\n #\n # local image\n #\n s = self.fp.read(9)\n\n # extent\n x0, y0 = i16(s, 0), i16(s, 2)\n x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6)\n if (x1 > self.size[0] or y1 > self.size[1]) and update_image:\n self._size = max(x1, self.size[0]), max(y1, self.size[1])\n Image._decompression_bomb_check(self._size)\n frame_dispose_extent = x0, y0, x1, y1\n flags = s[8]\n\n interlace = (flags & 64) != 0\n\n if flags & 128:\n bits = (flags & 7) + 1\n p = self.fp.read(3 << bits)\n if self._is_palette_needed(p):\n palette = ImagePalette.raw("RGB", p)\n else:\n palette = False\n\n # image data\n bits = self.fp.read(1)[0]\n self.__offset = self.fp.tell()\n break\n s = None\n\n if interlace is None:\n msg = "image not found in GIF frame"\n raise EOFError(msg)\n\n self.__frame = frame\n if not update_image:\n return\n\n self.tile = []\n\n if self.dispose:\n self.im.paste(self.dispose, self.dispose_extent)\n\n self._frame_palette = palette if palette is not None else self.global_palette\n self._frame_transparency = frame_transparency\n if frame == 0:\n if self._frame_palette:\n if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:\n self._mode = "RGBA" if frame_transparency is not None else 
"RGB"\n else:\n self._mode = "P"\n else:\n self._mode = "L"\n\n if palette:\n self.palette = palette\n elif self.global_palette:\n from copy import copy\n\n self.palette = copy(self.global_palette)\n else:\n self.palette = None\n else:\n if self.mode == "P":\n if (\n LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY\n or palette\n ):\n if "transparency" in self.info:\n self.im.putpalettealpha(self.info["transparency"], 0)\n self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG)\n self._mode = "RGBA"\n del self.info["transparency"]\n else:\n self._mode = "RGB"\n self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG)\n\n def _rgb(color: int) -> tuple[int, int, int]:\n if self._frame_palette:\n if color * 3 + 3 > len(self._frame_palette.palette):\n color = 0\n return cast(\n tuple[int, int, int],\n tuple(self._frame_palette.palette[color * 3 : color * 3 + 3]),\n )\n else:\n return (color, color, color)\n\n self.dispose = None\n self.dispose_extent: tuple[int, int, int, int] | None = frame_dispose_extent\n if self.dispose_extent and self.disposal_method >= 2:\n try:\n if self.disposal_method == 2:\n # replace with background colour\n\n # only dispose the extent in this frame\n x0, y0, x1, y1 = self.dispose_extent\n dispose_size = (x1 - x0, y1 - y0)\n\n Image._decompression_bomb_check(dispose_size)\n\n # by convention, attempt to use transparency first\n dispose_mode = "P"\n color = self.info.get("transparency", frame_transparency)\n if color is not None:\n if self.mode in ("RGB", "RGBA"):\n dispose_mode = "RGBA"\n color = _rgb(color) + (0,)\n else:\n color = self.info.get("background", 0)\n if self.mode in ("RGB", "RGBA"):\n dispose_mode = "RGB"\n color = _rgb(color)\n self.dispose = Image.core.fill(dispose_mode, dispose_size, color)\n else:\n # replace with previous contents\n if self._im is not None:\n # only dispose the extent in this frame\n self.dispose = self._crop(self.im, self.dispose_extent)\n elif frame_transparency is not 
None:\n x0, y0, x1, y1 = self.dispose_extent\n dispose_size = (x1 - x0, y1 - y0)\n\n Image._decompression_bomb_check(dispose_size)\n dispose_mode = "P"\n color = frame_transparency\n if self.mode in ("RGB", "RGBA"):\n dispose_mode = "RGBA"\n color = _rgb(frame_transparency) + (0,)\n self.dispose = Image.core.fill(\n dispose_mode, dispose_size, color\n )\n except AttributeError:\n pass\n\n if interlace is not None:\n transparency = -1\n if frame_transparency is not None:\n if frame == 0:\n if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS:\n self.info["transparency"] = frame_transparency\n elif self.mode not in ("RGB", "RGBA"):\n transparency = frame_transparency\n self.tile = [\n ImageFile._Tile(\n "gif",\n (x0, y0, x1, y1),\n self.__offset,\n (bits, interlace, transparency),\n )\n ]\n\n if info.get("comment"):\n self.info["comment"] = info["comment"]\n for k in ["duration", "extension"]:\n if k in info:\n self.info[k] = info[k]\n elif k in self.info:\n del self.info[k]\n\n def load_prepare(self) -> None:\n temp_mode = "P" if self._frame_palette else "L"\n self._prev_im = None\n if self.__frame == 0:\n if self._frame_transparency is not None:\n self.im = Image.core.fill(\n temp_mode, self.size, self._frame_transparency\n )\n elif self.mode in ("RGB", "RGBA"):\n self._prev_im = self.im\n if self._frame_palette:\n self.im = Image.core.fill("P", self.size, self._frame_transparency or 0)\n self.im.putpalette("RGB", *self._frame_palette.getdata())\n else:\n self._im = None\n if not self._prev_im and self._im is not None and self.size != self.im.size:\n expanded_im = Image.core.fill(self.im.mode, self.size)\n if self._frame_palette:\n expanded_im.putpalette("RGB", *self._frame_palette.getdata())\n expanded_im.paste(self.im, (0, 0) + self.im.size)\n\n self.im = expanded_im\n self._mode = temp_mode\n self._frame_palette = None\n\n super().load_prepare()\n\n def load_end(self) -> None:\n if self.__frame == 0:\n if self.mode == "P" and LOADING_STRATEGY == 
LoadingStrategy.RGB_ALWAYS:\n if self._frame_transparency is not None:\n self.im.putpalettealpha(self._frame_transparency, 0)\n self._mode = "RGBA"\n else:\n self._mode = "RGB"\n self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG)\n return\n if not self._prev_im:\n return\n if self.size != self._prev_im.size:\n if self._frame_transparency is not None:\n expanded_im = Image.core.fill("RGBA", self.size)\n else:\n expanded_im = Image.core.fill("P", self.size)\n expanded_im.putpalette("RGB", "RGB", self.im.getpalette())\n expanded_im = expanded_im.convert("RGB")\n expanded_im.paste(self._prev_im, (0, 0) + self._prev_im.size)\n\n self._prev_im = expanded_im\n assert self._prev_im is not None\n if self._frame_transparency is not None:\n if self.mode == "L":\n frame_im = self.im.convert_transparent("LA", self._frame_transparency)\n else:\n self.im.putpalettealpha(self._frame_transparency, 0)\n frame_im = self.im.convert("RGBA")\n else:\n frame_im = self.im.convert("RGB")\n\n assert self.dispose_extent is not None\n frame_im = self._crop(frame_im, self.dispose_extent)\n\n self.im = self._prev_im\n self._mode = self.im.mode\n if frame_im.mode in ("LA", "RGBA"):\n self.im.paste(frame_im, self.dispose_extent, frame_im)\n else:\n self.im.paste(frame_im, self.dispose_extent)\n\n def tell(self) -> int:\n return self.__frame\n\n\n# --------------------------------------------------------------------\n# Write GIF files\n\n\nRAWMODE = {"1": "L", "L": "L", "P": "P"}\n\n\ndef _normalize_mode(im: Image.Image) -> Image.Image:\n """\n Takes an image (or frame), returns an image in a mode that is appropriate\n for saving in a Gif.\n\n It may return the original image, or it may return an image converted to\n palette or 'L' mode.\n\n :param im: Image object\n :returns: Image object\n """\n if im.mode in RAWMODE:\n im.load()\n return im\n if Image.getmodebase(im.mode) == "RGB":\n im = im.convert("P", palette=Image.Palette.ADAPTIVE)\n assert im.palette is not None\n if 
im.palette.mode == "RGBA":\n for rgba in im.palette.colors:\n if rgba[3] == 0:\n im.info["transparency"] = im.palette.colors[rgba]\n break\n return im\n return im.convert("L")\n\n\n_Palette = Union[bytes, bytearray, list[int], ImagePalette.ImagePalette]\n\n\ndef _normalize_palette(\n im: Image.Image, palette: _Palette | None, info: dict[str, Any]\n) -> Image.Image:\n """\n Normalizes the palette for image.\n - Sets the palette to the incoming palette, if provided.\n - Ensures that there's a palette for L mode images\n - Optimizes the palette if necessary/desired.\n\n :param im: Image object\n :param palette: bytes object containing the source palette, or ....\n :param info: encoderinfo\n :returns: Image object\n """\n source_palette = None\n if palette:\n # a bytes palette\n if isinstance(palette, (bytes, bytearray, list)):\n source_palette = bytearray(palette[:768])\n if isinstance(palette, ImagePalette.ImagePalette):\n source_palette = bytearray(palette.palette)\n\n if im.mode == "P":\n if not source_palette:\n im_palette = im.getpalette(None)\n assert im_palette is not None\n source_palette = bytearray(im_palette)\n else: # L-mode\n if not source_palette:\n source_palette = bytearray(i // 3 for i in range(768))\n im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette)\n assert source_palette is not None\n\n if palette:\n used_palette_colors: list[int | None] = []\n assert im.palette is not None\n for i in range(0, len(source_palette), 3):\n source_color = tuple(source_palette[i : i + 3])\n index = im.palette.colors.get(source_color)\n if index in used_palette_colors:\n index = None\n used_palette_colors.append(index)\n for i, index in enumerate(used_palette_colors):\n if index is None:\n for j in range(len(used_palette_colors)):\n if j not in used_palette_colors:\n used_palette_colors[i] = j\n break\n dest_map: list[int] = []\n for index in used_palette_colors:\n assert index is not None\n dest_map.append(index)\n im = im.remap_palette(dest_map)\n 
else:\n optimized_palette_colors = _get_optimize(im, info)\n if optimized_palette_colors is not None:\n im = im.remap_palette(optimized_palette_colors, source_palette)\n if "transparency" in info:\n try:\n info["transparency"] = optimized_palette_colors.index(\n info["transparency"]\n )\n except ValueError:\n del info["transparency"]\n return im\n\n assert im.palette is not None\n im.palette.palette = source_palette\n return im\n\n\ndef _write_single_frame(\n im: Image.Image,\n fp: IO[bytes],\n palette: _Palette | None,\n) -> None:\n im_out = _normalize_mode(im)\n for k, v in im_out.info.items():\n if isinstance(k, str):\n im.encoderinfo.setdefault(k, v)\n im_out = _normalize_palette(im_out, palette, im.encoderinfo)\n\n for s in _get_global_header(im_out, im.encoderinfo):\n fp.write(s)\n\n # local image header\n flags = 0\n if get_interlace(im):\n flags = flags | 64\n _write_local_header(fp, im, (0, 0), flags)\n\n im_out.encoderconfig = (8, get_interlace(im))\n ImageFile._save(\n im_out, fp, [ImageFile._Tile("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])]\n )\n\n fp.write(b"\0") # end of image data\n\n\ndef _getbbox(\n base_im: Image.Image, im_frame: Image.Image\n) -> tuple[Image.Image, tuple[int, int, int, int] | None]:\n palette_bytes = [\n bytes(im.palette.palette) if im.palette else b"" for im in (base_im, im_frame)\n ]\n if palette_bytes[0] != palette_bytes[1]:\n im_frame = im_frame.convert("RGBA")\n base_im = base_im.convert("RGBA")\n delta = ImageChops.subtract_modulo(im_frame, base_im)\n return delta, delta.getbbox(alpha_only=False)\n\n\nclass _Frame(NamedTuple):\n im: Image.Image\n bbox: tuple[int, int, int, int] | None\n encoderinfo: dict[str, Any]\n\n\ndef _write_multiple_frames(\n im: Image.Image, fp: IO[bytes], palette: _Palette | None\n) -> bool:\n duration = im.encoderinfo.get("duration")\n disposal = im.encoderinfo.get("disposal", im.info.get("disposal"))\n\n im_frames: list[_Frame] = []\n previous_im: Image.Image | None = None\n frame_count = 
0\n background_im = None\n for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])):\n for im_frame in ImageSequence.Iterator(imSequence):\n # a copy is required here since seek can still mutate the image\n im_frame = _normalize_mode(im_frame.copy())\n if frame_count == 0:\n for k, v in im_frame.info.items():\n if k == "transparency":\n continue\n if isinstance(k, str):\n im.encoderinfo.setdefault(k, v)\n\n encoderinfo = im.encoderinfo.copy()\n if "transparency" in im_frame.info:\n encoderinfo.setdefault("transparency", im_frame.info["transparency"])\n im_frame = _normalize_palette(im_frame, palette, encoderinfo)\n if isinstance(duration, (list, tuple)):\n encoderinfo["duration"] = duration[frame_count]\n elif duration is None and "duration" in im_frame.info:\n encoderinfo["duration"] = im_frame.info["duration"]\n if isinstance(disposal, (list, tuple)):\n encoderinfo["disposal"] = disposal[frame_count]\n frame_count += 1\n\n diff_frame = None\n if im_frames and previous_im:\n # delta frame\n delta, bbox = _getbbox(previous_im, im_frame)\n if not bbox:\n # This frame is identical to the previous frame\n if encoderinfo.get("duration"):\n im_frames[-1].encoderinfo["duration"] += encoderinfo["duration"]\n continue\n if im_frames[-1].encoderinfo.get("disposal") == 2:\n # To appear correctly in viewers using a convention,\n # only consider transparency, and not background color\n color = im.encoderinfo.get(\n "transparency", im.info.get("transparency")\n )\n if color is not None:\n if background_im is None:\n background = _get_background(im_frame, color)\n background_im = Image.new("P", im_frame.size, background)\n first_palette = im_frames[0].im.palette\n assert first_palette is not None\n background_im.putpalette(first_palette, first_palette.mode)\n bbox = _getbbox(background_im, im_frame)[1]\n else:\n bbox = (0, 0) + im_frame.size\n elif encoderinfo.get("optimize") and im_frame.mode != "1":\n if "transparency" not in encoderinfo:\n assert 
im_frame.palette is not None\n try:\n encoderinfo["transparency"] = (\n im_frame.palette._new_color_index(im_frame)\n )\n except ValueError:\n pass\n if "transparency" in encoderinfo:\n # When the delta is zero, fill the image with transparency\n diff_frame = im_frame.copy()\n fill = Image.new("P", delta.size, encoderinfo["transparency"])\n if delta.mode == "RGBA":\n r, g, b, a = delta.split()\n mask = ImageMath.lambda_eval(\n lambda args: args["convert"](\n args["max"](\n args["max"](\n args["max"](args["r"], args["g"]), args["b"]\n ),\n args["a"],\n )\n * 255,\n "1",\n ),\n r=r,\n g=g,\n b=b,\n a=a,\n )\n else:\n if delta.mode == "P":\n # Convert to L without considering palette\n delta_l = Image.new("L", delta.size)\n delta_l.putdata(delta.getdata())\n delta = delta_l\n mask = ImageMath.lambda_eval(\n lambda args: args["convert"](args["im"] * 255, "1"),\n im=delta,\n )\n diff_frame.paste(fill, mask=ImageOps.invert(mask))\n else:\n bbox = None\n previous_im = im_frame\n im_frames.append(_Frame(diff_frame or im_frame, bbox, encoderinfo))\n\n if len(im_frames) == 1:\n if "duration" in im.encoderinfo:\n # Since multiple frames will not be written, use the combined duration\n im.encoderinfo["duration"] = im_frames[0].encoderinfo["duration"]\n return False\n\n for frame_data in im_frames:\n im_frame = frame_data.im\n if not frame_data.bbox:\n # global header\n for s in _get_global_header(im_frame, frame_data.encoderinfo):\n fp.write(s)\n offset = (0, 0)\n else:\n # compress difference\n if not palette:\n frame_data.encoderinfo["include_color_table"] = True\n\n if frame_data.bbox != (0, 0) + im_frame.size:\n im_frame = im_frame.crop(frame_data.bbox)\n offset = frame_data.bbox[:2]\n _write_frame_data(fp, im_frame, offset, frame_data.encoderinfo)\n return True\n\n\ndef _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n _save(im, fp, filename, save_all=True)\n\n\ndef _save(\n im: Image.Image, fp: IO[bytes], filename: str | bytes, save_all: bool = 
False\n) -> None:\n # header\n if "palette" in im.encoderinfo or "palette" in im.info:\n palette = im.encoderinfo.get("palette", im.info.get("palette"))\n else:\n palette = None\n im.encoderinfo.setdefault("optimize", True)\n\n if not save_all or not _write_multiple_frames(im, fp, palette):\n _write_single_frame(im, fp, palette)\n\n fp.write(b";") # end of file\n\n if hasattr(fp, "flush"):\n fp.flush()\n\n\ndef get_interlace(im: Image.Image) -> int:\n interlace = im.encoderinfo.get("interlace", 1)\n\n # workaround for @PIL153\n if min(im.size) < 16:\n interlace = 0\n\n return interlace\n\n\ndef _write_local_header(\n fp: IO[bytes], im: Image.Image, offset: tuple[int, int], flags: int\n) -> None:\n try:\n transparency = im.encoderinfo["transparency"]\n except KeyError:\n transparency = None\n\n if "duration" in im.encoderinfo:\n duration = int(im.encoderinfo["duration"] / 10)\n else:\n duration = 0\n\n disposal = int(im.encoderinfo.get("disposal", 0))\n\n if transparency is not None or duration != 0 or disposal:\n packed_flag = 1 if transparency is not None else 0\n packed_flag |= disposal << 2\n\n fp.write(\n b"!"\n + o8(249) # extension intro\n + o8(4) # length\n + o8(packed_flag) # packed fields\n + o16(duration) # duration\n + o8(transparency or 0) # transparency index\n + o8(0)\n )\n\n include_color_table = im.encoderinfo.get("include_color_table")\n if include_color_table:\n palette_bytes = _get_palette_bytes(im)\n color_table_size = _get_color_table_size(palette_bytes)\n if color_table_size:\n flags = flags | 128 # local color table flag\n flags = flags | color_table_size\n\n fp.write(\n b","\n + o16(offset[0]) # offset\n + o16(offset[1])\n + o16(im.size[0]) # size\n + o16(im.size[1])\n + o8(flags) # flags\n )\n if include_color_table and color_table_size:\n fp.write(_get_header_palette(palette_bytes))\n fp.write(o8(8)) # bits\n\n\ndef _save_netpbm(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n # Unused by default.\n # To use, uncomment the 
register_save call at the end of the file.\n #\n # If you need real GIF compression and/or RGB quantization, you\n # can use the external NETPBM/PBMPLUS utilities. See comments\n # below for information on how to enable this.\n tempfile = im._dump()\n\n try:\n with open(filename, "wb") as f:\n if im.mode != "RGB":\n subprocess.check_call(\n ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL\n )\n else:\n # Pipe ppmquant output into ppmtogif\n # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)\n quant_cmd = ["ppmquant", "256", tempfile]\n togif_cmd = ["ppmtogif"]\n quant_proc = subprocess.Popen(\n quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL\n )\n togif_proc = subprocess.Popen(\n togif_cmd,\n stdin=quant_proc.stdout,\n stdout=f,\n stderr=subprocess.DEVNULL,\n )\n\n # Allow ppmquant to receive SIGPIPE if ppmtogif exits\n assert quant_proc.stdout is not None\n quant_proc.stdout.close()\n\n retcode = quant_proc.wait()\n if retcode:\n raise subprocess.CalledProcessError(retcode, quant_cmd)\n\n retcode = togif_proc.wait()\n if retcode:\n raise subprocess.CalledProcessError(retcode, togif_cmd)\n finally:\n try:\n os.unlink(tempfile)\n except OSError:\n pass\n\n\n# Force optimization so that we can test performance against\n# cases where it took lots of memory and time previously.\n_FORCE_OPTIMIZE = False\n\n\ndef _get_optimize(im: Image.Image, info: dict[str, Any]) -> list[int] | None:\n """\n Palette optimization is a potentially expensive operation.\n\n This function determines if the palette should be optimized using\n some heuristics, then returns the list of palette entries in use.\n\n :param im: Image object\n :param info: encoderinfo\n :returns: list of indexes of palette entries in use, or None\n """\n if im.mode in ("P", "L") and info and info.get("optimize"):\n # Potentially expensive operation.\n\n # The palette saves 3 bytes per color not used, but palette\n # lengths are restricted to 3*(2**N) bytes. 
Max saving would\n # be 768 -> 6 bytes if we went all the way down to 2 colors.\n # * If we're over 128 colors, we can't save any space.\n # * If there aren't any holes, it's not worth collapsing.\n # * If we have a 'large' image, the palette is in the noise.\n\n # create the new palette if not every color is used\n optimise = _FORCE_OPTIMIZE or im.mode == "L"\n if optimise or im.width * im.height < 512 * 512:\n # check which colors are used\n used_palette_colors = []\n for i, count in enumerate(im.histogram()):\n if count:\n used_palette_colors.append(i)\n\n if optimise or max(used_palette_colors) >= len(used_palette_colors):\n return used_palette_colors\n\n assert im.palette is not None\n num_palette_colors = len(im.palette.palette) // Image.getmodebands(\n im.palette.mode\n )\n current_palette_size = 1 << (num_palette_colors - 1).bit_length()\n if (\n # check that the palette would become smaller when saved\n len(used_palette_colors) <= current_palette_size // 2\n # check that the palette is not already the smallest possible size\n and current_palette_size > 2\n ):\n return used_palette_colors\n return None\n\n\ndef _get_color_table_size(palette_bytes: bytes) -> int:\n # calculate the palette size for the header\n if not palette_bytes:\n return 0\n elif len(palette_bytes) < 9:\n return 1\n else:\n return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1\n\n\ndef _get_header_palette(palette_bytes: bytes) -> bytes:\n """\n Returns the palette, null padded to the next power of 2 (*3) bytes\n suitable for direct inclusion in the GIF header\n\n :param palette_bytes: Unpadded palette bytes, in RGBRGB form\n :returns: Null padded palette\n """\n color_table_size = _get_color_table_size(palette_bytes)\n\n # add the missing amount of bytes\n # the palette has to be 2<<n in size\n actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3\n if actual_target_size_diff > 0:\n palette_bytes += o8(0) * 3 * actual_target_size_diff\n return 
palette_bytes\n\n\ndef _get_palette_bytes(im: Image.Image) -> bytes:\n """\n Gets the palette for inclusion in the gif header\n\n :param im: Image object\n :returns: Bytes, len<=768 suitable for inclusion in gif header\n """\n if not im.palette:\n return b""\n\n palette = bytes(im.palette.palette)\n if im.palette.mode == "RGBA":\n palette = b"".join(palette[i * 4 : i * 4 + 3] for i in range(len(palette) // 3))\n return palette\n\n\ndef _get_background(\n im: Image.Image,\n info_background: int | tuple[int, int, int] | tuple[int, int, int, int] | None,\n) -> int:\n background = 0\n if info_background:\n if isinstance(info_background, tuple):\n # WebPImagePlugin stores an RGBA value in info["background"]\n # So it must be converted to the same format as GifImagePlugin's\n # info["background"] - a global color table index\n assert im.palette is not None\n try:\n background = im.palette.getcolor(info_background, im)\n except ValueError as e:\n if str(e) not in (\n # If all 256 colors are in use,\n # then there is no need for the background color\n "cannot allocate more than 256 colors",\n # Ignore non-opaque WebP background\n "cannot add non-opaque RGBA color to RGB palette",\n ):\n raise\n else:\n background = info_background\n return background\n\n\ndef _get_global_header(im: Image.Image, info: dict[str, Any]) -> list[bytes]:\n """Return a list of strings representing a GIF header"""\n\n # Header Block\n # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp\n\n version = b"87a"\n if im.info.get("version") == b"89a" or (\n info\n and (\n "transparency" in info\n or info.get("loop") is not None\n or info.get("duration")\n or info.get("comment")\n )\n ):\n version = b"89a"\n\n background = _get_background(im, info.get("background"))\n\n palette_bytes = _get_palette_bytes(im)\n color_table_size = _get_color_table_size(palette_bytes)\n\n header = [\n b"GIF" # signature\n + version # version\n + o16(im.size[0]) # canvas width\n + o16(im.size[1]), # canvas 
height\n # Logical Screen Descriptor\n # size of global color table + global color table flag\n o8(color_table_size + 128), # packed fields\n # background + reserved/aspect\n o8(background) + o8(0),\n # Global Color Table\n _get_header_palette(palette_bytes),\n ]\n if info.get("loop") is not None:\n header.append(\n b"!"\n + o8(255) # extension intro\n + o8(11)\n + b"NETSCAPE2.0"\n + o8(3)\n + o8(1)\n + o16(info["loop"]) # number of loops\n + o8(0)\n )\n if info.get("comment"):\n comment_block = b"!" + o8(254) # extension intro\n\n comment = info["comment"]\n if isinstance(comment, str):\n comment = comment.encode()\n for i in range(0, len(comment), 255):\n subblock = comment[i : i + 255]\n comment_block += o8(len(subblock)) + subblock\n\n comment_block += o8(0)\n header.append(comment_block)\n return header\n\n\ndef _write_frame_data(\n fp: IO[bytes],\n im_frame: Image.Image,\n offset: tuple[int, int],\n params: dict[str, Any],\n) -> None:\n try:\n im_frame.encoderinfo = params\n\n # local image header\n _write_local_header(fp, im_frame, offset, 0)\n\n ImageFile._save(\n im_frame,\n fp,\n [ImageFile._Tile("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])],\n )\n\n fp.write(b"\0") # end of image data\n finally:\n del im_frame.encoderinfo\n\n\n# --------------------------------------------------------------------\n# Legacy GIF utilities\n\n\ndef getheader(\n im: Image.Image, palette: _Palette | None = None, info: dict[str, Any] | None = None\n) -> tuple[list[bytes], list[int] | None]:\n """\n Legacy Method to get Gif data from image.\n\n Warning:: May modify image data.\n\n :param im: Image object\n :param palette: bytes object containing the source palette, or ....\n :param info: encoderinfo\n :returns: tuple of(list of header items, optimized palette)\n\n """\n if info is None:\n info = {}\n\n used_palette_colors = _get_optimize(im, info)\n\n if "background" not in info and "background" in im.info:\n info["background"] = im.info["background"]\n\n im_mod = 
_normalize_palette(im, palette, info)\n im.palette = im_mod.palette\n im.im = im_mod.im\n header = _get_global_header(im, info)\n\n return header, used_palette_colors\n\n\ndef getdata(\n im: Image.Image, offset: tuple[int, int] = (0, 0), **params: Any\n) -> list[bytes]:\n """\n Legacy Method\n\n Return a list of strings representing this image.\n The first string is a local image header, the rest contains\n encoded image data.\n\n To specify duration, add the time in milliseconds,\n e.g. ``getdata(im_frame, duration=1000)``\n\n :param im: Image object\n :param offset: Tuple of (x, y) pixels. Defaults to (0, 0)\n :param \\**params: e.g. duration or other encoder info parameters\n :returns: List of bytes containing GIF encoded frame data\n\n """\n from io import BytesIO\n\n class Collector(BytesIO):\n data = []\n\n def write(self, data: Buffer) -> int:\n self.data.append(data)\n return len(data)\n\n im.load() # make sure raster data is available\n\n fp = Collector()\n\n _write_frame_data(fp, im, offset, params)\n\n return fp.data\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(GifImageFile.format, GifImageFile, _accept)\nImage.register_save(GifImageFile.format, _save)\nImage.register_save_all(GifImageFile.format, _save_all)\nImage.register_extension(GifImageFile.format, ".gif")\nImage.register_mime(GifImageFile.format, "image/gif")\n\n#\n# Uncomment the following line if you wish to use NETPBM/PBMPLUS\n# instead of the built-in "uncompressed" GIF encoder\n\n# Image.register_save(GifImageFile.format, _save_netpbm)\n | .venv\Lib\site-packages\PIL\GifImagePlugin.py | GifImagePlugin.py | Python | 43,414 | 0.95 | 0.198681 | 0.1261 | vue-tools | 167 | 2025-06-30T04:30:01.883927 | Apache-2.0 | false | ab9ec952684799c22309da041dbeb038 |
#\n# Python Imaging Library\n# $Id$\n#\n# stuff to read (and render) GIMP gradient files\n#\n# History:\n# 97-08-23 fl Created\n#\n# Copyright (c) Secret Labs AB 1997.\n# Copyright (c) Fredrik Lundh 1997.\n#\n# See the README file for information on usage and redistribution.\n#\n\n"""\nStuff to translate curve segments to palette values (derived from\nthe corresponding code in GIMP, written by Federico Mena Quintero.\nSee the GIMP distribution for more information.)\n"""\nfrom __future__ import annotations\n\nfrom math import log, pi, sin, sqrt\nfrom typing import IO, Callable\n\nfrom ._binary import o8\n\nEPSILON = 1e-10\n"""""" # Enable auto-doc for data member\n\n\ndef linear(middle: float, pos: float) -> float:\n if pos <= middle:\n if middle < EPSILON:\n return 0.0\n else:\n return 0.5 * pos / middle\n else:\n pos = pos - middle\n middle = 1.0 - middle\n if middle < EPSILON:\n return 1.0\n else:\n return 0.5 + 0.5 * pos / middle\n\n\ndef curved(middle: float, pos: float) -> float:\n return pos ** (log(0.5) / log(max(middle, EPSILON)))\n\n\ndef sine(middle: float, pos: float) -> float:\n return (sin((-pi / 2.0) + pi * linear(middle, pos)) + 1.0) / 2.0\n\n\ndef sphere_increasing(middle: float, pos: float) -> float:\n return sqrt(1.0 - (linear(middle, pos) - 1.0) ** 2)\n\n\ndef sphere_decreasing(middle: float, pos: float) -> float:\n return 1.0 - sqrt(1.0 - linear(middle, pos) ** 2)\n\n\nSEGMENTS = [linear, curved, sine, sphere_increasing, sphere_decreasing]\n"""""" # Enable auto-doc for data member\n\n\nclass GradientFile:\n gradient: (\n list[\n tuple[\n float,\n float,\n float,\n list[float],\n list[float],\n Callable[[float, float], float],\n ]\n ]\n | None\n ) = None\n\n def getpalette(self, entries: int = 256) -> tuple[bytes, str]:\n assert self.gradient is not None\n palette = []\n\n ix = 0\n x0, x1, xm, rgb0, rgb1, segment = self.gradient[ix]\n\n for i in range(entries):\n x = i / (entries - 1)\n\n while x1 < x:\n ix += 1\n x0, x1, xm, rgb0, rgb1, segment 
= self.gradient[ix]\n\n w = x1 - x0\n\n if w < EPSILON:\n scale = segment(0.5, 0.5)\n else:\n scale = segment((xm - x0) / w, (x - x0) / w)\n\n # expand to RGBA\n r = o8(int(255 * ((rgb1[0] - rgb0[0]) * scale + rgb0[0]) + 0.5))\n g = o8(int(255 * ((rgb1[1] - rgb0[1]) * scale + rgb0[1]) + 0.5))\n b = o8(int(255 * ((rgb1[2] - rgb0[2]) * scale + rgb0[2]) + 0.5))\n a = o8(int(255 * ((rgb1[3] - rgb0[3]) * scale + rgb0[3]) + 0.5))\n\n # add to palette\n palette.append(r + g + b + a)\n\n return b"".join(palette), "RGBA"\n\n\nclass GimpGradientFile(GradientFile):\n """File handler for GIMP's gradient format."""\n\n def __init__(self, fp: IO[bytes]) -> None:\n if not fp.readline().startswith(b"GIMP Gradient"):\n msg = "not a GIMP gradient file"\n raise SyntaxError(msg)\n\n line = fp.readline()\n\n # GIMP 1.2 gradient files don't contain a name, but GIMP 1.3 files do\n if line.startswith(b"Name: "):\n line = fp.readline().strip()\n\n count = int(line)\n\n self.gradient = []\n\n for i in range(count):\n s = fp.readline().split()\n w = [float(x) for x in s[:11]]\n\n x0, x1 = w[0], w[2]\n xm = w[1]\n rgb0 = w[3:7]\n rgb1 = w[7:11]\n\n segment = SEGMENTS[int(s[11])]\n cspace = int(s[12])\n\n if cspace != 0:\n msg = "cannot handle HSV colour space"\n raise OSError(msg)\n\n self.gradient.append((x0, x1, xm, rgb0, rgb1, segment))\n | .venv\Lib\site-packages\PIL\GimpGradientFile.py | GimpGradientFile.py | Python | 4,055 | 0.95 | 0.167785 | 0.154545 | vue-tools | 450 | 2023-08-30T13:58:22.973588 | Apache-2.0 | false | c94966f8e59549267165ae8ab088e36d |
#\n# Python Imaging Library\n# $Id$\n#\n# stuff to read GIMP palette files\n#\n# History:\n# 1997-08-23 fl Created\n# 2004-09-07 fl Support GIMP 2.0 palette files.\n#\n# Copyright (c) Secret Labs AB 1997-2004. All rights reserved.\n# Copyright (c) Fredrik Lundh 1997-2004.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport re\nfrom io import BytesIO\nfrom typing import IO\n\n\nclass GimpPaletteFile:\n """File handler for GIMP's palette format."""\n\n rawmode = "RGB"\n\n def _read(self, fp: IO[bytes], limit: bool = True) -> None:\n if not fp.readline().startswith(b"GIMP Palette"):\n msg = "not a GIMP palette file"\n raise SyntaxError(msg)\n\n palette: list[int] = []\n i = 0\n while True:\n if limit and i == 256 + 3:\n break\n\n i += 1\n s = fp.readline()\n if not s:\n break\n\n # skip fields and comment lines\n if re.match(rb"\w+:|#", s):\n continue\n if limit and len(s) > 100:\n msg = "bad palette file"\n raise SyntaxError(msg)\n\n v = s.split(maxsplit=3)\n if len(v) < 3:\n msg = "bad palette entry"\n raise ValueError(msg)\n\n palette += (int(v[i]) for i in range(3))\n if limit and len(palette) == 768:\n break\n\n self.palette = bytes(palette)\n\n def __init__(self, fp: IO[bytes]) -> None:\n self._read(fp)\n\n @classmethod\n def frombytes(cls, data: bytes) -> GimpPaletteFile:\n self = cls.__new__(cls)\n self._read(BytesIO(data), False)\n return self\n\n def getpalette(self) -> tuple[bytes, str]:\n return self.palette, self.rawmode\n | .venv\Lib\site-packages\PIL\GimpPaletteFile.py | GimpPaletteFile.py | Python | 1,887 | 0.95 | 0.222222 | 0.275862 | python-kit | 810 | 2025-06-16T02:03:49.668087 | Apache-2.0 | false | 82d983a3cf487e459f3d606bab96336b |
#\n# The Python Imaging Library\n# $Id$\n#\n# GRIB stub adapter\n#\n# Copyright (c) 1996-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\nfrom typing import IO\n\nfrom . import Image, ImageFile\n\n_handler = None\n\n\ndef register_handler(handler: ImageFile.StubHandler | None) -> None:\n """\n Install application-specific GRIB image handler.\n\n :param handler: Handler object.\n """\n global _handler\n _handler = handler\n\n\n# --------------------------------------------------------------------\n# Image adapter\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"GRIB") and prefix[7] == 1\n\n\nclass GribStubImageFile(ImageFile.StubImageFile):\n format = "GRIB"\n format_description = "GRIB"\n\n def _open(self) -> None:\n if not _accept(self.fp.read(8)):\n msg = "Not a GRIB file"\n raise SyntaxError(msg)\n\n self.fp.seek(-8, os.SEEK_CUR)\n\n # make something up\n self._mode = "F"\n self._size = 1, 1\n\n loader = self._load()\n if loader:\n loader.open(self)\n\n def _load(self) -> ImageFile.StubHandler | None:\n return _handler\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n if _handler is None or not hasattr(_handler, "save"):\n msg = "GRIB save handler not installed"\n raise OSError(msg)\n _handler.save(im, fp, filename)\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(GribStubImageFile.format, GribStubImageFile, _accept)\nImage.register_save(GribStubImageFile.format, _save)\n\nImage.register_extension(GribStubImageFile.format, ".grib")\n | .venv\Lib\site-packages\PIL\GribStubImagePlugin.py | GribStubImagePlugin.py | Python | 1,813 | 0.95 | 0.133333 | 0.288462 | python-kit | 626 | 2023-11-24T03:28:47.093623 | GPL-3.0 | false | bd29a93402812d739868e2b7de15edf5 |
#\n# The Python Imaging Library\n# $Id$\n#\n# HDF5 stub adapter\n#\n# Copyright (c) 2000-2003 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport os\nfrom typing import IO\n\nfrom . import Image, ImageFile\n\n_handler = None\n\n\ndef register_handler(handler: ImageFile.StubHandler | None) -> None:\n """\n Install application-specific HDF5 image handler.\n\n :param handler: Handler object.\n """\n global _handler\n _handler = handler\n\n\n# --------------------------------------------------------------------\n# Image adapter\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(b"\x89HDF\r\n\x1a\n")\n\n\nclass HDF5StubImageFile(ImageFile.StubImageFile):\n format = "HDF5"\n format_description = "HDF5"\n\n def _open(self) -> None:\n if not _accept(self.fp.read(8)):\n msg = "Not an HDF file"\n raise SyntaxError(msg)\n\n self.fp.seek(-8, os.SEEK_CUR)\n\n # make something up\n self._mode = "F"\n self._size = 1, 1\n\n loader = self._load()\n if loader:\n loader.open(self)\n\n def _load(self) -> ImageFile.StubHandler | None:\n return _handler\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n if _handler is None or not hasattr(_handler, "save"):\n msg = "HDF5 save handler not installed"\n raise OSError(msg)\n _handler.save(im, fp, filename)\n\n\n# --------------------------------------------------------------------\n# Registry\n\nImage.register_open(HDF5StubImageFile.format, HDF5StubImageFile, _accept)\nImage.register_save(HDF5StubImageFile.format, _save)\n\nImage.register_extensions(HDF5StubImageFile.format, [".h5", ".hdf"])\n | .venv\Lib\site-packages\PIL\Hdf5StubImagePlugin.py | Hdf5StubImagePlugin.py | Python | 1,816 | 0.95 | 0.133333 | 0.288462 | python-kit | 867 | 2024-03-24T09:36:37.606763 | MIT | false | e5b8998b5e1895ad1f68df86949cff7e |
#\n# The Python Imaging Library.\n# $Id$\n#\n# macOS icns file decoder, based on icns.py by Bob Ippolito.\n#\n# history:\n# 2004-10-09 fl Turned into a PIL plugin; removed 2.3 dependencies.\n# 2020-04-04 Allow saving on all operating systems.\n#\n# Copyright (c) 2004 by Bob Ippolito.\n# Copyright (c) 2004 by Secret Labs.\n# Copyright (c) 2004 by Fredrik Lundh.\n# Copyright (c) 2014 by Alastair Houghton.\n# Copyright (c) 2020 by Pan Jing.\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport io\nimport os\nimport struct\nimport sys\nfrom typing import IO\n\nfrom . import Image, ImageFile, PngImagePlugin, features\nfrom ._deprecate import deprecate\n\nenable_jpeg2k = features.check_codec("jpg_2000")\nif enable_jpeg2k:\n from . import Jpeg2KImagePlugin\n\nMAGIC = b"icns"\nHEADERSIZE = 8\n\n\ndef nextheader(fobj: IO[bytes]) -> tuple[bytes, int]:\n return struct.unpack(">4sI", fobj.read(HEADERSIZE))\n\n\ndef read_32t(\n fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int]\n) -> dict[str, Image.Image]:\n # The 128x128 icon seems to have an extra header for some reason.\n (start, length) = start_length\n fobj.seek(start)\n sig = fobj.read(4)\n if sig != b"\x00\x00\x00\x00":\n msg = "Unknown signature, expecting 0x00000000"\n raise SyntaxError(msg)\n return read_32(fobj, (start + 4, length - 4), size)\n\n\ndef read_32(\n fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int]\n) -> dict[str, Image.Image]:\n """\n Read a 32bit RGB icon resource. 
Seems to be either uncompressed or\n an RLE packbits-like scheme.\n """\n (start, length) = start_length\n fobj.seek(start)\n pixel_size = (size[0] * size[2], size[1] * size[2])\n sizesq = pixel_size[0] * pixel_size[1]\n if length == sizesq * 3:\n # uncompressed ("RGBRGBGB")\n indata = fobj.read(length)\n im = Image.frombuffer("RGB", pixel_size, indata, "raw", "RGB", 0, 1)\n else:\n # decode image\n im = Image.new("RGB", pixel_size, None)\n for band_ix in range(3):\n data = []\n bytesleft = sizesq\n while bytesleft > 0:\n byte = fobj.read(1)\n if not byte:\n break\n byte_int = byte[0]\n if byte_int & 0x80:\n blocksize = byte_int - 125\n byte = fobj.read(1)\n for i in range(blocksize):\n data.append(byte)\n else:\n blocksize = byte_int + 1\n data.append(fobj.read(blocksize))\n bytesleft -= blocksize\n if bytesleft <= 0:\n break\n if bytesleft != 0:\n msg = f"Error reading channel [{repr(bytesleft)} left]"\n raise SyntaxError(msg)\n band = Image.frombuffer("L", pixel_size, b"".join(data), "raw", "L", 0, 1)\n im.im.putband(band.im, band_ix)\n return {"RGB": im}\n\n\ndef read_mk(\n fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int]\n) -> dict[str, Image.Image]:\n # Alpha masks seem to be uncompressed\n start = start_length[0]\n fobj.seek(start)\n pixel_size = (size[0] * size[2], size[1] * size[2])\n sizesq = pixel_size[0] * pixel_size[1]\n band = Image.frombuffer("L", pixel_size, fobj.read(sizesq), "raw", "L", 0, 1)\n return {"A": band}\n\n\ndef read_png_or_jpeg2000(\n fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int]\n) -> dict[str, Image.Image]:\n (start, length) = start_length\n fobj.seek(start)\n sig = fobj.read(12)\n\n im: Image.Image\n if sig.startswith(b"\x89PNG\x0d\x0a\x1a\x0a"):\n fobj.seek(start)\n im = PngImagePlugin.PngImageFile(fobj)\n Image._decompression_bomb_check(im.size)\n return {"RGBA": im}\n elif (\n sig.startswith((b"\xff\x4f\xff\x51", b"\x0d\x0a\x87\x0a"))\n or sig == b"\x00\x00\x00\x0cjP 
\x0d\x0a\x87\x0a"\n ):\n if not enable_jpeg2k:\n msg = (\n "Unsupported icon subimage format (rebuild PIL "\n "with JPEG 2000 support to fix this)"\n )\n raise ValueError(msg)\n # j2k, jpc or j2c\n fobj.seek(start)\n jp2kstream = fobj.read(length)\n f = io.BytesIO(jp2kstream)\n im = Jpeg2KImagePlugin.Jpeg2KImageFile(f)\n Image._decompression_bomb_check(im.size)\n if im.mode != "RGBA":\n im = im.convert("RGBA")\n return {"RGBA": im}\n else:\n msg = "Unsupported icon subimage format"\n raise ValueError(msg)\n\n\nclass IcnsFile:\n SIZES = {\n (512, 512, 2): [(b"ic10", read_png_or_jpeg2000)],\n (512, 512, 1): [(b"ic09", read_png_or_jpeg2000)],\n (256, 256, 2): [(b"ic14", read_png_or_jpeg2000)],\n (256, 256, 1): [(b"ic08", read_png_or_jpeg2000)],\n (128, 128, 2): [(b"ic13", read_png_or_jpeg2000)],\n (128, 128, 1): [\n (b"ic07", read_png_or_jpeg2000),\n (b"it32", read_32t),\n (b"t8mk", read_mk),\n ],\n (64, 64, 1): [(b"icp6", read_png_or_jpeg2000)],\n (32, 32, 2): [(b"ic12", read_png_or_jpeg2000)],\n (48, 48, 1): [(b"ih32", read_32), (b"h8mk", read_mk)],\n (32, 32, 1): [\n (b"icp5", read_png_or_jpeg2000),\n (b"il32", read_32),\n (b"l8mk", read_mk),\n ],\n (16, 16, 2): [(b"ic11", read_png_or_jpeg2000)],\n (16, 16, 1): [\n (b"icp4", read_png_or_jpeg2000),\n (b"is32", read_32),\n (b"s8mk", read_mk),\n ],\n }\n\n def __init__(self, fobj: IO[bytes]) -> None:\n """\n fobj is a file-like object as an icns resource\n """\n # signature : (start, length)\n self.dct = {}\n self.fobj = fobj\n sig, filesize = nextheader(fobj)\n if not _accept(sig):\n msg = "not an icns file"\n raise SyntaxError(msg)\n i = HEADERSIZE\n while i < filesize:\n sig, blocksize = nextheader(fobj)\n if blocksize <= 0:\n msg = "invalid block header"\n raise SyntaxError(msg)\n i += HEADERSIZE\n blocksize -= HEADERSIZE\n self.dct[sig] = (i, blocksize)\n fobj.seek(blocksize, io.SEEK_CUR)\n i += blocksize\n\n def itersizes(self) -> list[tuple[int, int, int]]:\n sizes = []\n for size, fmts in self.SIZES.items():\n 
for fmt, reader in fmts:\n if fmt in self.dct:\n sizes.append(size)\n break\n return sizes\n\n def bestsize(self) -> tuple[int, int, int]:\n sizes = self.itersizes()\n if not sizes:\n msg = "No 32bit icon resources found"\n raise SyntaxError(msg)\n return max(sizes)\n\n def dataforsize(self, size: tuple[int, int, int]) -> dict[str, Image.Image]:\n """\n Get an icon resource as {channel: array}. Note that\n the arrays are bottom-up like windows bitmaps and will likely\n need to be flipped or transposed in some way.\n """\n dct = {}\n for code, reader in self.SIZES[size]:\n desc = self.dct.get(code)\n if desc is not None:\n dct.update(reader(self.fobj, desc, size))\n return dct\n\n def getimage(\n self, size: tuple[int, int] | tuple[int, int, int] | None = None\n ) -> Image.Image:\n if size is None:\n size = self.bestsize()\n elif len(size) == 2:\n size = (size[0], size[1], 1)\n channels = self.dataforsize(size)\n\n im = channels.get("RGBA")\n if im:\n return im\n\n im = channels["RGB"].copy()\n try:\n im.putalpha(channels["A"])\n except KeyError:\n pass\n return im\n\n\n##\n# Image plugin for Mac OS icons.\n\n\nclass IcnsImageFile(ImageFile.ImageFile):\n """\n PIL image support for Mac OS .icns files.\n Chooses the best resolution, but will possibly load\n a different size image if you mutate the size attribute\n before calling 'load'.\n\n The info dictionary has a key 'sizes' that is a list\n of sizes that the icns file has.\n """\n\n format = "ICNS"\n format_description = "Mac OS icns resource"\n\n def _open(self) -> None:\n self.icns = IcnsFile(self.fp)\n self._mode = "RGBA"\n self.info["sizes"] = self.icns.itersizes()\n self.best_size = self.icns.bestsize()\n self.size = (\n self.best_size[0] * self.best_size[2],\n self.best_size[1] * self.best_size[2],\n )\n\n @property # type: ignore[override]\n def size(self) -> tuple[int, int] | tuple[int, int, int]:\n return self._size\n\n @size.setter\n def size(self, value: tuple[int, int] | tuple[int, int, int]) -> 
None:\n if len(value) == 3:\n deprecate("Setting size to (width, height, scale)", 12, "load(scale)")\n if value in self.info["sizes"]:\n self._size = value # type: ignore[assignment]\n return\n else:\n # Check that a matching size exists,\n # or that there is a scale that would create a size that matches\n for size in self.info["sizes"]:\n simple_size = size[0] * size[2], size[1] * size[2]\n scale = simple_size[0] // value[0]\n if simple_size[1] / value[1] == scale:\n self._size = value\n return\n msg = "This is not one of the allowed sizes of this image"\n raise ValueError(msg)\n\n def load(self, scale: int | None = None) -> Image.core.PixelAccess | None:\n if scale is not None or len(self.size) == 3:\n if scale is None and len(self.size) == 3:\n scale = self.size[2]\n assert scale is not None\n width, height = self.size[:2]\n self.size = width * scale, height * scale\n self.best_size = width, height, scale\n\n px = Image.Image.load(self)\n if self._im is not None and self.im.size == self.size:\n # Already loaded\n return px\n self.load_prepare()\n # This is likely NOT the best way to do it, but whatever.\n im = self.icns.getimage(self.best_size)\n\n # If this is a PNG or JPEG 2000, it won't be loaded yet\n px = im.load()\n\n self.im = im.im\n self._mode = im.mode\n self.size = im.size\n\n return px\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n """\n Saves the image as a series of PNG files,\n that are then combined into a .icns file.\n """\n if hasattr(fp, "flush"):\n fp.flush()\n\n sizes = {\n b"ic07": 128,\n b"ic08": 256,\n b"ic09": 512,\n b"ic10": 1024,\n b"ic11": 32,\n b"ic12": 64,\n b"ic13": 256,\n b"ic14": 512,\n }\n provided_images = {im.width: im for im in im.encoderinfo.get("append_images", [])}\n size_streams = {}\n for size in set(sizes.values()):\n image = (\n provided_images[size]\n if size in provided_images\n else im.resize((size, size))\n )\n\n temp = io.BytesIO()\n image.save(temp, "png")\n size_streams[size] = 
temp.getvalue()\n\n entries = []\n for type, size in sizes.items():\n stream = size_streams[size]\n entries.append((type, HEADERSIZE + len(stream), stream))\n\n # Header\n fp.write(MAGIC)\n file_length = HEADERSIZE # Header\n file_length += HEADERSIZE + 8 * len(entries) # TOC\n file_length += sum(entry[1] for entry in entries)\n fp.write(struct.pack(">i", file_length))\n\n # TOC\n fp.write(b"TOC ")\n fp.write(struct.pack(">i", HEADERSIZE + len(entries) * HEADERSIZE))\n for entry in entries:\n fp.write(entry[0])\n fp.write(struct.pack(">i", entry[1]))\n\n # Data\n for entry in entries:\n fp.write(entry[0])\n fp.write(struct.pack(">i", entry[1]))\n fp.write(entry[2])\n\n if hasattr(fp, "flush"):\n fp.flush()\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(MAGIC)\n\n\nImage.register_open(IcnsImageFile.format, IcnsImageFile, _accept)\nImage.register_extension(IcnsImageFile.format, ".icns")\n\nImage.register_save(IcnsImageFile.format, _save)\nImage.register_mime(IcnsImageFile.format, "image/icns")\n\nif __name__ == "__main__":\n if len(sys.argv) < 2:\n print("Syntax: python3 IcnsImagePlugin.py [file]")\n sys.exit()\n\n with open(sys.argv[1], "rb") as fp:\n imf = IcnsImageFile(fp)\n for size in imf.info["sizes"]:\n width, height, scale = imf.size = size\n imf.save(f"out-{width}-{height}-{scale}.png")\n with Image.open(sys.argv[1]) as im:\n im.save("out.png")\n if sys.platform == "windows":\n os.startfile("out.png")\n | .venv\Lib\site-packages\PIL\IcnsImagePlugin.py | IcnsImagePlugin.py | Python | 13,360 | 0.95 | 0.16545 | 0.095238 | python-kit | 222 | 2024-06-26T14:54:10.206518 | BSD-3-Clause | false | c74a7688e42520493b24e31e847cdfe7 |
#\n# The Python Imaging Library.\n# $Id$\n#\n# Windows Icon support for PIL\n#\n# History:\n# 96-05-27 fl Created\n#\n# Copyright (c) Secret Labs AB 1997.\n# Copyright (c) Fredrik Lundh 1996.\n#\n# See the README file for information on usage and redistribution.\n#\n\n# This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis\n# <casadebender@gmail.com>.\n# https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki\n#\n# Icon format references:\n# * https://en.wikipedia.org/wiki/ICO_(file_format)\n# * https://msdn.microsoft.com/en-us/library/ms997538.aspx\nfrom __future__ import annotations\n\nimport warnings\nfrom io import BytesIO\nfrom math import ceil, log\nfrom typing import IO, NamedTuple\n\nfrom . import BmpImagePlugin, Image, ImageFile, PngImagePlugin\nfrom ._binary import i16le as i16\nfrom ._binary import i32le as i32\nfrom ._binary import o8\nfrom ._binary import o16le as o16\nfrom ._binary import o32le as o32\n\n#\n# --------------------------------------------------------------------\n\n_MAGIC = b"\0\0\1\0"\n\n\ndef _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:\n fp.write(_MAGIC) # (2+2)\n bmp = im.encoderinfo.get("bitmap_format") == "bmp"\n sizes = im.encoderinfo.get(\n "sizes",\n [(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (256, 256)],\n )\n frames = []\n provided_ims = [im] + im.encoderinfo.get("append_images", [])\n width, height = im.size\n for size in sorted(set(sizes)):\n if size[0] > width or size[1] > height or size[0] > 256 or size[1] > 256:\n continue\n\n for provided_im in provided_ims:\n if provided_im.size != size:\n continue\n frames.append(provided_im)\n if bmp:\n bits = BmpImagePlugin.SAVE[provided_im.mode][1]\n bits_used = [bits]\n for other_im in provided_ims:\n if other_im.size != size:\n continue\n bits = BmpImagePlugin.SAVE[other_im.mode][1]\n if bits not in bits_used:\n # Another image has been supplied for this size\n # with a different bit depth\n 
frames.append(other_im)\n bits_used.append(bits)\n break\n else:\n # TODO: invent a more convenient method for proportional scalings\n frame = provided_im.copy()\n frame.thumbnail(size, Image.Resampling.LANCZOS, reducing_gap=None)\n frames.append(frame)\n fp.write(o16(len(frames))) # idCount(2)\n offset = fp.tell() + len(frames) * 16\n for frame in frames:\n width, height = frame.size\n # 0 means 256\n fp.write(o8(width if width < 256 else 0)) # bWidth(1)\n fp.write(o8(height if height < 256 else 0)) # bHeight(1)\n\n bits, colors = BmpImagePlugin.SAVE[frame.mode][1:] if bmp else (32, 0)\n fp.write(o8(colors)) # bColorCount(1)\n fp.write(b"\0") # bReserved(1)\n fp.write(b"\0\0") # wPlanes(2)\n fp.write(o16(bits)) # wBitCount(2)\n\n image_io = BytesIO()\n if bmp:\n frame.save(image_io, "dib")\n\n if bits != 32:\n and_mask = Image.new("1", size)\n ImageFile._save(\n and_mask,\n image_io,\n [ImageFile._Tile("raw", (0, 0) + size, 0, ("1", 0, -1))],\n )\n else:\n frame.save(image_io, "png")\n image_io.seek(0)\n image_bytes = image_io.read()\n if bmp:\n image_bytes = image_bytes[:8] + o32(height * 2) + image_bytes[12:]\n bytes_len = len(image_bytes)\n fp.write(o32(bytes_len)) # dwBytesInRes(4)\n fp.write(o32(offset)) # dwImageOffset(4)\n current = fp.tell()\n fp.seek(offset)\n fp.write(image_bytes)\n offset = offset + bytes_len\n fp.seek(current)\n\n\ndef _accept(prefix: bytes) -> bool:\n return prefix.startswith(_MAGIC)\n\n\nclass IconHeader(NamedTuple):\n width: int\n height: int\n nb_color: int\n reserved: int\n planes: int\n bpp: int\n size: int\n offset: int\n dim: tuple[int, int]\n square: int\n color_depth: int\n\n\nclass IcoFile:\n def __init__(self, buf: IO[bytes]) -> None:\n """\n Parse image from file-like object containing ico file data\n """\n\n # check magic\n s = buf.read(6)\n if not _accept(s):\n msg = "not an ICO file"\n raise SyntaxError(msg)\n\n self.buf = buf\n self.entry = []\n\n # Number of items in file\n self.nb_items = i16(s, 4)\n\n # Get headers 
for each item\n for i in range(self.nb_items):\n s = buf.read(16)\n\n # See Wikipedia\n width = s[0] or 256\n height = s[1] or 256\n\n # No. of colors in image (0 if >=8bpp)\n nb_color = s[2]\n bpp = i16(s, 6)\n icon_header = IconHeader(\n width=width,\n height=height,\n nb_color=nb_color,\n reserved=s[3],\n planes=i16(s, 4),\n bpp=i16(s, 6),\n size=i32(s, 8),\n offset=i32(s, 12),\n dim=(width, height),\n square=width * height,\n # See Wikipedia notes about color depth.\n # We need this just to differ images with equal sizes\n color_depth=bpp or (nb_color != 0 and ceil(log(nb_color, 2))) or 256,\n )\n\n self.entry.append(icon_header)\n\n self.entry = sorted(self.entry, key=lambda x: x.color_depth)\n # ICO images are usually squares\n self.entry = sorted(self.entry, key=lambda x: x.square, reverse=True)\n\n def sizes(self) -> set[tuple[int, int]]:\n """\n Get a set of all available icon sizes and color depths.\n """\n return {(h.width, h.height) for h in self.entry}\n\n def getentryindex(self, size: tuple[int, int], bpp: int | bool = False) -> int:\n for i, h in enumerate(self.entry):\n if size == h.dim and (bpp is False or bpp == h.color_depth):\n return i\n return 0\n\n def getimage(self, size: tuple[int, int], bpp: int | bool = False) -> Image.Image:\n """\n Get an image from the icon\n """\n return self.frame(self.getentryindex(size, bpp))\n\n def frame(self, idx: int) -> Image.Image:\n """\n Get an image from frame idx\n """\n\n header = self.entry[idx]\n\n self.buf.seek(header.offset)\n data = self.buf.read(8)\n self.buf.seek(header.offset)\n\n im: Image.Image\n if data[:8] == PngImagePlugin._MAGIC:\n # png frame\n im = PngImagePlugin.PngImageFile(self.buf)\n Image._decompression_bomb_check(im.size)\n else:\n # XOR + AND mask bmp frame\n im = BmpImagePlugin.DibImageFile(self.buf)\n Image._decompression_bomb_check(im.size)\n\n # change tile dimension to only encompass XOR image\n im._size = (im.size[0], int(im.size[1] / 2))\n d, e, o, a = im.tile[0]\n 
im.tile[0] = ImageFile._Tile(d, (0, 0) + im.size, o, a)\n\n # figure out where AND mask image starts\n if header.bpp == 32:\n # 32-bit color depth icon image allows semitransparent areas\n # PIL's DIB format ignores transparency bits, recover them.\n # The DIB is packed in BGRX byte order where X is the alpha\n # channel.\n\n # Back up to start of bmp data\n self.buf.seek(o)\n # extract every 4th byte (eg. 3,7,11,15,...)\n alpha_bytes = self.buf.read(im.size[0] * im.size[1] * 4)[3::4]\n\n # convert to an 8bpp grayscale image\n try:\n mask = Image.frombuffer(\n "L", # 8bpp\n im.size, # (w, h)\n alpha_bytes, # source chars\n "raw", # raw decoder\n ("L", 0, -1), # 8bpp inverted, unpadded, reversed\n )\n except ValueError:\n if ImageFile.LOAD_TRUNCATED_IMAGES:\n mask = None\n else:\n raise\n else:\n # get AND image from end of bitmap\n w = im.size[0]\n if (w % 32) > 0:\n # bitmap row data is aligned to word boundaries\n w += 32 - (im.size[0] % 32)\n\n # the total mask data is\n # padded row size * height / bits per char\n\n total_bytes = int((w * im.size[1]) / 8)\n and_mask_offset = header.offset + header.size - total_bytes\n\n self.buf.seek(and_mask_offset)\n mask_data = self.buf.read(total_bytes)\n\n # convert raw data to image\n try:\n mask = Image.frombuffer(\n "1", # 1 bpp\n im.size, # (w, h)\n mask_data, # source chars\n "raw", # raw decoder\n ("1;I", int(w / 8), -1), # 1bpp inverted, padded, reversed\n )\n except ValueError:\n if ImageFile.LOAD_TRUNCATED_IMAGES:\n mask = None\n else:\n raise\n\n # now we have two images, im is XOR image and mask is AND image\n\n # apply mask image as alpha channel\n if mask:\n im = im.convert("RGBA")\n im.putalpha(mask)\n\n return im\n\n\n##\n# Image plugin for Windows Icon files.\n\n\nclass IcoImageFile(ImageFile.ImageFile):\n """\n PIL read-only image support for Microsoft Windows .ico files.\n\n By default the largest resolution image in the file will be loaded. 
This\n can be changed by altering the 'size' attribute before calling 'load'.\n\n The info dictionary has a key 'sizes' that is a list of the sizes available\n in the icon file.\n\n Handles classic, XP and Vista icon formats.\n\n When saving, PNG compression is used. Support for this was only added in\n Windows Vista. If you are unable to view the icon in Windows, convert the\n image to "RGBA" mode before saving.\n\n This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis\n <casadebender@gmail.com>.\n https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki\n """\n\n format = "ICO"\n format_description = "Windows Icon"\n\n def _open(self) -> None:\n self.ico = IcoFile(self.fp)\n self.info["sizes"] = self.ico.sizes()\n self.size = self.ico.entry[0].dim\n self.load()\n\n @property\n def size(self) -> tuple[int, int]:\n return self._size\n\n @size.setter\n def size(self, value: tuple[int, int]) -> None:\n if value not in self.info["sizes"]:\n msg = "This is not one of the allowed sizes of this image"\n raise ValueError(msg)\n self._size = value\n\n def load(self) -> Image.core.PixelAccess | None:\n if self._im is not None and self.im.size == self.size:\n # Already loaded\n return Image.Image.load(self)\n im = self.ico.getimage(self.size)\n # if tile is PNG, it won't really be loaded yet\n im.load()\n self.im = im.im\n self._mode = im.mode\n if im.palette:\n self.palette = im.palette\n if im.size != self.size:\n warnings.warn("Image was not the expected size")\n\n index = self.ico.getentryindex(self.size)\n sizes = list(self.info["sizes"])\n sizes[index] = im.size\n self.info["sizes"] = set(sizes)\n\n self.size = im.size\n return Image.Image.load(self)\n\n def load_seek(self, pos: int) -> None:\n # Flag the ImageFile.Parser so that it\n # just does all the decode at the end.\n pass\n\n\n#\n# --------------------------------------------------------------------\n\n\nImage.register_open(IcoImageFile.format, IcoImageFile, 
_accept)\nImage.register_save(IcoImageFile.format, _save)\nImage.register_extension(IcoImageFile.format, ".ico")\n\nImage.register_mime(IcoImageFile.format, "image/x-icon")\n | .venv\Lib\site-packages\PIL\IcoImagePlugin.py | IcoImagePlugin.py | Python | 12,872 | 0.95 | 0.149606 | 0.193038 | vue-tools | 915 | 2023-12-17T01:45:14.042596 | GPL-3.0 | false | 3ab0e875721f17f2ad9fa77dda50f1ca |
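The `_save` writer and `IcoImageFile` reader in the record above are wired into Pillow's registry at the bottom of the module, so the plugin is exercised through the ordinary `Image.save` / `Image.open` API rather than called directly. A minimal round-trip sketch (assuming a standard Pillow install; the `sizes` keyword and the `info["sizes"]` key are the ones defined in this module):

```python
from io import BytesIO

from PIL import Image

# Source image large enough for every requested icon size; _save()
# silently skips any size larger than the source (or larger than 256).
src = Image.new("RGBA", (64, 64), (255, 0, 0, 255))

buf = BytesIO()
src.save(buf, format="ICO", sizes=[(16, 16), (32, 32)])

buf.seek(0)
icon = Image.open(buf)

# IcoFile sorts entries largest-first, so the 32x32 frame loads by default,
# and _open() exposes every embedded size via info["sizes"].
assert icon.info["sizes"] == {(16, 16), (32, 32)}
assert icon.size == (32, 32)
```

Because frames are PNG-compressed by default (only readable by Windows Vista and later, per the class docstring), passing `bitmap_format="bmp"` in `encoderinfo` is the way to get classic DIB frames instead.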
#\n# The Python Imaging Library.\n# $Id$\n#\n# standard channel operations\n#\n# History:\n# 1996-03-24 fl Created\n# 1996-08-13 fl Added logical operations (for "1" images)\n# 2000-10-12 fl Added offset method (from Image.py)\n#\n# Copyright (c) 1997-2000 by Secret Labs AB\n# Copyright (c) 1996-2000 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom __future__ import annotations\n\nfrom . import Image\n\n\ndef constant(image: Image.Image, value: int) -> Image.Image:\n """Fill a channel with a given gray level.\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n return Image.new("L", image.size, value)\n\n\ndef duplicate(image: Image.Image) -> Image.Image:\n """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`.\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n return image.copy()\n\n\ndef invert(image: Image.Image) -> Image.Image:\n """\n Invert an image (channel). ::\n\n out = MAX - image\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image.load()\n return image._new(image.im.chop_invert())\n\n\ndef lighter(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Compares the two images, pixel by pixel, and returns a new image containing\n the lighter values. ::\n\n out = max(image1, image2)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_lighter(image2.im))\n\n\ndef darker(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Compares the two images, pixel by pixel, and returns a new image containing\n the darker values. ::\n\n out = min(image1, image2)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_darker(image2.im))\n\n\ndef difference(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Returns the absolute value of the pixel-by-pixel difference between the two\n images. 
::\n\n out = abs(image1 - image2)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_difference(image2.im))\n\n\ndef multiply(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Superimposes two images on top of each other.\n\n If you multiply an image with a solid black image, the result is black. If\n you multiply with a solid white image, the image is unaffected. ::\n\n out = image1 * image2 / MAX\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_multiply(image2.im))\n\n\ndef screen(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Superimposes two inverted images on top of each other. ::\n\n out = MAX - ((MAX - image1) * (MAX - image2) / MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_screen(image2.im))\n\n\ndef soft_light(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Superimposes two images on top of each other using the Soft Light algorithm\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_soft_light(image2.im))\n\n\ndef hard_light(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Superimposes two images on top of each other using the Hard Light algorithm\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_hard_light(image2.im))\n\n\ndef overlay(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """\n Superimposes two images on top of each other using the Overlay algorithm\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_overlay(image2.im))\n\n\ndef add(\n image1: Image.Image, image2: Image.Image, scale: float = 1.0, offset: float = 0\n) -> Image.Image:\n """\n Adds two images, dividing 
the result by scale and adding the\n offset. If omitted, scale defaults to 1.0, and offset to 0.0. ::\n\n out = ((image1 + image2) / scale + offset)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_add(image2.im, scale, offset))\n\n\ndef subtract(\n image1: Image.Image, image2: Image.Image, scale: float = 1.0, offset: float = 0\n) -> Image.Image:\n """\n Subtracts two images, dividing the result by scale and adding the offset.\n If omitted, scale defaults to 1.0, and offset to 0.0. ::\n\n out = ((image1 - image2) / scale + offset)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_subtract(image2.im, scale, offset))\n\n\ndef add_modulo(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """Add two images, without clipping the result. ::\n\n out = ((image1 + image2) % MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_add_modulo(image2.im))\n\n\ndef subtract_modulo(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """Subtract two images, without clipping the result. ::\n\n out = ((image1 - image2) % MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_subtract_modulo(image2.im))\n\n\ndef logical_and(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """Logical AND between two images.\n\n Both of the images must have mode "1". If you would like to perform a\n logical AND on an image with a mode other than "1", try\n :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask\n as the second image. 
::\n\n out = ((image1 and image2) % MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_and(image2.im))\n\n\ndef logical_or(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """Logical OR between two images.\n\n Both of the images must have mode "1". ::\n\n out = ((image1 or image2) % MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_or(image2.im))\n\n\ndef logical_xor(image1: Image.Image, image2: Image.Image) -> Image.Image:\n """Logical XOR between two images.\n\n Both of the images must have mode "1". ::\n\n out = ((bool(image1) != bool(image2)) % MAX)\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n image1.load()\n image2.load()\n return image1._new(image1.im.chop_xor(image2.im))\n\n\ndef blend(image1: Image.Image, image2: Image.Image, alpha: float) -> Image.Image:\n """Blend images using constant transparency weight. Alias for\n :py:func:`PIL.Image.blend`.\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n return Image.blend(image1, image2, alpha)\n\n\ndef composite(\n image1: Image.Image, image2: Image.Image, mask: Image.Image\n) -> Image.Image:\n """Create composite using transparency mask. Alias for\n :py:func:`PIL.Image.composite`.\n\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n return Image.composite(image1, image2, mask)\n\n\ndef offset(image: Image.Image, xoffset: int, yoffset: int | None = None) -> Image.Image:\n """Returns a copy of the image where data has been offset by the given\n distances. Data wraps around the edges. If ``yoffset`` is omitted, it\n is assumed to be equal to ``xoffset``.\n\n :param image: Input image.\n :param xoffset: The horizontal distance.\n :param yoffset: The vertical distance. 
If omitted, both\n distances are set to the same value.\n :rtype: :py:class:`~PIL.Image.Image`\n """\n\n if yoffset is None:\n yoffset = xoffset\n image.load()\n return image._new(image.im.offset(xoffset, yoffset))\n | .venv\Lib\site-packages\PIL\ImageChops.py | ImageChops.py | Python | 8,257 | 0.95 | 0.157556 | 0.076923 | node-utils | 910 | 2024-08-17T00:47:53.786101 | BSD-3-Clause | false | 96fc2aa6647aaf0bd6a54ab5ad44d53d |
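Every channel operation in the record above loads its operands and delegates to a C-level `chop_*` routine, so each behaves like simple pixelwise arithmetic clamped to the mode's range. A small sketch of that arithmetic on constant-fill `"L"` images (standard Pillow; the fill values 100 and 60 are arbitrary):

```python
from PIL import Image, ImageChops

a = Image.new("L", (2, 2), 100)
b = Image.new("L", (2, 2), 60)

# invert: out = MAX - image
inv = ImageChops.invert(a)
assert inv.getpixel((0, 0)) == 155  # 255 - 100

# difference: out = abs(image1 - image2)
diff = ImageChops.difference(a, b)
assert diff.getpixel((0, 0)) == 40

# add: out = (image1 + image2) / scale + offset
avg = ImageChops.add(a, b, scale=2.0)
assert avg.getpixel((0, 0)) == 80  # (100 + 60) / 2
```

The `scale`/`offset` parameters of `add` and `subtract` make them usable as a quick average or bias step without a separate `point()` pass.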
# The Python Imaging Library.\n# $Id$\n\n# Optional color management support, based on Kevin Cazabon's PyCMS\n# library.\n\n# Originally released under LGPL. Graciously donated to PIL in\n# March 2009, for distribution under the standard PIL license\n\n# History:\n\n# 2009-03-08 fl Added to PIL.\n\n# Copyright (C) 2002-2003 Kevin Cazabon\n# Copyright (c) 2009 by Fredrik Lundh\n# Copyright (c) 2013 by Eric Soroos\n\n# See the README file for information on usage and redistribution. See\n# below for the original description.\nfrom __future__ import annotations\n\nimport operator\nimport sys\nfrom enum import IntEnum, IntFlag\nfrom functools import reduce\nfrom typing import Any, Literal, SupportsFloat, SupportsInt, Union\n\nfrom . import Image, __version__\nfrom ._deprecate import deprecate\nfrom ._typing import SupportsRead\n\ntry:\n from . import _imagingcms as core\n\n _CmsProfileCompatible = Union[\n str, SupportsRead[bytes], core.CmsProfile, "ImageCmsProfile"\n ]\nexcept ImportError as ex:\n # Allow error import for doc purposes, but error out when accessing\n # anything in core.\n from ._util import DeferredError\n\n core = DeferredError.new(ex)\n\n_DESCRIPTION = """\npyCMS\n\n a Python / PIL interface to the littleCMS ICC Color Management System\n Copyright (C) 2002-2003 Kevin Cazabon\n kevin@cazabon.com\n https://www.cazabon.com\n\n pyCMS home page: https://www.cazabon.com/pyCMS\n littleCMS home page: https://www.littlecms.com\n (littleCMS is Copyright (C) 1998-2001 Marti Maria)\n\n Originally released under LGPL. 
Graciously donated to PIL in\n March 2009, for distribution under the standard PIL license\n\n The pyCMS.py module provides a "clean" interface between Python/PIL and\n pyCMSdll, taking care of some of the more complex handling of the direct\n pyCMSdll functions, as well as error-checking and making sure that all\n relevant data is kept together.\n\n While it is possible to call pyCMSdll functions directly, it's not highly\n recommended.\n\n Version History:\n\n 1.0.0 pil Oct 2013 Port to LCMS 2.\n\n 0.1.0 pil mod March 10, 2009\n\n Renamed display profile to proof profile. The proof\n profile is the profile of the device that is being\n simulated, not the profile of the device which is\n actually used to display/print the final simulation\n (that'd be the output profile) - also see LCMSAPI.txt\n input colorspace -> using 'renderingIntent' -> proof\n colorspace -> using 'proofRenderingIntent' -> output\n colorspace\n\n Added LCMS FLAGS support.\n Added FLAGS["SOFTPROOFING"] as default flag for\n buildProofTransform (otherwise the proof profile/intent\n would be ignored).\n\n 0.1.0 pil March 2009 - added to PIL, as PIL.ImageCms\n\n 0.0.2 alpha Jan 6, 2002\n\n Added try/except statements around type() checks of\n potential CObjects... Python won't let you use type()\n on them, and raises a TypeError (stupid, if you ask\n me!)\n\n Added buildProofTransformFromOpenProfiles() function.\n Additional fixes in DLL, see DLL code for details.\n\n 0.0.1 alpha first public release, Dec. 
26, 2002\n\n Known to-do list with current version (of Python interface, not pyCMSdll):\n\n none\n\n"""\n\n_VERSION = "1.0.0 pil"\n\n\ndef __getattr__(name: str) -> Any:\n if name == "DESCRIPTION":\n deprecate("PIL.ImageCms.DESCRIPTION", 12)\n return _DESCRIPTION\n elif name == "VERSION":\n deprecate("PIL.ImageCms.VERSION", 12)\n return _VERSION\n elif name == "FLAGS":\n deprecate("PIL.ImageCms.FLAGS", 12, "PIL.ImageCms.Flags")\n return _FLAGS\n msg = f"module '{__name__}' has no attribute '{name}'"\n raise AttributeError(msg)\n\n\n# --------------------------------------------------------------------.\n\n\n#\n# intent/direction values\n\n\nclass Intent(IntEnum):\n PERCEPTUAL = 0\n RELATIVE_COLORIMETRIC = 1\n SATURATION = 2\n ABSOLUTE_COLORIMETRIC = 3\n\n\nclass Direction(IntEnum):\n INPUT = 0\n OUTPUT = 1\n PROOF = 2\n\n\n#\n# flags\n\n\nclass Flags(IntFlag):\n """Flags and documentation are taken from ``lcms2.h``."""\n\n NONE = 0\n NOCACHE = 0x0040\n """Inhibit 1-pixel cache"""\n NOOPTIMIZE = 0x0100\n """Inhibit optimizations"""\n NULLTRANSFORM = 0x0200\n """Don't transform anyway"""\n GAMUTCHECK = 0x1000\n """Out of Gamut alarm"""\n SOFTPROOFING = 0x4000\n """Do softproofing"""\n BLACKPOINTCOMPENSATION = 0x2000\n NOWHITEONWHITEFIXUP = 0x0004\n """Don't fix scum dot"""\n HIGHRESPRECALC = 0x0400\n """Use more memory to give better accuracy"""\n LOWRESPRECALC = 0x0800\n """Use less memory to minimize resources"""\n # this should be 8BITS_DEVICELINK, but that is not a valid name in Python:\n USE_8BITS_DEVICELINK = 0x0008\n """Create 8 bits devicelinks"""\n GUESSDEVICECLASS = 0x0020\n """Guess device class (for ``transform2devicelink``)"""\n KEEP_SEQUENCE = 0x0080\n """Keep profile sequence for devicelink creation"""\n FORCE_CLUT = 0x0002\n """Force CLUT optimization"""\n CLUT_POST_LINEARIZATION = 0x0001\n """create postlinearization tables if possible"""\n CLUT_PRE_LINEARIZATION = 0x0010\n """create prelinearization tables if possible"""\n NONEGATIVES = 0x8000\n 
"""Prevent negative numbers in floating point transforms"""\n COPY_ALPHA = 0x04000000\n """Alpha channels are copied on ``cmsDoTransform()``"""\n NODEFAULTRESOURCEDEF = 0x01000000\n\n _GRIDPOINTS_1 = 1 << 16\n _GRIDPOINTS_2 = 2 << 16\n _GRIDPOINTS_4 = 4 << 16\n _GRIDPOINTS_8 = 8 << 16\n _GRIDPOINTS_16 = 16 << 16\n _GRIDPOINTS_32 = 32 << 16\n _GRIDPOINTS_64 = 64 << 16\n _GRIDPOINTS_128 = 128 << 16\n\n @staticmethod\n def GRIDPOINTS(n: int) -> Flags:\n """\n Fine-tune control over number of gridpoints\n\n :param n: :py:class:`int` in range ``0 <= n <= 255``\n """\n return Flags.NONE | ((n & 0xFF) << 16)\n\n\n_MAX_FLAG = reduce(operator.or_, Flags)\n\n\n_FLAGS = {\n "MATRIXINPUT": 1,\n "MATRIXOUTPUT": 2,\n "MATRIXONLY": (1 | 2),\n "NOWHITEONWHITEFIXUP": 4, # Don't hot fix scum dot\n # Don't create prelinearization tables on precalculated transforms\n # (internal use):\n "NOPRELINEARIZATION": 16,\n "GUESSDEVICECLASS": 32, # Guess device class (for transform2devicelink)\n "NOTCACHE": 64, # Inhibit 1-pixel cache\n "NOTPRECALC": 256,\n "NULLTRANSFORM": 512, # Don't transform anyway\n "HIGHRESPRECALC": 1024, # Use more memory to give better accuracy\n "LOWRESPRECALC": 2048, # Use less memory to minimize resources\n "WHITEBLACKCOMPENSATION": 8192,\n "BLACKPOINTCOMPENSATION": 8192,\n "GAMUTCHECK": 4096, # Out of Gamut alarm\n "SOFTPROOFING": 16384, # Do softproofing\n "PRESERVEBLACK": 32768, # Black preservation\n "NODEFAULTRESOURCEDEF": 16777216, # CRD special\n "GRIDPOINTS": lambda n: (n & 0xFF) << 16, # Gridpoints\n}\n\n\n# --------------------------------------------------------------------.\n# Experimental PIL-level API\n# --------------------------------------------------------------------.\n\n##\n# Profile.\n\n\nclass ImageCmsProfile:\n def __init__(self, profile: str | SupportsRead[bytes] | core.CmsProfile) -> None:\n """\n :param profile: Either a string representing a filename,\n a file like object containing a profile or a\n low-level profile object\n\n """\n 
self.filename = None\n self.product_name = None # profile.product_name\n self.product_info = None # profile.product_info\n\n if isinstance(profile, str):\n if sys.platform == "win32":\n profile_bytes_path = profile.encode()\n try:\n profile_bytes_path.decode("ascii")\n except UnicodeDecodeError:\n with open(profile, "rb") as f:\n self.profile = core.profile_frombytes(f.read())\n return\n self.filename = profile\n self.profile = core.profile_open(profile)\n elif hasattr(profile, "read"):\n self.profile = core.profile_frombytes(profile.read())\n elif isinstance(profile, core.CmsProfile):\n self.profile = profile\n else:\n msg = "Invalid type for Profile" # type: ignore[unreachable]\n raise TypeError(msg)\n\n def tobytes(self) -> bytes:\n """\n Returns the profile in a format suitable for embedding in\n saved images.\n\n :returns: a bytes object containing the ICC profile.\n """\n\n return core.profile_tobytes(self.profile)\n\n\nclass ImageCmsTransform(Image.ImagePointHandler):\n """\n Transform. 
This can be used with the procedural API, or with the standard\n :py:func:`~PIL.Image.Image.point` method.\n\n Will return the output profile in the ``output.info['icc_profile']``.\n """\n\n def __init__(\n self,\n input: ImageCmsProfile,\n output: ImageCmsProfile,\n input_mode: str,\n output_mode: str,\n intent: Intent = Intent.PERCEPTUAL,\n proof: ImageCmsProfile | None = None,\n proof_intent: Intent = Intent.ABSOLUTE_COLORIMETRIC,\n flags: Flags = Flags.NONE,\n ):\n supported_modes = (\n "RGB",\n "RGBA",\n "RGBX",\n "CMYK",\n "I;16",\n "I;16L",\n "I;16B",\n "YCbCr",\n "LAB",\n "L",\n "1",\n )\n for mode in (input_mode, output_mode):\n if mode not in supported_modes:\n deprecate(\n mode,\n 12,\n {\n "L;16": "I;16 or I;16L",\n "L:16B": "I;16B",\n "YCCA": "YCbCr",\n "YCC": "YCbCr",\n }.get(mode),\n )\n if proof is None:\n self.transform = core.buildTransform(\n input.profile, output.profile, input_mode, output_mode, intent, flags\n )\n else:\n self.transform = core.buildProofTransform(\n input.profile,\n output.profile,\n proof.profile,\n input_mode,\n output_mode,\n intent,\n proof_intent,\n flags,\n )\n # Note: inputMode and outputMode are for pyCMS compatibility only\n self.input_mode = self.inputMode = input_mode\n self.output_mode = self.outputMode = output_mode\n\n self.output_profile = output\n\n def point(self, im: Image.Image) -> Image.Image:\n return self.apply(im)\n\n def apply(self, im: Image.Image, imOut: Image.Image | None = None) -> Image.Image:\n if imOut is None:\n imOut = Image.new(self.output_mode, im.size, None)\n self.transform.apply(im.getim(), imOut.getim())\n imOut.info["icc_profile"] = self.output_profile.tobytes()\n return imOut\n\n def apply_in_place(self, im: Image.Image) -> Image.Image:\n if im.mode != self.output_mode:\n msg = "mode mismatch"\n raise ValueError(msg) # wrong output mode\n self.transform.apply(im.getim(), im.getim())\n im.info["icc_profile"] = self.output_profile.tobytes()\n return im\n\n\ndef get_display_profile(handle: 
SupportsInt | None = None) -> ImageCmsProfile | None:\n """\n (experimental) Fetches the profile for the current display device.\n\n :returns: ``None`` if the profile is not known.\n """\n\n if sys.platform != "win32":\n return None\n\n from . import ImageWin # type: ignore[unused-ignore, unreachable]\n\n if isinstance(handle, ImageWin.HDC):\n profile = core.get_display_profile_win32(int(handle), 1)\n else:\n profile = core.get_display_profile_win32(int(handle or 0))\n if profile is None:\n return None\n return ImageCmsProfile(profile)\n\n\n# --------------------------------------------------------------------.\n# pyCMS compatible layer\n# --------------------------------------------------------------------.\n\n\nclass PyCMSError(Exception):\n """(pyCMS) Exception class.\n This is used for all errors in the pyCMS API."""\n\n pass\n\n\ndef profileToProfile(\n im: Image.Image,\n inputProfile: _CmsProfileCompatible,\n outputProfile: _CmsProfileCompatible,\n renderingIntent: Intent = Intent.PERCEPTUAL,\n outputMode: str | None = None,\n inPlace: bool = False,\n flags: Flags = Flags.NONE,\n) -> Image.Image | None:\n """\n (pyCMS) Applies an ICC transformation to a given image, mapping from\n ``inputProfile`` to ``outputProfile``.\n\n If the input or output profiles specified are not valid filenames, a\n :exc:`PyCMSError` will be raised. 
If ``inPlace`` is ``True`` and\n ``outputMode != im.mode``, a :exc:`PyCMSError` will be raised.\n If an error occurs during application of the profiles,\n a :exc:`PyCMSError` will be raised.\n If ``outputMode`` is not a mode supported by the ``outputProfile`` (or by pyCMS),\n a :exc:`PyCMSError` will be raised.\n\n This function applies an ICC transformation to im from ``inputProfile``'s\n color space to ``outputProfile``'s color space using the specified rendering\n intent to decide how to handle out-of-gamut colors.\n\n ``outputMode`` can be used to specify that a color mode conversion is to\n be done using these profiles, but the specified profiles must be able\n to handle that mode. I.e., if converting im from RGB to CMYK using\n profiles, the input profile must handle RGB data, and the output\n profile must handle CMYK data.\n\n :param im: An open :py:class:`~PIL.Image.Image` object (i.e. Image.new(...)\n or Image.open(...), etc.)\n :param inputProfile: String, as a valid filename path to the ICC input\n profile you wish to use for this image, or a profile object\n :param outputProfile: String, as a valid filename path to the ICC output\n profile you wish to use for this image, or a profile object\n :param renderingIntent: Integer (0-3) specifying the rendering intent you\n wish to use for the transform\n\n ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n ImageCms.Intent.SATURATION = 2\n ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n see the pyCMS documentation for details on rendering intents and what\n they do.\n :param outputMode: A valid PIL mode for the output image (i.e. "RGB",\n "CMYK", etc.). Note: if rendering the image "inPlace", outputMode\n MUST be the same mode as the input, or omitted completely. If\n omitted, the outputMode will be the same as the mode of the input\n image (im.mode)\n :param inPlace: Boolean. If ``True``, the original image is modified in-place,\n and ``None`` is returned. 
If ``False`` (default), a new\n :py:class:`~PIL.Image.Image` object is returned with the transform applied.\n :param flags: Integer (0-...) specifying additional flags\n :returns: Either None or a new :py:class:`~PIL.Image.Image` object, depending on\n the value of ``inPlace``\n :exception PyCMSError:\n """\n\n if outputMode is None:\n outputMode = im.mode\n\n if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3):\n msg = "renderingIntent must be an integer between 0 and 3"\n raise PyCMSError(msg)\n\n if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG):\n msg = f"flags must be an integer between 0 and {_MAX_FLAG}"\n raise PyCMSError(msg)\n\n try:\n if not isinstance(inputProfile, ImageCmsProfile):\n inputProfile = ImageCmsProfile(inputProfile)\n if not isinstance(outputProfile, ImageCmsProfile):\n outputProfile = ImageCmsProfile(outputProfile)\n transform = ImageCmsTransform(\n inputProfile,\n outputProfile,\n im.mode,\n outputMode,\n renderingIntent,\n flags=flags,\n )\n if inPlace:\n transform.apply_in_place(im)\n imOut = None\n else:\n imOut = transform.apply(im)\n except (OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n return imOut\n\n\ndef getOpenProfile(\n profileFilename: str | SupportsRead[bytes] | core.CmsProfile,\n) -> ImageCmsProfile:\n """\n (pyCMS) Opens an ICC profile file.\n\n The PyCMSProfile object can be passed back into pyCMS for use in creating\n transforms and such (as in ImageCms.buildTransformFromOpenProfiles()).\n\n If ``profileFilename`` is not a valid filename for an ICC profile,\n a :exc:`PyCMSError` will be raised.\n\n :param profileFilename: String, as a valid filename path to the ICC profile\n you wish to open, or a file-like object.\n :returns: A CmsProfile class object.\n :exception PyCMSError:\n """\n\n try:\n return ImageCmsProfile(profileFilename)\n except (OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef buildTransform(\n inputProfile: 
_CmsProfileCompatible,\n outputProfile: _CmsProfileCompatible,\n inMode: str,\n outMode: str,\n renderingIntent: Intent = Intent.PERCEPTUAL,\n flags: Flags = Flags.NONE,\n) -> ImageCmsTransform:\n """\n (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the\n ``outputProfile``. Use applyTransform to apply the transform to a given\n image.\n\n If the input or output profiles specified are not valid filenames, a\n :exc:`PyCMSError` will be raised. If an error occurs during creation\n of the transform, a :exc:`PyCMSError` will be raised.\n\n If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile``\n (or by pyCMS), a :exc:`PyCMSError` will be raised.\n\n This function builds and returns an ICC transform from the ``inputProfile``\n to the ``outputProfile`` using the ``renderingIntent`` to determine what to do\n with out-of-gamut colors. It will ONLY work for converting images that\n are in ``inMode`` to images that are in ``outMode`` color format (PIL mode,\n i.e. "RGB", "RGBA", "CMYK", etc.).\n\n Building the transform is a fair part of the overhead in\n ImageCms.profileToProfile(), so if you're planning on converting multiple\n images using the same input/output settings, this can save you time.\n Once you have a transform object, it can be used with\n ImageCms.applyProfile() to convert images without the need to re-compute\n the lookup table for the transform.\n\n The reason pyCMS returns a class object rather than a handle directly\n to the transform is that it needs to keep track of the PIL input/output\n modes that the transform is meant for. 
These attributes are stored in\n the ``inMode`` and ``outMode`` attributes of the object (which can be\n manually overridden if you really want to, but I don't know of any\n time that would be of use, or would even work).\n\n :param inputProfile: String, as a valid filename path to the ICC input\n profile you wish to use for this transform, or a profile object\n :param outputProfile: String, as a valid filename path to the ICC output\n profile you wish to use for this transform, or a profile object\n :param inMode: String, as a valid PIL mode that the appropriate profile\n also supports (i.e. "RGB", "RGBA", "CMYK", etc.)\n :param outMode: String, as a valid PIL mode that the appropriate profile\n also supports (i.e. "RGB", "RGBA", "CMYK", etc.)\n :param renderingIntent: Integer (0-3) specifying the rendering intent you\n wish to use for the transform\n\n ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n ImageCms.Intent.SATURATION = 2\n ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n see the pyCMS documentation for details on rendering intents and what\n they do.\n :param flags: Integer (0-...) 
specifying additional flags\n :returns: A CmsTransform class object.\n :exception PyCMSError:\n """\n\n if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3):\n msg = "renderingIntent must be an integer between 0 and 3"\n raise PyCMSError(msg)\n\n if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG):\n msg = f"flags must be an integer between 0 and {_MAX_FLAG}"\n raise PyCMSError(msg)\n\n try:\n if not isinstance(inputProfile, ImageCmsProfile):\n inputProfile = ImageCmsProfile(inputProfile)\n if not isinstance(outputProfile, ImageCmsProfile):\n outputProfile = ImageCmsProfile(outputProfile)\n return ImageCmsTransform(\n inputProfile, outputProfile, inMode, outMode, renderingIntent, flags=flags\n )\n except (OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef buildProofTransform(\n inputProfile: _CmsProfileCompatible,\n outputProfile: _CmsProfileCompatible,\n proofProfile: _CmsProfileCompatible,\n inMode: str,\n outMode: str,\n renderingIntent: Intent = Intent.PERCEPTUAL,\n proofRenderingIntent: Intent = Intent.ABSOLUTE_COLORIMETRIC,\n flags: Flags = Flags.SOFTPROOFING,\n) -> ImageCmsTransform:\n """\n (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the\n ``outputProfile``, but tries to simulate the result that would be\n obtained on the ``proofProfile`` device.\n\n If the input, output, or proof profiles specified are not valid\n filenames, a :exc:`PyCMSError` will be raised.\n\n If an error occurs during creation of the transform,\n a :exc:`PyCMSError` will be raised.\n\n If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile``\n (or by pyCMS), a :exc:`PyCMSError` will be raised.\n\n This function builds and returns an ICC transform from the ``inputProfile``\n to the ``outputProfile``, but tries to simulate the result that would be\n obtained on the ``proofProfile`` device using ``renderingIntent`` and\n ``proofRenderingIntent`` to determine what to do with 
out-of-gamut\n colors. This is known as "soft-proofing". It will ONLY work for\n converting images that are in ``inMode`` to images that are in outMode\n color format (PIL mode, i.e. "RGB", "RGBA", "CMYK", etc.).\n\n Usage of the resulting transform object is exactly the same as with\n ImageCms.buildTransform().\n\n Proof profiling is generally used when using an output device to get a\n good idea of what the final printed/displayed image would look like on\n the ``proofProfile`` device when it's quicker and easier to use the\n output device for judging color. Generally, this means that the\n output device is a monitor, or a dye-sub printer (etc.), and the simulated\n device is something more expensive, complicated, or time consuming\n (making it difficult to make a real print for color judgement purposes).\n\n Soft-proofing basically functions by adjusting the colors on the\n output device to match the colors of the device being simulated. However,\n when the simulated device has a much wider gamut than the output\n device, you may obtain marginal results.\n\n :param inputProfile: String, as a valid filename path to the ICC input\n profile you wish to use for this transform, or a profile object\n :param outputProfile: String, as a valid filename path to the ICC output\n (monitor, usually) profile you wish to use for this transform, or a\n profile object\n :param proofProfile: String, as a valid filename path to the ICC proof\n profile you wish to use for this transform, or a profile object\n :param inMode: String, as a valid PIL mode that the appropriate profile\n also supports (i.e. "RGB", "RGBA", "CMYK", etc.)\n :param outMode: String, as a valid PIL mode that the appropriate profile\n also supports (i.e. 
"RGB", "RGBA", "CMYK", etc.)\n :param renderingIntent: Integer (0-3) specifying the rendering intent you\n wish to use for the input->proof (simulated) transform\n\n ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n ImageCms.Intent.SATURATION = 2\n ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n see the pyCMS documentation for details on rendering intents and what\n they do.\n :param proofRenderingIntent: Integer (0-3) specifying the rendering intent\n you wish to use for proof->output transform\n\n ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n ImageCms.Intent.SATURATION = 2\n ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n see the pyCMS documentation for details on rendering intents and what\n they do.\n :param flags: Integer (0-...) specifying additional flags\n :returns: A CmsTransform class object.\n :exception PyCMSError:\n """\n\n if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3):\n msg = "renderingIntent must be an integer between 0 and 3"\n raise PyCMSError(msg)\n\n if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG):\n msg = f"flags must be an integer between 0 and {_MAX_FLAG}"\n raise PyCMSError(msg)\n\n try:\n if not isinstance(inputProfile, ImageCmsProfile):\n inputProfile = ImageCmsProfile(inputProfile)\n if not isinstance(outputProfile, ImageCmsProfile):\n outputProfile = ImageCmsProfile(outputProfile)\n if not isinstance(proofProfile, ImageCmsProfile):\n proofProfile = ImageCmsProfile(proofProfile)\n return ImageCmsTransform(\n inputProfile,\n outputProfile,\n inMode,\n outMode,\n renderingIntent,\n proofProfile,\n proofRenderingIntent,\n flags,\n )\n except (OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\nbuildTransformFromOpenProfiles = buildTransform\nbuildProofTransformFromOpenProfiles = buildProofTransform\n\n\ndef applyTransform(\n im: Image.Image, transform: ImageCmsTransform, inPlace: bool = False\n) -> 
Image.Image | None:\n """\n (pyCMS) Applies a transform to a given image.\n\n If ``im.mode != transform.input_mode``, a :exc:`PyCMSError` is raised.\n\n If ``inPlace`` is ``True`` and ``transform.input_mode != transform.output_mode``, a\n :exc:`PyCMSError` is raised.\n\n If ``im.mode``, ``transform.input_mode`` or ``transform.output_mode`` is not\n supported by pyCMSdll or the profiles you used for the transform, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while the transform is being applied,\n a :exc:`PyCMSError` is raised.\n\n This function applies a pre-calculated transform (from\n ImageCms.buildTransform() or ImageCms.buildTransformFromOpenProfiles())\n to an image. The transform can be used for multiple images, saving\n considerable calculation time if doing the same conversion multiple times.\n\n If you want to modify im in-place instead of receiving a new image as\n the return value, set ``inPlace`` to ``True``. This can only be done if\n ``transform.input_mode`` and ``transform.output_mode`` are the same, because we\n can't change the mode in-place (the buffer sizes for some modes are\n different). The default behavior is to return a new :py:class:`~PIL.Image.Image`\n object of the same dimensions in mode ``transform.output_mode``.\n\n :param im: An :py:class:`~PIL.Image.Image` object, and ``im.mode`` must be the same\n as the ``input_mode`` supported by the transform.\n :param transform: A valid CmsTransform class object\n :param inPlace: Bool. If ``True``, ``im`` is modified in place and ``None`` is\n returned, if ``False``, a new :py:class:`~PIL.Image.Image` object with the\n transform applied is returned (and ``im`` is not changed). The default is\n ``False``.\n :returns: Either ``None``, or a new :py:class:`~PIL.Image.Image` object,\n depending on the value of ``inPlace``. 
The profile will be returned in\n the image's ``info['icc_profile']``.\n :exception PyCMSError:\n """\n\n try:\n if inPlace:\n transform.apply_in_place(im)\n imOut = None\n else:\n imOut = transform.apply(im)\n except (TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n return imOut\n\n\ndef createProfile(\n colorSpace: Literal["LAB", "XYZ", "sRGB"], colorTemp: SupportsFloat = 0\n) -> core.CmsProfile:\n """\n (pyCMS) Creates a profile.\n\n If colorSpace not in ``["LAB", "XYZ", "sRGB"]``,\n a :exc:`PyCMSError` is raised.\n\n If using LAB and ``colorTemp`` is not a positive integer,\n a :exc:`PyCMSError` is raised.\n\n If an error occurs while creating the profile,\n a :exc:`PyCMSError` is raised.\n\n Use this function to create common profiles on-the-fly instead of\n having to supply a profile on disk and knowing the path to it. It\n returns a normal CmsProfile object that can be passed to\n ImageCms.buildTransformFromOpenProfiles() to create a transform to apply\n to images.\n\n :param colorSpace: String, the color space of the profile you wish to\n create.\n Currently only "LAB", "XYZ", and "sRGB" are supported.\n :param colorTemp: Positive number for the white point for the profile, in\n degrees Kelvin (i.e. 5000, 6500, 9600, etc.). The default is for D50\n illuminant if omitted (5000k). 
colorTemp is ONLY applied to LAB\n        profiles, and is ignored for XYZ and sRGB.\n    :returns: A CmsProfile class object\n    :exception PyCMSError:\n    """\n\n    if colorSpace not in ["LAB", "XYZ", "sRGB"]:\n        msg = (\n            f"Color space not supported for on-the-fly profile creation ({colorSpace})"\n        )\n        raise PyCMSError(msg)\n\n    if colorSpace == "LAB":\n        try:\n            colorTemp = float(colorTemp)\n        except (TypeError, ValueError) as e:\n            msg = f'Color temperature must be numeric, "{colorTemp}" not valid'\n            raise PyCMSError(msg) from e\n\n    try:\n        return core.createProfile(colorSpace, colorTemp)\n    except (TypeError, ValueError) as v:\n        raise PyCMSError(v) from v\n\n\ndef getProfileName(profile: _CmsProfileCompatible) -> str:\n    """\n    (pyCMS) Gets the internal product name for the given profile.\n\n    If ``profile`` isn't a valid CmsProfile object or filename to a profile,\n    a :exc:`PyCMSError` is raised. If an error occurs while trying\n    to obtain the name tag, a :exc:`PyCMSError` is raised.\n\n    Use this function to obtain the INTERNAL name of the profile (stored\n    in an ICC tag in the profile itself), usually the one used when the\n    profile was originally created. 
Sometimes this tag also contains\n additional information supplied by the creator.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal name of the profile as stored\n in an ICC tag.\n :exception PyCMSError:\n """\n\n try:\n # add an extra newline to preserve pyCMS compatibility\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n # do it in python, not c.\n # // name was "%s - %s" (model, manufacturer) || Description ,\n # // but if the Model and Manufacturer were the same or the model\n # // was long, Just the model, in 1.x\n model = profile.profile.model\n manufacturer = profile.profile.manufacturer\n\n if not (model or manufacturer):\n return (profile.profile.profile_description or "") + "\n"\n if not manufacturer or (model and len(model) > 30):\n return f"{model}\n"\n return f"{model} - {manufacturer}\n"\n\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getProfileInfo(profile: _CmsProfileCompatible) -> str:\n """\n (pyCMS) Gets the internal product information for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile,\n a :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the info tag,\n a :exc:`PyCMSError` is raised.\n\n Use this function to obtain the information stored in the profile's\n info tag. This often contains details about the profile, and how it\n was created, as supplied by the creator.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal profile information stored in\n an ICC tag.\n :exception PyCMSError:\n """\n\n try:\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n # add an extra newline to preserve pyCMS compatibility\n # Python, not C. 
the white point bits weren't working well,\n # so skipping.\n # info was description \r\n\r\n copyright \r\n\r\n K007 tag \r\n\r\n whitepoint\n description = profile.profile.profile_description\n cpright = profile.profile.copyright\n elements = [element for element in (description, cpright) if element]\n return "\r\n\r\n".join(elements) + "\r\n\r\n"\n\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getProfileCopyright(profile: _CmsProfileCompatible) -> str:\n """\n (pyCMS) Gets the copyright for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the copyright tag,\n a :exc:`PyCMSError` is raised.\n\n Use this function to obtain the information stored in the profile's\n copyright tag.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal profile information stored in\n an ICC tag.\n :exception PyCMSError:\n """\n try:\n # add an extra newline to preserve pyCMS compatibility\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n return (profile.profile.copyright or "") + "\n"\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getProfileManufacturer(profile: _CmsProfileCompatible) -> str:\n """\n (pyCMS) Gets the manufacturer for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the manufacturer tag, a\n :exc:`PyCMSError` is raised.\n\n Use this function to obtain the information stored in the profile's\n manufacturer tag.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal profile information stored 
in\n an ICC tag.\n :exception PyCMSError:\n """\n try:\n # add an extra newline to preserve pyCMS compatibility\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n return (profile.profile.manufacturer or "") + "\n"\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getProfileModel(profile: _CmsProfileCompatible) -> str:\n """\n (pyCMS) Gets the model for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the model tag,\n a :exc:`PyCMSError` is raised.\n\n Use this function to obtain the information stored in the profile's\n model tag.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal profile information stored in\n an ICC tag.\n :exception PyCMSError:\n """\n\n try:\n # add an extra newline to preserve pyCMS compatibility\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n return (profile.profile.model or "") + "\n"\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getProfileDescription(profile: _CmsProfileCompatible) -> str:\n """\n (pyCMS) Gets the description for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the description tag,\n a :exc:`PyCMSError` is raised.\n\n Use this function to obtain the information stored in the profile's\n description tag.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: A string containing the internal profile information stored in an\n ICC tag.\n :exception PyCMSError:\n """\n\n try:\n # add an extra newline to preserve pyCMS compatibility\n if not 
isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n return (profile.profile.profile_description or "") + "\n"\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef getDefaultIntent(profile: _CmsProfileCompatible) -> int:\n """\n (pyCMS) Gets the default intent name for the given profile.\n\n If ``profile`` isn't a valid CmsProfile object or filename to a profile, a\n :exc:`PyCMSError` is raised.\n\n If an error occurs while trying to obtain the default intent, a\n :exc:`PyCMSError` is raised.\n\n Use this function to determine the default (and usually best optimized)\n rendering intent for this profile. Most profiles support multiple\n rendering intents, but are intended mostly for one type of conversion.\n If you wish to use a different intent than returned, use\n ImageCms.isIntentSupported() to verify it will work first.\n\n :param profile: EITHER a valid CmsProfile object, OR a string of the\n filename of an ICC profile.\n :returns: Integer 0-3 specifying the default rendering intent for this\n profile.\n\n ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n ImageCms.Intent.SATURATION = 2\n ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n see the pyCMS documentation for details on rendering intents and what\n they do.\n :exception PyCMSError:\n """\n\n try:\n if not isinstance(profile, ImageCmsProfile):\n profile = ImageCmsProfile(profile)\n return profile.profile.rendering_intent\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef isIntentSupported(\n profile: _CmsProfileCompatible, intent: Intent, direction: Direction\n) -> Literal[-1, 1]:\n """\n (pyCMS) Checks if a given intent is supported.\n\n Use this function to verify that you can use your desired\n ``intent`` with ``profile``, and that ``profile`` can be used for the\n input/output/proof profile as you desire.\n\n Some profiles are created 
specifically for one "direction", and cannot\n    be used for others. Some profiles can only be used for certain\n    rendering intents, so it's best to either verify this before trying\n    to create a transform with them (using this function), or catch the\n    potential :exc:`PyCMSError` that will occur if they don't\n    support the modes you select.\n\n    :param profile: EITHER a valid CmsProfile object, OR a string of the\n        filename of an ICC profile.\n    :param intent: Integer (0-3) specifying the rendering intent you wish to\n        use with this profile\n\n            ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT)\n            ImageCms.Intent.RELATIVE_COLORIMETRIC = 1\n            ImageCms.Intent.SATURATION = 2\n            ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3\n\n        see the pyCMS documentation for details on rendering intents and what\n        they do.\n    :param direction: Integer specifying if the profile is to be used for\n        input, output, or proof\n\n            INPUT = 0 (or use ImageCms.Direction.INPUT)\n            OUTPUT = 1 (or use ImageCms.Direction.OUTPUT)\n            PROOF = 2 (or use ImageCms.Direction.PROOF)\n\n    :returns: 1 if the intent/direction are supported, -1 if they are not.\n    :exception PyCMSError:\n    """\n\n    try:\n        if not isinstance(profile, ImageCmsProfile):\n            profile = ImageCmsProfile(profile)\n        # FIXME: I get different results for the same data w. different\n        # compilers. 
Bug in LittleCMS or in the binding?\n if profile.profile.is_intent_supported(intent, direction):\n return 1\n else:\n return -1\n except (AttributeError, OSError, TypeError, ValueError) as v:\n raise PyCMSError(v) from v\n\n\ndef versions() -> tuple[str, str | None, str, str]:\n """\n (pyCMS) Fetches versions.\n """\n\n deprecate(\n "PIL.ImageCms.versions()",\n 12,\n '(PIL.features.version("littlecms2"), sys.version, PIL.__version__)',\n )\n return _VERSION, core.littlecms_version, sys.version.split()[0], __version__\n | .venv\Lib\site-packages\PIL\ImageCms.py | ImageCms.py | Python | 43,057 | 0.95 | 0.191451 | 0.051991 | vue-tools | 519 | 2024-08-28T02:40:06.930603 | BSD-3-Clause | false | ac86f66d01e41926bdd8a1a98994219e |
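The `buildTransform` docstring in the ImageCms.py entry above stresses building a transform once and reusing it, and all three builder functions repeat the same argument validation before touching LittleCMS. That validation can be reproduced standalone; a minimal sketch (the helper name `validate_transform_args` is an assumption, and a plain `ValueError` stands in for `PyCMSError`):

```python
def validate_transform_args(rendering_intent: int, flags: int, max_flag: int) -> None:
    # Mirrors the checks repeated in profileToProfile, buildTransform and
    # buildProofTransform: the intent must be an int in [0, 3], and flags
    # an int in [0, max_flag] (the source uses a module-level _MAX_FLAG bound).
    if not isinstance(rendering_intent, int) or not (0 <= rendering_intent <= 3):
        raise ValueError("renderingIntent must be an integer between 0 and 3")
    if not isinstance(flags, int) or not (0 <= flags <= max_flag):
        raise ValueError(f"flags must be an integer between 0 and {max_flag}")


validate_transform_args(0, 0, 0)  # Intent.PERCEPTUAL with Flags.NONE passes
```

Factoring the checks out this way would also keep the three copies in sync; the source currently inlines them in each function.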
#\n# The Python Imaging Library\n# $Id$\n#\n# map CSS3-style colour description strings to RGB\n#\n# History:\n# 2002-10-24 fl Added support for CSS-style color strings\n# 2002-12-15 fl Added RGBA support\n# 2004-03-27 fl Fixed remaining int() problems for Python 1.5.2\n# 2004-07-19 fl Fixed gray/grey spelling issues\n# 2009-03-05 fl Fixed rounding error in grayscale calculation\n#\n# Copyright (c) 2002-2004 by Secret Labs AB\n# Copyright (c) 2002-2004 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport re\nfrom functools import lru_cache\n\nfrom . import Image\n\n\n@lru_cache\ndef getrgb(color: str) -> tuple[int, int, int] | tuple[int, int, int, int]:\n """\n Convert a color string to an RGB or RGBA tuple. If the string cannot be\n parsed, this function raises a :py:exc:`ValueError` exception.\n\n .. versionadded:: 1.1.4\n\n :param color: A color string\n :return: ``(red, green, blue[, alpha])``\n """\n if len(color) > 100:\n msg = "color specifier is too long"\n raise ValueError(msg)\n color = color.lower()\n\n rgb = colormap.get(color, None)\n if rgb:\n if isinstance(rgb, tuple):\n return rgb\n rgb_tuple = getrgb(rgb)\n assert len(rgb_tuple) == 3\n colormap[color] = rgb_tuple\n return rgb_tuple\n\n # check for known string formats\n if re.match("#[a-f0-9]{3}$", color):\n return int(color[1] * 2, 16), int(color[2] * 2, 16), int(color[3] * 2, 16)\n\n if re.match("#[a-f0-9]{4}$", color):\n return (\n int(color[1] * 2, 16),\n int(color[2] * 2, 16),\n int(color[3] * 2, 16),\n int(color[4] * 2, 16),\n )\n\n if re.match("#[a-f0-9]{6}$", color):\n return int(color[1:3], 16), int(color[3:5], 16), int(color[5:7], 16)\n\n if re.match("#[a-f0-9]{8}$", color):\n return (\n int(color[1:3], 16),\n int(color[3:5], 16),\n int(color[5:7], 16),\n int(color[7:9], 16),\n )\n\n m = re.match(r"rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color)\n if m:\n return int(m.group(1)), int(m.group(2)), 
int(m.group(3))\n\n m = re.match(r"rgb\(\s*(\d+)%\s*,\s*(\d+)%\s*,\s*(\d+)%\s*\)$", color)\n if m:\n return (\n int((int(m.group(1)) * 255) / 100.0 + 0.5),\n int((int(m.group(2)) * 255) / 100.0 + 0.5),\n int((int(m.group(3)) * 255) / 100.0 + 0.5),\n )\n\n m = re.match(\n r"hsl\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color\n )\n if m:\n from colorsys import hls_to_rgb\n\n rgb_floats = hls_to_rgb(\n float(m.group(1)) / 360.0,\n float(m.group(3)) / 100.0,\n float(m.group(2)) / 100.0,\n )\n return (\n int(rgb_floats[0] * 255 + 0.5),\n int(rgb_floats[1] * 255 + 0.5),\n int(rgb_floats[2] * 255 + 0.5),\n )\n\n m = re.match(\n r"hs[bv]\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color\n )\n if m:\n from colorsys import hsv_to_rgb\n\n rgb_floats = hsv_to_rgb(\n float(m.group(1)) / 360.0,\n float(m.group(2)) / 100.0,\n float(m.group(3)) / 100.0,\n )\n return (\n int(rgb_floats[0] * 255 + 0.5),\n int(rgb_floats[1] * 255 + 0.5),\n int(rgb_floats[2] * 255 + 0.5),\n )\n\n m = re.match(r"rgba\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color)\n if m:\n return int(m.group(1)), int(m.group(2)), int(m.group(3)), int(m.group(4))\n msg = f"unknown color specifier: {repr(color)}"\n raise ValueError(msg)\n\n\n@lru_cache\ndef getcolor(color: str, mode: str) -> int | tuple[int, ...]:\n """\n Same as :py:func:`~PIL.ImageColor.getrgb` for most modes. However, if\n ``mode`` is HSV, converts the RGB value to a HSV value, or if ``mode`` is\n not color or a palette image, converts the RGB value to a grayscale value.\n If the string cannot be parsed, this function raises a :py:exc:`ValueError`\n exception.\n\n .. 
versionadded:: 1.1.4\n\n :param color: A color string\n :param mode: Convert result to this mode\n :return: ``graylevel, (graylevel, alpha) or (red, green, blue[, alpha])``\n """\n # same as getrgb, but converts the result to the given mode\n rgb, alpha = getrgb(color), 255\n if len(rgb) == 4:\n alpha = rgb[3]\n rgb = rgb[:3]\n\n if mode == "HSV":\n from colorsys import rgb_to_hsv\n\n r, g, b = rgb\n h, s, v = rgb_to_hsv(r / 255, g / 255, b / 255)\n return int(h * 255), int(s * 255), int(v * 255)\n elif Image.getmodebase(mode) == "L":\n r, g, b = rgb\n # ITU-R Recommendation 601-2 for nonlinear RGB\n # scaled to 24 bits to match the convert's implementation.\n graylevel = (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16\n if mode[-1] == "A":\n return graylevel, alpha\n return graylevel\n elif mode[-1] == "A":\n return rgb + (alpha,)\n return rgb\n\n\ncolormap: dict[str, str | tuple[int, int, int]] = {\n # X11 colour table from https://drafts.csswg.org/css-color-4/, with\n # gray/grey spelling issues fixed. 
This is a superset of HTML 4.0\n # colour names used in CSS 1.\n "aliceblue": "#f0f8ff",\n "antiquewhite": "#faebd7",\n "aqua": "#00ffff",\n "aquamarine": "#7fffd4",\n "azure": "#f0ffff",\n "beige": "#f5f5dc",\n "bisque": "#ffe4c4",\n "black": "#000000",\n "blanchedalmond": "#ffebcd",\n "blue": "#0000ff",\n "blueviolet": "#8a2be2",\n "brown": "#a52a2a",\n "burlywood": "#deb887",\n "cadetblue": "#5f9ea0",\n "chartreuse": "#7fff00",\n "chocolate": "#d2691e",\n "coral": "#ff7f50",\n "cornflowerblue": "#6495ed",\n "cornsilk": "#fff8dc",\n "crimson": "#dc143c",\n "cyan": "#00ffff",\n "darkblue": "#00008b",\n "darkcyan": "#008b8b",\n "darkgoldenrod": "#b8860b",\n "darkgray": "#a9a9a9",\n "darkgrey": "#a9a9a9",\n "darkgreen": "#006400",\n "darkkhaki": "#bdb76b",\n "darkmagenta": "#8b008b",\n "darkolivegreen": "#556b2f",\n "darkorange": "#ff8c00",\n "darkorchid": "#9932cc",\n "darkred": "#8b0000",\n "darksalmon": "#e9967a",\n "darkseagreen": "#8fbc8f",\n "darkslateblue": "#483d8b",\n "darkslategray": "#2f4f4f",\n "darkslategrey": "#2f4f4f",\n "darkturquoise": "#00ced1",\n "darkviolet": "#9400d3",\n "deeppink": "#ff1493",\n "deepskyblue": "#00bfff",\n "dimgray": "#696969",\n "dimgrey": "#696969",\n "dodgerblue": "#1e90ff",\n "firebrick": "#b22222",\n "floralwhite": "#fffaf0",\n "forestgreen": "#228b22",\n "fuchsia": "#ff00ff",\n "gainsboro": "#dcdcdc",\n "ghostwhite": "#f8f8ff",\n "gold": "#ffd700",\n "goldenrod": "#daa520",\n "gray": "#808080",\n "grey": "#808080",\n "green": "#008000",\n "greenyellow": "#adff2f",\n "honeydew": "#f0fff0",\n "hotpink": "#ff69b4",\n "indianred": "#cd5c5c",\n "indigo": "#4b0082",\n "ivory": "#fffff0",\n "khaki": "#f0e68c",\n "lavender": "#e6e6fa",\n "lavenderblush": "#fff0f5",\n "lawngreen": "#7cfc00",\n "lemonchiffon": "#fffacd",\n "lightblue": "#add8e6",\n "lightcoral": "#f08080",\n "lightcyan": "#e0ffff",\n "lightgoldenrodyellow": "#fafad2",\n "lightgreen": "#90ee90",\n "lightgray": "#d3d3d3",\n "lightgrey": "#d3d3d3",\n "lightpink": 
"#ffb6c1",\n "lightsalmon": "#ffa07a",\n "lightseagreen": "#20b2aa",\n "lightskyblue": "#87cefa",\n "lightslategray": "#778899",\n "lightslategrey": "#778899",\n "lightsteelblue": "#b0c4de",\n "lightyellow": "#ffffe0",\n "lime": "#00ff00",\n "limegreen": "#32cd32",\n "linen": "#faf0e6",\n "magenta": "#ff00ff",\n "maroon": "#800000",\n "mediumaquamarine": "#66cdaa",\n "mediumblue": "#0000cd",\n "mediumorchid": "#ba55d3",\n "mediumpurple": "#9370db",\n "mediumseagreen": "#3cb371",\n "mediumslateblue": "#7b68ee",\n "mediumspringgreen": "#00fa9a",\n "mediumturquoise": "#48d1cc",\n "mediumvioletred": "#c71585",\n "midnightblue": "#191970",\n "mintcream": "#f5fffa",\n "mistyrose": "#ffe4e1",\n "moccasin": "#ffe4b5",\n "navajowhite": "#ffdead",\n "navy": "#000080",\n "oldlace": "#fdf5e6",\n "olive": "#808000",\n "olivedrab": "#6b8e23",\n "orange": "#ffa500",\n "orangered": "#ff4500",\n "orchid": "#da70d6",\n "palegoldenrod": "#eee8aa",\n "palegreen": "#98fb98",\n "paleturquoise": "#afeeee",\n "palevioletred": "#db7093",\n "papayawhip": "#ffefd5",\n "peachpuff": "#ffdab9",\n "peru": "#cd853f",\n "pink": "#ffc0cb",\n "plum": "#dda0dd",\n "powderblue": "#b0e0e6",\n "purple": "#800080",\n "rebeccapurple": "#663399",\n "red": "#ff0000",\n "rosybrown": "#bc8f8f",\n "royalblue": "#4169e1",\n "saddlebrown": "#8b4513",\n "salmon": "#fa8072",\n "sandybrown": "#f4a460",\n "seagreen": "#2e8b57",\n "seashell": "#fff5ee",\n "sienna": "#a0522d",\n "silver": "#c0c0c0",\n "skyblue": "#87ceeb",\n "slateblue": "#6a5acd",\n "slategray": "#708090",\n "slategrey": "#708090",\n "snow": "#fffafa",\n "springgreen": "#00ff7f",\n "steelblue": "#4682b4",\n "tan": "#d2b48c",\n "teal": "#008080",\n "thistle": "#d8bfd8",\n "tomato": "#ff6347",\n "turquoise": "#40e0d0",\n "violet": "#ee82ee",\n "wheat": "#f5deb3",\n "white": "#ffffff",\n "whitesmoke": "#f5f5f5",\n "yellow": "#ffff00",\n "yellowgreen": "#9acd32",\n}\n | .venv\Lib\site-packages\PIL\ImageColor.py | ImageColor.py | Python | 9,761 | 0.95 | 
0.084375 | 0.085034 | python-kit | 197 | 2023-09-30T16:48:58.419459 | BSD-3-Clause | false | 7552b46a33dccfc55ed2848530a19791 |
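The "L" branch of `getcolor` in the ImageColor.py entry above converts RGB to gray with a fixed-point ITU-R 601-2 luma. The same computation can be checked standalone; a sketch using the coefficients from the source (the function name `rec601_gray` is an illustration):

```python
def rec601_gray(r: int, g: int, b: int) -> int:
    # Fixed-point ITU-R Recommendation 601-2 luma, scaled to 24 bits, with
    # rounding supplied by the 0x8000 term, as in getcolor's "L" branch.
    return (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16


print(rec601_gray(255, 0, 0))  # pure red -> 76
```

The three coefficients sum to exactly 65536 (2^16), so white maps to 255 and the result always fits in 8 bits.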
#\n# The Python Imaging Library\n# $Id$\n#\n# drawing interface operations\n#\n# History:\n# 1996-04-13 fl Created (experimental)\n# 1996-08-07 fl Filled polygons, ellipses.\n# 1996-08-13 fl Added text support\n# 1998-06-28 fl Handle I and F images\n# 1998-12-29 fl Added arc; use arc primitive to draw ellipses\n# 1999-01-10 fl Added shape stuff (experimental)\n# 1999-02-06 fl Added bitmap support\n# 1999-02-11 fl Changed all primitives to take options\n# 1999-02-20 fl Fixed backwards compatibility\n# 2000-10-12 fl Copy on write, when necessary\n# 2001-02-18 fl Use default ink for bitmap/text also in fill mode\n# 2002-10-24 fl Added support for CSS-style color strings\n# 2002-12-10 fl Added experimental support for RGBA-on-RGB drawing\n# 2002-12-11 fl Refactored low-level drawing API (work in progress)\n# 2004-08-26 fl Made Draw() a factory function, added getdraw() support\n# 2004-09-04 fl Added width support to line primitive\n# 2004-09-10 fl Added font mode handling\n# 2006-06-19 fl Added font bearing support (getmask2)\n#\n# Copyright (c) 1997-2006 by Secret Labs AB\n# Copyright (c) 1996-2006 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\nfrom __future__ import annotations\n\nimport math\nimport struct\nfrom collections.abc import Sequence\nfrom types import ModuleType\nfrom typing import Any, AnyStr, Callable, Union, cast\n\nfrom . import Image, ImageColor\nfrom ._deprecate import deprecate\nfrom ._typing import Coords\n\n# experimental access to the outline API\nOutline: Callable[[], Image.core._Outline] = Image.core.outline\n\nTYPE_CHECKING = False\nif TYPE_CHECKING:\n from . 
import ImageDraw2, ImageFont\n\n_Ink = Union[float, tuple[int, ...], str]\n\n"""\nA simple 2D drawing interface for PIL images.\n<p>\nApplication code should use the <b>Draw</b> factory, instead of\ninstantiating the ImageDraw class directly.\n"""\n\n\nclass ImageDraw:\n    font: (\n        ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont | None\n    ) = None\n\n    def __init__(self, im: Image.Image, mode: str | None = None) -> None:\n        """\n        Create a drawing instance.\n\n        :param im: The image to draw in.\n        :param mode: Optional mode to use for color values. For RGB\n            images, this argument can be RGB or RGBA (to blend the\n            drawing into the image). For all other modes, this argument\n            must be the same as the image mode. If omitted, the mode\n            defaults to the mode of the image.\n        """\n        im.load()\n        if im.readonly:\n            im._copy()  # make it writeable\n        blend = 0\n        if mode is None:\n            mode = im.mode\n        if mode != im.mode:\n            if mode == "RGBA" and im.mode == "RGB":\n                blend = 1\n            else:\n                msg = "mode mismatch"\n                raise ValueError(msg)\n        if mode == "P":\n            self.palette = im.palette\n        else:\n            self.palette = None\n        self._image = im\n        self.im = im.im\n        self.draw = Image.core.draw(self.im, blend)\n        self.mode = mode\n        if mode in ("I", "F"):\n            self.ink = self.draw.draw_ink(1)\n        else:\n            self.ink = self.draw.draw_ink(-1)\n        if mode in ("1", "P", "I", "F"):\n            # FIXME: fix Fill2 to properly support matte for I+F images\n            self.fontmode = "1"\n        else:\n            self.fontmode = "L"  # aliasing is okay for other modes\n        self.fill = False\n\n    def getfont(\n        self,\n    ) -> ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont:\n        """\n        Get the current default font.\n\n        To set the default font for this ImageDraw instance::\n\n            from PIL import ImageDraw, ImageFont\n            draw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf")\n\n        To set the default font for all future ImageDraw instances::\n\n            from PIL import ImageDraw, ImageFont\n            ImageDraw.ImageDraw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf")\n\n        If the current 
default font is ``None``,\n it is initialized with ``ImageFont.load_default()``.\n\n :returns: An image font."""\n if not self.font:\n # FIXME: should add a font repository\n from . import ImageFont\n\n self.font = ImageFont.load_default()\n return self.font\n\n def _getfont(\n self, font_size: float | None\n ) -> ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont:\n if font_size is not None:\n from . import ImageFont\n\n return ImageFont.load_default(font_size)\n else:\n return self.getfont()\n\n def _getink(\n self, ink: _Ink | None, fill: _Ink | None = None\n ) -> tuple[int | None, int | None]:\n result_ink = None\n result_fill = None\n if ink is None and fill is None:\n if self.fill:\n result_fill = self.ink\n else:\n result_ink = self.ink\n else:\n if ink is not None:\n if isinstance(ink, str):\n ink = ImageColor.getcolor(ink, self.mode)\n if self.palette and isinstance(ink, tuple):\n ink = self.palette.getcolor(ink, self._image)\n result_ink = self.draw.draw_ink(ink)\n if fill is not None:\n if isinstance(fill, str):\n fill = ImageColor.getcolor(fill, self.mode)\n if self.palette and isinstance(fill, tuple):\n fill = self.palette.getcolor(fill, self._image)\n result_fill = self.draw.draw_ink(fill)\n return result_ink, result_fill\n\n def arc(\n self,\n xy: Coords,\n start: float,\n end: float,\n fill: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw an arc."""\n ink, fill = self._getink(fill)\n if ink is not None:\n self.draw.draw_arc(xy, start, end, ink, width)\n\n def bitmap(\n self, xy: Sequence[int], bitmap: Image.Image, fill: _Ink | None = None\n ) -> None:\n """Draw a bitmap."""\n bitmap.load()\n ink, fill = self._getink(fill)\n if ink is None:\n ink = fill\n if ink is not None:\n self.draw.draw_bitmap(xy, bitmap.im, ink)\n\n def chord(\n self,\n xy: Coords,\n start: float,\n end: float,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a chord."""\n ink, fill_ink = 
self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_chord(xy, start, end, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n self.draw.draw_chord(xy, start, end, ink, 0, width)\n\n def ellipse(\n self,\n xy: Coords,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw an ellipse."""\n ink, fill_ink = self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_ellipse(xy, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n self.draw.draw_ellipse(xy, ink, 0, width)\n\n def circle(\n self,\n xy: Sequence[float],\n radius: float,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a circle given center coordinates and a radius."""\n ellipse_xy = (xy[0] - radius, xy[1] - radius, xy[0] + radius, xy[1] + radius)\n self.ellipse(ellipse_xy, fill, outline, width)\n\n def line(\n self,\n xy: Coords,\n fill: _Ink | None = None,\n width: int = 0,\n joint: str | None = None,\n ) -> None:\n """Draw a line, or a connected sequence of line segments."""\n ink = self._getink(fill)[0]\n if ink is not None:\n self.draw.draw_lines(xy, ink, width)\n if joint == "curve" and width > 4:\n points: Sequence[Sequence[float]]\n if isinstance(xy[0], (list, tuple)):\n points = cast(Sequence[Sequence[float]], xy)\n else:\n points = [\n cast(Sequence[float], tuple(xy[i : i + 2]))\n for i in range(0, len(xy), 2)\n ]\n for i in range(1, len(points) - 1):\n point = points[i]\n angles = [\n math.degrees(math.atan2(end[0] - start[0], start[1] - end[1]))\n % 360\n for start, end in (\n (points[i - 1], point),\n (point, points[i + 1]),\n )\n ]\n if angles[0] == angles[1]:\n # This is a straight line, so no joint is required\n continue\n\n def coord_at_angle(\n coord: Sequence[float], angle: float\n ) -> tuple[float, ...]:\n x, y = coord\n angle -= 90\n distance = width / 2 - 1\n return tuple(\n p + (math.floor(p_d) if p_d > 0 else 
math.ceil(p_d))\n for p, p_d in (\n (x, distance * math.cos(math.radians(angle))),\n (y, distance * math.sin(math.radians(angle))),\n )\n )\n\n flipped = (\n angles[1] > angles[0] and angles[1] - 180 > angles[0]\n ) or (angles[1] < angles[0] and angles[1] + 180 > angles[0])\n coords = [\n (point[0] - width / 2 + 1, point[1] - width / 2 + 1),\n (point[0] + width / 2 - 1, point[1] + width / 2 - 1),\n ]\n if flipped:\n start, end = (angles[1] + 90, angles[0] + 90)\n else:\n start, end = (angles[0] - 90, angles[1] - 90)\n self.pieslice(coords, start - 90, end - 90, fill)\n\n if width > 8:\n # Cover potential gaps between the line and the joint\n if flipped:\n gap_coords = [\n coord_at_angle(point, angles[0] + 90),\n point,\n coord_at_angle(point, angles[1] + 90),\n ]\n else:\n gap_coords = [\n coord_at_angle(point, angles[0] - 90),\n point,\n coord_at_angle(point, angles[1] - 90),\n ]\n self.line(gap_coords, fill, width=3)\n\n def shape(\n self,\n shape: Image.core._Outline,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n ) -> None:\n """(Experimental) Draw a shape."""\n shape.close()\n ink, fill_ink = self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_outline(shape, fill_ink, 1)\n if ink is not None and ink != fill_ink:\n self.draw.draw_outline(shape, ink, 0)\n\n def pieslice(\n self,\n xy: Coords,\n start: float,\n end: float,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a pieslice."""\n ink, fill_ink = self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_pieslice(xy, start, end, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n self.draw.draw_pieslice(xy, start, end, ink, 0, width)\n\n def point(self, xy: Coords, fill: _Ink | None = None) -> None:\n """Draw one or more individual pixels."""\n ink, fill = self._getink(fill)\n if ink is not None:\n self.draw.draw_points(xy, ink)\n\n def polygon(\n self,\n xy: Coords,\n fill: _Ink | None = 
None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a polygon."""\n ink, fill_ink = self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_polygon(xy, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n if width == 1:\n self.draw.draw_polygon(xy, ink, 0, width)\n elif self.im is not None:\n # To avoid expanding the polygon outwards,\n # use the fill as a mask\n mask = Image.new("1", self.im.size)\n mask_ink = self._getink(1)[0]\n draw = Draw(mask)\n draw.draw.draw_polygon(xy, mask_ink, 1)\n\n self.draw.draw_polygon(xy, ink, 0, width * 2 - 1, mask.im)\n\n def regular_polygon(\n self,\n bounding_circle: Sequence[Sequence[float] | float],\n n_sides: int,\n rotation: float = 0,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a regular polygon."""\n xy = _compute_regular_polygon_vertices(bounding_circle, n_sides, rotation)\n self.polygon(xy, fill, outline, width)\n\n def rectangle(\n self,\n xy: Coords,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n ) -> None:\n """Draw a rectangle."""\n ink, fill_ink = self._getink(outline, fill)\n if fill_ink is not None:\n self.draw.draw_rectangle(xy, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n self.draw.draw_rectangle(xy, ink, 0, width)\n\n def rounded_rectangle(\n self,\n xy: Coords,\n radius: float = 0,\n fill: _Ink | None = None,\n outline: _Ink | None = None,\n width: int = 1,\n *,\n corners: tuple[bool, bool, bool, bool] | None = None,\n ) -> None:\n """Draw a rounded rectangle."""\n if isinstance(xy[0], (list, tuple)):\n (x0, y0), (x1, y1) = cast(Sequence[Sequence[float]], xy)\n else:\n x0, y0, x1, y1 = cast(Sequence[float], xy)\n if x1 < x0:\n msg = "x1 must be greater than or equal to x0"\n raise ValueError(msg)\n if y1 < y0:\n msg = "y1 must be greater than or equal to y0"\n raise ValueError(msg)\n if corners is None:\n corners = (True, True, True, 
True)\n\n d = radius * 2\n\n x0 = round(x0)\n y0 = round(y0)\n x1 = round(x1)\n y1 = round(y1)\n full_x, full_y = False, False\n if all(corners):\n full_x = d >= x1 - x0 - 1\n if full_x:\n # The two left and two right corners are joined\n d = x1 - x0\n full_y = d >= y1 - y0 - 1\n if full_y:\n # The two top and two bottom corners are joined\n d = y1 - y0\n if full_x and full_y:\n # If all corners are joined, that is a circle\n return self.ellipse(xy, fill, outline, width)\n\n if d == 0 or not any(corners):\n # If the corners have no curve,\n # or there are no corners,\n # that is a rectangle\n return self.rectangle(xy, fill, outline, width)\n\n r = int(d // 2)\n ink, fill_ink = self._getink(outline, fill)\n\n def draw_corners(pieslice: bool) -> None:\n parts: tuple[tuple[tuple[float, float, float, float], int, int], ...]\n if full_x:\n # Draw top and bottom halves\n parts = (\n ((x0, y0, x0 + d, y0 + d), 180, 360),\n ((x0, y1 - d, x0 + d, y1), 0, 180),\n )\n elif full_y:\n # Draw left and right halves\n parts = (\n ((x0, y0, x0 + d, y0 + d), 90, 270),\n ((x1 - d, y0, x1, y0 + d), 270, 90),\n )\n else:\n # Draw four separate corners\n parts = tuple(\n part\n for i, part in enumerate(\n (\n ((x0, y0, x0 + d, y0 + d), 180, 270),\n ((x1 - d, y0, x1, y0 + d), 270, 360),\n ((x1 - d, y1 - d, x1, y1), 0, 90),\n ((x0, y1 - d, x0 + d, y1), 90, 180),\n )\n )\n if corners[i]\n )\n for part in parts:\n if pieslice:\n self.draw.draw_pieslice(*(part + (fill_ink, 1)))\n else:\n self.draw.draw_arc(*(part + (ink, width)))\n\n if fill_ink is not None:\n draw_corners(True)\n\n if full_x:\n self.draw.draw_rectangle((x0, y0 + r + 1, x1, y1 - r - 1), fill_ink, 1)\n elif x1 - r - 1 > x0 + r + 1:\n self.draw.draw_rectangle((x0 + r + 1, y0, x1 - r - 1, y1), fill_ink, 1)\n if not full_x and not full_y:\n left = [x0, y0, x0 + r, y1]\n if corners[0]:\n left[1] += r + 1\n if corners[3]:\n left[3] -= r + 1\n self.draw.draw_rectangle(left, fill_ink, 1)\n\n right = [x1 - r, y0, x1, y1]\n if 
corners[1]:\n right[1] += r + 1\n if corners[2]:\n right[3] -= r + 1\n self.draw.draw_rectangle(right, fill_ink, 1)\n if ink is not None and ink != fill_ink and width != 0:\n draw_corners(False)\n\n if not full_x:\n top = [x0, y0, x1, y0 + width - 1]\n if corners[0]:\n top[0] += r + 1\n if corners[1]:\n top[2] -= r + 1\n self.draw.draw_rectangle(top, ink, 1)\n\n bottom = [x0, y1 - width + 1, x1, y1]\n if corners[3]:\n bottom[0] += r + 1\n if corners[2]:\n bottom[2] -= r + 1\n self.draw.draw_rectangle(bottom, ink, 1)\n if not full_y:\n left = [x0, y0, x0 + width - 1, y1]\n if corners[0]:\n left[1] += r + 1\n if corners[3]:\n left[3] -= r + 1\n self.draw.draw_rectangle(left, ink, 1)\n\n right = [x1 - width + 1, y0, x1, y1]\n if corners[1]:\n right[1] += r + 1\n if corners[2]:\n right[3] -= r + 1\n self.draw.draw_rectangle(right, ink, 1)\n\n def _multiline_check(self, text: AnyStr) -> bool:\n split_character = "\n" if isinstance(text, str) else b"\n"\n\n return split_character in text\n\n def text(\n self,\n xy: tuple[float, float],\n text: AnyStr,\n fill: _Ink | None = None,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ) = None,\n anchor: str | None = None,\n spacing: float = 4,\n align: str = "left",\n direction: str | None = None,\n features: list[str] | None = None,\n language: str | None = None,\n stroke_width: float = 0,\n stroke_fill: _Ink | None = None,\n embedded_color: bool = False,\n *args: Any,\n **kwargs: Any,\n ) -> None:\n """Draw text."""\n if embedded_color and self.mode not in ("RGB", "RGBA"):\n msg = "Embedded color supported only in RGB and RGBA modes"\n raise ValueError(msg)\n\n if font is None:\n font = self._getfont(kwargs.get("font_size"))\n\n if self._multiline_check(text):\n return self.multiline_text(\n xy,\n text,\n fill,\n font,\n anchor,\n spacing,\n align,\n direction,\n features,\n language,\n stroke_width,\n stroke_fill,\n embedded_color,\n )\n\n def getink(fill: _Ink | None) -> 
int:\n ink, fill_ink = self._getink(fill)\n if ink is None:\n assert fill_ink is not None\n return fill_ink\n return ink\n\n def draw_text(ink: int, stroke_width: float = 0) -> None:\n mode = self.fontmode\n if stroke_width == 0 and embedded_color:\n mode = "RGBA"\n coord = []\n for i in range(2):\n coord.append(int(xy[i]))\n start = (math.modf(xy[0])[0], math.modf(xy[1])[0])\n try:\n mask, offset = font.getmask2( # type: ignore[union-attr,misc]\n text,\n mode,\n direction=direction,\n features=features,\n language=language,\n stroke_width=stroke_width,\n stroke_filled=True,\n anchor=anchor,\n ink=ink,\n start=start,\n *args,\n **kwargs,\n )\n coord = [coord[0] + offset[0], coord[1] + offset[1]]\n except AttributeError:\n try:\n mask = font.getmask( # type: ignore[misc]\n text,\n mode,\n direction,\n features,\n language,\n stroke_width,\n anchor,\n ink,\n start=start,\n *args,\n **kwargs,\n )\n except TypeError:\n mask = font.getmask(text)\n if mode == "RGBA":\n # font.getmask2(mode="RGBA") returns color in RGB bands and mask in A\n # extract mask and set text alpha\n color, mask = mask, mask.getband(3)\n ink_alpha = struct.pack("i", ink)[3]\n color.fillband(3, ink_alpha)\n x, y = coord\n if self.im is not None:\n self.im.paste(\n color, (x, y, x + mask.size[0], y + mask.size[1]), mask\n )\n else:\n self.draw.draw_bitmap(coord, mask, ink)\n\n ink = getink(fill)\n if ink is not None:\n stroke_ink = None\n if stroke_width:\n stroke_ink = getink(stroke_fill) if stroke_fill is not None else ink\n\n if stroke_ink is not None:\n # Draw stroked text\n draw_text(stroke_ink, stroke_width)\n\n # Draw normal text\n if ink != stroke_ink:\n draw_text(ink)\n else:\n # Only draw normal text\n draw_text(ink)\n\n def _prepare_multiline_text(\n self,\n xy: tuple[float, float],\n text: AnyStr,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ),\n anchor: str | None,\n spacing: float,\n align: str,\n direction: str | None,\n features: 
list[str] | None,\n language: str | None,\n stroke_width: float,\n embedded_color: bool,\n font_size: float | None,\n ) -> tuple[\n ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont,\n list[tuple[tuple[float, float], str, AnyStr]],\n ]:\n if anchor is None:\n anchor = "lt" if direction == "ttb" else "la"\n elif len(anchor) != 2:\n msg = "anchor must be a 2 character string"\n raise ValueError(msg)\n elif anchor[1] in "tb" and direction != "ttb":\n msg = "anchor not supported for multiline text"\n raise ValueError(msg)\n\n if font is None:\n font = self._getfont(font_size)\n\n lines = text.split("\n" if isinstance(text, str) else b"\n")\n line_spacing = (\n self.textbbox((0, 0), "A", font, stroke_width=stroke_width)[3]\n + stroke_width\n + spacing\n )\n\n top = xy[1]\n parts = []\n if direction == "ttb":\n left = xy[0]\n for line in lines:\n parts.append(((left, top), anchor, line))\n left += line_spacing\n else:\n widths = []\n max_width: float = 0\n for line in lines:\n line_width = self.textlength(\n line,\n font,\n direction=direction,\n features=features,\n language=language,\n embedded_color=embedded_color,\n )\n widths.append(line_width)\n max_width = max(max_width, line_width)\n\n if anchor[1] == "m":\n top -= (len(lines) - 1) * line_spacing / 2.0\n elif anchor[1] == "d":\n top -= (len(lines) - 1) * line_spacing\n\n for idx, line in enumerate(lines):\n left = xy[0]\n width_difference = max_width - widths[idx]\n\n # align by align parameter\n if align in ("left", "justify"):\n pass\n elif align == "center":\n left += width_difference / 2.0\n elif align == "right":\n left += width_difference\n else:\n msg = 'align must be "left", "center", "right" or "justify"'\n raise ValueError(msg)\n\n if (\n align == "justify"\n and width_difference != 0\n and idx != len(lines) - 1\n ):\n words = line.split(" " if isinstance(text, str) else b" ")\n if len(words) > 1:\n # align left by anchor\n if anchor[0] == "m":\n left -= max_width / 2.0\n elif 
anchor[0] == "r":\n left -= max_width\n\n word_widths = [\n self.textlength(\n word,\n font,\n direction=direction,\n features=features,\n language=language,\n embedded_color=embedded_color,\n )\n for word in words\n ]\n word_anchor = "l" + anchor[1]\n width_difference = max_width - sum(word_widths)\n for i, word in enumerate(words):\n parts.append(((left, top), word_anchor, word))\n left += word_widths[i] + width_difference / (len(words) - 1)\n top += line_spacing\n continue\n\n # align left by anchor\n if anchor[0] == "m":\n left -= width_difference / 2.0\n elif anchor[0] == "r":\n left -= width_difference\n parts.append(((left, top), anchor, line))\n top += line_spacing\n\n return font, parts\n\n def multiline_text(\n self,\n xy: tuple[float, float],\n text: AnyStr,\n fill: _Ink | None = None,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ) = None,\n anchor: str | None = None,\n spacing: float = 4,\n align: str = "left",\n direction: str | None = None,\n features: list[str] | None = None,\n language: str | None = None,\n stroke_width: float = 0,\n stroke_fill: _Ink | None = None,\n embedded_color: bool = False,\n *,\n font_size: float | None = None,\n ) -> None:\n font, lines = self._prepare_multiline_text(\n xy,\n text,\n font,\n anchor,\n spacing,\n align,\n direction,\n features,\n language,\n stroke_width,\n embedded_color,\n font_size,\n )\n\n for xy, anchor, line in lines:\n self.text(\n xy,\n line,\n fill,\n font,\n anchor,\n direction=direction,\n features=features,\n language=language,\n stroke_width=stroke_width,\n stroke_fill=stroke_fill,\n embedded_color=embedded_color,\n )\n\n def textlength(\n self,\n text: AnyStr,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ) = None,\n direction: str | None = None,\n features: list[str] | None = None,\n language: str | None = None,\n embedded_color: bool = False,\n *,\n font_size: float | None = None,\n ) -> 
float:\n """Get the length of a given string, in pixels with 1/64 precision."""\n if self._multiline_check(text):\n msg = "can't measure length of multiline text"\n raise ValueError(msg)\n if embedded_color and self.mode not in ("RGB", "RGBA"):\n msg = "Embedded color supported only in RGB and RGBA modes"\n raise ValueError(msg)\n\n if font is None:\n font = self._getfont(font_size)\n mode = "RGBA" if embedded_color else self.fontmode\n return font.getlength(text, mode, direction, features, language)\n\n def textbbox(\n self,\n xy: tuple[float, float],\n text: AnyStr,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ) = None,\n anchor: str | None = None,\n spacing: float = 4,\n align: str = "left",\n direction: str | None = None,\n features: list[str] | None = None,\n language: str | None = None,\n stroke_width: float = 0,\n embedded_color: bool = False,\n *,\n font_size: float | None = None,\n ) -> tuple[float, float, float, float]:\n """Get the bounding box of a given string, in pixels."""\n if embedded_color and self.mode not in ("RGB", "RGBA"):\n msg = "Embedded color supported only in RGB and RGBA modes"\n raise ValueError(msg)\n\n if font is None:\n font = self._getfont(font_size)\n\n if self._multiline_check(text):\n return self.multiline_textbbox(\n xy,\n text,\n font,\n anchor,\n spacing,\n align,\n direction,\n features,\n language,\n stroke_width,\n embedded_color,\n )\n\n mode = "RGBA" if embedded_color else self.fontmode\n bbox = font.getbbox(\n text, mode, direction, features, language, stroke_width, anchor\n )\n return bbox[0] + xy[0], bbox[1] + xy[1], bbox[2] + xy[0], bbox[3] + xy[1]\n\n def multiline_textbbox(\n self,\n xy: tuple[float, float],\n text: AnyStr,\n font: (\n ImageFont.ImageFont\n | ImageFont.FreeTypeFont\n | ImageFont.TransposedFont\n | None\n ) = None,\n anchor: str | None = None,\n spacing: float = 4,\n align: str = "left",\n direction: str | None = None,\n features: list[str] | None 
= None,\n language: str | None = None,\n stroke_width: float = 0,\n embedded_color: bool = False,\n *,\n font_size: float | None = None,\n ) -> tuple[float, float, float, float]:\n font, lines = self._prepare_multiline_text(\n xy,\n text,\n font,\n anchor,\n spacing,\n align,\n direction,\n features,\n language,\n stroke_width,\n embedded_color,\n font_size,\n )\n\n bbox: tuple[float, float, float, float] | None = None\n\n for xy, anchor, line in lines:\n bbox_line = self.textbbox(\n xy,\n line,\n font,\n anchor,\n direction=direction,\n features=features,\n language=language,\n stroke_width=stroke_width,\n embedded_color=embedded_color,\n )\n if bbox is None:\n bbox = bbox_line\n else:\n bbox = (\n min(bbox[0], bbox_line[0]),\n min(bbox[1], bbox_line[1]),\n max(bbox[2], bbox_line[2]),\n max(bbox[3], bbox_line[3]),\n )\n\n if bbox is None:\n return xy[0], xy[1], xy[0], xy[1]\n return bbox\n\n\ndef Draw(im: Image.Image, mode: str | None = None) -> ImageDraw:\n """\n A simple 2D drawing interface for PIL images.\n\n :param im: The image to draw in.\n :param mode: Optional mode to use for color values. For RGB\n images, this argument can be RGB or RGBA (to blend the\n drawing into the image). For all other modes, this argument\n must be the same as the image mode. If omitted, the mode\n defaults to the mode of the image.\n """\n try:\n return getattr(im, "getdraw")(mode)\n except AttributeError:\n return ImageDraw(im, mode)\n\n\ndef getdraw(\n im: Image.Image | None = None, hints: list[str] | None = None\n) -> tuple[ImageDraw2.Draw | None, ModuleType]:\n """\n :param im: The image to draw in.\n :param hints: An optional list of hints. Deprecated.\n :returns: A (drawing context, drawing resource factory) tuple.\n """\n if hints is not None:\n deprecate("'hints' parameter", 12)\n from . 
import ImageDraw2\n\n draw = ImageDraw2.Draw(im) if im is not None else None\n return draw, ImageDraw2\n\n\ndef floodfill(\n image: Image.Image,\n xy: tuple[int, int],\n value: float | tuple[int, ...],\n border: float | tuple[int, ...] | None = None,\n thresh: float = 0,\n) -> None:\n """\n .. warning:: This method is experimental.\n\n Fills a bounded region with a given color.\n\n :param image: Target image.\n :param xy: Seed position (a 2-item coordinate tuple). See\n :ref:`coordinate-system`.\n :param value: Fill color.\n :param border: Optional border value. If given, the region consists of\n pixels with a color different from the border color. If not given,\n the region consists of pixels having the same color as the seed\n pixel.\n :param thresh: Optional threshold value which specifies a maximum\n tolerable difference of a pixel value from the 'background' in\n order for it to be replaced. Useful for filling regions of\n non-homogeneous, but similar, colors.\n """\n # based on an implementation by Eric S. 
Raymond\n # amended by yo1995 @20180806\n pixel = image.load()\n assert pixel is not None\n x, y = xy\n try:\n background = pixel[x, y]\n if _color_diff(value, background) <= thresh:\n return # seed point already has fill color\n pixel[x, y] = value\n except (ValueError, IndexError):\n return # seed point outside image\n edge = {(x, y)}\n # use a set to keep record of current and previous edge pixels\n # to reduce memory consumption\n full_edge = set()\n while edge:\n new_edge = set()\n for x, y in edge: # 4 adjacent method\n for s, t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):\n # If already processed, or if a coordinate is negative, skip\n if (s, t) in full_edge or s < 0 or t < 0:\n continue\n try:\n p = pixel[s, t]\n except (ValueError, IndexError):\n pass\n else:\n full_edge.add((s, t))\n if border is None:\n fill = _color_diff(p, background) <= thresh\n else:\n fill = p not in (value, border)\n if fill:\n pixel[s, t] = value\n new_edge.add((s, t))\n full_edge = edge # discard pixels processed\n edge = new_edge\n\n\ndef _compute_regular_polygon_vertices(\n bounding_circle: Sequence[Sequence[float] | float], n_sides: int, rotation: float\n) -> list[tuple[float, float]]:\n """\n Generate a list of vertices for a 2D regular polygon.\n\n :param bounding_circle: The bounding circle is a sequence defined\n by a point and radius. The polygon is inscribed in this circle.\n (e.g. ``bounding_circle=(x, y, r)`` or ``((x, y), r)``)\n :param n_sides: Number of sides\n (e.g. ``n_sides=3`` for a triangle, ``6`` for a hexagon)\n :param rotation: Apply an arbitrary rotation to the polygon\n (e.g. ``rotation=90``, applies a 90 degree rotation)\n :return: List of regular polygon vertices\n (e.g. ``[(25, 50), (50, 50), (50, 25), (25, 25)]``)\n\n How are the vertices computed?\n 1. 
Compute the following variables\n - theta: Angle between the apothem & the nearest polygon vertex\n - side_length: Length of each polygon edge\n - centroid: Center of bounding circle (1st, 2nd elements of bounding_circle)\n - polygon_radius: Polygon radius (last element of bounding_circle)\n - angles: Location of each polygon vertex in polar grid\n (e.g. A square with 0 degree rotation => [225.0, 315.0, 45.0, 135.0])\n\n 2. For each angle in angles, get the polygon vertex at that angle\n The vertex is computed using the equation below.\n X= xcos(φ) + ysin(φ)\n Y= −xsin(φ) + ycos(φ)\n\n Note:\n φ = angle in degrees\n x = 0\n y = polygon_radius\n\n The formula above assumes rotation around the origin.\n In our case, we are rotating around the centroid.\n To account for this, we use the formula below\n X = xcos(φ) + ysin(φ) + centroid_x\n Y = −xsin(φ) + ycos(φ) + centroid_y\n """\n # 1. Error Handling\n # 1.1 Check `n_sides` has an appropriate value\n if not isinstance(n_sides, int):\n msg = "n_sides should be an int" # type: ignore[unreachable]\n raise TypeError(msg)\n if n_sides < 3:\n msg = "n_sides should be an int > 2"\n raise ValueError(msg)\n\n # 1.2 Check `bounding_circle` has an appropriate value\n if not isinstance(bounding_circle, (list, tuple)):\n msg = "bounding_circle should be a sequence"\n raise TypeError(msg)\n\n if len(bounding_circle) == 3:\n if not all(isinstance(i, (int, float)) for i in bounding_circle):\n msg = "bounding_circle should only contain numeric data"\n raise ValueError(msg)\n\n *centroid, polygon_radius = cast(list[float], list(bounding_circle))\n elif len(bounding_circle) == 2 and isinstance(bounding_circle[0], (list, tuple)):\n if not all(\n isinstance(i, (int, float)) for i in bounding_circle[0]\n ) or not isinstance(bounding_circle[1], (int, float)):\n msg = "bounding_circle should only contain numeric data"\n raise ValueError(msg)\n\n if len(bounding_circle[0]) != 2:\n msg = "bounding_circle centre should contain 2D coordinates 
(e.g. (x, y))"\n raise ValueError(msg)\n\n centroid = cast(list[float], list(bounding_circle[0]))\n polygon_radius = cast(float, bounding_circle[1])\n else:\n msg = (\n "bounding_circle should contain 2D coordinates "\n "and a radius (e.g. (x, y, r) or ((x, y), r) )"\n )\n raise ValueError(msg)\n\n if polygon_radius <= 0:\n msg = "bounding_circle radius should be > 0"\n raise ValueError(msg)\n\n # 1.3 Check `rotation` has an appropriate value\n if not isinstance(rotation, (int, float)):\n msg = "rotation should be an int or float" # type: ignore[unreachable]\n raise ValueError(msg)\n\n # 2. Define Helper Functions\n def _apply_rotation(point: list[float], degrees: float) -> tuple[float, float]:\n return (\n round(\n point[0] * math.cos(math.radians(360 - degrees))\n - point[1] * math.sin(math.radians(360 - degrees))\n + centroid[0],\n 2,\n ),\n round(\n point[1] * math.cos(math.radians(360 - degrees))\n + point[0] * math.sin(math.radians(360 - degrees))\n + centroid[1],\n 2,\n ),\n )\n\n def _compute_polygon_vertex(angle: float) -> tuple[float, float]:\n start_point = [polygon_radius, 0]\n return _apply_rotation(start_point, angle)\n\n def _get_angles(n_sides: int, rotation: float) -> list[float]:\n angles = []\n degrees = 360 / n_sides\n # Start with the bottom left polygon vertex\n current_angle = (270 - 0.5 * degrees) + rotation\n for _ in range(n_sides):\n angles.append(current_angle)\n current_angle += degrees\n if current_angle > 360:\n current_angle -= 360\n return angles\n\n # 3. Variable Declarations\n angles = _get_angles(n_sides, rotation)\n\n # 4. 
Compute Vertices\n return [_compute_polygon_vertex(angle) for angle in angles]\n\n\ndef _color_diff(\n color1: float | tuple[int, ...], color2: float | tuple[int, ...]\n) -> float:\n """\n Uses 1-norm distance to calculate difference between two values.\n """\n first = color1 if isinstance(color1, tuple) else (color1,)\n second = color2 if isinstance(color2, tuple) else (color2,)\n\n return sum(abs(first[i] - second[i]) for i in range(len(second)))\n | .venv\Lib\site-packages\PIL\ImageDraw.py | ImageDraw.py | Python | 44,077 | 0.95 | 0.171266 | 0.071556 | awesome-app | 192 | 2024-06-30T11:51:31.643525 | BSD-3-Clause | false | 8eeac970a2378fb952918c81b313a2f1 |
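The primitives implemented in the file above (the `Draw` factory, `rectangle`, `regular_polygon`, and the experimental `floodfill`) can be exercised with a short sketch. This is an illustrative usage example, not part of the library source; it assumes Pillow is importable as `PIL`, and the colours and coordinates are arbitrary:

```python
from PIL import Image, ImageDraw

# Obtain a drawing context via the Draw factory (application code
# should not instantiate ImageDraw directly).
im = Image.new("RGB", (120, 120), "white")
draw = ImageDraw.Draw(im)

# Filled rectangle with a distinct outline colour and outline width.
draw.rectangle((10, 10, 70, 70), fill="blue", outline="red", width=3)

# Regular hexagon inscribed in a circle of radius 25 centred at (60, 90);
# the bounding circle may be given as ((x, y), r) or (x, y, r).
draw.regular_polygon(((60, 90), 25), n_sides=6, outline="black")

# Experimental: flood-fill the white background, starting from a corner.
# The fill spreads 4-connectedly over pixels matching the seed colour.
ImageDraw.floodfill(im, (0, 0), (0, 255, 0))
```

Because `floodfill` only spreads over pixels matching the seed colour, the blue rectangle interior is left untouched while the surrounding background turns green.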
#
# The Python Imaging Library
# $Id$
#
# WCK-style drawing interface operations
#
# History:
# 2003-12-07 fl created
# 2005-05-15 fl updated; added to PIL as ImageDraw2
# 2005-05-15 fl added text support
# 2005-05-20 fl added arc/chord/pieslice support
#
# Copyright (c) 2003-2005 by Secret Labs AB
# Copyright (c) 2003-2005 by Fredrik Lundh
#
# See the README file for information on usage and redistribution.
#


"""
(Experimental) WCK-style drawing interface operations

.. seealso:: :py:mod:`PIL.ImageDraw`
"""
from __future__ import annotations

from typing import Any, AnyStr, BinaryIO

from . import Image, ImageColor, ImageDraw, ImageFont, ImagePath
from ._typing import Coords, StrOrBytesPath


class Pen:
    """Stores an outline color and width."""

    def __init__(self, color: str, width: int = 1, opacity: int = 255) -> None:
        self.color = ImageColor.getrgb(color)
        self.width = width


class Brush:
    """Stores a fill color"""

    def __init__(self, color: str, opacity: int = 255) -> None:
        self.color = ImageColor.getrgb(color)


class Font:
    """Stores a TrueType font and color"""

    def __init__(
        self, color: str, file: StrOrBytesPath | BinaryIO, size: float = 12
    ) -> None:
        # FIXME: add support for bitmap fonts
        self.color = ImageColor.getrgb(color)
        self.font = ImageFont.truetype(file, size)


class Draw:
    """
    (Experimental) WCK-style drawing interface
    """

    def __init__(
        self,
        image: Image.Image | str,
        size: tuple[int, int] | list[int] | None = None,
        color: float | tuple[float, ...] | str | None = None,
    ) -> None:
        if isinstance(image, str):
            if size is None:
                msg = "If image argument is mode string, size must be a list or tuple"
                raise ValueError(msg)
            image = Image.new(image, size, color)
        self.draw = ImageDraw.Draw(image)
        self.image = image
        self.transform: tuple[float, float, float, float, float, float] | None = None

    def flush(self) -> Image.Image:
        return self.image

    def render(
        self,
        op: str,
        xy: Coords,
        pen: Pen | Brush | None,
        brush: Brush | Pen | None = None,
        **kwargs: Any,
    ) -> None:
        # handle color arguments
        outline = fill = None
        width = 1
        if isinstance(pen, Pen):
            outline = pen.color
            width = pen.width
        elif isinstance(brush, Pen):
            outline = brush.color
            width = brush.width
        if isinstance(brush, Brush):
            fill = brush.color
        elif isinstance(pen, Brush):
            fill = pen.color
        # handle transformation
        if self.transform:
            path = ImagePath.Path(xy)
            path.transform(self.transform)
            xy = path
        # render the item
        if op in ("arc", "line"):
            kwargs.setdefault("fill", outline)
        else:
            kwargs.setdefault("fill", fill)
            kwargs.setdefault("outline", outline)
        if op == "line":
            kwargs.setdefault("width", width)
        getattr(self.draw, op)(xy, **kwargs)

    def settransform(self, offset: tuple[float, float]) -> None:
        """Sets a transformation offset."""
        (xoffset, yoffset) = offset
        self.transform = (1, 0, xoffset, 0, 1, yoffset)

    def arc(
        self,
        xy: Coords,
        pen: Pen | Brush | None,
        start: float,
        end: float,
        *options: Any,
    ) -> None:
        """
        Draws an arc (a portion of a circle outline) between the start and end
        angles, inside the given bounding box.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.arc`
        """
        self.render("arc", xy, pen, *options, start=start, end=end)

    def chord(
        self,
        xy: Coords,
        pen: Pen | Brush | None,
        start: float,
        end: float,
        *options: Any,
    ) -> None:
        """
        Same as :py:meth:`~PIL.ImageDraw2.Draw.arc`, but connects the end points
        with a straight line.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.chord`
        """
        self.render("chord", xy, pen, *options, start=start, end=end)

    def ellipse(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None:
        """
        Draws an ellipse inside the given bounding box.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.ellipse`
        """
        self.render("ellipse", xy, pen, *options)

    def line(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None:
        """
        Draws a line between the coordinates in the ``xy`` list.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.line`
        """
        self.render("line", xy, pen, *options)

    def pieslice(
        self,
        xy: Coords,
        pen: Pen | Brush | None,
        start: float,
        end: float,
        *options: Any,
    ) -> None:
        """
        Same as arc, but also draws straight lines between the end points and the
        center of the bounding box.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.pieslice`
        """
        self.render("pieslice", xy, pen, *options, start=start, end=end)

    def polygon(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None:
        """
        Draws a polygon.

        The polygon outline consists of straight lines between the given
        coordinates, plus a straight line between the last and the first
        coordinate.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.polygon`
        """
        self.render("polygon", xy, pen, *options)

    def rectangle(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None:
        """
        Draws a rectangle.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.rectangle`
        """
        self.render("rectangle", xy, pen, *options)

    def text(self, xy: tuple[float, float], text: AnyStr, font: Font) -> None:
        """
        Draws the string at the given position.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.text`
        """
        if self.transform:
            path = ImagePath.Path(xy)
            path.transform(self.transform)
            xy = path
        self.draw.text(xy, text, font=font.font, fill=font.color)

    def textbbox(
        self, xy: tuple[float, float], text: AnyStr, font: Font
    ) -> tuple[float, float, float, float]:
        """
        Returns bounding box (in pixels) of given text.

        :return: ``(left, top, right, bottom)`` bounding box

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.textbbox`
        """
        if self.transform:
            path = ImagePath.Path(xy)
            path.transform(self.transform)
            xy = path
        return self.draw.textbbox(xy, text, font=font.font)

    def textlength(self, text: AnyStr, font: Font) -> float:
        """
        Returns length (in pixels) of given text.
        This is the amount by which following text should be offset.

        .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.textlength`
        """
        return self.draw.textlength(text, font=font.font)

.venv\Lib\site-packages\PIL\ImageDraw2.py | ImageDraw2.py | Python | 7,470 | 0.95 | 0.131687 | 0.125 | awesome-app | 866 | 2024-04-07T12:21:51.121319 | Apache-2.0 | false | 4f00af91b926666311fe7ac080920ebd
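The row above is PIL's experimental `ImageDraw2` module, which wraps `PIL.ImageDraw` behind WCK-style `Pen`, `Brush`, and `Font` objects: `render()` maps a `Pen` onto the `outline`/`width` keywords and a `Brush` onto `fill`, and `settransform()` offsets later primitives. A minimal usage sketch, assuming Pillow is installed (the sizes, colors, and coordinates here are illustrative, not from the source):

```python
from PIL import ImageDraw2

# Passing a mode string plus a size makes Draw create the canvas itself
draw = ImageDraw2.Draw("RGB", (100, 100), "white")

# Pen carries the outline color/width, Brush the fill color;
# render() dispatches them onto ImageDraw's outline/fill/width kwargs
pen = ImageDraw2.Pen("blue", width=2)
brush = ImageDraw2.Brush("red")
draw.rectangle((10, 10, 60, 60), pen, brush)

# settransform() shifts every later primitive by the given (x, y) offset
draw.settransform((20, 20))
draw.line((0, 0, 19, 0), pen)  # actually drawn from (20, 20) to (39, 20)

image = draw.flush()  # returns the underlying PIL.Image.Image
print(image.getpixel((30, 30)))  # interior of the rectangle: (255, 0, 0)
```

Note that the `opacity` parameters accepted by `Pen` and `Brush` are stored nowhere and have no effect in this version of the module; only `color` (and `width` for `Pen`) influence the output.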