Column               Type           Range / classes
content              stringlengths  1 .. 103k
path                 stringlengths  8 .. 216
filename             stringlengths  2 .. 179
language             stringclasses  15 values
size_bytes           int64          2 .. 189k
quality_score        float64        0.5 .. 0.95
complexity           float64        0 .. 1
documentation_ratio  float64        0 .. 1
repository           stringclasses  5 values
stars                int64          0 .. 1k
created_date         stringdate     2023-07-10 19:21:08 .. 2025-07-09 19:11:45
license              stringclasses  4 values
is_test              bool           2 classes
file_hash            stringlengths  32 .. 32
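The schema above describes per-file records (path, language, size, quality metrics, and so on). A minimal sketch of representing and filtering such records in plain Python; the field subset, class name, and sample values below are invented for illustration and are not part of the dataset:

```python
from dataclasses import dataclass

# A subset of the schema's columns; sample rows are invented for illustration.
@dataclass
class FileRecord:
    path: str
    filename: str
    language: str
    size_bytes: int
    quality_score: float
    is_test: bool

records = [
    FileRecord("pkg/a.py", "a.py", "Python", 1353, 0.95, False),
    FileRecord("pkg/conftest.py", "conftest.py", "Python", 8835, 0.95, True),
]

# Example filter: non-test Python files above a quality threshold.
selected = [r.path for r in records
            if r.language == "Python" and r.quality_score >= 0.9 and not r.is_test]
```

A real pipeline would typically load such rows from Parquet or JSON Lines rather than literals, but the filtering step is the same.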
\n\n
.venv\Lib\site-packages\notebook_shim\__pycache__\traits.cpython-313.pyc
traits.cpython-313.pyc
Other
6,625
0.8
0.184615
0
vue-tools
527
2024-03-23T04:36:13.142787
MIT
false
0278d9c180ae5c41e1a5f914aefc4113
\n\n
.venv\Lib\site-packages\notebook_shim\__pycache__\_version.cpython-313.pyc
_version.cpython-313.pyc
Other
261
0.7
0
0
react-lib
9
2024-11-20T16:55:09.921710
BSD-3-Clause
false
724393de48133abfcccf0082b918ff85
\n\n
.venv\Lib\site-packages\notebook_shim\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
372
0.7
0
0
vue-tools
305
2023-08-10T06:25:56.050344
GPL-3.0
false
e589fc92227532a77f242d8bd8520bba
pip\n
.venv\Lib\site-packages\notebook_shim-0.2.4.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
python-kit
779
2024-11-27T07:14:49.399347
BSD-3-Clause
false
365c9bfeb7d89244f2ce01c1de44cb85
Metadata-Version: 2.1\nName: notebook_shim\nVersion: 0.2.4\nSummary: A shim layer for notebook traits and config\nAuthor-email: Jupyter Development Team <jupyter@googlegroups.com>\nLicense: BSD 3-Clause License\n \n Copyright (c) 2022 Project Jupyter Contributors\n All rights reserved.\n \n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n \n 1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n \n 2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n \n 3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n \n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nLicense-File: LICENSE\nKeywords: ipython,jupyter\nClassifier: Framework :: Jupyter\nClassifier: Intended Audience :: Developers\nClassifier: Intended Audience :: Science/Research\nClassifier: Intended Audience :: System Administrators\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nRequires-Python: >=3.7\nRequires-Dist: jupyter-server<3,>=1.8\nProvides-Extra: test\nRequires-Dist: pytest; extra == 'test'\nRequires-Dist: pytest-console-scripts; extra == 'test'\nRequires-Dist: pytest-jupyter; extra == 'test'\nRequires-Dist: pytest-tornasync; extra == 'test'\nDescription-Content-Type: text/markdown\n\n# Notebook Shim\n\nThis project provides a way for JupyterLab and other frontends to switch to [Jupyter Server](https://github.com/jupyter/jupyter_server/) for their Python Web application backend.\n\n## Basic Usage\n\nInstall from PyPI:\n\n```\npip install notebook_shim\n```\n\nThis will automatically enable the extension in Jupyter Server.\n\n## Usage\n\nThis project also includes an API for shimming traits that moved from `NotebookApp` in to `ServerApp` in Jupyter Server. 
This can be used by applications that subclassed `NotebookApp` to leverage the Python server backend of Jupyter Notebooks. Such extensions should *now* switch to `ExtensionApp` API in Jupyter Server and add `NotebookConfigShimMixin` in their inheritance list to properly handle moved traits.\n\nFor example, an application class that previously looked like:\n\n```python\nfrom notebook.notebookapp import NotebookApp\n\nclass MyApplication(NotebookApp):\n```\n\nshould switch to look something like:\n\n```python\nfrom jupyter_server.extension.application import ExtensionApp\nfrom notebook_shim.shim import NotebookConfigShimMixin\n\nclass MyApplication(NotebookConfigShimMixin, ExtensionApp):\n```
.venv\Lib\site-packages\notebook_shim-0.2.4.dist-info\METADATA
METADATA
Other
4,032
0.95
0.111111
0.042254
node-utils
688
2025-03-08T00:17:27.644516
GPL-3.0
false
48ef1db100e20ea1b897f469300448ce
../../etc/jupyter/jupyter_server_config.d/notebook_shim.json,sha256=t1_5Rmm0oG8XxVUzl9q5ciYm87jRLd2NQ-CGYXOz6zs,106\nnotebook_shim-0.2.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nnotebook_shim-0.2.4.dist-info/METADATA,sha256=LrAxPYLARCilYGvys8eZh_p3i_bCi_4xUHLmB8Jk61c,4032\nnotebook_shim-0.2.4.dist-info/RECORD,,\nnotebook_shim-0.2.4.dist-info/WHEEL,sha256=TJPnKdtrSue7xZ_AVGkp9YXcvDrobsjBds1du3Nx6dc,87\nnotebook_shim-0.2.4.dist-info/licenses/LICENSE,sha256=YowdSp8QwvYTVckhLw2OzVKtgx35ln8fIwnJCmWp14k,1535\nnotebook_shim/__init__.py,sha256=ySX50CvFk-ahTDCiuQZ8WGIa1Rf5PYINWFjKubr0D74,127\nnotebook_shim/__pycache__/__init__.cpython-313.pyc,,\nnotebook_shim/__pycache__/_version.cpython-313.pyc,,\nnotebook_shim/__pycache__/nbserver.cpython-313.pyc,,\nnotebook_shim/__pycache__/shim.cpython-313.pyc,,\nnotebook_shim/__pycache__/traits.cpython-313.pyc,,\nnotebook_shim/_version.py,sha256=aoIwr5ZKTHblytnkJKpRE_aiywKLiL_hWlLHJMQRnj4,55\nnotebook_shim/nbserver.py,sha256=glQbNFFVrMG8W8AemOS95K_6z8EWwsFP9noj1m62nqQ,5189\nnotebook_shim/shim.py,sha256=cA_MlH-mgXtIGbOIUWQbP6VGvN25XBvoArXJ3TU7RXU,11733\nnotebook_shim/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nnotebook_shim/tests/__pycache__/__init__.cpython-313.pyc,,\nnotebook_shim/tests/__pycache__/mockextension.cpython-313.pyc,,\nnotebook_shim/tests/__pycache__/test_extension.cpython-313.pyc,,\nnotebook_shim/tests/confs/__pycache__/jupyter_my_ext_config.cpython-313.pyc,,\nnotebook_shim/tests/confs/__pycache__/jupyter_notebook_config.cpython-313.pyc,,\nnotebook_shim/tests/confs/__pycache__/jupyter_server_config.cpython-313.pyc,,\nnotebook_shim/tests/confs/jupyter_my_ext_config.py,sha256=plA6Thcub9t63GB-_fiNY_yYZTEK5XO4PJcdpOj-qGg,31\nnotebook_shim/tests/confs/jupyter_notebook_config.py,sha256=XweB_TUw_oLl68cp77sZq8ilfBN1dcwvbzWf6W4mAyA,105\nnotebook_shim/tests/confs/jupyter_server_config.py,sha256=aVmxPbQAMir-83BTCQho6dTy8ka6btHl3Q4LHnhhD18,24\nnotebook_shim/tests/mockextension.py,sha256=kH_pOOEj53uULTMY_e-FhwQ1_hRl-jVwTBrsU_0YvbE,850\nnotebook_shim/tests/test_extension.py,sha256=YlHRom_zg7WGgK4DjmSfHbH_jBM636cCEwNCLZWygE8,3850\nnotebook_shim/traits.py,sha256=F-tC2rDKVc_yZ58Zr0ETX-_IxZPYjIcvPT55ULL0LRo,5600\n
.venv\Lib\site-packages\notebook_shim-0.2.4.dist-info\RECORD
RECORD
Other
2,238
0.7
0
0
vue-tools
144
2023-10-09T13:46:42.915860
GPL-3.0
false
1dbc1c97d91597e76089dbe242dea43f
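The RECORD row above follows the wheel dist-info convention: one CSV line per installed file, with path, a `sha256=...` digest, and size in bytes, where the digest and size may be empty (as for the `*.pyc` entries). A minimal sketch of parsing such lines; `parse_record` is a hypothetical helper name, and the sample lines are taken from the record above:

```python
import csv
import io

def parse_record(text: str):
    """Parse wheel RECORD text into (path, digest_or_None, size_or_None) tuples."""
    rows = []
    for path, digest, size in csv.reader(io.StringIO(text)):
        rows.append((path, digest or None, int(size) if size else None))
    return rows

record = "\n".join([
    "notebook_shim/__init__.py,sha256=ySX50CvFk-ahTDCiuQZ8WGIa1Rf5PYINWFjKubr0D74,127",
    "notebook_shim/__pycache__/__init__.cpython-313.pyc,,",
])
entries = parse_record(record)
```

Using `csv.reader` rather than `str.split(",")` matters because RECORD paths may themselves contain quoted commas.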
Wheel-Version: 1.0\nGenerator: hatchling 1.21.1\nRoot-Is-Purelib: true\nTag: py3-none-any\n
.venv\Lib\site-packages\notebook_shim-0.2.4.dist-info\WHEEL
WHEEL
Other
87
0.5
0
0
python-kit
328
2023-12-08T01:36:23.276291
MIT
false
a2e74b4e3aea204ad48eb8854874f5a5
BSD 3-Clause License\n\nCopyright (c) 2022 Project Jupyter Contributors\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
.venv\Lib\site-packages\notebook_shim-0.2.4.dist-info\licenses\LICENSE
LICENSE
Other
1,535
0.7
0
0
python-kit
262
2023-10-28T00:56:57.822039
BSD-3-Clause
false
f91a22ac359078bf4380ccbace244c41
"""\nPytest configuration and fixtures for the Numpy test suite.\n"""\nimport os\nimport string\nimport sys\nimport tempfile\nimport warnings\nfrom contextlib import contextmanager\n\nimport hypothesis\nimport pytest\n\nimport numpy\nimport numpy as np\nfrom numpy._core._multiarray_tests import get_fpu_mode\nfrom numpy._core.tests._natype import get_stringdtype_dtype, pd_NA\nfrom numpy.testing._private.utils import NOGIL_BUILD\n\ntry:\n from scipy_doctest.conftest import dt_config\n HAVE_SCPDT = True\nexcept ModuleNotFoundError:\n HAVE_SCPDT = False\n\n\n_old_fpu_mode = None\n_collect_results = {}\n\n# Use a known and persistent tmpdir for hypothesis' caches, which\n# can be automatically cleared by the OS or user.\nhypothesis.configuration.set_hypothesis_home_dir(\n os.path.join(tempfile.gettempdir(), ".hypothesis")\n)\n\n# We register two custom profiles for Numpy - for details see\n# https://hypothesis.readthedocs.io/en/latest/settings.html\n# The first is designed for our own CI runs; the latter also\n# forces determinism and is designed for use via np.test()\nhypothesis.settings.register_profile(\n name="numpy-profile", deadline=None, print_blob=True,\n)\nhypothesis.settings.register_profile(\n name="np.test() profile",\n deadline=None, print_blob=True, database=None, derandomize=True,\n suppress_health_check=list(hypothesis.HealthCheck),\n)\n# Note that the default profile is chosen based on the presence\n# of pytest.ini, but can be overridden by passing the\n# --hypothesis-profile=NAME argument to pytest.\n_pytest_ini = os.path.join(os.path.dirname(__file__), "..", "pytest.ini")\nhypothesis.settings.load_profile(\n "numpy-profile" if os.path.isfile(_pytest_ini) else "np.test() profile"\n)\n\n# The experimentalAPI is used in _umath_tests\nos.environ["NUMPY_EXPERIMENTAL_DTYPE_API"] = "1"\n\ndef pytest_configure(config):\n config.addinivalue_line("markers",\n "valgrind_error: Tests that are known to error under valgrind.")\n config.addinivalue_line("markers",\n 
"leaks_references: Tests that are known to leak references.")\n config.addinivalue_line("markers",\n "slow: Tests that are very slow.")\n config.addinivalue_line("markers",\n "slow_pypy: Tests that are very slow on pypy.")\n\n\ndef pytest_addoption(parser):\n parser.addoption("--available-memory", action="store", default=None,\n help=("Set amount of memory available for running the "\n "test suite. This can result to tests requiring "\n "especially large amounts of memory to be skipped. "\n "Equivalent to setting environment variable "\n "NPY_AVAILABLE_MEM. Default: determined"\n "automatically."))\n\n\ngil_enabled_at_start = True\nif NOGIL_BUILD:\n gil_enabled_at_start = sys._is_gil_enabled()\n\n\ndef pytest_sessionstart(session):\n available_mem = session.config.getoption('available_memory')\n if available_mem is not None:\n os.environ['NPY_AVAILABLE_MEM'] = available_mem\n\n\ndef pytest_terminal_summary(terminalreporter, exitstatus, config):\n if NOGIL_BUILD and not gil_enabled_at_start and sys._is_gil_enabled():\n tr = terminalreporter\n tr.ensure_newline()\n tr.section("GIL re-enabled", sep="=", red=True, bold=True)\n tr.line("The GIL was re-enabled at runtime during the tests.")\n tr.line("This can happen with no test failures if the RuntimeWarning")\n tr.line("raised by Python when this happens is filtered by a test.")\n tr.line("")\n tr.line("Please ensure all new C modules declare support for running")\n tr.line("without the GIL. 
Any new tests that intentionally imports ")\n tr.line("code that re-enables the GIL should do so in a subprocess.")\n pytest.exit("GIL re-enabled during tests", returncode=1)\n\n# FIXME when yield tests are gone.\n@pytest.hookimpl()\ndef pytest_itemcollected(item):\n """\n Check FPU precision mode was not changed during test collection.\n\n The clumsy way we do it here is mainly necessary because numpy\n still uses yield tests, which can execute code at test collection\n time.\n """\n global _old_fpu_mode\n\n mode = get_fpu_mode()\n\n if _old_fpu_mode is None:\n _old_fpu_mode = mode\n elif mode != _old_fpu_mode:\n _collect_results[item] = (_old_fpu_mode, mode)\n _old_fpu_mode = mode\n\n\n@pytest.fixture(scope="function", autouse=True)\ndef check_fpu_mode(request):\n """\n Check FPU precision mode was not changed during the test.\n """\n old_mode = get_fpu_mode()\n yield\n new_mode = get_fpu_mode()\n\n if old_mode != new_mode:\n raise AssertionError(f"FPU precision mode changed from {old_mode:#x} to "\n f"{new_mode:#x} during the test")\n\n collect_result = _collect_results.get(request.node)\n if collect_result is not None:\n old_mode, new_mode = collect_result\n raise AssertionError(f"FPU precision mode changed from {old_mode:#x} to "\n f"{new_mode:#x} when collecting the test")\n\n\n@pytest.fixture(autouse=True)\ndef add_np(doctest_namespace):\n doctest_namespace['np'] = numpy\n\n@pytest.fixture(autouse=True)\ndef env_setup(monkeypatch):\n monkeypatch.setenv('PYTHONHASHSEED', '0')\n\n\nif HAVE_SCPDT:\n\n @contextmanager\n def warnings_errors_and_rng(test=None):\n """Filter out the wall of DeprecationWarnings.\n """\n msgs = ["The numpy.linalg.linalg",\n "The numpy.fft.helper",\n "dep_util",\n "pkg_resources",\n "numpy.core.umath",\n "msvccompiler",\n "Deprecated call",\n "numpy.core",\n "Importing from numpy.matlib",\n "This function is deprecated.", # random_integers\n "Data type alias 'a'", # numpy.rec.fromfile\n "Arrays of 2-dimensional vectors", # 
matlib.cross\n "`in1d` is deprecated", ]\n msg = "|".join(msgs)\n\n msgs_r = [\n "invalid value encountered",\n "divide by zero encountered"\n ]\n msg_r = "|".join(msgs_r)\n\n with warnings.catch_warnings():\n warnings.filterwarnings(\n 'ignore', category=DeprecationWarning, message=msg\n )\n warnings.filterwarnings(\n 'ignore', category=RuntimeWarning, message=msg_r\n )\n yield\n\n # find and check doctests under this context manager\n dt_config.user_context_mgr = warnings_errors_and_rng\n\n # numpy specific tweaks from refguide-check\n dt_config.rndm_markers.add('#uninitialized')\n dt_config.rndm_markers.add('# uninitialized')\n\n # make the checker pick on mismatched dtypes\n dt_config.strict_check = True\n\n import doctest\n dt_config.optionflags = doctest.NORMALIZE_WHITESPACE | doctest.ELLIPSIS\n\n # recognize the StringDType repr\n dt_config.check_namespace['StringDType'] = numpy.dtypes.StringDType\n\n # temporary skips\n dt_config.skiplist = {\n 'numpy.savez', # unclosed file\n 'numpy.matlib.savez',\n 'numpy.__array_namespace_info__',\n 'numpy.matlib.__array_namespace_info__',\n }\n\n # xfail problematic tutorials\n dt_config.pytest_extra_xfail = {\n 'how-to-verify-bug.rst': '',\n 'c-info.ufunc-tutorial.rst': '',\n 'basics.interoperability.rst': 'needs pandas',\n 'basics.dispatch.rst': 'errors out in /testing/overrides.py',\n 'basics.subclassing.rst': '.. 
testcode:: admonitions not understood',\n 'misc.rst': 'manipulates warnings',\n }\n\n # ignores are for things fail doctest collection (optionals etc)\n dt_config.pytest_extra_ignore = [\n 'numpy/distutils',\n 'numpy/_core/cversions.py',\n 'numpy/_pyinstaller',\n 'numpy/random/_examples',\n 'numpy/f2py/_backends/_distutils.py',\n ]\n\n\n@pytest.fixture\ndef random_string_list():\n chars = list(string.ascii_letters + string.digits)\n chars = np.array(chars, dtype="U1")\n ret = np.random.choice(chars, size=100 * 10, replace=True)\n return ret.view("U100")\n\n\n@pytest.fixture(params=[True, False])\ndef coerce(request):\n return request.param\n\n\n@pytest.fixture(\n params=["unset", None, pd_NA, np.nan, float("nan"), "__nan__"],\n ids=["unset", "None", "pandas.NA", "np.nan", "float('nan')", "string nan"],\n)\ndef na_object(request):\n return request.param\n\n\n@pytest.fixture()\ndef dtype(na_object, coerce):\n return get_stringdtype_dtype(na_object, coerce)\n
.venv\Lib\site-packages\numpy\conftest.py
conftest.py
Python
8,835
0.95
0.131783
0.086124
awesome-app
464
2024-06-12T06:58:53.012884
MIT
true
1f92366a9f754ca7baf2b91c16000910
"""\nThis module is home to specific dtypes related functionality and their classes.\nFor more general information about dtypes, also see `numpy.dtype` and\n:ref:`arrays.dtypes`.\n\nSimilar to the builtin ``types`` module, this submodule defines types (classes)\nthat are not widely used directly.\n\n.. versionadded:: NumPy 1.25\n\n The dtypes module is new in NumPy 1.25. Previously DType classes were\n only accessible indirectly.\n\n\nDType classes\n-------------\n\nThe following are the classes of the corresponding NumPy dtype instances and\nNumPy scalar types. The classes can be used in ``isinstance`` checks and can\nalso be instantiated or used directly. Direct use of these classes is not\ntypical, since their scalar counterparts (e.g. ``np.float64``) or strings\nlike ``"float64"`` can be used.\n"""\n\n# See doc/source/reference/routines.dtypes.rst for module-level docs\n\n__all__ = []\n\n\ndef _add_dtype_helper(DType, alias):\n # Function to add DTypes a bit more conveniently without channeling them\n # through `numpy._core._multiarray_umath` namespace or similar.\n from numpy import dtypes\n\n setattr(dtypes, DType.__name__, DType)\n __all__.append(DType.__name__)\n\n if alias:\n alias = alias.removeprefix("numpy.dtypes.")\n setattr(dtypes, alias, DType)\n __all__.append(alias)\n
.venv\Lib\site-packages\numpy\dtypes.py
dtypes.py
Python
1,353
0.95
0.073171
0.103448
vue-tools
854
2023-08-23T07:32:27.370245
MIT
false
9e09c2ddc281ffba284a681e4208648d
# ruff: noqa: ANN401\nfrom typing import (\n Any,\n Generic,\n LiteralString,\n Never,\n NoReturn,\n Self,\n TypeAlias,\n final,\n overload,\n type_check_only,\n)\nfrom typing import Literal as L\n\nfrom typing_extensions import TypeVar\n\nimport numpy as np\n\n__all__ = [ # noqa: RUF022\n 'BoolDType',\n 'Int8DType',\n 'ByteDType',\n 'UInt8DType',\n 'UByteDType',\n 'Int16DType',\n 'ShortDType',\n 'UInt16DType',\n 'UShortDType',\n 'Int32DType',\n 'IntDType',\n 'UInt32DType',\n 'UIntDType',\n 'Int64DType',\n 'LongDType',\n 'UInt64DType',\n 'ULongDType',\n 'LongLongDType',\n 'ULongLongDType',\n 'Float16DType',\n 'Float32DType',\n 'Float64DType',\n 'LongDoubleDType',\n 'Complex64DType',\n 'Complex128DType',\n 'CLongDoubleDType',\n 'ObjectDType',\n 'BytesDType',\n 'StrDType',\n 'VoidDType',\n 'DateTime64DType',\n 'TimeDelta64DType',\n 'StringDType',\n]\n\n# Helper base classes (typing-only)\n\n_ScalarT_co = TypeVar("_ScalarT_co", bound=np.generic, covariant=True)\n\n@type_check_only\nclass _SimpleDType(np.dtype[_ScalarT_co], Generic[_ScalarT_co]): # type: ignore[misc] # pyright: ignore[reportGeneralTypeIssues]\n names: None # pyright: ignore[reportIncompatibleVariableOverride]\n def __new__(cls, /) -> Self: ...\n def __getitem__(self, key: Any, /) -> NoReturn: ...\n @property\n def base(self) -> np.dtype[_ScalarT_co]: ...\n @property\n def fields(self) -> None: ...\n @property\n def isalignedstruct(self) -> L[False]: ...\n @property\n def isnative(self) -> L[True]: ...\n @property\n def ndim(self) -> L[0]: ...\n @property\n def shape(self) -> tuple[()]: ...\n @property\n def subdtype(self) -> None: ...\n\n@type_check_only\nclass _LiteralDType(_SimpleDType[_ScalarT_co], Generic[_ScalarT_co]): # type: ignore[misc]\n @property\n def flags(self) -> L[0]: ...\n @property\n def hasobject(self) -> L[False]: ...\n\n# Helper mixins (typing-only):\n\n_KindT_co = TypeVar("_KindT_co", bound=LiteralString, covariant=True)\n_CharT_co = TypeVar("_CharT_co", bound=LiteralString, 
covariant=True)\n_NumT_co = TypeVar("_NumT_co", bound=int, covariant=True)\n\n@type_check_only\nclass _TypeCodes(Generic[_KindT_co, _CharT_co, _NumT_co]):\n @final\n @property\n def kind(self) -> _KindT_co: ...\n @final\n @property\n def char(self) -> _CharT_co: ...\n @final\n @property\n def num(self) -> _NumT_co: ...\n\n@type_check_only\nclass _NoOrder:\n @final\n @property\n def byteorder(self) -> L["|"]: ...\n\n@type_check_only\nclass _NativeOrder:\n @final\n @property\n def byteorder(self) -> L["="]: ...\n\n_DataSize_co = TypeVar("_DataSize_co", bound=int, covariant=True)\n_ItemSize_co = TypeVar("_ItemSize_co", bound=int, covariant=True, default=int)\n\n@type_check_only\nclass _NBit(Generic[_DataSize_co, _ItemSize_co]):\n @final\n @property\n def alignment(self) -> _DataSize_co: ...\n @final\n @property\n def itemsize(self) -> _ItemSize_co: ...\n\n@type_check_only\nclass _8Bit(_NoOrder, _NBit[L[1], L[1]]): ...\n\n# Boolean:\n\n@final\nclass BoolDType( # type: ignore[misc]\n _TypeCodes[L["b"], L["?"], L[0]],\n _8Bit,\n _LiteralDType[np.bool],\n):\n @property\n def name(self) -> L["bool"]: ...\n @property\n def str(self) -> L["|b1"]: ...\n\n# Sized integers:\n\n@final\nclass Int8DType( # type: ignore[misc]\n _TypeCodes[L["i"], L["b"], L[1]],\n _8Bit,\n _LiteralDType[np.int8],\n):\n @property\n def name(self) -> L["int8"]: ...\n @property\n def str(self) -> L["|i1"]: ...\n\n@final\nclass UInt8DType( # type: ignore[misc]\n _TypeCodes[L["u"], L["B"], L[2]],\n _8Bit,\n _LiteralDType[np.uint8],\n):\n @property\n def name(self) -> L["uint8"]: ...\n @property\n def str(self) -> L["|u1"]: ...\n\n@final\nclass Int16DType( # type: ignore[misc]\n _TypeCodes[L["i"], L["h"], L[3]],\n _NativeOrder,\n _NBit[L[2], L[2]],\n _LiteralDType[np.int16],\n):\n @property\n def name(self) -> L["int16"]: ...\n @property\n def str(self) -> L["<i2", ">i2"]: ...\n\n@final\nclass UInt16DType( # type: ignore[misc]\n _TypeCodes[L["u"], L["H"], L[4]],\n _NativeOrder,\n _NBit[L[2], L[2]],\n 
_LiteralDType[np.uint16],\n):\n @property\n def name(self) -> L["uint16"]: ...\n @property\n def str(self) -> L["<u2", ">u2"]: ...\n\n@final\nclass Int32DType( # type: ignore[misc]\n _TypeCodes[L["i"], L["i", "l"], L[5, 7]],\n _NativeOrder,\n _NBit[L[4], L[4]],\n _LiteralDType[np.int32],\n):\n @property\n def name(self) -> L["int32"]: ...\n @property\n def str(self) -> L["<i4", ">i4"]: ...\n\n@final\nclass UInt32DType( # type: ignore[misc]\n _TypeCodes[L["u"], L["I", "L"], L[6, 8]],\n _NativeOrder,\n _NBit[L[4], L[4]],\n _LiteralDType[np.uint32],\n):\n @property\n def name(self) -> L["uint32"]: ...\n @property\n def str(self) -> L["<u4", ">u4"]: ...\n\n@final\nclass Int64DType( # type: ignore[misc]\n _TypeCodes[L["i"], L["l", "q"], L[7, 9]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.int64],\n):\n @property\n def name(self) -> L["int64"]: ...\n @property\n def str(self) -> L["<i8", ">i8"]: ...\n\n@final\nclass UInt64DType( # type: ignore[misc]\n _TypeCodes[L["u"], L["L", "Q"], L[8, 10]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.uint64],\n):\n @property\n def name(self) -> L["uint64"]: ...\n @property\n def str(self) -> L["<u8", ">u8"]: ...\n\n# Standard C-named version/alias:\n# NOTE: Don't make these `Final`: it will break stubtest\nByteDType = Int8DType\nUByteDType = UInt8DType\nShortDType = Int16DType\nUShortDType = UInt16DType\n\n@final\nclass IntDType( # type: ignore[misc]\n _TypeCodes[L["i"], L["i"], L[5]],\n _NativeOrder,\n _NBit[L[4], L[4]],\n _LiteralDType[np.intc],\n):\n @property\n def name(self) -> L["int32"]: ...\n @property\n def str(self) -> L["<i4", ">i4"]: ...\n\n@final\nclass UIntDType( # type: ignore[misc]\n _TypeCodes[L["u"], L["I"], L[6]],\n _NativeOrder,\n _NBit[L[4], L[4]],\n _LiteralDType[np.uintc],\n):\n @property\n def name(self) -> L["uint32"]: ...\n @property\n def str(self) -> L["<u4", ">u4"]: ...\n\n@final\nclass LongDType( # type: ignore[misc]\n _TypeCodes[L["i"], L["l"], L[7]],\n _NativeOrder,\n _NBit[L[4, 
8], L[4, 8]],\n _LiteralDType[np.long],\n):\n @property\n def name(self) -> L["int32", "int64"]: ...\n @property\n def str(self) -> L["<i4", ">i4", "<i8", ">i8"]: ...\n\n@final\nclass ULongDType( # type: ignore[misc]\n _TypeCodes[L["u"], L["L"], L[8]],\n _NativeOrder,\n _NBit[L[4, 8], L[4, 8]],\n _LiteralDType[np.ulong],\n):\n @property\n def name(self) -> L["uint32", "uint64"]: ...\n @property\n def str(self) -> L["<u4", ">u4", "<u8", ">u8"]: ...\n\n@final\nclass LongLongDType( # type: ignore[misc]\n _TypeCodes[L["i"], L["q"], L[9]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.longlong],\n):\n @property\n def name(self) -> L["int64"]: ...\n @property\n def str(self) -> L["<i8", ">i8"]: ...\n\n@final\nclass ULongLongDType( # type: ignore[misc]\n _TypeCodes[L["u"], L["Q"], L[10]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.ulonglong],\n):\n @property\n def name(self) -> L["uint64"]: ...\n @property\n def str(self) -> L["<u8", ">u8"]: ...\n\n# Floats:\n\n@final\nclass Float16DType( # type: ignore[misc]\n _TypeCodes[L["f"], L["e"], L[23]],\n _NativeOrder,\n _NBit[L[2], L[2]],\n _LiteralDType[np.float16],\n):\n @property\n def name(self) -> L["float16"]: ...\n @property\n def str(self) -> L["<f2", ">f2"]: ...\n\n@final\nclass Float32DType( # type: ignore[misc]\n _TypeCodes[L["f"], L["f"], L[11]],\n _NativeOrder,\n _NBit[L[4], L[4]],\n _LiteralDType[np.float32],\n):\n @property\n def name(self) -> L["float32"]: ...\n @property\n def str(self) -> L["<f4", ">f4"]: ...\n\n@final\nclass Float64DType( # type: ignore[misc]\n _TypeCodes[L["f"], L["d"], L[12]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.float64],\n):\n @property\n def name(self) -> L["float64"]: ...\n @property\n def str(self) -> L["<f8", ">f8"]: ...\n\n@final\nclass LongDoubleDType( # type: ignore[misc]\n _TypeCodes[L["f"], L["g"], L[13]],\n _NativeOrder,\n _NBit[L[8, 12, 16], L[8, 12, 16]],\n _LiteralDType[np.longdouble],\n):\n @property\n def name(self) -> L["float64", 
"float96", "float128"]: ...\n @property\n def str(self) -> L["<f8", ">f8", "<f12", ">f12", "<f16", ">f16"]: ...\n\n# Complex:\n\n@final\nclass Complex64DType( # type: ignore[misc]\n _TypeCodes[L["c"], L["F"], L[14]],\n _NativeOrder,\n _NBit[L[4], L[8]],\n _LiteralDType[np.complex64],\n):\n @property\n def name(self) -> L["complex64"]: ...\n @property\n def str(self) -> L["<c8", ">c8"]: ...\n\n@final\nclass Complex128DType( # type: ignore[misc]\n _TypeCodes[L["c"], L["D"], L[15]],\n _NativeOrder,\n _NBit[L[8], L[16]],\n _LiteralDType[np.complex128],\n):\n @property\n def name(self) -> L["complex128"]: ...\n @property\n def str(self) -> L["<c16", ">c16"]: ...\n\n@final\nclass CLongDoubleDType( # type: ignore[misc]\n _TypeCodes[L["c"], L["G"], L[16]],\n _NativeOrder,\n _NBit[L[8, 12, 16], L[16, 24, 32]],\n _LiteralDType[np.clongdouble],\n):\n @property\n def name(self) -> L["complex128", "complex192", "complex256"]: ...\n @property\n def str(self) -> L["<c16", ">c16", "<c24", ">c24", "<c32", ">c32"]: ...\n\n# Python objects:\n\n@final\nclass ObjectDType( # type: ignore[misc]\n _TypeCodes[L["O"], L["O"], L[17]],\n _NoOrder,\n _NBit[L[8], L[8]],\n _SimpleDType[np.object_],\n):\n @property\n def hasobject(self) -> L[True]: ...\n @property\n def name(self) -> L["object"]: ...\n @property\n def str(self) -> L["|O"]: ...\n\n# Flexible:\n\n@final\nclass BytesDType( # type: ignore[misc]\n _TypeCodes[L["S"], L["S"], L[18]],\n _NoOrder,\n _NBit[L[1], _ItemSize_co],\n _SimpleDType[np.bytes_],\n Generic[_ItemSize_co],\n):\n def __new__(cls, size: _ItemSize_co, /) -> BytesDType[_ItemSize_co]: ...\n @property\n def hasobject(self) -> L[False]: ...\n @property\n def name(self) -> LiteralString: ...\n @property\n def str(self) -> LiteralString: ...\n\n@final\nclass StrDType( # type: ignore[misc]\n _TypeCodes[L["U"], L["U"], L[19]],\n _NativeOrder,\n _NBit[L[4], _ItemSize_co],\n _SimpleDType[np.str_],\n Generic[_ItemSize_co],\n):\n def __new__(cls, size: _ItemSize_co, /) -> 
StrDType[_ItemSize_co]: ...\n @property\n def hasobject(self) -> L[False]: ...\n @property\n def name(self) -> LiteralString: ...\n @property\n def str(self) -> LiteralString: ...\n\n@final\nclass VoidDType( # type: ignore[misc]\n _TypeCodes[L["V"], L["V"], L[20]],\n _NoOrder,\n _NBit[L[1], _ItemSize_co],\n np.dtype[np.void], # pyright: ignore[reportGeneralTypeIssues]\n Generic[_ItemSize_co],\n):\n # NOTE: `VoidDType(...)` raises a `TypeError` at the moment\n def __new__(cls, length: _ItemSize_co, /) -> NoReturn: ...\n @property\n def base(self) -> Self: ...\n @property\n def isalignedstruct(self) -> L[False]: ...\n @property\n def isnative(self) -> L[True]: ...\n @property\n def ndim(self) -> L[0]: ...\n @property\n def shape(self) -> tuple[()]: ...\n @property\n def subdtype(self) -> None: ...\n @property\n def name(self) -> LiteralString: ...\n @property\n def str(self) -> LiteralString: ...\n\n# Other:\n\n_DateUnit: TypeAlias = L["Y", "M", "W", "D"]\n_TimeUnit: TypeAlias = L["h", "m", "s", "ms", "us", "ns", "ps", "fs", "as"]\n_DateTimeUnit: TypeAlias = _DateUnit | _TimeUnit\n\n@final\nclass DateTime64DType( # type: ignore[misc]\n _TypeCodes[L["M"], L["M"], L[21]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.datetime64],\n):\n # NOTE: `DateTime64DType(...)` raises a `TypeError` at the moment\n # TODO: Once implemented, don't forget the`unit: L["μs"]` overload.\n def __new__(cls, unit: _DateTimeUnit, /) -> NoReturn: ...\n @property\n def name(self) -> L[\n "datetime64",\n "datetime64[Y]",\n "datetime64[M]",\n "datetime64[W]",\n "datetime64[D]",\n "datetime64[h]",\n "datetime64[m]",\n "datetime64[s]",\n "datetime64[ms]",\n "datetime64[us]",\n "datetime64[ns]",\n "datetime64[ps]",\n "datetime64[fs]",\n "datetime64[as]",\n ]: ...\n @property\n def str(self) -> L[\n "<M8", ">M8",\n "<M8[Y]", ">M8[Y]",\n "<M8[M]", ">M8[M]",\n "<M8[W]", ">M8[W]",\n "<M8[D]", ">M8[D]",\n "<M8[h]", ">M8[h]",\n "<M8[m]", ">M8[m]",\n "<M8[s]", ">M8[s]",\n "<M8[ms]", 
">M8[ms]",\n "<M8[us]", ">M8[us]",\n "<M8[ns]", ">M8[ns]",\n "<M8[ps]", ">M8[ps]",\n "<M8[fs]", ">M8[fs]",\n "<M8[as]", ">M8[as]",\n ]: ...\n\n@final\nclass TimeDelta64DType( # type: ignore[misc]\n _TypeCodes[L["m"], L["m"], L[22]],\n _NativeOrder,\n _NBit[L[8], L[8]],\n _LiteralDType[np.timedelta64],\n):\n # NOTE: `TimeDelta64DType(...)` raises a `TypeError` at the moment\n # TODO: Once implemented, don't forget to overload on `unit: L["μs"]`.\n def __new__(cls, unit: _DateTimeUnit, /) -> NoReturn: ...\n @property\n def name(self) -> L[\n "timedelta64",\n "timedelta64[Y]",\n "timedelta64[M]",\n "timedelta64[W]",\n "timedelta64[D]",\n "timedelta64[h]",\n "timedelta64[m]",\n "timedelta64[s]",\n "timedelta64[ms]",\n "timedelta64[us]",\n "timedelta64[ns]",\n "timedelta64[ps]",\n "timedelta64[fs]",\n "timedelta64[as]",\n ]: ...\n @property\n def str(self) -> L[\n "<m8", ">m8",\n "<m8[Y]", ">m8[Y]",\n "<m8[M]", ">m8[M]",\n "<m8[W]", ">m8[W]",\n "<m8[D]", ">m8[D]",\n "<m8[h]", ">m8[h]",\n "<m8[m]", ">m8[m]",\n "<m8[s]", ">m8[s]",\n "<m8[ms]", ">m8[ms]",\n "<m8[us]", ">m8[us]",\n "<m8[ns]", ">m8[ns]",\n "<m8[ps]", ">m8[ps]",\n "<m8[fs]", ">m8[fs]",\n "<m8[as]", ">m8[as]",\n ]: ...\n\n_NaObjectT_co = TypeVar("_NaObjectT_co", default=Never, covariant=True)\n\n@final\nclass StringDType( # type: ignore[misc]\n _TypeCodes[L["T"], L["T"], L[2056]],\n _NativeOrder,\n _NBit[L[8], L[16]],\n # TODO(jorenham): change once we have a string scalar type:\n # https://github.com/numpy/numpy/issues/28165\n np.dtype[str], # type: ignore[type-var] # pyright: ignore[reportGeneralTypeIssues, reportInvalidTypeArguments]\n Generic[_NaObjectT_co],\n):\n @property\n def na_object(self) -> _NaObjectT_co: ...\n @property\n def coerce(self) -> L[True]: ...\n\n #\n @overload\n def __new__(cls, /, *, coerce: bool = True) -> Self: ...\n @overload\n def __new__(cls, /, *, na_object: _NaObjectT_co, coerce: bool = True) -> Self: ...\n\n #\n def __getitem__(self, key: Never, /) -> NoReturn: ... 
# type: ignore[override] # pyright: ignore[reportIncompatibleMethodOverride]\n @property\n def fields(self) -> None: ...\n @property\n def base(self) -> Self: ...\n @property\n def ndim(self) -> L[0]: ...\n @property\n def shape(self) -> tuple[()]: ...\n\n #\n @property\n def name(self) -> L["StringDType64", "StringDType128"]: ...\n @property\n def subdtype(self) -> None: ...\n @property\n def type(self) -> type[str]: ...\n @property\n def str(self) -> L["|T8", "|T16"]: ...\n\n #\n @property\n def hasobject(self) -> L[True]: ...\n @property\n def isalignedstruct(self) -> L[False]: ...\n @property\n def isnative(self) -> L[True]: ...\n
.venv\Lib\site-packages\numpy\dtypes.pyi
dtypes.pyi
Other
16,175
0.95
0.22187
0.04014
node-utils
618
2023-11-19T15:42:18.994737
BSD-3-Clause
false
69e3acd4e9d4f94fbc0d0d90302125bf
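The `DateTime64DType` stub in the record above spells out one `Literal` entry per datetime unit, for both `name` and the byte-order-prefixed `str` forms. A plain-Python sketch (the list names mirror the stub's `_DateUnit`/`_TimeUnit` aliases; this is illustrative, not part of the stub) shows how those literal lists line up with the unit codes:

```python
# Unit codes mirrored from the stub's _DateUnit / _TimeUnit type aliases.
date_units = ["Y", "M", "W", "D"]
time_units = ["h", "m", "s", "ms", "us", "ns", "ps", "fs", "as"]

# The bare dtype name plus one entry per unit, matching the Literal in `name`.
datetime_names = ["datetime64"] + [f"datetime64[{u}]" for u in date_units + time_units]

# The `str` property pairs each unit with both byte orders ("<" and ">").
datetime_strs = ["<M8", ">M8"] + [
    f"{order}M8[{u}]" for u in date_units + time_units for order in ("<", ">")
]

print(len(datetime_names), len(datetime_strs))  # → 14 28
```

The same construction with `m8` in place of `M8` reproduces the `TimeDelta64DType` literals.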
"""\nExceptions and Warnings\n=======================\n\nGeneral exceptions used by NumPy. Note that some exceptions may be module\nspecific, such as linear algebra errors.\n\n.. versionadded:: NumPy 1.25\n\n The exceptions module is new in NumPy 1.25. Older exceptions remain\n available through the main NumPy namespace for compatibility.\n\n.. currentmodule:: numpy.exceptions\n\nWarnings\n--------\n.. autosummary::\n :toctree: generated/\n\n ComplexWarning Given when converting complex to real.\n VisibleDeprecationWarning Same as a DeprecationWarning, but more visible.\n RankWarning Issued when the design matrix is rank deficient.\n\nExceptions\n----------\n.. autosummary::\n :toctree: generated/\n\n AxisError Given when an axis was invalid.\n DTypePromotionError Given when no common dtype could be found.\n TooHardError Error specific to `numpy.shares_memory`.\n\n"""\n\n\n__all__ = [\n "ComplexWarning", "VisibleDeprecationWarning", "ModuleDeprecationWarning",\n "TooHardError", "AxisError", "DTypePromotionError"]\n\n\n# Disallow reloading this module so as to preserve the identities of the\n# classes defined here.\nif '_is_loaded' in globals():\n raise RuntimeError('Reloading numpy._globals is not allowed')\n_is_loaded = True\n\n\nclass ComplexWarning(RuntimeWarning):\n """\n The warning raised when casting a complex dtype to a real dtype.\n\n As implemented, casting a complex number to a real discards its imaginary\n part, but this behavior may not be what the user actually wants.\n\n """\n pass\n\n\nclass ModuleDeprecationWarning(DeprecationWarning):\n """Module deprecation warning.\n\n .. warning::\n\n This warning should not be used, since nose testing is not relevant\n anymore.\n\n The nose tester turns ordinary Deprecation warnings into test failures.\n That makes it hard to deprecate whole modules, because they get\n imported by default. 
So this is a special Deprecation warning that the\n nose tester will let pass without making tests fail.\n\n """\n pass\n\n\nclass VisibleDeprecationWarning(UserWarning):\n """Visible deprecation warning.\n\n By default, python will not show deprecation warnings, so this class\n can be used when a very visible warning is helpful, for example because\n the usage is most likely a user bug.\n\n """\n pass\n\n\nclass RankWarning(RuntimeWarning):\n """Matrix rank warning.\n\n Issued by polynomial functions when the design matrix is rank deficient.\n\n """\n pass\n\n\n# Exception used in shares_memory()\nclass TooHardError(RuntimeError):\n """``max_work`` was exceeded.\n\n This is raised whenever the maximum number of candidate solutions\n to consider specified by the ``max_work`` parameter is exceeded.\n Assigning a finite number to ``max_work`` may have caused the operation\n to fail.\n\n """\n pass\n\n\nclass AxisError(ValueError, IndexError):\n """Axis supplied was invalid.\n\n This is raised whenever an ``axis`` parameter is specified that is larger\n than the number of array dimensions.\n For compatibility with code written against older numpy versions, which\n raised a mixture of :exc:`ValueError` and :exc:`IndexError` for this\n situation, this exception subclasses both to ensure that\n ``except ValueError`` and ``except IndexError`` statements continue\n to catch ``AxisError``.\n\n Parameters\n ----------\n axis : int or str\n The out of bounds axis or a custom exception message.\n If an axis is provided, then `ndim` should be specified as well.\n ndim : int, optional\n The number of array dimensions.\n msg_prefix : str, optional\n A prefix for the exception message.\n\n Attributes\n ----------\n axis : int, optional\n The out of bounds axis or ``None`` if a custom exception\n message was provided. This should be the axis as passed by\n the user, before any normalization to resolve negative indices.\n\n .. 
versionadded:: 1.22\n ndim : int, optional\n The number of array dimensions or ``None`` if a custom exception\n message was provided.\n\n .. versionadded:: 1.22\n\n\n Examples\n --------\n >>> import numpy as np\n >>> array_1d = np.arange(10)\n >>> np.cumsum(array_1d, axis=1)\n Traceback (most recent call last):\n ...\n numpy.exceptions.AxisError: axis 1 is out of bounds for array of dimension 1\n\n Negative axes are preserved:\n\n >>> np.cumsum(array_1d, axis=-2)\n Traceback (most recent call last):\n ...\n numpy.exceptions.AxisError: axis -2 is out of bounds for array of dimension 1\n\n The class constructor generally takes the axis and arrays'\n dimensionality as arguments:\n\n >>> print(np.exceptions.AxisError(2, 1, msg_prefix='error'))\n error: axis 2 is out of bounds for array of dimension 1\n\n Alternatively, a custom exception message can be passed:\n\n >>> print(np.exceptions.AxisError('Custom error message'))\n Custom error message\n\n """\n\n __slots__ = ("_msg", "axis", "ndim")\n\n def __init__(self, axis, ndim=None, msg_prefix=None):\n if ndim is msg_prefix is None:\n # single-argument form: directly set the error message\n self._msg = axis\n self.axis = None\n self.ndim = None\n else:\n self._msg = msg_prefix\n self.axis = axis\n self.ndim = ndim\n\n def __str__(self):\n axis = self.axis\n ndim = self.ndim\n\n if axis is ndim is None:\n return self._msg\n else:\n msg = f"axis {axis} is out of bounds for array of dimension {ndim}"\n if self._msg is not None:\n msg = f"{self._msg}: {msg}"\n return msg\n\n\nclass DTypePromotionError(TypeError):\n """Multiple DTypes could not be converted to a common one.\n\n This exception derives from ``TypeError`` and is raised whenever dtypes\n cannot be converted to a single common one. This can be because they\n are of a different category/class or incompatible instances of the same\n one (see Examples).\n\n Notes\n -----\n Many functions will use promotion to find the correct result and\n implementation. 
For these functions the error will typically be chained\n with a more specific error indicating that no implementation was found\n for the input dtypes.\n\n Typically promotion should be considered "invalid" between the dtypes of\n two arrays when `arr1 == arr2` can safely return all ``False`` because the\n dtypes are fundamentally different.\n\n Examples\n --------\n Datetimes and complex numbers are incompatible classes and cannot be\n promoted:\n\n >>> import numpy as np\n >>> np.result_type(np.dtype("M8[s]"), np.complex128) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n DTypePromotionError: The DType <class 'numpy.dtype[datetime64]'> could not\n be promoted by <class 'numpy.dtype[complex128]'>. This means that no common\n DType exists for the given inputs. For example they cannot be stored in a\n single array unless the dtype is `object`. The full list of DTypes is:\n (<class 'numpy.dtype[datetime64]'>, <class 'numpy.dtype[complex128]'>)\n\n For example for structured dtypes, the structure can mismatch and the\n same ``DTypePromotionError`` is given when two structured dtypes with\n a mismatch in their number of fields is given:\n\n >>> dtype1 = np.dtype([("field1", np.float64), ("field2", np.int64)])\n >>> dtype2 = np.dtype([("field1", np.float64)])\n >>> np.promote_types(dtype1, dtype2) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n DTypePromotionError: field names `('field1', 'field2')` and `('field1',)`\n mismatch.\n\n """ # noqa: E501\n pass\n
.venv\Lib\site-packages\numpy\exceptions.py
exceptions.py
Python
8,047
0.95
0.137652
0.021978
python-kit
538
2024-12-06T02:56:04.477986
Apache-2.0
false
38b34ead9424761b84cc8293e5d3f33e
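As the `AxisError` docstring in the record above explains, the class subclasses both `ValueError` and `IndexError` so that code written against older NumPy keeps catching it with either `except` clause. A minimal stand-in (pure Python; the class body here is a sketch, not NumPy's full implementation) demonstrates the effect:

```python
class AxisError(ValueError, IndexError):
    """Minimal stand-in mirroring numpy.exceptions.AxisError's base classes."""

def catches(exc_type, err):
    # Return True if `except exc_type` would catch `err`.
    try:
        raise err
    except exc_type:
        return True
    except Exception:
        return False

err = AxisError("axis 2 is out of bounds for array of dimension 1")
print(catches(ValueError, err), catches(IndexError, err))  # → True True
```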
from typing import overload\n\n__all__ = [\n "ComplexWarning",\n "VisibleDeprecationWarning",\n "ModuleDeprecationWarning",\n "TooHardError",\n "AxisError",\n "DTypePromotionError",\n]\n\nclass ComplexWarning(RuntimeWarning): ...\nclass ModuleDeprecationWarning(DeprecationWarning): ...\nclass VisibleDeprecationWarning(UserWarning): ...\nclass RankWarning(RuntimeWarning): ...\nclass TooHardError(RuntimeError): ...\nclass DTypePromotionError(TypeError): ...\n\nclass AxisError(ValueError, IndexError):\n axis: int | None\n ndim: int | None\n @overload\n def __init__(self, axis: str, ndim: None = ..., msg_prefix: None = ...) -> None: ...\n @overload\n def __init__(self, axis: int, ndim: int, msg_prefix: str | None = ...) -> None: ...\n
.venv\Lib\site-packages\numpy\exceptions.pyi
exceptions.pyi
Other
776
0.85
0.36
0
react-lib
480
2024-05-15T11:41:15.240120
Apache-2.0
false
61a88a40466862d7e4563701b9d7fe50
import warnings\n\n# 2018-05-29, PendingDeprecationWarning added to matrix.__new__\n# 2020-01-23, numpy 1.19.0 PendingDeprecationWarning\nwarnings.warn("Importing from numpy.matlib is deprecated since 1.19.0. "\n "The matrix subclass is not the recommended way to represent "\n "matrices or deal with linear algebra (see "\n "https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). "\n "Please adjust your code to use regular ndarray. ",\n PendingDeprecationWarning, stacklevel=2)\n\nimport numpy as np\n\n# Matlib.py contains all functions in the numpy namespace with a few\n# replacements. See doc/source/reference/routines.matlib.rst for details.\n# Need * as we're copying the numpy namespace.\nfrom numpy import * # noqa: F403\nfrom numpy.matrixlib.defmatrix import asmatrix, matrix\n\n__version__ = np.__version__\n\n__all__ = ['rand', 'randn', 'repmat']\n__all__ += np.__all__\n\ndef empty(shape, dtype=None, order='C'):\n """Return a new matrix of given shape and type, without initializing entries.\n\n Parameters\n ----------\n shape : int or tuple of int\n Shape of the empty matrix.\n dtype : data-type, optional\n Desired output data-type.\n order : {'C', 'F'}, optional\n Whether to store multi-dimensional data in row-major\n (C-style) or column-major (Fortran-style) order in\n memory.\n\n See Also\n --------\n numpy.empty : Equivalent array function.\n matlib.zeros : Return a matrix of zeros.\n matlib.ones : Return a matrix of ones.\n\n Notes\n -----\n Unlike other matrix creation functions (e.g. `matlib.zeros`,\n `matlib.ones`), `matlib.empty` does not initialize the values of the\n matrix, and may therefore be marginally faster. However, the values\n stored in the newly allocated matrix are arbitrary. 
For reproducible\n behavior, be sure to set each element of the matrix before reading.\n\n Examples\n --------\n >>> import numpy.matlib\n >>> np.matlib.empty((2, 2)) # filled with random data\n matrix([[ 6.76425276e-320, 9.79033856e-307], # random\n [ 7.39337286e-309, 3.22135945e-309]])\n >>> np.matlib.empty((2, 2), dtype=int)\n matrix([[ 6600475, 0], # random\n [ 6586976, 22740995]])\n\n """\n return ndarray.__new__(matrix, shape, dtype, order=order)\n\ndef ones(shape, dtype=None, order='C'):\n """\n Matrix of ones.\n\n Return a matrix of given shape and type, filled with ones.\n\n Parameters\n ----------\n shape : {sequence of ints, int}\n Shape of the matrix\n dtype : data-type, optional\n The desired data-type for the matrix, default is np.float64.\n order : {'C', 'F'}, optional\n Whether to store matrix in C- or Fortran-contiguous order,\n default is 'C'.\n\n Returns\n -------\n out : matrix\n Matrix of ones of given shape, dtype, and order.\n\n See Also\n --------\n ones : Array of ones.\n matlib.zeros : Zero matrix.\n\n Notes\n -----\n If `shape` has length one i.e. 
``(N,)``, or is a scalar ``N``,\n `out` becomes a single row matrix of shape ``(1,N)``.\n\n Examples\n --------\n >>> np.matlib.ones((2,3))\n matrix([[1., 1., 1.],\n [1., 1., 1.]])\n\n >>> np.matlib.ones(2)\n matrix([[1., 1.]])\n\n """\n a = ndarray.__new__(matrix, shape, dtype, order=order)\n a.fill(1)\n return a\n\ndef zeros(shape, dtype=None, order='C'):\n """\n Return a matrix of given shape and type, filled with zeros.\n\n Parameters\n ----------\n shape : int or sequence of ints\n Shape of the matrix\n dtype : data-type, optional\n The desired data-type for the matrix, default is float.\n order : {'C', 'F'}, optional\n Whether to store the result in C- or Fortran-contiguous order,\n default is 'C'.\n\n Returns\n -------\n out : matrix\n Zero matrix of given shape, dtype, and order.\n\n See Also\n --------\n numpy.zeros : Equivalent array function.\n matlib.ones : Return a matrix of ones.\n\n Notes\n -----\n If `shape` has length one i.e. ``(N,)``, or is a scalar ``N``,\n `out` becomes a single row matrix of shape ``(1,N)``.\n\n Examples\n --------\n >>> import numpy.matlib\n >>> np.matlib.zeros((2, 3))\n matrix([[0., 0., 0.],\n [0., 0., 0.]])\n\n >>> np.matlib.zeros(2)\n matrix([[0., 0.]])\n\n """\n a = ndarray.__new__(matrix, shape, dtype, order=order)\n a.fill(0)\n return a\n\ndef identity(n, dtype=None):\n """\n Returns the square identity matrix of given size.\n\n Parameters\n ----------\n n : int\n Size of the returned identity matrix.\n dtype : data-type, optional\n Data-type of the output. 
Defaults to ``float``.\n\n Returns\n -------\n out : matrix\n `n` x `n` matrix with its main diagonal set to one,\n and all other elements zero.\n\n See Also\n --------\n numpy.identity : Equivalent array function.\n matlib.eye : More general matrix identity function.\n\n Examples\n --------\n >>> import numpy.matlib\n >>> np.matlib.identity(3, dtype=int)\n matrix([[1, 0, 0],\n [0, 1, 0],\n [0, 0, 1]])\n\n """\n a = array([1] + n * [0], dtype=dtype)\n b = empty((n, n), dtype=dtype)\n b.flat = a\n return b\n\ndef eye(n, M=None, k=0, dtype=float, order='C'):\n """\n Return a matrix with ones on the diagonal and zeros elsewhere.\n\n Parameters\n ----------\n n : int\n Number of rows in the output.\n M : int, optional\n Number of columns in the output, defaults to `n`.\n k : int, optional\n Index of the diagonal: 0 refers to the main diagonal,\n a positive value refers to an upper diagonal,\n and a negative value to a lower diagonal.\n dtype : dtype, optional\n Data-type of the returned matrix.\n order : {'C', 'F'}, optional\n Whether the output should be stored in row-major (C-style) or\n column-major (Fortran-style) order in memory.\n\n Returns\n -------\n I : matrix\n A `n` x `M` matrix where all elements are equal to zero,\n except for the `k`-th diagonal, whose values are equal to one.\n\n See Also\n --------\n numpy.eye : Equivalent array function.\n identity : Square identity matrix.\n\n Examples\n --------\n >>> import numpy.matlib\n >>> np.matlib.eye(3, k=1, dtype=float)\n matrix([[0., 1., 0.],\n [0., 0., 1.],\n [0., 0., 0.]])\n\n """\n return asmatrix(np.eye(n, M=M, k=k, dtype=dtype, order=order))\n\ndef rand(*args):\n """\n Return a matrix of random values with given shape.\n\n Create a matrix of the given shape and propagate it with\n random samples from a uniform distribution over ``[0, 1)``.\n\n Parameters\n ----------\n \\*args : Arguments\n Shape of the output.\n If given as N integers, each integer specifies the size of one\n dimension.\n If given as a 
tuple, this tuple gives the complete shape.\n\n Returns\n -------\n out : ndarray\n The matrix of random values with shape given by `\\*args`.\n\n See Also\n --------\n randn, numpy.random.RandomState.rand\n\n Examples\n --------\n >>> np.random.seed(123)\n >>> import numpy.matlib\n >>> np.matlib.rand(2, 3)\n matrix([[0.69646919, 0.28613933, 0.22685145],\n [0.55131477, 0.71946897, 0.42310646]])\n >>> np.matlib.rand((2, 3))\n matrix([[0.9807642 , 0.68482974, 0.4809319 ],\n [0.39211752, 0.34317802, 0.72904971]])\n\n If the first argument is a tuple, other arguments are ignored:\n\n >>> np.matlib.rand((2, 3), 4)\n matrix([[0.43857224, 0.0596779 , 0.39804426],\n [0.73799541, 0.18249173, 0.17545176]])\n\n """\n if isinstance(args[0], tuple):\n args = args[0]\n return asmatrix(np.random.rand(*args))\n\ndef randn(*args):\n """\n Return a random matrix with data from the "standard normal" distribution.\n\n `randn` generates a matrix filled with random floats sampled from a\n univariate "normal" (Gaussian) distribution of mean 0 and variance 1.\n\n Parameters\n ----------\n \\*args : Arguments\n Shape of the output.\n If given as N integers, each integer specifies the size of one\n dimension. If given as a tuple, this tuple gives the complete shape.\n\n Returns\n -------\n Z : matrix of floats\n A matrix of floating-point samples drawn from the standard normal\n distribution.\n\n See Also\n --------\n rand, numpy.random.RandomState.randn\n\n Notes\n -----\n For random samples from the normal distribution with mean ``mu`` and\n standard deviation ``sigma``, use::\n\n sigma * np.matlib.randn(...) 
+ mu\n\n Examples\n --------\n >>> np.random.seed(123)\n >>> import numpy.matlib\n >>> np.matlib.randn(1)\n matrix([[-1.0856306]])\n >>> np.matlib.randn(1, 2, 3)\n matrix([[ 0.99734545, 0.2829785 , -1.50629471],\n [-0.57860025, 1.65143654, -2.42667924]])\n\n Two-by-four matrix of samples from the normal distribution with\n mean 3 and standard deviation 2.5:\n\n >>> 2.5 * np.matlib.randn((2, 4)) + 3\n matrix([[1.92771843, 6.16484065, 0.83314899, 1.30278462],\n [2.76322758, 6.72847407, 1.40274501, 1.8900451 ]])\n\n """\n if isinstance(args[0], tuple):\n args = args[0]\n return asmatrix(np.random.randn(*args))\n\ndef repmat(a, m, n):\n """\n Repeat a 0-D to 2-D array or matrix MxN times.\n\n Parameters\n ----------\n a : array_like\n The array or matrix to be repeated.\n m, n : int\n The number of times `a` is repeated along the first and second axes.\n\n Returns\n -------\n out : ndarray\n The result of repeating `a`.\n\n Examples\n --------\n >>> import numpy.matlib\n >>> a0 = np.array(1)\n >>> np.matlib.repmat(a0, 2, 3)\n array([[1, 1, 1],\n [1, 1, 1]])\n\n >>> a1 = np.arange(4)\n >>> np.matlib.repmat(a1, 2, 2)\n array([[0, 1, 2, 3, 0, 1, 2, 3],\n [0, 1, 2, 3, 0, 1, 2, 3]])\n\n >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3))\n >>> np.matlib.repmat(a2, 2, 3)\n matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2],\n [3, 4, 5, 3, 4, 5, 3, 4, 5],\n [0, 1, 2, 0, 1, 2, 0, 1, 2],\n [3, 4, 5, 3, 4, 5, 3, 4, 5]])\n\n """\n a = asanyarray(a)\n ndim = a.ndim\n if ndim == 0:\n origrows, origcols = (1, 1)\n elif ndim == 1:\n origrows, origcols = (1, a.shape[0])\n else:\n origrows, origcols = a.shape\n rows = origrows * m\n cols = origcols * n\n c = a.reshape(1, a.size).repeat(m, 0).reshape(rows, origcols).repeat(n, 0)\n return c.reshape(rows, cols)\n
.venv\Lib\site-packages\numpy\matlib.py
matlib.py
Python
11,018
0.95
0.055263
0.015974
react-lib
353
2024-08-18T10:22:34.812952
Apache-2.0
false
6b4a07236b75c200609a2c56f3047bbb
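`repmat` in the record above tiles a 2-D input by reshaping and repeating along both axes. The same tiling can be sketched over plain nested lists with no NumPy dependency (`repmat_lists` is a hypothetical helper, not part of `matlib`; note that `* m` on the outer list shares row objects, which is fine for read-only use):

```python
def repmat_lists(a, m, n):
    # Tile a 2-D list m times vertically and n times horizontally,
    # mirroring what matlib.repmat does for a 2-D input.
    return [row * n for row in a] * m

# Same input as the repmat docstring's 2-D example (m=2, n=3).
tiled = repmat_lists([[0, 1, 2], [3, 4, 5]], 2, 3)
for row in tiled:
    print(row)
```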
from typing import Any, Literal, TypeAlias, TypeVar, overload\n\nimport numpy as np\nimport numpy.typing as npt\nfrom numpy import ( # noqa: F401\n False_,\n ScalarType,\n True_,\n __array_namespace_info__,\n __version__,\n abs,\n absolute,\n acos,\n acosh,\n add,\n all,\n allclose,\n amax,\n amin,\n angle,\n any,\n append,\n apply_along_axis,\n apply_over_axes,\n arange,\n arccos,\n arccosh,\n arcsin,\n arcsinh,\n arctan,\n arctan2,\n arctanh,\n argmax,\n argmin,\n argpartition,\n argsort,\n argwhere,\n around,\n array,\n array2string,\n array_equal,\n array_equiv,\n array_repr,\n array_split,\n array_str,\n asanyarray,\n asarray,\n asarray_chkfinite,\n ascontiguousarray,\n asfortranarray,\n asin,\n asinh,\n asmatrix,\n astype,\n atan,\n atan2,\n atanh,\n atleast_1d,\n atleast_2d,\n atleast_3d,\n average,\n bartlett,\n base_repr,\n binary_repr,\n bincount,\n bitwise_and,\n bitwise_count,\n bitwise_invert,\n bitwise_left_shift,\n bitwise_not,\n bitwise_or,\n bitwise_right_shift,\n bitwise_xor,\n blackman,\n block,\n bmat,\n bool,\n bool_,\n broadcast,\n broadcast_arrays,\n broadcast_shapes,\n broadcast_to,\n busday_count,\n busday_offset,\n busdaycalendar,\n byte,\n bytes_,\n c_,\n can_cast,\n cbrt,\n cdouble,\n ceil,\n char,\n character,\n choose,\n clip,\n clongdouble,\n column_stack,\n common_type,\n complex64,\n complex128,\n complex256,\n complexfloating,\n compress,\n concat,\n concatenate,\n conj,\n conjugate,\n convolve,\n copy,\n copysign,\n copyto,\n core,\n corrcoef,\n correlate,\n cos,\n cosh,\n count_nonzero,\n cov,\n cross,\n csingle,\n ctypeslib,\n cumprod,\n cumsum,\n cumulative_prod,\n cumulative_sum,\n datetime64,\n datetime_as_string,\n datetime_data,\n deg2rad,\n degrees,\n delete,\n diag,\n diag_indices,\n diag_indices_from,\n diagflat,\n diagonal,\n diff,\n digitize,\n divide,\n divmod,\n dot,\n double,\n dsplit,\n dstack,\n dtype,\n dtypes,\n e,\n ediff1d,\n einsum,\n einsum_path,\n emath,\n empty_like,\n equal,\n errstate,\n euler_gamma,\n 
exceptions,\n exp,\n exp2,\n expand_dims,\n expm1,\n extract,\n f2py,\n fabs,\n fft,\n fill_diagonal,\n finfo,\n fix,\n flatiter,\n flatnonzero,\n flexible,\n flip,\n fliplr,\n flipud,\n float16,\n float32,\n float64,\n float128,\n float_power,\n floating,\n floor,\n floor_divide,\n fmax,\n fmin,\n fmod,\n format_float_positional,\n format_float_scientific,\n frexp,\n from_dlpack,\n frombuffer,\n fromfile,\n fromfunction,\n fromiter,\n frompyfunc,\n fromregex,\n fromstring,\n full,\n full_like,\n gcd,\n generic,\n genfromtxt,\n geomspace,\n get_include,\n get_printoptions,\n getbufsize,\n geterr,\n geterrcall,\n gradient,\n greater,\n greater_equal,\n half,\n hamming,\n hanning,\n heaviside,\n histogram,\n histogram2d,\n histogram_bin_edges,\n histogramdd,\n hsplit,\n hstack,\n hypot,\n i0,\n iinfo,\n imag,\n in1d,\n index_exp,\n indices,\n inexact,\n inf,\n info,\n inner,\n insert,\n int8,\n int16,\n int32,\n int64,\n int_,\n intc,\n integer,\n interp,\n intersect1d,\n intp,\n invert,\n is_busday,\n isclose,\n iscomplex,\n iscomplexobj,\n isdtype,\n isfinite,\n isfortran,\n isin,\n isinf,\n isnan,\n isnat,\n isneginf,\n isposinf,\n isreal,\n isrealobj,\n isscalar,\n issubdtype,\n iterable,\n ix_,\n kaiser,\n kron,\n lcm,\n ldexp,\n left_shift,\n less,\n less_equal,\n lexsort,\n lib,\n linalg,\n linspace,\n little_endian,\n load,\n loadtxt,\n log,\n log1p,\n log2,\n log10,\n logaddexp,\n logaddexp2,\n logical_and,\n logical_not,\n logical_or,\n logical_xor,\n logspace,\n long,\n longdouble,\n longlong,\n ma,\n mask_indices,\n matmul,\n matrix,\n matrix_transpose,\n matvec,\n max,\n maximum,\n may_share_memory,\n mean,\n median,\n memmap,\n meshgrid,\n mgrid,\n min,\n min_scalar_type,\n minimum,\n mintypecode,\n mod,\n modf,\n moveaxis,\n multiply,\n nan,\n nan_to_num,\n nanargmax,\n nanargmin,\n nancumprod,\n nancumsum,\n nanmax,\n nanmean,\n nanmedian,\n nanmin,\n nanpercentile,\n nanprod,\n nanquantile,\n nanstd,\n nansum,\n nanvar,\n ndarray,\n ndenumerate,\n 
ndim,\n ndindex,\n nditer,\n negative,\n nested_iters,\n newaxis,\n nextafter,\n nonzero,\n not_equal,\n number,\n object_,\n ogrid,\n ones_like,\n outer,\n packbits,\n pad,\n partition,\n percentile,\n permute_dims,\n pi,\n piecewise,\n place,\n poly,\n poly1d,\n polyadd,\n polyder,\n polydiv,\n polyfit,\n polyint,\n polymul,\n polynomial,\n polysub,\n polyval,\n positive,\n pow,\n power,\n printoptions,\n prod,\n promote_types,\n ptp,\n put,\n put_along_axis,\n putmask,\n quantile,\n r_,\n rad2deg,\n radians,\n random,\n ravel,\n ravel_multi_index,\n real,\n real_if_close,\n rec,\n recarray,\n reciprocal,\n record,\n remainder,\n repeat,\n require,\n reshape,\n resize,\n result_type,\n right_shift,\n rint,\n roll,\n rollaxis,\n roots,\n rot90,\n round,\n row_stack,\n s_,\n save,\n savetxt,\n savez,\n savez_compressed,\n sctypeDict,\n searchsorted,\n select,\n set_printoptions,\n setbufsize,\n setdiff1d,\n seterr,\n seterrcall,\n setxor1d,\n shape,\n shares_memory,\n short,\n show_config,\n show_runtime,\n sign,\n signbit,\n signedinteger,\n sin,\n sinc,\n single,\n sinh,\n size,\n sort,\n sort_complex,\n spacing,\n split,\n sqrt,\n square,\n squeeze,\n stack,\n std,\n str_,\n strings,\n subtract,\n sum,\n swapaxes,\n take,\n take_along_axis,\n tan,\n tanh,\n tensordot,\n test,\n testing,\n tile,\n timedelta64,\n trace,\n transpose,\n trapezoid,\n trapz,\n tri,\n tril,\n tril_indices,\n tril_indices_from,\n trim_zeros,\n triu,\n triu_indices,\n triu_indices_from,\n true_divide,\n trunc,\n typecodes,\n typename,\n typing,\n ubyte,\n ufunc,\n uint,\n uint8,\n uint16,\n uint32,\n uint64,\n uintc,\n uintp,\n ulong,\n ulonglong,\n union1d,\n unique,\n unique_all,\n unique_counts,\n unique_inverse,\n unique_values,\n unpackbits,\n unravel_index,\n unsignedinteger,\n unstack,\n unwrap,\n ushort,\n vander,\n var,\n vdot,\n vecdot,\n vecmat,\n vectorize,\n void,\n vsplit,\n vstack,\n where,\n zeros_like,\n)\nfrom numpy._typing import _ArrayLike, _DTypeLike\n\n__all__ = 
["rand", "randn", "repmat"]\n__all__ += np.__all__\n\n###\n\n_T = TypeVar("_T", bound=np.generic)\n_Matrix: TypeAlias = np.matrix[tuple[int, int], np.dtype[_T]]\n_Order: TypeAlias = Literal["C", "F"]\n\n###\n\n#\n@overload\ndef empty(shape: int | tuple[int, int], dtype: None = None, order: _Order = "C") -> _Matrix[np.float64]: ...\n@overload\ndef empty(shape: int | tuple[int, int], dtype: _DTypeLike[_T], order: _Order = "C") -> _Matrix[_T]: ...\n@overload\ndef empty(shape: int | tuple[int, int], dtype: npt.DTypeLike, order: _Order = "C") -> _Matrix[Any]: ...\n\n#\n@overload\ndef ones(shape: int | tuple[int, int], dtype: None = None, order: _Order = "C") -> _Matrix[np.float64]: ...\n@overload\ndef ones(shape: int | tuple[int, int], dtype: _DTypeLike[_T], order: _Order = "C") -> _Matrix[_T]: ...\n@overload\ndef ones(shape: int | tuple[int, int], dtype: npt.DTypeLike, order: _Order = "C") -> _Matrix[Any]: ...\n\n#\n@overload\ndef zeros(shape: int | tuple[int, int], dtype: None = None, order: _Order = "C") -> _Matrix[np.float64]: ...\n@overload\ndef zeros(shape: int | tuple[int, int], dtype: _DTypeLike[_T], order: _Order = "C") -> _Matrix[_T]: ...\n@overload\ndef zeros(shape: int | tuple[int, int], dtype: npt.DTypeLike, order: _Order = "C") -> _Matrix[Any]: ...\n\n#\n@overload\ndef identity(n: int, dtype: None = None) -> _Matrix[np.float64]: ...\n@overload\ndef identity(n: int, dtype: _DTypeLike[_T]) -> _Matrix[_T]: ...\n@overload\ndef identity(n: int, dtype: npt.DTypeLike | None = None) -> _Matrix[Any]: ...\n\n#\n@overload\ndef eye(\n n: int,\n M: int | None = None,\n k: int = 0,\n dtype: type[np.float64] | None = ...,\n order: _Order = "C",\n) -> _Matrix[np.float64]: ...\n@overload\ndef eye(n: int, M: int | None, k: int, dtype: _DTypeLike[_T], order: _Order = "C") -> _Matrix[_T]: ...\n@overload\ndef eye(n: int, M: int | None = None, k: int = 0, *, dtype: _DTypeLike[_T], order: _Order = "C") -> _Matrix[_T]: ...\n@overload\ndef eye(n: int, M: int | None = None, k: int 
= 0, dtype: npt.DTypeLike = ..., order: _Order = "C") -> _Matrix[Any]: ...\n\n#\n@overload\ndef rand(arg: int | tuple[()] | tuple[int] | tuple[int, int], /) -> _Matrix[np.float64]: ...\n@overload\ndef rand(arg: int, /, *args: int) -> _Matrix[np.float64]: ...\n\n#\n@overload\ndef randn(arg: int | tuple[()] | tuple[int] | tuple[int, int], /) -> _Matrix[np.float64]: ...\n@overload\ndef randn(arg: int, /, *args: int) -> _Matrix[np.float64]: ...\n\n#\n@overload\ndef repmat(a: _Matrix[_T], m: int, n: int) -> _Matrix[_T]: ...\n@overload\ndef repmat(a: _ArrayLike[_T], m: int, n: int) -> npt.NDArray[_T]: ...\n@overload\ndef repmat(a: npt.ArrayLike, m: int, n: int) -> npt.NDArray[Any]: ...\n
.venv\Lib\site-packages\numpy\matlib.pyi
matlib.pyi
Other
10,184
0.95
0.039519
0.017575
node-utils
402
2023-10-24T05:39:21.468355
MIT
false
4a805553d4b617f6bd9a65c3ca378bc1
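The `rand`/`randn` overloads in the stub above encode matlib's calling convention: a leading tuple supplies the complete shape, and any further positional arguments are ignored. A small runnable sketch of that normalization (the helper name `rand_shape` is hypothetical, introduced only to illustrate the overload pair):

```python
from typing import overload

@overload
def rand_shape(arg: tuple, /) -> tuple: ...
@overload
def rand_shape(arg: int, /, *args: int) -> tuple: ...
def rand_shape(arg, /, *args):
    # A leading tuple is the complete shape; extra ints are dropped,
    # just as in matlib.rand / matlib.randn.
    if isinstance(arg, tuple):
        return arg
    return (arg, *args)

print(rand_shape((2, 3), 4), rand_shape(2, 3))  # → (2, 3) (2, 3)
```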
\n"""\nModule to expose more detailed version info for the installed `numpy`\n"""\nversion = "2.3.1"\n__version__ = version\nfull_version = version\n\ngit_revision = "4d833e5df760c382f24ee3eb643dc20c3da4a5a1"\nrelease = 'dev' not in version and '+' not in version\nshort_version = version.split("+")[0]\n
.venv\Lib\site-packages\numpy\version.py
version.py
Python
304
0.7
0.090909
0
vue-tools
51
2024-07-16T14:11:31.293382
BSD-3-Clause
false
6e50af7fa35a5f6f24130c1b0a831de5
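The `release` and `short_version` values in `version.py` above are derived purely from the version string. A sketch of the same checks, run against a couple of assumed version strings (the dev/local examples are illustrative, not real NumPy releases):

```python
def is_release(version: str) -> bool:
    # Mirrors numpy/version.py: a build is a release only if the version
    # string carries neither a 'dev' marker nor a local '+' suffix.
    return 'dev' not in version and '+' not in version

def short(version: str) -> str:
    # short_version strips any local segment after '+'.
    return version.split("+")[0]

for v in ("2.3.1", "2.4.0.dev0", "2.3.1+local.build"):
    print(v, is_release(v), short(v))
```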
from typing import Final, LiteralString\n\n__all__ = (\n '__version__',\n 'full_version',\n 'git_revision',\n 'release',\n 'short_version',\n 'version',\n)\n\nversion: Final[LiteralString]\n__version__: Final[LiteralString]\nfull_version: Final[LiteralString]\n\ngit_revision: Final[LiteralString]\nrelease: Final[bool]\nshort_version: Final[LiteralString]\n
.venv\Lib\site-packages\numpy\version.pyi
version.pyi
Other
376
0.85
0
0
awesome-app
86
2025-02-08T08:58:39.599630
MIT
false
7f51171c3b11d39cfffdedfc5f9e7bc7
"""\nArray API Inspection namespace\n\nThis is the namespace for inspection functions as defined by the array API\nstandard. See\nhttps://data-apis.org/array-api/latest/API_specification/inspection.html for\nmore details.\n\n"""\nfrom numpy._core import (\n bool,\n complex64,\n complex128,\n dtype,\n float32,\n float64,\n int8,\n int16,\n int32,\n int64,\n intp,\n uint8,\n uint16,\n uint32,\n uint64,\n)\n\n\nclass __array_namespace_info__:\n """\n Get the array API inspection namespace for NumPy.\n\n The array API inspection namespace defines the following functions:\n\n - capabilities()\n - default_device()\n - default_dtypes()\n - dtypes()\n - devices()\n\n See\n https://data-apis.org/array-api/latest/API_specification/inspection.html\n for more details.\n\n Returns\n -------\n info : ModuleType\n The array API inspection namespace for NumPy.\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.default_dtypes()\n {'real floating': numpy.float64,\n 'complex floating': numpy.complex128,\n 'integral': numpy.int64,\n 'indexing': numpy.int64}\n\n """\n\n __module__ = 'numpy'\n\n def capabilities(self):\n """\n Return a dictionary of array API library capabilities.\n\n The resulting dictionary has the following keys:\n\n - **"boolean indexing"**: boolean indicating whether an array library\n supports boolean indexing. Always ``True`` for NumPy.\n\n - **"data-dependent shapes"**: boolean indicating whether an array\n library supports data-dependent output shapes. 
Always ``True`` for\n NumPy.\n\n See\n https://data-apis.org/array-api/latest/API_specification/generated/array_api.info.capabilities.html\n for more details.\n\n See Also\n --------\n __array_namespace_info__.default_device,\n __array_namespace_info__.default_dtypes,\n __array_namespace_info__.dtypes,\n __array_namespace_info__.devices\n\n Returns\n -------\n capabilities : dict\n A dictionary of array API library capabilities.\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.capabilities()\n {'boolean indexing': True,\n 'data-dependent shapes': True,\n 'max dimensions': 64}\n\n """\n return {\n "boolean indexing": True,\n "data-dependent shapes": True,\n "max dimensions": 64,\n }\n\n def default_device(self):\n """\n The default device used for new NumPy arrays.\n\n For NumPy, this always returns ``'cpu'``.\n\n See Also\n --------\n __array_namespace_info__.capabilities,\n __array_namespace_info__.default_dtypes,\n __array_namespace_info__.dtypes,\n __array_namespace_info__.devices\n\n Returns\n -------\n device : str\n The default device used for new NumPy arrays.\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.default_device()\n 'cpu'\n\n """\n return "cpu"\n\n def default_dtypes(self, *, device=None):\n """\n The default data types used for new NumPy arrays.\n\n For NumPy, this always returns the following dictionary:\n\n - **"real floating"**: ``numpy.float64``\n - **"complex floating"**: ``numpy.complex128``\n - **"integral"**: ``numpy.intp``\n - **"indexing"**: ``numpy.intp``\n\n Parameters\n ----------\n device : str, optional\n The device to get the default data types for. 
For NumPy, only\n ``'cpu'`` is allowed.\n\n Returns\n -------\n dtypes : dict\n A dictionary describing the default data types used for new NumPy\n arrays.\n\n See Also\n --------\n __array_namespace_info__.capabilities,\n __array_namespace_info__.default_device,\n __array_namespace_info__.dtypes,\n __array_namespace_info__.devices\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.default_dtypes()\n {'real floating': numpy.float64,\n 'complex floating': numpy.complex128,\n 'integral': numpy.int64,\n 'indexing': numpy.int64}\n\n """\n if device not in ["cpu", None]:\n raise ValueError(\n 'Device not understood. Only "cpu" is allowed, but received:'\n f' {device}'\n )\n return {\n "real floating": dtype(float64),\n "complex floating": dtype(complex128),\n "integral": dtype(intp),\n "indexing": dtype(intp),\n }\n\n def dtypes(self, *, device=None, kind=None):\n """\n The array API data types supported by NumPy.\n\n Note that this function only returns data types that are defined by\n the array API.\n\n Parameters\n ----------\n device : str, optional\n The device to get the data types for. For NumPy, only ``'cpu'`` is\n allowed.\n kind : str or tuple of str, optional\n The kind of data types to return. If ``None``, all data types are\n returned. If a string, only data types of that kind are returned.\n If a tuple, a dictionary containing the union of the given kinds\n is returned. The following kinds are supported:\n\n - ``'bool'``: boolean data types (i.e., ``bool``).\n - ``'signed integer'``: signed integer data types (i.e., ``int8``,\n ``int16``, ``int32``, ``int64``).\n - ``'unsigned integer'``: unsigned integer data types (i.e.,\n ``uint8``, ``uint16``, ``uint32``, ``uint64``).\n - ``'integral'``: integer data types. 
Shorthand for ``('signed\n integer', 'unsigned integer')``.\n - ``'real floating'``: real-valued floating-point data types\n (i.e., ``float32``, ``float64``).\n - ``'complex floating'``: complex floating-point data types (i.e.,\n ``complex64``, ``complex128``).\n - ``'numeric'``: numeric data types. Shorthand for ``('integral',\n 'real floating', 'complex floating')``.\n\n Returns\n -------\n dtypes : dict\n A dictionary mapping the names of data types to the corresponding\n NumPy data types.\n\n See Also\n --------\n __array_namespace_info__.capabilities,\n __array_namespace_info__.default_device,\n __array_namespace_info__.default_dtypes,\n __array_namespace_info__.devices\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.dtypes(kind='signed integer')\n {'int8': numpy.int8,\n 'int16': numpy.int16,\n 'int32': numpy.int32,\n 'int64': numpy.int64}\n\n """\n if device not in ["cpu", None]:\n raise ValueError(\n 'Device not understood. Only "cpu" is allowed, but received:'\n f' {device}'\n )\n if kind is None:\n return {\n "bool": dtype(bool),\n "int8": dtype(int8),\n "int16": dtype(int16),\n "int32": dtype(int32),\n "int64": dtype(int64),\n "uint8": dtype(uint8),\n "uint16": dtype(uint16),\n "uint32": dtype(uint32),\n "uint64": dtype(uint64),\n "float32": dtype(float32),\n "float64": dtype(float64),\n "complex64": dtype(complex64),\n "complex128": dtype(complex128),\n }\n if kind == "bool":\n return {"bool": bool}\n if kind == "signed integer":\n return {\n "int8": dtype(int8),\n "int16": dtype(int16),\n "int32": dtype(int32),\n "int64": dtype(int64),\n }\n if kind == "unsigned integer":\n return {\n "uint8": dtype(uint8),\n "uint16": dtype(uint16),\n "uint32": dtype(uint32),\n "uint64": dtype(uint64),\n }\n if kind == "integral":\n return {\n "int8": dtype(int8),\n "int16": dtype(int16),\n "int32": dtype(int32),\n "int64": dtype(int64),\n "uint8": dtype(uint8),\n "uint16": dtype(uint16),\n "uint32": dtype(uint32),\n "uint64": 
dtype(uint64),\n }\n if kind == "real floating":\n return {\n "float32": dtype(float32),\n "float64": dtype(float64),\n }\n if kind == "complex floating":\n return {\n "complex64": dtype(complex64),\n "complex128": dtype(complex128),\n }\n if kind == "numeric":\n return {\n "int8": dtype(int8),\n "int16": dtype(int16),\n "int32": dtype(int32),\n "int64": dtype(int64),\n "uint8": dtype(uint8),\n "uint16": dtype(uint16),\n "uint32": dtype(uint32),\n "uint64": dtype(uint64),\n "float32": dtype(float32),\n "float64": dtype(float64),\n "complex64": dtype(complex64),\n "complex128": dtype(complex128),\n }\n if isinstance(kind, tuple):\n res = {}\n for k in kind:\n res.update(self.dtypes(kind=k))\n return res\n raise ValueError(f"unsupported kind: {kind!r}")\n\n def devices(self):\n """\n The devices supported by NumPy.\n\n For NumPy, this always returns ``['cpu']``.\n\n Returns\n -------\n devices : list of str\n The devices supported by NumPy.\n\n See Also\n --------\n __array_namespace_info__.capabilities,\n __array_namespace_info__.default_device,\n __array_namespace_info__.default_dtypes,\n __array_namespace_info__.dtypes\n\n Examples\n --------\n >>> info = np.__array_namespace_info__()\n >>> info.devices()\n ['cpu']\n\n """\n return ["cpu"]\n
.venv\Lib\site-packages\numpy\_array_api_info.py
_array_api_info.py
Python
10,700
0.95
0.101156
0
python-kit
509
2025-04-09T12:12:00.339402
BSD-3-Clause
false
0a8a02bd75b9f1bb5b8a32c98c86e040
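The `dtypes(kind=...)` method above dispatches on a kind string (or tuple of kinds) and returns a name-to-dtype dict. A minimal sketch of that dispatch logic, using plain strings in place of the real NumPy dtype objects so it runs without NumPy installed:

```python
# Sketch of the kind-dispatch used by __array_namespace_info__.dtypes above.
# Strings stand in for numpy dtype objects; the grouping logic is the same.

_GROUPS = {
    "bool": ["bool"],
    "signed integer": ["int8", "int16", "int32", "int64"],
    "unsigned integer": ["uint8", "uint16", "uint32", "uint64"],
    "real floating": ["float32", "float64"],
    "complex floating": ["complex64", "complex128"],
}
# Composite kinds are unions of the primitive groups.
_GROUPS["integral"] = _GROUPS["signed integer"] + _GROUPS["unsigned integer"]
_GROUPS["numeric"] = (
    _GROUPS["integral"] + _GROUPS["real floating"] + _GROUPS["complex floating"]
)


def dtypes(kind=None):
    """Return a {name: name} dict, mirroring the real method's shape."""
    if kind is None:
        names = _GROUPS["bool"] + _GROUPS["numeric"]
    elif isinstance(kind, tuple):
        # A tuple of kinds yields the union of each kind's dict.
        merged = {}
        for k in kind:
            merged.update(dtypes(kind=k))
        return merged
    elif kind in _GROUPS:
        names = _GROUPS[kind]
    else:
        raise ValueError(f"unsupported kind: {kind!r}")
    return {name: name for name in names}
```

As in the original, a tuple such as `("bool", "real floating")` produces the union of the individual kind dicts, and an unknown kind raises `ValueError`.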
from typing import (\n ClassVar,\n Literal,\n Never,\n TypeAlias,\n TypedDict,\n TypeVar,\n final,\n overload,\n type_check_only,\n)\n\nimport numpy as np\n\n_Device: TypeAlias = Literal["cpu"]\n_DeviceLike: TypeAlias = _Device | None\n\n_Capabilities = TypedDict(\n "_Capabilities",\n {\n "boolean indexing": Literal[True],\n "data-dependent shapes": Literal[True],\n },\n)\n\n_DefaultDTypes = TypedDict(\n "_DefaultDTypes",\n {\n "real floating": np.dtype[np.float64],\n "complex floating": np.dtype[np.complex128],\n "integral": np.dtype[np.intp],\n "indexing": np.dtype[np.intp],\n },\n)\n\n_KindBool: TypeAlias = Literal["bool"]\n_KindInt: TypeAlias = Literal["signed integer"]\n_KindUInt: TypeAlias = Literal["unsigned integer"]\n_KindInteger: TypeAlias = Literal["integral"]\n_KindFloat: TypeAlias = Literal["real floating"]\n_KindComplex: TypeAlias = Literal["complex floating"]\n_KindNumber: TypeAlias = Literal["numeric"]\n_Kind: TypeAlias = (\n _KindBool\n | _KindInt\n | _KindUInt\n | _KindInteger\n | _KindFloat\n | _KindComplex\n | _KindNumber\n)\n\n_T1 = TypeVar("_T1")\n_T2 = TypeVar("_T2")\n_T3 = TypeVar("_T3")\n_Permute1: TypeAlias = _T1 | tuple[_T1]\n_Permute2: TypeAlias = tuple[_T1, _T2] | tuple[_T2, _T1]\n_Permute3: TypeAlias = (\n tuple[_T1, _T2, _T3] | tuple[_T1, _T3, _T2]\n | tuple[_T2, _T1, _T3] | tuple[_T2, _T3, _T1]\n | tuple[_T3, _T1, _T2] | tuple[_T3, _T2, _T1]\n)\n\n@type_check_only\nclass _DTypesBool(TypedDict):\n bool: np.dtype[np.bool]\n\n@type_check_only\nclass _DTypesInt(TypedDict):\n int8: np.dtype[np.int8]\n int16: np.dtype[np.int16]\n int32: np.dtype[np.int32]\n int64: np.dtype[np.int64]\n\n@type_check_only\nclass _DTypesUInt(TypedDict):\n uint8: np.dtype[np.uint8]\n uint16: np.dtype[np.uint16]\n uint32: np.dtype[np.uint32]\n uint64: np.dtype[np.uint64]\n\n@type_check_only\nclass _DTypesInteger(_DTypesInt, _DTypesUInt): ...\n\n@type_check_only\nclass _DTypesFloat(TypedDict):\n float32: np.dtype[np.float32]\n float64: 
np.dtype[np.float64]\n\n@type_check_only\nclass _DTypesComplex(TypedDict):\n complex64: np.dtype[np.complex64]\n complex128: np.dtype[np.complex128]\n\n@type_check_only\nclass _DTypesNumber(_DTypesInteger, _DTypesFloat, _DTypesComplex): ...\n\n@type_check_only\nclass _DTypes(_DTypesBool, _DTypesNumber): ...\n\n@type_check_only\nclass _DTypesUnion(TypedDict, total=False):\n bool: np.dtype[np.bool]\n int8: np.dtype[np.int8]\n int16: np.dtype[np.int16]\n int32: np.dtype[np.int32]\n int64: np.dtype[np.int64]\n uint8: np.dtype[np.uint8]\n uint16: np.dtype[np.uint16]\n uint32: np.dtype[np.uint32]\n uint64: np.dtype[np.uint64]\n float32: np.dtype[np.float32]\n float64: np.dtype[np.float64]\n complex64: np.dtype[np.complex64]\n complex128: np.dtype[np.complex128]\n\n_EmptyDict: TypeAlias = dict[Never, Never]\n\n@final\nclass __array_namespace_info__:\n __module__: ClassVar[Literal['numpy']]\n\n def capabilities(self) -> _Capabilities: ...\n def default_device(self) -> _Device: ...\n def default_dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n ) -> _DefaultDTypes: ...\n def devices(self) -> list[_Device]: ...\n\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: None = ...,\n ) -> _DTypes: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: _Permute1[_KindBool],\n ) -> _DTypesBool: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: _Permute1[_KindInt],\n ) -> _DTypesInt: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: _Permute1[_KindUInt],\n ) -> _DTypesUInt: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: _Permute1[_KindFloat],\n ) -> _DTypesFloat: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: _Permute1[_KindComplex],\n ) -> _DTypesComplex: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: (\n _Permute1[_KindInteger]\n | _Permute2[_KindInt, _KindUInt]\n ),\n ) -> 
_DTypesInteger: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: (\n _Permute1[_KindNumber]\n | _Permute3[_KindInteger, _KindFloat, _KindComplex]\n ),\n ) -> _DTypesNumber: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: tuple[()],\n ) -> _EmptyDict: ...\n @overload\n def dtypes(\n self,\n *,\n device: _DeviceLike = ...,\n kind: tuple[_Kind, ...],\n ) -> _DTypesUnion: ...\n
.venv\Lib\site-packages\numpy\_array_api_info.pyi
_array_api_info.pyi
Other
5,071
0.85
0.115942
0.058511
awesome-app
334
2025-01-21T19:09:06.830099
BSD-3-Clause
false
204da84613e52a4d75fc98c6187af573
import argparse\nimport sys\nfrom pathlib import Path\n\nfrom .lib._utils_impl import get_include\nfrom .version import __version__\n\n\ndef main() -> None:\n parser = argparse.ArgumentParser()\n parser.add_argument(\n "--version",\n action="version",\n version=__version__,\n help="Print the version and exit.",\n )\n parser.add_argument(\n "--cflags",\n action="store_true",\n help="Compile flag needed when using the NumPy headers.",\n )\n parser.add_argument(\n "--pkgconfigdir",\n action="store_true",\n help=("Print the pkgconfig directory in which `numpy.pc` is stored "\n "(useful for setting $PKG_CONFIG_PATH)."),\n )\n args = parser.parse_args()\n if not sys.argv[1:]:\n parser.print_help()\n if args.cflags:\n print("-I" + get_include())\n if args.pkgconfigdir:\n _path = Path(get_include()) / '..' / 'lib' / 'pkgconfig'\n print(_path.resolve())\n\n\nif __name__ == "__main__":\n main()\n
.venv\Lib\site-packages\numpy\_configtool.py
_configtool.py
Python
1,046
0.85
0.153846
0
vue-tools
298
2023-09-04T07:49:11.224313
Apache-2.0
false
d31f52918750616494082e8115284a89
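`_configtool.py` above is a tiny argparse CLI. A hedged sketch of the same flag-handling pattern, with `fake_include` standing in for NumPy's real `get_include()` and the output returned as a list so it can be exercised without printing:

```python
# Sketch of the --cflags / --pkgconfigdir handling in numpy._configtool above.
# `fake_include` is a hypothetical placeholder for numpy.get_include().
import argparse
from pathlib import Path


def fake_include() -> str:
    # Any header directory works for illustration purposes.
    return "/opt/example/include"


def run(argv):
    parser = argparse.ArgumentParser(prog="configtool-sketch")
    parser.add_argument("--cflags", action="store_true",
                        help="Compile flag needed when using the headers.")
    parser.add_argument("--pkgconfigdir", action="store_true",
                        help="Print the pkgconfig directory.")
    args = parser.parse_args(argv)
    out = []
    if args.cflags:
        out.append("-I" + fake_include())
    if args.pkgconfigdir:
        # Same ../lib/pkgconfig hop as the original, resolved to a clean path.
        out.append(str((Path(fake_include()) / ".." / "lib" / "pkgconfig").resolve()))
    return out
```

Passing an explicit `argv` list to `parse_args` keeps the sketch testable; the real tool reads `sys.argv` implicitly.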
def main() -> None: ...\n
.venv\Lib\site-packages\numpy\_configtool.pyi
_configtool.pyi
Other
25
0.65
1
0
awesome-app
794
2023-10-23T11:07:35.817867
GPL-3.0
false
163a29351e4447c95e0d3fb1be5ca0dc
""" Distributor init file\n\nDistributors: you can add custom code here to support particular distributions\nof numpy.\n\nFor example, this is a good place to put any BLAS/LAPACK initialization code.\n\nThe numpy standard source distribution will not put code in this file, so you\ncan safely replace this file with your own version.\n"""\n\ntry:\n from . import _distributor_init_local # noqa: F401\nexcept ImportError:\n pass\n
.venv\Lib\site-packages\numpy\_distributor_init.py
_distributor_init.py
Python
436
0.95
0.066667
0
node-utils
104
2024-09-29T21:11:32.012310
MIT
false
bafc6e80db03883c578c7990c35e67f6
# intentionally left blank\n
.venv\Lib\site-packages\numpy\_distributor_init.pyi
_distributor_init.pyi
Other
28
0.6
0
1
awesome-app
459
2024-01-16T20:48:27.463737
GPL-3.0
false
42a79b8b1e7abd9128fcd3553a3b3282
"""\nDict of expired attributes that are discontinued since 2.0 release.\nEach item is associated with a migration note.\n"""\n\n__expired_attributes__ = {\n "geterrobj": "Use the np.errstate context manager instead.",\n "seterrobj": "Use the np.errstate context manager instead.",\n "cast": "Use `np.asarray(arr, dtype=dtype)` instead.",\n "source": "Use `inspect.getsource` instead.",\n "lookfor": "Search NumPy's documentation directly.",\n "who": "Use an IDE variable explorer or `locals()` instead.",\n "fastCopyAndTranspose": "Use `arr.T.copy()` instead.",\n "set_numeric_ops":\n "For the general case, use `PyUFunc_ReplaceLoopBySignature`. "\n "For ndarray subclasses, define the ``__array_ufunc__`` method "\n "and override the relevant ufunc.",\n "NINF": "Use `-np.inf` instead.",\n "PINF": "Use `np.inf` instead.",\n "NZERO": "Use `-0.0` instead.",\n "PZERO": "Use `0.0` instead.",\n "add_newdoc":\n "It's still available as `np.lib.add_newdoc`.",\n "add_docstring":\n "It's still available as `np.lib.add_docstring`.",\n "add_newdoc_ufunc":\n "It's an internal function and doesn't have a replacement.",\n "safe_eval": "Use `ast.literal_eval` instead.",\n "float_": "Use `np.float64` instead.",\n "complex_": "Use `np.complex128` instead.",\n "longfloat": "Use `np.longdouble` instead.",\n "singlecomplex": "Use `np.complex64` instead.",\n "cfloat": "Use `np.complex128` instead.",\n "longcomplex": "Use `np.clongdouble` instead.",\n "clongfloat": "Use `np.clongdouble` instead.",\n "string_": "Use `np.bytes_` instead.",\n "unicode_": "Use `np.str_` instead.",\n "Inf": "Use `np.inf` instead.",\n "Infinity": "Use `np.inf` instead.",\n "NaN": "Use `np.nan` instead.",\n "infty": "Use `np.inf` instead.",\n "issctype": "Use `issubclass(rep, np.generic)` instead.",\n "maximum_sctype":\n "Use a specific dtype instead. 
You should avoid relying "\n "on any implicit mechanism and select the largest dtype of "\n "a kind explicitly in the code.",\n "obj2sctype": "Use `np.dtype(obj).type` instead.",\n "sctype2char": "Use `np.dtype(obj).char` instead.",\n "sctypes": "Access dtypes explicitly instead.",\n "issubsctype": "Use `np.issubdtype` instead.",\n "set_string_function":\n "Use `np.set_printoptions` instead with a formatter for "\n "custom printing of NumPy objects.",\n "asfarray": "Use `np.asarray` with a proper dtype instead.",\n "issubclass_": "Use `issubclass` builtin instead.",\n "tracemalloc_domain": "It's now available from `np.lib`.",\n "mat": "Use `np.asmatrix` instead.",\n "recfromcsv": "Use `np.genfromtxt` with comma delimiter instead.",\n "recfromtxt": "Use `np.genfromtxt` instead.",\n "deprecate": "Emit `DeprecationWarning` with `warnings.warn` directly, "\n "or use `typing.deprecated`.",\n "deprecate_with_doc": "Emit `DeprecationWarning` with `warnings.warn` "\n "directly, or use `typing.deprecated`.",\n "disp": "Use your own printing function instead.",\n "find_common_type":\n "Use `numpy.promote_types` or `numpy.result_type` instead. "\n "To achieve semantics for the `scalar_types` argument, use "\n "`numpy.result_type` and pass the Python values `0`, `0.0`, or `0j`.",\n "round_": "Use `np.round` instead.",\n "get_array_wrap": "",\n "DataSource": "It's still available as `np.lib.npyio.DataSource`.",\n "nbytes": "Use `np.dtype(<dtype>).itemsize` instead.",\n "byte_bounds": "Now it's available under `np.lib.array_utils.byte_bounds`",\n "compare_chararrays":\n "It's still available as `np.char.compare_chararrays`.",\n "format_parser": "It's still available as `np.rec.format_parser`.",\n "alltrue": "Use `np.all` instead.",\n "sometrue": "Use `np.any` instead.",\n}\n
.venv\Lib\site-packages\numpy\_expired_attrs_2_0.py
_expired_attrs_2_0.py
Python
3,905
0.85
0.050633
0
awesome-app
801
2024-08-07T08:05:54.728117
GPL-3.0
false
daa8cd8f9c1905718f2f32fc95960307
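A dict like `__expired_attributes__` above is typically consumed by a module-level `__getattr__` (PEP 562) that raises `AttributeError` carrying the migration note. A sketch of that mechanism with a two-entry excerpt of the table; the exact error wording NumPy emits may differ:

```python
# Sketch of how a migration-note dict like __expired_attributes__ is consumed.
# `_expired` is a small excerpt, not the full table above.
_expired = {
    "float_": "Use `np.float64` instead.",
    "alltrue": "Use `np.all` instead.",
}


def module_getattr(name):
    # Mirrors the PEP 562 module-level __getattr__ hook pattern.
    if name in _expired:
        raise AttributeError(
            f"`np.{name}` was removed in the NumPy 2.0 release. {_expired[name]}"
        )
    raise AttributeError(f"module has no attribute {name!r}")
```

In a real module this function would be named `__getattr__` at module scope, so `np.alltrue` fails with the stored hint instead of a bare attribute error.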
from typing import Final, TypedDict, final, type_check_only\n\n@final\n@type_check_only\nclass _ExpiredAttributesType(TypedDict):\n geterrobj: str\n seterrobj: str\n cast: str\n source: str\n lookfor: str\n who: str\n fastCopyAndTranspose: str\n set_numeric_ops: str\n NINF: str\n PINF: str\n NZERO: str\n PZERO: str\n add_newdoc: str\n add_docstring: str\n add_newdoc_ufunc: str\n safe_eval: str\n float_: str\n complex_: str\n longfloat: str\n singlecomplex: str\n cfloat: str\n longcomplex: str\n clongfloat: str\n string_: str\n unicode_: str\n Inf: str\n Infinity: str\n NaN: str\n infty: str\n issctype: str\n maximum_sctype: str\n obj2sctype: str\n sctype2char: str\n sctypes: str\n issubsctype: str\n set_string_function: str\n asfarray: str\n issubclass_: str\n tracemalloc_domain: str\n mat: str\n recfromcsv: str\n recfromtxt: str\n deprecate: str\n deprecate_with_doc: str\n disp: str\n find_common_type: str\n round_: str\n get_array_wrap: str\n DataSource: str\n nbytes: str\n byte_bounds: str\n compare_chararrays: str\n format_parser: str\n alltrue: str\n sometrue: str\n\n__expired_attributes__: Final[_ExpiredAttributesType] = ...\n
.venv\Lib\site-packages\numpy\_expired_attrs_2_0.pyi
_expired_attrs_2_0.pyi
Other
1,315
0.85
0.016129
0
node-utils
558
2024-05-04T23:42:51.788234
BSD-3-Clause
false
174e730073ffa0b16db342e37c41c2fb
"""\nModule defining global singleton classes.\n\nThis module raises a RuntimeError if an attempt to reload it is made. In that\nway the identities of the classes defined here are fixed and will remain so\neven if numpy itself is reloaded. In particular, a function like the following\nwill still work correctly after numpy is reloaded::\n\n def foo(arg=np._NoValue):\n if arg is np._NoValue:\n ...\n\nThat was not the case when the singleton classes were defined in the numpy\n``__init__.py`` file. See gh-7844 for a discussion of the reload problem that\nmotivated this module.\n\n"""\nimport enum\n\nfrom ._utils import set_module as _set_module\n\n__all__ = ['_NoValue', '_CopyMode']\n\n\n# Disallow reloading this module so as to preserve the identities of the\n# classes defined here.\nif '_is_loaded' in globals():\n raise RuntimeError('Reloading numpy._globals is not allowed')\n_is_loaded = True\n\n\nclass _NoValueType:\n """Special keyword value.\n\n The instance of this class may be used as the default value assigned to a\n keyword if no other obvious default (e.g., `None`) is suitable,\n\n Common reasons for using this keyword are:\n\n - A new keyword is added to a function, and that function forwards its\n inputs to another function or method which can be defined outside of\n NumPy. 
For example, ``np.std(x)`` calls ``x.std``, so when a ``keepdims``\n keyword was added that could only be forwarded if the user explicitly\n specified ``keepdims``; downstream array libraries may not have added\n the same keyword, so adding ``x.std(..., keepdims=keepdims)``\n unconditionally could have broken previously working code.\n - A keyword is being deprecated, and a deprecation warning must only be\n emitted when the keyword is used.\n\n """\n __instance = None\n\n def __new__(cls):\n # ensure that only one instance exists\n if not cls.__instance:\n cls.__instance = super().__new__(cls)\n return cls.__instance\n\n def __repr__(self):\n return "<no value>"\n\n\n_NoValue = _NoValueType()\n\n\n@_set_module("numpy")\nclass _CopyMode(enum.Enum):\n """\n An enumeration for the copy modes supported\n by numpy.copy() and numpy.array(). The following three modes are supported,\n\n - ALWAYS: This means that a deep copy of the input\n array will always be taken.\n - IF_NEEDED: This means that a deep copy of the input\n array will be taken only if necessary.\n - NEVER: This means that the deep copy will never be taken.\n If a copy cannot be avoided then a `ValueError` will be\n raised.\n\n Note that the buffer-protocol could in theory do copies. NumPy currently\n assumes an object exporting the buffer protocol will never do this.\n """\n\n ALWAYS = True\n NEVER = False\n IF_NEEDED = 2\n\n def __bool__(self):\n # For backwards compatibility\n if self == _CopyMode.ALWAYS:\n return True\n\n if self == _CopyMode.NEVER:\n return False\n\n raise ValueError(f"{self} is neither True nor False.")\n
.venv\Lib\site-packages\numpy\_globals.py
_globals.py
Python
3,187
0.95
0.25
0.057143
node-utils
675
2024-10-01T20:18:39.416301
Apache-2.0
false
d96015a9c3abfaf67a9f235242a3388d
__all__ = ["_CopyMode", "_NoValue"]\n\nimport enum\nfrom typing import Final, final\n\n@final\nclass _CopyMode(enum.Enum):\n ALWAYS = True\n NEVER = False\n IF_NEEDED = 2\n\n def __bool__(self, /) -> bool: ...\n\n@final\nclass _NoValueType: ...\n\n_NoValue: Final[_NoValueType] = ...\n
.venv\Lib\site-packages\numpy\_globals.pyi
_globals.pyi
Other
297
0.85
0.176471
0
python-kit
266
2024-04-27T18:29:27.870678
MIT
false
dcca3c26ed1077b7099883c8015b9806
"""\nPytest test running.\n\nThis module implements the ``test()`` function for NumPy modules. The usual\nboiler plate for doing that is to put the following in the module\n``__init__.py`` file::\n\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n\nWarnings filtering and other runtime settings should be dealt with in the\n``pytest.ini`` file in the numpy repo root. The behavior of the test depends on\nwhether or not that file is found as follows:\n\n* ``pytest.ini`` is present (develop mode)\n All warnings except those explicitly filtered out are raised as error.\n* ``pytest.ini`` is absent (release mode)\n DeprecationWarnings and PendingDeprecationWarnings are ignored, other\n warnings are passed through.\n\nIn practice, tests run from the numpy repo are run in development mode with\n``spin``, through the standard ``spin test`` invocation or from an inplace\nbuild with ``pytest numpy``.\n\nThis module is imported by every numpy subpackage, so lies at the top level to\nsimplify circular import issues. 
For the same reason, it contains no numpy\nimports at module scope, instead importing numpy within function calls.\n"""\nimport os\nimport sys\n\n__all__ = ['PytestTester']\n\n\ndef _show_numpy_info():\n import numpy as np\n\n print(f"NumPy version {np.__version__}")\n info = np.lib._utils_impl._opt_info()\n print("NumPy CPU features: ", (info or 'nothing enabled'))\n\n\nclass PytestTester:\n """\n Pytest test runner.\n\n A test function is typically added to a package's __init__.py like so::\n\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__).test\n del PytestTester\n\n Calling this test function finds and runs all tests associated with the\n module and all its sub-modules.\n\n Attributes\n ----------\n module_name : str\n Full path to the package to test.\n\n Parameters\n ----------\n module_name : module name\n The name of the module to test.\n\n Notes\n -----\n Unlike the previous ``nose``-based implementation, this class is not\n publicly exposed as it performs some ``numpy``-specific warning\n suppression.\n\n """\n def __init__(self, module_name):\n self.module_name = module_name\n self.__module__ = module_name\n\n def __call__(self, label='fast', verbose=1, extra_argv=None,\n doctests=False, coverage=False, durations=-1, tests=None):\n """\n Run tests for module using pytest.\n\n Parameters\n ----------\n label : {'fast', 'full'}, optional\n Identifies the tests to run. When set to 'fast', tests decorated\n with `pytest.mark.slow` are skipped, when 'full', the slow marker\n is ignored.\n verbose : int, optional\n Verbosity value for test outputs, in the range 1-3. Default is 1.\n extra_argv : list, optional\n List with any extra arguments to pass to pytests.\n doctests : bool, optional\n .. note:: Not supported\n coverage : bool, optional\n If True, report coverage of NumPy code. 
Default is False.\n Requires installation of (pip) pytest-cov.\n durations : int, optional\n If < 0, do nothing, If 0, report time of all tests, if > 0,\n report the time of the slowest `timer` tests. Default is -1.\n tests : test or list of tests\n Tests to be executed with pytest '--pyargs'\n\n Returns\n -------\n result : bool\n Return True on success, false otherwise.\n\n Notes\n -----\n Each NumPy module exposes `test` in its namespace to run all tests for\n it. For example, to run all tests for numpy.lib:\n\n >>> np.lib.test() #doctest: +SKIP\n\n Examples\n --------\n >>> result = np.lib.test() #doctest: +SKIP\n ...\n 1023 passed, 2 skipped, 6 deselected, 1 xfailed in 10.39 seconds\n >>> result\n True\n\n """\n import warnings\n\n import pytest\n\n module = sys.modules[self.module_name]\n module_path = os.path.abspath(module.__path__[0])\n\n # setup the pytest arguments\n pytest_args = ["-l"]\n\n # offset verbosity. The "-q" cancels a "-v".\n pytest_args += ["-q"]\n\n if sys.version_info < (3, 12):\n with warnings.catch_warnings():\n warnings.simplefilter("always")\n # Filter out distutils cpu warnings (could be localized to\n # distutils tests). ASV has problems with top level import,\n # so fetch module for suppression here.\n from numpy.distutils import cpuinfo # noqa: F401\n\n # Filter out annoying import messages. 
Want these in both develop and\n # release mode.\n pytest_args += [\n "-W ignore:Not importing directory",\n "-W ignore:numpy.dtype size changed",\n "-W ignore:numpy.ufunc size changed",\n "-W ignore::UserWarning:cpuinfo",\n ]\n\n # When testing matrices, ignore their PendingDeprecationWarnings\n pytest_args += [\n "-W ignore:the matrix subclass is not",\n "-W ignore:Importing from numpy.matlib is",\n ]\n\n if doctests:\n pytest_args += ["--doctest-modules"]\n\n if extra_argv:\n pytest_args += list(extra_argv)\n\n if verbose > 1:\n pytest_args += ["-" + "v" * (verbose - 1)]\n\n if coverage:\n pytest_args += ["--cov=" + module_path]\n\n if label == "fast":\n # not importing at the top level to avoid circular import of module\n from numpy.testing import IS_PYPY\n if IS_PYPY:\n pytest_args += ["-m", "not slow and not slow_pypy"]\n else:\n pytest_args += ["-m", "not slow"]\n\n elif label != "full":\n pytest_args += ["-m", label]\n\n if durations >= 0:\n pytest_args += [f"--durations={durations}"]\n\n if tests is None:\n tests = [self.module_name]\n\n pytest_args += ["--pyargs"] + list(tests)\n\n # run tests.\n _show_numpy_info()\n\n try:\n code = pytest.main(pytest_args)\n except SystemExit as exc:\n code = exc.code\n\n return code == 0\n
.venv\Lib\site-packages\numpy\_pytesttester.py
_pytesttester.py
Python
6,529
0.95
0.134328
0.077419
node-utils
606
2024-05-09T15:47:49.820080
Apache-2.0
true
368824b657c83e3f4d8c16d1812ed78f
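`PytestTester.__call__` above is mostly pytest argument assembly. A reduced sketch of the label/verbose/durations branches, written so it runs without pytest or NumPy installed (the warning-filter and coverage branches are omitted):

```python
# Sketch of the pytest_args assembly performed by PytestTester.__call__
# above, reduced to the label / verbose / durations logic.
def build_pytest_args(label="fast", verbose=1, durations=-1, tests=("numpy",)):
    args = ["-l", "-q"]                 # "-q" offsets verbosity, as above
    if verbose > 1:
        args += ["-" + "v" * (verbose - 1)]
    if label == "fast":
        # 'fast' skips tests marked slow; 'full' ignores the marker.
        args += ["-m", "not slow"]
    elif label != "full":
        args += ["-m", label]
    if durations >= 0:
        args += [f"--durations={durations}"]
    return args + ["--pyargs"] + list(tests)
```

The real runner passes the resulting list to `pytest.main(...)` and maps a zero exit code to `True`.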
from collections.abc import Iterable\nfrom typing import Literal as L\n\n__all__ = ["PytestTester"]\n\nclass PytestTester:\n module_name: str\n def __init__(self, module_name: str) -> None: ...\n def __call__(\n self,\n label: L["fast", "full"] = ...,\n verbose: int = ...,\n extra_argv: Iterable[str] | None = ...,\n doctests: L[False] = ...,\n coverage: bool = ...,\n durations: int = ...,\n tests: Iterable[str] | None = ...,\n ) -> bool: ...\n
.venv\Lib\site-packages\numpy\_pytesttester.pyi
_pytesttester.pyi
Other
515
0.85
0.166667
0
node-utils
876
2024-05-17T18:54:28.986710
MIT
true
a8bd151c684f3b1f190741e3cea93363
# This file is generated by numpy's build process\n# It contains system_info results at the time of building this package.\nfrom enum import Enum\nfrom numpy._core._multiarray_umath import (\n __cpu_features__,\n __cpu_baseline__,\n __cpu_dispatch__,\n)\n\n__all__ = ["show_config"]\n_built_with_meson = True\n\n\nclass DisplayModes(Enum):\n stdout = "stdout"\n dicts = "dicts"\n\n\ndef _cleanup(d):\n """\n Removes empty values in a `dict` recursively\n This ensures we remove values that Meson could not provide to CONFIG\n """\n if isinstance(d, dict):\n return {k: _cleanup(v) for k, v in d.items() if v and _cleanup(v)}\n else:\n return d\n\n\nCONFIG = _cleanup(\n {\n "Compilers": {\n "c": {\n "name": "msvc",\n "linker": r"link",\n "version": "19.43.34808",\n "commands": r"cl",\n "args": r"",\n "linker args": r"",\n },\n "cython": {\n "name": "cython",\n "linker": r"cython",\n "version": "3.1.2",\n "commands": r"cython",\n "args": r"",\n "linker args": r"",\n },\n "c++": {\n "name": "msvc",\n "linker": r"link",\n "version": "19.43.34808",\n "commands": r"cl",\n "args": r"",\n "linker args": r"",\n },\n },\n "Machine Information": {\n "host": {\n "cpu": "x86_64",\n "family": "x86_64",\n "endian": "little",\n "system": "windows",\n },\n "build": {\n "cpu": "x86_64",\n "family": "x86_64",\n "endian": "little",\n "system": "windows",\n },\n "cross-compiled": bool("False".lower().replace("false", "")),\n },\n "Build Dependencies": {\n "blas": {\n "name": "scipy-openblas",\n "found": bool("True".lower().replace("false", "")),\n "version": "0.3.29",\n "detection method": "pkgconfig",\n "include directory": r"C:/Users/runneradmin/AppData/Local/Temp/cibw-run-9tmma_o0/cp313-win_amd64/build/venv/Lib/site-packages/scipy_openblas64/include",\n "lib directory": r"C:/Users/runneradmin/AppData/Local/Temp/cibw-run-9tmma_o0/cp313-win_amd64/build/venv/Lib/site-packages/scipy_openblas64/lib",\n "openblas configuration": r"OpenBLAS 0.3.29 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell 
MAX_THREADS=24",\n "pc file directory": r"D:/a/numpy/numpy/.openblas",\n },\n "lapack": {\n "name": "scipy-openblas",\n "found": bool("True".lower().replace("false", "")),\n "version": "0.3.29",\n "detection method": "pkgconfig",\n "include directory": r"C:/Users/runneradmin/AppData/Local/Temp/cibw-run-9tmma_o0/cp313-win_amd64/build/venv/Lib/site-packages/scipy_openblas64/include",\n "lib directory": r"C:/Users/runneradmin/AppData/Local/Temp/cibw-run-9tmma_o0/cp313-win_amd64/build/venv/Lib/site-packages/scipy_openblas64/lib",\n "openblas configuration": r"OpenBLAS 0.3.29 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=24",\n "pc file directory": r"D:/a/numpy/numpy/.openblas",\n },\n },\n "Python Information": {\n "path": r"C:\Users\runneradmin\AppData\Local\Temp\build-env-13_2l0e9\Scripts\python.exe",\n "version": "3.13",\n },\n "SIMD Extensions": {\n "baseline": __cpu_baseline__,\n "found": [\n feature for feature in __cpu_dispatch__ if __cpu_features__[feature]\n ],\n "not found": [\n feature for feature in __cpu_dispatch__ if not __cpu_features__[feature]\n ],\n },\n }\n)\n\n\ndef _check_pyyaml():\n import yaml\n\n return yaml\n\n\ndef show(mode=DisplayModes.stdout.value):\n """\n Show libraries and system information on which NumPy was built\n and is being used\n\n Parameters\n ----------\n mode : {`'stdout'`, `'dicts'`}, optional.\n Indicates how to display the config information.\n `'stdout'` prints to console, `'dicts'` returns a dictionary\n of the configuration.\n\n Returns\n -------\n out : {`dict`, `None`}\n If mode is `'dicts'`, a dict is returned, else None\n\n See Also\n --------\n get_include : Returns the directory containing NumPy C\n header files.\n\n Notes\n -----\n 1. 
The `'stdout'` mode will give more readable\n output if ``pyyaml`` is installed\n\n """\n if mode == DisplayModes.stdout.value:\n try: # Non-standard library, check import\n yaml = _check_pyyaml()\n\n print(yaml.dump(CONFIG))\n except ModuleNotFoundError:\n import warnings\n import json\n\n warnings.warn("Install `pyyaml` for better output", stacklevel=1)\n print(json.dumps(CONFIG, indent=2))\n elif mode == DisplayModes.dicts.value:\n return CONFIG\n else:\n raise AttributeError(\n f"Invalid `mode`, use one of: {', '.join([e.value for e in DisplayModes])}"\n )\n\n\ndef show_config(mode=DisplayModes.stdout.value):\n return show(mode)\n\n\nshow_config.__doc__ = show.__doc__\nshow_config.__module__ = "numpy"\n
.venv\Lib\site-packages\numpy\__config__.py
__config__.py
Python
5,693
0.95
0.1
0.013605
awesome-app
844
2024-05-15T08:38:54.342264
Apache-2.0
false
e55fa8f7febc7cf3ee5aec76ee0c5435
from enum import Enum\nfrom types import ModuleType\nfrom typing import Final, NotRequired, TypedDict, overload, type_check_only\nfrom typing import Literal as L\n\n_CompilerConfigDictValue = TypedDict(\n "_CompilerConfigDictValue",\n {\n "name": str,\n "linker": str,\n "version": str,\n "commands": str,\n "args": str,\n "linker args": str,\n },\n)\n_CompilerConfigDict = TypedDict(\n "_CompilerConfigDict",\n {\n "c": _CompilerConfigDictValue,\n "cython": _CompilerConfigDictValue,\n "c++": _CompilerConfigDictValue,\n },\n)\n_MachineInformationDict = TypedDict(\n "_MachineInformationDict",\n {\n "host": _MachineInformationDictValue,\n "build": _MachineInformationDictValue,\n "cross-compiled": NotRequired[L[True]],\n },\n)\n\n@type_check_only\nclass _MachineInformationDictValue(TypedDict):\n cpu: str\n family: str\n endian: L["little", "big"]\n system: str\n\n_BuildDependenciesDictValue = TypedDict(\n "_BuildDependenciesDictValue",\n {\n "name": str,\n "found": NotRequired[L[True]],\n "version": str,\n "include directory": str,\n "lib directory": str,\n "openblas configuration": str,\n "pc file directory": str,\n },\n)\n\nclass _BuildDependenciesDict(TypedDict):\n blas: _BuildDependenciesDictValue\n lapack: _BuildDependenciesDictValue\n\nclass _PythonInformationDict(TypedDict):\n path: str\n version: str\n\n_SIMDExtensionsDict = TypedDict(\n "_SIMDExtensionsDict",\n {\n "baseline": list[str],\n "found": list[str],\n "not found": list[str],\n },\n)\n\n_ConfigDict = TypedDict(\n "_ConfigDict",\n {\n "Compilers": _CompilerConfigDict,\n "Machine Information": _MachineInformationDict,\n "Build Dependencies": _BuildDependenciesDict,\n "Python Information": _PythonInformationDict,\n "SIMD Extensions": _SIMDExtensionsDict,\n },\n)\n\n###\n\n__all__ = ["show_config"]\n\nCONFIG: Final[_ConfigDict] = ...\n\nclass DisplayModes(Enum):\n stdout = "stdout"\n dicts = "dicts"\n\ndef _check_pyyaml() -> ModuleType: ...\n\n@overload\ndef show(mode: L["stdout"] = "stdout") -> None: 
...\n@overload\ndef show(mode: L["dicts"]) -> _ConfigDict: ...\n\n@overload\ndef show_config(mode: L["stdout"] = "stdout") -> None: ...\n@overload\ndef show_config(mode: L["dicts"]) -> _ConfigDict: ...\n
.venv\Lib\site-packages\numpy\__config__.pyi
__config__.pyi
Other
2,469
0.95
0.088235
0.011364
react-lib
156
2024-12-23T07:25:44.360893
MIT
false
13d850c4050b5fea0d7a99204299d249
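The `__config__.pyi` stub above types `show_config` so that `mode="stdout"` returns `None` while `mode="dicts"` returns the config mapping, using `Literal`-keyed `@overload`s. A minimal self-contained sketch of that pattern follows; the `_ConfigDict` fields and values here are illustrative placeholders, not NumPy's actual config payload or implementation.

```python
from typing import Literal, TypedDict, overload


class _ConfigDict(TypedDict):
    # Hypothetical subset of a build-config payload, for illustration only.
    compiler: str
    version: str


_CONFIG: _ConfigDict = {"compiler": "gcc", "version": "2.3.1"}


# The Literal argument type selects which overload (and return type) applies,
# so type checkers know show_config("dicts") is a _ConfigDict, not None.
@overload
def show_config(mode: Literal["stdout"] = "stdout") -> None: ...
@overload
def show_config(mode: Literal["dicts"]) -> _ConfigDict: ...
def show_config(mode: str = "stdout"):
    if mode == "dicts":
        # Return the raw mapping for programmatic use.
        return _CONFIG
    # Pretty-print to stdout and return None, matching the first overload.
    for key, value in _CONFIG.items():
        print(f"{key}: {value}")
    return None
```

At runtime the `@overload` bodies are never executed; only the final definition runs. The overloads exist purely so static checkers can narrow the return type from the literal string passed in.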
# NumPy static imports for Cython >= 3.0\n#\n# If any of the PyArray_* functions are called, import_array must be\n# called first. This is done automatically by Cython 3.0+ if a call\n# is not detected inside of the module.\n#\n# Author: Dag Sverre Seljebotn\n#\n\nfrom cpython.ref cimport Py_INCREF\nfrom cpython.object cimport PyObject, PyTypeObject, PyObject_TypeCheck\ncimport libc.stdio as stdio\n\n\ncdef extern from *:\n # Leave a marker that the NumPy declarations came from NumPy itself and not from Cython.\n # See https://github.com/cython/cython/issues/3573\n """\n /* Using NumPy API declarations from "numpy/__init__.cython-30.pxd" */\n """\n\n\ncdef extern from "numpy/arrayobject.h":\n # It would be nice to use size_t and ssize_t, but ssize_t has special\n # implicit conversion rules, so just use "long".\n # Note: The actual type only matters for Cython promotion, so long\n # is closer than int, but could lead to incorrect promotion.\n # (Not to worrying, and always the status-quo.)\n ctypedef signed long npy_intp\n ctypedef unsigned long npy_uintp\n\n ctypedef unsigned char npy_bool\n\n ctypedef signed char npy_byte\n ctypedef signed short npy_short\n ctypedef signed int npy_int\n ctypedef signed long npy_long\n ctypedef signed long long npy_longlong\n\n ctypedef unsigned char npy_ubyte\n ctypedef unsigned short npy_ushort\n ctypedef unsigned int npy_uint\n ctypedef unsigned long npy_ulong\n ctypedef unsigned long long npy_ulonglong\n\n ctypedef float npy_float\n ctypedef double npy_double\n ctypedef long double npy_longdouble\n\n ctypedef signed char npy_int8\n ctypedef signed short npy_int16\n ctypedef signed int npy_int32\n ctypedef signed long long npy_int64\n\n ctypedef unsigned char npy_uint8\n ctypedef unsigned short npy_uint16\n ctypedef unsigned int npy_uint32\n ctypedef unsigned long long npy_uint64\n\n ctypedef float npy_float32\n ctypedef double npy_float64\n ctypedef long double npy_float80\n ctypedef long double npy_float96\n ctypedef long 
double npy_float128\n\n ctypedef struct npy_cfloat:\n pass\n\n ctypedef struct npy_cdouble:\n pass\n\n ctypedef struct npy_clongdouble:\n pass\n\n ctypedef struct npy_complex64:\n pass\n\n ctypedef struct npy_complex128:\n pass\n\n ctypedef struct npy_complex160:\n pass\n\n ctypedef struct npy_complex192:\n pass\n\n ctypedef struct npy_complex256:\n pass\n\n ctypedef struct PyArray_Dims:\n npy_intp *ptr\n int len\n\n\n cdef enum NPY_TYPES:\n NPY_BOOL\n NPY_BYTE\n NPY_UBYTE\n NPY_SHORT\n NPY_USHORT\n NPY_INT\n NPY_UINT\n NPY_LONG\n NPY_ULONG\n NPY_LONGLONG\n NPY_ULONGLONG\n NPY_FLOAT\n NPY_DOUBLE\n NPY_LONGDOUBLE\n NPY_CFLOAT\n NPY_CDOUBLE\n NPY_CLONGDOUBLE\n NPY_OBJECT\n NPY_STRING\n NPY_UNICODE\n NPY_VSTRING\n NPY_VOID\n NPY_DATETIME\n NPY_TIMEDELTA\n NPY_NTYPES_LEGACY\n NPY_NOTYPE\n\n NPY_INT8\n NPY_INT16\n NPY_INT32\n NPY_INT64\n NPY_UINT8\n NPY_UINT16\n NPY_UINT32\n NPY_UINT64\n NPY_FLOAT16\n NPY_FLOAT32\n NPY_FLOAT64\n NPY_FLOAT80\n NPY_FLOAT96\n NPY_FLOAT128\n NPY_COMPLEX64\n NPY_COMPLEX128\n NPY_COMPLEX160\n NPY_COMPLEX192\n NPY_COMPLEX256\n\n NPY_INTP\n NPY_UINTP\n NPY_DEFAULT_INT # Not a compile time constant (normally)!\n\n ctypedef enum NPY_ORDER:\n NPY_ANYORDER\n NPY_CORDER\n NPY_FORTRANORDER\n NPY_KEEPORDER\n\n ctypedef enum NPY_CASTING:\n NPY_NO_CASTING\n NPY_EQUIV_CASTING\n NPY_SAFE_CASTING\n NPY_SAME_KIND_CASTING\n NPY_UNSAFE_CASTING\n\n ctypedef enum NPY_CLIPMODE:\n NPY_CLIP\n NPY_WRAP\n NPY_RAISE\n\n ctypedef enum NPY_SCALARKIND:\n NPY_NOSCALAR,\n NPY_BOOL_SCALAR,\n NPY_INTPOS_SCALAR,\n NPY_INTNEG_SCALAR,\n NPY_FLOAT_SCALAR,\n NPY_COMPLEX_SCALAR,\n NPY_OBJECT_SCALAR\n\n ctypedef enum NPY_SORTKIND:\n NPY_QUICKSORT\n NPY_HEAPSORT\n NPY_MERGESORT\n\n ctypedef enum NPY_SEARCHSIDE:\n NPY_SEARCHLEFT\n NPY_SEARCHRIGHT\n\n enum:\n NPY_ARRAY_C_CONTIGUOUS\n NPY_ARRAY_F_CONTIGUOUS\n NPY_ARRAY_OWNDATA\n NPY_ARRAY_FORCECAST\n NPY_ARRAY_ENSURECOPY\n NPY_ARRAY_ENSUREARRAY\n NPY_ARRAY_ELEMENTSTRIDES\n NPY_ARRAY_ALIGNED\n NPY_ARRAY_NOTSWAPPED\n 
NPY_ARRAY_WRITEABLE\n NPY_ARRAY_WRITEBACKIFCOPY\n\n NPY_ARRAY_BEHAVED\n NPY_ARRAY_BEHAVED_NS\n NPY_ARRAY_CARRAY\n NPY_ARRAY_CARRAY_RO\n NPY_ARRAY_FARRAY\n NPY_ARRAY_FARRAY_RO\n NPY_ARRAY_DEFAULT\n\n NPY_ARRAY_IN_ARRAY\n NPY_ARRAY_OUT_ARRAY\n NPY_ARRAY_INOUT_ARRAY\n NPY_ARRAY_IN_FARRAY\n NPY_ARRAY_OUT_FARRAY\n NPY_ARRAY_INOUT_FARRAY\n\n NPY_ARRAY_UPDATE_ALL\n\n cdef enum:\n NPY_MAXDIMS # 64 on NumPy 2.x and 32 on NumPy 1.x\n NPY_RAVEL_AXIS # Used for functions like PyArray_Mean\n\n ctypedef void (*PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, void *)\n\n ctypedef struct PyArray_ArrayDescr:\n # shape is a tuple, but Cython doesn't support "tuple shape"\n # inside a non-PyObject declaration, so we have to declare it\n # as just a PyObject*.\n PyObject* shape\n\n ctypedef struct PyArray_Descr:\n pass\n\n ctypedef class numpy.dtype [object PyArray_Descr, check_size ignore]:\n # Use PyDataType_* macros when possible, however there are no macros\n # for accessing some of the fields, so some are defined.\n cdef PyTypeObject* typeobj\n cdef char kind\n cdef char type\n # Numpy sometimes mutates this without warning (e.g. it'll\n # sometimes change "|" to "<" in shared dtype objects on\n # little-endian machines). If this matters to you, use\n # PyArray_IsNativeByteOrder(dtype.byteorder) instead of\n # directly accessing this field.\n cdef char byteorder\n cdef int type_num\n\n @property\n cdef inline npy_intp itemsize(self) noexcept nogil:\n return PyDataType_ELSIZE(self)\n\n @property\n cdef inline npy_intp alignment(self) noexcept nogil:\n return PyDataType_ALIGNMENT(self)\n\n # Use fields/names with care as they may be NULL. You must check\n # for this using PyDataType_HASFIELDS.\n @property\n cdef inline object fields(self):\n return <object>PyDataType_FIELDS(self)\n\n @property\n cdef inline tuple names(self):\n return <tuple>PyDataType_NAMES(self)\n\n # Use PyDataType_HASSUBARRAY to test whether this field is\n # valid (the pointer can be NULL). 
Most users should access\n # this field via the inline helper method PyDataType_SHAPE.\n @property\n cdef inline PyArray_ArrayDescr* subarray(self) noexcept nogil:\n return PyDataType_SUBARRAY(self)\n\n @property\n cdef inline npy_uint64 flags(self) noexcept nogil:\n """The data types flags."""\n return PyDataType_FLAGS(self)\n\n\n ctypedef class numpy.flatiter [object PyArrayIterObject, check_size ignore]:\n # Use through macros\n pass\n\n ctypedef class numpy.broadcast [object PyArrayMultiIterObject, check_size ignore]:\n\n @property\n cdef inline int numiter(self) noexcept nogil:\n """The number of arrays that need to be broadcast to the same shape."""\n return PyArray_MultiIter_NUMITER(self)\n\n @property\n cdef inline npy_intp size(self) noexcept nogil:\n """The total broadcasted size."""\n return PyArray_MultiIter_SIZE(self)\n\n @property\n cdef inline npy_intp index(self) noexcept nogil:\n """The current (1-d) index into the broadcasted result."""\n return PyArray_MultiIter_INDEX(self)\n\n @property\n cdef inline int nd(self) noexcept nogil:\n """The number of dimensions in the broadcasted result."""\n return PyArray_MultiIter_NDIM(self)\n\n @property\n cdef inline npy_intp* dimensions(self) noexcept nogil:\n """The shape of the broadcasted result."""\n return PyArray_MultiIter_DIMS(self)\n\n @property\n cdef inline void** iters(self) noexcept nogil:\n """An array of iterator objects that holds the iterators for the arrays to be broadcast together.\n On return, the iterators are adjusted for broadcasting."""\n return PyArray_MultiIter_ITERS(self)\n\n\n ctypedef struct PyArrayObject:\n # For use in situations where ndarray can't replace PyArrayObject*,\n # like PyArrayObject**.\n pass\n\n ctypedef class numpy.ndarray [object PyArrayObject, check_size ignore]:\n cdef __cythonbufferdefaults__ = {"mode": "strided"}\n\n # NOTE: no field declarations since direct access is deprecated since NumPy 1.7\n # Instead, we use properties that map to the corresponding 
C-API functions.\n\n @property\n cdef inline PyObject* base(self) noexcept nogil:\n """Returns a borrowed reference to the object owning the data/memory.\n """\n return PyArray_BASE(self)\n\n @property\n cdef inline dtype descr(self):\n """Returns an owned reference to the dtype of the array.\n """\n return <dtype>PyArray_DESCR(self)\n\n @property\n cdef inline int ndim(self) noexcept nogil:\n """Returns the number of dimensions in the array.\n """\n return PyArray_NDIM(self)\n\n @property\n cdef inline npy_intp *shape(self) noexcept nogil:\n """Returns a pointer to the dimensions/shape of the array.\n The number of elements matches the number of dimensions of the array (ndim).\n Can return NULL for 0-dimensional arrays.\n """\n return PyArray_DIMS(self)\n\n @property\n cdef inline npy_intp *strides(self) noexcept nogil:\n """Returns a pointer to the strides of the array.\n The number of elements matches the number of dimensions of the array (ndim).\n """\n return PyArray_STRIDES(self)\n\n @property\n cdef inline npy_intp size(self) noexcept nogil:\n """Returns the total size (in number of elements) of the array.\n """\n return PyArray_SIZE(self)\n\n @property\n cdef inline char* data(self) noexcept nogil:\n """The pointer to the data buffer as a char*.\n This is provided for legacy reasons to avoid direct struct field access.\n For new code that needs this access, you probably want to cast the result\n of `PyArray_DATA()` instead, which returns a 'void*'.\n """\n return PyArray_BYTES(self)\n\n\n int _import_array() except -1\n # A second definition so _import_array isn't marked as used when we use it here.\n # Do not use - subject to change any time.\n int __pyx_import_array "_import_array"() except -1\n\n #\n # Macros from ndarrayobject.h\n #\n bint PyArray_CHKFLAGS(ndarray m, int flags) nogil\n bint PyArray_IS_C_CONTIGUOUS(ndarray arr) nogil\n bint PyArray_IS_F_CONTIGUOUS(ndarray arr) nogil\n bint PyArray_ISCONTIGUOUS(ndarray m) nogil\n bint 
PyArray_ISWRITEABLE(ndarray m) nogil\n bint PyArray_ISALIGNED(ndarray m) nogil\n\n int PyArray_NDIM(ndarray) nogil\n bint PyArray_ISONESEGMENT(ndarray) nogil\n bint PyArray_ISFORTRAN(ndarray) nogil\n int PyArray_FORTRANIF(ndarray) nogil\n\n void* PyArray_DATA(ndarray) nogil\n char* PyArray_BYTES(ndarray) nogil\n\n npy_intp* PyArray_DIMS(ndarray) nogil\n npy_intp* PyArray_STRIDES(ndarray) nogil\n npy_intp PyArray_DIM(ndarray, size_t) nogil\n npy_intp PyArray_STRIDE(ndarray, size_t) nogil\n\n PyObject *PyArray_BASE(ndarray) nogil # returns borrowed reference!\n PyArray_Descr *PyArray_DESCR(ndarray) nogil # returns borrowed reference to dtype!\n PyArray_Descr *PyArray_DTYPE(ndarray) nogil # returns borrowed reference to dtype! NP 1.7+ alias for descr.\n int PyArray_FLAGS(ndarray) nogil\n void PyArray_CLEARFLAGS(ndarray, int flags) nogil # Added in NumPy 1.7\n void PyArray_ENABLEFLAGS(ndarray, int flags) nogil # Added in NumPy 1.7\n npy_intp PyArray_ITEMSIZE(ndarray) nogil\n int PyArray_TYPE(ndarray arr) nogil\n\n object PyArray_GETITEM(ndarray arr, void *itemptr)\n int PyArray_SETITEM(ndarray arr, void *itemptr, object obj) except -1\n\n bint PyTypeNum_ISBOOL(int) nogil\n bint PyTypeNum_ISUNSIGNED(int) nogil\n bint PyTypeNum_ISSIGNED(int) nogil\n bint PyTypeNum_ISINTEGER(int) nogil\n bint PyTypeNum_ISFLOAT(int) nogil\n bint PyTypeNum_ISNUMBER(int) nogil\n bint PyTypeNum_ISSTRING(int) nogil\n bint PyTypeNum_ISCOMPLEX(int) nogil\n bint PyTypeNum_ISFLEXIBLE(int) nogil\n bint PyTypeNum_ISUSERDEF(int) nogil\n bint PyTypeNum_ISEXTENDED(int) nogil\n bint PyTypeNum_ISOBJECT(int) nogil\n\n npy_intp PyDataType_ELSIZE(dtype) nogil\n npy_intp PyDataType_ALIGNMENT(dtype) nogil\n PyObject* PyDataType_METADATA(dtype) nogil\n PyArray_ArrayDescr* PyDataType_SUBARRAY(dtype) nogil\n PyObject* PyDataType_NAMES(dtype) nogil\n PyObject* PyDataType_FIELDS(dtype) nogil\n\n bint PyDataType_ISBOOL(dtype) nogil\n bint PyDataType_ISUNSIGNED(dtype) nogil\n bint PyDataType_ISSIGNED(dtype) nogil\n 
bint PyDataType_ISINTEGER(dtype) nogil\n bint PyDataType_ISFLOAT(dtype) nogil\n bint PyDataType_ISNUMBER(dtype) nogil\n bint PyDataType_ISSTRING(dtype) nogil\n bint PyDataType_ISCOMPLEX(dtype) nogil\n bint PyDataType_ISFLEXIBLE(dtype) nogil\n bint PyDataType_ISUSERDEF(dtype) nogil\n bint PyDataType_ISEXTENDED(dtype) nogil\n bint PyDataType_ISOBJECT(dtype) nogil\n bint PyDataType_HASFIELDS(dtype) nogil\n bint PyDataType_HASSUBARRAY(dtype) nogil\n npy_uint64 PyDataType_FLAGS(dtype) nogil\n\n bint PyArray_ISBOOL(ndarray) nogil\n bint PyArray_ISUNSIGNED(ndarray) nogil\n bint PyArray_ISSIGNED(ndarray) nogil\n bint PyArray_ISINTEGER(ndarray) nogil\n bint PyArray_ISFLOAT(ndarray) nogil\n bint PyArray_ISNUMBER(ndarray) nogil\n bint PyArray_ISSTRING(ndarray) nogil\n bint PyArray_ISCOMPLEX(ndarray) nogil\n bint PyArray_ISFLEXIBLE(ndarray) nogil\n bint PyArray_ISUSERDEF(ndarray) nogil\n bint PyArray_ISEXTENDED(ndarray) nogil\n bint PyArray_ISOBJECT(ndarray) nogil\n bint PyArray_HASFIELDS(ndarray) nogil\n\n bint PyArray_ISVARIABLE(ndarray) nogil\n\n bint PyArray_SAFEALIGNEDCOPY(ndarray) nogil\n bint PyArray_ISNBO(char) nogil # works on ndarray.byteorder\n bint PyArray_IsNativeByteOrder(char) nogil # works on ndarray.byteorder\n bint PyArray_ISNOTSWAPPED(ndarray) nogil\n bint PyArray_ISBYTESWAPPED(ndarray) nogil\n\n bint PyArray_FLAGSWAP(ndarray, int) nogil\n\n bint PyArray_ISCARRAY(ndarray) nogil\n bint PyArray_ISCARRAY_RO(ndarray) nogil\n bint PyArray_ISFARRAY(ndarray) nogil\n bint PyArray_ISFARRAY_RO(ndarray) nogil\n bint PyArray_ISBEHAVED(ndarray) nogil\n bint PyArray_ISBEHAVED_RO(ndarray) nogil\n\n\n bint PyDataType_ISNOTSWAPPED(dtype) nogil\n bint PyDataType_ISBYTESWAPPED(dtype) nogil\n\n bint PyArray_DescrCheck(object)\n\n bint PyArray_Check(object)\n bint PyArray_CheckExact(object)\n\n # Cannot be supported due to out arg:\n # bint PyArray_HasArrayInterfaceType(object, dtype, object, object&)\n # bint PyArray_HasArrayInterface(op, out)\n\n\n bint 
PyArray_IsZeroDim(object)\n # Cannot be supported due to ## ## in macro:\n # bint PyArray_IsScalar(object, verbatim work)\n bint PyArray_CheckScalar(object)\n bint PyArray_IsPythonNumber(object)\n bint PyArray_IsPythonScalar(object)\n bint PyArray_IsAnyScalar(object)\n bint PyArray_CheckAnyScalar(object)\n\n ndarray PyArray_GETCONTIGUOUS(ndarray)\n bint PyArray_SAMESHAPE(ndarray, ndarray) nogil\n npy_intp PyArray_SIZE(ndarray) nogil\n npy_intp PyArray_NBYTES(ndarray) nogil\n\n object PyArray_FROM_O(object)\n object PyArray_FROM_OF(object m, int flags)\n object PyArray_FROM_OT(object m, int type)\n object PyArray_FROM_OTF(object m, int type, int flags)\n object PyArray_FROMANY(object m, int type, int min, int max, int flags)\n object PyArray_ZEROS(int nd, npy_intp* dims, int type, int fortran)\n object PyArray_EMPTY(int nd, npy_intp* dims, int type, int fortran)\n void PyArray_FILLWBYTE(ndarray, int val)\n object PyArray_ContiguousFromAny(op, int, int min_depth, int max_depth)\n unsigned char PyArray_EquivArrTypes(ndarray a1, ndarray a2)\n bint PyArray_EquivByteorders(int b1, int b2) nogil\n object PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)\n object PyArray_SimpleNewFromData(int nd, npy_intp* dims, int typenum, void* data)\n #object PyArray_SimpleNewFromDescr(int nd, npy_intp* dims, dtype descr)\n object PyArray_ToScalar(void* data, ndarray arr)\n\n void* PyArray_GETPTR1(ndarray m, npy_intp i) nogil\n void* PyArray_GETPTR2(ndarray m, npy_intp i, npy_intp j) nogil\n void* PyArray_GETPTR3(ndarray m, npy_intp i, npy_intp j, npy_intp k) nogil\n void* PyArray_GETPTR4(ndarray m, npy_intp i, npy_intp j, npy_intp k, npy_intp l) nogil\n\n # Cannot be supported due to out arg\n # void PyArray_DESCR_REPLACE(descr)\n\n\n object PyArray_Copy(ndarray)\n object PyArray_FromObject(object op, int type, int min_depth, int max_depth)\n object PyArray_ContiguousFromObject(object op, int type, int min_depth, int max_depth)\n object PyArray_CopyFromObject(object op, int type, 
int min_depth, int max_depth)\n\n object PyArray_Cast(ndarray mp, int type_num)\n object PyArray_Take(ndarray ap, object items, int axis)\n object PyArray_Put(ndarray ap, object items, object values)\n\n void PyArray_ITER_RESET(flatiter it) nogil\n void PyArray_ITER_NEXT(flatiter it) nogil\n void PyArray_ITER_GOTO(flatiter it, npy_intp* destination) nogil\n void PyArray_ITER_GOTO1D(flatiter it, npy_intp ind) nogil\n void* PyArray_ITER_DATA(flatiter it) nogil\n bint PyArray_ITER_NOTDONE(flatiter it) nogil\n\n void PyArray_MultiIter_RESET(broadcast multi) nogil\n void PyArray_MultiIter_NEXT(broadcast multi) nogil\n void PyArray_MultiIter_GOTO(broadcast multi, npy_intp dest) nogil\n void PyArray_MultiIter_GOTO1D(broadcast multi, npy_intp ind) nogil\n void* PyArray_MultiIter_DATA(broadcast multi, npy_intp i) nogil\n void PyArray_MultiIter_NEXTi(broadcast multi, npy_intp i) nogil\n bint PyArray_MultiIter_NOTDONE(broadcast multi) nogil\n npy_intp PyArray_MultiIter_SIZE(broadcast multi) nogil\n int PyArray_MultiIter_NDIM(broadcast multi) nogil\n npy_intp PyArray_MultiIter_INDEX(broadcast multi) nogil\n int PyArray_MultiIter_NUMITER(broadcast multi) nogil\n npy_intp* PyArray_MultiIter_DIMS(broadcast multi) nogil\n void** PyArray_MultiIter_ITERS(broadcast multi) nogil\n\n # Functions from __multiarray_api.h\n\n # Functions taking dtype and returning object/ndarray are disabled\n # for now as they steal dtype references. 
I'm conservative and disable\n # more than is probably needed until it can be checked further.\n int PyArray_INCREF (ndarray) except * # uses PyArray_Item_INCREF...\n int PyArray_XDECREF (ndarray) except * # uses PyArray_Item_DECREF...\n dtype PyArray_DescrFromType (int)\n object PyArray_TypeObjectFromType (int)\n char * PyArray_Zero (ndarray)\n char * PyArray_One (ndarray)\n #object PyArray_CastToType (ndarray, dtype, int)\n int PyArray_CanCastSafely (int, int) # writes errors\n npy_bool PyArray_CanCastTo (dtype, dtype) # writes errors\n int PyArray_ObjectType (object, int) except 0\n dtype PyArray_DescrFromObject (object, dtype)\n #ndarray* PyArray_ConvertToCommonType (object, int *)\n dtype PyArray_DescrFromScalar (object)\n dtype PyArray_DescrFromTypeObject (object)\n npy_intp PyArray_Size (object)\n #object PyArray_Scalar (void *, dtype, object)\n #object PyArray_FromScalar (object, dtype)\n void PyArray_ScalarAsCtype (object, void *)\n #int PyArray_CastScalarToCtype (object, void *, dtype)\n #int PyArray_CastScalarDirect (object, dtype, void *, int)\n #PyArray_VectorUnaryFunc * PyArray_GetCastFunc (dtype, int)\n #object PyArray_FromAny (object, dtype, int, int, int, object)\n object PyArray_EnsureArray (object)\n object PyArray_EnsureAnyArray (object)\n #object PyArray_FromFile (stdio.FILE *, dtype, npy_intp, char *)\n #object PyArray_FromString (char *, npy_intp, dtype, npy_intp, char *)\n #object PyArray_FromBuffer (object, dtype, npy_intp, npy_intp)\n #object PyArray_FromIter (object, dtype, npy_intp)\n object PyArray_Return (ndarray)\n #object PyArray_GetField (ndarray, dtype, int)\n #int PyArray_SetField (ndarray, dtype, int, object) except -1\n object PyArray_Byteswap (ndarray, npy_bool)\n object PyArray_Resize (ndarray, PyArray_Dims *, int, NPY_ORDER)\n int PyArray_CopyInto (ndarray, ndarray) except -1\n int PyArray_CopyAnyInto (ndarray, ndarray) except -1\n int PyArray_CopyObject (ndarray, object) except -1\n object PyArray_NewCopy (ndarray, 
NPY_ORDER)\n object PyArray_ToList (ndarray)\n object PyArray_ToString (ndarray, NPY_ORDER)\n int PyArray_ToFile (ndarray, stdio.FILE *, char *, char *) except -1\n int PyArray_Dump (object, object, int) except -1\n object PyArray_Dumps (object, int)\n int PyArray_ValidType (int) # Cannot error\n void PyArray_UpdateFlags (ndarray, int)\n object PyArray_New (type, int, npy_intp *, int, npy_intp *, void *, int, int, object)\n #object PyArray_NewFromDescr (type, dtype, int, npy_intp *, npy_intp *, void *, int, object)\n #dtype PyArray_DescrNew (dtype)\n dtype PyArray_DescrNewFromType (int)\n double PyArray_GetPriority (object, double) # clears errors as of 1.25\n object PyArray_IterNew (object)\n object PyArray_MultiIterNew (int, ...)\n\n int PyArray_PyIntAsInt (object) except? -1\n npy_intp PyArray_PyIntAsIntp (object)\n int PyArray_Broadcast (broadcast) except -1\n int PyArray_FillWithScalar (ndarray, object) except -1\n npy_bool PyArray_CheckStrides (int, int, npy_intp, npy_intp, npy_intp *, npy_intp *)\n dtype PyArray_DescrNewByteorder (dtype, char)\n object PyArray_IterAllButAxis (object, int *)\n #object PyArray_CheckFromAny (object, dtype, int, int, int, object)\n #object PyArray_FromArray (ndarray, dtype, int)\n object PyArray_FromInterface (object)\n object PyArray_FromStructInterface (object)\n #object PyArray_FromArrayAttr (object, dtype, object)\n #NPY_SCALARKIND PyArray_ScalarKind (int, ndarray*)\n int PyArray_CanCoerceScalar (int, int, NPY_SCALARKIND)\n npy_bool PyArray_CanCastScalar (type, type)\n int PyArray_RemoveSmallest (broadcast) except -1\n int PyArray_ElementStrides (object)\n void PyArray_Item_INCREF (char *, dtype) except *\n void PyArray_Item_XDECREF (char *, dtype) except *\n object PyArray_Transpose (ndarray, PyArray_Dims *)\n object PyArray_TakeFrom (ndarray, object, int, ndarray, NPY_CLIPMODE)\n object PyArray_PutTo (ndarray, object, object, NPY_CLIPMODE)\n object PyArray_PutMask (ndarray, object, object)\n object PyArray_Repeat (ndarray, 
object, int)\n object PyArray_Choose (ndarray, object, ndarray, NPY_CLIPMODE)\n int PyArray_Sort (ndarray, int, NPY_SORTKIND) except -1\n object PyArray_ArgSort (ndarray, int, NPY_SORTKIND)\n object PyArray_SearchSorted (ndarray, object, NPY_SEARCHSIDE, PyObject *)\n object PyArray_ArgMax (ndarray, int, ndarray)\n object PyArray_ArgMin (ndarray, int, ndarray)\n object PyArray_Reshape (ndarray, object)\n object PyArray_Newshape (ndarray, PyArray_Dims *, NPY_ORDER)\n object PyArray_Squeeze (ndarray)\n #object PyArray_View (ndarray, dtype, type)\n object PyArray_SwapAxes (ndarray, int, int)\n object PyArray_Max (ndarray, int, ndarray)\n object PyArray_Min (ndarray, int, ndarray)\n object PyArray_Ptp (ndarray, int, ndarray)\n object PyArray_Mean (ndarray, int, int, ndarray)\n object PyArray_Trace (ndarray, int, int, int, int, ndarray)\n object PyArray_Diagonal (ndarray, int, int, int)\n object PyArray_Clip (ndarray, object, object, ndarray)\n object PyArray_Conjugate (ndarray, ndarray)\n object PyArray_Nonzero (ndarray)\n object PyArray_Std (ndarray, int, int, ndarray, int)\n object PyArray_Sum (ndarray, int, int, ndarray)\n object PyArray_CumSum (ndarray, int, int, ndarray)\n object PyArray_Prod (ndarray, int, int, ndarray)\n object PyArray_CumProd (ndarray, int, int, ndarray)\n object PyArray_All (ndarray, int, ndarray)\n object PyArray_Any (ndarray, int, ndarray)\n object PyArray_Compress (ndarray, object, int, ndarray)\n object PyArray_Flatten (ndarray, NPY_ORDER)\n object PyArray_Ravel (ndarray, NPY_ORDER)\n npy_intp PyArray_MultiplyList (npy_intp *, int)\n int PyArray_MultiplyIntList (int *, int)\n void * PyArray_GetPtr (ndarray, npy_intp*)\n int PyArray_CompareLists (npy_intp *, npy_intp *, int)\n #int PyArray_AsCArray (object*, void *, npy_intp *, int, dtype)\n int PyArray_Free (object, void *)\n #int PyArray_Converter (object, object*)\n int PyArray_IntpFromSequence (object, npy_intp *, int) except -1\n object PyArray_Concatenate (object, int)\n object 
PyArray_InnerProduct (object, object)\n object PyArray_MatrixProduct (object, object)\n object PyArray_Correlate (object, object, int)\n #int PyArray_DescrConverter (object, dtype*) except 0\n #int PyArray_DescrConverter2 (object, dtype*) except 0\n int PyArray_IntpConverter (object, PyArray_Dims *) except 0\n #int PyArray_BufferConverter (object, chunk) except 0\n int PyArray_AxisConverter (object, int *) except 0\n int PyArray_BoolConverter (object, npy_bool *) except 0\n int PyArray_ByteorderConverter (object, char *) except 0\n int PyArray_OrderConverter (object, NPY_ORDER *) except 0\n unsigned char PyArray_EquivTypes (dtype, dtype) # clears errors\n #object PyArray_Zeros (int, npy_intp *, dtype, int)\n #object PyArray_Empty (int, npy_intp *, dtype, int)\n object PyArray_Where (object, object, object)\n object PyArray_Arange (double, double, double, int)\n #object PyArray_ArangeObj (object, object, object, dtype)\n int PyArray_SortkindConverter (object, NPY_SORTKIND *) except 0\n object PyArray_LexSort (object, int)\n object PyArray_Round (ndarray, int, ndarray)\n unsigned char PyArray_EquivTypenums (int, int)\n int PyArray_RegisterDataType (dtype) except -1\n int PyArray_RegisterCastFunc (dtype, int, PyArray_VectorUnaryFunc *) except -1\n int PyArray_RegisterCanCast (dtype, int, NPY_SCALARKIND) except -1\n #void PyArray_InitArrFuncs (PyArray_ArrFuncs *)\n object PyArray_IntTupleFromIntp (int, npy_intp *)\n int PyArray_ClipmodeConverter (object, NPY_CLIPMODE *) except 0\n #int PyArray_OutputConverter (object, ndarray*) except 0\n object PyArray_BroadcastToShape (object, npy_intp *, int)\n #int PyArray_DescrAlignConverter (object, dtype*) except 0\n #int PyArray_DescrAlignConverter2 (object, dtype*) except 0\n int PyArray_SearchsideConverter (object, void *) except 0\n object PyArray_CheckAxis (ndarray, int *, int)\n npy_intp PyArray_OverflowMultiplyList (npy_intp *, int)\n int PyArray_SetBaseObject(ndarray, base) except -1 # NOTE: steals a reference to base! 
Use "set_array_base()" instead.\n\n # The memory handler functions require the NumPy 1.22 API\n # and may require defining NPY_TARGET_VERSION\n ctypedef struct PyDataMemAllocator:\n void *ctx\n void* (*malloc) (void *ctx, size_t size)\n void* (*calloc) (void *ctx, size_t nelem, size_t elsize)\n void* (*realloc) (void *ctx, void *ptr, size_t new_size)\n void (*free) (void *ctx, void *ptr, size_t size)\n\n ctypedef struct PyDataMem_Handler:\n char* name\n npy_uint8 version\n PyDataMemAllocator allocator\n\n object PyDataMem_SetHandler(object handler)\n object PyDataMem_GetHandler()\n\n # additional datetime related functions are defined below\n\n\n# Typedefs that matches the runtime dtype objects in\n# the numpy module.\n\n# The ones that are commented out needs an IFDEF function\n# in Cython to enable them only on the right systems.\n\nctypedef npy_int8 int8_t\nctypedef npy_int16 int16_t\nctypedef npy_int32 int32_t\nctypedef npy_int64 int64_t\n\nctypedef npy_uint8 uint8_t\nctypedef npy_uint16 uint16_t\nctypedef npy_uint32 uint32_t\nctypedef npy_uint64 uint64_t\n\nctypedef npy_float32 float32_t\nctypedef npy_float64 float64_t\n#ctypedef npy_float80 float80_t\n#ctypedef npy_float128 float128_t\n\nctypedef float complex complex64_t\nctypedef double complex complex128_t\n\nctypedef npy_longlong longlong_t\nctypedef npy_ulonglong ulonglong_t\n\nctypedef npy_intp intp_t\nctypedef npy_uintp uintp_t\n\nctypedef npy_double float_t\nctypedef npy_double double_t\nctypedef npy_longdouble longdouble_t\n\nctypedef float complex cfloat_t\nctypedef double complex cdouble_t\nctypedef double complex complex_t\nctypedef long double complex clongdouble_t\n\ncdef inline object PyArray_MultiIterNew1(a):\n return PyArray_MultiIterNew(1, <void*>a)\n\ncdef inline object PyArray_MultiIterNew2(a, b):\n return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n\ncdef inline object PyArray_MultiIterNew3(a, b, c):\n return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n\ncdef inline object 
PyArray_MultiIterNew4(a, b, c, d):\n return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n\ncdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n\ncdef inline tuple PyDataType_SHAPE(dtype d):\n if PyDataType_HASSUBARRAY(d):\n return <tuple>d.subarray.shape\n else:\n return ()\n\n\ncdef extern from "numpy/ndarrayobject.h":\n PyTypeObject PyTimedeltaArrType_Type\n PyTypeObject PyDatetimeArrType_Type\n ctypedef int64_t npy_timedelta\n ctypedef int64_t npy_datetime\n\ncdef extern from "numpy/ndarraytypes.h":\n ctypedef struct PyArray_DatetimeMetaData:\n NPY_DATETIMEUNIT base\n int64_t num\n\n ctypedef struct npy_datetimestruct:\n int64_t year\n int32_t month, day, hour, min, sec, us, ps, as\n\n # Iterator API added in v1.6\n #\n # These don't match the definition in the C API because Cython can't wrap\n # function pointers that return functions.\n # https://github.com/cython/cython/issues/6720\n ctypedef int (*NpyIter_IterNextFunc "NpyIter_IterNextFunc *")(NpyIter* it) noexcept nogil\n ctypedef void (*NpyIter_GetMultiIndexFunc "NpyIter_GetMultiIndexFunc *")(NpyIter* it, npy_intp* outcoords) noexcept nogil\n\n\ncdef extern from "numpy/arrayscalars.h":\n\n # abstract types\n ctypedef class numpy.generic [object PyObject]:\n pass\n ctypedef class numpy.number [object PyObject]:\n pass\n ctypedef class numpy.integer [object PyObject]:\n pass\n ctypedef class numpy.signedinteger [object PyObject]:\n pass\n ctypedef class numpy.unsignedinteger [object PyObject]:\n pass\n ctypedef class numpy.inexact [object PyObject]:\n pass\n ctypedef class numpy.floating [object PyObject]:\n pass\n ctypedef class numpy.complexfloating [object PyObject]:\n pass\n ctypedef class numpy.flexible [object PyObject]:\n pass\n ctypedef class numpy.character [object PyObject]:\n pass\n\n ctypedef struct PyDatetimeScalarObject:\n # PyObject_HEAD\n npy_datetime obval\n 
PyArray_DatetimeMetaData obmeta\n\n ctypedef struct PyTimedeltaScalarObject:\n # PyObject_HEAD\n npy_timedelta obval\n PyArray_DatetimeMetaData obmeta\n\n ctypedef enum NPY_DATETIMEUNIT:\n NPY_FR_Y\n NPY_FR_M\n NPY_FR_W\n NPY_FR_D\n NPY_FR_B\n NPY_FR_h\n NPY_FR_m\n NPY_FR_s\n NPY_FR_ms\n NPY_FR_us\n NPY_FR_ns\n NPY_FR_ps\n NPY_FR_fs\n NPY_FR_as\n NPY_FR_GENERIC\n\n\ncdef extern from "numpy/arrayobject.h":\n # These are part of the C-API defined in `__multiarray_api.h`\n\n # NumPy internal definitions in datetime_strings.c:\n int get_datetime_iso_8601_strlen "NpyDatetime_GetDatetimeISO8601StrLen" (\n int local, NPY_DATETIMEUNIT base)\n int make_iso_8601_datetime "NpyDatetime_MakeISO8601Datetime" (\n npy_datetimestruct *dts, char *outstr, npy_intp outlen,\n int local, int utc, NPY_DATETIMEUNIT base, int tzoffset,\n NPY_CASTING casting) except -1\n\n # NumPy internal definition in datetime.c:\n # May return 1 to indicate that object does not appear to be a datetime\n # (returns 0 on success).\n int convert_pydatetime_to_datetimestruct "NpyDatetime_ConvertPyDateTimeToDatetimeStruct" (\n PyObject *obj, npy_datetimestruct *out,\n NPY_DATETIMEUNIT *out_bestunit, int apply_tzinfo) except -1\n int convert_datetime64_to_datetimestruct "NpyDatetime_ConvertDatetime64ToDatetimeStruct" (\n PyArray_DatetimeMetaData *meta, npy_datetime dt,\n npy_datetimestruct *out) except -1\n int convert_datetimestruct_to_datetime64 "NpyDatetime_ConvertDatetimeStructToDatetime64"(\n PyArray_DatetimeMetaData *meta, const npy_datetimestruct *dts,\n npy_datetime *out) except -1\n\n\n#\n# ufunc API\n#\n\ncdef extern from "numpy/ufuncobject.h":\n\n ctypedef void (*PyUFuncGenericFunction) (char **, npy_intp *, npy_intp *, void *)\n\n ctypedef class numpy.ufunc [object PyUFuncObject, check_size ignore]:\n cdef:\n int nin, nout, nargs\n int identity\n PyUFuncGenericFunction *functions\n void **data\n int ntypes\n int check_return\n char *name\n char *types\n char *doc\n void *ptr\n PyObject *obj\n 
PyObject *userloops\n\n cdef enum:\n PyUFunc_Zero\n PyUFunc_One\n PyUFunc_None\n # deprecated\n UFUNC_FPE_DIVIDEBYZERO\n UFUNC_FPE_OVERFLOW\n UFUNC_FPE_UNDERFLOW\n UFUNC_FPE_INVALID\n # use these instead\n NPY_FPE_DIVIDEBYZERO\n NPY_FPE_OVERFLOW\n NPY_FPE_UNDERFLOW\n NPY_FPE_INVALID\n\n\n object PyUFunc_FromFuncAndData(PyUFuncGenericFunction *,\n void **, char *, int, int, int, int, char *, char *, int)\n int PyUFunc_RegisterLoopForType(ufunc, int,\n PyUFuncGenericFunction, int *, void *) except -1\n void PyUFunc_f_f_As_d_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_d_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_f_f \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_g_g \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_F_F_As_D_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_F_F \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_D_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_G_G \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_O_O \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_ff_f_As_dd_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_ff_f \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_dd_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_gg_g \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_FF_F_As_DD_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_DD_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_FF_F \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_GG_G \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_OO_O \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_O_O_method \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_OO_O_method \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_On_Om \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_clearfperr()\n int PyUFunc_getfperr()\n int 
PyUFunc_ReplaceLoopBySignature \\n (ufunc, PyUFuncGenericFunction, int *, PyUFuncGenericFunction *)\n object PyUFunc_FromFuncAndDataAndSignature \\n (PyUFuncGenericFunction *, void **, char *, int, int, int,\n int, char *, char *, int, char *)\n\n int _import_umath() except -1\n\ncdef inline void set_array_base(ndarray arr, object base) except *:\n Py_INCREF(base) # important to do this before stealing the reference below!\n PyArray_SetBaseObject(arr, base)\n\ncdef inline object get_array_base(ndarray arr):\n base = PyArray_BASE(arr)\n if base is NULL:\n return None\n return <object>base\n\n# Versions of the import_* functions which are more suitable for\n# Cython code.\ncdef inline int import_array() except -1:\n try:\n __pyx_import_array()\n except Exception:\n raise ImportError("numpy._core.multiarray failed to import")\n\ncdef inline int import_umath() except -1:\n try:\n _import_umath()\n except Exception:\n raise ImportError("numpy._core.umath failed to import")\n\ncdef inline int import_ufunc() except -1:\n try:\n _import_umath()\n except Exception:\n raise ImportError("numpy._core.umath failed to import")\n\n\ncdef inline bint is_timedelta64_object(object obj) noexcept:\n """\n Cython equivalent of `isinstance(obj, np.timedelta64)`\n\n Parameters\n ----------\n obj : object\n\n Returns\n -------\n bool\n """\n return PyObject_TypeCheck(obj, &PyTimedeltaArrType_Type)\n\n\ncdef inline bint is_datetime64_object(object obj) noexcept:\n """\n Cython equivalent of `isinstance(obj, np.datetime64)`\n\n Parameters\n ----------\n obj : object\n\n Returns\n -------\n bool\n """\n return PyObject_TypeCheck(obj, &PyDatetimeArrType_Type)\n\n\ncdef inline npy_datetime get_datetime64_value(object obj) noexcept nogil:\n """\n returns the int64 value underlying scalar numpy datetime64 object\n\n Note that to interpret this as a datetime, the corresponding unit is\n also needed. 
That can be found using `get_datetime64_unit`.\n """\n return (<PyDatetimeScalarObject*>obj).obval\n\n\ncdef inline npy_timedelta get_timedelta64_value(object obj) noexcept nogil:\n """\n returns the int64 value underlying scalar numpy timedelta64 object\n """\n return (<PyTimedeltaScalarObject*>obj).obval\n\n\ncdef inline NPY_DATETIMEUNIT get_datetime64_unit(object obj) noexcept nogil:\n """\n returns the unit part of the dtype for a numpy datetime64 object.\n """\n return <NPY_DATETIMEUNIT>(<PyDatetimeScalarObject*>obj).obmeta.base\n\n\ncdef extern from "numpy/arrayobject.h":\n\n ctypedef struct NpyIter:\n pass\n\n cdef enum:\n NPY_FAIL\n NPY_SUCCEED\n\n cdef enum:\n # Track an index representing C order\n NPY_ITER_C_INDEX\n # Track an index representing Fortran order\n NPY_ITER_F_INDEX\n # Track a multi-index\n NPY_ITER_MULTI_INDEX\n # User code external to the iterator does the 1-dimensional innermost loop\n NPY_ITER_EXTERNAL_LOOP\n # Convert all the operands to a common data type\n NPY_ITER_COMMON_DTYPE\n # Operands may hold references, requiring API access during iteration\n NPY_ITER_REFS_OK\n # Zero-sized operands should be permitted, iteration checks IterSize for 0\n NPY_ITER_ZEROSIZE_OK\n # Permits reductions (size-0 stride with dimension size > 1)\n NPY_ITER_REDUCE_OK\n # Enables sub-range iteration\n NPY_ITER_RANGED\n # Enables buffering\n NPY_ITER_BUFFERED\n # When buffering is enabled, grows the inner loop if possible\n NPY_ITER_GROWINNER\n # Delay allocation of buffers until first Reset* call\n NPY_ITER_DELAY_BUFALLOC\n # When NPY_KEEPORDER is specified, disable reversing negative-stride axes\n NPY_ITER_DONT_NEGATE_STRIDES\n NPY_ITER_COPY_IF_OVERLAP\n # The operand will be read from and written to\n NPY_ITER_READWRITE\n # The operand will only be read from\n NPY_ITER_READONLY\n # The operand will only be written to\n NPY_ITER_WRITEONLY\n # The operand's data must be in native byte order\n NPY_ITER_NBO\n # The operand's data must be aligned\n 
NPY_ITER_ALIGNED\n # The operand's data must be contiguous (within the inner loop)\n NPY_ITER_CONTIG\n # The operand may be copied to satisfy requirements\n NPY_ITER_COPY\n # The operand may be copied with WRITEBACKIFCOPY to satisfy requirements\n NPY_ITER_UPDATEIFCOPY\n # Allocate the operand if it is NULL\n NPY_ITER_ALLOCATE\n # If an operand is allocated, don't use any subtype\n NPY_ITER_NO_SUBTYPE\n # This is a virtual array slot, operand is NULL but temporary data is there\n NPY_ITER_VIRTUAL\n # Require that the dimension match the iterator dimensions exactly\n NPY_ITER_NO_BROADCAST\n # A mask is being used on this array, affects buffer -> array copy\n NPY_ITER_WRITEMASKED\n # This array is the mask for all WRITEMASKED operands\n NPY_ITER_ARRAYMASK\n # Assume iterator order data access for COPY_IF_OVERLAP\n NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE\n\n # construction and destruction functions\n NpyIter* NpyIter_New(ndarray arr, npy_uint32 flags, NPY_ORDER order,\n NPY_CASTING casting, dtype datatype) except NULL\n NpyIter* NpyIter_MultiNew(npy_intp nop, PyArrayObject** op, npy_uint32 flags,\n NPY_ORDER order, NPY_CASTING casting, npy_uint32*\n op_flags, PyArray_Descr** op_dtypes) except NULL\n NpyIter* NpyIter_AdvancedNew(npy_intp nop, PyArrayObject** op,\n npy_uint32 flags, NPY_ORDER order,\n NPY_CASTING casting, npy_uint32* op_flags,\n PyArray_Descr** op_dtypes, int oa_ndim,\n int** op_axes, const npy_intp* itershape,\n npy_intp buffersize) except NULL\n NpyIter* NpyIter_Copy(NpyIter* it) except NULL\n int NpyIter_RemoveAxis(NpyIter* it, int axis) except NPY_FAIL\n int NpyIter_RemoveMultiIndex(NpyIter* it) except NPY_FAIL\n int NpyIter_EnableExternalLoop(NpyIter* it) except NPY_FAIL\n int NpyIter_Deallocate(NpyIter* it) except NPY_FAIL\n int NpyIter_Reset(NpyIter* it, char** errmsg) except NPY_FAIL\n int NpyIter_ResetToIterIndexRange(NpyIter* it, npy_intp istart,\n npy_intp iend, char** errmsg) except NPY_FAIL\n int NpyIter_ResetBasePointers(NpyIter* it, char** 
baseptrs, char** errmsg) except NPY_FAIL\n int NpyIter_GotoMultiIndex(NpyIter* it, const npy_intp* multi_index) except NPY_FAIL\n int NpyIter_GotoIndex(NpyIter* it, npy_intp index) except NPY_FAIL\n npy_intp NpyIter_GetIterSize(NpyIter* it) nogil\n npy_intp NpyIter_GetIterIndex(NpyIter* it) nogil\n void NpyIter_GetIterIndexRange(NpyIter* it, npy_intp* istart,\n npy_intp* iend) nogil\n int NpyIter_GotoIterIndex(NpyIter* it, npy_intp iterindex) except NPY_FAIL\n npy_bool NpyIter_HasDelayedBufAlloc(NpyIter* it) nogil\n npy_bool NpyIter_HasExternalLoop(NpyIter* it) nogil\n npy_bool NpyIter_HasMultiIndex(NpyIter* it) nogil\n npy_bool NpyIter_HasIndex(NpyIter* it) nogil\n npy_bool NpyIter_RequiresBuffering(NpyIter* it) nogil\n npy_bool NpyIter_IsBuffered(NpyIter* it) nogil\n npy_bool NpyIter_IsGrowInner(NpyIter* it) nogil\n npy_intp NpyIter_GetBufferSize(NpyIter* it) nogil\n int NpyIter_GetNDim(NpyIter* it) nogil\n int NpyIter_GetNOp(NpyIter* it) nogil\n npy_intp* NpyIter_GetAxisStrideArray(NpyIter* it, int axis) except NULL\n int NpyIter_GetShape(NpyIter* it, npy_intp* outshape) nogil\n PyArray_Descr** NpyIter_GetDescrArray(NpyIter* it)\n PyArrayObject** NpyIter_GetOperandArray(NpyIter* it)\n ndarray NpyIter_GetIterView(NpyIter* it, npy_intp i)\n void NpyIter_GetReadFlags(NpyIter* it, char* outreadflags)\n void NpyIter_GetWriteFlags(NpyIter* it, char* outwriteflags)\n int NpyIter_CreateCompatibleStrides(NpyIter* it, npy_intp itemsize,\n npy_intp* outstrides) except NPY_FAIL\n npy_bool NpyIter_IsFirstVisit(NpyIter* it, int iop) nogil\n # functions for iterating an NpyIter object\n #\n # These don't match the definition in the C API because Cython can't wrap\n # function pointers that return functions.\n NpyIter_IterNextFunc NpyIter_GetIterNext(NpyIter* it, char** errmsg) except NULL\n NpyIter_GetMultiIndexFunc NpyIter_GetGetMultiIndex(NpyIter* it,\n char** errmsg) except NULL\n char** NpyIter_GetDataPtrArray(NpyIter* it) nogil\n char** 
NpyIter_GetInitialDataPtrArray(NpyIter* it) nogil\n npy_intp* NpyIter_GetIndexPtr(NpyIter* it)\n npy_intp* NpyIter_GetInnerStrideArray(NpyIter* it) nogil\n npy_intp* NpyIter_GetInnerLoopSizePtr(NpyIter* it) nogil\n void NpyIter_GetInnerFixedStrideArray(NpyIter* it, npy_intp* outstrides) nogil\n npy_bool NpyIter_IterationNeedsAPI(NpyIter* it) nogil\n void NpyIter_DebugPrint(NpyIter* it)\n\n# NpyString API\ncdef extern from "numpy/ndarraytypes.h":\n ctypedef struct npy_string_allocator:\n pass\n\n ctypedef struct npy_packed_static_string:\n pass\n\n ctypedef struct npy_static_string:\n size_t size\n const char *buf\n\n ctypedef struct PyArray_StringDTypeObject:\n PyArray_Descr base\n PyObject *na_object\n char coerce\n char has_nan_na\n char has_string_na\n char array_owned\n npy_static_string default_string\n npy_static_string na_name\n npy_string_allocator *allocator\n\ncdef extern from "numpy/arrayobject.h":\n npy_string_allocator *NpyString_acquire_allocator(const PyArray_StringDTypeObject *descr)\n void NpyString_acquire_allocators(size_t n_descriptors, PyArray_Descr *const descrs[], npy_string_allocator *allocators[])\n void NpyString_release_allocator(npy_string_allocator *allocator)\n void NpyString_release_allocators(size_t length, npy_string_allocator *allocators[])\n int NpyString_load(npy_string_allocator *allocator, const npy_packed_static_string *packed_string, npy_static_string *unpacked_string)\n int NpyString_pack_null(npy_string_allocator *allocator, npy_packed_static_string *packed_string)\n int NpyString_pack(npy_string_allocator *allocator, npy_packed_static_string *packed_string, const char *buf, size_t size)\n
# Source: .venv\Lib\site-packages\numpy\__init__.cython-30.pxd (Apache-2.0)
# NumPy static imports for Cython < 3.0\n#\n# If any of the PyArray_* functions are called, import_array must be\n# called first.\n#\n# Author: Dag Sverre Seljebotn\n#\n\nDEF _buffer_format_string_len = 255\n\ncimport cpython.buffer as pybuf\nfrom cpython.ref cimport Py_INCREF\nfrom cpython.mem cimport PyObject_Malloc, PyObject_Free\nfrom cpython.object cimport PyObject, PyTypeObject\nfrom cpython.buffer cimport PyObject_GetBuffer\nfrom cpython.type cimport type\ncimport libc.stdio as stdio\n\n\ncdef extern from *:\n # Leave a marker that the NumPy declarations came from NumPy itself and not from Cython.\n # See https://github.com/cython/cython/issues/3573\n """\n /* Using NumPy API declarations from "numpy/__init__.pxd" */\n """\n\n\ncdef extern from "Python.h":\n ctypedef int Py_intptr_t\n bint PyObject_TypeCheck(object obj, PyTypeObject* type)\n\ncdef extern from "numpy/arrayobject.h":\n # It would be nice to use size_t and ssize_t, but ssize_t has special\n # implicit conversion rules, so just use "long".\n # Note: The actual type only matters for Cython promotion, so long\n # is closer than int, but could lead to incorrect promotion.\n # (Not to worrying, and always the status-quo.)\n ctypedef signed long npy_intp\n ctypedef unsigned long npy_uintp\n\n ctypedef unsigned char npy_bool\n\n ctypedef signed char npy_byte\n ctypedef signed short npy_short\n ctypedef signed int npy_int\n ctypedef signed long npy_long\n ctypedef signed long long npy_longlong\n\n ctypedef unsigned char npy_ubyte\n ctypedef unsigned short npy_ushort\n ctypedef unsigned int npy_uint\n ctypedef unsigned long npy_ulong\n ctypedef unsigned long long npy_ulonglong\n\n ctypedef float npy_float\n ctypedef double npy_double\n ctypedef long double npy_longdouble\n\n ctypedef signed char npy_int8\n ctypedef signed short npy_int16\n ctypedef signed int npy_int32\n ctypedef signed long long npy_int64\n\n ctypedef unsigned char npy_uint8\n ctypedef unsigned short npy_uint16\n ctypedef unsigned int 
npy_uint32\n ctypedef unsigned long long npy_uint64\n\n ctypedef float npy_float32\n ctypedef double npy_float64\n ctypedef long double npy_float80\n ctypedef long double npy_float96\n ctypedef long double npy_float128\n\n ctypedef struct npy_cfloat:\n pass\n\n ctypedef struct npy_cdouble:\n pass\n\n ctypedef struct npy_clongdouble:\n pass\n\n ctypedef struct npy_complex64:\n pass\n\n ctypedef struct npy_complex128:\n pass\n\n ctypedef struct npy_complex160:\n pass\n\n ctypedef struct npy_complex192:\n pass\n\n ctypedef struct npy_complex256:\n pass\n\n ctypedef struct PyArray_Dims:\n npy_intp *ptr\n int len\n\n\n cdef enum NPY_TYPES:\n NPY_BOOL\n NPY_BYTE\n NPY_UBYTE\n NPY_SHORT\n NPY_USHORT\n NPY_INT\n NPY_UINT\n NPY_LONG\n NPY_ULONG\n NPY_LONGLONG\n NPY_ULONGLONG\n NPY_FLOAT\n NPY_DOUBLE\n NPY_LONGDOUBLE\n NPY_CFLOAT\n NPY_CDOUBLE\n NPY_CLONGDOUBLE\n NPY_OBJECT\n NPY_STRING\n NPY_UNICODE\n NPY_VSTRING\n NPY_VOID\n NPY_DATETIME\n NPY_TIMEDELTA\n NPY_NTYPES_LEGACY\n NPY_NOTYPE\n\n NPY_INT8\n NPY_INT16\n NPY_INT32\n NPY_INT64\n NPY_UINT8\n NPY_UINT16\n NPY_UINT32\n NPY_UINT64\n NPY_FLOAT16\n NPY_FLOAT32\n NPY_FLOAT64\n NPY_FLOAT80\n NPY_FLOAT96\n NPY_FLOAT128\n NPY_COMPLEX64\n NPY_COMPLEX128\n NPY_COMPLEX160\n NPY_COMPLEX192\n NPY_COMPLEX256\n\n NPY_INTP\n NPY_UINTP\n NPY_DEFAULT_INT # Not a compile time constant (normally)!\n\n ctypedef enum NPY_ORDER:\n NPY_ANYORDER\n NPY_CORDER\n NPY_FORTRANORDER\n NPY_KEEPORDER\n\n ctypedef enum NPY_CASTING:\n NPY_NO_CASTING\n NPY_EQUIV_CASTING\n NPY_SAFE_CASTING\n NPY_SAME_KIND_CASTING\n NPY_UNSAFE_CASTING\n\n ctypedef enum NPY_CLIPMODE:\n NPY_CLIP\n NPY_WRAP\n NPY_RAISE\n\n ctypedef enum NPY_SCALARKIND:\n NPY_NOSCALAR,\n NPY_BOOL_SCALAR,\n NPY_INTPOS_SCALAR,\n NPY_INTNEG_SCALAR,\n NPY_FLOAT_SCALAR,\n NPY_COMPLEX_SCALAR,\n NPY_OBJECT_SCALAR\n\n ctypedef enum NPY_SORTKIND:\n NPY_QUICKSORT\n NPY_HEAPSORT\n NPY_MERGESORT\n\n ctypedef enum NPY_SEARCHSIDE:\n NPY_SEARCHLEFT\n NPY_SEARCHRIGHT\n\n enum:\n NPY_ARRAY_C_CONTIGUOUS\n 
NPY_ARRAY_F_CONTIGUOUS\n NPY_ARRAY_OWNDATA\n NPY_ARRAY_FORCECAST\n NPY_ARRAY_ENSURECOPY\n NPY_ARRAY_ENSUREARRAY\n NPY_ARRAY_ELEMENTSTRIDES\n NPY_ARRAY_ALIGNED\n NPY_ARRAY_NOTSWAPPED\n NPY_ARRAY_WRITEABLE\n NPY_ARRAY_WRITEBACKIFCOPY\n\n NPY_ARRAY_BEHAVED\n NPY_ARRAY_BEHAVED_NS\n NPY_ARRAY_CARRAY\n NPY_ARRAY_CARRAY_RO\n NPY_ARRAY_FARRAY\n NPY_ARRAY_FARRAY_RO\n NPY_ARRAY_DEFAULT\n\n NPY_ARRAY_IN_ARRAY\n NPY_ARRAY_OUT_ARRAY\n NPY_ARRAY_INOUT_ARRAY\n NPY_ARRAY_IN_FARRAY\n NPY_ARRAY_OUT_FARRAY\n NPY_ARRAY_INOUT_FARRAY\n\n NPY_ARRAY_UPDATE_ALL\n\n cdef enum:\n NPY_MAXDIMS # 64 on NumPy 2.x and 32 on NumPy 1.x\n NPY_RAVEL_AXIS # Used for functions like PyArray_Mean\n\n ctypedef void (*PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, void *)\n\n ctypedef struct PyArray_ArrayDescr:\n # shape is a tuple, but Cython doesn't support "tuple shape"\n # inside a non-PyObject declaration, so we have to declare it\n # as just a PyObject*.\n PyObject* shape\n\n ctypedef struct PyArray_Descr:\n pass\n\n ctypedef class numpy.dtype [object PyArray_Descr, check_size ignore]:\n # Use PyDataType_* macros when possible, however there are no macros\n # for accessing some of the fields, so some are defined.\n cdef PyTypeObject* typeobj\n cdef char kind\n cdef char type\n # Numpy sometimes mutates this without warning (e.g. it'll\n # sometimes change "|" to "<" in shared dtype objects on\n # little-endian machines). If this matters to you, use\n # PyArray_IsNativeByteOrder(dtype.byteorder) instead of\n # directly accessing this field.\n cdef char byteorder\n # Flags are not directly accessible on Cython <3. Use PyDataType_FLAGS.\n # cdef char flags\n cdef int type_num\n # itemsize/elsize, alignment, fields, names, and subarray must\n # use the `PyDataType_*` accessor macros. 
With Cython 3 you can\n # still use getter attributes `dtype.itemsize`\n\n ctypedef class numpy.flatiter [object PyArrayIterObject, check_size ignore]:\n # Use through macros\n pass\n\n ctypedef class numpy.broadcast [object PyArrayMultiIterObject, check_size ignore]:\n cdef int numiter\n cdef npy_intp size, index\n cdef int nd\n cdef npy_intp *dimensions\n cdef void **iters\n\n ctypedef struct PyArrayObject:\n # For use in situations where ndarray can't replace PyArrayObject*,\n # like PyArrayObject**.\n pass\n\n ctypedef class numpy.ndarray [object PyArrayObject, check_size ignore]:\n cdef __cythonbufferdefaults__ = {"mode": "strided"}\n\n cdef:\n # Only taking a few of the most commonly used and stable fields.\n # One should use PyArray_* macros instead to access the C fields.\n char *data\n int ndim "nd"\n npy_intp *shape "dimensions"\n npy_intp *strides\n dtype descr # deprecated since NumPy 1.7 !\n PyObject* base # NOT PUBLIC, DO NOT USE !\n\n\n int _import_array() except -1\n # A second definition so _import_array isn't marked as used when we use it here.\n # Do not use - subject to change any time.\n int __pyx_import_array "_import_array"() except -1\n\n #\n # Macros from ndarrayobject.h\n #\n bint PyArray_CHKFLAGS(ndarray m, int flags) nogil\n bint PyArray_IS_C_CONTIGUOUS(ndarray arr) nogil\n bint PyArray_IS_F_CONTIGUOUS(ndarray arr) nogil\n bint PyArray_ISCONTIGUOUS(ndarray m) nogil\n bint PyArray_ISWRITEABLE(ndarray m) nogil\n bint PyArray_ISALIGNED(ndarray m) nogil\n\n int PyArray_NDIM(ndarray) nogil\n bint PyArray_ISONESEGMENT(ndarray) nogil\n bint PyArray_ISFORTRAN(ndarray) nogil\n int PyArray_FORTRANIF(ndarray) nogil\n\n void* PyArray_DATA(ndarray) nogil\n char* PyArray_BYTES(ndarray) nogil\n\n npy_intp* PyArray_DIMS(ndarray) nogil\n npy_intp* PyArray_STRIDES(ndarray) nogil\n npy_intp PyArray_DIM(ndarray, size_t) nogil\n npy_intp PyArray_STRIDE(ndarray, size_t) nogil\n\n PyObject *PyArray_BASE(ndarray) nogil # returns borrowed reference!\n 
PyArray_Descr *PyArray_DESCR(ndarray) nogil # returns borrowed reference to dtype!\n PyArray_Descr *PyArray_DTYPE(ndarray) nogil # returns borrowed reference to dtype! NP 1.7+ alias for descr.\n int PyArray_FLAGS(ndarray) nogil\n void PyArray_CLEARFLAGS(ndarray, int flags) nogil # Added in NumPy 1.7\n void PyArray_ENABLEFLAGS(ndarray, int flags) nogil # Added in NumPy 1.7\n npy_intp PyArray_ITEMSIZE(ndarray) nogil\n int PyArray_TYPE(ndarray arr) nogil\n\n object PyArray_GETITEM(ndarray arr, void *itemptr)\n int PyArray_SETITEM(ndarray arr, void *itemptr, object obj) except -1\n\n bint PyTypeNum_ISBOOL(int) nogil\n bint PyTypeNum_ISUNSIGNED(int) nogil\n bint PyTypeNum_ISSIGNED(int) nogil\n bint PyTypeNum_ISINTEGER(int) nogil\n bint PyTypeNum_ISFLOAT(int) nogil\n bint PyTypeNum_ISNUMBER(int) nogil\n bint PyTypeNum_ISSTRING(int) nogil\n bint PyTypeNum_ISCOMPLEX(int) nogil\n bint PyTypeNum_ISFLEXIBLE(int) nogil\n bint PyTypeNum_ISUSERDEF(int) nogil\n bint PyTypeNum_ISEXTENDED(int) nogil\n bint PyTypeNum_ISOBJECT(int) nogil\n\n npy_intp PyDataType_ELSIZE(dtype) nogil\n npy_intp PyDataType_ALIGNMENT(dtype) nogil\n PyObject* PyDataType_METADATA(dtype) nogil\n PyArray_ArrayDescr* PyDataType_SUBARRAY(dtype) nogil\n PyObject* PyDataType_NAMES(dtype) nogil\n PyObject* PyDataType_FIELDS(dtype) nogil\n\n bint PyDataType_ISBOOL(dtype) nogil\n bint PyDataType_ISUNSIGNED(dtype) nogil\n bint PyDataType_ISSIGNED(dtype) nogil\n bint PyDataType_ISINTEGER(dtype) nogil\n bint PyDataType_ISFLOAT(dtype) nogil\n bint PyDataType_ISNUMBER(dtype) nogil\n bint PyDataType_ISSTRING(dtype) nogil\n bint PyDataType_ISCOMPLEX(dtype) nogil\n bint PyDataType_ISFLEXIBLE(dtype) nogil\n bint PyDataType_ISUSERDEF(dtype) nogil\n bint PyDataType_ISEXTENDED(dtype) nogil\n bint PyDataType_ISOBJECT(dtype) nogil\n bint PyDataType_HASFIELDS(dtype) nogil\n bint PyDataType_HASSUBARRAY(dtype) nogil\n npy_uint64 PyDataType_FLAGS(dtype) nogil\n\n bint PyArray_ISBOOL(ndarray) nogil\n bint PyArray_ISUNSIGNED(ndarray) 
nogil\n bint PyArray_ISSIGNED(ndarray) nogil\n bint PyArray_ISINTEGER(ndarray) nogil\n bint PyArray_ISFLOAT(ndarray) nogil\n bint PyArray_ISNUMBER(ndarray) nogil\n bint PyArray_ISSTRING(ndarray) nogil\n bint PyArray_ISCOMPLEX(ndarray) nogil\n bint PyArray_ISFLEXIBLE(ndarray) nogil\n bint PyArray_ISUSERDEF(ndarray) nogil\n bint PyArray_ISEXTENDED(ndarray) nogil\n bint PyArray_ISOBJECT(ndarray) nogil\n bint PyArray_HASFIELDS(ndarray) nogil\n\n bint PyArray_ISVARIABLE(ndarray) nogil\n\n bint PyArray_SAFEALIGNEDCOPY(ndarray) nogil\n bint PyArray_ISNBO(char) nogil # works on ndarray.byteorder\n bint PyArray_IsNativeByteOrder(char) nogil # works on ndarray.byteorder\n bint PyArray_ISNOTSWAPPED(ndarray) nogil\n bint PyArray_ISBYTESWAPPED(ndarray) nogil\n\n bint PyArray_FLAGSWAP(ndarray, int) nogil\n\n bint PyArray_ISCARRAY(ndarray) nogil\n bint PyArray_ISCARRAY_RO(ndarray) nogil\n bint PyArray_ISFARRAY(ndarray) nogil\n bint PyArray_ISFARRAY_RO(ndarray) nogil\n bint PyArray_ISBEHAVED(ndarray) nogil\n bint PyArray_ISBEHAVED_RO(ndarray) nogil\n\n\n bint PyDataType_ISNOTSWAPPED(dtype) nogil\n bint PyDataType_ISBYTESWAPPED(dtype) nogil\n\n bint PyArray_DescrCheck(object)\n\n bint PyArray_Check(object)\n bint PyArray_CheckExact(object)\n\n # Cannot be supported due to out arg:\n # bint PyArray_HasArrayInterfaceType(object, dtype, object, object&)\n # bint PyArray_HasArrayInterface(op, out)\n\n\n bint PyArray_IsZeroDim(object)\n # Cannot be supported due to ## ## in macro:\n # bint PyArray_IsScalar(object, verbatim work)\n bint PyArray_CheckScalar(object)\n bint PyArray_IsPythonNumber(object)\n bint PyArray_IsPythonScalar(object)\n bint PyArray_IsAnyScalar(object)\n bint PyArray_CheckAnyScalar(object)\n\n ndarray PyArray_GETCONTIGUOUS(ndarray)\n bint PyArray_SAMESHAPE(ndarray, ndarray) nogil\n npy_intp PyArray_SIZE(ndarray) nogil\n npy_intp PyArray_NBYTES(ndarray) nogil\n\n object PyArray_FROM_O(object)\n object PyArray_FROM_OF(object m, int flags)\n object 
PyArray_FROM_OT(object m, int type)\n object PyArray_FROM_OTF(object m, int type, int flags)\n object PyArray_FROMANY(object m, int type, int min, int max, int flags)\n object PyArray_ZEROS(int nd, npy_intp* dims, int type, int fortran)\n object PyArray_EMPTY(int nd, npy_intp* dims, int type, int fortran)\n void PyArray_FILLWBYTE(ndarray, int val)\n object PyArray_ContiguousFromAny(op, int, int min_depth, int max_depth)\n unsigned char PyArray_EquivArrTypes(ndarray a1, ndarray a2)\n bint PyArray_EquivByteorders(int b1, int b2) nogil\n object PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)\n object PyArray_SimpleNewFromData(int nd, npy_intp* dims, int typenum, void* data)\n #object PyArray_SimpleNewFromDescr(int nd, npy_intp* dims, dtype descr)\n object PyArray_ToScalar(void* data, ndarray arr)\n\n void* PyArray_GETPTR1(ndarray m, npy_intp i) nogil\n void* PyArray_GETPTR2(ndarray m, npy_intp i, npy_intp j) nogil\n void* PyArray_GETPTR3(ndarray m, npy_intp i, npy_intp j, npy_intp k) nogil\n void* PyArray_GETPTR4(ndarray m, npy_intp i, npy_intp j, npy_intp k, npy_intp l) nogil\n\n # Cannot be supported due to out arg\n # void PyArray_DESCR_REPLACE(descr)\n\n\n object PyArray_Copy(ndarray)\n object PyArray_FromObject(object op, int type, int min_depth, int max_depth)\n object PyArray_ContiguousFromObject(object op, int type, int min_depth, int max_depth)\n object PyArray_CopyFromObject(object op, int type, int min_depth, int max_depth)\n\n object PyArray_Cast(ndarray mp, int type_num)\n object PyArray_Take(ndarray ap, object items, int axis)\n object PyArray_Put(ndarray ap, object items, object values)\n\n void PyArray_ITER_RESET(flatiter it) nogil\n void PyArray_ITER_NEXT(flatiter it) nogil\n void PyArray_ITER_GOTO(flatiter it, npy_intp* destination) nogil\n void PyArray_ITER_GOTO1D(flatiter it, npy_intp ind) nogil\n void* PyArray_ITER_DATA(flatiter it) nogil\n bint PyArray_ITER_NOTDONE(flatiter it) nogil\n\n void PyArray_MultiIter_RESET(broadcast multi) 
nogil\n void PyArray_MultiIter_NEXT(broadcast multi) nogil\n void PyArray_MultiIter_GOTO(broadcast multi, npy_intp dest) nogil\n void PyArray_MultiIter_GOTO1D(broadcast multi, npy_intp ind) nogil\n void* PyArray_MultiIter_DATA(broadcast multi, npy_intp i) nogil\n void PyArray_MultiIter_NEXTi(broadcast multi, npy_intp i) nogil\n bint PyArray_MultiIter_NOTDONE(broadcast multi) nogil\n npy_intp PyArray_MultiIter_SIZE(broadcast multi) nogil\n int PyArray_MultiIter_NDIM(broadcast multi) nogil\n npy_intp PyArray_MultiIter_INDEX(broadcast multi) nogil\n int PyArray_MultiIter_NUMITER(broadcast multi) nogil\n npy_intp* PyArray_MultiIter_DIMS(broadcast multi) nogil\n void** PyArray_MultiIter_ITERS(broadcast multi) nogil\n\n # Functions from __multiarray_api.h\n\n # Functions taking dtype and returning object/ndarray are disabled\n # for now as they steal dtype references. I'm conservative and disable\n # more than is probably needed until it can be checked further.\n int PyArray_INCREF (ndarray) except * # uses PyArray_Item_INCREF...\n int PyArray_XDECREF (ndarray) except * # uses PyArray_Item_DECREF...\n dtype PyArray_DescrFromType (int)\n object PyArray_TypeObjectFromType (int)\n char * PyArray_Zero (ndarray)\n char * PyArray_One (ndarray)\n #object PyArray_CastToType (ndarray, dtype, int)\n int PyArray_CanCastSafely (int, int) # writes errors\n npy_bool PyArray_CanCastTo (dtype, dtype) # writes errors\n int PyArray_ObjectType (object, int) except 0\n dtype PyArray_DescrFromObject (object, dtype)\n #ndarray* PyArray_ConvertToCommonType (object, int *)\n dtype PyArray_DescrFromScalar (object)\n dtype PyArray_DescrFromTypeObject (object)\n npy_intp PyArray_Size (object)\n #object PyArray_Scalar (void *, dtype, object)\n #object PyArray_FromScalar (object, dtype)\n void PyArray_ScalarAsCtype (object, void *)\n #int PyArray_CastScalarToCtype (object, void *, dtype)\n #int PyArray_CastScalarDirect (object, dtype, void *, int)\n #PyArray_VectorUnaryFunc * PyArray_GetCastFunc 
(dtype, int)\n #object PyArray_FromAny (object, dtype, int, int, int, object)\n object PyArray_EnsureArray (object)\n object PyArray_EnsureAnyArray (object)\n #object PyArray_FromFile (stdio.FILE *, dtype, npy_intp, char *)\n #object PyArray_FromString (char *, npy_intp, dtype, npy_intp, char *)\n #object PyArray_FromBuffer (object, dtype, npy_intp, npy_intp)\n #object PyArray_FromIter (object, dtype, npy_intp)\n object PyArray_Return (ndarray)\n #object PyArray_GetField (ndarray, dtype, int)\n #int PyArray_SetField (ndarray, dtype, int, object) except -1\n object PyArray_Byteswap (ndarray, npy_bool)\n object PyArray_Resize (ndarray, PyArray_Dims *, int, NPY_ORDER)\n int PyArray_CopyInto (ndarray, ndarray) except -1\n int PyArray_CopyAnyInto (ndarray, ndarray) except -1\n int PyArray_CopyObject (ndarray, object) except -1\n object PyArray_NewCopy (ndarray, NPY_ORDER)\n object PyArray_ToList (ndarray)\n object PyArray_ToString (ndarray, NPY_ORDER)\n int PyArray_ToFile (ndarray, stdio.FILE *, char *, char *) except -1\n int PyArray_Dump (object, object, int) except -1\n object PyArray_Dumps (object, int)\n int PyArray_ValidType (int) # Cannot error\n void PyArray_UpdateFlags (ndarray, int)\n object PyArray_New (type, int, npy_intp *, int, npy_intp *, void *, int, int, object)\n #object PyArray_NewFromDescr (type, dtype, int, npy_intp *, npy_intp *, void *, int, object)\n #dtype PyArray_DescrNew (dtype)\n dtype PyArray_DescrNewFromType (int)\n double PyArray_GetPriority (object, double) # clears errors as of 1.25\n object PyArray_IterNew (object)\n object PyArray_MultiIterNew (int, ...)\n\n int PyArray_PyIntAsInt (object) except? 
-1\n npy_intp PyArray_PyIntAsIntp (object)\n int PyArray_Broadcast (broadcast) except -1\n int PyArray_FillWithScalar (ndarray, object) except -1\n npy_bool PyArray_CheckStrides (int, int, npy_intp, npy_intp, npy_intp *, npy_intp *)\n dtype PyArray_DescrNewByteorder (dtype, char)\n object PyArray_IterAllButAxis (object, int *)\n #object PyArray_CheckFromAny (object, dtype, int, int, int, object)\n #object PyArray_FromArray (ndarray, dtype, int)\n object PyArray_FromInterface (object)\n object PyArray_FromStructInterface (object)\n #object PyArray_FromArrayAttr (object, dtype, object)\n #NPY_SCALARKIND PyArray_ScalarKind (int, ndarray*)\n int PyArray_CanCoerceScalar (int, int, NPY_SCALARKIND)\n npy_bool PyArray_CanCastScalar (type, type)\n int PyArray_RemoveSmallest (broadcast) except -1\n int PyArray_ElementStrides (object)\n void PyArray_Item_INCREF (char *, dtype) except *\n void PyArray_Item_XDECREF (char *, dtype) except *\n object PyArray_Transpose (ndarray, PyArray_Dims *)\n object PyArray_TakeFrom (ndarray, object, int, ndarray, NPY_CLIPMODE)\n object PyArray_PutTo (ndarray, object, object, NPY_CLIPMODE)\n object PyArray_PutMask (ndarray, object, object)\n object PyArray_Repeat (ndarray, object, int)\n object PyArray_Choose (ndarray, object, ndarray, NPY_CLIPMODE)\n int PyArray_Sort (ndarray, int, NPY_SORTKIND) except -1\n object PyArray_ArgSort (ndarray, int, NPY_SORTKIND)\n object PyArray_SearchSorted (ndarray, object, NPY_SEARCHSIDE, PyObject *)\n object PyArray_ArgMax (ndarray, int, ndarray)\n object PyArray_ArgMin (ndarray, int, ndarray)\n object PyArray_Reshape (ndarray, object)\n object PyArray_Newshape (ndarray, PyArray_Dims *, NPY_ORDER)\n object PyArray_Squeeze (ndarray)\n #object PyArray_View (ndarray, dtype, type)\n object PyArray_SwapAxes (ndarray, int, int)\n object PyArray_Max (ndarray, int, ndarray)\n object PyArray_Min (ndarray, int, ndarray)\n object PyArray_Ptp (ndarray, int, ndarray)\n object PyArray_Mean (ndarray, int, int, ndarray)\n 
object PyArray_Trace (ndarray, int, int, int, int, ndarray)\n object PyArray_Diagonal (ndarray, int, int, int)\n object PyArray_Clip (ndarray, object, object, ndarray)\n object PyArray_Conjugate (ndarray, ndarray)\n object PyArray_Nonzero (ndarray)\n object PyArray_Std (ndarray, int, int, ndarray, int)\n object PyArray_Sum (ndarray, int, int, ndarray)\n object PyArray_CumSum (ndarray, int, int, ndarray)\n object PyArray_Prod (ndarray, int, int, ndarray)\n object PyArray_CumProd (ndarray, int, int, ndarray)\n object PyArray_All (ndarray, int, ndarray)\n object PyArray_Any (ndarray, int, ndarray)\n object PyArray_Compress (ndarray, object, int, ndarray)\n object PyArray_Flatten (ndarray, NPY_ORDER)\n object PyArray_Ravel (ndarray, NPY_ORDER)\n npy_intp PyArray_MultiplyList (npy_intp *, int)\n int PyArray_MultiplyIntList (int *, int)\n void * PyArray_GetPtr (ndarray, npy_intp*)\n int PyArray_CompareLists (npy_intp *, npy_intp *, int)\n #int PyArray_AsCArray (object*, void *, npy_intp *, int, dtype)\n int PyArray_Free (object, void *)\n #int PyArray_Converter (object, object*)\n int PyArray_IntpFromSequence (object, npy_intp *, int) except -1\n object PyArray_Concatenate (object, int)\n object PyArray_InnerProduct (object, object)\n object PyArray_MatrixProduct (object, object)\n object PyArray_Correlate (object, object, int)\n #int PyArray_DescrConverter (object, dtype*) except 0\n #int PyArray_DescrConverter2 (object, dtype*) except 0\n int PyArray_IntpConverter (object, PyArray_Dims *) except 0\n #int PyArray_BufferConverter (object, chunk) except 0\n int PyArray_AxisConverter (object, int *) except 0\n int PyArray_BoolConverter (object, npy_bool *) except 0\n int PyArray_ByteorderConverter (object, char *) except 0\n int PyArray_OrderConverter (object, NPY_ORDER *) except 0\n unsigned char PyArray_EquivTypes (dtype, dtype) # clears errors\n #object PyArray_Zeros (int, npy_intp *, dtype, int)\n #object PyArray_Empty (int, npy_intp *, dtype, int)\n object 
PyArray_Where (object, object, object)\n object PyArray_Arange (double, double, double, int)\n #object PyArray_ArangeObj (object, object, object, dtype)\n int PyArray_SortkindConverter (object, NPY_SORTKIND *) except 0\n object PyArray_LexSort (object, int)\n object PyArray_Round (ndarray, int, ndarray)\n unsigned char PyArray_EquivTypenums (int, int)\n int PyArray_RegisterDataType (dtype) except -1\n int PyArray_RegisterCastFunc (dtype, int, PyArray_VectorUnaryFunc *) except -1\n int PyArray_RegisterCanCast (dtype, int, NPY_SCALARKIND) except -1\n #void PyArray_InitArrFuncs (PyArray_ArrFuncs *)\n object PyArray_IntTupleFromIntp (int, npy_intp *)\n int PyArray_ClipmodeConverter (object, NPY_CLIPMODE *) except 0\n #int PyArray_OutputConverter (object, ndarray*) except 0\n object PyArray_BroadcastToShape (object, npy_intp *, int)\n #int PyArray_DescrAlignConverter (object, dtype*) except 0\n #int PyArray_DescrAlignConverter2 (object, dtype*) except 0\n int PyArray_SearchsideConverter (object, void *) except 0\n object PyArray_CheckAxis (ndarray, int *, int)\n npy_intp PyArray_OverflowMultiplyList (npy_intp *, int)\n int PyArray_SetBaseObject(ndarray, base) except -1 # NOTE: steals a reference to base! 
Use "set_array_base()" instead.\n\n    # The memory handler functions require the NumPy 1.22 API\n    # and may require defining NPY_TARGET_VERSION\n    ctypedef struct PyDataMemAllocator:\n        void *ctx\n        void* (*malloc) (void *ctx, size_t size)\n        void* (*calloc) (void *ctx, size_t nelem, size_t elsize)\n        void* (*realloc) (void *ctx, void *ptr, size_t new_size)\n        void (*free) (void *ctx, void *ptr, size_t size)\n\n    ctypedef struct PyDataMem_Handler:\n        char* name\n        npy_uint8 version\n        PyDataMemAllocator allocator\n\n    object PyDataMem_SetHandler(object handler)\n    object PyDataMem_GetHandler()\n\n    # additional datetime related functions are defined below\n\n\n# Typedefs that match the runtime dtype objects in\n# the numpy module.\n\n# The ones that are commented out need an IFDEF function\n# in Cython to enable them only on the right systems.\n\nctypedef npy_int8       int8_t\nctypedef npy_int16      int16_t\nctypedef npy_int32      int32_t\nctypedef npy_int64      int64_t\n\nctypedef npy_uint8      uint8_t\nctypedef npy_uint16     uint16_t\nctypedef npy_uint32     uint32_t\nctypedef npy_uint64     uint64_t\n\nctypedef npy_float32    float32_t\nctypedef npy_float64    float64_t\n#ctypedef npy_float80    float80_t\n#ctypedef npy_float128   float128_t\n\nctypedef float complex  complex64_t\nctypedef double complex complex128_t\n\nctypedef npy_longlong   longlong_t\nctypedef npy_ulonglong  ulonglong_t\n\nctypedef npy_intp       intp_t\nctypedef npy_uintp      uintp_t\n\nctypedef npy_double     float_t\nctypedef npy_double     double_t\nctypedef npy_longdouble longdouble_t\n\nctypedef float complex  cfloat_t\nctypedef double complex cdouble_t\nctypedef double complex complex_t\nctypedef long double complex clongdouble_t\n\ncdef inline object PyArray_MultiIterNew1(a):\n    return PyArray_MultiIterNew(1, <void*>a)\n\ncdef inline object PyArray_MultiIterNew2(a, b):\n    return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n\ncdef inline object PyArray_MultiIterNew3(a, b, c):\n    return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n\ncdef inline object 
PyArray_MultiIterNew4(a, b, c, d):\n return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n\ncdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n\ncdef inline tuple PyDataType_SHAPE(dtype d):\n if PyDataType_HASSUBARRAY(d):\n return <tuple>d.subarray.shape\n else:\n return ()\n\n\ncdef extern from "numpy/ndarrayobject.h":\n PyTypeObject PyTimedeltaArrType_Type\n PyTypeObject PyDatetimeArrType_Type\n ctypedef int64_t npy_timedelta\n ctypedef int64_t npy_datetime\n\ncdef extern from "numpy/ndarraytypes.h":\n ctypedef struct PyArray_DatetimeMetaData:\n NPY_DATETIMEUNIT base\n int64_t num\n\n ctypedef struct npy_datetimestruct:\n int64_t year\n int32_t month, day, hour, min, sec, us, ps, as\n\n # Iterator API added in v1.6\n #\n # These don't match the definition in the C API because Cython can't wrap\n # function pointers that return functions.\n # https://github.com/cython/cython/issues/6720\n ctypedef int (*NpyIter_IterNextFunc "NpyIter_IterNextFunc *")(NpyIter* it) noexcept nogil\n ctypedef void (*NpyIter_GetMultiIndexFunc "NpyIter_GetMultiIndexFunc *")(NpyIter* it, npy_intp* outcoords) noexcept nogil\n\ncdef extern from "numpy/arrayscalars.h":\n\n # abstract types\n ctypedef class numpy.generic [object PyObject]:\n pass\n ctypedef class numpy.number [object PyObject]:\n pass\n ctypedef class numpy.integer [object PyObject]:\n pass\n ctypedef class numpy.signedinteger [object PyObject]:\n pass\n ctypedef class numpy.unsignedinteger [object PyObject]:\n pass\n ctypedef class numpy.inexact [object PyObject]:\n pass\n ctypedef class numpy.floating [object PyObject]:\n pass\n ctypedef class numpy.complexfloating [object PyObject]:\n pass\n ctypedef class numpy.flexible [object PyObject]:\n pass\n ctypedef class numpy.character [object PyObject]:\n pass\n\n ctypedef struct PyDatetimeScalarObject:\n # PyObject_HEAD\n npy_datetime obval\n PyArray_DatetimeMetaData 
obmeta\n\n ctypedef struct PyTimedeltaScalarObject:\n # PyObject_HEAD\n npy_timedelta obval\n PyArray_DatetimeMetaData obmeta\n\n ctypedef enum NPY_DATETIMEUNIT:\n NPY_FR_Y\n NPY_FR_M\n NPY_FR_W\n NPY_FR_D\n NPY_FR_B\n NPY_FR_h\n NPY_FR_m\n NPY_FR_s\n NPY_FR_ms\n NPY_FR_us\n NPY_FR_ns\n NPY_FR_ps\n NPY_FR_fs\n NPY_FR_as\n NPY_FR_GENERIC\n\n\ncdef extern from "numpy/arrayobject.h":\n # These are part of the C-API defined in `__multiarray_api.h`\n\n # NumPy internal definitions in datetime_strings.c:\n int get_datetime_iso_8601_strlen "NpyDatetime_GetDatetimeISO8601StrLen" (\n int local, NPY_DATETIMEUNIT base)\n int make_iso_8601_datetime "NpyDatetime_MakeISO8601Datetime" (\n npy_datetimestruct *dts, char *outstr, npy_intp outlen,\n int local, int utc, NPY_DATETIMEUNIT base, int tzoffset,\n NPY_CASTING casting) except -1\n\n # NumPy internal definition in datetime.c:\n # May return 1 to indicate that object does not appear to be a datetime\n # (returns 0 on success).\n int convert_pydatetime_to_datetimestruct "NpyDatetime_ConvertPyDateTimeToDatetimeStruct" (\n PyObject *obj, npy_datetimestruct *out,\n NPY_DATETIMEUNIT *out_bestunit, int apply_tzinfo) except -1\n int convert_datetime64_to_datetimestruct "NpyDatetime_ConvertDatetime64ToDatetimeStruct" (\n PyArray_DatetimeMetaData *meta, npy_datetime dt,\n npy_datetimestruct *out) except -1\n int convert_datetimestruct_to_datetime64 "NpyDatetime_ConvertDatetimeStructToDatetime64"(\n PyArray_DatetimeMetaData *meta, const npy_datetimestruct *dts,\n npy_datetime *out) except -1\n\n\n#\n# ufunc API\n#\n\ncdef extern from "numpy/ufuncobject.h":\n\n ctypedef void (*PyUFuncGenericFunction) (char **, npy_intp *, npy_intp *, void *)\n\n ctypedef class numpy.ufunc [object PyUFuncObject, check_size ignore]:\n cdef:\n int nin, nout, nargs\n int identity\n PyUFuncGenericFunction *functions\n void **data\n int ntypes\n int check_return\n char *name\n char *types\n char *doc\n void *ptr\n PyObject *obj\n PyObject *userloops\n\n cdef 
enum:\n PyUFunc_Zero\n PyUFunc_One\n PyUFunc_None\n # deprecated\n UFUNC_FPE_DIVIDEBYZERO\n UFUNC_FPE_OVERFLOW\n UFUNC_FPE_UNDERFLOW\n UFUNC_FPE_INVALID\n # use these instead\n NPY_FPE_DIVIDEBYZERO\n NPY_FPE_OVERFLOW\n NPY_FPE_UNDERFLOW\n NPY_FPE_INVALID\n\n object PyUFunc_FromFuncAndData(PyUFuncGenericFunction *,\n void **, char *, int, int, int, int, char *, char *, int)\n int PyUFunc_RegisterLoopForType(ufunc, int,\n PyUFuncGenericFunction, int *, void *) except -1\n void PyUFunc_f_f_As_d_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_d_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_f_f \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_g_g \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_F_F_As_D_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_F_F \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_D_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_G_G \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_O_O \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_ff_f_As_dd_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_ff_f \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_dd_d \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_gg_g \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_FF_F_As_DD_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_DD_D \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_FF_F \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_GG_G \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_OO_O \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_O_O_method \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_OO_O_method \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_On_Om \\n (char **, npy_intp *, npy_intp *, void *)\n void PyUFunc_clearfperr()\n int PyUFunc_getfperr()\n int PyUFunc_ReplaceLoopBySignature \\n (ufunc, 
PyUFuncGenericFunction, int *, PyUFuncGenericFunction *)\n object PyUFunc_FromFuncAndDataAndSignature \\n (PyUFuncGenericFunction *, void **, char *, int, int, int,\n int, char *, char *, int, char *)\n\n int _import_umath() except -1\n\ncdef inline void set_array_base(ndarray arr, object base):\n Py_INCREF(base) # important to do this before stealing the reference below!\n PyArray_SetBaseObject(arr, base)\n\ncdef inline object get_array_base(ndarray arr):\n base = PyArray_BASE(arr)\n if base is NULL:\n return None\n return <object>base\n\n# Versions of the import_* functions which are more suitable for\n# Cython code.\ncdef inline int import_array() except -1:\n try:\n __pyx_import_array()\n except Exception:\n raise ImportError("numpy._core.multiarray failed to import")\n\ncdef inline int import_umath() except -1:\n try:\n _import_umath()\n except Exception:\n raise ImportError("numpy._core.umath failed to import")\n\ncdef inline int import_ufunc() except -1:\n try:\n _import_umath()\n except Exception:\n raise ImportError("numpy._core.umath failed to import")\n\n\ncdef inline bint is_timedelta64_object(object obj):\n """\n Cython equivalent of `isinstance(obj, np.timedelta64)`\n\n Parameters\n ----------\n obj : object\n\n Returns\n -------\n bool\n """\n return PyObject_TypeCheck(obj, &PyTimedeltaArrType_Type)\n\n\ncdef inline bint is_datetime64_object(object obj):\n """\n Cython equivalent of `isinstance(obj, np.datetime64)`\n\n Parameters\n ----------\n obj : object\n\n Returns\n -------\n bool\n """\n return PyObject_TypeCheck(obj, &PyDatetimeArrType_Type)\n\n\ncdef inline npy_datetime get_datetime64_value(object obj) nogil:\n """\n returns the int64 value underlying scalar numpy datetime64 object\n\n Note that to interpret this as a datetime, the corresponding unit is\n also needed. 
That can be found using `get_datetime64_unit`.\n """\n return (<PyDatetimeScalarObject*>obj).obval\n\n\ncdef inline npy_timedelta get_timedelta64_value(object obj) nogil:\n """\n returns the int64 value underlying scalar numpy timedelta64 object\n """\n return (<PyTimedeltaScalarObject*>obj).obval\n\n\ncdef inline NPY_DATETIMEUNIT get_datetime64_unit(object obj) nogil:\n """\n returns the unit part of the dtype for a numpy datetime64 object.\n """\n return <NPY_DATETIMEUNIT>(<PyDatetimeScalarObject*>obj).obmeta.base\n\n\ncdef extern from "numpy/arrayobject.h":\n\n ctypedef struct NpyIter:\n pass\n\n cdef enum:\n NPY_FAIL\n NPY_SUCCEED\n\n cdef enum:\n # Track an index representing C order\n NPY_ITER_C_INDEX\n # Track an index representing Fortran order\n NPY_ITER_F_INDEX\n # Track a multi-index\n NPY_ITER_MULTI_INDEX\n # User code external to the iterator does the 1-dimensional innermost loop\n NPY_ITER_EXTERNAL_LOOP\n # Convert all the operands to a common data type\n NPY_ITER_COMMON_DTYPE\n # Operands may hold references, requiring API access during iteration\n NPY_ITER_REFS_OK\n # Zero-sized operands should be permitted, iteration checks IterSize for 0\n NPY_ITER_ZEROSIZE_OK\n # Permits reductions (size-0 stride with dimension size > 1)\n NPY_ITER_REDUCE_OK\n # Enables sub-range iteration\n NPY_ITER_RANGED\n # Enables buffering\n NPY_ITER_BUFFERED\n # When buffering is enabled, grows the inner loop if possible\n NPY_ITER_GROWINNER\n # Delay allocation of buffers until first Reset* call\n NPY_ITER_DELAY_BUFALLOC\n # When NPY_KEEPORDER is specified, disable reversing negative-stride axes\n NPY_ITER_DONT_NEGATE_STRIDES\n NPY_ITER_COPY_IF_OVERLAP\n # The operand will be read from and written to\n NPY_ITER_READWRITE\n # The operand will only be read from\n NPY_ITER_READONLY\n # The operand will only be written to\n NPY_ITER_WRITEONLY\n # The operand's data must be in native byte order\n NPY_ITER_NBO\n # The operand's data must be aligned\n NPY_ITER_ALIGNED\n # The 
operand's data must be contiguous (within the inner loop)\n NPY_ITER_CONTIG\n # The operand may be copied to satisfy requirements\n NPY_ITER_COPY\n # The operand may be copied with WRITEBACKIFCOPY to satisfy requirements\n NPY_ITER_UPDATEIFCOPY\n # Allocate the operand if it is NULL\n NPY_ITER_ALLOCATE\n # If an operand is allocated, don't use any subtype\n NPY_ITER_NO_SUBTYPE\n # This is a virtual array slot, operand is NULL but temporary data is there\n NPY_ITER_VIRTUAL\n # Require that the dimension match the iterator dimensions exactly\n NPY_ITER_NO_BROADCAST\n # A mask is being used on this array, affects buffer -> array copy\n NPY_ITER_WRITEMASKED\n # This array is the mask for all WRITEMASKED operands\n NPY_ITER_ARRAYMASK\n # Assume iterator order data access for COPY_IF_OVERLAP\n NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE\n\n # construction and destruction functions\n NpyIter* NpyIter_New(ndarray arr, npy_uint32 flags, NPY_ORDER order,\n NPY_CASTING casting, dtype datatype) except NULL\n NpyIter* NpyIter_MultiNew(npy_intp nop, PyArrayObject** op, npy_uint32 flags,\n NPY_ORDER order, NPY_CASTING casting, npy_uint32*\n op_flags, PyArray_Descr** op_dtypes) except NULL\n NpyIter* NpyIter_AdvancedNew(npy_intp nop, PyArrayObject** op,\n npy_uint32 flags, NPY_ORDER order,\n NPY_CASTING casting, npy_uint32* op_flags,\n PyArray_Descr** op_dtypes, int oa_ndim,\n int** op_axes, const npy_intp* itershape,\n npy_intp buffersize) except NULL\n NpyIter* NpyIter_Copy(NpyIter* it) except NULL\n int NpyIter_RemoveAxis(NpyIter* it, int axis) except NPY_FAIL\n int NpyIter_RemoveMultiIndex(NpyIter* it) except NPY_FAIL\n int NpyIter_EnableExternalLoop(NpyIter* it) except NPY_FAIL\n int NpyIter_Deallocate(NpyIter* it) except NPY_FAIL\n int NpyIter_Reset(NpyIter* it, char** errmsg) except NPY_FAIL\n int NpyIter_ResetToIterIndexRange(NpyIter* it, npy_intp istart,\n npy_intp iend, char** errmsg) except NPY_FAIL\n int NpyIter_ResetBasePointers(NpyIter* it, char** baseptrs, char** errmsg) 
except NPY_FAIL\n int NpyIter_GotoMultiIndex(NpyIter* it, const npy_intp* multi_index) except NPY_FAIL\n int NpyIter_GotoIndex(NpyIter* it, npy_intp index) except NPY_FAIL\n npy_intp NpyIter_GetIterSize(NpyIter* it) nogil\n npy_intp NpyIter_GetIterIndex(NpyIter* it) nogil\n void NpyIter_GetIterIndexRange(NpyIter* it, npy_intp* istart,\n npy_intp* iend) nogil\n int NpyIter_GotoIterIndex(NpyIter* it, npy_intp iterindex) except NPY_FAIL\n npy_bool NpyIter_HasDelayedBufAlloc(NpyIter* it) nogil\n npy_bool NpyIter_HasExternalLoop(NpyIter* it) nogil\n npy_bool NpyIter_HasMultiIndex(NpyIter* it) nogil\n npy_bool NpyIter_HasIndex(NpyIter* it) nogil\n npy_bool NpyIter_RequiresBuffering(NpyIter* it) nogil\n npy_bool NpyIter_IsBuffered(NpyIter* it) nogil\n npy_bool NpyIter_IsGrowInner(NpyIter* it) nogil\n npy_intp NpyIter_GetBufferSize(NpyIter* it) nogil\n int NpyIter_GetNDim(NpyIter* it) nogil\n int NpyIter_GetNOp(NpyIter* it) nogil\n npy_intp* NpyIter_GetAxisStrideArray(NpyIter* it, int axis) except NULL\n int NpyIter_GetShape(NpyIter* it, npy_intp* outshape) nogil\n PyArray_Descr** NpyIter_GetDescrArray(NpyIter* it)\n PyArrayObject** NpyIter_GetOperandArray(NpyIter* it)\n ndarray NpyIter_GetIterView(NpyIter* it, npy_intp i)\n void NpyIter_GetReadFlags(NpyIter* it, char* outreadflags)\n void NpyIter_GetWriteFlags(NpyIter* it, char* outwriteflags)\n int NpyIter_CreateCompatibleStrides(NpyIter* it, npy_intp itemsize,\n npy_intp* outstrides) except NPY_FAIL\n npy_bool NpyIter_IsFirstVisit(NpyIter* it, int iop) nogil\n # functions for iterating an NpyIter object\n #\n # These don't match the definition in the C API because Cython can't wrap\n # function pointers that return functions.\n NpyIter_IterNextFunc* NpyIter_GetIterNext(NpyIter* it, char** errmsg) except NULL\n NpyIter_GetMultiIndexFunc* NpyIter_GetGetMultiIndex(NpyIter* it,\n char** errmsg) except NULL\n char** NpyIter_GetDataPtrArray(NpyIter* it) nogil\n char** NpyIter_GetInitialDataPtrArray(NpyIter* it) nogil\n 
npy_intp* NpyIter_GetIndexPtr(NpyIter* it)\n npy_intp* NpyIter_GetInnerStrideArray(NpyIter* it) nogil\n npy_intp* NpyIter_GetInnerLoopSizePtr(NpyIter* it) nogil\n void NpyIter_GetInnerFixedStrideArray(NpyIter* it, npy_intp* outstrides) nogil\n npy_bool NpyIter_IterationNeedsAPI(NpyIter* it) nogil\n void NpyIter_DebugPrint(NpyIter* it)\n\n# NpyString API\ncdef extern from "numpy/ndarraytypes.h":\n ctypedef struct npy_string_allocator:\n pass\n\n ctypedef struct npy_packed_static_string:\n pass\n\n ctypedef struct npy_static_string:\n size_t size\n const char *buf\n\n ctypedef struct PyArray_StringDTypeObject:\n PyArray_Descr base\n PyObject *na_object\n char coerce\n char has_nan_na\n char has_string_na\n char array_owned\n npy_static_string default_string\n npy_static_string na_name\n npy_string_allocator *allocator\n\ncdef extern from "numpy/arrayobject.h":\n npy_string_allocator *NpyString_acquire_allocator(const PyArray_StringDTypeObject *descr)\n void NpyString_acquire_allocators(size_t n_descriptors, PyArray_Descr *const descrs[], npy_string_allocator *allocators[])\n void NpyString_release_allocator(npy_string_allocator *allocator)\n void NpyString_release_allocators(size_t length, npy_string_allocator *allocators[])\n int NpyString_load(npy_string_allocator *allocator, const npy_packed_static_string *packed_string, npy_static_string *unpacked_string)\n int NpyString_pack_null(npy_string_allocator *allocator, npy_packed_static_string *packed_string)\n int NpyString_pack(npy_string_allocator *allocator, npy_packed_static_string *packed_string, const char *buf, size_t size)\n
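The `.pxd` content above exposes NumPy's datetime64 scalar internals to Cython: `get_datetime64_value()` returns the raw int64 payload of a `datetime64` scalar, and `get_datetime64_unit()` its time unit. A minimal pure-Python sketch of the same information, reachable without the C API via `astype("int64")` and `np.datetime_data()`:

```python
import numpy as np

# A datetime64 scalar is an int64 count of time units since the epoch,
# plus unit metadata on the dtype -- exactly what the Cython helpers
# get_datetime64_value() and get_datetime64_unit() read out.
dt = np.datetime64("1970-01-01T00:00:10", "s")

raw = dt.astype("int64")                  # int64 payload: 10 seconds since epoch
unit, count = np.datetime_data(dt.dtype)  # unit string and multiplier: ("s", 1)

print(raw, unit, count)
```

From Cython, the declared helpers read the same fields directly from the `PyDatetimeScalarObject` struct without any Python-level conversion.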
.venv\Lib\site-packages\numpy\__init__.pxd
__init__.pxd
Other
44,912
0.95
0.032062
0.148297
python-kit
337
2024-05-29T10:38:14.954912
Apache-2.0
false
c05f3c04378a692c4d3f46b9560d000f
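The `.pxd` above warns that `PyArray_SetBaseObject` steals a reference and recommends the `set_array_base()`/`get_array_base()` inline helpers instead. The base object they manage is what keeps a view's memory owner alive; a short Python-level sketch of that chain, visible through `ndarray.base`:

```python
import numpy as np

# The base-object chain managed by set_array_base()/get_array_base():
# a view holds a reference to the array that owns its memory, so the
# owner cannot be garbage-collected while the view is alive.
owner = np.arange(10)
view = owner[2:5]

assert view.base is owner   # the view keeps its owner alive
assert owner.base is None   # the owner allocated its own memory
```

In Cython extension code, `set_array_base(arr, base)` performs the `Py_INCREF` before handing the reference to `PyArray_SetBaseObject`, which is exactly the ordering the `# NOTE` in the declaration cautions about.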
"""\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <https://numpy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <https://ipython.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as ``np``::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line "Help on ufunc object:" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nAvailable subpackages\n---------------------\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n NumPy testing tools\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more (for Python <= 3.11)\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\n__version__\n NumPy version string\n\nViewing documentation using IPython\n-----------------------------------\n\nStart IPython and import `numpy` usually under the alias ``np``: `import\nnumpy as np`. 
Then, directly paste or use the ``%cpaste`` magic to paste\nexamples into the shell.  To see which functions are available in `numpy`,\ntype ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list.  To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`).  In-place versions of these functions are often\navailable as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n"""\n\n\n# start delvewheel patch\ndef _delvewheel_patch_1_10_1():\n    import os\n    if os.path.isdir(libs_dir := os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, 'numpy.libs'))):\n        os.add_dll_directory(libs_dir)\n\n\n_delvewheel_patch_1_10_1()\ndel _delvewheel_patch_1_10_1\n# end delvewheel patch\n\nimport os\nimport sys\nimport warnings\n\n# If a version with git hash was stored, use that instead\nfrom . import version\nfrom ._expired_attrs_2_0 import __expired_attributes__\nfrom ._globals import _CopyMode, _NoValue\nfrom .version import __version__\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n    __NUMPY_SETUP__  # noqa: B018\nexcept NameError:\n    __NUMPY_SETUP__ = False\n\nif __NUMPY_SETUP__:\n    sys.stderr.write('Running from numpy source directory.\n')\nelse:\n    # Allow distributors to run custom init code before importing numpy._core\n    from . 
import _distributor_init\n\n try:\n from numpy.__config__ import show_config\n except ImportError as e:\n msg = """Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there."""\n raise ImportError(msg) from e\n\n from . import _core\n from ._core import (\n False_,\n ScalarType,\n True_,\n abs,\n absolute,\n acos,\n acosh,\n add,\n all,\n allclose,\n amax,\n amin,\n any,\n arange,\n arccos,\n arccosh,\n arcsin,\n arcsinh,\n arctan,\n arctan2,\n arctanh,\n argmax,\n argmin,\n argpartition,\n argsort,\n argwhere,\n around,\n array,\n array2string,\n array_equal,\n array_equiv,\n array_repr,\n array_str,\n asanyarray,\n asarray,\n ascontiguousarray,\n asfortranarray,\n asin,\n asinh,\n astype,\n atan,\n atan2,\n atanh,\n atleast_1d,\n atleast_2d,\n atleast_3d,\n base_repr,\n binary_repr,\n bitwise_and,\n bitwise_count,\n bitwise_invert,\n bitwise_left_shift,\n bitwise_not,\n bitwise_or,\n bitwise_right_shift,\n bitwise_xor,\n block,\n bool,\n bool_,\n broadcast,\n busday_count,\n busday_offset,\n busdaycalendar,\n byte,\n bytes_,\n can_cast,\n cbrt,\n cdouble,\n ceil,\n character,\n choose,\n clip,\n clongdouble,\n complex64,\n complex128,\n complexfloating,\n compress,\n concat,\n concatenate,\n conj,\n conjugate,\n convolve,\n copysign,\n copyto,\n correlate,\n cos,\n cosh,\n count_nonzero,\n cross,\n csingle,\n cumprod,\n cumsum,\n cumulative_prod,\n cumulative_sum,\n datetime64,\n datetime_as_string,\n datetime_data,\n deg2rad,\n degrees,\n diagonal,\n divide,\n divmod,\n dot,\n double,\n dtype,\n e,\n einsum,\n einsum_path,\n empty,\n empty_like,\n equal,\n errstate,\n euler_gamma,\n exp,\n exp2,\n expm1,\n fabs,\n finfo,\n flatiter,\n flatnonzero,\n flexible,\n float16,\n float32,\n float64,\n float_power,\n floating,\n floor,\n floor_divide,\n fmax,\n fmin,\n fmod,\n format_float_positional,\n format_float_scientific,\n frexp,\n from_dlpack,\n 
frombuffer,\n fromfile,\n fromfunction,\n fromiter,\n frompyfunc,\n fromstring,\n full,\n full_like,\n gcd,\n generic,\n geomspace,\n get_printoptions,\n getbufsize,\n geterr,\n geterrcall,\n greater,\n greater_equal,\n half,\n heaviside,\n hstack,\n hypot,\n identity,\n iinfo,\n indices,\n inexact,\n inf,\n inner,\n int8,\n int16,\n int32,\n int64,\n int_,\n intc,\n integer,\n intp,\n invert,\n is_busday,\n isclose,\n isdtype,\n isfinite,\n isfortran,\n isinf,\n isnan,\n isnat,\n isscalar,\n issubdtype,\n lcm,\n ldexp,\n left_shift,\n less,\n less_equal,\n lexsort,\n linspace,\n little_endian,\n log,\n log1p,\n log2,\n log10,\n logaddexp,\n logaddexp2,\n logical_and,\n logical_not,\n logical_or,\n logical_xor,\n logspace,\n long,\n longdouble,\n longlong,\n matmul,\n matrix_transpose,\n matvec,\n max,\n maximum,\n may_share_memory,\n mean,\n memmap,\n min,\n min_scalar_type,\n minimum,\n mod,\n modf,\n moveaxis,\n multiply,\n nan,\n ndarray,\n ndim,\n nditer,\n negative,\n nested_iters,\n newaxis,\n nextafter,\n nonzero,\n not_equal,\n number,\n object_,\n ones,\n ones_like,\n outer,\n partition,\n permute_dims,\n pi,\n positive,\n pow,\n power,\n printoptions,\n prod,\n promote_types,\n ptp,\n put,\n putmask,\n rad2deg,\n radians,\n ravel,\n recarray,\n reciprocal,\n record,\n remainder,\n repeat,\n require,\n reshape,\n resize,\n result_type,\n right_shift,\n rint,\n roll,\n rollaxis,\n round,\n sctypeDict,\n searchsorted,\n set_printoptions,\n setbufsize,\n seterr,\n seterrcall,\n shape,\n shares_memory,\n short,\n sign,\n signbit,\n signedinteger,\n sin,\n single,\n sinh,\n size,\n sort,\n spacing,\n sqrt,\n square,\n squeeze,\n stack,\n std,\n str_,\n subtract,\n sum,\n swapaxes,\n take,\n tan,\n tanh,\n tensordot,\n timedelta64,\n trace,\n transpose,\n true_divide,\n trunc,\n typecodes,\n ubyte,\n ufunc,\n uint,\n uint8,\n uint16,\n uint32,\n uint64,\n uintc,\n uintp,\n ulong,\n ulonglong,\n unsignedinteger,\n unstack,\n ushort,\n var,\n vdot,\n vecdot,\n 
vecmat,\n void,\n vstack,\n where,\n zeros,\n zeros_like,\n )\n\n # NOTE: It's still under discussion whether these aliases\n # should be removed.\n for ta in ["float96", "float128", "complex192", "complex256"]:\n try:\n globals()[ta] = getattr(_core, ta)\n except AttributeError:\n pass\n del ta\n\n from . import lib\n from . import matrixlib as _mat\n from .lib import scimath as emath\n from .lib._arraypad_impl import pad\n from .lib._arraysetops_impl import (\n ediff1d,\n in1d,\n intersect1d,\n isin,\n setdiff1d,\n setxor1d,\n union1d,\n unique,\n unique_all,\n unique_counts,\n unique_inverse,\n unique_values,\n )\n from .lib._function_base_impl import (\n angle,\n append,\n asarray_chkfinite,\n average,\n bartlett,\n bincount,\n blackman,\n copy,\n corrcoef,\n cov,\n delete,\n diff,\n digitize,\n extract,\n flip,\n gradient,\n hamming,\n hanning,\n i0,\n insert,\n interp,\n iterable,\n kaiser,\n median,\n meshgrid,\n percentile,\n piecewise,\n place,\n quantile,\n rot90,\n select,\n sinc,\n sort_complex,\n trapezoid,\n trapz,\n trim_zeros,\n unwrap,\n vectorize,\n )\n from .lib._histograms_impl import histogram, histogram_bin_edges, histogramdd\n from .lib._index_tricks_impl import (\n c_,\n diag_indices,\n diag_indices_from,\n fill_diagonal,\n index_exp,\n ix_,\n mgrid,\n ndenumerate,\n ndindex,\n ogrid,\n r_,\n ravel_multi_index,\n s_,\n unravel_index,\n )\n from .lib._nanfunctions_impl import (\n nanargmax,\n nanargmin,\n nancumprod,\n nancumsum,\n nanmax,\n nanmean,\n nanmedian,\n nanmin,\n nanpercentile,\n nanprod,\n nanquantile,\n nanstd,\n nansum,\n nanvar,\n )\n from .lib._npyio_impl import (\n fromregex,\n genfromtxt,\n load,\n loadtxt,\n packbits,\n save,\n savetxt,\n savez,\n savez_compressed,\n unpackbits,\n )\n from .lib._polynomial_impl import (\n poly,\n poly1d,\n polyadd,\n polyder,\n polydiv,\n polyfit,\n polyint,\n polymul,\n polysub,\n polyval,\n roots,\n )\n from .lib._shape_base_impl import (\n apply_along_axis,\n apply_over_axes,\n 
array_split,\n        column_stack,\n        dsplit,\n        dstack,\n        expand_dims,\n        hsplit,\n        kron,\n        put_along_axis,\n        row_stack,\n        split,\n        take_along_axis,\n        tile,\n        vsplit,\n    )\n    from .lib._stride_tricks_impl import (\n        broadcast_arrays,\n        broadcast_shapes,\n        broadcast_to,\n    )\n    from .lib._twodim_base_impl import (\n        diag,\n        diagflat,\n        eye,\n        fliplr,\n        flipud,\n        histogram2d,\n        mask_indices,\n        tri,\n        tril,\n        tril_indices,\n        tril_indices_from,\n        triu,\n        triu_indices,\n        triu_indices_from,\n        vander,\n    )\n    from .lib._type_check_impl import (\n        common_type,\n        imag,\n        iscomplex,\n        iscomplexobj,\n        isreal,\n        isrealobj,\n        mintypecode,\n        nan_to_num,\n        real,\n        real_if_close,\n        typename,\n    )\n    from .lib._ufunclike_impl import fix, isneginf, isposinf\n    from .lib._utils_impl import get_include, info, show_runtime\n    from .matrixlib import asmatrix, bmat, matrix\n\n    # public submodules are imported lazily, therefore are accessible from\n    # __getattr__. Note that `distutils` (deprecated) and `array_api`\n    # (experimental label) are not added here, because `from numpy import *`\n    # must not raise any warnings - that's too disruptive.\n    __numpy_submodules__ = {\n        "linalg", "fft", "dtypes", "random", "polynomial", "ma",\n        "exceptions", "lib", "ctypeslib", "testing", "typing",\n        "f2py", "test", "rec", "char", "core", "strings",\n    }\n\n    # We build warning messages for former attributes\n    _msg = (\n        "module 'numpy' has no attribute '{n}'.\n"\n        "`np.{n}` was a deprecated alias for the builtin `{n}`. "\n        "To avoid this error in existing code, use `{n}` by itself. "\n        "Doing this will not modify any behavior and is safe. {extended_msg}\n"\n        "The alias was originally deprecated in NumPy 1.20; for more "\n        "details and guidance see the original release note at:\n"\n        "    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations")\n\n    _specific_msg = (\n        "If you specifically wanted the numpy scalar type, use `np.{}` here.")\n\n    _int_extended_msg = (\n        "When replacing `np.{}`, you may wish to use e.g. 
`np.int64` "\n "or `np.int32` to specify the precision. If you wish to review "\n "your current use, check the release note link for "\n "additional information.")\n\n _type_info = [\n ("object", ""), # The NumPy scalar only exists by name.\n ("float", _specific_msg.format("float64")),\n ("complex", _specific_msg.format("complex128")),\n ("str", _specific_msg.format("str_")),\n ("int", _int_extended_msg.format("int"))]\n\n __former_attrs__ = {\n n: _msg.format(n=n, extended_msg=extended_msg)\n for n, extended_msg in _type_info\n }\n\n # Some of these could be defined right away, but most were aliases to\n # the Python objects and only removed in NumPy 1.24. Defining them should\n # probably wait for NumPy 1.26 or 2.0.\n # When defined, these should possibly not be added to `__all__` to avoid\n # import with `from numpy import *`.\n __future_scalars__ = {"str", "bytes", "object"}\n\n __array_api_version__ = "2024.12"\n\n from ._array_api_info import __array_namespace_info__\n\n # now that numpy core module is imported, can initialize limits\n _core.getlimits._register_known_types()\n\n __all__ = list(\n __numpy_submodules__ |\n set(_core.__all__) |\n set(_mat.__all__) |\n set(lib._histograms_impl.__all__) |\n set(lib._nanfunctions_impl.__all__) |\n set(lib._function_base_impl.__all__) |\n set(lib._twodim_base_impl.__all__) |\n set(lib._shape_base_impl.__all__) |\n set(lib._type_check_impl.__all__) |\n set(lib._arraysetops_impl.__all__) |\n set(lib._ufunclike_impl.__all__) |\n set(lib._arraypad_impl.__all__) |\n set(lib._utils_impl.__all__) |\n set(lib._stride_tricks_impl.__all__) |\n set(lib._polynomial_impl.__all__) |\n set(lib._npyio_impl.__all__) |\n set(lib._index_tricks_impl.__all__) |\n {"emath", "show_config", "__version__", "__array_namespace_info__"}\n )\n\n # Filter out Cython harmless warnings\n warnings.filterwarnings("ignore", message="numpy.dtype size changed")\n warnings.filterwarnings("ignore", message="numpy.ufunc size changed")\n 
warnings.filterwarnings("ignore", message="numpy.ndarray size changed")\n\n def __getattr__(attr):\n # Warn for expired attributes\n import warnings\n\n if attr == "linalg":\n import numpy.linalg as linalg\n return linalg\n elif attr == "fft":\n import numpy.fft as fft\n return fft\n elif attr == "dtypes":\n import numpy.dtypes as dtypes\n return dtypes\n elif attr == "random":\n import numpy.random as random\n return random\n elif attr == "polynomial":\n import numpy.polynomial as polynomial\n return polynomial\n elif attr == "ma":\n import numpy.ma as ma\n return ma\n elif attr == "ctypeslib":\n import numpy.ctypeslib as ctypeslib\n return ctypeslib\n elif attr == "exceptions":\n import numpy.exceptions as exceptions\n return exceptions\n elif attr == "testing":\n import numpy.testing as testing\n return testing\n elif attr == "matlib":\n import numpy.matlib as matlib\n return matlib\n elif attr == "f2py":\n import numpy.f2py as f2py\n return f2py\n elif attr == "typing":\n import numpy.typing as typing\n return typing\n elif attr == "rec":\n import numpy.rec as rec\n return rec\n elif attr == "char":\n import numpy.char as char\n return char\n elif attr == "array_api":\n raise AttributeError("`numpy.array_api` is not available from "\n "numpy 2.0 onwards", name=None)\n elif attr == "core":\n import numpy.core as core\n return core\n elif attr == "strings":\n import numpy.strings as strings\n return strings\n elif attr == "distutils":\n if 'distutils' in __numpy_submodules__:\n import numpy.distutils as distutils\n return distutils\n else:\n raise AttributeError("`numpy.distutils` is not available from "\n "Python 3.12 onwards", name=None)\n\n if attr in __future_scalars__:\n # And future warnings for those that will change, but also give\n # the AttributeError\n warnings.warn(\n f"In the future `np.{attr}` will be defined as the "\n "corresponding NumPy scalar.", FutureWarning, stacklevel=2)\n\n if attr in __former_attrs__:\n raise 
AttributeError(__former_attrs__[attr], name=None)\n\n if attr in __expired_attributes__:\n raise AttributeError(\n f"`np.{attr}` was removed in the NumPy 2.0 release. "\n f"{__expired_attributes__[attr]}",\n name=None\n )\n\n if attr == "chararray":\n warnings.warn(\n "`np.chararray` is deprecated and will be removed from "\n "the main namespace in the future. Use an array with a string "\n "or bytes dtype instead.", DeprecationWarning, stacklevel=2)\n import numpy.char as char\n return char.chararray\n\n raise AttributeError(f"module {__name__!r} has no attribute {attr!r}")\n\n def __dir__():\n public_symbols = (\n globals().keys() | __numpy_submodules__\n )\n public_symbols -= {\n "matrixlib", "matlib", "tests", "conftest", "version",\n "distutils", "array_api"\n }\n return list(public_symbols)\n\n # Pytest testing\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n def _sanity_check():\n """\n Quick sanity checks for common bugs caused by environment.\n There are some cases e.g. with wrong BLAS ABI that cause wrong\n results under specific runtime conditions that are not necessarily\n achieved during test suite runs, and it is useful to catch those early.\n\n See https://github.com/numpy/numpy/issues/8577 and other\n similar bug reports.\n\n """\n try:\n x = ones(2, dtype=float32)\n if not abs(x.dot(x) - float32(2.0)) < 1e-5:\n raise AssertionError\n except AssertionError:\n msg = ("The current Numpy installation ({!r}) fails to "\n "pass simple sanity checks. This can be caused for example "\n "by incorrect BLAS library being linked in, or by mixing "\n "package managers (pip, conda, apt, ...). 
Search closed "\n "numpy issues for similar problems.")\n raise RuntimeError(msg.format(__file__)) from None\n\n _sanity_check()\n del _sanity_check\n\n def _mac_os_check():\n """\n Quick Sanity check for Mac OS look for accelerate build bugs.\n Testing numpy polyfit calls init_dgelsd(LAPACK)\n """\n try:\n c = array([3., 2., 1.])\n x = linspace(0, 2, 5)\n y = polyval(c, x)\n _ = polyfit(x, y, 2, cov=True)\n except ValueError:\n pass\n\n if sys.platform == "darwin":\n from . import exceptions\n with warnings.catch_warnings(record=True) as w:\n _mac_os_check()\n # Throw runtime error, if the test failed\n # Check for warning and report the error_message\n if len(w) > 0:\n for _wn in w:\n if _wn.category is exceptions.RankWarning:\n # Ignore other warnings, they may not be relevant (see gh-25433)\n error_message = (\n f"{_wn.category.__name__}: {_wn.message}"\n )\n msg = (\n "Polyfit sanity test emitted a warning, most likely due "\n "to using a buggy Accelerate backend."\n "\nIf you compiled yourself, more information is available at:" # noqa: E501\n "\nhttps://numpy.org/devdocs/building/index.html"\n "\nOtherwise report this to the vendor "\n f"that provided NumPy.\n\n{error_message}\n")\n raise RuntimeError(msg)\n del _wn\n del w\n del _mac_os_check\n\n def hugepage_setup():\n """\n We usually use madvise hugepages support, but on some old kernels it\n is slow and thus better avoided. Specifically kernel version 4.6\n had a bug fix which probably fixed this:\n https://github.com/torvalds/linux/commit/7cf91a98e607c2f935dbcc177d70011e95b8faff\n """\n use_hugepage = os.environ.get("NUMPY_MADVISE_HUGEPAGE", None)\n if sys.platform == "linux" and use_hugepage is None:\n # If there is an issue with parsing the kernel version,\n # set use_hugepage to 0. 
Usage of LooseVersion will handle\n # the kernel version parsing better, but avoided since it\n # will increase the import time.\n # See: #16679 for related discussion.\n try:\n use_hugepage = 1\n kernel_version = os.uname().release.split(".")[:2]\n kernel_version = tuple(int(v) for v in kernel_version)\n if kernel_version < (4, 6):\n use_hugepage = 0\n except ValueError:\n use_hugepage = 0\n elif use_hugepage is None:\n # This is not Linux, so it should not matter, just enable anyway\n use_hugepage = 1\n else:\n use_hugepage = int(use_hugepage)\n return use_hugepage\n\n # Note that this will currently only make a difference on Linux\n _core.multiarray._set_madvise_hugepage(hugepage_setup())\n del hugepage_setup\n\n # Give a warning if NumPy is reloaded or imported on a sub-interpreter\n # We do this from python, since the C-module may not be reloaded and\n # it is tidier organized.\n _core.multiarray._multiarray_umath._reload_guard()\n\n # TODO: Remove the environment variable entirely now that it is "weak"\n if (os.environ.get("NPY_PROMOTION_STATE", "weak") != "weak"):\n warnings.warn(\n "NPY_PROMOTION_STATE was a temporary feature for NumPy 2.0 "\n "transition and is ignored after NumPy 2.2.",\n UserWarning, stacklevel=2)\n\n # Tell PyInstaller where to find hook-numpy.py\n def _pyinstaller_hooks_dir():\n from pathlib import Path\n return [str(Path(__file__).with_name("_pyinstaller").resolve())]\n\n\n# Remove symbols imported for internal use\ndel os, sys, warnings
.venv\Lib\site-packages\numpy\__init__.py
__init__.py
Python
26,476
0.95
0.067021
0.045506
python-kit
341
2024-06-03T22:13:14.088581
BSD-3-Clause
false
304ddec72368a0b0b40c38f1c38fe7d4
from numpy._core.defchararray import *\nfrom numpy._core.defchararray import __all__, __doc__\n
.venv\Lib\site-packages\numpy\char\__init__.py
__init__.py
Python
95
0.65
0
0
awesome-app
287
2023-10-30T23:02:50.162799
MIT
false
72a58e36aee2c726c02f6117091ace7b
from numpy._core.defchararray import (\n add,\n array,\n asarray,\n capitalize,\n center,\n chararray,\n compare_chararrays,\n count,\n decode,\n encode,\n endswith,\n equal,\n expandtabs,\n find,\n greater,\n greater_equal,\n index,\n isalnum,\n isalpha,\n isdecimal,\n isdigit,\n islower,\n isnumeric,\n isspace,\n istitle,\n isupper,\n join,\n less,\n less_equal,\n ljust,\n lower,\n lstrip,\n mod,\n multiply,\n not_equal,\n partition,\n replace,\n rfind,\n rindex,\n rjust,\n rpartition,\n rsplit,\n rstrip,\n split,\n splitlines,\n startswith,\n str_len,\n strip,\n swapcase,\n title,\n translate,\n upper,\n zfill,\n)\n\n__all__ = [\n "equal",\n "not_equal",\n "greater_equal",\n "less_equal",\n "greater",\n "less",\n "str_len",\n "add",\n "multiply",\n "mod",\n "capitalize",\n "center",\n "count",\n "decode",\n "encode",\n "endswith",\n "expandtabs",\n "find",\n "index",\n "isalnum",\n "isalpha",\n "isdigit",\n "islower",\n "isspace",\n "istitle",\n "isupper",\n "join",\n "ljust",\n "lower",\n "lstrip",\n "partition",\n "replace",\n "rfind",\n "rindex",\n "rjust",\n "rpartition",\n "rsplit",\n "rstrip",\n "split",\n "splitlines",\n "startswith",\n "strip",\n "swapcase",\n "title",\n "translate",\n "upper",\n "zfill",\n "isnumeric",\n "isdecimal",\n "array",\n "asarray",\n "compare_chararrays",\n "chararray",\n]\n
.venv\Lib\site-packages\numpy\char\__init__.pyi
__init__.pyi
Other
1,651
0.85
0
0
awesome-app
945
2024-09-07T09:16:34.751690
Apache-2.0
false
e6eea76006cc6fde7852f7753df26376
\n\n
.venv\Lib\site-packages\numpy\char\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
283
0.7
0
0
python-kit
930
2024-06-05T07:52:01.089875
Apache-2.0
false
19d3efc84e52ecc4c5c615a93cf3c66a
def __getattr__(attr_name):\n from numpy._core import arrayprint\n\n from ._utils import _raise_warning\n ret = getattr(arrayprint, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.arrayprint' has no attribute {attr_name}")\n _raise_warning(attr_name, "arrayprint")\n return ret\n
.venv\Lib\site-packages\numpy\core\arrayprint.py
arrayprint.py
Python
349
0.85
0.2
0
node-utils
871
2024-05-31T12:17:20.335495
MIT
false
2ed0ce347f7311246c5c54042c617be9
def __getattr__(attr_name):\n from numpy._core import defchararray\n\n from ._utils import _raise_warning\n ret = getattr(defchararray, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.defchararray' has no attribute {attr_name}")\n _raise_warning(attr_name, "defchararray")\n return ret\n
.venv\Lib\site-packages\numpy\core\defchararray.py
defchararray.py
Python
357
0.85
0.2
0
awesome-app
713
2023-10-23T02:06:31.582284
BSD-3-Clause
false
fb768d8d175db8aa18acc9ba6db1ccee
def __getattr__(attr_name):\n from numpy._core import einsumfunc\n\n from ._utils import _raise_warning\n ret = getattr(einsumfunc, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.einsumfunc' has no attribute {attr_name}")\n _raise_warning(attr_name, "einsumfunc")\n return ret\n
.venv\Lib\site-packages\numpy\core\einsumfunc.py
einsumfunc.py
Python
349
0.85
0.2
0
awesome-app
227
2023-07-29T11:05:12.463783
Apache-2.0
false
09a475eeb6f60b053a546e5e91f6d244
def __getattr__(attr_name):\n from numpy._core import fromnumeric\n\n from ._utils import _raise_warning\n ret = getattr(fromnumeric, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.fromnumeric' has no attribute {attr_name}")\n _raise_warning(attr_name, "fromnumeric")\n return ret\n
.venv\Lib\site-packages\numpy\core\fromnumeric.py
fromnumeric.py
Python
353
0.85
0.2
0
vue-tools
20
2025-06-26T21:27:01.200526
BSD-3-Clause
false
68452de7419997a4a62b1d1b1ecf9c8c
def __getattr__(attr_name):\n from numpy._core import function_base\n\n from ._utils import _raise_warning\n ret = getattr(function_base, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.function_base' has no attribute {attr_name}")\n _raise_warning(attr_name, "function_base")\n return ret\n
.venv\Lib\site-packages\numpy\core\function_base.py
function_base.py
Python
361
0.85
0.2
0
python-kit
346
2025-02-10T03:35:06.826807
GPL-3.0
false
87c9b13467e92d46a525b8d619f525f5
def __getattr__(attr_name):\n from numpy._core import getlimits\n\n from ._utils import _raise_warning\n ret = getattr(getlimits, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.getlimits' has no attribute {attr_name}")\n _raise_warning(attr_name, "getlimits")\n return ret\n
.venv\Lib\site-packages\numpy\core\getlimits.py
getlimits.py
Python
345
0.85
0.2
0
react-lib
169
2025-04-25T03:16:09.546017
GPL-3.0
false
870fcf95b5d035188339d2dbf7a7f659
from numpy._core import multiarray\n\n# these must import without warning or error from numpy.core.multiarray to\n# support old pickle files\nfor item in ["_reconstruct", "scalar"]:\n globals()[item] = getattr(multiarray, item)\n\n# Pybind11 (in versions <= 2.11.1) imports _ARRAY_API from the multiarray\n# submodule as a part of NumPy initialization, therefore it must be importable\n# without a warning.\n_ARRAY_API = multiarray._ARRAY_API\n\ndef __getattr__(attr_name):\n from numpy._core import multiarray\n\n from ._utils import _raise_warning\n ret = getattr(multiarray, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.multiarray' has no attribute {attr_name}")\n _raise_warning(attr_name, "multiarray")\n return ret\n\n\ndel multiarray\n
.venv\Lib\site-packages\numpy\core\multiarray.py
multiarray.py
Python
818
0.95
0.12
0.263158
python-kit
131
2024-02-06T15:20:25.486474
BSD-3-Clause
false
8574144c1bf083e986a5d4e3f756de2f
def __getattr__(attr_name):\n from numpy._core import numeric\n\n from ._utils import _raise_warning\n\n sentinel = object()\n ret = getattr(numeric, attr_name, sentinel)\n if ret is sentinel:\n raise AttributeError(\n f"module 'numpy.core.numeric' has no attribute {attr_name}")\n _raise_warning(attr_name, "numeric")\n return ret\n
.venv\Lib\site-packages\numpy\core\numeric.py
numeric.py
Python
372
0.85
0.166667
0
awesome-app
403
2024-04-20T08:56:00.303231
MIT
false
0ff26f202237d5c1a02397db5d244eb1
def __getattr__(attr_name):\n from numpy._core import numerictypes\n\n from ._utils import _raise_warning\n ret = getattr(numerictypes, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.numerictypes' has no attribute {attr_name}")\n _raise_warning(attr_name, "numerictypes")\n return ret\n
.venv\Lib\site-packages\numpy\core\numerictypes.py
numerictypes.py
Python
357
0.85
0.2
0
node-utils
126
2024-01-27T15:57:08.518960
MIT
false
dddd3874b7458b00c3c3be9874abb49e
def __getattr__(attr_name):\n from numpy._core import overrides\n\n from ._utils import _raise_warning\n ret = getattr(overrides, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.overrides' has no attribute {attr_name}")\n _raise_warning(attr_name, "overrides")\n return ret\n
.venv\Lib\site-packages\numpy\core\overrides.py
overrides.py
Python
345
0.85
0.2
0
python-kit
176
2024-03-30T15:57:32.608321
MIT
false
24b265a7f31bf91531e0dfc76ccdd568
# NOTE: At runtime, this submodule dynamically re-exports any `numpy._core.overrides`\n# member, and issues a `DeprecationWarning` when accessed. But since there is no\n# `__dir__` or `__all__` present, these annotations would be unverifiable. Because\n# this module is also deprecated in favor of `numpy._core`, and therefore not part of\n# the public API, we omit the "re-exports", which in practice would require literal\n# duplication of the stubs in order for the `@deprecated` decorator to be understood\n# by type-checkers.\n
.venv\Lib\site-packages\numpy\core\overrides.pyi
overrides.pyi
Other
532
0.95
0.142857
1
awesome-app
875
2023-10-28T16:13:49.509608
BSD-3-Clause
false
e0b8564a78566c928ca831d4d83edf88
def __getattr__(attr_name):\n from numpy._core import records\n\n from ._utils import _raise_warning\n ret = getattr(records, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.records' has no attribute {attr_name}")\n _raise_warning(attr_name, "records")\n return ret\n
.venv\Lib\site-packages\numpy\core\records.py
records.py
Python
337
0.85
0.2
0
python-kit
898
2023-09-03T21:49:43.687325
Apache-2.0
false
ace2cf1d8468fe30a85b8a61714ced08
def __getattr__(attr_name):\n from numpy._core import shape_base\n\n from ._utils import _raise_warning\n ret = getattr(shape_base, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.shape_base' has no attribute {attr_name}")\n _raise_warning(attr_name, "shape_base")\n return ret\n
.venv\Lib\site-packages\numpy\core\shape_base.py
shape_base.py
Python
349
0.85
0.2
0
react-lib
230
2024-12-23T10:27:40.434694
MIT
false
a697a89d9cf265f8dc8537494543928a
def __getattr__(attr_name):\n from numpy._core import umath\n\n from ._utils import _raise_warning\n ret = getattr(umath, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core.umath' has no attribute {attr_name}")\n _raise_warning(attr_name, "umath")\n return ret\n
.venv\Lib\site-packages\numpy\core\umath.py
umath.py
Python
329
0.85
0.2
0
vue-tools
981
2025-04-09T18:45:11.806495
BSD-3-Clause
false
4fae0a94eb4e91c1cf250c8761fcac5b
def __getattr__(attr_name):\n from numpy._core import _dtype\n\n from ._utils import _raise_warning\n ret = getattr(_dtype, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core._dtype' has no attribute {attr_name}")\n _raise_warning(attr_name, "_dtype")\n return ret\n
.venv\Lib\site-packages\numpy\core\_dtype.py
_dtype.py
Python
333
0.85
0.2
0
awesome-app
636
2023-11-18T16:09:51.884284
Apache-2.0
false
39636ceec778138dcad35b043b589765
def __getattr__(attr_name):\n from numpy._core import _dtype_ctypes\n\n from ._utils import _raise_warning\n ret = getattr(_dtype_ctypes, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core._dtype_ctypes' has no attribute {attr_name}")\n _raise_warning(attr_name, "_dtype_ctypes")\n return ret\n
.venv\Lib\site-packages\numpy\core\_dtype_ctypes.py
_dtype_ctypes.py
Python
361
0.85
0.2
0
awesome-app
192
2023-10-04T18:29:09.992254
Apache-2.0
false
07c99d7b885b92d481e883ae2afa10f5
from numpy._core import _internal\n\n\n# Build a new array from the information in a pickle.\n# Note that the name numpy.core._internal._reconstruct is embedded in\n# pickles of ndarrays made with NumPy before release 1.0\n# so don't remove the name here, or you'll\n# break backward compatibility.\ndef _reconstruct(subtype, shape, dtype):\n from numpy import ndarray\n return ndarray.__new__(subtype, shape, dtype)\n\n\n# Pybind11 (in versions <= 2.11.1) imports _dtype_from_pep3118 from the\n# _internal submodule, therefore it must be importable without a warning.\n_dtype_from_pep3118 = _internal._dtype_from_pep3118\n\ndef __getattr__(attr_name):\n from numpy._core import _internal\n\n from ._utils import _raise_warning\n ret = getattr(_internal, attr_name, None)\n if ret is None:\n raise AttributeError(\n f"module 'numpy.core._internal' has no attribute {attr_name}")\n _raise_warning(attr_name, "_internal")\n return ret\n
.venv\Lib\site-packages\numpy\core\_internal.py
_internal.py
Python
976
0.95
0.111111
0.333333
python-kit
12
2025-03-08T09:22:52.512610
BSD-3-Clause
false
8309c18655daaa9bfc85dc3d77cf59f6
from numpy import ufunc\nfrom numpy._core import _multiarray_umath\n\nfor item in _multiarray_umath.__dir__():\n # ufuncs appear in pickles with a path in numpy.core._multiarray_umath\n # and so must import from this namespace without warning or error\n attr = getattr(_multiarray_umath, item)\n if isinstance(attr, ufunc):\n globals()[item] = attr\n\n\ndef __getattr__(attr_name):\n from numpy._core import _multiarray_umath\n\n from ._utils import _raise_warning\n\n if attr_name in {"_ARRAY_API", "_UFUNC_API"}:\n import sys\n import textwrap\n import traceback\n\n from numpy.version import short_version\n\n msg = textwrap.dedent(f"""\n A module that was compiled using NumPy 1.x cannot be run in\n NumPy {short_version} as it may crash. To support both 1.x and 2.x\n versions of NumPy, modules must be compiled with NumPy 2.0.\n Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.\n\n If you are a user of the module, the easiest solution will be to\n downgrade to 'numpy<2' or try to upgrade the affected module.\n We expect that some modules will need time to support NumPy 2.\n\n """)\n tb_msg = "Traceback (most recent call last):"\n for line in traceback.format_stack()[:-1]:\n if "frozen importlib" in line:\n continue\n tb_msg += line\n\n # Also print the message (with traceback). This is because old versions\n # of NumPy unfortunately set up the import to replace (and hide) the\n # error. The traceback shouldn't be needed, but e.g. pytest plugins\n # seem to swallow it and we should be failing anyway...\n sys.stderr.write(msg + tb_msg)\n raise ImportError(msg)\n\n ret = getattr(_multiarray_umath, attr_name, None)\n if ret is None:\n raise AttributeError(\n "module 'numpy.core._multiarray_umath' has no attribute "\n f"{attr_name}")\n _raise_warning(attr_name, "_multiarray_umath")\n return ret\n\n\ndel _multiarray_umath, ufunc\n
.venv\Lib\site-packages\numpy\core\_multiarray_umath.py
_multiarray_umath.py
Python
2,155
0.95
0.140351
0.136364
node-utils
447
2024-03-18T16:03:46.720589
MIT
false
ef0ae05a51bd3c6a427989acc5a0dd89
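The `_multiarray_umath.py` shim above copies every ufunc from the relocated module into its own namespace so that old pickles, which reference paths under `numpy.core._multiarray_umath`, keep resolving. A minimal standalone sketch of that selective re-export pattern (all names here are hypothetical, not NumPy's):

```python
import types

# Stand-in for the relocated module: one public callable, one non-callable.
_new_home = types.SimpleNamespace(total=sum, _private=object())

# Copy only callables into this namespace, mirroring how the shim copies
# only ufunc instances (isinstance(attr, ufunc)) and leaves the rest to
# the warning-emitting __getattr__ fallback.
for _name in vars(_new_home):
    _obj = getattr(_new_home, _name)
    if callable(_obj):
        globals()[_name] = _obj
```

After this loop, `total` resolves directly (and silently) in the shim's namespace, while any other attribute still falls through to the module-level `__getattr__`.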
import warnings\n\n\ndef _raise_warning(attr: str, submodule: str | None = None) -> None:\n new_module = "numpy._core"\n old_module = "numpy.core"\n if submodule is not None:\n new_module = f"{new_module}.{submodule}"\n old_module = f"{old_module}.{submodule}"\n warnings.warn(\n f"{old_module} is deprecated and has been renamed to {new_module}. "\n "The numpy._core namespace contains private NumPy internals and its "\n "use is discouraged, as NumPy internals can change without warning in "\n "any release. In practice, most real-world usage of numpy.core is to "\n "access functionality in the public NumPy API. If that is the case, "\n "use the public NumPy API. If not, you are using NumPy internals. "\n "If you would still like to access an internal attribute, "\n f"use {new_module}.{attr}.",\n DeprecationWarning,\n stacklevel=3\n )\n
.venv\Lib\site-packages\numpy\core\_utils.py
_utils.py
Python
944
0.85
0.095238
0
react-lib
405
2023-09-26T14:45:41.740310
MIT
false
92f84ad62c0480e7cd7d6b01376344c4
"""\nThe `numpy.core` submodule exists solely for backward compatibility\npurposes. The original `core` was renamed to `_core` and made private.\n`numpy.core` will be removed in the future.\n"""\nfrom numpy import _core\n\nfrom ._utils import _raise_warning\n\n\n# We used to use `np.core._ufunc_reconstruct` to unpickle.\n# This is unnecessary, but old pickles saved before 1.20 will be using it,\n# and there is no reason to break loading them.\ndef _ufunc_reconstruct(module, name):\n # The `fromlist` kwarg is required to ensure that `mod` points to the\n # inner-most module rather than the parent package when module name is\n # nested. This makes it possible to pickle non-toplevel ufuncs such as\n # scipy.special.expit for instance.\n mod = __import__(module, fromlist=[name])\n return getattr(mod, name)\n\n\n# force lazy-loading of submodules to ensure a warning is printed\n\n__all__ = ["arrayprint", "defchararray", "_dtype_ctypes", "_dtype", # noqa: F822\n "einsumfunc", "fromnumeric", "function_base", "getlimits",\n "_internal", "multiarray", "_multiarray_umath", "numeric",\n "numerictypes", "overrides", "records", "shape_base", "umath"]\n\ndef __getattr__(attr_name):\n attr = getattr(_core, attr_name)\n _raise_warning(attr_name)\n return attr\n
.venv\Lib\site-packages\numpy\core\__init__.py
__init__.py
Python
1,323
0.95
0.121212
0.307692
awesome-app
854
2024-08-21T17:48:11.101401
MIT
false
28e92f4677259bbd143ff186d803ebd2
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\arrayprint.cpython-313.pyc
arrayprint.cpython-313.pyc
Other
629
0.7
0
0
node-utils
684
2024-09-22T09:56:15.709213
GPL-3.0
false
38cb34a42bc62c978e73f989890c95c3
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\defchararray.cpython-313.pyc
defchararray.cpython-313.pyc
Other
635
0.7
0
0
vue-tools
665
2024-09-10T05:32:39.367833
GPL-3.0
false
b4a1c50d2884ab01273438748b486a2e
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\einsumfunc.cpython-313.pyc
einsumfunc.cpython-313.pyc
Other
629
0.7
0
0
awesome-app
630
2024-03-24T02:52:32.226019
GPL-3.0
false
ae5761896c2403df0e315adc20a0f900
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\fromnumeric.cpython-313.pyc
fromnumeric.cpython-313.pyc
Other
632
0.7
0
0
node-utils
323
2023-11-09T05:59:43.419708
Apache-2.0
false
619309fd14e6b337def3854602a9d5be
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\function_base.cpython-313.pyc
function_base.cpython-313.pyc
Other
638
0.85
0
0
python-kit
34
2025-06-21T13:52:27.460253
BSD-3-Clause
false
79fceb67764bcf8e56c8fb8660d6f4de
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\getlimits.cpython-313.pyc
getlimits.cpython-313.pyc
Other
626
0.7
0
0
awesome-app
687
2024-10-24T00:36:27.559168
MIT
false
9adda72d0d3d750463a1d5452ea50d05
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\multiarray.cpython-313.pyc
multiarray.cpython-313.pyc
Other
846
0.7
0
0
awesome-app
118
2023-08-10T00:09:00.816297
BSD-3-Clause
false
c8d8fde722d494667429db0f5058d75c
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\numeric.cpython-313.pyc
numeric.cpython-313.pyc
Other
668
0.8
0
0
awesome-app
694
2024-03-20T04:44:31.469642
BSD-3-Clause
false
c5726c2bfaa82f07275a717936478092
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\numerictypes.cpython-313.pyc
numerictypes.cpython-313.pyc
Other
635
0.7
0
0
awesome-app
964
2025-06-28T07:40:53.806820
Apache-2.0
false
6e89475d441fbc2e22807af056fd5d95
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\overrides.cpython-313.pyc
overrides.cpython-313.pyc
Other
626
0.7
0
0
awesome-app
742
2025-01-01T11:59:43.971785
BSD-3-Clause
false
0db17cfd6718edc0d30b9d32392fab58
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\records.cpython-313.pyc
records.cpython-313.pyc
Other
620
0.8
0
0
react-lib
640
2023-08-31T10:15:10.538440
BSD-3-Clause
false
2055b813421905b9025f7711cacd7e92
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\shape_base.cpython-313.pyc
shape_base.cpython-313.pyc
Other
629
0.7
0
0
vue-tools
401
2024-04-28T11:44:29.758017
BSD-3-Clause
false
3be9dceecb026676b37666967f809be2
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\umath.cpython-313.pyc
umath.cpython-313.pyc
Other
614
0.7
0
0
awesome-app
59
2025-04-28T02:25:12.002717
MIT
false
2a0df3a9b560f85223e229e30d14f859
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\_dtype.cpython-313.pyc
_dtype.cpython-313.pyc
Other
617
0.7
0
0.142857
vue-tools
887
2023-11-24T19:37:16.021845
GPL-3.0
false
8fdb0cf79c613459cfccd42e69288060
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\_dtype_ctypes.cpython-313.pyc
_dtype_ctypes.cpython-313.pyc
Other
638
0.7
0
0
awesome-app
50
2025-03-30T08:26:22.871123
GPL-3.0
false
2f7bb151a30e634fea7d62098c2c9d33
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\_internal.cpython-313.pyc
_internal.cpython-313.pyc
Other
953
0.7
0
0
awesome-app
525
2025-05-15T19:55:29.406039
MIT
false
b54d63a8b7ff62412159aaaa61effe73
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\_multiarray_umath.cpython-313.pyc
_multiarray_umath.cpython-313.pyc
Other
2,133
0.95
0.028571
0
python-kit
778
2024-10-17T13:15:00.540868
BSD-3-Clause
false
84ac678d8de5aaa75bbf3d2c0d7aea11
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\_utils.cpython-313.pyc
_utils.cpython-313.pyc
Other
1,175
0.85
0
0
node-utils
668
2024-06-23T18:11:39.283070
GPL-3.0
false
629fb27efff1223584a4671a0de3a5b2
\n\n
.venv\Lib\site-packages\numpy\core\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,147
0.85
0.052632
0
node-utils
981
2024-12-27T09:42:37.617527
MIT
false
7319b02c362d1470a397cd8040d5565d
"""\n============================\n``ctypes`` Utility Functions\n============================\n\nSee Also\n--------\nload_library : Load a C library.\nndpointer : Array restype/argtype with verification.\nas_ctypes : Create a ctypes array from an ndarray.\nas_array : Create an ndarray from a ctypes array.\n\nReferences\n----------\n.. [1] "SciPy Cookbook: ctypes", https://scipy-cookbook.readthedocs.io/items/Ctypes.html\n\nExamples\n--------\nLoad the C library:\n\n>>> _lib = np.ctypeslib.load_library('libmystuff', '.') #doctest: +SKIP\n\nOur result type, an ndarray that must be of type double, be 1-dimensional\nand is C-contiguous in memory:\n\n>>> array_1d_double = np.ctypeslib.ndpointer(\n... dtype=np.double,\n... ndim=1, flags='CONTIGUOUS') #doctest: +SKIP\n\nOur C-function typically takes an array and updates its values\nin-place. For example::\n\n void foo_func(double* x, int length)\n {\n int i;\n for (i = 0; i < length; i++) {\n x[i] = i*i;\n }\n }\n\nWe wrap it using:\n\n>>> _lib.foo_func.restype = None #doctest: +SKIP\n>>> _lib.foo_func.argtypes = [array_1d_double, c_int] #doctest: +SKIP\n\nThen, we're ready to call ``foo_func``:\n\n>>> out = np.empty(15, dtype=np.double)\n>>> _lib.foo_func(out, len(out)) #doctest: +SKIP\n\n"""\n__all__ = ['load_library', 'ndpointer', 'c_intp', 'as_ctypes', 'as_array',\n 'as_ctypes_type']\n\nimport os\n\nimport numpy as np\nimport numpy._core.multiarray as mu\nfrom numpy._utils import set_module\n\ntry:\n import ctypes\nexcept ImportError:\n ctypes = None\n\nif ctypes is None:\n @set_module("numpy.ctypeslib")\n def _dummy(*args, **kwds):\n """\n Dummy object that raises an ImportError if ctypes is not available.\n\n Raises\n ------\n ImportError\n If ctypes is not available.\n\n """\n raise ImportError("ctypes is not available.")\n load_library = _dummy\n as_ctypes = _dummy\n as_ctypes_type = _dummy\n as_array = _dummy\n ndpointer = _dummy\n from numpy import intp as c_intp\n _ndptr_base = object\nelse:\n import 
numpy._core._internal as nic\n c_intp = nic._getintp_ctype()\n del nic\n _ndptr_base = ctypes.c_void_p\n\n # Adapted from Albert Strasheim\n @set_module("numpy.ctypeslib")\n def load_library(libname, loader_path):\n """\n It is possible to load a library using\n\n >>> lib = ctypes.cdll[<full_path_name>] # doctest: +SKIP\n\n But there are cross-platform considerations, such as library file extensions,\n plus the fact Windows will just load the first library it finds with that name.\n NumPy supplies the load_library function as a convenience.\n\n .. versionchanged:: 1.20.0\n Allow libname and loader_path to take any\n :term:`python:path-like object`.\n\n Parameters\n ----------\n libname : path-like\n Name of the library, which can have 'lib' as a prefix,\n but without an extension.\n loader_path : path-like\n Where the library can be found.\n\n Returns\n -------\n ctypes.cdll[libpath] : library object\n A ctypes library object\n\n Raises\n ------\n OSError\n If there is no library with the expected extension, or the\n library is defective and cannot be loaded.\n """\n # Convert path-like objects into strings\n libname = os.fsdecode(libname)\n loader_path = os.fsdecode(loader_path)\n\n ext = os.path.splitext(libname)[1]\n if not ext:\n import sys\n import sysconfig\n # Try to load library with platform-specific name, otherwise\n # default to libname.[so|dll|dylib]. 
Sometimes, these files are\n # built erroneously on non-linux platforms.\n base_ext = ".so"\n if sys.platform.startswith("darwin"):\n base_ext = ".dylib"\n elif sys.platform.startswith("win"):\n base_ext = ".dll"\n libname_ext = [libname + base_ext]\n so_ext = sysconfig.get_config_var("EXT_SUFFIX")\n if not so_ext == base_ext:\n libname_ext.insert(0, libname + so_ext)\n else:\n libname_ext = [libname]\n\n loader_path = os.path.abspath(loader_path)\n if not os.path.isdir(loader_path):\n libdir = os.path.dirname(loader_path)\n else:\n libdir = loader_path\n\n for ln in libname_ext:\n libpath = os.path.join(libdir, ln)\n if os.path.exists(libpath):\n try:\n return ctypes.cdll[libpath]\n except OSError:\n # defective lib file\n raise\n # if no successful return in the libname_ext loop:\n raise OSError("no file with expected extension")\n\n\ndef _num_fromflags(flaglist):\n num = 0\n for val in flaglist:\n num += mu._flagdict[val]\n return num\n\n\n_flagnames = ['C_CONTIGUOUS', 'F_CONTIGUOUS', 'ALIGNED', 'WRITEABLE',\n 'OWNDATA', 'WRITEBACKIFCOPY']\ndef _flags_fromnum(num):\n res = []\n for key in _flagnames:\n value = mu._flagdict[key]\n if (num & value):\n res.append(key)\n return res\n\n\nclass _ndptr(_ndptr_base):\n @classmethod\n def from_param(cls, obj):\n if not isinstance(obj, np.ndarray):\n raise TypeError("argument must be an ndarray")\n if cls._dtype_ is not None \\n and obj.dtype != cls._dtype_:\n raise TypeError(f"array must have data type {cls._dtype_}")\n if cls._ndim_ is not None \\n and obj.ndim != cls._ndim_:\n raise TypeError("array must have %d dimension(s)" % cls._ndim_)\n if cls._shape_ is not None \\n and obj.shape != cls._shape_:\n raise TypeError(f"array must have shape {str(cls._shape_)}")\n if cls._flags_ is not None \\n and ((obj.flags.num & cls._flags_) != cls._flags_):\n raise TypeError(f"array must have flags {_flags_fromnum(cls._flags_)}")\n return obj.ctypes\n\n\nclass _concrete_ndptr(_ndptr):\n """\n Like _ndptr, but with `_shape_` and 
`_dtype_` specified.\n\n Notably, this means the pointer has enough information to reconstruct\n the array, which is not generally true.\n """\n def _check_retval_(self):\n """\n This method is called when this class is used as the .restype\n attribute for a shared-library function, to automatically wrap the\n pointer into an array.\n """\n return self.contents\n\n @property\n def contents(self):\n """\n Get an ndarray viewing the data pointed to by this pointer.\n\n This mirrors the `contents` attribute of a normal ctypes pointer\n """\n full_dtype = np.dtype((self._dtype_, self._shape_))\n full_ctype = ctypes.c_char * full_dtype.itemsize\n buffer = ctypes.cast(self, ctypes.POINTER(full_ctype)).contents\n return np.frombuffer(buffer, dtype=full_dtype).squeeze(axis=0)\n\n\n# Factory for an array-checking class with from_param defined for\n# use with ctypes argtypes mechanism\n_pointer_type_cache = {}\n\n@set_module("numpy.ctypeslib")\ndef ndpointer(dtype=None, ndim=None, shape=None, flags=None):\n """\n Array-checking restype/argtypes.\n\n An ndpointer instance is used to describe an ndarray in restypes\n and argtypes specifications. This approach is more flexible than\n using, for example, ``POINTER(c_double)``, since several restrictions\n can be specified, which are verified upon calling the ctypes function.\n These include data type, number of dimensions, shape and flags. 
If a\n given array does not satisfy the specified restrictions,\n a ``TypeError`` is raised.\n\n Parameters\n ----------\n dtype : data-type, optional\n Array data-type.\n ndim : int, optional\n Number of array dimensions.\n shape : tuple of ints, optional\n Array shape.\n flags : str or tuple of str\n Array flags; may be one or more of:\n\n - C_CONTIGUOUS / C / CONTIGUOUS\n - F_CONTIGUOUS / F / FORTRAN\n - OWNDATA / O\n - WRITEABLE / W\n - ALIGNED / A\n - WRITEBACKIFCOPY / X\n\n Returns\n -------\n klass : ndpointer type object\n A type object, which is an ``_ndtpr`` instance containing\n dtype, ndim, shape and flags information.\n\n Raises\n ------\n TypeError\n If a given array does not satisfy the specified restrictions.\n\n Examples\n --------\n >>> clib.somefunc.argtypes = [np.ctypeslib.ndpointer(dtype=np.float64,\n ... ndim=1,\n ... flags='C_CONTIGUOUS')]\n ... #doctest: +SKIP\n >>> clib.somefunc(np.array([1, 2, 3], dtype=np.float64))\n ... #doctest: +SKIP\n\n """\n\n # normalize dtype to dtype | None\n if dtype is not None:\n dtype = np.dtype(dtype)\n\n # normalize flags to int | None\n num = None\n if flags is not None:\n if isinstance(flags, str):\n flags = flags.split(',')\n elif isinstance(flags, (int, np.integer)):\n num = flags\n flags = _flags_fromnum(num)\n elif isinstance(flags, mu.flagsobj):\n num = flags.num\n flags = _flags_fromnum(num)\n if num is None:\n try:\n flags = [x.strip().upper() for x in flags]\n except Exception as e:\n raise TypeError("invalid flags specification") from e\n num = _num_fromflags(flags)\n\n # normalize shape to tuple | None\n if shape is not None:\n try:\n shape = tuple(shape)\n except TypeError:\n # single integer -> 1-tuple\n shape = (shape,)\n\n cache_key = (dtype, ndim, shape, num)\n\n try:\n return _pointer_type_cache[cache_key]\n except KeyError:\n pass\n\n # produce a name for the new type\n if dtype is None:\n name = 'any'\n elif dtype.names is not None:\n name = str(id(dtype))\n else:\n name = dtype.str\n if 
ndim is not None:\n name += "_%dd" % ndim\n if shape is not None:\n name += "_" + "x".join(str(x) for x in shape)\n if flags is not None:\n name += "_" + "_".join(flags)\n\n if dtype is not None and shape is not None:\n base = _concrete_ndptr\n else:\n base = _ndptr\n\n klass = type(f"ndpointer_{name}", (base,),\n {"_dtype_": dtype,\n "_shape_": shape,\n "_ndim_": ndim,\n "_flags_": num})\n _pointer_type_cache[cache_key] = klass\n return klass\n\n\nif ctypes is not None:\n def _ctype_ndarray(element_type, shape):\n """ Create an ndarray of the given element type and shape """\n for dim in shape[::-1]:\n element_type = dim * element_type\n # prevent the type name include np.ctypeslib\n element_type.__module__ = None\n return element_type\n\n def _get_scalar_type_map():\n """\n Return a dictionary mapping native endian scalar dtype to ctypes types\n """\n ct = ctypes\n simple_types = [\n ct.c_byte, ct.c_short, ct.c_int, ct.c_long, ct.c_longlong,\n ct.c_ubyte, ct.c_ushort, ct.c_uint, ct.c_ulong, ct.c_ulonglong,\n ct.c_float, ct.c_double,\n ct.c_bool,\n ]\n return {np.dtype(ctype): ctype for ctype in simple_types}\n\n _scalar_type_map = _get_scalar_type_map()\n\n def _ctype_from_dtype_scalar(dtype):\n # swapping twice ensure that `=` is promoted to <, >, or |\n dtype_with_endian = dtype.newbyteorder('S').newbyteorder('S')\n dtype_native = dtype.newbyteorder('=')\n try:\n ctype = _scalar_type_map[dtype_native]\n except KeyError as e:\n raise NotImplementedError(\n f"Converting {dtype!r} to a ctypes type"\n ) from None\n\n if dtype_with_endian.byteorder == '>':\n ctype = ctype.__ctype_be__\n elif dtype_with_endian.byteorder == '<':\n ctype = ctype.__ctype_le__\n\n return ctype\n\n def _ctype_from_dtype_subarray(dtype):\n element_dtype, shape = dtype.subdtype\n ctype = _ctype_from_dtype(element_dtype)\n return _ctype_ndarray(ctype, shape)\n\n def _ctype_from_dtype_structured(dtype):\n # extract offsets of each field\n field_data = []\n for name in dtype.names:\n 
field_dtype, offset = dtype.fields[name][:2]\n field_data.append((offset, name, _ctype_from_dtype(field_dtype)))\n\n # ctypes doesn't care about field order\n field_data = sorted(field_data, key=lambda f: f[0])\n\n if len(field_data) > 1 and all(offset == 0 for offset, _, _ in field_data):\n # union, if multiple fields all at address 0\n size = 0\n _fields_ = []\n for offset, name, ctype in field_data:\n _fields_.append((name, ctype))\n size = max(size, ctypes.sizeof(ctype))\n\n # pad to the right size\n if dtype.itemsize != size:\n _fields_.append(('', ctypes.c_char * dtype.itemsize))\n\n # we inserted manual padding, so always `_pack_`\n return type('union', (ctypes.Union,), {\n '_fields_': _fields_,\n '_pack_': 1,\n '__module__': None,\n })\n else:\n last_offset = 0\n _fields_ = []\n for offset, name, ctype in field_data:\n padding = offset - last_offset\n if padding < 0:\n raise NotImplementedError("Overlapping fields")\n if padding > 0:\n _fields_.append(('', ctypes.c_char * padding))\n\n _fields_.append((name, ctype))\n last_offset = offset + ctypes.sizeof(ctype)\n\n padding = dtype.itemsize - last_offset\n if padding > 0:\n _fields_.append(('', ctypes.c_char * padding))\n\n # we inserted manual padding, so always `_pack_`\n return type('struct', (ctypes.Structure,), {\n '_fields_': _fields_,\n '_pack_': 1,\n '__module__': None,\n })\n\n def _ctype_from_dtype(dtype):\n if dtype.fields is not None:\n return _ctype_from_dtype_structured(dtype)\n elif dtype.subdtype is not None:\n return _ctype_from_dtype_subarray(dtype)\n else:\n return _ctype_from_dtype_scalar(dtype)\n\n @set_module("numpy.ctypeslib")\n def as_ctypes_type(dtype):\n r"""\n Convert a dtype into a ctypes type.\n\n Parameters\n ----------\n dtype : dtype\n The dtype to convert\n\n Returns\n -------\n ctype\n A ctype scalar, union, array, or struct\n\n Raises\n ------\n NotImplementedError\n If the conversion is not possible\n\n Notes\n -----\n This function does not losslessly round-trip in either 
direction.\n\n ``np.dtype(as_ctypes_type(dt))`` will:\n\n - insert padding fields\n - reorder fields to be sorted by offset\n - discard field titles\n\n ``as_ctypes_type(np.dtype(ctype))`` will:\n\n - discard the class names of `ctypes.Structure`\ s and\n `ctypes.Union`\ s\n - convert single-element `ctypes.Union`\ s into single-element\n `ctypes.Structure`\ s\n - insert padding fields\n\n Examples\n --------\n Converting a simple dtype:\n\n >>> dt = np.dtype('int8')\n >>> ctype = np.ctypeslib.as_ctypes_type(dt)\n >>> ctype\n <class 'ctypes.c_byte'>\n\n Converting a structured dtype:\n\n >>> dt = np.dtype([('x', 'i4'), ('y', 'f4')])\n >>> ctype = np.ctypeslib.as_ctypes_type(dt)\n >>> ctype\n <class 'struct'>\n\n """\n return _ctype_from_dtype(np.dtype(dtype))\n\n @set_module("numpy.ctypeslib")\n def as_array(obj, shape=None):\n """\n Create a numpy array from a ctypes array or POINTER.\n\n The numpy array shares the memory with the ctypes object.\n\n The shape parameter must be given if converting from a ctypes POINTER.\n The shape parameter is ignored if converting from a ctypes array\n\n Examples\n --------\n Converting a ctypes integer array:\n\n >>> import ctypes\n >>> ctypes_array = (ctypes.c_int * 5)(0, 1, 2, 3, 4)\n >>> np_array = np.ctypeslib.as_array(ctypes_array)\n >>> np_array\n array([0, 1, 2, 3, 4], dtype=int32)\n\n Converting a ctypes POINTER:\n\n >>> import ctypes\n >>> buffer = (ctypes.c_int * 5)(0, 1, 2, 3, 4)\n >>> pointer = ctypes.cast(buffer, ctypes.POINTER(ctypes.c_int))\n >>> np_array = np.ctypeslib.as_array(pointer, (5,))\n >>> np_array\n array([0, 1, 2, 3, 4], dtype=int32)\n\n """\n if isinstance(obj, ctypes._Pointer):\n # convert pointers to an array of the desired shape\n if shape is None:\n raise TypeError(\n 'as_array() requires a shape argument when called on a '\n 'pointer')\n p_arr_type = ctypes.POINTER(_ctype_ndarray(obj._type_, shape))\n obj = ctypes.cast(obj, p_arr_type).contents\n\n return np.asarray(obj)\n\n 
@set_module("numpy.ctypeslib")\n def as_ctypes(obj):\n """\n Create and return a ctypes object from a numpy array. Actually\n anything that exposes the __array_interface__ is accepted.\n\n Examples\n --------\n Create ctypes object from inferred int ``np.array``:\n\n >>> inferred_int_array = np.array([1, 2, 3])\n >>> c_int_array = np.ctypeslib.as_ctypes(inferred_int_array)\n >>> type(c_int_array)\n <class 'c_long_Array_3'>\n >>> c_int_array[:]\n [1, 2, 3]\n\n Create ctypes object from explicit 8 bit unsigned int ``np.array`` :\n\n >>> exp_int_array = np.array([1, 2, 3], dtype=np.uint8)\n >>> c_int_array = np.ctypeslib.as_ctypes(exp_int_array)\n >>> type(c_int_array)\n <class 'c_ubyte_Array_3'>\n >>> c_int_array[:]\n [1, 2, 3]\n\n """\n ai = obj.__array_interface__\n if ai["strides"]:\n raise TypeError("strided arrays not supported")\n if ai["version"] != 3:\n raise TypeError("only __array_interface__ version 3 supported")\n addr, readonly = ai["data"]\n if readonly:\n raise TypeError("readonly arrays unsupported")\n\n # can't use `_dtype((ai["typestr"], ai["shape"]))` here, as it overflows\n # dtype.itemsize (gh-14214)\n ctype_scalar = as_ctypes_type(ai["typestr"])\n result_type = _ctype_ndarray(ctype_scalar, ai["shape"])\n result = result_type.from_address(addr)\n result.__keep = obj\n return result\n
.venv\Lib\site-packages\numpy\ctypeslib\_ctypeslib.py
_ctypeslib.py
Python
19,682
0.95
0.155887
0.050201
python-kit
94
2024-09-14T00:54:30.910419
Apache-2.0
false
474fd9106c9352913fb3b43f06790200
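The `ndpointer` factory in `_ctypeslib.py` above builds argument-checking types for ctypes `argtypes`/`restype` slots. A minimal sketch of the validation it performs, exercised directly through `from_param` without loading any shared library (the variable names here are illustrative, not from the file above):

```python
import numpy as np

# Build a checker: 1-D, float64, C-contiguous arrays only
ptr_t = np.ctypeslib.ndpointer(dtype=np.float64, ndim=1,
                               flags='C_CONTIGUOUS')

ok = np.array([1.0, 2.0, 3.0])
ptr_t.from_param(ok)  # accepted: dtype, ndim and flags all match

try:
    ptr_t.from_param(np.zeros((2, 2)))  # rejected: wrong ndim
except TypeError as exc:
    print(exc)  # array must have 1 dimension(s)
```

Note that the dynamic classes are cached, so repeated `ndpointer` calls with the same `(dtype, ndim, shape, flags)` return the identical type object.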
# NOTE: Numpy's mypy plugin is used for importing the correct\n# platform-specific `ctypes._SimpleCData[int]` sub-type\nimport ctypes\nfrom collections.abc import Iterable, Sequence\nfrom ctypes import c_int64 as _c_intp\nfrom typing import (\n Any,\n ClassVar,\n Generic,\n TypeAlias,\n TypeVar,\n overload,\n)\nfrom typing import Literal as L\n\nfrom _typeshed import StrOrBytesPath\n\nimport numpy as np\nfrom numpy import (\n byte,\n double,\n dtype,\n generic,\n intc,\n long,\n longdouble,\n longlong,\n ndarray,\n short,\n single,\n ubyte,\n uintc,\n ulong,\n ulonglong,\n ushort,\n void,\n)\nfrom numpy._core._internal import _ctypes\nfrom numpy._core.multiarray import flagsobj\nfrom numpy._typing import (\n DTypeLike,\n NDArray,\n _AnyShape,\n _ArrayLike,\n _BoolCodes,\n _ByteCodes,\n _DoubleCodes,\n _DTypeLike,\n _IntCCodes,\n _LongCodes,\n _LongDoubleCodes,\n _LongLongCodes,\n _ShapeLike,\n _ShortCodes,\n _SingleCodes,\n _UByteCodes,\n _UIntCCodes,\n _ULongCodes,\n _ULongLongCodes,\n _UShortCodes,\n _VoidDTypeLike,\n)\n\n__all__ = ["load_library", "ndpointer", "c_intp", "as_ctypes", "as_array", "as_ctypes_type"]\n\n# TODO: Add a proper `_Shape` bound once we've got variadic typevars\n_DTypeT = TypeVar("_DTypeT", bound=dtype)\n_DTypeOptionalT = TypeVar("_DTypeOptionalT", bound=dtype | None)\n_ScalarT = TypeVar("_ScalarT", bound=generic)\n\n_FlagsKind: TypeAlias = L[\n 'C_CONTIGUOUS', 'CONTIGUOUS', 'C',\n 'F_CONTIGUOUS', 'FORTRAN', 'F',\n 'ALIGNED', 'A',\n 'WRITEABLE', 'W',\n 'OWNDATA', 'O',\n 'WRITEBACKIFCOPY', 'X',\n]\n\n# TODO: Add a shape typevar once we have variadic typevars (PEP 646)\nclass _ndptr(ctypes.c_void_p, Generic[_DTypeOptionalT]):\n # In practice these 4 classvars are defined in the dynamic class\n # returned by `ndpointer`\n _dtype_: ClassVar[_DTypeOptionalT]\n _shape_: ClassVar[None]\n _ndim_: ClassVar[int | None]\n _flags_: ClassVar[list[_FlagsKind] | None]\n\n @overload\n @classmethod\n def from_param(cls: type[_ndptr[None]], obj: 
NDArray[Any]) -> _ctypes[Any]: ...\n @overload\n @classmethod\n def from_param(cls: type[_ndptr[_DTypeT]], obj: ndarray[Any, _DTypeT]) -> _ctypes[Any]: ...\n\nclass _concrete_ndptr(_ndptr[_DTypeT]):\n _dtype_: ClassVar[_DTypeT]\n _shape_: ClassVar[_AnyShape]\n @property\n def contents(self) -> ndarray[_AnyShape, _DTypeT]: ...\n\ndef load_library(libname: StrOrBytesPath, loader_path: StrOrBytesPath) -> ctypes.CDLL: ...\n\nc_intp = _c_intp\n\n@overload\ndef ndpointer(\n dtype: None = ...,\n ndim: int = ...,\n shape: _ShapeLike | None = ...,\n flags: _FlagsKind | Iterable[_FlagsKind] | int | flagsobj | None = ...,\n) -> type[_ndptr[None]]: ...\n@overload\ndef ndpointer(\n dtype: _DTypeLike[_ScalarT],\n ndim: int = ...,\n *,\n shape: _ShapeLike,\n flags: _FlagsKind | Iterable[_FlagsKind] | int | flagsobj | None = ...,\n) -> type[_concrete_ndptr[dtype[_ScalarT]]]: ...\n@overload\ndef ndpointer(\n dtype: DTypeLike,\n ndim: int = ...,\n *,\n shape: _ShapeLike,\n flags: _FlagsKind | Iterable[_FlagsKind] | int | flagsobj | None = ...,\n) -> type[_concrete_ndptr[dtype]]: ...\n@overload\ndef ndpointer(\n dtype: _DTypeLike[_ScalarT],\n ndim: int = ...,\n shape: None = ...,\n flags: _FlagsKind | Iterable[_FlagsKind] | int | flagsobj | None = ...,\n) -> type[_ndptr[dtype[_ScalarT]]]: ...\n@overload\ndef ndpointer(\n dtype: DTypeLike,\n ndim: int = ...,\n shape: None = ...,\n flags: _FlagsKind | Iterable[_FlagsKind] | int | flagsobj | None = ...,\n) -> type[_ndptr[dtype]]: ...\n\n@overload\ndef as_ctypes_type(dtype: _BoolCodes | _DTypeLike[np.bool] | type[ctypes.c_bool]) -> type[ctypes.c_bool]: ...\n@overload\ndef as_ctypes_type(dtype: _ByteCodes | _DTypeLike[byte] | type[ctypes.c_byte]) -> type[ctypes.c_byte]: ...\n@overload\ndef as_ctypes_type(dtype: _ShortCodes | _DTypeLike[short] | type[ctypes.c_short]) -> type[ctypes.c_short]: ...\n@overload\ndef as_ctypes_type(dtype: _IntCCodes | _DTypeLike[intc] | type[ctypes.c_int]) -> type[ctypes.c_int]: ...\n@overload\ndef 
as_ctypes_type(dtype: _LongCodes | _DTypeLike[long] | type[ctypes.c_long]) -> type[ctypes.c_long]: ...\n@overload\ndef as_ctypes_type(dtype: type[int]) -> type[c_intp]: ...\n@overload\ndef as_ctypes_type(dtype: _LongLongCodes | _DTypeLike[longlong] | type[ctypes.c_longlong]) -> type[ctypes.c_longlong]: ...\n@overload\ndef as_ctypes_type(dtype: _UByteCodes | _DTypeLike[ubyte] | type[ctypes.c_ubyte]) -> type[ctypes.c_ubyte]: ...\n@overload\ndef as_ctypes_type(dtype: _UShortCodes | _DTypeLike[ushort] | type[ctypes.c_ushort]) -> type[ctypes.c_ushort]: ...\n@overload\ndef as_ctypes_type(dtype: _UIntCCodes | _DTypeLike[uintc] | type[ctypes.c_uint]) -> type[ctypes.c_uint]: ...\n@overload\ndef as_ctypes_type(dtype: _ULongCodes | _DTypeLike[ulong] | type[ctypes.c_ulong]) -> type[ctypes.c_ulong]: ...\n@overload\ndef as_ctypes_type(dtype: _ULongLongCodes | _DTypeLike[ulonglong] | type[ctypes.c_ulonglong]) -> type[ctypes.c_ulonglong]: ...\n@overload\ndef as_ctypes_type(dtype: _SingleCodes | _DTypeLike[single] | type[ctypes.c_float]) -> type[ctypes.c_float]: ...\n@overload\ndef as_ctypes_type(dtype: _DoubleCodes | _DTypeLike[double] | type[float | ctypes.c_double]) -> type[ctypes.c_double]: ...\n@overload\ndef as_ctypes_type(dtype: _LongDoubleCodes | _DTypeLike[longdouble] | type[ctypes.c_longdouble]) -> type[ctypes.c_longdouble]: ...\n@overload\ndef as_ctypes_type(dtype: _VoidDTypeLike) -> type[Any]: ... # `ctypes.Union` or `ctypes.Structure`\n@overload\ndef as_ctypes_type(dtype: str) -> type[Any]: ...\n\n@overload\ndef as_array(obj: ctypes._PointerLike, shape: Sequence[int]) -> NDArray[Any]: ...\n@overload\ndef as_array(obj: _ArrayLike[_ScalarT], shape: _ShapeLike | None = ...) -> NDArray[_ScalarT]: ...\n@overload\ndef as_array(obj: object, shape: _ShapeLike | None = ...) 
-> NDArray[Any]: ...\n\n@overload\ndef as_ctypes(obj: np.bool) -> ctypes.c_bool: ...\n@overload\ndef as_ctypes(obj: byte) -> ctypes.c_byte: ...\n@overload\ndef as_ctypes(obj: short) -> ctypes.c_short: ...\n@overload\ndef as_ctypes(obj: intc) -> ctypes.c_int: ...\n@overload\ndef as_ctypes(obj: long) -> ctypes.c_long: ...\n@overload\ndef as_ctypes(obj: longlong) -> ctypes.c_longlong: ...\n@overload\ndef as_ctypes(obj: ubyte) -> ctypes.c_ubyte: ...\n@overload\ndef as_ctypes(obj: ushort) -> ctypes.c_ushort: ...\n@overload\ndef as_ctypes(obj: uintc) -> ctypes.c_uint: ...\n@overload\ndef as_ctypes(obj: ulong) -> ctypes.c_ulong: ...\n@overload\ndef as_ctypes(obj: ulonglong) -> ctypes.c_ulonglong: ...\n@overload\ndef as_ctypes(obj: single) -> ctypes.c_float: ...\n@overload\ndef as_ctypes(obj: double) -> ctypes.c_double: ...\n@overload\ndef as_ctypes(obj: longdouble) -> ctypes.c_longdouble: ...\n@overload\ndef as_ctypes(obj: void) -> Any: ... # `ctypes.Union` or `ctypes.Structure`\n@overload\ndef as_ctypes(obj: NDArray[np.bool]) -> ctypes.Array[ctypes.c_bool]: ...\n@overload\ndef as_ctypes(obj: NDArray[byte]) -> ctypes.Array[ctypes.c_byte]: ...\n@overload\ndef as_ctypes(obj: NDArray[short]) -> ctypes.Array[ctypes.c_short]: ...\n@overload\ndef as_ctypes(obj: NDArray[intc]) -> ctypes.Array[ctypes.c_int]: ...\n@overload\ndef as_ctypes(obj: NDArray[long]) -> ctypes.Array[ctypes.c_long]: ...\n@overload\ndef as_ctypes(obj: NDArray[longlong]) -> ctypes.Array[ctypes.c_longlong]: ...\n@overload\ndef as_ctypes(obj: NDArray[ubyte]) -> ctypes.Array[ctypes.c_ubyte]: ...\n@overload\ndef as_ctypes(obj: NDArray[ushort]) -> ctypes.Array[ctypes.c_ushort]: ...\n@overload\ndef as_ctypes(obj: NDArray[uintc]) -> ctypes.Array[ctypes.c_uint]: ...\n@overload\ndef as_ctypes(obj: NDArray[ulong]) -> ctypes.Array[ctypes.c_ulong]: ...\n@overload\ndef as_ctypes(obj: NDArray[ulonglong]) -> ctypes.Array[ctypes.c_ulonglong]: ...\n@overload\ndef as_ctypes(obj: NDArray[single]) -> 
ctypes.Array[ctypes.c_float]: ...\n@overload\ndef as_ctypes(obj: NDArray[double]) -> ctypes.Array[ctypes.c_double]: ...\n@overload\ndef as_ctypes(obj: NDArray[longdouble]) -> ctypes.Array[ctypes.c_longdouble]: ...\n@overload\ndef as_ctypes(obj: NDArray[void]) -> ctypes.Array[Any]: ... # `ctypes.Union` or `ctypes.Structure`\n
.venv\Lib\site-packages\numpy\ctypeslib\_ctypeslib.pyi
_ctypeslib.pyi
Other
8,329
0.95
0.257143
0.034632
vue-tools
990
2023-10-27T20:30:11.443498
MIT
false
fd9f5d4c2e563ae735e241161a423df0
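The `as_ctypes_type` overloads in the stub above map dtypes onto concrete ctypes classes. A short sketch of the scalar and structured cases:

```python
import ctypes
import numpy as np

# Scalar: a native float64 maps onto ctypes.c_double
ct = np.ctypeslib.as_ctypes_type(np.dtype('float64'))
print(ct)  # <class 'ctypes.c_double'>

# Structured: the result is a ctypes.Structure subclass whose size
# is padded out to match the dtype's itemsize
dt = np.dtype([('x', 'i4'), ('y', 'f4')])
struct_t = np.ctypeslib.as_ctypes_type(dt)
print(issubclass(struct_t, ctypes.Structure))       # True
print(ctypes.sizeof(struct_t) == dt.itemsize)       # True
```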
from ._ctypeslib import (\n __all__,\n __doc__,\n _concrete_ndptr,\n _ndptr,\n as_array,\n as_ctypes,\n as_ctypes_type,\n c_intp,\n ctypes,\n load_library,\n ndpointer,\n)\n
.venv\Lib\site-packages\numpy\ctypeslib\__init__.py
__init__.py
Python
206
0.85
0
0
vue-tools
627
2023-09-28T19:31:49.876124
BSD-3-Clause
false
f442a6d0c68309589773e12ed724a449
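`as_ctypes` and `as_array`, re-exported by the `__init__.py` above, are inverses over the same buffer. A sketch of the shared-memory behaviour, with illustrative variable names:

```python
import numpy as np

arr = np.array([1, 2, 3], dtype=np.int32)
c_arr = np.ctypeslib.as_ctypes(arr)   # ctypes array over arr's memory
back = np.ctypeslib.as_array(c_arr)   # ndarray view over the same buffer

c_arr[0] = 99                         # write through the ctypes side...
print(arr[0], back[0])                # ...both views observe it: 99 99
```

Because `as_ctypes` refuses strided and read-only arrays, this round trip only works for contiguous, writeable inputs.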
import ctypes\nfrom ctypes import c_int64 as _c_intp\n\nfrom ._ctypeslib import (\n __all__ as __all__,\n)\nfrom ._ctypeslib import (\n __doc__ as __doc__,\n)\nfrom ._ctypeslib import (\n _concrete_ndptr as _concrete_ndptr,\n)\nfrom ._ctypeslib import (\n _ndptr as _ndptr,\n)\nfrom ._ctypeslib import (\n as_array as as_array,\n)\nfrom ._ctypeslib import (\n as_ctypes as as_ctypes,\n)\nfrom ._ctypeslib import (\n as_ctypes_type as as_ctypes_type,\n)\nfrom ._ctypeslib import (\n c_intp as c_intp,\n)\nfrom ._ctypeslib import (\n load_library as load_library,\n)\nfrom ._ctypeslib import (\n ndpointer as ndpointer,\n)\n
.venv\Lib\site-packages\numpy\ctypeslib\__init__.pyi
__init__.pyi
Other
652
0.85
0
0
node-utils
169
2024-07-28T06:07:30.255773
MIT
false
c2ea4e4af140d68120a6c0afd2713be1
"""\n===================\nUniversal Functions\n===================\n\nUfuncs are, generally speaking, mathematical functions or operations that are\napplied element-by-element to the contents of an array. That is, the result\nin each output array element only depends on the value in the corresponding\ninput array (or arrays) and on no other array elements. NumPy comes with a\nlarge suite of ufuncs, and scipy extends that suite substantially. The simplest\nexample is the addition operator: ::\n\n >>> np.array([0,2,3,4]) + np.array([1,1,-1,2])\n array([1, 3, 2, 6])\n\nThe ufunc module lists all the available ufuncs in numpy. Documentation on\nthe specific ufuncs may be found in those modules. This documentation is\nintended to address the more general aspects of ufuncs common to most of\nthem. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)\nhave equivalent functions defined (e.g. add() for +)\n\nType coercion\n=============\n\nWhat happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of\ntwo different types? What is the type of the result? Typically, the result is\nthe higher of the two types. For example: ::\n\n float32 + float64 -> float64\n int8 + int32 -> int32\n int16 + float32 -> float32\n float32 + complex64 -> complex64\n\nThere are some less obvious cases generally involving mixes of types\n(e.g. uints, ints and floats) where equal bit sizes for each are not\ncapable of saving all the information in a different type of equivalent\nbit size. Some examples are int32 vs float32 or uint32 vs int32.\nGenerally, the result is the higher type of larger size than both\n(if available). So: ::\n\n int32 + float32 -> float64\n uint32 + int32 -> int64\n\nFinally, the type coercion behavior when expressions involve Python\nscalars is different than that seen for arrays. 
Since Python has a\nlimited number of types, combining a Python int with a dtype=np.int8\narray does not coerce to the higher type but instead, the type of the\narray prevails. So the rule for Python scalars combined with arrays is\nthat the result will be that of the array equivalent of the Python scalar\nif the Python scalar is of a higher 'kind' than the array (e.g., float\nvs. int), otherwise the resultant type will be that of the array.\nFor example: ::\n\n Python int + int8 -> int8\n Python float + int8 -> float64\n\nufunc methods\n=============\n\nBinary ufuncs support 4 methods.\n\n**.reduce(arr)** applies the binary operator to elements of the array in\n sequence. For example: ::\n\n >>> np.add.reduce(np.arange(10)) # adds all elements of array\n 45\n\nFor multidimensional arrays, the first dimension is reduced by default: ::\n\n >>> np.add.reduce(np.arange(10).reshape(2,5))\n array([ 5, 7, 9, 11, 13])\n\nThe axis keyword can be used to specify different axes to reduce: ::\n\n >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)\n array([10, 35])\n\n**.accumulate(arr)** applies the binary operator and generates an\nequivalently shaped array that includes the accumulated amount for each\nelement of the array. A couple examples: ::\n\n >>> np.add.accumulate(np.arange(10))\n array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45])\n >>> np.multiply.accumulate(np.arange(1,9))\n array([ 1, 2, 6, 24, 120, 720, 5040, 40320])\n\nThe behavior for multidimensional arrays is the same as for .reduce(),\nas is the use of the axis keyword.\n\n**.reduceat(arr,indices)** allows one to apply reduce to selected parts\n of an array. It is a difficult method to understand; see the\n `numpy.ufunc.reduceat` documentation for details.\n\n**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and\n arr2.
It will work on multidimensional arrays (the shape of the result is\n the concatenation of the two input shapes): ::\n\n >>> np.multiply.outer(np.arange(3),np.arange(4))\n array([[0, 0, 0, 0],\n [0, 1, 2, 3],\n [0, 2, 4, 6]])\n\nOutput arguments\n================\n\nAll ufuncs accept an optional output array. The array must be of the expected\noutput shape. Beware that if the type of the output array is of a different\n(and lower) type than the output result, the results may be silently truncated\nor otherwise corrupted in the downcast to the lower type. This usage is useful\nwhen one wants to avoid creating large temporary arrays and instead allows one\nto reuse the same array memory repeatedly (at the expense of not being able to\nuse more convenient operator notation in expressions). Note that when the\noutput argument is used, the ufunc still returns a reference to the result.\n\n >>> x = np.arange(2)\n >>> np.add(np.arange(2, dtype=float), np.arange(2, dtype=float), x,\n ... casting='unsafe')\n array([0, 2])\n >>> x\n array([0, 2])\n\nand & or as ufuncs\n==================\n\nInvariably people try to use the python 'and' and 'or' as logical operators\n(and quite understandably). But these operators do not behave as normal\noperators since Python treats these quite differently. They cannot be\noverloaded with array equivalents. Thus using 'and' or 'or' with an array\nresults in an error. There are two alternatives:\n\n 1) use the ufunc functions logical_and() and logical_or().\n 2) use the bitwise operators & and \\|. The drawback of these is that if\n the arguments to these operators are not boolean arrays, the result is\n likely incorrect. On the other hand, most usages of logical_and and\n logical_or are with boolean arrays. As long as one is careful, this is\n a convenient way to apply these operators.\n\n"""\n
.venv\Lib\site-packages\numpy\doc\ufuncs.py
ufuncs.py
Python
5,552
0.95
0.086957
0.037383
vue-tools
424
2025-05-12T09:07:14.814386
Apache-2.0
false
1a4131f2aec9e8d4051b6a52ccaf66a1
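The four ufunc methods described in the docstring above (`reduce`, `accumulate`, `reduceat`, `outer`) can be exercised in a few lines; here `reduceat` sums the segments `a[0:4]` and `a[4:8]`:

```python
import numpy as np

a = np.arange(10)
print(np.add.reduce(a))                        # 45
print(np.add.accumulate(np.arange(5)))         # [ 0  1  3  6 10]
print(np.add.reduceat(np.arange(8), [0, 4]))   # [ 6 22]
print(np.multiply.outer(np.arange(3), np.arange(4)).shape)  # (3, 4)
```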
"""\nAuxiliary functions for f2py2e.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy (BSD style) LICENSE.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nimport pprint\nimport re\nimport sys\nimport types\nfrom functools import reduce\n\nfrom . import __version__, cfuncs\nfrom .cfuncs import errmess\n\n__all__ = [\n 'applyrules', 'debugcapi', 'dictappend', 'errmess', 'gentitle',\n 'getargs2', 'getcallprotoargument', 'getcallstatement',\n 'getfortranname', 'getpymethoddef', 'getrestdoc', 'getusercode',\n 'getusercode1', 'getdimension', 'hasbody', 'hascallstatement', 'hascommon',\n 'hasexternals', 'hasinitvalue', 'hasnote', 'hasresultnote',\n 'isallocatable', 'isarray', 'isarrayofstrings',\n 'ischaracter', 'ischaracterarray', 'ischaracter_or_characterarray',\n 'iscomplex', 'iscstyledirective',\n 'iscomplexarray', 'iscomplexfunction', 'iscomplexfunction_warn',\n 'isdouble', 'isdummyroutine', 'isexternal', 'isfunction',\n 'isfunction_wrap', 'isint1', 'isint1array', 'isinteger', 'isintent_aux',\n 'isintent_c', 'isintent_callback', 'isintent_copy', 'isintent_dict',\n 'isintent_hide', 'isintent_in', 'isintent_inout', 'isintent_inplace',\n 'isintent_nothide', 'isintent_out', 'isintent_overwrite', 'islogical',\n 'islogicalfunction', 'islong_complex', 'islong_double',\n 'islong_doublefunction', 'islong_long', 'islong_longfunction',\n 'ismodule', 'ismoduleroutine', 'isoptional', 'isprivate', 'isvariable',\n 'isrequired', 'isroutine', 'isscalar', 'issigned_long_longarray',\n 'isstring', 'isstringarray', 'isstring_or_stringarray', 'isstringfunction',\n 'issubroutine', 'get_f2py_modulename', 'issubroutine_wrap', 'isthreadsafe',\n 'isunsigned', 'isunsigned_char', 'isunsigned_chararray',\n 'isunsigned_long_long', 'isunsigned_long_longarray', 'isunsigned_short',\n 'isunsigned_shortarray', 'l_and', 'l_not', 
'l_or', 'outmess', 'replace',\n 'show', 'stripcomma', 'throw_error', 'isattr_value', 'getuseblocks',\n 'process_f2cmap_dict', 'containscommon', 'containsderivedtypes'\n]\n\n\nf2py_version = __version__.version\n\n\nshow = pprint.pprint\n\noptions = {}\ndebugoptions = []\nwrapfuncs = 1\n\n\ndef outmess(t):\n if options.get('verbose', 1):\n sys.stdout.write(t)\n\n\ndef debugcapi(var):\n return 'capi' in debugoptions\n\n\ndef _ischaracter(var):\n return 'typespec' in var and var['typespec'] == 'character' and \\n not isexternal(var)\n\n\ndef _isstring(var):\n return 'typespec' in var and var['typespec'] == 'character' and \\n not isexternal(var)\n\n\ndef ischaracter_or_characterarray(var):\n return _ischaracter(var) and 'charselector' not in var\n\n\ndef ischaracter(var):\n return ischaracter_or_characterarray(var) and not isarray(var)\n\n\ndef ischaracterarray(var):\n return ischaracter_or_characterarray(var) and isarray(var)\n\n\ndef isstring_or_stringarray(var):\n return _ischaracter(var) and 'charselector' in var\n\n\ndef isstring(var):\n return isstring_or_stringarray(var) and not isarray(var)\n\n\ndef isstringarray(var):\n return isstring_or_stringarray(var) and isarray(var)\n\n\ndef isarrayofstrings(var): # obsolete?\n # leaving out '*' for now so that `character*(*) a(m)` and `character\n # a(m,*)` are treated differently. 
Luckily `character**` is illegal.\n return isstringarray(var) and var['dimension'][-1] == '(*)'\n\n\ndef isarray(var):\n return 'dimension' in var and not isexternal(var)\n\n\ndef isscalar(var):\n return not (isarray(var) or isstring(var) or isexternal(var))\n\n\ndef iscomplex(var):\n return isscalar(var) and \\n var.get('typespec') in ['complex', 'double complex']\n\n\ndef islogical(var):\n return isscalar(var) and var.get('typespec') == 'logical'\n\n\ndef isinteger(var):\n return isscalar(var) and var.get('typespec') == 'integer'\n\n\ndef isreal(var):\n return isscalar(var) and var.get('typespec') == 'real'\n\n\ndef get_kind(var):\n try:\n return var['kindselector']['*']\n except KeyError:\n try:\n return var['kindselector']['kind']\n except KeyError:\n pass\n\n\ndef isint1(var):\n return var.get('typespec') == 'integer' \\n and get_kind(var) == '1' and not isarray(var)\n\n\ndef islong_long(var):\n if not isscalar(var):\n return 0\n if var.get('typespec') not in ['integer', 'logical']:\n return 0\n return get_kind(var) == '8'\n\n\ndef isunsigned_char(var):\n if not isscalar(var):\n return 0\n if var.get('typespec') != 'integer':\n return 0\n return get_kind(var) == '-1'\n\n\ndef isunsigned_short(var):\n if not isscalar(var):\n return 0\n if var.get('typespec') != 'integer':\n return 0\n return get_kind(var) == '-2'\n\n\ndef isunsigned(var):\n if not isscalar(var):\n return 0\n if var.get('typespec') != 'integer':\n return 0\n return get_kind(var) == '-4'\n\n\ndef isunsigned_long_long(var):\n if not isscalar(var):\n return 0\n if var.get('typespec') != 'integer':\n return 0\n return get_kind(var) == '-8'\n\n\ndef isdouble(var):\n if not isscalar(var):\n return 0\n if not var.get('typespec') == 'real':\n return 0\n return get_kind(var) == '8'\n\n\ndef islong_double(var):\n if not isscalar(var):\n return 0\n if not var.get('typespec') == 'real':\n return 0\n return get_kind(var) == '16'\n\n\ndef islong_complex(var):\n if not iscomplex(var):\n return 0\n return 
get_kind(var) == '32'\n\n\ndef iscomplexarray(var):\n return isarray(var) and \\n var.get('typespec') in ['complex', 'double complex']\n\n\ndef isint1array(var):\n return isarray(var) and var.get('typespec') == 'integer' \\n and get_kind(var) == '1'\n\n\ndef isunsigned_chararray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '-1'\n\n\ndef isunsigned_shortarray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '-2'\n\n\ndef isunsignedarray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '-4'\n\n\ndef isunsigned_long_longarray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '-8'\n\n\ndef issigned_chararray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '1'\n\n\ndef issigned_shortarray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '2'\n\n\ndef issigned_array(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '4'\n\n\ndef issigned_long_longarray(var):\n return isarray(var) and var.get('typespec') in ['integer', 'logical']\\n and get_kind(var) == '8'\n\n\ndef isallocatable(var):\n return 'attrspec' in var and 'allocatable' in var['attrspec']\n\n\ndef ismutable(var):\n return not ('dimension' not in var or isstring(var))\n\n\ndef ismoduleroutine(rout):\n return 'modulename' in rout\n\n\ndef ismodule(rout):\n return 'block' in rout and 'module' == rout['block']\n\n\ndef isfunction(rout):\n return 'block' in rout and 'function' == rout['block']\n\n\ndef isfunction_wrap(rout):\n if isintent_c(rout):\n return 0\n return wrapfuncs and isfunction(rout) and (not isexternal(rout))\n\n\ndef issubroutine(rout):\n return 'block' in rout and 'subroutine' == rout['block']\n\n\ndef issubroutine_wrap(rout):\n if 
isintent_c(rout):\n return 0\n return issubroutine(rout) and hasassumedshape(rout)\n\ndef isattr_value(var):\n return 'value' in var.get('attrspec', [])\n\n\ndef hasassumedshape(rout):\n if rout.get('hasassumedshape'):\n return True\n for a in rout['args']:\n for d in rout['vars'].get(a, {}).get('dimension', []):\n if d == ':':\n rout['hasassumedshape'] = True\n return True\n return False\n\n\ndef requiresf90wrapper(rout):\n return ismoduleroutine(rout) or hasassumedshape(rout)\n\n\ndef isroutine(rout):\n return isfunction(rout) or issubroutine(rout)\n\n\ndef islogicalfunction(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return islogical(rout['vars'][a])\n return 0\n\n\ndef islong_longfunction(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return islong_long(rout['vars'][a])\n return 0\n\n\ndef islong_doublefunction(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return islong_double(rout['vars'][a])\n return 0\n\n\ndef iscomplexfunction(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return iscomplex(rout['vars'][a])\n return 0\n\n\ndef iscomplexfunction_warn(rout):\n if iscomplexfunction(rout):\n outmess("""\\n **************************************************************\n Warning: code with a function returning complex value\n may not work correctly with your Fortran compiler.\n When using GNU gcc/g77 compilers, codes should work\n correctly for callbacks with:\n f2py -c -DF2PY_CB_RETURNCOMPLEX\n **************************************************************\n""")\n return 1\n return 0\n\n\ndef isstringfunction(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = 
rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return isstring(rout['vars'][a])\n return 0\n\n\ndef hasexternals(rout):\n return 'externals' in rout and rout['externals']\n\n\ndef isthreadsafe(rout):\n return 'f2pyenhancements' in rout and \\n 'threadsafe' in rout['f2pyenhancements']\n\n\ndef hasvariables(rout):\n return 'vars' in rout and rout['vars']\n\n\ndef isoptional(var):\n return ('attrspec' in var and 'optional' in var['attrspec'] and\n 'required' not in var['attrspec']) and isintent_nothide(var)\n\n\ndef isexternal(var):\n return 'attrspec' in var and 'external' in var['attrspec']\n\n\ndef getdimension(var):\n dimpattern = r"\((.*?)\)"\n if 'attrspec' in var.keys():\n if any('dimension' in s for s in var['attrspec']):\n return next(re.findall(dimpattern, v) for v in var['attrspec'])\n\n\ndef isrequired(var):\n return not isoptional(var) and isintent_nothide(var)\n\n\ndef iscstyledirective(f2py_line):\n directives = {"callstatement", "callprotoargument", "pymethoddef"}\n return any(directive in f2py_line.lower() for directive in directives)\n\n\ndef isintent_in(var):\n if 'intent' not in var:\n return 1\n if 'hide' in var['intent']:\n return 0\n if 'inplace' in var['intent']:\n return 0\n if 'in' in var['intent']:\n return 1\n if 'out' in var['intent']:\n return 0\n if 'inout' in var['intent']:\n return 0\n if 'outin' in var['intent']:\n return 0\n return 1\n\n\ndef isintent_inout(var):\n return ('intent' in var and ('inout' in var['intent'] or\n 'outin' in var['intent']) and 'in' not in var['intent'] and\n 'hide' not in var['intent'] and 'inplace' not in var['intent'])\n\n\ndef isintent_out(var):\n return 'out' in var.get('intent', [])\n\n\ndef isintent_hide(var):\n return ('intent' in var and ('hide' in var['intent'] or\n ('out' in var['intent'] and 'in' not in var['intent'] and\n (not l_or(isintent_inout, isintent_inplace)(var)))))\n\n\ndef isintent_nothide(var):\n return not isintent_hide(var)\n\n\ndef isintent_c(var):\n return 'c' 
in var.get('intent', [])\n\n\ndef isintent_cache(var):\n return 'cache' in var.get('intent', [])\n\n\ndef isintent_copy(var):\n return 'copy' in var.get('intent', [])\n\n\ndef isintent_overwrite(var):\n return 'overwrite' in var.get('intent', [])\n\n\ndef isintent_callback(var):\n return 'callback' in var.get('intent', [])\n\n\ndef isintent_inplace(var):\n return 'inplace' in var.get('intent', [])\n\n\ndef isintent_aux(var):\n return 'aux' in var.get('intent', [])\n\n\ndef isintent_aligned4(var):\n return 'aligned4' in var.get('intent', [])\n\n\ndef isintent_aligned8(var):\n return 'aligned8' in var.get('intent', [])\n\n\ndef isintent_aligned16(var):\n return 'aligned16' in var.get('intent', [])\n\n\nisintent_dict = {isintent_in: 'INTENT_IN', isintent_inout: 'INTENT_INOUT',\n isintent_out: 'INTENT_OUT', isintent_hide: 'INTENT_HIDE',\n isintent_cache: 'INTENT_CACHE',\n isintent_c: 'INTENT_C', isoptional: 'OPTIONAL',\n isintent_inplace: 'INTENT_INPLACE',\n isintent_aligned4: 'INTENT_ALIGNED4',\n isintent_aligned8: 'INTENT_ALIGNED8',\n isintent_aligned16: 'INTENT_ALIGNED16',\n }\n\n\ndef isprivate(var):\n return 'attrspec' in var and 'private' in var['attrspec']\n\n\ndef isvariable(var):\n # heuristic to find public/private declarations of filtered subroutines\n if len(var) == 1 and 'attrspec' in var and \\n var['attrspec'][0] in ('public', 'private'):\n is_var = False\n else:\n is_var = True\n return is_var\n\ndef hasinitvalue(var):\n return '=' in var\n\n\ndef hasinitvalueasstring(var):\n if not hasinitvalue(var):\n return 0\n return var['='][0] in ['"', "'"]\n\n\ndef hasnote(var):\n return 'note' in var\n\n\ndef hasresultnote(rout):\n if not isfunction(rout):\n return 0\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if a in rout['vars']:\n return hasnote(rout['vars'][a])\n return 0\n\n\ndef hascommon(rout):\n return 'common' in rout\n\n\ndef containscommon(rout):\n if hascommon(rout):\n return 1\n if hasbody(rout):\n for b in 
rout['body']:\n if containscommon(b):\n return 1\n return 0\n\n\ndef hasderivedtypes(rout):\n return ('block' in rout) and rout['block'] == 'type'\n\n\ndef containsderivedtypes(rout):\n if hasderivedtypes(rout):\n return 1\n if hasbody(rout):\n for b in rout['body']:\n if hasderivedtypes(b):\n return 1\n return 0\n\n\ndef containsmodule(block):\n if ismodule(block):\n return 1\n if not hasbody(block):\n return 0\n for b in block['body']:\n if containsmodule(b):\n return 1\n return 0\n\n\ndef hasbody(rout):\n return 'body' in rout\n\n\ndef hascallstatement(rout):\n return getcallstatement(rout) is not None\n\n\ndef istrue(var):\n return 1\n\n\ndef isfalse(var):\n return 0\n\n\nclass F2PYError(Exception):\n pass\n\n\nclass throw_error:\n\n def __init__(self, mess):\n self.mess = mess\n\n def __call__(self, var):\n mess = f'\n\n var = {var}\n Message: {self.mess}\n'\n raise F2PYError(mess)\n\n\ndef l_and(*f):\n l1, l2 = 'lambda v', []\n for i in range(len(f)):\n l1 = '%s,f%d=f[%d]' % (l1, i, i)\n l2.append('f%d(v)' % (i))\n return eval(f"{l1}:{' and '.join(l2)}")\n\n\ndef l_or(*f):\n l1, l2 = 'lambda v', []\n for i in range(len(f)):\n l1 = '%s,f%d=f[%d]' % (l1, i, i)\n l2.append('f%d(v)' % (i))\n return eval(f"{l1}:{' or '.join(l2)}")\n\n\ndef l_not(f):\n return eval('lambda v,f=f:not f(v)')\n\n\ndef isdummyroutine(rout):\n try:\n return rout['f2pyenhancements']['fortranname'] == ''\n except KeyError:\n return 0\n\n\ndef getfortranname(rout):\n try:\n name = rout['f2pyenhancements']['fortranname']\n if name == '':\n raise KeyError\n if not name:\n errmess(f"Failed to use fortranname from {rout['f2pyenhancements']}\n")\n raise KeyError\n except KeyError:\n name = rout['name']\n return name\n\n\ndef getmultilineblock(rout, blockname, comment=1, counter=0):\n try:\n r = rout['f2pyenhancements'].get(blockname)\n except KeyError:\n return\n if not r:\n return\n if counter > 0 and isinstance(r, str):\n return\n if isinstance(r, list):\n if counter >= len(r):\n return\n r = 
r[counter]\n if r[:3] == "'''":\n if comment:\n r = '\t/* start ' + blockname + \\n ' multiline (' + repr(counter) + ') */\n' + r[3:]\n else:\n r = r[3:]\n if r[-3:] == "'''":\n if comment:\n r = r[:-3] + '\n\t/* end multiline (' + repr(counter) + ')*/'\n else:\n r = r[:-3]\n else:\n errmess(f"{blockname} multiline block should end with `'''`: {repr(r)}\n")\n return r\n\n\ndef getcallstatement(rout):\n return getmultilineblock(rout, 'callstatement')\n\n\ndef getcallprotoargument(rout, cb_map={}):\n r = getmultilineblock(rout, 'callprotoargument', comment=0)\n if r:\n return r\n if hascallstatement(rout):\n outmess(\n 'warning: callstatement is defined without callprotoargument\n')\n return\n from .capi_maps import getctype\n arg_types, arg_types2 = [], []\n if l_and(isstringfunction, l_not(isfunction_wrap))(rout):\n arg_types.extend(['char*', 'size_t'])\n for n in rout['args']:\n var = rout['vars'][n]\n if isintent_callback(var):\n continue\n if n in cb_map:\n ctype = cb_map[n] + '_typedef'\n else:\n ctype = getctype(var)\n if l_and(isintent_c, l_or(isscalar, iscomplex))(var):\n pass\n elif isstring(var):\n pass\n elif not isattr_value(var):\n ctype = ctype + '*'\n if (isstring(var)\n or isarrayofstrings(var) # obsolete?\n or isstringarray(var)):\n arg_types2.append('size_t')\n arg_types.append(ctype)\n\n proto_args = ','.join(arg_types + arg_types2)\n if not proto_args:\n proto_args = 'void'\n return proto_args\n\n\ndef getusercode(rout):\n return getmultilineblock(rout, 'usercode')\n\n\ndef getusercode1(rout):\n return getmultilineblock(rout, 'usercode', counter=1)\n\n\ndef getpymethoddef(rout):\n return getmultilineblock(rout, 'pymethoddef')\n\n\ndef getargs(rout):\n sortargs, args = [], []\n if 'args' in rout:\n args = rout['args']\n if 'sortvars' in rout:\n for a in rout['sortvars']:\n if a in args:\n sortargs.append(a)\n for a in args:\n if a not in sortargs:\n sortargs.append(a)\n else:\n sortargs = rout['args']\n return args, sortargs\n\n\ndef 
getargs2(rout):\n sortargs, args = [], rout.get('args', [])\n auxvars = [a for a in rout['vars'].keys() if isintent_aux(rout['vars'][a])\n and a not in args]\n args = auxvars + args\n if 'sortvars' in rout:\n for a in rout['sortvars']:\n if a in args:\n sortargs.append(a)\n for a in args:\n if a not in sortargs:\n sortargs.append(a)\n else:\n sortargs = auxvars + rout['args']\n return args, sortargs\n\n\ndef getrestdoc(rout):\n if 'f2pymultilines' not in rout:\n return None\n k = None\n if rout['block'] == 'python module':\n k = rout['block'], rout['name']\n return rout['f2pymultilines'].get(k, None)\n\n\ndef gentitle(name):\n ln = (80 - len(name) - 6) // 2\n return f"/*{ln * '*'} {name} {ln * '*'}*/"\n\n\ndef flatlist(lst):\n if isinstance(lst, list):\n return reduce(lambda x, y, f=flatlist: x + f(y), lst, [])\n return [lst]\n\n\ndef stripcomma(s):\n if s and s[-1] == ',':\n return s[:-1]\n return s\n\n\ndef replace(str, d, defaultsep=''):\n if isinstance(d, list):\n return [replace(str, _m, defaultsep) for _m in d]\n if isinstance(str, list):\n return [replace(_m, d, defaultsep) for _m in str]\n for k in 2 * list(d.keys()):\n if k == 'separatorsfor':\n continue\n if 'separatorsfor' in d and k in d['separatorsfor']:\n sep = d['separatorsfor'][k]\n else:\n sep = defaultsep\n if isinstance(d[k], list):\n str = str.replace(f'#{k}#', sep.join(flatlist(d[k])))\n else:\n str = str.replace(f'#{k}#', d[k])\n return str\n\n\ndef dictappend(rd, ar):\n if isinstance(ar, list):\n for a in ar:\n rd = dictappend(rd, a)\n return rd\n for k in ar.keys():\n if k[0] == '_':\n continue\n if k in rd:\n if isinstance(rd[k], str):\n rd[k] = [rd[k]]\n if isinstance(rd[k], list):\n if isinstance(ar[k], list):\n rd[k] = rd[k] + ar[k]\n else:\n rd[k].append(ar[k])\n elif isinstance(rd[k], dict):\n if isinstance(ar[k], dict):\n if k == 'separatorsfor':\n for k1 in ar[k].keys():\n if k1 not in rd[k]:\n rd[k][k1] = ar[k][k1]\n else:\n rd[k] = dictappend(rd[k], ar[k])\n else:\n rd[k] = ar[k]\n 
return rd\n\n\ndef applyrules(rules, d, var={}):\n ret = {}\n if isinstance(rules, list):\n for r in rules:\n rr = applyrules(r, d, var)\n ret = dictappend(ret, rr)\n if '_break' in rr:\n break\n return ret\n if '_check' in rules and (not rules['_check'](var)):\n return ret\n if 'need' in rules:\n res = applyrules({'needs': rules['need']}, d, var)\n if 'needs' in res:\n cfuncs.append_needs(res['needs'])\n\n for k in rules.keys():\n if k == 'separatorsfor':\n ret[k] = rules[k]\n continue\n if isinstance(rules[k], str):\n ret[k] = replace(rules[k], d)\n elif isinstance(rules[k], list):\n ret[k] = []\n for i in rules[k]:\n ar = applyrules({k: i}, d, var)\n if k in ar:\n ret[k].append(ar[k])\n elif k[0] == '_':\n continue\n elif isinstance(rules[k], dict):\n ret[k] = []\n for k1 in rules[k].keys():\n if isinstance(k1, types.FunctionType) and k1(var):\n if isinstance(rules[k][k1], list):\n for i in rules[k][k1]:\n if isinstance(i, dict):\n res = applyrules({'supertext': i}, d, var)\n i = res.get('supertext', '')\n ret[k].append(replace(i, d))\n else:\n i = rules[k][k1]\n if isinstance(i, dict):\n res = applyrules({'supertext': i}, d)\n i = res.get('supertext', '')\n ret[k].append(replace(i, d))\n else:\n errmess(f'applyrules: ignoring rule {repr(rules[k])}.\n')\n if isinstance(ret[k], list):\n if len(ret[k]) == 1:\n ret[k] = ret[k][0]\n if ret[k] == []:\n del ret[k]\n return ret\n\n\n_f2py_module_name_match = re.compile(r'\s*python\s*module\s*(?P<name>[\w_]+)',\n re.I).match\n_f2py_user_module_name_match = re.compile(r'\s*python\s*module\s*(?P<name>[\w_]*?'\n r'__user__[\w_]*)', re.I).match\n\ndef get_f2py_modulename(source):\n name = None\n with open(source) as f:\n for line in f:\n m = _f2py_module_name_match(line)\n if m:\n if _f2py_user_module_name_match(line): # skip *__user__* names\n continue\n name = m.group('name')\n break\n return name\n\ndef getuseblocks(pymod):\n all_uses = []\n for inner in pymod['body']:\n for modblock in inner['body']:\n if 
modblock.get('use'):\n all_uses.extend([x for x in modblock.get("use").keys() if "__" not in x])\n return all_uses\n\ndef process_f2cmap_dict(f2cmap_all, new_map, c2py_map, verbose=False):\n """\n Update the Fortran-to-C type mapping dictionary with new mappings and\n return a list of successfully mapped C types.\n\n This function integrates a new mapping dictionary into an existing\n Fortran-to-C type mapping dictionary. It ensures that all keys are in\n lowercase and validates new entries against a given C-to-Python mapping\n dictionary. Redefinitions and invalid entries are reported with a warning.\n\n Parameters\n ----------\n f2cmap_all : dict\n The existing Fortran-to-C type mapping dictionary that will be updated.\n It should be a dictionary of dictionaries where the main keys represent\n Fortran types and the nested dictionaries map Fortran type specifiers\n to corresponding C types.\n\n new_map : dict\n A dictionary containing new type mappings to be added to `f2cmap_all`.\n The structure should be similar to `f2cmap_all`, with keys representing\n Fortran types and values being dictionaries of type specifiers and their\n C type equivalents.\n\n c2py_map : dict\n A dictionary used for validating the C types in `new_map`. 
It maps C\n types to corresponding Python types and is used to ensure that the C\n types specified in `new_map` are valid.\n\n verbose : boolean\n A flag used to provide information about the types mapped\n\n Returns\n -------\n tuple of (dict, list)\n The updated Fortran-to-C type mapping dictionary and a list of\n successfully mapped C types.\n """\n f2cmap_mapped = []\n\n new_map_lower = {}\n for k, d1 in new_map.items():\n d1_lower = {k1.lower(): v1 for k1, v1 in d1.items()}\n new_map_lower[k.lower()] = d1_lower\n\n for k, d1 in new_map_lower.items():\n if k not in f2cmap_all:\n f2cmap_all[k] = {}\n\n for k1, v1 in d1.items():\n if v1 in c2py_map:\n if k1 in f2cmap_all[k]:\n outmess(\n "\tWarning: redefinition of {'%s':{'%s':'%s'->'%s'}}\n"\n % (k, k1, f2cmap_all[k][k1], v1)\n )\n f2cmap_all[k][k1] = v1\n if verbose:\n outmess(f'\tMapping "{k}(kind={k1})" to "{v1}\"\n')\n f2cmap_mapped.append(v1)\n elif verbose:\n errmess(\n "\tIgnoring map {'%s':{'%s':'%s'}}: '%s' must be in %s\n"\n % (k, k1, v1, v1, list(c2py_map.keys()))\n )\n\n return f2cmap_all, f2cmap_mapped\n
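The `process_f2cmap_dict` docstring above describes a merge-and-validate pass: lowercase all keys, accept only C types present in `c2py_map`, warn on redefinitions, and collect the C types that were mapped. The sketch below reproduces that logic in a standalone form (the names `merge_f2cmap`, `table`, and `mapped` are hypothetical, and warnings go to `print` instead of f2py's `outmess`/`errmess`):

```python
# Standalone sketch of the merge/validation logic documented above:
# lowercase keys, validate C types against c2py_map, report
# redefinitions, and collect successfully mapped C types.
def merge_f2cmap(f2cmap_all, new_map, c2py_map, verbose=False):
    mapped = []
    lowered = {k.lower(): {k1.lower(): v1 for k1, v1 in d.items()}
               for k, d in new_map.items()}
    for k, d in lowered.items():
        target = f2cmap_all.setdefault(k, {})
        for k1, v1 in d.items():
            if v1 in c2py_map:
                if k1 in target:
                    print(f"redefinition: {k}[{k1}]: {target[k1]} -> {v1}")
                target[k1] = v1
                mapped.append(v1)
            elif verbose:
                print(f"ignoring {k}[{k1}]={v1}: not in {list(c2py_map)}")
    return f2cmap_all, mapped

# Example: a user .f2py_f2cmap-style entry mapping Fortran
# real(kind=DP) to C double; keys are lowercased on merge.
table, mapped = merge_f2cmap({'real': {'': 'float'}},
                             {'REAL': {'DP': 'double'}},
                             {'double': 'float', 'float': 'float'})
```

After the call, `table['real']` holds both the default `''` entry and the new `'dp'` entry, mirroring how `load_f2cmap_file` extends `f2cmap_all` without replacing the defaults.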
.venv\Lib\site-packages\numpy\f2py\auxfuncs.py
auxfuncs.py
Python
27,924
0.95
0.291833
0.006702
awesome-app
860
2023-12-19T22:47:35.138205
Apache-2.0
false
b786634bb555039aa2982e2f4a4ffeac
from collections.abc import Callable, Mapping\nfrom pprint import pprint as show\nfrom typing import Any, Final, Never, TypeAlias, TypeVar, overload\nfrom typing import Literal as L\n\nfrom _typeshed import FileDescriptorOrPath\n\nfrom .cfuncs import errmess\n\n__all__ = [\n "applyrules",\n "containscommon",\n "containsderivedtypes",\n "debugcapi",\n "dictappend",\n "errmess",\n "gentitle",\n "get_f2py_modulename",\n "getargs2",\n "getcallprotoargument",\n "getcallstatement",\n "getdimension",\n "getfortranname",\n "getpymethoddef",\n "getrestdoc",\n "getuseblocks",\n "getusercode",\n "getusercode1",\n "hasbody",\n "hascallstatement",\n "hascommon",\n "hasexternals",\n "hasinitvalue",\n "hasnote",\n "hasresultnote",\n "isallocatable",\n "isarray",\n "isarrayofstrings",\n "isattr_value",\n "ischaracter",\n "ischaracter_or_characterarray",\n "ischaracterarray",\n "iscomplex",\n "iscomplexarray",\n "iscomplexfunction",\n "iscomplexfunction_warn",\n "iscstyledirective",\n "isdouble",\n "isdummyroutine",\n "isexternal",\n "isfunction",\n "isfunction_wrap",\n "isint1",\n "isint1array",\n "isinteger",\n "isintent_aux",\n "isintent_c",\n "isintent_callback",\n "isintent_copy",\n "isintent_dict",\n "isintent_hide",\n "isintent_in",\n "isintent_inout",\n "isintent_inplace",\n "isintent_nothide",\n "isintent_out",\n "isintent_overwrite",\n "islogical",\n "islogicalfunction",\n "islong_complex",\n "islong_double",\n "islong_doublefunction",\n "islong_long",\n "islong_longfunction",\n "ismodule",\n "ismoduleroutine",\n "isoptional",\n "isprivate",\n "isrequired",\n "isroutine",\n "isscalar",\n "issigned_long_longarray",\n "isstring",\n "isstring_or_stringarray",\n "isstringarray",\n "isstringfunction",\n "issubroutine",\n "issubroutine_wrap",\n "isthreadsafe",\n "isunsigned",\n "isunsigned_char",\n "isunsigned_chararray",\n "isunsigned_long_long",\n "isunsigned_long_longarray",\n "isunsigned_short",\n "isunsigned_shortarray",\n "isvariable",\n "l_and",\n "l_not",\n "l_or",\n 
"outmess",\n "process_f2cmap_dict",\n "replace",\n "show",\n "stripcomma",\n "throw_error",\n]\n\n###\n\n_VT = TypeVar("_VT")\n_RT = TypeVar("_RT")\n\n_Var: TypeAlias = Mapping[str, list[str]]\n_ROut: TypeAlias = Mapping[str, str]\n_F2CMap: TypeAlias = Mapping[str, Mapping[str, str]]\n\n_Bool: TypeAlias = bool | L[0, 1]\n_Intent: TypeAlias = L[\n "INTENT_IN",\n "INTENT_OUT",\n "INTENT_INOUT",\n "INTENT_C",\n "INTENT_CACHE",\n "INTENT_HIDE",\n "INTENT_INPLACE",\n "INTENT_ALIGNED4",\n "INTENT_ALIGNED8",\n "INTENT_ALIGNED16",\n "OPTIONAL",\n]\n\n###\n\nisintent_dict: dict[Callable[[_Var], _Bool], _Intent]\n\nclass F2PYError(Exception): ...\n\nclass throw_error:\n mess: Final[str]\n def __init__(self, /, mess: str) -> None: ...\n def __call__(self, /, var: _Var) -> Never: ... # raises F2PYError\n\n#\ndef l_and(*f: tuple[str, Callable[[_VT], _RT]]) -> Callable[[_VT], _RT]: ...\ndef l_or(*f: tuple[str, Callable[[_VT], _RT]]) -> Callable[[_VT], _RT]: ...\ndef l_not(f: tuple[str, Callable[[_VT], _RT]]) -> Callable[[_VT], _RT]: ...\n\n#\ndef outmess(t: str) -> None: ...\ndef debugcapi(var: _Var) -> bool: ...\n\n#\ndef hasinitvalue(var: _Var | str) -> bool: ...\ndef hasnote(var: _Var | str) -> bool: ...\ndef ischaracter(var: _Var) -> bool: ...\ndef ischaracterarray(var: _Var) -> bool: ...\ndef ischaracter_or_characterarray(var: _Var) -> bool: ...\ndef isstring(var: _Var) -> bool: ...\ndef isstringarray(var: _Var) -> bool: ...\ndef isstring_or_stringarray(var: _Var) -> bool: ...\ndef isarray(var: _Var) -> bool: ...\ndef isarrayofstrings(var: _Var) -> bool: ...\ndef isscalar(var: _Var) -> bool: ...\ndef iscomplex(var: _Var) -> bool: ...\ndef islogical(var: _Var) -> bool: ...\ndef isinteger(var: _Var) -> bool: ...\ndef isint1(var: _Var) -> bool: ...\ndef isint1array(var: _Var) -> bool: ...\ndef islong_long(var: _Var) -> _Bool: ...\ndef isunsigned(var: _Var) -> _Bool: ...\ndef isunsigned_char(var: _Var) -> _Bool: ...\ndef isunsigned_chararray(var: _Var) -> bool: ...\ndef 
isunsigned_short(var: _Var) -> _Bool: ...\ndef isunsigned_shortarray(var: _Var) -> bool: ...\ndef isunsigned_long_long(var: _Var) -> _Bool: ...\ndef isunsigned_long_longarray(var: _Var) -> bool: ...\ndef issigned_long_longarray(var: _Var) -> bool: ...\ndef isdouble(var: _Var) -> _Bool: ...\ndef islong_double(var: _Var) -> _Bool: ...\ndef islong_complex(var: _Var) -> _Bool: ...\ndef iscomplexarray(var: _Var) -> bool: ...\ndef isallocatable(var: _Var) -> bool: ...\ndef isattr_value(var: _Var) -> bool: ...\ndef isoptional(var: _Var) -> bool: ...\ndef isexternal(var: _Var) -> bool: ...\ndef isrequired(var: _Var) -> bool: ...\ndef isprivate(var: _Var) -> bool: ...\ndef isvariable(var: _Var) -> bool: ...\ndef isintent_in(var: _Var) -> _Bool: ...\ndef isintent_inout(var: _Var) -> bool: ...\ndef isintent_out(var: _Var) -> bool: ...\ndef isintent_hide(var: _Var) -> bool: ...\ndef isintent_nothide(var: _Var) -> bool: ...\ndef isintent_c(var: _Var) -> bool: ...\ndef isintent_cache(var: _Var) -> bool: ...\ndef isintent_copy(var: _Var) -> bool: ...\ndef isintent_overwrite(var: _Var) -> bool: ...\ndef isintent_callback(var: _Var) -> bool: ...\ndef isintent_inplace(var: _Var) -> bool: ...\ndef isintent_aux(var: _Var) -> bool: ...\n\n#\ndef containsderivedtypes(rout: _ROut) -> L[0, 1]: ...\ndef containscommon(rout: _ROut) -> _Bool: ...\ndef hasexternals(rout: _ROut) -> bool: ...\ndef hasresultnote(rout: _ROut) -> _Bool: ...\ndef hasbody(rout: _ROut) -> _Bool: ...\ndef hascommon(rout: _ROut) -> bool: ...\ndef hasderivedtypes(rout: _ROut) -> bool: ...\ndef hascallstatement(rout: _ROut) -> bool: ...\ndef isroutine(rout: _ROut) -> bool: ...\ndef ismodule(rout: _ROut) -> bool: ...\ndef ismoduleroutine(rout: _ROut) -> bool: ...\ndef issubroutine(rout: _ROut) -> bool: ...\ndef issubroutine_wrap(rout: _ROut) -> _Bool: ...\ndef isfunction(rout: _ROut) -> bool: ...\ndef isfunction_wrap(rout: _ROut) -> _Bool: ...\ndef islogicalfunction(rout: _ROut) -> _Bool: ...\ndef 
islong_longfunction(rout: _ROut) -> _Bool: ...\ndef islong_doublefunction(rout: _ROut) -> _Bool: ...\ndef iscomplexfunction(rout: _ROut) -> _Bool: ...\ndef iscomplexfunction_warn(rout: _ROut) -> _Bool: ...\ndef isstringfunction(rout: _ROut) -> _Bool: ...\ndef isthreadsafe(rout: _ROut) -> bool: ...\ndef isdummyroutine(rout: _ROut) -> _Bool: ...\ndef iscstyledirective(f2py_line: str) -> bool: ...\n\n# .\ndef getdimension(var: _Var) -> list[Any] | None: ...\ndef getfortranname(rout: _ROut) -> str: ...\ndef getmultilineblock(rout: _ROut, blockname: str, comment: _Bool = 1, counter: int = 0) -> str | None: ...\ndef getcallstatement(rout: _ROut) -> str | None: ...\ndef getcallprotoargument(rout: _ROut, cb_map: dict[str, str] = {}) -> str: ...\ndef getusercode(rout: _ROut) -> str | None: ...\ndef getusercode1(rout: _ROut) -> str | None: ...\ndef getpymethoddef(rout: _ROut) -> str | None: ...\ndef getargs(rout: _ROut) -> tuple[list[str], list[str]]: ...\ndef getargs2(rout: _ROut) -> tuple[list[str], list[str]]: ...\ndef getrestdoc(rout: _ROut) -> str | None: ...\n\n#\ndef gentitle(name: str) -> str: ...\ndef stripcomma(s: str) -> str: ...\n@overload\ndef replace(str: str, d: list[str], defaultsep: str = "") -> list[str]: ...\n@overload\ndef replace(str: list[str], d: str, defaultsep: str = "") -> list[str]: ...\n@overload\ndef replace(str: str, d: str, defaultsep: str = "") -> str: ...\n\n#\ndef dictappend(rd: Mapping[str, object], ar: Mapping[str, object] | list[Mapping[str, object]]) -> dict[str, Any]: ...\ndef applyrules(rules: Mapping[str, object], d: Mapping[str, object], var: _Var = {}) -> dict[str, Any]: ...\n\n#\ndef get_f2py_modulename(source: FileDescriptorOrPath) -> str: ...\ndef getuseblocks(pymod: Mapping[str, Mapping[str, Mapping[str, str]]]) -> list[str]: ...\ndef process_f2cmap_dict(\n f2cmap_all: _F2CMap,\n new_map: _F2CMap,\n c2py_map: _F2CMap,\n verbose: bool = False,\n) -> tuple[dict[str, dict[str, str]], list[str]]: ...\n
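The stub above types `l_and`, `l_or`, and `l_not`, whose implementations in `auxfuncs.py` build combined lambdas with `eval`. For ordinary boolean predicates over `var` dicts, a closure-based equivalent behaves the same; this is a sketch, not f2py's actual `eval`-based code, and the two sample predicates are simplified stand-ins for the real `isarray`/`isinteger`:

```python
# Closure-based equivalents of the l_and / l_or / l_not predicate
# combinators declared in the stub; the real auxfuncs versions build
# the same lambdas via eval.
def l_and(*fs):
    return lambda v: all(f(v) for f in fs)

def l_or(*fs):
    return lambda v: any(f(v) for f in fs)

def l_not(f):
    return lambda v: not f(v)

# Simplified stand-ins for the crackfortran-style var predicates.
isarray = lambda var: 'dimension' in var
isinteger = lambda var: var.get('typespec') == 'integer'

# Compose predicates the way getcallprotoargument does, e.g.
# l_and(isstringfunction, l_not(isfunction_wrap))(rout).
is_int_array = l_and(isarray, isinteger)
not_array = l_not(isarray)
```

Composed predicates stay first-class values, which is why `isintent_dict` in `auxfuncs.py` can use them directly as dictionary keys.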
.venv\Lib\site-packages\numpy\f2py\auxfuncs.pyi
auxfuncs.pyi
Other
8,275
0.95
0.386364
0.040816
awesome-app
686
2024-12-11T07:57:30.925237
MIT
false
a5d728656e3004f4a4d9919d00c1451b
"""\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nfrom . import __version__\n\nf2py_version = __version__.version\n\nimport copy\nimport os\nimport re\n\nfrom . import cb_rules\nfrom ._isocbind import iso_c2py_map, iso_c_binding_map, isoc_c2pycode_map\n\n# The environment provided by auxfuncs.py is needed for some calls to eval.\n# As the needed functions cannot be determined by static inspection of the\n# code, it is safest to use import * pending a major refactoring of f2py.\nfrom .auxfuncs import *\nfrom .crackfortran import markoutercomma\n\n__all__ = [\n 'getctype', 'getstrlength', 'getarrdims', 'getpydocsign',\n 'getarrdocsign', 'getinit', 'sign2map', 'routsign2map', 'modsign2map',\n 'cb_sign2map', 'cb_routsign2map', 'common_sign2map', 'process_f2cmap_dict'\n]\n\n\ndepargs = []\nlcb_map = {}\nlcb2_map = {}\n# forced casting: mainly caused by the fact that Python or Numeric\n# C/APIs do not support the corresponding C types.\nc2py_map = {'double': 'float',\n 'float': 'float', # forced casting\n 'long_double': 'float', # forced casting\n 'char': 'int', # forced casting\n 'signed_char': 'int', # forced casting\n 'unsigned_char': 'int', # forced casting\n 'short': 'int', # forced casting\n 'unsigned_short': 'int', # forced casting\n 'int': 'int', # forced casting\n 'long': 'int',\n 'long_long': 'long',\n 'unsigned': 'int', # forced casting\n 'complex_float': 'complex', # forced casting\n 'complex_double': 'complex',\n 'complex_long_double': 'complex', # forced casting\n 'string': 'string',\n 'character': 'bytes',\n }\n\nc2capi_map = {'double': 'NPY_DOUBLE',\n 'float': 'NPY_FLOAT',\n 'long_double': 'NPY_LONGDOUBLE',\n 'char': 'NPY_BYTE',\n 'unsigned_char': 'NPY_UBYTE',\n 'signed_char': 'NPY_BYTE',\n 'short': 'NPY_SHORT',\n 
'unsigned_short': 'NPY_USHORT',\n 'int': 'NPY_INT',\n 'unsigned': 'NPY_UINT',\n 'long': 'NPY_LONG',\n 'unsigned_long': 'NPY_ULONG',\n 'long_long': 'NPY_LONGLONG',\n 'unsigned_long_long': 'NPY_ULONGLONG',\n 'complex_float': 'NPY_CFLOAT',\n 'complex_double': 'NPY_CDOUBLE',\n 'complex_long_double': 'NPY_CDOUBLE',\n 'string': 'NPY_STRING',\n 'character': 'NPY_STRING'}\n\nc2pycode_map = {'double': 'd',\n 'float': 'f',\n 'long_double': 'g',\n 'char': 'b',\n 'unsigned_char': 'B',\n 'signed_char': 'b',\n 'short': 'h',\n 'unsigned_short': 'H',\n 'int': 'i',\n 'unsigned': 'I',\n 'long': 'l',\n 'unsigned_long': 'L',\n 'long_long': 'q',\n 'unsigned_long_long': 'Q',\n 'complex_float': 'F',\n 'complex_double': 'D',\n 'complex_long_double': 'G',\n 'string': 'S',\n 'character': 'c'}\n\n# https://docs.python.org/3/c-api/arg.html#building-values\nc2buildvalue_map = {'double': 'd',\n 'float': 'f',\n 'char': 'b',\n 'signed_char': 'b',\n 'short': 'h',\n 'int': 'i',\n 'long': 'l',\n 'long_long': 'L',\n 'complex_float': 'N',\n 'complex_double': 'N',\n 'complex_long_double': 'N',\n 'string': 'y',\n 'character': 'c'}\n\nf2cmap_all = {'real': {'': 'float', '4': 'float', '8': 'double',\n '12': 'long_double', '16': 'long_double'},\n 'integer': {'': 'int', '1': 'signed_char', '2': 'short',\n '4': 'int', '8': 'long_long',\n '-1': 'unsigned_char', '-2': 'unsigned_short',\n '-4': 'unsigned', '-8': 'unsigned_long_long'},\n 'complex': {'': 'complex_float', '8': 'complex_float',\n '16': 'complex_double', '24': 'complex_long_double',\n '32': 'complex_long_double'},\n 'complexkind': {'': 'complex_float', '4': 'complex_float',\n '8': 'complex_double', '12': 'complex_long_double',\n '16': 'complex_long_double'},\n 'logical': {'': 'int', '1': 'char', '2': 'short', '4': 'int',\n '8': 'long_long'},\n 'double complex': {'': 'complex_double'},\n 'double precision': {'': 'double'},\n 'byte': {'': 'char'},\n }\n\n# Add ISO_C 
handling\nc2pycode_map.update(isoc_c2pycode_map)\nc2py_map.update(iso_c2py_map)\nf2cmap_all, _ = process_f2cmap_dict(f2cmap_all, iso_c_binding_map, c2py_map)\n# End ISO_C handling\nf2cmap_default = copy.deepcopy(f2cmap_all)\n\nf2cmap_mapped = []\n\ndef load_f2cmap_file(f2cmap_file):\n global f2cmap_all, f2cmap_mapped\n\n f2cmap_all = copy.deepcopy(f2cmap_default)\n\n if f2cmap_file is None:\n # Default value\n f2cmap_file = '.f2py_f2cmap'\n if not os.path.isfile(f2cmap_file):\n return\n\n # User defined additions to f2cmap_all.\n # f2cmap_file must contain a dictionary of dictionaries, only. For\n # example, {'real':{'low':'float'}} means that Fortran 'real(low)' is\n # interpreted as C 'float'. This feature is useful for F90/95 users if\n # they use PARAMETERS in type specifications.\n try:\n outmess(f'Reading f2cmap from {f2cmap_file!r} ...\n')\n with open(f2cmap_file) as f:\n d = eval(f.read().lower(), {}, {})\n f2cmap_all, f2cmap_mapped = process_f2cmap_dict(f2cmap_all, d, c2py_map, True)\n outmess('Successfully applied user defined f2cmap changes\n')\n except Exception as msg:\n errmess(f'Failed to apply user defined f2cmap changes: {msg}. 
Skipping.\n')\n\n\ncformat_map = {'double': '%g',\n 'float': '%g',\n 'long_double': '%Lg',\n 'char': '%d',\n 'signed_char': '%d',\n 'unsigned_char': '%hhu',\n 'short': '%hd',\n 'unsigned_short': '%hu',\n 'int': '%d',\n 'unsigned': '%u',\n 'long': '%ld',\n 'unsigned_long': '%lu',\n 'long_long': '%ld',\n 'complex_float': '(%g,%g)',\n 'complex_double': '(%g,%g)',\n 'complex_long_double': '(%Lg,%Lg)',\n 'string': '\\"%s\\"',\n 'character': "'%c'",\n }\n\n# Auxiliary functions\n\n\ndef getctype(var):\n """\n Determines C type\n """\n ctype = 'void'\n if isfunction(var):\n if 'result' in var:\n a = var['result']\n else:\n a = var['name']\n if a in var['vars']:\n return getctype(var['vars'][a])\n else:\n errmess(f'getctype: function {a} has no return value?!\n')\n elif issubroutine(var):\n return ctype\n elif ischaracter_or_characterarray(var):\n return 'character'\n elif isstring_or_stringarray(var):\n return 'string'\n elif 'typespec' in var and var['typespec'].lower() in f2cmap_all:\n typespec = var['typespec'].lower()\n f2cmap = f2cmap_all[typespec]\n ctype = f2cmap[''] # default type\n if 'kindselector' in var:\n if '*' in var['kindselector']:\n try:\n ctype = f2cmap[var['kindselector']['*']]\n except KeyError:\n errmess('getctype: "%s %s %s" not supported.\n' %\n (var['typespec'], '*', var['kindselector']['*']))\n elif 'kind' in var['kindselector']:\n if typespec + 'kind' in f2cmap_all:\n f2cmap = f2cmap_all[typespec + 'kind']\n try:\n ctype = f2cmap[var['kindselector']['kind']]\n except KeyError:\n if typespec in f2cmap_all:\n f2cmap = f2cmap_all[typespec]\n try:\n ctype = f2cmap[str(var['kindselector']['kind'])]\n except KeyError:\n errmess('getctype: "%s(kind=%s)" is mapped to C "%s" (to override define dict(%s = dict(%s="<C typespec>")) in %s/.f2py_f2cmap file).\n'\n % (typespec, var['kindselector']['kind'], ctype,\n typespec, var['kindselector']['kind'], os.getcwd()))\n elif not isexternal(var):\n errmess(f'getctype: No C-type found in "{var}", assuming 
void.\n')\n return ctype\n\n\ndef f2cexpr(expr):\n """Rewrite Fortran expression as f2py supported C expression.\n\n Due to the lack of a proper expression parser in f2py, this\n function uses a heuristic approach that assumes that Fortran\n arithmetic expressions are valid C arithmetic expressions when\n mapping Fortran function calls to the corresponding C function/CPP\n macros calls.\n\n """\n # TODO: support Fortran `len` function with optional kind parameter\n expr = re.sub(r'\blen\b', 'f2py_slen', expr)\n return expr\n\n\ndef getstrlength(var):\n if isstringfunction(var):\n if 'result' in var:\n a = var['result']\n else:\n a = var['name']\n if a in var['vars']:\n return getstrlength(var['vars'][a])\n else:\n errmess(f'getstrlength: function {a} has no return value?!\n')\n if not isstring(var):\n errmess(\n f'getstrlength: expected a signature of a string but got: {repr(var)}\n')\n len = '1'\n if 'charselector' in var:\n a = var['charselector']\n if '*' in a:\n len = a['*']\n elif 'len' in a:\n len = f2cexpr(a['len'])\n if re.match(r'\(\s*(\*|:)\s*\)', len) or re.match(r'(\*|:)', len):\n if isintent_hide(var):\n errmess('getstrlength:intent(hide): expected a string with defined length but got: %s\n' % (\n repr(var)))\n len = '-1'\n return len\n\n\ndef getarrdims(a, var, verbose=0):\n ret = {}\n if isstring(var) and not isarray(var):\n ret['size'] = getstrlength(var)\n ret['rank'] = '0'\n ret['dims'] = ''\n elif isscalar(var):\n ret['size'] = '1'\n ret['rank'] = '0'\n ret['dims'] = ''\n elif isarray(var):\n dim = copy.copy(var['dimension'])\n ret['size'] = '*'.join(dim)\n try:\n ret['size'] = repr(eval(ret['size']))\n except Exception:\n pass\n ret['dims'] = ','.join(dim)\n ret['rank'] = repr(len(dim))\n ret['rank*[-1]'] = repr(len(dim) * [-1])[1:-1]\n for i in range(len(dim)): # solve dim for dependencies\n v = []\n if dim[i] in depargs:\n v = [dim[i]]\n else:\n for va in depargs:\n if re.match(r'.*?\b%s\b.*' % va, dim[i]):\n v.append(va)\n for va in v:\n if 
depargs.index(va) > depargs.index(a):\n dim[i] = '*'\n break\n ret['setdims'], i = '', -1\n for d in dim:\n i = i + 1\n if d not in ['*', ':', '(*)', '(:)']:\n ret['setdims'] = '%s#varname#_Dims[%d]=%s,' % (\n ret['setdims'], i, d)\n if ret['setdims']:\n ret['setdims'] = ret['setdims'][:-1]\n ret['cbsetdims'], i = '', -1\n for d in var['dimension']:\n i = i + 1\n if d not in ['*', ':', '(*)', '(:)']:\n ret['cbsetdims'] = '%s#varname#_Dims[%d]=%s,' % (\n ret['cbsetdims'], i, d)\n elif isintent_in(var):\n outmess('getarrdims:warning: assumed shape array, using 0 instead of %r\n'\n % (d))\n ret['cbsetdims'] = '%s#varname#_Dims[%d]=%s,' % (\n ret['cbsetdims'], i, 0)\n elif verbose:\n errmess(\n f'getarrdims: If in call-back function: array argument {repr(a)} must have bounded dimensions: got {repr(d)}\n')\n if ret['cbsetdims']:\n ret['cbsetdims'] = ret['cbsetdims'][:-1]\n# if not isintent_c(var):\n# var['dimension'].reverse()\n return ret\n\n\ndef getpydocsign(a, var):\n global lcb_map\n if isfunction(var):\n if 'result' in var:\n af = var['result']\n else:\n af = var['name']\n if af in var['vars']:\n return getpydocsign(af, var['vars'][af])\n else:\n errmess(f'getctype: function {af} has no return value?!\n')\n return '', ''\n sig, sigout = a, a\n opt = ''\n if isintent_in(var):\n opt = 'input'\n elif isintent_inout(var):\n opt = 'in/output'\n out_a = a\n if isintent_out(var):\n for k in var['intent']:\n if k[:4] == 'out=':\n out_a = k[4:]\n break\n init = ''\n ctype = getctype(var)\n\n if hasinitvalue(var):\n init, showinit = getinit(a, var)\n init = f', optional\\n Default: {showinit}'\n if isscalar(var):\n if isintent_inout(var):\n sig = '%s : %s rank-0 array(%s,\'%s\')%s' % (a, opt, c2py_map[ctype],\n c2pycode_map[ctype], init)\n else:\n sig = f'{a} : {opt} {c2py_map[ctype]}{init}'\n sigout = f'{out_a} : {c2py_map[ctype]}'\n elif isstring(var):\n if isintent_inout(var):\n sig = '%s : %s rank-0 array(string(len=%s),\'c\')%s' % (\n a, opt, getstrlength(var), init)\n 
else:\n sig = f'{a} : {opt} string(len={getstrlength(var)}){init}'\n sigout = f'{out_a} : string(len={getstrlength(var)})'\n elif isarray(var):\n dim = var['dimension']\n rank = repr(len(dim))\n sig = '%s : %s rank-%s array(\'%s\') with bounds (%s)%s' % (a, opt, rank,\n c2pycode_map[\n ctype],\n ','.join(dim), init)\n if a == out_a:\n sigout = '%s : rank-%s array(\'%s\') with bounds (%s)'\\n % (a, rank, c2pycode_map[ctype], ','.join(dim))\n else:\n sigout = '%s : rank-%s array(\'%s\') with bounds (%s) and %s storage'\\n % (out_a, rank, c2pycode_map[ctype], ','.join(dim), a)\n elif isexternal(var):\n ua = ''\n if a in lcb_map and lcb_map[a] in lcb2_map and 'argname' in lcb2_map[lcb_map[a]]:\n ua = lcb2_map[lcb_map[a]]['argname']\n if not ua == a:\n ua = f' => {ua}'\n else:\n ua = ''\n sig = f'{a} : call-back function{ua}'\n sigout = sig\n else:\n errmess(\n f'getpydocsign: Could not resolve docsignature for "{a}".\n')\n return sig, sigout\n\n\ndef getarrdocsign(a, var):\n ctype = getctype(var)\n if isstring(var) and (not isarray(var)):\n sig = f'{a} : rank-0 array(string(len={getstrlength(var)}),\'c\')'\n elif isscalar(var):\n sig = f'{a} : rank-0 array({c2py_map[ctype]},\'{c2pycode_map[ctype]}\')'\n elif isarray(var):\n dim = var['dimension']\n rank = repr(len(dim))\n sig = '%s : rank-%s array(\'%s\') with bounds (%s)' % (a, rank,\n c2pycode_map[\n ctype],\n ','.join(dim))\n return sig\n\n\ndef getinit(a, var):\n if isstring(var):\n init, showinit = '""', "''"\n else:\n init, showinit = '', ''\n if hasinitvalue(var):\n init = var['=']\n showinit = init\n if iscomplex(var) or iscomplexarray(var):\n ret = {}\n\n try:\n v = var["="]\n if ',' in v:\n ret['init.r'], ret['init.i'] = markoutercomma(\n v[1:-1]).split('@,@')\n else:\n v = eval(v, {}, {})\n ret['init.r'], ret['init.i'] = str(v.real), str(v.imag)\n except Exception:\n raise ValueError(\n f'getinit: expected complex number `(r,i)\' but got `{init}\' as initial value of {a!r}.')\n if isarray(var):\n init = 
f"(capi_c.r={ret['init.r']},capi_c.i={ret['init.i']},capi_c)"\n elif isstring(var):\n if not init:\n init, showinit = '""', "''"\n if init[0] == "'":\n init = '"%s"' % (init[1:-1].replace('"', '\\"'))\n if init[0] == '"':\n showinit = f"'{init[1:-1]}'"\n return init, showinit\n\n\ndef get_elsize(var):\n if isstring(var) or isstringarray(var):\n elsize = getstrlength(var)\n # override with user-specified length when available:\n elsize = var['charselector'].get('f2py_len', elsize)\n return elsize\n if ischaracter(var) or ischaracterarray(var):\n return '1'\n # for numerical types, PyArray_New* functions ignore specified\n # elsize, so we just return 1 and let elsize be determined at\n # runtime, see fortranobject.c\n return '1'\n\n\ndef sign2map(a, var):\n """\n varname,ctype,atype\n init,init.r,init.i,pytype\n vardebuginfo,vardebugshowvalue,varshowvalue\n varrformat\n\n intent\n """\n out_a = a\n if isintent_out(var):\n for k in var['intent']:\n if k[:4] == 'out=':\n out_a = k[4:]\n break\n ret = {'varname': a, 'outvarname': out_a, 'ctype': getctype(var)}\n intent_flags = []\n for f, s in isintent_dict.items():\n if f(var):\n intent_flags.append(f'F2PY_{s}')\n if intent_flags:\n # TODO: Evaluate intent_flags here.\n ret['intent'] = '|'.join(intent_flags)\n else:\n ret['intent'] = 'F2PY_INTENT_IN'\n if isarray(var):\n ret['varrformat'] = 'N'\n elif ret['ctype'] in c2buildvalue_map:\n ret['varrformat'] = c2buildvalue_map[ret['ctype']]\n else:\n ret['varrformat'] = 'O'\n ret['init'], ret['showinit'] = getinit(a, var)\n if hasinitvalue(var) and iscomplex(var) and not isarray(var):\n ret['init.r'], ret['init.i'] = markoutercomma(\n ret['init'][1:-1]).split('@,@')\n if isexternal(var):\n ret['cbnamekey'] = a\n if a in lcb_map:\n ret['cbname'] = lcb_map[a]\n ret['maxnofargs'] = lcb2_map[lcb_map[a]]['maxnofargs']\n ret['nofoptargs'] = lcb2_map[lcb_map[a]]['nofoptargs']\n ret['cbdocstr'] = lcb2_map[lcb_map[a]]['docstr']\n ret['cblatexdocstr'] = 
lcb2_map[lcb_map[a]]['latexdocstr']\n else:\n ret['cbname'] = a\n errmess('sign2map: Confused: external %s is not in lcb_map%s.\n' % (\n a, list(lcb_map.keys())))\n if isstring(var):\n ret['length'] = getstrlength(var)\n if isarray(var):\n ret = dictappend(ret, getarrdims(a, var))\n dim = copy.copy(var['dimension'])\n if ret['ctype'] in c2capi_map:\n ret['atype'] = c2capi_map[ret['ctype']]\n ret['elsize'] = get_elsize(var)\n # Debug info\n if debugcapi(var):\n il = [isintent_in, 'input', isintent_out, 'output',\n isintent_inout, 'inoutput', isrequired, 'required',\n isoptional, 'optional', isintent_hide, 'hidden',\n iscomplex, 'complex scalar',\n l_and(isscalar, l_not(iscomplex)), 'scalar',\n isstring, 'string', isarray, 'array',\n iscomplexarray, 'complex array', isstringarray, 'string array',\n iscomplexfunction, 'complex function',\n l_and(isfunction, l_not(iscomplexfunction)), 'function',\n isexternal, 'callback',\n isintent_callback, 'callback',\n isintent_aux, 'auxiliary',\n ]\n rl = []\n for i in range(0, len(il), 2):\n if il[i](var):\n rl.append(il[i + 1])\n if isstring(var):\n rl.append(f"slen({a})={ret['length']}")\n if isarray(var):\n ddim = ','.join(\n map(lambda x, y: f'{x}|{y}', var['dimension'], dim))\n rl.append(f'dims({ddim})')\n if isexternal(var):\n ret['vardebuginfo'] = f"debug-capi:{a}=>{ret['cbname']}:{','.join(rl)}"\n else:\n ret['vardebuginfo'] = 'debug-capi:%s %s=%s:%s' % (\n ret['ctype'], a, ret['showinit'], ','.join(rl))\n if isscalar(var):\n if ret['ctype'] in cformat_map:\n ret['vardebugshowvalue'] = f"debug-capi:{a}={cformat_map[ret['ctype']]}"\n if isstring(var):\n ret['vardebugshowvalue'] = 'debug-capi:slen(%s)=%%d %s=\\"%%s\\"' % (\n a, a)\n if isexternal(var):\n ret['vardebugshowvalue'] = f'debug-capi:{a}=%p'\n if ret['ctype'] in cformat_map:\n ret['varshowvalue'] = f"#name#:{a}={cformat_map[ret['ctype']]}"\n ret['showvalueformat'] = f"{cformat_map[ret['ctype']]}"\n if isstring(var):\n ret['varshowvalue'] = '#name#:slen(%s)=%%d 
%s=\\"%%s\\"' % (a, a)\n ret['pydocsign'], ret['pydocsignout'] = getpydocsign(a, var)\n if hasnote(var):\n ret['note'] = var['note']\n return ret\n\n\ndef routsign2map(rout):\n """\n name,NAME,begintitle,endtitle\n rname,ctype,rformat\n routdebugshowvalue\n """\n global lcb_map\n name = rout['name']\n fname = getfortranname(rout)\n ret = {'name': name,\n 'texname': name.replace('_', '\\_'),\n 'name_lower': name.lower(),\n 'NAME': name.upper(),\n 'begintitle': gentitle(name),\n 'endtitle': gentitle(f'end of {name}'),\n 'fortranname': fname,\n 'FORTRANNAME': fname.upper(),\n 'callstatement': getcallstatement(rout) or '',\n 'usercode': getusercode(rout) or '',\n 'usercode1': getusercode1(rout) or '',\n }\n if '_' in fname:\n ret['F_FUNC'] = 'F_FUNC_US'\n else:\n ret['F_FUNC'] = 'F_FUNC'\n if '_' in name:\n ret['F_WRAPPEDFUNC'] = 'F_WRAPPEDFUNC_US'\n else:\n ret['F_WRAPPEDFUNC'] = 'F_WRAPPEDFUNC'\n lcb_map = {}\n if 'use' in rout:\n for u in rout['use'].keys():\n if u in cb_rules.cb_map:\n for un in cb_rules.cb_map[u]:\n ln = un[0]\n if 'map' in rout['use'][u]:\n for k in rout['use'][u]['map'].keys():\n if rout['use'][u]['map'][k] == un[0]:\n ln = k\n break\n lcb_map[ln] = un[1]\n elif rout.get('externals'):\n errmess('routsign2map: Confused: function %s has externals %s but no "use" statement.\n' % (\n ret['name'], repr(rout['externals'])))\n ret['callprotoargument'] = getcallprotoargument(rout, lcb_map) or ''\n if isfunction(rout):\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n ret['rname'] = a\n ret['pydocsign'], ret['pydocsignout'] = getpydocsign(a, rout)\n ret['ctype'] = getctype(rout['vars'][a])\n if hasresultnote(rout):\n ret['resultnote'] = rout['vars'][a]['note']\n rout['vars'][a]['note'] = ['See elsewhere.']\n if ret['ctype'] in c2buildvalue_map:\n ret['rformat'] = c2buildvalue_map[ret['ctype']]\n else:\n ret['rformat'] = 'O'\n errmess('routsign2map: no c2buildvalue key for type %s\n' %\n (repr(ret['ctype'])))\n if debugcapi(rout):\n 
if ret['ctype'] in cformat_map:\n ret['routdebugshowvalue'] = 'debug-capi:%s=%s' % (\n a, cformat_map[ret['ctype']])\n if isstringfunction(rout):\n ret['routdebugshowvalue'] = 'debug-capi:slen(%s)=%%d %s=\\"%%s\\"' % (\n a, a)\n if isstringfunction(rout):\n ret['rlength'] = getstrlength(rout['vars'][a])\n if ret['rlength'] == '-1':\n errmess('routsign2map: expected explicit specification of the length of the string returned by the fortran function %s; taking 10.\n' % (\n repr(rout['name'])))\n ret['rlength'] = '10'\n if hasnote(rout):\n ret['note'] = rout['note']\n rout['note'] = ['See elsewhere.']\n return ret\n\n\ndef modsign2map(m):\n """\n modulename\n """\n if ismodule(m):\n ret = {'f90modulename': m['name'],\n 'F90MODULENAME': m['name'].upper(),\n 'texf90modulename': m['name'].replace('_', '\\_')}\n else:\n ret = {'modulename': m['name'],\n 'MODULENAME': m['name'].upper(),\n 'texmodulename': m['name'].replace('_', '\\_')}\n ret['restdoc'] = getrestdoc(m) or []\n if hasnote(m):\n ret['note'] = m['note']\n ret['usercode'] = getusercode(m) or ''\n ret['usercode1'] = getusercode1(m) or ''\n if m['body']:\n ret['interface_usercode'] = getusercode(m['body'][0]) or ''\n else:\n ret['interface_usercode'] = ''\n ret['pymethoddef'] = getpymethoddef(m) or ''\n if 'gil_used' in m:\n ret['gil_used'] = m['gil_used']\n if 'coutput' in m:\n ret['coutput'] = m['coutput']\n if 'f2py_wrapper_output' in m:\n ret['f2py_wrapper_output'] = m['f2py_wrapper_output']\n return ret\n\n\ndef cb_sign2map(a, var, index=None):\n ret = {'varname': a}\n ret['varname_i'] = ret['varname']\n ret['ctype'] = getctype(var)\n if ret['ctype'] in c2capi_map:\n ret['atype'] = c2capi_map[ret['ctype']]\n ret['elsize'] = get_elsize(var)\n if ret['ctype'] in cformat_map:\n ret['showvalueformat'] = f"{cformat_map[ret['ctype']]}"\n if isarray(var):\n ret = dictappend(ret, getarrdims(a, var))\n ret['pydocsign'], ret['pydocsignout'] = getpydocsign(a, var)\n if hasnote(var):\n ret['note'] = var['note']\n 
var['note'] = ['See elsewhere.']\n return ret\n\n\ndef cb_routsign2map(rout, um):\n """\n name,begintitle,endtitle,argname\n ctype,rctype,maxnofargs,nofoptargs,returncptr\n """\n ret = {'name': f"cb_{rout['name']}_in_{um}",\n 'returncptr': ''}\n if isintent_callback(rout):\n if '_' in rout['name']:\n F_FUNC = 'F_FUNC_US'\n else:\n F_FUNC = 'F_FUNC'\n ret['callbackname'] = f"{F_FUNC}({rout['name'].lower()},{rout['name'].upper()})"\n ret['static'] = 'extern'\n else:\n ret['callbackname'] = ret['name']\n ret['static'] = 'static'\n ret['argname'] = rout['name']\n ret['begintitle'] = gentitle(ret['name'])\n ret['endtitle'] = gentitle(f"end of {ret['name']}")\n ret['ctype'] = getctype(rout)\n ret['rctype'] = 'void'\n if ret['ctype'] == 'string':\n ret['rctype'] = 'void'\n else:\n ret['rctype'] = ret['ctype']\n if ret['rctype'] != 'void':\n if iscomplexfunction(rout):\n ret['returncptr'] = """\n#ifdef F2PY_CB_RETURNCOMPLEX\nreturn_value=\n#endif\n"""\n else:\n ret['returncptr'] = 'return_value='\n if ret['ctype'] in cformat_map:\n ret['showvalueformat'] = f"{cformat_map[ret['ctype']]}"\n if isstringfunction(rout):\n ret['strlength'] = getstrlength(rout)\n if isfunction(rout):\n if 'result' in rout:\n a = rout['result']\n else:\n a = rout['name']\n if hasnote(rout['vars'][a]):\n ret['note'] = rout['vars'][a]['note']\n rout['vars'][a]['note'] = ['See elsewhere.']\n ret['rname'] = a\n ret['pydocsign'], ret['pydocsignout'] = getpydocsign(a, rout)\n if iscomplexfunction(rout):\n ret['rctype'] = """\n#ifdef F2PY_CB_RETURNCOMPLEX\n#ctype#\n#else\nvoid\n#endif\n"""\n elif hasnote(rout):\n ret['note'] = rout['note']\n rout['note'] = ['See elsewhere.']\n nofargs = 0\n nofoptargs = 0\n if 'args' in rout and 'vars' in rout:\n for a in rout['args']:\n var = rout['vars'][a]\n if l_or(isintent_in, isintent_inout)(var):\n nofargs = nofargs + 1\n if isoptional(var):\n nofoptargs = nofoptargs + 1\n ret['maxnofargs'] = repr(nofargs)\n ret['nofoptargs'] = repr(nofoptargs)\n if hasnote(rout) 
and isfunction(rout) and 'result' in rout:\n ret['routnote'] = rout['note']\n rout['note'] = ['See elsewhere.']\n return ret\n\n\ndef common_sign2map(a, var): # obsolete\n ret = {'varname': a, 'ctype': getctype(var)}\n if isstringarray(var):\n ret['ctype'] = 'char'\n if ret['ctype'] in c2capi_map:\n ret['atype'] = c2capi_map[ret['ctype']]\n ret['elsize'] = get_elsize(var)\n if ret['ctype'] in cformat_map:\n ret['showvalueformat'] = f"{cformat_map[ret['ctype']]}"\n if isarray(var):\n ret = dictappend(ret, getarrdims(a, var))\n elif isstring(var):\n ret['size'] = getstrlength(var)\n ret['rank'] = '1'\n ret['pydocsign'], ret['pydocsignout'] = getpydocsign(a, var)\n if hasnote(var):\n ret['note'] = var['note']\n var['note'] = ['See elsewhere.']\n # for strings this returns 0-rank but actually is 1-rank\n ret['arrdocstr'] = getarrdocsign(a, var)\n return ret\n
.venv\Lib\site-packages\numpy\f2py\capi_maps.py
capi_maps.py
Python
30,890
0.95
0.21455
0.040951
node-utils
587
2023-08-18T15:49:09.569685
GPL-3.0
false
01e7f8df0df9b8430b064f64cece7a32
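The `f2cexpr` function in the record above rewrites Fortran expressions into C expressions by a single regex substitution: the Fortran `len` intrinsic becomes the f2py C helper `f2py_slen`, and everything else is passed through unchanged on the assumption that Fortran arithmetic is already valid C arithmetic. A minimal self-contained sketch of that heuristic (standalone copy, not an import from f2py):

```python
import re

def f2cexpr(expr):
    # Same heuristic as capi_maps.f2cexpr above: rewrite the Fortran
    # `len` intrinsic to the f2py C helper f2py_slen. The \b word
    # boundaries keep identifiers like `length` untouched.
    return re.sub(r'\blen\b', 'f2py_slen', expr)

print(f2cexpr('len(a) + 1'))    # the intrinsic call is rewritten
print(f2cexpr('length + 1'))    # unrelated identifiers pass through
```

Because there is no real expression parser, this also rewrites any standalone variable literally named `len`, which is exactly the kind of corner case the docstring's "heuristic approach" caveat refers to.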
from .auxfuncs import _ROut, _Var, process_f2cmap_dict\n\n__all__ = [\n "cb_routsign2map",\n "cb_sign2map",\n "common_sign2map",\n "getarrdims",\n "getarrdocsign",\n "getctype",\n "getinit",\n "getpydocsign",\n "getstrlength",\n "modsign2map",\n "process_f2cmap_dict",\n "routsign2map",\n "sign2map",\n]\n\n###\n\ndef getctype(var: _Var) -> str: ...\ndef f2cexpr(expr: str) -> str: ...\ndef getstrlength(var: _Var) -> str: ...\ndef getarrdims(a: str, var: _Var, verbose: int = 0) -> dict[str, str]: ...\ndef getpydocsign(a: str, var: _Var) -> tuple[str, str]: ...\ndef getarrdocsign(a: str, var: _Var) -> str: ...\ndef getinit(a: str, var: _Var) -> tuple[str, str]: ...\ndef sign2map(a: str, var: _Var) -> dict[str, str]: ...\ndef routsign2map(rout: _ROut) -> dict[str, str]: ...\ndef modsign2map(m: _ROut) -> dict[str, str]: ...\ndef cb_sign2map(a: str, var: _Var, index: object | None = None) -> dict[str, str]: ...\ndef cb_routsign2map(rout: _ROut, um: str) -> dict[str, str]: ...\ndef common_sign2map(a: str, var: _Var) -> dict[str, str]: ... # obsolete\n
.venv\Lib\site-packages\numpy\f2py\capi_maps.pyi
capi_maps.pyi
Other
1,099
0.95
0.393939
0.033333
vue-tools
479
2025-06-20T14:28:46.542231
Apache-2.0
false
12e11a0b403410b2aa262b58b67177ed
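The `getarrdims` routine declared in the stub above computes an array's total size by joining the dimension extents with `*` and constant-folding the product with `eval` when every extent is a numeric literal; symbolic extents (e.g. a dummy argument `n`) raise inside `eval` and the unevaluated string is kept. A small sketch of just that folding step (the helper name `arr_size` is ours, not f2py's):

```python
def arr_size(dimension):
    # Mirrors the size computation in getarrdims above: join the
    # dimension extents with '*' and fold to a literal when possible.
    size = '*'.join(dimension)
    try:
        size = repr(eval(size))
    except Exception:
        pass  # symbolic extent (e.g. NameError for 'n'): keep as-is
    return size

print(arr_size(['3', '4']))   # all-literal extents fold to a number
print(arr_size(['n', '4']))   # symbolic extents stay unevaluated
```

The bare `try/except Exception: pass` matches the source's defensive style: any extent that is not a pure literal product simply survives as the joined expression string.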
"""\nBuild call-back mechanism for f2py2e.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nfrom . import __version__, cfuncs\nfrom .auxfuncs import (\n applyrules,\n debugcapi,\n dictappend,\n errmess,\n getargs,\n hasnote,\n isarray,\n iscomplex,\n iscomplexarray,\n iscomplexfunction,\n isfunction,\n isintent_c,\n isintent_hide,\n isintent_in,\n isintent_inout,\n isintent_nothide,\n isintent_out,\n isoptional,\n isrequired,\n isscalar,\n isstring,\n isstringfunction,\n issubroutine,\n l_and,\n l_not,\n l_or,\n outmess,\n replace,\n stripcomma,\n throw_error,\n)\n\nf2py_version = __version__.version\n\n\n################## Rules for callback function ##############\n\ncb_routine_rules = {\n 'cbtypedefs': 'typedef #rctype#(*#name#_typedef)(#optargs_td##args_td##strarglens_td##noargs#);',\n 'body': """\n#begintitle#\ntypedef struct {\n PyObject *capi;\n PyTupleObject *args_capi;\n int nofargs;\n jmp_buf jmpbuf;\n} #name#_t;\n\n#if defined(F2PY_THREAD_LOCAL_DECL) && !defined(F2PY_USE_PYTHON_TLS)\n\nstatic F2PY_THREAD_LOCAL_DECL #name#_t *_active_#name# = NULL;\n\nstatic #name#_t *swap_active_#name#(#name#_t *ptr) {\n #name#_t *prev = _active_#name#;\n _active_#name# = ptr;\n return prev;\n}\n\nstatic #name#_t *get_active_#name#(void) {\n return _active_#name#;\n}\n\n#else\n\nstatic #name#_t *swap_active_#name#(#name#_t *ptr) {\n char *key = "__f2py_cb_#name#";\n return (#name#_t *)F2PySwapThreadLocalCallbackPtr(key, ptr);\n}\n\nstatic #name#_t *get_active_#name#(void) {\n char *key = "__f2py_cb_#name#";\n return (#name#_t *)F2PyGetThreadLocalCallbackPtr(key);\n}\n\n#endif\n\n/*typedef #rctype#(*#name#_typedef)(#optargs_td##args_td##strarglens_td##noargs#);*/\n#static# #rctype# #callbackname# (#optargs##args##strarglens##noargs#) {\n 
#name#_t cb_local = { NULL, NULL, 0 };\n #name#_t *cb = NULL;\n PyTupleObject *capi_arglist = NULL;\n PyObject *capi_return = NULL;\n PyObject *capi_tmp = NULL;\n PyObject *capi_arglist_list = NULL;\n int capi_j,capi_i = 0;\n int capi_longjmp_ok = 1;\n#decl#\n#ifdef F2PY_REPORT_ATEXIT\nf2py_cb_start_clock();\n#endif\n cb = get_active_#name#();\n if (cb == NULL) {\n capi_longjmp_ok = 0;\n cb = &cb_local;\n }\n capi_arglist = cb->args_capi;\n CFUNCSMESS(\"cb:Call-back function #name# (maxnofargs=#maxnofargs#(-#nofoptargs#))\\n\");\n CFUNCSMESSPY(\"cb:#name#_capi=\",cb->capi);\n if (cb->capi==NULL) {\n capi_longjmp_ok = 0;\n cb->capi = PyObject_GetAttrString(#modulename#_module,\"#argname#\");\n CFUNCSMESSPY(\"cb:#name#_capi=\",cb->capi);\n }\n if (cb->capi==NULL) {\n PyErr_SetString(#modulename#_error,\"cb: Callback #argname# not defined (as an argument or module #modulename# attribute).\\n\");\n goto capi_fail;\n }\n if (F2PyCapsule_Check(cb->capi)) {\n #name#_typedef #name#_cptr;\n #name#_cptr = F2PyCapsule_AsVoidPtr(cb->capi);\n #returncptr#(*#name#_cptr)(#optargs_nm##args_nm##strarglens_nm#);\n #return#\n }\n if (capi_arglist==NULL) {\n capi_longjmp_ok = 0;\n capi_tmp = PyObject_GetAttrString(#modulename#_module,\"#argname#_extra_args\");\n if (capi_tmp) {\n capi_arglist = (PyTupleObject *)PySequence_Tuple(capi_tmp);\n Py_DECREF(capi_tmp);\n if (capi_arglist==NULL) {\n PyErr_SetString(#modulename#_error,\"Failed to convert #modulename#.#argname#_extra_args to tuple.\\n\");\n goto capi_fail;\n }\n } else {\n PyErr_Clear();\n capi_arglist = (PyTupleObject *)Py_BuildValue(\"()\");\n }\n }\n if (capi_arglist == NULL) {\n PyErr_SetString(#modulename#_error,\"Callback #argname# argument list is not set.\\n\");\n goto capi_fail;\n }\n#setdims#\n#ifdef PYPY_VERSION\n#define CAPI_ARGLIST_SETITEM(idx, value) PyList_SetItem((PyObject *)capi_arglist_list, idx, value)\n capi_arglist_list = PySequence_List((PyObject *)capi_arglist);\n if (capi_arglist_list == NULL) goto 
capi_fail;\n#else\n#define CAPI_ARGLIST_SETITEM(idx, value) PyTuple_SetItem((PyObject *)capi_arglist, idx, value)\n#endif\n#pyobjfrom#\n#undef CAPI_ARGLIST_SETITEM\n#ifdef PYPY_VERSION\n CFUNCSMESSPY(\"cb:capi_arglist=\",capi_arglist_list);\n#else\n CFUNCSMESSPY(\"cb:capi_arglist=\",capi_arglist);\n#endif\n CFUNCSMESS(\"cb:Call-back calling Python function #argname#.\\n\");\n#ifdef F2PY_REPORT_ATEXIT\nf2py_cb_start_call_clock();\n#endif\n#ifdef PYPY_VERSION\n capi_return = PyObject_CallObject(cb->capi,(PyObject *)capi_arglist_list);\n Py_DECREF(capi_arglist_list);\n capi_arglist_list = NULL;\n#else\n capi_return = PyObject_CallObject(cb->capi,(PyObject *)capi_arglist);\n#endif\n#ifdef F2PY_REPORT_ATEXIT\nf2py_cb_stop_call_clock();\n#endif\n CFUNCSMESSPY(\"cb:capi_return=\",capi_return);\n if (capi_return == NULL) {\n fprintf(stderr,\"capi_return is NULL\\n\");\n goto capi_fail;\n }\n if (capi_return == Py_None) {\n Py_DECREF(capi_return);\n capi_return = Py_BuildValue(\"()\");\n }\n else if (!PyTuple_Check(capi_return)) {\n capi_return = Py_BuildValue(\"(N)\",capi_return);\n }\n capi_j = PyTuple_Size(capi_return);\n capi_i = 0;\n#frompyobj#\n CFUNCSMESS(\"cb:#name#:successful\\n\");\n Py_DECREF(capi_return);\n#ifdef F2PY_REPORT_ATEXIT\nf2py_cb_stop_clock();\n#endif\n goto capi_return_pt;\ncapi_fail:\n fprintf(stderr,\"Call-back #name# failed.\\n\");\n Py_XDECREF(capi_return);\n Py_XDECREF(capi_arglist_list);\n if (capi_longjmp_ok) {\n longjmp(cb->jmpbuf,-1);\n }\ncapi_return_pt:\n ;\n#return#\n}\n#endtitle#\n""",\n 'need': ['setjmp.h', 'CFUNCSMESS', 'F2PY_THREAD_LOCAL_DECL'],\n 'maxnofargs': '#maxnofargs#',\n 'nofoptargs': '#nofoptargs#',\n 'docstr': """\\n def #argname#(#docsignature#): return #docreturn#\\n\\\n#docstrsigns#""",\n 'latexdocstr': """\n{{}\\verb@def #argname#(#latexdocsignature#): return #docreturn#@{}}\n#routnote#\n\n#latexdocstrsigns#""",\n 'docstrshort': 'def #argname#(#docsignature#): return #docreturn#'\n}\ncb_rout_rules = [\n { # Init\n 
'separatorsfor': {'decl': '\n',\n 'args': ',', 'optargs': '', 'pyobjfrom': '\n', 'freemem': '\n',\n 'args_td': ',', 'optargs_td': '',\n 'args_nm': ',', 'optargs_nm': '',\n 'frompyobj': '\n', 'setdims': '\n',\n 'docstrsigns': '\\n"\n"',\n 'latexdocstrsigns': '\n',\n 'latexdocstrreq': '\n', 'latexdocstropt': '\n',\n 'latexdocstrout': '\n', 'latexdocstrcbs': '\n',\n },\n 'decl': '/*decl*/', 'pyobjfrom': '/*pyobjfrom*/', 'frompyobj': '/*frompyobj*/',\n 'args': [], 'optargs': '', 'return': '', 'strarglens': '', 'freemem': '/*freemem*/',\n 'args_td': [], 'optargs_td': '', 'strarglens_td': '',\n 'args_nm': [], 'optargs_nm': '', 'strarglens_nm': '',\n 'noargs': '',\n 'setdims': '/*setdims*/',\n 'docstrsigns': '', 'latexdocstrsigns': '',\n 'docstrreq': ' Required arguments:',\n 'docstropt': ' Optional arguments:',\n 'docstrout': ' Return objects:',\n 'docstrcbs': ' Call-back functions:',\n 'docreturn': '', 'docsign': '', 'docsignopt': '',\n 'latexdocstrreq': '\\noindent Required arguments:',\n 'latexdocstropt': '\\noindent Optional arguments:',\n 'latexdocstrout': '\\noindent Return objects:',\n 'latexdocstrcbs': '\\noindent Call-back functions:',\n 'routnote': {hasnote: '--- #note#', l_not(hasnote): ''},\n }, { # Function\n 'decl': ' #ctype# return_value = 0;',\n 'frompyobj': [\n {debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},\n '''\\n if (capi_j>capi_i) {\n GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,\n "#ctype#_from_pyobj failed in converting return_value of"\n " call-back function #name# to C #ctype#\\n");\n } else {\n fprintf(stderr,"Warning: call-back function #name# did not provide"\n " return value (index=%d, type=#ctype#)\\n",capi_i);\n }''',\n {debugcapi:\n ' fprintf(stderr,"#showvalueformat#.\\n",return_value);'}\n ],\n 'need': ['#ctype#_from_pyobj', {debugcapi: 'CFUNCSMESS'}, 'GETSCALARFROMPYTUPLE'],\n 'return': ' return return_value;',\n '_check': l_and(isfunction, l_not(isstringfunction), l_not(iscomplexfunction))\n },\n { # 
String function\n 'pyobjfrom': {debugcapi: ' fprintf(stderr,"debug-capi:cb:#name#:%d:\\n",return_value_len);'},\n 'args': '#ctype# return_value,int return_value_len',\n 'args_nm': 'return_value,&return_value_len',\n 'args_td': '#ctype# ,int',\n 'frompyobj': [\n {debugcapi: ' CFUNCSMESS("cb:Getting return_value->\\"");'},\n """\\n if (capi_j>capi_i) {\n GETSTRFROMPYTUPLE(capi_return,capi_i++,return_value,return_value_len);\n } else {\n fprintf(stderr,"Warning: call-back function #name# did not provide"\n " return value (index=%d, type=#ctype#)\\n",capi_i);\n }""",\n {debugcapi:\n ' fprintf(stderr,"#showvalueformat#\\".\\n",return_value);'}\n ],\n 'need': ['#ctype#_from_pyobj', {debugcapi: 'CFUNCSMESS'},\n 'string.h', 'GETSTRFROMPYTUPLE'],\n 'return': 'return;',\n '_check': isstringfunction\n },\n { # Complex function\n 'optargs': """\n#ifndef F2PY_CB_RETURNCOMPLEX\n#ctype# *return_value\n#endif\n""",\n 'optargs_nm': """\n#ifndef F2PY_CB_RETURNCOMPLEX\nreturn_value\n#endif\n""",\n 'optargs_td': """\n#ifndef F2PY_CB_RETURNCOMPLEX\n#ctype# *\n#endif\n""",\n 'decl': """\n#ifdef F2PY_CB_RETURNCOMPLEX\n #ctype# return_value = {0, 0};\n#endif\n""",\n 'frompyobj': [\n {debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},\n """\\n if (capi_j>capi_i) {\n#ifdef F2PY_CB_RETURNCOMPLEX\n GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,\n \"#ctype#_from_pyobj failed in converting return_value of call-back\"\n \" function #name# to C #ctype#\\n\");\n#else\n GETSCALARFROMPYTUPLE(capi_return,capi_i++,return_value,#ctype#,\n \"#ctype#_from_pyobj failed in converting return_value of call-back\"\n \" function #name# to C #ctype#\\n\");\n#endif\n } else {\n fprintf(stderr,\n \"Warning: call-back function #name# did not provide\"\n \" return value (index=%d, type=#ctype#)\\n\",capi_i);\n }""",\n {debugcapi: """\\n#ifdef F2PY_CB_RETURNCOMPLEX\n fprintf(stderr,\"#showvalueformat#.\\n\",(return_value).r,(return_value).i);\n#else\n 
fprintf(stderr,\"#showvalueformat#.\\n\",(*return_value).r,(*return_value).i);\n#endif\n"""}\n ],\n 'return': """\n#ifdef F2PY_CB_RETURNCOMPLEX\n return return_value;\n#else\n return;\n#endif\n""",\n 'need': ['#ctype#_from_pyobj', {debugcapi: 'CFUNCSMESS'},\n 'string.h', 'GETSCALARFROMPYTUPLE', '#ctype#'],\n '_check': iscomplexfunction\n },\n {'docstrout': ' #pydocsignout#',\n 'latexdocstrout': ['\\item[]{{}\\verb@#pydocsignout#@{}}',\n {hasnote: '--- #note#'}],\n 'docreturn': '#rname#,',\n '_check': isfunction},\n {'_check': issubroutine, 'return': 'return;'}\n]\n\ncb_arg_rules = [\n { # Doc\n 'docstropt': {l_and(isoptional, isintent_nothide): ' #pydocsign#'},\n 'docstrreq': {l_and(isrequired, isintent_nothide): ' #pydocsign#'},\n 'docstrout': {isintent_out: ' #pydocsignout#'},\n 'latexdocstropt': {l_and(isoptional, isintent_nothide): ['\\item[]{{}\\verb@#pydocsign#@{}}',\n {hasnote: '--- #note#'}]},\n 'latexdocstrreq': {l_and(isrequired, isintent_nothide): ['\\item[]{{}\\verb@#pydocsign#@{}}',\n {hasnote: '--- #note#'}]},\n 'latexdocstrout': {isintent_out: ['\\item[]{{}\\verb@#pydocsignout#@{}}',\n {l_and(hasnote, isintent_hide): '--- #note#',\n l_and(hasnote, isintent_nothide): '--- See above.'}]},\n 'docsign': {l_and(isrequired, isintent_nothide): '#varname#,'},\n 'docsignopt': {l_and(isoptional, isintent_nothide): '#varname#,'},\n 'depend': ''\n },\n {\n 'args': {\n l_and(isscalar, isintent_c): '#ctype# #varname_i#',\n l_and(isscalar, l_not(isintent_c)): '#ctype# *#varname_i#_cb_capi',\n isarray: '#ctype# *#varname_i#',\n isstring: '#ctype# #varname_i#'\n },\n 'args_nm': {\n l_and(isscalar, isintent_c): '#varname_i#',\n l_and(isscalar, l_not(isintent_c)): '#varname_i#_cb_capi',\n isarray: '#varname_i#',\n isstring: '#varname_i#'\n },\n 'args_td': {\n l_and(isscalar, isintent_c): '#ctype#',\n l_and(isscalar, l_not(isintent_c)): '#ctype# *',\n isarray: '#ctype# *',\n isstring: '#ctype#'\n },\n 'need': {l_or(isscalar, isarray, isstring): '#ctype#'},\n # untested 
with multiple args\n 'strarglens': {isstring: ',int #varname_i#_cb_len'},\n 'strarglens_td': {isstring: ',int'}, # untested with multiple args\n # untested with multiple args\n 'strarglens_nm': {isstring: ',#varname_i#_cb_len'},\n },\n { # Scalars\n 'decl': {l_not(isintent_c): ' #ctype# #varname_i#=(*#varname_i#_cb_capi);'},\n 'error': {l_and(isintent_c, isintent_out,\n throw_error('intent(c,out) is forbidden for callback scalar arguments')):\n ''},\n 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting #varname#->");'},\n {isintent_out:\n ' if (capi_j>capi_i)\n GETSCALARFROMPYTUPLE(capi_return,capi_i++,#varname_i#_cb_capi,#ctype#,"#ctype#_from_pyobj failed in converting argument #varname# of call-back function #name# to C #ctype#\\n");'},\n {l_and(debugcapi, l_and(l_not(iscomplex), isintent_c)):\n ' fprintf(stderr,"#showvalueformat#.\\n",#varname_i#);'},\n {l_and(debugcapi, l_and(l_not(iscomplex), l_not(isintent_c))):\n ' fprintf(stderr,"#showvalueformat#.\\n",*#varname_i#_cb_capi);'},\n {l_and(debugcapi, l_and(iscomplex, isintent_c)):\n ' fprintf(stderr,"#showvalueformat#.\\n",(#varname_i#).r,(#varname_i#).i);'},\n {l_and(debugcapi, l_and(iscomplex, l_not(isintent_c))):\n ' fprintf(stderr,"#showvalueformat#.\\n",(*#varname_i#_cb_capi).r,(*#varname_i#_cb_capi).i);'},\n ],\n 'need': [{isintent_out: ['#ctype#_from_pyobj', 'GETSCALARFROMPYTUPLE']},\n {debugcapi: 'CFUNCSMESS'}],\n '_check': isscalar\n }, {\n 'pyobjfrom': [{isintent_in: """\\n if (cb->nofargs>capi_i)\n if (CAPI_ARGLIST_SETITEM(capi_i++,pyobj_from_#ctype#1(#varname_i#)))\n goto capi_fail;"""},\n {isintent_inout: """\\n if (cb->nofargs>capi_i)\n if (CAPI_ARGLIST_SETITEM(capi_i++,pyarr_from_p_#ctype#1(#varname_i#_cb_capi)))\n goto capi_fail;"""}],\n 'need': [{isintent_in: 'pyobj_from_#ctype#1'},\n {isintent_inout: 'pyarr_from_p_#ctype#1'},\n {iscomplex: '#ctype#'}],\n '_check': l_and(isscalar, isintent_nothide),\n '_optional': ''\n }, { # String\n 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting 
#varname#->\\"");'},\n """ if (capi_j>capi_i)\n GETSTRFROMPYTUPLE(capi_return,capi_i++,#varname_i#,#varname_i#_cb_len);""",\n {debugcapi:\n ' fprintf(stderr,"#showvalueformat#\\":%d:.\\n",#varname_i#,#varname_i#_cb_len);'},\n ],\n 'need': ['#ctype#', 'GETSTRFROMPYTUPLE',\n {debugcapi: 'CFUNCSMESS'}, 'string.h'],\n '_check': l_and(isstring, isintent_out)\n }, {\n 'pyobjfrom': [\n {debugcapi:\n (' fprintf(stderr,"debug-capi:cb:#varname#=#showvalueformat#:'\n '%d:\\n",#varname_i#,#varname_i#_cb_len);')},\n {isintent_in: """\\n if (cb->nofargs>capi_i)\n if (CAPI_ARGLIST_SETITEM(capi_i++,pyobj_from_#ctype#1size(#varname_i#,#varname_i#_cb_len)))\n goto capi_fail;"""},\n {isintent_inout: """\\n if (cb->nofargs>capi_i) {\n int #varname_i#_cb_dims[] = {#varname_i#_cb_len};\n if (CAPI_ARGLIST_SETITEM(capi_i++,pyarr_from_p_#ctype#1(#varname_i#,#varname_i#_cb_dims)))\n goto capi_fail;\n }"""}],\n 'need': [{isintent_in: 'pyobj_from_#ctype#1size'},\n {isintent_inout: 'pyarr_from_p_#ctype#1'}],\n '_check': l_and(isstring, isintent_nothide),\n '_optional': ''\n },\n # Array ...\n {\n 'decl': ' npy_intp #varname_i#_Dims[#rank#] = {#rank*[-1]#};',\n 'setdims': ' #cbsetdims#;',\n '_check': isarray,\n '_depend': ''\n },\n {\n 'pyobjfrom': [{debugcapi: ' fprintf(stderr,"debug-capi:cb:#varname#\\n");'},\n {isintent_c: """\\n if (cb->nofargs>capi_i) {\n /* tmp_arr will be inserted to capi_arglist_list that will be\n destroyed when leaving callback function wrapper together\n with tmp_arr. */\n PyArrayObject *tmp_arr = (PyArrayObject *)PyArray_New(&PyArray_Type,\n #rank#,#varname_i#_Dims,#atype#,NULL,(char*)#varname_i#,#elsize#,\n NPY_ARRAY_CARRAY,NULL);\n""",\n l_not(isintent_c): """\\n if (cb->nofargs>capi_i) {\n /* tmp_arr will be inserted to capi_arglist_list that will be\n destroyed when leaving callback function wrapper together\n with tmp_arr. 
*/\n PyArrayObject *tmp_arr = (PyArrayObject *)PyArray_New(&PyArray_Type,\n #rank#,#varname_i#_Dims,#atype#,NULL,(char*)#varname_i#,#elsize#,\n NPY_ARRAY_FARRAY,NULL);\n""",\n },\n """\n if (tmp_arr==NULL)\n goto capi_fail;\n if (CAPI_ARGLIST_SETITEM(capi_i++,(PyObject *)tmp_arr))\n goto capi_fail;\n}"""],\n '_check': l_and(isarray, isintent_nothide, l_or(isintent_in, isintent_inout)),\n '_optional': '',\n }, {\n 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting #varname#->");'},\n """ if (capi_j>capi_i) {\n PyArrayObject *rv_cb_arr = NULL;\n if ((capi_tmp = PyTuple_GetItem(capi_return,capi_i++))==NULL) goto capi_fail;\n rv_cb_arr = array_from_pyobj(#atype#,#varname_i#_Dims,#rank#,F2PY_INTENT_IN""",\n {isintent_c: '|F2PY_INTENT_C'},\n """,capi_tmp);\n if (rv_cb_arr == NULL) {\n fprintf(stderr,\"rv_cb_arr is NULL\\n\");\n goto capi_fail;\n }\n MEMCOPY(#varname_i#,PyArray_DATA(rv_cb_arr),PyArray_NBYTES(rv_cb_arr));\n if (capi_tmp != (PyObject *)rv_cb_arr) {\n Py_DECREF(rv_cb_arr);\n }\n }""",\n {debugcapi: ' fprintf(stderr,"<-.\\n");'},\n ],\n 'need': ['MEMCOPY', {iscomplexarray: '#ctype#'}],\n '_check': l_and(isarray, isintent_out)\n }, {\n 'docreturn': '#varname#,',\n '_check': isintent_out\n }\n]\n\n################## Build call-back module #############\ncb_map = {}\n\n\ndef buildcallbacks(m):\n cb_map[m['name']] = []\n for bi in m['body']:\n if bi['block'] == 'interface':\n for b in bi['body']:\n if b:\n buildcallback(b, m['name'])\n else:\n errmess(f"warning: empty body for {m['name']}\n")\n\n\ndef buildcallback(rout, um):\n from . 
import capi_maps\n\n outmess(f" Constructing call-back function \"cb_{rout['name']}_in_{um}\"\n")\n args, depargs = getargs(rout)\n capi_maps.depargs = depargs\n var = rout['vars']\n vrd = capi_maps.cb_routsign2map(rout, um)\n rd = dictappend({}, vrd)\n cb_map[um].append([rout['name'], rd['name']])\n for r in cb_rout_rules:\n if ('_check' in r and r['_check'](rout)) or ('_check' not in r):\n ar = applyrules(r, vrd, rout)\n rd = dictappend(rd, ar)\n savevrd = {}\n for i, a in enumerate(args):\n vrd = capi_maps.cb_sign2map(a, var[a], index=i)\n savevrd[a] = vrd\n for r in cb_arg_rules:\n if '_depend' in r:\n continue\n if '_optional' in r and isoptional(var[a]):\n continue\n if ('_check' in r and r['_check'](var[a])) or ('_check' not in r):\n ar = applyrules(r, vrd, var[a])\n rd = dictappend(rd, ar)\n if '_break' in r:\n break\n for a in args:\n vrd = savevrd[a]\n for r in cb_arg_rules:\n if '_depend' in r:\n continue\n if ('_optional' not in r) or ('_optional' in r and isrequired(var[a])):\n continue\n if ('_check' in r and r['_check'](var[a])) or ('_check' not in r):\n ar = applyrules(r, vrd, var[a])\n rd = dictappend(rd, ar)\n if '_break' in r:\n break\n for a in depargs:\n vrd = savevrd[a]\n for r in cb_arg_rules:\n if '_depend' not in r:\n continue\n if '_optional' in r:\n continue\n if ('_check' in r and r['_check'](var[a])) or ('_check' not in r):\n ar = applyrules(r, vrd, var[a])\n rd = dictappend(rd, ar)\n if '_break' in r:\n break\n if 'args' in rd and 'optargs' in rd:\n if isinstance(rd['optargs'], list):\n rd['optargs'] = rd['optargs'] + ["""\n#ifndef F2PY_CB_RETURNCOMPLEX\n,\n#endif\n"""]\n rd['optargs_nm'] = rd['optargs_nm'] + ["""\n#ifndef F2PY_CB_RETURNCOMPLEX\n,\n#endif\n"""]\n rd['optargs_td'] = rd['optargs_td'] + ["""\n#ifndef F2PY_CB_RETURNCOMPLEX\n,\n#endif\n"""]\n if isinstance(rd['docreturn'], list):\n rd['docreturn'] = stripcomma(\n replace('#docreturn#', {'docreturn': rd['docreturn']}))\n optargs = stripcomma(replace('#docsignopt#',\n 
{'docsignopt': rd['docsignopt']}\n ))\n if optargs == '':\n rd['docsignature'] = stripcomma(\n replace('#docsign#', {'docsign': rd['docsign']}))\n else:\n rd['docsignature'] = replace('#docsign#[#docsignopt#]',\n {'docsign': rd['docsign'],\n 'docsignopt': optargs,\n })\n rd['latexdocsignature'] = rd['docsignature'].replace('_', '\\_')\n rd['latexdocsignature'] = rd['latexdocsignature'].replace(',', ', ')\n rd['docstrsigns'] = []\n rd['latexdocstrsigns'] = []\n for k in ['docstrreq', 'docstropt', 'docstrout', 'docstrcbs']:\n if k in rd and isinstance(rd[k], list):\n rd['docstrsigns'] = rd['docstrsigns'] + rd[k]\n k = 'latex' + k\n if k in rd and isinstance(rd[k], list):\n rd['latexdocstrsigns'] = rd['latexdocstrsigns'] + rd[k][0:1] +\\n ['\\begin{description}'] + rd[k][1:] +\\n ['\\end{description}']\n if 'args' not in rd:\n rd['args'] = ''\n rd['args_td'] = ''\n rd['args_nm'] = ''\n if not (rd.get('args') or rd.get('optargs') or rd.get('strarglens')):\n rd['noargs'] = 'void'\n\n ar = applyrules(cb_routine_rules, rd)\n cfuncs.callbacks[rd['name']] = ar['body']\n if isinstance(ar['need'], str):\n ar['need'] = [ar['need']]\n\n if 'need' in rd:\n for t in cfuncs.typedefs.keys():\n if t in rd['need']:\n ar['need'].append(t)\n\n cfuncs.typedefs_generated[rd['name'] + '_typedef'] = ar['cbtypedefs']\n ar['need'].append(rd['name'] + '_typedef')\n cfuncs.needs[rd['name']] = ar['need']\n\n capi_maps.lcb2_map[rd['name']] = {'maxnofargs': ar['maxnofargs'],\n 'nofoptargs': ar['nofoptargs'],\n 'docstr': ar['docstr'],\n 'latexdocstr': ar['latexdocstr'],\n 'argname': rd['argname']\n }\n outmess(f" {ar['docstrshort']}\n")\n################## Build call-back function #############\n
.venv\Lib\site-packages\numpy\f2py\cb_rules.py
cb_rules.py
Python
25,716
0.95
0.145865
0.122257
react-lib
86
2023-08-01T02:06:01.732131
BSD-3-Clause
false
fe3450460656d1309580e9ad7a8e2528
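The `cb_rules.py` source above drives code generation through a predicate-dispatch pattern: `buildcallback` walks lists of rule dicts, applies each rule whose `_check` predicate accepts the argument, and accumulates the results with `dictappend`. The following is a minimal sketch of that pattern with simplified, hypothetical stand-ins for f2py's real `dictappend`/`applyrules` helpers (the real ones also handle template substitution and string merging):

```python
# Hypothetical, simplified versions of f2py's rule-merging helpers.

def dictappend(rd, ar):
    """Merge rule output `ar` into `rd`, accumulating repeated keys as lists."""
    for k, v in ar.items():
        if k in rd:
            rd[k] = (rd[k] if isinstance(rd[k], list) else [rd[k]]) + \
                    (v if isinstance(v, list) else [v])
        else:
            rd[k] = v
    return rd

def applyrules(rules, var):
    """Apply every rule whose `_check` predicate accepts `var` (or has none)."""
    rd = {}
    for r in rules:
        check = r.get('_check')
        if check is None or check(var):
            # Keys starting with '_' are rule metadata, not output.
            rd = dictappend(rd, {k: v for k, v in r.items()
                                 if not k.startswith('_')})
    return rd

# Toy rules mirroring the isscalar/isstring dispatch in cb_arg_rules:
rules = [
    {'decl': 'int x;', '_check': lambda v: v['type'] == 'scalar'},
    {'decl': 'char *s;', '_check': lambda v: v['type'] == 'string'},
    {'need': ['CFUNCSMESS']},  # unconditional rule (no _check)
]
out = applyrules(rules, {'type': 'scalar'})
print(out)  # {'decl': 'int x;', 'need': ['CFUNCSMESS']}
```

The same predicates (`_check`) also gate the three passes over `args`/`depargs` in `buildcallback`, which is why optional, required, and dependent arguments can share one rule table.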
from collections.abc import Mapping\nfrom typing import Any, Final\n\nfrom .__version__ import version\n\n##\n\nf2py_version: Final = version\n\ncb_routine_rules: Final[dict[str, str | list[str]]] = ...\ncb_rout_rules: Final[list[dict[str, str | Any]]] = ...\ncb_arg_rules: Final[list[dict[str, str | Any]]] = ...\n\ncb_map: Final[dict[str, list[list[str]]]] = ...\n\ndef buildcallbacks(m: Mapping[str, object]) -> None: ...\ndef buildcallback(rout: Mapping[str, object], um: str) -> None: ...\n
.venv\Lib\site-packages\numpy\f2py\cb_rules.pyi
cb_rules.pyi
Other
512
0.95
0.117647
0.090909
react-lib
160
2023-09-28T10:18:39.722339
GPL-3.0
false
f71092ba21c1a59790e88969c8f492a4
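The `cfuncs.py` source that follows emits C struct typedefs such as `typedef struct {double r,i;} complex_double;` so that Fortran COMPLEX values can pass through the generated C wrappers. A small ctypes sketch (not part of f2py itself) shows why this layout works: the `(r, i)` field order is exactly two adjacent doubles, matching both Fortran's COMPLEX storage and NumPy's `complex128`:

```python
import ctypes
import struct

# Mirror of the `complex_double` struct emitted by cfuncs.py:
#     typedef struct {double r,i;} complex_double;
class complex_double(ctypes.Structure):
    _fields_ = [("r", ctypes.c_double), ("i", ctypes.c_double)]

z = complex_double(r=1.5, i=-2.0)

# Two 8-byte doubles, no padding:
print(ctypes.sizeof(complex_double))  # 16

# The raw bytes are just the two doubles back to back, in native order,
# i.e. the same layout Fortran and numpy complex128 use:
assert bytes(z) == struct.pack("dd", 1.5, -2.0)
print(complex(z.r, z.i))  # (1.5-2j)
```

Because the layout is plain "two doubles", the generated wrappers can pass a pointer to this struct directly to Fortran without any conversion, which is what the `pyobj_from_complex_double1` macro below relies on when reading `v.r` and `v.i`.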
"""\nC declarations, CPP macros, and C functions for f2py2e.\nOnly required declarations/macros/functions will be used.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nimport copy\nimport sys\n\nfrom . import __version__\n\nf2py_version = __version__.version\n\n\ndef errmess(s: str) -> None:\n """\n Write an error message to stderr.\n\n This indirection is needed because sys.stderr might not always be available (see #26862).\n """\n if sys.stderr is not None:\n sys.stderr.write(s)\n\n##################### Definitions ##################\n\n\noutneeds = {'includes0': [], 'includes': [], 'typedefs': [], 'typedefs_generated': [],\n 'userincludes': [],\n 'cppmacros': [], 'cfuncs': [], 'callbacks': [], 'f90modhooks': [],\n 'commonhooks': []}\nneeds = {}\nincludes0 = {'includes0': '/*need_includes0*/'}\nincludes = {'includes': '/*need_includes*/'}\nuserincludes = {'userincludes': '/*need_userincludes*/'}\ntypedefs = {'typedefs': '/*need_typedefs*/'}\ntypedefs_generated = {'typedefs_generated': '/*need_typedefs_generated*/'}\ncppmacros = {'cppmacros': '/*need_cppmacros*/'}\ncfuncs = {'cfuncs': '/*need_cfuncs*/'}\ncallbacks = {'callbacks': '/*need_callbacks*/'}\nf90modhooks = {'f90modhooks': '/*need_f90modhooks*/',\n 'initf90modhooksstatic': '/*initf90modhooksstatic*/',\n 'initf90modhooksdynamic': '/*initf90modhooksdynamic*/',\n }\ncommonhooks = {'commonhooks': '/*need_commonhooks*/',\n 'initcommonhooks': '/*need_initcommonhooks*/',\n }\n\n############ Includes ###################\n\nincludes0['math.h'] = '#include <math.h>'\nincludes0['string.h'] = '#include <string.h>'\nincludes0['setjmp.h'] = '#include <setjmp.h>'\n\nincludes['arrayobject.h'] = '''#define PY_ARRAY_UNIQUE_SYMBOL PyArray_API\n#include "arrayobject.h"'''\nincludes['npy_math.h'] = 
'#include "numpy/npy_math.h"'\n\nincludes['arrayobject.h'] = '#include "fortranobject.h"'\nincludes['stdarg.h'] = '#include <stdarg.h>'\n\n############# Type definitions ###############\n\ntypedefs['unsigned_char'] = 'typedef unsigned char unsigned_char;'\ntypedefs['unsigned_short'] = 'typedef unsigned short unsigned_short;'\ntypedefs['unsigned_long'] = 'typedef unsigned long unsigned_long;'\ntypedefs['signed_char'] = 'typedef signed char signed_char;'\ntypedefs['long_long'] = """\n#if defined(NPY_OS_WIN32)\ntypedef __int64 long_long;\n#else\ntypedef long long long_long;\ntypedef unsigned long long unsigned_long_long;\n#endif\n"""\ntypedefs['unsigned_long_long'] = """\n#if defined(NPY_OS_WIN32)\ntypedef __uint64 long_long;\n#else\ntypedef unsigned long long unsigned_long_long;\n#endif\n"""\ntypedefs['long_double'] = """\n#ifndef _LONG_DOUBLE\ntypedef long double long_double;\n#endif\n"""\ntypedefs[\n 'complex_long_double'] = 'typedef struct {long double r,i;} complex_long_double;'\ntypedefs['complex_float'] = 'typedef struct {float r,i;} complex_float;'\ntypedefs['complex_double'] = 'typedef struct {double r,i;} complex_double;'\ntypedefs['string'] = """typedef char * string;"""\ntypedefs['character'] = """typedef char character;"""\n\n\n############### CPP macros ####################\ncppmacros['CFUNCSMESS'] = """\n#ifdef DEBUGCFUNCS\n#define CFUNCSMESS(mess) fprintf(stderr,\"debug-capi:\"mess);\n#define CFUNCSMESSPY(mess,obj) CFUNCSMESS(mess) \\\n PyObject_Print((PyObject *)obj,stderr,Py_PRINT_RAW);\\\n fprintf(stderr,\"\\n\");\n#else\n#define CFUNCSMESS(mess)\n#define CFUNCSMESSPY(mess,obj)\n#endif\n"""\ncppmacros['F_FUNC'] = """\n#if defined(PREPEND_FORTRAN)\n#if defined(NO_APPEND_FORTRAN)\n#if defined(UPPERCASE_FORTRAN)\n#define F_FUNC(f,F) _##F\n#else\n#define F_FUNC(f,F) _##f\n#endif\n#else\n#if defined(UPPERCASE_FORTRAN)\n#define F_FUNC(f,F) _##F##_\n#else\n#define F_FUNC(f,F) _##f##_\n#endif\n#endif\n#else\n#if defined(NO_APPEND_FORTRAN)\n#if 
defined(UPPERCASE_FORTRAN)\n#define F_FUNC(f,F) F\n#else\n#define F_FUNC(f,F) f\n#endif\n#else\n#if defined(UPPERCASE_FORTRAN)\n#define F_FUNC(f,F) F##_\n#else\n#define F_FUNC(f,F) f##_\n#endif\n#endif\n#endif\n#if defined(UNDERSCORE_G77)\n#define F_FUNC_US(f,F) F_FUNC(f##_,F##_)\n#else\n#define F_FUNC_US(f,F) F_FUNC(f,F)\n#endif\n"""\ncppmacros['F_WRAPPEDFUNC'] = """\n#if defined(PREPEND_FORTRAN)\n#if defined(NO_APPEND_FORTRAN)\n#if defined(UPPERCASE_FORTRAN)\n#define F_WRAPPEDFUNC(f,F) _F2PYWRAP##F\n#else\n#define F_WRAPPEDFUNC(f,F) _f2pywrap##f\n#endif\n#else\n#if defined(UPPERCASE_FORTRAN)\n#define F_WRAPPEDFUNC(f,F) _F2PYWRAP##F##_\n#else\n#define F_WRAPPEDFUNC(f,F) _f2pywrap##f##_\n#endif\n#endif\n#else\n#if defined(NO_APPEND_FORTRAN)\n#if defined(UPPERCASE_FORTRAN)\n#define F_WRAPPEDFUNC(f,F) F2PYWRAP##F\n#else\n#define F_WRAPPEDFUNC(f,F) f2pywrap##f\n#endif\n#else\n#if defined(UPPERCASE_FORTRAN)\n#define F_WRAPPEDFUNC(f,F) F2PYWRAP##F##_\n#else\n#define F_WRAPPEDFUNC(f,F) f2pywrap##f##_\n#endif\n#endif\n#endif\n#if defined(UNDERSCORE_G77)\n#define F_WRAPPEDFUNC_US(f,F) F_WRAPPEDFUNC(f##_,F##_)\n#else\n#define F_WRAPPEDFUNC_US(f,F) F_WRAPPEDFUNC(f,F)\n#endif\n"""\ncppmacros['F_MODFUNC'] = """\n#if defined(F90MOD2CCONV1) /*E.g. Compaq Fortran */\n#if defined(NO_APPEND_FORTRAN)\n#define F_MODFUNCNAME(m,f) $ ## m ## $ ## f\n#else\n#define F_MODFUNCNAME(m,f) $ ## m ## $ ## f ## _\n#endif\n#endif\n\n#if defined(F90MOD2CCONV2) /*E.g. IBM XL Fortran, not tested though */\n#if defined(NO_APPEND_FORTRAN)\n#define F_MODFUNCNAME(m,f) __ ## m ## _MOD_ ## f\n#else\n#define F_MODFUNCNAME(m,f) __ ## m ## _MOD_ ## f ## _\n#endif\n#endif\n\n#if defined(F90MOD2CCONV3) /*E.g. MIPSPro Compilers */\n#if defined(NO_APPEND_FORTRAN)\n#define F_MODFUNCNAME(m,f) f ## .in. ## m\n#else\n#define F_MODFUNCNAME(m,f) f ## .in. 
## m ## _\n#endif\n#endif\n/*\n#if defined(UPPERCASE_FORTRAN)\n#define F_MODFUNC(m,M,f,F) F_MODFUNCNAME(M,F)\n#else\n#define F_MODFUNC(m,M,f,F) F_MODFUNCNAME(m,f)\n#endif\n*/\n\n#define F_MODFUNC(m,f) (*(f2pymodstruct##m##.##f))\n"""\ncppmacros['SWAPUNSAFE'] = """\n#define SWAP(a,b) (size_t)(a) = ((size_t)(a) ^ (size_t)(b));\\\n (size_t)(b) = ((size_t)(a) ^ (size_t)(b));\\\n (size_t)(a) = ((size_t)(a) ^ (size_t)(b))\n"""\ncppmacros['SWAP'] = """\n#define SWAP(a,b,t) {\\\n t *c;\\\n c = a;\\\n a = b;\\\n b = c;}\n"""\n# cppmacros['ISCONTIGUOUS']='#define ISCONTIGUOUS(m) (PyArray_FLAGS(m) &\n# NPY_ARRAY_C_CONTIGUOUS)'\ncppmacros['PRINTPYOBJERR'] = """\n#define PRINTPYOBJERR(obj)\\\n fprintf(stderr,\"#modulename#.error is related to \");\\\n PyObject_Print((PyObject *)obj,stderr,Py_PRINT_RAW);\\\n fprintf(stderr,\"\\n\");\n"""\ncppmacros['MINMAX'] = """\n#ifndef max\n#define max(a,b) ((a > b) ? (a) : (b))\n#endif\n#ifndef min\n#define min(a,b) ((a < b) ? (a) : (b))\n#endif\n#ifndef MAX\n#define MAX(a,b) ((a > b) ? (a) : (b))\n#endif\n#ifndef MIN\n#define MIN(a,b) ((a < b) ? (a) : (b))\n#endif\n"""\ncppmacros['len..'] = """\n/* See fortranobject.h for definitions. The macros here are provided for BC. 
*/\n#define rank f2py_rank\n#define shape f2py_shape\n#define fshape f2py_shape\n#define len f2py_len\n#define flen f2py_flen\n#define slen f2py_slen\n#define size f2py_size\n"""\ncppmacros['pyobj_from_char1'] = r"""\n#define pyobj_from_char1(v) (PyLong_FromLong(v))\n"""\ncppmacros['pyobj_from_short1'] = r"""\n#define pyobj_from_short1(v) (PyLong_FromLong(v))\n"""\nneeds['pyobj_from_int1'] = ['signed_char']\ncppmacros['pyobj_from_int1'] = r"""\n#define pyobj_from_int1(v) (PyLong_FromLong(v))\n"""\ncppmacros['pyobj_from_long1'] = r"""\n#define pyobj_from_long1(v) (PyLong_FromLong(v))\n"""\nneeds['pyobj_from_long_long1'] = ['long_long']\ncppmacros['pyobj_from_long_long1'] = """\n#ifdef HAVE_LONG_LONG\n#define pyobj_from_long_long1(v) (PyLong_FromLongLong(v))\n#else\n#warning HAVE_LONG_LONG is not available. Redefining pyobj_from_long_long.\n#define pyobj_from_long_long1(v) (PyLong_FromLong(v))\n#endif\n"""\nneeds['pyobj_from_long_double1'] = ['long_double']\ncppmacros['pyobj_from_long_double1'] = """\n#define pyobj_from_long_double1(v) (PyFloat_FromDouble(v))"""\ncppmacros['pyobj_from_double1'] = """\n#define pyobj_from_double1(v) (PyFloat_FromDouble(v))"""\ncppmacros['pyobj_from_float1'] = """\n#define pyobj_from_float1(v) (PyFloat_FromDouble(v))"""\nneeds['pyobj_from_complex_long_double1'] = ['complex_long_double']\ncppmacros['pyobj_from_complex_long_double1'] = """\n#define pyobj_from_complex_long_double1(v) (PyComplex_FromDoubles(v.r,v.i))"""\nneeds['pyobj_from_complex_double1'] = ['complex_double']\ncppmacros['pyobj_from_complex_double1'] = """\n#define pyobj_from_complex_double1(v) (PyComplex_FromDoubles(v.r,v.i))"""\nneeds['pyobj_from_complex_float1'] = ['complex_float']\ncppmacros['pyobj_from_complex_float1'] = """\n#define pyobj_from_complex_float1(v) (PyComplex_FromDoubles(v.r,v.i))"""\nneeds['pyobj_from_string1'] = ['string']\ncppmacros['pyobj_from_string1'] = """\n#define pyobj_from_string1(v) (PyUnicode_FromString((char 
*)v))"""\nneeds['pyobj_from_string1size'] = ['string']\ncppmacros['pyobj_from_string1size'] = """\n#define pyobj_from_string1size(v,len) (PyUnicode_FromStringAndSize((char *)v, len))"""\nneeds['TRYPYARRAYTEMPLATE'] = ['PRINTPYOBJERR']\ncppmacros['TRYPYARRAYTEMPLATE'] = """\n/* New SciPy */\n#define TRYPYARRAYTEMPLATECHAR case NPY_STRING: *(char *)(PyArray_DATA(arr))=*v; break;\n#define TRYPYARRAYTEMPLATELONG case NPY_LONG: *(long *)(PyArray_DATA(arr))=*v; break;\n#define TRYPYARRAYTEMPLATEOBJECT case NPY_OBJECT: PyArray_SETITEM(arr,PyArray_DATA(arr),pyobj_from_ ## ctype ## 1(*v)); break;\n\n#define TRYPYARRAYTEMPLATE(ctype,typecode) \\\n PyArrayObject *arr = NULL;\\\n if (!obj) return -2;\\\n if (!PyArray_Check(obj)) return -1;\\\n if (!(arr=(PyArrayObject *)obj)) {fprintf(stderr,\"TRYPYARRAYTEMPLATE:\");PRINTPYOBJERR(obj);return 0;}\\\n if (PyArray_DESCR(arr)->type==typecode) {*(ctype *)(PyArray_DATA(arr))=*v; return 1;}\\\n switch (PyArray_TYPE(arr)) {\\\n case NPY_DOUBLE: *(npy_double *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_INT: *(npy_int *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_LONG: *(npy_long *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_FLOAT: *(npy_float *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_CDOUBLE: *(npy_double *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_CFLOAT: *(npy_float *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_BOOL: *(npy_bool *)(PyArray_DATA(arr))=(*v!=0); break;\\\n case NPY_UBYTE: *(npy_ubyte *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_BYTE: *(npy_byte *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_SHORT: *(npy_short *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_USHORT: *(npy_ushort *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_UINT: *(npy_uint *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_ULONG: *(npy_ulong *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_LONGLONG: *(npy_longlong *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_ULONGLONG: *(npy_ulonglong *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_LONGDOUBLE: 
*(npy_longdouble *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_CLONGDOUBLE: *(npy_longdouble *)(PyArray_DATA(arr))=*v; break;\\\n case NPY_OBJECT: PyArray_SETITEM(arr, PyArray_DATA(arr), pyobj_from_ ## ctype ## 1(*v)); break;\\\n default: return -2;\\\n };\\\n return 1\n"""\n\nneeds['TRYCOMPLEXPYARRAYTEMPLATE'] = ['PRINTPYOBJERR']\ncppmacros['TRYCOMPLEXPYARRAYTEMPLATE'] = """\n#define TRYCOMPLEXPYARRAYTEMPLATEOBJECT case NPY_OBJECT: PyArray_SETITEM(arr, PyArray_DATA(arr), pyobj_from_complex_ ## ctype ## 1((*v))); break;\n#define TRYCOMPLEXPYARRAYTEMPLATE(ctype,typecode)\\\n PyArrayObject *arr = NULL;\\\n if (!obj) return -2;\\\n if (!PyArray_Check(obj)) return -1;\\\n if (!(arr=(PyArrayObject *)obj)) {fprintf(stderr,\"TRYCOMPLEXPYARRAYTEMPLATE:\");PRINTPYOBJERR(obj);return 0;}\\\n if (PyArray_DESCR(arr)->type==typecode) {\\\n *(ctype *)(PyArray_DATA(arr))=(*v).r;\\\n *(ctype *)(PyArray_DATA(arr)+sizeof(ctype))=(*v).i;\\\n return 1;\\\n }\\\n switch (PyArray_TYPE(arr)) {\\\n case NPY_CDOUBLE: *(npy_double *)(PyArray_DATA(arr))=(*v).r;\\\n *(npy_double *)(PyArray_DATA(arr)+sizeof(npy_double))=(*v).i;\\\n break;\\\n case NPY_CFLOAT: *(npy_float *)(PyArray_DATA(arr))=(*v).r;\\\n *(npy_float *)(PyArray_DATA(arr)+sizeof(npy_float))=(*v).i;\\\n break;\\\n case NPY_DOUBLE: *(npy_double *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_LONG: *(npy_long *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_FLOAT: *(npy_float *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_INT: *(npy_int *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_SHORT: *(npy_short *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_UBYTE: *(npy_ubyte *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_BYTE: *(npy_byte *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_BOOL: *(npy_bool *)(PyArray_DATA(arr))=((*v).r!=0 && (*v).i!=0); break;\\\n case NPY_USHORT: *(npy_ushort *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_UINT: *(npy_uint *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_ULONG: *(npy_ulong 
*)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_LONGLONG: *(npy_longlong *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_ULONGLONG: *(npy_ulonglong *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_LONGDOUBLE: *(npy_longdouble *)(PyArray_DATA(arr))=(*v).r; break;\\\n case NPY_CLONGDOUBLE: *(npy_longdouble *)(PyArray_DATA(arr))=(*v).r;\\\n *(npy_longdouble *)(PyArray_DATA(arr)+sizeof(npy_longdouble))=(*v).i;\\\n break;\\\n case NPY_OBJECT: PyArray_SETITEM(arr, PyArray_DATA(arr), pyobj_from_complex_ ## ctype ## 1((*v))); break;\\\n default: return -2;\\\n };\\\n return -1;\n"""\n# cppmacros['NUMFROMARROBJ']="""\n# define NUMFROMARROBJ(typenum,ctype) \\\n# if (PyArray_Check(obj)) arr = (PyArrayObject *)obj;\\\n# else arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,typenum,0,0);\\\n# if (arr) {\\\n# if (PyArray_TYPE(arr)==NPY_OBJECT) {\\\n# if (!ctype ## _from_pyobj(v,(PyArray_DESCR(arr)->getitem)(PyArray_DATA(arr)),\"\"))\\\n# goto capi_fail;\\\n# } else {\\\n# (PyArray_DESCR(arr)->cast[typenum])(PyArray_DATA(arr),1,(char*)v,1,1);\\\n# }\\\n# if ((PyObject *)arr != obj) { Py_DECREF(arr); }\\\n# return 1;\\\n# }\n# """\n# XXX: Note that CNUMFROMARROBJ is identical with NUMFROMARROBJ\n# cppmacros['CNUMFROMARROBJ']="""\n# define CNUMFROMARROBJ(typenum,ctype) \\\n# if (PyArray_Check(obj)) arr = (PyArrayObject *)obj;\\\n# else arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,typenum,0,0);\\\n# if (arr) {\\\n# if (PyArray_TYPE(arr)==NPY_OBJECT) {\\\n# if (!ctype ## _from_pyobj(v,(PyArray_DESCR(arr)->getitem)(PyArray_DATA(arr)),\"\"))\\\n# goto capi_fail;\\\n# } else {\\\n# (PyArray_DESCR(arr)->cast[typenum])((void *)(PyArray_DATA(arr)),1,(void *)(v),1,1);\\\n# }\\\n# if ((PyObject *)arr != obj) { Py_DECREF(arr); }\\\n# return 1;\\\n# }\n# """\n\n\nneeds['GETSTRFROMPYTUPLE'] = ['STRINGCOPYN', 'PRINTPYOBJERR']\ncppmacros['GETSTRFROMPYTUPLE'] = """\n#define GETSTRFROMPYTUPLE(tuple,index,str,len) {\\\n PyObject *rv_cb_str = 
PyTuple_GetItem((tuple),(index));\\\n if (rv_cb_str == NULL)\\\n goto capi_fail;\\\n if (PyBytes_Check(rv_cb_str)) {\\\n str[len-1]='\\0';\\\n STRINGCOPYN((str),PyBytes_AS_STRING((PyBytesObject*)rv_cb_str),(len));\\\n } else {\\\n PRINTPYOBJERR(rv_cb_str);\\\n PyErr_SetString(#modulename#_error,\"string object expected\");\\\n goto capi_fail;\\\n }\\\n }\n"""\ncppmacros['GETSCALARFROMPYTUPLE'] = """\n#define GETSCALARFROMPYTUPLE(tuple,index,var,ctype,mess) {\\\n if ((capi_tmp = PyTuple_GetItem((tuple),(index)))==NULL) goto capi_fail;\\\n if (!(ctype ## _from_pyobj((var),capi_tmp,mess)))\\\n goto capi_fail;\\\n }\n"""\n\ncppmacros['FAILNULL'] = """\\n#define FAILNULL(p) do { \\\n if ((p) == NULL) { \\\n PyErr_SetString(PyExc_MemoryError, "NULL pointer found"); \\\n goto capi_fail; \\\n } \\\n} while (0)\n"""\nneeds['MEMCOPY'] = ['string.h', 'FAILNULL']\ncppmacros['MEMCOPY'] = """\n#define MEMCOPY(to,from,n)\\\n do { FAILNULL(to); FAILNULL(from); (void)memcpy(to,from,n); } while (0)\n"""\ncppmacros['STRINGMALLOC'] = """\n#define STRINGMALLOC(str,len)\\\n if ((str = (string)malloc(len+1)) == NULL) {\\\n PyErr_SetString(PyExc_MemoryError, \"out of memory\");\\\n goto capi_fail;\\\n } else {\\\n (str)[len] = '\\0';\\\n }\n"""\ncppmacros['STRINGFREE'] = """\n#define STRINGFREE(str) do {if (!(str == NULL)) free(str);} while (0)\n"""\nneeds['STRINGPADN'] = ['string.h']\ncppmacros['STRINGPADN'] = """\n/*\nSTRINGPADN replaces null values with padding values from the right.\n\n`to` must have size of at least N bytes.\n\nIf the `to[N-1]` has null value, then replace it and all the\npreceding, nulls with the given padding.\n\nSTRINGPADN(to, N, PADDING, NULLVALUE) is an inverse operation.\n*/\n#define STRINGPADN(to, N, NULLVALUE, PADDING) \\\n do { \\\n int _m = (N); \\\n char *_to = (to); \\\n for (_m -= 1; _m >= 0 && _to[_m] == NULLVALUE; _m--) { \\\n _to[_m] = PADDING; \\\n } \\\n } while (0)\n"""\nneeds['STRINGCOPYN'] = ['string.h', 'FAILNULL']\ncppmacros['STRINGCOPYN'] = 
"""\n/*\nSTRINGCOPYN copies N bytes.\n\n`to` and `from` buffers must have sizes of at least N bytes.\n*/\n#define STRINGCOPYN(to,from,N) \\\n do { \\\n int _m = (N); \\\n char *_to = (to); \\\n char *_from = (from); \\\n FAILNULL(_to); FAILNULL(_from); \\\n (void)strncpy(_to, _from, _m); \\\n } while (0)\n"""\nneeds['STRINGCOPY'] = ['string.h', 'FAILNULL']\ncppmacros['STRINGCOPY'] = """\n#define STRINGCOPY(to,from)\\\n do { FAILNULL(to); FAILNULL(from); (void)strcpy(to,from); } while (0)\n"""\ncppmacros['CHECKGENERIC'] = """\n#define CHECKGENERIC(check,tcheck,name) \\\n if (!(check)) {\\\n PyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\\n /*goto capi_fail;*/\\\n } else """\ncppmacros['CHECKARRAY'] = """\n#define CHECKARRAY(check,tcheck,name) \\\n if (!(check)) {\\\n PyErr_SetString(#modulename#_error,\"(\"tcheck\") failed for \"name);\\\n /*goto capi_fail;*/\\\n } else """\ncppmacros['CHECKSTRING'] = """\n#define CHECKSTRING(check,tcheck,name,show,var)\\\n if (!(check)) {\\\n char errstring[256];\\\n sprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, slen(var), var);\\\n PyErr_SetString(#modulename#_error, errstring);\\\n /*goto capi_fail;*/\\\n } else """\ncppmacros['CHECKSCALAR'] = """\n#define CHECKSCALAR(check,tcheck,name,show,var)\\\n if (!(check)) {\\\n char errstring[256];\\\n sprintf(errstring, \"%s: \"show, \"(\"tcheck\") failed for \"name, var);\\\n PyErr_SetString(#modulename#_error,errstring);\\\n /*goto capi_fail;*/\\\n } else """\n# cppmacros['CHECKDIMS']="""\n# define CHECKDIMS(dims,rank) \\\n# for (int i=0;i<(rank);i++)\\\n# if (dims[i]<0) {\\\n# fprintf(stderr,\"Unspecified array argument requires a complete dimension specification.\\n\");\\\n# goto capi_fail;\\\n# }\n# """\ncppmacros[\n 'ARRSIZE'] = '#define ARRSIZE(dims,rank) (_PyArray_multiply_list(dims,rank))'\ncppmacros['OLDPYNUM'] = """\n#ifdef OLDPYNUM\n#error You need to install NumPy version 0.13 or higher. 
See https://scipy.org/install.html\n#endif\n"""\n\n# Defining the correct value to indicate thread-local storage in C without\n# running a compile-time check (which we have no control over in generated\n# code used outside of NumPy) is hard. Therefore we support overriding this\n# via an external define - the f2py-using package can then use the same\n# compile-time checks as we use for `NPY_TLS` when building NumPy (see\n# scipy#21860 for an example of that).\n#\n# __STDC_NO_THREADS__ should not be coupled to the availability of _Thread_local.\n# In case we get a bug report, guard it with __STDC_NO_THREADS__ after all.\n#\n# `thread_local` has become a keyword in C23, but don't try to use that yet\n# (too new, doing so while C23 support is preliminary will likely cause more\n# problems than it solves).\n#\n# Note: do not try to use `threads.h`, its availability is very low\n# *and* threads.h isn't actually used where `F2PY_THREAD_LOCAL_DECL` is\n# in the generated code. See gh-27718 for more details.\ncppmacros["F2PY_THREAD_LOCAL_DECL"] = """\n#ifndef F2PY_THREAD_LOCAL_DECL\n#if defined(_MSC_VER)\n#define F2PY_THREAD_LOCAL_DECL __declspec(thread)\n#elif defined(NPY_OS_MINGW)\n#define F2PY_THREAD_LOCAL_DECL __thread\n#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)\n#define F2PY_THREAD_LOCAL_DECL _Thread_local\n#elif defined(__GNUC__) \\\n && (__GNUC__ > 4 || (__GNUC__ == 4 && (__GNUC_MINOR__ >= 4)))\n#define F2PY_THREAD_LOCAL_DECL __thread\n#endif\n#endif\n"""\n################# C functions ###############\n\ncfuncs['calcarrindex'] = """\nstatic int calcarrindex(int *i,PyArrayObject *arr) {\n int k,ii = i[0];\n for (k=1; k < PyArray_NDIM(arr); k++)\n ii += (ii*(PyArray_DIM(arr,k) - 1)+i[k]); /* assuming contiguous arr */\n return ii;\n}"""\ncfuncs['calcarrindextr'] = """\nstatic int calcarrindextr(int *i,PyArrayObject *arr) {\n int k,ii = i[PyArray_NDIM(arr)-1];\n for (k=1; k < PyArray_NDIM(arr); k++)\n ii += 
(ii*(PyArray_DIM(arr,PyArray_NDIM(arr)-k-1) - 1)+i[PyArray_NDIM(arr)-k-1]); /* assuming contiguous arr */\n return ii;\n}"""\ncfuncs['forcomb'] = """\nstruct ForcombCache { int nd;npy_intp *d;int *i,*i_tr,tr; };\nstatic int initforcomb(struct ForcombCache *cache, npy_intp *dims,int nd,int tr) {\n int k;\n if (dims==NULL) return 0;\n if (nd<0) return 0;\n cache->nd = nd;\n cache->d = dims;\n cache->tr = tr;\n\n cache->i = (int *)malloc(sizeof(int)*nd);\n if (cache->i==NULL) return 0;\n cache->i_tr = (int *)malloc(sizeof(int)*nd);\n if (cache->i_tr==NULL) {free(cache->i); return 0;};\n\n for (k=1;k<nd;k++) {\n cache->i[k] = cache->i_tr[nd-k-1] = 0;\n }\n cache->i[0] = cache->i_tr[nd-1] = -1;\n return 1;\n}\nstatic int *nextforcomb(struct ForcombCache *cache) {\n if (cache==NULL) return NULL;\n int j,*i,*i_tr,k;\n int nd=cache->nd;\n if ((i=cache->i) == NULL) return NULL;\n if ((i_tr=cache->i_tr) == NULL) return NULL;\n if (cache->d == NULL) return NULL;\n i[0]++;\n if (i[0]==cache->d[0]) {\n j=1;\n while ((j<nd) && (i[j]==cache->d[j]-1)) j++;\n if (j==nd) {\n free(i);\n free(i_tr);\n return NULL;\n }\n for (k=0;k<j;k++) i[k] = i_tr[nd-k-1] = 0;\n i[j]++;\n i_tr[nd-j-1]++;\n } else\n i_tr[nd-1]++;\n if (cache->tr) return i_tr;\n return i;\n}"""\nneeds['try_pyarr_from_string'] = ['STRINGCOPYN', 'PRINTPYOBJERR', 'string']\ncfuncs['try_pyarr_from_string'] = """\n/*\n try_pyarr_from_string copies str[:len(obj)] to the data of an `ndarray`.\n\n If obj is an `ndarray`, it is assumed to be contiguous.\n\n If the specified len==-1, str must be null-terminated.\n*/\nstatic int try_pyarr_from_string(PyObject *obj,\n const string str, const int len) {\n#ifdef DEBUGCFUNCS\nfprintf(stderr, "try_pyarr_from_string(str='%s', len=%d, obj=%p)\\n",\n (char*)str,len, obj);\n#endif\n if (!obj) return -2; /* Object missing */\n if (obj == Py_None) return -1; /* None */\n if (!PyArray_Check(obj)) goto capi_fail; /* not an ndarray */\n if (PyArray_Check(obj)) {\n PyArrayObject *arr = 
(PyArrayObject *)obj;\n assert(ISCONTIGUOUS(arr));\n string buf = PyArray_DATA(arr);\n npy_intp n = len;\n if (n == -1) {\n /* Assuming null-terminated str. */\n n = strlen(str);\n }\n if (n > PyArray_NBYTES(arr)) {\n n = PyArray_NBYTES(arr);\n }\n STRINGCOPYN(buf, str, n);\n return 1;\n }\ncapi_fail:\n PRINTPYOBJERR(obj);\n PyErr_SetString(#modulename#_error, \"try_pyarr_from_string failed\");\n return 0;\n}\n"""\nneeds['string_from_pyobj'] = ['string', 'STRINGMALLOC', 'STRINGCOPYN']\ncfuncs['string_from_pyobj'] = """\n/*\n Create a new string buffer `str` of at most length `len` from a\n Python string-like object `obj`.\n\n The string buffer has given size (len) or the size of inistr when len==-1.\n\n The string buffer is padded with blanks: in Fortran, trailing blanks\n are insignificant contrary to C nulls.\n */\nstatic int\nstring_from_pyobj(string *str, int *len, const string inistr, PyObject *obj,\n const char *errmess)\n{\n PyObject *tmp = NULL;\n string buf = NULL;\n npy_intp n = -1;\n#ifdef DEBUGCFUNCS\nfprintf(stderr,\"string_from_pyobj(str='%s',len=%d,inistr='%s',obj=%p)\\n\",\n (char*)str, *len, (char *)inistr, obj);\n#endif\n if (obj == Py_None) {\n n = strlen(inistr);\n buf = inistr;\n }\n else if (PyArray_Check(obj)) {\n PyArrayObject *arr = (PyArrayObject *)obj;\n if (!ISCONTIGUOUS(arr)) {\n PyErr_SetString(PyExc_ValueError,\n \"array object is non-contiguous.\");\n goto capi_fail;\n }\n n = PyArray_NBYTES(arr);\n buf = PyArray_DATA(arr);\n n = strnlen(buf, n);\n }\n else {\n if (PyBytes_Check(obj)) {\n tmp = obj;\n Py_INCREF(tmp);\n }\n else if (PyUnicode_Check(obj)) {\n tmp = PyUnicode_AsASCIIString(obj);\n }\n else {\n PyObject *tmp2;\n tmp2 = PyObject_Str(obj);\n if (tmp2) {\n tmp = PyUnicode_AsASCIIString(tmp2);\n Py_DECREF(tmp2);\n }\n else {\n tmp = NULL;\n }\n }\n if (tmp == NULL) goto capi_fail;\n n = PyBytes_GET_SIZE(tmp);\n buf = PyBytes_AS_STRING(tmp);\n }\n if (*len == -1) {\n /* TODO: change the type of `len` so that we can remove 
this */\n if (n > NPY_MAX_INT) {\n PyErr_SetString(PyExc_OverflowError,\n "object too large for a 32-bit int");\n goto capi_fail;\n }\n *len = n;\n }\n else if (*len < n) {\n /* discard the last (len-n) bytes of input buf */\n n = *len;\n }\n if (n < 0 || *len < 0 || buf == NULL) {\n goto capi_fail;\n }\n STRINGMALLOC(*str, *len); // *str is allocated with size (*len + 1)\n if (n < *len) {\n /*\n Pad fixed-width string with nulls. The caller will replace\n nulls with blanks when the corresponding argument is not\n intent(c).\n */\n memset(*str + n, '\\0', *len - n);\n }\n STRINGCOPYN(*str, buf, n);\n Py_XDECREF(tmp);\n return 1;\ncapi_fail:\n Py_XDECREF(tmp);\n {\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = #modulename#_error;\n }\n PyErr_SetString(err, errmess);\n }\n return 0;\n}\n"""\n\ncfuncs['character_from_pyobj'] = """\nstatic int\ncharacter_from_pyobj(character* v, PyObject *obj, const char *errmess) {\n if (PyBytes_Check(obj)) {\n /* empty bytes has trailing null, so dereferencing is always safe */\n *v = PyBytes_AS_STRING(obj)[0];\n return 1;\n } else if (PyUnicode_Check(obj)) {\n PyObject* tmp = PyUnicode_AsASCIIString(obj);\n if (tmp != NULL) {\n *v = PyBytes_AS_STRING(tmp)[0];\n Py_DECREF(tmp);\n return 1;\n }\n } else if (PyArray_Check(obj)) {\n PyArrayObject* arr = (PyArrayObject*)obj;\n if (F2PY_ARRAY_IS_CHARACTER_COMPATIBLE(arr)) {\n *v = PyArray_BYTES(arr)[0];\n return 1;\n } else if (F2PY_IS_UNICODE_ARRAY(arr)) {\n // TODO: update when numpy will support 1-byte and\n // 2-byte unicode dtypes\n PyObject* tmp = PyUnicode_FromKindAndData(\n PyUnicode_4BYTE_KIND,\n PyArray_BYTES(arr),\n (PyArray_NBYTES(arr)>0?1:0));\n if (tmp != NULL) {\n if (character_from_pyobj(v, tmp, errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n Py_DECREF(tmp);\n }\n }\n } else if (PySequence_Check(obj)) {\n PyObject* tmp = PySequence_GetItem(obj,0);\n if (tmp != NULL) {\n if (character_from_pyobj(v, tmp, errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n 
Py_DECREF(tmp);\n }\n }\n {\n /* TODO: This error (and most other) error handling needs cleaning. */\n char mess[F2PY_MESSAGE_BUFFER_SIZE];\n strcpy(mess, errmess);\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = PyExc_TypeError;\n Py_INCREF(err);\n }\n else {\n Py_INCREF(err);\n PyErr_Clear();\n }\n sprintf(mess + strlen(mess),\n " -- expected str|bytes|sequence-of-str-or-bytes, got ");\n f2py_describe(obj, mess + strlen(mess));\n PyErr_SetString(err, mess);\n Py_DECREF(err);\n }\n return 0;\n}\n"""\n\n# TODO: These should be dynamically generated, too many mapped to int things,\n# see note in _isocbind.py\nneeds['char_from_pyobj'] = ['int_from_pyobj']\ncfuncs['char_from_pyobj'] = """\nstatic int\nchar_from_pyobj(char* v, PyObject *obj, const char *errmess) {\n int i = 0;\n if (int_from_pyobj(&i, obj, errmess)) {\n *v = (char)i;\n return 1;\n }\n return 0;\n}\n"""\n\n\nneeds['signed_char_from_pyobj'] = ['int_from_pyobj', 'signed_char']\ncfuncs['signed_char_from_pyobj'] = """\nstatic int\nsigned_char_from_pyobj(signed_char* v, PyObject *obj, const char *errmess) {\n int i = 0;\n if (int_from_pyobj(&i, obj, errmess)) {\n *v = (signed_char)i;\n return 1;\n }\n return 0;\n}\n"""\n\n\nneeds['short_from_pyobj'] = ['int_from_pyobj']\ncfuncs['short_from_pyobj'] = """\nstatic int\nshort_from_pyobj(short* v, PyObject *obj, const char *errmess) {\n int i = 0;\n if (int_from_pyobj(&i, obj, errmess)) {\n *v = (short)i;\n return 1;\n }\n return 0;\n}\n"""\n\n\ncfuncs['int_from_pyobj'] = """\nstatic int\nint_from_pyobj(int* v, PyObject *obj, const char *errmess)\n{\n PyObject* tmp = NULL;\n\n if (PyLong_Check(obj)) {\n *v = Npy__PyLong_AsInt(obj);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n tmp = PyNumber_Long(obj);\n if (tmp) {\n *v = Npy__PyLong_AsInt(tmp);\n Py_DECREF(tmp);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n if (PyComplex_Check(obj)) {\n PyErr_Clear();\n tmp = PyObject_GetAttrString(obj,\"real\");\n }\n else if (PyBytes_Check(obj) || 
PyUnicode_Check(obj)) {\n /*pass*/;\n }\n else if (PySequence_Check(obj)) {\n PyErr_Clear();\n tmp = PySequence_GetItem(obj, 0);\n }\n\n if (tmp) {\n if (int_from_pyobj(v, tmp, errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n Py_DECREF(tmp);\n }\n\n {\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = #modulename#_error;\n }\n PyErr_SetString(err, errmess);\n }\n return 0;\n}\n"""\n\n\ncfuncs['long_from_pyobj'] = """\nstatic int\nlong_from_pyobj(long* v, PyObject *obj, const char *errmess) {\n PyObject* tmp = NULL;\n\n if (PyLong_Check(obj)) {\n *v = PyLong_AsLong(obj);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n tmp = PyNumber_Long(obj);\n if (tmp) {\n *v = PyLong_AsLong(tmp);\n Py_DECREF(tmp);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n if (PyComplex_Check(obj)) {\n PyErr_Clear();\n tmp = PyObject_GetAttrString(obj,\"real\");\n }\n else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {\n /*pass*/;\n }\n else if (PySequence_Check(obj)) {\n PyErr_Clear();\n tmp = PySequence_GetItem(obj, 0);\n }\n\n if (tmp) {\n if (long_from_pyobj(v, tmp, errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n Py_DECREF(tmp);\n }\n {\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = #modulename#_error;\n }\n PyErr_SetString(err, errmess);\n }\n return 0;\n}\n"""\n\n\nneeds['long_long_from_pyobj'] = ['long_long']\ncfuncs['long_long_from_pyobj'] = """\nstatic int\nlong_long_from_pyobj(long_long* v, PyObject *obj, const char *errmess)\n{\n PyObject* tmp = NULL;\n\n if (PyLong_Check(obj)) {\n *v = PyLong_AsLongLong(obj);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n tmp = PyNumber_Long(obj);\n if (tmp) {\n *v = PyLong_AsLongLong(tmp);\n Py_DECREF(tmp);\n return !(*v == -1 && PyErr_Occurred());\n }\n\n if (PyComplex_Check(obj)) {\n PyErr_Clear();\n tmp = PyObject_GetAttrString(obj,\"real\");\n }\n else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {\n /*pass*/;\n }\n else if (PySequence_Check(obj)) {\n PyErr_Clear();\n tmp = PySequence_GetItem(obj, 
0);\n }\n\n if (tmp) {\n if (long_long_from_pyobj(v, tmp, errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n Py_DECREF(tmp);\n }\n {\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = #modulename#_error;\n }\n PyErr_SetString(err,errmess);\n }\n return 0;\n}\n"""\n\n\nneeds['long_double_from_pyobj'] = ['double_from_pyobj', 'long_double']\ncfuncs['long_double_from_pyobj'] = """\nstatic int\nlong_double_from_pyobj(long_double* v, PyObject *obj, const char *errmess)\n{\n double d=0;\n if (PyArray_CheckScalar(obj)){\n if PyArray_IsScalar(obj, LongDouble) {\n PyArray_ScalarAsCtype(obj, v);\n return 1;\n }\n else if (PyArray_Check(obj)) {\n PyArrayObject *arr = (PyArrayObject *)obj;\n if (PyArray_TYPE(arr) == NPY_LONGDOUBLE) {\n (*v) = *((npy_longdouble *)PyArray_DATA(arr));\n return 1;\n }\n }\n }\n if (double_from_pyobj(&d, obj, errmess)) {\n *v = (long_double)d;\n return 1;\n }\n return 0;\n}\n"""\n\n\ncfuncs['double_from_pyobj'] = """\nstatic int\ndouble_from_pyobj(double* v, PyObject *obj, const char *errmess)\n{\n PyObject* tmp = NULL;\n if (PyFloat_Check(obj)) {\n *v = PyFloat_AsDouble(obj);\n return !(*v == -1.0 && PyErr_Occurred());\n }\n\n tmp = PyNumber_Float(obj);\n if (tmp) {\n *v = PyFloat_AsDouble(tmp);\n Py_DECREF(tmp);\n return !(*v == -1.0 && PyErr_Occurred());\n }\n\n if (PyComplex_Check(obj)) {\n PyErr_Clear();\n tmp = PyObject_GetAttrString(obj,\"real\");\n }\n else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {\n /*pass*/;\n }\n else if (PySequence_Check(obj)) {\n PyErr_Clear();\n tmp = PySequence_GetItem(obj, 0);\n }\n\n if (tmp) {\n if (double_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;}\n Py_DECREF(tmp);\n }\n {\n PyObject* err = PyErr_Occurred();\n if (err==NULL) err = #modulename#_error;\n PyErr_SetString(err,errmess);\n }\n return 0;\n}\n"""\n\n\nneeds['float_from_pyobj'] = ['double_from_pyobj']\ncfuncs['float_from_pyobj'] = """\nstatic int\nfloat_from_pyobj(float* v, PyObject *obj, const char *errmess)\n{\n double 
d=0.0;\n if (double_from_pyobj(&d,obj,errmess)) {\n *v = (float)d;\n return 1;\n }\n return 0;\n}\n"""\n\n\nneeds['complex_long_double_from_pyobj'] = ['complex_long_double', 'long_double',\n 'complex_double_from_pyobj', 'npy_math.h']\ncfuncs['complex_long_double_from_pyobj'] = """\nstatic int\ncomplex_long_double_from_pyobj(complex_long_double* v, PyObject *obj, const char *errmess)\n{\n complex_double cd = {0.0,0.0};\n if (PyArray_CheckScalar(obj)){\n if PyArray_IsScalar(obj, CLongDouble) {\n PyArray_ScalarAsCtype(obj, v);\n return 1;\n }\n else if (PyArray_Check(obj)) {\n PyArrayObject *arr = (PyArrayObject *)obj;\n if (PyArray_TYPE(arr)==NPY_CLONGDOUBLE) {\n (*v).r = npy_creall(*(((npy_clongdouble *)PyArray_DATA(arr))));\n (*v).i = npy_cimagl(*(((npy_clongdouble *)PyArray_DATA(arr))));\n return 1;\n }\n }\n }\n if (complex_double_from_pyobj(&cd,obj,errmess)) {\n (*v).r = (long_double)cd.r;\n (*v).i = (long_double)cd.i;\n return 1;\n }\n return 0;\n}\n"""\n\n\nneeds['complex_double_from_pyobj'] = ['complex_double', 'npy_math.h']\ncfuncs['complex_double_from_pyobj'] = """\nstatic int\ncomplex_double_from_pyobj(complex_double* v, PyObject *obj, const char *errmess) {\n Py_complex c;\n if (PyComplex_Check(obj)) {\n c = PyComplex_AsCComplex(obj);\n (*v).r = c.real;\n (*v).i = c.imag;\n return 1;\n }\n if (PyArray_IsScalar(obj, ComplexFloating)) {\n if (PyArray_IsScalar(obj, CFloat)) {\n npy_cfloat new;\n PyArray_ScalarAsCtype(obj, &new);\n (*v).r = (double)npy_crealf(new);\n (*v).i = (double)npy_cimagf(new);\n }\n else if (PyArray_IsScalar(obj, CLongDouble)) {\n npy_clongdouble new;\n PyArray_ScalarAsCtype(obj, &new);\n (*v).r = (double)npy_creall(new);\n (*v).i = (double)npy_cimagl(new);\n }\n else { /* if (PyArray_IsScalar(obj, CDouble)) */\n PyArray_ScalarAsCtype(obj, v);\n }\n return 1;\n }\n if (PyArray_CheckScalar(obj)) { /* 0-dim array or still array scalar */\n PyArrayObject *arr;\n if (PyArray_Check(obj)) {\n arr = (PyArrayObject 
*)PyArray_Cast((PyArrayObject *)obj, NPY_CDOUBLE);\n }\n else {\n arr = (PyArrayObject *)PyArray_FromScalar(obj, PyArray_DescrFromType(NPY_CDOUBLE));\n }\n if (arr == NULL) {\n return 0;\n }\n (*v).r = npy_creal(*(((npy_cdouble *)PyArray_DATA(arr))));\n (*v).i = npy_cimag(*(((npy_cdouble *)PyArray_DATA(arr))));\n Py_DECREF(arr);\n return 1;\n }\n /* Python does not provide PyNumber_Complex function :-( */\n (*v).i = 0.0;\n if (PyFloat_Check(obj)) {\n (*v).r = PyFloat_AsDouble(obj);\n return !((*v).r == -1.0 && PyErr_Occurred());\n }\n if (PyLong_Check(obj)) {\n (*v).r = PyLong_AsDouble(obj);\n return !((*v).r == -1.0 && PyErr_Occurred());\n }\n if (PySequence_Check(obj) && !(PyBytes_Check(obj) || PyUnicode_Check(obj))) {\n PyObject *tmp = PySequence_GetItem(obj,0);\n if (tmp) {\n if (complex_double_from_pyobj(v,tmp,errmess)) {\n Py_DECREF(tmp);\n return 1;\n }\n Py_DECREF(tmp);\n }\n }\n {\n PyObject* err = PyErr_Occurred();\n if (err==NULL)\n err = PyExc_TypeError;\n PyErr_SetString(err,errmess);\n }\n return 0;\n}\n"""\n\n\nneeds['complex_float_from_pyobj'] = [\n 'complex_float', 'complex_double_from_pyobj']\ncfuncs['complex_float_from_pyobj'] = """\nstatic int\ncomplex_float_from_pyobj(complex_float* v,PyObject *obj,const char *errmess)\n{\n complex_double cd={0.0,0.0};\n if (complex_double_from_pyobj(&cd,obj,errmess)) {\n (*v).r = (float)cd.r;\n (*v).i = (float)cd.i;\n return 1;\n }\n return 0;\n}\n"""\n\n\ncfuncs['try_pyarr_from_character'] = """\nstatic int try_pyarr_from_character(PyObject* obj, character* v) {\n PyArrayObject *arr = (PyArrayObject*)obj;\n if (!obj) return -2;\n if (PyArray_Check(obj)) {\n if (F2PY_ARRAY_IS_CHARACTER_COMPATIBLE(arr)) {\n *(character *)(PyArray_DATA(arr)) = *v;\n return 1;\n }\n }\n {\n char mess[F2PY_MESSAGE_BUFFER_SIZE];\n PyObject* err = PyErr_Occurred();\n if (err == NULL) {\n err = PyExc_ValueError;\n strcpy(mess, "try_pyarr_from_character failed"\n " -- expected bytes array-scalar|array, got ");\n f2py_describe(obj, 
mess + strlen(mess));\n PyErr_SetString(err, mess);\n }\n }\n return 0;\n}\n"""\n\nneeds['try_pyarr_from_char'] = ['pyobj_from_char1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_char'] = 'static int try_pyarr_from_char(PyObject* obj,char* v) {\n TRYPYARRAYTEMPLATE(char,\'c\');\n}\n'\nneeds['try_pyarr_from_signed_char'] = ['TRYPYARRAYTEMPLATE', 'unsigned_char']\ncfuncs[\n 'try_pyarr_from_unsigned_char'] = 'static int try_pyarr_from_unsigned_char(PyObject* obj,unsigned_char* v) {\n TRYPYARRAYTEMPLATE(unsigned_char,\'b\');\n}\n'\nneeds['try_pyarr_from_signed_char'] = ['TRYPYARRAYTEMPLATE', 'signed_char']\ncfuncs[\n 'try_pyarr_from_signed_char'] = 'static int try_pyarr_from_signed_char(PyObject* obj,signed_char* v) {\n TRYPYARRAYTEMPLATE(signed_char,\'1\');\n}\n'\nneeds['try_pyarr_from_short'] = ['pyobj_from_short1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_short'] = 'static int try_pyarr_from_short(PyObject* obj,short* v) {\n TRYPYARRAYTEMPLATE(short,\'s\');\n}\n'\nneeds['try_pyarr_from_int'] = ['pyobj_from_int1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_int'] = 'static int try_pyarr_from_int(PyObject* obj,int* v) {\n TRYPYARRAYTEMPLATE(int,\'i\');\n}\n'\nneeds['try_pyarr_from_long'] = ['pyobj_from_long1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_long'] = 'static int try_pyarr_from_long(PyObject* obj,long* v) {\n TRYPYARRAYTEMPLATE(long,\'l\');\n}\n'\nneeds['try_pyarr_from_long_long'] = [\n 'pyobj_from_long_long1', 'TRYPYARRAYTEMPLATE', 'long_long']\ncfuncs[\n 'try_pyarr_from_long_long'] = 'static int try_pyarr_from_long_long(PyObject* obj,long_long* v) {\n TRYPYARRAYTEMPLATE(long_long,\'L\');\n}\n'\nneeds['try_pyarr_from_float'] = ['pyobj_from_float1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_float'] = 'static int try_pyarr_from_float(PyObject* obj,float* v) {\n TRYPYARRAYTEMPLATE(float,\'f\');\n}\n'\nneeds['try_pyarr_from_double'] = ['pyobj_from_double1', 'TRYPYARRAYTEMPLATE']\ncfuncs[\n 'try_pyarr_from_double'] = 'static int 
try_pyarr_from_double(PyObject* obj,double* v) {\n TRYPYARRAYTEMPLATE(double,\'d\');\n}\n'\nneeds['try_pyarr_from_complex_float'] = [\n 'pyobj_from_complex_float1', 'TRYCOMPLEXPYARRAYTEMPLATE', 'complex_float']\ncfuncs[\n 'try_pyarr_from_complex_float'] = 'static int try_pyarr_from_complex_float(PyObject* obj,complex_float* v) {\n TRYCOMPLEXPYARRAYTEMPLATE(float,\'F\');\n}\n'\nneeds['try_pyarr_from_complex_double'] = [\n 'pyobj_from_complex_double1', 'TRYCOMPLEXPYARRAYTEMPLATE', 'complex_double']\ncfuncs[\n 'try_pyarr_from_complex_double'] = 'static int try_pyarr_from_complex_double(PyObject* obj,complex_double* v) {\n TRYCOMPLEXPYARRAYTEMPLATE(double,\'D\');\n}\n'\n\n\nneeds['create_cb_arglist'] = ['CFUNCSMESS', 'PRINTPYOBJERR', 'MINMAX']\n# create the list of arguments to be used when calling back to python\ncfuncs['create_cb_arglist'] = """\nstatic int\ncreate_cb_arglist(PyObject* fun, PyTupleObject* xa , const int maxnofargs,\n const int nofoptargs, int *nofargs, PyTupleObject **args,\n const char *errmess)\n{\n PyObject *tmp = NULL;\n PyObject *tmp_fun = NULL;\n Py_ssize_t tot, opt, ext, siz, i, di = 0;\n CFUNCSMESS(\"create_cb_arglist\\n\");\n tot=opt=ext=siz=0;\n /* Get the total number of arguments */\n if (PyFunction_Check(fun)) {\n tmp_fun = fun;\n Py_INCREF(tmp_fun);\n }\n else {\n di = 1;\n if (PyObject_HasAttrString(fun,\"im_func\")) {\n tmp_fun = PyObject_GetAttrString(fun,\"im_func\");\n }\n else if (PyObject_HasAttrString(fun,\"__call__\")) {\n tmp = PyObject_GetAttrString(fun,\"__call__\");\n if (PyObject_HasAttrString(tmp,\"im_func\"))\n tmp_fun = PyObject_GetAttrString(tmp,\"im_func\");\n else {\n tmp_fun = fun; /* built-in function */\n Py_INCREF(tmp_fun);\n tot = maxnofargs;\n if (PyCFunction_Check(fun)) {\n /* In case the function has a co_argcount (like on PyPy) */\n di = 0;\n }\n if (xa != NULL)\n tot += PyTuple_Size((PyObject *)xa);\n }\n Py_XDECREF(tmp);\n }\n else if (PyFortran_Check(fun) || PyFortran_Check1(fun)) {\n tot = maxnofargs;\n 
if (xa != NULL)\n tot += PyTuple_Size((PyObject *)xa);\n tmp_fun = fun;\n Py_INCREF(tmp_fun);\n }\n else if (F2PyCapsule_Check(fun)) {\n tot = maxnofargs;\n if (xa != NULL)\n ext = PyTuple_Size((PyObject *)xa);\n if(ext>0) {\n fprintf(stderr,\"extra arguments tuple cannot be used with PyCapsule call-back\\n\");\n goto capi_fail;\n }\n tmp_fun = fun;\n Py_INCREF(tmp_fun);\n }\n }\n\n if (tmp_fun == NULL) {\n fprintf(stderr,\n \"Call-back argument must be function|instance|instance.__call__|f2py-function \"\n \"but got %s.\\n\",\n ((fun == NULL) ? \"NULL\" : Py_TYPE(fun)->tp_name));\n goto capi_fail;\n }\n\n if (PyObject_HasAttrString(tmp_fun,\"__code__\")) {\n if (PyObject_HasAttrString(tmp = PyObject_GetAttrString(tmp_fun,\"__code__\"),\"co_argcount\")) {\n PyObject *tmp_argcount = PyObject_GetAttrString(tmp,\"co_argcount\");\n Py_DECREF(tmp);\n if (tmp_argcount == NULL) {\n goto capi_fail;\n }\n tot = PyLong_AsSsize_t(tmp_argcount) - di;\n Py_DECREF(tmp_argcount);\n }\n }\n /* Get the number of optional arguments */\n if (PyObject_HasAttrString(tmp_fun,\"__defaults__\")) {\n if (PyTuple_Check(tmp = PyObject_GetAttrString(tmp_fun,\"__defaults__\")))\n opt = PyTuple_Size(tmp);\n Py_XDECREF(tmp);\n }\n /* Get the number of extra arguments */\n if (xa != NULL)\n ext = PyTuple_Size((PyObject *)xa);\n /* Calculate the size of call-backs argument list */\n siz = MIN(maxnofargs+ext,tot);\n *nofargs = MAX(0,siz-ext);\n\n#ifdef DEBUGCFUNCS\n fprintf(stderr,\n \"debug-capi:create_cb_arglist:maxnofargs(-nofoptargs),\"\n \"tot,opt,ext,siz,nofargs = %d(-%d), %zd, %zd, %zd, %zd, %d\\n\",\n maxnofargs, nofoptargs, tot, opt, ext, siz, *nofargs);\n#endif\n\n if (siz < tot-opt) {\n fprintf(stderr,\n \"create_cb_arglist: Failed to build argument list \"\n \"(siz) with enough arguments (tot-opt) required by \"\n \"user-supplied function (siz,tot,opt=%zd, %zd, %zd).\\n\",\n siz, tot, opt);\n goto capi_fail;\n }\n\n /* Initialize argument list */\n *args = (PyTupleObject 
*)PyTuple_New(siz);\n for (i=0;i<*nofargs;i++) {\n Py_INCREF(Py_None);\n PyTuple_SET_ITEM((PyObject *)(*args),i,Py_None);\n }\n if (xa != NULL)\n for (i=(*nofargs);i<siz;i++) {\n tmp = PyTuple_GetItem((PyObject *)xa,i-(*nofargs));\n Py_INCREF(tmp);\n PyTuple_SET_ITEM(*args,i,tmp);\n }\n CFUNCSMESS(\"create_cb_arglist-end\\n\");\n Py_DECREF(tmp_fun);\n return 1;\n\ncapi_fail:\n if (PyErr_Occurred() == NULL)\n PyErr_SetString(#modulename#_error, errmess);\n Py_XDECREF(tmp_fun);\n return 0;\n}\n"""\n\n\ndef buildcfuncs():\n from .capi_maps import c2capi_map\n for k in c2capi_map.keys():\n m = f'pyarr_from_p_{k}1'\n cppmacros[\n m] = f'#define {m}(v) (PyArray_SimpleNewFromData(0,NULL,{c2capi_map[k]},(char *)v))'\n k = 'string'\n m = f'pyarr_from_p_{k}1'\n # NPY_CHAR compatibility, NPY_STRING with itemsize 1\n cppmacros[\n m] = f'#define {m}(v,dims) (PyArray_New(&PyArray_Type, 1, dims, NPY_STRING, NULL, v, 1, NPY_ARRAY_CARRAY, NULL))'\n\n\n############ Auxiliary functions for sorting needs ###################\n\ndef append_needs(need, flag=1):\n # This function modifies the contents of the global `outneeds` dict.\n if isinstance(need, list):\n for n in need:\n append_needs(n, flag)\n elif isinstance(need, str):\n if not need:\n return\n if need in includes0:\n n = 'includes0'\n elif need in includes:\n n = 'includes'\n elif need in typedefs:\n n = 'typedefs'\n elif need in typedefs_generated:\n n = 'typedefs_generated'\n elif need in cppmacros:\n n = 'cppmacros'\n elif need in cfuncs:\n n = 'cfuncs'\n elif need in callbacks:\n n = 'callbacks'\n elif need in f90modhooks:\n n = 'f90modhooks'\n elif need in commonhooks:\n n = 'commonhooks'\n else:\n errmess(f'append_needs: unknown need {repr(need)}\n')\n return\n if need in outneeds[n]:\n return\n if flag:\n tmp = {}\n if need in needs:\n for nn in needs[need]:\n t = append_needs(nn, 0)\n if isinstance(t, dict):\n for nnn in t.keys():\n if nnn in tmp:\n tmp[nnn] = tmp[nnn] + t[nnn]\n else:\n tmp[nnn] = t[nnn]\n for nn in 
tmp.keys():\n for nnn in tmp[nn]:\n if nnn not in outneeds[nn]:\n outneeds[nn] = [nnn] + outneeds[nn]\n outneeds[n].append(need)\n else:\n tmp = {}\n if need in needs:\n for nn in needs[need]:\n t = append_needs(nn, flag)\n if isinstance(t, dict):\n for nnn in t.keys():\n if nnn in tmp:\n tmp[nnn] = t[nnn] + tmp[nnn]\n else:\n tmp[nnn] = t[nnn]\n if n not in tmp:\n tmp[n] = []\n tmp[n].append(need)\n return tmp\n else:\n errmess(f'append_needs: expected list or string but got :{repr(need)}\n')\n\n\ndef get_needs():\n # This function modifies the contents of the global `outneeds` dict.\n res = {}\n for n in outneeds.keys():\n out = []\n saveout = copy.copy(outneeds[n])\n while len(outneeds[n]) > 0:\n if outneeds[n][0] not in needs:\n out.append(outneeds[n][0])\n del outneeds[n][0]\n else:\n flag = 0\n for k in outneeds[n][1:]:\n if k in needs[outneeds[n][0]]:\n flag = 1\n break\n if flag:\n outneeds[n] = outneeds[n][1:] + [outneeds[n][0]]\n else:\n out.append(outneeds[n][0])\n del outneeds[n][0]\n if saveout and (0 not in map(lambda x, y: x == y, saveout, outneeds[n])) \\n and outneeds[n] != []:\n print(n, saveout)\n errmess(\n 'get_needs: no progress in sorting needs, probably circular dependence, skipping.\n')\n out = out + saveout\n break\n saveout = copy.copy(outneeds[n])\n if out == []:\n out = [n]\n res[n] = out\n return res\n
.venv\Lib\site-packages\numpy\f2py\cfuncs.py
cfuncs.py
Python
54,223
0.75
0.165067
0.218814
python-kit
580
2024-03-20T17:37:18.859872
GPL-3.0
false
7e54650b534bc2ee5deba9d55eac998a
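The `string_from_pyobj` helper in `cfuncs.py` above coerces an arbitrary Python object into a fixed-width buffer: inputs longer than `len` are truncated, shorter ones are padded (the C code pads with NULs and leaves blank-filling to the caller, since Fortran treats trailing blanks as insignificant). A minimal pure-Python sketch of that truncate-or-pad policy (the name `to_fixed_width` is ours, not part of f2py):

```python
def to_fixed_width(obj, width):
    """Coerce obj to exactly `width` bytes, mimicking string_from_pyobj:
    longer inputs are cut at `width`, shorter ones are padded (blanks
    here, where the C helper pads with NULs for the caller to blank).
    Non-bytes inputs go through str(), as the C code goes through
    PyObject_Str for objects that are neither bytes nor str."""
    data = obj if isinstance(obj, bytes) else str(obj).encode("ascii")
    return data[:width].ljust(width, b" ")
```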
from typing import Final, TypeAlias\n\nfrom .__version__ import version\n\n###\n\n_NeedListDict: TypeAlias = dict[str, list[str]]\n_NeedDict: TypeAlias = dict[str, str]\n\n###\n\nf2py_version: Final = version\n\noutneeds: Final[_NeedListDict] = ...\nneeds: Final[_NeedListDict] = ...\n\nincludes0: Final[_NeedDict] = ...\nincludes: Final[_NeedDict] = ...\nuserincludes: Final[_NeedDict] = ...\ntypedefs: Final[_NeedDict] = ...\ntypedefs_generated: Final[_NeedDict] = ...\ncppmacros: Final[_NeedDict] = ...\ncfuncs: Final[_NeedDict] = ...\ncallbacks: Final[_NeedDict] = ...\nf90modhooks: Final[_NeedDict] = ...\ncommonhooks: Final[_NeedDict] = ...\n\ndef errmess(s: str) -> None: ...\ndef buildcfuncs() -> None: ...\ndef get_needs() -> _NeedListDict: ...\ndef append_needs(need: str | list[str], flag: int = 1) -> _NeedListDict: ...\n
.venv\Lib\site-packages\numpy\f2py\cfuncs.pyi
cfuncs.pyi
Other
833
0.95
0.129032
0.086957
node-utils
591
2024-12-15T19:21:16.736627
Apache-2.0
false
e8747be4fc66ec50d656810e524f9c1d
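The `append_needs`/`get_needs` pair stubbed in `cfuncs.pyi` (and defined in `cfuncs.py` above) keeps each category of C helpers in dependency order: `get_needs` rotates an item to the back of the list while anything later in the list is among its dependencies, and bails out when a full pass makes no progress (circular dependence). A standalone sketch of that ordering loop (`sort_needs` is a hypothetical simplification, not an f2py function):

```python
def sort_needs(order, deps):
    """Reorder `order` so every item appears after its dependencies.
    `deps` maps an item to the names it needs. Mirrors the rotation
    loop in get_needs: defer an item while a later item is one of its
    dependencies; give up after a stall (circular dependence)."""
    out = []
    pending = list(order)
    stalls = 0
    while pending:
        head = pending[0]
        if any(k in deps.get(head, ()) for k in pending[1:]):
            # head still needs something later in the list: rotate it back
            pending = pending[1:] + [head]
            stalls += 1
            if stalls > len(pending):  # full cycle with no progress
                out.extend(pending)    # circular dependence: emit as-is
                break
        else:
            out.append(pending.pop(0))
            stalls = 0
    return out
```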
"""\nBuild common block mechanism for f2py2e.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nfrom . import __version__\n\nf2py_version = __version__.version\n\nfrom . import capi_maps, func2subr\nfrom .auxfuncs import getuseblocks, hasbody, hascommon, hasnote, isintent_hide, outmess\nfrom .crackfortran import rmbadname\n\n\ndef findcommonblocks(block, top=1):\n ret = []\n if hascommon(block):\n for key, value in block['common'].items():\n vars_ = {v: block['vars'][v] for v in value}\n ret.append((key, value, vars_))\n elif hasbody(block):\n for b in block['body']:\n ret = ret + findcommonblocks(b, 0)\n if top:\n tret = []\n names = []\n for t in ret:\n if t[0] not in names:\n names.append(t[0])\n tret.append(t)\n return tret\n return ret\n\n\ndef buildhooks(m):\n ret = {'commonhooks': [], 'initcommonhooks': [],\n 'docs': ['"COMMON blocks:\\n"']}\n fwrap = ['']\n\n def fadd(line, s=fwrap):\n s[0] = f'{s[0]}\n {line}'\n chooks = ['']\n\n def cadd(line, s=chooks):\n s[0] = f'{s[0]}\n{line}'\n ihooks = ['']\n\n def iadd(line, s=ihooks):\n s[0] = f'{s[0]}\n{line}'\n doc = ['']\n\n def dadd(line, s=doc):\n s[0] = f'{s[0]}\n{line}'\n for (name, vnames, vars) in findcommonblocks(m):\n lower_name = name.lower()\n hnames, inames = [], []\n for n in vnames:\n if isintent_hide(vars[n]):\n hnames.append(n)\n else:\n inames.append(n)\n if hnames:\n outmess('\t\tConstructing COMMON block support for "%s"...\n\t\t %s\n\t\t Hidden: %s\n' % (\n name, ','.join(inames), ','.join(hnames)))\n else:\n outmess('\t\tConstructing COMMON block support for "%s"...\n\t\t %s\n' % (\n name, ','.join(inames)))\n fadd(f'subroutine f2pyinit{name}(setupfunc)')\n for usename in getuseblocks(m):\n fadd(f'use {usename}')\n fadd('external setupfunc')\n for n in vnames:\n 
fadd(func2subr.var2fixfortran(vars, n))\n if name == '_BLNK_':\n fadd(f"common {','.join(vnames)}")\n else:\n fadd(f"common /{name}/ {','.join(vnames)}")\n fadd(f"call setupfunc({','.join(inames)})")\n fadd('end\n')\n cadd('static FortranDataDef f2py_%s_def[] = {' % (name))\n idims = []\n for n in inames:\n ct = capi_maps.getctype(vars[n])\n elsize = capi_maps.get_elsize(vars[n])\n at = capi_maps.c2capi_map[ct]\n dm = capi_maps.getarrdims(n, vars[n])\n if dm['dims']:\n idims.append(f"({dm['dims']})")\n else:\n idims.append('')\n dms = dm['dims'].strip()\n if not dms:\n dms = '-1'\n cadd('\t{\"%s\",%s,{{%s}},%s, %s},'\n % (n, dm['rank'], dms, at, elsize))\n cadd('\t{NULL}\n};')\n inames1 = rmbadname(inames)\n inames1_tps = ','.join(['char *' + s for s in inames1])\n cadd('static void f2py_setup_%s(%s) {' % (name, inames1_tps))\n cadd('\tint i_f2py=0;')\n for n in inames1:\n cadd(f'\tf2py_{name}_def[i_f2py++].data = {n};')\n cadd('}')\n if '_' in lower_name:\n F_FUNC = 'F_FUNC_US'\n else:\n F_FUNC = 'F_FUNC'\n cadd('extern void %s(f2pyinit%s,F2PYINIT%s)(void(*)(%s));'\n % (F_FUNC, lower_name, name.upper(),\n ','.join(['char*'] * len(inames1))))\n cadd('static void f2py_init_%s(void) {' % name)\n cadd('\t%s(f2pyinit%s,F2PYINIT%s)(f2py_setup_%s);'\n % (F_FUNC, lower_name, name.upper(), name))\n cadd('}\n')\n iadd(f'\ttmp = PyFortranObject_New(f2py_{name}_def,f2py_init_{name});')\n iadd('\tif (tmp == NULL) return NULL;')\n iadd(f'\tif (F2PyDict_SetItemString(d, "{name}", tmp) == -1) return NULL;')\n iadd('\tPy_DECREF(tmp);')\n tname = name.replace('_', '\\_')\n dadd('\\subsection{Common block \\texttt{%s}}\n' % (tname))\n dadd('\\begin{description}')\n for n in inames:\n dadd('\\item[]{{}\\verb@%s@{}}' %\n (capi_maps.getarrdocsign(n, vars[n])))\n if hasnote(vars[n]):\n note = vars[n]['note']\n if isinstance(note, list):\n note = '\n'.join(note)\n dadd(f'--- {note}')\n dadd('\\end{description}')\n ret['docs'].append(\n f"\"\t/{name}/ {','.join(map(lambda v, d: v + d, 
inames, idims))}\\n\"")\n ret['commonhooks'] = chooks\n ret['initcommonhooks'] = ihooks\n ret['latexdoc'] = doc[0]\n if len(ret['docs']) <= 1:\n ret['docs'] = ''\n return ret, fwrap[0]\n
.venv\Lib\site-packages\numpy\f2py\common_rules.py
common_rules.py
Python
5,173
0.85
0.230769
0
vue-tools
288
2024-07-24T00:29:42.905406
MIT
false
1be081adc27cb63519619afde73e7a08
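`findcommonblocks` in `common_rules.py` above recurses through a nested f2py block tree ('common' entries, else the 'body' children) and de-duplicates block names only at the top level. A simplified stand-in showing just that traversal shape (the dict layout here is a minimal assumption, omitting the per-variable `vars` lookup the real function also carries along):

```python
def find_blocks(block, top=True):
    """Collect (name, members) pairs from a nested block dict,
    recursing through 'body' and de-duplicating names at the top
    level, in the manner of findcommonblocks above."""
    found = []
    if 'common' in block:
        for name, members in block['common'].items():
            found.append((name, members))
    elif 'body' in block:
        for child in block['body']:
            found += find_blocks(child, top=False)
    if top:
        seen, unique = set(), []
        for name, members in found:
            if name not in seen:
                seen.add(name)
                unique.append((name, members))
        return unique
    return found
```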
from collections.abc import Mapping\nfrom typing import Any, Final\n\nfrom .__version__ import version\n\nf2py_version: Final = version\n\ndef findcommonblocks(block: Mapping[str, object], top: int = 1) -> list[tuple[str, list[str], dict[str, Any]]]: ...\ndef buildhooks(m: Mapping[str, object]) -> tuple[dict[str, Any], str]: ...\n
.venv\Lib\site-packages\numpy\f2py\common_rules.pyi
common_rules.pyi
Other
332
0.85
0.222222
0
vue-tools
267
2023-09-23T21:48:06.689983
MIT
false
bca7f649d43da0923092c5f31a5df342
import re\nfrom collections.abc import Callable, Iterable, Mapping\nfrom typing import IO, Any, Concatenate, Final, Never, ParamSpec, TypeAlias, overload\nfrom typing import Literal as L\n\nfrom _typeshed import StrOrBytesPath, StrPath\n\nfrom .__version__ import version\nfrom .auxfuncs import isintent_dict as isintent_dict\n\n###\n\n_Tss = ParamSpec("_Tss")\n\n_VisitResult: TypeAlias = list[Any] | dict[str, Any] | None\n_VisitItem: TypeAlias = tuple[str | None, _VisitResult]\n_VisitFunc: TypeAlias = Callable[Concatenate[_VisitItem, list[_VisitItem], _VisitResult, _Tss], _VisitItem | None]\n\n###\n\nCOMMON_FREE_EXTENSIONS: Final[list[str]] = ...\nCOMMON_FIXED_EXTENSIONS: Final[list[str]] = ...\n\nf2py_version: Final = version\ntabchar: Final[str] = " "\n\nf77modulename: str\npyffilename: str\nsourcecodeform: L["fix", "free"]\nstrictf77: L[0, 1]\nquiet: L[0, 1]\nverbose: L[0, 1, 2]\nskipemptyends: L[0, 1]\nignorecontains: L[1]\ndolowercase: L[1]\n\nbeginpattern: str | re.Pattern[str]\ncurrentfilename: str\nfilepositiontext: str\nexpectbegin: L[0, 1]\ngotnextfile: L[0, 1]\nneededmodule: int\nskipblocksuntil: int\ngroupcounter: int\ngroupname: dict[int, str] | str\ngroupcache: dict[int, dict[str, Any]] | None\ngrouplist: dict[int, list[dict[str, Any]]] | None\nprevious_context: tuple[str, str, int] | None\n\nf90modulevars: dict[str, dict[str, Any]] = {}\ndebug: list[Never] = []\ninclude_paths: list[str] = []\nonlyfuncs: list[str] = []\nskipfuncs: list[str] = []\nskipfunctions: Final[list[str]] = []\nusermodules: Final[list[dict[str, Any]]] = []\n\ndefaultimplicitrules: Final[dict[str, dict[str, str]]] = {}\nbadnames: Final[dict[str, str]] = {}\ninvbadnames: Final[dict[str, str]] = {}\n\nbeforethisafter: Final[str] = ...\nfortrantypes: Final[str] = ...\ngroupbegins77: Final[str] = ...\ngroupbegins90: Final[str] = ...\ngroupends: Final[str] = ...\nendifs: Final[str] = ...\nmoduleprocedures: Final[str] = ...\n\nbeginpattern77: Final[tuple[re.Pattern[str], L["begin"]]] = 
...\nbeginpattern90: Final[tuple[re.Pattern[str], L["begin"]]] = ...\ncallpattern: Final[tuple[re.Pattern[str], L["call"]]] = ...\ncallfunpattern: Final[tuple[re.Pattern[str], L["callfun"]]] = ...\ncommonpattern: Final[tuple[re.Pattern[str], L["common"]]] = ...\ncontainspattern: Final[tuple[re.Pattern[str], L["contains"]]] = ...\ndatapattern: Final[tuple[re.Pattern[str], L["data"]]] = ...\ndimensionpattern: Final[tuple[re.Pattern[str], L["dimension"]]] = ...\nendifpattern: Final[tuple[re.Pattern[str], L["endif"]]] = ...\nendpattern: Final[tuple[re.Pattern[str], L["end"]]] = ...\nentrypattern: Final[tuple[re.Pattern[str], L["entry"]]] = ...\nexternalpattern: Final[tuple[re.Pattern[str], L["external"]]] = ...\nf2pyenhancementspattern: Final[tuple[re.Pattern[str], L["f2pyenhancements"]]] = ...\nformatpattern: Final[tuple[re.Pattern[str], L["format"]]] = ...\nfunctionpattern: Final[tuple[re.Pattern[str], L["begin"]]] = ...\nimplicitpattern: Final[tuple[re.Pattern[str], L["implicit"]]] = ...\nintentpattern: Final[tuple[re.Pattern[str], L["intent"]]] = ...\nintrinsicpattern: Final[tuple[re.Pattern[str], L["intrinsic"]]] = ...\noptionalpattern: Final[tuple[re.Pattern[str], L["optional"]]] = ...\nmoduleprocedurepattern: Final[tuple[re.Pattern[str], L["moduleprocedure"]]] = ...\nmultilinepattern: Final[tuple[re.Pattern[str], L["multiline"]]] = ...\nparameterpattern: Final[tuple[re.Pattern[str], L["parameter"]]] = ...\nprivatepattern: Final[tuple[re.Pattern[str], L["private"]]] = ...\npublicpattern: Final[tuple[re.Pattern[str], L["public"]]] = ...\nrequiredpattern: Final[tuple[re.Pattern[str], L["required"]]] = ...\nsubroutinepattern: Final[tuple[re.Pattern[str], L["begin"]]] = ...\ntypespattern: Final[tuple[re.Pattern[str], L["type"]]] = ...\nusepattern: Final[tuple[re.Pattern[str], L["use"]]] = ...\n\nanalyzeargs_re_1: Final[re.Pattern[str]] = ...\ncallnameargspattern: Final[re.Pattern[str]] = ...\ncharselector: Final[re.Pattern[str]] = ...\ncrackline_bind_1: 
Final[re.Pattern[str]] = ...\ncrackline_bindlang: Final[re.Pattern[str]] = ...\ncrackline_re_1: Final[re.Pattern[str]] = ...\ndetermineexprtype_re_1: Final[re.Pattern[str]] = ...\ndetermineexprtype_re_2: Final[re.Pattern[str]] = ...\ndetermineexprtype_re_3: Final[re.Pattern[str]] = ...\ndetermineexprtype_re_4: Final[re.Pattern[str]] = ...\ndetermineexprtype_re_5: Final[re.Pattern[str]] = ...\ngetlincoef_re_1: Final[re.Pattern[str]] = ...\nkindselector: Final[re.Pattern[str]] = ...\nlenarraypattern: Final[re.Pattern[str]] = ...\nlenkindpattern: Final[re.Pattern[str]] = ...\nnamepattern: Final[re.Pattern[str]] = ...\nnameargspattern: Final[re.Pattern[str]] = ...\noperatorpattern: Final[re.Pattern[str]] = ...\nreal16pattern: Final[re.Pattern[str]] = ...\nreal8pattern: Final[re.Pattern[str]] = ...\nselectpattern: Final[re.Pattern[str]] = ...\ntypedefpattern: Final[re.Pattern[str]] = ...\ntypespattern4implicit: Final[re.Pattern[str]] = ...\nword_pattern: Final[re.Pattern[str]] = ...\n\npost_processing_hooks: Final[list[_VisitFunc[...]]] = []\n\n#\ndef outmess(line: str, flag: int = 1) -> None: ...\ndef reset_global_f2py_vars() -> None: ...\n\n#\ndef rmbadname1(name: str) -> str: ...\ndef undo_rmbadname1(name: str) -> str: ...\ndef rmbadname(names: Iterable[str]) -> list[str]: ...\ndef undo_rmbadname(names: Iterable[str]) -> list[str]: ...\n\n#\ndef openhook(filename: StrPath, mode: str) -> IO[Any]: ...\ndef is_free_format(fname: StrPath) -> bool: ...\ndef readfortrancode(\n ffile: StrOrBytesPath | Iterable[StrOrBytesPath],\n dowithline: Callable[[str, int], object] = ...,\n istop: int = 1,\n) -> None: ...\n\n#\ndef split_by_unquoted(line: str, characters: str) -> tuple[str, str]: ...\n\n#\ndef crackline(line: str, reset: int = 0) -> None: ...\ndef markouterparen(line: str) -> str: ...\ndef markoutercomma(line: str, comma: str = ",") -> str: ...\ndef unmarkouterparen(line: str) -> str: ...\ndef appenddecl(decl: Mapping[str, object] | None, decl2: Mapping[str, object] | 
None, force: int = 1) -> dict[str, Any]: ...\n\n#\ndef parse_name_for_bind(line: str) -> tuple[str, str | None]: ...\ndef analyzeline(m: re.Match[str], case: str, line: str) -> None: ...\ndef appendmultiline(group: dict[str, Any], context_name: str, ml: str) -> None: ...\ndef cracktypespec0(typespec: str, ll: str | None) -> tuple[str, str | None, str | None, str | None]: ...\n\n#\ndef removespaces(expr: str) -> str: ...\ndef markinnerspaces(line: str) -> str: ...\ndef updatevars(typespec: str, selector: str | None, attrspec: str, entitydecl: str) -> str: ...\ndef cracktypespec(typespec: str, selector: str | None) -> tuple[dict[str, str] | None, dict[str, str] | None, str | None]: ...\n\n#\ndef setattrspec(decl: dict[str, list[str]], attr: str | None, force: int = 0) -> dict[str, list[str]]: ...\ndef setkindselector(decl: dict[str, dict[str, str]], sel: dict[str, str], force: int = 0) -> dict[str, dict[str, str]]: ...\ndef setcharselector(decl: dict[str, dict[str, str]], sel: dict[str, str], force: int = 0) -> dict[str, dict[str, str]]: ...\ndef getblockname(block: Mapping[str, object], unknown: str = "unknown") -> str: ...\ndef setmesstext(block: Mapping[str, object]) -> None: ...\ndef get_usedict(block: Mapping[str, object]) -> dict[str, str]: ...\ndef get_useparameters(block: Mapping[str, object], param_map: Mapping[str, str] | None = None) -> dict[str, str]: ...\n\n#\n@overload\ndef postcrack2(\n block: dict[str, Any],\n tab: str = "",\n param_map: Mapping[str, str] | None = None,\n) -> dict[str, str | Any]: ...\n@overload\ndef postcrack2(\n block: list[dict[str, Any]],\n tab: str = "",\n param_map: Mapping[str, str] | None = None,\n) -> list[dict[str, str | Any]]: ...\n\n#\n@overload\ndef postcrack(block: dict[str, Any], args: Mapping[str, str] | None = None, tab: str = "") -> dict[str, Any]: ...\n@overload\ndef postcrack(block: list[dict[str, str]], args: Mapping[str, str] | None = None, tab: str = "") -> list[dict[str, Any]]: ...\n\n#\ndef sortvarnames(vars: 
Mapping[str, object]) -> list[str]: ...\ndef analyzecommon(block: Mapping[str, object]) -> dict[str, Any]: ...\ndef analyzebody(block: Mapping[str, object], args: Mapping[str, str], tab: str = "") -> list[dict[str, Any]]: ...\ndef buildimplicitrules(block: Mapping[str, object]) -> tuple[dict[str, dict[str, str]], dict[str, str]]: ...\ndef myeval(e: str, g: object | None = None, l: object | None = None) -> float: ...\n\n#\ndef getlincoef(e: str, xset: set[str]) -> tuple[float | None, float | None, str | None]: ...\n\n#\ndef get_sorted_names(vars: Mapping[str, Mapping[str, str]]) -> list[str]: ...\ndef get_parameters(vars: Mapping[str, Mapping[str, str]], global_params: dict[str, str] = {}) -> dict[str, str]: ...\n\n#\ndef analyzevars(block: Mapping[str, Any]) -> dict[str, dict[str, str]]: ...\n\n#\ndef param_eval(v: str, g_params: dict[str, Any], params: Mapping[str, object], dimspec: str | None = None) -> dict[str, Any]: ...\ndef param_parse(d: str, params: Mapping[str, str]) -> str: ...\ndef expr2name(a: str, block: Mapping[str, object], args: list[str] = []) -> str: ...\ndef analyzeargs(block: Mapping[str, object]) -> dict[str, Any]: ...\n\n#\ndef determineexprtype(expr: str, vars: Mapping[str, object], rules: dict[str, Any] = {}) -> dict[str, Any]: ...\ndef crack2fortrangen(block: Mapping[str, object], tab: str = "\n", as_interface: bool = False) -> str: ...\ndef common2fortran(common: Mapping[str, object], tab: str = "") -> str: ...\ndef use2fortran(use: Mapping[str, object], tab: str = "") -> str: ...\ndef true_intent_list(var: dict[str, list[str]]) -> list[str]: ...\ndef vars2fortran(\n block: Mapping[str, Mapping[str, object]],\n vars: Mapping[str, object],\n args: Mapping[str, str],\n tab: str = "",\n as_interface: bool = False,\n) -> str: ...\n\n#\ndef crackfortran(files: StrOrBytesPath | Iterable[StrOrBytesPath]) -> list[dict[str, Any]]: ...\ndef crack2fortran(block: Mapping[str, Any]) -> str: ...\n\n#\ndef traverse(\n obj: tuple[str | None, 
_VisitResult],\n visit: _VisitFunc[_Tss],\n parents: list[tuple[str | None, _VisitResult]] = [],\n result: list[Any] | dict[str, Any] | None = None,\n *args: _Tss.args,\n **kwargs: _Tss.kwargs,\n) -> _VisitItem | _VisitResult: ...\n\n#\ndef character_backward_compatibility_hook(\n item: _VisitItem,\n parents: list[_VisitItem],\n result: object, # ignored\n *args: object, # ignored\n **kwargs: object, # ignored\n) -> _VisitItem | None: ...\n\n# namespace pollution\nc: str\nn: str\n
.venv\Lib\site-packages\numpy\f2py\crackfortran.pyi
crackfortran.pyi
Other
10,534
0.95
0.22093
0.117117
python-kit
756
2024-11-23T13:28:16.942524
Apache-2.0
false
2c981b66ed15983dcf09dd6c46b95795
#!/usr/bin/env python3\nimport os\nimport sys\nimport tempfile\n\n\ndef run():\n _path = os.getcwd()\n os.chdir(tempfile.gettempdir())\n print('------')\n print(f'os.name={os.name!r}')\n print('------')\n print(f'sys.platform={sys.platform!r}')\n print('------')\n print('sys.version:')\n print(sys.version)\n print('------')\n print('sys.prefix:')\n print(sys.prefix)\n print('------')\n print(f"sys.path={':'.join(sys.path)!r}")\n print('------')\n\n try:\n import numpy\n has_newnumpy = 1\n except ImportError as e:\n print('Failed to import new numpy:', e)\n has_newnumpy = 0\n\n try:\n from numpy.f2py import f2py2e\n has_f2py2e = 1\n except ImportError as e:\n print('Failed to import f2py2e:', e)\n has_f2py2e = 0\n\n try:\n import numpy.distutils\n has_numpy_distutils = 2\n except ImportError:\n try:\n import numpy_distutils\n has_numpy_distutils = 1\n except ImportError as e:\n print('Failed to import numpy_distutils:', e)\n has_numpy_distutils = 0\n\n if has_newnumpy:\n try:\n print(f'Found new numpy version {numpy.__version__!r} in {numpy.__file__}')\n except Exception as msg:\n print('error:', msg)\n print('------')\n\n if has_f2py2e:\n try:\n print('Found f2py2e version %r in %s' %\n (f2py2e.__version__.version, f2py2e.__file__))\n except Exception as msg:\n print('error:', msg)\n print('------')\n\n if has_numpy_distutils:\n try:\n if has_numpy_distutils == 2:\n print('Found numpy.distutils version %r in %r' % (\n numpy.distutils.__version__,\n numpy.distutils.__file__))\n else:\n print('Found numpy_distutils version %r in %r' % (\n numpy_distutils.numpy_distutils_version.numpy_distutils_version,\n numpy_distutils.__file__))\n print('------')\n except Exception as msg:\n print('error:', msg)\n print('------')\n try:\n if has_numpy_distutils == 1:\n print(\n 'Importing numpy_distutils.command.build_flib ...', end=' ')\n import numpy_distutils.command.build_flib as build_flib\n print('ok')\n print('------')\n try:\n print(\n 'Checking availability of supported 
Fortran compilers:')\n for compiler_class in build_flib.all_compilers:\n compiler_class(verbose=1).is_available()\n print('------')\n except Exception as msg:\n print('error:', msg)\n print('------')\n except Exception as msg:\n print(\n 'error:', msg, '(ignore it, build_flib is obsolete for numpy.distutils 0.2.2 and up)')\n print('------')\n try:\n if has_numpy_distutils == 2:\n print('Importing numpy.distutils.fcompiler ...', end=' ')\n import numpy.distutils.fcompiler as fcompiler\n else:\n print('Importing numpy_distutils.fcompiler ...', end=' ')\n import numpy_distutils.fcompiler as fcompiler\n print('ok')\n print('------')\n try:\n print('Checking availability of supported Fortran compilers:')\n fcompiler.show_fcompilers()\n print('------')\n except Exception as msg:\n print('error:', msg)\n print('------')\n except Exception as msg:\n print('error:', msg)\n print('------')\n try:\n if has_numpy_distutils == 2:\n print('Importing numpy.distutils.cpuinfo ...', end=' ')\n from numpy.distutils.cpuinfo import cpuinfo\n print('ok')\n print('------')\n else:\n try:\n print(\n 'Importing numpy_distutils.command.cpuinfo ...', end=' ')\n from numpy_distutils.command.cpuinfo import cpuinfo\n print('ok')\n print('------')\n except Exception as msg:\n print('error:', msg, '(ignore it)')\n print('Importing numpy_distutils.cpuinfo ...', end=' ')\n from numpy_distutils.cpuinfo import cpuinfo\n print('ok')\n print('------')\n cpu = cpuinfo()\n print('CPU information:', end=' ')\n for name in dir(cpuinfo):\n if name[0] == '_' and name[1] != '_' and getattr(cpu, name[1:])():\n print(name[1:], end=' ')\n print('------')\n except Exception as msg:\n print('error:', msg)\n print('------')\n os.chdir(_path)\n\n\nif __name__ == "__main__":\n run()\n
.venv\Lib\site-packages\numpy\f2py\diagnose.py
diagnose.py
Python
5,224
0.95
0.174497
0.007194
python-kit
688
2023-11-02T22:16:20.288129
Apache-2.0
false
bb067549e77270ff8fc4666cb5878c2b
def run() -> None: ...\n
.venv\Lib\site-packages\numpy\f2py\diagnose.pyi
diagnose.pyi
Other
24
0.65
1
0
awesome-app
566
2024-06-16T00:11:26.733850
BSD-3-Clause
false
c6a9c26f9cf1370e471f0e2274771af1
"""\n\nf2py2e - Fortran to Python C/API generator. 2nd Edition.\n See __usage__ below.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\nimport argparse\nimport os\nimport pprint\nimport re\nimport sys\n\nfrom numpy.f2py._backends import f2py_build_generator\n\nfrom . import (\n __version__,\n auxfuncs,\n capi_maps,\n cb_rules,\n cfuncs,\n crackfortran,\n f90mod_rules,\n rules,\n)\nfrom .cfuncs import errmess\n\nf2py_version = __version__.version\nnumpy_version = __version__.version\n\n# outmess=sys.stdout.write\nshow = pprint.pprint\noutmess = auxfuncs.outmess\nMESON_ONLY_VER = (sys.version_info >= (3, 12))\n\n__usage__ =\\nf"""Usage:\n\n1) To construct extension module sources:\n\n f2py [<options>] <fortran files> [[[only:]||[skip:]] \\\n <fortran functions> ] \\\n [: <fortran files> ...]\n\n2) To compile fortran files and build extension modules:\n\n f2py -c [<options>, <build_flib options>, <extra options>] <fortran files>\n\n3) To generate signature files:\n\n f2py -h <filename.pyf> ...< same options as in (1) >\n\nDescription: This program generates a Python C/API file (<modulename>module.c)\n that contains wrappers for given fortran functions so that they\n can be called from Python. With the -c option the corresponding\n extension modules are built.\n\nOptions:\n\n -h <filename> Write signatures of the fortran routines to file <filename>\n and exit. You can then edit <filename> and use it instead\n of <fortran files>. If <filename>==stdout then the\n signatures are printed to stdout.\n <fortran functions> Names of fortran routines for which Python C/API\n functions will be generated. 
Default is all that are found\n in <fortran files>.\n <fortran files> Paths to fortran/signature files that will be scanned for\n <fortran functions> in order to determine their signatures.\n skip: Ignore fortran functions that follow until `:'.\n only: Use only fortran functions that follow until `:'.\n : Get back to <fortran files> mode.\n\n -m <modulename> Name of the module; f2py generates a Python/C API\n file <modulename>module.c or extension module <modulename>.\n Default is 'untitled'.\n\n '-include<header>' Writes additional headers in the C wrapper, can be passed\n multiple times, generates #include <header> each time.\n\n --[no-]lower Do [not] lower the cases in <fortran files>. By default,\n --lower is assumed with -h key, and --no-lower without -h key.\n\n --build-dir <dirname> All f2py generated files are created in <dirname>.\n Default is tempfile.mkdtemp().\n\n --overwrite-signature Overwrite existing signature file.\n\n --[no-]latex-doc Create (or not) <modulename>module.tex.\n Default is --no-latex-doc.\n --short-latex Create 'incomplete' LaTeX document (without commands\n \\documentclass, \\tableofcontents, and \\begin{{document}},\n \\end{{document}}).\n\n --[no-]rest-doc Create (or not) <modulename>module.rst.\n Default is --no-rest-doc.\n\n --debug-capi Create C/API code that reports the state of the wrappers\n during runtime. Useful for debugging.\n\n --[no-]wrap-functions Create Fortran subroutine wrappers to Fortran 77\n functions. --wrap-functions is default because it ensures\n maximum portability/compiler independence.\n\n --[no-]freethreading-compatible Create a module that declares it does or\n doesn't require the GIL. The default is\n --freethreading-compatible for backward\n compatibility. Inspect the Fortran code you are wrapping for\n thread safety issues before passing\n --no-freethreading-compatible, as f2py does not analyze\n fortran code for thread safety issues.\n\n --include-paths <path1>:<path2>:... 
Search include files from the given\n directories.\n\n --help-link [..] List system resources found by system_info.py. See also\n --link-<resource> switch below. [..] is optional list\n of resources names. E.g. try 'f2py --help-link lapack_opt'.\n\n --f2cmap <filename> Load Fortran-to-Python KIND specification from the given\n file. Default: .f2py_f2cmap in current directory.\n\n --quiet Run quietly.\n --verbose Run with extra verbosity.\n --skip-empty-wrappers Only generate wrapper files when needed.\n -v Print f2py version ID and exit.\n\n\nbuild backend options (only effective with -c)\n[NO_MESON] is used to indicate an option not meant to be used\nwith the meson backend or above Python 3.12:\n\n --fcompiler= Specify Fortran compiler type by vendor [NO_MESON]\n --compiler= Specify distutils C compiler type [NO_MESON]\n\n --help-fcompiler List available Fortran compilers and exit [NO_MESON]\n --f77exec= Specify the path to F77 compiler [NO_MESON]\n --f90exec= Specify the path to F90 compiler [NO_MESON]\n --f77flags= Specify F77 compiler flags\n --f90flags= Specify F90 compiler flags\n --opt= Specify optimization flags [NO_MESON]\n --arch= Specify architecture specific optimization flags [NO_MESON]\n --noopt Compile without optimization [NO_MESON]\n --noarch Compile without arch-dependent optimization [NO_MESON]\n --debug Compile with debugging information\n\n --dep <dependency>\n Specify a meson dependency for the module. This may\n be passed multiple times for multiple dependencies.\n Dependencies are stored in a list for further processing.\n\n Example: --dep lapack --dep scalapack\n This will identify "lapack" and "scalapack" as dependencies\n and remove them from argv, leaving a dependencies list\n containing ["lapack", "scalapack"].\n\n --backend <backend_type>\n Specify the build backend for the compilation process.\n The supported backends are 'meson' and 'distutils'.\n If not specified, defaults to 'distutils'. 
On\n Python 3.12 or higher, the default is 'meson'.\n\nExtra options (only effective with -c):\n\n --link-<resource> Link extension module with <resource> as defined\n by numpy.distutils/system_info.py. E.g. to link\n with optimized LAPACK libraries (vecLib on MacOSX,\n ATLAS elsewhere), use --link-lapack_opt.\n See also --help-link switch. [NO_MESON]\n\n -L/path/to/lib/ -l<libname>\n -D<define> -U<name>\n -I/path/to/include/\n <filename>.o <filename>.so <filename>.a\n\n Using the following macros may be required with non-gcc Fortran\n compilers:\n -DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN\n\n When using -DF2PY_REPORT_ATEXIT, a performance report of F2PY\n interface is printed out at exit (platforms: Linux).\n\n When using -DF2PY_REPORT_ON_ARRAY_COPY=<int>, a message is\n sent to stderr whenever F2PY interface makes a copy of an\n array. Integer <int> sets the threshold for array sizes when\n a message should be shown.\n\nVersion: {f2py_version}\nnumpy Version: {numpy_version}\nLicense: NumPy license (see LICENSE.txt in the NumPy source code)\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nhttps://numpy.org/doc/stable/f2py/index.html\n"""\n\n\ndef scaninputline(inputline):\n files, skipfuncs, onlyfuncs, debug = [], [], [], []\n f, f2, f3, f5, f6, f8, f9, f10 = 1, 0, 0, 0, 0, 0, 0, 0\n verbose = 1\n emptygen = True\n dolc = -1\n dolatexdoc = 0\n dorestdoc = 0\n wrapfuncs = 1\n buildpath = '.'\n include_paths, freethreading_compatible, inputline = get_newer_options(inputline)\n signsfile, modulename = None, None\n options = {'buildpath': buildpath,\n 'coutput': None,\n 'f2py_wrapper_output': None}\n for l in inputline:\n if l == '':\n pass\n elif l == 'only:':\n f = 0\n elif l == 'skip:':\n f = -1\n elif l == ':':\n f = 1\n elif l[:8] == '--debug-':\n debug.append(l[8:])\n elif l == '--lower':\n dolc = 1\n elif l == '--build-dir':\n f6 = 1\n elif l == '--no-lower':\n dolc = 0\n elif l == 
'--quiet':\n verbose = 0\n elif l == '--verbose':\n verbose += 1\n elif l == '--latex-doc':\n dolatexdoc = 1\n elif l == '--no-latex-doc':\n dolatexdoc = 0\n elif l == '--rest-doc':\n dorestdoc = 1\n elif l == '--no-rest-doc':\n dorestdoc = 0\n elif l == '--wrap-functions':\n wrapfuncs = 1\n elif l == '--no-wrap-functions':\n wrapfuncs = 0\n elif l == '--short-latex':\n options['shortlatex'] = 1\n elif l == '--coutput':\n f8 = 1\n elif l == '--f2py-wrapper-output':\n f9 = 1\n elif l == '--f2cmap':\n f10 = 1\n elif l == '--overwrite-signature':\n options['h-overwrite'] = 1\n elif l == '-h':\n f2 = 1\n elif l == '-m':\n f3 = 1\n elif l[:2] == '-v':\n print(f2py_version)\n sys.exit()\n elif l == '--show-compilers':\n f5 = 1\n elif l[:8] == '-include':\n cfuncs.outneeds['userincludes'].append(l[9:-1])\n cfuncs.userincludes[l[9:-1]] = '#include ' + l[8:]\n elif l == '--skip-empty-wrappers':\n emptygen = False\n elif l[0] == '-':\n errmess(f'Unknown option {repr(l)}\n')\n sys.exit()\n elif f2:\n f2 = 0\n signsfile = l\n elif f3:\n f3 = 0\n modulename = l\n elif f6:\n f6 = 0\n buildpath = l\n elif f8:\n f8 = 0\n options["coutput"] = l\n elif f9:\n f9 = 0\n options["f2py_wrapper_output"] = l\n elif f10:\n f10 = 0\n options["f2cmap_file"] = l\n elif f == 1:\n try:\n with open(l):\n pass\n files.append(l)\n except OSError as detail:\n errmess(f'OSError: {detail!s}. Skipping file "{l!s}".\n')\n elif f == -1:\n skipfuncs.append(l)\n elif f == 0:\n onlyfuncs.append(l)\n if not f5 and not files and not modulename:\n print(__usage__)\n sys.exit()\n if not os.path.isdir(buildpath):\n if not verbose:\n outmess(f'Creating build directory {buildpath}\n')\n os.mkdir(buildpath)\n if signsfile:\n signsfile = os.path.join(buildpath, signsfile)\n if signsfile and os.path.isfile(signsfile) and 'h-overwrite' not in options:\n errmess(\n f'Signature file "{signsfile}" exists!!! 
Use --overwrite-signature to overwrite.\n')\n sys.exit()\n\n options['emptygen'] = emptygen\n options['debug'] = debug\n options['verbose'] = verbose\n if dolc == -1 and not signsfile:\n options['do-lower'] = 0\n else:\n options['do-lower'] = dolc\n if modulename:\n options['module'] = modulename\n if signsfile:\n options['signsfile'] = signsfile\n if onlyfuncs:\n options['onlyfuncs'] = onlyfuncs\n if skipfuncs:\n options['skipfuncs'] = skipfuncs\n options['dolatexdoc'] = dolatexdoc\n options['dorestdoc'] = dorestdoc\n options['wrapfuncs'] = wrapfuncs\n options['buildpath'] = buildpath\n options['include_paths'] = include_paths\n options['requires_gil'] = not freethreading_compatible\n options.setdefault('f2cmap_file', None)\n return files, options\n\n\ndef callcrackfortran(files, options):\n rules.options = options\n crackfortran.debug = options['debug']\n crackfortran.verbose = options['verbose']\n if 'module' in options:\n crackfortran.f77modulename = options['module']\n if 'skipfuncs' in options:\n crackfortran.skipfuncs = options['skipfuncs']\n if 'onlyfuncs' in options:\n crackfortran.onlyfuncs = options['onlyfuncs']\n crackfortran.include_paths[:] = options['include_paths']\n crackfortran.dolowercase = options['do-lower']\n postlist = crackfortran.crackfortran(files)\n if 'signsfile' in options:\n outmess(f"Saving signatures to file \"{options['signsfile']}\"\n")\n pyf = crackfortran.crack2fortran(postlist)\n if options['signsfile'][-6:] == 'stdout':\n sys.stdout.write(pyf)\n else:\n with open(options['signsfile'], 'w') as f:\n f.write(pyf)\n if options["coutput"] is None:\n for mod in postlist:\n mod["coutput"] = f"{mod['name']}module.c"\n else:\n for mod in postlist:\n mod["coutput"] = options["coutput"]\n if options["f2py_wrapper_output"] is None:\n for mod in postlist:\n mod["f2py_wrapper_output"] = f"{mod['name']}-f2pywrappers.f"\n else:\n for mod in postlist:\n mod["f2py_wrapper_output"] = options["f2py_wrapper_output"]\n for mod in postlist:\n if 
options["requires_gil"]:\n mod['gil_used'] = 'Py_MOD_GIL_USED'\n else:\n mod['gil_used'] = 'Py_MOD_GIL_NOT_USED'\n return postlist\n\n\ndef buildmodules(lst):\n cfuncs.buildcfuncs()\n outmess('Building modules...\n')\n modules, mnames, isusedby = [], [], {}\n for item in lst:\n if '__user__' in item['name']:\n cb_rules.buildcallbacks(item)\n else:\n if 'use' in item:\n for u in item['use'].keys():\n if u not in isusedby:\n isusedby[u] = []\n isusedby[u].append(item['name'])\n modules.append(item)\n mnames.append(item['name'])\n ret = {}\n for module, name in zip(modules, mnames):\n if name in isusedby:\n outmess('\tSkipping module "%s" which is used by %s.\n' % (\n name, ','.join('"%s"' % s for s in isusedby[name])))\n else:\n um = []\n if 'use' in module:\n for u in module['use'].keys():\n if u in isusedby and u in mnames:\n um.append(modules[mnames.index(u)])\n else:\n outmess(\n f'\tModule "{name}" uses nonexisting "{u}" '\n 'which will be ignored.\n')\n ret[name] = {}\n dict_append(ret[name], rules.buildmodule(module, um))\n return ret\n\n\ndef dict_append(d_out, d_in):\n for (k, v) in d_in.items():\n if k not in d_out:\n d_out[k] = []\n if isinstance(v, list):\n d_out[k] = d_out[k] + v\n else:\n d_out[k].append(v)\n\n\ndef run_main(comline_list):\n """\n Equivalent to running::\n\n f2py <args>\n\n where ``<args>=string.join(<list>,' ')``, but in Python. Unless\n ``-h`` is used, this function returns a dictionary containing\n information on generated modules and their dependencies on source\n files.\n\n You cannot build extension modules with this function, that is,\n using ``-c`` is not allowed. Use the ``compile`` command instead.\n\n Examples\n --------\n The command ``f2py -m scalar scalar.f`` can be executed from Python as\n follows.\n\n .. 
literalinclude:: ../../source/f2py/code/results/run_main_session.dat\n :language: python\n\n """\n crackfortran.reset_global_f2py_vars()\n f2pydir = os.path.dirname(os.path.abspath(cfuncs.__file__))\n fobjhsrc = os.path.join(f2pydir, 'src', 'fortranobject.h')\n fobjcsrc = os.path.join(f2pydir, 'src', 'fortranobject.c')\n # gh-22819 -- begin\n parser = make_f2py_compile_parser()\n args, comline_list = parser.parse_known_args(comline_list)\n pyf_files, _ = filter_files("", "[.]pyf([.]src|)", comline_list)\n # Checks that no existing modulename is defined in a pyf file\n # TODO: Remove all this when scaninputline is replaced\n if args.module_name:\n if "-h" in comline_list:\n modname = (\n args.module_name\n ) # Directly use from args when -h is present\n else:\n modname = validate_modulename(\n pyf_files, args.module_name\n ) # Validate modname when -h is not present\n comline_list += ['-m', modname] # needed for the rest of scaninputline\n # gh-22819 -- end\n files, options = scaninputline(comline_list)\n auxfuncs.options = options\n capi_maps.load_f2cmap_file(options['f2cmap_file'])\n postlist = callcrackfortran(files, options)\n isusedby = {}\n for plist in postlist:\n if 'use' in plist:\n for u in plist['use'].keys():\n if u not in isusedby:\n isusedby[u] = []\n isusedby[u].append(plist['name'])\n for plist in postlist:\n module_name = plist['name']\n if plist['block'] == 'python module' and '__user__' in module_name:\n if module_name in isusedby:\n # if not quiet:\n usedby = ','.join(f'"{s}"' for s in isusedby[module_name])\n outmess(\n f'Skipping Makefile build for module "{module_name}" '\n f'which is used by {usedby}\n')\n if 'signsfile' in options:\n if options['verbose'] > 1:\n outmess(\n 'Stopping. 
Edit the signature file and then run f2py on the signature file: ')\n outmess(f"{os.path.basename(sys.argv[0])} {options['signsfile']}\n")\n return\n for plist in postlist:\n if plist['block'] != 'python module':\n if 'python module' not in options:\n errmess(\n 'Tip: If your original code is Fortran source then you must use -m option.\n')\n raise TypeError('All blocks must be python module blocks but got %s' % (\n repr(plist['block'])))\n auxfuncs.debugoptions = options['debug']\n f90mod_rules.options = options\n auxfuncs.wrapfuncs = options['wrapfuncs']\n\n ret = buildmodules(postlist)\n\n for mn in ret.keys():\n dict_append(ret[mn], {'csrc': fobjcsrc, 'h': fobjhsrc})\n return ret\n\n\ndef filter_files(prefix, suffix, files, remove_prefix=None):\n """\n Filter files by prefix and suffix.\n """\n filtered, rest = [], []\n match = re.compile(prefix + r'.*' + suffix + r'\Z').match\n if remove_prefix:\n ind = len(prefix)\n else:\n ind = 0\n for file in [x.strip() for x in files]:\n if match(file):\n filtered.append(file[ind:])\n else:\n rest.append(file)\n return filtered, rest\n\n\ndef get_prefix(module):\n p = os.path.dirname(os.path.dirname(module.__file__))\n return p\n\n\nclass CombineIncludePaths(argparse.Action):\n def __call__(self, parser, namespace, values, option_string=None):\n include_paths_set = set(getattr(namespace, 'include_paths', []) or [])\n if option_string == "--include_paths":\n outmess("Use --include-paths or -I instead of --include_paths which will be removed")\n if option_string in {"--include-paths", "--include_paths"}:\n include_paths_set.update(values.split(':'))\n else:\n include_paths_set.add(values)\n namespace.include_paths = list(include_paths_set)\n\ndef f2py_parser():\n parser = argparse.ArgumentParser(add_help=False)\n parser.add_argument("-I", dest="include_paths", action=CombineIncludePaths)\n parser.add_argument("--include-paths", dest="include_paths", action=CombineIncludePaths)\n parser.add_argument("--include_paths", 
dest="include_paths", action=CombineIncludePaths)\n parser.add_argument("--freethreading-compatible", dest="ftcompat", action=argparse.BooleanOptionalAction)\n return parser\n\ndef get_newer_options(iline):\n iline = (' '.join(iline)).split()\n parser = f2py_parser()\n args, remain = parser.parse_known_args(iline)\n ipaths = args.include_paths\n if args.include_paths is None:\n ipaths = []\n return ipaths, args.ftcompat, remain\n\ndef make_f2py_compile_parser():\n parser = argparse.ArgumentParser(add_help=False)\n parser.add_argument("--dep", action="append", dest="dependencies")\n parser.add_argument("--backend", choices=['meson', 'distutils'], default='distutils')\n parser.add_argument("-m", dest="module_name")\n return parser\n\ndef preparse_sysargv():\n # To keep backwards bug compatibility, newer flags are handled by argparse,\n # and `sys.argv` is passed to the rest of `f2py` as is.\n parser = make_f2py_compile_parser()\n\n args, remaining_argv = parser.parse_known_args()\n sys.argv = [sys.argv[0]] + remaining_argv\n\n backend_key = args.backend\n if MESON_ONLY_VER and backend_key == 'distutils':\n outmess("Cannot use distutils backend with Python>=3.12,"\n " using meson backend instead.\n")\n backend_key = "meson"\n\n return {\n "dependencies": args.dependencies or [],\n "backend": backend_key,\n "modulename": args.module_name,\n }\n\ndef run_compile():\n """\n Do it all in one call!\n """\n import tempfile\n\n # Collect dependency flags, preprocess sys.argv\n argy = preparse_sysargv()\n modulename = argy["modulename"]\n if modulename is None:\n modulename = 'untitled'\n dependencies = argy["dependencies"]\n backend_key = argy["backend"]\n build_backend = f2py_build_generator(backend_key)\n\n i = sys.argv.index('-c')\n del sys.argv[i]\n\n remove_build_dir = 0\n try:\n i = sys.argv.index('--build-dir')\n except ValueError:\n i = None\n if i is not None:\n build_dir = sys.argv[i + 1]\n del sys.argv[i + 1]\n del sys.argv[i]\n else:\n remove_build_dir = 1\n 
build_dir = tempfile.mkdtemp()\n\n _reg1 = re.compile(r'--link-')\n sysinfo_flags = [_m for _m in sys.argv[1:] if _reg1.match(_m)]\n sys.argv = [_m for _m in sys.argv if _m not in sysinfo_flags]\n if sysinfo_flags:\n sysinfo_flags = [f[7:] for f in sysinfo_flags]\n\n _reg2 = re.compile(\n r'--((no-|)(wrap-functions|lower|freethreading-compatible)|debug-capi|quiet|skip-empty-wrappers)|-include')\n f2py_flags = [_m for _m in sys.argv[1:] if _reg2.match(_m)]\n sys.argv = [_m for _m in sys.argv if _m not in f2py_flags]\n f2py_flags2 = []\n fl = 0\n for a in sys.argv[1:]:\n if a in ['only:', 'skip:']:\n fl = 1\n elif a == ':':\n fl = 0\n if fl or a == ':':\n f2py_flags2.append(a)\n if f2py_flags2 and f2py_flags2[-1] != ':':\n f2py_flags2.append(':')\n f2py_flags.extend(f2py_flags2)\n sys.argv = [_m for _m in sys.argv if _m not in f2py_flags2]\n _reg3 = re.compile(\n r'--((f(90)?compiler(-exec|)|compiler)=|help-compiler)')\n flib_flags = [_m for _m in sys.argv[1:] if _reg3.match(_m)]\n sys.argv = [_m for _m in sys.argv if _m not in flib_flags]\n # TODO: Once distutils is dropped completely, i.e. 
min_ver >= 3.12, unify into --fflags\n reg_f77_f90_flags = re.compile(r'--f(77|90)flags=')\n reg_distutils_flags = re.compile(r'--((f(77|90)exec|opt|arch)=|(debug|noopt|noarch|help-fcompiler))')\n fc_flags = [_m for _m in sys.argv[1:] if reg_f77_f90_flags.match(_m)]\n distutils_flags = [_m for _m in sys.argv[1:] if reg_distutils_flags.match(_m)]\n if not (MESON_ONLY_VER or backend_key == 'meson'):\n fc_flags.extend(distutils_flags)\n sys.argv = [_m for _m in sys.argv if _m not in (fc_flags + distutils_flags)]\n\n del_list = []\n for s in flib_flags:\n v = '--fcompiler='\n if s[:len(v)] == v:\n if MESON_ONLY_VER or backend_key == 'meson':\n outmess(\n "--fcompiler cannot be used with meson,"\n "set compiler with the FC environment variable\n"\n )\n else:\n from numpy.distutils import fcompiler\n fcompiler.load_all_fcompiler_classes()\n allowed_keys = list(fcompiler.fcompiler_class.keys())\n nv = ov = s[len(v):].lower()\n if ov not in allowed_keys:\n vmap = {} # XXX\n try:\n nv = vmap[ov]\n except KeyError:\n if ov not in vmap.values():\n print(f'Unknown vendor: "{s[len(v):]}"')\n nv = ov\n i = flib_flags.index(s)\n flib_flags[i] = '--fcompiler=' + nv # noqa: B909\n continue\n for s in del_list:\n i = flib_flags.index(s)\n del flib_flags[i]\n assert len(flib_flags) <= 2, repr(flib_flags)\n\n _reg5 = re.compile(r'--(verbose)')\n setup_flags = [_m for _m in sys.argv[1:] if _reg5.match(_m)]\n sys.argv = [_m for _m in sys.argv if _m not in setup_flags]\n\n if '--quiet' in f2py_flags:\n setup_flags.append('--quiet')\n\n # Ugly filter to remove everything but sources\n sources = sys.argv[1:]\n f2cmapopt = '--f2cmap'\n if f2cmapopt in sys.argv:\n i = sys.argv.index(f2cmapopt)\n f2py_flags.extend(sys.argv[i:i + 2])\n del sys.argv[i + 1], sys.argv[i]\n sources = sys.argv[1:]\n\n pyf_files, _sources = filter_files("", "[.]pyf([.]src|)", sources)\n sources = pyf_files + _sources\n modulename = validate_modulename(pyf_files, modulename)\n extra_objects, sources = 
filter_files('', '[.](o|a|so|dylib)', sources)\n library_dirs, sources = filter_files('-L', '', sources, remove_prefix=1)\n libraries, sources = filter_files('-l', '', sources, remove_prefix=1)\n undef_macros, sources = filter_files('-U', '', sources, remove_prefix=1)\n define_macros, sources = filter_files('-D', '', sources, remove_prefix=1)\n for i in range(len(define_macros)):\n name_value = define_macros[i].split('=', 1)\n if len(name_value) == 1:\n name_value.append(None)\n if len(name_value) == 2:\n define_macros[i] = tuple(name_value)\n else:\n print('Invalid use of -D:', name_value)\n\n # Construct wrappers / signatures / things\n if backend_key == 'meson':\n if not pyf_files:\n outmess('Using meson backend\nWill pass --lower to f2py\nSee https://numpy.org/doc/stable/f2py/buildtools/meson.html\n')\n f2py_flags.append('--lower')\n run_main(f" {' '.join(f2py_flags)} -m {modulename} {' '.join(sources)}".split())\n else:\n run_main(f" {' '.join(f2py_flags)} {' '.join(pyf_files)}".split())\n\n # Order matters here, includes are needed for run_main above\n include_dirs, _, sources = get_newer_options(sources)\n # Now use the builder\n builder = build_backend(\n modulename,\n sources,\n extra_objects,\n build_dir,\n include_dirs,\n library_dirs,\n libraries,\n define_macros,\n undef_macros,\n f2py_flags,\n sysinfo_flags,\n fc_flags,\n flib_flags,\n setup_flags,\n remove_build_dir,\n {"dependencies": dependencies},\n )\n\n builder.compile()\n\n\ndef validate_modulename(pyf_files, modulename='untitled'):\n if len(pyf_files) > 1:\n raise ValueError("Only one .pyf file per call")\n if pyf_files:\n pyff = pyf_files[0]\n pyf_modname = auxfuncs.get_f2py_modulename(pyff)\n if modulename != pyf_modname:\n outmess(\n f"Ignoring -m {modulename}.\n"\n f"{pyff} defines {pyf_modname} to be the modulename.\n"\n )\n modulename = pyf_modname\n return modulename\n\ndef main():\n if '--help-link' in sys.argv[1:]:\n sys.argv.remove('--help-link')\n if MESON_ONLY_VER:\n outmess("Use 
--dep for meson builds\n")\n else:\n from numpy.distutils.system_info import show_all\n show_all()\n return\n\n if '-c' in sys.argv[1:]:\n run_compile()\n else:\n run_main(sys.argv[1:])\n
.venv\Lib\site-packages\numpy\f2py\f2py2e.py
f2py2e.py
Python
29,549
0.95
0.198473
0.02026
node-utils
847
2024-06-02T07:04:54.561100
BSD-3-Clause
false
b3f89fc903608efa2b3e21d7c4360fed
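Two helpers in the f2py2e.py record above are small enough to exercise in isolation: `filter_files` partitions the command line with a `prefix.*suffix\Z` regex (optionally stripping the prefix), and `dict_append` merges build-info dicts by accumulating every value into a list. Restated as in the source:

```python
import re

def filter_files(prefix, suffix, files, remove_prefix=None):
    """Split `files` into those matching prefix.*suffix and the rest."""
    filtered, rest = [], []
    match = re.compile(prefix + r'.*' + suffix + r'\Z').match
    ind = len(prefix) if remove_prefix else 0
    for file in [x.strip() for x in files]:
        if match(file):
            filtered.append(file[ind:])
        else:
            rest.append(file)
    return filtered, rest

def dict_append(d_out, d_in):
    """Merge d_in into d_out; scalar values are appended, lists concatenated."""
    for k, v in d_in.items():
        if k not in d_out:
            d_out[k] = []
        if isinstance(v, list):
            d_out[k] = d_out[k] + v
        else:
            d_out[k].append(v)
```

This is how `run_compile` peels flags off `sys.argv`: for example `filter_files('-L', '', sources, remove_prefix=1)` extracts library directories, and `filter_files("", "[.]pyf([.]src|)", sources)` isolates signature files.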
import argparse
import pprint
from collections.abc import Hashable, Iterable, Mapping, MutableMapping, Sequence
from types import ModuleType
from typing import Any, Final, NotRequired, TypedDict, type_check_only

from typing_extensions import TypeVar, override

from .__version__ import version
from .auxfuncs import _Bool
from .auxfuncs import outmess as outmess

###

_KT = TypeVar("_KT", bound=Hashable)
_VT = TypeVar("_VT")

@type_check_only
class _F2PyDict(TypedDict):
    csrc: list[str]
    h: list[str]
    fsrc: NotRequired[list[str]]
    ltx: NotRequired[list[str]]

@type_check_only
class _PreparseResult(TypedDict):
    dependencies: list[str]
    backend: str
    modulename: str

###

MESON_ONLY_VER: Final[bool]
f2py_version: Final = version
numpy_version: Final = version
__usage__: Final[str]

show = pprint.pprint

class CombineIncludePaths(argparse.Action):
    @override
    def __call__(
        self,
        /,
        parser: argparse.ArgumentParser,
        namespace: argparse.Namespace,
        values: str | Sequence[str] | None,
        option_string: str | None = None,
    ) -> None: ...

#
def run_main(comline_list: Iterable[str]) -> dict[str, _F2PyDict]: ...
def run_compile() -> None: ...
def main() -> None: ...

#
def scaninputline(inputline: Iterable[str]) -> tuple[list[str], dict[str, _Bool]]: ...
def callcrackfortran(files: list[str], options: dict[str, bool]) -> list[dict[str, Any]]: ...
def buildmodules(lst: Iterable[Mapping[str, object]]) -> dict[str, dict[str, Any]]: ...
def dict_append(d_out: MutableMapping[_KT, _VT], d_in: Mapping[_KT, _VT]) -> None: ...
def filter_files(
    prefix: str,
    suffix: str,
    files: Iterable[str],
    remove_prefix: _Bool | None = None,
) -> tuple[list[str], list[str]]: ...
def get_prefix(module: ModuleType) -> str: ...
def get_newer_options(iline: Iterable[str]) -> tuple[list[str], Any, list[str]]: ...

#
def f2py_parser() -> argparse.ArgumentParser: ...
def make_f2py_compile_parser() -> argparse.ArgumentParser: ...

#
def preparse_sysargv() -> _PreparseResult: ...
def validate_modulename(pyf_files: Sequence[str], modulename: str = "untitled") -> str: ...
.venv\Lib\site-packages\numpy\f2py\f2py2e.pyi
f2py2e.pyi
Other
2,229
0.95
0.236842
0.096774
vue-tools
431
2024-02-05T14:25:54.320011
Apache-2.0
false
781e677c8866eafa0e96c8c24fe1bb0d
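The stub above declares `CombineIncludePaths`, a custom `argparse.Action` that the f2py CLI uses to merge repeated include-path flags into one list. A minimal sketch of how such an action can behave; the flag names and the `':'`-splitting accumulation logic here are illustrative assumptions, not f2py's exact implementation:

```python
import argparse

class CombineIncludePaths(argparse.Action):
    """Accumulate every -I/--include-paths occurrence into one list.

    Hypothetical sketch: the real logic lives in numpy.f2py.f2py2e.
    """
    def __call__(self, parser, namespace, values, option_string=None):
        paths = getattr(namespace, "include_paths", None) or []
        # Assumed behavior: each flag value may hold ':'-separated paths.
        paths.extend(values.split(":"))
        setattr(namespace, "include_paths", paths)

parser = argparse.ArgumentParser()
parser.add_argument("-I", "--include-paths", dest="include_paths",
                    action=CombineIncludePaths)
ns = parser.parse_args(["-I", "/usr/include", "-I", "a:b"])
print(ns.include_paths)  # ['/usr/include', 'a', 'b']
```

Unlike the built-in `action="append"`, a custom action like this can normalize each value (here, splitting on `':'`) before accumulating.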
"""\nBuild F90 module support for f2py2e.\n\nCopyright 1999 -- 2011 Pearu Peterson all rights reserved.\nCopyright 2011 -- present NumPy Developers.\nPermission to use, modify, and distribute this software is given under the\nterms of the NumPy License.\n\nNO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.\n"""\n__version__ = "$Revision: 1.27 $"[10:-1]\n\nf2py_version = 'See `f2py -v`'\n\nimport numpy as np\n\nfrom . import capi_maps, func2subr\n\n# The environment provided by auxfuncs.py is needed for some calls to eval.\n# As the needed functions cannot be determined by static inspection of the\n# code, it is safest to use import * pending a major refactoring of f2py.\nfrom .auxfuncs import *\nfrom .crackfortran import undo_rmbadname, undo_rmbadname1\n\noptions = {}\n\n\ndef findf90modules(m):\n if ismodule(m):\n return [m]\n if not hasbody(m):\n return []\n ret = []\n for b in m['body']:\n if ismodule(b):\n ret.append(b)\n else:\n ret = ret + findf90modules(b)\n return ret\n\n\nfgetdims1 = """\\n external f2pysetdata\n logical ns\n integer r,i\n integer(%d) s(*)\n ns = .FALSE.\n if (allocated(d)) then\n do i=1,r\n if ((size(d,i).ne.s(i)).and.(s(i).ge.0)) then\n ns = .TRUE.\n end if\n end do\n if (ns) then\n deallocate(d)\n end if\n end if\n if ((.not.allocated(d)).and.(s(1).ge.1)) then""" % np.intp().itemsize\n\nfgetdims2 = """\\n end if\n if (allocated(d)) then\n do i=1,r\n s(i) = size(d,i)\n end do\n end if\n flag = 1\n call f2pysetdata(d,allocated(d))"""\n\nfgetdims2_sa = """\\n end if\n if (allocated(d)) then\n do i=1,r\n s(i) = size(d,i)\n end do\n !s(r) must be equal to len(d(1))\n end if\n flag = 2\n call f2pysetdata(d,allocated(d))"""\n\n\ndef buildhooks(pymod):\n from . 
import rules\n ret = {'f90modhooks': [], 'initf90modhooks': [], 'body': [],\n 'need': ['F_FUNC', 'arrayobject.h'],\n 'separatorsfor': {'includes0': '\n', 'includes': '\n'},\n 'docs': ['"Fortran 90/95 modules:\\n"'],\n 'latexdoc': []}\n fhooks = ['']\n\n def fadd(line, s=fhooks):\n s[0] = f'{s[0]}\n {line}'\n doc = ['']\n\n def dadd(line, s=doc):\n s[0] = f'{s[0]}\n{line}'\n\n usenames = getuseblocks(pymod)\n for m in findf90modules(pymod):\n sargs, fargs, efargs, modobjs, notvars, onlyvars = [], [], [], [], [\n m['name']], []\n sargsp = []\n ifargs = []\n mfargs = []\n if hasbody(m):\n for b in m['body']:\n notvars.append(b['name'])\n for n in m['vars'].keys():\n var = m['vars'][n]\n\n if (n not in notvars and isvariable(var)) and (not l_or(isintent_hide, isprivate)(var)):\n onlyvars.append(n)\n mfargs.append(n)\n outmess(f"\t\tConstructing F90 module support for \"{m['name']}\"...\n")\n if len(onlyvars) == 0 and len(notvars) == 1 and m['name'] in notvars:\n outmess(f"\t\t\tSkipping {m['name']} since there are no public vars/func in this module...\n")\n continue\n\n # gh-25186\n if m['name'] in usenames and containscommon(m):\n outmess(f"\t\t\tSkipping {m['name']} since it is in 'use' and contains a common block...\n")\n continue\n # skip modules with derived types\n if m['name'] in usenames and containsderivedtypes(m):\n outmess(f"\t\t\tSkipping {m['name']} since it is in 'use' and contains a derived type...\n")\n continue\n if onlyvars:\n outmess(f"\t\t Variables: {' '.join(onlyvars)}\n")\n chooks = ['']\n\n def cadd(line, s=chooks):\n s[0] = f'{s[0]}\n{line}'\n ihooks = ['']\n\n def iadd(line, s=ihooks):\n s[0] = f'{s[0]}\n{line}'\n\n vrd = capi_maps.modsign2map(m)\n cadd('static FortranDataDef f2py_%s_def[] = {' % (m['name']))\n dadd('\\subsection{Fortran 90/95 module \\texttt{%s}}\n' % (m['name']))\n if hasnote(m):\n note = m['note']\n if isinstance(note, list):\n note = '\n'.join(note)\n dadd(note)\n if onlyvars:\n dadd('\\begin{description}')\n for n in 
onlyvars:\n var = m['vars'][n]\n modobjs.append(n)\n ct = capi_maps.getctype(var)\n at = capi_maps.c2capi_map[ct]\n dm = capi_maps.getarrdims(n, var)\n dms = dm['dims'].replace('*', '-1').strip()\n dms = dms.replace(':', '-1').strip()\n if not dms:\n dms = '-1'\n use_fgetdims2 = fgetdims2\n cadd('\t{"%s",%s,{{%s}},%s, %s},' %\n (undo_rmbadname1(n), dm['rank'], dms, at,\n capi_maps.get_elsize(var)))\n dadd('\\item[]{{}\\verb@%s@{}}' %\n (capi_maps.getarrdocsign(n, var)))\n if hasnote(var):\n note = var['note']\n if isinstance(note, list):\n note = '\n'.join(note)\n dadd(f'--- {note}')\n if isallocatable(var):\n fargs.append(f"f2py_{m['name']}_getdims_{n}")\n efargs.append(fargs[-1])\n sargs.append(\n f'void (*{n})(int*,npy_intp*,void(*)(char*,npy_intp*),int*)')\n sargsp.append('void (*)(int*,npy_intp*,void(*)(char*,npy_intp*),int*)')\n iadd(f"\tf2py_{m['name']}_def[i_f2py++].func = {n};")\n fadd(f'subroutine {fargs[-1]}(r,s,f2pysetdata,flag)')\n fadd(f"use {m['name']}, only: d => {undo_rmbadname1(n)}\n")\n fadd('integer flag\n')\n fhooks[0] = fhooks[0] + fgetdims1\n dms = range(1, int(dm['rank']) + 1)\n fadd(' allocate(d(%s))\n' %\n (','.join(['s(%s)' % i for i in dms])))\n fhooks[0] = fhooks[0] + use_fgetdims2\n fadd(f'end subroutine {fargs[-1]}')\n else:\n fargs.append(n)\n sargs.append(f'char *{n}')\n sargsp.append('char*')\n iadd(f"\tf2py_{m['name']}_def[i_f2py++].data = {n};")\n if onlyvars:\n dadd('\\end{description}')\n if hasbody(m):\n for b in m['body']:\n if not isroutine(b):\n outmess("f90mod_rules.buildhooks:"\n f" skipping {b['block']} {b['name']}\n")\n continue\n modobjs.append(f"{b['name']}()")\n b['modulename'] = m['name']\n api, wrap = rules.buildapi(b)\n if isfunction(b):\n fhooks[0] = fhooks[0] + wrap\n fargs.append(f"f2pywrap_{m['name']}_{b['name']}")\n ifargs.append(func2subr.createfuncwrapper(b, signature=1))\n elif wrap:\n fhooks[0] = fhooks[0] + wrap\n fargs.append(f"f2pywrap_{m['name']}_{b['name']}")\n ifargs.append(\n 
func2subr.createsubrwrapper(b, signature=1))\n else:\n fargs.append(b['name'])\n mfargs.append(fargs[-1])\n api['externroutines'] = []\n ar = applyrules(api, vrd)\n ar['docs'] = []\n ar['docshort'] = []\n ret = dictappend(ret, ar)\n cadd(('\t{"%s",-1,{{-1}},0,0,NULL,(void *)'\n 'f2py_rout_#modulename#_%s_%s,'\n 'doc_f2py_rout_#modulename#_%s_%s},')\n % (b['name'], m['name'], b['name'], m['name'], b['name']))\n sargs.append(f"char *{b['name']}")\n sargsp.append('char *')\n iadd(f"\tf2py_{m['name']}_def[i_f2py++].data = {b['name']};")\n cadd('\t{NULL}\n};\n')\n iadd('}')\n ihooks[0] = 'static void f2py_setup_%s(%s) {\n\tint i_f2py=0;%s' % (\n m['name'], ','.join(sargs), ihooks[0])\n if '_' in m['name']:\n F_FUNC = 'F_FUNC_US'\n else:\n F_FUNC = 'F_FUNC'\n iadd('extern void %s(f2pyinit%s,F2PYINIT%s)(void (*)(%s));'\n % (F_FUNC, m['name'], m['name'].upper(), ','.join(sargsp)))\n iadd('static void f2py_init_%s(void) {' % (m['name']))\n iadd('\t%s(f2pyinit%s,F2PYINIT%s)(f2py_setup_%s);'\n % (F_FUNC, m['name'], m['name'].upper(), m['name']))\n iadd('}\n')\n ret['f90modhooks'] = ret['f90modhooks'] + chooks + ihooks\n ret['initf90modhooks'] = ['\tPyDict_SetItemString(d, "%s", PyFortranObject_New(f2py_%s_def,f2py_init_%s));' % (\n m['name'], m['name'], m['name'])] + ret['initf90modhooks']\n fadd('')\n fadd(f"subroutine f2pyinit{m['name']}(f2pysetupfunc)")\n if mfargs:\n for a in undo_rmbadname(mfargs):\n fadd(f"use {m['name']}, only : {a}")\n if ifargs:\n fadd(' '.join(['interface'] + ifargs))\n fadd('end interface')\n fadd('external f2pysetupfunc')\n if efargs:\n for a in undo_rmbadname(efargs):\n fadd(f'external {a}')\n fadd(f"call f2pysetupfunc({','.join(undo_rmbadname(fargs))})")\n fadd(f"end subroutine f2pyinit{m['name']}\n")\n\n dadd('\n'.join(ret['latexdoc']).replace(\n r'\subsection{', r'\subsubsection{'))\n\n ret['latexdoc'] = []\n ret['docs'].append(f"\"\t{m['name']} --- {','.join(undo_rmbadname(modobjs))}\"")\n\n ret['routine_defs'] = ''\n ret['doc'] = []\n 
ret['docshort'] = []\n ret['latexdoc'] = doc[0]\n if len(ret['docs']) <= 1:\n ret['docs'] = ''\n return ret, fhooks[0]\n
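`buildhooks` above grows its Fortran, C, and LaTeX text through small helper closures (`fadd`, `cadd`, `dadd`, `iadd`) that each bind a one-element list as a mutable default argument, so every call appends a line to the shared string buffer. A minimal, self-contained sketch of that idiom (the names here are illustrative, not from f2py):

```python
def make_adder():
    # Shared buffer: a single-element list holding the growing string.
    buf = ['']

    def add(line, s=buf):
        # The default argument keeps a reference to the same list object,
        # so every call mutates the shared buffer in place.
        s[0] = f'{s[0]}\n{line}'

    return add, buf

add, buf = make_adder()
add('line one')
add('line two')
print(buf[0])  # prints a leading blank line, then 'line one' and 'line two'
```

Using a one-element list (rather than a plain string variable) lets the inner function rebind the accumulated text without needing `nonlocal`.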
.venv\Lib\site-packages\numpy\f2py\f90mod_rules.py
f90mod_rules.py
Python
10,079
0.95
0.208178
0.020576
vue-tools
136
2024-12-09T22:43:38.072004
MIT
false
9749a2c4c4debfdb80092899903c71cc
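`findf90modules` in the f90mod_rules source above is a plain depth-first walk over crackfortran's nested block dictionaries, collecting every module block. A self-contained toy run of the same traversal, with stand-ins for the `ismodule`/`hasbody` predicates (the real ones live in `numpy.f2py.auxfuncs`) and an artificial block tree:

```python
def ismodule(blk):
    # Toy stand-in for auxfuncs.ismodule.
    return blk.get("block") == "module"

def hasbody(blk):
    # Toy stand-in for auxfuncs.hasbody.
    return "body" in blk

def findf90modules(m):
    # Same recursion as in f90mod_rules: depth-first collection of
    # every module block in the nested dict tree.
    if ismodule(m):
        return [m]
    if not hasbody(m):
        return []
    ret = []
    for b in m["body"]:
        if ismodule(b):
            ret.append(b)
        else:
            ret = ret + findf90modules(b)
    return ret

# Artificial tree: a file block containing one top-level module and a
# nested container holding another (not legal Fortran, just a traversal demo).
tree = {"block": "file", "body": [
    {"block": "module", "name": "m1"},
    {"block": "interface", "name": "iface", "body": [
        {"block": "module", "name": "m2"},
    ]},
]}
print([b["name"] for b in findf90modules(tree)])  # ['m1', 'm2']
```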