Dataset schema: `content` string (1–103k chars), `path` string (8–216 chars), `filename` string (2–179 chars), `language` categorical (15 classes), `size_bytes` int64 (2–189k), `quality_score` float64 (0.5–0.95), `complexity` float64 (0–1), `documentation_ratio` float64 (0–1), `repository` categorical (5 classes), `stars` int64 (0–1k), `created_date` date (2023-07-10 19:21:08 to 2025-07-09 19:11:45), `license` categorical (4 classes), `is_test` bool (2 classes), `file_hash` string (32 chars).

| content | path | filename | language | size_bytes | quality_score | complexity | documentation_ratio | repository | stars | created_date | license | is_test | file_hash |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\n\n | .venv\Lib\site-packages\joblib\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 4,758 | 0.95 | 0.083333 | 0.097561 | awesome-app | 252 | 2023-09-01T23:03:43.623148 | BSD-3-Clause | false | b9f348a960d1e883994e2c6e706e7091 |
pip\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | awesome-app | 335 | 2023-07-23T01:42:28.885799 | MIT | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Metadata-Version: 2.4\nName: joblib\nVersion: 1.5.1\nSummary: Lightweight pipelining with Python functions\nAuthor-email: Gael Varoquaux <gael.varoquaux@normalesup.org>\nLicense: BSD 3-Clause\nProject-URL: Homepage, https://joblib.readthedocs.io\nProject-URL: Source, https://github.com/joblib/joblib\nPlatform: any\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Environment :: Console\nClassifier: Intended Audience :: Developers\nClassifier: Intended Audience :: Science/Research\nClassifier: Intended Audience :: Education\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Topic :: Scientific/Engineering\nClassifier: Topic :: Utilities\nClassifier: Topic :: Software Development :: Libraries\nRequires-Python: >=3.9\nDescription-Content-Type: text/x-rst\nLicense-File: LICENSE.txt\nDynamic: license-file\n\n|PyPi| |CIStatus| |ReadTheDocs| |Codecov|\n\n.. |PyPi| image:: https://badge.fury.io/py/joblib.svg\n :target: https://badge.fury.io/py/joblib\n :alt: Joblib version\n\n.. |CIStatus| image:: https://github.com/joblib/joblib/actions/workflows/test.yml/badge.svg\n :target: https://github.com/joblib/joblib/actions/workflows/test.yml?query=branch%3Amain\n :alt: CI status\n\n.. |ReadTheDocs| image:: https://readthedocs.org/projects/joblib/badge/?version=latest\n :target: https://joblib.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. 
|Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/main/graph/badge.svg\n :target: https://codecov.io/gh/joblib/joblib\n :alt: Codecov coverage\n\n\nThe homepage of joblib with user documentation is located at:\n\nhttps://joblib.readthedocs.io\n\nGetting the latest code\n=======================\n\nTo get the latest code using git, simply type::\n\n git clone https://github.com/joblib/joblib.git\n\nIf you don't have git installed, you can download a zip\nof the latest code: https://github.com/joblib/joblib/archive/refs/heads/main.zip\n\nInstalling\n==========\n\nYou can use `pip` to install joblib from any directory::\n\n pip install joblib\n\nor install it in editable mode from the source directory::\n\n pip install -e .\n\nDependencies\n============\n\n- Joblib has no mandatory dependencies besides Python (supported versions are\n 3.9+).\n- Joblib has an optional dependency on Numpy (at least version 1.6.1) for array\n manipulation.\n- Joblib includes its own vendored copy of\n `loky <https://github.com/tomMoral/loky>`_ for process management.\n- Joblib can efficiently dump and load numpy arrays but does not require numpy\n to be installed.\n- Joblib has an optional dependency on\n `python-lz4 <https://pypi.python.org/pypi/lz4>`_ as a faster alternative to\n zlib and gzip for compressed serialization.\n- Joblib has an optional dependency on psutil to mitigate memory leaks in\n parallel worker processes.\n- Some examples require external dependencies such as pandas. See the\n instructions in the `Building the docs`_ section for details.\n\nWorkflow to contribute\n======================\n\nTo contribute to joblib, first create an account on `github\n<https://github.com/>`_. Once this is done, fork the `joblib repository\n<https://github.com/joblib/joblib>`_ to have your own repository,\nclone it using ``git clone``. 
Make your changes in a branch of your clone, push\nthem to your github account, test them locally, and when you are happy with\nthem, send a pull request to the main repository.\n\nYou can use `pre-commit <https://pre-commit.com/#install>`_ to run code style checks\nbefore each commit::\n\n pip install pre-commit\n pre-commit install\n\npre-commit checks can be disabled for a single commit with::\n\n git commit -n\n\nRunning the test suite\n======================\n\nTo run the test suite, you need the pytest (version >= 3) and coverage modules.\nRun the test suite using::\n\n pytest joblib\n\nfrom the root of the project.\n\nBuilding the docs\n=================\n\nTo build the docs you need to have sphinx (>=1.4) and some dependencies\ninstalled::\n\n pip install -U -r .readthedocs-requirements.txt\n\nThe docs can then be built with the following command::\n\n make doc\n\nThe html docs are located in the ``doc/_build/html`` directory.\n\n\nMaking a source tarball\n=======================\n\nTo create a source tarball, e.g. for packaging or distributing, run the\nfollowing command::\n\n pip install build\n python -m build --sdist\n\nThe tarball will be created in the `dist` directory; it can be installed with\nno dependencies other than the Python standard library.\n\nMaking a release and uploading it to PyPI\n=========================================\n\nThese commands are only run by the project manager, to make a release and\nupload it to PyPI::\n\n pip install build\n python -m build --sdist --wheel\n twine upload dist/*\n\n\nNote that the documentation should automatically get updated at each git\npush. If that is not the case, try building the doc locally and resolve\nany doc build error (in particular when running the examples).\n\nUpdating the changelog\n======================\n\nChanges are listed in the CHANGES.rst file. 
They must be updated manually,\nbut the following git command may be used to generate the lines::\n\n git log --abbrev-commit --date=short --no-merges --sparse\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\METADATA | METADATA | Other | 5,582 | 0.95 | 0.040462 | 0 | awesome-app | 451 | 2024-04-14T18:45:53.550797 | Apache-2.0 | false | e3d9d37505cc2e14a3b36e352f6ab3d6 |
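The METADATA row above describes joblib as "lightweight pipelining with Python functions". As a minimal sketch of the library's headline feature, an embarrassingly parallel loop with `Parallel` and `delayed` (the `square` helper is mine, not from the package):

```python
from joblib import Parallel, delayed

def square(x):
    # Any picklable function works; joblib's vendored cloudpickle also
    # lets the default loky backend ship lambdas and closures to workers.
    return x * x

# n_jobs=1 runs sequentially in-process; n_jobs=-1 would use all cores.
results = Parallel(n_jobs=1)(delayed(square)(i) for i in range(5))
print(results)  # [0, 1, 4, 9, 16]
```

The `delayed(square)(i)` call builds a `(function, args, kwargs)` tuple lazily; `Parallel` consumes the generator and dispatches the calls to its backend.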
joblib-1.5.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\njoblib-1.5.1.dist-info/METADATA,sha256=xxMdC2q3Dqhwys0T2JEvjI2YZET36Kn7XNWzMWEL-_Q,5582\njoblib-1.5.1.dist-info/RECORD,,\njoblib-1.5.1.dist-info/WHEEL,sha256=zaaOINJESkSfm_4HQVc5ssNzHCPXhJm0kEUakpsEHaU,91\njoblib-1.5.1.dist-info/licenses/LICENSE.txt,sha256=QmEpEcGHLF5LQ_auDo7llGfNNQMyJBz3LOkGQCZPrmo,1527\njoblib-1.5.1.dist-info/top_level.txt,sha256=P0LsoZ45gBL7ckL4lqQt7tdbrHD4xlVYhffmhHeeT_U,7\njoblib/__init__.py,sha256=uBSusTksXLpKA-pAoLO4wdTrHkJOfhtB297mcTlesE8,5337\njoblib/__pycache__/__init__.cpython-313.pyc,,\njoblib/__pycache__/_cloudpickle_wrapper.cpython-313.pyc,,\njoblib/__pycache__/_dask.cpython-313.pyc,,\njoblib/__pycache__/_memmapping_reducer.cpython-313.pyc,,\njoblib/__pycache__/_multiprocessing_helpers.cpython-313.pyc,,\njoblib/__pycache__/_parallel_backends.cpython-313.pyc,,\njoblib/__pycache__/_store_backends.cpython-313.pyc,,\njoblib/__pycache__/_utils.cpython-313.pyc,,\njoblib/__pycache__/backports.cpython-313.pyc,,\njoblib/__pycache__/compressor.cpython-313.pyc,,\njoblib/__pycache__/disk.cpython-313.pyc,,\njoblib/__pycache__/executor.cpython-313.pyc,,\njoblib/__pycache__/func_inspect.cpython-313.pyc,,\njoblib/__pycache__/hashing.cpython-313.pyc,,\njoblib/__pycache__/logger.cpython-313.pyc,,\njoblib/__pycache__/memory.cpython-313.pyc,,\njoblib/__pycache__/numpy_pickle.cpython-313.pyc,,\njoblib/__pycache__/numpy_pickle_compat.cpython-313.pyc,,\njoblib/__pycache__/numpy_pickle_utils.cpython-313.pyc,,\njoblib/__pycache__/parallel.cpython-313.pyc,,\njoblib/__pycache__/pool.cpython-313.pyc,,\njoblib/__pycache__/testing.cpython-313.pyc,,\njoblib/_cloudpickle_wrapper.py,sha256=HSFxIio3jiGnwVCstAa6obbxs4-5aRAIMDDUAA-cDPc,416\njoblib/_dask.py,sha256=xUYA_2VVc0LvPavSiFy8M7TZc6KF0lIxcQhng5kPaXU,13217\njoblib/_memmapping_reducer.py,sha256=AZ6dqA6fXlm4-ehBCf9m1nq43jUPKman4_2whrOButc,28553\njoblib/_multiprocessing_helpers.py,sha256=f8-Vf_8ildmdg991eLz8xk4DJJFTS_bcrhj6CgQ4lxU,1
878\njoblib/_parallel_backends.py,sha256=fgy_FgZiKeNvTWr4wKbSX4kUNx2YD6m7p5O1J96xhb4,28766\njoblib/_store_backends.py,sha256=UriuyltspaMkkQ6Go1w88XkupfHVQ3gPCJRBcKS8ny0,17343\njoblib/_utils.py,sha256=J9keatbwMXMJ1oZiVhEFu0UgL_WTvoVi4Iberk0gfAg,2076\njoblib/backports.py,sha256=mITpG-yuEADimg89_LCdUY9QH9a5xQHsRNJnd7BmAMo,5450\njoblib/compressor.py,sha256=GDDVJmeOBqftc6tMkDupryojHhk_jIV8WrNoMiTxTdQ,19281\njoblib/disk.py,sha256=1J5hhMsCP5LDW65luTtArUxsMAJRrPB6wxSWf6GeBns,4332\njoblib/executor.py,sha256=fbVmE_KKywjJcIKmHO9k8M3VkaMqZXEP4YXBRz_p6xU,5229\njoblib/externals/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\njoblib/externals/__pycache__/__init__.cpython-313.pyc,,\njoblib/externals/cloudpickle/__init__.py,sha256=IzKm9MzljfhH-QmN_o-zP5QimTwbtgJeRja8nrGFanQ,308\njoblib/externals/cloudpickle/__pycache__/__init__.cpython-313.pyc,,\njoblib/externals/cloudpickle/__pycache__/cloudpickle.cpython-313.pyc,,\njoblib/externals/cloudpickle/__pycache__/cloudpickle_fast.cpython-313.pyc,,\njoblib/externals/cloudpickle/cloudpickle.py,sha256=cNEBKdjBDlzFce_tvZL889uv71AnXTz1XBzkjKASSTo,58466\njoblib/externals/cloudpickle/cloudpickle_fast.py,sha256=AI5ZKf2AbLNxD8lXyLDpKZyzeZ2ofFtdK1ZWFq_ec1c,323\njoblib/externals/loky/__init__.py,sha256=3LZmtu1LDQq7Egw8FhIG2e5fviP-s6Q0fxxdXAn_9Ao,1105\njoblib/externals/loky/__pycache__/__init__.cpython-313.pyc,,\njoblib/externals/loky/__pycache__/_base.cpython-313.pyc,,\njoblib/externals/loky/__pycache__/cloudpickle_wrapper.cpython-313.pyc,,\njoblib/externals/loky/__pycache__/initializers.cpython-313.pyc,,\njoblib/externals/loky/__pycache__/process_executor.cpython-313.pyc,,\njoblib/externals/loky/__pycache__/reusable_executor.cpython-313.pyc,,\njoblib/externals/loky/_base.py,sha256=LsQnEoKWKGhdeqGhMc68Aqwz4MrTnEs20KAYbFiUHzo,1057\njoblib/externals/loky/backend/__init__.py,sha256=Ix9KThV1CYk7-M5OQnJ_A_JrrrWJ-Jowa-HMMeGbp18,312\njoblib/externals/loky/backend/__pycache__/__init__.cpython-313.pyc,,\njoblib/externals/loky/backend/__pyca
che__/_posix_reduction.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/_win_reduction.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/context.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/fork_exec.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/popen_loky_posix.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/popen_loky_win32.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/process.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/queues.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/reduction.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/resource_tracker.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/spawn.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/synchronize.cpython-313.pyc,,\njoblib/externals/loky/backend/__pycache__/utils.cpython-313.pyc,,\njoblib/externals/loky/backend/_posix_reduction.py,sha256=xgCSrIaLI0k_MI0XNOBSp5e1ox1WN9idgrWbkWpMUr4,1776\njoblib/externals/loky/backend/_win_reduction.py,sha256=WmNB0NXtyJ_o_WzfPUEGh5dPhXIeI6FkEnFNXUxO2ws,683\njoblib/externals/loky/backend/context.py,sha256=RPdZvzkEk7iA0rtdAILSHNzl6wsHpm6XD6IL30owAPE,14284\njoblib/externals/loky/backend/fork_exec.py,sha256=4DZ1iLBB-21rlg3Z4Kh9DTVZj35JPaWFE5rzWZaSDxk,2319\njoblib/externals/loky/backend/popen_loky_posix.py,sha256=3G-2_-ovZtjWcHI-xSyW5zQjAZ-_Z9IGjzY1RrZH4nc,5541\njoblib/externals/loky/backend/popen_loky_win32.py,sha256=bYkhRA0w8qUcYFwoezeGwcnlCocEdheWXc6SZ-_rVxo,5325\njoblib/externals/loky/backend/process.py,sha256=4-Y94EoIrg4btsjTNxUBHAHhR96Nrugn_7_PGL6aU50,2018\njoblib/externals/loky/backend/queues.py,sha256=eETFvbPHwKfdoYyOgNQCyKq_Zlm-lzH3fwwpUIh-_4U,7322\njoblib/externals/loky/backend/reduction.py,sha256=861drQAefXTJjfFWAEWmYAS315d8lAyqWx0RgyxFw_0,6926\njoblib/externals/loky/backend/resource_tracker.py,sha256=7LbIX84-6_gCbI3dpvJ2v_mhIMp8ynmvqthZs2kMU78,13846\njoblib/externals/loky/backend/spawn.py,sha256=t4PzEJ3tjwoF9t8qnQUF
9R7Q-LmBpDBIcHURWNznz8M,8626\njoblib/externals/loky/backend/synchronize.py,sha256=nlDwBoLZB93m_l55qfZM_Ql-4L84PSYimoQqt5TzpDk,11768\njoblib/externals/loky/backend/utils.py,sha256=RVsxqyET4TJdbjc9uUHJmfhlQ2v4Uq-fiT_5b5rfC0s,5757\njoblib/externals/loky/cloudpickle_wrapper.py,sha256=jUnfhXI3qMXTlCeTUzpABQlv0VOLMJL1V7fpRlq2LgU,3609\njoblib/externals/loky/initializers.py,sha256=dtKtRsJUmVwiJu0yZ-Ih0m8PvW_MxmouG7mShEcsStc,2567\njoblib/externals/loky/process_executor.py,sha256=QPSKet0OCAWr6g_2fHwPt4yjQaAJsjfeJYFPiKhS9RE,52348\njoblib/externals/loky/reusable_executor.py,sha256=d9ksrTnJS8549Oq50iG08u5pEhuMbhQ3oSYUSq0twNQ,10863\njoblib/func_inspect.py,sha256=bhm_GpBe3H_Dmw5ripzP5BalA6wbq7ZFI3SEuAQbfek,14017\njoblib/hashing.py,sha256=38MM0zRl0Ebk78Ij6cMdrQ8ibYZP0pCJxu3L4Yrw1sc,10694\njoblib/logger.py,sha256=HK06qwNWJYInYIIXFYINAKCxjYxi0hoX45ckNKkogHQ,5342\njoblib/memory.py,sha256=Hd2gXe-Uqeaq8NZjO7zw7-80Fet0xYEXe8bV76jgvc8,45404\njoblib/numpy_pickle.py,sha256=N_wQMf6_vgI71nRYLne0dc2kO6dfh0lkTaOZn8Tq5Hc,28791\njoblib/numpy_pickle_compat.py,sha256=JOlSfMT1uDIztOyQ3dzYgp5fGVnzPVWBCqXjdIZsjLQ,8451\njoblib/numpy_pickle_utils.py,sha256=j3GlI25QFvo-DTPn7uRptu-NtW16ztHM0DuglyQyEDI,9497\njoblib/parallel.py,sha256=SkJYk-cTHC8oMvZU79SDXV61IZ10YIHbBYhrHB47yM8,86989\njoblib/pool.py,sha256=JTc00PEAyPayo8mHdktmburp5OBsnNxwSQI4zzvtKYs,14134\njoblib/test/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\njoblib/test/__pycache__/__init__.cpython-313.pyc,,\njoblib/test/__pycache__/common.cpython-313.pyc,,\njoblib/test/__pycache__/test_backports.cpython-313.pyc,,\njoblib/test/__pycache__/test_cloudpickle_wrapper.cpython-313.pyc,,\njoblib/test/__pycache__/test_config.cpython-313.pyc,,\njoblib/test/__pycache__/test_dask.cpython-313.pyc,,\njoblib/test/__pycache__/test_disk.cpython-313.pyc,,\njoblib/test/__pycache__/test_func_inspect.cpython-313.pyc,,\njoblib/test/__pycache__/test_func_inspect_special_encoding.cpython-313.pyc,,\njoblib/test/__pycache__/test_hashing.cpython-313.pyc,,\nj
oblib/test/__pycache__/test_init.cpython-313.pyc,,\njoblib/test/__pycache__/test_logger.cpython-313.pyc,,\njoblib/test/__pycache__/test_memmapping.cpython-313.pyc,,\njoblib/test/__pycache__/test_memory.cpython-313.pyc,,\njoblib/test/__pycache__/test_memory_async.cpython-313.pyc,,\njoblib/test/__pycache__/test_missing_multiprocessing.cpython-313.pyc,,\njoblib/test/__pycache__/test_module.cpython-313.pyc,,\njoblib/test/__pycache__/test_numpy_pickle.cpython-313.pyc,,\njoblib/test/__pycache__/test_numpy_pickle_compat.cpython-313.pyc,,\njoblib/test/__pycache__/test_numpy_pickle_utils.cpython-313.pyc,,\njoblib/test/__pycache__/test_parallel.cpython-313.pyc,,\njoblib/test/__pycache__/test_store_backends.cpython-313.pyc,,\njoblib/test/__pycache__/test_testing.cpython-313.pyc,,\njoblib/test/__pycache__/test_utils.cpython-313.pyc,,\njoblib/test/__pycache__/testutils.cpython-313.pyc,,\njoblib/test/common.py,sha256=vpjpcJgMbmr8H3skc3qsr_KC-u-ZnhVFRk2vAxmJqvA,2102\njoblib/test/data/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\njoblib/test/data/__pycache__/__init__.cpython-313.pyc,,\njoblib/test/data/__pycache__/create_numpy_pickle.cpython-313.pyc,,\njoblib/test/data/create_numpy_pickle.py,sha256=vZE7JNye9o0gYaxrn1555av6Igee0KeXacAWKNRhsu8,3334\njoblib/test/data/joblib_0.10.0_compressed_pickle_py27_np16.gz,sha256=QYRH6Q2DSGVorjCSqWCxjTWCMOJKyew4Nl2qmfQVvQ8,769\njoblib/test/data/joblib_0.10.0_compressed_pickle_py27_np17.gz,sha256=ofTozM_KlPJa50TR8FCwc09mMmO6OO0GQhgUBLNIsXs,757\njoblib/test/data/joblib_0.10.0_compressed_pickle_py33_np18.gz,sha256=2eIVeA-XjOaT5IEQ6tI2UuHG3hwhiRciMmkBmPcIh4g,792\njoblib/test/data/joblib_0.10.0_compressed_pickle_py34_np19.gz,sha256=Gr2z_1tVWDH1H3_wCVHmakknf8KqeHKT8Yz4d1vmUCM,794\njoblib/test/data/joblib_0.10.0_compressed_pickle_py35_np19.gz,sha256=pWw_xuDbOkECqu1KGf1OFU7s2VbzC2v5F5iXhE7TwB4,790\njoblib/test/data/joblib_0.10.0_pickle_py27_np17.pkl,sha256=icRQjj374B-AHk5znxre0T9oWUHokoHIBQ8MqKo8l-U,986\njoblib/test/data/joblib_0.10.0
_pickle_py27_np17.pkl.bz2,sha256=oYQVIyMiUxyRgWSuBBSOvCWKzToA-kUpcoQWdV4UoV4,997\njoblib/test/data/joblib_0.10.0_pickle_py27_np17.pkl.gzip,sha256=Jpv3iGcDgKTv-O4nZsUreIbUK7qnt2cugZ-VMgNeEDQ,798\njoblib/test/data/joblib_0.10.0_pickle_py27_np17.pkl.lzma,sha256=c0wu0x8pPv4BcStj7pE61rZpf68FLG_pNzQZ4e82zH8,660\njoblib/test/data/joblib_0.10.0_pickle_py27_np17.pkl.xz,sha256=77FG1FDG0GHQav-1bxc4Tn9ky6ubUW_MbE0_iGmz5wc,712\njoblib/test/data/joblib_0.10.0_pickle_py33_np18.pkl,sha256=4GTC7s_cWNVShERn2nvVbspZYJgyK_0man4TEqvdVzU,1068\njoblib/test/data/joblib_0.10.0_pickle_py33_np18.pkl.bz2,sha256=6G1vbs_iYmz2kYJ6w4qB1k7D67UnxUMus0S4SWeBtFo,1000\njoblib/test/data/joblib_0.10.0_pickle_py33_np18.pkl.gzip,sha256=tlRUWeJS1BXmcwtLNSNK9L0hDHekFl07CqWxTShinmY,831\njoblib/test/data/joblib_0.10.0_pickle_py33_np18.pkl.lzma,sha256=CorPwnfv3rR5hjNtJI01-sEBMOnkSxNlRVaWTszMopA,694\njoblib/test/data/joblib_0.10.0_pickle_py33_np18.pkl.xz,sha256=Dppj3MffOKsKETeptEtDaxPOv6MA6xnbpK5LzlDQ-oE,752\njoblib/test/data/joblib_0.10.0_pickle_py34_np19.pkl,sha256=HL5Fb1uR9aPLjjhoOPJ2wwM1Qyo1FCZoYYd2HVw0Fos,1068\njoblib/test/data/joblib_0.10.0_pickle_py34_np19.pkl.bz2,sha256=Pyr2fqZnwfUxXdyrBr-kRwBYY8HA_Yi7fgSguKy5pUs,1021\njoblib/test/data/joblib_0.10.0_pickle_py34_np19.pkl.gzip,sha256=os8NJjQI9FhnlZM-Ay9dX_Uo35gZnoJCgQSIVvcBPfE,831\njoblib/test/data/joblib_0.10.0_pickle_py34_np19.pkl.lzma,sha256=Q_0y43qU7_GqAabJ8y3PWVhOisurnCAq3GzuCu04V58,697\njoblib/test/data/joblib_0.10.0_pickle_py34_np19.pkl.xz,sha256=BNfmiQfpeLVpdfkwlJK4hJ5Cpgl0vreVyekyc5d_PNM,752\njoblib/test/data/joblib_0.10.0_pickle_py35_np19.pkl,sha256=l7nvLolhBDIdPFznOz3lBHiMOPBPCMi1bXop1tFSCpY,1068\njoblib/test/data/joblib_0.10.0_pickle_py35_np19.pkl.bz2,sha256=pqGpuIS-ZU4uP8mkglHs8MaSDiVcPy7l3XHYJSppRgY,1005\njoblib/test/data/joblib_0.10.0_pickle_py35_np19.pkl.gzip,sha256=YRFXE6LEb6qK72yPqnXdqQVY8Ts8xKUS9PWQKhLxWvk,833\njoblib/test/data/joblib_0.10.0_pickle_py35_np19.pkl.lzma,sha256=Bf7gCUeTuTjCkbcIdyZYz69irblX4SAVQEzxCnMQhNU,701\njoblib/test/dat
a/joblib_0.10.0_pickle_py35_np19.pkl.xz,sha256=As8w2LGWwwNmKy3QNdKljK63Yq46gjRf_RJ0lh5_WqA,752\njoblib/test/data/joblib_0.11.0_compressed_pickle_py36_np111.gz,sha256=1WrnXDqDoNEPYOZX1Q5Wr2463b8vVV6fw4Wm5S4bMt4,800\njoblib/test/data/joblib_0.11.0_pickle_py36_np111.pkl,sha256=XmsOFxeC1f1aYdGETclG6yfF9rLoB11DayOAhDMULrw,1068\njoblib/test/data/joblib_0.11.0_pickle_py36_np111.pkl.bz2,sha256=vI2yWb50LKL_NgZyd_XkoD5teIg93uI42mWnx9ee-AQ,991\njoblib/test/data/joblib_0.11.0_pickle_py36_np111.pkl.gzip,sha256=1WrnXDqDoNEPYOZX1Q5Wr2463b8vVV6fw4Wm5S4bMt4,800\njoblib/test/data/joblib_0.11.0_pickle_py36_np111.pkl.lzma,sha256=IWA0JlZG2ur53HgTUDl1m7q79dcVq6b0VOq33gKoJU0,715\njoblib/test/data/joblib_0.11.0_pickle_py36_np111.pkl.xz,sha256=3Xh_NbMZdBjYx7ynfJ3Fyke28izSRSSzzNB0z5D4k9Y,752\njoblib/test/data/joblib_0.8.4_compressed_pickle_py27_np17.gz,sha256=Sp-ZT7i6pj5on2gbptszu7RarzJpOmHJ67UKOmCPQMg,659\njoblib/test/data/joblib_0.9.2_compressed_pickle_py27_np16.gz,sha256=NLtDrvo2XIH0KvUUAvhOqMeoXEjGW0IuTk_osu5XiDw,658\njoblib/test/data/joblib_0.9.2_compressed_pickle_py27_np17.gz,sha256=NLtDrvo2XIH0KvUUAvhOqMeoXEjGW0IuTk_osu5XiDw,658\njoblib/test/data/joblib_0.9.2_compressed_pickle_py34_np19.gz,sha256=nzO9iiGkG3KbBdrF3usOho8higkrDj_lmICUzxZyF_Y,673\njoblib/test/data/joblib_0.9.2_compressed_pickle_py35_np19.gz,sha256=nzO9iiGkG3KbBdrF3usOho8higkrDj_lmICUzxZyF_Y,673\njoblib/test/data/joblib_0.9.2_pickle_py27_np16.pkl,sha256=naijdk2xIeKdIa3mfJw0JlmOdtiN6uRM1yOJg6-M73M,670\njoblib/test/data/joblib_0.9.2_pickle_py27_np16.pkl_01.npy,sha256=DvvX2c5-7DpuCg20HnleA5bMo9awN9rWxhtGSEPSiAk,120\njoblib/test/data/joblib_0.9.2_pickle_py27_np16.pkl_02.npy,sha256=HBzzbLeB-8whuVO7CgtF3wktoOrg52WILlljzNcBBbE,120\njoblib/test/data/joblib_0.9.2_pickle_py27_np16.pkl_03.npy,sha256=oMRa4qKJhBy-uiRDt-uqOzHAqencxzKUrKVynaAJJAU,236\njoblib/test/data/joblib_0.9.2_pickle_py27_np16.pkl_04.npy,sha256=PsviRClLqT4IR5sWwbmpQR41af9mDtBFncodJBOB3wU,104\njoblib/test/data/joblib_0.9.2_pickle_py27_np17.pkl,sha256=LynX8dLOygfxDfFy
wOgm7wgWOhSxLG7z-oDsU6X83Dw,670\njoblib/test/data/joblib_0.9.2_pickle_py27_np17.pkl_01.npy,sha256=DvvX2c5-7DpuCg20HnleA5bMo9awN9rWxhtGSEPSiAk,120\njoblib/test/data/joblib_0.9.2_pickle_py27_np17.pkl_02.npy,sha256=HBzzbLeB-8whuVO7CgtF3wktoOrg52WILlljzNcBBbE,120\njoblib/test/data/joblib_0.9.2_pickle_py27_np17.pkl_03.npy,sha256=oMRa4qKJhBy-uiRDt-uqOzHAqencxzKUrKVynaAJJAU,236\njoblib/test/data/joblib_0.9.2_pickle_py27_np17.pkl_04.npy,sha256=PsviRClLqT4IR5sWwbmpQR41af9mDtBFncodJBOB3wU,104\njoblib/test/data/joblib_0.9.2_pickle_py33_np18.pkl,sha256=w9TLxpDTzp5TI6cU6lRvMsAasXEChcQgGE9s30sm_CU,691\njoblib/test/data/joblib_0.9.2_pickle_py33_np18.pkl_01.npy,sha256=DvvX2c5-7DpuCg20HnleA5bMo9awN9rWxhtGSEPSiAk,120\njoblib/test/data/joblib_0.9.2_pickle_py33_np18.pkl_02.npy,sha256=HBzzbLeB-8whuVO7CgtF3wktoOrg52WILlljzNcBBbE,120\njoblib/test/data/joblib_0.9.2_pickle_py33_np18.pkl_03.npy,sha256=jt6aZKUrJdfbMJUJVsl47As5MrfRSs1avGMhbmS6vec,307\njoblib/test/data/joblib_0.9.2_pickle_py33_np18.pkl_04.npy,sha256=PsviRClLqT4IR5sWwbmpQR41af9mDtBFncodJBOB3wU,104\njoblib/test/data/joblib_0.9.2_pickle_py34_np19.pkl,sha256=ilOBAOaulLFvKrD32S1NfnpiK-LfzA9rC3O2I7xROuI,691\njoblib/test/data/joblib_0.9.2_pickle_py34_np19.pkl_01.npy,sha256=DvvX2c5-7DpuCg20HnleA5bMo9awN9rWxhtGSEPSiAk,120\njoblib/test/data/joblib_0.9.2_pickle_py34_np19.pkl_02.npy,sha256=HBzzbLeB-8whuVO7CgtF3wktoOrg52WILlljzNcBBbE,120\njoblib/test/data/joblib_0.9.2_pickle_py34_np19.pkl_03.npy,sha256=jt6aZKUrJdfbMJUJVsl47As5MrfRSs1avGMhbmS6vec,307\njoblib/test/data/joblib_0.9.2_pickle_py34_np19.pkl_04.npy,sha256=PsviRClLqT4IR5sWwbmpQR41af9mDtBFncodJBOB3wU,104\njoblib/test/data/joblib_0.9.2_pickle_py35_np19.pkl,sha256=WfDVIqKcMzzh1gSAshIfzBoIpdLdZQuG79yYf5kfpOo,691\njoblib/test/data/joblib_0.9.2_pickle_py35_np19.pkl_01.npy,sha256=DvvX2c5-7DpuCg20HnleA5bMo9awN9rWxhtGSEPSiAk,120\njoblib/test/data/joblib_0.9.2_pickle_py35_np19.pkl_02.npy,sha256=HBzzbLeB-8whuVO7CgtF3wktoOrg52WILlljzNcBBbE,120\njoblib/test/data/joblib_0.9.2_pickle_py35_np19.pkl
_03.npy,sha256=jt6aZKUrJdfbMJUJVsl47As5MrfRSs1avGMhbmS6vec,307\njoblib/test/data/joblib_0.9.2_pickle_py35_np19.pkl_04.npy,sha256=PsviRClLqT4IR5sWwbmpQR41af9mDtBFncodJBOB3wU,104\njoblib/test/data/joblib_0.9.4.dev0_compressed_cache_size_pickle_py35_np19.gz,sha256=8jYfWJsx0oY2J-3LlmEigK5cClnJSW2J2rfeSTZw-Ts,802\njoblib/test/data/joblib_0.9.4.dev0_compressed_cache_size_pickle_py35_np19.gz_01.npy.z,sha256=YT9VvT3sEl2uWlOyvH2CkyE9Sok4od9O3kWtgeuUUqE,43\njoblib/test/data/joblib_0.9.4.dev0_compressed_cache_size_pickle_py35_np19.gz_02.npy.z,sha256=txA5RDI0PRuiU_UNKY8pGp-zQgQQ9vaVvMi60hOPaVs,43\njoblib/test/data/joblib_0.9.4.dev0_compressed_cache_size_pickle_py35_np19.gz_03.npy.z,sha256=d3AwICvU2MpSNjh2aPIsdJeGZLlDjANAF1Soa6uM0Po,37\njoblib/test/test_backports.py,sha256=ONt0JUPV1etZCO9DTLur1h84XmgHZYK_k73qmp4kRgg,1175\njoblib/test/test_cloudpickle_wrapper.py,sha256=9jx3hqNVO9GXdVHCxr9mN-GiLR0XK-O5d6YPaaG8Y14,729\njoblib/test/test_config.py,sha256=1Z102AO7Gb8Z8mHYahnZy2fxBA-9_vY0ZtWyNNk1cf4,5255\njoblib/test/test_dask.py,sha256=X2MBEYvz5WQwzGZRN04JNgk_75iIHF96yA1F1t1sK_Y,22932\njoblib/test/test_disk.py,sha256=0EaWGENlosrqwrSZvquPQw3jhqay1KD1NRlQ6YLHOOM,2223\njoblib/test/test_func_inspect.py,sha256=RsORR-j48SfXrNBQbb5i-SdmfU7zk2Mr0IKvcu8m1tw,9314\njoblib/test/test_func_inspect_special_encoding.py,sha256=5xILDjSO-xtjQAMLvMeVD-L7IG4ZURb2gvBiShaDE78,145\njoblib/test/test_hashing.py,sha256=wZeTJMX8C8ua3fJsKAI7MKtperUfZf1fLt0ZaOjvSKw,15820\njoblib/test/test_init.py,sha256=Y6y6Hcqa_cqwQ8S8ozUQ180y_RfkRajfZ_fDp2UXgbw,423\njoblib/test/test_logger.py,sha256=FA9ohTNcqIFViQK60_rwZ5PEGL2zoYN5qBOrDwFqVzI,941\njoblib/test/test_memmapping.py,sha256=z0aanbEs3yCDKShyW3IYlLkTARwdvqVTb4beTPRFmjk,43731\njoblib/test/test_memory.py,sha256=vTlNABkQzzHtRU_cXGr9eOEvrHAw7EEBmegMbX-gqZw,50660\njoblib/test/test_memory_async.py,sha256=tUoCI9dngR2AuJjAAKXElJIiz2Qm4AJGdXKn9c8lWaM,5245\njoblib/test/test_missing_multiprocessing.py,sha256=FVoS91krFZogIoDFScyZuJPpaeiq6O-aLAxug0qCQyY,1171\njoblib/test/test_module
.py,sha256=IABzz5JmdeY_Adk_vZ0776JN94Ra7tWxDA7DPDNdJKI,1942\njoblib/test/test_numpy_pickle.py,sha256=QExCnBSG-EXdVKnoDkJjNFk6kbX0FDeGeR50wtLHiso,42130\njoblib/test/test_numpy_pickle_compat.py,sha256=paMz1G3Fr9SHdjFmKcG1ec6B5h_S-XE6WRtfHmX9r50,609\njoblib/test/test_numpy_pickle_utils.py,sha256=iB2Ve1TYYUEN3DQiNB5qUxk_QxeIXl7Jpgv4TwkFWTY,382\njoblib/test/test_parallel.py,sha256=_13kli8GYyclwh2QsxysXrRJa44o3gb3FEpSY61ag94,78095\njoblib/test/test_store_backends.py,sha256=DyK1f7PTSPErzhk27gaRoMe2UQrstIz6fnvZh4hKIf0,3057\njoblib/test/test_testing.py,sha256=jL-Ph5pzUJSXOgY2rqbjMRp2y3i3CCWmEi-Lbw4Wzr8,2520\njoblib/test/test_utils.py,sha256=urXuyQ40OV5sLMoNx30Azh3hGr-yJqiMtHRJwBb8mw0,570\njoblib/test/testutils.py,sha256=A1bm-A5Ydis2iZJVI2-r3aFKUufWR42NZ8Yttrp8mzg,252\njoblib/testing.py,sha256=lK8HOBvrpXcTYUCSvpE-M2ede_dTVJzcmyw-9BrBsOc,3029\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\RECORD | RECORD | Other | 18,845 | 0.7 | 0 | 0 | vue-tools | 153 | 2024-04-21T10:01:32.398368 | BSD-3-Clause | false | b9e40b9f640a4ffd064980e144c609a5 |
joblib\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\top_level.txt | top_level.txt | Other | 7 | 0.5 | 0 | 0 | node-utils | 473 | 2024-05-10T13:30:10.809106 | Apache-2.0 | false | 13f9b8461dbd6e9f40c8ec14106faf6b |
Wheel-Version: 1.0\nGenerator: setuptools (80.8.0)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\WHEEL | WHEEL | Other | 91 | 0.5 | 0 | 0 | node-utils | 514 | 2023-11-19T19:20:47.698601 | BSD-3-Clause | false | 680a7f5de591bbbd69d57f3627dde971 |
BSD 3-Clause License\n\nCopyright (c) 2008-2021, The joblib developers.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n | .venv\Lib\site-packages\joblib-1.5.1.dist-info\licenses\LICENSE.txt | LICENSE.txt | Other | 1,527 | 0.7 | 0 | 0.130435 | python-kit | 989 | 2024-03-19T07:29:24.036316 | GPL-3.0 | false | 2e481820abf0a70a18011a30153df066 |
# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport shutil\nimport sys\nimport tempfile\n\n\nclass Host:\n def __init__(self):\n self.stdin = sys.stdin\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n def chdir(self, *comps):\n return os.chdir(self.join(*comps))\n\n def getcwd(self):\n return os.getcwd()\n\n def join(self, *comps):\n return os.path.join(*comps)\n\n def mkdtemp(self, **kwargs):\n return tempfile.mkdtemp(**kwargs)\n\n def print(self, msg='', end='\n', file=None):\n file = file or self.stdout\n file.write(str(msg) + end)\n file.flush()\n\n def rmtree(self, path):\n shutil.rmtree(path, ignore_errors=True)\n\n def read_text_file(self, path):\n with open(path, 'rb') as fp:\n return fp.read().decode('utf8')\n\n def write_text_file(self, path, contents):\n with open(path, 'wb') as f:\n f.write(contents.encode('utf8'))\n | .venv\Lib\site-packages\json5\host.py | host.py | Python | 1,512 | 0.95 | 0.207547 | 0.309524 | node-utils | 577 | 2024-05-29T09:01:15.733159 | MIT | false | f0558c2085d338e4e5e10dfb37a8d5e3 |
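The `host.py` module above wraps filesystem and console access behind a single `Host` object so tests can substitute a fake. A self-contained sketch of the same pattern, trimmed to the text-file helpers and re-declared here so the snippet runs on its own:

```python
import os
import shutil
import tempfile

class Host:
    """Minimal stand-in mirroring the file helpers of json5's Host."""

    def join(self, *comps):
        return os.path.join(*comps)

    def mkdtemp(self, **kwargs):
        return tempfile.mkdtemp(**kwargs)

    def rmtree(self, path):
        shutil.rmtree(path, ignore_errors=True)

    def read_text_file(self, path):
        # Decode explicitly rather than relying on the platform default.
        with open(path, 'rb') as fp:
            return fp.read().decode('utf8')

    def write_text_file(self, path, contents):
        with open(path, 'wb') as fp:
            fp.write(contents.encode('utf8'))

# Round-trip a file through a temporary directory.
host = Host()
tmp = host.mkdtemp()
try:
    path = host.join(tmp, 'example.txt')
    host.write_text_file(path, 'héllo')   # written as UTF-8 bytes
    text = host.read_text_file(path)
finally:
    host.rmtree(tmp)
```

Funneling all I/O through one object like this keeps the rest of the code free of direct `os`/`open` calls, so a test can swap in an in-memory fake host.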
# Generated by glop version 0.8.3\n# https://github.com/dpranke/glop\n# `glop -o json5/parser.py --no-main --no-memoize -c json5/json5.g`\n\n# pylint: disable=line-too-long,too-many-lines\n# pylint: disable=unnecessary-lambda,unnecessary-direct-lambda-call\n\nimport unicodedata\n\n\nclass Parser:\n def __init__(self, msg, fname, pos=0):\n self.msg = msg\n self.end = len(self.msg)\n self.fname = fname\n self.val = None\n self.pos = pos\n self.failed = False\n self.errpos = pos\n self._scopes = []\n self._cache = {}\n self._global_vars = {}\n\n def parse(self, global_vars=None):\n self._global_vars = global_vars or {}\n self._grammar_()\n if self.failed:\n return None, self._err_str(), self.errpos\n return self.val, None, self.pos\n\n def _err_str(self):\n lineno, colno = self._err_offsets()\n if self.errpos == len(self.msg):\n thing = 'end of input'\n else:\n thing = f'"{self.msg[self.errpos]}"'\n return f'{self.fname}:{lineno} Unexpected {thing} at column {colno}'\n\n def _err_offsets(self):\n lineno = 1\n colno = 1\n for i in range(self.errpos):\n if self.msg[i] == '\n':\n lineno += 1\n colno = 1\n else:\n colno += 1\n return lineno, colno\n\n def _succeed(self, v, newpos=None):\n self.val = v\n self.failed = False\n if newpos is not None:\n self.pos = newpos\n\n def _fail(self):\n self.val = None\n self.failed = True\n self.errpos = max(self.errpos, self.pos)\n\n def _rewind(self, newpos):\n self._succeed(None, newpos)\n\n def _bind(self, rule, var):\n rule()\n if not self.failed:\n self._set(var, self.val)\n\n def _not(self, rule):\n p = self.pos\n errpos = self.errpos\n rule()\n if self.failed:\n self._succeed(None, p)\n else:\n self._rewind(p)\n self.errpos = errpos\n self._fail()\n\n def _opt(self, rule):\n p = self.pos\n rule()\n if self.failed:\n self._succeed([], p)\n else:\n self._succeed([self.val])\n\n def _plus(self, rule):\n vs = []\n rule()\n vs.append(self.val)\n if self.failed:\n return\n self._star(rule, vs)\n\n def _star(self, rule, vs=None):\n 
vs = vs or []\n while True:\n p = self.pos\n rule()\n if self.failed:\n self._rewind(p)\n break\n vs.append(self.val)\n self._succeed(vs)\n\n def _seq(self, rules):\n for rule in rules:\n rule()\n if self.failed:\n return\n\n def _choose(self, rules):\n p = self.pos\n for rule in rules[:-1]:\n rule()\n if not self.failed:\n return\n self._rewind(p)\n rules[-1]()\n\n def _ch(self, ch):\n p = self.pos\n if p < self.end and self.msg[p] == ch:\n self._succeed(ch, self.pos + 1)\n else:\n self._fail()\n\n def _str(self, s):\n for ch in s:\n self._ch(ch)\n if self.failed:\n return\n self.val = s\n\n def _range(self, i, j):\n p = self.pos\n if p != self.end and ord(i) <= ord(self.msg[p]) <= ord(j):\n self._succeed(self.msg[p], self.pos + 1)\n else:\n self._fail()\n\n def _push(self, name):\n self._scopes.append((name, {}))\n\n def _pop(self, name):\n actual_name, _ = self._scopes.pop()\n assert name == actual_name\n\n def _get(self, var):\n if self._scopes and var in self._scopes[-1][1]:\n return self._scopes[-1][1][var]\n return self._global_vars[var]\n\n def _set(self, var, val):\n self._scopes[-1][1][var] = val\n\n def _is_unicat(self, var, cat):\n return unicodedata.category(var) == cat\n\n def _join(self, s, vs):\n return s.join(vs)\n\n def _xtou(self, s):\n return chr(int(s, base=16))\n\n def _grammar_(self):\n self._push('grammar')\n self._seq(\n [\n self._sp_,\n lambda: self._bind(self._value_, 'v'),\n self._trailing_,\n lambda: self._succeed(self._get('v')),\n ]\n )\n self._pop('grammar')\n\n def _trailing_(self):\n self._choose([self._trailing__c0_, self._trailing__c1_])\n\n def _trailing__c0_(self):\n self._seq([self._trailing__c0__s0_, self._sp_, self._end_])\n\n def _trailing__c0__s0_(self):\n v = self._get('_consume_trailing')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _trailing__c1_(self):\n self._not(self._trailing__c1_n_)\n\n def _trailing__c1_n_(self):\n v = self._get('_consume_trailing')\n if v:\n self._succeed(v)\n else:\n 
self._fail()\n\n def _sp_(self):\n self._star(self._ws_)\n\n def _ws_(self):\n self._choose(\n [\n self._ws__c0_,\n self._eol_,\n self._comment_,\n self._ws__c3_,\n self._ws__c4_,\n self._ws__c5_,\n self._ws__c6_,\n self._ws__c7_,\n self._ws__c8_,\n ]\n )\n\n def _ws__c0_(self):\n self._ch(' ')\n\n def _ws__c3_(self):\n self._ch('\t')\n\n def _ws__c4_(self):\n self._ch('\v')\n\n def _ws__c5_(self):\n self._ch('\f')\n\n def _ws__c6_(self):\n self._ch('\xa0')\n\n def _ws__c7_(self):\n self._ch('\ufeff')\n\n def _ws__c8_(self):\n self._push('ws__c8')\n self._seq(\n [\n self._ws__c8__s0_,\n lambda: self._bind(self._anything_, 'x'),\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('ws__c8')\n\n def _ws__c8__s0_(self):\n self._not(lambda: self._not(self._ws__c8__s0_n_n_))\n\n def _ws__c8__s0_n_n_(self):\n (lambda: self._choose([self._ws__c8__s0_n_n_g__c0_]))()\n\n def _ws__c8__s0_n_n_g__c0_(self):\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._ws__c8__s0_n_n_g__c0__s1_,\n ]\n )\n\n def _ws__c8__s0_n_n_g__c0__s1_(self):\n v = self._is_unicat(self._get('x'), 'Zs')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _eol_(self):\n self._choose(\n [\n self._eol__c0_,\n self._eol__c1_,\n self._eol__c2_,\n self._eol__c3_,\n self._eol__c4_,\n ]\n )\n\n def _eol__c0_(self):\n self._seq([lambda: self._ch('\r'), lambda: self._ch('\n')])\n\n def _eol__c1_(self):\n self._ch('\r')\n\n def _eol__c2_(self):\n self._ch('\n')\n\n def _eol__c3_(self):\n self._ch('\u2028')\n\n def _eol__c4_(self):\n self._ch('\u2029')\n\n def _comment_(self):\n self._choose([self._comment__c0_, self._comment__c1_])\n\n def _comment__c0_(self):\n self._seq(\n [\n lambda: self._str('//'),\n lambda: self._star(self._comment__c0__s1_p_),\n ]\n )\n\n def _comment__c0__s1_p_(self):\n self._seq([lambda: self._not(self._eol_), self._anything_])\n\n def _comment__c1_(self):\n self._seq(\n [\n lambda: self._str('/*'),\n self._comment__c1__s1_,\n lambda: self._str('*/'),\n ]\n 
)\n\n def _comment__c1__s1_(self):\n self._star(\n lambda: self._seq([self._comment__c1__s1_p__s0_, self._anything_])\n )\n\n def _comment__c1__s1_p__s0_(self):\n self._not(lambda: self._str('*/'))\n\n def _value_(self):\n self._choose(\n [\n self._value__c0_,\n self._value__c1_,\n self._value__c2_,\n self._value__c3_,\n self._value__c4_,\n self._value__c5_,\n self._value__c6_,\n ]\n )\n\n def _value__c0_(self):\n self._seq([lambda: self._str('null'), lambda: self._succeed('None')])\n\n def _value__c1_(self):\n self._seq([lambda: self._str('true'), lambda: self._succeed('True')])\n\n def _value__c2_(self):\n self._seq([lambda: self._str('false'), lambda: self._succeed('False')])\n\n def _value__c3_(self):\n self._push('value__c3')\n self._seq(\n [\n lambda: self._bind(self._object_, 'v'),\n lambda: self._succeed(['object', self._get('v')]),\n ]\n )\n self._pop('value__c3')\n\n def _value__c4_(self):\n self._push('value__c4')\n self._seq(\n [\n lambda: self._bind(self._array_, 'v'),\n lambda: self._succeed(['array', self._get('v')]),\n ]\n )\n self._pop('value__c4')\n\n def _value__c5_(self):\n self._push('value__c5')\n self._seq(\n [\n lambda: self._bind(self._string_, 'v'),\n lambda: self._succeed(['string', self._get('v')]),\n ]\n )\n self._pop('value__c5')\n\n def _value__c6_(self):\n self._push('value__c6')\n self._seq(\n [\n lambda: self._bind(self._num_literal_, 'v'),\n lambda: self._succeed(['number', self._get('v')]),\n ]\n )\n self._pop('value__c6')\n\n def _object_(self):\n self._choose([self._object__c0_, self._object__c1_])\n\n def _object__c0_(self):\n self._push('object__c0')\n self._seq(\n [\n lambda: self._ch('{'),\n self._sp_,\n lambda: self._bind(self._member_list_, 'v'),\n self._sp_,\n lambda: self._ch('}'),\n lambda: self._succeed(self._get('v')),\n ]\n )\n self._pop('object__c0')\n\n def _object__c1_(self):\n self._seq(\n [\n lambda: self._ch('{'),\n self._sp_,\n lambda: self._ch('}'),\n lambda: self._succeed([]),\n ]\n )\n\n def 
_array_(self):\n self._choose([self._array__c0_, self._array__c1_])\n\n def _array__c0_(self):\n self._push('array__c0')\n self._seq(\n [\n lambda: self._ch('['),\n self._sp_,\n lambda: self._bind(self._element_list_, 'v'),\n self._sp_,\n lambda: self._ch(']'),\n lambda: self._succeed(self._get('v')),\n ]\n )\n self._pop('array__c0')\n\n def _array__c1_(self):\n self._seq(\n [\n lambda: self._ch('['),\n self._sp_,\n lambda: self._ch(']'),\n lambda: self._succeed([]),\n ]\n )\n\n def _string_(self):\n self._choose([self._string__c0_, self._string__c1_])\n\n def _string__c0_(self):\n self._push('string__c0')\n self._seq(\n [\n self._squote_,\n self._string__c0__s1_,\n self._squote_,\n lambda: self._succeed(self._join('', self._get('cs'))),\n ]\n )\n self._pop('string__c0')\n\n def _string__c0__s1_(self):\n self._bind(lambda: self._star(self._sqchar_), 'cs')\n\n def _string__c1_(self):\n self._push('string__c1')\n self._seq(\n [\n self._dquote_,\n self._string__c1__s1_,\n self._dquote_,\n lambda: self._succeed(self._join('', self._get('cs'))),\n ]\n )\n self._pop('string__c1')\n\n def _string__c1__s1_(self):\n self._bind(lambda: self._star(self._dqchar_), 'cs')\n\n def _sqchar_(self):\n self._choose(\n [\n self._sqchar__c0_,\n self._sqchar__c1_,\n self._sqchar__c2_,\n self._sqchar__c3_,\n ]\n )\n\n def _sqchar__c0_(self):\n self._push('sqchar__c0')\n self._seq(\n [\n self._bslash_,\n lambda: self._bind(self._esc_char_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('sqchar__c0')\n\n def _sqchar__c1_(self):\n self._seq([self._bslash_, self._eol_, lambda: self._succeed('')])\n\n def _sqchar__c2_(self):\n self._push('sqchar__c2')\n self._seq(\n [\n lambda: self._not(self._bslash_),\n lambda: self._not(self._squote_),\n lambda: self._not(self._eol_),\n lambda: self._bind(self._anything_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('sqchar__c2')\n\n def _sqchar__c3_(self):\n self._seq(\n [\n lambda: 
self._not(self._sqchar__c3__s0_n_),\n lambda: self._range('\x00', '\x1f'),\n ]\n )\n\n def _sqchar__c3__s0_n_(self):\n v = self._get('_strict')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _dqchar_(self):\n self._choose(\n [\n self._dqchar__c0_,\n self._dqchar__c1_,\n self._dqchar__c2_,\n self._dqchar__c3_,\n ]\n )\n\n def _dqchar__c0_(self):\n self._push('dqchar__c0')\n self._seq(\n [\n self._bslash_,\n lambda: self._bind(self._esc_char_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('dqchar__c0')\n\n def _dqchar__c1_(self):\n self._seq([self._bslash_, self._eol_, lambda: self._succeed('')])\n\n def _dqchar__c2_(self):\n self._push('dqchar__c2')\n self._seq(\n [\n lambda: self._not(self._bslash_),\n lambda: self._not(self._dquote_),\n lambda: self._not(self._eol_),\n lambda: self._bind(self._anything_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('dqchar__c2')\n\n def _dqchar__c3_(self):\n self._seq(\n [\n lambda: self._not(self._dqchar__c3__s0_n_),\n lambda: self._range('\x00', '\x1f'),\n ]\n )\n\n def _dqchar__c3__s0_n_(self):\n v = self._get('_strict')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _bslash_(self):\n self._ch('\\')\n\n def _squote_(self):\n self._ch("'")\n\n def _dquote_(self):\n self._ch('"')\n\n def _esc_char_(self):\n self._choose(\n [\n self._esc_char__c0_,\n self._esc_char__c1_,\n self._esc_char__c2_,\n self._esc_char__c3_,\n self._esc_char__c4_,\n self._esc_char__c5_,\n self._esc_char__c6_,\n self._esc_char__c7_,\n self._esc_char__c8_,\n self._esc_char__c9_,\n self._esc_char__c10_,\n self._esc_char__c11_,\n self._esc_char__c12_,\n ]\n )\n\n def _esc_char__c0_(self):\n self._seq([lambda: self._ch('b'), lambda: self._succeed('\b')])\n\n def _esc_char__c1_(self):\n self._seq([lambda: self._ch('f'), lambda: self._succeed('\f')])\n\n def _esc_char__c10_(self):\n self._seq(\n [\n lambda: self._ch('0'),\n lambda: self._not(self._digit_),\n lambda: self._succeed('\x00'),\n ]\n 
)\n\n def _esc_char__c11_(self):\n self._push('esc_char__c11')\n self._seq(\n [\n lambda: self._bind(self._hex_esc_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('esc_char__c11')\n\n def _esc_char__c12_(self):\n self._push('esc_char__c12')\n self._seq(\n [\n lambda: self._bind(self._unicode_esc_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('esc_char__c12')\n\n def _esc_char__c2_(self):\n self._seq([lambda: self._ch('n'), lambda: self._succeed('\n')])\n\n def _esc_char__c3_(self):\n self._seq([lambda: self._ch('r'), lambda: self._succeed('\r')])\n\n def _esc_char__c4_(self):\n self._seq([lambda: self._ch('t'), lambda: self._succeed('\t')])\n\n def _esc_char__c5_(self):\n self._seq([lambda: self._ch('v'), lambda: self._succeed('\v')])\n\n def _esc_char__c6_(self):\n self._seq([self._squote_, lambda: self._succeed("'")])\n\n def _esc_char__c7_(self):\n self._seq([self._dquote_, lambda: self._succeed('"')])\n\n def _esc_char__c8_(self):\n self._seq([self._bslash_, lambda: self._succeed('\\')])\n\n def _esc_char__c9_(self):\n self._push('esc_char__c9')\n self._seq(\n [\n self._esc_char__c9__s0_,\n lambda: self._bind(self._anything_, 'c'),\n lambda: self._succeed(self._get('c')),\n ]\n )\n self._pop('esc_char__c9')\n\n def _esc_char__c9__s0_(self):\n self._not(lambda: (self._esc_char__c9__s0_n_g_)())\n\n def _esc_char__c9__s0_n_g_(self):\n self._choose(\n [\n self._esc_char__c9__s0_n_g__c0_,\n self._esc_char__c9__s0_n_g__c1_,\n lambda: self._seq([self._digit_]),\n lambda: self._seq([self._eol_]),\n ]\n )\n\n def _esc_char__c9__s0_n_g__c0_(self):\n self._seq([lambda: self._ch('x')])\n\n def _esc_char__c9__s0_n_g__c1_(self):\n self._seq([lambda: self._ch('u')])\n\n def _hex_esc_(self):\n self._push('hex_esc')\n self._seq(\n [\n lambda: self._ch('x'),\n lambda: self._bind(self._hex_, 'h1'),\n lambda: self._bind(self._hex_, 'h2'),\n lambda: self._succeed(\n self._xtou(self._get('h1') + self._get('h2'))\n ),\n ]\n )\n 
self._pop('hex_esc')\n\n def _unicode_esc_(self):\n self._push('unicode_esc')\n self._seq(\n [\n lambda: self._ch('u'),\n lambda: self._bind(self._hex_, 'a'),\n lambda: self._bind(self._hex_, 'b'),\n lambda: self._bind(self._hex_, 'c'),\n lambda: self._bind(self._hex_, 'd'),\n lambda: self._succeed(\n self._xtou(\n self._get('a')\n + self._get('b')\n + self._get('c')\n + self._get('d')\n )\n ),\n ]\n )\n self._pop('unicode_esc')\n\n def _element_list_(self):\n self._push('element_list')\n self._seq(\n [\n lambda: self._bind(self._value_, 'v'),\n self._element_list__s1_,\n self._sp_,\n self._element_list__s3_,\n lambda: self._succeed([self._get('v')] + self._get('vs')),\n ]\n )\n self._pop('element_list')\n\n def _element_list__s1_(self):\n self._bind(lambda: self._star(self._element_list__s1_l_p_), 'vs')\n\n def _element_list__s1_l_p_(self):\n self._seq([self._sp_, lambda: self._ch(','), self._sp_, self._value_])\n\n def _element_list__s3_(self):\n self._opt(lambda: self._ch(','))\n\n def _member_list_(self):\n self._push('member_list')\n self._seq(\n [\n lambda: self._bind(self._member_, 'm'),\n self._member_list__s1_,\n self._sp_,\n self._member_list__s3_,\n lambda: self._succeed([self._get('m')] + self._get('ms')),\n ]\n )\n self._pop('member_list')\n\n def _member_list__s1_(self):\n self._bind(lambda: self._star(self._member_list__s1_l_p_), 'ms')\n\n def _member_list__s1_l_p_(self):\n self._seq([self._sp_, lambda: self._ch(','), self._sp_, self._member_])\n\n def _member_list__s3_(self):\n self._opt(lambda: self._ch(','))\n\n def _member_(self):\n self._choose([self._member__c0_, self._member__c1_])\n\n def _member__c0_(self):\n self._push('member__c0')\n self._seq(\n [\n lambda: self._bind(self._string_, 'k'),\n self._sp_,\n lambda: self._ch(':'),\n self._sp_,\n lambda: self._bind(self._value_, 'v'),\n lambda: self._succeed([self._get('k'), self._get('v')]),\n ]\n )\n self._pop('member__c0')\n\n def _member__c1_(self):\n self._push('member__c1')\n self._seq(\n 
[\n lambda: self._bind(self._ident_, 'k'),\n self._sp_,\n lambda: self._ch(':'),\n self._sp_,\n lambda: self._bind(self._value_, 'v'),\n lambda: self._succeed([self._get('k'), self._get('v')]),\n ]\n )\n self._pop('member__c1')\n\n def _ident_(self):\n self._push('ident')\n self._seq(\n [\n lambda: self._bind(self._id_start_, 'hd'),\n self._ident__s1_,\n lambda: self._succeed(\n self._join('', [self._get('hd')] + self._get('tl'))\n ),\n ]\n )\n self._pop('ident')\n\n def _ident__s1_(self):\n self._bind(lambda: self._star(self._id_continue_), 'tl')\n\n def _id_start_(self):\n self._choose(\n [self._ascii_id_start_, self._other_id_start_, self._id_start__c2_]\n )\n\n def _id_start__c2_(self):\n self._seq([self._bslash_, self._unicode_esc_])\n\n def _ascii_id_start_(self):\n self._choose(\n [\n self._ascii_id_start__c0_,\n self._ascii_id_start__c1_,\n self._ascii_id_start__c2_,\n self._ascii_id_start__c3_,\n ]\n )\n\n def _ascii_id_start__c0_(self):\n self._range('a', 'z')\n\n def _ascii_id_start__c1_(self):\n self._range('A', 'Z')\n\n def _ascii_id_start__c2_(self):\n self._ch('$')\n\n def _ascii_id_start__c3_(self):\n self._ch('_')\n\n def _other_id_start_(self):\n self._choose(\n [\n self._other_id_start__c0_,\n self._other_id_start__c1_,\n self._other_id_start__c2_,\n self._other_id_start__c3_,\n self._other_id_start__c4_,\n self._other_id_start__c5_,\n ]\n )\n\n def _other_id_start__c0_(self):\n self._push('other_id_start__c0')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c0__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('other_id_start__c0')\n\n def _other_id_start__c0__s1_(self):\n v = self._is_unicat(self._get('x'), 'Ll')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _other_id_start__c1_(self):\n self._push('other_id_start__c1')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c1__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n 
self._pop('other_id_start__c1')\n\n def _other_id_start__c1__s1_(self):\n v = self._is_unicat(self._get('x'), 'Lm')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _other_id_start__c2_(self):\n self._push('other_id_start__c2')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c2__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('other_id_start__c2')\n\n def _other_id_start__c2__s1_(self):\n v = self._is_unicat(self._get('x'), 'Lo')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _other_id_start__c3_(self):\n self._push('other_id_start__c3')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c3__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('other_id_start__c3')\n\n def _other_id_start__c3__s1_(self):\n v = self._is_unicat(self._get('x'), 'Lt')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _other_id_start__c4_(self):\n self._push('other_id_start__c4')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c4__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('other_id_start__c4')\n\n def _other_id_start__c4__s1_(self):\n v = self._is_unicat(self._get('x'), 'Lu')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _other_id_start__c5_(self):\n self._push('other_id_start__c5')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._other_id_start__c5__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('other_id_start__c5')\n\n def _other_id_start__c5__s1_(self):\n v = self._is_unicat(self._get('x'), 'Nl')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _id_continue_(self):\n self._choose(\n [\n self._ascii_id_start_,\n self._digit_,\n self._other_id_start_,\n self._id_continue__c3_,\n self._id_continue__c4_,\n self._id_continue__c5_,\n self._id_continue__c6_,\n self._id_continue__c7_,\n self._id_continue__c8_,\n self._id_continue__c9_,\n ]\n 
)\n\n def _id_continue__c3_(self):\n self._push('id_continue__c3')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._id_continue__c3__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('id_continue__c3')\n\n def _id_continue__c3__s1_(self):\n v = self._is_unicat(self._get('x'), 'Mn')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _id_continue__c4_(self):\n self._push('id_continue__c4')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._id_continue__c4__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('id_continue__c4')\n\n def _id_continue__c4__s1_(self):\n v = self._is_unicat(self._get('x'), 'Mc')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _id_continue__c5_(self):\n self._push('id_continue__c5')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._id_continue__c5__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('id_continue__c5')\n\n def _id_continue__c5__s1_(self):\n v = self._is_unicat(self._get('x'), 'Nd')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _id_continue__c6_(self):\n self._push('id_continue__c6')\n self._seq(\n [\n lambda: self._bind(self._anything_, 'x'),\n self._id_continue__c6__s1_,\n lambda: self._succeed(self._get('x')),\n ]\n )\n self._pop('id_continue__c6')\n\n def _id_continue__c6__s1_(self):\n v = self._is_unicat(self._get('x'), 'Pc')\n if v:\n self._succeed(v)\n else:\n self._fail()\n\n def _id_continue__c7_(self):\n self._seq([self._bslash_, self._unicode_esc_])\n\n def _id_continue__c8_(self):\n self._ch('\u200c')\n\n def _id_continue__c9_(self):\n self._ch('\u200d')\n\n def _num_literal_(self):\n self._choose(\n [\n self._num_literal__c0_,\n self._num_literal__c1_,\n self._num_literal__c2_,\n ]\n )\n\n def _num_literal__c0_(self):\n self._push('num_literal__c0')\n self._seq(\n [\n lambda: self._ch('-'),\n lambda: self._bind(self._unsigned_lit_, 'n'),\n lambda: self._succeed('-' + self._get('n')),\n 
]\n )\n self._pop('num_literal__c0')\n\n def _num_literal__c1_(self):\n self._push('num_literal__c1')\n self._seq(\n [\n lambda: self._ch('+'),\n lambda: self._bind(self._unsigned_lit_, 'n'),\n lambda: self._succeed(self._get('n')),\n ]\n )\n self._pop('num_literal__c1')\n\n def _num_literal__c2_(self):\n self._push('num_literal__c2')\n self._seq(\n [\n lambda: self._bind(self._unsigned_lit_, 'n'),\n lambda: self._succeed(self._get('n')),\n ]\n )\n self._pop('num_literal__c2')\n\n def _unsigned_lit_(self):\n self._choose(\n [\n self._unsigned_lit__c0_,\n self._hex_literal_,\n self._unsigned_lit__c2_,\n self._unsigned_lit__c3_,\n ]\n )\n\n def _unsigned_lit__c0_(self):\n self._push('unsigned_lit__c0')\n self._seq(\n [\n lambda: self._bind(self._dec_literal_, 'd'),\n lambda: self._not(self._id_start_),\n lambda: self._succeed(self._get('d')),\n ]\n )\n self._pop('unsigned_lit__c0')\n\n def _unsigned_lit__c2_(self):\n self._str('Infinity')\n\n def _unsigned_lit__c3_(self):\n self._str('NaN')\n\n def _dec_literal_(self):\n self._choose(\n [\n self._dec_literal__c0_,\n self._dec_literal__c1_,\n self._dec_literal__c2_,\n self._dec_literal__c3_,\n self._dec_literal__c4_,\n self._dec_literal__c5_,\n ]\n )\n\n def _dec_literal__c0_(self):\n self._push('dec_literal__c0')\n self._seq(\n [\n lambda: self._bind(self._dec_int_lit_, 'd'),\n lambda: self._bind(self._frac_, 'f'),\n lambda: self._bind(self._exp_, 'e'),\n lambda: self._succeed(\n self._get('d') + self._get('f') + self._get('e')\n ),\n ]\n )\n self._pop('dec_literal__c0')\n\n def _dec_literal__c1_(self):\n self._push('dec_literal__c1')\n self._seq(\n [\n lambda: self._bind(self._dec_int_lit_, 'd'),\n lambda: self._bind(self._frac_, 'f'),\n lambda: self._succeed(self._get('d') + self._get('f')),\n ]\n )\n self._pop('dec_literal__c1')\n\n def _dec_literal__c2_(self):\n self._push('dec_literal__c2')\n self._seq(\n [\n lambda: self._bind(self._dec_int_lit_, 'd'),\n lambda: self._bind(self._exp_, 'e'),\n lambda: 
self._succeed(self._get('d') + self._get('e')),\n ]\n )\n self._pop('dec_literal__c2')\n\n def _dec_literal__c3_(self):\n self._push('dec_literal__c3')\n self._seq(\n [\n lambda: self._bind(self._dec_int_lit_, 'd'),\n lambda: self._succeed(self._get('d')),\n ]\n )\n self._pop('dec_literal__c3')\n\n def _dec_literal__c4_(self):\n self._push('dec_literal__c4')\n self._seq(\n [\n lambda: self._bind(self._frac_, 'f'),\n lambda: self._bind(self._exp_, 'e'),\n lambda: self._succeed(self._get('f') + self._get('e')),\n ]\n )\n self._pop('dec_literal__c4')\n\n def _dec_literal__c5_(self):\n self._push('dec_literal__c5')\n self._seq(\n [\n lambda: self._bind(self._frac_, 'f'),\n lambda: self._succeed(self._get('f')),\n ]\n )\n self._pop('dec_literal__c5')\n\n def _dec_int_lit_(self):\n self._choose([self._dec_int_lit__c0_, self._dec_int_lit__c1_])\n\n def _dec_int_lit__c0_(self):\n self._seq(\n [\n lambda: self._ch('0'),\n lambda: self._not(self._digit_),\n lambda: self._succeed('0'),\n ]\n )\n\n def _dec_int_lit__c1_(self):\n self._push('dec_int_lit__c1')\n self._seq(\n [\n lambda: self._bind(self._nonzerodigit_, 'd'),\n self._dec_int_lit__c1__s1_,\n lambda: self._succeed(\n self._get('d') + self._join('', self._get('ds'))\n ),\n ]\n )\n self._pop('dec_int_lit__c1')\n\n def _dec_int_lit__c1__s1_(self):\n self._bind(lambda: self._star(self._digit_), 'ds')\n\n def _digit_(self):\n self._range('0', '9')\n\n def _nonzerodigit_(self):\n self._range('1', '9')\n\n def _hex_literal_(self):\n self._push('hex_literal')\n self._seq(\n [\n self._hex_literal__s0_,\n self._hex_literal__s1_,\n lambda: self._succeed('0x' + self._join('', self._get('hs'))),\n ]\n )\n self._pop('hex_literal')\n\n def _hex_literal__s0_(self):\n self._choose([lambda: self._str('0x'), lambda: self._str('0X')])\n\n def _hex_literal__s1_(self):\n self._bind(lambda: self._plus(self._hex_), 'hs')\n\n def _hex_(self):\n self._choose([self._hex__c0_, self._hex__c1_, self._digit_])\n\n def _hex__c0_(self):\n 
self._range('a', 'f')\n\n def _hex__c1_(self):\n self._range('A', 'F')\n\n def _frac_(self):\n self._push('frac')\n self._seq(\n [\n lambda: self._ch('.'),\n self._frac__s1_,\n lambda: self._succeed('.' + self._join('', self._get('ds'))),\n ]\n )\n self._pop('frac')\n\n def _frac__s1_(self):\n self._bind(lambda: self._star(self._digit_), 'ds')\n\n def _exp_(self):\n self._choose([self._exp__c0_, self._exp__c1_])\n\n def _exp__c0_(self):\n self._push('exp__c0')\n self._seq(\n [\n self._exp__c0__s0_,\n lambda: self._bind(self._exp__c0__s1_l_, 's'),\n self._exp__c0__s2_,\n lambda: self._succeed(\n 'e' + self._get('s') + self._join('', self._get('ds'))\n ),\n ]\n )\n self._pop('exp__c0')\n\n def _exp__c0__s0_(self):\n self._choose([lambda: self._ch('e'), lambda: self._ch('E')])\n\n def _exp__c0__s1_l_(self):\n self._choose([lambda: self._ch('+'), lambda: self._ch('-')])\n\n def _exp__c0__s2_(self):\n self._bind(lambda: self._star(self._digit_), 'ds')\n\n def _exp__c1_(self):\n self._push('exp__c1')\n self._seq(\n [\n self._exp__c1__s0_,\n self._exp__c1__s1_,\n lambda: self._succeed('e' + self._join('', self._get('ds'))),\n ]\n )\n self._pop('exp__c1')\n\n def _exp__c1__s0_(self):\n self._choose([lambda: self._ch('e'), lambda: self._ch('E')])\n\n def _exp__c1__s1_(self):\n self._bind(lambda: self._star(self._digit_), 'ds')\n\n def _anything_(self):\n if self.pos < self.end:\n self._succeed(self.msg[self.pos], self.pos + 1)\n else:\n self._fail()\n\n def _end_(self):\n if self.pos == self.end:\n self._succeed(None)\n else:\n self._fail()\n | .venv\Lib\site-packages\json5\parser.py | parser.py | Python | 36,547 | 0.95 | 0.17226 | 0.004367 | react-lib | 288 | 2025-07-01T07:15:15.198505 | MIT | false | b8a4b68e7f1fb8ba1c5d86bcc69b1eb7 |
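The generated `parser.py` above follows one uniform pattern: every grammar rule is a method that either calls `_succeed()` (setting `self.val` and advancing `self.pos`) or `_fail()`, and combinators like `_seq`, `_choose`, and `_star` compose those methods. A minimal, self-contained sketch of that pattern (a hypothetical toy, not part of the json5 package) that recognizes a run of digits:

```python
class MiniParser:
    """Stripped-down sketch of the glop-generated combinator style used in
    json5/parser.py. Hypothetical illustration only."""

    def __init__(self, msg):
        self.msg = msg
        self.end = len(msg)
        self.pos = 0
        self.val = None
        self.failed = False

    def _succeed(self, v, newpos=None):
        # A rule "returns" by storing its value and optionally advancing.
        self.val, self.failed = v, False
        if newpos is not None:
            self.pos = newpos

    def _fail(self):
        self.val, self.failed = None, True

    def _range(self, i, j):
        # Match one character in the inclusive range [i, j].
        p = self.pos
        if p < self.end and i <= self.msg[p] <= j:
            self._succeed(self.msg[p], p + 1)
        else:
            self._fail()

    def _star(self, rule):
        # Zero-or-more repetition with backtracking on the failed attempt.
        vs = []
        while True:
            p = self.pos
            rule()
            if self.failed:
                self.pos = p  # rewind, like _rewind() in parser.py
                break
            vs.append(self.val)
        self._succeed(vs)

    def digits(self):
        # digit* -- collect as many single digit characters as possible
        self._star(lambda: self._range('0', '9'))


p = MiniParser('123abc')
p.digits()
# p.val == ['1', '2', '3'], p.pos == 3, p.failed == False
```

The same backtracking discipline (save `pos`, try a rule, rewind on failure) is what lets `_choose` in the real parser try alternatives like the seven `_value__c*` branches in order.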
# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n"""A tool to parse and pretty-print JSON5.\n\nUsage:\n\n $ echo '{foo:"bar"}' | python -m json5\n {\n foo: 'bar',\n }\n $ echo '{foo:"bar"}' | python -m json5 --as-json\n {\n "foo": "bar"\n }\n"""\n\nimport argparse\nimport sys\n\nimport json5\nfrom json5.host import Host\nfrom json5.version import __version__\n\nQUOTE_STYLES = {q.value: q for q in json5.QuoteStyle}\n\n\ndef main(argv=None, host=None):\n host = host or Host()\n\n args = _parse_args(host, argv)\n\n if args.version:\n host.print(__version__)\n return 0\n\n if args.cmd:\n inp = args.cmd\n elif args.file == '-':\n inp = host.stdin.read()\n else:\n inp = host.read_text_file(args.file)\n\n if args.indent == 'None':\n args.indent = None\n else:\n try:\n args.indent = int(args.indent)\n except ValueError:\n pass\n\n if args.as_json:\n args.quote_keys = True\n args.trailing_commas = False\n args.quote_style = json5.QuoteStyle.ALWAYS_DOUBLE.value\n\n obj = json5.loads(inp, strict=args.strict)\n s = json5.dumps(\n obj,\n indent=args.indent,\n quote_keys=args.quote_keys,\n trailing_commas=args.trailing_commas,\n quote_style=QUOTE_STYLES[args.quote_style],\n )\n host.print(s)\n return 0\n\n\nclass _HostedArgumentParser(argparse.ArgumentParser):\n """An argument parser that plays nicely w/ host objects."""\n\n def __init__(self, host, **kwargs):\n self.host = host\n super().__init__(**kwargs)\n\n def exit(self, 
status=0, message=None):\n if message:\n self._print_message(message, self.host.stderr)\n sys.exit(status)\n\n def error(self, message):\n self.host.print(f'usage: {self.usage}', end='', file=self.host.stderr)\n self.host.print(' -h/--help for help\n', file=self.host.stderr)\n self.exit(2, f'error: {message}\n')\n\n def print_help(self, file=None):\n self.host.print(self.format_help(), file=file)\n\n\ndef _parse_args(host, argv):\n usage = 'json5 [options] [FILE]\n'\n\n parser = _HostedArgumentParser(\n host,\n prog='json5',\n usage=usage,\n description=__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n parser.add_argument(\n '-V',\n '--version',\n action='store_true',\n help=f'show JSON5 library version ({__version__})',\n )\n parser.add_argument(\n '-c',\n metavar='STR',\n dest='cmd',\n help='inline json5 string to read instead of reading from a file',\n )\n parser.add_argument(\n '--as-json',\n dest='as_json',\n action='store_const',\n const=True,\n default=False,\n help='output as JSON (same as --quote-keys --no-trailing-commas)',\n )\n parser.add_argument(\n '--indent',\n dest='indent',\n default=4,\n help='amount to indent each line (default is 4 spaces)',\n )\n parser.add_argument(\n '--quote-keys',\n action='store_true',\n default=False,\n help='quote all object keys',\n )\n parser.add_argument(\n '--no-quote-keys',\n action='store_false',\n dest='quote_keys',\n help="don't quote object keys that are identifiers"\n ' (this is the default)',\n )\n parser.add_argument(\n '--trailing-commas',\n action='store_true',\n default=True,\n help='add commas after the last item in multi-line '\n 'objects and arrays (this is the default)',\n )\n parser.add_argument(\n '--no-trailing-commas',\n dest='trailing_commas',\n action='store_false',\n help='do not add commas after the last item in '\n 'multi-line lists and objects',\n )\n parser.add_argument(\n '--strict',\n action='store_true',\n default=True,\n help='Do not allow control characters 
(\\x00-\\x1f) in strings '\n '(default)',\n )\n parser.add_argument(\n '--no-strict',\n dest='strict',\n action='store_false',\n help='Allow control characters (\\x00-\\x1f) in strings',\n )\n parser.add_argument(\n '--quote-style',\n action='store',\n default='always_double',\n choices=QUOTE_STYLES.keys(),\n help='Controls how strings are encoded. By default they are always '\n 'double-quoted ("always_double")',\n )\n parser.add_argument(\n 'file',\n metavar='FILE',\n nargs='?',\n default='-',\n help='optional file to read JSON5 document from; if '\n 'not specified or "-", will read from stdin '\n 'instead',\n )\n return parser.parse_args(argv)\n\n\nif __name__ == '__main__': # pragma: no cover\n sys.exit(main())\n | .venv\Lib\site-packages\json5\tool.py | tool.py | Python | 5,344 | 0.95 | 0.090909 | 0.075145 | react-lib | 545 | 2024-03-21T17:16:31.481313 | Apache-2.0 | false | 5c582ecba34ed0671b4cd9110af6c598 |
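One small but easy-to-miss detail in `tool.py` above is how `--indent` is coerced in `main()`: the literal string `'None'` disables indentation, numeric strings become ints, and anything else (e.g. a tab) is passed through unchanged as an indent string. A standalone sketch of that coercion (the function name is ours, not part of the tool):

```python
def coerce_indent(value):
    # Mirrors the --indent handling in json5.tool.main():
    #   'None'  -> None (single-line output)
    #   '4'     -> 4    (indent by 4 spaces)
    #   '\t'    -> '\t' (non-numeric strings are kept as indent strings)
    if value == 'None':
        return None
    try:
        return int(value)
    except ValueError:
        return value
```

This is why `json5 --indent None` and `json5 --indent 2` both work from the command line even though argparse delivers every value as a string.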
# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n__version__ = '0.12.0'\n\n# For backward-compatibility with earlier versions of json5:\nVERSION = __version__\n | .venv\Lib\site-packages\json5\version.py | version.py | Python | 703 | 0.95 | 0.055556 | 0.875 | node-utils | 287 | 2024-03-01T10:10:19.328349 | Apache-2.0 | false | 16d6028ec8c1d2c4260b10366b8bcbfb |
# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n"""A pure Python implementation of the JSON5 configuration language."""\n\nfrom json5.lib import JSON5Encoder, QuoteStyle, load, loads, parse, dump, dumps\nfrom json5.version import __version__, VERSION\n\n\n__all__ = [\n 'JSON5Encoder',\n 'QuoteStyle',\n 'VERSION',\n '__version__',\n 'dump',\n 'dumps',\n 'parse',\n 'load',\n 'loads',\n]\n | .venv\Lib\site-packages\json5\__init__.py | __init__.py | Python | 947 | 0.95 | 0.032258 | 0.481481 | awesome-app | 502 | 2024-01-15T10:31:25.151996 | Apache-2.0 | false | aaf83321518addf5acce152a750e3db6 |
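The public API re-exported by `__init__.py` above (`loads`, `dumps`, etc.) is built on the tagged AST that `parser.py` emits: plain strings like `'None'`/`'True'`/`'False'` for keyword literals, and `[tag, payload]` pairs for objects, arrays, strings, and numbers. The real conversion lives in `json5/lib.py`, which is not part of this excerpt; the following is only a minimal sketch, under the assumption that the AST shapes match the `_succeed` calls visible in the parser:

```python
def walk(ast):
    """Hypothetical sketch: turn the parser's tagged AST into Python values.
    Not the actual json5.lib implementation."""
    if ast == 'None':
        return None
    if ast == 'True':
        return True
    if ast == 'False':
        return False
    tag, v = ast
    if tag == 'string':
        return v
    if tag == 'number':
        # Hex literals survive as '0x...' strings; int(v, 0) handles them.
        # Floats, exponents, Infinity, and NaN all go through float().
        if any(c in v for c in '.eE') or v.lstrip('+-') in ('Infinity', 'NaN'):
            return float(v)
        return int(v, 0)
    if tag == 'array':
        return [walk(el) for el in v]
    if tag == 'object':
        return {key: walk(val) for key, val in v}
    raise ValueError(f'unknown AST tag: {tag!r}')
```

For example, parsing `{a: 1, b: ['x', true]}` would yield an AST along the lines of `['object', [['a', ['number', '1']], ['b', ['array', [['string', 'x'], 'True']]]]]`, which this walk maps to `{'a': 1, 'b': ['x', True]}`.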
# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pragma: no cover\n\nimport sys\n\nfrom json5.tool import main\n\n\nif __name__ == '__main__':\n sys.exit(main())\n | .venv\Lib\site-packages\json5\__main__.py | __main__.py | Python | 706 | 0.95 | 0.086957 | 0.777778 | node-utils | 777 | 2023-07-23T04:57:32.122278 | Apache-2.0 | false | 404fa73a14186e070181715e1cb6d844 |
\n\n | .venv\Lib\site-packages\json5\__pycache__\host.cpython-313.pyc | host.cpython-313.pyc | Other | 2,814 | 0.8 | 0 | 0 | python-kit | 754 | 2023-10-17T22:15:52.529142 | BSD-3-Clause | false | 6f53b3b47f641ef40be4449c5e4fd0fe |
\n\n | .venv\Lib\site-packages\json5\__pycache__\lib.cpython-313.pyc | lib.cpython-313.pyc | Other | 34,886 | 0.95 | 0.103896 | 0.016097 | vue-tools | 773 | 2024-09-20T04:58:23.823900 | BSD-3-Clause | false | be35cf3668e38dba731162f5186a3b7c |
\n\n | .venv\Lib\site-packages\json5\__pycache__\parser.cpython-313.pyc | parser.cpython-313.pyc | Other | 85,734 | 0.6 | 0 | 0 | python-kit | 590 | 2023-10-04T20:46:34.080802 | MIT | false | 7a7d668b4e130967fc75e85d8b56196b |
\n\n | .venv\Lib\site-packages\json5\__pycache__\tool.cpython-313.pyc | tool.cpython-313.pyc | Other | 6,838 | 0.8 | 0.025974 | 0 | python-kit | 34 | 2024-12-06T15:25:52.390704 | Apache-2.0 | false | 510000322d76e1e203a0f48a6c97541a |
\n\n | .venv\Lib\site-packages\json5\__pycache__\version.cpython-313.pyc | version.cpython-313.pyc | Other | 228 | 0.7 | 0 | 0 | vue-tools | 418 | 2025-04-07T21:11:49.426511 | Apache-2.0 | false | b626995671e49be41b99bc0b1a298ccd |
\n\n | .venv\Lib\site-packages\json5\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 559 | 0.7 | 0 | 0 | react-lib | 51 | 2023-12-05T12:22:06.115152 | Apache-2.0 | false | 3715581bbdaa74c56b25ced12978df17 |
\n\n | .venv\Lib\site-packages\json5\__pycache__\__main__.cpython-313.pyc | __main__.cpython-313.pyc | Other | 355 | 0.7 | 0 | 0 | vue-tools | 348 | 2023-11-09T20:10:41.438910 | MIT | false | e1c0af18297e5f3d13db6058b780c3ee |
[console_scripts]\npyjson5 = json5.tool:main\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\entry_points.txt | entry_points.txt | Other | 44 | 0.5 | 0 | 0 | vue-tools | 724 | 2025-03-25T22:10:51.389707 | BSD-3-Clause | false | b2a3d55e423595ee7cef999ca391a65f |
pip\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 330 | 2024-05-17T07:35:16.825858 | BSD-3-Clause | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Metadata-Version: 2.4\nName: json5\nVersion: 0.12.0\nSummary: A Python implementation of the JSON5 data format.\nAuthor-email: Dirk Pranke <dpranke@chromium.org>\nLicense: Files: Everything except for the benchmarks/*.json files.\n \n Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n \n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n \n 1. Definitions.\n \n "License" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n \n "Licensor" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n \n "Legal Entity" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. For the purposes of this definition,\n "control" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n \n "You" (or "Your") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n \n "Source" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n \n "Object" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n \n "Work" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n \n "Derivative Works" shall mean any work, whether in Source or Object\n form, that 
is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n \n "Contribution" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. For the purposes of this definition, "submitted"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as "Not a Contribution."\n \n "Contributor" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n \n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n \n 3. Grant of Patent License. 
Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n \n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n \n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n \n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n \n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n \n (d) If the Work includes a "NOTICE" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n 
of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n \n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n \n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n \n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n \n 7. Disclaimer of Warranty. 
Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an "AS IS" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n \n 8. Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n \n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. 
However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n \n END OF TERMS AND CONDITIONS\n \n APPENDIX: How to apply the Apache License to your work.\n \n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets "{}"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. We also recommend that a\n file or class name and description of purpose be included on the\n same "printed page" as the copyright notice for easier\n identification within third-party archives.\n \n Copyright {yyyy} {name of copyright owner}\n \n Licensed under the Apache License, Version 2.0 (the "License");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n \n http://www.apache.org/licenses/LICENSE-2.0\n \n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an "AS IS" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n \n ---\n \n File: benchmarks/64KB-min.json\n \n MIT License\n \n Copyright (c) Microsoft Corporation.\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the "Software"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of 
the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE\n \n ---\n \n File: benchmarks/bitly-usa-gov.json\n \n The MIT License (MIT)\n \n Copyright (c) 2017 Wes McKinney\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the "Software"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n \n ---\n \n File: benchmarks/twitter.json\n \n The MIT License (MIT)\n \n Copyright (c) 2014 Milo Yip\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the "Software"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n \nProject-URL: Repository, https://github.com/dpranke/pyjson5\nProject-URL: Issues, https://github.com/dpranke/pyjson5/issues\nProject-URL: Changelog, https://github.com/dpranke/pyjson5/blob/master/README.md\nKeywords: json5\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: Apache Software License\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nRequires-Python: >=3.8.0\nDescription-Content-Type: text/markdown\nLicense-File: LICENSE\nProvides-Extra: dev\nRequires-Dist: build==1.2.2.post1; extra == "dev"\nRequires-Dist: coverage==7.5.4; python_version < "3.9" and extra == "dev"\nRequires-Dist: coverage==7.8.0; python_version >= "3.9" and extra == "dev"\nRequires-Dist: mypy==1.14.1; python_version < "3.9" and extra == "dev"\nRequires-Dist: mypy==1.15.0; python_version >= "3.9" and extra == "dev"\nRequires-Dist: pip==25.0.1; extra == "dev"\nRequires-Dist: pylint==3.2.7; python_version < "3.9" and extra == "dev"\nRequires-Dist: pylint==3.3.6; python_version >= "3.9" and extra == "dev"\nRequires-Dist: ruff==0.11.2; extra == "dev"\nRequires-Dist: twine==6.1.0; extra == "dev"\nRequires-Dist: uv==0.6.11; extra == "dev"\nDynamic: license-file\n\n# pyjson5\n\nA Python implementation of the JSON5 data format.\n\n[JSON5](https://json5.org) extends the\n[JSON](http://www.json.org) data interchange format to make 
it\nslightly more usable as a configuration language:\n\n* JavaScript-style comments (both single and multi-line) are legal.\n\n* Object keys may be unquoted if they are legal ECMAScript identifiers.\n\n* Objects and arrays may end with trailing commas.\n\n* Strings can be single-quoted, and multi-line string literals are allowed.\n\nThere are a few other more minor extensions to JSON; see the above page for\nthe full details.\n\nThis project provides a reader and writer for Python;\nwhere possible, it mirrors the\n[standard Python JSON API](https://docs.python.org/library/json.html)\nfor ease of use.\n\nThere is one notable difference from the JSON API: the `load()` and\n`loads()` methods support optionally checking for (and rejecting) duplicate\nobject keys; pass `allow_duplicate_keys=False` to do so (duplicates are\nallowed by default).\n\nThis is an early release. It has been reasonably well-tested, but it is\n**SLOW**. It can be 1000-6000x slower than the C-optimized JSON module,\nand is 200x slower (or more) than the pure Python JSON module.\n\n**Please Note:** This library only handles JSON5 documents; it does not\nallow you to read arbitrary JavaScript. For example, bare integers can\nbe legal object keys in JavaScript, but they aren't in JSON5.\n\n## Known issues\n\n* Did I mention that it is **SLOW**?\n\n* The implementation follows Python3's `json` implementation where\n possible. This means that the `encoding` argument to `dump()` is\n ignored, and unicode strings are always returned.\n\n* The `cls` keyword argument that `json.load()`/`json.loads()` accepts\n to specify a custom subclass of ``JSONDecoder`` is not and will not be\n supported, because this implementation uses a completely different\n approach to parsing strings and doesn't have anything like the\n `JSONDecoder` class.\n\n* The `cls` keyword argument that `json.dump()`/`json.dumps()` accepts\n is also not supported, for consistency with `json5.load()`. 
The `default`\n keyword *is* supported, though, and might be able to serve as a\n workaround.\n\n## Contributing\n\n`json5` has no runtime dependencies and is supported on Python 3.8\nor later. However, in order to develop and build the package you need a\nbunch of extra tools, and the latest versions of those tools may require 3.9\nor later. You can install the extra environment on 3.8 (and get older versions\nof the tools), but they may not run completely cleanly.\n\n#### On Mac\n\nThe easiest thing to do is to install [`uv`](https://docs.astral.sh/uv) and\nuse `uv` and the `//run` script to develop things. See `./run --help` for\nthe various commands that are supported. `glop` is the parser generator\ntool used to generate a parser from the grammar in `json5/json5.g`.\n\n```\n$ brew install uv\n$ git clone https://github.com/dpranke/pyjson5\n$ git clone https://github.com/dpranke/glop\n$ cd pyjson5\n$ source $(./run devenv) # To activate a venv w/ all the needed dev tools.\n```\n\n#### On other platforms\n\nInstall `uv` via whatever mechanism is appropriate.\n\n### Create the venv\n\n```\n$ ./run devenv\n```\n\n(This calls `uv sync --extra dev`.)\n\n### Running the tests\n\n```\n$ ./run tests\n```\n\n### Updating the packages\n\n```\n# Update the version in json5/version.py to $VERSION, which should be of\n# the form X.Y.Z where X, Y, and Z are numbers.\n$ ./run regen\n$ ./run presubmit\n$ git commit -a -m "Bump the version to $VERSION"\n$ git tag "v$VERSION"\n$ ./run build\n$ ./run publish --prod\n$ git push origin\n$ git push --tags origin\n```\n\n(Assuming you have upload privileges to PyPI and the GitHub repo, of course.)\n\n## Version History / Release Notes\n\n* v0.12.0 (2025-04-03)\n * Roll back pyproject.toml change for licensing so that we can still\n build the package on 3.8.\n\n* v0.12.0dev0 (prerelease)\n * Upgrade devenv package dependencies to latest versions; they now need\n Python 3.9 or newer, though json5 itself still supports 
3.8.\n\n* v0.11.0 (2025-04-01)\n * Add a couple examples to the documentation and run doctest over\n them.\n * Fix a typing issue in dump and dumps with the `cls` argument; turns\n out mypy was right and I was wrong and I didn't realize it :).\n * Introduce a new `parse` method that can be used to iterate through\n a string, extracting multiple values.\n * Add a new `consume_trailing` parameter to `load`/`loads`/`parse`\n that specifies whether to keep parsing after a valid object is\n reached. By default, this is True and the string must only contain\n trailing whitespace. If set to False, parsing will stop when a\n valid object is reached.\n * Add a new `start` parameter to `load`/`loads`/`parse` to specify\n the zero-based offset to start parsing the string or file from.\n * [GitHub issue #60](https://github.com/dpranke/pyjson5/issues/60).\n Fix a bug where we were attempting to allow '--4' as a valid number.\n\n* v0.10.0 (2024-11-25)\n * [GitHub issue #57](https://github.com/dpranke/pyjson5/issues/57).\n Added a `JSON5Encoder` class that can be overridden to do custom\n encoding of values. This class is vaguely similar to the `JSONEncoder`\n class in the standard `json` library, except that it has an\n `encode()` method that can be overridden to customize *any*\n value, not just ones the standard encoder doesn't know how to handle.\n It does also support a `default()` method that can be used to\n encode things not normally encodable, like the JSONEncoder class.\n It does not support an `iterencode` method. One could probably\n be added in the future, although exactly how that would work and\n interact with `encode` is a little unclear.\n * Restructured the code to use the new encoder class; doing so actually\n allowed me to delete a bunch of tediously duplicative code.\n * Added a new `quote_style` argument to `dump()`/`dumps()` to control\n how strings are encoded by default. 
For compatibility with older\n versions of the json5 library and the standard json library, it\n uses `QuoteStyle.ALWAYS_DOUBLE` which encodes all strings with double\n quotes all the time. You can also configure it to use single quotes\n all the time (`ALWAYS_SINGLE`), and to switch between single and double\n when doing so eliminates a need to escape quotes (`PREFER_SINGLE` and\n `PREFER_DOUBLE`). This also adds a `--quote-style` argument to\n `python -m json5`.\n * This release has a fair number of changes, but is intended to be\n completely backwards-compatible. Code without changes should run exactly\n as it did before.\n* v0.9.28 (2024-11-11)\n * Fix GitHub CI to install `uv` so `./run tests` works properly.\n * Mark Python3.13 as supported in package metadata.\n * Update dev package dependencies (note that the latest versions\n of coverage and pylint no longer work w/ Python3.8)\n* v0.9.27 (2024-11-10)\n * Fix typo in //README.md\n* v0.9.26 (2024-11-10)\n * [GitHub issue #82](https://github.com/dpranke/pyjson5/issues/82)\n Add support for the `strict` parameter to `load()`/`loads()`.\n * Significantly rework the infra and the `run` script to be\n contemporary.\n* v0.9.25 (2024-04-12)\n * [GitHub issue #81](https://github.com/dpranke/pyjson5/issues/81)\n Explicitly specify the directory to use for the package in\n pyproject.toml.\n* v0.9.24 (2024-03-16)\n * Update GitHub workflow config to remove unnecessary steps and\n run on pull requests as well as commits.\n * Added note about removing `hypothesize` in v0.9.23.\n * No code changes.\n* v0.9.23 (2024-03-16)\n * Lots of cleanup:\n * Removed old code needed for Python2 compatibility.\n * Removed tests using `hypothesize`. This ran model-based checks\n and didn't really add anything useful in terms of coverage to\n the test suite, and it introduced dependencies and slowed down\n the tests significantly. 
It was a good experiment but I think\n we're better off without it.\n * Got everything linting cleanly with pylint 3.1 and `ruff check`\n using ruff 0.3.3 (Note that commit message in 00d73a3 says pylint\n 3.11, which is a typo).\n * Code reformatted with `ruff format`\n * Added missing tests to bring coverage up to 100%.\n * Lots of minor code changes as the result of linting and coverage\n testing, but no intentional functional differences.\n* v0.9.22 (2024-03-06)\n * Attempt to fix the GitHub CI configuration now that setup.py\n is gone. Also, test on 3.12 instead of 3.11.\n * No code changes.\n* v0.9.21 (2024-03-06)\n * Moved the benchmarks/*.json data files' license information\n to //LICENSE to (hopefully) make the Google linter happy.\n* v0.9.20 (2024-03-03)\n * Added `json5.__version__` in addition to `json5.VERSION`.\n * More packaging modernization (no more setup.{cfg,py} files).\n * Mark Python3.12 as supported in project.classifiers.\n * Updated the `//run` script to use python3.\n* v0.9.19 (2024-03-03)\n * Replaced the benchmarking data files that came from chromium.org with\n three files obtained from other datasets on GitHub. Since this repo\n is vendored into the chromium/src repo it was occasionally confusing\n people who thought the data was actually used for non-benchmarking\n purposes and thus updating it for whatever reason.\n * No code changes.\n* v0.9.18 (2024-02-29)\n * Add typing information to the module. 
This is kind of a big change,\n but there should be no functional differences.\n* v0.9.17 (2024-02-19)\n * Move from `setup.py` to `pyproject.toml`.\n * No code changes (other than the version increasing).\n* v0.9.16 (2024-02-19)\n * Drop Python2 from `setup.py`\n * Add minimal packaging instructions to `//README.md`.\n* v0.9.15 (2024-02-19)\n * Merge in [Pull request #66](https://github.com/dpranke/pyjson5/pull/66)\n to include the tests and sample file in a source distribution.\n* v0.9.14 (2023-05-14)\n * [GitHub issue #63](https://github.com/dpranke/pyjson5/issues/63)\n Handle `+Infinity` as well as `-Infinity` and `Infinity`.\n* v0.9.13 (2023-03-16)\n * [GitHub PR #64](https://github.com/dpranke/pyjson5/pull/64)\n Remove a field from one of the JSON benchmark files to\n reduce confusion in Chromium.\n * No code changes.\n* v0.9.12 (2023-01-02)\n * Fix GitHub Actions config file to no longer test against\n Python 3.6 or 3.7. For now we will only test against an\n "oldest" release (3.8 in this case) and a "current"\n release (3.11 in this case).\n* v0.9.11 (2023-01-02)\n * [GitHub issue #60](https://github.com/dpranke/pyjson5/issues/60)\n Fixed minor Python2 compatibility issue by referring to\n `float("inf")` instead of `math.inf`.\n* v0.9.10 (2022-08-18)\n * [GitHub issue #58](https://github.com/dpranke/pyjson5/issues/58)\n Updated the //README.md to be clear that parsing arbitrary JS\n code may not work.\n * Otherwise, no code changes.\n* v0.9.9 (2022-08-01)\n * [GitHub issue #57](https://github.com/dpranke/pyjson5/issues/57)\n Fixed serialization for objects that subclass `int` or `float`:\n Previously we would use the object's `__str__` implementation, but\n that might result in an illegal JSON5 value if the object had\n customized `__str__` to return something illegal. 
Instead,\n we follow the lead of the `JSON` module and call `int.__repr__`\n or `float.__repr__` directly.\n * While I was at it, I added tests for dumps(-inf) and dumps(nan)\n when those were supposed to be disallowed by `allow_nan=False`.\n* v0.9.8 (2022-05-08)\n * [GitHub issue #47](https://github.com/dpranke/pyjson5/issues/47)\n Fixed error reporting in some cases due to how parsing was handling\n nested rules in the grammar - previously the reported location for\n the error could be far away from the point where it actually happened.\n\n* v0.9.7 (2022-05-06)\n * [GitHub issue #52](https://github.com/dpranke/pyjson5/issues/52)\n Fixed behavior of `default` fn in `dump` and `dumps`. Previously\n we didn't require the function to return a string, and so we could\n end up returning something that wasn't actually valid. This change\n now matches the behavior in the `json` module. *Note: This is a\n potentially breaking change.*\n* v0.9.6 (2021-06-21)\n * Bump development status classifier to 5 - Production/Stable, which\n the library feels like it is at this point. If I do end up significantly\n reworking things to speed it up and/or to add round-trip editing,\n that'll likely be a 2.0. If this version has no reported issues,\n I'll likely promote it to 1.0.\n * Also bump the tested Python versions to 2.7, 3.8 and 3.9, though\n earlier Python3 versions will likely continue to work as well.\n * [GitHub issue #46](https://github.com/dpranke/pyjson5/issues/36)\n Fix incorrect serialization of custom subtypes\n * Make it possible to run the tests if `hypothesis` isn't installed.\n\n* v0.9.5 (2020-05-26)\n * Miscellaneous non-source cleanups in the repo, including setting\n up GitHub Actions for a CI system. 
No changes to the library from\n v0.9.4, other than updating the version.\n\n* v0.9.4 (2020-03-26)\n * [GitHub pull #38](https://github.com/dpranke/pyjson5/pull/38)\n Fix from fredrik@fornwall.net for dumps() crashing when passed\n an empty string as a key in an object.\n\n* v0.9.3 (2020-03-17)\n * [GitHub pull #35](https://github.com/dpranke/pyjson5/pull/35)\n Fix from pastelmind@ for dump() not passing the right args to dumps().\n * Fix from p.skouzos@novafutur.com to remove the tests directory from\n the setup call, making the package a bit smaller.\n\n* v0.9.2 (2020-03-02)\n * [GitHub pull #34](https://github.com/dpranke/pyjson5/pull/34)\n Fix from roosephu@ for a badly formatted nested list.\n\n* v0.9.1 (2020-02-09)\n * [GitHub issue #33](https://github.com/dpranke/pyjson5/issues/33):\n Fix stray trailing comma when dumping an object with an invalid key.\n\n* v0.9.0 (2020-01-30)\n * [GitHub issue #29](https://github.com/dpranke/pyjson5/issues/29):\n Fix an issue where objects keys that started with a reserved\n word were incorrectly quoted.\n * [GitHub issue #30](https://github.com/dpranke/pyjson5/issues/30):\n Fix an issue where dumps() incorrectly thought a data structure\n was cyclic in some cases.\n * [GitHub issue #32](https://github.com/dpranke/pyjson5/issues/32):\n Allow for non-string keys in dicts passed to ``dump()``/``dumps()``.\n Add an ``allow_duplicate_keys=False`` to prevent possible\n ill-formed JSON that might result.\n\n* v0.8.5 (2019-07-04)\n * [GitHub issue #25](https://github.com/dpranke/pyjson5/issues/25):\n Add LICENSE and README.md to the dist.\n * [GitHub issue #26](https://github.com/dpranke/pyjson5/issues/26):\n Fix printing of empty arrays and objects with indentation, fix\n misreporting of the position on parse failures in some cases.\n\n* v0.8.4 (2019-06-11)\n * Updated the version history, too.\n\n* v0.8.3 (2019-06-11)\n * Tweaked the README, bumped the version, forgot to update the version\n history :).\n\n* v0.8.2 
(2019-06-11)\n * Actually bump the version properly, to 0.8.2.\n\n* v0.8.1 (2019-06-11)\n * Fix bug in setup.py that messed up the description. Unfortunately,\n I forgot to bump the version for this, so this also identifies as 0.8.0.\n\n* v0.8.0 (2019-06-11)\n * Add `allow_duplicate_keys=True` as a default argument to\n `json5.load()`/`json5.loads()`. If you set the key to `False`, duplicate\n keys in a single dict will be rejected. The default is set to `True`\n for compatibility with `json.load()`, earlier versions of json5, and\n because it's simply not clear if people would want duplicate checking\n enabled by default.\n\n* v0.7 (2019-03-31)\n * Changes dump()/dumps() to not quote object keys by default if they are\n legal identifiers. Passing `quote_keys=True` will turn that off\n and always quote object keys.\n * Changes dump()/dumps() to insert trailing commas after the last item\n in an array or an object if the object is printed across multiple lines\n (i.e., if `indent` is not None). Passing `trailing_commas=False` will\n turn that off.\n * The `json5.tool` command line tool now supports the `--indent`,\n `--[no-]quote-keys`, and `--[no-]trailing-commas` flags to allow\n for more control over the output, in addition to the existing\n `--as-json` flag.\n * The `json5.tool` command line tool no longer supports reading from\n multiple files, you can now only read from a single file or\n from standard input.\n * The implementation no longer relies on the standard `json` module\n for anything. 
The output should still match the json module (except\n as noted above) and discrepancies should be reported as bugs.\n\n* v0.6.2 (2019-03-08)\n * Fix [GitHub issue #23](https://github.com/dpranke/pyjson5/issues/23) and\n pass through unrecognized escape sequences.\n\n* v0.6.1 (2018-05-22)\n * Cleaned up a couple minor nits in the package.\n\n* v0.6.0 (2017-11-28)\n * First implementation that attempted to implement 100% of the spec.\n\n* v0.5.0 (2017-09-04)\n * First implementation that supported the full set of kwargs that\n the `json` module supports.\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\METADATA | METADATA | Other | 36,203 | 0.95 | 0.090652 | 0.25 | python-kit | 681 | 2024-03-01T11:29:30.597256 | Apache-2.0 | false | cb796b23f612040d052869b5e137ca53 |
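The changelog above (v0.8.0 and v0.9.0) discusses json5's `allow_duplicate_keys` option. As a point of comparison, the stdlib `json` module silently keeps the last value for a repeated key; a minimal sketch of duplicate-key rejection — mirroring what `allow_duplicate_keys=False` does in json5, but implemented here with a hypothetical `object_pairs_hook` helper rather than the json5 library itself — looks like this:

```python
import json


def reject_duplicate_keys(pairs):
    """Hypothetical object_pairs_hook sketching json5's
    allow_duplicate_keys=False behavior: raise instead of
    silently keeping the last value for a repeated key."""
    seen = set()
    for key, _ in pairs:
        if key in seen:
            raise ValueError(f"duplicate key: {key!r}")
        seen.add(key)
    return dict(pairs)


# The stdlib default keeps the last duplicate silently:
assert json.loads('{"a": 1, "a": 2}') == {"a": 2}

# With the hook installed, duplicates are rejected instead:
try:
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=reject_duplicate_keys)
except ValueError as exc:
    print(exc)
```

This is why the changelog calls the default "not clear": both behaviors are defensible, and `json.load()` compatibility argues for allowing duplicates unless the caller opts out.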
../../Scripts/pyjson5.exe,sha256=shelPUBZCZKUjVp1CjRoNXN-NAHBksGuezICyr0gtUM,108411\njson5-0.12.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\njson5-0.12.0.dist-info/METADATA,sha256=P0DHDww4700IQ3KyWsNdWegtXgVwsoOA-9-MV8ZA6TM,36203\njson5-0.12.0.dist-info/RECORD,,\njson5-0.12.0.dist-info/WHEEL,sha256=CmyFI0kx5cdEMTLiONQRbGQwjIoR1aIYB7eCAQ4KPJ0,91\njson5-0.12.0.dist-info/entry_points.txt,sha256=HWGaWXb2dDYjBsahObnXhgRcmQJplFfS32pT9NThOqw,44\njson5-0.12.0.dist-info/licenses/LICENSE,sha256=4hJ8vbRn0xocu8PUGyTH_aEvkvrkfdobVl5-CyvDfHw,14728\njson5-0.12.0.dist-info/top_level.txt,sha256=4lqsmFoRcg-1K0hUeuIbxYBOd-40Bgw8pF9BdZwkC7k,6\njson5/__init__.py,sha256=La4k_f8gv66oQRHfTOSZmy0RGvzYFCjl1ezALI-WjnU,947\njson5/__main__.py,sha256=ysHizwGgmpAUd_GkolQjiCwQ2QW5jjCV82zQce2YqYE,706\njson5/__pycache__/__init__.cpython-313.pyc,,\njson5/__pycache__/__main__.cpython-313.pyc,,\njson5/__pycache__/host.cpython-313.pyc,,\njson5/__pycache__/lib.cpython-313.pyc,,\njson5/__pycache__/parser.cpython-313.pyc,,\njson5/__pycache__/tool.cpython-313.pyc,,\njson5/__pycache__/version.cpython-313.pyc,,\njson5/host.py,sha256=IzvSpAkE2NJb01vVy4aN2IR7tuOjaVkWw4CN4JUrncQ,1512\njson5/lib.py,sha256=sqhCSlKf7XarAl03isfdL5WlJU367BsrIG8_mzxss5Q,36028\njson5/parser.py,sha256=pAoNN-Gz0HRdAwBETQwkrNx-cMSPwfotGA04d7Qsp0E,36547\njson5/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\njson5/tool.py,sha256=AewumzysicTdpaoccP1I1YD6O2-ivUlSORHIUJO71W4,5344\njson5/version.py,sha256=mJa8v5NHwiBx5bc0-UVInMB_FIc2zTJe38kPnYI9XhA,703\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\RECORD | RECORD | Other | 1,546 | 0.7 | 0 | 0 | vue-tools | 996 | 2024-05-02T08:06:20.190267 | GPL-3.0 | false | 8498201d859ee7482b3602ab3336b5fc |
json5\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\top_level.txt | top_level.txt | Other | 6 | 0.5 | 0 | 0 | vue-tools | 620 | 2025-07-05T17:59:18.862639 | GPL-3.0 | false | af32852e8ddc0684e9eac30363030d32 |
Wheel-Version: 1.0\nGenerator: setuptools (78.1.0)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\WHEEL | WHEEL | Other | 91 | 0.5 | 0 | 0 | react-lib | 394 | 2024-01-10T14:41:02.708745 | GPL-3.0 | false | 9c3ef2336f4e16b5f7573c25c683ce63 |
Files: Everything except for the benchmarks/*.json files.\n\nApache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n\n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n 1. Definitions.\n\n "License" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n\n "Licensor" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n\n "Legal Entity" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. For the purposes of this definition,\n "control" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n\n "You" (or "Your") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n\n "Source" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n\n "Object" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n\n "Work" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n\n "Derivative Works" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of 
authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n\n "Contribution" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. For the purposes of this definition, "submitted"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as "Not a Contribution."\n\n "Contributor" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n\n 2. Grant of Copyright License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n\n 3. Grant of Patent License. 
Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n\n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n\n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n\n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n\n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n\n (d) If the Work includes a "NOTICE" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of 
the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n\n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n\n 5. Submission of Contributions. Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n\n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n\n 7. Disclaimer of Warranty. 
Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an "AS IS" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n\n 8. Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n\n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. 
However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n\n END OF TERMS AND CONDITIONS\n\n APPENDIX: How to apply the Apache License to your work.\n\n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets "{}"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. We also recommend that a\n file or class name and description of purpose be included on the\n same "printed page" as the copyright notice for easier\n identification within third-party archives.\n\n Copyright {yyyy} {name of copyright owner}\n\n Licensed under the Apache License, Version 2.0 (the "License");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an "AS IS" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n---\n\nFile: benchmarks/64KB-min.json\n\nMIT License\n\nCopyright (c) Microsoft Corporation.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to 
permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE\n\n---\n\nFile: benchmarks/bitly-usa-gov.json\n\nThe MIT License (MIT)\n\nCopyright (c) 2017 Wes McKinney\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n---\n\nFile: benchmarks/twitter.json\n\nThe MIT License (MIT)\n\nCopyright (c) 2014 Milo Yip\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n | .venv\Lib\site-packages\json5-0.12.0.dist-info\licenses\LICENSE | LICENSE | Other | 14,728 | 0.95 | 0.088968 | 0 | node-utils | 560 | 2025-05-29T03:10:48.410720 | Apache-2.0 | false | 18413e679155848797966e3c2319a9ea |
Stefan Kögl <stefan@skoegl.net>\nAlexander Shorin <kxepal@gmail.com>\nChristopher J. White <chris@grierwhite.com>\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\AUTHORS | AUTHORS | Other | 113 | 0.7 | 0 | 0 | python-kit | 976 | 2024-02-21T00:29:02.617449 | GPL-3.0 | false | 651f2573ac0a5b742e41d92de2231fbb |
pip\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 293 | 2025-05-10T12:57:33.678584 | Apache-2.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Copyright (c) 2011 Stefan Kögl <stefan@skoegl.net>\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n\n1. Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n3. The name of the author may not be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR\nIMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\nOF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\nIN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,\nINCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT\nNOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF\nTHIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\LICENSE.txt | LICENSE.txt | Other | 1,413 | 0.7 | 0 | 0 | vue-tools | 422 | 2025-06-09T07:13:57.755894 | Apache-2.0 | false | 32b15c843b7a329130f4e266a281ebb3 |
Metadata-Version: 2.1\nName: jsonpointer\nVersion: 3.0.0\nSummary: Identify specific nodes in a JSON document (RFC 6901) \nHome-page: https://github.com/stefankoegl/python-json-pointer\nAuthor: Stefan Kögl\nAuthor-email: stefan@skoegl.net\nLicense: Modified BSD License\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Environment :: Console\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Topic :: Software Development :: Libraries\nClassifier: Topic :: Utilities\nRequires-Python: >=3.7\nDescription-Content-Type: text/markdown\nLicense-File: LICENSE.txt\nLicense-File: AUTHORS\n\npython-json-pointer\n===================\n\n[](https://pypi.python.org/pypi/jsonpointer/)\n[](https://pypi.python.org/pypi/jsonpointer/)\n[](https://coveralls.io/r/stefankoegl/python-json-pointer?branch=master)\n\n\nResolve JSON Pointers in Python\n-------------------------------\n\nLibrary to resolve JSON Pointers according to\n[RFC 6901](http://tools.ietf.org/html/rfc6901)\n\nSee source code for examples\n* Website: https://github.com/stefankoegl/python-json-pointer\n* Repository: https://github.com/stefankoegl/python-json-pointer.git\n* Documentation: https://python-json-pointer.readthedocs.org/\n* PyPI: https://pypi.python.org/pypi/jsonpointer\n* Travis CI: https://travis-ci.org/stefankoegl/python-json-pointer\n* Coveralls: 
https://coveralls.io/r/stefankoegl/python-json-pointer\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\METADATA | METADATA | Other | 2,251 | 0.8 | 0.019608 | 0.133333 | python-kit | 490 | 2025-03-10T11:47:44.390231 | MIT | false | 719cf3adf929155c91afc138c80d8659 |
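The jsonpointer package above resolves JSON Pointers per RFC 6901. To make the mechanics concrete, here is a minimal stand-alone sketch of pointer resolution — not the jsonpointer library's implementation, and ignoring the URI-fragment form — that walks `/`-separated reference tokens and unescapes `~1` to `/` before `~0` to `~`, in that order as the RFC requires:

```python
def resolve_pointer(doc, pointer):
    """Minimal RFC 6901 resolver sketch: the empty pointer refers
    to the whole document; otherwise each reference token selects
    a dict key or a list index."""
    if pointer == "":
        return doc
    if not pointer.startswith("/"):
        raise ValueError("pointer must be empty or start with '/'")
    for token in pointer.split("/")[1:]:
        # Unescape order matters: ~1 first, then ~0 (RFC 6901, section 4).
        token = token.replace("~1", "/").replace("~0", "~")
        if isinstance(doc, list):
            doc = doc[int(token)]
        else:
            doc = doc[token]
    return doc


doc = {"foo": ["bar", "baz"], "a/b": 1}
assert resolve_pointer(doc, "") == doc
assert resolve_pointer(doc, "/foo/0") == "bar"
assert resolve_pointer(doc, "/a~1b") == 1
```

The unescape order is the subtle part: transforming `~0` first would turn the escaped sequence `~01` into `~1` and then wrongly into `/`.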
../../Scripts/jsonpointer,sha256=7_a9x1t_uCv_dUHrirYZNf25F0MPt0LRjWEY24Fw31Y,1834\n__pycache__/jsonpointer.cpython-313.pyc,,\njsonpointer-3.0.0.dist-info/AUTHORS,sha256=TVgxnQ9ZyHvvWwez_k2w8ZwtfVVFsDTGv3tXyJu-9X8,113\njsonpointer-3.0.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\njsonpointer-3.0.0.dist-info/LICENSE.txt,sha256=2LJPFdRyiF94ii1umFhQ8mRie4YBKhe7JCyD8xDZB-U,1413\njsonpointer-3.0.0.dist-info/METADATA,sha256=m8E1xhbkqAdvsZMBI6vy061wLV4wqcmbS6iryFCWgLI,2251\njsonpointer-3.0.0.dist-info/RECORD,,\njsonpointer-3.0.0.dist-info/WHEEL,sha256=k3vXr0c0OitO0k9eCWBlI2yTYnpb_n_I2SGzrrfY7HY,110\njsonpointer-3.0.0.dist-info/top_level.txt,sha256=BsUcar_C0nZzPGV2ackrJ9CpVU8_0W_pHYKwpdnWddM,12\njsonpointer.py,sha256=kXEcNnnUkS8NdSmqSiFJjczJl2-dSZksILgKL0SsABU,10601\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\RECORD | RECORD | Other | 799 | 0.7 | 0 | 0 | react-lib | 634 | 2024-08-07T14:45:03.534044 | Apache-2.0 | false | 731382fb70b9c6630dd68b78468f850d |
jsonpointer\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\top_level.txt | top_level.txt | Other | 12 | 0.5 | 0 | 0 | vue-tools | 649 | 2024-04-30T23:21:19.324457 | GPL-3.0 | false | fcd83a80f2cd660c9a1a5d369485c56d |
Wheel-Version: 1.0\nGenerator: bdist_wheel (0.41.0)\nRoot-Is-Purelib: true\nTag: py2-none-any\nTag: py3-none-any\n\n | .venv\Lib\site-packages\jsonpointer-3.0.0.dist-info\WHEEL | WHEEL | Other | 110 | 0.7 | 0 | 0 | vue-tools | 771 | 2024-05-30T11:01:47.817589 | MIT | false | caf80408fbab4e17954f706a6c338096 |
"""\nThe ``jsonschema`` command line.\n"""\n\nfrom importlib import metadata\nfrom json import JSONDecodeError\nfrom textwrap import dedent\nimport argparse\nimport json\nimport sys\nimport traceback\nimport warnings\n\ntry:\n from pkgutil import resolve_name\nexcept ImportError:\n from pkgutil_resolve_name import resolve_name # type: ignore[no-redef]\n\nfrom attrs import define, field\n\nfrom jsonschema.exceptions import SchemaError\nfrom jsonschema.validators import _RefResolver, validator_for\n\nwarnings.warn(\n (\n "The jsonschema CLI is deprecated and will be removed in a future "\n "version. Please use check-jsonschema instead, which can be installed "\n "from https://pypi.org/project/check-jsonschema/"\n ),\n DeprecationWarning,\n stacklevel=2,\n)\n\n\nclass _CannotLoadFile(Exception):\n pass\n\n\n@define\nclass _Outputter:\n\n _formatter = field()\n _stdout = field()\n _stderr = field()\n\n @classmethod\n def from_arguments(cls, arguments, stdout, stderr):\n if arguments["output"] == "plain":\n formatter = _PlainFormatter(arguments["error_format"])\n elif arguments["output"] == "pretty":\n formatter = _PrettyFormatter()\n return cls(formatter=formatter, stdout=stdout, stderr=stderr)\n\n def load(self, path):\n try:\n file = open(path) # noqa: SIM115, PTH123\n except FileNotFoundError as error:\n self.filenotfound_error(path=path, exc_info=sys.exc_info())\n raise _CannotLoadFile() from error\n\n with file:\n try:\n return json.load(file)\n except JSONDecodeError as error:\n self.parsing_error(path=path, exc_info=sys.exc_info())\n raise _CannotLoadFile() from error\n\n def filenotfound_error(self, **kwargs):\n self._stderr.write(self._formatter.filenotfound_error(**kwargs))\n\n def parsing_error(self, **kwargs):\n self._stderr.write(self._formatter.parsing_error(**kwargs))\n\n def validation_error(self, **kwargs):\n self._stderr.write(self._formatter.validation_error(**kwargs))\n\n def validation_success(self, **kwargs):\n 
self._stdout.write(self._formatter.validation_success(**kwargs))\n\n\n@define\nclass _PrettyFormatter:\n\n _ERROR_MSG = dedent(\n """\\n ===[{type}]===({path})===\n\n {body}\n -----------------------------\n """,\n )\n _SUCCESS_MSG = "===[SUCCESS]===({path})===\n"\n\n def filenotfound_error(self, path, exc_info):\n return self._ERROR_MSG.format(\n path=path,\n type="FileNotFoundError",\n body=f"{path!r} does not exist.",\n )\n\n def parsing_error(self, path, exc_info):\n exc_type, exc_value, exc_traceback = exc_info\n exc_lines = "".join(\n traceback.format_exception(exc_type, exc_value, exc_traceback),\n )\n return self._ERROR_MSG.format(\n path=path,\n type=exc_type.__name__,\n body=exc_lines,\n )\n\n def validation_error(self, instance_path, error):\n return self._ERROR_MSG.format(\n path=instance_path,\n type=error.__class__.__name__,\n body=error,\n )\n\n def validation_success(self, instance_path):\n return self._SUCCESS_MSG.format(path=instance_path)\n\n\n@define\nclass _PlainFormatter:\n\n _error_format = field()\n\n def filenotfound_error(self, path, exc_info):\n return f"{path!r} does not exist.\n"\n\n def parsing_error(self, path, exc_info):\n return "Failed to parse {}: {}\n".format(\n "<stdin>" if path == "<stdin>" else repr(path),\n exc_info[1],\n )\n\n def validation_error(self, instance_path, error):\n return self._error_format.format(file_name=instance_path, error=error)\n\n def validation_success(self, instance_path):\n return ""\n\n\ndef _resolve_name_with_default(name):\n if "." not in name:\n name = "jsonschema." + name\n return resolve_name(name)\n\n\nparser = argparse.ArgumentParser(\n description="JSON Schema Validation CLI",\n)\nparser.add_argument(\n "-i", "--instance",\n action="append",\n dest="instances",\n help="""\n a path to a JSON instance (i.e. filename.json) to validate (may\n be specified multiple times). 
If no instances are provided via this\n option, one will be expected on standard input.\n """,\n)\nparser.add_argument(\n "-F", "--error-format",\n help="""\n the format to use for each validation error message, specified\n in a form suitable for str.format. This string will be passed\n one formatted object named 'error' for each ValidationError.\n Only provide this option when using --output=plain, which is the\n default. If this argument is unprovided and --output=plain is\n used, a simple default representation will be used.\n """,\n)\nparser.add_argument(\n "-o", "--output",\n choices=["plain", "pretty"],\n default="plain",\n help="""\n an output format to use. 'plain' (default) will produce minimal\n text with one line for each error, while 'pretty' will produce\n more detailed human-readable output on multiple lines.\n """,\n)\nparser.add_argument(\n "-V", "--validator",\n type=_resolve_name_with_default,\n help="""\n the fully qualified object name of a validator to use, or, for\n validators that are registered with jsonschema, simply the name\n of the class.\n """,\n)\nparser.add_argument(\n "--base-uri",\n help="""\n a base URI to assign to the provided schema, even if it does not\n declare one (via e.g. $id). This option can be used if you wish to\n resolve relative references to a particular URI (or local path)\n """,\n)\nparser.add_argument(\n "--version",\n action="version",\n version=metadata.version("jsonschema"),\n)\nparser.add_argument(\n "schema",\n help="the path to a JSON Schema to validate with (i.e. 
schema.json)",\n)\n\n\ndef parse_args(args): # noqa: D103\n arguments = vars(parser.parse_args(args=args or ["--help"]))\n if arguments["output"] != "plain" and arguments["error_format"]:\n raise parser.error(\n "--error-format can only be used with --output plain",\n )\n if arguments["output"] == "plain" and arguments["error_format"] is None:\n arguments["error_format"] = "{error.instance}: {error.message}\n"\n return arguments\n\n\ndef _validate_instance(instance_path, instance, validator, outputter):\n invalid = False\n for error in validator.iter_errors(instance):\n invalid = True\n outputter.validation_error(instance_path=instance_path, error=error)\n\n if not invalid:\n outputter.validation_success(instance_path=instance_path)\n return invalid\n\n\ndef main(args=sys.argv[1:]): # noqa: D103\n sys.exit(run(arguments=parse_args(args=args)))\n\n\ndef run(arguments, stdout=sys.stdout, stderr=sys.stderr, stdin=sys.stdin): # noqa: D103\n outputter = _Outputter.from_arguments(\n arguments=arguments,\n stdout=stdout,\n stderr=stderr,\n )\n\n try:\n schema = outputter.load(arguments["schema"])\n except _CannotLoadFile:\n return 1\n\n Validator = arguments["validator"]\n if Validator is None:\n Validator = validator_for(schema)\n\n try:\n Validator.check_schema(schema)\n except SchemaError as error:\n outputter.validation_error(\n instance_path=arguments["schema"],\n error=error,\n )\n return 1\n\n if arguments["instances"]:\n load, instances = outputter.load, arguments["instances"]\n else:\n def load(_):\n try:\n return json.load(stdin)\n except JSONDecodeError as error:\n outputter.parsing_error(\n path="<stdin>", exc_info=sys.exc_info(),\n )\n raise _CannotLoadFile() from error\n instances = ["<stdin>"]\n\n resolver = _RefResolver(\n base_uri=arguments["base_uri"],\n referrer=schema,\n ) if arguments["base_uri"] is not None else None\n\n validator = Validator(schema, resolver=resolver)\n exit_code = 0\n for each in instances:\n try:\n instance = load(each)\n except 
_CannotLoadFile:\n exit_code = 1\n else:\n exit_code |= _validate_instance(\n instance_path=each,\n instance=instance,\n validator=validator,\n outputter=outputter,\n )\n\n return exit_code\n | .venv\Lib\site-packages\jsonschema\cli.py | cli.py | Python | 8,551 | 0.95 | 0.172297 | 0 | awesome-app | 210 | 2025-06-26T05:00:38.615743 | GPL-3.0 | false | d1c50dedc3e47ac2bad41cf42de470fa |
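The `parse_args` default above, `"{error.instance}: {error.message}\n"`, is an ordinary `str.format` template that receives each validation error as a single named `error` object. A minimal standalone sketch of that plain-output formatting, using a stand-in `Error` class rather than the library's `ValidationError`:

```python
# Sketch of how the CLI's --error-format template is applied: str.format is
# passed one named object, `error`, per validation error. The Error class here
# is a stand-in for jsonschema's ValidationError, not the real class.
class Error:
    def __init__(self, instance, message):
        self.instance = instance
        self.message = message

# The default template used when --output=plain and no --error-format is given.
fmt = "{error.instance}: {error.message}\n"

line = fmt.format(error=Error(42, "42 is not of type 'string'"))
print(line, end="")  # 42: 42 is not of type 'string'
```

Because the template only ever sees attribute lookups on `error`, any format string valid for `str.format` works, which is why the CLI restricts `--error-format` to `--output=plain`.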
"""\nValidation errors, and some surrounding helpers.\n"""\nfrom __future__ import annotations\n\nfrom collections import defaultdict, deque\nfrom pprint import pformat\nfrom textwrap import dedent, indent\nfrom typing import TYPE_CHECKING, Any, ClassVar\nimport heapq\nimport warnings\n\nfrom attrs import define\nfrom referencing.exceptions import Unresolvable as _Unresolvable\n\nfrom jsonschema import _utils\n\nif TYPE_CHECKING:\n from collections.abc import Iterable, Mapping, MutableMapping, Sequence\n\n from jsonschema import _types\n\nWEAK_MATCHES: frozenset[str] = frozenset(["anyOf", "oneOf"])\nSTRONG_MATCHES: frozenset[str] = frozenset()\n\n_unset = _utils.Unset()\n\n\ndef _pretty(thing: Any, prefix: str):\n """\n Format something for an error message as prettily as we currently can.\n """\n return indent(pformat(thing, width=72, sort_dicts=False), prefix).lstrip()\n\n\ndef __getattr__(name):\n if name == "RefResolutionError":\n warnings.warn(\n _RefResolutionError._DEPRECATION_MESSAGE,\n DeprecationWarning,\n stacklevel=2,\n )\n return _RefResolutionError\n raise AttributeError(f"module {__name__} has no attribute {name}")\n\n\nclass _Error(Exception):\n\n _word_for_schema_in_error_message: ClassVar[str]\n _word_for_instance_in_error_message: ClassVar[str]\n\n def __init__(\n self,\n message: str,\n validator: str = _unset, # type: ignore[assignment]\n path: Iterable[str | int] = (),\n cause: Exception | None = None,\n context=(),\n validator_value: Any = _unset,\n instance: Any = _unset,\n schema: Mapping[str, Any] | bool = _unset, # type: ignore[assignment]\n schema_path: Iterable[str | int] = (),\n parent: _Error | None = None,\n type_checker: _types.TypeChecker = _unset, # type: ignore[assignment]\n ) -> None:\n super().__init__(\n message,\n validator,\n path,\n cause,\n context,\n validator_value,\n instance,\n schema,\n schema_path,\n parent,\n )\n self.message = message\n self.path = self.relative_path = deque(path)\n self.schema_path = 
self.relative_schema_path = deque(schema_path)\n self.context = list(context)\n self.cause = self.__cause__ = cause\n self.validator = validator\n self.validator_value = validator_value\n self.instance = instance\n self.schema = schema\n self.parent = parent\n self._type_checker = type_checker\n\n for error in context:\n error.parent = self\n\n def __repr__(self) -> str:\n return f"<{self.__class__.__name__}: {self.message!r}>"\n\n def __str__(self) -> str:\n essential_for_verbose = (\n self.validator, self.validator_value, self.instance, self.schema,\n )\n if any(m is _unset for m in essential_for_verbose):\n return self.message\n\n schema_path = _utils.format_as_index(\n container=self._word_for_schema_in_error_message,\n indices=list(self.relative_schema_path)[:-1],\n )\n instance_path = _utils.format_as_index(\n container=self._word_for_instance_in_error_message,\n indices=self.relative_path,\n )\n prefix = 16 * " "\n\n return dedent(\n f"""\\n {self.message}\n\n Failed validating {self.validator!r} in {schema_path}:\n {_pretty(self.schema, prefix=prefix)}\n\n On {instance_path}:\n {_pretty(self.instance, prefix=prefix)}\n """.rstrip(),\n )\n\n @classmethod\n def create_from(cls, other: _Error):\n return cls(**other._contents())\n\n @property\n def absolute_path(self) -> Sequence[str | int]:\n parent = self.parent\n if parent is None:\n return self.relative_path\n\n path = deque(self.relative_path)\n path.extendleft(reversed(parent.absolute_path))\n return path\n\n @property\n def absolute_schema_path(self) -> Sequence[str | int]:\n parent = self.parent\n if parent is None:\n return self.relative_schema_path\n\n path = deque(self.relative_schema_path)\n path.extendleft(reversed(parent.absolute_schema_path))\n return path\n\n @property\n def json_path(self) -> str:\n path = "$"\n for elem in self.absolute_path:\n if isinstance(elem, int):\n path += "[" + str(elem) + "]"\n else:\n path += "." 
+ elem\n return path\n\n def _set(\n self,\n type_checker: _types.TypeChecker | None = None,\n **kwargs: Any,\n ) -> None:\n if type_checker is not None and self._type_checker is _unset:\n self._type_checker = type_checker\n\n for k, v in kwargs.items():\n if getattr(self, k) is _unset:\n setattr(self, k, v)\n\n def _contents(self):\n attrs = (\n "message", "cause", "context", "validator", "validator_value",\n "path", "schema_path", "instance", "schema", "parent",\n )\n return {attr: getattr(self, attr) for attr in attrs}\n\n def _matches_type(self) -> bool:\n try:\n # We ignore this as we want to simply crash if this happens\n expected = self.schema["type"] # type: ignore[index]\n except (KeyError, TypeError):\n return False\n\n if isinstance(expected, str):\n return self._type_checker.is_type(self.instance, expected)\n\n return any(\n self._type_checker.is_type(self.instance, expected_type)\n for expected_type in expected\n )\n\n\nclass ValidationError(_Error):\n """\n An instance was invalid under a provided schema.\n """\n\n _word_for_schema_in_error_message = "schema"\n _word_for_instance_in_error_message = "instance"\n\n\nclass SchemaError(_Error):\n """\n A schema was invalid under its corresponding metaschema.\n """\n\n _word_for_schema_in_error_message = "metaschema"\n _word_for_instance_in_error_message = "schema"\n\n\n@define(slots=False)\nclass _RefResolutionError(Exception):\n """\n A ref could not be resolved.\n """\n\n _DEPRECATION_MESSAGE = (\n "jsonschema.exceptions.RefResolutionError is deprecated as of version "\n "4.18.0. 
If you wish to catch potential reference resolution errors, "\n "directly catch referencing.exceptions.Unresolvable."\n )\n\n _cause: Exception\n\n def __eq__(self, other):\n if self.__class__ is not other.__class__:\n return NotImplemented # pragma: no cover -- uncovered but deprecated # noqa: E501\n return self._cause == other._cause\n\n def __str__(self) -> str:\n return str(self._cause)\n\n\nclass _WrappedReferencingError(_RefResolutionError, _Unresolvable): # pragma: no cover -- partially uncovered but to be removed # noqa: E501\n def __init__(self, cause: _Unresolvable):\n object.__setattr__(self, "_wrapped", cause)\n\n def __eq__(self, other):\n if other.__class__ is self.__class__:\n return self._wrapped == other._wrapped\n elif other.__class__ is self._wrapped.__class__:\n return self._wrapped == other\n return NotImplemented\n\n def __getattr__(self, attr):\n return getattr(self._wrapped, attr)\n\n def __hash__(self):\n return hash(self._wrapped)\n\n def __repr__(self):\n return f"<WrappedReferencingError {self._wrapped!r}>"\n\n def __str__(self):\n return f"{self._wrapped.__class__.__name__}: {self._wrapped}"\n\n\nclass UndefinedTypeCheck(Exception):\n """\n A type checker was asked to check a type it did not have registered.\n """\n\n def __init__(self, type: str) -> None:\n self.type = type\n\n def __str__(self) -> str:\n return f"Type {self.type!r} is unknown to this type checker"\n\n\nclass UnknownType(Exception):\n """\n A validator was asked to validate an instance against an unknown type.\n """\n\n def __init__(self, type, instance, schema):\n self.type = type\n self.instance = instance\n self.schema = schema\n\n def __str__(self):\n prefix = 16 * " "\n\n return dedent(\n f"""\\n Unknown type {self.type!r} for validator with schema:\n {_pretty(self.schema, prefix=prefix)}\n\n While checking instance:\n {_pretty(self.instance, prefix=prefix)}\n """.rstrip(),\n )\n\n\nclass FormatError(Exception):\n """\n Validating a format failed.\n """\n\n def 
__init__(self, message, cause=None):\n super().__init__(message, cause)\n self.message = message\n self.cause = self.__cause__ = cause\n\n def __str__(self):\n return self.message\n\n\nclass ErrorTree:\n """\n ErrorTrees make it easier to check which validations failed.\n """\n\n _instance = _unset\n\n def __init__(self, errors: Iterable[ValidationError] = ()):\n self.errors: MutableMapping[str, ValidationError] = {}\n self._contents: Mapping[str, ErrorTree] = defaultdict(self.__class__)\n\n for error in errors:\n container = self\n for element in error.path:\n container = container[element]\n container.errors[error.validator] = error\n\n container._instance = error.instance\n\n def __contains__(self, index: str | int):\n """\n Check whether ``instance[index]`` has any errors.\n """\n return index in self._contents\n\n def __getitem__(self, index):\n """\n Retrieve the child tree one level down at the given ``index``.\n\n If the index is not in the instance that this tree corresponds\n to and is not known by this tree, whatever error would be raised\n by ``instance.__getitem__`` will be propagated (usually this is\n some subclass of `LookupError`.\n """\n if self._instance is not _unset and index not in self:\n self._instance[index]\n return self._contents[index]\n\n def __setitem__(self, index: str | int, value: ErrorTree):\n """\n Add an error to the tree at the given ``index``.\n\n .. 
deprecated:: v4.20.0\n\n Setting items on an `ErrorTree` is deprecated without replacement.\n To populate a tree, provide all of its sub-errors when you\n construct the tree.\n """\n warnings.warn(\n "ErrorTree.__setitem__ is deprecated without replacement.",\n DeprecationWarning,\n stacklevel=2,\n )\n self._contents[index] = value # type: ignore[index]\n\n def __iter__(self):\n """\n Iterate (non-recursively) over the indices in the instance with errors.\n """\n return iter(self._contents)\n\n def __len__(self):\n """\n Return the `total_errors`.\n """\n return self.total_errors\n\n def __repr__(self):\n total = len(self)\n errors = "error" if total == 1 else "errors"\n return f"<{self.__class__.__name__} ({total} total {errors})>"\n\n @property\n def total_errors(self):\n """\n The total number of errors in the entire tree, including children.\n """\n child_errors = sum(len(tree) for _, tree in self._contents.items())\n return len(self.errors) + child_errors\n\n\ndef by_relevance(weak=WEAK_MATCHES, strong=STRONG_MATCHES):\n """\n Create a key function that can be used to sort errors by relevance.\n\n Arguments:\n weak (set):\n a collection of validation keywords to consider to be\n "weak". If there are two errors at the same level of the\n instance and one is in the set of weak validation keywords,\n the other error will take priority. 
By default, :kw:`anyOf`\n and :kw:`oneOf` are considered weak keywords and will be\n superseded by other same-level validation errors.\n\n strong (set):\n a collection of validation keywords to consider to be\n "strong"\n\n """\n\n def relevance(error):\n validator = error.validator\n return ( # prefer errors which are ...\n -len(error.path), # 'deeper' and thereby more specific\n error.path, # earlier (for sibling errors)\n validator not in weak, # for a non-low-priority keyword\n validator in strong, # for a high priority keyword\n not error._matches_type(), # at least match the instance's type\n ) # otherwise we'll treat them the same\n\n return relevance\n\n\nrelevance = by_relevance()\n"""\nA key function (e.g. to use with `sorted`) which sorts errors by relevance.\n\nExample:\n\n.. code:: python\n\n sorted(validator.iter_errors(12), key=jsonschema.exceptions.relevance)\n"""\n\n\ndef best_match(errors, key=relevance):\n """\n Try to find an error that appears to be the best match among given errors.\n\n In general, errors that are higher up in the instance (i.e. for which\n `ValidationError.path` is shorter) are considered better matches,\n since they indicate "more" is wrong with the instance.\n\n If the resulting match is either :kw:`oneOf` or :kw:`anyOf`, the\n *opposite* assumption is made -- i.e. the deepest error is picked,\n since these keywords only need to match once, and any other errors\n may not be relevant.\n\n Arguments:\n errors (collections.abc.Iterable):\n\n the errors to select from. Do not provide a mixture of\n errors from different validation attempts (i.e. from\n different instances or schemas), since it won't produce\n sensical output.\n\n key (collections.abc.Callable):\n\n the key to use when sorting errors. See `relevance` and\n transitively `by_relevance` for more details (the default is\n to sort with the defaults of that function). 
Changing the\n default is only useful if you want to change the function\n that rates errors but still want the error context descent\n done by this function.\n\n Returns:\n the best matching error, or ``None`` if the iterable was empty\n\n .. note::\n\n This function is a heuristic. Its return value may change for a given\n set of inputs from version to version if better heuristics are added.\n\n """\n best = max(errors, key=key, default=None)\n if best is None:\n return\n\n while best.context:\n # Calculate the minimum via nsmallest, because we don't recurse if\n # all nested errors have the same relevance (i.e. if min == max == all)\n smallest = heapq.nsmallest(2, best.context, key=key)\n if len(smallest) == 2 and key(smallest[0]) == key(smallest[1]): # noqa: PLR2004\n return best\n best = smallest[0]\n return best\n | .venv\Lib\site-packages\jsonschema\exceptions.py | exceptions.py | Python | 14,951 | 0.95 | 0.194215 | 0.013298 | node-utils | 979 | 2024-10-07T00:44:18.503462 | Apache-2.0 | false | 34921c9f9d5a94a9141abb4857d60778 |
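`by_relevance` builds the key tuples that `best_match` hands to `max()`; a simplified standalone sketch of that heuristic over plain dicts (the real key also accounts for sibling order, "strong" keywords, and whether the instance matches the schema's type):

```python
# Simplified sketch of the by_relevance()/best_match() heuristic using plain
# dicts instead of ValidationError objects. Shorter paths score higher under
# max(), and non-"weak" keywords (anything but anyOf/oneOf by default) win.
WEAK = {"anyOf", "oneOf"}

def relevance(error):
    return (
        -len(error["path"]),             # shallower errors rank higher under max()
        error["validator"] not in WEAK,  # non-weak keywords beat weak ones
    )

errors = [
    {"path": ["a"], "validator": "anyOf", "message": "weak, deep"},
    {"path": [], "validator": "type", "message": "strong, shallow"},
]
best = max(errors, key=relevance)
print(best["message"])  # strong, shallow
```

This mirrors the docstring's claim that errors higher up in the instance are better matches: the top-level `max()` prefers the shallow `type` error over the deeper `anyOf` one.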
"""\ntyping.Protocol classes for jsonschema interfaces.\n"""\n\n# for reference material on Protocols, see\n# https://www.python.org/dev/peps/pep-0544/\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, ClassVar, Protocol, runtime_checkable\n\n# in order for Sphinx to resolve references accurately from type annotations,\n# it needs to see names like `jsonschema.TypeChecker`\n# therefore, only import at type-checking time (to avoid circular references),\n# but use `jsonschema` for any types which will otherwise not be resolvable\nif TYPE_CHECKING:\n from collections.abc import Iterable, Mapping\n\n import referencing.jsonschema\n\n from jsonschema import _typing\n from jsonschema.exceptions import ValidationError\n import jsonschema\n import jsonschema.validators\n\n# For code authors working on the validator protocol, these are the three\n# use-cases which should be kept in mind:\n#\n# 1. As a protocol class, it can be used in type annotations to describe the\n# available methods and attributes of a validator\n# 2. It is the source of autodoc for the validator documentation\n# 3. It is runtime_checkable, meaning that it can be used in isinstance()\n# checks.\n#\n# Since protocols are not base classes, isinstance() checking is limited in\n# its capabilities. See docs on runtime_checkable for detail\n\n\n@runtime_checkable\nclass Validator(Protocol):\n """\n The protocol to which all validator classes adhere.\n\n Arguments:\n\n schema:\n\n The schema that the validator object will validate with.\n It is assumed to be valid, and providing\n an invalid schema can lead to undefined behavior. See\n `Validator.check_schema` to validate a schema first.\n\n registry:\n\n a schema registry that will be used for looking up JSON references\n\n resolver:\n\n a resolver that will be used to resolve :kw:`$ref`\n properties (JSON references). If unprovided, one will be created.\n\n .. 
deprecated:: v4.18.0\n\n `RefResolver <_RefResolver>` has been deprecated in favor of\n `referencing`, and with it, this argument.\n\n format_checker:\n\n if provided, a checker which will be used to assert about\n :kw:`format` properties present in the schema. If unprovided,\n *no* format validation is done, and the presence of format\n within schemas is strictly informational. Certain formats\n require additional packages to be installed in order to assert\n against instances. Ensure you've installed `jsonschema` with\n its `extra (optional) dependencies <index:extras>` when\n invoking ``pip``.\n\n .. deprecated:: v4.12.0\n\n Subclassing validator classes now explicitly warns this is not part of\n their public API.\n\n """\n\n #: An object representing the validator's meta schema (the schema that\n #: describes valid schemas in the given version).\n META_SCHEMA: ClassVar[Mapping]\n\n #: A mapping of validation keywords (`str`\s) to functions that\n #: validate the keyword with that name. 
For more information see\n #: `creating-validators`.\n VALIDATORS: ClassVar[Mapping]\n\n #: A `jsonschema.TypeChecker` that will be used when validating\n #: :kw:`type` keywords in JSON schemas.\n TYPE_CHECKER: ClassVar[jsonschema.TypeChecker]\n\n #: A `jsonschema.FormatChecker` that will be used when validating\n #: :kw:`format` keywords in JSON schemas.\n FORMAT_CHECKER: ClassVar[jsonschema.FormatChecker]\n\n #: A function which given a schema returns its ID.\n ID_OF: _typing.id_of\n\n #: The schema that will be used to validate instances\n schema: Mapping | bool\n\n def __init__(\n self,\n schema: Mapping | bool,\n registry: referencing.jsonschema.SchemaRegistry,\n format_checker: jsonschema.FormatChecker | None = None,\n ) -> None:\n ...\n\n @classmethod\n def check_schema(cls, schema: Mapping | bool) -> None:\n """\n Validate the given schema against the validator's `META_SCHEMA`.\n\n Raises:\n\n `jsonschema.exceptions.SchemaError`:\n\n if the schema is invalid\n\n """\n\n def is_type(self, instance: Any, type: str) -> bool:\n """\n Check if the instance is of the given (JSON Schema) type.\n\n Arguments:\n\n instance:\n\n the value to check\n\n type:\n\n the name of a known (JSON Schema) type\n\n Returns:\n\n whether the instance is of the given type\n\n Raises:\n\n `jsonschema.exceptions.UnknownType`:\n\n if ``type`` is not a known type\n\n """\n\n def is_valid(self, instance: Any) -> bool:\n """\n Check if the instance is valid under the current `schema`.\n\n Returns:\n\n whether the instance is valid or not\n\n >>> schema = {"maxItems" : 2}\n >>> Draft202012Validator(schema).is_valid([2, 3, 4])\n False\n\n """\n\n def iter_errors(self, instance: Any) -> Iterable[ValidationError]:\n r"""\n Lazily yield each of the validation errors in the given instance.\n\n >>> schema = {\n ... "type" : "array",\n ... "items" : {"enum" : [1, 2, 3]},\n ... "maxItems" : 2,\n ... 
}\n >>> v = Draft202012Validator(schema)\n >>> for error in sorted(v.iter_errors([2, 3, 4]), key=str):\n ... print(error.message)\n 4 is not one of [1, 2, 3]\n [2, 3, 4] is too long\n\n .. deprecated:: v4.0.0\n\n Calling this function with a second schema argument is deprecated.\n Use `Validator.evolve` instead.\n """\n\n def validate(self, instance: Any) -> None:\n """\n Check if the instance is valid under the current `schema`.\n\n Raises:\n\n `jsonschema.exceptions.ValidationError`:\n\n if the instance is invalid\n\n >>> schema = {"maxItems" : 2}\n >>> Draft202012Validator(schema).validate([2, 3, 4])\n Traceback (most recent call last):\n ...\n ValidationError: [2, 3, 4] is too long\n\n """\n\n def evolve(self, **kwargs) -> Validator:\n """\n Create a new validator like this one, but with given changes.\n\n Preserves all other attributes, so can be used to e.g. create a\n validator with a different schema but with the same :kw:`$ref`\n resolution behavior.\n\n >>> validator = Draft202012Validator({})\n >>> validator.evolve(schema={"type": "number"})\n Draft202012Validator(schema={'type': 'number'}, format_checker=None)\n\n The returned object satisfies the validator protocol, but may not\n be of the same concrete class! In particular this occurs\n when a :kw:`$ref` occurs to a schema with a different\n :kw:`$schema` than this one (i.e. for a different draft).\n\n >>> validator.evolve(\n ... schema={"$schema": Draft7Validator.META_SCHEMA["$id"]}\n ... )\n Draft7Validator(schema=..., format_checker=None)\n """\n | .venv\Lib\site-packages\jsonschema\protocols.py | protocols.py | Python | 7,145 | 0.95 | 0.126638 | 0.179012 | vue-tools | 363 | 2024-01-03T15:13:08.215268 | GPL-3.0 | false | d08103a182052cd0b084af50f567b2d4 |
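The protocol documented above can be illustrated with a toy class that implements only the `maxItems` keyword from the docstring examples. This is a standalone sketch of the `iter_errors`/`is_valid`/`validate` shape, not the library's generated validator classes:

```python
# Toy validator implementing only "maxItems", to show the shape of the
# Validator protocol's core methods. Errors are plain strings here rather
# than jsonschema.exceptions.ValidationError instances.
class MaxItemsValidator:
    def __init__(self, schema):
        self.schema = schema

    def iter_errors(self, instance):
        # Lazily yield each violation, as the protocol specifies.
        limit = self.schema.get("maxItems")
        if limit is not None and isinstance(instance, list) and len(instance) > limit:
            yield f"{instance!r} is too long"

    def is_valid(self, instance):
        # Valid iff iter_errors yields nothing, as in the real protocol.
        return next(self.iter_errors(instance), None) is None

    def validate(self, instance):
        # Raise on the first error instead of collecting them all.
        for error in self.iter_errors(instance):
            raise ValueError(error)

v = MaxItemsValidator({"maxItems": 2})
print(v.is_valid([2, 3, 4]))  # False
```

Note how `is_valid` and `validate` are both thin wrappers over `iter_errors`; the real classes produced by `jsonschema.validators.create` follow the same layering.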
"""\nCreation and extension of validators, with implementations for existing drafts.\n"""\nfrom __future__ import annotations\n\nfrom collections import deque\nfrom collections.abc import Iterable, Mapping, Sequence\nfrom functools import lru_cache\nfrom operator import methodcaller\nfrom typing import TYPE_CHECKING\nfrom urllib.parse import unquote, urldefrag, urljoin, urlsplit\nfrom urllib.request import urlopen\nfrom warnings import warn\nimport contextlib\nimport json\nimport reprlib\nimport warnings\n\nfrom attrs import define, field, fields\nfrom jsonschema_specifications import REGISTRY as SPECIFICATIONS\nfrom rpds import HashTrieMap\nimport referencing.exceptions\nimport referencing.jsonschema\n\nfrom jsonschema import (\n _format,\n _keywords,\n _legacy_keywords,\n _types,\n _typing,\n _utils,\n exceptions,\n)\n\nif TYPE_CHECKING:\n from jsonschema.protocols import Validator\n\n_UNSET = _utils.Unset()\n\n_VALIDATORS: dict[str, Validator] = {}\n_META_SCHEMAS = _utils.URIDict()\n\n\ndef __getattr__(name):\n if name == "ErrorTree":\n warnings.warn(\n "Importing ErrorTree from jsonschema.validators is deprecated. "\n "Instead import it from jsonschema.exceptions.",\n DeprecationWarning,\n stacklevel=2,\n )\n from jsonschema.exceptions import ErrorTree\n return ErrorTree\n elif name == "validators":\n warnings.warn(\n "Accessing jsonschema.validators.validators is deprecated. "\n "Use jsonschema.validators.validator_for with a given schema.",\n DeprecationWarning,\n stacklevel=2,\n )\n return _VALIDATORS\n elif name == "meta_schemas":\n warnings.warn(\n "Accessing jsonschema.validators.meta_schemas is deprecated. 
"\n "Use jsonschema.validators.validator_for with a given schema.",\n DeprecationWarning,\n stacklevel=2,\n )\n return _META_SCHEMAS\n elif name == "RefResolver":\n warnings.warn(\n _RefResolver._DEPRECATION_MESSAGE,\n DeprecationWarning,\n stacklevel=2,\n )\n return _RefResolver\n raise AttributeError(f"module {__name__} has no attribute {name}")\n\n\ndef validates(version):\n """\n Register the decorated validator for a ``version`` of the specification.\n\n Registered validators and their meta schemas will be considered when\n parsing :kw:`$schema` keywords' URIs.\n\n Arguments:\n\n version (str):\n\n An identifier to use as the version's name\n\n Returns:\n\n collections.abc.Callable:\n\n a class decorator to decorate the validator with the version\n\n """\n\n def _validates(cls):\n _VALIDATORS[version] = cls\n meta_schema_id = cls.ID_OF(cls.META_SCHEMA)\n _META_SCHEMAS[meta_schema_id] = cls\n return cls\n return _validates\n\n\ndef _warn_for_remote_retrieve(uri: str):\n from urllib.request import Request, urlopen\n headers = {"User-Agent": "python-jsonschema (deprecated $ref resolution)"}\n request = Request(uri, headers=headers) # noqa: S310\n with urlopen(request) as response: # noqa: S310\n warnings.warn(\n "Automatically retrieving remote references can be a security "\n "vulnerability and is discouraged by the JSON Schema "\n "specifications. Relying on this behavior is deprecated "\n "and will shortly become an error. 
If you are sure you want to "\n "remotely retrieve your reference and that it is safe to do so, "\n "you can find instructions for doing so via referencing.Registry "\n "in the referencing documentation "\n "(https://referencing.readthedocs.org).",\n DeprecationWarning,\n stacklevel=9, # Ha ha ha ha magic numbers :/\n )\n return referencing.Resource.from_contents(\n json.load(response),\n default_specification=referencing.jsonschema.DRAFT202012,\n )\n\n\n_REMOTE_WARNING_REGISTRY = SPECIFICATIONS.combine(\n referencing.Registry(retrieve=_warn_for_remote_retrieve), # type: ignore[call-arg]\n)\n\n\ndef create(\n meta_schema: referencing.jsonschema.ObjectSchema,\n validators: (\n Mapping[str, _typing.SchemaKeywordValidator]\n | Iterable[tuple[str, _typing.SchemaKeywordValidator]]\n ) = (),\n version: str | None = None,\n type_checker: _types.TypeChecker = _types.draft202012_type_checker,\n format_checker: _format.FormatChecker = _format.draft202012_format_checker,\n id_of: _typing.id_of = referencing.jsonschema.DRAFT202012.id_of,\n applicable_validators: _typing.ApplicableValidators = methodcaller(\n "items",\n ),\n):\n """\n Create a new validator class.\n\n Arguments:\n\n meta_schema:\n\n the meta schema for the new validator class\n\n validators:\n\n a mapping from names to callables, where each callable will\n validate the schema property with the given name.\n\n Each callable should take 4 arguments:\n\n 1. a validator instance,\n 2. the value of the property being validated within the\n instance\n 3. the instance\n 4. the schema\n\n version:\n\n an identifier for the version that this validator class will\n validate. 
If provided, the returned validator class will\n have its ``__name__`` set to include the version, and also\n will have `jsonschema.validators.validates` automatically\n called for the given version.\n\n type_checker:\n\n a type checker, used when applying the :kw:`type` keyword.\n\n If unprovided, a `jsonschema.TypeChecker` will be created\n with a set of default types typical of JSON Schema drafts.\n\n format_checker:\n\n a format checker, used when applying the :kw:`format` keyword.\n\n If unprovided, a `jsonschema.FormatChecker` will be created\n with a set of default formats typical of JSON Schema drafts.\n\n id_of:\n\n A function that given a schema, returns its ID.\n\n applicable_validators:\n\n A function that, given a schema, returns the list of\n applicable schema keywords and associated values\n which will be used to validate the instance.\n This is mostly used to support pre-draft 7 versions of JSON Schema\n which specified behavior around ignoring keywords if they were\n siblings of a ``$ref`` keyword. 
If you're not attempting to\n implement similar behavior, you can typically ignore this argument\n and leave it at its default.\n\n Returns:\n\n a new `jsonschema.protocols.Validator` class\n\n """\n # preemptively don't shadow the `Validator.format_checker` local\n format_checker_arg = format_checker\n\n specification = referencing.jsonschema.specification_with(\n dialect_id=id_of(meta_schema) or "urn:unknown-dialect",\n default=referencing.Specification.OPAQUE,\n )\n\n @define\n class Validator:\n\n VALIDATORS = dict(validators) # noqa: RUF012\n META_SCHEMA = dict(meta_schema) # noqa: RUF012\n TYPE_CHECKER = type_checker\n FORMAT_CHECKER = format_checker_arg\n ID_OF = staticmethod(id_of)\n\n _APPLICABLE_VALIDATORS = applicable_validators\n _validators = field(init=False, repr=False, eq=False)\n\n schema: referencing.jsonschema.Schema = field(repr=reprlib.repr)\n _ref_resolver = field(default=None, repr=False, alias="resolver")\n format_checker: _format.FormatChecker | None = field(default=None)\n # TODO: include new meta-schemas added at runtime\n _registry: referencing.jsonschema.SchemaRegistry = field(\n default=_REMOTE_WARNING_REGISTRY,\n kw_only=True,\n repr=False,\n )\n _resolver = field(\n alias="_resolver",\n default=None,\n kw_only=True,\n repr=False,\n )\n\n def __init_subclass__(cls):\n warnings.warn(\n (\n "Subclassing validator classes is not intended to "\n "be part of their public API. A future version "\n "will make doing so an error, as the behavior of "\n "subclasses isn't guaranteed to stay the same "\n "between releases of jsonschema. 
Instead, prefer "\n "composition of validators, wrapping them in an object "\n "owned entirely by the downstream library."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n def evolve(self, **changes):\n cls = self.__class__\n schema = changes.setdefault("schema", self.schema)\n NewValidator = validator_for(schema, default=cls)\n\n for field in fields(cls): # noqa: F402\n if not field.init:\n continue\n attr_name = field.name\n init_name = field.alias\n if init_name not in changes:\n changes[init_name] = getattr(self, attr_name)\n\n return NewValidator(**changes)\n\n cls.evolve = evolve\n\n def __attrs_post_init__(self):\n if self._resolver is None:\n registry = self._registry\n if registry is not _REMOTE_WARNING_REGISTRY:\n registry = SPECIFICATIONS.combine(registry)\n resource = specification.create_resource(self.schema)\n self._resolver = registry.resolver_with_root(resource)\n\n if self.schema is True or self.schema is False:\n self._validators = []\n else:\n self._validators = [\n (self.VALIDATORS[k], k, v)\n for k, v in applicable_validators(self.schema)\n if k in self.VALIDATORS\n ]\n\n # REMOVEME: Legacy ref resolution state management.\n push_scope = getattr(self._ref_resolver, "push_scope", None)\n if push_scope is not None:\n id = id_of(self.schema)\n if id is not None:\n push_scope(id)\n\n @classmethod\n def check_schema(cls, schema, format_checker=_UNSET):\n Validator = validator_for(cls.META_SCHEMA, default=cls)\n if format_checker is _UNSET:\n format_checker = Validator.FORMAT_CHECKER\n validator = Validator(\n schema=cls.META_SCHEMA,\n format_checker=format_checker,\n )\n for error in validator.iter_errors(schema):\n raise exceptions.SchemaError.create_from(error)\n\n @property\n def resolver(self):\n warnings.warn(\n (\n f"Accessing {self.__class__.__name__}.resolver is "\n "deprecated as of v4.18.0, in favor of the "\n "https://github.com/python-jsonschema/referencing "\n "library, which provides more compliant referencing "\n "behavior as well 
as more flexible APIs for "\n "customization."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n if self._ref_resolver is None:\n self._ref_resolver = _RefResolver.from_schema(\n self.schema,\n id_of=id_of,\n )\n return self._ref_resolver\n\n def evolve(self, **changes):\n schema = changes.setdefault("schema", self.schema)\n NewValidator = validator_for(schema, default=self.__class__)\n\n for (attr_name, init_name) in evolve_fields:\n if init_name not in changes:\n changes[init_name] = getattr(self, attr_name)\n\n return NewValidator(**changes)\n\n def iter_errors(self, instance, _schema=None):\n if _schema is not None:\n warnings.warn(\n (\n "Passing a schema to Validator.iter_errors "\n "is deprecated and will be removed in a future "\n "release. Call validator.evolve(schema=new_schema)."\n "iter_errors(...) instead."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n validators = [\n (self.VALIDATORS[k], k, v)\n for k, v in applicable_validators(_schema)\n if k in self.VALIDATORS\n ]\n else:\n _schema, validators = self.schema, self._validators\n\n if _schema is True:\n return\n elif _schema is False:\n yield exceptions.ValidationError(\n f"False schema does not allow {instance!r}",\n validator=None,\n validator_value=None,\n instance=instance,\n schema=_schema,\n )\n return\n\n for validator, k, v in validators:\n errors = validator(self, v, instance, _schema) or ()\n for error in errors:\n # set details if not already set by the called fn\n error._set(\n validator=k,\n validator_value=v,\n instance=instance,\n schema=_schema,\n type_checker=self.TYPE_CHECKER,\n )\n if k not in {"if", "$ref"}:\n error.schema_path.appendleft(k)\n yield error\n\n def descend(\n self,\n instance,\n schema,\n path=None,\n schema_path=None,\n resolver=None,\n ):\n if schema is True:\n return\n elif schema is False:\n yield exceptions.ValidationError(\n f"False schema does not allow {instance!r}",\n validator=None,\n validator_value=None,\n instance=instance,\n schema=schema,\n )\n 
return\n\n if self._ref_resolver is not None:\n evolved = self.evolve(schema=schema)\n else:\n if resolver is None:\n resolver = self._resolver.in_subresource(\n specification.create_resource(schema),\n )\n evolved = self.evolve(schema=schema, _resolver=resolver)\n\n for k, v in applicable_validators(schema):\n validator = evolved.VALIDATORS.get(k)\n if validator is None:\n continue\n\n errors = validator(evolved, v, instance, schema) or ()\n for error in errors:\n # set details if not already set by the called fn\n error._set(\n validator=k,\n validator_value=v,\n instance=instance,\n schema=schema,\n type_checker=evolved.TYPE_CHECKER,\n )\n if k not in {"if", "$ref"}:\n error.schema_path.appendleft(k)\n if path is not None:\n error.path.appendleft(path)\n if schema_path is not None:\n error.schema_path.appendleft(schema_path)\n yield error\n\n def validate(self, *args, **kwargs):\n for error in self.iter_errors(*args, **kwargs):\n raise error\n\n def is_type(self, instance, type):\n try:\n return self.TYPE_CHECKER.is_type(instance, type)\n except exceptions.UndefinedTypeCheck:\n exc = exceptions.UnknownType(type, instance, self.schema)\n raise exc from None\n\n def _validate_reference(self, ref, instance):\n if self._ref_resolver is None:\n try:\n resolved = self._resolver.lookup(ref)\n except referencing.exceptions.Unresolvable as err:\n raise exceptions._WrappedReferencingError(err) from err\n\n return self.descend(\n instance,\n resolved.contents,\n resolver=resolved.resolver,\n )\n else:\n resolve = getattr(self._ref_resolver, "resolve", None)\n if resolve is None:\n with self._ref_resolver.resolving(ref) as resolved:\n return self.descend(instance, resolved)\n else:\n scope, resolved = resolve(ref)\n self._ref_resolver.push_scope(scope)\n\n try:\n return list(self.descend(instance, resolved))\n finally:\n self._ref_resolver.pop_scope()\n\n def is_valid(self, instance, _schema=None):\n if _schema is not None:\n warnings.warn(\n (\n "Passing a schema to 
Validator.is_valid is deprecated "\n "and will be removed in a future release. Call "\n "validator.evolve(schema=new_schema).is_valid(...) "\n "instead."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n self = self.evolve(schema=_schema)\n\n error = next(self.iter_errors(instance), None)\n return error is None\n\n evolve_fields = [\n (field.name, field.alias)\n for field in fields(Validator)\n if field.init\n ]\n\n if version is not None:\n safe = version.title().replace(" ", "").replace("-", "")\n Validator.__name__ = Validator.__qualname__ = f"{safe}Validator"\n Validator = validates(version)(Validator) # type: ignore[misc]\n\n return Validator\n\n\ndef extend(\n validator,\n validators=(),\n version=None,\n type_checker=None,\n format_checker=None,\n):\n """\n Create a new validator class by extending an existing one.\n\n Arguments:\n\n validator (jsonschema.protocols.Validator):\n\n an existing validator class\n\n validators (collections.abc.Mapping):\n\n a mapping of new validator callables to extend with, whose\n structure is as in `create`.\n\n .. 
note::\n\n Any validator callables with the same name as an\n existing one will (silently) replace the old validator\n callable entirely, effectively overriding any validation\n done in the "parent" validator class.\n\n If you wish to instead extend the behavior of a parent's\n validator callable, delegate and call it directly in\n the new validator function by retrieving it using\n ``OldValidator.VALIDATORS["validation_keyword_name"]``.\n\n version (str):\n\n a version for the new validator class\n\n type_checker (jsonschema.TypeChecker):\n\n a type checker, used when applying the :kw:`type` keyword.\n\n If unprovided, the type checker of the extended\n `jsonschema.protocols.Validator` will be carried along.\n\n format_checker (jsonschema.FormatChecker):\n\n a format checker, used when applying the :kw:`format` keyword.\n\n If unprovided, the format checker of the extended\n `jsonschema.protocols.Validator` will be carried along.\n\n Returns:\n\n a new `jsonschema.protocols.Validator` class extending the one\n provided\n\n .. note:: Meta Schemas\n\n The new validator class will have its parent's meta schema.\n\n If you wish to change or extend the meta schema in the new\n validator class, modify ``META_SCHEMA`` directly on the returned\n class. 
Note that no implicit copying is done, so a copy should\n likely be made before modifying it, in order to not affect the\n old validator.\n\n """\n all_validators = dict(validator.VALIDATORS)\n all_validators.update(validators)\n\n if type_checker is None:\n type_checker = validator.TYPE_CHECKER\n if format_checker is None:\n format_checker = validator.FORMAT_CHECKER\n return create(\n meta_schema=validator.META_SCHEMA,\n validators=all_validators,\n version=version,\n type_checker=type_checker,\n format_checker=format_checker,\n id_of=validator.ID_OF,\n applicable_validators=validator._APPLICABLE_VALIDATORS,\n )\n\n\nDraft3Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "http://json-schema.org/draft-03/schema#",\n ),\n validators={\n "$ref": _keywords.ref,\n "additionalItems": _legacy_keywords.additionalItems,\n "additionalProperties": _keywords.additionalProperties,\n "dependencies": _legacy_keywords.dependencies_draft3,\n "disallow": _legacy_keywords.disallow_draft3,\n "divisibleBy": _keywords.multipleOf,\n "enum": _keywords.enum,\n "extends": _legacy_keywords.extends_draft3,\n "format": _keywords.format,\n "items": _legacy_keywords.items_draft3_draft4,\n "maxItems": _keywords.maxItems,\n "maxLength": _keywords.maxLength,\n "maximum": _legacy_keywords.maximum_draft3_draft4,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minimum": _legacy_keywords.minimum_draft3_draft4,\n "pattern": _keywords.pattern,\n "patternProperties": _keywords.patternProperties,\n "properties": _legacy_keywords.properties_draft3,\n "type": _legacy_keywords.type_draft3,\n "uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft3_type_checker,\n format_checker=_format.draft3_format_checker,\n version="draft3",\n id_of=referencing.jsonschema.DRAFT3.id_of,\n applicable_validators=_legacy_keywords.ignore_ref_siblings,\n)\n\nDraft4Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "http://json-schema.org/draft-04/schema#",\n ),\n 
validators={\n "$ref": _keywords.ref,\n "additionalItems": _legacy_keywords.additionalItems,\n "additionalProperties": _keywords.additionalProperties,\n "allOf": _keywords.allOf,\n "anyOf": _keywords.anyOf,\n "dependencies": _legacy_keywords.dependencies_draft4_draft6_draft7,\n "enum": _keywords.enum,\n "format": _keywords.format,\n "items": _legacy_keywords.items_draft3_draft4,\n "maxItems": _keywords.maxItems,\n "maxLength": _keywords.maxLength,\n "maxProperties": _keywords.maxProperties,\n "maximum": _legacy_keywords.maximum_draft3_draft4,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minProperties": _keywords.minProperties,\n "minimum": _legacy_keywords.minimum_draft3_draft4,\n "multipleOf": _keywords.multipleOf,\n "not": _keywords.not_,\n "oneOf": _keywords.oneOf,\n "pattern": _keywords.pattern,\n "patternProperties": _keywords.patternProperties,\n "properties": _keywords.properties,\n "required": _keywords.required,\n "type": _keywords.type,\n "uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft4_type_checker,\n format_checker=_format.draft4_format_checker,\n version="draft4",\n id_of=referencing.jsonschema.DRAFT4.id_of,\n applicable_validators=_legacy_keywords.ignore_ref_siblings,\n)\n\nDraft6Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "http://json-schema.org/draft-06/schema#",\n ),\n validators={\n "$ref": _keywords.ref,\n "additionalItems": _legacy_keywords.additionalItems,\n "additionalProperties": _keywords.additionalProperties,\n "allOf": _keywords.allOf,\n "anyOf": _keywords.anyOf,\n "const": _keywords.const,\n "contains": _legacy_keywords.contains_draft6_draft7,\n "dependencies": _legacy_keywords.dependencies_draft4_draft6_draft7,\n "enum": _keywords.enum,\n "exclusiveMaximum": _keywords.exclusiveMaximum,\n "exclusiveMinimum": _keywords.exclusiveMinimum,\n "format": _keywords.format,\n "items": _legacy_keywords.items_draft6_draft7_draft201909,\n "maxItems": _keywords.maxItems,\n 
"maxLength": _keywords.maxLength,\n "maxProperties": _keywords.maxProperties,\n "maximum": _keywords.maximum,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minProperties": _keywords.minProperties,\n "minimum": _keywords.minimum,\n "multipleOf": _keywords.multipleOf,\n "not": _keywords.not_,\n "oneOf": _keywords.oneOf,\n "pattern": _keywords.pattern,\n "patternProperties": _keywords.patternProperties,\n "properties": _keywords.properties,\n "propertyNames": _keywords.propertyNames,\n "required": _keywords.required,\n "type": _keywords.type,\n "uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft6_type_checker,\n format_checker=_format.draft6_format_checker,\n version="draft6",\n id_of=referencing.jsonschema.DRAFT6.id_of,\n applicable_validators=_legacy_keywords.ignore_ref_siblings,\n)\n\nDraft7Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "http://json-schema.org/draft-07/schema#",\n ),\n validators={\n "$ref": _keywords.ref,\n "additionalItems": _legacy_keywords.additionalItems,\n "additionalProperties": _keywords.additionalProperties,\n "allOf": _keywords.allOf,\n "anyOf": _keywords.anyOf,\n "const": _keywords.const,\n "contains": _legacy_keywords.contains_draft6_draft7,\n "dependencies": _legacy_keywords.dependencies_draft4_draft6_draft7,\n "enum": _keywords.enum,\n "exclusiveMaximum": _keywords.exclusiveMaximum,\n "exclusiveMinimum": _keywords.exclusiveMinimum,\n "format": _keywords.format,\n "if": _keywords.if_,\n "items": _legacy_keywords.items_draft6_draft7_draft201909,\n "maxItems": _keywords.maxItems,\n "maxLength": _keywords.maxLength,\n "maxProperties": _keywords.maxProperties,\n "maximum": _keywords.maximum,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minProperties": _keywords.minProperties,\n "minimum": _keywords.minimum,\n "multipleOf": _keywords.multipleOf,\n "not": _keywords.not_,\n "oneOf": _keywords.oneOf,\n "pattern": _keywords.pattern,\n "patternProperties": 
_keywords.patternProperties,\n "properties": _keywords.properties,\n "propertyNames": _keywords.propertyNames,\n "required": _keywords.required,\n "type": _keywords.type,\n "uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft7_type_checker,\n format_checker=_format.draft7_format_checker,\n version="draft7",\n id_of=referencing.jsonschema.DRAFT7.id_of,\n applicable_validators=_legacy_keywords.ignore_ref_siblings,\n)\n\nDraft201909Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "https://json-schema.org/draft/2019-09/schema",\n ),\n validators={\n "$recursiveRef": _legacy_keywords.recursiveRef,\n "$ref": _keywords.ref,\n "additionalItems": _legacy_keywords.additionalItems,\n "additionalProperties": _keywords.additionalProperties,\n "allOf": _keywords.allOf,\n "anyOf": _keywords.anyOf,\n "const": _keywords.const,\n "contains": _keywords.contains,\n "dependentRequired": _keywords.dependentRequired,\n "dependentSchemas": _keywords.dependentSchemas,\n "enum": _keywords.enum,\n "exclusiveMaximum": _keywords.exclusiveMaximum,\n "exclusiveMinimum": _keywords.exclusiveMinimum,\n "format": _keywords.format,\n "if": _keywords.if_,\n "items": _legacy_keywords.items_draft6_draft7_draft201909,\n "maxItems": _keywords.maxItems,\n "maxLength": _keywords.maxLength,\n "maxProperties": _keywords.maxProperties,\n "maximum": _keywords.maximum,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minProperties": _keywords.minProperties,\n "minimum": _keywords.minimum,\n "multipleOf": _keywords.multipleOf,\n "not": _keywords.not_,\n "oneOf": _keywords.oneOf,\n "pattern": _keywords.pattern,\n "patternProperties": _keywords.patternProperties,\n "properties": _keywords.properties,\n "propertyNames": _keywords.propertyNames,\n "required": _keywords.required,\n "type": _keywords.type,\n "unevaluatedItems": _legacy_keywords.unevaluatedItems_draft2019,\n "unevaluatedProperties": (\n _legacy_keywords.unevaluatedProperties_draft2019\n ),\n 
"uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft201909_type_checker,\n format_checker=_format.draft201909_format_checker,\n version="draft2019-09",\n)\n\nDraft202012Validator = create(\n meta_schema=SPECIFICATIONS.contents(\n "https://json-schema.org/draft/2020-12/schema",\n ),\n validators={\n "$dynamicRef": _keywords.dynamicRef,\n "$ref": _keywords.ref,\n "additionalProperties": _keywords.additionalProperties,\n "allOf": _keywords.allOf,\n "anyOf": _keywords.anyOf,\n "const": _keywords.const,\n "contains": _keywords.contains,\n "dependentRequired": _keywords.dependentRequired,\n "dependentSchemas": _keywords.dependentSchemas,\n "enum": _keywords.enum,\n "exclusiveMaximum": _keywords.exclusiveMaximum,\n "exclusiveMinimum": _keywords.exclusiveMinimum,\n "format": _keywords.format,\n "if": _keywords.if_,\n "items": _keywords.items,\n "maxItems": _keywords.maxItems,\n "maxLength": _keywords.maxLength,\n "maxProperties": _keywords.maxProperties,\n "maximum": _keywords.maximum,\n "minItems": _keywords.minItems,\n "minLength": _keywords.minLength,\n "minProperties": _keywords.minProperties,\n "minimum": _keywords.minimum,\n "multipleOf": _keywords.multipleOf,\n "not": _keywords.not_,\n "oneOf": _keywords.oneOf,\n "pattern": _keywords.pattern,\n "patternProperties": _keywords.patternProperties,\n "prefixItems": _keywords.prefixItems,\n "properties": _keywords.properties,\n "propertyNames": _keywords.propertyNames,\n "required": _keywords.required,\n "type": _keywords.type,\n "unevaluatedItems": _keywords.unevaluatedItems,\n "unevaluatedProperties": _keywords.unevaluatedProperties,\n "uniqueItems": _keywords.uniqueItems,\n },\n type_checker=_types.draft202012_type_checker,\n format_checker=_format.draft202012_format_checker,\n version="draft2020-12",\n)\n\n_LATEST_VERSION: type[Validator] = Draft202012Validator\n\n\nclass _RefResolver:\n """\n Resolve JSON References.\n\n Arguments:\n\n base_uri (str):\n\n The URI of the referring document\n\n 
referrer:\n\n The actual referring document\n\n store (dict):\n\n A mapping from URIs to documents to cache\n\n cache_remote (bool):\n\n Whether remote refs should be cached after first resolution\n\n handlers (dict):\n\n A mapping from URI schemes to functions that should be used\n to retrieve them\n\n urljoin_cache (:func:`functools.lru_cache`):\n\n A cache that will be used for caching the results of joining\n the resolution scope to subscopes.\n\n remote_cache (:func:`functools.lru_cache`):\n\n A cache that will be used for caching the results of\n resolved remote URLs.\n\n Attributes:\n\n cache_remote (bool):\n\n Whether remote refs should be cached after first resolution\n\n .. deprecated:: v4.18.0\n\n ``RefResolver`` has been deprecated in favor of `referencing`.\n\n """\n\n _DEPRECATION_MESSAGE = (\n "jsonschema.RefResolver is deprecated as of v4.18.0, in favor of the "\n "https://github.com/python-jsonschema/referencing library, which "\n "provides more compliant referencing behavior as well as more "\n "flexible APIs for customization. A future release will remove "\n "RefResolver. 
Please file a feature request (on referencing) if you "\n "are missing an API for the kind of customization you need."\n )\n\n def __init__(\n self,\n base_uri,\n referrer,\n store=HashTrieMap(),\n cache_remote=True,\n handlers=(),\n urljoin_cache=None,\n remote_cache=None,\n ):\n if urljoin_cache is None:\n urljoin_cache = lru_cache(1024)(urljoin)\n if remote_cache is None:\n remote_cache = lru_cache(1024)(self.resolve_from_url)\n\n self.referrer = referrer\n self.cache_remote = cache_remote\n self.handlers = dict(handlers)\n\n self._scopes_stack = [base_uri]\n\n self.store = _utils.URIDict(\n (uri, each.contents) for uri, each in SPECIFICATIONS.items()\n )\n self.store.update(\n (id, each.META_SCHEMA) for id, each in _META_SCHEMAS.items()\n )\n self.store.update(store)\n self.store.update(\n (schema["$id"], schema)\n for schema in store.values()\n if isinstance(schema, Mapping) and "$id" in schema\n )\n self.store[base_uri] = referrer\n\n self._urljoin_cache = urljoin_cache\n self._remote_cache = remote_cache\n\n @classmethod\n def from_schema( # noqa: D417\n cls,\n schema,\n id_of=referencing.jsonschema.DRAFT202012.id_of,\n *args,\n **kwargs,\n ):\n """\n Construct a resolver from a JSON schema object.\n\n Arguments:\n\n schema:\n\n the referring schema\n\n Returns:\n\n `_RefResolver`\n\n """\n return cls(base_uri=id_of(schema) or "", referrer=schema, *args, **kwargs) # noqa: B026, E501\n\n def push_scope(self, scope):\n """\n Enter a given sub-scope.\n\n Treats further dereferences as being performed underneath the\n given scope.\n """\n self._scopes_stack.append(\n self._urljoin_cache(self.resolution_scope, scope),\n )\n\n def pop_scope(self):\n """\n Exit the most recent entered scope.\n\n Treats further dereferences as being performed underneath the\n original scope.\n\n Don't call this method more times than `push_scope` has been\n called.\n """\n try:\n self._scopes_stack.pop()\n except IndexError:\n raise exceptions._RefResolutionError(\n "Failed to pop 
the scope from an empty stack. "\n "`pop_scope()` should only be called once for every "\n "`push_scope()`",\n ) from None\n\n @property\n def resolution_scope(self):\n """\n Retrieve the current resolution scope.\n """\n return self._scopes_stack[-1]\n\n @property\n def base_uri(self):\n """\n Retrieve the current base URI, not including any fragment.\n """\n uri, _ = urldefrag(self.resolution_scope)\n return uri\n\n @contextlib.contextmanager\n def in_scope(self, scope):\n """\n Temporarily enter the given scope for the duration of the context.\n\n .. deprecated:: v4.0.0\n """\n warnings.warn(\n "jsonschema.RefResolver.in_scope is deprecated and will be "\n "removed in a future release.",\n DeprecationWarning,\n stacklevel=3,\n )\n self.push_scope(scope)\n try:\n yield\n finally:\n self.pop_scope()\n\n @contextlib.contextmanager\n def resolving(self, ref):\n """\n Resolve the given ``ref`` and enter its resolution scope.\n\n Exits the scope on exit of this context manager.\n\n Arguments:\n\n ref (str):\n\n The reference to resolve\n\n """\n url, resolved = self.resolve(ref)\n self.push_scope(url)\n try:\n yield resolved\n finally:\n self.pop_scope()\n\n def _find_in_referrer(self, key):\n return self._get_subschemas_cache()[key]\n\n @lru_cache # noqa: B019\n def _get_subschemas_cache(self):\n cache = {key: [] for key in _SUBSCHEMAS_KEYWORDS}\n for keyword, subschema in _search_schema(\n self.referrer, _match_subschema_keywords,\n ):\n cache[keyword].append(subschema)\n return cache\n\n @lru_cache # noqa: B019\n def _find_in_subschemas(self, url):\n subschemas = self._get_subschemas_cache()["$id"]\n if not subschemas:\n return None\n uri, fragment = urldefrag(url)\n for subschema in subschemas:\n id = subschema["$id"]\n if not isinstance(id, str):\n continue\n target_uri = self._urljoin_cache(self.resolution_scope, id)\n if target_uri.rstrip("/") == uri.rstrip("/"):\n if fragment:\n subschema = self.resolve_fragment(subschema, fragment)\n self.store[url] = 
subschema\n return url, subschema\n return None\n\n def resolve(self, ref):\n """\n Resolve the given reference.\n """\n url = self._urljoin_cache(self.resolution_scope, ref).rstrip("/")\n\n match = self._find_in_subschemas(url)\n if match is not None:\n return match\n\n return url, self._remote_cache(url)\n\n def resolve_from_url(self, url):\n """\n Resolve the given URL.\n """\n url, fragment = urldefrag(url)\n if not url:\n url = self.base_uri\n\n try:\n document = self.store[url]\n except KeyError:\n try:\n document = self.resolve_remote(url)\n except Exception as exc:\n raise exceptions._RefResolutionError(exc) from exc\n\n return self.resolve_fragment(document, fragment)\n\n def resolve_fragment(self, document, fragment):\n """\n Resolve a ``fragment`` within the referenced ``document``.\n\n Arguments:\n\n document:\n\n The referent document\n\n fragment (str):\n\n a URI fragment to resolve within it\n\n """\n fragment = fragment.lstrip("/")\n\n if not fragment:\n return document\n\n if document is self.referrer:\n find = self._find_in_referrer\n else:\n\n def find(key):\n yield from _search_schema(document, _match_keyword(key))\n\n for keyword in ["$anchor", "$dynamicAnchor"]:\n for subschema in find(keyword):\n if fragment == subschema[keyword]:\n return subschema\n for keyword in ["id", "$id"]:\n for subschema in find(keyword):\n if "#" + fragment == subschema[keyword]:\n return subschema\n\n # Resolve via path\n parts = unquote(fragment).split("/") if fragment else []\n for part in parts:\n part = part.replace("~1", "/").replace("~0", "~")\n\n if isinstance(document, Sequence):\n try: # noqa: SIM105\n part = int(part)\n except ValueError:\n pass\n try:\n document = document[part]\n except (TypeError, LookupError) as err:\n raise exceptions._RefResolutionError(\n f"Unresolvable JSON pointer: {fragment!r}",\n ) from err\n\n return document\n\n def resolve_remote(self, uri):\n """\n Resolve a remote ``uri``.\n\n If called directly, does not check the store 
first, but after\n retrieving the document at the specified URI it will be saved in\n the store if :attr:`cache_remote` is True.\n\n .. note::\n\n If the requests_ library is present, ``jsonschema`` will use it to\n request the remote ``uri``, so that the correct encoding is\n detected and used.\n\n If it isn't, or if the scheme of the ``uri`` is not ``http`` or\n ``https``, UTF-8 is assumed.\n\n Arguments:\n\n uri (str):\n\n The URI to resolve\n\n Returns:\n\n The retrieved document\n\n .. _requests: https://pypi.org/project/requests/\n\n """\n try:\n import requests\n except ImportError:\n requests = None\n\n scheme = urlsplit(uri).scheme\n\n if scheme in self.handlers:\n result = self.handlers[scheme](uri)\n elif scheme in ["http", "https"] and requests:\n # Requests has support for detecting the correct encoding of\n # json over http\n result = requests.get(uri).json()\n else:\n # Otherwise, pass off to urllib and assume utf-8\n with urlopen(uri) as url: # noqa: S310\n result = json.loads(url.read().decode("utf-8"))\n\n if self.cache_remote:\n self.store[uri] = result\n return result\n\n\n_SUBSCHEMAS_KEYWORDS = ("$id", "id", "$anchor", "$dynamicAnchor")\n\n\ndef _match_keyword(keyword):\n\n def matcher(value):\n if keyword in value:\n yield value\n\n return matcher\n\n\ndef _match_subschema_keywords(value):\n for keyword in _SUBSCHEMAS_KEYWORDS:\n if keyword in value:\n yield keyword, value\n\n\ndef _search_schema(schema, matcher):\n """Breadth-first search routine."""\n values = deque([schema])\n while values:\n value = values.pop()\n if not isinstance(value, dict):\n continue\n yield from matcher(value)\n values.extendleft(value.values())\n\n\ndef validate(instance, schema, cls=None, *args, **kwargs): # noqa: D417\n """\n Validate an instance under the given schema.\n\n >>> validate([2, 3, 4], {"maxItems": 2})\n Traceback (most recent call last):\n ...\n ValidationError: [2, 3, 4] is too long\n\n :func:`~jsonschema.validators.validate` will first verify that 
the\n provided schema is itself valid, since not doing so can lead to less\n obvious error messages and fail in less obvious or consistent ways.\n\n If you know you have a valid schema already, especially\n if you intend to validate multiple instances with\n the same schema, you likely would prefer using the\n `jsonschema.protocols.Validator.validate` method directly on a\n specific validator (e.g. ``Draft202012Validator.validate``).\n\n\n Arguments:\n\n instance:\n\n The instance to validate\n\n schema:\n\n The schema to validate with\n\n cls (jsonschema.protocols.Validator):\n\n The class that will be used to validate the instance.\n\n If the ``cls`` argument is not provided, two things will happen\n in accordance with the specification. First, if the schema has a\n :kw:`$schema` keyword containing a known meta-schema [#]_ then the\n proper validator will be used. The specification recommends that\n all schemas contain :kw:`$schema` properties for this reason. If no\n :kw:`$schema` property is found, the default validator class is the\n latest released draft.\n\n Any other provided positional and keyword arguments will be passed\n on when instantiating the ``cls``.\n\n Raises:\n\n `jsonschema.exceptions.ValidationError`:\n\n if the instance is invalid\n\n `jsonschema.exceptions.SchemaError`:\n\n if the schema itself is invalid\n\n .. rubric:: Footnotes\n .. 
[#] known by a validator registered with\n `jsonschema.validators.validates`\n\n """\n if cls is None:\n cls = validator_for(schema)\n\n cls.check_schema(schema)\n validator = cls(schema, *args, **kwargs)\n error = exceptions.best_match(validator.iter_errors(instance))\n if error is not None:\n raise error\n\n\ndef validator_for(\n schema,\n default: type[Validator] | _utils.Unset = _UNSET,\n) -> type[Validator]:\n """\n Retrieve the validator class appropriate for validating the given schema.\n\n Uses the :kw:`$schema` keyword that should be present in the given\n schema to look up the appropriate validator class.\n\n Arguments:\n\n schema (collections.abc.Mapping or bool):\n\n the schema to look at\n\n default:\n\n the default to return if the appropriate validator class\n cannot be determined.\n\n If unprovided, the default is to return the latest supported\n draft.\n\n Examples:\n\n The :kw:`$schema` JSON Schema keyword will control which validator\n class is returned:\n\n >>> schema = {\n ... "$schema": "https://json-schema.org/draft/2020-12/schema",\n ... "type": "integer",\n ... }\n >>> jsonschema.validators.validator_for(schema)\n <class 'jsonschema.validators.Draft202012Validator'>\n\n\n Here, a draft 7 schema instead will return the draft 7 validator:\n\n >>> schema = {\n ... "$schema": "http://json-schema.org/draft-07/schema#",\n ... "type": "integer",\n ... }\n >>> jsonschema.validators.validator_for(schema)\n <class 'jsonschema.validators.Draft7Validator'>\n\n\n Schemas with no ``$schema`` keyword will fallback to the default\n argument:\n\n >>> schema = {"type": "integer"}\n >>> jsonschema.validators.validator_for(\n ... schema, default=Draft7Validator,\n ... 
)\n <class 'jsonschema.validators.Draft7Validator'>\n\n or if none is provided, to the latest version supported.\n Always including the keyword when authoring schemas is highly\n recommended.\n\n """\n DefaultValidator = _LATEST_VERSION if default is _UNSET else default\n\n if schema is True or schema is False or "$schema" not in schema:\n return DefaultValidator # type: ignore[return-value]\n if schema["$schema"] not in _META_SCHEMAS and default is _UNSET:\n warn(\n (\n "The metaschema specified by $schema was not found. "\n "Using the latest draft to validate, but this will raise "\n "an error in the future."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n return _META_SCHEMAS.get(schema["$schema"], DefaultValidator)\n\n# End of .venv\Lib\site-packages\jsonschema\validators.py (Python, 47,098 bytes, Apache-2.0, 2024-12-23)\n
from __future__ import annotations\n\nfrom contextlib import suppress\nfrom datetime import date, datetime\nfrom uuid import UUID\nimport ipaddress\nimport re\nimport typing\nimport warnings\n\nfrom jsonschema.exceptions import FormatError\n\n_FormatCheckCallable = typing.Callable[[object], bool]\n#: A format checker callable.\n_F = typing.TypeVar("_F", bound=_FormatCheckCallable)\n_RaisesType = typing.Union[type[Exception], tuple[type[Exception], ...]]\n\n_RE_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$", re.ASCII)\n\n\nclass FormatChecker:\n """\n A ``format`` property checker.\n\n JSON Schema does not mandate that the ``format`` property actually do any\n validation. If validation is desired however, instances of this class can\n be hooked into validators to enable format validation.\n\n `FormatChecker` objects always return ``True`` when asked about\n formats that they do not know how to validate.\n\n To add a check for a custom format use the `FormatChecker.checks`\n decorator.\n\n Arguments:\n\n formats:\n\n The known formats to validate. 
This argument can be used to\n limit which formats will be used during validation.\n\n """\n\n checkers: dict[\n str,\n tuple[_FormatCheckCallable, _RaisesType],\n ] = {} # noqa: RUF012\n\n def __init__(self, formats: typing.Iterable[str] | None = None):\n if formats is None:\n formats = self.checkers.keys()\n self.checkers = {k: self.checkers[k] for k in formats}\n\n def __repr__(self):\n return f"<FormatChecker checkers={sorted(self.checkers)}>"\n\n def checks(\n self, format: str, raises: _RaisesType = (),\n ) -> typing.Callable[[_F], _F]:\n """\n Register a decorated function as validating a new format.\n\n Arguments:\n\n format:\n\n The format that the decorated function will check.\n\n raises:\n\n The exception(s) raised by the decorated function when an\n invalid instance is found.\n\n The exception object will be accessible as the\n `jsonschema.exceptions.ValidationError.cause` attribute of the\n resulting validation error.\n\n """\n\n def _checks(func: _F) -> _F:\n self.checkers[format] = (func, raises)\n return func\n\n return _checks\n\n @classmethod\n def cls_checks(\n cls, format: str, raises: _RaisesType = (),\n ) -> typing.Callable[[_F], _F]:\n warnings.warn(\n (\n "FormatChecker.cls_checks is deprecated. Call "\n "FormatChecker.checks on a specific FormatChecker instance "\n "instead."\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n return cls._cls_checks(format=format, raises=raises)\n\n @classmethod\n def _cls_checks(\n cls, format: str, raises: _RaisesType = (),\n ) -> typing.Callable[[_F], _F]:\n def _checks(func: _F) -> _F:\n cls.checkers[format] = (func, raises)\n return func\n\n return _checks\n\n def check(self, instance: object, format: str) -> None:\n """\n Check whether the instance conforms to the given format.\n\n Arguments:\n\n instance (*any primitive type*, i.e. 
str, number, bool):\n\n The instance to check\n\n format:\n\n The format that instance should conform to\n\n Raises:\n\n FormatError:\n\n if the instance does not conform to ``format``\n\n """\n if format not in self.checkers:\n return\n\n func, raises = self.checkers[format]\n result, cause = None, None\n try:\n result = func(instance)\n except raises as e:\n cause = e\n if not result:\n raise FormatError(f"{instance!r} is not a {format!r}", cause=cause)\n\n def conforms(self, instance: object, format: str) -> bool:\n """\n Check whether the instance conforms to the given format.\n\n Arguments:\n\n instance (*any primitive type*, i.e. str, number, bool):\n\n The instance to check\n\n format:\n\n The format that instance should conform to\n\n Returns:\n\n bool: whether it conformed\n\n """\n try:\n self.check(instance, format)\n except FormatError:\n return False\n else:\n return True\n\n\ndraft3_format_checker = FormatChecker()\ndraft4_format_checker = FormatChecker()\ndraft6_format_checker = FormatChecker()\ndraft7_format_checker = FormatChecker()\ndraft201909_format_checker = FormatChecker()\ndraft202012_format_checker = FormatChecker()\n\n_draft_checkers: dict[str, FormatChecker] = dict(\n draft3=draft3_format_checker,\n draft4=draft4_format_checker,\n draft6=draft6_format_checker,\n draft7=draft7_format_checker,\n draft201909=draft201909_format_checker,\n draft202012=draft202012_format_checker,\n)\n\n\ndef _checks_drafts(\n name=None,\n draft3=None,\n draft4=None,\n draft6=None,\n draft7=None,\n draft201909=None,\n draft202012=None,\n raises=(),\n) -> typing.Callable[[_F], _F]:\n draft3 = draft3 or name\n draft4 = draft4 or name\n draft6 = draft6 or name\n draft7 = draft7 or name\n draft201909 = draft201909 or name\n draft202012 = draft202012 or name\n\n def wrap(func: _F) -> _F:\n if draft3:\n func = _draft_checkers["draft3"].checks(draft3, raises)(func)\n if draft4:\n func = _draft_checkers["draft4"].checks(draft4, raises)(func)\n if draft6:\n func = 
_draft_checkers["draft6"].checks(draft6, raises)(func)\n if draft7:\n func = _draft_checkers["draft7"].checks(draft7, raises)(func)\n if draft201909:\n func = _draft_checkers["draft201909"].checks(draft201909, raises)(\n func,\n )\n if draft202012:\n func = _draft_checkers["draft202012"].checks(draft202012, raises)(\n func,\n )\n\n # Oy. This is bad global state, but relied upon for now, until\n # deprecation. See #519 and test_format_checkers_come_with_defaults\n FormatChecker._cls_checks(\n draft202012 or draft201909 or draft7 or draft6 or draft4 or draft3,\n raises,\n )(func)\n return func\n\n return wrap\n\n\n@_checks_drafts(name="idn-email")\n@_checks_drafts(name="email")\ndef is_email(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return "@" in instance\n\n\n@_checks_drafts(\n draft3="ip-address",\n draft4="ipv4",\n draft6="ipv4",\n draft7="ipv4",\n draft201909="ipv4",\n draft202012="ipv4",\n raises=ipaddress.AddressValueError,\n)\ndef is_ipv4(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return bool(ipaddress.IPv4Address(instance))\n\n\n@_checks_drafts(name="ipv6", raises=ipaddress.AddressValueError)\ndef is_ipv6(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n address = ipaddress.IPv6Address(instance)\n return not getattr(address, "scope_id", "")\n\n\nwith suppress(ImportError):\n from fqdn import FQDN\n\n @_checks_drafts(\n draft3="host-name",\n draft4="hostname",\n draft6="hostname",\n draft7="hostname",\n draft201909="hostname",\n draft202012="hostname",\n # fqdn.FQDN("") raises a ValueError due to a bug\n # however, it's not clear when or if that will be fixed, so catch it\n # here for now\n raises=ValueError,\n )\n def is_host_name(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return FQDN(instance, min_labels=1).is_valid\n\n\nwith suppress(ImportError):\n # The built-in `idna` codec only implements RFC 3890, so we go 
elsewhere.\n import idna\n\n @_checks_drafts(\n draft7="idn-hostname",\n draft201909="idn-hostname",\n draft202012="idn-hostname",\n raises=(idna.IDNAError, UnicodeError),\n )\n def is_idn_host_name(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n idna.encode(instance)\n return True\n\n\ntry:\n import rfc3987\nexcept ImportError:\n with suppress(ImportError):\n from rfc3986_validator import validate_rfc3986\n\n @_checks_drafts(name="uri")\n def is_uri(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return validate_rfc3986(instance, rule="URI")\n\n @_checks_drafts(\n draft6="uri-reference",\n draft7="uri-reference",\n draft201909="uri-reference",\n draft202012="uri-reference",\n raises=ValueError,\n )\n def is_uri_reference(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return validate_rfc3986(instance, rule="URI_reference")\n\nelse:\n\n @_checks_drafts(\n draft7="iri",\n draft201909="iri",\n draft202012="iri",\n raises=ValueError,\n )\n def is_iri(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return rfc3987.parse(instance, rule="IRI")\n\n @_checks_drafts(\n draft7="iri-reference",\n draft201909="iri-reference",\n draft202012="iri-reference",\n raises=ValueError,\n )\n def is_iri_reference(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return rfc3987.parse(instance, rule="IRI_reference")\n\n @_checks_drafts(name="uri", raises=ValueError)\n def is_uri(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return rfc3987.parse(instance, rule="URI")\n\n @_checks_drafts(\n draft6="uri-reference",\n draft7="uri-reference",\n draft201909="uri-reference",\n draft202012="uri-reference",\n raises=ValueError,\n )\n def is_uri_reference(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return rfc3987.parse(instance, rule="URI_reference")\n\n\nwith 
suppress(ImportError):\n from rfc3339_validator import validate_rfc3339\n\n @_checks_drafts(name="date-time")\n def is_datetime(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return validate_rfc3339(instance.upper())\n\n @_checks_drafts(\n draft7="time",\n draft201909="time",\n draft202012="time",\n )\n def is_time(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return is_datetime("1970-01-01T" + instance)\n\n\n@_checks_drafts(name="regex", raises=re.error)\ndef is_regex(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return bool(re.compile(instance))\n\n\n@_checks_drafts(\n draft3="date",\n draft7="date",\n draft201909="date",\n draft202012="date",\n raises=ValueError,\n)\ndef is_date(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return bool(_RE_DATE.fullmatch(instance) and date.fromisoformat(instance))\n\n\n@_checks_drafts(draft3="time", raises=ValueError)\ndef is_draft3_time(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return bool(datetime.strptime(instance, "%H:%M:%S")) # noqa: DTZ007\n\n\nwith suppress(ImportError):\n import webcolors\n\n @_checks_drafts(draft3="color", raises=(ValueError, TypeError))\n def is_css21_color(instance: object) -> bool:\n if isinstance(instance, str):\n try:\n webcolors.name_to_hex(instance)\n except ValueError:\n webcolors.normalize_hex(instance.lower())\n return True\n\n\nwith suppress(ImportError):\n import jsonpointer\n\n @_checks_drafts(\n draft6="json-pointer",\n draft7="json-pointer",\n draft201909="json-pointer",\n draft202012="json-pointer",\n raises=jsonpointer.JsonPointerException,\n )\n def is_json_pointer(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return bool(jsonpointer.JsonPointer(instance))\n\n # TODO: I don't want to maintain this, so it\n # needs to go either into jsonpointer (pending\n # 
https://github.com/stefankoegl/python-json-pointer/issues/34) or\n # into a new external library.\n @_checks_drafts(\n draft7="relative-json-pointer",\n draft201909="relative-json-pointer",\n draft202012="relative-json-pointer",\n raises=jsonpointer.JsonPointerException,\n )\n def is_relative_json_pointer(instance: object) -> bool:\n # Definition taken from:\n # https://tools.ietf.org/html/draft-handrews-relative-json-pointer-01#section-3\n if not isinstance(instance, str):\n return True\n if not instance:\n return False\n\n non_negative_integer, rest = [], ""\n for i, character in enumerate(instance):\n if character.isdigit():\n # digits with a leading "0" are not allowed\n if i > 0 and int(instance[i - 1]) == 0:\n return False\n\n non_negative_integer.append(character)\n continue\n\n if not non_negative_integer:\n return False\n\n rest = instance[i:]\n break\n return (rest == "#") or bool(jsonpointer.JsonPointer(rest))\n\n\nwith suppress(ImportError):\n import uri_template\n\n @_checks_drafts(\n draft6="uri-template",\n draft7="uri-template",\n draft201909="uri-template",\n draft202012="uri-template",\n )\n def is_uri_template(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n return uri_template.validate(instance)\n\n\nwith suppress(ImportError):\n import isoduration\n\n @_checks_drafts(\n draft201909="duration",\n draft202012="duration",\n raises=isoduration.DurationParsingException,\n )\n def is_duration(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n isoduration.parse_duration(instance)\n # FIXME: See bolsote/isoduration#25 and bolsote/isoduration#21\n return instance.endswith(tuple("DMYWHMS"))\n\n\n@_checks_drafts(\n draft201909="uuid",\n draft202012="uuid",\n raises=ValueError,\n)\ndef is_uuid(instance: object) -> bool:\n if not isinstance(instance, str):\n return True\n UUID(instance)\n return all(instance[position] == "-" for position in (8, 13, 18, 23))\n | 
.venv\Lib\site-packages\jsonschema\_format.py | _format.py | Python | 14,747 | 0.95 | 0.165067 | 0.036232 | vue-tools | 35 | 2024-12-25T02:05:41.756087 | GPL-3.0 | false | a697b5047a49d22070f25e50d068b1de |
from fractions import Fraction\nimport re\n\nfrom jsonschema._utils import (\n ensure_list,\n equal,\n extras_msg,\n find_additional_properties,\n find_evaluated_item_indexes_by_schema,\n find_evaluated_property_keys_by_schema,\n uniq,\n)\nfrom jsonschema.exceptions import FormatError, ValidationError\n\n\ndef patternProperties(validator, patternProperties, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for pattern, subschema in patternProperties.items():\n for k, v in instance.items():\n if re.search(pattern, k):\n yield from validator.descend(\n v, subschema, path=k, schema_path=pattern,\n )\n\n\ndef propertyNames(validator, propertyNames, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for property in instance:\n yield from validator.descend(instance=property, schema=propertyNames)\n\n\ndef additionalProperties(validator, aP, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n extras = set(find_additional_properties(instance, schema))\n\n if validator.is_type(aP, "object"):\n for extra in extras:\n yield from validator.descend(instance[extra], aP, path=extra)\n elif not aP and extras:\n if "patternProperties" in schema:\n verb = "does" if len(extras) == 1 else "do"\n joined = ", ".join(repr(each) for each in sorted(extras))\n patterns = ", ".join(\n repr(each) for each in sorted(schema["patternProperties"])\n )\n error = f"{joined} {verb} not match any of the regexes: {patterns}"\n yield ValidationError(error)\n else:\n error = "Additional properties are not allowed (%s %s unexpected)"\n yield ValidationError(error % extras_msg(sorted(extras, key=str)))\n\n\ndef items(validator, items, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n prefix = len(schema.get("prefixItems", []))\n total = len(instance)\n extra = total - prefix\n if extra <= 0:\n return\n\n if items is False:\n rest = instance[prefix:] if extra != 1 else instance[prefix]\n item 
= "items" if prefix != 1 else "item"\n yield ValidationError(\n f"Expected at most {prefix} {item} but found {extra} "\n f"extra: {rest!r}",\n )\n else:\n for index in range(prefix, total):\n yield from validator.descend(\n instance=instance[index],\n schema=items,\n path=index,\n )\n\n\ndef const(validator, const, instance, schema):\n if not equal(instance, const):\n yield ValidationError(f"{const!r} was expected")\n\n\ndef contains(validator, contains, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n matches = 0\n min_contains = schema.get("minContains", 1)\n max_contains = schema.get("maxContains", len(instance))\n\n contains_validator = validator.evolve(schema=contains)\n\n for each in instance:\n if contains_validator.is_valid(each):\n matches += 1\n if matches > max_contains:\n yield ValidationError(\n "Too many items match the given schema "\n f"(expected at most {max_contains})",\n validator="maxContains",\n validator_value=max_contains,\n )\n return\n\n if matches < min_contains:\n if not matches:\n yield ValidationError(\n f"{instance!r} does not contain items "\n "matching the given schema",\n )\n else:\n yield ValidationError(\n "Too few items match the given schema (expected at least "\n f"{min_contains} but only {matches} matched)",\n validator="minContains",\n validator_value=min_contains,\n )\n\n\ndef exclusiveMinimum(validator, minimum, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if instance <= minimum:\n yield ValidationError(\n f"{instance!r} is less than or equal to "\n f"the minimum of {minimum!r}",\n )\n\n\ndef exclusiveMaximum(validator, maximum, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if instance >= maximum:\n yield ValidationError(\n f"{instance!r} is greater than or equal "\n f"to the maximum of {maximum!r}",\n )\n\n\ndef minimum(validator, minimum, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if 
instance < minimum:\n message = f"{instance!r} is less than the minimum of {minimum!r}"\n yield ValidationError(message)\n\n\ndef maximum(validator, maximum, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if instance > maximum:\n message = f"{instance!r} is greater than the maximum of {maximum!r}"\n yield ValidationError(message)\n\n\ndef multipleOf(validator, dB, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if isinstance(dB, float):\n quotient = instance / dB\n try:\n failed = int(quotient) != quotient\n except OverflowError:\n # When `instance` is large and `dB` is less than one,\n # quotient can overflow to infinity; and then casting to int\n # raises an error.\n #\n # In this case we fall back to Fraction logic, which is\n # exact and cannot overflow. The performance is also\n # acceptable: we try the fast all-float option first, and\n # we know that fraction(dB) can have at most a few hundred\n # digits in each part. 
The worst-case slowdown is therefore\n # for already-slow enormous integers or Decimals.\n failed = (Fraction(instance) / Fraction(dB)).denominator != 1\n else:\n failed = instance % dB\n\n if failed:\n yield ValidationError(f"{instance!r} is not a multiple of {dB}")\n\n\ndef minItems(validator, mI, instance, schema):\n if validator.is_type(instance, "array") and len(instance) < mI:\n message = "should be non-empty" if mI == 1 else "is too short"\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef maxItems(validator, mI, instance, schema):\n if validator.is_type(instance, "array") and len(instance) > mI:\n message = "is expected to be empty" if mI == 0 else "is too long"\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef uniqueItems(validator, uI, instance, schema):\n if (\n uI\n and validator.is_type(instance, "array")\n and not uniq(instance)\n ):\n yield ValidationError(f"{instance!r} has non-unique elements")\n\n\ndef pattern(validator, patrn, instance, schema):\n if (\n validator.is_type(instance, "string")\n and not re.search(patrn, instance)\n ):\n yield ValidationError(f"{instance!r} does not match {patrn!r}")\n\n\ndef format(validator, format, instance, schema):\n if validator.format_checker is not None:\n try:\n validator.format_checker.check(instance, format)\n except FormatError as error:\n yield ValidationError(error.message, cause=error.cause)\n\n\ndef minLength(validator, mL, instance, schema):\n if validator.is_type(instance, "string") and len(instance) < mL:\n message = "should be non-empty" if mL == 1 else "is too short"\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef maxLength(validator, mL, instance, schema):\n if validator.is_type(instance, "string") and len(instance) > mL:\n message = "is expected to be empty" if mL == 0 else "is too long"\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef dependentRequired(validator, dependentRequired, instance, schema):\n if not validator.is_type(instance, "object"):\n 
return\n\n for property, dependency in dependentRequired.items():\n if property not in instance:\n continue\n\n for each in dependency:\n if each not in instance:\n message = f"{each!r} is a dependency of {property!r}"\n yield ValidationError(message)\n\n\ndef dependentSchemas(validator, dependentSchemas, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for property, dependency in dependentSchemas.items():\n if property not in instance:\n continue\n yield from validator.descend(\n instance, dependency, schema_path=property,\n )\n\n\ndef enum(validator, enums, instance, schema):\n if all(not equal(each, instance) for each in enums):\n yield ValidationError(f"{instance!r} is not one of {enums!r}")\n\n\ndef ref(validator, ref, instance, schema):\n yield from validator._validate_reference(ref=ref, instance=instance)\n\n\ndef dynamicRef(validator, dynamicRef, instance, schema):\n yield from validator._validate_reference(ref=dynamicRef, instance=instance)\n\n\ndef type(validator, types, instance, schema):\n types = ensure_list(types)\n\n if not any(validator.is_type(instance, type) for type in types):\n reprs = ", ".join(repr(type) for type in types)\n yield ValidationError(f"{instance!r} is not of type {reprs}")\n\n\ndef properties(validator, properties, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for property, subschema in properties.items():\n if property in instance:\n yield from validator.descend(\n instance[property],\n subschema,\n path=property,\n schema_path=property,\n )\n\n\ndef required(validator, required, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n for property in required:\n if property not in instance:\n yield ValidationError(f"{property!r} is a required property")\n\n\ndef minProperties(validator, mP, instance, schema):\n if validator.is_type(instance, "object") and len(instance) < mP:\n message = (\n "should be non-empty" if mP == 1\n else "does not have 
enough properties"\n )\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef maxProperties(validator, mP, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n if validator.is_type(instance, "object") and len(instance) > mP:\n message = (\n "is expected to be empty" if mP == 0\n else "has too many properties"\n )\n yield ValidationError(f"{instance!r} {message}")\n\n\ndef allOf(validator, allOf, instance, schema):\n for index, subschema in enumerate(allOf):\n yield from validator.descend(instance, subschema, schema_path=index)\n\n\ndef anyOf(validator, anyOf, instance, schema):\n all_errors = []\n for index, subschema in enumerate(anyOf):\n errs = list(validator.descend(instance, subschema, schema_path=index))\n if not errs:\n break\n all_errors.extend(errs)\n else:\n yield ValidationError(\n f"{instance!r} is not valid under any of the given schemas",\n context=all_errors,\n )\n\n\ndef oneOf(validator, oneOf, instance, schema):\n subschemas = enumerate(oneOf)\n all_errors = []\n for index, subschema in subschemas:\n errs = list(validator.descend(instance, subschema, schema_path=index))\n if not errs:\n first_valid = subschema\n break\n all_errors.extend(errs)\n else:\n yield ValidationError(\n f"{instance!r} is not valid under any of the given schemas",\n context=all_errors,\n )\n\n more_valid = [\n each for _, each in subschemas\n if validator.evolve(schema=each).is_valid(instance)\n ]\n if more_valid:\n more_valid.append(first_valid)\n reprs = ", ".join(repr(schema) for schema in more_valid)\n yield ValidationError(f"{instance!r} is valid under each of {reprs}")\n\n\ndef not_(validator, not_schema, instance, schema):\n if validator.evolve(schema=not_schema).is_valid(instance):\n message = f"{instance!r} should not be valid under {not_schema!r}"\n yield ValidationError(message)\n\n\ndef if_(validator, if_schema, instance, schema):\n if validator.evolve(schema=if_schema).is_valid(instance):\n if "then" in schema:\n then = 
schema["then"]\n yield from validator.descend(instance, then, schema_path="then")\n elif "else" in schema:\n else_ = schema["else"]\n yield from validator.descend(instance, else_, schema_path="else")\n\n\ndef unevaluatedItems(validator, unevaluatedItems, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n evaluated_item_indexes = find_evaluated_item_indexes_by_schema(\n validator, instance, schema,\n )\n unevaluated_items = [\n item for index, item in enumerate(instance)\n if index not in evaluated_item_indexes\n ]\n if unevaluated_items:\n error = "Unevaluated items are not allowed (%s %s unexpected)"\n yield ValidationError(error % extras_msg(unevaluated_items))\n\n\ndef unevaluatedProperties(validator, unevaluatedProperties, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n evaluated_keys = find_evaluated_property_keys_by_schema(\n validator, instance, schema,\n )\n unevaluated_keys = []\n for property in instance:\n if property not in evaluated_keys:\n for _ in validator.descend(\n instance[property],\n unevaluatedProperties,\n path=property,\n schema_path=property,\n ):\n # FIXME: Include context for each unevaluated property\n # indicating why it's invalid under the subschema.\n unevaluated_keys.append(property) # noqa: PERF401\n\n if unevaluated_keys:\n if unevaluatedProperties is False:\n error = "Unevaluated properties are not allowed (%s %s unexpected)"\n extras = sorted(unevaluated_keys, key=str)\n yield ValidationError(error % extras_msg(extras))\n else:\n error = (\n "Unevaluated properties are not valid under "\n "the given schema (%s %s unevaluated and invalid)"\n )\n yield ValidationError(error % extras_msg(unevaluated_keys))\n\n\ndef prefixItems(validator, prefixItems, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n for (index, item), subschema in zip(enumerate(instance), prefixItems):\n yield from validator.descend(\n instance=item,\n schema=subschema,\n 
schema_path=index,\n path=index,\n )\n | .venv\Lib\site-packages\jsonschema\_keywords.py | _keywords.py | Python | 14,949 | 0.95 | 0.305122 | 0.034091 | node-utils | 343 | 2025-06-26T05:05:02.989722 | GPL-3.0 | false | def741c64a9b8585a94ef7421fe26dbd |
import re\n\nfrom referencing.jsonschema import lookup_recursive_ref\n\nfrom jsonschema import _utils\nfrom jsonschema.exceptions import ValidationError\n\n\ndef ignore_ref_siblings(schema):\n """\n Ignore siblings of ``$ref`` if it is present.\n\n Otherwise, return all keywords.\n\n Suitable for use with `create`'s ``applicable_validators`` argument.\n """\n ref = schema.get("$ref")\n if ref is not None:\n return [("$ref", ref)]\n else:\n return schema.items()\n\n\ndef dependencies_draft3(validator, dependencies, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for property, dependency in dependencies.items():\n if property not in instance:\n continue\n\n if validator.is_type(dependency, "object"):\n yield from validator.descend(\n instance, dependency, schema_path=property,\n )\n elif validator.is_type(dependency, "string"):\n if dependency not in instance:\n message = f"{dependency!r} is a dependency of {property!r}"\n yield ValidationError(message)\n else:\n for each in dependency:\n if each not in instance:\n message = f"{each!r} is a dependency of {property!r}"\n yield ValidationError(message)\n\n\ndef dependencies_draft4_draft6_draft7(\n validator,\n dependencies,\n instance,\n schema,\n):\n """\n Support for the ``dependencies`` keyword from pre-draft 2019-09.\n\n In later drafts, the keyword was split into separate\n ``dependentRequired`` and ``dependentSchemas`` validators.\n """\n if not validator.is_type(instance, "object"):\n return\n\n for property, dependency in dependencies.items():\n if property not in instance:\n continue\n\n if validator.is_type(dependency, "array"):\n for each in dependency:\n if each not in instance:\n message = f"{each!r} is a dependency of {property!r}"\n yield ValidationError(message)\n else:\n yield from validator.descend(\n instance, dependency, schema_path=property,\n )\n\n\ndef disallow_draft3(validator, disallow, instance, schema):\n for disallowed in _utils.ensure_list(disallow):\n if 
validator.evolve(schema={"type": [disallowed]}).is_valid(instance):\n message = f"{disallowed!r} is disallowed for {instance!r}"\n yield ValidationError(message)\n\n\ndef extends_draft3(validator, extends, instance, schema):\n if validator.is_type(extends, "object"):\n yield from validator.descend(instance, extends)\n return\n for index, subschema in enumerate(extends):\n yield from validator.descend(instance, subschema, schema_path=index)\n\n\ndef items_draft3_draft4(validator, items, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n if validator.is_type(items, "object"):\n for index, item in enumerate(instance):\n yield from validator.descend(item, items, path=index)\n else:\n for (index, item), subschema in zip(enumerate(instance), items):\n yield from validator.descend(\n item, subschema, path=index, schema_path=index,\n )\n\n\ndef additionalItems(validator, aI, instance, schema):\n if (\n not validator.is_type(instance, "array")\n or validator.is_type(schema.get("items", {}), "object")\n ):\n return\n\n len_items = len(schema.get("items", []))\n if validator.is_type(aI, "object"):\n for index, item in enumerate(instance[len_items:], start=len_items):\n yield from validator.descend(item, aI, path=index)\n elif not aI and len(instance) > len(schema.get("items", [])):\n error = "Additional items are not allowed (%s %s unexpected)"\n yield ValidationError(\n error % _utils.extras_msg(instance[len(schema.get("items", [])):]),\n )\n\n\ndef items_draft6_draft7_draft201909(validator, items, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n if validator.is_type(items, "array"):\n for (index, item), subschema in zip(enumerate(instance), items):\n yield from validator.descend(\n item, subschema, path=index, schema_path=index,\n )\n else:\n for index, item in enumerate(instance):\n yield from validator.descend(item, items, path=index)\n\n\ndef minimum_draft3_draft4(validator, minimum, instance, schema):\n if not 
validator.is_type(instance, "number"):\n return\n\n if schema.get("exclusiveMinimum", False):\n failed = instance <= minimum\n cmp = "less than or equal to"\n else:\n failed = instance < minimum\n cmp = "less than"\n\n if failed:\n message = f"{instance!r} is {cmp} the minimum of {minimum!r}"\n yield ValidationError(message)\n\n\ndef maximum_draft3_draft4(validator, maximum, instance, schema):\n if not validator.is_type(instance, "number"):\n return\n\n if schema.get("exclusiveMaximum", False):\n failed = instance >= maximum\n cmp = "greater than or equal to"\n else:\n failed = instance > maximum\n cmp = "greater than"\n\n if failed:\n message = f"{instance!r} is {cmp} the maximum of {maximum!r}"\n yield ValidationError(message)\n\n\ndef properties_draft3(validator, properties, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n\n for property, subschema in properties.items():\n if property in instance:\n yield from validator.descend(\n instance[property],\n subschema,\n path=property,\n schema_path=property,\n )\n elif subschema.get("required", False):\n error = ValidationError(f"{property!r} is a required property")\n error._set(\n validator="required",\n validator_value=subschema["required"],\n instance=instance,\n schema=schema,\n )\n error.path.appendleft(property)\n error.schema_path.extend([property, "required"])\n yield error\n\n\ndef type_draft3(validator, types, instance, schema):\n types = _utils.ensure_list(types)\n\n all_errors = []\n for index, type in enumerate(types):\n if validator.is_type(type, "object"):\n errors = list(validator.descend(instance, type, schema_path=index))\n if not errors:\n return\n all_errors.extend(errors)\n elif validator.is_type(instance, type):\n return\n\n reprs = []\n for type in types:\n try:\n reprs.append(repr(type["name"]))\n except Exception: # noqa: BLE001\n reprs.append(repr(type))\n yield ValidationError(\n f"{instance!r} is not of type {', '.join(reprs)}",\n context=all_errors,\n 
)\n\n\ndef contains_draft6_draft7(validator, contains, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n\n if not any(\n validator.evolve(schema=contains).is_valid(element)\n for element in instance\n ):\n yield ValidationError(\n f"None of {instance!r} are valid under the given schema",\n )\n\n\ndef recursiveRef(validator, recursiveRef, instance, schema):\n resolved = lookup_recursive_ref(validator._resolver)\n yield from validator.descend(\n instance,\n resolved.contents,\n resolver=resolved.resolver,\n )\n\n\ndef find_evaluated_item_indexes_by_schema(validator, instance, schema):\n """\n Get all indexes of items that get evaluated under the current schema.\n\n Covers all keywords related to unevaluatedItems: items, prefixItems, if,\n then, else, contains, unevaluatedItems, allOf, oneOf, anyOf\n """\n if validator.is_type(schema, "boolean"):\n return []\n evaluated_indexes = []\n\n ref = schema.get("$ref")\n if ref is not None:\n resolved = validator._resolver.lookup(ref)\n evaluated_indexes.extend(\n find_evaluated_item_indexes_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n if "$recursiveRef" in schema:\n resolved = lookup_recursive_ref(validator._resolver)\n evaluated_indexes.extend(\n find_evaluated_item_indexes_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n if "items" in schema:\n if "additionalItems" in schema:\n return list(range(len(instance)))\n\n if validator.is_type(schema["items"], "object"):\n return list(range(len(instance)))\n evaluated_indexes += list(range(len(schema["items"])))\n\n if "if" in schema:\n if validator.evolve(schema=schema["if"]).is_valid(instance):\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, schema["if"],\n )\n if "then" in schema:\n evaluated_indexes += 
find_evaluated_item_indexes_by_schema(\n validator, instance, schema["then"],\n )\n elif "else" in schema:\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, schema["else"],\n )\n\n for keyword in ["contains", "unevaluatedItems"]:\n if keyword in schema:\n for k, v in enumerate(instance):\n if validator.evolve(schema=schema[keyword]).is_valid(v):\n evaluated_indexes.append(k)\n\n for keyword in ["allOf", "oneOf", "anyOf"]:\n if keyword in schema:\n for subschema in schema[keyword]:\n errs = next(validator.descend(instance, subschema), None)\n if errs is None:\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, subschema,\n )\n\n return evaluated_indexes\n\n\ndef unevaluatedItems_draft2019(validator, unevaluatedItems, instance, schema):\n if not validator.is_type(instance, "array"):\n return\n evaluated_item_indexes = find_evaluated_item_indexes_by_schema(\n validator, instance, schema,\n )\n unevaluated_items = [\n item for index, item in enumerate(instance)\n if index not in evaluated_item_indexes\n ]\n if unevaluated_items:\n error = "Unevaluated items are not allowed (%s %s unexpected)"\n yield ValidationError(error % _utils.extras_msg(unevaluated_items))\n\n\ndef find_evaluated_property_keys_by_schema(validator, instance, schema):\n if validator.is_type(schema, "boolean"):\n return []\n evaluated_keys = []\n\n ref = schema.get("$ref")\n if ref is not None:\n resolved = validator._resolver.lookup(ref)\n evaluated_keys.extend(\n find_evaluated_property_keys_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n if "$recursiveRef" in schema:\n resolved = lookup_recursive_ref(validator._resolver)\n evaluated_keys.extend(\n find_evaluated_property_keys_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n for keyword in [\n 
"properties", "additionalProperties", "unevaluatedProperties",\n ]:\n if keyword in schema:\n schema_value = schema[keyword]\n if validator.is_type(schema_value, "boolean") and schema_value:\n evaluated_keys += instance.keys()\n\n elif validator.is_type(schema_value, "object"):\n for property in schema_value:\n if property in instance:\n evaluated_keys.append(property)\n\n if "patternProperties" in schema:\n for property in instance:\n for pattern in schema["patternProperties"]:\n if re.search(pattern, property):\n evaluated_keys.append(property)\n\n if "dependentSchemas" in schema:\n for property, subschema in schema["dependentSchemas"].items():\n if property not in instance:\n continue\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, subschema,\n )\n\n for keyword in ["allOf", "oneOf", "anyOf"]:\n if keyword in schema:\n for subschema in schema[keyword]:\n errs = next(validator.descend(instance, subschema), None)\n if errs is None:\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, subschema,\n )\n\n if "if" in schema:\n if validator.evolve(schema=schema["if"]).is_valid(instance):\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["if"],\n )\n if "then" in schema:\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["then"],\n )\n elif "else" in schema:\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["else"],\n )\n\n return evaluated_keys\n\n\ndef unevaluatedProperties_draft2019(validator, uP, instance, schema):\n if not validator.is_type(instance, "object"):\n return\n evaluated_keys = find_evaluated_property_keys_by_schema(\n validator, instance, schema,\n )\n unevaluated_keys = []\n for property in instance:\n if property not in evaluated_keys:\n for _ in validator.descend(\n instance[property],\n uP,\n path=property,\n schema_path=property,\n ):\n # FIXME: Include context for 
each unevaluated property\n # indicating why it's invalid under the subschema.\n unevaluated_keys.append(property) # noqa: PERF401\n\n if unevaluated_keys:\n if uP is False:\n error = "Unevaluated properties are not allowed (%s %s unexpected)"\n extras = sorted(unevaluated_keys, key=str)\n yield ValidationError(error % _utils.extras_msg(extras))\n else:\n error = (\n "Unevaluated properties are not valid under "\n "the given schema (%s %s unevaluated and invalid)"\n )\n yield ValidationError(error % _utils.extras_msg(unevaluated_keys))\n | .venv\Lib\site-packages\jsonschema\_legacy_keywords.py | _legacy_keywords.py | Python | 15,191 | 0.95 | 0.278396 | 0.005348 | node-utils | 94 | 2025-03-26T08:43:39.660750 | BSD-3-Clause | false | a81a7f2466e2c98e8b42ad3f1aac04d0 |
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\nimport numbers\n\nfrom attrs import evolve, field, frozen\nfrom rpds import HashTrieMap\n\nfrom jsonschema.exceptions import UndefinedTypeCheck\n\nif TYPE_CHECKING:\n from collections.abc import Mapping\n from typing import Any, Callable\n\n\n# unfortunately, the type of HashTrieMap is generic, and if used as an attrs\n# converter, the generic type is presented to mypy, which then fails to match\n# the concrete type of a type checker mapping\n# this "do nothing" wrapper presents the correct information to mypy\ndef _typed_map_converter(\n init_val: Mapping[str, Callable[[TypeChecker, Any], bool]],\n) -> HashTrieMap[str, Callable[[TypeChecker, Any], bool]]:\n return HashTrieMap.convert(init_val)\n\n\ndef is_array(checker, instance):\n return isinstance(instance, list)\n\n\ndef is_bool(checker, instance):\n return isinstance(instance, bool)\n\n\ndef is_integer(checker, instance):\n # bool inherits from int, so ensure bools aren't reported as ints\n if isinstance(instance, bool):\n return False\n return isinstance(instance, int)\n\n\ndef is_null(checker, instance):\n return instance is None\n\n\ndef is_number(checker, instance):\n # bool inherits from int, so ensure bools aren't reported as ints\n if isinstance(instance, bool):\n return False\n return isinstance(instance, numbers.Number)\n\n\ndef is_object(checker, instance):\n return isinstance(instance, dict)\n\n\ndef is_string(checker, instance):\n return isinstance(instance, str)\n\n\ndef is_any(checker, instance):\n return True\n\n\n@frozen(repr=False)\nclass TypeChecker:\n """\n A :kw:`type` property checker.\n\n A `TypeChecker` performs type checking for a `Validator`, converting\n between the defined JSON Schema types and some associated Python types or\n objects.\n\n Modifying the behavior just mentioned by redefining which Python objects\n are considered to be of which JSON Schema types can be done using\n `TypeChecker.redefine` or 
`TypeChecker.redefine_many`, and types can be\n removed via `TypeChecker.remove`. Each of these return a new `TypeChecker`.\n\n Arguments:\n\n type_checkers:\n\n The initial mapping of types to their checking functions.\n\n """\n\n _type_checkers: HashTrieMap[\n str, Callable[[TypeChecker, Any], bool],\n ] = field(default=HashTrieMap(), converter=_typed_map_converter)\n\n def __repr__(self):\n types = ", ".join(repr(k) for k in sorted(self._type_checkers))\n return f"<{self.__class__.__name__} types={{{types}}}>"\n\n def is_type(self, instance, type: str) -> bool:\n """\n Check if the instance is of the appropriate type.\n\n Arguments:\n\n instance:\n\n The instance to check\n\n type:\n\n The name of the type that is expected.\n\n Raises:\n\n `jsonschema.exceptions.UndefinedTypeCheck`:\n\n if ``type`` is unknown to this object.\n\n """\n try:\n fn = self._type_checkers[type]\n except KeyError:\n raise UndefinedTypeCheck(type) from None\n\n return fn(self, instance)\n\n def redefine(self, type: str, fn) -> TypeChecker:\n """\n Produce a new checker with the given type redefined.\n\n Arguments:\n\n type:\n\n The name of the type to check.\n\n fn (collections.abc.Callable):\n\n A callable taking exactly two parameters - the type\n checker calling the function and the instance to check.\n The function should return true if instance is of this\n type and false otherwise.\n\n """\n return self.redefine_many({type: fn})\n\n def redefine_many(self, definitions=()) -> TypeChecker:\n """\n Produce a new checker with the given types redefined.\n\n Arguments:\n\n definitions (dict):\n\n A dictionary mapping types to their checking functions.\n\n """\n type_checkers = self._type_checkers.update(definitions)\n return evolve(self, type_checkers=type_checkers)\n\n def remove(self, *types) -> TypeChecker:\n """\n Produce a new checker with the given types forgotten.\n\n Arguments:\n\n types:\n\n the names of the types to remove.\n\n Raises:\n\n 
`jsonschema.exceptions.UndefinedTypeCheck`:\n\n if any given type is unknown to this object\n\n """\n type_checkers = self._type_checkers\n for each in types:\n try:\n type_checkers = type_checkers.remove(each)\n except KeyError:\n raise UndefinedTypeCheck(each) from None\n return evolve(self, type_checkers=type_checkers)\n\n\ndraft3_type_checker = TypeChecker(\n {\n "any": is_any,\n "array": is_array,\n "boolean": is_bool,\n "integer": is_integer,\n "object": is_object,\n "null": is_null,\n "number": is_number,\n "string": is_string,\n },\n)\ndraft4_type_checker = draft3_type_checker.remove("any")\ndraft6_type_checker = draft4_type_checker.redefine(\n "integer",\n lambda checker, instance: (\n is_integer(checker, instance)\n or (isinstance(instance, float) and instance.is_integer())\n ),\n)\ndraft7_type_checker = draft6_type_checker\ndraft201909_type_checker = draft7_type_checker\ndraft202012_type_checker = draft201909_type_checker\n | .venv\Lib\site-packages\jsonschema\_types.py | _types.py | Python | 5,456 | 0.95 | 0.147059 | 0.043165 | node-utils | 462 | 2024-02-12T01:56:46.332122 | BSD-3-Clause | false | 1065e837cd2c107607dac9bd27362b66 |
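The draft 6 `integer` redefinition at the bottom of the module illustrates the two Python quirks these checkers handle: `bool` subclasses `int`, and from draft 6 onward integral floats such as `1.0` count as integers. A standalone sketch of that combined rule, assuming no jsonschema import:

```python
def is_draft6_integer(instance):
    # bool subclasses int, so exclude it explicitly,
    # just as is_integer above does for drafts 3/4
    if isinstance(instance, bool):
        return False
    # draft 6+ additionally accepts floats with zero fractional part
    return isinstance(instance, int) or (
        isinstance(instance, float) and instance.is_integer()
    )
```

In the real library you would reach the same behavior via `draft6_type_checker.is_type(instance, "integer")` from the module above.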
from collections.abc import Mapping, MutableMapping, Sequence\nfrom urllib.parse import urlsplit\nimport itertools\nimport re\n\n\nclass URIDict(MutableMapping):\n """\n Dictionary which uses normalized URIs as keys.\n """\n\n def normalize(self, uri):\n return urlsplit(uri).geturl()\n\n def __init__(self, *args, **kwargs):\n self.store = dict()\n self.store.update(*args, **kwargs)\n\n def __getitem__(self, uri):\n return self.store[self.normalize(uri)]\n\n def __setitem__(self, uri, value):\n self.store[self.normalize(uri)] = value\n\n def __delitem__(self, uri):\n del self.store[self.normalize(uri)]\n\n def __iter__(self):\n return iter(self.store)\n\n def __len__(self): # pragma: no cover -- untested, but to be removed\n return len(self.store)\n\n def __repr__(self): # pragma: no cover -- untested, but to be removed\n return repr(self.store)\n\n\nclass Unset:\n """\n An as-of-yet unset attribute or unprovided default parameter.\n """\n\n def __repr__(self): # pragma: no cover\n return "<unset>"\n\n\ndef format_as_index(container, indices):\n """\n Construct a single string containing indexing operations for the indices.\n\n For example for a container ``bar``, [1, 2, "foo"] -> bar[1][2]["foo"]\n\n Arguments:\n\n container (str):\n\n A word to use for the thing being indexed\n\n indices (sequence):\n\n The indices to format.\n\n """\n if not indices:\n return container\n return f"{container}[{']['.join(repr(index) for index in indices)}]"\n\n\ndef find_additional_properties(instance, schema):\n """\n Return the set of additional properties for the given ``instance``.\n\n Weeds out properties that should have been validated by ``properties`` and\n / or ``patternProperties``.\n\n Assumes ``instance`` is dict-like already.\n """\n properties = schema.get("properties", {})\n patterns = "|".join(schema.get("patternProperties", {}))\n for property in instance:\n if property not in properties:\n if patterns and re.search(patterns, property):\n continue\n yield 
property\n\n\ndef extras_msg(extras):\n """\n Create an error message for extra items or properties.\n """\n verb = "was" if len(extras) == 1 else "were"\n return ", ".join(repr(extra) for extra in extras), verb\n\n\ndef ensure_list(thing):\n """\n Wrap ``thing`` in a list if it's a single str.\n\n Otherwise, return it unchanged.\n """\n if isinstance(thing, str):\n return [thing]\n return thing\n\n\ndef _mapping_equal(one, two):\n """\n Check if two mappings are equal using the semantics of `equal`.\n """\n if len(one) != len(two):\n return False\n return all(\n key in two and equal(value, two[key])\n for key, value in one.items()\n )\n\n\ndef _sequence_equal(one, two):\n """\n Check if two sequences are equal using the semantics of `equal`.\n """\n if len(one) != len(two):\n return False\n return all(equal(i, j) for i, j in zip(one, two))\n\n\ndef equal(one, two):\n """\n Check if two things are equal evading some Python type hierarchy semantics.\n\n Specifically in JSON Schema, evade `bool` inheriting from `int`,\n recursing into sequences to do the same.\n """\n if one is two:\n return True\n if isinstance(one, str) or isinstance(two, str):\n return one == two\n if isinstance(one, Sequence) and isinstance(two, Sequence):\n return _sequence_equal(one, two)\n if isinstance(one, Mapping) and isinstance(two, Mapping):\n return _mapping_equal(one, two)\n return unbool(one) == unbool(two)\n\n\ndef unbool(element, true=object(), false=object()):\n """\n A hack to make True and 1 and False and 0 unique for ``uniq``.\n """\n if element is True:\n return true\n elif element is False:\n return false\n return element\n\n\ndef uniq(container):\n """\n Check if all of a container's elements are unique.\n\n Tries to rely on the container being recursively sortable, or otherwise\n falls back on (slow) brute force.\n """\n try:\n sort = sorted(unbool(i) for i in container)\n sliced = itertools.islice(sort, 1, None)\n\n for i, j in zip(sort, sliced):\n if equal(i, j):\n return 
False\n\n except (NotImplementedError, TypeError):\n seen = []\n for e in container:\n e = unbool(e)\n\n for i in seen:\n if equal(i, e):\n return False\n\n seen.append(e)\n return True\n\n\ndef find_evaluated_item_indexes_by_schema(validator, instance, schema):\n """\n Get all indexes of items that get evaluated under the current schema.\n\n Covers all keywords related to unevaluatedItems: items, prefixItems, if,\n then, else, contains, unevaluatedItems, allOf, oneOf, anyOf\n """\n if validator.is_type(schema, "boolean"):\n return []\n evaluated_indexes = []\n\n if "items" in schema:\n return list(range(len(instance)))\n\n ref = schema.get("$ref")\n if ref is not None:\n resolved = validator._resolver.lookup(ref)\n evaluated_indexes.extend(\n find_evaluated_item_indexes_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n dynamicRef = schema.get("$dynamicRef")\n if dynamicRef is not None:\n resolved = validator._resolver.lookup(dynamicRef)\n evaluated_indexes.extend(\n find_evaluated_item_indexes_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n if "prefixItems" in schema:\n evaluated_indexes += list(range(len(schema["prefixItems"])))\n\n if "if" in schema:\n if validator.evolve(schema=schema["if"]).is_valid(instance):\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, schema["if"],\n )\n if "then" in schema:\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, schema["then"],\n )\n elif "else" in schema:\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, schema["else"],\n )\n\n for keyword in ["contains", "unevaluatedItems"]:\n if keyword in schema:\n for k, v in enumerate(instance):\n if validator.evolve(schema=schema[keyword]).is_valid(v):\n evaluated_indexes.append(k)\n\n for keyword in 
["allOf", "oneOf", "anyOf"]:\n if keyword in schema:\n for subschema in schema[keyword]:\n errs = next(validator.descend(instance, subschema), None)\n if errs is None:\n evaluated_indexes += find_evaluated_item_indexes_by_schema(\n validator, instance, subschema,\n )\n\n return evaluated_indexes\n\n\ndef find_evaluated_property_keys_by_schema(validator, instance, schema):\n """\n Get all keys of items that get evaluated under the current schema.\n\n Covers all keywords related to unevaluatedProperties: properties,\n additionalProperties, unevaluatedProperties, patternProperties,\n dependentSchemas, allOf, oneOf, anyOf, if, then, else\n """\n if validator.is_type(schema, "boolean"):\n return []\n evaluated_keys = []\n\n ref = schema.get("$ref")\n if ref is not None:\n resolved = validator._resolver.lookup(ref)\n evaluated_keys.extend(\n find_evaluated_property_keys_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n dynamicRef = schema.get("$dynamicRef")\n if dynamicRef is not None:\n resolved = validator._resolver.lookup(dynamicRef)\n evaluated_keys.extend(\n find_evaluated_property_keys_by_schema(\n validator.evolve(\n schema=resolved.contents,\n _resolver=resolved.resolver,\n ),\n instance,\n resolved.contents,\n ),\n )\n\n properties = schema.get("properties")\n if validator.is_type(properties, "object"):\n evaluated_keys += properties.keys() & instance.keys()\n\n for keyword in ["additionalProperties", "unevaluatedProperties"]:\n if (subschema := schema.get(keyword)) is None:\n continue\n evaluated_keys += (\n key\n for key, value in instance.items()\n if is_valid(validator.descend(value, subschema))\n )\n\n if "patternProperties" in schema:\n for property in instance:\n for pattern in schema["patternProperties"]:\n if re.search(pattern, property):\n evaluated_keys.append(property)\n\n if "dependentSchemas" in schema:\n for property, subschema in 
schema["dependentSchemas"].items():\n if property not in instance:\n continue\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, subschema,\n )\n\n for keyword in ["allOf", "oneOf", "anyOf"]:\n for subschema in schema.get(keyword, []):\n if not is_valid(validator.descend(instance, subschema)):\n continue\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, subschema,\n )\n\n if "if" in schema:\n if validator.evolve(schema=schema["if"]).is_valid(instance):\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["if"],\n )\n if "then" in schema:\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["then"],\n )\n elif "else" in schema:\n evaluated_keys += find_evaluated_property_keys_by_schema(\n validator, instance, schema["else"],\n )\n\n return evaluated_keys\n\n\ndef is_valid(errs_it):\n """Whether there are no errors in the given iterator."""\n return next(errs_it, None) is None\n | .venv\Lib\site-packages\jsonschema\_utils.py | _utils.py | Python | 10,659 | 0.95 | 0.290141 | 0 | python-kit | 740 | 2023-08-01T03:00:20.201072 | BSD-3-Clause | false | 6d0ffecf7d523c3ee7a96ca62090443c |
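`unbool` exists because `True == 1` and `False == 0` in Python, while JSON Schema's `uniqueItems` must treat booleans and numbers as distinct values. A quick standalone illustration of the sentinel trick used above (same idea, reproduced outside the module for demonstration):

```python
def unbool(element, true=object(), false=object()):
    # swap True/False for sentinel objects that compare equal to
    # nothing else; the defaults are created once, at definition time
    if element is True:
        return true
    elif element is False:
        return false
    return element

# plain Python equality conflates bools and ints: a set collapses them
plain = len({1, True})
# the sentinel form keeps 1 and True distinct, as uniq() requires
distinct = unbool(1) != unbool(True)
```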
"""\nAn implementation of JSON Schema for Python.\n\nThe main functionality is provided by the validator classes for each of the\nsupported JSON Schema versions.\n\nMost commonly, `jsonschema.validators.validate` is the quickest way to simply\nvalidate a given instance under a schema, and will create a validator\nfor you.\n"""\nimport warnings\n\nfrom jsonschema._format import FormatChecker\nfrom jsonschema._types import TypeChecker\nfrom jsonschema.exceptions import SchemaError, ValidationError\nfrom jsonschema.validators import (\n Draft3Validator,\n Draft4Validator,\n Draft6Validator,\n Draft7Validator,\n Draft201909Validator,\n Draft202012Validator,\n validate,\n)\n\n\ndef __getattr__(name):\n if name == "__version__":\n warnings.warn(\n "Accessing jsonschema.__version__ is deprecated and will be "\n "removed in a future release. Use importlib.metadata directly "\n "to query for jsonschema's version.",\n DeprecationWarning,\n stacklevel=2,\n )\n\n from importlib import metadata\n return metadata.version("jsonschema")\n elif name == "RefResolver":\n from jsonschema.validators import _RefResolver\n warnings.warn(\n _RefResolver._DEPRECATION_MESSAGE,\n DeprecationWarning,\n stacklevel=2,\n )\n return _RefResolver\n elif name == "ErrorTree":\n warnings.warn(\n "Importing ErrorTree directly from the jsonschema package "\n "is deprecated and will become an ImportError. Import it from "\n "jsonschema.exceptions instead.",\n DeprecationWarning,\n stacklevel=2,\n )\n from jsonschema.exceptions import ErrorTree\n return ErrorTree\n elif name == "FormatError":\n warnings.warn(\n "Importing FormatError directly from the jsonschema package "\n "is deprecated and will become an ImportError. 
Import it from "\n "jsonschema.exceptions instead.",\n DeprecationWarning,\n stacklevel=2,\n )\n from jsonschema.exceptions import FormatError\n return FormatError\n elif name == "Validator":\n warnings.warn(\n "Importing Validator directly from the jsonschema package "\n "is deprecated and will become an ImportError. Import it from "\n "jsonschema.protocols instead.",\n DeprecationWarning,\n stacklevel=2,\n )\n from jsonschema.protocols import Validator\n return Validator\n elif name == "RefResolutionError":\n from jsonschema.exceptions import _RefResolutionError\n warnings.warn(\n _RefResolutionError._DEPRECATION_MESSAGE,\n DeprecationWarning,\n stacklevel=2,\n )\n return _RefResolutionError\n\n format_checkers = {\n "draft3_format_checker": Draft3Validator,\n "draft4_format_checker": Draft4Validator,\n "draft6_format_checker": Draft6Validator,\n "draft7_format_checker": Draft7Validator,\n "draft201909_format_checker": Draft201909Validator,\n "draft202012_format_checker": Draft202012Validator,\n }\n ValidatorForFormat = format_checkers.get(name)\n if ValidatorForFormat is not None:\n warnings.warn(\n f"Accessing jsonschema.{name} is deprecated and will be "\n "removed in a future release. Instead, use the FORMAT_CHECKER "\n "attribute on the corresponding Validator.",\n DeprecationWarning,\n stacklevel=2,\n )\n return ValidatorForFormat.FORMAT_CHECKER\n\n raise AttributeError(f"module {__name__} has no attribute {name}")\n\n\n__all__ = [\n "Draft3Validator",\n "Draft4Validator",\n "Draft6Validator",\n "Draft7Validator",\n "Draft201909Validator",\n "Draft202012Validator",\n "FormatChecker",\n "SchemaError",\n "TypeChecker",\n "ValidationError",\n "validate",\n]\n | .venv\Lib\site-packages\jsonschema\__init__.py | __init__.py | Python | 3,941 | 0.85 | 0.058333 | 0 | vue-tools | 544 | 2024-04-15T06:11:13.968607 | Apache-2.0 | false | a2bf0ad6faf45e4faec8ec4dbc77f8f2 |
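The deprecation shims above rely on PEP 562's module-level `__getattr__`, which fires only for names not found by normal lookup on the module. A minimal standalone sketch of the pattern (the module and attribute names here are hypothetical):

```python
import types
import warnings

demo = types.ModuleType("demo")
demo.NEW_CONSTANT = 42

def _module_getattr(name):
    # called only when normal attribute lookup on the module fails
    if name == "OLD_CONSTANT":
        warnings.warn(
            "demo.OLD_CONSTANT is deprecated; use demo.NEW_CONSTANT",
            DeprecationWarning,
            stacklevel=2,
        )
        return demo.NEW_CONSTANT
    raise AttributeError(f"module 'demo' has no attribute {name!r}")

demo.__getattr__ = _module_getattr

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = demo.OLD_CONSTANT  # warns, then returns 42
    direct = demo.NEW_CONSTANT  # found directly; no warning emitted
```

Because `NEW_CONSTANT` lives in the module's dict, accessing it never reaches `__getattr__`, which is exactly why the package above can deprecate old names without taxing normal imports.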
"""\nThe jsonschema CLI is now deprecated in favor of check-jsonschema.\n"""\nfrom jsonschema.cli import main\n\nmain()\n | .venv\Lib\site-packages\jsonschema\__main__.py | __main__.py | Python | 115 | 0.85 | 0 | 0 | awesome-app | 678 | 2025-03-30T04:36:48.758278 | GPL-3.0 | false | dc77dd8bda20cabdf95aca947957eced |
"""\nA benchmark for comparing equivalent validation of `const` and `enum`.\n"""\n\nfrom pyperf import Runner\n\nfrom jsonschema import Draft202012Validator\n\nvalue = [37] * 100\nconst_schema = {"const": list(value)}\nenum_schema = {"enum": [list(value)]}\n\nvalid = list(value)\ninvalid = [*valid, 73]\n\nconst = Draft202012Validator(const_schema)\nenum = Draft202012Validator(enum_schema)\n\nassert const.is_valid(valid)\nassert enum.is_valid(valid)\nassert not const.is_valid(invalid)\nassert not enum.is_valid(invalid)\n\n\nif __name__ == "__main__":\n runner = Runner()\n runner.bench_func("const valid", lambda: const.is_valid(valid))\n runner.bench_func("const invalid", lambda: const.is_valid(invalid))\n runner.bench_func("enum valid", lambda: enum.is_valid(valid))\n runner.bench_func("enum invalid", lambda: enum.is_valid(invalid))\n | .venv\Lib\site-packages\jsonschema\benchmarks\const_vs_enum.py | const_vs_enum.py | Python | 830 | 0.85 | 0.066667 | 0 | node-utils | 53 | 2024-08-21T04:31:57.051530 | GPL-3.0 | false | f6171602500fc0abdde575f1d0d5bc96 |
"""\nA benchmark for validation of the `contains` keyword.\n"""\n\nfrom pyperf import Runner\n\nfrom jsonschema import Draft202012Validator\n\nschema = {\n "type": "array",\n "contains": {"const": 37},\n}\nvalidator = Draft202012Validator(schema)\n\nsize = 1000\nbeginning = [37] + [0] * (size - 1)\nmiddle = [0] * (size // 2) + [37] + [0] * (size // 2)\nend = [0] * (size - 1) + [37]\ninvalid = [0] * size\n\n\nif __name__ == "__main__":\n runner = Runner()\n runner.bench_func("baseline", lambda: validator.is_valid([]))\n runner.bench_func("beginning", lambda: validator.is_valid(beginning))\n runner.bench_func("middle", lambda: validator.is_valid(middle))\n runner.bench_func("end", lambda: validator.is_valid(end))\n runner.bench_func("invalid", lambda: validator.is_valid(invalid))\n | .venv\Lib\site-packages\jsonschema\benchmarks\contains.py | contains.py | Python | 786 | 0.95 | 0.071429 | 0 | react-lib | 303 | 2023-10-29T02:53:33.241113 | GPL-3.0 | false | 01f025abd9377a8ccf86bd67f11e6a46 |
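The benchmark above probes a useful property: `contains` can succeed as soon as one array item matches, so the position of the matching item determines how much work validation does. A trivial standalone model of that short-circuit (not jsonschema code):

```python
def contains_valid(instance, is_match):
    # any() stops at the first matching item, which is why the
    # "beginning" case above is cheapest and "invalid" scans everything
    return any(is_match(item) for item in instance)

size = 1000
beginning = [37] + [0] * (size - 1)
invalid = [0] * size
```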
"""\nA performance benchmark using the example from issue #232.\n\nSee https://github.com/python-jsonschema/jsonschema/pull/232.\n"""\nfrom pathlib import Path\n\nfrom pyperf import Runner\nfrom referencing import Registry\n\nfrom jsonschema.tests._suite import Version\nimport jsonschema\n\nissue232 = Version(\n path=Path(__file__).parent / "issue232",\n remotes=Registry(),\n name="issue232",\n)\n\n\nif __name__ == "__main__":\n issue232.benchmark(\n runner=Runner(),\n Validator=jsonschema.Draft4Validator,\n )\n | .venv\Lib\site-packages\jsonschema\benchmarks\issue232.py | issue232.py | Python | 521 | 0.95 | 0.04 | 0 | node-utils | 754 | 2025-01-02T15:25:14.860553 | GPL-3.0 | false | e3b8de76c0b601370b36430f5ec4ddea |
"""\nA performance benchmark using the official test suite.\n\nThis benchmarks jsonschema using every valid example in the\nJSON-Schema-Test-Suite. It will take some time to complete.\n"""\nfrom pyperf import Runner\n\nfrom jsonschema.tests._suite import Suite\n\nif __name__ == "__main__":\n Suite().benchmark(runner=Runner())\n | .venv\Lib\site-packages\jsonschema\benchmarks\json_schema_test_suite.py | json_schema_test_suite.py | Python | 320 | 0.85 | 0.083333 | 0 | vue-tools | 32 | 2023-07-20T11:57:05.752290 | GPL-3.0 | true | 2f7129cb92ced2f7a14683dabc6267b8 |
"""\nA benchmark which tries to compare the possible slow subparts of validation.\n"""\nfrom referencing import Registry\nfrom referencing.jsonschema import DRAFT202012\nfrom rpds import HashTrieMap, HashTrieSet\n\nfrom jsonschema import Draft202012Validator\n\nschema = {\n "type": "array",\n "minLength": 1,\n "maxLength": 1,\n "items": {"type": "integer"},\n}\n\nhmap = HashTrieMap()\nhset = HashTrieSet()\n\nregistry = Registry()\n\nv = Draft202012Validator(schema)\n\n\ndef registry_data_structures():\n return hmap.insert("foo", "bar"), hset.insert("foo")\n\n\ndef registry_add():\n resource = DRAFT202012.create_resource(schema)\n return registry.with_resource(uri="urn:example", resource=resource)\n\n\nif __name__ == "__main__":\n from pyperf import Runner\n runner = Runner()\n\n runner.bench_func("HashMap/HashSet insertion", registry_data_structures)\n runner.bench_func("Registry insertion", registry_add)\n runner.bench_func("Success", lambda: v.is_valid([1]))\n runner.bench_func("Failure", lambda: v.is_valid(["foo"]))\n runner.bench_func("Metaschema validation", lambda: v.check_schema(schema))\n | .venv\Lib\site-packages\jsonschema\benchmarks\subcomponents.py | subcomponents.py | Python | 1,113 | 0.85 | 0.071429 | 0 | python-kit | 76 | 2024-02-18T17:49:53.481453 | GPL-3.0 | false | e7f38eeac66b02dafc461848b1e5aa86 |
"""\nAn unused schema registry should not cause slower validation.\n\n"Unused" here means one where no reference resolution is occurring anyhow.\n\nSee https://github.com/python-jsonschema/jsonschema/issues/1088.\n"""\nfrom pyperf import Runner\nfrom referencing import Registry\nfrom referencing.jsonschema import DRAFT201909\n\nfrom jsonschema import Draft201909Validator\n\nregistry = Registry().with_resource(\n "urn:example:foo",\n DRAFT201909.create_resource({}),\n)\n\nschema = {"$ref": "https://json-schema.org/draft/2019-09/schema"}\ninstance = {"maxLength": 4}\n\nno_registry = Draft201909Validator(schema)\nwith_useless_registry = Draft201909Validator(schema, registry=registry)\n\nif __name__ == "__main__":\n runner = Runner()\n\n runner.bench_func(\n "no registry",\n lambda: no_registry.is_valid(instance),\n )\n runner.bench_func(\n "useless registry",\n lambda: with_useless_registry.is_valid(instance),\n )\n | .venv\Lib\site-packages\jsonschema\benchmarks\unused_registry.py | unused_registry.py | Python | 940 | 0.95 | 0.028571 | 0 | node-utils | 804 | 2025-06-26T18:36:27.599910 | GPL-3.0 | false | 12dffd87691227d8e63b9d2628b06099 |
\n"""\nA benchmark for validation of applicators containing lots of useless schemas.\n\nSignals a small possible optimization to remove all such schemas ahead of time.\n"""\n\nfrom pyperf import Runner\n\nfrom jsonschema import Draft202012Validator as Validator\n\nNUM_USELESS = 100000\n\nsubschema = {"const": 37}\n\nvalid = 37\ninvalid = 12\n\nbaseline = Validator(subschema)\n\n\n# These should be indistinguishable from just `subschema`\nby_name = {\n "single subschema": {\n "anyOf": Validator({"anyOf": [subschema]}),\n "allOf": Validator({"allOf": [subschema]}),\n "oneOf": Validator({"oneOf": [subschema]}),\n },\n "redundant subschemas": {\n "anyOf": Validator({"anyOf": [subschema] * NUM_USELESS}),\n "allOf": Validator({"allOf": [subschema] * NUM_USELESS}),\n },\n "useless successful subschemas (beginning)": {\n "anyOf": Validator({"anyOf": [subschema, *[True] * NUM_USELESS]}),\n "allOf": Validator({"allOf": [subschema, *[True] * NUM_USELESS]}),\n },\n "useless successful subschemas (middle)": {\n "anyOf": Validator(\n {\n "anyOf": [\n *[True] * (NUM_USELESS // 2),\n subschema,\n *[True] * (NUM_USELESS // 2),\n ],\n },\n ),\n "allOf": Validator(\n {\n "allOf": [\n *[True] * (NUM_USELESS // 2),\n subschema,\n *[True] * (NUM_USELESS // 2),\n ],\n },\n ),\n },\n "useless successful subschemas (end)": {\n "anyOf": Validator({"anyOf": [*[True] * NUM_USELESS, subschema]}),\n "allOf": Validator({"allOf": [*[True] * NUM_USELESS, subschema]}),\n },\n "useless failing subschemas (beginning)": {\n "anyOf": Validator({"anyOf": [subschema, *[False] * NUM_USELESS]}),\n "oneOf": Validator({"oneOf": [subschema, *[False] * NUM_USELESS]}),\n },\n "useless failing subschemas (middle)": {\n "anyOf": Validator(\n {\n "anyOf": [\n *[False] * (NUM_USELESS // 2),\n subschema,\n *[False] * (NUM_USELESS // 2),\n ],\n },\n ),\n "oneOf": Validator(\n {\n "oneOf": [\n *[False] * (NUM_USELESS // 2),\n subschema,\n *[False] * (NUM_USELESS // 2),\n ],\n },\n ),\n },\n "useless failing subschemas 
(end)": {\n "anyOf": Validator({"anyOf": [*[False] * NUM_USELESS, subschema]}),\n "oneOf": Validator({"oneOf": [*[False] * NUM_USELESS, subschema]}),\n },\n}\n\nif __name__ == "__main__":\n runner = Runner()\n\n runner.bench_func("baseline valid", lambda: baseline.is_valid(valid))\n runner.bench_func("baseline invalid", lambda: baseline.is_valid(invalid))\n\n for group, applicators in by_name.items():\n for applicator, validator in applicators.items():\n runner.bench_func(\n f"{group}: {applicator} valid",\n lambda validator=validator: validator.is_valid(valid),\n )\n runner.bench_func(\n f"{group}: {applicator} invalid",\n lambda validator=validator: validator.is_valid(invalid),\n )\n | .venv\Lib\site-packages\jsonschema\benchmarks\useless_applicator_schemas.py | useless_applicator_schemas.py | Python | 3,342 | 0.95 | 0.037736 | 0.096774 | python-kit | 303 | 2023-10-12T08:15:20.490217 | Apache-2.0 | false | ea651b78af72e4599e1ea362e83e3a22 |
"""\nA benchmark for validation of schemas containing lots of useless keywords.\n\nChecks we filter them out once, ahead of time.\n"""\n\nfrom pyperf import Runner\n\nfrom jsonschema import Draft202012Validator\n\nNUM_USELESS = 100000\nschema = dict(\n [\n ("not", {"const": 42}),\n *((str(i), i) for i in range(NUM_USELESS)),\n ("type", "integer"),\n # second block of useless keywords, between "type" and "minimum"\n *((str(i), i) for i in range(NUM_USELESS, 2 * NUM_USELESS)),\n ("minimum", 37),\n ],\n)\nvalidator = Draft202012Validator(schema)\n\nvalid = 3737\ninvalid = 12\n\n\nif __name__ == "__main__":\n runner = Runner()\n runner.bench_func("beginning of schema", lambda: validator.is_valid(42))\n runner.bench_func("middle of schema", lambda: validator.is_valid("foo"))\n runner.bench_func("end of schema", lambda: validator.is_valid(12))\n runner.bench_func("valid", lambda: validator.is_valid(3737))\n | .venv\Lib\site-packages\jsonschema\benchmarks\useless_keywords.py | useless_keywords.py | Python | 867 | 0.85 | 0.125 | 0.08 | python-kit | 516 | 2025-03-12T08:06:32.187449 | BSD-3-Clause | false | da083eb88eff8683e6fc8b6dcd721b7b |
from pyperf import Runner\n\nfrom jsonschema import Draft202012Validator\n\nschema = {\n "type": "array",\n "minLength": 1,\n "maxLength": 1,\n "items": {"type": "integer"},\n}\n\n\nif __name__ == "__main__":\n Runner().bench_func("validator creation", Draft202012Validator, schema)\n | .venv\Lib\site-packages\jsonschema\benchmarks\validator_creation.py | validator_creation.py | Python | 285 | 0.85 | 0.071429 | 0 | node-utils | 758 | 2024-06-18T17:47:41.802946 | Apache-2.0 | false | cdc5adb8ab1d7f930f25ac071f76d626 |
"""\nBenchmarks for validation.\n\nThis package is *not* public API.\n"""\n | .venv\Lib\site-packages\jsonschema\benchmarks\__init__.py | __init__.py | Python | 70 | 0.5 | 0.2 | 0 | node-utils | 568 | 2025-04-02T08:00:01.005206 | GPL-3.0 | false | f85a3896b1e2683f47269e8671f71eae |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\const_vs_enum.cpython-313.pyc | const_vs_enum.cpython-313.pyc | Other | 1,943 | 0.7 | 0.035714 | 0 | node-utils | 451 | 2025-05-14T06:18:21.628043 | MIT | false | 2554461d6c706632f783d1f2372c9c3d |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\contains.cpython-313.pyc | contains.cpython-313.pyc | Other | 1,915 | 0.8 | 0.032258 | 0 | python-kit | 357 | 2023-08-21T02:28:50.879383 | BSD-3-Clause | false | 3954254c00bb72172c08188c259caeb6 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\issue232.cpython-313.pyc | issue232.cpython-313.pyc | Other | 891 | 0.8 | 0 | 0 | vue-tools | 162 | 2024-06-20T02:50:02.681187 | BSD-3-Clause | false | 89b53c05ffb588365e956a4cdd8394f5 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\json_schema_test_suite.cpython-313.pyc | json_schema_test_suite.cpython-313.pyc | Other | 642 | 0.7 | 0 | 0 | vue-tools | 38 | 2024-02-20T06:17:49.355115 | Apache-2.0 | true | 1f966e24c2635a7721ec24fe44e9a041 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\nested_schemas.cpython-313.pyc | nested_schemas.cpython-313.pyc | Other | 2,442 | 0.8 | 0 | 0 | node-utils | 382 | 2024-10-26T08:43:42.873077 | BSD-3-Clause | false | 49d76495732a032acc257ab0566ba8d8 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\subcomponents.cpython-313.pyc | subcomponents.cpython-313.pyc | Other | 2,265 | 0.7 | 0 | 0 | vue-tools | 519 | 2024-12-02T18:19:00.934405 | MIT | false | e5681e37335779b677b7d228088af138 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\unused_registry.cpython-313.pyc | unused_registry.cpython-313.pyc | Other | 1,577 | 0.8 | 0 | 0 | node-utils | 810 | 2023-12-31T15:12:00.160026 | BSD-3-Clause | false | f91cad319a1b5de20cbd91e258d6fc27 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\useless_applicator_schemas.cpython-313.pyc | useless_applicator_schemas.cpython-313.pyc | Other | 3,562 | 0.8 | 0.018868 | 0 | vue-tools | 431 | 2024-10-03T20:46:10.021984 | GPL-3.0 | false | 5f927958ddd069a6638d74912a02d156 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\useless_keywords.cpython-313.pyc | useless_keywords.cpython-313.pyc | Other | 2,087 | 0.8 | 0.033333 | 0 | node-utils | 842 | 2023-11-11T04:48:32.479337 | MIT | false | eb787ab35c951132bc727b8b022fb436 |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\validator_creation.cpython-313.pyc | validator_creation.cpython-313.pyc | Other | 568 | 0.7 | 0 | 0 | node-utils | 932 | 2023-10-30T15:11:15.372041 | MIT | false | 23420651125ea51b241084876d5f507b |
\n\n | .venv\Lib\site-packages\jsonschema\benchmarks\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 276 | 0.7 | 0.142857 | 0 | node-utils | 661 | 2024-01-20T12:34:23.643670 | MIT | false | 1ba2c68082dc7711fc819155664c853f |
"""\nFuzzing setup for OSS-Fuzz.\n\nSee https://github.com/google/oss-fuzz/tree/master/projects/jsonschema for the\nother half of the setup here.\n"""\nimport sys\n\nfrom hypothesis import given, strategies\n\nimport jsonschema\n\nPRIM = strategies.one_of(\n strategies.booleans(),\n strategies.integers(),\n strategies.floats(allow_nan=False, allow_infinity=False),\n strategies.text(),\n)\nDICT = strategies.recursive(\n base=strategies.one_of(\n strategies.booleans(),\n strategies.dictionaries(strategies.text(), PRIM),\n ),\n extend=lambda inner: strategies.dictionaries(strategies.text(), inner),\n)\n\n\n@given(obj1=DICT, obj2=DICT)\ndef test_schemas(obj1, obj2):\n try:\n jsonschema.validate(instance=obj1, schema=obj2)\n except jsonschema.exceptions.ValidationError:\n pass\n except jsonschema.exceptions.SchemaError:\n pass\n\n\ndef main():\n atheris.instrument_all()\n atheris.Setup(\n sys.argv,\n test_schemas.hypothesis.fuzz_one_input,\n enable_python_coverage=True,\n )\n atheris.Fuzz()\n\n\nif __name__ == "__main__":\n import atheris\n main()\n | .venv\Lib\site-packages\jsonschema\tests\fuzz_validate.py | fuzz_validate.py | Python | 1,114 | 0.95 | 0.12 | 0 | react-lib | 831 | 2024-08-12T13:51:25.096984 | BSD-3-Clause | true | dd7635f4bd347a60d47c3b9c4b592fce |
from contextlib import redirect_stderr, redirect_stdout\nfrom importlib import metadata\nfrom io import StringIO\nfrom json import JSONDecodeError\nfrom pathlib import Path\nfrom textwrap import dedent\nfrom unittest import TestCase\nimport json\nimport os\nimport subprocess\nimport sys\nimport tempfile\nimport warnings\n\nfrom jsonschema import Draft4Validator, Draft202012Validator\nfrom jsonschema.exceptions import (\n SchemaError,\n ValidationError,\n _RefResolutionError,\n)\nfrom jsonschema.validators import _LATEST_VERSION, validate\n\nwith warnings.catch_warnings():\n warnings.simplefilter("ignore")\n from jsonschema import cli\n\n\ndef fake_validator(*errors):\n errors = list(reversed(errors))\n\n class FakeValidator:\n def __init__(self, *args, **kwargs):\n pass\n\n def iter_errors(self, instance):\n if errors:\n return errors.pop()\n return [] # pragma: no cover\n\n @classmethod\n def check_schema(self, schema):\n pass\n\n return FakeValidator\n\n\ndef fake_open(all_contents):\n def open(path):\n contents = all_contents.get(path)\n if contents is None:\n raise FileNotFoundError(path)\n return StringIO(contents)\n return open\n\n\ndef _message_for(non_json):\n try:\n json.loads(non_json)\n except JSONDecodeError as error:\n return str(error)\n else: # pragma: no cover\n raise RuntimeError("Tried and failed to capture a JSON dump error.")\n\n\nclass TestCLI(TestCase):\n def run_cli(\n self, argv, files=None, stdin=StringIO(), exit_code=0, **override,\n ):\n arguments = cli.parse_args(argv)\n arguments.update(override)\n\n self.assertFalse(hasattr(cli, "open"))\n cli.open = fake_open(files or {})\n try:\n stdout, stderr = StringIO(), StringIO()\n actual_exit_code = cli.run(\n arguments,\n stdin=stdin,\n stdout=stdout,\n stderr=stderr,\n )\n finally:\n del cli.open\n\n self.assertEqual(\n actual_exit_code, exit_code, msg=dedent(\n f"""\n Expected an exit code of {exit_code} != {actual_exit_code}.\n\n stdout: {stdout.getvalue()}\n\n stderr: 
{stderr.getvalue()}\n """,\n ),\n )\n return stdout.getvalue(), stderr.getvalue()\n\n def assertOutputs(self, stdout="", stderr="", **kwargs):\n self.assertEqual(\n self.run_cli(**kwargs),\n (dedent(stdout), dedent(stderr)),\n )\n\n def test_invalid_instance(self):\n error = ValidationError("I am an error!", instance=12)\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_instance=json.dumps(error.instance),\n ),\n validator=fake_validator([error]),\n\n argv=["-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr="12: I am an error!\n",\n )\n\n def test_invalid_instance_pretty_output(self):\n error = ValidationError("I am an error!", instance=12)\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_instance=json.dumps(error.instance),\n ),\n validator=fake_validator([error]),\n\n argv=["-i", "some_instance", "--output", "pretty", "some_schema"],\n\n exit_code=1,\n stderr="""\\n ===[ValidationError]===(some_instance)===\n\n I am an error!\n -----------------------------\n """,\n )\n\n def test_invalid_instance_explicit_plain_output(self):\n error = ValidationError("I am an error!", instance=12)\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_instance=json.dumps(error.instance),\n ),\n validator=fake_validator([error]),\n\n argv=["--output", "plain", "-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr="12: I am an error!\n",\n )\n\n def test_invalid_instance_multiple_errors(self):\n instance = 12\n first = ValidationError("First error", instance=instance)\n second = ValidationError("Second error", instance=instance)\n\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_instance=json.dumps(instance),\n ),\n validator=fake_validator([first, second]),\n\n argv=["-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr="""\\n 12: First 
error\n 12: Second error\n """,\n )\n\n def test_invalid_instance_multiple_errors_pretty_output(self):\n instance = 12\n first = ValidationError("First error", instance=instance)\n second = ValidationError("Second error", instance=instance)\n\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_instance=json.dumps(instance),\n ),\n validator=fake_validator([first, second]),\n\n argv=["-i", "some_instance", "--output", "pretty", "some_schema"],\n\n exit_code=1,\n stderr="""\\n ===[ValidationError]===(some_instance)===\n\n First error\n -----------------------------\n ===[ValidationError]===(some_instance)===\n\n Second error\n -----------------------------\n """,\n )\n\n def test_multiple_invalid_instances(self):\n first_instance = 12\n first_errors = [\n ValidationError("An error", instance=first_instance),\n ValidationError("Another error", instance=first_instance),\n ]\n second_instance = "foo"\n second_errors = [ValidationError("BOOM", instance=second_instance)]\n\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_first_instance=json.dumps(first_instance),\n some_second_instance=json.dumps(second_instance),\n ),\n validator=fake_validator(first_errors, second_errors),\n\n argv=[\n "-i", "some_first_instance",\n "-i", "some_second_instance",\n "some_schema",\n ],\n\n exit_code=1,\n stderr="""\\n 12: An error\n 12: Another error\n foo: BOOM\n """,\n )\n\n def test_multiple_invalid_instances_pretty_output(self):\n first_instance = 12\n first_errors = [\n ValidationError("An error", instance=first_instance),\n ValidationError("Another error", instance=first_instance),\n ]\n second_instance = "foo"\n second_errors = [ValidationError("BOOM", instance=second_instance)]\n\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_first_instance=json.dumps(first_instance),\n some_second_instance=json.dumps(second_instance),\n 
),\n validator=fake_validator(first_errors, second_errors),\n\n argv=[\n "--output", "pretty",\n "-i", "some_first_instance",\n "-i", "some_second_instance",\n "some_schema",\n ],\n\n exit_code=1,\n stderr="""\\n ===[ValidationError]===(some_first_instance)===\n\n An error\n -----------------------------\n ===[ValidationError]===(some_first_instance)===\n\n Another error\n -----------------------------\n ===[ValidationError]===(some_second_instance)===\n\n BOOM\n -----------------------------\n """,\n )\n\n def test_custom_error_format(self):\n first_instance = 12\n first_errors = [\n ValidationError("An error", instance=first_instance),\n ValidationError("Another error", instance=first_instance),\n ]\n second_instance = "foo"\n second_errors = [ValidationError("BOOM", instance=second_instance)]\n\n self.assertOutputs(\n files=dict(\n some_schema='{"does not": "matter since it is stubbed"}',\n some_first_instance=json.dumps(first_instance),\n some_second_instance=json.dumps(second_instance),\n ),\n validator=fake_validator(first_errors, second_errors),\n\n argv=[\n "--error-format", ":{error.message}._-_.{error.instance}:",\n "-i", "some_first_instance",\n "-i", "some_second_instance",\n "some_schema",\n ],\n\n exit_code=1,\n stderr=":An error._-_.12::Another error._-_.12::BOOM._-_.foo:",\n )\n\n def test_invalid_schema(self):\n self.assertOutputs(\n files=dict(some_schema='{"type": 12}'),\n argv=["some_schema"],\n\n exit_code=1,\n stderr="""\\n 12: 12 is not valid under any of the given schemas\n """,\n )\n\n def test_invalid_schema_pretty_output(self):\n schema = {"type": 12}\n\n with self.assertRaises(SchemaError) as e:\n validate(schema=schema, instance="")\n error = str(e.exception)\n\n self.assertOutputs(\n files=dict(some_schema=json.dumps(schema)),\n argv=["--output", "pretty", "some_schema"],\n\n exit_code=1,\n stderr=(\n "===[SchemaError]===(some_schema)===\n\n"\n + str(error)\n + "\n-----------------------------\n"\n ),\n )\n\n def 
test_invalid_schema_multiple_errors(self):\n self.assertOutputs(\n files=dict(some_schema='{"type": 12, "items": 57}'),\n argv=["some_schema"],\n\n exit_code=1,\n stderr="""\\n 57: 57 is not of type 'object', 'boolean'\n """,\n )\n\n def test_invalid_schema_multiple_errors_pretty_output(self):\n schema = {"type": 12, "items": 57}\n\n with self.assertRaises(SchemaError) as e:\n validate(schema=schema, instance="")\n error = str(e.exception)\n\n self.assertOutputs(\n files=dict(some_schema=json.dumps(schema)),\n argv=["--output", "pretty", "some_schema"],\n\n exit_code=1,\n stderr=(\n "===[SchemaError]===(some_schema)===\n\n"\n + str(error)\n + "\n-----------------------------\n"\n ),\n )\n\n def test_invalid_schema_with_invalid_instance(self):\n """\n "Validating" an instance that's invalid under an invalid schema\n just shows the schema error.\n """\n self.assertOutputs(\n files=dict(\n some_schema='{"type": 12, "minimum": 30}',\n some_instance="13",\n ),\n argv=["-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr="""\\n 12: 12 is not valid under any of the given schemas\n """,\n )\n\n def test_invalid_schema_with_invalid_instance_pretty_output(self):\n instance, schema = 13, {"type": 12, "minimum": 30}\n\n with self.assertRaises(SchemaError) as e:\n validate(schema=schema, instance=instance)\n error = str(e.exception)\n\n self.assertOutputs(\n files=dict(\n some_schema=json.dumps(schema),\n some_instance=json.dumps(instance),\n ),\n argv=["--output", "pretty", "-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr=(\n "===[SchemaError]===(some_schema)===\n\n"\n + str(error)\n + "\n-----------------------------\n"\n ),\n )\n\n def test_invalid_instance_continues_with_the_rest(self):\n self.assertOutputs(\n files=dict(\n some_schema='{"minimum": 30}',\n first_instance="not valid JSON!",\n second_instance="12",\n ),\n argv=[\n "-i", "first_instance",\n "-i", "second_instance",\n "some_schema",\n ],\n\n exit_code=1,\n stderr="""\\n Failed to 
parse 'first_instance': {}\n 12: 12 is less than the minimum of 30\n """.format(_message_for("not valid JSON!")),\n )\n\n def test_custom_error_format_applies_to_schema_errors(self):\n instance, schema = 13, {"type": 12, "minimum": 30}\n\n with self.assertRaises(SchemaError):\n validate(schema=schema, instance=instance)\n\n self.assertOutputs(\n files=dict(some_schema=json.dumps(schema)),\n\n argv=[\n "--error-format", ":{error.message}._-_.{error.instance}:",\n "some_schema",\n ],\n\n exit_code=1,\n stderr=":12 is not valid under any of the given schemas._-_.12:",\n )\n\n def test_instance_is_invalid_JSON(self):\n instance = "not valid JSON!"\n\n self.assertOutputs(\n files=dict(some_schema="{}", some_instance=instance),\n argv=["-i", "some_instance", "some_schema"],\n\n exit_code=1,\n stderr=f"""\\n Failed to parse 'some_instance': {_message_for(instance)}\n """,\n )\n\n def test_instance_is_invalid_JSON_pretty_output(self):\n stdout, stderr = self.run_cli(\n files=dict(\n some_schema="{}",\n some_instance="not valid JSON!",\n ),\n\n argv=["--output", "pretty", "-i", "some_instance", "some_schema"],\n\n exit_code=1,\n )\n self.assertFalse(stdout)\n self.assertIn(\n "(some_instance)===\n\nTraceback (most recent call last):\n",\n stderr,\n )\n self.assertNotIn("some_schema", stderr)\n\n def test_instance_is_invalid_JSON_on_stdin(self):\n instance = "not valid JSON!"\n\n self.assertOutputs(\n files=dict(some_schema="{}"),\n stdin=StringIO(instance),\n\n argv=["some_schema"],\n\n exit_code=1,\n stderr=f"""\\n Failed to parse <stdin>: {_message_for(instance)}\n """,\n )\n\n def test_instance_is_invalid_JSON_on_stdin_pretty_output(self):\n stdout, stderr = self.run_cli(\n files=dict(some_schema="{}"),\n stdin=StringIO("not valid JSON!"),\n\n argv=["--output", "pretty", "some_schema"],\n\n exit_code=1,\n )\n self.assertFalse(stdout)\n self.assertIn(\n "(<stdin>)===\n\nTraceback (most recent call last):\n",\n stderr,\n )\n self.assertNotIn("some_schema", stderr)\n\n def 
test_schema_is_invalid_JSON(self):\n schema = "not valid JSON!"\n\n self.assertOutputs(\n files=dict(some_schema=schema),\n\n argv=["some_schema"],\n\n exit_code=1,\n stderr=f"""\\n Failed to parse 'some_schema': {_message_for(schema)}\n """,\n )\n\n def test_schema_is_invalid_JSON_pretty_output(self):\n stdout, stderr = self.run_cli(\n files=dict(some_schema="not valid JSON!"),\n\n argv=["--output", "pretty", "some_schema"],\n\n exit_code=1,\n )\n self.assertFalse(stdout)\n self.assertIn(\n "(some_schema)===\n\nTraceback (most recent call last):\n",\n stderr,\n )\n\n def test_schema_and_instance_are_both_invalid_JSON(self):\n """\n Only the schema error is reported, as we abort immediately.\n """\n schema, instance = "not valid JSON!", "also not valid JSON!"\n self.assertOutputs(\n files=dict(some_schema=schema, some_instance=instance),\n\n argv=["some_schema"],\n\n exit_code=1,\n stderr=f"""\\n Failed to parse 'some_schema': {_message_for(schema)}\n """,\n )\n\n def test_schema_and_instance_are_both_invalid_JSON_pretty_output(self):\n """\n Only the schema error is reported, as we abort immediately.\n """\n stdout, stderr = self.run_cli(\n files=dict(\n some_schema="not valid JSON!",\n some_instance="also not valid JSON!",\n ),\n\n argv=["--output", "pretty", "-i", "some_instance", "some_schema"],\n\n exit_code=1,\n )\n self.assertFalse(stdout)\n self.assertIn(\n "(some_schema)===\n\nTraceback (most recent call last):\n",\n stderr,\n )\n self.assertNotIn("some_instance", stderr)\n\n def test_instance_does_not_exist(self):\n self.assertOutputs(\n files=dict(some_schema="{}"),\n argv=["-i", "nonexisting_instance", "some_schema"],\n\n exit_code=1,\n stderr="""\\n 'nonexisting_instance' does not exist.\n """,\n )\n\n def test_instance_does_not_exist_pretty_output(self):\n self.assertOutputs(\n files=dict(some_schema="{}"),\n argv=[\n "--output", "pretty",\n "-i", "nonexisting_instance",\n "some_schema",\n ],\n\n exit_code=1,\n stderr="""\\n 
===[FileNotFoundError]===(nonexisting_instance)===\n\n 'nonexisting_instance' does not exist.\n -----------------------------\n """,\n )\n\n def test_schema_does_not_exist(self):\n self.assertOutputs(\n argv=["nonexisting_schema"],\n\n exit_code=1,\n stderr="'nonexisting_schema' does not exist.\n",\n )\n\n def test_schema_does_not_exist_pretty_output(self):\n self.assertOutputs(\n argv=["--output", "pretty", "nonexisting_schema"],\n\n exit_code=1,\n stderr="""\\n ===[FileNotFoundError]===(nonexisting_schema)===\n\n 'nonexisting_schema' does not exist.\n -----------------------------\n """,\n )\n\n def test_neither_instance_nor_schema_exist(self):\n self.assertOutputs(\n argv=["-i", "nonexisting_instance", "nonexisting_schema"],\n\n exit_code=1,\n stderr="'nonexisting_schema' does not exist.\n",\n )\n\n def test_neither_instance_nor_schema_exist_pretty_output(self):\n self.assertOutputs(\n argv=[\n "--output", "pretty",\n "-i", "nonexisting_instance",\n "nonexisting_schema",\n ],\n\n exit_code=1,\n stderr="""\\n ===[FileNotFoundError]===(nonexisting_schema)===\n\n 'nonexisting_schema' does not exist.\n -----------------------------\n """,\n )\n\n def test_successful_validation(self):\n self.assertOutputs(\n files=dict(some_schema="{}", some_instance="{}"),\n argv=["-i", "some_instance", "some_schema"],\n stdout="",\n stderr="",\n )\n\n def test_successful_validation_pretty_output(self):\n self.assertOutputs(\n files=dict(some_schema="{}", some_instance="{}"),\n argv=["--output", "pretty", "-i", "some_instance", "some_schema"],\n stdout="===[SUCCESS]===(some_instance)===\n",\n stderr="",\n )\n\n def test_successful_validation_of_stdin(self):\n self.assertOutputs(\n files=dict(some_schema="{}"),\n stdin=StringIO("{}"),\n argv=["some_schema"],\n stdout="",\n stderr="",\n )\n\n def test_successful_validation_of_stdin_pretty_output(self):\n self.assertOutputs(\n files=dict(some_schema="{}"),\n stdin=StringIO("{}"),\n argv=["--output", "pretty", "some_schema"],\n 
stdout="===[SUCCESS]===(<stdin>)===\n",\n stderr="",\n )\n\n def test_successful_validation_of_just_the_schema(self):\n self.assertOutputs(\n files=dict(some_schema="{}", some_instance="{}"),\n argv=["-i", "some_instance", "some_schema"],\n stdout="",\n stderr="",\n )\n\n def test_successful_validation_of_just_the_schema_pretty_output(self):\n self.assertOutputs(\n files=dict(some_schema="{}", some_instance="{}"),\n argv=["--output", "pretty", "-i", "some_instance", "some_schema"],\n stdout="===[SUCCESS]===(some_instance)===\n",\n stderr="",\n )\n\n def test_successful_validation_via_explicit_base_uri(self):\n ref_schema_file = tempfile.NamedTemporaryFile(delete=False) # noqa: SIM115\n ref_schema_file.close()\n self.addCleanup(os.remove, ref_schema_file.name)\n\n ref_path = Path(ref_schema_file.name)\n ref_path.write_text('{"definitions": {"num": {"type": "integer"}}}')\n\n schema = f'{{"$ref": "{ref_path.name}#/definitions/num"}}'\n\n self.assertOutputs(\n files=dict(some_schema=schema, some_instance="1"),\n argv=[\n "-i", "some_instance",\n "--base-uri", ref_path.parent.as_uri() + "/",\n "some_schema",\n ],\n stdout="",\n stderr="",\n )\n\n def test_unsuccessful_validation_via_explicit_base_uri(self):\n ref_schema_file = tempfile.NamedTemporaryFile(delete=False) # noqa: SIM115\n ref_schema_file.close()\n self.addCleanup(os.remove, ref_schema_file.name)\n\n ref_path = Path(ref_schema_file.name)\n ref_path.write_text('{"definitions": {"num": {"type": "integer"}}}')\n\n schema = f'{{"$ref": "{ref_path.name}#/definitions/num"}}'\n\n self.assertOutputs(\n files=dict(some_schema=schema, some_instance='"1"'),\n argv=[\n "-i", "some_instance",\n "--base-uri", ref_path.parent.as_uri() + "/",\n "some_schema",\n ],\n exit_code=1,\n stdout="",\n stderr="1: '1' is not of type 'integer'\n",\n )\n\n def test_nonexistent_file_with_explicit_base_uri(self):\n schema = '{"$ref": "someNonexistentFile.json#definitions/num"}'\n instance = "1"\n\n with 
self.assertRaises(_RefResolutionError) as e:\n self.assertOutputs(\n files=dict(\n some_schema=schema,\n some_instance=instance,\n ),\n argv=[\n "-i", "some_instance",\n "--base-uri", Path.cwd().as_uri(),\n "some_schema",\n ],\n )\n error = str(e.exception)\n self.assertIn(f"{os.sep}someNonexistentFile.json'", error)\n\n def test_invalid_explicit_base_uri(self):\n schema = '{"$ref": "foo.json#definitions/num"}'\n instance = "1"\n\n with self.assertRaises(_RefResolutionError) as e:\n self.assertOutputs(\n files=dict(\n some_schema=schema,\n some_instance=instance,\n ),\n argv=[\n "-i", "some_instance",\n "--base-uri", "not@UR1",\n "some_schema",\n ],\n )\n error = str(e.exception)\n self.assertEqual(\n error, "unknown url type: 'foo.json'",\n )\n\n def test_it_validates_using_the_latest_validator_when_unspecified(self):\n # There isn't a better way now I can think of to ensure that the\n # latest version was used, given that the call to validator_for\n # is hidden inside the CLI, so guard that that's the case, and\n # this test will have to be updated when versions change until\n # we can think of a better way to ensure this behavior.\n self.assertIs(Draft202012Validator, _LATEST_VERSION)\n\n self.assertOutputs(\n files=dict(some_schema='{"const": "check"}', some_instance='"a"'),\n argv=["-i", "some_instance", "some_schema"],\n exit_code=1,\n stdout="",\n stderr="a: 'check' was expected\n",\n )\n\n def test_it_validates_using_draft7_when_specified(self):\n """\n Specifically, `const` validation applies for Draft 7.\n """\n schema = """\n {\n "$schema": "http://json-schema.org/draft-07/schema#",\n "const": "check"\n }\n """\n instance = '"foo"'\n self.assertOutputs(\n files=dict(some_schema=schema, some_instance=instance),\n argv=["-i", "some_instance", "some_schema"],\n exit_code=1,\n stdout="",\n stderr="foo: 'check' was expected\n",\n )\n\n def test_it_validates_using_draft4_when_specified(self):\n """\n Specifically, `const` validation *does not* apply for Draft 
4.\n """\n schema = """\n {\n "$schema": "http://json-schema.org/draft-04/schema#",\n "const": "check"\n }\n """\n instance = '"foo"'\n self.assertOutputs(\n files=dict(some_schema=schema, some_instance=instance),\n argv=["-i", "some_instance", "some_schema"],\n stdout="",\n stderr="",\n )\n\n\nclass TestParser(TestCase):\n\n FakeValidator = fake_validator()\n\n def test_find_validator_by_fully_qualified_object_name(self):\n arguments = cli.parse_args(\n [\n "--validator",\n "jsonschema.tests.test_cli.TestParser.FakeValidator",\n "--instance", "mem://some/instance",\n "mem://some/schema",\n ],\n )\n self.assertIs(arguments["validator"], self.FakeValidator)\n\n def test_find_validator_in_jsonschema(self):\n arguments = cli.parse_args(\n [\n "--validator", "Draft4Validator",\n "--instance", "mem://some/instance",\n "mem://some/schema",\n ],\n )\n self.assertIs(arguments["validator"], Draft4Validator)\n\n def cli_output_for(self, *argv):\n stdout, stderr = StringIO(), StringIO()\n with redirect_stdout(stdout), redirect_stderr(stderr): # noqa: SIM117\n with self.assertRaises(SystemExit):\n cli.parse_args(argv)\n return stdout.getvalue(), stderr.getvalue()\n\n def test_unknown_output(self):\n stdout, stderr = self.cli_output_for(\n "--output", "foo",\n "mem://some/schema",\n )\n self.assertIn("invalid choice: 'foo'", stderr)\n self.assertFalse(stdout)\n\n def test_useless_error_format(self):\n stdout, stderr = self.cli_output_for(\n "--output", "pretty",\n "--error-format", "foo",\n "mem://some/schema",\n )\n self.assertIn(\n "--error-format can only be used with --output plain",\n stderr,\n )\n self.assertFalse(stdout)\n\n\nclass TestCLIIntegration(TestCase):\n def test_license(self):\n our_metadata = metadata.metadata("jsonschema")\n self.assertEqual(our_metadata.get("License-Expression"), "MIT")\n\n def test_version(self):\n version = subprocess.check_output(\n [sys.executable, "-W", "ignore", "-m", "jsonschema", "--version"],\n stderr=subprocess.STDOUT,\n )\n 
version = version.decode("utf-8").strip()\n self.assertEqual(version, metadata.version("jsonschema"))\n\n def test_no_arguments_shows_usage_notes(self):\n output = subprocess.check_output(\n [sys.executable, "-m", "jsonschema"],\n stderr=subprocess.STDOUT,\n )\n output_for_help = subprocess.check_output(\n [sys.executable, "-m", "jsonschema", "--help"],\n stderr=subprocess.STDOUT,\n )\n self.assertEqual(output, output_for_help)\n | .venv\Lib\site-packages\jsonschema\tests\test_cli.py | test_cli.py | Python | 28,544 | 0.95 | 0.077434 | 0.006667 | python-kit | 412 | 2024-04-12T22:50:15.789922 | MIT | true | 85f21e13a1c1ef9c7aa1f42243eec8c3 |
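The CLI tests in the record above never touch the real filesystem: they patch `cli.open` with a `fake_open` closure that maps path strings to in-memory `StringIO` objects. The helper is small enough to reproduce standalone; this sketch copies its shape from the test module itself.

```python
from io import StringIO

def fake_open(all_contents):
    # Returns an open()-like callable backed by a dict of fake files.
    # Unknown paths raise FileNotFoundError, just like the real open().
    def open(path):
        contents = all_contents.get(path)
        if contents is None:
            raise FileNotFoundError(path)
        return StringIO(contents)
    return open

opener = fake_open({"some_schema": "{}"})
schema_text = opener("some_schema").read()
```

The same pattern generalizes: any function that takes an opener as a dependency can be exercised against purely in-memory fixtures, which keeps tests fast and hermetic.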
from unittest import TestCase\nimport textwrap\n\nfrom jsonschema import exceptions\nfrom jsonschema.validators import _LATEST_VERSION\n\n\nclass TestBestMatch(TestCase):\n def best_match_of(self, instance, schema):\n errors = list(_LATEST_VERSION(schema).iter_errors(instance))\n msg = f"No errors found for {instance} under {schema!r}!"\n self.assertTrue(errors, msg=msg)\n\n best = exceptions.best_match(iter(errors))\n reversed_best = exceptions.best_match(reversed(errors))\n\n self.assertEqual(\n best._contents(),\n reversed_best._contents(),\n f"No consistent best match!\nGot: {best}\n\nThen: {reversed_best}",\n )\n return best\n\n def test_shallower_errors_are_better_matches(self):\n schema = {\n "properties": {\n "foo": {\n "minProperties": 2,\n "properties": {"bar": {"type": "object"}},\n },\n },\n }\n best = self.best_match_of(instance={"foo": {"bar": []}}, schema=schema)\n self.assertEqual(best.validator, "minProperties")\n\n def test_oneOf_and_anyOf_are_weak_matches(self):\n """\n A property you *must* match is probably better than one you have to\n match a part of.\n """\n\n schema = {\n "minProperties": 2,\n "anyOf": [{"type": "string"}, {"type": "number"}],\n "oneOf": [{"type": "string"}, {"type": "number"}],\n }\n best = self.best_match_of(instance={}, schema=schema)\n self.assertEqual(best.validator, "minProperties")\n\n def test_if_the_most_relevant_error_is_anyOf_it_is_traversed(self):\n """\n If the most relevant error is an anyOf, then we traverse its context\n and select the otherwise *least* relevant error, since in this case\n that means the most specific, deep, error inside the instance.\n\n I.e. 
since only one of the schemas must match, we look for the most\n relevant one.\n """\n\n schema = {\n "properties": {\n "foo": {\n "anyOf": [\n {"type": "string"},\n {"properties": {"bar": {"type": "array"}}},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": {"bar": 12}}, schema=schema)\n self.assertEqual(best.validator_value, "array")\n\n def test_no_anyOf_traversal_for_equally_relevant_errors(self):\n """\n We don't traverse into an anyOf (as above) if all of its context errors\n seem to be equally "wrong" against the instance.\n """\n\n schema = {\n "anyOf": [\n {"type": "string"},\n {"type": "integer"},\n {"type": "object"},\n ],\n }\n best = self.best_match_of(instance=[], schema=schema)\n self.assertEqual(best.validator, "anyOf")\n\n def test_anyOf_traversal_for_single_equally_relevant_error(self):\n """\n We *do* traverse anyOf with a single nested error, even though it is\n vacuously equally relevant to itself.\n """\n\n schema = {\n "anyOf": [\n {"type": "string"},\n ],\n }\n best = self.best_match_of(instance=[], schema=schema)\n self.assertEqual(best.validator, "type")\n\n def test_anyOf_traversal_for_single_sibling_errors(self):\n """\n We *do* traverse anyOf with a single subschema that fails multiple\n times (e.g. 
on multiple items).\n """\n\n schema = {\n "anyOf": [\n {"items": {"const": 37}},\n ],\n }\n best = self.best_match_of(instance=[12, 12], schema=schema)\n self.assertEqual(best.validator, "const")\n\n def test_anyOf_traversal_for_non_type_matching_sibling_errors(self):\n """\n We *do* traverse anyOf with multiple subschemas when one does not type\n match.\n """\n\n schema = {\n "anyOf": [\n {"type": "object"},\n {"items": {"const": 37}},\n ],\n }\n best = self.best_match_of(instance=[12, 12], schema=schema)\n self.assertEqual(best.validator, "const")\n\n def test_if_the_most_relevant_error_is_oneOf_it_is_traversed(self):\n """\n If the most relevant error is an oneOf, then we traverse its context\n and select the otherwise *least* relevant error, since in this case\n that means the most specific, deep, error inside the instance.\n\n I.e. since only one of the schemas must match, we look for the most\n relevant one.\n """\n\n schema = {\n "properties": {\n "foo": {\n "oneOf": [\n {"type": "string"},\n {"properties": {"bar": {"type": "array"}}},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": {"bar": 12}}, schema=schema)\n self.assertEqual(best.validator_value, "array")\n\n def test_no_oneOf_traversal_for_equally_relevant_errors(self):\n """\n We don't traverse into an oneOf (as above) if all of its context errors\n seem to be equally "wrong" against the instance.\n """\n\n schema = {\n "oneOf": [\n {"type": "string"},\n {"type": "integer"},\n {"type": "object"},\n ],\n }\n best = self.best_match_of(instance=[], schema=schema)\n self.assertEqual(best.validator, "oneOf")\n\n def test_oneOf_traversal_for_single_equally_relevant_error(self):\n """\n We *do* traverse oneOf with a single nested error, even though it is\n vacuously equally relevant to itself.\n """\n\n schema = {\n "oneOf": [\n {"type": "string"},\n ],\n }\n best = self.best_match_of(instance=[], schema=schema)\n self.assertEqual(best.validator, "type")\n\n def 
test_oneOf_traversal_for_single_sibling_errors(self):\n """\n We *do* traverse oneOf with a single subschema that fails multiple\n times (e.g. on multiple items).\n """\n\n schema = {\n "oneOf": [\n {"items": {"const": 37}},\n ],\n }\n best = self.best_match_of(instance=[12, 12], schema=schema)\n self.assertEqual(best.validator, "const")\n\n def test_oneOf_traversal_for_non_type_matching_sibling_errors(self):\n """\n We *do* traverse oneOf with multiple subschemas when one does not type\n match.\n """\n\n schema = {\n "oneOf": [\n {"type": "object"},\n {"items": {"const": 37}},\n ],\n }\n best = self.best_match_of(instance=[12, 12], schema=schema)\n self.assertEqual(best.validator, "const")\n\n def test_if_the_most_relevant_error_is_allOf_it_is_traversed(self):\n """\n Now, if the error is allOf, we traverse but select the *most* relevant\n error from the context, because all schemas here must match anyways.\n """\n\n schema = {\n "properties": {\n "foo": {\n "allOf": [\n {"type": "string"},\n {"properties": {"bar": {"type": "array"}}},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": {"bar": 12}}, schema=schema)\n self.assertEqual(best.validator_value, "string")\n\n def test_nested_context_for_oneOf(self):\n """\n We traverse into nested contexts (a oneOf containing an error in a\n nested oneOf here).\n """\n\n schema = {\n "properties": {\n "foo": {\n "oneOf": [\n {"type": "string"},\n {\n "oneOf": [\n {"type": "string"},\n {\n "properties": {\n "bar": {"type": "array"},\n },\n },\n ],\n },\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": {"bar": 12}}, schema=schema)\n self.assertEqual(best.validator_value, "array")\n\n def test_it_prioritizes_matching_types(self):\n schema = {\n "properties": {\n "foo": {\n "anyOf": [\n {"type": "array", "minItems": 2},\n {"type": "string", "minLength": 10},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": "bar"}, schema=schema)\n self.assertEqual(best.validator, "minLength")\n\n 
reordered = {\n "properties": {\n "foo": {\n "anyOf": [\n {"type": "string", "minLength": 10},\n {"type": "array", "minItems": 2},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": "bar"}, schema=reordered)\n self.assertEqual(best.validator, "minLength")\n\n def test_it_prioritizes_matching_union_types(self):\n schema = {\n "properties": {\n "foo": {\n "anyOf": [\n {"type": ["array", "object"], "minItems": 2},\n {"type": ["integer", "string"], "minLength": 10},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": "bar"}, schema=schema)\n self.assertEqual(best.validator, "minLength")\n\n reordered = {\n "properties": {\n "foo": {\n "anyOf": [\n {"type": "string", "minLength": 10},\n {"type": "array", "minItems": 2},\n ],\n },\n },\n }\n best = self.best_match_of(instance={"foo": "bar"}, schema=reordered)\n self.assertEqual(best.validator, "minLength")\n\n def test_boolean_schemas(self):\n schema = {"properties": {"foo": False}}\n best = self.best_match_of(instance={"foo": "bar"}, schema=schema)\n self.assertIsNone(best.validator)\n\n def test_one_error(self):\n validator = _LATEST_VERSION({"minProperties": 2})\n error, = validator.iter_errors({})\n self.assertEqual(\n exceptions.best_match(validator.iter_errors({})).validator,\n "minProperties",\n )\n\n def test_no_errors(self):\n validator = _LATEST_VERSION({})\n self.assertIsNone(exceptions.best_match(validator.iter_errors({})))\n\n\nclass TestByRelevance(TestCase):\n def test_short_paths_are_better_matches(self):\n shallow = exceptions.ValidationError("Oh no!", path=["baz"])\n deep = exceptions.ValidationError("Oh yes!", path=["foo", "bar"])\n match = max([shallow, deep], key=exceptions.relevance)\n self.assertIs(match, shallow)\n\n match = max([deep, shallow], key=exceptions.relevance)\n self.assertIs(match, shallow)\n\n def test_global_errors_are_even_better_matches(self):\n shallow = exceptions.ValidationError("Oh no!", path=[])\n deep = exceptions.ValidationError("Oh yes!", 
path=["foo"])\n\n errors = sorted([shallow, deep], key=exceptions.relevance)\n self.assertEqual(\n [list(error.path) for error in errors],\n [["foo"], []],\n )\n\n errors = sorted([deep, shallow], key=exceptions.relevance)\n self.assertEqual(\n [list(error.path) for error in errors],\n [["foo"], []],\n )\n\n def test_weak_keywords_are_lower_priority(self):\n weak = exceptions.ValidationError("Oh no!", path=[], validator="a")\n normal = exceptions.ValidationError("Oh yes!", path=[], validator="b")\n\n best_match = exceptions.by_relevance(weak="a")\n\n match = max([weak, normal], key=best_match)\n self.assertIs(match, normal)\n\n match = max([normal, weak], key=best_match)\n self.assertIs(match, normal)\n\n def test_strong_keywords_are_higher_priority(self):\n weak = exceptions.ValidationError("Oh no!", path=[], validator="a")\n normal = exceptions.ValidationError("Oh yes!", path=[], validator="b")\n strong = exceptions.ValidationError("Oh fine!", path=[], validator="c")\n\n best_match = exceptions.by_relevance(weak="a", strong="c")\n\n match = max([weak, normal, strong], key=best_match)\n self.assertIs(match, strong)\n\n match = max([strong, normal, weak], key=best_match)\n self.assertIs(match, strong)\n\n\nclass TestErrorTree(TestCase):\n def test_it_knows_how_many_total_errors_it_contains(self):\n # FIXME: #442\n errors = [\n exceptions.ValidationError("Something", validator=i)\n for i in range(8)\n ]\n tree = exceptions.ErrorTree(errors)\n self.assertEqual(tree.total_errors, 8)\n\n def test_it_contains_an_item_if_the_item_had_an_error(self):\n errors = [exceptions.ValidationError("a message", path=["bar"])]\n tree = exceptions.ErrorTree(errors)\n self.assertIn("bar", tree)\n\n def test_it_does_not_contain_an_item_if_the_item_had_no_error(self):\n errors = [exceptions.ValidationError("a message", path=["bar"])]\n tree = exceptions.ErrorTree(errors)\n self.assertNotIn("foo", tree)\n\n def test_keywords_that_failed_appear_in_errors_dict(self):\n error = 
exceptions.ValidationError("a message", validator="foo")\n tree = exceptions.ErrorTree([error])\n self.assertEqual(tree.errors, {"foo": error})\n\n def test_it_creates_a_child_tree_for_each_nested_path(self):\n errors = [\n exceptions.ValidationError("a bar message", path=["bar"]),\n exceptions.ValidationError("a bar -> 0 message", path=["bar", 0]),\n ]\n tree = exceptions.ErrorTree(errors)\n self.assertIn(0, tree["bar"])\n self.assertNotIn(1, tree["bar"])\n\n def test_children_have_their_errors_dicts_built(self):\n e1, e2 = (\n exceptions.ValidationError("1", validator="foo", path=["bar", 0]),\n exceptions.ValidationError("2", validator="quux", path=["bar", 0]),\n )\n tree = exceptions.ErrorTree([e1, e2])\n self.assertEqual(tree["bar"][0].errors, {"foo": e1, "quux": e2})\n\n def test_multiple_errors_with_instance(self):\n e1, e2 = (\n exceptions.ValidationError(\n "1",\n validator="foo",\n path=["bar", "bar2"],\n instance="i1"),\n exceptions.ValidationError(\n "2",\n validator="quux",\n path=["foobar", 2],\n instance="i2"),\n )\n exceptions.ErrorTree([e1, e2])\n\n def test_it_does_not_contain_subtrees_that_are_not_in_the_instance(self):\n error = exceptions.ValidationError("123", validator="foo", instance=[])\n tree = exceptions.ErrorTree([error])\n\n with self.assertRaises(IndexError):\n tree[0]\n\n def test_if_its_in_the_tree_anyhow_it_does_not_raise_an_error(self):\n """\n If a keyword refers to a path that isn't in the instance, the\n tree still properly returns a subtree for that path.\n """\n\n error = exceptions.ValidationError(\n "a message", validator="foo", instance={}, path=["foo"],\n )\n tree = exceptions.ErrorTree([error])\n self.assertIsInstance(tree["foo"], exceptions.ErrorTree)\n\n def test_iter(self):\n e1, e2 = (\n exceptions.ValidationError(\n "1",\n validator="foo",\n path=["bar", "bar2"],\n instance="i1"),\n exceptions.ValidationError(\n "2",\n validator="quux",\n path=["foobar", 2],\n instance="i2"),\n )\n tree = exceptions.ErrorTree([e1, 
e2])\n self.assertEqual(set(tree), {"bar", "foobar"})\n\n def test_repr_single(self):\n error = exceptions.ValidationError(\n "1",\n validator="foo",\n path=["bar", "bar2"],\n instance="i1",\n )\n tree = exceptions.ErrorTree([error])\n self.assertEqual(repr(tree), "<ErrorTree (1 total error)>")\n\n def test_repr_multiple(self):\n e1, e2 = (\n exceptions.ValidationError(\n "1",\n validator="foo",\n path=["bar", "bar2"],\n instance="i1"),\n exceptions.ValidationError(\n "2",\n validator="quux",\n path=["foobar", 2],\n instance="i2"),\n )\n tree = exceptions.ErrorTree([e1, e2])\n self.assertEqual(repr(tree), "<ErrorTree (2 total errors)>")\n\n def test_repr_empty(self):\n tree = exceptions.ErrorTree([])\n self.assertEqual(repr(tree), "<ErrorTree (0 total errors)>")\n\n\nclass TestErrorInitReprStr(TestCase):\n def make_error(self, **kwargs):\n defaults = dict(\n message="hello",\n validator="type",\n validator_value="string",\n instance=5,\n schema={"type": "string"},\n )\n defaults.update(kwargs)\n return exceptions.ValidationError(**defaults)\n\n def assertShows(self, expected, **kwargs):\n expected = textwrap.dedent(expected).rstrip("\n")\n\n error = self.make_error(**kwargs)\n message_line, _, rest = str(error).partition("\n")\n self.assertEqual(message_line, error.message)\n self.assertEqual(rest, expected)\n\n def test_it_calls_super_and_sets_args(self):\n error = self.make_error()\n self.assertGreater(len(error.args), 1)\n\n def test_repr(self):\n self.assertEqual(\n repr(exceptions.ValidationError(message="Hello!")),\n "<ValidationError: 'Hello!'>",\n )\n\n def test_unset_error(self):\n error = exceptions.ValidationError("message")\n self.assertEqual(str(error), "message")\n\n kwargs = {\n "validator": "type",\n "validator_value": "string",\n "instance": 5,\n "schema": {"type": "string"},\n }\n # Just the message should show if any of the attributes are unset\n for attr in kwargs:\n k = dict(kwargs)\n del k[attr]\n error = exceptions.ValidationError("message", 
**k)\n self.assertEqual(str(error), "message")\n\n def test_empty_paths(self):\n self.assertShows(\n """\n Failed validating 'type' in schema:\n {'type': 'string'}\n\n On instance:\n 5\n """,\n path=[],\n schema_path=[],\n )\n\n def test_one_item_paths(self):\n self.assertShows(\n """\n Failed validating 'type' in schema:\n {'type': 'string'}\n\n On instance[0]:\n 5\n """,\n path=[0],\n schema_path=["items"],\n )\n\n def test_multiple_item_paths(self):\n self.assertShows(\n """\n Failed validating 'type' in schema['items'][0]:\n {'type': 'string'}\n\n On instance[0]['a']:\n 5\n """,\n path=[0, "a"],\n schema_path=["items", 0, 1],\n )\n\n def test_uses_pprint(self):\n self.assertShows(\n """\n Failed validating 'maxLength' in schema:\n {0: 0,\n 1: 1,\n 2: 2,\n 3: 3,\n 4: 4,\n 5: 5,\n 6: 6,\n 7: 7,\n 8: 8,\n 9: 9,\n 10: 10,\n 11: 11,\n 12: 12,\n 13: 13,\n 14: 14,\n 15: 15,\n 16: 16,\n 17: 17,\n 18: 18,\n 19: 19}\n\n On instance:\n [0,\n 1,\n 2,\n 3,\n 4,\n 5,\n 6,\n 7,\n 8,\n 9,\n 10,\n 11,\n 12,\n 13,\n 14,\n 15,\n 16,\n 17,\n 18,\n 19,\n 20,\n 21,\n 22,\n 23,\n 24]\n """,\n instance=list(range(25)),\n schema=dict(zip(range(20), range(20))),\n validator="maxLength",\n )\n\n def test_does_not_reorder_dicts(self):\n self.assertShows(\n """\n Failed validating 'type' in schema:\n {'do': 3, 'not': 7, 'sort': 37, 'me': 73}\n\n On instance:\n {'here': 73, 'too': 37, 'no': 7, 'sorting': 3}\n """,\n schema={\n "do": 3,\n "not": 7,\n "sort": 37,\n "me": 73,\n },\n instance={\n "here": 73,\n "too": 37,\n "no": 7,\n "sorting": 3,\n },\n )\n\n def test_str_works_with_instances_having_overriden_eq_operator(self):\n """\n Check for #164 which rendered exceptions unusable when a\n `ValidationError` involved instances with an `__eq__` method\n that returned truthy values.\n """\n\n class DontEQMeBro:\n def __eq__(this, other): # pragma: no cover\n self.fail("Don't!")\n\n def __ne__(this, other): # pragma: no cover\n self.fail("Don't!")\n\n instance = DontEQMeBro()\n error = 
exceptions.ValidationError(\n "a message",\n validator="foo",\n instance=instance,\n validator_value="some",\n schema="schema",\n )\n self.assertIn(repr(instance), str(error))\n\n\nclass TestHashable(TestCase):\n def test_hashable(self):\n {exceptions.ValidationError("")}\n {exceptions.SchemaError("")}\n | .venv\Lib\site-packages\jsonschema\tests\test_exceptions.py | test_exceptions.py | Python | 22,591 | 0.95 | 0.099715 | 0.003295 | awesome-app | 951 | 2025-02-12T08:58:36.910370 | MIT | true | 3bea3d7183a210b1f0a34746a025957f |
"""\nTests for the parts of jsonschema related to the :kw:`format` keyword.\n"""\n\nfrom unittest import TestCase\n\nfrom jsonschema import FormatChecker, ValidationError\nfrom jsonschema.exceptions import FormatError\nfrom jsonschema.validators import Draft4Validator\n\nBOOM = ValueError("Boom!")\nBANG = ZeroDivisionError("Bang!")\n\n\ndef boom(thing):\n if thing == "bang":\n raise BANG\n raise BOOM\n\n\nclass TestFormatChecker(TestCase):\n def test_it_can_validate_no_formats(self):\n checker = FormatChecker(formats=())\n self.assertFalse(checker.checkers)\n\n def test_it_raises_a_key_error_for_unknown_formats(self):\n with self.assertRaises(KeyError):\n FormatChecker(formats=["o noes"])\n\n def test_it_can_register_cls_checkers(self):\n original = dict(FormatChecker.checkers)\n self.addCleanup(FormatChecker.checkers.pop, "boom")\n with self.assertWarns(DeprecationWarning):\n FormatChecker.cls_checks("boom")(boom)\n self.assertEqual(\n FormatChecker.checkers,\n dict(original, boom=(boom, ())),\n )\n\n def test_it_can_register_checkers(self):\n checker = FormatChecker()\n checker.checks("boom")(boom)\n self.assertEqual(\n checker.checkers,\n dict(FormatChecker.checkers, boom=(boom, ())),\n )\n\n def test_it_catches_registered_errors(self):\n checker = FormatChecker()\n checker.checks("boom", raises=type(BOOM))(boom)\n\n with self.assertRaises(FormatError) as cm:\n checker.check(instance=12, format="boom")\n\n self.assertIs(cm.exception.cause, BOOM)\n self.assertIs(cm.exception.__cause__, BOOM)\n self.assertEqual(str(cm.exception), "12 is not a 'boom'")\n\n # Unregistered errors should not be caught\n with self.assertRaises(type(BANG)):\n checker.check(instance="bang", format="boom")\n\n def test_format_error_causes_become_validation_error_causes(self):\n checker = FormatChecker()\n checker.checks("boom", raises=ValueError)(boom)\n validator = Draft4Validator({"format": "boom"}, format_checker=checker)\n\n with self.assertRaises(ValidationError) as cm:\n 
validator.validate("BOOM")\n\n self.assertIs(cm.exception.cause, BOOM)\n self.assertIs(cm.exception.__cause__, BOOM)\n\n def test_format_checkers_come_with_defaults(self):\n # This is bad :/ but relied upon.\n # The docs for quite awhile recommended people do things like\n # validate(..., format_checker=FormatChecker())\n # We should change that, but we can't without deprecation...\n checker = FormatChecker()\n with self.assertRaises(FormatError):\n checker.check(instance="not-an-ipv4", format="ipv4")\n\n def test_repr(self):\n checker = FormatChecker(formats=())\n checker.checks("foo")(lambda thing: True) # pragma: no cover\n checker.checks("bar")(lambda thing: True) # pragma: no cover\n checker.checks("baz")(lambda thing: True) # pragma: no cover\n self.assertEqual(\n repr(checker),\n "<FormatChecker checkers=['bar', 'baz', 'foo']>",\n )\n | .venv\Lib\site-packages\jsonschema\tests\test_format.py | test_format.py | Python | 3,188 | 0.95 | 0.142857 | 0.069444 | vue-tools | 274 | 2024-09-13T00:54:58.193878 | MIT | true | 9c381b867da4d1553f5185a9a05eb6ff |
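The `test_format.py` row above shows that `FormatChecker.checks(..., raises=...)` turns a registered exception into a `FormatError` whose `cause` is preserved, and that the cause propagates into `ValidationError` when the checker is attached to a validator. A small sketch of that flow (the `"even"` format and `is_even` checker are illustrative, not part of jsonschema):

```python
from jsonschema import FormatChecker
from jsonschema.exceptions import FormatError
from jsonschema.validators import Draft4Validator

checker = FormatChecker(formats=())  # start with no built-in formats

@checker.checks("even", raises=ValueError)
def is_even(value):
    # Format checkers conventionally pass for non-applicable types.
    if not isinstance(value, int):
        return True
    if value % 2:
        raise ValueError(f"{value} is odd")
    return True

# check() wraps the registered exception type in FormatError, keeping it
# reachable as .cause (mirroring test_it_catches_registered_errors above).
caught = None
try:
    checker.check(instance=3, format="even")
except FormatError as err:
    caught = err
assert isinstance(caught.cause, ValueError)

# Attached to a validator, the same checker gates validation.
validator = Draft4Validator({"format": "even"}, format_checker=checker)
validator.validate(4)  # even: passes
```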
"""\nTest runner for the JSON Schema official test suite\n\nTests comprehensive correctness of each draft's validator.\n\nSee https://github.com/json-schema-org/JSON-Schema-Test-Suite for details.\n"""\n\n\nfrom jsonschema.tests._suite import Suite\nimport jsonschema\n\nSUITE = Suite()\nDRAFT3 = SUITE.version(name="draft3")\nDRAFT4 = SUITE.version(name="draft4")\nDRAFT6 = SUITE.version(name="draft6")\nDRAFT7 = SUITE.version(name="draft7")\nDRAFT201909 = SUITE.version(name="draft2019-09")\nDRAFT202012 = SUITE.version(name="draft2020-12")\n\n\ndef skip(message, **kwargs):\n def skipper(test):\n if all(value == getattr(test, attr) for attr, value in kwargs.items()):\n return message\n return skipper\n\n\ndef ecmascript_regex(test):\n if test.subject == "ecmascript-regex":\n return "ECMA regex support will be added in #1142."\n\n\ndef missing_format(Validator):\n def missing_format(test): # pragma: no cover\n schema = test.schema\n if (\n schema is True\n or schema is False\n or "format" not in schema\n or schema["format"] in Validator.FORMAT_CHECKER.checkers\n or test.valid\n ):\n return\n\n return f"Format checker {schema['format']!r} not found."\n return missing_format\n\n\ndef complex_email_validation(test):\n if test.subject != "email":\n return\n\n message = "Complex email validation is (intentionally) unsupported."\n return skip(\n message=message,\n description="an invalid domain",\n )(test) or skip(\n message=message,\n description="an invalid IPv4-address-literal",\n )(test) or skip(\n message=message,\n description="dot after local part is not valid",\n )(test) or skip(\n message=message,\n description="dot before local part is not valid",\n )(test) or skip(\n message=message,\n description="two subsequent dots inside local part are not valid",\n )(test)\n\n\ndef leap_second(test):\n message = "Leap seconds are unsupported."\n return skip(\n message=message,\n subject="time",\n description="a valid time string with leap second",\n )(test) or skip(\n 
message=message,\n subject="time",\n description="a valid time string with leap second, Zulu",\n )(test) or skip(\n message=message,\n subject="time",\n description="a valid time string with leap second with offset",\n )(test) or skip(\n message=message,\n subject="time",\n description="valid leap second, positive time-offset",\n )(test) or skip(\n message=message,\n subject="time",\n description="valid leap second, negative time-offset",\n )(test) or skip(\n message=message,\n subject="time",\n description="valid leap second, large positive time-offset",\n )(test) or skip(\n message=message,\n subject="time",\n description="valid leap second, large negative time-offset",\n )(test) or skip(\n message=message,\n subject="time",\n description="valid leap second, zero time-offset",\n )(test) or skip(\n message=message,\n subject="date-time",\n description="a valid date-time with a leap second, UTC",\n )(test) or skip(\n message=message,\n subject="date-time",\n description="a valid date-time with a leap second, with minus offset",\n )(test)\n\n\nTestDraft3 = DRAFT3.to_unittest_testcase(\n DRAFT3.cases(),\n DRAFT3.format_cases(),\n DRAFT3.optional_cases_of(name="bignum"),\n DRAFT3.optional_cases_of(name="non-bmp-regex"),\n DRAFT3.optional_cases_of(name="zeroTerminatedFloats"),\n Validator=jsonschema.Draft3Validator,\n format_checker=jsonschema.Draft3Validator.FORMAT_CHECKER,\n skip=lambda test: (\n ecmascript_regex(test)\n or missing_format(jsonschema.Draft3Validator)(test)\n or complex_email_validation(test)\n ),\n)\n\n\nTestDraft4 = DRAFT4.to_unittest_testcase(\n DRAFT4.cases(),\n DRAFT4.format_cases(),\n DRAFT4.optional_cases_of(name="bignum"),\n DRAFT4.optional_cases_of(name="float-overflow"),\n DRAFT4.optional_cases_of(name="id"),\n DRAFT4.optional_cases_of(name="non-bmp-regex"),\n DRAFT4.optional_cases_of(name="zeroTerminatedFloats"),\n Validator=jsonschema.Draft4Validator,\n format_checker=jsonschema.Draft4Validator.FORMAT_CHECKER,\n skip=lambda test: (\n 
ecmascript_regex(test)\n or leap_second(test)\n or missing_format(jsonschema.Draft4Validator)(test)\n or complex_email_validation(test)\n ),\n)\n\n\nTestDraft6 = DRAFT6.to_unittest_testcase(\n DRAFT6.cases(),\n DRAFT6.format_cases(),\n DRAFT6.optional_cases_of(name="bignum"),\n DRAFT6.optional_cases_of(name="float-overflow"),\n DRAFT6.optional_cases_of(name="id"),\n DRAFT6.optional_cases_of(name="non-bmp-regex"),\n Validator=jsonschema.Draft6Validator,\n format_checker=jsonschema.Draft6Validator.FORMAT_CHECKER,\n skip=lambda test: (\n ecmascript_regex(test)\n or leap_second(test)\n or missing_format(jsonschema.Draft6Validator)(test)\n or complex_email_validation(test)\n ),\n)\n\n\nTestDraft7 = DRAFT7.to_unittest_testcase(\n DRAFT7.cases(),\n DRAFT7.format_cases(),\n DRAFT7.optional_cases_of(name="bignum"),\n DRAFT7.optional_cases_of(name="cross-draft"),\n DRAFT7.optional_cases_of(name="float-overflow"),\n DRAFT6.optional_cases_of(name="id"),\n DRAFT7.optional_cases_of(name="non-bmp-regex"),\n DRAFT7.optional_cases_of(name="unknownKeyword"),\n Validator=jsonschema.Draft7Validator,\n format_checker=jsonschema.Draft7Validator.FORMAT_CHECKER,\n skip=lambda test: (\n ecmascript_regex(test)\n or leap_second(test)\n or missing_format(jsonschema.Draft7Validator)(test)\n or complex_email_validation(test)\n ),\n)\n\n\nTestDraft201909 = DRAFT201909.to_unittest_testcase(\n DRAFT201909.cases(),\n DRAFT201909.optional_cases_of(name="anchor"),\n DRAFT201909.optional_cases_of(name="bignum"),\n DRAFT201909.optional_cases_of(name="cross-draft"),\n DRAFT201909.optional_cases_of(name="float-overflow"),\n DRAFT201909.optional_cases_of(name="id"),\n DRAFT201909.optional_cases_of(name="no-schema"),\n DRAFT201909.optional_cases_of(name="non-bmp-regex"),\n DRAFT201909.optional_cases_of(name="refOfUnknownKeyword"),\n DRAFT201909.optional_cases_of(name="unknownKeyword"),\n Validator=jsonschema.Draft201909Validator,\n skip=skip(\n message="Vocabulary support is still in-progress.",\n 
subject="vocabulary",\n description=(\n "no validation: invalid number, but it still validates"\n ),\n ),\n)\n\n\nTestDraft201909Format = DRAFT201909.to_unittest_testcase(\n DRAFT201909.format_cases(),\n name="TestDraft201909Format",\n Validator=jsonschema.Draft201909Validator,\n format_checker=jsonschema.Draft201909Validator.FORMAT_CHECKER,\n skip=lambda test: (\n complex_email_validation(test)\n or ecmascript_regex(test)\n or leap_second(test)\n or missing_format(jsonschema.Draft201909Validator)(test)\n or complex_email_validation(test)\n ),\n)\n\n\nTestDraft202012 = DRAFT202012.to_unittest_testcase(\n DRAFT202012.cases(),\n DRAFT201909.optional_cases_of(name="anchor"),\n DRAFT202012.optional_cases_of(name="bignum"),\n DRAFT202012.optional_cases_of(name="cross-draft"),\n DRAFT202012.optional_cases_of(name="float-overflow"),\n DRAFT202012.optional_cases_of(name="id"),\n DRAFT202012.optional_cases_of(name="no-schema"),\n DRAFT202012.optional_cases_of(name="non-bmp-regex"),\n DRAFT202012.optional_cases_of(name="refOfUnknownKeyword"),\n DRAFT202012.optional_cases_of(name="unknownKeyword"),\n Validator=jsonschema.Draft202012Validator,\n skip=skip(\n message="Vocabulary support is still in-progress.",\n subject="vocabulary",\n description=(\n "no validation: invalid number, but it still validates"\n ),\n ),\n)\n\n\nTestDraft202012Format = DRAFT202012.to_unittest_testcase(\n DRAFT202012.format_cases(),\n name="TestDraft202012Format",\n Validator=jsonschema.Draft202012Validator,\n format_checker=jsonschema.Draft202012Validator.FORMAT_CHECKER,\n skip=lambda test: (\n complex_email_validation(test)\n or ecmascript_regex(test)\n or leap_second(test)\n or missing_format(jsonschema.Draft202012Validator)(test)\n or complex_email_validation(test)\n ),\n)\n | .venv\Lib\site-packages\jsonschema\tests\test_jsonschema_test_suite.py | test_jsonschema_test_suite.py | Python | 8,461 | 0.95 | 0.053435 | 0 | awesome-app | 645 | 2025-04-02T20:07:42.203974 | GPL-3.0 | true | 
71a6aef5f2e9670a32eeae863f22c443 |
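The suite runner in the `test_jsonschema_test_suite.py` row above builds its skip predicates from a small `skip(message, **kwargs)` helper that returns the message only when every named attribute matches the test. A tiny stdlib-only illustration, using `SimpleNamespace` as a stand-in for the suite's test objects:

```python
from types import SimpleNamespace

# Copied from the runner above: return `message` iff all kwargs match the test.
def skip(message, **kwargs):
    def skipper(test):
        if all(value == getattr(test, attr) for attr, value in kwargs.items()):
            return message
    return skipper

test = SimpleNamespace(subject="vocabulary", description="no validation")
assert skip("in progress", subject="vocabulary")(test) == "in progress"
# Any non-matching attribute makes the predicate fall through to None.
assert skip("in progress", subject="vocabulary", description="other")(test) is None
```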
"""\nTests for the `TypeChecker`-based type interface.\n\nThe actual correctness of the type checking is handled in\n`test_jsonschema_test_suite`; these tests check that TypeChecker\nfunctions correctly at a more granular level.\n"""\nfrom collections import namedtuple\nfrom unittest import TestCase\n\nfrom jsonschema import ValidationError, _keywords\nfrom jsonschema._types import TypeChecker\nfrom jsonschema.exceptions import UndefinedTypeCheck, UnknownType\nfrom jsonschema.validators import Draft202012Validator, extend\n\n\ndef equals_2(checker, instance):\n return instance == 2\n\n\ndef is_namedtuple(instance):\n return isinstance(instance, tuple) and getattr(instance, "_fields", None)\n\n\ndef is_object_or_named_tuple(checker, instance):\n if Draft202012Validator.TYPE_CHECKER.is_type(instance, "object"):\n return True\n return is_namedtuple(instance)\n\n\nclass TestTypeChecker(TestCase):\n def test_is_type(self):\n checker = TypeChecker({"two": equals_2})\n self.assertEqual(\n (\n checker.is_type(instance=2, type="two"),\n checker.is_type(instance="bar", type="two"),\n ),\n (True, False),\n )\n\n def test_is_unknown_type(self):\n with self.assertRaises(UndefinedTypeCheck) as e:\n TypeChecker().is_type(4, "foobar")\n self.assertIn(\n "'foobar' is unknown to this type checker",\n str(e.exception),\n )\n self.assertTrue(\n e.exception.__suppress_context__,\n msg="Expected the internal KeyError to be hidden.",\n )\n\n def test_checks_can_be_added_at_init(self):\n checker = TypeChecker({"two": equals_2})\n self.assertEqual(checker, TypeChecker().redefine("two", equals_2))\n\n def test_redefine_existing_type(self):\n self.assertEqual(\n TypeChecker().redefine("two", object()).redefine("two", equals_2),\n TypeChecker().redefine("two", equals_2),\n )\n\n def test_remove(self):\n self.assertEqual(\n TypeChecker({"two": equals_2}).remove("two"),\n TypeChecker(),\n )\n\n def test_remove_unknown_type(self):\n with self.assertRaises(UndefinedTypeCheck) as context:\n 
TypeChecker().remove("foobar")\n self.assertIn("foobar", str(context.exception))\n\n def test_redefine_many(self):\n self.assertEqual(\n TypeChecker().redefine_many({"foo": int, "bar": str}),\n TypeChecker().redefine("foo", int).redefine("bar", str),\n )\n\n def test_remove_multiple(self):\n self.assertEqual(\n TypeChecker({"foo": int, "bar": str}).remove("foo", "bar"),\n TypeChecker(),\n )\n\n def test_type_check_can_raise_key_error(self):\n """\n Make sure no one writes:\n\n try:\n self._type_checkers[type](...)\n except KeyError:\n\n ignoring the fact that the function itself can raise that.\n """\n\n error = KeyError("Stuff")\n\n def raises_keyerror(checker, instance):\n raise error\n\n with self.assertRaises(KeyError) as context:\n TypeChecker({"foo": raises_keyerror}).is_type(4, "foo")\n\n self.assertIs(context.exception, error)\n\n def test_repr(self):\n checker = TypeChecker({"foo": is_namedtuple, "bar": is_namedtuple})\n self.assertEqual(repr(checker), "<TypeChecker types={'bar', 'foo'}>")\n\n\nclass TestCustomTypes(TestCase):\n def test_simple_type_can_be_extended(self):\n def int_or_str_int(checker, instance):\n if not isinstance(instance, (int, str)):\n return False\n try:\n int(instance)\n except ValueError:\n return False\n return True\n\n CustomValidator = extend(\n Draft202012Validator,\n type_checker=Draft202012Validator.TYPE_CHECKER.redefine(\n "integer", int_or_str_int,\n ),\n )\n validator = CustomValidator({"type": "integer"})\n\n validator.validate(4)\n validator.validate("4")\n\n with self.assertRaises(ValidationError):\n validator.validate(4.4)\n\n with self.assertRaises(ValidationError):\n validator.validate("foo")\n\n def test_object_can_be_extended(self):\n schema = {"type": "object"}\n\n Point = namedtuple("Point", ["x", "y"])\n\n type_checker = Draft202012Validator.TYPE_CHECKER.redefine(\n "object", is_object_or_named_tuple,\n )\n\n CustomValidator = extend(\n Draft202012Validator,\n type_checker=type_checker,\n )\n validator = 
CustomValidator(schema)\n\n validator.validate(Point(x=4, y=5))\n\n def test_object_extensions_require_custom_validators(self):\n schema = {"type": "object", "required": ["x"]}\n\n type_checker = Draft202012Validator.TYPE_CHECKER.redefine(\n "object", is_object_or_named_tuple,\n )\n\n CustomValidator = extend(\n Draft202012Validator,\n type_checker=type_checker,\n )\n validator = CustomValidator(schema)\n\n Point = namedtuple("Point", ["x", "y"])\n # Cannot handle required\n with self.assertRaises(ValidationError):\n validator.validate(Point(x=4, y=5))\n\n def test_object_extensions_can_handle_custom_validators(self):\n schema = {\n "type": "object",\n "required": ["x"],\n "properties": {"x": {"type": "integer"}},\n }\n\n type_checker = Draft202012Validator.TYPE_CHECKER.redefine(\n "object", is_object_or_named_tuple,\n )\n\n def coerce_named_tuple(fn):\n def coerced(validator, value, instance, schema):\n if is_namedtuple(instance):\n instance = instance._asdict()\n return fn(validator, value, instance, schema)\n return coerced\n\n required = coerce_named_tuple(_keywords.required)\n properties = coerce_named_tuple(_keywords.properties)\n\n CustomValidator = extend(\n Draft202012Validator,\n type_checker=type_checker,\n validators={"required": required, "properties": properties},\n )\n\n validator = CustomValidator(schema)\n\n Point = namedtuple("Point", ["x", "y"])\n # Can now process required and properties\n validator.validate(Point(x=4, y=5))\n\n with self.assertRaises(ValidationError):\n validator.validate(Point(x="not an integer", y=5))\n\n # As well as still handle objects.\n validator.validate({"x": 4, "y": 5})\n\n with self.assertRaises(ValidationError):\n validator.validate({"x": "not an integer", "y": 5})\n\n def test_unknown_type(self):\n with self.assertRaises(UnknownType) as e:\n Draft202012Validator({}).is_type(12, "some unknown type")\n self.assertIn("'some unknown type'", str(e.exception))\n | .venv\Lib\site-packages\jsonschema\tests\test_types.py | 
test_types.py | Python | 6,977 | 0.95 | 0.140271 | 0.017647 | react-lib | 217 | 2024-06-03T17:59:05.721358 | BSD-3-Clause | true | d5f930e3d474ec158f55293bf75f91ac |
from math import nan\nfrom unittest import TestCase\n\nfrom jsonschema._utils import equal\n\n\nclass TestEqual(TestCase):\n def test_none(self):\n self.assertTrue(equal(None, None))\n\n def test_nan(self):\n self.assertTrue(equal(nan, nan))\n\n\nclass TestDictEqual(TestCase):\n def test_equal_dictionaries(self):\n dict_1 = {"a": "b", "c": "d"}\n dict_2 = {"c": "d", "a": "b"}\n self.assertTrue(equal(dict_1, dict_2))\n\n def test_equal_dictionaries_with_nan(self):\n dict_1 = {"a": nan, "c": "d"}\n dict_2 = {"c": "d", "a": nan}\n self.assertTrue(equal(dict_1, dict_2))\n\n def test_missing_key(self):\n dict_1 = {"a": "b", "c": "d"}\n dict_2 = {"c": "d", "x": "b"}\n self.assertFalse(equal(dict_1, dict_2))\n\n def test_additional_key(self):\n dict_1 = {"a": "b", "c": "d"}\n dict_2 = {"c": "d", "a": "b", "x": "x"}\n self.assertFalse(equal(dict_1, dict_2))\n\n def test_missing_value(self):\n dict_1 = {"a": "b", "c": "d"}\n dict_2 = {"c": "d", "a": "x"}\n self.assertFalse(equal(dict_1, dict_2))\n\n def test_empty_dictionaries(self):\n dict_1 = {}\n dict_2 = {}\n self.assertTrue(equal(dict_1, dict_2))\n\n def test_one_none(self):\n dict_1 = None\n dict_2 = {"a": "b", "c": "d"}\n self.assertFalse(equal(dict_1, dict_2))\n\n def test_same_item(self):\n dict_1 = {"a": "b", "c": "d"}\n self.assertTrue(equal(dict_1, dict_1))\n\n def test_nested_equal(self):\n dict_1 = {"a": {"a": "b", "c": "d"}, "c": "d"}\n dict_2 = {"c": "d", "a": {"a": "b", "c": "d"}}\n self.assertTrue(equal(dict_1, dict_2))\n\n def test_nested_dict_unequal(self):\n dict_1 = {"a": {"a": "b", "c": "d"}, "c": "d"}\n dict_2 = {"c": "d", "a": {"a": "b", "c": "x"}}\n self.assertFalse(equal(dict_1, dict_2))\n\n def test_mixed_nested_equal(self):\n dict_1 = {"a": ["a", "b", "c", "d"], "c": "d"}\n dict_2 = {"c": "d", "a": ["a", "b", "c", "d"]}\n self.assertTrue(equal(dict_1, dict_2))\n\n def test_nested_list_unequal(self):\n dict_1 = {"a": ["a", "b", "c", "d"], "c": "d"}\n dict_2 = {"c": "d", "a": ["b", "c", "d", 
"a"]}\n self.assertFalse(equal(dict_1, dict_2))\n\n\nclass TestListEqual(TestCase):\n def test_equal_lists(self):\n list_1 = ["a", "b", "c"]\n list_2 = ["a", "b", "c"]\n self.assertTrue(equal(list_1, list_2))\n\n def test_equal_lists_with_nan(self):\n list_1 = ["a", nan, "c"]\n list_2 = ["a", nan, "c"]\n self.assertTrue(equal(list_1, list_2))\n\n def test_unsorted_lists(self):\n list_1 = ["a", "b", "c"]\n list_2 = ["b", "b", "a"]\n self.assertFalse(equal(list_1, list_2))\n\n def test_first_list_larger(self):\n list_1 = ["a", "b", "c"]\n list_2 = ["a", "b"]\n self.assertFalse(equal(list_1, list_2))\n\n def test_second_list_larger(self):\n list_1 = ["a", "b"]\n list_2 = ["a", "b", "c"]\n self.assertFalse(equal(list_1, list_2))\n\n def test_list_with_none_unequal(self):\n list_1 = ["a", "b", None]\n list_2 = ["a", "b", "c"]\n self.assertFalse(equal(list_1, list_2))\n\n list_1 = ["a", "b", None]\n list_2 = [None, "b", "c"]\n self.assertFalse(equal(list_1, list_2))\n\n def test_list_with_none_equal(self):\n list_1 = ["a", None, "c"]\n list_2 = ["a", None, "c"]\n self.assertTrue(equal(list_1, list_2))\n\n def test_empty_list(self):\n list_1 = []\n list_2 = []\n self.assertTrue(equal(list_1, list_2))\n\n def test_one_none(self):\n list_1 = None\n list_2 = []\n self.assertFalse(equal(list_1, list_2))\n\n def test_same_list(self):\n list_1 = ["a", "b", "c"]\n self.assertTrue(equal(list_1, list_1))\n\n def test_equal_nested_lists(self):\n list_1 = ["a", ["b", "c"], "d"]\n list_2 = ["a", ["b", "c"], "d"]\n self.assertTrue(equal(list_1, list_2))\n\n def test_unequal_nested_lists(self):\n list_1 = ["a", ["b", "c"], "d"]\n list_2 = ["a", [], "c"]\n self.assertFalse(equal(list_1, list_2))\n | .venv\Lib\site-packages\jsonschema\tests\test_utils.py | test_utils.py | Python | 4,163 | 0.85 | 0.210145 | 0 | node-utils | 184 | 2024-10-28T22:49:26.467109 | BSD-3-Clause | true | a0a4847576c89a805b738659cec17bdb |
"""\nPython representations of the JSON Schema Test Suite tests.\n"""\nfrom __future__ import annotations\n\nfrom contextlib import suppress\nfrom functools import partial\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Any\nimport json\nimport os\nimport re\nimport sys\nimport unittest\n\nfrom attrs import field, frozen\nfrom referencing import Registry\nimport referencing.jsonschema\n\nif TYPE_CHECKING:\n from collections.abc import Iterable, Mapping, Sequence\n\n from referencing.jsonschema import Schema\n import pyperf\n\nfrom jsonschema.validators import _VALIDATORS\nimport jsonschema\n\nMAGIC_REMOTE_URL = "http://localhost:1234"\n\n_DELIMITERS = re.compile(r"[\W\- ]+")\n\n\ndef _find_suite():\n root = os.environ.get("JSON_SCHEMA_TEST_SUITE")\n if root is not None:\n return Path(root)\n\n root = Path(jsonschema.__file__).parent.parent / "json"\n if not root.is_dir(): # pragma: no cover\n raise ValueError(\n (\n "Can't find the JSON-Schema-Test-Suite directory. "\n "Set the 'JSON_SCHEMA_TEST_SUITE' environment "\n "variable or run the tests from alongside a checkout "\n "of the suite."\n ),\n )\n return root\n\n\n@frozen\nclass Suite:\n\n _root: Path = field(factory=_find_suite)\n\n\n def benchmark(self, runner: pyperf.Runner): # pragma: no cover\n for name, Validator in _VALIDATORS.items():\n self.version(name=name).benchmark(\n runner=runner,\n Validator=Validator,\n )\n\n def version(self, name) -> Version:\n Validator = _VALIDATORS[name]\n uri: str = Validator.ID_OF(Validator.META_SCHEMA) # type: ignore[assignment]\n specification = referencing.jsonschema.specification_with(uri)\n\n registry = Registry().with_contents(\n remotes_in(root=self._root / "remotes", name=name, uri=uri),\n default_specification=specification,\n )\n return Version(\n name=name,\n path=self._root / "tests" / name,\n remotes=registry,\n )\n\n\n@frozen\nclass Version:\n\n _path: Path\n _remotes: referencing.jsonschema.SchemaRegistry\n\n name: str\n\n def benchmark(self, 
**kwargs): # pragma: no cover\n for case in self.cases():\n case.benchmark(**kwargs)\n\n def cases(self) -> Iterable[_Case]:\n return self._cases_in(paths=self._path.glob("*.json"))\n\n def format_cases(self) -> Iterable[_Case]:\n return self._cases_in(paths=self._path.glob("optional/format/*.json"))\n\n def optional_cases_of(self, name: str) -> Iterable[_Case]:\n return self._cases_in(paths=[self._path / "optional" / f"{name}.json"])\n\n def to_unittest_testcase(self, *groups, **kwargs):\n name = kwargs.pop("name", "Test" + self.name.title().replace("-", ""))\n methods = {\n method.__name__: method\n for method in (\n test.to_unittest_method(**kwargs)\n for group in groups\n for case in group\n for test in case.tests\n )\n }\n cls = type(name, (unittest.TestCase,), methods)\n\n # We're doing crazy things, so if they go wrong, like a function\n # behaving differently on some other interpreter, just make them\n # not happen.\n with suppress(Exception):\n cls.__module__ = _someone_save_us_the_module_of_the_caller()\n\n return cls\n\n def _cases_in(self, paths: Iterable[Path]) -> Iterable[_Case]:\n for path in paths:\n for case in json.loads(path.read_text(encoding="utf-8")):\n yield _Case.from_dict(\n case,\n version=self,\n subject=path.stem,\n remotes=self._remotes,\n )\n\n\n@frozen\nclass _Case:\n\n version: Version\n\n subject: str\n description: str\n schema: Mapping[str, Any] | bool\n tests: list[_Test]\n comment: str | None = None\n specification: Sequence[dict[str, str]] = ()\n\n @classmethod\n def from_dict(cls, data, remotes, **kwargs):\n data.update(kwargs)\n tests = [\n _Test(\n version=data["version"],\n subject=data["subject"],\n case_description=data["description"],\n schema=data["schema"],\n remotes=remotes,\n **test,\n ) for test in data.pop("tests")\n ]\n return cls(tests=tests, **data)\n\n def benchmark(self, runner: pyperf.Runner, **kwargs): # pragma: no cover\n for test in self.tests:\n runner.bench_func(\n test.fully_qualified_name,\n 
partial(test.validate_ignoring_errors, **kwargs),\n )\n\n\ndef remotes_in(\n root: Path,\n name: str,\n uri: str,\n) -> Iterable[tuple[str, Schema]]:\n # This messy logic is because the test suite is terrible at indicating\n # what remotes are needed for what drafts, and mixes in schemas which\n # have no $schema and which are invalid under earlier versions, in with\n # other schemas which are needed for tests.\n\n for each in root.rglob("*.json"):\n schema = json.loads(each.read_text())\n\n relative = str(each.relative_to(root)).replace("\\", "/")\n\n if (\n ( # invalid boolean schema\n name in {"draft3", "draft4"}\n and each.stem == "tree"\n ) or\n ( # draft<NotThisDialect>/*.json\n "$schema" not in schema\n and relative.startswith("draft")\n and not relative.startswith(name)\n )\n ):\n continue\n yield f"{MAGIC_REMOTE_URL}/{relative}", schema\n\n\n@frozen(repr=False)\nclass _Test:\n\n version: Version\n\n subject: str\n case_description: str\n description: str\n\n data: Any\n schema: Mapping[str, Any] | bool\n\n valid: bool\n\n _remotes: referencing.jsonschema.SchemaRegistry\n\n comment: str | None = None\n\n def __repr__(self): # pragma: no cover\n return f"<Test {self.fully_qualified_name}>"\n\n @property\n def fully_qualified_name(self): # pragma: no cover\n return " > ".join( # noqa: FLY002\n [\n self.version.name,\n self.subject,\n self.case_description,\n self.description,\n ],\n )\n\n def to_unittest_method(self, skip=lambda test: None, **kwargs):\n if self.valid:\n def fn(this):\n self.validate(**kwargs)\n else:\n def fn(this):\n with this.assertRaises(jsonschema.ValidationError):\n self.validate(**kwargs)\n\n fn.__name__ = "_".join(\n [\n "test",\n _DELIMITERS.sub("_", self.subject),\n _DELIMITERS.sub("_", self.case_description),\n _DELIMITERS.sub("_", self.description),\n ],\n )\n reason = skip(self)\n if reason is None or os.environ.get("JSON_SCHEMA_DEBUG", "0") != "0":\n return fn\n elif os.environ.get("JSON_SCHEMA_EXPECTED_FAILURES", "0") != "0": # 
pragma: no cover # noqa: E501\n return unittest.expectedFailure(fn)\n else:\n return unittest.skip(reason)(fn)\n\n def validate(self, Validator, **kwargs):\n Validator.check_schema(self.schema)\n validator = Validator(\n schema=self.schema,\n registry=self._remotes,\n **kwargs,\n )\n if os.environ.get("JSON_SCHEMA_DEBUG", "0") != "0": # pragma: no cover\n breakpoint() # noqa: T100\n validator.validate(instance=self.data)\n\n def validate_ignoring_errors(self, Validator): # pragma: no cover\n with suppress(jsonschema.ValidationError):\n self.validate(Validator=Validator)\n\n\ndef _someone_save_us_the_module_of_the_caller():\n """\n The FQON of the module 2nd stack frames up from here.\n\n This is intended to allow us to dynamically return test case classes that\n are indistinguishable from being defined in the module that wants them.\n\n Otherwise, trial will mis-print the FQON, and copy pasting it won't re-run\n the class that really is running.\n\n Save us all, this is all so so so so so terrible.\n """\n\n return sys._getframe(2).f_globals["__name__"]\n | .venv\Lib\site-packages\jsonschema\tests\_suite.py | _suite.py | Python | 8,374 | 0.95 | 0.164912 | 0.04 | awesome-app | 46 | 2023-10-10T14:10:48.831566 | BSD-3-Clause | true | f798399b1bfd68bf840c9b55af57727f |
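The `to_unittest_method` helper in `_suite.py` above builds test functions dynamically, renames them, and wraps them in `unittest.skip`/`expectedFailure` decorators based on a skip predicate. A minimal stdlib-only sketch of that pattern (the `make_test`/`Doubling` names here are illustrative, not from the suite):

```python
import unittest

def make_test(value, expected, skip=lambda v: None):
    """Build one test function, mirroring to_unittest_method's shape."""
    def fn(self):
        self.assertEqual(value * 2, expected)
    # Rename so failures report a meaningful, unique test name.
    fn.__name__ = f"test_double_{value}"
    reason = skip(value)
    return fn if reason is None else unittest.skip(reason)(fn)

class Doubling(unittest.TestCase):
    pass

# Attach the generated tests to the case class, as the suite does.
for value, expected in [(1, 2), (3, 6)]:
    fn = make_test(value, expected)
    setattr(Doubling, fn.__name__, fn)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Doubling)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 2 True
```

Setting `fn.__name__` before attaching is what makes the loader discover each generated method as a distinct test.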
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\fuzz_validate.cpython-313.pyc | fuzz_validate.cpython-313.pyc | Other | 2,302 | 0.8 | 0.076923 | 0 | vue-tools | 190 | 2025-03-19T00:06:19.831441 | BSD-3-Clause | true | 88c5cb224b6a84fe0af17271ef0313d9 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_cli.cpython-313.pyc | test_cli.cpython-313.pyc | Other | 32,827 | 0.95 | 0.004773 | 0.01084 | react-lib | 34 | 2025-06-14T13:22:56.070249 | Apache-2.0 | true | 2e7bd2ee6e92aa55e5f3fa01c2086630 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_deprecations.cpython-313.pyc | test_deprecations.cpython-313.pyc | Other | 24,989 | 0.95 | 0.004376 | 0 | vue-tools | 863 | 2024-04-03T11:36:12.708057 | BSD-3-Clause | true | bdccf15452b505b69eba14ca774c5d20 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_exceptions.cpython-313.pyc | test_exceptions.cpython-313.pyc | Other | 29,359 | 0.8 | 0.024024 | 0.009967 | react-lib | 925 | 2024-12-29T23:55:59.027820 | GPL-3.0 | true | 22bb8dd2138086431cee02e0dba00e55 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_format.cpython-313.pyc | test_format.cpython-313.pyc | Other | 6,375 | 0.8 | 0.011765 | 0.042254 | node-utils | 323 | 2024-02-12T04:58:04.629352 | MIT | true | cf0156d62c33da93c6cb05bfb8e15e2b |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_jsonschema_test_suite.cpython-313.pyc | test_jsonschema_test_suite.cpython-313.pyc | Other | 10,454 | 0.8 | 0.021739 | 0 | vue-tools | 222 | 2024-03-08T21:42:01.638488 | Apache-2.0 | true | 09fbfa9deae26197c53d6e9e8085a898 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_types.cpython-313.pyc | test_types.cpython-313.pyc | Other | 11,918 | 0.95 | 0.019231 | 0 | python-kit | 516 | 2024-09-01T20:41:41.877705 | MIT | true | 2d01873ab5386b160ccbd4625cb61995 |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\test_utils.cpython-313.pyc | test_utils.cpython-313.pyc | Other | 7,924 | 0.8 | 0 | 0 | node-utils | 331 | 2024-12-12T14:20:19.209289 | BSD-3-Clause | true | ea9014f657a3068b9c96b319da00fc7a |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\_suite.cpython-313.pyc | _suite.cpython-313.pyc | Other | 13,216 | 0.95 | 0.007299 | 0.008 | awesome-app | 668 | 2024-08-10T18:00:41.455489 | BSD-3-Clause | true | a0848ed5e22aa2aea981cdf9bffe895f |
\n\n | .venv\Lib\site-packages\jsonschema\tests\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 191 | 0.7 | 0 | 0 | vue-tools | 445 | 2024-05-16T09:39:37.078900 | Apache-2.0 | true | 53ab874aaf70606ee47679e3767aa652 |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\cli.cpython-313.pyc | cli.cpython-313.pyc | Other | 12,372 | 0.95 | 0.066176 | 0 | python-kit | 264 | 2024-11-18T09:25:38.069788 | Apache-2.0 | false | ebb04cb9bc1b4ab87ab9c76dc073e9bf |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\exceptions.cpython-313.pyc | exceptions.cpython-313.pyc | Other | 21,322 | 0.95 | 0.059761 | 0.004525 | awesome-app | 393 | 2025-02-26T13:53:13.207494 | Apache-2.0 | false | 16c79ca352aaafd9c96aff53b7b367ca |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\protocols.cpython-313.pyc | protocols.cpython-313.pyc | Other | 6,233 | 0.95 | 0.079755 | 0.008547 | react-lib | 456 | 2024-05-02T01:08:43.403673 | MIT | false | 9dce107eb931e99ee116f6a6b4d988fa |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\validators.cpython-313.pyc | validators.cpython-313.pyc | Other | 51,795 | 0.95 | 0.075071 | 0 | react-lib | 263 | 2025-06-19T03:08:16.750339 | MIT | false | 66ae863f40729323273e6dd02e5cbe25 |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_format.cpython-313.pyc | _format.cpython-313.pyc | Other | 17,887 | 0.95 | 0.035294 | 0 | awesome-app | 517 | 2024-04-30T07:22:10.792476 | BSD-3-Clause | false | a569b0cf2c370a532b9159f62395eb2a |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_keywords.cpython-313.pyc | _keywords.cpython-313.pyc | Other | 20,011 | 0.95 | 0 | 0.007463 | node-utils | 907 | 2023-10-16T20:11:48.085480 | BSD-3-Clause | false | d11704160cf367948421a93dc3495fad |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_legacy_keywords.cpython-313.pyc | _legacy_keywords.cpython-313.pyc | Other | 17,412 | 0.95 | 0.025773 | 0 | react-lib | 753 | 2024-03-01T15:11:39.596377 | BSD-3-Clause | false | ef075476c86525c32d7a46fc297cda71 |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_types.cpython-313.pyc | _types.cpython-313.pyc | Other | 6,896 | 0.95 | 0.064815 | 0 | awesome-app | 12 | 2023-07-31T11:50:39.869790 | BSD-3-Clause | false | 59b0bad61ab7e9c52af4f75032b42713 |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_typing.cpython-313.pyc | _typing.cpython-313.pyc | Other | 1,409 | 0.8 | 0.05 | 0 | python-kit | 342 | 2023-08-16T05:12:11.378919 | GPL-3.0 | false | 9927dce1168c058bba58dbb888a0b53e |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\_utils.cpython-313.pyc | _utils.cpython-313.pyc | Other | 14,414 | 0.8 | 0.090909 | 0.007407 | vue-tools | 946 | 2024-01-13T01:19:41.650326 | BSD-3-Clause | false | b2b9013155d41f43583d29779bd9232f |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 3,766 | 0.95 | 0.058824 | 0 | react-lib | 629 | 2023-10-24T05:17:34.215316 | BSD-3-Clause | false | 9c13ec1a55823c301dc3e95ddaab2d66 |
\n\n | .venv\Lib\site-packages\jsonschema\__pycache__\__main__.cpython-313.pyc | __main__.cpython-313.pyc | Other | 340 | 0.7 | 0 | 0 | vue-tools | 653 | 2023-12-02T22:54:05.531859 | MIT | false | e1d0f47c61db26ae181a355c7266c70a |
[console_scripts]\njsonschema = jsonschema.cli:main\n | .venv\Lib\site-packages\jsonschema-4.24.0.dist-info\entry_points.txt | entry_points.txt | Other | 51 | 0.5 | 0 | 0 | node-utils | 108 | 2024-09-06T16:50:37.012274 | GPL-3.0 | false | 095614c840316fb033b6b246ebc5a437 |
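The `entry_points.txt` row above uses INI-style syntax, mapping a command name to a `module:attribute` target that the generated console script will import and call. A sketch of parsing that mapping with the stdlib `configparser` (the file content is inlined here for illustration):

```python
import configparser

# The entry_points.txt content, inlined for illustration.
text = "[console_scripts]\njsonschema = jsonschema.cli:main\n"

parser = configparser.ConfigParser()
parser.read_string(text)

# "module:attr" names the callable the console script invokes.
target = parser["console_scripts"]["jsonschema"]
module, _, attr = target.partition(":")
print(module, attr)  # jsonschema.cli main
```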
pip\n | .venv\Lib\site-packages\jsonschema-4.24.0.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 288 | 2025-02-23T17:08:53.094037 | GPL-3.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Metadata-Version: 2.4\nName: jsonschema\nVersion: 4.24.0\nSummary: An implementation of JSON Schema validation for Python\nProject-URL: Homepage, https://github.com/python-jsonschema/jsonschema\nProject-URL: Documentation, https://python-jsonschema.readthedocs.io/\nProject-URL: Issues, https://github.com/python-jsonschema/jsonschema/issues/\nProject-URL: Funding, https://github.com/sponsors/Julian\nProject-URL: Tidelift, https://tidelift.com/subscription/pkg/pypi-jsonschema?utm_source=pypi-jsonschema&utm_medium=referral&utm_campaign=pypi-link\nProject-URL: Changelog, https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst\nProject-URL: Source, https://github.com/python-jsonschema/jsonschema\nAuthor-email: Julian Berman <Julian+jsonschema@GrayVines.com>\nLicense-Expression: MIT\nLicense-File: COPYING\nKeywords: data validation,json,json schema,jsonschema,validation\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Topic :: File Formats :: JSON\nClassifier: Topic :: File Formats :: JSON :: JSON Schema\nRequires-Python: >=3.9\nRequires-Dist: attrs>=22.2.0\nRequires-Dist: importlib-resources>=1.4.0; python_version < '3.9'\nRequires-Dist: jsonschema-specifications>=2023.03.6\nRequires-Dist: pkgutil-resolve-name>=1.3.10; python_version < '3.9'\nRequires-Dist: referencing>=0.28.4\nRequires-Dist: rpds-py>=0.7.1\nProvides-Extra: format\nRequires-Dist: fqdn; extra == 'format'\nRequires-Dist: idna; extra == 
'format'\nRequires-Dist: isoduration; extra == 'format'\nRequires-Dist: jsonpointer>1.13; extra == 'format'\nRequires-Dist: rfc3339-validator; extra == 'format'\nRequires-Dist: rfc3987; extra == 'format'\nRequires-Dist: uri-template; extra == 'format'\nRequires-Dist: webcolors>=1.11; extra == 'format'\nProvides-Extra: format-nongpl\nRequires-Dist: fqdn; extra == 'format-nongpl'\nRequires-Dist: idna; extra == 'format-nongpl'\nRequires-Dist: isoduration; extra == 'format-nongpl'\nRequires-Dist: jsonpointer>1.13; extra == 'format-nongpl'\nRequires-Dist: rfc3339-validator; extra == 'format-nongpl'\nRequires-Dist: rfc3986-validator>0.1.0; extra == 'format-nongpl'\nRequires-Dist: uri-template; extra == 'format-nongpl'\nRequires-Dist: webcolors>=24.6.0; extra == 'format-nongpl'\nDescription-Content-Type: text/x-rst\n\n==========\njsonschema\n==========\n\n|PyPI| |Pythons| |CI| |ReadTheDocs| |Precommit| |Zenodo|\n\n.. |PyPI| image:: https://img.shields.io/pypi/v/jsonschema.svg\n :alt: PyPI version\n :target: https://pypi.org/project/jsonschema/\n\n.. |Pythons| image:: https://img.shields.io/pypi/pyversions/jsonschema.svg\n :alt: Supported Python versions\n :target: https://pypi.org/project/jsonschema/\n\n.. |CI| image:: https://github.com/python-jsonschema/jsonschema/workflows/CI/badge.svg\n :alt: Build status\n :target: https://github.com/python-jsonschema/jsonschema/actions?query=workflow%3ACI\n\n.. |ReadTheDocs| image:: https://readthedocs.org/projects/python-jsonschema/badge/?version=stable&style=flat\n :alt: ReadTheDocs status\n :target: https://python-jsonschema.readthedocs.io/en/stable/\n\n.. |Precommit| image:: https://results.pre-commit.ci/badge/github/python-jsonschema/jsonschema/main.svg\n :alt: pre-commit.ci status\n :target: https://results.pre-commit.ci/latest/github/python-jsonschema/jsonschema/main\n\n.. 
|Zenodo| image:: https://zenodo.org/badge/3072629.svg\n :alt: Zenodo DOI\n :target: https://zenodo.org/badge/latestdoi/3072629\n\n\n``jsonschema`` is an implementation of the `JSON Schema <https://json-schema.org>`_ specification for Python.\n\n.. code:: python\n\n >>> from jsonschema import validate\n\n >>> # A sample schema, like what we'd get from json.load()\n >>> schema = {\n ... "type" : "object",\n ... "properties" : {\n ... "price" : {"type" : "number"},\n ... "name" : {"type" : "string"},\n ... },\n ... }\n\n >>> # If no exception is raised by validate(), the instance is valid.\n >>> validate(instance={"name" : "Eggs", "price" : 34.99}, schema=schema)\n\n >>> validate(\n ... instance={"name" : "Eggs", "price" : "Invalid"}, schema=schema,\n ... ) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n ValidationError: 'Invalid' is not of type 'number'\n\nIt can also be used from the command line by installing `check-jsonschema <https://github.com/python-jsonschema/check-jsonschema>`_.\n\nFeatures\n--------\n\n* Full support for `Draft 2020-12 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft202012Validator>`_, `Draft 2019-09 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft201909Validator>`_, `Draft 7 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft7Validator>`_, `Draft 6 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft6Validator>`_, `Draft 4 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft4Validator>`_ and `Draft 3 <https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/validators/#jsonschema.validators.Draft3Validator>`_\n\n* `Lazy validation 
<https://python-jsonschema.readthedocs.io/en/latest/api/jsonschema/protocols/#jsonschema.protocols.Validator.iter_errors>`_ that can iteratively report *all* validation errors.\n\n* `Programmatic querying <https://python-jsonschema.readthedocs.io/en/latest/errors/>`_ of which properties or items failed validation.\n\n\nInstallation\n------------\n\n``jsonschema`` is available on `PyPI <https://pypi.org/project/jsonschema/>`_. You can install using `pip <https://pip.pypa.io/en/stable/>`_:\n\n.. code:: bash\n\n $ pip install jsonschema\n\n\nExtras\n======\n\nTwo extras are available when installing the package, both currently related to ``format`` validation:\n\n * ``format``\n * ``format-nongpl``\n\nThey can be used when installing in order to include additional dependencies, e.g.:\n\n.. code:: bash\n\n $ pip install jsonschema'[format]'\n\nBe aware that the mere presence of these dependencies – or even the specification of ``format`` checks in a schema – do *not* activate format checks (as per the specification).\nPlease read the `format validation documentation <https://python-jsonschema.readthedocs.io/en/latest/validate/#validating-formats>`_ for further details.\n\nAbout\n-----\n\nI'm Julian Berman.\n\n``jsonschema`` is on `GitHub <https://github.com/python-jsonschema/jsonschema>`_.\n\nGet in touch, via GitHub or otherwise, if you've got something to contribute, it'd be most welcome!\n\nIf you feel overwhelmingly grateful, you can also `sponsor me <https://github.com/sponsors/Julian/>`_.\n\nAnd for companies who appreciate ``jsonschema`` and its continued support and growth, ``jsonschema`` is also now supportable via `TideLift <https://tidelift.com/subscription/pkg/pypi-jsonschema?utm_source=pypi-jsonschema&utm_medium=referral&utm_campaign=readme>`_.\n\n\nRelease Information\n-------------------\n\nv4.24.0\n=======\n\n* Fix improper handling of ``unevaluatedProperties`` in the presence of ``additionalProperties`` (#1351).\n* Support for Python 3.8 has been 
dropped, as it is end-of-life.\n | .venv\Lib\site-packages\jsonschema-4.24.0.dist-info\METADATA | METADATA | Other | 7,755 | 0.95 | 0.040698 | 0.054264 | python-kit | 776 | 2024-06-25T13:30:52.366322 | MIT | false | 8e38f05dd0180f9ca775b061370ada07 |
../../Scripts/jsonschema.exe,sha256=LD7m5ynrYDQfrG2UqwWnBgSNkptZFkaneXuPno2ZNbc,108415\njsonschema-4.24.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\njsonschema-4.24.0.dist-info/METADATA,sha256=Ibtqh_YF6ihrEgixIpXcUNcsTQ76s_ieJ7c0-9XevuY,7755\njsonschema-4.24.0.dist-info/RECORD,,\njsonschema-4.24.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87\njsonschema-4.24.0.dist-info/entry_points.txt,sha256=vO7rX4Fs_xIVJy2pnAtKgTSxfpnozAVQ0DjCmpMxnWE,51\njsonschema-4.24.0.dist-info/licenses/COPYING,sha256=T5KgFaE8TRoEC-8BiqE0MLTxvHO0Gxa7hGw0Z2bedDk,1057\njsonschema/__init__.py,sha256=p-Rw4TS_0OPHZIJyImDWsdWgmd6CPWHMXLq7BuQxTGc,3941\njsonschema/__main__.py,sha256=iLsZf2upUB3ilBKTlMnyK-HHt2Cnnfkwwxi_c6gLvSA,115\njsonschema/__pycache__/__init__.cpython-313.pyc,,\njsonschema/__pycache__/__main__.cpython-313.pyc,,\njsonschema/__pycache__/_format.cpython-313.pyc,,\njsonschema/__pycache__/_keywords.cpython-313.pyc,,\njsonschema/__pycache__/_legacy_keywords.cpython-313.pyc,,\njsonschema/__pycache__/_types.cpython-313.pyc,,\njsonschema/__pycache__/_typing.cpython-313.pyc,,\njsonschema/__pycache__/_utils.cpython-313.pyc,,\njsonschema/__pycache__/cli.cpython-313.pyc,,\njsonschema/__pycache__/exceptions.cpython-313.pyc,,\njsonschema/__pycache__/protocols.cpython-313.pyc,,\njsonschema/__pycache__/validators.cpython-313.pyc,,\njsonschema/_format.py,sha256=XMG7Qu44gUEH1H6h-gvU2BKZR0EfzqVfAtjoI9BasbM,14747\njsonschema/_keywords.py,sha256=r8_DrqAfn6QLwQnmXEggveiSU-UaIL2p2nuPINelfFc,14949\njsonschema/_legacy_keywords.py,sha256=2tWuwRPWbYS7EAl8wBIC_rabGuv1J4dfYLqNEPpShhA,15191\njsonschema/_types.py,sha256=0pYJG61cn_4ZWVnqyD24tax2QBMlnSPy0fcECCpASMk,5456\njsonschema/_typing.py,sha256=hFfAEeFJ76LYAl_feuVa0gnHnV9VEq_UhjLJS-7axgY,630\njsonschema/_utils.py,sha256=Xv6_wKKslBJlwyj9-j2c8JDFw-4z4aWFnVe2pX8h7U4,10659\njsonschema/benchmarks/__init__.py,sha256=A0sQrxDBVHSyQ-8ru3L11hMXf3q9gVuB9x_YgHb4R9M,70\njsonschema/benchmarks/__pycache__/__init__.cpyth
on-313.pyc,,\njsonschema/benchmarks/__pycache__/const_vs_enum.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/contains.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/issue232.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/json_schema_test_suite.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/nested_schemas.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/subcomponents.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/unused_registry.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/useless_applicator_schemas.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/useless_keywords.cpython-313.pyc,,\njsonschema/benchmarks/__pycache__/validator_creation.cpython-313.pyc,,\njsonschema/benchmarks/const_vs_enum.py,sha256=DVFi3WDqBalZFOibnjpX1uTSr3Rxa2cPgFcowd7Ukrs,830\njsonschema/benchmarks/contains.py,sha256=gexQoUrCOwECofbt19BeosQZ7WFL6PDdkX49DWwBlOg,786\njsonschema/benchmarks/issue232.py,sha256=3LLYLIlBGQnVuyyo2iAv-xky5P6PRFHANx4-zIIQOoE,521\njsonschema/benchmarks/issue232/issue.json,sha256=eaPOZjMRu5u8RpKrsA9uk7ucPZS5tkKG4D_hkOTQ3Hk,117105\njsonschema/benchmarks/json_schema_test_suite.py,sha256=PvfabpUYcF4_7csYDTcTauED8rnFEGYbdY5RqTXD08s,320\njsonschema/benchmarks/nested_schemas.py,sha256=mo07dx-CIgmSOI62CNs4g5xu1FzHklLBpkQoDxWYcKs,1892\njsonschema/benchmarks/subcomponents.py,sha256=fEyiMzsWeK2pd7DEGCuuY-vzGunwhHczRBWEnBRLKIo,1113\njsonschema/benchmarks/unused_registry.py,sha256=hwRwONc9cefPtYzkoX_TYRO3GyUojriv0-YQaK3vnj0,940\njsonschema/benchmarks/useless_applicator_schemas.py,sha256=EVm5-EtOEFoLP_Vt2j4SrCwlx05NhPqNuZQ6LIMP1Dc,3342\njsonschema/benchmarks/useless_keywords.py,sha256=bj_zKr1oVctFlqyZaObCsYTgFjiiNgPzC0hr1Y868mE,867\njsonschema/benchmarks/validator_creation.py,sha256=UkUQlLAnussnr_KdCIdad6xx2pXxQLmYtsXoiirKeWQ,285\njsonschema/cli.py,sha256=SGy9JPg02mgXhNxugU8iXhYNivfSjBhKTNAgV90ty-M,8551\njsonschema/exceptions.py,sha256=l1wGgRg_8lpS1r8g9WpUC1Sue9DmMHc3chk6GLXYLzg,14951\njsonschema/protocols.py,sha256=Cv3L2xUl1MxQCRMcpNNbBL0nh14
ekPYoazNfochqiag,7145\njsonschema/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\njsonschema/tests/__pycache__/__init__.cpython-313.pyc,,\njsonschema/tests/__pycache__/_suite.cpython-313.pyc,,\njsonschema/tests/__pycache__/fuzz_validate.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_cli.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_deprecations.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_exceptions.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_format.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_jsonschema_test_suite.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_types.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_utils.cpython-313.pyc,,\njsonschema/tests/__pycache__/test_validators.cpython-313.pyc,,\njsonschema/tests/_suite.py,sha256=2k0X91N7dOHhQc5mrYv40OKf1weioj6RMBqWgLT6-PI,8374\njsonschema/tests/fuzz_validate.py,sha256=fUA7yTJIihaCwJplkUehZeyB84HcXEcqtY5oPJXIO7I,1114\njsonschema/tests/test_cli.py,sha256=A89r5LOHy-peLPZA5YDkOaMTWqzQO_w2Tu8WFz_vphM,28544\njsonschema/tests/test_deprecations.py,sha256=yG6mkRJHpTHbWoxpLC5y5H7fk8erGOs8f_9V4tCBEh8,15754\njsonschema/tests/test_exceptions.py,sha256=JgC-E1ZFZK2puVBp35WFRnG8CNOiSWLYtyLjh9IvFKI,22591\njsonschema/tests/test_format.py,sha256=eVm5SMaWF2lOPO28bPAwNvkiQvHCQKy-MnuAgEchfEc,3188\njsonschema/tests/test_jsonschema_test_suite.py,sha256=tAfxknM65OR9LyDPHu1pkEaombLgjRLnJ6FPiWPdxjg,8461\njsonschema/tests/test_types.py,sha256=cF51KTDmdsx06MrIc4fXKt0X9fIsVgw5uhT8CamVa8U,6977\njsonschema/tests/test_utils.py,sha256=sao74o1PyYMxBfqweokQN48CFSS6yhJk5FkCfMJ5PsI,4163\njsonschema/tests/test_validators.py,sha256=eiaigsZMzHYYsniQ1UPygaS56a1d-_7-9NC4wVXAhzs,87975\njsonschema/validators.py,sha256=8gThVddl0AObBsfChZ2rrzyRUosnFdICxzIL8xrvu84,47098\n | .venv\Lib\site-packages\jsonschema-4.24.0.dist-info\RECORD | RECORD | Other | 5,782 | 0.7 | 0 | 0 | vue-tools | 479 | 2024-10-16T21:40:06.384491 | MIT | false | 2657cd1c166c54a2b6f173d1a71d68a4 |
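The hashes in the RECORD file above follow the wheel RECORD convention: `sha256=` plus the unpadded urlsafe-base64 encoding of the file's digest. A sketch computing the entry for the 4-byte `INSTALLER` file (whose recorded content is `pip\n`); the exact hash string is taken from the RECORD row above:

```python
import base64
import hashlib

# INSTALLER's recorded content is the 4 bytes "pip\n".
data = b"pip\n"

digest = hashlib.sha256(data).digest()
# urlsafe base64 with trailing "=" padding stripped, per the RECORD format;
# a 32-byte SHA-256 digest always encodes to 43 characters this way.
encoded = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
print(f"sha256={encoded}")
```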