diff --git a/.gitattributes b/.gitattributes index 39f5a613c9566fd18f356b72c011464ce64a5b7a..6bc8e55ddd545c5f49cfceb537d4b76fecbf48dd 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1643,3 +1643,4 @@ evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/ode/__pycache__/ode. evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/__pycache__/solvers.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/__pycache__/solveset.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/ode/__pycache__/single.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text +evalkit_internvl/lib/python3.10/site-packages/sympy/polys/benchmarks/__pycache__/bench_solvers.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/INSTALLER b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/INSTALLER new file mode 100644 index 0000000000000000000000000000000000000000..a1b589e38a32041e49332e5e81c2d363dc418d68 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/METADATA b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/METADATA new file mode 100644 index 0000000000000000000000000000000000000000..5a8a3d2859c99a529de1a6fda8560e4827418726 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/METADATA @@ -0,0 +1,206 @@ +Metadata-Version: 2.1 +Name: bitsandbytes +Version: 0.41.0 +Summary: k-bit optimizers and matrix multiplication routines. 
+Home-page: https://github.com/TimDettmers/bitsandbytes +Author: Tim Dettmers +Author-email: dettmers@cs.washington.edu +License: MIT +Keywords: gpu optimizers optimization 8-bit quantization compression +Classifier: Development Status :: 4 - Beta +Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence +Description-Content-Type: text/markdown +License-File: LICENSE +License-File: NOTICE.md + +# bitsandbytes + +bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions. + + + +Resources: +- [8-bit Optimizer Paper](https://arxiv.org/abs/2110.02861) -- [Video](https://www.youtube.com/watch?v=IxrlHAJtqKE) -- [Docs](https://bitsandbytes.readthedocs.io/en/latest/) + +- [LLM.int8() Paper](https://arxiv.org/abs/2208.07339) -- [LLM.int8() Software Blog Post](https://huggingface.co/blog/hf-bitsandbytes-integration) -- [LLM.int8() Emergent Features Blog Post](https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/) + +## TL;DR +**Requirements** +Python >= 3.8. A Linux distribution (e.g. Ubuntu, Debian) + CUDA > 10.0. + +(Deprecated: CUDA 10.0 is deprecated and only CUDA >= 11.0 will be supported with release 0.39.0.) + +**Installation**: + +``pip install bitsandbytes`` + +In some cases you may need to compile from source. If this happens, please consider submitting a bug report with the output of `python -m bitsandbytes`. The short instructions that follow might work out of the box if `nvcc` is installed; if they do not, see further below. 
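Before running the compilation quickstart, it can save time to confirm that `nvcc` is actually visible on your `PATH`. A minimal sketch (the echo message is illustrative, not part of bitsandbytes; `python -m bitsandbytes` prints a fuller diagnostic report):

```shell
# Quick sanity check before attempting a source build:
# is the CUDA compiler (nvcc) visible on PATH?
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
else
    echo "nvcc not found: install the CUDA Toolkit or adjust PATH before compiling from source"
fi
```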
+ +Compilation quickstart: +```bash +git clone https://github.com/timdettmers/bitsandbytes.git +cd bitsandbytes + +# CUDA_VERSIONS in {110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121} +# make argument in {cuda110, cuda11x, cuda12x} +# if you do not know what CUDA you have, try looking at the output of: python -m bitsandbytes +CUDA_VERSION=117 make cuda11x +python setup.py install +``` + +**Using Int8 inference with HuggingFace Transformers** + +```python +import torch +from transformers import AutoModelForCausalLM + +model = AutoModelForCausalLM.from_pretrained( + 'decapoda-research/llama-7b-hf', + device_map='auto', + load_in_8bit=True, + max_memory=f'{int(torch.cuda.mem_get_info()[0]/1024**3)-2}GB') +``` + +A more detailed example can be found in [examples/int8_inference_huggingface.py](examples/int8_inference_huggingface.py). + +**Using 8-bit optimizer**: +1. Comment out optimizer: ``#torch.optim.Adam(....)`` +2. Add the 8-bit optimizer of your choice ``bnb.optim.Adam8bit(....)`` (arguments stay the same) +3. Replace embedding layer if necessary: ``torch.nn.Embedding(..) -> bnb.nn.Embedding(..)`` + + +**Using 8-bit Inference**: +1. Comment out torch.nn.Linear: ``#linear = torch.nn.Linear(...)`` +2. Add bnb 8-bit linear light module: ``linear = bnb.nn.Linear8bitLt(...)`` (base arguments stay the same) +3. There are two modes: + - Mixed 8-bit training with 16-bit main weights. Pass the argument ``has_fp16_weights=True`` (default) + - Int8 inference. Pass the argument ``has_fp16_weights=False`` +4. To use the full LLM.int8() method, use the ``threshold=k`` argument. We recommend ``k=6.0``. 
+```python +# LLM.int8() +linear = bnb.nn.Linear8bitLt(dim1, dim2, bias=True, has_fp16_weights=False, threshold=6.0) +# inputs need to be fp16 +out = linear(x.to(torch.float16)) +``` + + +## Features +- 8-bit Matrix multiplication with mixed precision decomposition +- LLM.int8() inference +- 8-bit Optimizers: Adam, AdamW, RMSProp, LARS, LAMB, Lion (saves 75% memory) +- Stable Embedding Layer: Improved stability through better initialization and normalization +- 8-bit quantization: Quantile, Linear, and Dynamic quantization +- Fast quantile estimation: Up to 100x faster than other algorithms + +## Requirements & Installation + +Requirements: anaconda, cudatoolkit, pytorch + +Hardware requirements: + - LLM.int8(): NVIDIA Turing (RTX 20xx; T4) or Ampere GPU (RTX 30xx; A4-A100); (a GPU from 2018 or newer). + - 8-bit optimizers and quantization: NVIDIA Kepler GPU or newer (>=GTX 78X). + +Supported CUDA versions: 10.2 - 12.0 + +The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment. + +The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the ["Get Started"](https://pytorch.org/get-started/locally/) instructions on the official website. + +To install run: + +``pip install bitsandbytes`` + +## Using bitsandbytes + +### Using Int8 Matrix Multiplication + +For straight Int8 matrix multiplication with mixed precision decomposition you can use ``bnb.matmul(...)``. To enable mixed precision decomposition, use the threshold parameter: +```python +bnb.matmul(..., threshold=6.0) +``` + +For instructions on how to use LLM.int8() inference layers in your own code, see the TL;DR above, or for extended instructions see [this blog post](https://github.com/huggingface/transformers). + +### Using the 8-bit Optimizers + +With bitsandbytes, 8-bit optimizers can be used by changing a single line of code in your codebase. 
For NLP models we also recommend using the StableEmbedding layers (see below), which improve results and help with stable 8-bit optimization. To get started with 8-bit optimizers, it is sufficient to replace your old optimizer with the 8-bit optimizer in the following way: +```python +import bitsandbytes as bnb + +# adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # comment out old optimizer +adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # add bnb optimizer +adam = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) # equivalent + + +torch.nn.Embedding(...) -> bnb.nn.StableEmbedding(...) # recommended for NLP models +``` + +Note that by default all parameter tensors with fewer than 4096 elements are kept at 32-bit even if you initialize those parameters with 8-bit optimizers. This is done since such small tensors do not save much memory and often contain highly variable parameters (biases) or parameters that require high precision (batch norm, layer norm). You can change this behavior like so: +```python +# parameter tensors with fewer than 16384 values are optimized in 32-bit +# it is recommended to use multiples of 4096 +adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384) +``` + +### Change Bits and other Hyperparameters for Individual Parameters + +If you want to optimize some unstable parameters with 32-bit Adam and others with 8-bit Adam, you can use the `GlobalOptimManager`. With this, we can also configure specific hyperparameters for particular layers, such as embedding layers. To do that, we need two things: (1) register the parameters while they are still on the CPU, (2) override the config with the new desired hyperparameters (anytime, anywhere). See our [guide](howto_config_override.md) for more details. + +### Fairseq Users + +To use the Stable Embedding Layer, override the respective `build_embedding(...)` function of your model. 
Make sure to also use the `--no-scale-embedding` flag to disable scaling of the word embedding layer (now replaced with layer norm). You can use the optimizers by replacing the optimizer in the respective file (`adam.py` etc.). + +## Release and Feature History + +For upcoming features and changes and the full history see [Patch Notes](CHANGELOG.md). + +## Errors + +1. RuntimeError: CUDA error: no kernel image is available for execution on the device. [Solution](errors_and_solutions.md#No-kernel-image-available) +2. __fatbinwrap_.. [Solution](errors_and_solutions.md#fatbinwrap_) + +## Compile from source +To compile from source, you need an installation of CUDA. If `nvcc` is not installed, you can install the CUDA Toolkit with nvcc through the following commands. + +```bash +wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_install.sh +# Syntax cuda_install CUDA_VERSION INSTALL_PREFIX EXPORT_TO_BASH +# CUDA_VERSION in {110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121} +# EXPORT_TO_BASH in {0, 1} with 0=False and 1=True + +# For example, the following installs CUDA 11.8 to ~/local/cuda-11.8 and exports the path to your .bashrc +bash cuda_install.sh 118 ~/local 1 +``` + +To use a specific CUDA version just for a single compile run, you can set the variable `CUDA_HOME`. For example, the following command compiles `libbitsandbytes_cuda117.so` using compiler flags for cuda11x with the cuda version at `~/local/cuda-11.7`: + +``CUDA_HOME=~/local/cuda-11.7 CUDA_VERSION=117 make cuda11x`` + +For more detailed instructions, please follow the [compile_from_source.md](compile_from_source.md) instructions. + +## License + +The majority of bitsandbytes is licensed under MIT; however, portions of the project are available under separate license terms: Pytorch is licensed under the BSD license. + +We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization. 
+ +## How to cite us +If you use this library and find LLM.int8() useful, please consider citing our work: + +```bibtex +@article{dettmers2022llmint8, + title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale}, + author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke}, + journal={arXiv preprint arXiv:2208.07339}, + year={2022} +} +``` + +For 8-bit optimizers or quantization routines, please consider citing the following work: + +```bibtex +@article{dettmers2022optimizers, + title={8-bit Optimizers via Block-wise Quantization}, + author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke}, + journal={9th International Conference on Learning Representations, ICLR}, + year={2022} +} +``` diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/NOTICE.md b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/NOTICE.md new file mode 100644 index 0000000000000000000000000000000000000000..660658b057ad820c341d932fcbd4dd4ffe8e30f4 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/NOTICE.md @@ -0,0 +1,3 @@ +The majority of bitsandbytes is licensed under MIT, however portions of the project are available under separate license terms: Pytorch is licensed under the BSD license. + +We thank Fabio Cannizzo for his work on FastBinarySearch which is included in this project. 
diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/RECORD b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/RECORD new file mode 100644 index 0000000000000000000000000000000000000000..d9e461d06ff155fb8d95cb950737445d3b836837 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/RECORD @@ -0,0 +1,99 @@ +bitsandbytes-0.41.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +bitsandbytes-0.41.0.dist-info/LICENSE,sha256=UkEte8fOQVfqYou6rLiCngqcs8WPV_mRdhJryM8r_IU,1086 +bitsandbytes-0.41.0.dist-info/METADATA,sha256=z88wKooZxLJ9Z5T3i4YEWBIzRKR9o3DZIes663fhUu4,9810 +bitsandbytes-0.41.0.dist-info/NOTICE.md,sha256=_4zDL2L8BqUwtmvoznR_wqhQmsP2QwdXHrAHnBMzAl8,265 +bitsandbytes-0.41.0.dist-info/RECORD,, +bitsandbytes-0.41.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bitsandbytes-0.41.0.dist-info/WHEEL,sha256=AtBG6SXL3KF_v0NxLf0ehyVOh0cold-JbJYXNGorC6Q,92 +bitsandbytes-0.41.0.dist-info/top_level.txt,sha256=bK-Zzu-JyIIh4njm8jTYcbuqX-Z80XTcDal4lXCG0-M,13 +bitsandbytes/__init__.py,sha256=mQQknbw8xSpKDtEJgVEiyCemE4HaB-FtAddxY2-Uyhc,670 +bitsandbytes/__main__.py,sha256=rWjs6LsifG_Vglj3WM4brY2IOCjwKpAjuBP3OIzYFPU,4014 +bitsandbytes/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/__pycache__/__main__.cpython-310.pyc,, +bitsandbytes/__pycache__/cextension.cpython-310.pyc,, +bitsandbytes/__pycache__/functional.cpython-310.pyc,, +bitsandbytes/__pycache__/utils.cpython-310.pyc,, +bitsandbytes/autograd/__init__.py,sha256=Ltb59FJrcWYVsTfGW6SscEZtiDhHZe7EFrYnIhnASug,67 +bitsandbytes/autograd/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/autograd/__pycache__/_functions.cpython-310.pyc,, +bitsandbytes/autograd/_functions.py,sha256=ER9xwzolX9T32Xu0VFbvpoRdDiCas1neEaKOZARI2Kw,22361 +bitsandbytes/cextension.py,sha256=klJwL-8ZPylUOETDTW-fvUbZ_Bt_rdB6wRDND1fB_wk,1635 
+bitsandbytes/cuda_setup/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bitsandbytes/cuda_setup/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/cuda_setup/__pycache__/env_vars.cpython-310.pyc,, +bitsandbytes/cuda_setup/__pycache__/main.cpython-310.pyc,, +bitsandbytes/cuda_setup/env_vars.py,sha256=4T8i0LKAbE6tyDceGbJxdW1o4Nm4_vDLY6br39VwCxc,1614 +bitsandbytes/cuda_setup/main.py,sha256=o9YcJj87_t1yADdrMWY0c_XQRyX_8t3XGjwiERKtaVk,17946 +bitsandbytes/functional.py,sha256=vw-RE4CfEirCvM-O8rsiGKvAGIM5cKWNM0Ekbr8-xXc,79598 +bitsandbytes/libbitsandbytes_cpu.so,sha256=nejNfivapxN6MN_bJxFfR423YImIeqNVhXdts2BcDR8,41608 +bitsandbytes/libbitsandbytes_cuda110.so,sha256=1NM_-9xHfCz2djWods0YXQcDKITkX3KSJfklrUESkKw,5938904 +bitsandbytes/libbitsandbytes_cuda110_nocublaslt.so,sha256=q_1Zn2FlCd6LaXYwjkDrE_rq0lFuNwDjGBJlWM_Nufg,11110784 +bitsandbytes/libbitsandbytes_cuda111.so,sha256=JBLZ6wBWB5x1DasFqxcog59xxks5XHzLAdQFGZjCiDY,8974040 +bitsandbytes/libbitsandbytes_cuda111_nocublaslt.so,sha256=1qsndcAVNcCz-LcXytWYx81hPJgifIgNDw1MSx81ays,20244864 +bitsandbytes/libbitsandbytes_cuda114.so,sha256=kh0dVhz5EoSIcpFoRt9vB9rtMSYayFrT1uQmDAP_nCI,9313912 +bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so,sha256=7BfmpKsEYpxamIB7a9WhjhXN7FC1o0FpyqO8IXu1Ep4,20973856 +bitsandbytes/libbitsandbytes_cuda115.so,sha256=ncH3CjlEB0fyXvvj9my_SkUyfGwj_FVo4D-adRX63Gs,9310152 +bitsandbytes/libbitsandbytes_cuda115_nocublaslt.so,sha256=1vB8bV-E6pXTKZzOmfxFWiz3l7LrtQuSAh9n33oY1hM,20925040 +bitsandbytes/libbitsandbytes_cuda117.so,sha256=bEkYZLxEKQZvsu3Agy-aDcIC2ZqQ8B6JDBHL2n1Osq0,9117944 +bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so,sha256=jqc_QsosEBzjd7cNFNA-6QG5e1GGG1cLfEoh7d23zxA,20741032 +bitsandbytes/libbitsandbytes_cuda118.so,sha256=B2MQaG_5NLc8iVHawOSu3V-ABcpbos6QdpSLTQ0IDXY,14918184 +bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so,sha256=GaYqo8N7cNkxbAhI-dizyyBbuOqbEbNRR0nyh8LIWW4,26516696 
+bitsandbytes/libbitsandbytes_cuda120.so,sha256=1olVGrA_Frm3ZzYaUxDKRyeWXbJlTTWhlPjO1a0il_o,14504296 +bitsandbytes/libbitsandbytes_cuda120_nocublaslt.so,sha256=VUXyIHZb4V6-SOGPVPWVHyeKafG9xQPLEQIelTh69Oo,25709592 +bitsandbytes/libbitsandbytes_cuda121.so,sha256=XRKDct-9s0poQp0sNFSgdvrGUMed2lRror6aVBU3hGM,14512488 +bitsandbytes/libbitsandbytes_cuda121_nocublaslt.so,sha256=YeYH36m5h2N7tULUoZ8Gt-CAfb8szLDPW5m9OLAQFAE,25721880 +bitsandbytes/libbitsandbytes_cuda122.so,sha256=FrhXhmfraDbGt5I6OzUI1igJ5OkUKWdKDDq5fPYMU0k,14561032 +bitsandbytes/libbitsandbytes_cuda122_nocublaslt.so,sha256=WPSiBD_ozuUsk_aRdoJd5XVTcnpannmEmR6yok2mZTA,25803272 +bitsandbytes/nn/__init__.py,sha256=i-gJR2uQrRvn8zZCZcS1KC0SbsUqCKTta4aV7HXZTT4,446 +bitsandbytes/nn/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/nn/__pycache__/modules.cpython-310.pyc,, +bitsandbytes/nn/__pycache__/triton_based_modules.cpython-310.pyc,, +bitsandbytes/nn/modules.py,sha256=sIwAAAtMnk9s95HHTOC10rKERMvAl5gw03dCPL12oBY,20528 +bitsandbytes/nn/triton_based_modules.py,sha256=eMEldLd7GX0Dc3dzX0XZpfgzofBPRAi-z1NXf84wCPs,9843 +bitsandbytes/optim/__init__.py,sha256=TSl80yMFkwGBl8N0FBFcfBLt2vt4cZn-hbkuwHGuCUE,794 +bitsandbytes/optim/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/adagrad.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/adam.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/adamw.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/lamb.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/lars.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/lion.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/optimizer.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/rmsprop.cpython-310.pyc,, +bitsandbytes/optim/__pycache__/sgd.cpython-310.pyc,, +bitsandbytes/optim/adagrad.py,sha256=E4KsNJKOB2VfgkyKEoeYwFFXnedsxHZItdfzwc5_cdE,3719 +bitsandbytes/optim/adam.py,sha256=nHHvXoeiAuosn4a9VWI3Z7_XmvYC6bOHb8en6mxiwkA,12776 +bitsandbytes/optim/adamw.py,sha256=byibv4xoBM7FUK8FScRTx2KbI4-2Mi0yB8WJCb2x3wE,2699 
+bitsandbytes/optim/lamb.py,sha256=hfH4H9eVAHcbjL04DAI_lcPD1OPAmcY4_myow-o21aw,2313 +bitsandbytes/optim/lars.py,sha256=PeUB8RlfaRtHEa-ZZZkrKDdmkHa7XEEfU81irU-mKsY,5653 +bitsandbytes/optim/lion.py,sha256=jANwqVZSAxNZnoqi_OQ9XG8hKa6e84mkwJ9CchtpLHs,2304 +bitsandbytes/optim/optimizer.py,sha256=219zPzx9dpeY0VndzlXt6jn2yV9sEiSXkrxe26wXjIo,25167 +bitsandbytes/optim/rmsprop.py,sha256=1zGT9JIZh214fbBZ-CTirVKk1rQxSZe-BRJzhRtYL2U,2785 +bitsandbytes/optim/sgd.py,sha256=YHVUeEkwxgYx_0GhH0Et6fCpk7rfhboDR2F06jRWz4E,2340 +bitsandbytes/research/__init__.py,sha256=_MilJdwSRWObRfzzy14WD6HsJa6okT4d5YxH4aB9zg4,119 +bitsandbytes/research/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/research/autograd/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bitsandbytes/research/autograd/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/research/autograd/__pycache__/_functions.cpython-310.pyc,, +bitsandbytes/research/autograd/_functions.py,sha256=k72rcf4hT3M5GOpGoijWkpTAqjRNoecGlOHmTTn3n80,15874 +bitsandbytes/research/nn/__init__.py,sha256=j5XA_2ZA6efMtcbuUCyegfCLkDDQuL3ix5xS4yKZayY,53 +bitsandbytes/research/nn/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/research/nn/__pycache__/modules.cpython-310.pyc,, +bitsandbytes/research/nn/modules.py,sha256=EnI2qVTosAMkH4G1fQleA0zvm8dZR9G-GJ4pFDo8V9M,2357 +bitsandbytes/triton/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bitsandbytes/triton/__pycache__/__init__.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/dequantize_rowwise.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/int8_matmul_mixed_dequanitze.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/int8_matmul_rowwise_dequantize.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/quantize_columnwise_and_transpose.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/quantize_global.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/quantize_rowwise.cpython-310.pyc,, +bitsandbytes/triton/__pycache__/triton_utils.cpython-310.pyc,, 
+bitsandbytes/triton/dequantize_rowwise.py,sha256=qdh3f4O53faM6SFT_aYvrytWF_FQW3q2bhBll6Uwfc4,2193 +bitsandbytes/triton/int8_matmul_mixed_dequanitze.py,sha256=QJ_hrZ94ZthnoPD0TCp5ZCPAMkxNNQQY-UNg50TWwHo,8256 +bitsandbytes/triton/int8_matmul_rowwise_dequantize.py,sha256=EMiY3nfx0LIvYEGUqtzcfUonQxwoDcppYli9Qd6kViw,8240 +bitsandbytes/triton/quantize_columnwise_and_transpose.py,sha256=K2fFegPtSsi2tgKxb5goO8YpUmQ6wgTvsXabgTRAFNI,2749 +bitsandbytes/triton/quantize_global.py,sha256=5in9Plx1Kgf6Nx5B1RBXCiJnb0G4qwraGADNiq1LtVc,3957 +bitsandbytes/triton/quantize_rowwise.py,sha256=sraX6TMubZQGiG9Gyh0UFzK823e_TkXZk9R1BILJdPU,2331 +bitsandbytes/triton/triton_utils.py,sha256=f7CP_3lvUoTQJ-xSp4wAfiU8uX_trtGdUsoLzlcsHQY,103 +bitsandbytes/utils.py,sha256=XASxdyR11sKKtY9DIwthe-zLU6v0vXwZzQvIVasjH7o,7499 diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/WHEEL b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/WHEEL new file mode 100644 index 0000000000000000000000000000000000000000..d272f6ed555bf206d2a9572524bfa3c0b500fe8d --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.41.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/top_level.txt b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/top_level.txt new file mode 100644 index 0000000000000000000000000000000000000000..38cb1102777e6c9f61ee8e9ef4bfd8e37b59a368 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/bitsandbytes-0.41.0.dist-info/top_level.txt @@ -0,0 +1 @@ +bitsandbytes diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/__init__.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/__init__.py new file mode 100644 index 
0000000000000000000000000000000000000000..da95f8d0bb6bf7c91713dddc9615873d5bf268bc --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/__init__.py @@ -0,0 +1,139 @@ +from ._api import request, stream +from ._async import ( + AsyncConnectionInterface, + AsyncConnectionPool, + AsyncHTTP2Connection, + AsyncHTTP11Connection, + AsyncHTTPConnection, + AsyncHTTPProxy, + AsyncSOCKSProxy, +) +from ._backends.base import ( + SOCKET_OPTION, + AsyncNetworkBackend, + AsyncNetworkStream, + NetworkBackend, + NetworkStream, +) +from ._backends.mock import AsyncMockBackend, AsyncMockStream, MockBackend, MockStream +from ._backends.sync import SyncBackend +from ._exceptions import ( + ConnectError, + ConnectionNotAvailable, + ConnectTimeout, + LocalProtocolError, + NetworkError, + PoolTimeout, + ProtocolError, + ProxyError, + ReadError, + ReadTimeout, + RemoteProtocolError, + TimeoutException, + UnsupportedProtocol, + WriteError, + WriteTimeout, +) +from ._models import URL, Origin, Request, Response +from ._ssl import default_ssl_context +from ._sync import ( + ConnectionInterface, + ConnectionPool, + HTTP2Connection, + HTTP11Connection, + HTTPConnection, + HTTPProxy, + SOCKSProxy, +) + +# The 'httpcore.AnyIOBackend' class is conditional on 'anyio' being installed. +try: + from ._backends.anyio import AnyIOBackend +except ImportError: # pragma: nocover + + class AnyIOBackend: # type: ignore + def __init__(self, *args, **kwargs): # type: ignore + msg = ( + "Attempted to use 'httpcore.AnyIOBackend' but 'anyio' is not installed." + ) + raise RuntimeError(msg) + + +# The 'httpcore.TrioBackend' class is conditional on 'trio' being installed. +try: + from ._backends.trio import TrioBackend +except ImportError: # pragma: nocover + + class TrioBackend: # type: ignore + def __init__(self, *args, **kwargs): # type: ignore + msg = "Attempted to use 'httpcore.TrioBackend' but 'trio' is not installed." 
+ raise RuntimeError(msg) + + +__all__ = [ + # top-level requests + "request", + "stream", + # models + "Origin", + "URL", + "Request", + "Response", + # async + "AsyncHTTPConnection", + "AsyncConnectionPool", + "AsyncHTTPProxy", + "AsyncHTTP11Connection", + "AsyncHTTP2Connection", + "AsyncConnectionInterface", + "AsyncSOCKSProxy", + # sync + "HTTPConnection", + "ConnectionPool", + "HTTPProxy", + "HTTP11Connection", + "HTTP2Connection", + "ConnectionInterface", + "SOCKSProxy", + # network backends, implementations + "SyncBackend", + "AnyIOBackend", + "TrioBackend", + # network backends, mock implementations + "AsyncMockBackend", + "AsyncMockStream", + "MockBackend", + "MockStream", + # network backends, interface + "AsyncNetworkStream", + "AsyncNetworkBackend", + "NetworkStream", + "NetworkBackend", + # util + "default_ssl_context", + "SOCKET_OPTION", + # exceptions + "ConnectionNotAvailable", + "ProxyError", + "ProtocolError", + "LocalProtocolError", + "RemoteProtocolError", + "UnsupportedProtocol", + "TimeoutException", + "PoolTimeout", + "ConnectTimeout", + "ReadTimeout", + "WriteTimeout", + "NetworkError", + "ConnectError", + "ReadError", + "WriteError", +] + +__version__ = "0.17.3" + + +__locals = locals() +for __name in __all__: + if not __name.startswith("__"): + setattr(__locals[__name], "__module__", "httpcore") # noqa diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_api.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_api.py new file mode 100644 index 0000000000000000000000000000000000000000..854235f5f6035031f0960d4a4b8834081d5df389 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_api.py @@ -0,0 +1,92 @@ +from contextlib import contextmanager +from typing import Iterator, Optional, Union + +from ._models import URL, Extensions, HeaderTypes, Response +from ._sync.connection_pool import ConnectionPool + + +def request( + method: Union[bytes, str], + url: Union[URL, bytes, str], + *, + headers: 
HeaderTypes = None, + content: Union[bytes, Iterator[bytes], None] = None, + extensions: Optional[Extensions] = None, +) -> Response: + """ + Sends an HTTP request, returning the response. + + ``` + response = httpcore.request("GET", "https://www.example.com/") + ``` + + Arguments: + method: The HTTP method for the request. Typically one of `"GET"`, + `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`. + url: The URL of the HTTP request. Either as an instance of `httpcore.URL`, + or as str/bytes. + headers: The HTTP request headers. Either as a dictionary of str/bytes, + or as a list of two-tuples of str/bytes. + content: The content of the request body. Either as bytes, + or as a bytes iterator. + extensions: A dictionary of optional extra information included on the request. + Possible keys include `"timeout"`. + + Returns: + An instance of `httpcore.Response`. + """ + with ConnectionPool() as pool: + return pool.request( + method=method, + url=url, + headers=headers, + content=content, + extensions=extensions, + ) + + +@contextmanager +def stream( + method: Union[bytes, str], + url: Union[URL, bytes, str], + *, + headers: HeaderTypes = None, + content: Union[bytes, Iterator[bytes], None] = None, + extensions: Optional[Extensions] = None, +) -> Iterator[Response]: + """ + Sends an HTTP request, returning the response within a context manager. + + ``` + with httpcore.stream("GET", "https://www.example.com/") as response: + ... + ``` + + When using the `stream()` function, the body of the response will not be + automatically read. If you want to access the response body you should + either use `content = response.read()`, or `for chunk in response.iter_stream()`. + + Arguments: + method: The HTTP method for the request. Typically one of `"GET"`, + `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`. + url: The URL of the HTTP request. Either as an instance of `httpcore.URL`, + or as str/bytes. + headers: The HTTP request headers. 
Either as a dictionary of str/bytes, + or as a list of two-tuples of str/bytes. + content: The content of the request body. Either as bytes, + or as a bytes iterator. + extensions: A dictionary of optional extra information included on the request. + Possible keys include `"timeout"`. + + Returns: + An instance of `httpcore.Response`. + """ + with ConnectionPool() as pool: + with pool.stream( + method=method, + url=url, + headers=headers, + content=content, + extensions=extensions, + ) as response: + yield response diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_exceptions.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_exceptions.py new file mode 100644 index 0000000000000000000000000000000000000000..81e7fc61ddfe258296d4d08b436fa8627f335dc9 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_exceptions.py @@ -0,0 +1,81 @@ +import contextlib +from typing import Iterator, Mapping, Type + +ExceptionMapping = Mapping[Type[Exception], Type[Exception]] + + +@contextlib.contextmanager +def map_exceptions(map: ExceptionMapping) -> Iterator[None]: + try: + yield + except Exception as exc: # noqa: PIE786 + for from_exc, to_exc in map.items(): + if isinstance(exc, from_exc): + raise to_exc(exc) from exc + raise # pragma: nocover + + +class ConnectionNotAvailable(Exception): + pass + + +class ProxyError(Exception): + pass + + +class UnsupportedProtocol(Exception): + pass + + +class ProtocolError(Exception): + pass + + +class RemoteProtocolError(ProtocolError): + pass + + +class LocalProtocolError(ProtocolError): + pass + + +# Timeout errors + + +class TimeoutException(Exception): + pass + + +class PoolTimeout(TimeoutException): + pass + + +class ConnectTimeout(TimeoutException): + pass + + +class ReadTimeout(TimeoutException): + pass + + +class WriteTimeout(TimeoutException): + pass + + +# Network errors + + +class NetworkError(Exception): + pass + + +class ConnectError(NetworkError): + pass + + +class 
ReadError(NetworkError): + pass + + +class WriteError(NetworkError): + pass diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_models.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_models.py new file mode 100644 index 0000000000000000000000000000000000000000..e15305eec4ee7037f3eaaa1b2c8aca21fcfdc0d6 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_models.py @@ -0,0 +1,483 @@ +from typing import ( + Any, + AsyncIterable, + AsyncIterator, + Iterable, + Iterator, + List, + Mapping, + Optional, + Sequence, + Tuple, + Union, +) +from urllib.parse import urlparse + +# Functions for typechecking... + + +HeadersAsSequence = Sequence[Tuple[Union[bytes, str], Union[bytes, str]]] +HeadersAsMapping = Mapping[Union[bytes, str], Union[bytes, str]] +HeaderTypes = Union[HeadersAsSequence, HeadersAsMapping, None] + +Extensions = Mapping[str, Any] + + +def enforce_bytes(value: Union[bytes, str], *, name: str) -> bytes: + """ + Any arguments that are ultimately represented as bytes can be specified + either as bytes or as strings. + + However we enforce that any string arguments must only contain characters in + the plain ASCII range. chr(0)...chr(127). If you need to use characters + outside that range then be precise, and use a byte-wise argument. + """ + if isinstance(value, str): + try: + return value.encode("ascii") + except UnicodeEncodeError: + raise TypeError(f"{name} strings may not include unicode characters.") + elif isinstance(value, bytes): + return value + + seen_type = type(value).__name__ + raise TypeError(f"{name} must be bytes or str, but got {seen_type}.") + + +def enforce_url(value: Union["URL", bytes, str], *, name: str) -> "URL": + """ + Type check for URL parameters. 
+ """ + if isinstance(value, (bytes, str)): + return URL(value) + elif isinstance(value, URL): + return value + + seen_type = type(value).__name__ + raise TypeError(f"{name} must be a URL, bytes, or str, but got {seen_type}.") + + +def enforce_headers( + value: Union[HeadersAsMapping, HeadersAsSequence, None] = None, *, name: str +) -> List[Tuple[bytes, bytes]]: + """ + Convienence function that ensure all items in request or response headers + are either bytes or strings in the plain ASCII range. + """ + if value is None: + return [] + elif isinstance(value, Mapping): + return [ + ( + enforce_bytes(k, name="header name"), + enforce_bytes(v, name="header value"), + ) + for k, v in value.items() + ] + elif isinstance(value, Sequence): + return [ + ( + enforce_bytes(k, name="header name"), + enforce_bytes(v, name="header value"), + ) + for k, v in value + ] + + seen_type = type(value).__name__ + raise TypeError( + f"{name} must be a mapping or sequence of two-tuples, but got {seen_type}." + ) + + +def enforce_stream( + value: Union[bytes, Iterable[bytes], AsyncIterable[bytes], None], *, name: str +) -> Union[Iterable[bytes], AsyncIterable[bytes]]: + if value is None: + return ByteStream(b"") + elif isinstance(value, bytes): + return ByteStream(value) + return value + + +# * https://tools.ietf.org/html/rfc3986#section-3.2.3 +# * https://url.spec.whatwg.org/#url-miscellaneous +# * https://url.spec.whatwg.org/#scheme-state +DEFAULT_PORTS = { + b"ftp": 21, + b"http": 80, + b"https": 443, + b"ws": 80, + b"wss": 443, +} + + +def include_request_headers( + headers: List[Tuple[bytes, bytes]], + *, + url: "URL", + content: Union[None, bytes, Iterable[bytes], AsyncIterable[bytes]], +) -> List[Tuple[bytes, bytes]]: + headers_set = set(k.lower() for k, v in headers) + + if b"host" not in headers_set: + default_port = DEFAULT_PORTS.get(url.scheme) + if url.port is None or url.port == default_port: + header_value = url.host + else: + header_value = b"%b:%d" % (url.host, url.port) 
+ headers = [(b"Host", header_value)] + headers + + if ( + content is not None + and b"content-length" not in headers_set + and b"transfer-encoding" not in headers_set + ): + if isinstance(content, bytes): + content_length = str(len(content)).encode("ascii") + headers += [(b"Content-Length", content_length)] + else: + headers += [(b"Transfer-Encoding", b"chunked")] # pragma: nocover + + return headers + + +# Interfaces for byte streams... + + +class ByteStream: + """ + A container for non-streaming content, and that supports both sync and async + stream iteration. + """ + + def __init__(self, content: bytes) -> None: + self._content = content + + def __iter__(self) -> Iterator[bytes]: + yield self._content + + async def __aiter__(self) -> AsyncIterator[bytes]: + yield self._content + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} [{len(self._content)} bytes]>" + + +class Origin: + def __init__(self, scheme: bytes, host: bytes, port: int) -> None: + self.scheme = scheme + self.host = host + self.port = port + + def __eq__(self, other: Any) -> bool: + return ( + isinstance(other, Origin) + and self.scheme == other.scheme + and self.host == other.host + and self.port == other.port + ) + + def __str__(self) -> str: + scheme = self.scheme.decode("ascii") + host = self.host.decode("ascii") + port = str(self.port) + return f"{scheme}://{host}:{port}" + + +class URL: + """ + Represents the URL against which an HTTP request may be made. 
+ + The URL may either be specified as a plain string, for convenience: + + ```python + url = httpcore.URL("https://www.example.com/") + ``` + + Or be constructed with explicitly pre-parsed components: + + ```python + url = httpcore.URL(scheme=b'https', host=b'www.example.com', port=None, target=b'/') + ``` + + Using this second more explicit style allows integrations that are using + `httpcore` to pass through URLs that have already been parsed in order to use + libraries such as `rfc-3986` rather than relying on the stdlib. It also ensures + that URL parsing is treated identically at both the networking level and at any + higher layers of abstraction. + + The four components are important here, as they allow the URL to be precisely + specified in a pre-parsed format. They also allow certain types of request to + be created that could not otherwise be expressed. + + For example, an HTTP request to `http://www.example.com/` forwarded via a proxy + at `http://localhost:8080`... + + ```python + # Constructs an HTTP request with a complete URL as the target: + # GET http://www.example.com/ HTTP/1.1 + url = httpcore.URL( + scheme=b'http', + host=b'localhost', + port=8080, + target=b'http://www.example.com/' + ) + request = httpcore.Request( + method="GET", + url=url + ) + ``` + + Another example is constructing an `OPTIONS *` request... + + ```python + # Constructs an 'OPTIONS *' HTTP request: + # OPTIONS * HTTP/1.1 + url = httpcore.URL(scheme=b'https', host=b'www.example.com', target=b'*') + request = httpcore.Request(method="OPTIONS", url=url) + ``` + + This kind of request is not possible to formulate with a URL string, + because the `/` delimiter is always used to demarcate the target from the + host/port portion of the URL. + + For convenience, string-like arguments may be specified either as strings or + as bytes. However, once a request is issued over the wire, the URL + components are always ultimately required to be a bytewise representation.
+ + In order to avoid any ambiguity over character encodings, when strings are used + as arguments, they must be strictly limited to the ASCII range `chr(0)`-`chr(127)`. + If you require a bytewise representation that is outside this range you must + handle the character encoding directly, and pass a bytes instance. + """ + + def __init__( + self, + url: Union[bytes, str] = "", + *, + scheme: Union[bytes, str] = b"", + host: Union[bytes, str] = b"", + port: Optional[int] = None, + target: Union[bytes, str] = b"", + ) -> None: + """ + Parameters: + url: The complete URL as a string or bytes. + scheme: The URL scheme as a string or bytes. + Typically either `"http"` or `"https"`. + host: The URL host as a string or bytes. Such as `"www.example.com"`. + port: The port to connect to. Either an integer or `None`. + target: The target of the HTTP request. Such as `"/items?search=red"`. + """ + if url: + parsed = urlparse(enforce_bytes(url, name="url")) + self.scheme = parsed.scheme + self.host = parsed.hostname or b"" + self.port = parsed.port + self.target = (parsed.path or b"/") + ( + b"?" 
+ parsed.query if parsed.query else b"" + ) + else: + self.scheme = enforce_bytes(scheme, name="scheme") + self.host = enforce_bytes(host, name="host") + self.port = port + self.target = enforce_bytes(target, name="target") + + @property + def origin(self) -> Origin: + default_port = { + b"http": 80, + b"https": 443, + b"ws": 80, + b"wss": 443, + b"socks5": 1080, + }[self.scheme] + return Origin( + scheme=self.scheme, host=self.host, port=self.port or default_port + ) + + def __eq__(self, other: Any) -> bool: + return ( + isinstance(other, URL) + and other.scheme == self.scheme + and other.host == self.host + and other.port == self.port + and other.target == self.target + ) + + def __bytes__(self) -> bytes: + if self.port is None: + return b"%b://%b%b" % (self.scheme, self.host, self.target) + return b"%b://%b:%d%b" % (self.scheme, self.host, self.port, self.target) + + def __repr__(self) -> str: + return ( + f"{self.__class__.__name__}(scheme={self.scheme!r}, " + f"host={self.host!r}, port={self.port!r}, target={self.target!r})" + ) + + +class Request: + """ + An HTTP request. + """ + + def __init__( + self, + method: Union[bytes, str], + url: Union[URL, bytes, str], + *, + headers: HeaderTypes = None, + content: Union[bytes, Iterable[bytes], AsyncIterable[bytes], None] = None, + extensions: Optional[Extensions] = None, + ) -> None: + """ + Parameters: + method: The HTTP request method, either as a string or bytes. + For example: `GET`. + url: The request URL, either as a `URL` instance, or as a string or bytes. + For example: `"https://www.example.com"`. + headers: The HTTP request headers. + content: The content of the request body. + extensions: A dictionary of optional extra information included on + the request. Possible keys include `"timeout"`, and `"trace"`.
+ """ + self.method: bytes = enforce_bytes(method, name="method") + self.url: URL = enforce_url(url, name="url") + self.headers: List[Tuple[bytes, bytes]] = enforce_headers( + headers, name="headers" + ) + self.stream: Union[Iterable[bytes], AsyncIterable[bytes]] = enforce_stream( + content, name="content" + ) + self.extensions = {} if extensions is None else extensions + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} [{self.method!r}]>" + + +class Response: + """ + An HTTP response. + """ + + def __init__( + self, + status: int, + *, + headers: HeaderTypes = None, + content: Union[bytes, Iterable[bytes], AsyncIterable[bytes], None] = None, + extensions: Optional[Extensions] = None, + ) -> None: + """ + Parameters: + status: The HTTP status code of the response. For example `200`. + headers: The HTTP response headers. + content: The content of the response body. + extensions: A dictionary of optional extra information included on + the responseself.Possible keys include `"http_version"`, + `"reason_phrase"`, and `"network_stream"`. + """ + self.status: int = status + self.headers: List[Tuple[bytes, bytes]] = enforce_headers( + headers, name="headers" + ) + self.stream: Union[Iterable[bytes], AsyncIterable[bytes]] = enforce_stream( + content, name="content" + ) + self.extensions = {} if extensions is None else extensions + + self._stream_consumed = False + + @property + def content(self) -> bytes: + if not hasattr(self, "_content"): + if isinstance(self.stream, Iterable): + raise RuntimeError( + "Attempted to access 'response.content' on a streaming response. " + "Call 'response.read()' first." + ) + else: + raise RuntimeError( + "Attempted to access 'response.content' on a streaming response. " + "Call 'await response.aread()' first." + ) + return self._content + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} [{self.status}]>" + + # Sync interface... 
+ + def read(self) -> bytes: + if not isinstance(self.stream, Iterable): # pragma: nocover + raise RuntimeError( + "Attempted to read an asynchronous response using 'response.read()'. " + "You should use 'await response.aread()' instead." + ) + if not hasattr(self, "_content"): + self._content = b"".join([part for part in self.iter_stream()]) + return self._content + + def iter_stream(self) -> Iterator[bytes]: + if not isinstance(self.stream, Iterable): # pragma: nocover + raise RuntimeError( + "Attempted to stream an asynchronous response using 'for ... in " + "response.iter_stream()'. " + "You should use 'async for ... in response.aiter_stream()' instead." + ) + if self._stream_consumed: + raise RuntimeError( + "Attempted to call 'for ... in response.iter_stream()' more than once." + ) + self._stream_consumed = True + for chunk in self.stream: + yield chunk + + def close(self) -> None: + if not isinstance(self.stream, Iterable): # pragma: nocover + raise RuntimeError( + "Attempted to close an asynchronous response using 'response.close()'. " + "You should use 'await response.aclose()' instead." + ) + if hasattr(self.stream, "close"): + self.stream.close() + + # Async interface... + + async def aread(self) -> bytes: + if not isinstance(self.stream, AsyncIterable): # pragma: nocover + raise RuntimeError( + "Attempted to read a synchronous response using " + "'await response.aread()'. " + "You should use 'response.read()' instead." + ) + if not hasattr(self, "_content"): + self._content = b"".join([part async for part in self.aiter_stream()]) + return self._content + + async def aiter_stream(self) -> AsyncIterator[bytes]: + if not isinstance(self.stream, AsyncIterable): # pragma: nocover + raise RuntimeError( + "Attempted to stream a synchronous response using 'async for ... in " + "response.aiter_stream()'. " + "You should use 'for ... in response.iter_stream()' instead." + ) + if self._stream_consumed: + raise RuntimeError( + "Attempted to call 'async for ...
in response.aiter_stream()' " + "more than once." + ) + self._stream_consumed = True + async for chunk in self.stream: + yield chunk + + async def aclose(self) -> None: + if not isinstance(self.stream, AsyncIterable): # pragma: nocover + raise RuntimeError( + "Attempted to close a synchronous response using " + "'await response.aclose()'. " + "You should use 'response.close()' instead." + ) + if hasattr(self.stream, "aclose"): + await self.stream.aclose() diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_ssl.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_ssl.py new file mode 100644 index 0000000000000000000000000000000000000000..c99c5a67945b8a3a3544d481e979c791ab45fe23 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_ssl.py @@ -0,0 +1,9 @@ +import ssl + +import certifi + + +def default_ssl_context() -> ssl.SSLContext: + context = ssl.create_default_context() + context.load_verify_locations(certifi.where()) + return context diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/__init__.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b476d76d9a7ff45de8d18ec22d33d6af2982f92e --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/__init__.py @@ -0,0 +1,39 @@ +from .connection import HTTPConnection +from .connection_pool import ConnectionPool +from .http11 import HTTP11Connection +from .http_proxy import HTTPProxy +from .interfaces import ConnectionInterface + +try: + from .http2 import HTTP2Connection +except ImportError: # pragma: nocover + + class HTTP2Connection: # type: ignore + def __init__(self, *args, **kwargs) -> None: # type: ignore + raise RuntimeError( + "Attempted to use http2 support, but the `h2` package is not " + "installed. Use 'pip install httpcore[http2]'." 
+ ) + + +try: + from .socks_proxy import SOCKSProxy +except ImportError: # pragma: nocover + + class SOCKSProxy: # type: ignore + def __init__(self, *args, **kwargs) -> None: # type: ignore + raise RuntimeError( + "Attempted to use SOCKS support, but the `socksio` package is not " + "installed. Use 'pip install httpcore[socks]'." + ) + + +__all__ = [ + "HTTPConnection", + "ConnectionPool", + "HTTPProxy", + "HTTP11Connection", + "HTTP2Connection", + "ConnectionInterface", + "SOCKSProxy", +] diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py new file mode 100644 index 0000000000000000000000000000000000000000..dbcaff1fcf1b1cbb404b3e7367b037942f4e9d03 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py @@ -0,0 +1,356 @@ +import ssl +import sys +from types import TracebackType +from typing import Iterable, Iterator, List, Optional, Type + +from .._backends.sync import SyncBackend +from .._backends.base import SOCKET_OPTION, NetworkBackend +from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol +from .._models import Origin, Request, Response +from .._synchronization import Event, Lock, ShieldCancellation +from .connection import HTTPConnection +from .interfaces import ConnectionInterface, RequestInterface + + +class RequestStatus: + def __init__(self, request: Request): + self.request = request + self.connection: Optional[ConnectionInterface] = None + self._connection_acquired = Event() + + def set_connection(self, connection: ConnectionInterface) -> None: + assert self.connection is None + self.connection = connection + self._connection_acquired.set() + + def unset_connection(self) -> None: + assert self.connection is not None + self.connection = None + self._connection_acquired = Event() + + def wait_for_connection( + self, timeout: Optional[float] = None + ) ->
ConnectionInterface: + if self.connection is None: + self._connection_acquired.wait(timeout=timeout) + assert self.connection is not None + return self.connection + + +class ConnectionPool(RequestInterface): + """ + A connection pool for making HTTP requests. + """ + + def __init__( + self, + ssl_context: Optional[ssl.SSLContext] = None, + max_connections: Optional[int] = 10, + max_keepalive_connections: Optional[int] = None, + keepalive_expiry: Optional[float] = None, + http1: bool = True, + http2: bool = False, + retries: int = 0, + local_address: Optional[str] = None, + uds: Optional[str] = None, + network_backend: Optional[NetworkBackend] = None, + socket_options: Optional[Iterable[SOCKET_OPTION]] = None, + ) -> None: + """ + A connection pool for making HTTP requests. + + Parameters: + ssl_context: An SSL context to use for verifying connections. + If not specified, the default `httpcore.default_ssl_context()` + will be used. + max_connections: The maximum number of concurrent HTTP connections that + the pool should allow. Any attempt to send a request on a pool that + would exceed this amount will block until a connection is available. + max_keepalive_connections: The maximum number of idle HTTP connections + that will be maintained in the pool. + keepalive_expiry: The duration in seconds that an idle HTTP connection + may be maintained for before being expired from the pool. + http1: A boolean indicating if HTTP/1.1 requests should be supported + by the connection pool. Defaults to True. + http2: A boolean indicating if HTTP/2 requests should be supported by + the connection pool. Defaults to False. + retries: The maximum number of retries when trying to establish a + connection. + local_address: Local address to connect from. Can also be used to connect + using a particular address family. Using `local_address="0.0.0.0"` + will connect using an `AF_INET` address (IPv4), while using + `local_address="::"` will connect using an `AF_INET6` address (IPv6). 
+ uds: Path to a Unix Domain Socket to use instead of TCP sockets. + network_backend: A backend instance to use for handling network I/O. + socket_options: Socket options to apply to the TCP socket when the + connection is established. + """ + self._ssl_context = ssl_context + + self._max_connections = ( + sys.maxsize if max_connections is None else max_connections + ) + self._max_keepalive_connections = ( + sys.maxsize + if max_keepalive_connections is None + else max_keepalive_connections + ) + self._max_keepalive_connections = min( + self._max_connections, self._max_keepalive_connections + ) + + self._keepalive_expiry = keepalive_expiry + self._http1 = http1 + self._http2 = http2 + self._retries = retries + self._local_address = local_address + self._uds = uds + + self._pool: List[ConnectionInterface] = [] + self._requests: List[RequestStatus] = [] + self._pool_lock = Lock() + self._network_backend = ( + SyncBackend() if network_backend is None else network_backend + ) + self._socket_options = socket_options + + def create_connection(self, origin: Origin) -> ConnectionInterface: + return HTTPConnection( + origin=origin, + ssl_context=self._ssl_context, + keepalive_expiry=self._keepalive_expiry, + http1=self._http1, + http2=self._http2, + retries=self._retries, + local_address=self._local_address, + uds=self._uds, + network_backend=self._network_backend, + socket_options=self._socket_options, + ) + + @property + def connections(self) -> List[ConnectionInterface]: + """ + Return a list of the connections currently in the pool. + + For example: + + ```python + >>> pool.connections + [ + <HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 6]>, + <HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 9]>, + <HTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>, + ] + ``` + """ + return list(self._pool) + + def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool: + """ + Attempt to provide a connection that can handle the given origin. + """ + origin = status.request.url.origin + + # If there are queued requests in front of us, then don't acquire a + # connection.
We handle requests strictly in order. + waiting = [s for s in self._requests if s.connection is None] + if waiting and waiting[0] is not status: + return False + + # Reuse an existing connection if one is currently available. + for idx, connection in enumerate(self._pool): + if connection.can_handle_request(origin) and connection.is_available(): + self._pool.pop(idx) + self._pool.insert(0, connection) + status.set_connection(connection) + return True + + # If the pool is currently full, attempt to close one idle connection. + if len(self._pool) >= self._max_connections: + for idx, connection in reversed(list(enumerate(self._pool))): + if connection.is_idle(): + connection.close() + self._pool.pop(idx) + break + + # If the pool is still full, then we cannot acquire a connection. + if len(self._pool) >= self._max_connections: + return False + + # Otherwise create a new connection. + connection = self.create_connection(origin) + self._pool.insert(0, connection) + status.set_connection(connection) + return True + + def _close_expired_connections(self) -> None: + """ + Clean up the connection pool by closing off any connections that have expired. + """ + # Close any connections that have expired their keep-alive time. + for idx, connection in reversed(list(enumerate(self._pool))): + if connection.has_expired(): + connection.close() + self._pool.pop(idx) + + # If the pool size exceeds the maximum number of allowed keep-alive connections, + # then close off idle connections as required. + pool_size = len(self._pool) + for idx, connection in reversed(list(enumerate(self._pool))): + if connection.is_idle() and pool_size > self._max_keepalive_connections: + connection.close() + self._pool.pop(idx) + pool_size -= 1 + + def handle_request(self, request: Request) -> Response: + """ + Send an HTTP request, and return an HTTP response. + + This is the core implementation that is called into by `.request()` or `.stream()`. 
+ """ + scheme = request.url.scheme.decode() + if scheme == "": + raise UnsupportedProtocol( + "Request URL is missing an 'http://' or 'https://' protocol." + ) + if scheme not in ("http", "https", "ws", "wss"): + raise UnsupportedProtocol( + f"Request URL has an unsupported protocol '{scheme}://'." + ) + + status = RequestStatus(request) + + with self._pool_lock: + self._requests.append(status) + self._close_expired_connections() + self._attempt_to_acquire_connection(status) + + while True: + timeouts = request.extensions.get("timeout", {}) + timeout = timeouts.get("pool", None) + try: + connection = status.wait_for_connection(timeout=timeout) + except BaseException as exc: + # If we timeout here, or if the task is cancelled, then make + # sure to remove the request from the queue before bubbling + # up the exception. + with self._pool_lock: + # Ensure only remove when task exists. + if status in self._requests: + self._requests.remove(status) + raise exc + + try: + response = connection.handle_request(request) + except ConnectionNotAvailable: + # The ConnectionNotAvailable exception is a special case, that + # indicates we need to retry the request on a new connection. + # + # The most common case where this can occur is when multiple + # requests are queued waiting for a single connection, which + # might end up as an HTTP/2 connection, but which actually ends + # up as HTTP/1.1. + with self._pool_lock: + # Maintain our position in the request queue, but reset the + # status so that the request becomes queued again. + status.unset_connection() + self._attempt_to_acquire_connection(status) + except BaseException as exc: + with ShieldCancellation(): + self.response_closed(status) + raise exc + else: + break + + # When we return the response, we wrap the stream in a special class + # that handles notifying the connection pool once the response + # has been released. 
+ assert isinstance(response.stream, Iterable) + return Response( + status=response.status, + headers=response.headers, + content=ConnectionPoolByteStream(response.stream, self, status), + extensions=response.extensions, + ) + + def response_closed(self, status: RequestStatus) -> None: + """ + This method acts as a callback once the request/response cycle is complete. + + It is called into from the `ConnectionPoolByteStream.close()` method. + """ + assert status.connection is not None + connection = status.connection + + with self._pool_lock: + # Update the state of the connection pool. + if status in self._requests: + self._requests.remove(status) + + if connection.is_closed() and connection in self._pool: + self._pool.remove(connection) + + # Since we've had a response closed, it's possible we'll now be able + # to service one or more requests that are currently pending. + for status in self._requests: + if status.connection is None: + acquired = self._attempt_to_acquire_connection(status) + # If we could not acquire a connection for a queued request + # then we don't need to check anymore requests that are + # queued later behind it. + if not acquired: + break + + # Housekeeping. + self._close_expired_connections() + + def close(self) -> None: + """ + Close any connections in the pool. + """ + with self._pool_lock: + for connection in self._pool: + connection.close() + self._pool = [] + self._requests = [] + + def __enter__(self) -> "ConnectionPool": + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + self.close() + + +class ConnectionPoolByteStream: + """ + A wrapper around the response byte stream, that additionally handles + notifying the connection pool when the response has been closed. 
+ """ + + def __init__( + self, + stream: Iterable[bytes], + pool: ConnectionPool, + status: RequestStatus, + ) -> None: + self._stream = stream + self._pool = pool + self._status = status + + def __iter__(self) -> Iterator[bytes]: + for part in self._stream: + yield part + + def close(self) -> None: + try: + if hasattr(self._stream, "close"): + self._stream.close() + finally: + with ShieldCancellation(): + self._pool.response_closed(self._status) diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/interfaces.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/interfaces.py new file mode 100644 index 0000000000000000000000000000000000000000..5e95be1ec72425178245c32c33874303e0906405 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/interfaces.py @@ -0,0 +1,135 @@ +from contextlib import contextmanager +from typing import Iterator, Optional, Union + +from .._models import ( + URL, + Extensions, + HeaderTypes, + Origin, + Request, + Response, + enforce_bytes, + enforce_headers, + enforce_url, + include_request_headers, +) + + +class RequestInterface: + def request( + self, + method: Union[bytes, str], + url: Union[URL, bytes, str], + *, + headers: HeaderTypes = None, + content: Union[bytes, Iterator[bytes], None] = None, + extensions: Optional[Extensions] = None, + ) -> Response: + # Strict type checking on our parameters. + method = enforce_bytes(method, name="method") + url = enforce_url(url, name="url") + headers = enforce_headers(headers, name="headers") + + # Include Host header, and optionally Content-Length or Transfer-Encoding. 
+ headers = include_request_headers(headers, url=url, content=content) + + request = Request( + method=method, + url=url, + headers=headers, + content=content, + extensions=extensions, + ) + response = self.handle_request(request) + try: + response.read() + finally: + response.close() + return response + + @contextmanager + def stream( + self, + method: Union[bytes, str], + url: Union[URL, bytes, str], + *, + headers: HeaderTypes = None, + content: Union[bytes, Iterator[bytes], None] = None, + extensions: Optional[Extensions] = None, + ) -> Iterator[Response]: + # Strict type checking on our parameters. + method = enforce_bytes(method, name="method") + url = enforce_url(url, name="url") + headers = enforce_headers(headers, name="headers") + + # Include Host header, and optionally Content-Length or Transfer-Encoding. + headers = include_request_headers(headers, url=url, content=content) + + request = Request( + method=method, + url=url, + headers=headers, + content=content, + extensions=extensions, + ) + response = self.handle_request(request) + try: + yield response + finally: + response.close() + + def handle_request(self, request: Request) -> Response: + raise NotImplementedError() # pragma: nocover + + +class ConnectionInterface(RequestInterface): + def close(self) -> None: + raise NotImplementedError() # pragma: nocover + + def info(self) -> str: + raise NotImplementedError() # pragma: nocover + + def can_handle_request(self, origin: Origin) -> bool: + raise NotImplementedError() # pragma: nocover + + def is_available(self) -> bool: + """ + Return `True` if the connection is currently able to accept an + outgoing request. + + An HTTP/1.1 connection will only be available if it is currently idle. + + An HTTP/2 connection will be available so long as the stream ID space is + not yet exhausted, and the connection is not in an error state. 
+ + While the connection is being established we may not yet know if it is going + to result in an HTTP/1.1 or HTTP/2 connection. The connection should be + treated as being available, but might ultimately raise `NewConnectionRequired` + exceptions if multiple requests are attempted over a connection + that ends up being established as HTTP/1.1. + """ + raise NotImplementedError() # pragma: nocover + + def has_expired(self) -> bool: + """ + Return `True` if the connection is in a state where it should be closed. + + This either means that the connection is idle and it has passed the + expiry time on its keep-alive, or that the server has sent an EOF. + """ + raise NotImplementedError() # pragma: nocover + + def is_idle(self) -> bool: + """ + Return `True` if the connection is currently idle. + """ + raise NotImplementedError() # pragma: nocover + + def is_closed(self) -> bool: + """ + Return `True` if the connection has been closed. + + Used when a response is closed to determine if the connection may be + returned to the connection pool or not.
+ """ + raise NotImplementedError() # pragma: nocover diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/socks_proxy.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/socks_proxy.py new file mode 100644 index 0000000000000000000000000000000000000000..407351d06b21954cad45dca7d2065bf1d24d88fd --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_sync/socks_proxy.py @@ -0,0 +1,340 @@ +import logging +import ssl +import typing + +from socksio import socks5 + +from .._backends.sync import SyncBackend +from .._backends.base import NetworkBackend, NetworkStream +from .._exceptions import ConnectionNotAvailable, ProxyError +from .._models import URL, Origin, Request, Response, enforce_bytes, enforce_url +from .._ssl import default_ssl_context +from .._synchronization import Lock +from .._trace import Trace +from .connection_pool import ConnectionPool +from .http11 import HTTP11Connection +from .interfaces import ConnectionInterface + +logger = logging.getLogger("httpcore.socks") + + +AUTH_METHODS = { + b"\x00": "NO AUTHENTICATION REQUIRED", + b"\x01": "GSSAPI", + b"\x02": "USERNAME/PASSWORD", + b"\xff": "NO ACCEPTABLE METHODS", +} + +REPLY_CODES = { + b"\x00": "Succeeded", + b"\x01": "General SOCKS server failure", + b"\x02": "Connection not allowed by ruleset", + b"\x03": "Network unreachable", + b"\x04": "Host unreachable", + b"\x05": "Connection refused", + b"\x06": "TTL expired", + b"\x07": "Command not supported", + b"\x08": "Address type not supported", +} + + +def _init_socks5_connection( + stream: NetworkStream, + *, + host: bytes, + port: int, + auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, +) -> None: + conn = socks5.SOCKS5Connection() + + # Auth method request + auth_method = ( + socks5.SOCKS5AuthMethod.NO_AUTH_REQUIRED + if auth is None + else socks5.SOCKS5AuthMethod.USERNAME_PASSWORD + ) + conn.send(socks5.SOCKS5AuthMethodsRequest([auth_method])) + outgoing_bytes = conn.data_to_send() + 
stream.write(outgoing_bytes) + + # Auth method response + incoming_bytes = stream.read(max_bytes=4096) + response = conn.receive_data(incoming_bytes) + assert isinstance(response, socks5.SOCKS5AuthReply) + if response.method != auth_method: + requested = AUTH_METHODS.get(auth_method, "UNKNOWN") + responded = AUTH_METHODS.get(response.method, "UNKNOWN") + raise ProxyError( + f"Requested {requested} from proxy server, but got {responded}." + ) + + if response.method == socks5.SOCKS5AuthMethod.USERNAME_PASSWORD: + # Username/password request + assert auth is not None + username, password = auth + conn.send(socks5.SOCKS5UsernamePasswordRequest(username, password)) + outgoing_bytes = conn.data_to_send() + stream.write(outgoing_bytes) + + # Username/password response + incoming_bytes = stream.read(max_bytes=4096) + response = conn.receive_data(incoming_bytes) + assert isinstance(response, socks5.SOCKS5UsernamePasswordReply) + if not response.success: + raise ProxyError("Invalid username/password") + + # Connect request + conn.send( + socks5.SOCKS5CommandRequest.from_address( + socks5.SOCKS5Command.CONNECT, (host, port) + ) + ) + outgoing_bytes = conn.data_to_send() + stream.write(outgoing_bytes) + + # Connect response + incoming_bytes = stream.read(max_bytes=4096) + response = conn.receive_data(incoming_bytes) + assert isinstance(response, socks5.SOCKS5Reply) + if response.reply_code != socks5.SOCKS5ReplyCode.SUCCEEDED: + reply_code = REPLY_CODES.get(response.reply_code, "UNKNOWN") + raise ProxyError(f"Proxy server could not connect: {reply_code}.") + + +class SOCKSProxy(ConnectionPool): + """ + A connection pool that sends requests via a SOCKS proxy.
+ """ + + def __init__( + self, + proxy_url: typing.Union[URL, bytes, str], + proxy_auth: typing.Optional[ + typing.Tuple[typing.Union[bytes, str], typing.Union[bytes, str]] + ] = None, + ssl_context: typing.Optional[ssl.SSLContext] = None, + max_connections: typing.Optional[int] = 10, + max_keepalive_connections: typing.Optional[int] = None, + keepalive_expiry: typing.Optional[float] = None, + http1: bool = True, + http2: bool = False, + retries: int = 0, + network_backend: typing.Optional[NetworkBackend] = None, + ) -> None: + """ + A connection pool for making HTTP requests. + + Parameters: + proxy_url: The URL to use when connecting to the proxy server. + For example `"http://127.0.0.1:8080/"`. + ssl_context: An SSL context to use for verifying connections. + If not specified, the default `httpcore.default_ssl_context()` + will be used. + max_connections: The maximum number of concurrent HTTP connections that + the pool should allow. Any attempt to send a request on a pool that + would exceed this amount will block until a connection is available. + max_keepalive_connections: The maximum number of idle HTTP connections + that will be maintained in the pool. + keepalive_expiry: The duration in seconds that an idle HTTP connection + may be maintained for before being expired from the pool. + http1: A boolean indicating if HTTP/1.1 requests should be supported + by the connection pool. Defaults to True. + http2: A boolean indicating if HTTP/2 requests should be supported by + the connection pool. Defaults to False. + retries: The maximum number of retries when trying to establish + a connection. + local_address: Local address to connect from. Can also be used to + connect using a particular address family. Using + `local_address="0.0.0.0"` will connect using an `AF_INET` address + (IPv4), while using `local_address="::"` will connect using an + `AF_INET6` address (IPv6). + uds: Path to a Unix Domain Socket to use instead of TCP sockets. 
+ network_backend: A backend instance to use for handling network I/O. + """ + super().__init__( + ssl_context=ssl_context, + max_connections=max_connections, + max_keepalive_connections=max_keepalive_connections, + keepalive_expiry=keepalive_expiry, + http1=http1, + http2=http2, + network_backend=network_backend, + retries=retries, + ) + self._ssl_context = ssl_context + self._proxy_url = enforce_url(proxy_url, name="proxy_url") + if proxy_auth is not None: + username, password = proxy_auth + username_bytes = enforce_bytes(username, name="proxy_auth") + password_bytes = enforce_bytes(password, name="proxy_auth") + self._proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = ( + username_bytes, + password_bytes, + ) + else: + self._proxy_auth = None + + def create_connection(self, origin: Origin) -> ConnectionInterface: + return Socks5Connection( + proxy_origin=self._proxy_url.origin, + remote_origin=origin, + proxy_auth=self._proxy_auth, + ssl_context=self._ssl_context, + keepalive_expiry=self._keepalive_expiry, + http1=self._http1, + http2=self._http2, + network_backend=self._network_backend, + ) + + +class Socks5Connection(ConnectionInterface): + def __init__( + self, + proxy_origin: Origin, + remote_origin: Origin, + proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = None, + ssl_context: typing.Optional[ssl.SSLContext] = None, + keepalive_expiry: typing.Optional[float] = None, + http1: bool = True, + http2: bool = False, + network_backend: typing.Optional[NetworkBackend] = None, + ) -> None: + self._proxy_origin = proxy_origin + self._remote_origin = remote_origin + self._proxy_auth = proxy_auth + self._ssl_context = ssl_context + self._keepalive_expiry = keepalive_expiry + self._http1 = http1 + self._http2 = http2 + + self._network_backend: NetworkBackend = ( + SyncBackend() if network_backend is None else network_backend + ) + self._connect_lock = Lock() + self._connection: typing.Optional[ConnectionInterface] = None + self._connect_failed = False + 
+ def handle_request(self, request: Request) -> Response: + timeouts = request.extensions.get("timeout", {}) + timeout = timeouts.get("connect", None) + + with self._connect_lock: + if self._connection is None: + try: + # Connect to the proxy + kwargs = { + "host": self._proxy_origin.host.decode("ascii"), + "port": self._proxy_origin.port, + "timeout": timeout, + } + with Trace("connect_tcp", logger, request, kwargs) as trace: + stream = self._network_backend.connect_tcp(**kwargs) + trace.return_value = stream + + # Connect to the remote host using socks5 + kwargs = { + "stream": stream, + "host": self._remote_origin.host.decode("ascii"), + "port": self._remote_origin.port, + "auth": self._proxy_auth, + } + with Trace( + "setup_socks5_connection", logger, request, kwargs + ) as trace: + _init_socks5_connection(**kwargs) + trace.return_value = stream + + # Upgrade the stream to SSL + if self._remote_origin.scheme == b"https": + ssl_context = ( + default_ssl_context() + if self._ssl_context is None + else self._ssl_context + ) + alpn_protocols = ( + ["http/1.1", "h2"] if self._http2 else ["http/1.1"] + ) + ssl_context.set_alpn_protocols(alpn_protocols) + + kwargs = { + "ssl_context": ssl_context, + "server_hostname": self._remote_origin.host.decode("ascii"), + "timeout": timeout, + } + with Trace("start_tls", logger, request, kwargs) as trace: + stream = stream.start_tls(**kwargs) + trace.return_value = stream + + # Determine if we should be using HTTP/1.1 or HTTP/2 + ssl_object = stream.get_extra_info("ssl_object") + http2_negotiated = ( + ssl_object is not None + and ssl_object.selected_alpn_protocol() == "h2" + ) + + # Create the HTTP/1.1 or HTTP/2 connection + if http2_negotiated or ( + self._http2 and not self._http1 + ): # pragma: nocover + from .http2 import HTTP2Connection + + self._connection = HTTP2Connection( + origin=self._remote_origin, + stream=stream, + keepalive_expiry=self._keepalive_expiry, + ) + else: + self._connection = HTTP11Connection( + 
origin=self._remote_origin, + stream=stream, + keepalive_expiry=self._keepalive_expiry, + ) + except Exception as exc: + self._connect_failed = True + raise exc + elif not self._connection.is_available(): # pragma: nocover + raise ConnectionNotAvailable() + + return self._connection.handle_request(request) + + def can_handle_request(self, origin: Origin) -> bool: + return origin == self._remote_origin + + def close(self) -> None: + if self._connection is not None: + self._connection.close() + + def is_available(self) -> bool: + if self._connection is None: # pragma: nocover + # If HTTP/2 support is enabled, and the resulting connection could + # end up as HTTP/2 then we should indicate the connection as being + # available to service multiple requests. + return ( + self._http2 + and (self._remote_origin.scheme == b"https" or not self._http1) + and not self._connect_failed + ) + return self._connection.is_available() + + def has_expired(self) -> bool: + if self._connection is None: # pragma: nocover + return self._connect_failed + return self._connection.has_expired() + + def is_idle(self) -> bool: + if self._connection is None: # pragma: nocover + return self._connect_failed + return self._connection.is_idle() + + def is_closed(self) -> bool: + if self._connection is None: # pragma: nocover + return self._connect_failed + return self._connection.is_closed() + + def info(self) -> str: + if self._connection is None: # pragma: nocover + return "CONNECTION FAILED" if self._connect_failed else "CONNECTING" + return self._connection.info() + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} [{self.info()}]>" diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_synchronization.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_synchronization.py new file mode 100644 index 0000000000000000000000000000000000000000..bae27c1b11255891997ae21c0f1c240f547a65a5 --- /dev/null +++ 
b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_synchronization.py @@ -0,0 +1,279 @@ +import threading +from types import TracebackType +from typing import Optional, Type + +import sniffio + +from ._exceptions import ExceptionMapping, PoolTimeout, map_exceptions + +# Our async synchronization primitives use either 'anyio' or 'trio' depending +# on whether they're running under asyncio or trio. + +try: + import trio +except ImportError: # pragma: nocover + trio = None # type: ignore + +try: + import anyio +except ImportError: # pragma: nocover + anyio = None # type: ignore + + +class AsyncLock: + def __init__(self) -> None: + self._backend = "" + + def setup(self) -> None: + """ + Detect if we're running under 'asyncio' or 'trio' and create + a lock with the correct implementation. + """ + self._backend = sniffio.current_async_library() + if self._backend == "trio": + if trio is None: # pragma: nocover + raise RuntimeError( + "Running under trio requires the 'trio' package to be installed." + ) + self._trio_lock = trio.Lock() + else: + if anyio is None: # pragma: nocover + raise RuntimeError( + "Running under asyncio requires the 'anyio' package to be installed." + ) + self._anyio_lock = anyio.Lock() + + async def __aenter__(self) -> "AsyncLock": + if not self._backend: + self.setup() + + if self._backend == "trio": + await self._trio_lock.acquire() + else: + await self._anyio_lock.acquire() + + return self + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + if self._backend == "trio": + self._trio_lock.release() + else: + self._anyio_lock.release() + + +class AsyncEvent: + def __init__(self) -> None: + self._backend = "" + + def setup(self) -> None: + """ + Detect if we're running under 'asyncio' or 'trio' and create + an event with the correct implementation.
+ """ + self._backend = sniffio.current_async_library() + if self._backend == "trio": + if trio is None: # pragma: nocover + raise RuntimeError( + "Running under trio requires the 'trio' package to be installed." + ) + self._trio_event = trio.Event() + else: + if anyio is None: # pragma: nocover + raise RuntimeError( + "Running under asyncio requires the 'anyio' package to be installed." + ) + self._anyio_event = anyio.Event() + + def set(self) -> None: + if not self._backend: + self.setup() + + if self._backend == "trio": + self._trio_event.set() + else: + self._anyio_event.set() + + async def wait(self, timeout: Optional[float] = None) -> None: + if not self._backend: + self.setup() + + if self._backend == "trio": + if trio is None: # pragma: nocover + raise RuntimeError( + "Running under trio requires the 'trio' package to be installed." + ) + + trio_exc_map: ExceptionMapping = {trio.TooSlowError: PoolTimeout} + timeout_or_inf = float("inf") if timeout is None else timeout + with map_exceptions(trio_exc_map): + with trio.fail_after(timeout_or_inf): + await self._trio_event.wait() + else: + if anyio is None: # pragma: nocover + raise RuntimeError( + "Running under asyncio requires the 'anyio' package to be installed." + ) + + anyio_exc_map: ExceptionMapping = {TimeoutError: PoolTimeout} + with map_exceptions(anyio_exc_map): + with anyio.fail_after(timeout): + await self._anyio_event.wait() + + +class AsyncSemaphore: + def __init__(self, bound: int) -> None: + self._bound = bound + self._backend = "" + + def setup(self) -> None: + """ + Detect if we're running under 'asyncio' or 'trio' and create + a semaphore with the correct implementation. + """ + self._backend = sniffio.current_async_library() + if self._backend == "trio": + if trio is None: # pragma: nocover + raise RuntimeError( + "Running under trio requires the 'trio' package to be installed." 
+ ) + + self._trio_semaphore = trio.Semaphore( + initial_value=self._bound, max_value=self._bound + ) + else: + if anyio is None: # pragma: nocover + raise RuntimeError( + "Running under asyncio requires the 'anyio' package to be installed." + ) + + self._anyio_semaphore = anyio.Semaphore( + initial_value=self._bound, max_value=self._bound + ) + + async def acquire(self) -> None: + if not self._backend: + self.setup() + + if self._backend == "trio": + await self._trio_semaphore.acquire() + else: + await self._anyio_semaphore.acquire() + + async def release(self) -> None: + if self._backend == "trio": + self._trio_semaphore.release() + else: + self._anyio_semaphore.release() + + +class AsyncShieldCancellation: + # For certain portions of our codebase where we're dealing with + # closing connections during exception handling we want to shield + # the operation from being cancelled. + # + # with AsyncShieldCancellation(): + # ... # clean-up operations, shielded from cancellation. + + def __init__(self) -> None: + """ + Detect if we're running under 'asyncio' or 'trio' and create + a shielded scope with the correct implementation. + """ + self._backend = sniffio.current_async_library() + + if self._backend == "trio": + if trio is None: # pragma: nocover + raise RuntimeError( + "Running under trio requires the 'trio' package to be installed." + ) + + self._trio_shield = trio.CancelScope(shield=True) + else: + if anyio is None: # pragma: nocover + raise RuntimeError( + "Running under asyncio requires the 'anyio' package to be installed." 
+ ) + + self._anyio_shield = anyio.CancelScope(shield=True) + + def __enter__(self) -> "AsyncShieldCancellation": + if self._backend == "trio": + self._trio_shield.__enter__() + else: + self._anyio_shield.__enter__() + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + if self._backend == "trio": + self._trio_shield.__exit__(exc_type, exc_value, traceback) + else: + self._anyio_shield.__exit__(exc_type, exc_value, traceback) + + +# Our thread-based synchronization primitives... + + +class Lock: + def __init__(self) -> None: + self._lock = threading.Lock() + + def __enter__(self) -> "Lock": + self._lock.acquire() + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + self._lock.release() + + +class Event: + def __init__(self) -> None: + self._event = threading.Event() + + def set(self) -> None: + self._event.set() + + def wait(self, timeout: Optional[float] = None) -> None: + if not self._event.wait(timeout=timeout): + raise PoolTimeout() # pragma: nocover + + +class Semaphore: + def __init__(self, bound: int) -> None: + self._semaphore = threading.Semaphore(value=bound) + + def acquire(self) -> None: + self._semaphore.acquire() + + def release(self) -> None: + self._semaphore.release() + + +class ShieldCancellation: + # Thread-synchronous codebases don't support cancellation semantics. + # We have this class because we need to mirror the async and sync + # cases within our package, but it's just a no-op. 
+ def __enter__(self) -> "ShieldCancellation": + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + pass diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_trace.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_trace.py new file mode 100644 index 0000000000000000000000000000000000000000..b122a53e88f17e1e450f63b05ede3e28e8a7992a --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_trace.py @@ -0,0 +1,105 @@ +import inspect +import logging +from types import TracebackType +from typing import Any, Dict, Optional, Type + +from ._models import Request + + +class Trace: + def __init__( + self, + name: str, + logger: logging.Logger, + request: Optional[Request] = None, + kwargs: Optional[Dict[str, Any]] = None, + ) -> None: + self.name = name + self.logger = logger + self.trace_extension = ( + None if request is None else request.extensions.get("trace") + ) + self.debug = self.logger.isEnabledFor(logging.DEBUG) + self.kwargs = kwargs or {} + self.return_value: Any = None + self.should_trace = self.debug or self.trace_extension is not None + self.prefix = self.logger.name.split(".")[-1] + + def trace(self, name: str, info: Dict[str, Any]) -> None: + if self.trace_extension is not None: + prefix_and_name = f"{self.prefix}.{name}" + ret = self.trace_extension(prefix_and_name, info) + if inspect.iscoroutine(ret): # pragma: no cover + raise TypeError( + "If you are using a synchronous interface, " + "the callback of the `trace` extension should " + "be a normal function instead of an asynchronous function." 
+ ) + + if self.debug: + if not info or "return_value" in info and info["return_value"] is None: + message = name + else: + args = " ".join([f"{key}={value!r}" for key, value in info.items()]) + message = f"{name} {args}" + self.logger.debug(message) + + def __enter__(self) -> "Trace": + if self.should_trace: + info = self.kwargs + self.trace(f"{self.name}.started", info) + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + if self.should_trace: + if exc_value is None: + info = {"return_value": self.return_value} + self.trace(f"{self.name}.complete", info) + else: + info = {"exception": exc_value} + self.trace(f"{self.name}.failed", info) + + async def atrace(self, name: str, info: Dict[str, Any]) -> None: + if self.trace_extension is not None: + prefix_and_name = f"{self.prefix}.{name}" + coro = self.trace_extension(prefix_and_name, info) + if not inspect.iscoroutine(coro): # pragma: no cover + raise TypeError( + "If you're using an asynchronous interface, " + "the callback of the `trace` extension should " + "be an asynchronous function rather than a normal function." 
) + await coro + + if self.debug: + if not info or "return_value" in info and info["return_value"] is None: + message = name + else: + args = " ".join([f"{key}={value!r}" for key, value in info.items()]) + message = f"{name} {args}" + self.logger.debug(message) + + async def __aenter__(self) -> "Trace": + if self.should_trace: + info = self.kwargs + await self.atrace(f"{self.name}.started", info) + return self + + async def __aexit__( + self, + exc_type: Optional[Type[BaseException]] = None, + exc_value: Optional[BaseException] = None, + traceback: Optional[TracebackType] = None, + ) -> None: + if self.should_trace: + if exc_value is None: + info = {"return_value": self.return_value} + await self.atrace(f"{self.name}.complete", info) + else: + info = {"exception": exc_value} + await self.atrace(f"{self.name}.failed", info) diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/_utils.py b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..df5dea8fe472697afea4156d2916389e2f70d684 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/httpcore/_utils.py @@ -0,0 +1,36 @@ +import select +import socket +import sys +import typing + + +def is_socket_readable(sock: typing.Optional[socket.socket]) -> bool: + """ + Return whether a socket, as identified by its file descriptor, is readable. + "A socket is readable" means that the read buffer isn't empty, i.e. that calling + .recv() on it would immediately return some data. + """ + # NOTE: we want to check for readability without actually attempting to read, because + # we don't want to block forever if it's not readable. + + # In the case that the socket no longer exists, or cannot return a file + # descriptor, we treat it as being readable, as if the next read operation + # on it is ready to return the terminating `b""`.
sock_fd = None if sock is None else sock.fileno() + if sock_fd is None or sock_fd < 0: # pragma: nocover + return True + + # The implementation below was stolen from: + # https://github.com/python-trio/trio/blob/20ee2b1b7376db637435d80e266212a35837ddcc/trio/_socket.py#L471-L478 + # See also: https://github.com/encode/httpcore/pull/193#issuecomment-703129316 + + # Use select.select on Windows and when poll is unavailable, and select.poll + # everywhere else. (E.g. when eventlet is in use. See #327) + if ( + sys.platform == "win32" or getattr(select, "poll", None) is None + ): # pragma: nocover + rready, _, _ = select.select([sock_fd], [], [], 0) + return bool(rready) + p = select.poll() + p.register(sock_fd, select.POLLIN) + return bool(p.poll(0)) diff --git a/evalkit_internvl/lib/python3.10/site-packages/httpcore/py.typed b/evalkit_internvl/lib/python3.10/site-packages/httpcore/py.typed new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/ANSI.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/ANSI.py new file mode 100644 index 0000000000000000000000000000000000000000..1cd2e90e7ab0c54430801b53d84ef9bb8d749485 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/ANSI.py @@ -0,0 +1,351 @@ +'''This implements an ANSI (VT100) terminal emulator as a subclass of screen. + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES.
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +# references: +# http://en.wikipedia.org/wiki/ANSI_escape_code +# http://www.retards.org/terminals/vt102.html +# http://vt100.net/docs/vt102-ug/contents.html +# http://vt100.net/docs/vt220-rm/ +# http://www.termsys.demon.co.uk/vtansi.htm + +from . import screen +from . import FSM +import string + +# +# The 'Do.*' functions are helper functions for the ANSI class. +# +def DoEmit (fsm): + + screen = fsm.memory[0] + screen.write_ch(fsm.input_symbol) + +def DoStartNumber (fsm): + + fsm.memory.append (fsm.input_symbol) + +def DoBuildNumber (fsm): + + ns = fsm.memory.pop() + ns = ns + fsm.input_symbol + fsm.memory.append (ns) + +def DoBackOne (fsm): + + screen = fsm.memory[0] + screen.cursor_back () + +def DoBack (fsm): + + count = int(fsm.memory.pop()) + screen = fsm.memory[0] + screen.cursor_back (count) + +def DoDownOne (fsm): + + screen = fsm.memory[0] + screen.cursor_down () + +def DoDown (fsm): + + count = int(fsm.memory.pop()) + screen = fsm.memory[0] + screen.cursor_down (count) + +def DoForwardOne (fsm): + + screen = fsm.memory[0] + screen.cursor_forward () + +def DoForward (fsm): + + count = int(fsm.memory.pop()) + screen = fsm.memory[0] + screen.cursor_forward (count) + +def DoUpReverse (fsm): + + screen = fsm.memory[0] + screen.cursor_up_reverse() + +def DoUpOne (fsm): + + screen = fsm.memory[0] + screen.cursor_up () + +def DoUp (fsm): + + count = int(fsm.memory.pop()) + screen = fsm.memory[0] + screen.cursor_up (count) + +def DoHome (fsm): + + c = 
int(fsm.memory.pop()) + r = int(fsm.memory.pop()) + screen = fsm.memory[0] + screen.cursor_home (r,c) + +def DoHomeOrigin (fsm): + + c = 1 + r = 1 + screen = fsm.memory[0] + screen.cursor_home (r,c) + +def DoEraseDown (fsm): + + screen = fsm.memory[0] + screen.erase_down() + +def DoErase (fsm): + + arg = int(fsm.memory.pop()) + screen = fsm.memory[0] + if arg == 0: + screen.erase_down() + elif arg == 1: + screen.erase_up() + elif arg == 2: + screen.erase_screen() + +def DoEraseEndOfLine (fsm): + + screen = fsm.memory[0] + screen.erase_end_of_line() + +def DoEraseLine (fsm): + + arg = int(fsm.memory.pop()) + screen = fsm.memory[0] + if arg == 0: + screen.erase_end_of_line() + elif arg == 1: + screen.erase_start_of_line() + elif arg == 2: + screen.erase_line() + +def DoEnableScroll (fsm): + + screen = fsm.memory[0] + screen.scroll_screen() + +def DoCursorSave (fsm): + + screen = fsm.memory[0] + screen.cursor_save_attrs() + +def DoCursorRestore (fsm): + + screen = fsm.memory[0] + screen.cursor_restore_attrs() + +def DoScrollRegion (fsm): + + screen = fsm.memory[0] + r2 = int(fsm.memory.pop()) + r1 = int(fsm.memory.pop()) + screen.scroll_screen_rows (r1,r2) + +def DoMode (fsm): + + screen = fsm.memory[0] + mode = fsm.memory.pop() # Should be 4 + # screen.setReplaceMode () + +def DoLog (fsm): + + screen = fsm.memory[0] + fsm.memory = [screen] + fout = open ('log', 'a') + fout.write (fsm.input_symbol + ',' + fsm.current_state + '\n') + fout.close() + +class term (screen.screen): + + '''This class is an abstract, generic terminal. + This does nothing. This is a placeholder that + provides a common base class for other terminals + such as an ANSI terminal. ''' + + def __init__ (self, r=24, c=80, *args, **kwargs): + + screen.screen.__init__(self, r,c,*args,**kwargs) + +class ANSI (term): + '''This class implements an ANSI (VT100) terminal. + It is a stream filter that recognizes ANSI terminal + escape sequences and maintains the state of a screen object. 
''' + + def __init__ (self, r=24,c=80,*args,**kwargs): + + term.__init__(self,r,c,*args,**kwargs) + + #self.screen = screen (24,80) + self.state = FSM.FSM ('INIT',[self]) + self.state.set_default_transition (DoLog, 'INIT') + self.state.add_transition_any ('INIT', DoEmit, 'INIT') + self.state.add_transition ('\x1b', 'INIT', None, 'ESC') + self.state.add_transition_any ('ESC', DoLog, 'INIT') + self.state.add_transition ('(', 'ESC', None, 'G0SCS') + self.state.add_transition (')', 'ESC', None, 'G1SCS') + self.state.add_transition_list ('AB012', 'G0SCS', None, 'INIT') + self.state.add_transition_list ('AB012', 'G1SCS', None, 'INIT') + self.state.add_transition ('7', 'ESC', DoCursorSave, 'INIT') + self.state.add_transition ('8', 'ESC', DoCursorRestore, 'INIT') + self.state.add_transition ('M', 'ESC', DoUpReverse, 'INIT') + self.state.add_transition ('>', 'ESC', DoUpReverse, 'INIT') + self.state.add_transition ('<', 'ESC', DoUpReverse, 'INIT') + self.state.add_transition ('=', 'ESC', None, 'INIT') # Selects application keypad. + self.state.add_transition ('#', 'ESC', None, 'GRAPHICS_POUND') + self.state.add_transition_any ('GRAPHICS_POUND', None, 'INIT') + self.state.add_transition ('[', 'ESC', None, 'ELB') + # ELB means Escape Left Bracket. 
That is ^[[ + self.state.add_transition ('H', 'ELB', DoHomeOrigin, 'INIT') + self.state.add_transition ('D', 'ELB', DoBackOne, 'INIT') + self.state.add_transition ('B', 'ELB', DoDownOne, 'INIT') + self.state.add_transition ('C', 'ELB', DoForwardOne, 'INIT') + self.state.add_transition ('A', 'ELB', DoUpOne, 'INIT') + self.state.add_transition ('J', 'ELB', DoEraseDown, 'INIT') + self.state.add_transition ('K', 'ELB', DoEraseEndOfLine, 'INIT') + self.state.add_transition ('r', 'ELB', DoEnableScroll, 'INIT') + self.state.add_transition ('m', 'ELB', self.do_sgr, 'INIT') + self.state.add_transition ('?', 'ELB', None, 'MODECRAP') + self.state.add_transition_list (string.digits, 'ELB', DoStartNumber, 'NUMBER_1') + self.state.add_transition_list (string.digits, 'NUMBER_1', DoBuildNumber, 'NUMBER_1') + self.state.add_transition ('D', 'NUMBER_1', DoBack, 'INIT') + self.state.add_transition ('B', 'NUMBER_1', DoDown, 'INIT') + self.state.add_transition ('C', 'NUMBER_1', DoForward, 'INIT') + self.state.add_transition ('A', 'NUMBER_1', DoUp, 'INIT') + self.state.add_transition ('J', 'NUMBER_1', DoErase, 'INIT') + self.state.add_transition ('K', 'NUMBER_1', DoEraseLine, 'INIT') + self.state.add_transition ('l', 'NUMBER_1', DoMode, 'INIT') + ### It gets worse... the 'm' code can have infinite number of + ### number;number;number before it. I've never seen more than two, + ### but the specs say it's allowed. crap! + self.state.add_transition ('m', 'NUMBER_1', self.do_sgr, 'INIT') + ### LED control. Same implementation problem as 'm' code. + self.state.add_transition ('q', 'NUMBER_1', self.do_decsca, 'INIT') + + # \E[?47h switch to alternate screen + # \E[?47l restores to normal screen from alternate screen. 
+ self.state.add_transition_list (string.digits, 'MODECRAP', DoStartNumber, 'MODECRAP_NUM') + self.state.add_transition_list (string.digits, 'MODECRAP_NUM', DoBuildNumber, 'MODECRAP_NUM') + self.state.add_transition ('l', 'MODECRAP_NUM', self.do_modecrap, 'INIT') + self.state.add_transition ('h', 'MODECRAP_NUM', self.do_modecrap, 'INIT') + +#RM Reset Mode Esc [ Ps l none + self.state.add_transition (';', 'NUMBER_1', None, 'SEMICOLON') + self.state.add_transition_any ('SEMICOLON', DoLog, 'INIT') + self.state.add_transition_list (string.digits, 'SEMICOLON', DoStartNumber, 'NUMBER_2') + self.state.add_transition_list (string.digits, 'NUMBER_2', DoBuildNumber, 'NUMBER_2') + self.state.add_transition_any ('NUMBER_2', DoLog, 'INIT') + self.state.add_transition ('H', 'NUMBER_2', DoHome, 'INIT') + self.state.add_transition ('f', 'NUMBER_2', DoHome, 'INIT') + self.state.add_transition ('r', 'NUMBER_2', DoScrollRegion, 'INIT') + ### It gets worse... the 'm' code can have infinite number of + ### number;number;number before it. I've never seen more than two, + ### but the specs say it's allowed. crap! + self.state.add_transition ('m', 'NUMBER_2', self.do_sgr, 'INIT') + ### LED control. Same problem as 'm' code. 
+ self.state.add_transition ('q', 'NUMBER_2', self.do_decsca, 'INIT') + self.state.add_transition (';', 'NUMBER_2', None, 'SEMICOLON_X') + + # Create a state for 'q' and 'm' which allows an infinite number of ignored numbers + self.state.add_transition_any ('SEMICOLON_X', DoLog, 'INIT') + self.state.add_transition_list (string.digits, 'SEMICOLON_X', DoStartNumber, 'NUMBER_X') + self.state.add_transition_list (string.digits, 'NUMBER_X', DoBuildNumber, 'NUMBER_X') + self.state.add_transition_any ('NUMBER_X', DoLog, 'INIT') + self.state.add_transition ('m', 'NUMBER_X', self.do_sgr, 'INIT') + self.state.add_transition ('q', 'NUMBER_X', self.do_decsca, 'INIT') + self.state.add_transition (';', 'NUMBER_X', None, 'SEMICOLON_X') + + def process (self, c): + """Process a single character. Called by :meth:`write`.""" + if isinstance(c, bytes): + c = self._decode(c) + self.state.process(c) + + def process_list (self, l): + + self.write(l) + + def write (self, s): + """Process text, writing it to the virtual screen while handling + ANSI escape codes. + """ + if isinstance(s, bytes): + s = self._decode(s) + for c in s: + self.process(c) + + def flush (self): + pass + + def write_ch (self, ch): + '''This puts a character at the current cursor position. The cursor + position is moved forward with wrap-around, but no scrolling is done if + the cursor hits the lower-right corner of the screen. ''' + + if isinstance(ch, bytes): + ch = self._decode(ch) + + #\r and \n both produce a call to cr() and lf(), respectively. 
+ ch = ch[0] + + if ch == u'\r': + self.cr() + return + if ch == u'\n': + self.crlf() + return + if ch == chr(screen.BS): + self.cursor_back() + return + self.put_abs(self.cur_r, self.cur_c, ch) + old_r = self.cur_r + old_c = self.cur_c + self.cursor_forward() + if old_c == self.cur_c: + self.cursor_down() + if old_r != self.cur_r: + self.cursor_home (self.cur_r, 1) + else: + self.scroll_up () + self.cursor_home (self.cur_r, 1) + self.erase_line() + + def do_sgr (self, fsm): + '''Select Graphic Rendition, e.g. color. ''' + screen = fsm.memory[0] + fsm.memory = [screen] + + def do_decsca (self, fsm): + '''Select character protection attribute. ''' + screen = fsm.memory[0] + fsm.memory = [screen] + + def do_modecrap (self, fsm): + '''Handler for \x1b[?h and \x1b[?l. If anyone + wanted to actually use these, they'd need to add more states to the + FSM rather than just improve or override this method. ''' + screen = fsm.memory[0] + fsm.memory = [screen] diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/FSM.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/FSM.py new file mode 100644 index 0000000000000000000000000000000000000000..46b392ea08aaf53577b27c0552776ecc86d72510 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/FSM.py @@ -0,0 +1,334 @@ +#!/usr/bin/env python + +'''This module implements a Finite State Machine (FSM). In addition to state +this FSM also maintains a user defined "memory". So this FSM can be used as a +Push-down Automata (PDA) since a PDA is a FSM + memory. + +The following describes how the FSM works, but you will probably also need to +see the example function to understand how the FSM is used in practice. + +You define an FSM by building tables of transitions. For a given input symbol +the process() method uses these tables to decide what action to call and what +the next state will be. 
The FSM has a table of transitions that associate: + + (input_symbol, current_state) --> (action, next_state) + +Where "action" is a function you define. The symbols and states can be any +objects. You use the add_transition() and add_transition_list() methods to add +to the transition table. The FSM also has a table of transitions that +associate: + + (current_state) --> (action, next_state) + +You use the add_transition_any() method to add to this transition table. The +FSM also has one default transition that is not associated with any specific +input_symbol or state. You use the set_default_transition() method to set the +default transition. + +When an action function is called it is passed a reference to the FSM. The +action function may then access attributes of the FSM such as input_symbol, +current_state, or "memory". The "memory" attribute can be any object that you +want to pass along to the action functions. It is not used by the FSM itself. +For parsing you would typically pass a list to be used as a stack. + +The processing sequence is as follows. The process() method is given an +input_symbol to process. The FSM will search the table of transitions that +associate: + + (input_symbol, current_state) --> (action, next_state) + +If the pair (input_symbol, current_state) is found then process() will call the +associated action function and then set the current state to the next_state. + +If the FSM cannot find a match for (input_symbol, current_state) it will then +search the table of transitions that associate: + + (current_state) --> (action, next_state) + +If the current_state is found then the process() method will call the +associated action function and then set the current state to the next_state. +Notice that this table lacks an input_symbol. It lets you define transitions +for a current_state and ANY input_symbol. Hence, it is called the "any" table. 
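The precedence just described — an exact (input_symbol, current_state) pair first, then the "any" table keyed by state alone, then the single default — can be sketched with plain dicts. This is an illustrative standalone sketch, not the module's code; the FSM class keeps these tables in its state_transitions, state_transitions_any, and default_transition attributes:

```python
# Minimal sketch of the three-tier transition lookup described above.
# Names are illustrative; the real lookup lives in FSM.get_transition().

state_transitions = {('a', 'INIT'): ('exact_action', 'S1')}
state_transitions_any = {'INIT': ('any_action', 'S2')}
default_transition = ('default_action', 'ERROR')

def get_transition(input_symbol, state):
    # 1. exact (input_symbol, state) match
    if (input_symbol, state) in state_transitions:
        return state_transitions[(input_symbol, state)]
    # 2. "any" table: this state with ANY input symbol
    if state in state_transitions_any:
        return state_transitions_any[state]
    # 3. catch-all default (the real FSM raises if this is unset)
    return default_transition

print(get_transition('a', 'INIT'))    # exact match wins
print(get_transition('b', 'INIT'))    # falls through to the "any" table
print(get_transition('b', 'OTHER'))   # falls through to the default
```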
+Remember, it is always checked after first searching the table for a specific +(input_symbol, current_state). + +For the case where the FSM did not match either of the previous two cases the +FSM will try to use the default transition. If the default transition is +defined then the process() method will call the associated action function and +then set the current state to the next_state. This lets you define a default +transition as a catch-all case. You can think of it as an exception handler. +There can be only one default transition. + +Finally, if none of the previous cases are defined for an input_symbol and +current_state then the FSM will raise an exception. This may be desirable, but +you can always prevent this just by defining a default transition. + +Noah Spurrier 20020822 + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. + THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +class ExceptionFSM(Exception): + + '''This is the FSM Exception class.''' + + def __init__(self, value): + self.value = value + + def __str__(self): + return 'ExceptionFSM: ' + str(self.value) + +class FSM: + + '''This is a Finite State Machine (FSM). 
+ ''' + + def __init__(self, initial_state, memory=None): + + '''This creates the FSM. You set the initial state here. The "memory" + attribute is any object that you want to pass along to the action + functions. It is not used by the FSM. For parsing you would typically + pass a list to be used as a stack. ''' + + # Map (input_symbol, current_state) --> (action, next_state). + self.state_transitions = {} + # Map (current_state) --> (action, next_state). + self.state_transitions_any = {} + self.default_transition = None + + self.input_symbol = None + self.initial_state = initial_state + self.current_state = self.initial_state + self.next_state = None + self.action = None + self.memory = memory + + def reset (self): + + '''This sets the current_state to the initial_state and sets + input_symbol to None. The initial state was set by the constructor + __init__(). ''' + + self.current_state = self.initial_state + self.input_symbol = None + + def add_transition (self, input_symbol, state, action=None, next_state=None): + + '''This adds a transition that associates: + + (input_symbol, current_state) --> (action, next_state) + + The action may be set to None in which case the process() method will + ignore the action and only set the next_state. The next_state may be + set to None in which case the current state will be unchanged. + + You can also set transitions for a list of symbols by using + add_transition_list(). ''' + + if next_state is None: + next_state = state + self.state_transitions[(input_symbol, state)] = (action, next_state) + + def add_transition_list (self, list_input_symbols, state, action=None, next_state=None): + + '''This adds the same transition for a list of input symbols. + You can pass a list or a string. Note that it is handy to use + string.digits, string.whitespace, string.letters, etc. to add + transitions that match character classes. 
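As a sketch of what add_transition_list() amounts to — one transition registered per symbol of an iterable — with a plain dict standing in for the FSM's state_transitions table. (Note that in Python 3 the letters constant is string.ascii_letters; string.letters existed only in Python 2.)

```python
import string

# Standalone sketch of add_transition_list(): register the same
# (action, next_state) pair for every symbol in an iterable.
transitions = {}

def add_transition(input_symbol, state, action=None, next_state=None):
    if next_state is None:
        next_state = state  # None means "stay in the current state"
    transitions[(input_symbol, state)] = (action, next_state)

def add_transition_list(symbols, state, action=None, next_state=None):
    for input_symbol in symbols:  # a string works: it iterates per character
        add_transition(input_symbol, state, action, next_state)

add_transition_list(string.digits, 'INIT', None, 'BUILDING_NUMBER')
print(len(transitions))            # 10 entries, one per digit
print(transitions[('7', 'INIT')])  # (None, 'BUILDING_NUMBER')
```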
+ + The action may be set to None in which case the process() method will + ignore the action and only set the next_state. The next_state may be + set to None in which case the current state will be unchanged. ''' + + if next_state is None: + next_state = state + for input_symbol in list_input_symbols: + self.add_transition (input_symbol, state, action, next_state) + + def add_transition_any (self, state, action=None, next_state=None): + + '''This adds a transition that associates: + + (current_state) --> (action, next_state) + + That is, any input symbol will match the current state. + The process() method checks the "any" state associations after it first + checks for an exact match of (input_symbol, current_state). + + The action may be set to None in which case the process() method will + ignore the action and only set the next_state. The next_state may be + set to None in which case the current state will be unchanged. ''' + + if next_state is None: + next_state = state + self.state_transitions_any [state] = (action, next_state) + + def set_default_transition (self, action, next_state): + + '''This sets the default transition. This defines an action and + next_state if the FSM cannot find the input symbol and the current + state in the transition list and if the FSM cannot find the + current_state in the transition_any list. This is useful as a final + fall-through state for catching errors and undefined states. + + The default transition can be removed by setting the attribute + default_transition to None. ''' + + self.default_transition = (action, next_state) + + def get_transition (self, input_symbol, state): + + '''This returns (action, next state) given an input_symbol and state. + This does not modify the FSM state, so calling this method has no side + effects. Normally you do not call this method directly. It is called by + process(). + + The sequence of steps to check for a defined transition goes from the + most specific to the least specific. + + 1. 
Check state_transitions[] that match exactly the tuple, + (input_symbol, state) + + 2. Check state_transitions_any[] that match (state) + In other words, match a specific state and ANY input_symbol. + + 3. Check if the default_transition is defined. + This catches any input_symbol and any state. + This is a handler for errors, undefined states, or defaults. + + 4. No transition was defined. If we get here then raise an exception. + ''' + + if (input_symbol, state) in self.state_transitions: + return self.state_transitions[(input_symbol, state)] + elif state in self.state_transitions_any: + return self.state_transitions_any[state] + elif self.default_transition is not None: + return self.default_transition + else: + raise ExceptionFSM ('Transition is undefined: (%s, %s).' % + (str(input_symbol), str(state)) ) + + def process (self, input_symbol): + + '''This is the main method that you call to process input. This may + cause the FSM to change state and call an action. This method calls + get_transition() to find the action and next_state associated with the + input_symbol and current_state. If the action is None then the action + is not called and only the current state is changed. This method + processes one complete input symbol. You can process a list of symbols + (or a string) by calling process_list(). ''' + + self.input_symbol = input_symbol + (self.action, self.next_state) = self.get_transition (self.input_symbol, self.current_state) + if self.action is not None: + self.action (self) + self.current_state = self.next_state + self.next_state = None + + def process_list (self, input_symbols): + + '''This takes a list and sends each element to process(). The list may + be a string or any iterable object. ''' + + for s in input_symbols: + self.process (s) + +############################################################################## +# The following is an example that demonstrates the use of the FSM class to +# process an RPN expression. 
Run this module from the command line. You will +# get a prompt > for input. Enter an RPN Expression. Numbers may be integers. +# Operators are * / + - Use the = sign to evaluate and print the expression. +# For example: +# +# 167 3 2 2 * * * 1 - = +# +# will print: +# +# 2003 +############################################################################## + +import sys +import string + +PY3 = (sys.version_info[0] >= 3) + +# +# These define the actions. +# Note that "memory" is a list being used as a stack. +# + +def BeginBuildNumber (fsm): + fsm.memory.append (fsm.input_symbol) + +def BuildNumber (fsm): + s = fsm.memory.pop () + s = s + fsm.input_symbol + fsm.memory.append (s) + +def EndBuildNumber (fsm): + s = fsm.memory.pop () + fsm.memory.append (int(s)) + +def DoOperator (fsm): + ar = fsm.memory.pop() + al = fsm.memory.pop() + if fsm.input_symbol == '+': + fsm.memory.append (al + ar) + elif fsm.input_symbol == '-': + fsm.memory.append (al - ar) + elif fsm.input_symbol == '*': + fsm.memory.append (al * ar) + elif fsm.input_symbol == '/': + fsm.memory.append (al / ar) + +def DoEqual (fsm): + print(str(fsm.memory.pop())) + +def Error (fsm): + print('That does not compute.') + print(str(fsm.input_symbol)) + +def main(): + + '''This is where the example starts and the FSM state transitions are + defined. Note that states are strings (such as 'INIT'). This is not + necessary, but it makes the example easier to read. 
''' + + f = FSM ('INIT', []) + f.set_default_transition (Error, 'INIT') + f.add_transition_any ('INIT', None, 'INIT') + f.add_transition ('=', 'INIT', DoEqual, 'INIT') + f.add_transition_list (string.digits, 'INIT', BeginBuildNumber, 'BUILDING_NUMBER') + f.add_transition_list (string.digits, 'BUILDING_NUMBER', BuildNumber, 'BUILDING_NUMBER') + f.add_transition_list (string.whitespace, 'BUILDING_NUMBER', EndBuildNumber, 'INIT') + f.add_transition_list ('+-*/', 'INIT', DoOperator, 'INIT') + + print() + print('Enter an RPN Expression.') + print('Numbers may be integers. Operators are * / + -') + print('Use the = sign to evaluate and print the expression.') + print('For example: ') + print(' 167 3 2 2 * * * 1 - =') + inputstr = (input if PY3 else raw_input)('> ') # analysis:ignore + f.process_list(inputstr) + + +if __name__ == '__main__': + main() diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__init__.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..86254ee720ecda4e15e7656dc0c0e0febd61d291 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__init__.py @@ -0,0 +1,91 @@ +'''Pexpect is a Python module for spawning child applications and controlling +them automatically. Pexpect can be used for automating interactive applications +such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup +scripts for duplicating software package installations on different servers. It +can be used for automated software testing. Pexpect is in the spirit of Don +Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python +require TCL and Expect or require C extensions to be compiled. Pexpect does not +use C, Expect, or TCL extensions. It should work on any platform that supports +the standard Python pty module. The Pexpect interface focuses on ease of use so +that simple tasks are easy. 
+ +There are two main interfaces to the Pexpect system; these are the function, +run() and the class, spawn. The spawn class is more powerful. The run() +function is simpler than spawn, and is good for quickly calling a program. When +you call the run() function it executes a given program and then returns the +output. This is a handy replacement for os.system(). + +For example:: + + pexpect.run('ls -la') + +The spawn class is the more powerful interface to the Pexpect system. You can +use this to spawn a child program and then interact with it by sending input and +expecting responses (waiting for patterns in the child's output). + +For example:: + + child = pexpect.spawn('scp foo user@example.com:.') + child.expect('Password:') + child.sendline(mypassword) + +A context manager can also be used with spawn:: + + with pexpect.spawn('scp foo user@example.com:.') as child: + child.expect('Password:') + child.sendline(mypassword) + +This works even for commands that ask for passwords or other input outside of +the normal stdio streams. For example, ssh reads input directly from the TTY +device which bypasses stdin. + +Credits: Noah Spurrier, Richard Holden, Marco Molteni, Kimberley Burchett, +Robert Stone, Hartmut Goebel, Chad Schroeder, Erick Tryzelaar, Dave Kirby, Ids +vander Molen, George Todd, Noel Taylor, Nicolas D. Cesar, Alexander Gattin, +Jacques-Etienne Baudoux, Geoffrey Marshall, Francisco Lourenco, Glen Mabey, +Karthik Gurusamy, Fernando Perez, Corey Minyard, Jon Cohen, Guillaume +Chazarain, Andrew Ryan, Nick Craig-Wood, Andrew Stone, Jorgen Grahn, John +Spiegel, Jan Grant, and Shane Kerr. Let me know if I forgot anyone. + +Pexpect is free, open source, and all that good stuff. +http://pexpect.sourceforge.net/ + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible.
+ http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. + THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +import sys +PY3 = (sys.version_info[0] >= 3) + +from .exceptions import ExceptionPexpect, EOF, TIMEOUT +from .utils import split_command_line, which, is_executable_file +from .expect import Expecter, searcher_re, searcher_string + +if sys.platform != 'win32': + # On Unix, these are available at the top level for backwards compatibility + from .pty_spawn import spawn, spawnu + from .run import run, runu + +__version__ = '4.9.0' +__revision__ = '' +__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'spawnu', 'run', 'runu', + 'which', 'split_command_line', '__version__', '__revision__'] + + + +# vim: set shiftround expandtab tabstop=4 shiftwidth=4 ft=python autoindent : diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/ANSI.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/ANSI.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..5d5511f114514229ba76e2828b49474f1e400c5e Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/ANSI.cpython-310.pyc differ diff --git 
a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/FSM.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/FSM.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..8d351b256acdc48222488b9ca110dbd07d332ecf Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/FSM.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_pre_await.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_pre_await.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..31ceb5f53dcffb9a5ae7332c6a5c6ce7891bf0c1 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_pre_await.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_w_await.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_w_await.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..e0df94edda9aeea14d40e366a8bd38eb828c2755 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/_async_w_await.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/exceptions.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/exceptions.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..22c71d093c9bae6b4eb851e240068c9d43fbef54 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/exceptions.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pty_spawn.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pty_spawn.cpython-310.pyc new file mode 100644 index 
0000000000000000000000000000000000000000..dec2b1a4a17ee4068dbd548c468e12805f048596 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pty_spawn.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pxssh.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pxssh.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..1d312a810d557fcfa7c4de57f91f73471a1df937 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/pxssh.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/replwrap.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/replwrap.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..ce8a2e2a8f304237d0685452c3b4e380a59e4b08 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/replwrap.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/screen.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/screen.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..ca0883794f9b13fd6bdfd990ef6bf2b4f145697d Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/screen.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/spawnbase.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/spawnbase.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..bed8b20b2fc772820a9ec2a33d3013b2482698c0 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/pexpect/__pycache__/spawnbase.cpython-310.pyc differ diff --git 
a/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async.py new file mode 100644 index 0000000000000000000000000000000000000000..261720c16069e89411b1511eeba53f1068c3c913 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async.py @@ -0,0 +1,28 @@ +"""Facade that provides the coroutine implementation appropriate to the running Python version. + +Python 3.5 introduced the async def/await syntax keywords. +In later versions, generator-based coroutines and some methods to get the +running asyncio loop were deprecated and are no longer supported. + +For Python 3.6 and later, coroutines and objects defined via the +``async def``/``await`` keywords are imported. + +Here the code is just imported, to provide the same interface to older code. +""" +# pylint: disable=unused-import +# flake8: noqa: F401 +from sys import version_info as py_version_info + +# this assumes async def/await are more stable +if py_version_info >= (3, 6): + from pexpect._async_w_await import ( + PatternWaiter, + expect_async, + repl_run_command_async, + ) +else: + from pexpect._async_pre_await import ( + PatternWaiter, + expect_async, + repl_run_command_async, + ) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_pre_await.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_pre_await.py new file mode 100644 index 0000000000000000000000000000000000000000..81ece1b6da1c9effe768717844ebc791ce7b86a5 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_pre_await.py @@ -0,0 +1,111 @@ +"""Implementation of coroutines without using ``async def``/``await`` keywords. + +``@asyncio.coroutine`` and ``yield from`` are used here instead. +""" +import asyncio +import errno +import signal + +from pexpect import EOF + + +@asyncio.coroutine +def expect_async(expecter, timeout=None): + # First process data that was previously read - if it matches, we don't need + # async stuff.
+ idx = expecter.existing_data() + if idx is not None: + return idx + if not expecter.spawn.async_pw_transport: + pw = PatternWaiter() + pw.set_expecter(expecter) + transport, pw = yield from asyncio.get_event_loop().connect_read_pipe( + lambda: pw, expecter.spawn + ) + expecter.spawn.async_pw_transport = pw, transport + else: + pw, transport = expecter.spawn.async_pw_transport + pw.set_expecter(expecter) + transport.resume_reading() + try: + return (yield from asyncio.wait_for(pw.fut, timeout)) + except asyncio.TimeoutError as e: + transport.pause_reading() + return expecter.timeout(e) + + +@asyncio.coroutine +def repl_run_command_async(repl, cmdlines, timeout=-1): + res = [] + repl.child.sendline(cmdlines[0]) + for line in cmdlines[1:]: + yield from repl._expect_prompt(timeout=timeout, async_=True) + res.append(repl.child.before) + repl.child.sendline(line) + + # Command was fully submitted, now wait for the next prompt + prompt_idx = yield from repl._expect_prompt(timeout=timeout, async_=True) + if prompt_idx == 1: + # We got the continuation prompt - command was incomplete + repl.child.kill(signal.SIGINT) + yield from repl._expect_prompt(timeout=1, async_=True) + raise ValueError("Continuation prompt found - input was incomplete:") + return "".join(res + [repl.child.before]) + + +class PatternWaiter(asyncio.Protocol): + transport = None + + def set_expecter(self, expecter): + self.expecter = expecter + self.fut = asyncio.Future() + + def found(self, result): + if not self.fut.done(): + self.fut.set_result(result) + self.transport.pause_reading() + + def error(self, exc): + if not self.fut.done(): + self.fut.set_exception(exc) + self.transport.pause_reading() + + def connection_made(self, transport): + self.transport = transport + + def data_received(self, data): + spawn = self.expecter.spawn + s = spawn._decoder.decode(data) + spawn._log(s, "read") + + if self.fut.done(): + spawn._before.write(s) + spawn._buffer.write(s) + return + + try: + index = 
self.expecter.new_data(s) + if index is not None: + # Found a match + self.found(index) + except Exception as e: + self.expecter.errored() + self.error(e) + + def eof_received(self): + # N.B. If this gets called, async will close the pipe (the spawn object) + # for us + try: + self.expecter.spawn.flag_eof = True + index = self.expecter.eof() + except EOF as e: + self.error(e) + else: + self.found(index) + + def connection_lost(self, exc): + if isinstance(exc, OSError) and exc.errno == errno.EIO: + # We may get here without eof_received being called, e.g. on Linux + self.eof_received() + elif exc is not None: + self.error(exc) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_w_await.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_w_await.py new file mode 100644 index 0000000000000000000000000000000000000000..59cb1ef13d5041ddf24398891db9fa41c2ad63d2 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/_async_w_await.py @@ -0,0 +1,118 @@ +"""Implementation of coroutines using ``async def``/``await`` keywords. + +These keywords replaced ``@asyncio.coroutine`` and ``yield from`` from +Python 3.5 onwards. +""" +import asyncio +import errno +import signal +from sys import version_info as py_version_info + +from pexpect import EOF + +if py_version_info >= (3, 7): + # get_running_loop, new in 3.7, is preferred to get_event_loop + _loop_getter = asyncio.get_running_loop +else: + # Deprecation warning since 3.10 + _loop_getter = asyncio.get_event_loop + + +async def expect_async(expecter, timeout=None): + # First process data that was previously read - if it matches, we don't need + # async stuff.
+ idx = expecter.existing_data() + if idx is not None: + return idx + if not expecter.spawn.async_pw_transport: + pattern_waiter = PatternWaiter() + pattern_waiter.set_expecter(expecter) + transport, pattern_waiter = await _loop_getter().connect_read_pipe( + lambda: pattern_waiter, expecter.spawn + ) + expecter.spawn.async_pw_transport = pattern_waiter, transport + else: + pattern_waiter, transport = expecter.spawn.async_pw_transport + pattern_waiter.set_expecter(expecter) + transport.resume_reading() + try: + return await asyncio.wait_for(pattern_waiter.fut, timeout) + except asyncio.TimeoutError as exc: + transport.pause_reading() + return expecter.timeout(exc) + + +async def repl_run_command_async(repl, cmdlines, timeout=-1): + res = [] + repl.child.sendline(cmdlines[0]) + for line in cmdlines[1:]: + await repl._expect_prompt(timeout=timeout, async_=True) + res.append(repl.child.before) + repl.child.sendline(line) + + # Command was fully submitted, now wait for the next prompt + prompt_idx = await repl._expect_prompt(timeout=timeout, async_=True) + if prompt_idx == 1: + # We got the continuation prompt - command was incomplete + repl.child.kill(signal.SIGINT) + await repl._expect_prompt(timeout=1, async_=True) + raise ValueError("Continuation prompt found - input was incomplete:") + return "".join(res + [repl.child.before]) + + +class PatternWaiter(asyncio.Protocol): + transport = None + + def set_expecter(self, expecter): + self.expecter = expecter + self.fut = asyncio.Future() + + def found(self, result): + if not self.fut.done(): + self.fut.set_result(result) + self.transport.pause_reading() + + def error(self, exc): + if not self.fut.done(): + self.fut.set_exception(exc) + self.transport.pause_reading() + + def connection_made(self, transport): + self.transport = transport + + def data_received(self, data): + spawn = self.expecter.spawn + s = spawn._decoder.decode(data) + spawn._log(s, "read") + + if self.fut.done(): + spawn._before.write(s) + 
spawn._buffer.write(s) + return + + try: + index = self.expecter.new_data(s) + if index is not None: + # Found a match + self.found(index) + except Exception as exc: + self.expecter.errored() + self.error(exc) + + def eof_received(self): + # N.B. If this gets called, async will close the pipe (the spawn object) + # for us + try: + self.expecter.spawn.flag_eof = True + index = self.expecter.eof() + except EOF as exc: + self.error(exc) + else: + self.found(index) + + def connection_lost(self, exc): + if isinstance(exc, OSError) and exc.errno == errno.EIO: + # We may get here without eof_received being called, e.g. on Linux + self.eof_received() + elif exc is not None: + self.error(exc) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/bashrc.sh b/evalkit_internvl/lib/python3.10/site-packages/pexpect/bashrc.sh new file mode 100644 index 0000000000000000000000000000000000000000..d75d1a5b626e4c6c12b17655e089a1ecc72fc70e --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/bashrc.sh @@ -0,0 +1,18 @@ +# Different platforms have different names for the systemwide bashrc +if [[ -f /etc/bashrc ]]; then + source /etc/bashrc +fi +if [[ -f /etc/bash.bashrc ]]; then + source /etc/bash.bashrc +fi +if [[ -f ~/.bashrc ]]; then + source ~/.bashrc +fi + +# Reset PS1 so pexpect can find it +PS1="$" + +# Unset PROMPT_COMMAND, so that it can't change PS1 to something unexpected.
+unset PROMPT_COMMAND + +bind 'set enable-bracketed-paste off' diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/exceptions.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/exceptions.py new file mode 100644 index 0000000000000000000000000000000000000000..cb360f02614304b306714b309cbc9b0a0f2ff770 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/exceptions.py @@ -0,0 +1,35 @@ +"""Exception classes used by Pexpect""" + +import traceback +import sys + +class ExceptionPexpect(Exception): + '''Base class for all exceptions raised by this module. + ''' + + def __init__(self, value): + super(ExceptionPexpect, self).__init__(value) + self.value = value + + def __str__(self): + return str(self.value) + + def get_trace(self): + '''This returns an abbreviated stack trace with lines that only concern + the caller. In other words, the stack trace inside the Pexpect module + is not included. ''' + + tblist = traceback.extract_tb(sys.exc_info()[2]) + tblist = [item for item in tblist if ('pexpect/__init__' not in item[0]) + and ('pexpect/expect' not in item[0])] + tblist = traceback.format_list(tblist) + return ''.join(tblist) + + +class EOF(ExceptionPexpect): + '''Raised when EOF is read from a child. + This usually means the child has exited.''' + + +class TIMEOUT(ExceptionPexpect): + '''Raised when a read time exceeds the timeout. 
''' diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/expect.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/expect.py new file mode 100644 index 0000000000000000000000000000000000000000..d3409db9d73d55d5b59fa15816be1021af1439bc --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/expect.py @@ -0,0 +1,371 @@ +import time + +from .exceptions import EOF, TIMEOUT + +class Expecter(object): + def __init__(self, spawn, searcher, searchwindowsize=-1): + self.spawn = spawn + self.searcher = searcher + # A value of -1 means to use the figure from spawn, which should + # be None or a positive number. + if searchwindowsize == -1: + searchwindowsize = spawn.searchwindowsize + self.searchwindowsize = searchwindowsize + self.lookback = None + if hasattr(searcher, 'longest_string'): + self.lookback = searcher.longest_string + + def do_search(self, window, freshlen): + spawn = self.spawn + searcher = self.searcher + if freshlen > len(window): + freshlen = len(window) + index = searcher.search(window, freshlen, self.searchwindowsize) + if index >= 0: + spawn._buffer = spawn.buffer_type() + spawn._buffer.write(window[searcher.end:]) + spawn.before = spawn._before.getvalue()[ + 0:-(len(window) - searcher.start)] + spawn._before = spawn.buffer_type() + spawn._before.write(window[searcher.end:]) + spawn.after = window[searcher.start:searcher.end] + spawn.match = searcher.match + spawn.match_index = index + # Found a match + return index + elif self.searchwindowsize or self.lookback: + maintain = self.searchwindowsize or self.lookback + if spawn._buffer.tell() > maintain: + spawn._buffer = spawn.buffer_type() + spawn._buffer.write(window[-maintain:]) + + def existing_data(self): + # First call from a new call to expect_loop or expect_async. + # self.searchwindowsize may have changed. + # Treat all data as fresh. 
+ spawn = self.spawn + before_len = spawn._before.tell() + buf_len = spawn._buffer.tell() + freshlen = before_len + if before_len > buf_len: + if not self.searchwindowsize: + spawn._buffer = spawn.buffer_type() + window = spawn._before.getvalue() + spawn._buffer.write(window) + elif buf_len < self.searchwindowsize: + spawn._buffer = spawn.buffer_type() + spawn._before.seek( + max(0, before_len - self.searchwindowsize)) + window = spawn._before.read() + spawn._buffer.write(window) + else: + spawn._buffer.seek(max(0, buf_len - self.searchwindowsize)) + window = spawn._buffer.read() + else: + if self.searchwindowsize: + spawn._buffer.seek(max(0, buf_len - self.searchwindowsize)) + window = spawn._buffer.read() + else: + window = spawn._buffer.getvalue() + return self.do_search(window, freshlen) + + def new_data(self, data): + # A subsequent call, after a call to existing_data. + spawn = self.spawn + freshlen = len(data) + spawn._before.write(data) + if not self.searchwindowsize: + if self.lookback: + # search lookback + new data. + old_len = spawn._buffer.tell() + spawn._buffer.write(data) + spawn._buffer.seek(max(0, old_len - self.lookback)) + window = spawn._buffer.read() + else: + # copy the whole buffer (really slow for large datasets). 
+ spawn._buffer.write(data) + window = spawn.buffer + else: + if len(data) >= self.searchwindowsize or not spawn._buffer.tell(): + window = data[-self.searchwindowsize:] + spawn._buffer = spawn.buffer_type() + spawn._buffer.write(window[-self.searchwindowsize:]) + else: + spawn._buffer.write(data) + new_len = spawn._buffer.tell() + spawn._buffer.seek(max(0, new_len - self.searchwindowsize)) + window = spawn._buffer.read() + return self.do_search(window, freshlen) + + def eof(self, err=None): + spawn = self.spawn + + spawn.before = spawn._before.getvalue() + spawn._buffer = spawn.buffer_type() + spawn._before = spawn.buffer_type() + spawn.after = EOF + index = self.searcher.eof_index + if index >= 0: + spawn.match = EOF + spawn.match_index = index + return index + else: + spawn.match = None + spawn.match_index = None + msg = str(spawn) + msg += '\nsearcher: %s' % self.searcher + if err is not None: + msg = str(err) + '\n' + msg + + exc = EOF(msg) + exc.__cause__ = None # in Python 3.x we can use "raise exc from None" + raise exc + + def timeout(self, err=None): + spawn = self.spawn + + spawn.before = spawn._before.getvalue() + spawn.after = TIMEOUT + index = self.searcher.timeout_index + if index >= 0: + spawn.match = TIMEOUT + spawn.match_index = index + return index + else: + spawn.match = None + spawn.match_index = None + msg = str(spawn) + msg += '\nsearcher: %s' % self.searcher + if err is not None: + msg = str(err) + '\n' + msg + + exc = TIMEOUT(msg) + exc.__cause__ = None # in Python 3.x we can use "raise exc from None" + raise exc + + def errored(self): + spawn = self.spawn + spawn.before = spawn._before.getvalue() + spawn.after = None + spawn.match = None + spawn.match_index = None + + def expect_loop(self, timeout=-1): + """Blocking expect""" + spawn = self.spawn + + if timeout is not None: + end_time = time.time() + timeout + + try: + idx = self.existing_data() + if idx is not None: + return idx + while True: + # No match at this point + if (timeout is 
not None) and (timeout < 0): + return self.timeout() + # Still have time left, so read more data + incoming = spawn.read_nonblocking(spawn.maxread, timeout) + if self.spawn.delayafterread is not None: + time.sleep(self.spawn.delayafterread) + idx = self.new_data(incoming) + # Keep reading until exception or return. + if idx is not None: + return idx + if timeout is not None: + timeout = end_time - time.time() + except EOF as e: + return self.eof(e) + except TIMEOUT as e: + return self.timeout(e) + except: + self.errored() + raise + + +class searcher_string(object): + '''This is a plain string search helper for the spawn.expect_any() method. + This helper class is for speed. For more powerful regex patterns + see the helper class, searcher_re. + + Attributes: + + eof_index - index of EOF, or -1 + timeout_index - index of TIMEOUT, or -1 + + After a successful match by the search() method the following attributes + are available: + + start - index into the buffer, first byte of match + end - index into the buffer, first byte after match + match - the matching string itself + + ''' + + def __init__(self, strings): + '''This creates an instance of searcher_string. This argument 'strings' + may be a list; a sequence of strings; or the EOF or TIMEOUT types. 
''' + + self.eof_index = -1 + self.timeout_index = -1 + self._strings = [] + self.longest_string = 0 + for n, s in enumerate(strings): + if s is EOF: + self.eof_index = n + continue + if s is TIMEOUT: + self.timeout_index = n + continue + self._strings.append((n, s)) + if len(s) > self.longest_string: + self.longest_string = len(s) + + def __str__(self): + '''This returns a human-readable string that represents the state of + the object.''' + + ss = [(ns[0], ' %d: %r' % ns) for ns in self._strings] + ss.append((-1, 'searcher_string:')) + if self.eof_index >= 0: + ss.append((self.eof_index, ' %d: EOF' % self.eof_index)) + if self.timeout_index >= 0: + ss.append((self.timeout_index, + ' %d: TIMEOUT' % self.timeout_index)) + ss.sort() + ss = list(zip(*ss))[1] + return '\n'.join(ss) + + def search(self, buffer, freshlen, searchwindowsize=None): + '''This searches 'buffer' for the first occurrence of one of the search + strings. 'freshlen' must indicate the number of bytes at the end of + 'buffer' which have not been searched before. It helps to avoid + searching the same, possibly big, buffer over and over again. + + See class spawn for the 'searchwindowsize' argument. + + If there is a match this returns the index of that string, and sets + 'start', 'end' and 'match'. Otherwise, this returns -1. ''' + + first_match = None + + # 'freshlen' helps a lot here. Further optimizations could + # possibly include: + # + # using something like the Boyer-Moore Fast String Searching + # Algorithm; pre-compiling the search through a list of + # strings into something that can scan the input once to + # search for all N strings; realize that if we search for + # ['bar', 'baz'] and the input is '...foo' we need not bother + # rescanning until we've read three more bytes. + # + # Sadly, I don't know enough about this interesting topic. 
/grahn + + for index, s in self._strings: + if searchwindowsize is None: + # the match, if any, can only be in the fresh data, + # or at the very end of the old data + offset = -(freshlen + len(s)) + else: + # better obey searchwindowsize + offset = -searchwindowsize + n = buffer.find(s, offset) + if n >= 0 and (first_match is None or n < first_match): + first_match = n + best_index, best_match = index, s + if first_match is None: + return -1 + self.match = best_match + self.start = first_match + self.end = self.start + len(self.match) + return best_index + + +class searcher_re(object): + '''This is regular expression string search helper for the + spawn.expect_any() method. This helper class is for powerful + pattern matching. For speed, see the helper class, searcher_string. + + Attributes: + + eof_index - index of EOF, or -1 + timeout_index - index of TIMEOUT, or -1 + + After a successful match by the search() method the following attributes + are available: + + start - index into the buffer, first byte of match + end - index into the buffer, first byte after match + match - the re.match object returned by a successful re.search + + ''' + + def __init__(self, patterns): + '''This creates an instance that searches for 'patterns' Where + 'patterns' may be a list or other sequence of compiled regular + expressions, or the EOF or TIMEOUT types.''' + + self.eof_index = -1 + self.timeout_index = -1 + self._searches = [] + for n, s in enumerate(patterns): + if s is EOF: + self.eof_index = n + continue + if s is TIMEOUT: + self.timeout_index = n + continue + self._searches.append((n, s)) + + def __str__(self): + '''This returns a human-readable string that represents the state of + the object.''' + + #ss = [(n, ' %d: re.compile("%s")' % + # (n, repr(s.pattern))) for n, s in self._searches] + ss = list() + for n, s in self._searches: + ss.append((n, ' %d: re.compile(%r)' % (n, s.pattern))) + ss.append((-1, 'searcher_re:')) + if self.eof_index >= 0: + 
ss.append((self.eof_index, ' %d: EOF' % self.eof_index)) + if self.timeout_index >= 0: + ss.append((self.timeout_index, ' %d: TIMEOUT' % + self.timeout_index)) + ss.sort() + ss = list(zip(*ss))[1] + return '\n'.join(ss) + + def search(self, buffer, freshlen, searchwindowsize=None): + '''This searches 'buffer' for the first occurrence of one of the regular + expressions. 'freshlen' must indicate the number of bytes at the end of + 'buffer' which have not been searched before. + + See class spawn for the 'searchwindowsize' argument. + + If there is a match this returns the index of that string, and sets + 'start', 'end' and 'match'. Otherwise, returns -1.''' + + first_match = None + # 'freshlen' doesn't help here -- we cannot predict the + # length of a match, and the re module provides no help. + if searchwindowsize is None: + searchstart = 0 + else: + searchstart = max(0, len(buffer) - searchwindowsize) + for index, s in self._searches: + match = s.search(buffer, searchstart) + if match is None: + continue + n = match.start() + if first_match is None or n < first_match: + first_match = n + the_match = match + best_index = index + if first_match is None: + return -1 + self.start = first_match + self.match = the_match + self.end = self.match.end() + return best_index diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/fdpexpect.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/fdpexpect.py new file mode 100644 index 0000000000000000000000000000000000000000..140bdfeeda6992acf01dae8bf2d058ac84fc8647 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/fdpexpect.py @@ -0,0 +1,152 @@ +'''This is like :mod:`pexpect`, but it will work with any file descriptor that you +pass it. You are responsible for opening and closing the file descriptor. +This allows you to use Pexpect with sockets and named pipes (FIFOs). + +.. note:: + socket.fileno() does not give a readable file descriptor on Windows.
+ Use :mod:`pexpect.socket_pexpect` for cross-platform socket support + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier <noah@noah.org> + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. + THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +from .spawnbase import SpawnBase +from .exceptions import ExceptionPexpect, TIMEOUT +from .utils import select_ignore_interrupts, poll_ignore_interrupts +import os + +__all__ = ['fdspawn'] + +class fdspawn(SpawnBase): + '''This is like pexpect.spawn but allows you to supply your own open file + descriptor. For example, you could use it to read through a file looking + for patterns, or to control a modem or serial device. ''' + + def __init__ (self, fd, args=None, timeout=30, maxread=2000, searchwindowsize=None, + logfile=None, encoding=None, codec_errors='strict', use_poll=False): + '''This takes a file descriptor (an int) or an object that supports the + fileno() method (returning an int). All Python file-like objects + support fileno(). ''' + + if type(fd) != type(0) and hasattr(fd, 'fileno'): + fd = fd.fileno() + + if type(fd) != type(0): + raise ExceptionPexpect('The fd argument is not an int.
If this is a command string then maybe you want to use pexpect.spawn.') + + try: # make sure fd is a valid file descriptor + os.fstat(fd) + except OSError: + raise ExceptionPexpect('The fd argument is not a valid file descriptor.') + + self.args = None + self.command = None + SpawnBase.__init__(self, timeout, maxread, searchwindowsize, logfile, + encoding=encoding, codec_errors=codec_errors) + self.child_fd = fd + self.own_fd = False + self.closed = False + self.name = '<file descriptor %d>' % fd + self.use_poll = use_poll + + def close (self): + """Close the file descriptor. + + Calling this method a second time does nothing, but if the file + descriptor was closed elsewhere, :class:`OSError` will be raised. + """ + if self.child_fd == -1: + return + + self.flush() + os.close(self.child_fd) + self.child_fd = -1 + self.closed = True + + def isalive (self): + '''This checks if the file descriptor is still valid. If :func:`os.fstat` + does not raise an exception then we assume it is alive. ''' + + if self.child_fd == -1: + return False + try: + os.fstat(self.child_fd) + return True + except: + return False + + def terminate (self, force=False): # pragma: no cover + '''Deprecated and invalid. Just raises an exception.''' + raise ExceptionPexpect('This method is not valid for file descriptors.') + + # These four methods are left around for backwards compatibility, but not + # documented as part of fdpexpect. You're encouraged to use os.write + # directly.
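The fdspawn pattern above (validate a descriptor, wait for it to become readable, then read) can be sketched with nothing but the standard library. This is an illustration of the mechanism only, not pexpect's actual implementation; a pipe stands in for the socket or FIFO you would normally pass in, and `read_ready` is a hypothetical helper name:

```python
import os
import select

def read_ready(fd, timeout):
    """Return bytes from fd if it becomes readable within timeout, else None."""
    rlist, _, _ = select.select([fd], [], [], timeout)
    if fd not in rlist:
        return None  # fdspawn would raise TIMEOUT here
    return os.read(fd, 2000)

# A pipe stands in for a socket or named pipe (FIFO).
r, w = os.pipe()
os.write(w, b"login: ")
data = read_ready(r, 1.0)
print(data)  # b'login: '
os.close(r)
os.close(w)
```

The same idea underlies `read_nonblocking` below: block in `select` (or `poll`) up to the timeout, and only then issue the `os.read`.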
+ def send(self, s): + "Write to fd, return number of bytes written" + s = self._coerce_send_string(s) + self._log(s, 'send') + + b = self._encoder.encode(s, final=False) + return os.write(self.child_fd, b) + + def sendline(self, s): + "Write to fd with trailing newline, return number of bytes written" + s = self._coerce_send_string(s) + return self.send(s + self.linesep) + + def write(self, s): + "Write to fd, return None" + self.send(s) + + def writelines(self, sequence): + "Call self.write() for each item in sequence" + for s in sequence: + self.write(s) + + def read_nonblocking(self, size=1, timeout=-1): + """ + Read from the file descriptor and return the result as a string. + + The read_nonblocking method of :class:`SpawnBase` assumes that a call + to os.read will not block (timeout parameter is ignored). This is not + the case for POSIX file-like objects such as sockets and serial ports. + + Use :func:`select.select`, timeout is implemented conditionally for + POSIX systems. + + :param int size: Read at most *size* bytes. + :param int timeout: Wait timeout seconds for file descriptor to be + ready to read. When -1 (default), use self.timeout. When 0, poll. 
+ :return: String containing the bytes read + """ + if os.name == 'posix': + if timeout == -1: + timeout = self.timeout + rlist = [self.child_fd] + wlist = [] + xlist = [] + if self.use_poll: + rlist = poll_ignore_interrupts(rlist, timeout) + else: + rlist, wlist, xlist = select_ignore_interrupts( + rlist, wlist, xlist, timeout + ) + if self.child_fd not in rlist: + raise TIMEOUT('Timeout exceeded.') + return super(fdspawn, self).read_nonblocking(size) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/popen_spawn.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/popen_spawn.py new file mode 100644 index 0000000000000000000000000000000000000000..e6bdf07d614a462a34cc6114e6ab40afa3562b72 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/popen_spawn.py @@ -0,0 +1,188 @@ +"""Provides an interface like pexpect.spawn interface using subprocess.Popen +""" +import os +import threading +import subprocess +import sys +import time +import signal +import shlex + +try: + from queue import Queue, Empty # Python 3 +except ImportError: + from Queue import Queue, Empty # Python 2 + +from .spawnbase import SpawnBase, PY3 +from .exceptions import EOF +from .utils import string_types + +class PopenSpawn(SpawnBase): + def __init__(self, cmd, timeout=30, maxread=2000, searchwindowsize=None, + logfile=None, cwd=None, env=None, encoding=None, + codec_errors='strict', preexec_fn=None): + super(PopenSpawn, self).__init__(timeout=timeout, maxread=maxread, + searchwindowsize=searchwindowsize, logfile=logfile, + encoding=encoding, codec_errors=codec_errors) + + # Note that `SpawnBase` initializes `self.crlf` to `\r\n` + # because the default behaviour for a PTY is to convert + # incoming LF to `\r\n` (see the `onlcr` flag and + # https://stackoverflow.com/a/35887657/5397009). Here we set + # it to `os.linesep` because that is what the spawned + # application outputs by default and `popen` doesn't translate + # anything. 
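The comment above about `self.crlf` is the key behavioral difference between a pipe and a pty: a pipe delivers the child's line endings untranslated, while a pty's line discipline (the `onlcr` flag) rewrites LF as CRLF on output. A POSIX-only stdlib sketch, assuming a standard `printf` binary and default termios settings:

```python
import os
import pty
import subprocess

# Through a pipe: the newline arrives exactly as the child wrote it.
piped = subprocess.run(["printf", "hi\\n"], stdout=subprocess.PIPE).stdout
print(piped)  # b'hi\n'

# Through a pty: the line discipline (onlcr) converts the same LF to CRLF.
master, slave = pty.openpty()
subprocess.run(["printf", "hi\\n"], stdout=slave)
os.close(slave)
via_pty = os.read(master, 100)
print(via_pty)  # b'hi\r\n'
os.close(master)
```

This is why PopenSpawn sets `self.crlf` to `os.linesep` while the pty-based `spawn` keeps the `\r\n` default.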
+ if encoding is None: + self.crlf = os.linesep.encode ("ascii") + else: + self.crlf = self.string_type (os.linesep) + + kwargs = dict(bufsize=0, stdin=subprocess.PIPE, + stderr=subprocess.STDOUT, stdout=subprocess.PIPE, + cwd=cwd, preexec_fn=preexec_fn, env=env) + + if sys.platform == 'win32': + startupinfo = subprocess.STARTUPINFO() + startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW + kwargs['startupinfo'] = startupinfo + kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP + + if isinstance(cmd, string_types) and sys.platform != 'win32': + cmd = shlex.split(cmd, posix=os.name == 'posix') + + self.proc = subprocess.Popen(cmd, **kwargs) + self.pid = self.proc.pid + self.closed = False + self._buf = self.string_type() + + self._read_queue = Queue() + self._read_thread = threading.Thread(target=self._read_incoming) + self._read_thread.daemon = True + self._read_thread.start() + + _read_reached_eof = False + + def read_nonblocking(self, size, timeout): + buf = self._buf + if self._read_reached_eof: + # We have already finished reading. 
Use up any buffered data, + # then raise EOF + if buf: + self._buf = buf[size:] + return buf[:size] + else: + self.flag_eof = True + raise EOF('End Of File (EOF).') + + if timeout == -1: + timeout = self.timeout + elif timeout is None: + timeout = 1e6 + + t0 = time.time() + while (time.time() - t0) < timeout and size and len(buf) < size: + try: + incoming = self._read_queue.get_nowait() + except Empty: + break + else: + if incoming is None: + self._read_reached_eof = True + break + + buf += self._decoder.decode(incoming, final=False) + + r, self._buf = buf[:size], buf[size:] + + self._log(r, 'read') + return r + + def _read_incoming(self): + """Run in a thread to move output from a pipe to a queue.""" + fileno = self.proc.stdout.fileno() + while 1: + buf = b'' + try: + buf = os.read(fileno, 1024) + except OSError as e: + self._log(e, 'read') + + if not buf: + # This indicates we have reached EOF + self._read_queue.put(None) + return + + self._read_queue.put(buf) + + def write(self, s): + '''This is similar to send() except that there is no return value. + ''' + self.send(s) + + def writelines(self, sequence): + '''This calls write() for each element in the sequence. + + The sequence can be any iterable object producing strings, typically a + list of strings. This does not add line separators. There is no return + value. + ''' + for s in sequence: + self.send(s) + + def send(self, s): + '''Send data to the subprocess' stdin. + + Returns the number of bytes written. + ''' + s = self._coerce_send_string(s) + self._log(s, 'send') + + b = self._encoder.encode(s, final=False) + if PY3: + return self.proc.stdin.write(b) + else: + # On Python 2, .write() returns None, so we return the length of + # bytes written ourselves. This assumes they all got written. + self.proc.stdin.write(b) + return len(b) + + def sendline(self, s=''): + '''Wraps send(), sending string ``s`` to child process, with os.linesep + automatically appended. Returns number of bytes written. 
''' + + n = self.send(s) + return n + self.send(self.linesep) + + def wait(self): + '''Wait for the subprocess to finish. + + Returns the exit code. + ''' + status = self.proc.wait() + if status >= 0: + self.exitstatus = status + self.signalstatus = None + else: + self.exitstatus = None + self.signalstatus = -status + self.terminated = True + return status + + def kill(self, sig): + '''Sends a Unix signal to the subprocess. + + Use constants from the :mod:`signal` module to specify which signal. + ''' + if sys.platform == 'win32': + if sig in [signal.SIGINT, signal.CTRL_C_EVENT]: + sig = signal.CTRL_C_EVENT + elif sig in [signal.SIGBREAK, signal.CTRL_BREAK_EVENT]: + sig = signal.CTRL_BREAK_EVENT + else: + sig = signal.SIGTERM + + os.kill(self.proc.pid, sig) + + def sendeof(self): + '''Closes the stdin pipe from the writing end.''' + self.proc.stdin.close() diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/pty_spawn.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/pty_spawn.py new file mode 100644 index 0000000000000000000000000000000000000000..8e28ca7cd7de46ede116b2b9d354e50669f05de2 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/pty_spawn.py @@ -0,0 +1,860 @@ +import os +import sys +import time +import pty +import tty +import errno +import signal +from contextlib import contextmanager + +import ptyprocess +from ptyprocess.ptyprocess import use_native_pty_fork + +from .exceptions import ExceptionPexpect, EOF, TIMEOUT +from .spawnbase import SpawnBase +from .utils import ( + which, split_command_line, select_ignore_interrupts, poll_ignore_interrupts +) + +@contextmanager +def _wrap_ptyprocess_err(): + """Turn ptyprocess errors into our own ExceptionPexpect errors""" + try: + yield + except ptyprocess.PtyProcessError as e: + raise ExceptionPexpect(*e.args) + +PY3 = (sys.version_info[0] >= 3) + +class spawn(SpawnBase): + '''This is the main class interface for Pexpect. 
Use this class to start + and control child applications. ''' + + # This is purely informational now - changing it has no effect + use_native_pty_fork = use_native_pty_fork + + def __init__(self, command, args=[], timeout=30, maxread=2000, + searchwindowsize=None, logfile=None, cwd=None, env=None, + ignore_sighup=False, echo=True, preexec_fn=None, + encoding=None, codec_errors='strict', dimensions=None, + use_poll=False): + '''This is the constructor. The command parameter may be a string that + includes a command and any arguments to the command. For example:: + + child = pexpect.spawn('/usr/bin/ftp') + child = pexpect.spawn('/usr/bin/ssh user@example.com') + child = pexpect.spawn('ls -latr /tmp') + + You may also construct it with a list of arguments like so:: + + child = pexpect.spawn('/usr/bin/ftp', []) + child = pexpect.spawn('/usr/bin/ssh', ['user@example.com']) + child = pexpect.spawn('ls', ['-latr', '/tmp']) + + After this the child application will be created and will be ready to + talk to. For normal use, see expect() and send() and sendline(). + + Remember that Pexpect does NOT interpret shell meta characters such as + redirect, pipe, or wild cards (``>``, ``|``, or ``*``). This is a + common mistake. If you want to run a command and pipe it through + another command then you must also start a shell. For example:: + + child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > logs.txt"') + child.expect(pexpect.EOF) + + The second form of spawn (where you pass a list of arguments) is useful + in situations where you wish to spawn a command and pass it its own + argument list. This can make syntax more clear. For example, the + following is equivalent to the previous example:: + + shell_cmd = 'ls -l | grep LOG > logs.txt' + child = pexpect.spawn('/bin/bash', ['-c', shell_cmd]) + child.expect(pexpect.EOF) + + The maxread attribute sets the read buffer size. This is maximum number + of bytes that Pexpect will try to read from a TTY at one time. 
Setting + the maxread size to 1 will turn off buffering. Setting the maxread + value higher may help performance in cases where large amounts of + output are read back from the child. This feature is useful in + conjunction with searchwindowsize. + + When the keyword argument *searchwindowsize* is None (default), the + full buffer is searched at each iteration of receiving incoming data. + The default number of bytes scanned at each iteration is very large + and may be reduced to collaterally reduce search cost. After + :meth:`~.expect` returns, the full buffer attribute remains up to + size *maxread* irrespective of *searchwindowsize* value. + + When the keyword argument ``timeout`` is specified as a number, + (default: *30*), then :class:`TIMEOUT` will be raised after the value + specified has elapsed, in seconds, for any of the :meth:`~.expect` + family of method calls. When None, TIMEOUT will not be raised, and + :meth:`~.expect` may block indefinitely until match. + + + The logfile member turns on or off logging. All input and output will + be copied to the given file object. Set logfile to None to stop + logging. This is the default. Set logfile to sys.stdout to echo + everything to standard output. The logfile is flushed after each write. + + Example log input and output to a file:: + + child = pexpect.spawn('some_command') + fout = open('mylog.txt','wb') + child.logfile = fout + + Example log to stdout:: + + # In Python 2: + child = pexpect.spawn('some_command') + child.logfile = sys.stdout + + # In Python 3, we'll use the ``encoding`` argument to decode data + # from the subprocess and handle it as unicode: + child = pexpect.spawn('some_command', encoding='utf-8') + child.logfile = sys.stdout + + The logfile_read and logfile_send members can be used to separately log + the input from the child and output sent to the child. Sometimes you + don't want to see everything you write to the child. You only want to + log what the child sends back. 
For example:: + + child = pexpect.spawn('some_command') + child.logfile_read = sys.stdout + + You will need to pass an encoding to spawn in the above code if you are + using Python 3. + + To separately log output sent to the child use logfile_send:: + + child.logfile_send = fout + + If ``ignore_sighup`` is True, the child process will ignore SIGHUP + signals. The default is False from Pexpect 4.0, meaning that SIGHUP + will be handled normally by the child. + + The delaybeforesend helps overcome a weird behavior that many users + were experiencing. The typical problem was that a user would expect() a + "Password:" prompt and then immediately call sendline() to send the + password. The user would then see that their password was echoed back + to them. Passwords don't normally echo. The problem is caused by the + fact that most applications print out the "Password" prompt and then + turn off stdin echo, but if you send your password before the + application turned off echo, then you get your password echoed. + Normally this wouldn't be a problem when interacting with a human at a + real keyboard. If you introduce a slight delay just before writing then + this seems to clear up the problem. This was such a common problem for + many users that I decided that the default pexpect behavior should be + to sleep just before writing to the child application. 1/20th of a + second (50 ms) seems to be enough to clear up the problem. You can set + delaybeforesend to None to return to the old behavior. + + Note that spawn is clever about finding commands on your path. + It uses the same logic that "which" uses to find executables. + + If you wish to get the exit status of the child you must call the + close() method. The exit or signal status of the child will be stored + in self.exitstatus or self.signalstatus. If the child exited normally + then exitstatus will store the exit return code and signalstatus will + be None. 
If the child was terminated abnormally with a signal then + signalstatus will store the signal value and exitstatus will be None:: + + child = pexpect.spawn('some_command') + child.close() + print(child.exitstatus, child.signalstatus) + + If you need more detail you can also read the self.status member which + stores the status returned by os.waitpid. You can interpret this using + os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG. + + The echo attribute may be set to False to disable echoing of input. + As a pseudo-terminal, all input echoed by the "keyboard" (send() + or sendline()) will be repeated to output. For many cases, it is + not desirable to have echo enabled, and it may be later disabled + using setecho(False) followed by waitnoecho(). However, for some + platforms such as Solaris, this is not possible, and should be + disabled immediately on spawn. + + If preexec_fn is given, it will be called in the child process before + launching the given command. This is useful to e.g. reset inherited + signal handlers. + + The dimensions attribute specifies the size of the pseudo-terminal as + seen by the subprocess, and is specified as a two-entry tuple (rows, + columns). If this is unspecified, the defaults in ptyprocess will apply. + + The use_poll attribute enables using select.poll() over select.select() + for socket handling.
This is handy if your system could have > 1024 fds + ''' + super(spawn, self).__init__(timeout=timeout, maxread=maxread, searchwindowsize=searchwindowsize, + logfile=logfile, encoding=encoding, codec_errors=codec_errors) + self.STDIN_FILENO = pty.STDIN_FILENO + self.STDOUT_FILENO = pty.STDOUT_FILENO + self.STDERR_FILENO = pty.STDERR_FILENO + self.str_last_chars = 100 + self.cwd = cwd + self.env = env + self.echo = echo + self.ignore_sighup = ignore_sighup + self.__irix_hack = sys.platform.lower().startswith('irix') + if command is None: + self.command = None + self.args = None + self.name = '<pexpect factory incomplete>' + else: + self._spawn(command, args, preexec_fn, dimensions) + self.use_poll = use_poll + + def __str__(self): + '''This returns a human-readable string that represents the state of + the object. ''' + + s = [] + s.append(repr(self)) + s.append('command: ' + str(self.command)) + s.append('args: %r' % (self.args,)) + s.append('buffer (last %s chars): %r' % (self.str_last_chars,self.buffer[-self.str_last_chars:])) + s.append('before (last %s chars): %r' % (self.str_last_chars,self.before[-self.str_last_chars:] if self.before else '')) + s.append('after: %r' % (self.after,)) + s.append('match: %r' % (self.match,)) + s.append('match_index: ' + str(self.match_index)) + s.append('exitstatus: ' + str(self.exitstatus)) + if hasattr(self, 'ptyproc'): + s.append('flag_eof: ' + str(self.flag_eof)) + s.append('pid: ' + str(self.pid)) + s.append('child_fd: ' + str(self.child_fd)) + s.append('closed: ' + str(self.closed)) + s.append('timeout: ' + str(self.timeout)) + s.append('delimiter: ' + str(self.delimiter)) + s.append('logfile: ' + str(self.logfile)) + s.append('logfile_read: ' + str(self.logfile_read)) + s.append('logfile_send: ' + str(self.logfile_send)) + s.append('maxread: ' + str(self.maxread)) + s.append('ignorecase: ' + str(self.ignorecase)) + s.append('searchwindowsize: ' + str(self.searchwindowsize)) + s.append('delaybeforesend: ' + str(self.delaybeforesend)) +
s.append('delayafterclose: ' + str(self.delayafterclose)) + s.append('delayafterterminate: ' + str(self.delayafterterminate)) + return '\n'.join(s) + + def _spawn(self, command, args=[], preexec_fn=None, dimensions=None): + '''This starts the given command in a child process. This does all the + fork/exec type of stuff for a pty. This is called by __init__. If args + is empty then command will be parsed (split on spaces) and args will be + set to parsed arguments. ''' + + # The pid and child_fd of this object get set by this method. + # Note that it is difficult for this method to fail. + # You cannot detect if the child process cannot start. + # So the only way you can tell if the child process started + # or not is to try to read from the file descriptor. If you get + # EOF immediately then it means that the child is already dead. + # That may not necessarily be bad because you may have spawned a child + # that performs some task; creates no stdout output; and then dies. + + # If command is an int type then it may represent a file descriptor. + if isinstance(command, type(0)): + raise ExceptionPexpect('Command is an int type. ' + + 'If this is a file descriptor then maybe you want to ' + + 'use fdpexpect.fdspawn which takes an existing ' + + 'file descriptor instead of a command string.') + + if not isinstance(args, type([])): + raise TypeError('The argument, args, must be a list.') + + if args == []: + self.args = split_command_line(command) + self.command = self.args[0] + else: + # Make a shallow copy of the args list. + self.args = args[:] + self.args.insert(0, command) + self.command = command + + command_with_path = which(self.command, env=self.env) + if command_with_path is None: + raise ExceptionPexpect('The command was not found or was not ' + + 'executable: %s.' % self.command) + self.command = command_with_path + self.args[0] = self.command + + self.name = '<' + ' '.join(self.args) + '>' + + assert self.pid is None, 'The pid member must be None.' 
+ assert self.command is not None, 'The command member must not be None.' + + kwargs = {'echo': self.echo, 'preexec_fn': preexec_fn} + if self.ignore_sighup: + def preexec_wrapper(): + "Set SIGHUP to be ignored, then call the real preexec_fn" + signal.signal(signal.SIGHUP, signal.SIG_IGN) + if preexec_fn is not None: + preexec_fn() + kwargs['preexec_fn'] = preexec_wrapper + + if dimensions is not None: + kwargs['dimensions'] = dimensions + + if self.encoding is not None: + # Encode command line using the specified encoding + self.args = [a if isinstance(a, bytes) else a.encode(self.encoding) + for a in self.args] + + self.ptyproc = self._spawnpty(self.args, env=self.env, + cwd=self.cwd, **kwargs) + + self.pid = self.ptyproc.pid + self.child_fd = self.ptyproc.fd + + + self.terminated = False + self.closed = False + + def _spawnpty(self, args, **kwargs): + '''Spawn a pty and return an instance of PtyProcess.''' + return ptyprocess.PtyProcess.spawn(args, **kwargs) + + def close(self, force=True): + '''This closes the connection with the child application. Note that + calling close() more than once is valid. This emulates standard Python + behavior with files. Set force to True if you want to make sure that + the child is terminated (SIGKILL is sent if the child ignores SIGHUP + and SIGINT). ''' + + self.flush() + with _wrap_ptyprocess_err(): + # PtyProcessError may be raised if it is not possible to terminate + # the child. + self.ptyproc.close(force=force) + self.isalive() # Update exit status from ptyproc + self.child_fd = -1 + self.closed = True + + def isatty(self): + '''This returns True if the file descriptor is open and connected to a + tty(-like) device, else False. + + On SVR4-style platforms implementing streams, such as SunOS and HP-UX, + the child pty may not appear as a terminal device. This means + methods such as setecho(), setwinsize(), getwinsize() may raise an + IOError. 
''' + + return os.isatty(self.child_fd) + + def waitnoecho(self, timeout=-1): + '''This waits until the terminal ECHO flag is set False. This returns + True if the echo mode is off. This returns False if the ECHO flag was + not set False before the timeout. This can be used to detect when the + child is waiting for a password. Usually a child application will turn + off echo mode when it is waiting for the user to enter a password. For + example, instead of expecting the "password:" prompt you can wait for + the child to set ECHO off:: + + p = pexpect.spawn('ssh user@example.com') + p.waitnoecho() + p.sendline(mypassword) + + If timeout==-1 then this method will use the value in self.timeout. + If timeout==None then this method will block until the ECHO flag is False. + ''' + + if timeout == -1: + timeout = self.timeout + if timeout is not None: + end_time = time.time() + timeout + while True: + if not self.getecho(): + return True + if timeout < 0 and timeout is not None: + return False + if timeout is not None: + timeout = end_time - time.time() + time.sleep(0.1) + + def getecho(self): + '''This returns the terminal echo mode. This returns True if echo is + on or False if echo is off. Child applications that are expecting you + to enter a password often set ECHO False. See waitnoecho(). + + Not supported on platforms where ``isatty()`` returns False. ''' + return self.ptyproc.getecho() + + def setecho(self, state): + '''This sets the terminal echo mode on or off. Note that anything the + child sent before the echo will be lost, so you should be sure that + your input buffer is empty before you call setecho(). For example, the + following will work as expected:: + + p = pexpect.spawn('cat') # Echo is on by default. + p.sendline('1234') # We expect to see this twice from the child... + p.expect(['1234']) # ... once from the tty echo... + p.expect(['1234']) # ... and again from cat itself.
+ p.setecho(False) # Turn off tty echo + p.sendline('abcd') # We will see this only once (echoed by cat). + p.sendline('wxyz') # We will see this only once (echoed by cat) + p.expect(['abcd']) + p.expect(['wxyz']) + + The following WILL NOT WORK because the lines sent before the setecho + will be lost:: + + p = pexpect.spawn('cat') + p.sendline('1234') + p.setecho(False) # Turn off tty echo + p.sendline('abcd') # We will see this only once (echoed by cat). + p.sendline('wxyz') # We will see this only once (echoed by cat) + p.expect(['1234']) + p.expect(['1234']) + p.expect(['abcd']) + p.expect(['wxyz']) + + + Not supported on platforms where ``isatty()`` returns False. + ''' + return self.ptyproc.setecho(state) + + def read_nonblocking(self, size=1, timeout=-1): + '''This reads at most size characters from the child application. It + includes a timeout. If the read does not complete within the timeout + period then a TIMEOUT exception is raised. If the end of file is read + then an EOF exception will be raised. If a logfile is specified, a + copy is written to that log. + + If timeout is None then the read may block indefinitely. + If timeout is -1 then the self.timeout value is used. If timeout is 0 + then the child is polled and if there is no data immediately ready + then this will raise a TIMEOUT exception. + + The timeout refers only to the amount of time to read at least one + character. This is not affected by the 'size' parameter, so if you call + read_nonblocking(size=100, timeout=30) and only one character is + available right away then one character will be returned immediately. + It will not wait for 30 seconds for another 99 characters to come in. + + On the other hand, if there are bytes available to read immediately, + all those bytes will be read (up to the buffer size). So, if the + buffer size is 1 megabyte and there is 1 megabyte of data available + to read, the buffer will be filled, regardless of timeout. + + This is a wrapper around os.read().
It uses select.select() or + select.poll() to implement the timeout. ''' + + if self.closed: + raise ValueError('I/O operation on closed file.') + + if self.use_poll: + def select(timeout): + return poll_ignore_interrupts([self.child_fd], timeout) + else: + def select(timeout): + return select_ignore_interrupts([self.child_fd], [], [], timeout)[0] + + # If there is data available to read right now, read as much as + # we can. We do this to increase performance if there are a lot + # of bytes to be read. This also avoids calling isalive() too + # often. See also: + # * https://github.com/pexpect/pexpect/pull/304 + # * http://trac.sagemath.org/ticket/10295 + if select(0): + try: + incoming = super(spawn, self).read_nonblocking(size) + except EOF: + # Maybe the child is dead: update some attributes in that case + self.isalive() + raise + while len(incoming) < size and select(0): + try: + incoming += super(spawn, self).read_nonblocking(size - len(incoming)) + except EOF: + # Maybe the child is dead: update some attributes in that case + self.isalive() + # Don't raise EOF, just return what we read so far. + return incoming + return incoming + + if timeout == -1: + timeout = self.timeout + + if not self.isalive(): + # The process is dead, but there may or may not be data + # available to read. Note that some systems such as Solaris + # do not give an EOF when the child dies. In fact, you can + # still try to read from the child_fd -- it will block + # forever or until TIMEOUT. For that reason, it's important + # to do this check before calling select() with timeout. + if select(0): + return super(spawn, self).read_nonblocking(size) + self.flag_eof = True + raise EOF('End Of File (EOF). Braindead platform.') + elif self.__irix_hack: + # Irix takes a long time before it realizes a child was terminated. + # Make sure that the timeout is at least 2 seconds. 
+ # FIXME So does this mean Irix systems are forced to always have + # FIXME a 2 second delay when calling read_nonblocking? That sucks. + if timeout is not None and timeout < 2: + timeout = 2 + + # Because of the select(0) check above, we know that no data + # is available right now. But if a non-zero timeout is given + # (possibly timeout=None), we call select() with a timeout. + if (timeout != 0) and select(timeout): + return super(spawn, self).read_nonblocking(size) + + if not self.isalive(): + # Some platforms, such as Irix, will claim that their + # processes are alive; timeout on the select; and + # then finally admit that they are not alive. + self.flag_eof = True + raise EOF('End of File (EOF). Very slow platform.') + else: + raise TIMEOUT('Timeout exceeded.') + + def write(self, s): + '''This is similar to send() except that there is no return value. + ''' + + self.send(s) + + def writelines(self, sequence): + '''This calls write() for each element in the sequence. The sequence + can be any iterable object producing strings, typically a list of + strings. This does not add line separators. There is no return value. + ''' + + for s in sequence: + self.write(s) + + def send(self, s): + '''Sends string ``s`` to the child process, returning the number of + bytes written. If a logfile is specified, a copy is written to that + log. + + The default terminal input mode is canonical processing unless set + otherwise by the child process. This allows backspace and other line + processing to be performed prior to transmitting to the receiving + program. As this is buffered, there is a limited size of such buffer. + + On Linux systems, this is 4096 (defined by N_TTY_BUF_SIZE). All + other systems honor the POSIX.1 definition PC_MAX_CANON -- 1024 + on OSX, 256 on OpenSolaris, and 1920 on FreeBSD. 
+ + This value may be discovered using fpathconf(3):: + + >>> from os import fpathconf + >>> print(fpathconf(0, 'PC_MAX_CANON')) + 256 + + On such a system, only 256 bytes may be received per line. Any + subsequent bytes received will be discarded. BEL (``'\a'``) is then + sent to output if IMAXBEL (termios.h) is set by the tty driver. + This is usually enabled by default. Linux does not honor this as + an option -- it behaves as though it is always set on. + + Canonical input processing may be disabled altogether by executing + a shell, then stty(1), before executing the final program:: + + >>> bash = pexpect.spawn('/bin/bash', echo=False) + >>> bash.sendline('stty -icanon') + >>> bash.sendline('base64') + >>> bash.sendline('x' * 5000) + ''' + + if self.delaybeforesend is not None: + time.sleep(self.delaybeforesend) + + s = self._coerce_send_string(s) + self._log(s, 'send') + + b = self._encoder.encode(s, final=False) + return os.write(self.child_fd, b) + + def sendline(self, s=''): + '''Wraps send(), sending string ``s`` to child process, with + ``os.linesep`` automatically appended. Returns number of bytes + written. Only a limited number of bytes may be sent for each + line in the default terminal mode, see docstring of :meth:`send`. + ''' + s = self._coerce_send_string(s) + return self.send(s + self.linesep) + + def _log_control(self, s): + """Write control characters to the appropriate log files""" + if self.encoding is not None: + s = s.decode(self.encoding, 'replace') + self._log(s, 'send') + + def sendcontrol(self, char): + '''Helper method that wraps send() with mnemonic access for sending control + character to the child (such as Ctrl-C or Ctrl-D). For example, to send + Ctrl-G (ASCII 7, bell, '\a'):: + + child.sendcontrol('g') + + See also, sendintr() and sendeof(). + ''' + n, byte = self.ptyproc.sendcontrol(char) + self._log_control(byte) + return n + + def sendeof(self): + '''This sends an EOF to the child. 
This sends a character which causes + the pending parent output buffer to be sent to the waiting child + program without waiting for end-of-line. If it is the first character + of the line, the read() in the user program returns 0, which signifies + end-of-file. This means to work as expected a sendeof() has to be + called at the beginning of a line. This method does not send a newline. + It is the responsibility of the caller to ensure the eof is sent at the + beginning of a line. ''' + + n, byte = self.ptyproc.sendeof() + self._log_control(byte) + + def sendintr(self): + '''This sends a SIGINT to the child. It does not require + the SIGINT to be the first character on a line. ''' + + n, byte = self.ptyproc.sendintr() + self._log_control(byte) + + @property + def flag_eof(self): + return self.ptyproc.flag_eof + + @flag_eof.setter + def flag_eof(self, value): + self.ptyproc.flag_eof = value + + def eof(self): + '''This returns True if the EOF exception was ever raised. + ''' + return self.flag_eof + + def terminate(self, force=False): + '''This forces a child process to terminate. It starts nicely with + SIGHUP and SIGINT. If "force" is True then moves onto SIGKILL. This + returns True if the child was terminated. This returns False if the + child could not be terminated. ''' + + if not self.isalive(): + return True + try: + self.kill(signal.SIGHUP) + time.sleep(self.delayafterterminate) + if not self.isalive(): + return True + self.kill(signal.SIGCONT) + time.sleep(self.delayafterterminate) + if not self.isalive(): + return True + self.kill(signal.SIGINT) + time.sleep(self.delayafterterminate) + if not self.isalive(): + return True + if force: + self.kill(signal.SIGKILL) + time.sleep(self.delayafterterminate) + if not self.isalive(): + return True + else: + return False + return False + except OSError: + # I think there are kernel timing issues that sometimes cause + # this to happen. I think isalive() reports True, but the + # process is dead to the kernel. 
+ # Make one last attempt to see if the kernel is up to date. + time.sleep(self.delayafterterminate) + if not self.isalive(): + return True + else: + return False + + def wait(self): + '''This waits until the child exits. This is a blocking call. This will + not read any data from the child, so this will block forever if the + child has unread output and has terminated. In other words, the child + may have printed output then called exit(), but, the child is + technically still alive until its output is read by the parent. + + This method is non-blocking if :meth:`wait` has already been called + previously or :meth:`isalive` method returns False. It simply returns + the previously determined exit status. + ''' + + ptyproc = self.ptyproc + with _wrap_ptyprocess_err(): + # exception may occur if "Is some other process attempting + # "job control with our child pid?" + exitstatus = ptyproc.wait() + self.status = ptyproc.status + self.exitstatus = ptyproc.exitstatus + self.signalstatus = ptyproc.signalstatus + self.terminated = True + + return exitstatus + + def isalive(self): + '''This tests if the child process is running or not. This is + non-blocking. If the child was terminated then this will read the + exitstatus or signalstatus of the child. This returns True if the child + process appears to be running or False if not. It can take literally + SECONDS for Solaris to return the right status. ''' + + ptyproc = self.ptyproc + with _wrap_ptyprocess_err(): + alive = ptyproc.isalive() + + if not alive: + self.status = ptyproc.status + self.exitstatus = ptyproc.exitstatus + self.signalstatus = ptyproc.signalstatus + self.terminated = True + + return alive + + def kill(self, sig): + + '''This sends the given signal to the child application. In keeping + with UNIX tradition it has a misleading name. It does not necessarily + kill the child unless you send the right signal. ''' + + # Same as os.kill, but the pid is given for you. 
+ if self.isalive(): + os.kill(self.pid, sig) + + def getwinsize(self): + '''This returns the terminal window size of the child tty. The return + value is a tuple of (rows, cols). ''' + return self.ptyproc.getwinsize() + + def setwinsize(self, rows, cols): + '''This sets the terminal window size of the child tty. This will cause + a SIGWINCH signal to be sent to the child. This does not change the + physical window size. It changes the size reported to TTY-aware + applications like vi or curses -- applications that respond to the + SIGWINCH signal. ''' + return self.ptyproc.setwinsize(rows, cols) + + + def interact(self, escape_character=chr(29), + input_filter=None, output_filter=None): + + '''This gives control of the child process to the interactive user (the + human at the keyboard). Keystrokes are sent to the child process, and + the stdout and stderr output of the child process is printed. This + simply echoes the child stdout and child stderr to the real stdout and + it echoes the real stdin to the child stdin. When the user types the + escape_character this method will return None. The escape_character + will not be transmitted. The default for escape_character is + entered as ``Ctrl - ]``, the very same as BSD telnet. To prevent + escaping, escape_character may be set to None. + + If a logfile is specified, then the data sent and received from the + child process in interact mode is duplicated to the given log. + + You may pass in optional input and output filter functions. These + functions should take a bytes array and return a bytes array. Even + with ``encoding='utf-8'`` support, :meth:`interact` will always pass + input_filter and output_filter bytes. You may need to wrap your + function to decode and encode back to UTF-8. + + The output_filter will be passed all the output from the child process. + The input_filter will be passed all the keyboard input from the user. + The input_filter is run BEFORE the check for the escape_character.
+ + Note that if you change the window size of the parent the SIGWINCH + signal will not be passed through to the child. If you want the child + window size to change when the parent's window size changes then do + something like the following example:: + + import pexpect, struct, fcntl, termios, signal, sys + def sigwinch_passthrough (sig, data): + s = struct.pack("HHHH", 0, 0, 0, 0) + a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(), + termios.TIOCGWINSZ , s)) + if not p.closed: + p.setwinsize(a[0],a[1]) + + # Note this 'p' is global and used in sigwinch_passthrough. + p = pexpect.spawn('/bin/bash') + signal.signal(signal.SIGWINCH, sigwinch_passthrough) + p.interact() + ''' + + # Flush the buffer. + self.write_to_stdout(self.buffer) + self.stdout.flush() + self._buffer = self.buffer_type() + mode = tty.tcgetattr(self.STDIN_FILENO) + tty.setraw(self.STDIN_FILENO) + if escape_character is not None and PY3: + escape_character = escape_character.encode('latin-1') + try: + self.__interact_copy(escape_character, input_filter, output_filter) + finally: + tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode) + + def __interact_writen(self, fd, data): + '''This is used by the interact() method. + ''' + + while data != b'' and self.isalive(): + n = os.write(fd, data) + data = data[n:] + + def __interact_read(self, fd): + '''This is used by the interact() method. + ''' + + return os.read(fd, 1000) + + def __interact_copy( + self, escape_character=None, input_filter=None, output_filter=None + ): + + '''This is used by the interact() method. 
+ ''' + + while self.isalive(): + if self.use_poll: + r = poll_ignore_interrupts([self.child_fd, self.STDIN_FILENO]) + else: + r, w, e = select_ignore_interrupts( + [self.child_fd, self.STDIN_FILENO], [], [] + ) + if self.child_fd in r: + try: + data = self.__interact_read(self.child_fd) + except OSError as err: + if err.args[0] == errno.EIO: + # Linux-style EOF + break + raise + if data == b'': + # BSD-style EOF + break + if output_filter: + data = output_filter(data) + self._log(data, 'read') + os.write(self.STDOUT_FILENO, data) + if self.STDIN_FILENO in r: + data = self.__interact_read(self.STDIN_FILENO) + if input_filter: + data = input_filter(data) + i = -1 + if escape_character is not None: + i = data.rfind(escape_character) + if i != -1: + data = data[:i] + if data: + self._log(data, 'send') + self.__interact_writen(self.child_fd, data) + break + self._log(data, 'send') + self.__interact_writen(self.child_fd, data) + + +def spawnu(*args, **kwargs): + """Deprecated: pass encoding to spawn() instead.""" + kwargs.setdefault('encoding', 'utf-8') + return spawn(*args, **kwargs) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/pxssh.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/pxssh.py new file mode 100644 index 0000000000000000000000000000000000000000..742f59e4069e7f171801ba4a01373aa938f13df5 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/pxssh.py @@ -0,0 +1,540 @@ +'''This class extends pexpect.spawn to specialize setting up SSH connections. +This adds methods for login, logout, and expecting the shell prompt. + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. 
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +from pexpect import ExceptionPexpect, TIMEOUT, EOF, spawn +import time +import os +import sys +import re + +__all__ = ['ExceptionPxssh', 'pxssh'] + +# Exception classes used by this module. +class ExceptionPxssh(ExceptionPexpect): + '''Raised for pxssh exceptions. + ''' + +if sys.version_info > (3, 0): + from shlex import quote +else: + _find_unsafe = re.compile(r'[^\w@%+=:,./-]').search + + def quote(s): + """Return a shell-escaped version of the string *s*.""" + if not s: + return "''" + if _find_unsafe(s) is None: + return s + + # use single quotes, and put single quotes into double quotes + # the string $'b is then quoted as '$'"'"'b' + return "'" + s.replace("'", "'\"'\"'") + "'" + +class pxssh (spawn): + '''This class extends pexpect.spawn to specialize setting up SSH + connections. This adds methods for login, logout, and expecting the shell + prompt. It does various tricky things to handle many situations in the SSH + login process. For example, if the session is your first login, then pxssh + automatically accepts the remote certificate; or if you have public key + authentication set up then pxssh won't wait for the password prompt. + + pxssh uses the shell prompt to synchronize output from the remote host. In + order to make this more robust it sets the shell prompt to something more + unique than just $ or #. This should work on most Bourne/Bash or Csh style + shells.
+ + Example that runs a few commands on a remote server and prints the result:: + + from pexpect import pxssh + import getpass + try: + s = pxssh.pxssh() + hostname = raw_input('hostname: ') + username = raw_input('username: ') + password = getpass.getpass('password: ') + s.login(hostname, username, password) + s.sendline('uptime') # run a command + s.prompt() # match the prompt + print(s.before) # print everything before the prompt. + s.sendline('ls -l') + s.prompt() + print(s.before) + s.sendline('df') + s.prompt() + print(s.before) + s.logout() + except pxssh.ExceptionPxssh as e: + print("pxssh failed on login.") + print(e) + + Example showing how to specify SSH options:: + + from pexpect import pxssh + s = pxssh.pxssh(options={ + "StrictHostKeyChecking": "no", + "UserKnownHostsFile": "/dev/null"}) + ... + + Note that if you have ssh-agent running while doing development with pxssh + then this can lead to a lot of confusion. Many X display managers (xdm, + gdm, kdm, etc.) will automatically start a GUI agent. You may see a GUI + dialog box popup asking for a password during development. You should turn + off any key agents during testing. The 'force_password' attribute will turn + off public key authentication. This will only work if the remote SSH server + is configured to allow password logins. Example of using 'force_password' + attribute:: + + s = pxssh.pxssh() + s.force_password = True + hostname = raw_input('hostname: ') + username = raw_input('username: ') + password = getpass.getpass('password: ') + s.login (hostname, username, password) + + `debug_command_string` is only for the test suite to confirm that the string + generated for SSH is correct, using this will not allow you to do + anything other than get a string back from `pxssh.pxssh.login()`. 
+ ''' + + def __init__ (self, timeout=30, maxread=2000, searchwindowsize=None, + logfile=None, cwd=None, env=None, ignore_sighup=True, echo=True, + options={}, encoding=None, codec_errors='strict', + debug_command_string=False, use_poll=False): + + spawn.__init__(self, None, timeout=timeout, maxread=maxread, + searchwindowsize=searchwindowsize, logfile=logfile, + cwd=cwd, env=env, ignore_sighup=ignore_sighup, echo=echo, + encoding=encoding, codec_errors=codec_errors, use_poll=use_poll) + + self.name = '' + + #SUBTLE HACK ALERT! Note that the command that SETS the prompt uses a + #slightly different string than the regular expression to match it. This + #is because when you set the prompt the command will echo back, but we + #don't want to match the echoed command. So if we make the set command + #slightly different than the regex we eliminate the problem. To make the + #set command different we add a backslash in front of $. The $ doesn't + #need to be escaped, but it doesn't hurt and serves to make the set + #prompt command different than the regex. + + # used to match the command-line prompt + self.UNIQUE_PROMPT = r"\[PEXPECT\][\$\#] " + self.PROMPT = self.UNIQUE_PROMPT + + # used to set shell command-line prompt to UNIQUE_PROMPT. + self.PROMPT_SET_SH = r"PS1='[PEXPECT]\$ '" + self.PROMPT_SET_CSH = r"set prompt='[PEXPECT]\$ '" + self.PROMPT_SET_ZSH = "prompt restore;\nPS1='[PEXPECT]%(!.#.$) '" + self.SSH_OPTS = (" -o 'PubkeyAuthentication=no'") +# Disabling host key checking, makes you vulnerable to MITM attacks. +# + " -o 'StrictHostKeyChecking=no'" +# + " -o 'UserKnownHostsFile /dev/null' ") + # Disabling X11 forwarding gets rid of the annoying SSH_ASKPASS from + # displaying a GUI password dialog. I have not figured out how to + # disable only SSH_ASKPASS without also disabling X11 forwarding. + # Unsetting SSH_ASKPASS on the remote side doesn't disable it! Annoying! 
+ #self.SSH_OPTS = "-x -o 'PubkeyAuthentication=no'" + self.force_password = False + + self.debug_command_string = debug_command_string + + # User defined SSH options, e.g., + # ssh.options = dict(StrictHostKeyChecking="no",UserKnownHostsFile="/dev/null") + self.options = options + + def levenshtein_distance(self, a, b): + '''This calculates the Levenshtein distance between a and b. + ''' + + n, m = len(a), len(b) + if n > m: + a,b = b,a + n,m = m,n + current = range(n+1) + for i in range(1,m+1): + previous, current = current, [i]+[0]*n + for j in range(1,n+1): + add, delete = previous[j]+1, current[j-1]+1 + change = previous[j-1] + if a[j-1] != b[i-1]: + change = change + 1 + current[j] = min(add, delete, change) + return current[n] + + def try_read_prompt(self, timeout_multiplier): + '''This facilitates using communication timeouts to perform + synchronization as quickly as possible, while supporting high latency + connections with a tunable worst case performance. Fast connections + should be read almost immediately. Worst case performance for this + method is timeout_multiplier * 3 seconds. + ''' + + # maximum time allowed to read the first response + first_char_timeout = timeout_multiplier * 0.5 + + # maximum time allowed between subsequent characters + inter_char_timeout = timeout_multiplier * 0.1 + + # maximum time for reading the entire prompt + total_timeout = timeout_multiplier * 3.0 + + prompt = self.string_type() + begin = time.time() + expired = 0.0 + timeout = first_char_timeout + + while expired < total_timeout: + try: + prompt += self.read_nonblocking(size=1, timeout=timeout) + expired = time.time() - begin # updated total time expired + timeout = inter_char_timeout + except TIMEOUT: + break + + return prompt + + def sync_original_prompt (self, sync_multiplier=1.0): + '''This attempts to find the prompt.
Basically, press enter and record + the response; press enter again and record the response; if the two + responses are similar then assume we are at the original prompt. + This can be a slow function. Worst case with the default sync_multiplier + can take 12 seconds. Low latency connections are more likely to fail + with a low sync_multiplier. Best case sync time gets worse with a + high sync multiplier (500 ms with default). ''' + + # All of these timing pace values are magic. + # I came up with these based on what seemed reliable for + # connecting to a heavily loaded machine I have. + self.sendline() + time.sleep(0.1) + + try: + # Clear the buffer before getting the prompt. + self.try_read_prompt(sync_multiplier) + except TIMEOUT: + pass + + self.sendline() + x = self.try_read_prompt(sync_multiplier) + + self.sendline() + a = self.try_read_prompt(sync_multiplier) + + self.sendline() + b = self.try_read_prompt(sync_multiplier) + + ld = self.levenshtein_distance(a,b) + len_a = len(a) + if len_a == 0: + return False + if float(ld)/len_a < 0.4: + return True + return False + + ### TODO: This is getting messy and I'm pretty sure this isn't perfect. + ### TODO: I need to draw a flow chart for this. + ### TODO: Unit tests for SSH tunnels, remote SSH command exec, disabling original prompt sync + def login (self, server, username=None, password='', terminal_type='ansi', + original_prompt=r"[#$]", login_timeout=10, port=None, + auto_prompt_reset=True, ssh_key=None, quiet=True, + sync_multiplier=1, check_local_ip=True, + password_regex=r'(?i)(?:password:)|(?:passphrase for key)', + ssh_tunnels={}, spawn_local_ssh=True, + sync_original_prompt=True, ssh_config=None, cmd='ssh'): + '''This logs the user into the given server. + + It uses 'original_prompt' to try to find the prompt right after login. + When it finds the prompt it immediately tries to reset the prompt to + something more easily matched. The default 'original_prompt' is very + optimistic and is easily fooled. 
+ It's more reliable to try to match the original + prompt as exactly as possible to prevent false matches by server + strings such as the "Message Of The Day". On many systems you can + disable the MOTD on the remote server by creating a zero-length file + called :file:`~/.hushlogin` on the remote server. If a prompt cannot be found + then this will not necessarily cause the login to fail. In the case of + a timeout when looking for the prompt we assume that the original + prompt was so weird that we could not match it, so we use a few tricks + to guess when we have reached the prompt. Then we hope for the best and + blindly try to reset the prompt to something more unique. If that fails + then login() raises an :class:`ExceptionPxssh` exception. + + In some situations it is not possible or desirable to reset the + original prompt. In this case, pass ``auto_prompt_reset=False`` to + inhibit setting the prompt to the UNIQUE_PROMPT. Remember that pxssh + uses a unique prompt in the :meth:`prompt` method. If the original prompt is + not reset then this will disable the :meth:`prompt` method unless you + manually set the :attr:`PROMPT` attribute. + + Set ``password_regex`` if there is a MOTD message with `password` in it. + Changing this is like playing in traffic, don't (p)expect it to match straight + away. + + If you need to connect to another SSH server from your original SSH + connection, set ``spawn_local_ssh`` to `False` and this will use your current + session to do so. Setting this option to `False` and not having an active session + will trigger an error. + + Set ``ssh_key`` to a file path to an SSH private key to use that SSH key + for the session authentication. + Set ``ssh_key`` to `True` to force passing the current SSH authentication socket + to the desired ``hostname``. + + Set ``ssh_config`` to a file path string of an SSH client config file to pass that + file to the client to handle itself.
You may set any options you wish in here, however + doing so will require you to post extra information that you may not want to if you + run into issues. + + Alter the ``cmd`` to change the ssh client used, or to prepend it with network + namespaces. For example ```cmd="ip netns exec vlan2 ssh"``` to execute the ssh in + network namespace named ```vlan```. + ''' + + session_regex_array = ["(?i)are you sure you want to continue connecting", original_prompt, password_regex, "(?i)permission denied", "(?i)terminal type", TIMEOUT] + session_init_regex_array = [] + session_init_regex_array.extend(session_regex_array) + session_init_regex_array.extend(["(?i)connection closed by remote host", EOF]) + + ssh_options = ''.join([" -o '%s=%s'" % (o, v) for (o, v) in self.options.items()]) + if quiet: + ssh_options = ssh_options + ' -q' + if not check_local_ip: + ssh_options = ssh_options + " -o'NoHostAuthenticationForLocalhost=yes'" + if self.force_password: + ssh_options = ssh_options + ' ' + self.SSH_OPTS + if ssh_config is not None: + if spawn_local_ssh and not os.path.isfile(ssh_config): + raise ExceptionPxssh('SSH config does not exist or is not a file.') + ssh_options = ssh_options + ' -F ' + ssh_config + if port is not None: + ssh_options = ssh_options + ' -p %s'%(str(port)) + if ssh_key is not None: + # Allow forwarding our SSH key to the current session + if ssh_key==True: + ssh_options = ssh_options + ' -A' + else: + if spawn_local_ssh and not os.path.isfile(ssh_key): + raise ExceptionPxssh('private ssh key does not exist or is not a file.') + ssh_options = ssh_options + ' -i %s' % (ssh_key) + + # SSH tunnels, make sure you know what you're putting into the lists + # under each heading. Do not expect these to open 100% of the time, + # The port you're requesting might be bound. 
+ # + # The structure should be like this: + # { 'local': ['2424:localhost:22'], # Local SSH tunnels + # 'remote': ['2525:localhost:22'], # Remote SSH tunnels + # 'dynamic': [8888] } # Dynamic/SOCKS tunnels + if ssh_tunnels!={} and isinstance({},type(ssh_tunnels)): + tunnel_types = { + 'local':'L', + 'remote':'R', + 'dynamic':'D' + } + for tunnel_type in tunnel_types: + cmd_type = tunnel_types[tunnel_type] + if tunnel_type in ssh_tunnels: + tunnels = ssh_tunnels[tunnel_type] + for tunnel in tunnels: + if spawn_local_ssh==False: + tunnel = quote(str(tunnel)) + ssh_options = ssh_options + ' -' + cmd_type + ' ' + str(tunnel) + + if username is not None: + ssh_options = ssh_options + ' -l ' + username + elif ssh_config is None: + raise TypeError('login() needs either a username or an ssh_config') + else: # make sure ssh_config has an entry for the server with a username + with open(ssh_config, 'rt') as f: + lines = [l.strip() for l in f.readlines()] + + server_regex = r'^Host\s+%s\s*$' % server + user_regex = r'^User\s+\w+\s*$' + config_has_server = False + server_has_username = False + for line in lines: + if not config_has_server and re.match(server_regex, line, re.IGNORECASE): + config_has_server = True + elif config_has_server and 'hostname' in line.lower(): + pass + elif config_has_server and 'host' in line.lower(): + server_has_username = False # insurance + break # we have left the relevant section + elif config_has_server and re.match(user_regex, line, re.IGNORECASE): + server_has_username = True + break + + if lines: + del line + + del lines + + if not config_has_server: + raise TypeError('login() ssh_config has no Host entry for %s' % server) + elif not server_has_username: + raise TypeError('login() ssh_config has no user entry for %s' % server) + + cmd += " %s %s" % (ssh_options, server) + if self.debug_command_string: + return(cmd) + + # Are we asking for a local ssh command or to spawn one in another session? 
+ if spawn_local_ssh: + spawn._spawn(self, cmd) + else: + self.sendline(cmd) + + # This does not distinguish between a remote server 'password' prompt + # and a local ssh 'passphrase' prompt (for unlocking a private key). + i = self.expect(session_init_regex_array, timeout=login_timeout) + + # First phase + if i==0: + # New certificate -- always accept it. + # This is what you get if SSH does not have the remote host's + # public key stored in the 'known_hosts' cache. + self.sendline("yes") + i = self.expect(session_regex_array) + if i==2: # password or passphrase + self.sendline(password) + i = self.expect(session_regex_array) + if i==4: + self.sendline(terminal_type) + i = self.expect(session_regex_array) + if i==7: + self.close() + raise ExceptionPxssh('Could not establish connection to host') + + # Second phase + if i==0: + # This is weird. This should not happen twice in a row. + self.close() + raise ExceptionPxssh('Weird error. Got "are you sure" prompt twice.') + elif i==1: # can occur if you have a public key pair set to authenticate. + ### TODO: May NOT be OK if expect() got tricked and matched a false prompt. + pass + elif i==2: # password prompt again + # For incorrect passwords, some ssh servers will + # ask for the password again, others return 'denied' right away. + # If we get the password prompt again then this means + # we didn't get the password right the first time. + self.close() + raise ExceptionPxssh('password refused') + elif i==3: # permission denied -- password was bad. + self.close() + raise ExceptionPxssh('permission denied') + elif i==4: # terminal type again? WTF? + self.close() + raise ExceptionPxssh('Weird error. Got "terminal type" prompt twice.') + elif i==5: # Timeout + #This is tricky... I presume that we are at the command-line prompt. + #It may be that the shell prompt was so weird that we couldn't match + #it. Or it may be that we couldn't log in for some other reason. 
I + #can't be sure, but it's safe to guess that we did login because if + #I presume wrong and we are not logged in then this should be caught + #later when I try to set the shell prompt. + pass + elif i==6: # Connection closed by remote host + self.close() + raise ExceptionPxssh('connection closed') + else: # Unexpected + self.close() + raise ExceptionPxssh('unexpected login response') + if sync_original_prompt: + if not self.sync_original_prompt(sync_multiplier): + self.close() + raise ExceptionPxssh('could not synchronize with original prompt') + # We appear to be in. + # set shell prompt to something unique. + if auto_prompt_reset: + if not self.set_unique_prompt(): + self.close() + raise ExceptionPxssh('could not set shell prompt ' + '(received: %r, expected: %r).' % ( + self.before, self.PROMPT,)) + return True + + def logout (self): + '''Sends exit to the remote shell. + + If there are stopped jobs then this automatically sends exit twice. + ''' + self.sendline("exit") + index = self.expect([EOF, "(?i)there are stopped jobs"]) + if index==1: + self.sendline("exit") + self.expect(EOF) + self.close() + + def prompt(self, timeout=-1): + '''Match the next shell prompt. + + This is little more than a short-cut to the :meth:`~pexpect.spawn.expect` + method. Note that if you called :meth:`login` with + ``auto_prompt_reset=False``, then before calling :meth:`prompt` you must + set the :attr:`PROMPT` attribute to a regex that it will use for + matching the prompt. + + Calling :meth:`prompt` will erase the contents of the :attr:`before` + attribute even if no prompt is ever matched. If timeout is not given or + it is set to -1 then self.timeout is used. + + :return: True if the shell prompt was matched, False if the timeout was + reached. 
+ ''' + + if timeout == -1: + timeout = self.timeout + i = self.expect([self.PROMPT, TIMEOUT], timeout=timeout) + if i==1: + return False + return True + + def set_unique_prompt(self): + '''This sets the remote prompt to something more unique than ``#`` or ``$``. + This makes it easier for the :meth:`prompt` method to match the shell prompt + unambiguously. This method is called automatically by the :meth:`login` + method, but you may want to call it manually if you somehow reset the + shell prompt. For example, if you 'su' to a different user then you + will need to manually reset the prompt. This sends shell commands to + the remote host to set the prompt, so this assumes the remote host is + ready to receive commands. + + Alternatively, you may use your own prompt pattern. In this case you + should call :meth:`login` with ``auto_prompt_reset=False``; then set the + :attr:`PROMPT` attribute to a regular expression. After that, the + :meth:`prompt` method will try to match your prompt pattern. + ''' + + self.sendline("unset PROMPT_COMMAND") + self.sendline(self.PROMPT_SET_SH) # sh-style + i = self.expect ([TIMEOUT, self.PROMPT], timeout=10) + if i == 0: # csh-style + self.sendline(self.PROMPT_SET_CSH) + i = self.expect([TIMEOUT, self.PROMPT], timeout=10) + if i == 0: # zsh-style + self.sendline(self.PROMPT_SET_ZSH) + i = self.expect([TIMEOUT, self.PROMPT], timeout=10) + if i == 0: + return False + return True + +# vi:ts=4:sw=4:expandtab:ft=python: diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/replwrap.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/replwrap.py new file mode 100644 index 0000000000000000000000000000000000000000..08dbd5e8692fa812821a1204c0a5675c5fd4c8df --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/replwrap.py @@ -0,0 +1,136 @@ +"""Generic wrapper for read-eval-print-loops, a.k.a. 
interactive shells +""" +import os.path +import signal +import sys + +import pexpect + +PY3 = (sys.version_info[0] >= 3) + +if PY3: + basestring = str + +PEXPECT_PROMPT = u'[PEXPECT_PROMPT>' +PEXPECT_CONTINUATION_PROMPT = u'[PEXPECT_PROMPT+' + +class REPLWrapper(object): + """Wrapper for a REPL. + + :param cmd_or_spawn: This can either be an instance of :class:`pexpect.spawn` + in which a REPL has already been started, or a str command to start a new + REPL process. + :param str orig_prompt: The prompt to expect at first. + :param str prompt_change: A command to change the prompt to something more + unique. If this is ``None``, the prompt will not be changed. This will + be formatted with the new and continuation prompts as positional + parameters, so you can use ``{}`` style formatting to insert them into + the command. + :param str new_prompt: The more unique prompt to expect after the change. + :param str extra_init_cmd: Commands to do extra initialisation, such as + disabling pagers. + """ + def __init__(self, cmd_or_spawn, orig_prompt, prompt_change, + new_prompt=PEXPECT_PROMPT, + continuation_prompt=PEXPECT_CONTINUATION_PROMPT, + extra_init_cmd=None): + if isinstance(cmd_or_spawn, basestring): + self.child = pexpect.spawn(cmd_or_spawn, echo=False, encoding='utf-8') + else: + self.child = cmd_or_spawn + if self.child.echo: + # Existing spawn instance has echo enabled, disable it + # to prevent our input from being repeated to output. 
+ self.child.setecho(False) + self.child.waitnoecho() + + if prompt_change is None: + self.prompt = orig_prompt + else: + self.set_prompt(orig_prompt, + prompt_change.format(new_prompt, continuation_prompt)) + self.prompt = new_prompt + self.continuation_prompt = continuation_prompt + + self._expect_prompt() + + if extra_init_cmd is not None: + self.run_command(extra_init_cmd) + + def set_prompt(self, orig_prompt, prompt_change): + self.child.expect(orig_prompt) + self.child.sendline(prompt_change) + + def _expect_prompt(self, timeout=-1, async_=False): + return self.child.expect_exact([self.prompt, self.continuation_prompt], + timeout=timeout, async_=async_) + + def run_command(self, command, timeout=-1, async_=False): + """Send a command to the REPL, wait for and return output. + + :param str command: The command to send. Trailing newlines are not needed. + This should be a complete block of input that will trigger execution; + if a continuation prompt is found after sending input, :exc:`ValueError` + will be raised. + :param int timeout: How long to wait for the next prompt. -1 means the + default from the :class:`pexpect.spawn` object (default 30 seconds). + None means to wait indefinitely. + :param bool async_: On Python 3.4, or Python 3.3 with asyncio + installed, passing ``async_=True`` will make this return an + :mod:`asyncio` Future, which you can yield from to get the same + result that this method would normally give directly. 
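
        For illustration, the multiline splitting performed by this method
        can be sketched in isolation (``split_command`` is a hypothetical
        helper name, not part of this module):

        ```python
        def split_command(command):
            # Mirror of run_command()'s input handling: splitlines() drops
            # the trailing newline, so an empty final line is re-added to
            # make sure the last sendline() actually submits the block.
            cmdlines = command.splitlines()
            if command.endswith('\n'):
                cmdlines.append('')
            if not cmdlines:
                raise ValueError("No command was given")
            return cmdlines
        ```

        Each element is then fed to the child with ``sendline()``, waiting
        for a prompt between lines.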
+ """ + # Split up multiline commands and feed them in bit-by-bit + cmdlines = command.splitlines() + # splitlines ignores trailing newlines - add it back in manually + if command.endswith('\n'): + cmdlines.append('') + if not cmdlines: + raise ValueError("No command was given") + + if async_: + from ._async import repl_run_command_async + return repl_run_command_async(self, cmdlines, timeout) + + res = [] + self.child.sendline(cmdlines[0]) + for line in cmdlines[1:]: + self._expect_prompt(timeout=timeout) + res.append(self.child.before) + self.child.sendline(line) + + # Command was fully submitted, now wait for the next prompt + if self._expect_prompt(timeout=timeout) == 1: + # We got the continuation prompt - command was incomplete + self.child.kill(signal.SIGINT) + self._expect_prompt(timeout=1) + raise ValueError("Continuation prompt found - input was incomplete:\n" + + command) + return u''.join(res + [self.child.before]) + +def python(command=sys.executable): + """Start a Python shell and return a :class:`REPLWrapper` object.""" + return REPLWrapper(command, u">>> ", u"import sys; sys.ps1={0!r}; sys.ps2={1!r}") + +def _repl_sh(command, args, non_printable_insert): + child = pexpect.spawn(command, args, echo=False, encoding='utf-8') + + # If the user runs 'env', the value of PS1 will be in the output. To avoid + # replwrap seeing that as the next prompt, we'll embed the marker characters + # for invisible characters in the prompt; these show up when inspecting the + # environment variable, but not when bash displays the prompt. 
+ ps1 = PEXPECT_PROMPT[:5] + non_printable_insert + PEXPECT_PROMPT[5:] + ps2 = PEXPECT_CONTINUATION_PROMPT[:5] + non_printable_insert + PEXPECT_CONTINUATION_PROMPT[5:] + prompt_change = u"PS1='{0}' PS2='{1}' PROMPT_COMMAND=''".format(ps1, ps2) + + return REPLWrapper(child, u'\\$', prompt_change, + extra_init_cmd="export PAGER=cat") + +def bash(command="bash"): + """Start a bash shell and return a :class:`REPLWrapper` object.""" + bashrc = os.path.join(os.path.dirname(__file__), 'bashrc.sh') + return _repl_sh(command, ['--rcfile', bashrc], non_printable_insert='\\[\\]') + +def zsh(command="zsh", args=("--no-rcs", "-V", "+Z")): + """Start a zsh shell and return a :class:`REPLWrapper` object.""" + return _repl_sh(command, list(args), non_printable_insert='%(!..)') diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/run.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/run.py new file mode 100644 index 0000000000000000000000000000000000000000..5695ab7f7b4811dffdddadf6e2310a9879f8817c --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/run.py @@ -0,0 +1,157 @@ +import sys +import types + +from .exceptions import EOF, TIMEOUT +from .pty_spawn import spawn + +def run(command, timeout=30, withexitstatus=False, events=None, + extra_args=None, logfile=None, cwd=None, env=None, **kwargs): + + ''' + This function runs the given command; waits for it to finish; then + returns all output as a string. STDERR is included in output. If the full + path to the command is not given then the path is searched. + + Note that lines are terminated by CR/LF (\\r\\n) combination even on + UNIX-like systems because this is the standard for pseudottys. If you set + 'withexitstatus' to true, then run will return a tuple of (command_output, + exitstatus). If 'withexitstatus' is false then this returns just + command_output. + + The run() function can often be used instead of creating a spawn instance. 
+    For example, the following code uses spawn::
+
+        from pexpect import *
+        child = spawn('scp foo user@example.com:.')
+        child.expect('(?i)password')
+        child.sendline(mypassword)
+
+    The previous code can be replaced with the following::
+
+        from pexpect import *
+        run('scp foo user@example.com:.', events={'(?i)password': mypassword})
+
+    **Examples**
+
+    Start the apache daemon on the local machine::
+
+        from pexpect import *
+        run("/usr/local/apache/bin/apachectl start")
+
+    Check in a file using SVN::
+
+        from pexpect import *
+        run("svn ci -m 'automatic commit' my_file.py")
+
+    Run a command and capture exit status::
+
+        from pexpect import *
+        (command_output, exitstatus) = run('ls -l /bin', withexitstatus=1)
+
+    The following will run SSH and execute 'ls -l' on the remote machine. The
+    password 'secret' will be sent if the '(?i)password' pattern is ever seen::
+
+        run("ssh username@machine.example.com 'ls -l'",
+            events={'(?i)password':'secret\\n'})
+
+    This will start mencoder to rip a video from DVD. This will also display
+    progress ticks every 5 seconds as it runs. For example::
+
+        from pexpect import *
+        def print_ticks(d):
+            print(d['event_count'], end=' ')
+        run("mencoder dvd://1 -o video.avi -oac copy -ovc copy",
+            events={TIMEOUT:print_ticks}, timeout=5)
+
+    The 'events' argument should be either a dictionary or a tuple list that
+    contains patterns and responses. Whenever one of the patterns is seen
+    in the command output, run() will send the associated response string.
+    So, run() in the above example can also be written as::
+
+        run("mencoder dvd://1 -o video.avi -oac copy -ovc copy",
+            events=[(TIMEOUT,print_ticks)], timeout=5)
+
+    Use a tuple list for events if the command output requires delicate
+    control over which pattern should be matched, since the tuple list is passed
+    to pexpect() as its pattern list, with the order of patterns preserved.
+
+    Note that you should put newlines in your string if Enter is necessary.
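    For illustration, the dictionary-versus-tuple-list handling described
    above can be sketched on its own (``normalize_events`` is a hypothetical
    name; run() performs this inline):

    ```python
    def normalize_events(events):
        # A tuple list preserves matching precedence, because it is passed
        # to expect() as an ordered pattern list; a dict carries no such
        # documented guarantee, which is why the tuple form is recommended
        # for delicate matching.
        if isinstance(events, list):
            patterns = [x for x, y in events]
            responses = [y for x, y in events]
        elif isinstance(events, dict):
            patterns = list(events.keys())
            responses = list(events.values())
        else:
            # With no events, only EOF or TIMEOUT can end the run.
            patterns = responses = None
        return patterns, responses
    ```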
+
+    Like the example above, the responses may also contain a callback, either
+    a function or method. It should accept a dictionary value as an argument.
+    The dictionary contains all the locals from the run() function, so you can
+    access the child spawn object or any other variable defined in run()
+    (event_count, child, and extra_args are the most useful). A callback may
+    return True to stop the current run process. Otherwise run() continues
+    until the next event. A callback may also return a string which will be
+    sent to the child. 'extra_args' is not used directly by run(). It provides
+    a way to pass data to a callback function through run(), via the locals
+    dictionary passed to the callback.
+
+    Like :class:`spawn`, passing *encoding* will make it work with unicode
+    instead of bytes. You can pass *codec_errors* to control how errors in
+    encoding and decoding are handled.
+    '''
+    if timeout == -1:
+        child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env,
+                      **kwargs)
+    else:
+        child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile,
+                      cwd=cwd, env=env, **kwargs)
+    if isinstance(events, list):
+        patterns = [x for x, y in events]
+        responses = [y for x, y in events]
+    elif isinstance(events, dict):
+        patterns = list(events.keys())
+        responses = list(events.values())
+    else:
+        # This assumes EOF or TIMEOUT will eventually cause run to terminate.
+        patterns = None
+        responses = None
+    child_result_list = []
+    event_count = 0
+    while True:
+        try:
+            index = child.expect(patterns)
+            if isinstance(child.after, child.allowed_string_types):
+                child_result_list.append(child.before + child.after)
+            else:
+                # child.after may have been a TIMEOUT or EOF,
+                # which we don't want appended to the list.
+ child_result_list.append(child.before) + if isinstance(responses[index], child.allowed_string_types): + child.send(responses[index]) + elif (isinstance(responses[index], types.FunctionType) or + isinstance(responses[index], types.MethodType)): + callback_result = responses[index](locals()) + sys.stdout.flush() + if isinstance(callback_result, child.allowed_string_types): + child.send(callback_result) + elif callback_result: + break + else: + raise TypeError("parameter `event' at index {index} must be " + "a string, method, or function: {value!r}" + .format(index=index, value=responses[index])) + event_count = event_count + 1 + except TIMEOUT: + child_result_list.append(child.before) + break + except EOF: + child_result_list.append(child.before) + break + child_result = child.string_type().join(child_result_list) + if withexitstatus: + child.close() + return (child_result, child.exitstatus) + else: + return child_result + +def runu(command, timeout=30, withexitstatus=False, events=None, + extra_args=None, logfile=None, cwd=None, env=None, **kwargs): + """Deprecated: pass encoding to run() instead. + """ + kwargs.setdefault('encoding', 'utf-8') + return run(command, timeout=timeout, withexitstatus=withexitstatus, + events=events, extra_args=extra_args, logfile=logfile, cwd=cwd, + env=env, **kwargs) diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/screen.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/screen.py new file mode 100644 index 0000000000000000000000000000000000000000..79f95c4e54273c3ca4e00623fd55cce516040392 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/screen.py @@ -0,0 +1,431 @@ +'''This implements a virtual screen. This is used to support ANSI terminal +emulation. The screen representation and state is implemented in this class. +Most of the methods are inspired by ANSI screen control codes. The +:class:`~pexpect.ANSI.ANSI` class extends this class to add parsing of ANSI +escape codes. 
+ +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. + THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +''' + +import codecs +import copy +import sys + +import warnings + +warnings.warn(("pexpect.screen and pexpect.ANSI are deprecated. " + "We recommend using pyte to emulate a terminal screen: " + "https://pypi.python.org/pypi/pyte"), + stacklevel=2) + +NUL = 0 # Fill character; ignored on input. +ENQ = 5 # Transmit answerback message. +BEL = 7 # Ring the bell. +BS = 8 # Move cursor left. +HT = 9 # Move cursor to next tab stop. +LF = 10 # Line feed. +VT = 11 # Same as LF. +FF = 12 # Same as LF. +CR = 13 # Move cursor to left margin or newline. +SO = 14 # Invoke G1 character set. +SI = 15 # Invoke G0 character set. +XON = 17 # Resume transmission. +XOFF = 19 # Halt transmission. +CAN = 24 # Cancel escape sequence. +SUB = 26 # Same as CAN. +ESC = 27 # Introduce a control sequence. +DEL = 127 # Fill character; ignored on input. +SPACE = u' ' # Space or blank character. + +PY3 = (sys.version_info[0] >= 3) +if PY3: + unicode = str + +def constrain (n, min, max): + + '''This returns a number, n constrained to the min and max bounds. 
''' + + if n < min: + return min + if n > max: + return max + return n + +class screen: + '''This object maintains the state of a virtual text screen as a + rectangular array. This maintains a virtual cursor position and handles + scrolling as characters are added. This supports most of the methods needed + by an ANSI text screen. Row and column indexes are 1-based (not zero-based, + like arrays). + + Characters are represented internally using unicode. Methods that accept + input characters, when passed 'bytes' (which in Python 2 is equivalent to + 'str'), convert them from the encoding specified in the 'encoding' + parameter to the constructor. Methods that return screen contents return + unicode strings, with the exception of __str__() under Python 2. Passing + ``encoding=None`` limits the API to only accept unicode input, so passing + bytes in will raise :exc:`TypeError`. + ''' + def __init__(self, r=24, c=80, encoding='latin-1', encoding_errors='replace'): + '''This initializes a blank screen of the given dimensions.''' + + self.rows = r + self.cols = c + self.encoding = encoding + self.encoding_errors = encoding_errors + if encoding is not None: + self.decoder = codecs.getincrementaldecoder(encoding)(encoding_errors) + else: + self.decoder = None + self.cur_r = 1 + self.cur_c = 1 + self.cur_saved_r = 1 + self.cur_saved_c = 1 + self.scroll_row_start = 1 + self.scroll_row_end = self.rows + self.w = [ [SPACE] * self.cols for _ in range(self.rows)] + + def _decode(self, s): + '''This converts from the external coding system (as passed to + the constructor) to the internal one (unicode). ''' + if self.decoder is not None: + return self.decoder.decode(s) + else: + raise TypeError("This screen was constructed with encoding=None, " + "so it does not handle bytes.") + + def _unicode(self): + '''This returns a printable representation of the screen as a unicode + string (which, under Python 3.x, is the same as 'str'). 
The end of each + screen line is terminated by a newline.''' + + return u'\n'.join ([ u''.join(c) for c in self.w ]) + + if PY3: + __str__ = _unicode + else: + __unicode__ = _unicode + + def __str__(self): + '''This returns a printable representation of the screen. The end of + each screen line is terminated by a newline. ''' + encoding = self.encoding or 'ascii' + return self._unicode().encode(encoding, 'replace') + + def dump (self): + '''This returns a copy of the screen as a unicode string. This is similar to + __str__/__unicode__ except that lines are not terminated with line + feeds.''' + + return u''.join ([ u''.join(c) for c in self.w ]) + + def pretty (self): + '''This returns a copy of the screen as a unicode string with an ASCII + text box around the screen border. This is similar to + __str__/__unicode__ except that it adds a box.''' + + top_bot = u'+' + u'-'*self.cols + u'+\n' + return top_bot + u'\n'.join([u'|'+line+u'|' for line in unicode(self).split(u'\n')]) + u'\n' + top_bot + + def fill (self, ch=SPACE): + + if isinstance(ch, bytes): + ch = self._decode(ch) + + self.fill_region (1,1,self.rows,self.cols, ch) + + def fill_region (self, rs,cs, re,ce, ch=SPACE): + + if isinstance(ch, bytes): + ch = self._decode(ch) + + rs = constrain (rs, 1, self.rows) + re = constrain (re, 1, self.rows) + cs = constrain (cs, 1, self.cols) + ce = constrain (ce, 1, self.cols) + if rs > re: + rs, re = re, rs + if cs > ce: + cs, ce = ce, cs + for r in range (rs, re+1): + for c in range (cs, ce + 1): + self.put_abs (r,c,ch) + + def cr (self): + '''This moves the cursor to the beginning (col 1) of the current row. + ''' + + self.cursor_home (self.cur_r, 1) + + def lf (self): + '''This moves the cursor down with scrolling. + ''' + + old_r = self.cur_r + self.cursor_down() + if old_r == self.cur_r: + self.scroll_up () + self.erase_line() + + def crlf (self): + '''This advances the cursor with CRLF properties. + The cursor will line wrap and the screen may scroll. 
+        '''
+
+        self.cr ()
+        self.lf ()
+
+    def newline (self):
+        '''This is an alias for crlf().
+        '''
+
+        self.crlf()
+
+    def put_abs (self, r, c, ch):
+        '''Screen array starts at 1 index.'''
+
+        r = constrain (r, 1, self.rows)
+        c = constrain (c, 1, self.cols)
+        if isinstance(ch, bytes):
+            ch = self._decode(ch)[0]
+        else:
+            ch = ch[0]
+        self.w[r-1][c-1] = ch
+
+    def put (self, ch):
+        '''This puts a character at the current cursor position.
+        '''
+
+        if isinstance(ch, bytes):
+            ch = self._decode(ch)
+
+        self.put_abs (self.cur_r, self.cur_c, ch)
+
+    def insert_abs (self, r, c, ch):
+        '''This inserts a character at (r,c). Everything under
+        and to the right is shifted right one character.
+        The last character of the line is lost.
+        '''
+
+        if isinstance(ch, bytes):
+            ch = self._decode(ch)
+
+        r = constrain (r, 1, self.rows)
+        c = constrain (c, 1, self.cols)
+        for ci in range (self.cols, c, -1):
+            self.put_abs (r,ci, self.get_abs(r,ci-1))
+        self.put_abs (r,c,ch)
+
+    def insert (self, ch):
+
+        if isinstance(ch, bytes):
+            ch = self._decode(ch)
+
+        self.insert_abs (self.cur_r, self.cur_c, ch)
+
+    def get_abs (self, r, c):
+
+        r = constrain (r, 1, self.rows)
+        c = constrain (c, 1, self.cols)
+        return self.w[r-1][c-1]
+
+    def get (self):
+
+        return self.get_abs (self.cur_r, self.cur_c)
+
+    def get_region (self, rs,cs, re,ce):
+        '''This returns a list of lines representing the region.
+        '''
+
+        rs = constrain (rs, 1, self.rows)
+        re = constrain (re, 1, self.rows)
+        cs = constrain (cs, 1, self.cols)
+        ce = constrain (ce, 1, self.cols)
+        if rs > re:
+            rs, re = re, rs
+        if cs > ce:
+            cs, ce = ce, cs
+        sc = []
+        for r in range (rs, re+1):
+            line = u''
+            for c in range (cs, ce + 1):
+                ch = self.get_abs (r,c)
+                line = line + ch
+            sc.append (line)
+        return sc
+
+    def cursor_constrain (self):
+        '''This keeps the cursor within the screen area.
+ ''' + + self.cur_r = constrain (self.cur_r, 1, self.rows) + self.cur_c = constrain (self.cur_c, 1, self.cols) + + def cursor_home (self, r=1, c=1): # [{ROW};{COLUMN}H + + self.cur_r = r + self.cur_c = c + self.cursor_constrain () + + def cursor_back (self,count=1): # [{COUNT}D (not confused with down) + + self.cur_c = self.cur_c - count + self.cursor_constrain () + + def cursor_down (self,count=1): # [{COUNT}B (not confused with back) + + self.cur_r = self.cur_r + count + self.cursor_constrain () + + def cursor_forward (self,count=1): # [{COUNT}C + + self.cur_c = self.cur_c + count + self.cursor_constrain () + + def cursor_up (self,count=1): # [{COUNT}A + + self.cur_r = self.cur_r - count + self.cursor_constrain () + + def cursor_up_reverse (self): # M (called RI -- Reverse Index) + + old_r = self.cur_r + self.cursor_up() + if old_r == self.cur_r: + self.scroll_up() + + def cursor_force_position (self, r, c): # [{ROW};{COLUMN}f + '''Identical to Cursor Home.''' + + self.cursor_home (r, c) + + def cursor_save (self): # [s + '''Save current cursor position.''' + + self.cursor_save_attrs() + + def cursor_unsave (self): # [u + '''Restores cursor position after a Save Cursor.''' + + self.cursor_restore_attrs() + + def cursor_save_attrs (self): # 7 + '''Save current cursor position.''' + + self.cur_saved_r = self.cur_r + self.cur_saved_c = self.cur_c + + def cursor_restore_attrs (self): # 8 + '''Restores cursor position after a Save Cursor.''' + + self.cursor_home (self.cur_saved_r, self.cur_saved_c) + + def scroll_constrain (self): + '''This keeps the scroll region within the screen region.''' + + if self.scroll_row_start <= 0: + self.scroll_row_start = 1 + if self.scroll_row_end > self.rows: + self.scroll_row_end = self.rows + + def scroll_screen (self): # [r + '''Enable scrolling for entire display.''' + + self.scroll_row_start = 1 + self.scroll_row_end = self.rows + + def scroll_screen_rows (self, rs, re): # [{start};{end}r + '''Enable scrolling from row {start} to 
row {end}.''' + + self.scroll_row_start = rs + self.scroll_row_end = re + self.scroll_constrain() + + def scroll_down (self): # D + '''Scroll display down one line.''' + + # Screen is indexed from 1, but arrays are indexed from 0. + s = self.scroll_row_start - 1 + e = self.scroll_row_end - 1 + self.w[s+1:e+1] = copy.deepcopy(self.w[s:e]) + + def scroll_up (self): # M + '''Scroll display up one line.''' + + # Screen is indexed from 1, but arrays are indexed from 0. + s = self.scroll_row_start - 1 + e = self.scroll_row_end - 1 + self.w[s:e] = copy.deepcopy(self.w[s+1:e+1]) + + def erase_end_of_line (self): # [0K -or- [K + '''Erases from the current cursor position to the end of the current + line.''' + + self.fill_region (self.cur_r, self.cur_c, self.cur_r, self.cols) + + def erase_start_of_line (self): # [1K + '''Erases from the current cursor position to the start of the current + line.''' + + self.fill_region (self.cur_r, 1, self.cur_r, self.cur_c) + + def erase_line (self): # [2K + '''Erases the entire current line.''' + + self.fill_region (self.cur_r, 1, self.cur_r, self.cols) + + def erase_down (self): # [0J -or- [J + '''Erases the screen from the current line down to the bottom of the + screen.''' + + self.erase_end_of_line () + self.fill_region (self.cur_r + 1, 1, self.rows, self.cols) + + def erase_up (self): # [1J + '''Erases the screen from the current line up to the top of the + screen.''' + + self.erase_start_of_line () + self.fill_region (self.cur_r-1, 1, 1, self.cols) + + def erase_screen (self): # [2J + '''Erases the screen with the background color.''' + + self.fill () + + def set_tab (self): # H + '''Sets a tab at the current position.''' + + pass + + def clear_tab (self): # [g + '''Clears tab at the current position.''' + + pass + + def clear_all_tabs (self): # [3g + '''Clears all tabs.''' + + pass + +# Insert line Esc [ Pn L +# Delete line Esc [ Pn M +# Delete character Esc [ Pn P +# Scrolling region Esc [ Pn(top);Pn(bot) r + diff --git 
a/evalkit_internvl/lib/python3.10/site-packages/pexpect/socket_pexpect.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/socket_pexpect.py new file mode 100644 index 0000000000000000000000000000000000000000..cb11ac225891f090f9c4fb312d0707210d0a1b55 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/socket_pexpect.py @@ -0,0 +1,145 @@ +"""This is like :mod:`pexpect`, but it will work with any socket that you +pass it. You are responsible for opening and closing the socket. + +PEXPECT LICENSE + + This license is approved by the OSI and FSF as GPL-compatible. + http://opensource.org/licenses/isc-license.txt + + Copyright (c) 2012, Noah Spurrier + PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY + PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE + COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES. + THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES + WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR + ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +""" + +import socket +from contextlib import contextmanager + +from .exceptions import TIMEOUT, EOF +from .spawnbase import SpawnBase + +__all__ = ["SocketSpawn"] + + +class SocketSpawn(SpawnBase): + """This is like :mod:`pexpect.fdpexpect` but uses the cross-platform python socket api, + rather than the unix-specific file descriptor api. 
Thus, it works with
+    remote connections on both unix and windows."""
+
+    def __init__(
+        self,
+        socket: socket.socket,
+        args=None,
+        timeout=30,
+        maxread=2000,
+        searchwindowsize=None,
+        logfile=None,
+        encoding=None,
+        codec_errors="strict",
+        use_poll=False,
+    ):
+        """This takes an open socket."""
+
+        self.args = None
+        self.command = None
+        SpawnBase.__init__(
+            self,
+            timeout,
+            maxread,
+            searchwindowsize,
+            logfile,
+            encoding=encoding,
+            codec_errors=codec_errors,
+        )
+        self.socket = socket
+        self.child_fd = socket.fileno()
+        self.closed = False
+        self.name = "<socket %s>" % socket
+        self.use_poll = use_poll
+
+    def close(self):
+        """Close the socket.
+
+        Calling this method a second time does nothing, but if the file
+        descriptor was closed elsewhere, :class:`OSError` will be raised.
+        """
+        if self.child_fd == -1:
+            return
+
+        self.flush()
+        self.socket.shutdown(socket.SHUT_RDWR)
+        self.socket.close()
+        self.child_fd = -1
+        self.closed = True
+
+    def isalive(self):
+        """ Alive if the fileno is valid """
+        return self.socket.fileno() >= 0
+
+    def send(self, s) -> int:
+        """Write to socket, return number of bytes written"""
+        s = self._coerce_send_string(s)
+        self._log(s, "send")
+
+        b = self._encoder.encode(s, final=False)
+        self.socket.sendall(b)
+        return len(b)
+
+    def sendline(self, s) -> int:
+        """Write to socket with trailing newline, return number of bytes written"""
+        s = self._coerce_send_string(s)
+        return self.send(s + self.linesep)
+
+    def write(self, s):
+        """Write to socket, return None"""
+        self.send(s)
+
+    def writelines(self, sequence):
+        "Call self.write() for each item in sequence"
+        for s in sequence:
+            self.write(s)
+
+    @contextmanager
+    def _timeout(self, timeout):
+        saved_timeout = self.socket.gettimeout()
+        try:
+            self.socket.settimeout(timeout)
+            yield
+        finally:
+            self.socket.settimeout(saved_timeout)
+
+    def read_nonblocking(self, size=1, timeout=-1):
+        """
+        Read from the file descriptor and return the result as a string.
+ + The read_nonblocking method of :class:`SpawnBase` assumes that a call + to os.read will not block (timeout parameter is ignored). This is not + the case for POSIX file-like objects such as sockets and serial ports. + + Use :func:`select.select`, timeout is implemented conditionally for + POSIX systems. + + :param int size: Read at most *size* bytes. + :param int timeout: Wait timeout seconds for file descriptor to be + ready to read. When -1 (default), use self.timeout. When 0, poll. + :return: String containing the bytes read + """ + if timeout == -1: + timeout = self.timeout + try: + with self._timeout(timeout): + s = self.socket.recv(size) + if s == b'': + self.flag_eof = True + raise EOF("Socket closed") + return s + except socket.timeout: + raise TIMEOUT("Timeout exceeded.") diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/spawnbase.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/spawnbase.py new file mode 100644 index 0000000000000000000000000000000000000000..abf8071ec152e68d4d30265360b185e24c1a6231 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/spawnbase.py @@ -0,0 +1,536 @@ +from io import StringIO, BytesIO +import codecs +import os +import sys +import re +import errno +from .exceptions import ExceptionPexpect, EOF, TIMEOUT +from .expect import Expecter, searcher_string, searcher_re + +PY3 = (sys.version_info[0] >= 3) +text_type = str if PY3 else unicode + +class _NullCoder(object): + """Pass bytes through unchanged.""" + @staticmethod + def encode(b, final=False): + return b + + @staticmethod + def decode(b, final=False): + return b + +class SpawnBase(object): + """A base class providing the backwards-compatible spawn API for Pexpect. + + This should not be instantiated directly: use :class:`pexpect.spawn` or + :class:`pexpect.fdpexpect.fdspawn`. 
+ """ + encoding = None + pid = None + flag_eof = False + + def __init__(self, timeout=30, maxread=2000, searchwindowsize=None, + logfile=None, encoding=None, codec_errors='strict'): + self.stdin = sys.stdin + self.stdout = sys.stdout + self.stderr = sys.stderr + + self.searcher = None + self.ignorecase = False + self.before = None + self.after = None + self.match = None + self.match_index = None + self.terminated = True + self.exitstatus = None + self.signalstatus = None + # status returned by os.waitpid + self.status = None + # the child file descriptor is initially closed + self.child_fd = -1 + self.timeout = timeout + self.delimiter = EOF + self.logfile = logfile + # input from child (read_nonblocking) + self.logfile_read = None + # output to send (send, sendline) + self.logfile_send = None + # max bytes to read at one time into buffer + self.maxread = maxread + # Data before searchwindowsize point is preserved, but not searched. + self.searchwindowsize = searchwindowsize + # Delay used before sending data to child. Time in seconds. + # Set this to None to skip the time.sleep() call completely. + self.delaybeforesend = 0.05 + # Used by close() to give kernel time to update process status. + # Time in seconds. + self.delayafterclose = 0.1 + # Used by terminate() to give kernel time to update process status. + # Time in seconds. + self.delayafterterminate = 0.1 + # Delay in seconds to sleep after each call to read_nonblocking(). + # Set this to None to skip the time.sleep() call completely: that + # would restore the behavior from pexpect-2.0 (for performance + # reasons or because you don't want to release Python's global + # interpreter lock). 
+ self.delayafterread = 0.0001 + self.softspace = False + self.name = '<' + repr(self) + '>' + self.closed = True + + # Unicode interface + self.encoding = encoding + self.codec_errors = codec_errors + if encoding is None: + # bytes mode (accepts some unicode for backwards compatibility) + self._encoder = self._decoder = _NullCoder() + self.string_type = bytes + self.buffer_type = BytesIO + self.crlf = b'\r\n' + if PY3: + self.allowed_string_types = (bytes, str) + self.linesep = os.linesep.encode('ascii') + def write_to_stdout(b): + try: + return sys.stdout.buffer.write(b) + except AttributeError: + # If stdout has been replaced, it may not have .buffer + return sys.stdout.write(b.decode('ascii', 'replace')) + self.write_to_stdout = write_to_stdout + else: + self.allowed_string_types = (basestring,) # analysis:ignore + self.linesep = os.linesep + self.write_to_stdout = sys.stdout.write + else: + # unicode mode + self._encoder = codecs.getincrementalencoder(encoding)(codec_errors) + self._decoder = codecs.getincrementaldecoder(encoding)(codec_errors) + self.string_type = text_type + self.buffer_type = StringIO + self.crlf = u'\r\n' + self.allowed_string_types = (text_type, ) + if PY3: + self.linesep = os.linesep + else: + self.linesep = os.linesep.decode('ascii') + # This can handle unicode in both Python 2 and 3 + self.write_to_stdout = sys.stdout.write + # storage for async transport + self.async_pw_transport = None + # This is the read buffer. See maxread. + self._buffer = self.buffer_type() + # The buffer may be trimmed for efficiency reasons. This is the + # untrimmed buffer, used to create the before attribute. 
+ self._before = self.buffer_type() + + def _log(self, s, direction): + if self.logfile is not None: + self.logfile.write(s) + self.logfile.flush() + second_log = self.logfile_send if (direction=='send') else self.logfile_read + if second_log is not None: + second_log.write(s) + second_log.flush() + + # For backwards compatibility, in bytes mode (when encoding is None) + # unicode is accepted for send and expect. Unicode mode is strictly unicode + # only. + def _coerce_expect_string(self, s): + if self.encoding is None and not isinstance(s, bytes): + return s.encode('ascii') + return s + + # In bytes mode, regex patterns should also be of bytes type + def _coerce_expect_re(self, r): + p = r.pattern + if self.encoding is None and not isinstance(p, bytes): + return re.compile(p.encode('utf-8')) + # And vice-versa + elif self.encoding is not None and isinstance(p, bytes): + return re.compile(p.decode('utf-8')) + return r + + def _coerce_send_string(self, s): + if self.encoding is None and not isinstance(s, bytes): + return s.encode('utf-8') + return s + + def _get_buffer(self): + return self._buffer.getvalue() + + def _set_buffer(self, value): + self._buffer = self.buffer_type() + self._buffer.write(value) + + # This property is provided for backwards compatibility (self.buffer used + # to be a string/bytes object) + buffer = property(_get_buffer, _set_buffer) + + def read_nonblocking(self, size=1, timeout=None): + """This reads data from the file descriptor. + + This is a simple implementation suitable for a regular file. Subclasses using ptys or pipes should override it. + + The timeout parameter is ignored. + """ + + try: + s = os.read(self.child_fd, size) + except OSError as err: + if err.args[0] == errno.EIO: + # Linux-style EOF + self.flag_eof = True + raise EOF('End Of File (EOF). Exception style platform.') + raise + if s == b'': + # BSD-style EOF + self.flag_eof = True + raise EOF('End Of File (EOF). 
Empty string style platform.')
+
+        s = self._decoder.decode(s, final=False)
+        self._log(s, 'read')
+        return s
+
+    def _pattern_type_err(self, pattern):
+        raise TypeError('got {badtype} ({badobj!r}) as pattern, must be one'
+                        ' of: {goodtypes}, pexpect.EOF, pexpect.TIMEOUT'\
+                        .format(badtype=type(pattern),
+                                badobj=pattern,
+                                goodtypes=', '.join([str(ast)\
+                                    for ast in self.allowed_string_types])
+                                )
+                        )
+
+    def compile_pattern_list(self, patterns):
+        '''This compiles a pattern-string or a list of pattern-strings.
+        Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of
+        those. Patterns may also be None which results in an empty list (you
+        might do this if waiting for an EOF or TIMEOUT condition without
+        expecting any pattern).
+
+        This is used by expect() when calling expect_list(). Thus expect() is
+        nothing more than::
+
+            cpl = self.compile_pattern_list(pl)
+            return self.expect_list(cpl, timeout)
+
+        If you are using expect() within a loop it may be more
+        efficient to compile the patterns first and then call expect_list().
+        This avoids calls in a loop to compile_pattern_list()::
+
+            cpl = self.compile_pattern_list(my_pattern)
+            while some_condition:
+               ...
+               i = self.expect_list(cpl, timeout)
+               ...
+ ''' + + if patterns is None: + return [] + if not isinstance(patterns, list): + patterns = [patterns] + + # Allow dot to match \n + compile_flags = re.DOTALL + if self.ignorecase: + compile_flags = compile_flags | re.IGNORECASE + compiled_pattern_list = [] + for idx, p in enumerate(patterns): + if isinstance(p, self.allowed_string_types): + p = self._coerce_expect_string(p) + compiled_pattern_list.append(re.compile(p, compile_flags)) + elif p is EOF: + compiled_pattern_list.append(EOF) + elif p is TIMEOUT: + compiled_pattern_list.append(TIMEOUT) + elif isinstance(p, type(re.compile(''))): + p = self._coerce_expect_re(p) + compiled_pattern_list.append(p) + else: + self._pattern_type_err(p) + return compiled_pattern_list + + def expect(self, pattern, timeout=-1, searchwindowsize=-1, async_=False, **kw): + '''This seeks through the stream until a pattern is matched. The + pattern is overloaded and may take several types. The pattern can be a + StringType, EOF, a compiled re, or a list of any of those types. + Strings will be compiled to re types. This returns the index into the + pattern list. If the pattern was not a list this returns index 0 on a + successful match. This may raise exceptions for EOF or TIMEOUT. To + avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern + list. That will cause expect to match an EOF or TIMEOUT condition + instead of raising an exception. + + If you pass a list of patterns and more than one matches, the first + match in the stream is chosen. If more than one pattern matches at that + point, the leftmost in the pattern list is chosen. For example:: + + # the input is 'foobar' + index = p.expect(['bar', 'foo', 'foobar']) + # returns 1('foo') even though 'foobar' is a "better" match + + Please note, however, that buffering can affect this behavior, since + input arrives in unpredictable chunks. 
For example:: + + # the input is 'foobar' + index = p.expect(['foobar', 'foo']) + # returns 0('foobar') if all input is available at once, + # but returns 1('foo') if parts of the final 'bar' arrive late + + When a match is found for the given pattern, the class instance + attribute *match* becomes an re.MatchObject result. Should an EOF + or TIMEOUT pattern match, then the match attribute will be an instance + of that exception class. The pairing before and after class + instance attributes are views of the data preceding and following + the matching pattern. On general exception, class attribute + *before* is all data received up to the exception, while *match* and + *after* attributes are value None. + + When the keyword argument timeout is -1 (default), then TIMEOUT will + raise after the default value specified by the class timeout + attribute. When None, TIMEOUT will not be raised and may block + indefinitely until match. + + When the keyword argument searchwindowsize is -1 (default), then the + value specified by the class maxread attribute is used. + + A list entry may be EOF or TIMEOUT instead of a string. This will + catch these exceptions and return the index of the list entry instead + of raising the exception. The attribute 'after' will be set to the + exception type. The attribute 'match' will be None. This allows you to + write code like this:: + + index = p.expect(['good', 'bad', pexpect.EOF, pexpect.TIMEOUT]) + if index == 0: + do_something() + elif index == 1: + do_something_else() + elif index == 2: + do_some_other_thing() + elif index == 3: + do_something_completely_different() + + instead of code like this:: + + try: + index = p.expect(['good', 'bad']) + if index == 0: + do_something() + elif index == 1: + do_something_else() + except EOF: + do_some_other_thing() + except TIMEOUT: + do_something_completely_different() + + These two forms are equivalent. It all depends on what you want. 
You
+        can also just expect the EOF if you are waiting for all output of a
+        child to finish. For example::
+
+            p = pexpect.spawn('/bin/ls')
+            p.expect(pexpect.EOF)
+            print(p.before)
+
+        If you are trying to optimize for speed then see expect_list().
+
+        On Python 3.4, or Python 3.3 with asyncio installed, passing
+        ``async_=True`` will make this return an :mod:`asyncio` coroutine,
+        which you can yield from to get the same result that this method would
+        normally give directly. So, inside a coroutine, you can replace this code::
+
+            index = p.expect(patterns)
+
+        With this non-blocking form::
+
+            index = yield from p.expect(patterns, async_=True)
+        '''
+        if 'async' in kw:
+            async_ = kw.pop('async')
+        if kw:
+            raise TypeError("Unknown keyword arguments: {}".format(kw))
+
+        compiled_pattern_list = self.compile_pattern_list(pattern)
+        return self.expect_list(compiled_pattern_list,
+                timeout, searchwindowsize, async_)
+
+    def expect_list(self, pattern_list, timeout=-1, searchwindowsize=-1,
+                    async_=False, **kw):
+        '''This takes a list of compiled regular expressions and returns the
+        index into the pattern_list that matched the child output. The list may
+        also contain EOF or TIMEOUT (which are not compiled regular
+        expressions). This method is similar to the expect() method except that
+        expect_list() does not recompile the pattern list on every call. This
+        may help if you are trying to optimize for speed, otherwise just use
+        the expect() method. This is called by expect().
+
+
+        Like :meth:`expect`, passing ``async_=True`` will make this return an
+        asyncio coroutine.
+ ''' + if timeout == -1: + timeout = self.timeout + if 'async' in kw: + async_ = kw.pop('async') + if kw: + raise TypeError("Unknown keyword arguments: {}".format(kw)) + + exp = Expecter(self, searcher_re(pattern_list), searchwindowsize) + if async_: + from ._async import expect_async + return expect_async(exp, timeout) + else: + return exp.expect_loop(timeout) + + def expect_exact(self, pattern_list, timeout=-1, searchwindowsize=-1, + async_=False, **kw): + + '''This is similar to expect(), but uses plain string matching instead + of compiled regular expressions in 'pattern_list'. The 'pattern_list' + may be a string; a list or other sequence of strings; or TIMEOUT and + EOF. + + This call might be faster than expect() for two reasons: string + searching is faster than RE matching and it is possible to limit the + search to just the end of the input buffer. + + This method is also useful when you don't want to have to worry about + escaping regular expression characters that you want to match. + + Like :meth:`expect`, passing ``async_=True`` will make this return an + asyncio coroutine. 
+ ''' + if timeout == -1: + timeout = self.timeout + if 'async' in kw: + async_ = kw.pop('async') + if kw: + raise TypeError("Unknown keyword arguments: {}".format(kw)) + + if (isinstance(pattern_list, self.allowed_string_types) or + pattern_list in (TIMEOUT, EOF)): + pattern_list = [pattern_list] + + def prepare_pattern(pattern): + if pattern in (TIMEOUT, EOF): + return pattern + if isinstance(pattern, self.allowed_string_types): + return self._coerce_expect_string(pattern) + self._pattern_type_err(pattern) + + try: + pattern_list = iter(pattern_list) + except TypeError: + self._pattern_type_err(pattern_list) + pattern_list = [prepare_pattern(p) for p in pattern_list] + + exp = Expecter(self, searcher_string(pattern_list), searchwindowsize) + if async_: + from ._async import expect_async + return expect_async(exp, timeout) + else: + return exp.expect_loop(timeout) + + def expect_loop(self, searcher, timeout=-1, searchwindowsize=-1): + '''This is the common loop used inside expect. The 'searcher' should be + an instance of searcher_re or searcher_string, which describes how and + what to search for in the input. + + See expect() for other arguments, return value and exceptions. ''' + + exp = Expecter(self, searcher, searchwindowsize) + return exp.expect_loop(timeout) + + def read(self, size=-1): + '''This reads at most "size" bytes from the file (less if the read hits + EOF before obtaining size bytes). If the size argument is negative or + omitted, read all data until EOF is reached. The bytes are returned as + a string object. An empty string is returned when EOF is encountered + immediately. ''' + + if size == 0: + return self.string_type() + if size < 0: + # delimiter default is EOF + self.expect(self.delimiter) + return self.before + + # I could have done this more directly by not using expect(), but + # I deliberately decided to couple read() to expect() so that + # I would catch any bugs early and ensure consistent behavior. 
+ # It's a little less efficient, but there is less for me to + # worry about if I have to later modify read() or expect(). + # Note, it's OK if size==-1 in the regex. That just means it + # will never match anything in which case we stop only on EOF. + cre = re.compile(self._coerce_expect_string('.{%d}' % size), re.DOTALL) + # delimiter default is EOF + index = self.expect([cre, self.delimiter]) + if index == 0: + ### FIXME self.before should be ''. Should I assert this? + return self.after + return self.before + + def readline(self, size=-1): + '''This reads and returns one entire line. The newline at the end of + line is returned as part of the string, unless the file ends without a + newline. An empty string is returned if EOF is encountered immediately. + This looks for a newline as a CR/LF pair (\\r\\n) even on UNIX because + this is what the pseudotty device returns. So contrary to what you may + expect you will receive newlines as \\r\\n. + + If the size argument is 0 then an empty string is returned. In all + other cases the size argument is ignored, which is not standard + behavior for a file-like object. ''' + + if size == 0: + return self.string_type() + # delimiter default is EOF + index = self.expect([self.crlf, self.delimiter]) + if index == 0: + return self.before + self.crlf + else: + return self.before + + def __iter__(self): + '''This is to support iterators over a file-like object. + ''' + return iter(self.readline, self.string_type()) + + def readlines(self, sizehint=-1): + '''This reads until EOF using readline() and returns a list containing + the lines thus read. The optional 'sizehint' argument is ignored. + Remember, because this reads until EOF that means the child + process should have closed its stdout. 
If you run this method on
+        a child that is still running with its stdout open then this
+        method will block until it times out.'''
+
+        lines = []
+        while True:
+            line = self.readline()
+            if not line:
+                break
+            lines.append(line)
+        return lines
+
+    def fileno(self):
+        '''Expose file descriptor for a file-like interface
+        '''
+        return self.child_fd
+
+    def flush(self):
+        '''This does nothing. It is here to support the interface for a
+        File-like object. '''
+        pass
+
+    def isatty(self):
+        """Overridden in subclass using tty"""
+        return False
+
+    # For 'with spawn(...) as child:'
+    def __enter__(self):
+        return self
+
+    def __exit__(self, etype, evalue, tb):
+        # We rely on subclasses to implement close(). If they don't, it's not
+        # clear what a context manager should do.
+        self.close()
diff --git a/evalkit_internvl/lib/python3.10/site-packages/pexpect/utils.py b/evalkit_internvl/lib/python3.10/site-packages/pexpect/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..f774519609005dface41fd15021c7f237f091341
--- /dev/null
+++ b/evalkit_internvl/lib/python3.10/site-packages/pexpect/utils.py
@@ -0,0 +1,187 @@
+import os
+import sys
+import stat
+import select
+import time
+import errno
+
+try:
+    InterruptedError
+except NameError:
+    # Alias Python2 exception to Python3
+    InterruptedError = select.error
+
+if sys.version_info[0] >= 3:
+    string_types = (str,)
+else:
+    string_types = (unicode, str)
+
+
+def is_executable_file(path):
+    """Checks that path is an executable regular file, or a symlink towards one.
+
+    This is roughly ``os.path.isfile(path) and os.access(path, os.X_OK)``.
+    """
+    # follow symlinks,
+    fpath = os.path.realpath(path)
+
+    if not os.path.isfile(fpath):
+        # non-files (directories, fifo, etc.)
+        return False
+
+    mode = os.stat(fpath).st_mode
+
+    if (sys.platform.startswith('sunos')
+            and os.getuid() == 0):
+        # When root on Solaris, os.X_OK is True for *all* files, regardless
+        # of their executability -- instead, any permission bit of any user,
+        # group, or other is fine enough.
+        #
+        # (This may be true for other "Unix98" OS's such as HP-UX and AIX)
+        return bool(mode & (stat.S_IXUSR |
+                            stat.S_IXGRP |
+                            stat.S_IXOTH))
+
+    return os.access(fpath, os.X_OK)
+
+
+def which(filename, env=None):
+    '''This takes a given filename; tries to find it in the environment path;
+    then checks if it is executable. This returns the full path to the filename
+    if found and executable. Otherwise this returns None.'''
+
+    # Special case where filename contains an explicit path.
+    if os.path.dirname(filename) != '' and is_executable_file(filename):
+        return filename
+    if env is None:
+        env = os.environ
+    p = env.get('PATH')
+    if not p:
+        p = os.defpath
+    pathlist = p.split(os.pathsep)
+    for path in pathlist:
+        ff = os.path.join(path, filename)
+        if is_executable_file(ff):
+            return ff
+    return None
+
+
+def split_command_line(command_line):
+
+    '''This splits a command line into a list of arguments. It splits arguments
+    on spaces, but handles embedded quotes, doublequotes, and escaped
+    characters. It's impossible to do this with a regular expression, so I
+    wrote a little state machine to parse the command line. '''
+
+    arg_list = []
+    arg = ''
+
+    # Constants to name the states we can be in.
+    state_basic = 0
+    state_esc = 1
+    state_singlequote = 2
+    state_doublequote = 3
+    # The state when consuming whitespace between commands.
+ state_whitespace = 4 + state = state_basic + + for c in command_line: + if state == state_basic or state == state_whitespace: + if c == '\\': + # Escape the next character + state = state_esc + elif c == r"'": + # Handle single quote + state = state_singlequote + elif c == r'"': + # Handle double quote + state = state_doublequote + elif c.isspace(): + # Add arg to arg_list if we aren't in the middle of whitespace. + if state == state_whitespace: + # Do nothing. + None + else: + arg_list.append(arg) + arg = '' + state = state_whitespace + else: + arg = arg + c + state = state_basic + elif state == state_esc: + arg = arg + c + state = state_basic + elif state == state_singlequote: + if c == r"'": + state = state_basic + else: + arg = arg + c + elif state == state_doublequote: + if c == r'"': + state = state_basic + else: + arg = arg + c + + if arg != '': + arg_list.append(arg) + return arg_list + + +def select_ignore_interrupts(iwtd, owtd, ewtd, timeout=None): + + '''This is a wrapper around select.select() that ignores signals. If + select.select raises a select.error exception and errno is an EINTR + error then it is ignored. Mainly this is used to ignore sigwinch + (terminal resize). ''' + + # if select() is interrupted by a signal (errno==EINTR) then + # we loop back and enter the select() again. + if timeout is not None: + end_time = time.time() + timeout + while True: + try: + return select.select(iwtd, owtd, ewtd, timeout) + except InterruptedError: + err = sys.exc_info()[1] + if err.args[0] == errno.EINTR: + # if we loop back we have to subtract the + # amount of time we already waited. + if timeout is not None: + timeout = end_time - time.time() + if timeout < 0: + return([], [], []) + else: + # something else caused the select.error, so + # this actually is an exception. 
+ raise + + +def poll_ignore_interrupts(fds, timeout=None): + '''Simple wrapper around poll to register file descriptors and + ignore signals.''' + + if timeout is not None: + end_time = time.time() + timeout + + poller = select.poll() + for fd in fds: + poller.register(fd, select.POLLIN | select.POLLPRI | select.POLLHUP | select.POLLERR) + + while True: + try: + timeout_ms = None if timeout is None else timeout * 1000 + results = poller.poll(timeout_ms) + return [afd for afd, _ in results] + except InterruptedError: + err = sys.exc_info()[1] + if err.args[0] == errno.EINTR: + # if we loop back we have to subtract the + # amount of time we already waited. + if timeout is not None: + timeout = end_time - time.time() + if timeout < 0: + return [] + else: + # something else caused the select.error, so + # this actually is an exception. + raise diff --git a/evalkit_internvl/lib/python3.10/site-packages/sympy/polys/benchmarks/__pycache__/bench_solvers.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/sympy/polys/benchmarks/__pycache__/bench_solvers.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..4ea25d28608b47dae7d0f18eb54c60d2165526b4 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/sympy/polys/benchmarks/__pycache__/bench_solvers.cpython-310.pyc @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2db86a15808f09d39c7522e3a99012d3d7542da9be29c699a86333c58226752 +size 334858 diff --git a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/__init__.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..59071aa99602a4754c7514980a5df8b820a937c2 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/__init__.cpython-310.pyc differ diff --git 
a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/_educational.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/_educational.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..df6abf9d05736a19f6daf542ab883beb01644341 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/_educational.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/core.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/core.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..d47ae1ce7622b763142719f75841470f7bfc4307 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/core.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/registry.cpython-310.pyc b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/registry.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..3b0f10abe19c95b6a5f9c96bbf38867c4f996301 Binary files /dev/null and b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/__pycache__/registry.cpython-310.pyc differ diff --git a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/load.py b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/load.py new file mode 100644 index 0000000000000000000000000000000000000000..8434c23450d393d790ded0264ee2ec6ebd671805 --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/load.py @@ -0,0 +1,148 @@ +from __future__ import annotations + +import base64 +import hashlib +import json +import os +import tempfile +import uuid + +import requests + + +def read_file(blobpath: str) -> bytes: + if not blobpath.startswith("http://") and not blobpath.startswith("https://"): + try: + import blobfile + except ImportError as e: + raise ImportError( + 
"blobfile is not installed. Please install it by running `pip install blobfile`." + ) from e + with blobfile.BlobFile(blobpath, "rb") as f: + return f.read() + # avoiding blobfile for public files helps avoid auth issues, like MFA prompts + resp = requests.get(blobpath) + resp.raise_for_status() + return resp.content + + +def check_hash(data: bytes, expected_hash: str) -> bool: + actual_hash = hashlib.sha256(data).hexdigest() + return actual_hash == expected_hash + + +def read_file_cached(blobpath: str, expected_hash: str | None = None) -> bytes: + user_specified_cache = True + if "TIKTOKEN_CACHE_DIR" in os.environ: + cache_dir = os.environ["TIKTOKEN_CACHE_DIR"] + elif "DATA_GYM_CACHE_DIR" in os.environ: + cache_dir = os.environ["DATA_GYM_CACHE_DIR"] + else: + cache_dir = os.path.join(tempfile.gettempdir(), "data-gym-cache") + user_specified_cache = False + + if cache_dir == "": + # disable caching + return read_file(blobpath) + + cache_key = hashlib.sha1(blobpath.encode()).hexdigest() + + cache_path = os.path.join(cache_dir, cache_key) + if os.path.exists(cache_path): + with open(cache_path, "rb") as f: + data = f.read() + if expected_hash is None or check_hash(data, expected_hash): + return data + + # the cached file does not match the hash, remove it and re-fetch + try: + os.remove(cache_path) + except OSError: + pass + + contents = read_file(blobpath) + if expected_hash and not check_hash(contents, expected_hash): + raise ValueError( + f"Hash mismatch for data downloaded from {blobpath} (expected {expected_hash}). " + f"This may indicate a corrupted download. Please try again." + ) + + try: + os.makedirs(cache_dir, exist_ok=True) + tmp_filename = cache_path + "." + str(uuid.uuid4()) + ".tmp" + with open(tmp_filename, "wb") as f: + f.write(contents) + os.rename(tmp_filename, cache_path) + except OSError: + # don't raise if we can't write to the default cache, e.g. 
issue #75 + if user_specified_cache: + raise + + return contents + + +def data_gym_to_mergeable_bpe_ranks( + vocab_bpe_file: str, + encoder_json_file: str, + vocab_bpe_hash: str | None = None, + encoder_json_hash: str | None = None, +) -> dict[bytes, int]: + # NB: do not add caching to this function + rank_to_intbyte = [b for b in range(2**8) if chr(b).isprintable() and chr(b) != " "] + + data_gym_byte_to_byte = {chr(b): b for b in rank_to_intbyte} + n = 0 + for b in range(2**8): + if b not in rank_to_intbyte: + rank_to_intbyte.append(b) + data_gym_byte_to_byte[chr(2**8 + n)] = b + n += 1 + assert len(rank_to_intbyte) == 2**8 + + # vocab_bpe contains the merges along with associated ranks + vocab_bpe_contents = read_file_cached(vocab_bpe_file, vocab_bpe_hash).decode() + bpe_merges = [tuple(merge_str.split()) for merge_str in vocab_bpe_contents.split("\n")[1:-1]] + + def decode_data_gym(value: str) -> bytes: + return bytes(data_gym_byte_to_byte[b] for b in value) + + # add the single byte tokens + bpe_ranks = {bytes([b]): i for i, b in enumerate(rank_to_intbyte)} + # add the merged tokens + n = len(bpe_ranks) + for first, second in bpe_merges: + bpe_ranks[decode_data_gym(first) + decode_data_gym(second)] = n + n += 1 + + # check that the encoder file matches the merges file + # this sanity check is important since tiktoken assumes that ranks are ordered the same + # as merge priority + encoder_json = json.loads(read_file_cached(encoder_json_file, encoder_json_hash)) + encoder_json_loaded = {decode_data_gym(k): v for k, v in encoder_json.items()} + # drop these two special tokens if present, since they're not mergeable bpe tokens + encoder_json_loaded.pop(b"<|endoftext|>", None) + encoder_json_loaded.pop(b"<|startoftext|>", None) + assert bpe_ranks == encoder_json_loaded + + return bpe_ranks + + +def dump_tiktoken_bpe(bpe_ranks: dict[bytes, int], tiktoken_bpe_file: str) -> None: + try: + import blobfile + except ImportError as e: + raise ImportError( + "blobfile is 
not installed. Please install it by running `pip install blobfile`." + ) from e + with blobfile.BlobFile(tiktoken_bpe_file, "wb") as f: + for token, rank in sorted(bpe_ranks.items(), key=lambda x: x[1]): + f.write(base64.b64encode(token) + b" " + str(rank).encode() + b"\n") + + +def load_tiktoken_bpe(tiktoken_bpe_file: str, expected_hash: str | None = None) -> dict[bytes, int]: + # NB: do not add caching to this function + contents = read_file_cached(tiktoken_bpe_file, expected_hash) + return { + base64.b64decode(token): int(rank) + for token, rank in (line.split() for line in contents.splitlines() if line) + } diff --git a/evalkit_internvl/lib/python3.10/site-packages/tiktoken/model.py b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/model.py new file mode 100644 index 0000000000000000000000000000000000000000..681b9131b92357cd0ee03e555fee7735018c71ec --- /dev/null +++ b/evalkit_internvl/lib/python3.10/site-packages/tiktoken/model.py @@ -0,0 +1,105 @@ +from __future__ import annotations + +from .core import Encoding +from .registry import get_encoding + +# TODO: these will likely be replaced by an API endpoint +MODEL_PREFIX_TO_ENCODING: dict[str, str] = { + "o1-": "o200k_base", + # chat + "chatgpt-4o-": "o200k_base", + "gpt-4o-": "o200k_base", # e.g., gpt-4o-2024-05-13 + "gpt-4-": "cl100k_base", # e.g., gpt-4-0314, etc., plus gpt-4-32k + "gpt-3.5-turbo-": "cl100k_base", # e.g., gpt-3.5-turbo-0301, -0401, etc.
+ "gpt-35-turbo-": "cl100k_base", # Azure deployment name + # fine-tuned + "ft:gpt-4": "cl100k_base", + "ft:gpt-3.5-turbo": "cl100k_base", + "ft:davinci-002": "cl100k_base", + "ft:babbage-002": "cl100k_base", +} + +MODEL_TO_ENCODING: dict[str, str] = { + # chat + "gpt-4o": "o200k_base", + "gpt-4": "cl100k_base", + "gpt-3.5-turbo": "cl100k_base", + "gpt-3.5": "cl100k_base", # Common shorthand + "gpt-35-turbo": "cl100k_base", # Azure deployment name + # base + "davinci-002": "cl100k_base", + "babbage-002": "cl100k_base", + # embeddings + "text-embedding-ada-002": "cl100k_base", + "text-embedding-3-small": "cl100k_base", + "text-embedding-3-large": "cl100k_base", + # DEPRECATED MODELS + # text (DEPRECATED) + "text-davinci-003": "p50k_base", + "text-davinci-002": "p50k_base", + "text-davinci-001": "r50k_base", + "text-curie-001": "r50k_base", + "text-babbage-001": "r50k_base", + "text-ada-001": "r50k_base", + "davinci": "r50k_base", + "curie": "r50k_base", + "babbage": "r50k_base", + "ada": "r50k_base", + # code (DEPRECATED) + "code-davinci-002": "p50k_base", + "code-davinci-001": "p50k_base", + "code-cushman-002": "p50k_base", + "code-cushman-001": "p50k_base", + "davinci-codex": "p50k_base", + "cushman-codex": "p50k_base", + # edit (DEPRECATED) + "text-davinci-edit-001": "p50k_edit", + "code-davinci-edit-001": "p50k_edit", + # old embeddings (DEPRECATED) + "text-similarity-davinci-001": "r50k_base", + "text-similarity-curie-001": "r50k_base", + "text-similarity-babbage-001": "r50k_base", + "text-similarity-ada-001": "r50k_base", + "text-search-davinci-doc-001": "r50k_base", + "text-search-curie-doc-001": "r50k_base", + "text-search-babbage-doc-001": "r50k_base", + "text-search-ada-doc-001": "r50k_base", + "code-search-babbage-code-001": "r50k_base", + "code-search-ada-code-001": "r50k_base", + # open source + "gpt2": "gpt2", + "gpt-2": "gpt2", # Maintains consistency with gpt-4 +} + + +def encoding_name_for_model(model_name: str) -> str: + """Returns the name of the 
encoding used by a model. + + Raises a KeyError if the model name is not recognised. + """ + encoding_name = None + if model_name in MODEL_TO_ENCODING: + encoding_name = MODEL_TO_ENCODING[model_name] + else: + # Check if the model matches a known prefix + # Prefix matching avoids needing library updates for every model version release + # Note that this can match on non-existent models (e.g., gpt-3.5-turbo-FAKE) + for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items(): + if model_name.startswith(model_prefix): + return model_encoding_name + + if encoding_name is None: + raise KeyError( + f"Could not automatically map {model_name} to a tokeniser. " + "Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect." + ) from None + + return encoding_name + + +def encoding_for_model(model_name: str) -> Encoding: + """Returns the encoding used by a model. + + Raises a KeyError if the model name is not recognised. + """ + return get_encoding(encoding_name_for_model(model_name)) diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..ebc528f1945086e1651315db8432107c81df7c93 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/__pycache__/__init__.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.cpython-310-x86_64-linux-gnu.so b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.cpython-310-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..6b61f2e1df132d0bd1812e0bb6b13ea819e29035 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.cpython-310-x86_64-linux-gnu.so differ diff --git 
a/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.pyx b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.pyx new file mode 100644 index 0000000000000000000000000000000000000000..0409e73f5e96dc3a4c27889fa44eda8a17d36ef9 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/_check_build.pyx @@ -0,0 +1,2 @@ +def check_build(): + return diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/meson.build b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/meson.build new file mode 100644 index 0000000000000000000000000000000000000000..8295e6b5736390aec21f112e0557f044889be49d --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/__check_build/meson.build @@ -0,0 +1,7 @@ +py.extension_module( + '_check_build', + '_check_build.pyx', + cython_args: cython_args, + install: true, + subdir: 'sklearn/__check_build', +) diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..2266b6e08af88e528539eadbb0d3f516e4f380d2 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/__init__.py @@ -0,0 +1,22 @@ +"""Data embedding techniques.""" + +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + +from ._isomap import Isomap +from ._locally_linear import LocallyLinearEmbedding, locally_linear_embedding +from ._mds import MDS, smacof +from ._spectral_embedding import SpectralEmbedding, spectral_embedding +from ._t_sne import TSNE, trustworthiness + +__all__ = [ + "locally_linear_embedding", + "LocallyLinearEmbedding", + "Isomap", + "MDS", + "smacof", + "SpectralEmbedding", + "spectral_embedding", + "TSNE", + "trustworthiness", +] diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_barnes_hut_tsne.pyx 
b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_barnes_hut_tsne.pyx new file mode 100644 index 0000000000000000000000000000000000000000..f0906fbf2bec80ec8aa71a138e2637d182d4733f --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_barnes_hut_tsne.pyx @@ -0,0 +1,295 @@ +# Author: Christopher Moody +# Author: Nick Travers +# Implementation by Chris Moody & Nick Travers +# See http://homepage.tudelft.nl/19j49/t-SNE.html for reference +# implementations and papers describing the technique + + +import numpy as np +cimport numpy as cnp +from libc.stdio cimport printf +from libc.math cimport log +from libc.stdlib cimport malloc, free +from libc.time cimport clock, clock_t +from cython.parallel cimport prange, parallel + +from ..neighbors._quad_tree cimport _QuadTree + +cnp.import_array() + + +cdef char* EMPTY_STRING = "" + +# Smallest strictly positive value that can be represented by floating +# point numbers for different precision levels. This is useful to avoid +# taking the log of zero when computing the KL divergence. +cdef float FLOAT32_TINY = np.finfo(np.float32).tiny + +# Useful to avoid division by zero or divergence to +inf.
+cdef float FLOAT64_EPS = np.finfo(np.float64).eps + +# This is effectively an ifdef statement in Cython +# It allows us to write printf debugging lines +# and remove them at compile time +cdef enum: + DEBUGFLAG = 0 + +cdef float compute_gradient(float[:] val_P, + float[:, :] pos_reference, + cnp.int64_t[:] neighbors, + cnp.int64_t[:] indptr, + float[:, :] tot_force, + _QuadTree qt, + float theta, + int dof, + long start, + bint compute_error, + int num_threads) noexcept nogil: + # Having created the tree, calculate the gradient + # in two components, the positive and negative forces + cdef: + long i, coord + int ax + long n_samples = pos_reference.shape[0] + int n_dimensions = qt.n_dimensions + clock_t t1 = 0, t2 = 0 + double sQ + float error + int take_timing = 1 if qt.verbose > 15 else 0 + + if qt.verbose > 11: + printf("[t-SNE] Allocating %li elements in force arrays\n", + n_samples * n_dimensions * 2) + cdef float* neg_f = malloc(sizeof(float) * n_samples * n_dimensions) + cdef float* pos_f = malloc(sizeof(float) * n_samples * n_dimensions) + + if take_timing: + t1 = clock() + sQ = compute_gradient_negative(pos_reference, neg_f, qt, dof, theta, start, + num_threads) + if take_timing: + t2 = clock() + printf("[t-SNE] Computing negative gradient: %e ticks\n", ((float) (t2 - t1))) + + if take_timing: + t1 = clock() + error = compute_gradient_positive(val_P, pos_reference, neighbors, indptr, + pos_f, n_dimensions, dof, sQ, start, + qt.verbose, compute_error, num_threads) + if take_timing: + t2 = clock() + printf("[t-SNE] Computing positive gradient: %e ticks\n", + ((float) (t2 - t1))) + for i in prange(start, n_samples, nogil=True, num_threads=num_threads, + schedule='static'): + for ax in range(n_dimensions): + coord = i * n_dimensions + ax + tot_force[i, ax] = pos_f[coord] - (neg_f[coord] / sQ) + + free(neg_f) + free(pos_f) + return error + + +cdef float compute_gradient_positive(float[:] val_P, + float[:, :] pos_reference, + cnp.int64_t[:] neighbors, + 
cnp.int64_t[:] indptr, + float* pos_f, + int n_dimensions, + int dof, + double sum_Q, + cnp.int64_t start, + int verbose, + bint compute_error, + int num_threads) noexcept nogil: + # Sum over the following expression for i not equal to j + # grad_i = p_ij (1 + ||y_i - y_j||^2)^-1 (y_i - y_j) + # This is equivalent to compute_edge_forces in the authors' code + # It just goes over the nearest neighbors instead of all the data points + # (unlike the non-nearest neighbors version of `compute_gradient_positive') + cdef: + int ax + long i, j, k + long n_samples = indptr.shape[0] - 1 + float C = 0.0 + float dij, qij, pij + float exponent = (dof + 1.0) / 2.0 + float float_dof = (float) (dof) + float* buff + clock_t t1 = 0, t2 = 0 + float dt + + if verbose > 10: + t1 = clock() + + with nogil, parallel(num_threads=num_threads): + # Define private buffer variables + buff = malloc(sizeof(float) * n_dimensions) + + for i in prange(start, n_samples, schedule='static'): + # Init the gradient vector + for ax in range(n_dimensions): + pos_f[i * n_dimensions + ax] = 0.0 + # Compute the positive interaction for the nearest neighbors + for k in range(indptr[i], indptr[i+1]): + j = neighbors[k] + dij = 0.0 + pij = val_P[k] + for ax in range(n_dimensions): + buff[ax] = pos_reference[i, ax] - pos_reference[j, ax] + dij += buff[ax] * buff[ax] + qij = float_dof / (float_dof + dij) + if dof != 1: # i.e. 
exponent != 1 + qij = qij ** exponent + dij = pij * qij + + # only compute the error when needed + if compute_error: + qij = qij / sum_Q + C += pij * log(max(pij, FLOAT32_TINY) / max(qij, FLOAT32_TINY)) + for ax in range(n_dimensions): + pos_f[i * n_dimensions + ax] += dij * buff[ax] + + free(buff) + if verbose > 10: + t2 = clock() + dt = ((float) (t2 - t1)) + printf("[t-SNE] Computed error=%1.4f in %1.1e ticks\n", C, dt) + return C + + +cdef double compute_gradient_negative(float[:, :] pos_reference, + float* neg_f, + _QuadTree qt, + int dof, + float theta, + long start, + int num_threads) noexcept nogil: + cdef: + int ax + int n_dimensions = qt.n_dimensions + int offset = n_dimensions + 2 + long i, j, idx + long n_samples = pos_reference.shape[0] + long n = n_samples - start + long dta = 0 + long dtb = 0 + float size, dist2s, mult + float exponent = (dof + 1.0) / 2.0 + float float_dof = (float) (dof) + double qijZ, sum_Q = 0.0 + float* force + float* neg_force + float* pos + clock_t t1 = 0, t2 = 0, t3 = 0 + int take_timing = 1 if qt.verbose > 20 else 0 + + with nogil, parallel(num_threads=num_threads): + # Define thread-local buffers + summary = malloc(sizeof(float) * n * offset) + pos = malloc(sizeof(float) * n_dimensions) + force = malloc(sizeof(float) * n_dimensions) + neg_force = malloc(sizeof(float) * n_dimensions) + + for i in prange(start, n_samples, schedule='static'): + # Clear the arrays + for ax in range(n_dimensions): + force[ax] = 0.0 + neg_force[ax] = 0.0 + pos[ax] = pos_reference[i, ax] + + # Find which nodes are summarizing and collect their centers of mass + # deltas, and sizes, into vectorized arrays + if take_timing: + t1 = clock() + idx = qt.summarize(pos, summary, theta*theta) + if take_timing: + t2 = clock() + # Compute the t-SNE negative force + # for the digits dataset, walking the tree + # is about 10-15x more expensive than the + # following for loop + for j in range(idx // offset): + + dist2s = summary[j * offset + n_dimensions] + size 
= summary[j * offset + n_dimensions + 1] + qijZ = float_dof / (float_dof + dist2s) # 1/(1+dist) + if dof != 1: # i.e. exponent != 1 + qijZ = qijZ ** exponent + + sum_Q += size * qijZ # size of the node * q + mult = size * qijZ * qijZ + for ax in range(n_dimensions): + neg_force[ax] += mult * summary[j * offset + ax] + if take_timing: + t3 = clock() + for ax in range(n_dimensions): + neg_f[i * n_dimensions + ax] = neg_force[ax] + if take_timing: + dta += t2 - t1 + dtb += t3 - t2 + free(pos) + free(force) + free(neg_force) + free(summary) + if take_timing: + printf("[t-SNE] Tree: %li clock ticks | ", dta) + printf("Force computation: %li clock ticks\n", dtb) + + # Put sum_Q to machine EPSILON to avoid divisions by 0 + sum_Q = max(sum_Q, FLOAT64_EPS) + return sum_Q + + +def gradient(float[:] val_P, + float[:, :] pos_output, + cnp.int64_t[:] neighbors, + cnp.int64_t[:] indptr, + float[:, :] forces, + float theta, + int n_dimensions, + int verbose, + int dof=1, + long skip_num_points=0, + bint compute_error=1, + int num_threads=1): + # This function is designed to be called from external Python; + # it takes the 'forces' array by reference and fills that array + # in-place + cdef float C + cdef int n + n = pos_output.shape[0] + assert val_P.itemsize == 4 + assert pos_output.itemsize == 4 + assert forces.itemsize == 4 + m = "Forces array and pos_output shapes are incompatible" + assert n == forces.shape[0], m + m = "Pij and pos_output shapes are incompatible" + assert n == indptr.shape[0] - 1, m + if verbose > 10: + printf("[t-SNE] Initializing tree of n_dimensions %i\n", n_dimensions) + cdef _QuadTree qt = _QuadTree(pos_output.shape[1], verbose) + if verbose > 10: + printf("[t-SNE] Inserting %li points\n", pos_output.shape[0]) + qt.build_tree(pos_output) + if verbose > 10: + # XXX: format hack to workaround lack of `const char *` type + # in the generated C code that triggers error with gcc 4.9 + # and -Werror=format-security + printf("[t-SNE] Computing
gradient\n%s", EMPTY_STRING) + + C = compute_gradient(val_P, pos_output, neighbors, indptr, forces, + qt, theta, dof, skip_num_points, compute_error, + num_threads) + + if verbose > 10: + # XXX: format hack to workaround lack of `const char *` type + # in the generated C code + # and -Werror=format-security + printf("[t-SNE] Checking tree consistency\n%s", EMPTY_STRING) + m = "Tree consistency failed: unexpected number of points on the tree" + assert qt.cells[0].cumulative_size == qt.n_points, m + if not compute_error: + C = np.nan + return C diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_isomap.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_isomap.py new file mode 100644 index 0000000000000000000000000000000000000000..90154470c18a486a250ea112cb31e57167d2eb43 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_isomap.py @@ -0,0 +1,442 @@ +"""Isomap for manifold learning""" + +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + +import warnings +from numbers import Integral, Real + +import numpy as np +from scipy.sparse import issparse +from scipy.sparse.csgraph import connected_components, shortest_path + +from ..base import ( + BaseEstimator, + ClassNamePrefixFeaturesOutMixin, + TransformerMixin, + _fit_context, +) +from ..decomposition import KernelPCA +from ..metrics.pairwise import _VALID_METRICS +from ..neighbors import NearestNeighbors, kneighbors_graph, radius_neighbors_graph +from ..preprocessing import KernelCenterer +from ..utils._param_validation import Interval, StrOptions +from ..utils.graph import _fix_connected_components +from ..utils.validation import check_is_fitted + + +class Isomap(ClassNamePrefixFeaturesOutMixin, TransformerMixin, BaseEstimator): + """Isomap Embedding. + + Non-linear dimensionality reduction through Isometric Mapping + + Read more in the :ref:`User Guide `. 
+ + Parameters + ---------- + n_neighbors : int or None, default=5 + Number of neighbors to consider for each point. If `n_neighbors` is an int, + then `radius` must be `None`. + + radius : float or None, default=None + Limiting distance of neighbors to return. If `radius` is a float, + then `n_neighbors` must be set to `None`. + + .. versionadded:: 1.1 + + n_components : int, default=2 + Number of coordinates for the manifold. + + eigen_solver : {'auto', 'arpack', 'dense'}, default='auto' + 'auto' : Attempt to choose the most efficient solver + for the given problem. + + 'arpack' : Use Arnoldi decomposition to find the eigenvalues + and eigenvectors. + + 'dense' : Use a direct solver (i.e. LAPACK) + for the eigenvalue decomposition. + + tol : float, default=0 + Convergence tolerance passed to arpack or lobpcg. + not used if eigen_solver == 'dense'. + + max_iter : int, default=None + Maximum number of iterations for the arpack solver. + not used if eigen_solver == 'dense'. + + path_method : {'auto', 'FW', 'D'}, default='auto' + Method to use in finding shortest path. + + 'auto' : attempt to choose the best algorithm automatically. + + 'FW' : Floyd-Warshall algorithm. + + 'D' : Dijkstra's algorithm. + + neighbors_algorithm : {'auto', 'brute', 'kd_tree', 'ball_tree'}, \ + default='auto' + Algorithm to use for nearest neighbors search, + passed to neighbors.NearestNeighbors instance. + + n_jobs : int or None, default=None + The number of parallel jobs to run. + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + metric : str, or callable, default="minkowski" + The metric to use when calculating distance between instances in a + feature array. If metric is a string or callable, it must be one of + the options allowed by :func:`sklearn.metrics.pairwise_distances` for + its metric parameter. 
+ If metric is "precomputed", X is assumed to be a distance matrix and + must be square. X may be a :term:`Glossary `. + + .. versionadded:: 0.22 + + p : float, default=2 + Parameter for the Minkowski metric from + sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is + equivalent to using manhattan_distance (l1), and euclidean_distance + (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. + + .. versionadded:: 0.22 + + metric_params : dict, default=None + Additional keyword arguments for the metric function. + + .. versionadded:: 0.22 + + Attributes + ---------- + embedding_ : array-like, shape (n_samples, n_components) + Stores the embedding vectors. + + kernel_pca_ : object + :class:`~sklearn.decomposition.KernelPCA` object used to implement the + embedding. + + nbrs_ : sklearn.neighbors.NearestNeighbors instance + Stores nearest neighbors instance, including BallTree or KDtree + if applicable. + + dist_matrix_ : array-like, shape (n_samples, n_samples) + Stores the geodesic distance matrix of training data. + + n_features_in_ : int + Number of features seen during :term:`fit`. + + .. versionadded:: 0.24 + + feature_names_in_ : ndarray of shape (`n_features_in_`,) + Names of features seen during :term:`fit`. Defined only when `X` + has feature names that are all strings. + + .. versionadded:: 1.0 + + See Also + -------- + sklearn.decomposition.PCA : Principal component analysis that is a linear + dimensionality reduction method. + sklearn.decomposition.KernelPCA : Non-linear dimensionality reduction using + kernels and PCA. + MDS : Manifold learning using multidimensional scaling. + TSNE : T-distributed Stochastic Neighbor Embedding. + LocallyLinearEmbedding : Manifold learning using Locally Linear Embedding. + SpectralEmbedding : Spectral embedding for non-linear dimensionality. + + References + ---------- + + .. [1] Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric + framework for nonlinear dimensionality reduction. 
Science 290 (5500) + + Examples + -------- + >>> from sklearn.datasets import load_digits + >>> from sklearn.manifold import Isomap + >>> X, _ = load_digits(return_X_y=True) + >>> X.shape + (1797, 64) + >>> embedding = Isomap(n_components=2) + >>> X_transformed = embedding.fit_transform(X[:100]) + >>> X_transformed.shape + (100, 2) + """ + + _parameter_constraints: dict = { + "n_neighbors": [Interval(Integral, 1, None, closed="left"), None], + "radius": [Interval(Real, 0, None, closed="both"), None], + "n_components": [Interval(Integral, 1, None, closed="left")], + "eigen_solver": [StrOptions({"auto", "arpack", "dense"})], + "tol": [Interval(Real, 0, None, closed="left")], + "max_iter": [Interval(Integral, 1, None, closed="left"), None], + "path_method": [StrOptions({"auto", "FW", "D"})], + "neighbors_algorithm": [StrOptions({"auto", "brute", "kd_tree", "ball_tree"})], + "n_jobs": [Integral, None], + "p": [Interval(Real, 1, None, closed="left")], + "metric": [StrOptions(set(_VALID_METRICS) | {"precomputed"}), callable], + "metric_params": [dict, None], + } + + def __init__( + self, + *, + n_neighbors=5, + radius=None, + n_components=2, + eigen_solver="auto", + tol=0, + max_iter=None, + path_method="auto", + neighbors_algorithm="auto", + n_jobs=None, + metric="minkowski", + p=2, + metric_params=None, + ): + self.n_neighbors = n_neighbors + self.radius = radius + self.n_components = n_components + self.eigen_solver = eigen_solver + self.tol = tol + self.max_iter = max_iter + self.path_method = path_method + self.neighbors_algorithm = neighbors_algorithm + self.n_jobs = n_jobs + self.metric = metric + self.p = p + self.metric_params = metric_params + + def _fit_transform(self, X): + if self.n_neighbors is not None and self.radius is not None: + raise ValueError( + "Both n_neighbors and radius are provided. 
Use" + f" Isomap(radius={self.radius}, n_neighbors=None) if intended to use" + " radius-based neighbors" + ) + + self.nbrs_ = NearestNeighbors( + n_neighbors=self.n_neighbors, + radius=self.radius, + algorithm=self.neighbors_algorithm, + metric=self.metric, + p=self.p, + metric_params=self.metric_params, + n_jobs=self.n_jobs, + ) + self.nbrs_.fit(X) + self.n_features_in_ = self.nbrs_.n_features_in_ + if hasattr(self.nbrs_, "feature_names_in_"): + self.feature_names_in_ = self.nbrs_.feature_names_in_ + + self.kernel_pca_ = KernelPCA( + n_components=self.n_components, + kernel="precomputed", + eigen_solver=self.eigen_solver, + tol=self.tol, + max_iter=self.max_iter, + n_jobs=self.n_jobs, + ).set_output(transform="default") + + if self.n_neighbors is not None: + nbg = kneighbors_graph( + self.nbrs_, + self.n_neighbors, + metric=self.metric, + p=self.p, + metric_params=self.metric_params, + mode="distance", + n_jobs=self.n_jobs, + ) + else: + nbg = radius_neighbors_graph( + self.nbrs_, + radius=self.radius, + metric=self.metric, + p=self.p, + metric_params=self.metric_params, + mode="distance", + n_jobs=self.n_jobs, + ) + + # Compute the number of connected components, and connect the different + # components to be able to compute a shortest path between all pairs + # of samples in the graph. + # Similar fix to cluster._agglomerative._fix_connectivity. + n_connected_components, labels = connected_components(nbg) + if n_connected_components > 1: + if self.metric == "precomputed" and issparse(X): + raise RuntimeError( + "The number of connected components of the neighbors graph" + f" is {n_connected_components} > 1. The graph cannot be " + "completed with metric='precomputed', and Isomap cannot be" + "fitted. Increase the number of neighbors to avoid this " + "issue, or precompute the full distance matrix instead " + "of passing a sparse neighbors graph." 
+ ) + warnings.warn( + ( + "The number of connected components of the neighbors graph " + f"is {n_connected_components} > 1. Completing the graph to fit" + " Isomap might be slow. Increase the number of neighbors to " + "avoid this issue." + ), + stacklevel=2, + ) + + # use array validated by NearestNeighbors + nbg = _fix_connected_components( + X=self.nbrs_._fit_X, + graph=nbg, + n_connected_components=n_connected_components, + component_labels=labels, + mode="distance", + metric=self.nbrs_.effective_metric_, + **self.nbrs_.effective_metric_params_, + ) + + self.dist_matrix_ = shortest_path(nbg, method=self.path_method, directed=False) + + if self.nbrs_._fit_X.dtype == np.float32: + self.dist_matrix_ = self.dist_matrix_.astype( + self.nbrs_._fit_X.dtype, copy=False + ) + + G = self.dist_matrix_**2 + G *= -0.5 + + self.embedding_ = self.kernel_pca_.fit_transform(G) + self._n_features_out = self.embedding_.shape[1] + + def reconstruction_error(self): + """Compute the reconstruction error for the embedding. + + Returns + ------- + reconstruction_error : float + Reconstruction error. + + Notes + ----- + The cost function of an isomap embedding is + + ``E = frobenius_norm[K(D) - K(D_fit)] / n_samples`` + + Where D is the matrix of distances for the input data X, + D_fit is the matrix of distances for the output embedding X_fit, + and K is the isomap kernel: + + ``K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples)`` + """ + G = -0.5 * self.dist_matrix_**2 + G_center = KernelCenterer().fit_transform(G) + evals = self.kernel_pca_.eigenvalues_ + return np.sqrt(np.sum(G_center**2) - np.sum(evals**2)) / G.shape[0] + + @_fit_context( + # Isomap.metric is not validated yet + prefer_skip_nested_validation=False + ) + def fit(self, X, y=None): + """Compute the embedding vectors for data X. 
+ + Parameters + ---------- + X : {array-like, sparse matrix, BallTree, KDTree, NearestNeighbors} + Sample data, shape = (n_samples, n_features), in the form of a + numpy array, sparse matrix, precomputed tree, or NearestNeighbors + object. + + y : Ignored + Not used, present for API consistency by convention. + + Returns + ------- + self : object + Returns a fitted instance of self. + """ + self._fit_transform(X) + return self + + @_fit_context( + # Isomap.metric is not validated yet + prefer_skip_nested_validation=False + ) + def fit_transform(self, X, y=None): + """Fit the model from data in X and transform X. + + Parameters + ---------- + X : {array-like, sparse matrix, BallTree, KDTree} + Training vector, where `n_samples` is the number of samples + and `n_features` is the number of features. + + y : Ignored + Not used, present for API consistency by convention. + + Returns + ------- + X_new : array-like, shape (n_samples, n_components) + X transformed in the new space. + """ + self._fit_transform(X) + return self.embedding_ + + def transform(self, X): + """Transform X. + + This is implemented by linking the points X into the graph of geodesic + distances of the training data. First the `n_neighbors` nearest + neighbors of X are found in the training data, and from these the + shortest geodesic distances from each point in X to each point in + the training data are computed in order to construct the kernel. + The embedding of X is the projection of this kernel onto the + embedding vectors of the training set. + + Parameters + ---------- + X : {array-like, sparse matrix}, shape (n_queries, n_features) + If neighbors_algorithm='precomputed', X is assumed to be a + distance matrix or a sparse graph of shape + (n_queries, n_samples_fit). + + Returns + ------- + X_new : array-like, shape (n_queries, n_components) + X transformed in the new space. 
+ """ + check_is_fitted(self) + if self.n_neighbors is not None: + distances, indices = self.nbrs_.kneighbors(X, return_distance=True) + else: + distances, indices = self.nbrs_.radius_neighbors(X, return_distance=True) + + # Create the graph of shortest distances from X to + # training data via the nearest neighbors of X. + # This can be done as a single array operation, but it potentially + # takes a lot of memory. To avoid that, use a loop: + + n_samples_fit = self.nbrs_.n_samples_fit_ + n_queries = distances.shape[0] + + if hasattr(X, "dtype") and X.dtype == np.float32: + dtype = np.float32 + else: + dtype = np.float64 + + G_X = np.zeros((n_queries, n_samples_fit), dtype) + for i in range(n_queries): + G_X[i] = np.min(self.dist_matrix_[indices[i]] + distances[i][:, None], 0) + + G_X **= 2 + G_X *= -0.5 + + return self.kernel_pca_.transform(G_X) + + def __sklearn_tags__(self): + tags = super().__sklearn_tags__() + tags.transformer_tags.preserves_dtype = ["float64", "float32"] + tags.input_tags.sparse = True + return tags diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_locally_linear.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_locally_linear.py new file mode 100644 index 0000000000000000000000000000000000000000..c07976ae50c7169ef24b3a6e1d9765ff000c73fe --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_locally_linear.py @@ -0,0 +1,879 @@ +"""Locally Linear Embedding""" + +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + +from numbers import Integral, Real + +import numpy as np +from scipy.linalg import eigh, qr, solve, svd +from scipy.sparse import csr_matrix, eye, lil_matrix +from scipy.sparse.linalg import eigsh + +from ..base import ( + BaseEstimator, + ClassNamePrefixFeaturesOutMixin, + TransformerMixin, + _fit_context, + _UnstableArchMixin, +) +from ..neighbors import NearestNeighbors +from ..utils import check_array, check_random_state +from 
..utils._arpack import _init_arpack_v0 +from ..utils._param_validation import Interval, StrOptions, validate_params +from ..utils.extmath import stable_cumsum +from ..utils.validation import FLOAT_DTYPES, check_is_fitted, validate_data + + +def barycenter_weights(X, Y, indices, reg=1e-3): + """Compute barycenter weights of X from Y along the first axis + + We estimate the weights to assign to each point in Y[indices] to recover + the point X[i]. The barycenter weights sum to 1. + + Parameters + ---------- + X : array-like, shape (n_samples, n_dim) + + Y : array-like, shape (n_samples, n_dim) + + indices : array-like, shape (n_samples, n_dim) + Indices of the points in Y used to compute the barycenter + + reg : float, default=1e-3 + Amount of regularization to add for the problem to be + well-posed in the case of n_neighbors > n_dim + + Returns + ------- + B : array-like, shape (n_samples, n_neighbors) + + Notes + ----- + See developers note for more information. + """ + X = check_array(X, dtype=FLOAT_DTYPES) + Y = check_array(Y, dtype=FLOAT_DTYPES) + indices = check_array(indices, dtype=int) + + n_samples, n_neighbors = indices.shape + assert X.shape[0] == n_samples + + B = np.empty((n_samples, n_neighbors), dtype=X.dtype) + v = np.ones(n_neighbors, dtype=X.dtype) + + # this might raise a LinalgError if G is singular and has trace + # zero + for i, ind in enumerate(indices): + A = Y[ind] + C = A - X[i] # broadcasting + G = np.dot(C, C.T) + trace = np.trace(G) + if trace > 0: + R = reg * trace + else: + R = reg + G.flat[:: n_neighbors + 1] += R + w = solve(G, v, assume_a="pos") + B[i, :] = w / np.sum(w) + return B + + +def barycenter_kneighbors_graph(X, n_neighbors, reg=1e-3, n_jobs=None): + """Computes the barycenter weighted graph of k-Neighbors for points in X + + Parameters + ---------- + X : {array-like, NearestNeighbors} + Sample data, shape = (n_samples, n_features), in the form of a + numpy array or a NearestNeighbors object. 
+ + n_neighbors : int + Number of neighbors for each sample. + + reg : float, default=1e-3 + Amount of regularization when solving the least-squares + problem. + + n_jobs : int or None, default=None + The number of parallel jobs to run for neighbors search. + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + Returns + ------- + A : sparse matrix in CSR format, shape = [n_samples, n_samples] + A[i, j] is assigned the weight of edge that connects i to j. + + See Also + -------- + sklearn.neighbors.kneighbors_graph + sklearn.neighbors.radius_neighbors_graph + """ + knn = NearestNeighbors(n_neighbors=n_neighbors + 1, n_jobs=n_jobs).fit(X) + X = knn._fit_X + n_samples = knn.n_samples_fit_ + ind = knn.kneighbors(X, return_distance=False)[:, 1:] + data = barycenter_weights(X, X, ind, reg=reg) + indptr = np.arange(0, n_samples * n_neighbors + 1, n_neighbors) + return csr_matrix((data.ravel(), ind.ravel(), indptr), shape=(n_samples, n_samples)) + + +def null_space( + M, k, k_skip=1, eigen_solver="arpack", tol=1e-6, max_iter=100, random_state=None +): + """ + Find the null space of a matrix M. + + Parameters + ---------- + M : {array, matrix, sparse matrix, LinearOperator} + Input covariance matrix: should be symmetric positive semi-definite + + k : int + Number of eigenvalues/vectors to return + + k_skip : int, default=1 + Number of low eigenvalues to skip. + + eigen_solver : {'auto', 'arpack', 'dense'}, default='arpack' + auto : algorithm will attempt to choose the best method for input data + arpack : use arnoldi iteration in shift-invert mode. + For this method, M may be a dense matrix, sparse matrix, + or general linear operator. + Warning: ARPACK can be unstable for some problems. It is + best to try several random seeds in order to check results. 
+ dense : use standard dense matrix operations for the eigenvalue + decomposition. For this method, M must be an array + or matrix type. This method should be avoided for + large problems. + + tol : float, default=1e-6 + Tolerance for 'arpack' method. + Not used if eigen_solver=='dense'. + + max_iter : int, default=100 + Maximum number of iterations for 'arpack' method. + Not used if eigen_solver=='dense'. + + random_state : int, RandomState instance, default=None + Determines the random number generator when ``eigen_solver`` == 'arpack'. + Pass an int for reproducible results across multiple function calls. + See :term:`Glossary `. + """ + if eigen_solver == "auto": + if M.shape[0] > 200 and k + k_skip < 10: + eigen_solver = "arpack" + else: + eigen_solver = "dense" + + if eigen_solver == "arpack": + v0 = _init_arpack_v0(M.shape[0], random_state) + try: + eigen_values, eigen_vectors = eigsh( + M, k + k_skip, sigma=0.0, tol=tol, maxiter=max_iter, v0=v0 + ) + except RuntimeError as e: + raise ValueError( + "Error in determining null-space with ARPACK. Error message: " + "'%s'. Note that eigen_solver='arpack' can fail when the " + "weight matrix is singular or otherwise ill-behaved. In that " + "case, eigen_solver='dense' is recommended. See online " + "documentation for more information." 
% e + ) from e + + return eigen_vectors[:, k_skip:], np.sum(eigen_values[k_skip:]) + elif eigen_solver == "dense": + if hasattr(M, "toarray"): + M = M.toarray() + eigen_values, eigen_vectors = eigh( + M, subset_by_index=(k_skip, k + k_skip - 1), overwrite_a=True + ) + index = np.argsort(np.abs(eigen_values)) + return eigen_vectors[:, index], np.sum(eigen_values) + else: + raise ValueError("Unrecognized eigen_solver '%s'" % eigen_solver) + + +def _locally_linear_embedding( + X, + *, + n_neighbors, + n_components, + reg=1e-3, + eigen_solver="auto", + tol=1e-6, + max_iter=100, + method="standard", + hessian_tol=1e-4, + modified_tol=1e-12, + random_state=None, + n_jobs=None, +): + nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1, n_jobs=n_jobs) + nbrs.fit(X) + X = nbrs._fit_X + + N, d_in = X.shape + + if n_components > d_in: + raise ValueError( + "output dimension must be less than or equal to input dimension" + ) + if n_neighbors >= N: + raise ValueError( + "Expected n_neighbors <= n_samples, but n_samples = %d, n_neighbors = %d" + % (N, n_neighbors) + ) + + M_sparse = eigen_solver != "dense" + M_container_constructor = lil_matrix if M_sparse else np.zeros + + if method == "standard": + W = barycenter_kneighbors_graph( + nbrs, n_neighbors=n_neighbors, reg=reg, n_jobs=n_jobs + ) + + # we'll compute M = (I-W)'(I-W) + # depending on the solver, we'll do this differently + if M_sparse: + M = eye(*W.shape, format=W.format) - W + M = M.T * M + else: + M = (W.T * W - W.T - W).toarray() + M.flat[:: M.shape[0] + 1] += 1 # M = W' W - W' - W + I + + elif method == "hessian": + dp = n_components * (n_components + 1) // 2 + + if n_neighbors <= n_components + dp: + raise ValueError( + "for method='hessian', n_neighbors must be " + "greater than " + "[n_components * (n_components + 3) / 2]" + ) + + neighbors = nbrs.kneighbors( + X, n_neighbors=n_neighbors + 1, return_distance=False + ) + neighbors = neighbors[:, 1:] + + Yi = np.empty((n_neighbors, 1 + n_components + dp), 
dtype=np.float64) + Yi[:, 0] = 1 + + M = M_container_constructor((N, N), dtype=np.float64) + + use_svd = n_neighbors > d_in + + for i in range(N): + Gi = X[neighbors[i]] + Gi -= Gi.mean(0) + + # build Hessian estimator + if use_svd: + U = svd(Gi, full_matrices=0)[0] + else: + Ci = np.dot(Gi, Gi.T) + U = eigh(Ci)[1][:, ::-1] + + Yi[:, 1 : 1 + n_components] = U[:, :n_components] + + j = 1 + n_components + for k in range(n_components): + Yi[:, j : j + n_components - k] = U[:, k : k + 1] * U[:, k:n_components] + j += n_components - k + + Q, R = qr(Yi) + + w = Q[:, n_components + 1 :] + S = w.sum(0) + + S[np.where(abs(S) < hessian_tol)] = 1 + w /= S + + nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i]) + M[nbrs_x, nbrs_y] += np.dot(w, w.T) + + elif method == "modified": + if n_neighbors < n_components: + raise ValueError("modified LLE requires n_neighbors >= n_components") + + neighbors = nbrs.kneighbors( + X, n_neighbors=n_neighbors + 1, return_distance=False + ) + neighbors = neighbors[:, 1:] + + # find the eigenvectors and eigenvalues of each local covariance + # matrix. We want V[i] to be a [n_neighbors x n_neighbors] matrix, + # where the columns are eigenvectors + V = np.zeros((N, n_neighbors, n_neighbors)) + nev = min(d_in, n_neighbors) + evals = np.zeros([N, nev]) + + # choose the most efficient way to find the eigenvectors + use_svd = n_neighbors > d_in + + if use_svd: + for i in range(N): + X_nbrs = X[neighbors[i]] - X[i] + V[i], evals[i], _ = svd(X_nbrs, full_matrices=True) + evals **= 2 + else: + for i in range(N): + X_nbrs = X[neighbors[i]] - X[i] + C_nbrs = np.dot(X_nbrs, X_nbrs.T) + evi, vi = eigh(C_nbrs) + evals[i] = evi[::-1] + V[i] = vi[:, ::-1] + + # find regularized weights: this is like normal LLE. 
+ # because we've already computed the SVD of each covariance matrix, + # it's faster to use this rather than np.linalg.solve + reg = 1e-3 * evals.sum(1) + + tmp = np.dot(V.transpose(0, 2, 1), np.ones(n_neighbors)) + tmp[:, :nev] /= evals + reg[:, None] + tmp[:, nev:] /= reg[:, None] + + w_reg = np.zeros((N, n_neighbors)) + for i in range(N): + w_reg[i] = np.dot(V[i], tmp[i]) + w_reg /= w_reg.sum(1)[:, None] + + # calculate eta: the median of the ratio of small to large eigenvalues + # across the points. This is used to determine s_i, below + rho = evals[:, n_components:].sum(1) / evals[:, :n_components].sum(1) + eta = np.median(rho) + + # find s_i, the size of the "almost null space" for each point: + # this is the size of the largest set of eigenvalues + # such that Sum[v; v in set]/Sum[v; v not in set] < eta + s_range = np.zeros(N, dtype=int) + evals_cumsum = stable_cumsum(evals, 1) + eta_range = evals_cumsum[:, -1:] / evals_cumsum[:, :-1] - 1 + for i in range(N): + s_range[i] = np.searchsorted(eta_range[i, ::-1], eta) + s_range += n_neighbors - nev # number of zero eigenvalues + + # Now calculate M. 
+ # This is the [N x N] matrix whose null space is the desired embedding + M = M_container_constructor((N, N), dtype=np.float64) + + for i in range(N): + s_i = s_range[i] + + # select bottom s_i eigenvectors and calculate alpha + Vi = V[i, :, n_neighbors - s_i :] + alpha_i = np.linalg.norm(Vi.sum(0)) / np.sqrt(s_i) + + # compute Householder matrix which satisfies + # Hi*Vi.T*ones(n_neighbors) = alpha_i*ones(s) + # using prescription from paper + h = np.full(s_i, alpha_i) - np.dot(Vi.T, np.ones(n_neighbors)) + + norm_h = np.linalg.norm(h) + if norm_h < modified_tol: + h *= 0 + else: + h /= norm_h + + # Householder matrix is + # >> Hi = np.identity(s_i) - 2*np.outer(h,h) + # Then the weight matrix is + # >> Wi = np.dot(Vi,Hi) + (1-alpha_i) * w_reg[i,:,None] + # We do this much more efficiently: + Wi = Vi - 2 * np.outer(np.dot(Vi, h), h) + (1 - alpha_i) * w_reg[i, :, None] + + # Update M as follows: + # >> W_hat = np.zeros( (N,s_i) ) + # >> W_hat[neighbors[i],:] = Wi + # >> W_hat[i] -= 1 + # >> M += np.dot(W_hat,W_hat.T) + # We can do this much more efficiently: + nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i]) + M[nbrs_x, nbrs_y] += np.dot(Wi, Wi.T) + Wi_sum1 = Wi.sum(1) + M[i, neighbors[i]] -= Wi_sum1 + M[neighbors[i], [i]] -= Wi_sum1 + M[i, i] += s_i + + elif method == "ltsa": + neighbors = nbrs.kneighbors( + X, n_neighbors=n_neighbors + 1, return_distance=False + ) + neighbors = neighbors[:, 1:] + + M = M_container_constructor((N, N), dtype=np.float64) + + use_svd = n_neighbors > d_in + + for i in range(N): + Xi = X[neighbors[i]] + Xi -= Xi.mean(0) + + # compute n_components largest eigenvalues of Xi * Xi^T + if use_svd: + v = svd(Xi, full_matrices=True)[0] + else: + Ci = np.dot(Xi, Xi.T) + v = eigh(Ci)[1][:, ::-1] + + Gi = np.zeros((n_neighbors, n_components + 1)) + Gi[:, 1:] = v[:, :n_components] + Gi[:, 0] = 1.0 / np.sqrt(n_neighbors) + + GiGiT = np.dot(Gi, Gi.T) + + nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i]) + M[nbrs_x, nbrs_y] -= GiGiT 
+ + M[neighbors[i], neighbors[i]] += np.ones(shape=n_neighbors) + + if M_sparse: + M = M.tocsr() + + return null_space( + M, + n_components, + k_skip=1, + eigen_solver=eigen_solver, + tol=tol, + max_iter=max_iter, + random_state=random_state, + ) + + +@validate_params( + { + "X": ["array-like", NearestNeighbors], + "n_neighbors": [Interval(Integral, 1, None, closed="left")], + "n_components": [Interval(Integral, 1, None, closed="left")], + "reg": [Interval(Real, 0, None, closed="left")], + "eigen_solver": [StrOptions({"auto", "arpack", "dense"})], + "tol": [Interval(Real, 0, None, closed="left")], + "max_iter": [Interval(Integral, 1, None, closed="left")], + "method": [StrOptions({"standard", "hessian", "modified", "ltsa"})], + "hessian_tol": [Interval(Real, 0, None, closed="left")], + "modified_tol": [Interval(Real, 0, None, closed="left")], + "random_state": ["random_state"], + "n_jobs": [None, Integral], + }, + prefer_skip_nested_validation=True, +) +def locally_linear_embedding( + X, + *, + n_neighbors, + n_components, + reg=1e-3, + eigen_solver="auto", + tol=1e-6, + max_iter=100, + method="standard", + hessian_tol=1e-4, + modified_tol=1e-12, + random_state=None, + n_jobs=None, +): + """Perform a Locally Linear Embedding analysis on the data. + + Read more in the :ref:`User Guide `. + + Parameters + ---------- + X : {array-like, NearestNeighbors} + Sample data, shape = (n_samples, n_features), in the form of a + numpy array or a NearestNeighbors object. + + n_neighbors : int + Number of neighbors to consider for each point. + + n_components : int + Number of coordinates for the manifold. + + reg : float, default=1e-3 + Regularization constant, multiplies the trace of the local covariance + matrix of the distances. + + eigen_solver : {'auto', 'arpack', 'dense'}, default='auto' + auto : algorithm will attempt to choose the best method for input data + + arpack : use arnoldi iteration in shift-invert mode. 
+ For this method, M may be a dense matrix, sparse matrix, + or general linear operator. + Warning: ARPACK can be unstable for some problems. It is + best to try several random seeds in order to check results. + + dense : use standard dense matrix operations for the eigenvalue + decomposition. For this method, M must be an array + or matrix type. This method should be avoided for + large problems. + + tol : float, default=1e-6 + Tolerance for 'arpack' method. + Not used if eigen_solver=='dense'. + + max_iter : int, default=100 + Maximum number of iterations for the arpack solver. + + method : {'standard', 'hessian', 'modified', 'ltsa'}, default='standard' + standard : use the standard locally linear embedding algorithm. + see reference [1]_ + hessian : use the Hessian eigenmap method. This method requires + n_neighbors > n_components * (1 + (n_components + 1) / 2). + see reference [2]_ + modified : use the modified locally linear embedding algorithm. + see reference [3]_ + ltsa : use local tangent space alignment algorithm. + see reference [4]_ + + hessian_tol : float, default=1e-4 + Tolerance for Hessian eigenmapping method. + Only used if method == 'hessian'. + + modified_tol : float, default=1e-12 + Tolerance for modified LLE method. + Only used if method == 'modified'. + + random_state : int, RandomState instance, default=None + Determines the random number generator when ``eigen_solver`` == 'arpack'. + Pass an int for reproducible results across multiple function calls. + See :term:`Glossary `. + + n_jobs : int or None, default=None + The number of parallel jobs to run for neighbors search. + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + Returns + ------- + Y : ndarray of shape (n_samples, n_components) + Embedding vectors. + + squared_error : float + Reconstruction error for the embedding vectors. 
Equivalent to + ``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights. + + References + ---------- + + .. [1] Roweis, S. & Saul, L. Nonlinear dimensionality reduction + by locally linear embedding. Science 290:2323 (2000). + .. [2] Donoho, D. & Grimes, C. Hessian eigenmaps: Locally + linear embedding techniques for high-dimensional data. + Proc Natl Acad Sci U S A. 100:5591 (2003). + .. [3] `Zhang, Z. & Wang, J. MLLE: Modified Locally Linear + Embedding Using Multiple Weights. + `_ + .. [4] Zhang, Z. & Zha, H. Principal manifolds and nonlinear + dimensionality reduction via tangent space alignment. + Journal of Shanghai Univ. 8:406 (2004) + + Examples + -------- + >>> from sklearn.datasets import load_digits + >>> from sklearn.manifold import locally_linear_embedding + >>> X, _ = load_digits(return_X_y=True) + >>> X.shape + (1797, 64) + >>> embedding, _ = locally_linear_embedding(X[:100],n_neighbors=5, n_components=2) + >>> embedding.shape + (100, 2) + """ + return _locally_linear_embedding( + X=X, + n_neighbors=n_neighbors, + n_components=n_components, + reg=reg, + eigen_solver=eigen_solver, + tol=tol, + max_iter=max_iter, + method=method, + hessian_tol=hessian_tol, + modified_tol=modified_tol, + random_state=random_state, + n_jobs=n_jobs, + ) + + +class LocallyLinearEmbedding( + ClassNamePrefixFeaturesOutMixin, + TransformerMixin, + _UnstableArchMixin, + BaseEstimator, +): + """Locally Linear Embedding. + + Read more in the :ref:`User Guide `. + + Parameters + ---------- + n_neighbors : int, default=5 + Number of neighbors to consider for each point. + + n_components : int, default=2 + Number of coordinates for the manifold. + + reg : float, default=1e-3 + Regularization constant, multiplies the trace of the local covariance + matrix of the distances. + + eigen_solver : {'auto', 'arpack', 'dense'}, default='auto' + The solver used to compute the eigenvectors. 
The available options are: + + - `'auto'` : algorithm will attempt to choose the best method for input + data. + - `'arpack'` : use arnoldi iteration in shift-invert mode. For this + method, M may be a dense matrix, sparse matrix, or general linear + operator. + - `'dense'` : use standard dense matrix operations for the eigenvalue + decomposition. For this method, M must be an array or matrix type. + This method should be avoided for large problems. + + .. warning:: + ARPACK can be unstable for some problems. It is best to try several + random seeds in order to check results. + + tol : float, default=1e-6 + Tolerance for 'arpack' method. + Not used if eigen_solver=='dense'. + + max_iter : int, default=100 + Maximum number of iterations for the arpack solver. + Not used if eigen_solver=='dense'. + + method : {'standard', 'hessian', 'modified', 'ltsa'}, default='standard' + - `standard`: use the standard locally linear embedding algorithm. see + reference [1]_ + - `hessian`: use the Hessian eigenmap method. This method requires + ``n_neighbors > n_components * (1 + (n_components + 1) / 2)``. see + reference [2]_ + - `modified`: use the modified locally linear embedding algorithm. + see reference [3]_ + - `ltsa`: use local tangent space alignment algorithm. see + reference [4]_ + + hessian_tol : float, default=1e-4 + Tolerance for Hessian eigenmapping method. + Only used if ``method == 'hessian'``. + + modified_tol : float, default=1e-12 + Tolerance for modified LLE method. + Only used if ``method == 'modified'``. + + neighbors_algorithm : {'auto', 'brute', 'kd_tree', 'ball_tree'}, \ + default='auto' + Algorithm to use for nearest neighbors search, passed to + :class:`~sklearn.neighbors.NearestNeighbors` instance. + + random_state : int, RandomState instance, default=None + Determines the random number generator when + ``eigen_solver`` == 'arpack'. Pass an int for reproducible results + across multiple function calls. See :term:`Glossary `. 
+ + n_jobs : int or None, default=None + The number of parallel jobs to run. + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + Attributes + ---------- + embedding_ : array-like, shape [n_samples, n_components] + Stores the embedding vectors + + reconstruction_error_ : float + Reconstruction error associated with `embedding_` + + n_features_in_ : int + Number of features seen during :term:`fit`. + + .. versionadded:: 0.24 + + feature_names_in_ : ndarray of shape (`n_features_in_`,) + Names of features seen during :term:`fit`. Defined only when `X` + has feature names that are all strings. + + .. versionadded:: 1.0 + + nbrs_ : NearestNeighbors object + Stores nearest neighbors instance, including BallTree or KDtree + if applicable. + + See Also + -------- + SpectralEmbedding : Spectral embedding for non-linear dimensionality + reduction. + TSNE : Distributed Stochastic Neighbor Embedding. + + References + ---------- + + .. [1] Roweis, S. & Saul, L. Nonlinear dimensionality reduction + by locally linear embedding. Science 290:2323 (2000). + .. [2] Donoho, D. & Grimes, C. Hessian eigenmaps: Locally + linear embedding techniques for high-dimensional data. + Proc Natl Acad Sci U S A. 100:5591 (2003). + .. [3] `Zhang, Z. & Wang, J. MLLE: Modified Locally Linear + Embedding Using Multiple Weights. + `_ + .. [4] Zhang, Z. & Zha, H. Principal manifolds and nonlinear + dimensionality reduction via tangent space alignment. + Journal of Shanghai Univ. 
8:406 (2004) + + Examples + -------- + >>> from sklearn.datasets import load_digits + >>> from sklearn.manifold import LocallyLinearEmbedding + >>> X, _ = load_digits(return_X_y=True) + >>> X.shape + (1797, 64) + >>> embedding = LocallyLinearEmbedding(n_components=2) + >>> X_transformed = embedding.fit_transform(X[:100]) + >>> X_transformed.shape + (100, 2) + """ + + _parameter_constraints: dict = { + "n_neighbors": [Interval(Integral, 1, None, closed="left")], + "n_components": [Interval(Integral, 1, None, closed="left")], + "reg": [Interval(Real, 0, None, closed="left")], + "eigen_solver": [StrOptions({"auto", "arpack", "dense"})], + "tol": [Interval(Real, 0, None, closed="left")], + "max_iter": [Interval(Integral, 1, None, closed="left")], + "method": [StrOptions({"standard", "hessian", "modified", "ltsa"})], + "hessian_tol": [Interval(Real, 0, None, closed="left")], + "modified_tol": [Interval(Real, 0, None, closed="left")], + "neighbors_algorithm": [StrOptions({"auto", "brute", "kd_tree", "ball_tree"})], + "random_state": ["random_state"], + "n_jobs": [None, Integral], + } + + def __init__( + self, + *, + n_neighbors=5, + n_components=2, + reg=1e-3, + eigen_solver="auto", + tol=1e-6, + max_iter=100, + method="standard", + hessian_tol=1e-4, + modified_tol=1e-12, + neighbors_algorithm="auto", + random_state=None, + n_jobs=None, + ): + self.n_neighbors = n_neighbors + self.n_components = n_components + self.reg = reg + self.eigen_solver = eigen_solver + self.tol = tol + self.max_iter = max_iter + self.method = method + self.hessian_tol = hessian_tol + self.modified_tol = modified_tol + self.random_state = random_state + self.neighbors_algorithm = neighbors_algorithm + self.n_jobs = n_jobs + + def _fit_transform(self, X): + self.nbrs_ = NearestNeighbors( + n_neighbors=self.n_neighbors, + algorithm=self.neighbors_algorithm, + n_jobs=self.n_jobs, + ) + + random_state = check_random_state(self.random_state) + X = validate_data(self, X, dtype=float) + 
self.nbrs_.fit(X) + self.embedding_, self.reconstruction_error_ = _locally_linear_embedding( + X=self.nbrs_, + n_neighbors=self.n_neighbors, + n_components=self.n_components, + eigen_solver=self.eigen_solver, + tol=self.tol, + max_iter=self.max_iter, + method=self.method, + hessian_tol=self.hessian_tol, + modified_tol=self.modified_tol, + random_state=random_state, + reg=self.reg, + n_jobs=self.n_jobs, + ) + self._n_features_out = self.embedding_.shape[1] + + @_fit_context(prefer_skip_nested_validation=True) + def fit(self, X, y=None): + """Compute the embedding vectors for data X. + + Parameters + ---------- + X : array-like of shape (n_samples, n_features) + Training set. + + y : Ignored + Not used, present here for API consistency by convention. + + Returns + ------- + self : object + Fitted `LocallyLinearEmbedding` class instance. + """ + self._fit_transform(X) + return self + + @_fit_context(prefer_skip_nested_validation=True) + def fit_transform(self, X, y=None): + """Compute the embedding vectors for data X and transform X. + + Parameters + ---------- + X : array-like of shape (n_samples, n_features) + Training set. + + y : Ignored + Not used, present here for API consistency by convention. + + Returns + ------- + X_new : array-like, shape (n_samples, n_components) + X transformed in the new space. + """ + self._fit_transform(X) + return self.embedding_ + + def transform(self, X): + """ + Transform new points into embedding space. + + Parameters + ---------- + X : array-like of shape (n_samples, n_features) + Sample points to transform. + + Returns + ------- + X_new : ndarray of shape (n_samples, n_components) + X transformed in the new space. + + Notes + ----- + Because of scaling performed by this method, it is discouraged to use + it together with methods that are not scale-invariant (like SVMs). 
+ """ + check_is_fitted(self) + + X = validate_data(self, X, reset=False) + ind = self.nbrs_.kneighbors( + X, n_neighbors=self.n_neighbors, return_distance=False + ) + weights = barycenter_weights(X, self.nbrs_._fit_X, ind, reg=self.reg) + X_new = np.empty((X.shape[0], self.n_components)) + for i in range(X.shape[0]): + X_new[i] = np.dot(self.embedding_[ind[i]].T, weights[i]) + return X_new diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_mds.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_mds.py new file mode 100644 index 0000000000000000000000000000000000000000..dc9f88b502da5ddcfe4cfc01a540d1561bae36d3 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_mds.py @@ -0,0 +1,659 @@ +""" +Multi-dimensional Scaling (MDS). +""" + +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + +import warnings +from numbers import Integral, Real + +import numpy as np +from joblib import effective_n_jobs + +from ..base import BaseEstimator, _fit_context +from ..isotonic import IsotonicRegression +from ..metrics import euclidean_distances +from ..utils import check_array, check_random_state, check_symmetric +from ..utils._param_validation import Interval, StrOptions, validate_params +from ..utils.parallel import Parallel, delayed +from ..utils.validation import validate_data + + +def _smacof_single( + dissimilarities, + metric=True, + n_components=2, + init=None, + max_iter=300, + verbose=0, + eps=1e-3, + random_state=None, + normalized_stress=False, +): + """Computes multidimensional scaling using SMACOF algorithm. + + Parameters + ---------- + dissimilarities : ndarray of shape (n_samples, n_samples) + Pairwise dissimilarities between the points. Must be symmetric. + + metric : bool, default=True + Compute metric or nonmetric SMACOF algorithm. + When ``False`` (i.e. non-metric MDS), dissimilarities with 0 are considered as + missing values. 
+ + n_components : int, default=2 + Number of dimensions in which to immerse the dissimilarities. If an + ``init`` array is provided, this option is overridden and the shape of + ``init`` is used to determine the dimensionality of the embedding + space. + + init : ndarray of shape (n_samples, n_components), default=None + Starting configuration of the embedding to initialize the algorithm. By + default, the algorithm is initialized with a randomly chosen array. + + max_iter : int, default=300 + Maximum number of iterations of the SMACOF algorithm for a single run. + + verbose : int, default=0 + Level of verbosity. + + eps : float, default=1e-3 + Relative tolerance with respect to stress at which to declare + convergence. The value of `eps` should be tuned separately depending + on whether or not `normalized_stress` is being used. + + random_state : int, RandomState instance or None, default=None + Determines the random number generator used to initialize the centers. + Pass an int for reproducible results across multiple function calls. + See :term:`Glossary `. + + normalized_stress : bool, default=False + Whether to use and return the normalized stress value (Stress-1) + instead of the raw stress. Only supported in non-metric MDS. The + caller must ensure that if `normalized_stress=True` then `metric=False`. + + .. versionadded:: 1.2 + + Returns + ------- + X : ndarray of shape (n_samples, n_components) + Coordinates of the points in a ``n_components``-space. + + stress : float + The final value of the stress (sum of squared distance of the + disparities and the distances for all constrained points). + If `normalized_stress=True` and `metric=False`, returns Stress-1. + A value of 0 indicates "perfect" fit, 0.025 excellent, 0.05 good, + 0.1 fair, and 0.2 poor [1]_. + + n_iter : int + The number of iterations corresponding to the best stress. + + References + ---------- + .. [1] "Nonmetric multidimensional scaling: a numerical method" Kruskal, J. 
+ Psychometrika, 29 (1964) + + .. [2] "Multidimensional scaling by optimizing goodness of fit to a nonmetric + hypothesis" Kruskal, J. Psychometrika, 29, (1964) + + .. [3] "Modern Multidimensional Scaling - Theory and Applications" Borg, I.; + Groenen P. Springer Series in Statistics (1997) + """ + dissimilarities = check_symmetric(dissimilarities, raise_exception=True) + + n_samples = dissimilarities.shape[0] + random_state = check_random_state(random_state) + + sim_flat = ((1 - np.tri(n_samples)) * dissimilarities).ravel() + sim_flat_w = sim_flat[sim_flat != 0] + if init is None: + # Randomly choose initial configuration + X = random_state.uniform(size=n_samples * n_components) + X = X.reshape((n_samples, n_components)) + else: + # overrides the parameter p + n_components = init.shape[1] + if n_samples != init.shape[0]: + raise ValueError( + "init matrix should be of shape (%d, %d)" % (n_samples, n_components) + ) + X = init + + old_stress = None + ir = IsotonicRegression() + for it in range(max_iter): + # Compute distance and monotonic regression + dis = euclidean_distances(X) + + if metric: + disparities = dissimilarities + else: + dis_flat = dis.ravel() + # dissimilarities with 0 are considered as missing values + dis_flat_w = dis_flat[sim_flat != 0] + + # Compute the disparities using a monotonic regression + disparities_flat = ir.fit_transform(sim_flat_w, dis_flat_w) + disparities = dis_flat.copy() + disparities[sim_flat != 0] = disparities_flat + disparities = disparities.reshape((n_samples, n_samples)) + disparities *= np.sqrt( + (n_samples * (n_samples - 1) / 2) / (disparities**2).sum() + ) + + # Compute stress + stress = ((dis.ravel() - disparities.ravel()) ** 2).sum() / 2 + if normalized_stress: + stress = np.sqrt(stress / ((disparities.ravel() ** 2).sum() / 2)) + # Update X using the Guttman transform + dis[dis == 0] = 1e-5 + ratio = disparities / dis + B = -ratio + B[np.arange(len(B)), np.arange(len(B))] += ratio.sum(axis=1) + X = 1.0 / n_samples * 
np.dot(B, X) + + dis = np.sqrt((X**2).sum(axis=1)).sum() + if verbose >= 2: + print("it: %d, stress %s" % (it, stress)) + if old_stress is not None: + if (old_stress - stress / dis) < eps: + if verbose: + print("breaking at iteration %d with stress %s" % (it, stress)) + break + old_stress = stress / dis + + return X, stress, it + 1 + + +@validate_params( + { + "dissimilarities": ["array-like"], + "metric": ["boolean"], + "n_components": [Interval(Integral, 1, None, closed="left")], + "init": ["array-like", None], + "n_init": [Interval(Integral, 1, None, closed="left")], + "n_jobs": [Integral, None], + "max_iter": [Interval(Integral, 1, None, closed="left")], + "verbose": ["verbose"], + "eps": [Interval(Real, 0, None, closed="left")], + "random_state": ["random_state"], + "return_n_iter": ["boolean"], + "normalized_stress": ["boolean", StrOptions({"auto"})], + }, + prefer_skip_nested_validation=True, +) +def smacof( + dissimilarities, + *, + metric=True, + n_components=2, + init=None, + n_init=8, + n_jobs=None, + max_iter=300, + verbose=0, + eps=1e-3, + random_state=None, + return_n_iter=False, + normalized_stress="auto", +): + """Compute multidimensional scaling using the SMACOF algorithm. + + The SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm is a + multidimensional scaling algorithm which minimizes an objective function + (the *stress*) using a majorization technique. Stress majorization, also + known as the Guttman Transform, guarantees a monotone convergence of + stress, and is more powerful than traditional techniques such as gradient + descent. + + The SMACOF algorithm for metric MDS can be summarized by the following + steps: + + 1. Set an initial start configuration, randomly or not. + 2. Compute the stress + 3. Compute the Guttman Transform + 4. Iterate 2 and 3 until convergence. + + The nonmetric algorithm adds a monotonic regression step before computing + the stress. 
+ + Parameters + ---------- + dissimilarities : array-like of shape (n_samples, n_samples) + Pairwise dissimilarities between the points. Must be symmetric. + + metric : bool, default=True + Compute metric or nonmetric SMACOF algorithm. + When ``False`` (i.e. non-metric MDS), dissimilarities with 0 are considered as + missing values. + + n_components : int, default=2 + Number of dimensions in which to immerse the dissimilarities. If an + ``init`` array is provided, this option is overridden and the shape of + ``init`` is used to determine the dimensionality of the embedding + space. + + init : array-like of shape (n_samples, n_components), default=None + Starting configuration of the embedding to initialize the algorithm. By + default, the algorithm is initialized with a randomly chosen array. + + n_init : int, default=8 + Number of times the SMACOF algorithm will be run with different + initializations. The final results will be the best output of the runs, + determined by the run with the smallest final stress. If ``init`` is + provided, this option is overridden and a single run is performed. + + n_jobs : int, default=None + The number of jobs to use for the computation. If multiple + initializations are used (``n_init``), each run of the algorithm is + computed in parallel. + + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + max_iter : int, default=300 + Maximum number of iterations of the SMACOF algorithm for a single run. + + verbose : int, default=0 + Level of verbosity. + + eps : float, default=1e-3 + Relative tolerance with respect to stress at which to declare + convergence. The value of `eps` should be tuned separately depending + on whether or not `normalized_stress` is being used. + + random_state : int, RandomState instance or None, default=None + Determines the random number generator used to initialize the centers. 
+        Pass an int for reproducible results across multiple function calls.
+        See :term:`Glossary `.
+
+    return_n_iter : bool, default=False
+        Whether or not to return the number of iterations.
+
+    normalized_stress : bool or "auto", default="auto"
+        Whether to use and return normed stress value (Stress-1) instead of raw
+        stress calculated by default. Only supported in non-metric MDS.
+
+        .. versionadded:: 1.2
+
+        .. versionchanged:: 1.4
+           The default value changed from `False` to `"auto"` in version 1.4.
+
+    Returns
+    -------
+    X : ndarray of shape (n_samples, n_components)
+        Coordinates of the points in a ``n_components``-space.
+
+    stress : float
+        The final value of the stress (sum of squared distance of the
+        disparities and the distances for all constrained points).
+        If `normalized_stress=True` and `metric=False`, returns Stress-1.
+        A value of 0 indicates "perfect" fit, 0.025 excellent, 0.05 good,
+        0.1 fair, and 0.2 poor [1]_.
+
+    n_iter : int
+        The number of iterations corresponding to the best stress. Returned
+        only if ``return_n_iter`` is set to ``True``.
+
+    References
+    ----------
+    .. [1] "Nonmetric multidimensional scaling: a numerical method" Kruskal, J.
+           Psychometrika, 29 (1964)
+
+    .. [2] "Multidimensional scaling by optimizing goodness of fit to a nonmetric
+           hypothesis" Kruskal, J. Psychometrika, 29, (1964)
+
+    .. [3] "Modern Multidimensional Scaling - Theory and Applications" Borg, I.;
+           Groenen P. Springer Series in Statistics (1997)
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.manifold import smacof
+    >>> from sklearn.metrics import euclidean_distances
+    >>> X = np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0]])
+    >>> dissimilarities = euclidean_distances(X)
+    >>> mds_result, stress = smacof(dissimilarities, n_components=2, random_state=42)
+    >>> mds_result
+    array([[ 0.05..., -1.07...],
+           [ 1.74..., -0.75...],
+           [-1.79...,  1.83...]])
+    >>> stress
+    np.float64(0.0012...)
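The two stress flavours described above relate as follows. This helper is purely illustrative (`stress_values` is not part of the module), computing raw stress and Kruskal's Stress-1 from matching flattened distance and disparity arrays:

```python
import numpy as np


def stress_values(distances, disparities):
    # Raw stress: half the sum of squared residuals between the
    # embedding distances and the disparities.
    raw = ((distances - disparities) ** 2).sum() / 2
    # Stress-1: raw stress normalized by half the sum of squared
    # disparities; 0 means a perfect fit.
    stress_1 = np.sqrt(raw / ((disparities**2).sum() / 2))
    return raw, stress_1
```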
+ """ + + dissimilarities = check_array(dissimilarities) + random_state = check_random_state(random_state) + + if normalized_stress == "auto": + normalized_stress = not metric + + if normalized_stress and metric: + raise ValueError( + "Normalized stress is not supported for metric MDS. Either set" + " `normalized_stress=False` or use `metric=False`." + ) + if hasattr(init, "__array__"): + init = np.asarray(init).copy() + if not n_init == 1: + warnings.warn( + "Explicit initial positions passed: " + "performing only one init of the MDS instead of %d" % n_init + ) + n_init = 1 + + best_pos, best_stress = None, None + + if effective_n_jobs(n_jobs) == 1: + for it in range(n_init): + pos, stress, n_iter_ = _smacof_single( + dissimilarities, + metric=metric, + n_components=n_components, + init=init, + max_iter=max_iter, + verbose=verbose, + eps=eps, + random_state=random_state, + normalized_stress=normalized_stress, + ) + if best_stress is None or stress < best_stress: + best_stress = stress + best_pos = pos.copy() + best_iter = n_iter_ + else: + seeds = random_state.randint(np.iinfo(np.int32).max, size=n_init) + results = Parallel(n_jobs=n_jobs, verbose=max(verbose - 1, 0))( + delayed(_smacof_single)( + dissimilarities, + metric=metric, + n_components=n_components, + init=init, + max_iter=max_iter, + verbose=verbose, + eps=eps, + random_state=seed, + normalized_stress=normalized_stress, + ) + for seed in seeds + ) + positions, stress, n_iters = zip(*results) + best = np.argmin(stress) + best_stress = stress[best] + best_pos = positions[best] + best_iter = n_iters[best] + + if return_n_iter: + return best_pos, best_stress, best_iter + else: + return best_pos, best_stress + + +class MDS(BaseEstimator): + """Multidimensional scaling. + + Read more in the :ref:`User Guide `. + + Parameters + ---------- + n_components : int, default=2 + Number of dimensions in which to immerse the dissimilarities. 
+
+    metric : bool, default=True
+        If ``True``, perform metric MDS; otherwise, perform nonmetric MDS.
+        When ``False`` (i.e. non-metric MDS), dissimilarities with 0 are considered as
+        missing values.
+
+    n_init : int, default=4
+        Number of times the SMACOF algorithm will be run with different
+        initializations. The final results will be the best output of the runs,
+        determined by the run with the smallest final stress.
+
+    max_iter : int, default=300
+        Maximum number of iterations of the SMACOF algorithm for a single run.
+
+    verbose : int, default=0
+        Level of verbosity.
+
+    eps : float, default=1e-3
+        Relative tolerance with respect to stress at which to declare
+        convergence. The value of `eps` should be tuned separately depending
+        on whether or not `normalized_stress` is being used.
+
+    n_jobs : int, default=None
+        The number of jobs to use for the computation. If multiple
+        initializations are used (``n_init``), each run of the algorithm is
+        computed in parallel.
+
+        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
+        ``-1`` means using all processors. See :term:`Glossary `
+        for more details.
+
+    random_state : int, RandomState instance or None, default=None
+        Determines the random number generator used to initialize the centers.
+        Pass an int for reproducible results across multiple function calls.
+        See :term:`Glossary `.
+
+    dissimilarity : {'euclidean', 'precomputed'}, default='euclidean'
+        Dissimilarity measure to use:
+
+        - 'euclidean':
+            Pairwise Euclidean distances between points in the dataset.
+
+        - 'precomputed':
+            Pre-computed dissimilarities are passed directly to ``fit`` and
+            ``fit_transform``.
+
+    normalized_stress : bool or "auto", default="auto"
+        Whether to use and return normed stress value (Stress-1) instead of raw
+        stress calculated by default. Only supported in non-metric MDS.
+
+        .. versionadded:: 1.2
+
+        .. versionchanged:: 1.4
+           The default value changed from `False` to `"auto"` in version 1.4.
+ + Attributes + ---------- + embedding_ : ndarray of shape (n_samples, n_components) + Stores the position of the dataset in the embedding space. + + stress_ : float + The final value of the stress (sum of squared distance of the + disparities and the distances for all constrained points). + If `normalized_stress=True`, and `metric=False` returns Stress-1. + A value of 0 indicates "perfect" fit, 0.025 excellent, 0.05 good, + 0.1 fair, and 0.2 poor [1]_. + + dissimilarity_matrix_ : ndarray of shape (n_samples, n_samples) + Pairwise dissimilarities between the points. Symmetric matrix that: + + - either uses a custom dissimilarity matrix by setting `dissimilarity` + to 'precomputed'; + - or constructs a dissimilarity matrix from data using + Euclidean distances. + + n_features_in_ : int + Number of features seen during :term:`fit`. + + .. versionadded:: 0.24 + + feature_names_in_ : ndarray of shape (`n_features_in_`,) + Names of features seen during :term:`fit`. Defined only when `X` + has feature names that are all strings. + + .. versionadded:: 1.0 + + n_iter_ : int + The number of iterations corresponding to the best stress. + + See Also + -------- + sklearn.decomposition.PCA : Principal component analysis that is a linear + dimensionality reduction method. + sklearn.decomposition.KernelPCA : Non-linear dimensionality reduction using + kernels and PCA. + TSNE : T-distributed Stochastic Neighbor Embedding. + Isomap : Manifold learning based on Isometric Mapping. + LocallyLinearEmbedding : Manifold learning using Locally Linear Embedding. + SpectralEmbedding : Spectral embedding for non-linear dimensionality. + + References + ---------- + .. [1] "Nonmetric multidimensional scaling: a numerical method" Kruskal, J. + Psychometrika, 29 (1964) + + .. [2] "Multidimensional scaling by optimizing goodness of fit to a nonmetric + hypothesis" Kruskal, J. Psychometrika, 29, (1964) + + .. [3] "Modern Multidimensional Scaling - Theory and Applications" Borg, I.; + Groenen P. 
Springer Series in Statistics (1997) + + Examples + -------- + >>> from sklearn.datasets import load_digits + >>> from sklearn.manifold import MDS + >>> X, _ = load_digits(return_X_y=True) + >>> X.shape + (1797, 64) + >>> embedding = MDS(n_components=2, normalized_stress='auto') + >>> X_transformed = embedding.fit_transform(X[:100]) + >>> X_transformed.shape + (100, 2) + + For a more detailed example of usage, see + :ref:`sphx_glr_auto_examples_manifold_plot_mds.py`. + + For a comparison of manifold learning techniques, see + :ref:`sphx_glr_auto_examples_manifold_plot_compare_methods.py`. + """ + + _parameter_constraints: dict = { + "n_components": [Interval(Integral, 1, None, closed="left")], + "metric": ["boolean"], + "n_init": [Interval(Integral, 1, None, closed="left")], + "max_iter": [Interval(Integral, 1, None, closed="left")], + "verbose": ["verbose"], + "eps": [Interval(Real, 0.0, None, closed="left")], + "n_jobs": [None, Integral], + "random_state": ["random_state"], + "dissimilarity": [StrOptions({"euclidean", "precomputed"})], + "normalized_stress": ["boolean", StrOptions({"auto"})], + } + + def __init__( + self, + n_components=2, + *, + metric=True, + n_init=4, + max_iter=300, + verbose=0, + eps=1e-3, + n_jobs=None, + random_state=None, + dissimilarity="euclidean", + normalized_stress="auto", + ): + self.n_components = n_components + self.dissimilarity = dissimilarity + self.metric = metric + self.n_init = n_init + self.max_iter = max_iter + self.eps = eps + self.verbose = verbose + self.n_jobs = n_jobs + self.random_state = random_state + self.normalized_stress = normalized_stress + + def __sklearn_tags__(self): + tags = super().__sklearn_tags__() + tags.input_tags.pairwise = self.dissimilarity == "precomputed" + return tags + + def fit(self, X, y=None, init=None): + """ + Compute the position of the points in the embedding space. + + Parameters + ---------- + X : array-like of shape (n_samples, n_features) or \ + (n_samples, n_samples) + Input data. 
If ``dissimilarity=='precomputed'``, the input should
+            be the dissimilarity matrix.
+
+        y : Ignored
+            Not used, present for API consistency by convention.
+
+        init : ndarray of shape (n_samples, n_components), default=None
+            Starting configuration of the embedding to initialize the SMACOF
+            algorithm. By default, the algorithm is initialized with a randomly
+            chosen array.
+
+        Returns
+        -------
+        self : object
+            Fitted estimator.
+        """
+        self.fit_transform(X, init=init)
+        return self
+
+    @_fit_context(prefer_skip_nested_validation=True)
+    def fit_transform(self, X, y=None, init=None):
+        """
+        Fit the data from `X` and return the embedded coordinates.
+
+        Parameters
+        ----------
+        X : array-like of shape (n_samples, n_features) or \
+                (n_samples, n_samples)
+            Input data. If ``dissimilarity=='precomputed'``, the input should
+            be the dissimilarity matrix.
+
+        y : Ignored
+            Not used, present for API consistency by convention.
+
+        init : ndarray of shape (n_samples, n_components), default=None
+            Starting configuration of the embedding to initialize the SMACOF
+            algorithm. By default, the algorithm is initialized with a randomly
+            chosen array.
+
+        Returns
+        -------
+        X_new : ndarray of shape (n_samples, n_components)
+            X transformed in the new space.
+        """
+        X = validate_data(self, X)
+        if X.shape[0] == X.shape[1] and self.dissimilarity != "precomputed":
+            warnings.warn(
+                "The MDS API has changed. ``fit`` now constructs a"
+                " dissimilarity matrix from data. To use a custom "
+                "dissimilarity matrix, set "
+                "``dissimilarity='precomputed'``."
+ ) + + if self.dissimilarity == "precomputed": + self.dissimilarity_matrix_ = X + elif self.dissimilarity == "euclidean": + self.dissimilarity_matrix_ = euclidean_distances(X) + + self.embedding_, self.stress_, self.n_iter_ = smacof( + self.dissimilarity_matrix_, + metric=self.metric, + n_components=self.n_components, + init=init, + n_init=self.n_init, + n_jobs=self.n_jobs, + max_iter=self.max_iter, + verbose=self.verbose, + eps=self.eps, + random_state=self.random_state, + return_n_iter=True, + normalized_stress=self.normalized_stress, + ) + + return self.embedding_ diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_spectral_embedding.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_spectral_embedding.py new file mode 100644 index 0000000000000000000000000000000000000000..d3d45ec0773c3931bbcfd3b742186f8fc8888988 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_spectral_embedding.py @@ -0,0 +1,778 @@ +"""Spectral Embedding.""" + +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + + +import warnings +from numbers import Integral, Real + +import numpy as np +from scipy import sparse +from scipy.linalg import eigh +from scipy.sparse.csgraph import connected_components +from scipy.sparse.linalg import eigsh, lobpcg + +from ..base import BaseEstimator, _fit_context +from ..metrics.pairwise import rbf_kernel +from ..neighbors import NearestNeighbors, kneighbors_graph +from ..utils import ( + check_array, + check_random_state, + check_symmetric, +) +from ..utils._arpack import _init_arpack_v0 +from ..utils._param_validation import Interval, StrOptions, validate_params +from ..utils.extmath import _deterministic_vector_sign_flip +from ..utils.fixes import laplacian as csgraph_laplacian +from ..utils.fixes import parse_version, sp_version +from ..utils.validation import validate_data + + +def _graph_connected_component(graph, node_id): + """Find the largest graph 
connected component that contains the
+    given node.
+
+    Parameters
+    ----------
+    graph : array-like of shape (n_samples, n_samples)
+        Adjacency matrix of the graph, non-zero weight means an edge
+        between the nodes.
+
+    node_id : int
+        The index of the query node of the graph.
+
+    Returns
+    -------
+    connected_components_matrix : array-like of shape (n_samples,)
+        An array of bool values indicating the indices of the nodes
+        belonging to the largest connected component of the given query
+        node.
+    """
+    n_node = graph.shape[0]
+    if sparse.issparse(graph):
+        # speed up row-wise access to boolean connection mask
+        graph = graph.tocsr()
+    connected_nodes = np.zeros(n_node, dtype=bool)
+    nodes_to_explore = np.zeros(n_node, dtype=bool)
+    nodes_to_explore[node_id] = True
+    for _ in range(n_node):
+        last_num_component = connected_nodes.sum()
+        np.logical_or(connected_nodes, nodes_to_explore, out=connected_nodes)
+        if last_num_component >= connected_nodes.sum():
+            break
+        indices = np.where(nodes_to_explore)[0]
+        nodes_to_explore.fill(False)
+        for i in indices:
+            if sparse.issparse(graph):
+                # scipy has not yet implemented 1D sparse slices; can be changed
+                # back to `neighbors = graph[i].toarray().ravel()` once implemented
+                neighbors = graph[[i], :].toarray().ravel()
+            else:
+                neighbors = graph[i]
+            np.logical_or(nodes_to_explore, neighbors, out=nodes_to_explore)
+    return connected_nodes
+
+
+def _graph_is_connected(graph):
+    """Return whether the graph is connected (True) or not (False).
+
+    Parameters
+    ----------
+    graph : {array-like, sparse matrix} of shape (n_samples, n_samples)
+        Adjacency matrix of the graph, non-zero weight means an edge
+        between the nodes.
+
+    Returns
+    -------
+    is_connected : bool
+        True means the graph is fully connected and False means not.
+    """
+    if sparse.issparse(graph):
+        # Before Scipy 1.11.3, `connected_components` only supports 32-bit indices.
+        # PR: https://github.com/scipy/scipy/pull/18913
+        # First integration in 1.11.3: https://github.com/scipy/scipy/pull/19279
+        # TODO(jjerphan): Once SciPy 1.11.3 is the minimum supported version, use
+        # `accept_large_sparse=True`.
+        accept_large_sparse = sp_version >= parse_version("1.11.3")
+        graph = check_array(
+            graph, accept_sparse=True, accept_large_sparse=accept_large_sparse
+        )
+        # sparse graph, find all the connected components
+        n_connected_components, _ = connected_components(graph)
+        return n_connected_components == 1
+    else:
+        # dense graph, find all connected components starting from node 0
+        return _graph_connected_component(graph, 0).sum() == graph.shape[0]
+
+
+def _set_diag(laplacian, value, norm_laplacian):
+    """Set the diagonal of the laplacian matrix and convert it to a
+    sparse format well suited for eigenvalue decomposition.
+
+    Parameters
+    ----------
+    laplacian : {ndarray, sparse matrix}
+        The graph laplacian.
+
+    value : float
+        The value of the diagonal.
+
+    norm_laplacian : bool
+        Whether the value of the diagonal should be changed or not.
+
+    Returns
+    -------
+    laplacian : {array, sparse matrix}
+        An array or matrix in a form that is well suited to fast
+        eigenvalue decomposition, depending on the band width of the
+        matrix.
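The format choice this docstring alludes to can be sketched on its own. This is an illustrative helper (`pick_sparse_format` is not part of the module), assuming a COO input; the threshold of 7 distinct diagonals mirrors the heuristic in the function body:

```python
import numpy as np
from scipy import sparse


def pick_sparse_format(laplacian_coo):
    # Banded matrices (e.g. Laplacians of image grids) have few
    # distinct diagonals and do matvec products fastest in DIA
    # format; everything else goes to CSR, which suits ARPACK.
    n_diags = np.unique(laplacian_coo.row - laplacian_coo.col).size
    return laplacian_coo.todia() if n_diags <= 7 else laplacian_coo.tocsr()
```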
+    """
+    n_nodes = laplacian.shape[0]
+    # We need to set all diagonal entries to `value`
+    if not sparse.issparse(laplacian):
+        if norm_laplacian:
+            laplacian.flat[:: n_nodes + 1] = value
+    else:
+        laplacian = laplacian.tocoo()
+        if norm_laplacian:
+            diag_idx = laplacian.row == laplacian.col
+            laplacian.data[diag_idx] = value
+        # If the matrix has a small number of diagonals (as in the
+        # case of structured matrices coming from images), the
+        # dia format might be best suited for matvec products:
+        n_diags = np.unique(laplacian.row - laplacian.col).size
+        if n_diags <= 7:
+            # 3 or fewer outer diagonals on each side
+            laplacian = laplacian.todia()
+        else:
+            # csr has the fastest matvec and is thus best suited to
+            # arpack
+            laplacian = laplacian.tocsr()
+    return laplacian
+
+
+@validate_params(
+    {
+        "adjacency": ["array-like", "sparse matrix"],
+        "n_components": [Interval(Integral, 1, None, closed="left")],
+        "eigen_solver": [StrOptions({"arpack", "lobpcg", "amg"}), None],
+        "random_state": ["random_state"],
+        "eigen_tol": [Interval(Real, 0, None, closed="left"), StrOptions({"auto"})],
+        "norm_laplacian": ["boolean"],
+        "drop_first": ["boolean"],
+    },
+    prefer_skip_nested_validation=True,
+)
+def spectral_embedding(
+    adjacency,
+    *,
+    n_components=8,
+    eigen_solver=None,
+    random_state=None,
+    eigen_tol="auto",
+    norm_laplacian=True,
+    drop_first=True,
+):
+    """Project the sample on the first eigenvectors of the graph Laplacian.
+
+    The adjacency matrix is used to compute a normalized graph Laplacian
+    whose spectrum (especially the eigenvectors associated to the
+    smallest eigenvalues) has an interpretation in terms of minimal
+    number of cuts necessary to split the graph into comparably sized
+    components.
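The idea just described can be sketched densely. This is an illustrative reading (`spectral_embedding_sketch` is not this module's implementation, which also handles sparse input, the ARPACK/LOBPCG/AMG solvers, and disconnected graphs):

```python
import numpy as np
from scipy.linalg import eigh


def spectral_embedding_sketch(adjacency, n_components=2):
    # Symmetric normalized Laplacian: L = I - D^-1/2 A D^-1/2.
    d = adjacency.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    lap = np.eye(len(d)) - d_inv_sqrt[:, None] * adjacency * d_inv_sqrt[None, :]
    # Eigenvectors of the smallest eigenvalues carry the embedding; the
    # very first is constant for a connected graph and is dropped.
    _, vecs = eigh(lap)
    embedding = vecs[:, 1 : n_components + 1]
    # Recover u = D^-1/2 x from the eigenvector output x.
    return embedding * d_inv_sqrt[:, None]
```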
+
+    This embedding can also 'work' even if the ``adjacency`` variable is
+    not strictly the adjacency matrix of a graph but more generally
+    an affinity or similarity matrix between samples (for instance the
+    heat kernel of a euclidean distance matrix or a k-NN matrix).
+
+    However, care must be taken to always make the affinity matrix symmetric
+    so that the eigenvector decomposition works as expected.
+
+    Note : Laplacian Eigenmaps is the actual algorithm implemented here.
+
+    Read more in the :ref:`User Guide `.
+
+    Parameters
+    ----------
+    adjacency : {array-like, sparse graph} of shape (n_samples, n_samples)
+        The adjacency matrix of the graph to embed.
+
+    n_components : int, default=8
+        The dimension of the projection subspace.
+
+    eigen_solver : {'arpack', 'lobpcg', 'amg'}, default=None
+        The eigenvalue decomposition strategy to use. AMG requires pyamg
+        to be installed. It can be faster on very large, sparse problems,
+        but may also lead to instabilities. If None, then ``'arpack'`` is
+        used.
+
+    random_state : int, RandomState instance or None, default=None
+        A pseudo random number generator used for the initialization
+        of the lobpcg eigen vectors decomposition when `eigen_solver ==
+        'amg'`, and for the K-Means initialization. Use an int to make
+        the results deterministic across calls (See
+        :term:`Glossary `).
+
+        .. note::
+            When using `eigen_solver == 'amg'`,
+            it is necessary to also fix the global numpy seed with
+            `np.random.seed(int)` to get deterministic results. See
+            https://github.com/pyamg/pyamg/issues/139 for further
+            information.
+
+    eigen_tol : float, default="auto"
+        Stopping criterion for eigendecomposition of the Laplacian matrix.
+        If `eigen_tol="auto"` then the passed tolerance will depend on the
+        `eigen_solver`:
+
+        - If `eigen_solver="arpack"`, then `eigen_tol=0.0`;
+        - If `eigen_solver="lobpcg"` or `eigen_solver="amg"`, then
+          `eigen_tol=None` which configures the underlying `lobpcg` solver to
+          automatically resolve the value according to their heuristics. See
+          :func:`scipy.sparse.linalg.lobpcg` for details.
+
+        Note that when using `eigen_solver="amg"` values of `tol<1e-5` may lead
+        to convergence issues and should be avoided.
+
+        .. versionadded:: 1.2
+           Added 'auto' option.
+
+    norm_laplacian : bool, default=True
+        If True, then compute symmetric normalized Laplacian.
+
+    drop_first : bool, default=True
+        Whether to drop the first eigenvector. For spectral embedding, this
+        should be True as the first eigenvector should be a constant vector
+        for a connected graph, but for spectral clustering, this should be
+        kept as False to retain the first eigenvector.
+
+    Returns
+    -------
+    embedding : ndarray of shape (n_samples, n_components)
+        The reduced samples.
+
+    Notes
+    -----
+    Spectral Embedding (Laplacian Eigenmaps) is most useful when the graph
+    has one connected component. If the graph has many components, the first
+    few eigenvectors will simply uncover the connected components of the graph.
+
+    References
+    ----------
+    * https://en.wikipedia.org/wiki/LOBPCG
+
+    * :doi:`"Toward the Optimal Preconditioned Eigensolver: Locally Optimal
+      Block Preconditioned Conjugate Gradient Method",
+      Andrew V. Knyazev
+      <10.1137/S1064827500366124>`
+
+    Examples
+    --------
+    >>> from sklearn.datasets import load_digits
+    >>> from sklearn.neighbors import kneighbors_graph
+    >>> from sklearn.manifold import spectral_embedding
+    >>> X, _ = load_digits(return_X_y=True)
+    >>> X = X[:100]
+    >>> affinity_matrix = kneighbors_graph(
+    ...     X, n_neighbors=int(X.shape[0] / 10), include_self=True
+    ...
) + >>> # make the matrix symmetric + >>> affinity_matrix = 0.5 * (affinity_matrix + affinity_matrix.T) + >>> embedding = spectral_embedding(affinity_matrix, n_components=2, random_state=42) + >>> embedding.shape + (100, 2) + """ + random_state = check_random_state(random_state) + + return _spectral_embedding( + adjacency, + n_components=n_components, + eigen_solver=eigen_solver, + random_state=random_state, + eigen_tol=eigen_tol, + norm_laplacian=norm_laplacian, + drop_first=drop_first, + ) + + +def _spectral_embedding( + adjacency, + *, + n_components=8, + eigen_solver=None, + random_state=None, + eigen_tol="auto", + norm_laplacian=True, + drop_first=True, +): + adjacency = check_symmetric(adjacency) + + if eigen_solver == "amg": + try: + from pyamg import smoothed_aggregation_solver + except ImportError as e: + raise ValueError( + "The eigen_solver was set to 'amg', but pyamg is not available." + ) from e + + if eigen_solver is None: + eigen_solver = "arpack" + + n_nodes = adjacency.shape[0] + # Whether to drop the first eigenvector + if drop_first: + n_components = n_components + 1 + + if not _graph_is_connected(adjacency): + warnings.warn( + "Graph is not fully connected, spectral embedding may not work as expected." 
+ ) + + laplacian, dd = csgraph_laplacian( + adjacency, normed=norm_laplacian, return_diag=True + ) + if ( + eigen_solver == "arpack" + or eigen_solver != "lobpcg" + and (not sparse.issparse(laplacian) or n_nodes < 5 * n_components) + ): + # lobpcg used with eigen_solver='amg' has bugs for low number of nodes + # for details see the source code in scipy: + # https://github.com/scipy/scipy/blob/v0.11.0/scipy/sparse/linalg/eigen + # /lobpcg/lobpcg.py#L237 + # or matlab: + # https://www.mathworks.com/matlabcentral/fileexchange/48-lobpcg-m + laplacian = _set_diag(laplacian, 1, norm_laplacian) + + # Here we'll use shift-invert mode for fast eigenvalues + # (see https://docs.scipy.org/doc/scipy/reference/tutorial/arpack.html + # for a short explanation of what this means) + # Because the normalized Laplacian has eigenvalues between 0 and 2, + # I - L has eigenvalues between -1 and 1. ARPACK is most efficient + # when finding eigenvalues of largest magnitude (keyword which='LM') + # and when these eigenvalues are very large compared to the rest. + # For very large, very sparse graphs, I - L can have many, many + # eigenvalues very near 1.0. This leads to slow convergence. So + # instead, we'll use ARPACK's shift-invert mode, asking for the + # eigenvalues near 1.0. This effectively spreads-out the spectrum + # near 1.0 and leads to much faster convergence: potentially an + # orders-of-magnitude speedup over simply using keyword which='LA' + # in standard mode. 
+        try:
+            # We are computing the opposite of the laplacian in place so as
+            # to spare a memory allocation of a possibly very large array
+            tol = 0 if eigen_tol == "auto" else eigen_tol
+            laplacian *= -1
+            v0 = _init_arpack_v0(laplacian.shape[0], random_state)
+            laplacian = check_array(
+                laplacian, accept_sparse="csr", accept_large_sparse=False
+            )
+            _, diffusion_map = eigsh(
+                laplacian, k=n_components, sigma=1.0, which="LM", tol=tol, v0=v0
+            )
+            embedding = diffusion_map.T[n_components::-1]
+            if norm_laplacian:
+                # recover u = D^-1/2 x from the eigenvector output x
+                embedding = embedding / dd
+        except RuntimeError:
+            # When submatrices are exactly singular, an LU decomposition
+            # in arpack fails. We fallback to lobpcg
+            eigen_solver = "lobpcg"
+            # Revert the laplacian to its opposite to have lobpcg work
+            laplacian *= -1
+
+    elif eigen_solver == "amg":
+        # Use AMG to get a preconditioner and speed up the eigenvalue
+        # problem.
+        if not sparse.issparse(laplacian):
+            warnings.warn("AMG works better for sparse matrices")
+        laplacian = check_array(
+            laplacian, dtype=[np.float64, np.float32], accept_sparse=True
+        )
+        laplacian = _set_diag(laplacian, 1, norm_laplacian)
+
+        # The Laplacian matrix is always singular, having at least one zero
+        # eigenvalue, corresponding to the trivial eigenvector, which is a
+        # constant. Using a singular matrix for preconditioning may result in
+        # random failures in LOBPCG and is not supported by the existing
+        # theory:
+        #     see https://doi.org/10.1007/s10208-015-9297-1
+        # Shift the Laplacian so its diagonal is not all ones. The shift
+        # does change the eigenpairs however, so we'll feed the shifted
+        # matrix to the solver and afterward set it back to the original.
+        diag_shift = 1e-5 * sparse.eye(laplacian.shape[0])
+        laplacian += diag_shift
+        if hasattr(sparse, "csr_array") and isinstance(laplacian, sparse.csr_array):
+            # `pyamg` does not work with `csr_array` and we need to convert it to a
+            # `csr_matrix` object.
+ laplacian = sparse.csr_matrix(laplacian) + ml = smoothed_aggregation_solver(check_array(laplacian, accept_sparse="csr")) + laplacian -= diag_shift + + M = ml.aspreconditioner() + # Create initial approximation X to eigenvectors + X = random_state.standard_normal(size=(laplacian.shape[0], n_components + 1)) + X[:, 0] = dd.ravel() + X = X.astype(laplacian.dtype) + + tol = None if eigen_tol == "auto" else eigen_tol + _, diffusion_map = lobpcg(laplacian, X, M=M, tol=tol, largest=False) + embedding = diffusion_map.T + if norm_laplacian: + # recover u = D^-1/2 x from the eigenvector output x + embedding = embedding / dd + if embedding.shape[0] == 1: + raise ValueError + + if eigen_solver == "lobpcg": + laplacian = check_array( + laplacian, dtype=[np.float64, np.float32], accept_sparse=True + ) + if n_nodes < 5 * n_components + 1: + # see note above under arpack why lobpcg has problems with small + # number of nodes + # lobpcg will fallback to eigh, so we short circuit it + if sparse.issparse(laplacian): + laplacian = laplacian.toarray() + _, diffusion_map = eigh(laplacian, check_finite=False) + embedding = diffusion_map.T[:n_components] + if norm_laplacian: + # recover u = D^-1/2 x from the eigenvector output x + embedding = embedding / dd + else: + laplacian = _set_diag(laplacian, 1, norm_laplacian) + # We increase the number of eigenvectors requested, as lobpcg + # doesn't behave well in low dimension and create initial + # approximation X to eigenvectors + X = random_state.standard_normal( + size=(laplacian.shape[0], n_components + 1) + ) + X[:, 0] = dd.ravel() + X = X.astype(laplacian.dtype) + tol = None if eigen_tol == "auto" else eigen_tol + _, diffusion_map = lobpcg( + laplacian, X, tol=tol, largest=False, maxiter=2000 + ) + embedding = diffusion_map.T[:n_components] + if norm_laplacian: + # recover u = D^-1/2 x from the eigenvector output x + embedding = embedding / dd + if embedding.shape[0] == 1: + raise ValueError + + embedding = 
_deterministic_vector_sign_flip(embedding) + if drop_first: + return embedding[1:n_components].T + else: + return embedding[:n_components].T + + +class SpectralEmbedding(BaseEstimator): + """Spectral embedding for non-linear dimensionality reduction. + + Forms an affinity matrix given by the specified function and + applies spectral decomposition to the corresponding graph laplacian. + The resulting transformation is given by the value of the + eigenvectors for each data point. + + Note : Laplacian Eigenmaps is the actual algorithm implemented here. + + Read more in the :ref:`User Guide `. + + Parameters + ---------- + n_components : int, default=2 + The dimension of the projected subspace. + + affinity : {'nearest_neighbors', 'rbf', 'precomputed', \ + 'precomputed_nearest_neighbors'} or callable, \ + default='nearest_neighbors' + How to construct the affinity matrix. + - 'nearest_neighbors' : construct the affinity matrix by computing a + graph of nearest neighbors. + - 'rbf' : construct the affinity matrix by computing a radial basis + function (RBF) kernel. + - 'precomputed' : interpret ``X`` as a precomputed affinity matrix. + - 'precomputed_nearest_neighbors' : interpret ``X`` as a sparse graph + of precomputed nearest neighbors, and constructs the affinity matrix + by selecting the ``n_neighbors`` nearest neighbors. + - callable : use passed in function as affinity + the function takes in data matrix (n_samples, n_features) + and return affinity matrix (n_samples, n_samples). + + gamma : float, default=None + Kernel coefficient for rbf kernel. If None, gamma will be set to + 1/n_features. + + random_state : int, RandomState instance or None, default=None + A pseudo random number generator used for the initialization + of the lobpcg eigen vectors decomposition when `eigen_solver == + 'amg'`, and for the K-Means initialization. Use an int to make + the results deterministic across calls (See + :term:`Glossary `). + + .. 
note:: + When using `eigen_solver == 'amg'`, + it is necessary to also fix the global numpy seed with + `np.random.seed(int)` to get deterministic results. See + https://github.com/pyamg/pyamg/issues/139 for further + information. + + eigen_solver : {'arpack', 'lobpcg', 'amg'}, default=None + The eigenvalue decomposition strategy to use. AMG requires pyamg + to be installed. It can be faster on very large, sparse problems. + If None, then ``'arpack'`` is used. + + eigen_tol : float, default="auto" + Stopping criterion for eigendecomposition of the Laplacian matrix. + If `eigen_tol="auto"` then the passed tolerance will depend on the + `eigen_solver`: + + - If `eigen_solver="arpack"`, then `eigen_tol=0.0`; + - If `eigen_solver="lobpcg"` or `eigen_solver="amg"`, then + `eigen_tol=None` which configures the underlying `lobpcg` solver to + automatically resolve the value according to their heuristics. See, + :func:`scipy.sparse.linalg.lobpcg` for details. + + Note that when using `eigen_solver="lobpcg"` or `eigen_solver="amg"` + values of `tol<1e-5` may lead to convergence issues and should be + avoided. + + .. versionadded:: 1.2 + + n_neighbors : int, default=None + Number of nearest neighbors for nearest_neighbors graph building. + If None, n_neighbors will be set to max(n_samples/10, 1). + + n_jobs : int, default=None + The number of parallel jobs to run. + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + Attributes + ---------- + embedding_ : ndarray of shape (n_samples, n_components) + Spectral embedding of the training matrix. + + affinity_matrix_ : ndarray of shape (n_samples, n_samples) + Affinity_matrix constructed from samples or precomputed. + + n_features_in_ : int + Number of features seen during :term:`fit`. + + .. versionadded:: 0.24 + + feature_names_in_ : ndarray of shape (`n_features_in_`,) + Names of features seen during :term:`fit`. 
Defined only when `X` + has feature names that are all strings. + + .. versionadded:: 1.0 + + n_neighbors_ : int + Number of nearest neighbors effectively used. + + See Also + -------- + Isomap : Non-linear dimensionality reduction through Isometric Mapping. + + References + ---------- + + - :doi:`A Tutorial on Spectral Clustering, 2007 + Ulrike von Luxburg + <10.1007/s11222-007-9033-z>` + + - `On Spectral Clustering: Analysis and an algorithm, 2001 + Andrew Y. Ng, Michael I. Jordan, Yair Weiss + `_ + + - :doi:`Normalized cuts and image segmentation, 2000 + Jianbo Shi, Jitendra Malik + <10.1109/34.868688>` + + Examples + -------- + >>> from sklearn.datasets import load_digits + >>> from sklearn.manifold import SpectralEmbedding + >>> X, _ = load_digits(return_X_y=True) + >>> X.shape + (1797, 64) + >>> embedding = SpectralEmbedding(n_components=2) + >>> X_transformed = embedding.fit_transform(X[:100]) + >>> X_transformed.shape + (100, 2) + """ + + _parameter_constraints: dict = { + "n_components": [Interval(Integral, 1, None, closed="left")], + "affinity": [ + StrOptions( + { + "nearest_neighbors", + "rbf", + "precomputed", + "precomputed_nearest_neighbors", + }, + ), + callable, + ], + "gamma": [Interval(Real, 0, None, closed="left"), None], + "random_state": ["random_state"], + "eigen_solver": [StrOptions({"arpack", "lobpcg", "amg"}), None], + "eigen_tol": [Interval(Real, 0, None, closed="left"), StrOptions({"auto"})], + "n_neighbors": [Interval(Integral, 1, None, closed="left"), None], + "n_jobs": [None, Integral], + } + + def __init__( + self, + n_components=2, + *, + affinity="nearest_neighbors", + gamma=None, + random_state=None, + eigen_solver=None, + eigen_tol="auto", + n_neighbors=None, + n_jobs=None, + ): + self.n_components = n_components + self.affinity = affinity + self.gamma = gamma + self.random_state = random_state + self.eigen_solver = eigen_solver + self.eigen_tol = eigen_tol + self.n_neighbors = n_neighbors + self.n_jobs = n_jobs + + def 
__sklearn_tags__(self): + tags = super().__sklearn_tags__() + tags.input_tags.sparse = True + tags.input_tags.pairwise = self.affinity in [ + "precomputed", + "precomputed_nearest_neighbors", + ] + return tags + + def _get_affinity_matrix(self, X, Y=None): + """Calculate the affinity matrix from data + Parameters + ---------- + X : array-like of shape (n_samples, n_features) + Training vector, where `n_samples` is the number of samples + and `n_features` is the number of features. + + If affinity is "precomputed" + X : array-like of shape (n_samples, n_samples), + Interpret X as precomputed adjacency graph computed from + samples. + + Y: Ignored + + Returns + ------- + affinity_matrix of shape (n_samples, n_samples) + """ + if self.affinity == "precomputed": + self.affinity_matrix_ = X + return self.affinity_matrix_ + if self.affinity == "precomputed_nearest_neighbors": + estimator = NearestNeighbors( + n_neighbors=self.n_neighbors, n_jobs=self.n_jobs, metric="precomputed" + ).fit(X) + connectivity = estimator.kneighbors_graph(X=X, mode="connectivity") + self.affinity_matrix_ = 0.5 * (connectivity + connectivity.T) + return self.affinity_matrix_ + if self.affinity == "nearest_neighbors": + if sparse.issparse(X): + warnings.warn( + "Nearest neighbors affinity currently does " + "not support sparse input, falling back to " + "rbf affinity" + ) + self.affinity = "rbf" + else: + self.n_neighbors_ = ( + self.n_neighbors + if self.n_neighbors is not None + else max(int(X.shape[0] / 10), 1) + ) + self.affinity_matrix_ = kneighbors_graph( + X, self.n_neighbors_, include_self=True, n_jobs=self.n_jobs + ) + # currently only symmetric affinity_matrix supported + self.affinity_matrix_ = 0.5 * ( + self.affinity_matrix_ + self.affinity_matrix_.T + ) + return self.affinity_matrix_ + if self.affinity == "rbf": + self.gamma_ = self.gamma if self.gamma is not None else 1.0 / X.shape[1] + self.affinity_matrix_ = rbf_kernel(X, gamma=self.gamma_) + return self.affinity_matrix_ + 
self.affinity_matrix_ = self.affinity(X) + return self.affinity_matrix_ + + @_fit_context(prefer_skip_nested_validation=True) + def fit(self, X, y=None): + """Fit the model from data in X. + + Parameters + ---------- + X : {array-like, sparse matrix} of shape (n_samples, n_features) + Training vector, where `n_samples` is the number of samples + and `n_features` is the number of features. + + If affinity is "precomputed" + X : {array-like, sparse matrix}, shape (n_samples, n_samples), + Interpret X as precomputed adjacency graph computed from + samples. + + y : Ignored + Not used, present for API consistency by convention. + + Returns + ------- + self : object + Returns the instance itself. + """ + X = validate_data(self, X, accept_sparse="csr", ensure_min_samples=2) + + random_state = check_random_state(self.random_state) + + affinity_matrix = self._get_affinity_matrix(X) + self.embedding_ = _spectral_embedding( + affinity_matrix, + n_components=self.n_components, + eigen_solver=self.eigen_solver, + eigen_tol=self.eigen_tol, + random_state=random_state, + ) + return self + + def fit_transform(self, X, y=None): + """Fit the model from data in X and transform X. + + Parameters + ---------- + X : {array-like, sparse matrix} of shape (n_samples, n_features) + Training vector, where `n_samples` is the number of samples + and `n_features` is the number of features. + + If affinity is "precomputed" + X : {array-like, sparse matrix} of shape (n_samples, n_samples), + Interpret X as precomputed adjacency graph computed from + samples. + + y : Ignored + Not used, present for API consistency by convention. + + Returns + ------- + X_new : array-like of shape (n_samples, n_components) + Spectral embedding of the training matrix. 
+ """ + self.fit(X) + return self.embedding_ diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_t_sne.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_t_sne.py new file mode 100644 index 0000000000000000000000000000000000000000..71125d8b9f1d55bdbbe6abebbdcd9da4fdd6260a --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_t_sne.py @@ -0,0 +1,1218 @@ +# Authors: The scikit-learn developers +# SPDX-License-Identifier: BSD-3-Clause + +# This is the exact and Barnes-Hut t-SNE implementation. There are other +# modifications of the algorithm: +# * Fast Optimization for t-SNE: +# https://cseweb.ucsd.edu/~lvdmaaten/workshops/nips2010/papers/vandermaaten.pdf + +import warnings +from numbers import Integral, Real +from time import time + +import numpy as np +from scipy import linalg +from scipy.sparse import csr_matrix, issparse +from scipy.spatial.distance import pdist, squareform + +from ..base import ( + BaseEstimator, + ClassNamePrefixFeaturesOutMixin, + TransformerMixin, + _fit_context, +) +from ..decomposition import PCA +from ..metrics.pairwise import _VALID_METRICS, pairwise_distances +from ..neighbors import NearestNeighbors +from ..utils import check_random_state +from ..utils._openmp_helpers import _openmp_effective_n_threads +from ..utils._param_validation import Hidden, Interval, StrOptions, validate_params +from ..utils.validation import _num_samples, check_non_negative, validate_data + +# mypy error: Module 'sklearn.manifold' has no attribute '_utils' +# mypy error: Module 'sklearn.manifold' has no attribute '_barnes_hut_tsne' +from . import _barnes_hut_tsne, _utils # type: ignore + +MACHINE_EPSILON = np.finfo(np.double).eps + + +def _joint_probabilities(distances, desired_perplexity, verbose): + """Compute joint probabilities p_ij from distances. 
+
+    Parameters
+    ----------
+    distances : ndarray of shape (n_samples * (n_samples-1) / 2,)
+        Distances of samples are stored as condensed matrices, i.e.
+        we omit the diagonal and duplicate entries and store everything
+        in a one-dimensional array.
+
+    desired_perplexity : float
+        Desired perplexity of the joint probability distributions.
+
+    verbose : int
+        Verbosity level.
+
+    Returns
+    -------
+    P : ndarray of shape (n_samples * (n_samples-1) / 2,)
+        Condensed joint probability matrix.
+    """
+    # Compute conditional probabilities such that they approximately match
+    # the desired perplexity
+    distances = distances.astype(np.float32, copy=False)
+    conditional_P = _utils._binary_search_perplexity(
+        distances, desired_perplexity, verbose
+    )
+    P = conditional_P + conditional_P.T
+    sum_P = np.maximum(np.sum(P), MACHINE_EPSILON)
+    P = np.maximum(squareform(P) / sum_P, MACHINE_EPSILON)
+    return P
+
+
+def _joint_probabilities_nn(distances, desired_perplexity, verbose):
+    """Compute joint probabilities p_ij from distances using just nearest
+    neighbors.
+
+    This method is approximately equal to _joint_probabilities. The latter
+    is O(N^2), but limiting the joint probability to nearest neighbors improves
+    this substantially to O(uN).
+
+    Parameters
+    ----------
+    distances : sparse matrix of shape (n_samples, n_samples)
+        Distances of samples to its n_neighbors nearest neighbors. All other
+        distances are left to zero (and are not materialized in memory).
+        Matrix should be of CSR format.
+
+    desired_perplexity : float
+        Desired perplexity of the joint probability distributions.
+
+    verbose : int
+        Verbosity level.
+
+    Returns
+    -------
+    P : sparse matrix of shape (n_samples, n_samples)
+        Condensed joint probability matrix with only nearest neighbors. Matrix
+        will be of CSR format.
+ """ + t0 = time() + # Compute conditional probabilities such that they approximately match + # the desired perplexity + distances.sort_indices() + n_samples = distances.shape[0] + distances_data = distances.data.reshape(n_samples, -1) + distances_data = distances_data.astype(np.float32, copy=False) + conditional_P = _utils._binary_search_perplexity( + distances_data, desired_perplexity, verbose + ) + assert np.all(np.isfinite(conditional_P)), "All probabilities should be finite" + + # Symmetrize the joint probability distribution using sparse operations + P = csr_matrix( + (conditional_P.ravel(), distances.indices, distances.indptr), + shape=(n_samples, n_samples), + ) + P = P + P.T + + # Normalize the joint probability distribution + sum_P = np.maximum(P.sum(), MACHINE_EPSILON) + P /= sum_P + + assert np.all(np.abs(P.data) <= 1.0) + if verbose >= 2: + duration = time() - t0 + print("[t-SNE] Computed conditional probabilities in {:.3f}s".format(duration)) + return P + + +def _kl_divergence( + params, + P, + degrees_of_freedom, + n_samples, + n_components, + skip_num_points=0, + compute_error=True, +): + """t-SNE objective function: gradient of the KL divergence + of p_ijs and q_ijs and the absolute error. + + Parameters + ---------- + params : ndarray of shape (n_params,) + Unraveled embedding. + + P : ndarray of shape (n_samples * (n_samples-1) / 2,) + Condensed joint probability matrix. + + degrees_of_freedom : int + Degrees of freedom of the Student's-t distribution. + + n_samples : int + Number of samples. + + n_components : int + Dimension of the embedded space. + + skip_num_points : int, default=0 + This does not compute the gradient for points with indices below + `skip_num_points`. This is useful when computing transforms of new + data where you'd like to keep the old data fixed. + + compute_error: bool, default=True + If False, the kl_divergence is not computed and returns NaN. 
+
+    Returns
+    -------
+    kl_divergence : float
+        Kullback-Leibler divergence of p_ij and q_ij.
+
+    grad : ndarray of shape (n_params,)
+        Unraveled gradient of the Kullback-Leibler divergence with respect to
+        the embedding.
+    """
+    X_embedded = params.reshape(n_samples, n_components)
+
+    # Q is a heavy-tailed distribution: Student's t-distribution
+    dist = pdist(X_embedded, "sqeuclidean")
+    dist /= degrees_of_freedom
+    dist += 1.0
+    dist **= (degrees_of_freedom + 1.0) / -2.0
+    Q = np.maximum(dist / (2.0 * np.sum(dist)), MACHINE_EPSILON)
+
+    # Optimization trick below: np.dot(x, y) is faster than
+    # np.sum(x * y) because it calls BLAS
+
+    # Objective: C (Kullback-Leibler divergence of P and Q)
+    if compute_error:
+        kl_divergence = 2.0 * np.dot(P, np.log(np.maximum(P, MACHINE_EPSILON) / Q))
+    else:
+        kl_divergence = np.nan
+
+    # Gradient: dC/dY
+    # pdist always returns double precision distances, so allocate the
+    # gradient with the dtype of `params` to keep the embedding precision.
+    grad = np.ndarray((n_samples, n_components), dtype=params.dtype)
+    PQd = squareform((P - Q) * dist)
+    for i in range(skip_num_points, n_samples):
+        grad[i] = np.dot(np.ravel(PQd[i], order="K"), X_embedded[i] - X_embedded)
+    grad = grad.ravel()
+    c = 2.0 * (degrees_of_freedom + 1.0) / degrees_of_freedom
+    grad *= c
+
+    return kl_divergence, grad
+
+
+def _kl_divergence_bh(
+    params,
+    P,
+    degrees_of_freedom,
+    n_samples,
+    n_components,
+    angle=0.5,
+    skip_num_points=0,
+    verbose=False,
+    compute_error=True,
+    num_threads=1,
+):
+    """t-SNE objective function: KL divergence of p_ijs and q_ijs.
+
+    Uses Barnes-Hut tree methods to calculate the gradient that
+    runs in O(NlogN) instead of O(N^2).
+
+    Parameters
+    ----------
+    params : ndarray of shape (n_params,)
+        Unraveled embedding.
+
+    P : sparse matrix of shape (n_samples, n_samples)
+        Sparse approximate joint probability matrix, computed only for the
+        k nearest-neighbors and symmetrized. Matrix should be of CSR format.
+
+    degrees_of_freedom : int
+        Degrees of freedom of the Student's-t distribution.
+
+    n_samples : int
+        Number of samples.
+
+    n_components : int
+        Dimension of the embedded space.
+
+    angle : float, default=0.5
+        This is the trade-off between speed and accuracy for Barnes-Hut T-SNE.
+        'angle' is the angular size (referred to as theta in [3]) of a distant
+        node as measured from a point. If this size is below 'angle' then it is
+        used as a summary node of all points contained within it.
+        This method is not very sensitive to changes in this parameter
+        in the range of 0.2 - 0.8. Angle less than 0.2 has quickly increasing
+        computation time and angle greater than 0.8 has quickly increasing error.
+
+    skip_num_points : int, default=0
+        This does not compute the gradient for points with indices below
+        `skip_num_points`. This is useful when computing transforms of new
+        data where you'd like to keep the old data fixed.
+
+    verbose : int, default=False
+        Verbosity level.
+
+    compute_error: bool, default=True
+        If False, the kl_divergence is not computed and returns NaN.
+
+    num_threads : int, default=1
+        Number of threads used to compute the gradient. This is set here to
+        avoid calling _openmp_effective_n_threads for each gradient step.
+
+    Returns
+    -------
+    kl_divergence : float
+        Kullback-Leibler divergence of p_ij and q_ij.
+
+    grad : ndarray of shape (n_params,)
+        Unraveled gradient of the Kullback-Leibler divergence with respect to
+        the embedding.
+ """ + params = params.astype(np.float32, copy=False) + X_embedded = params.reshape(n_samples, n_components) + + val_P = P.data.astype(np.float32, copy=False) + neighbors = P.indices.astype(np.int64, copy=False) + indptr = P.indptr.astype(np.int64, copy=False) + + grad = np.zeros(X_embedded.shape, dtype=np.float32) + error = _barnes_hut_tsne.gradient( + val_P, + X_embedded, + neighbors, + indptr, + grad, + angle, + n_components, + verbose, + dof=degrees_of_freedom, + compute_error=compute_error, + num_threads=num_threads, + ) + c = 2.0 * (degrees_of_freedom + 1.0) / degrees_of_freedom + grad = grad.ravel() + grad *= c + + return error, grad + + +def _gradient_descent( + objective, + p0, + it, + max_iter, + n_iter_check=1, + n_iter_without_progress=300, + momentum=0.8, + learning_rate=200.0, + min_gain=0.01, + min_grad_norm=1e-7, + verbose=0, + args=None, + kwargs=None, +): + """Batch gradient descent with momentum and individual gains. + + Parameters + ---------- + objective : callable + Should return a tuple of cost and gradient for a given parameter + vector. When expensive to compute, the cost can optionally + be None and can be computed every n_iter_check steps using + the objective_error function. + + p0 : array-like of shape (n_params,) + Initial parameter vector. + + it : int + Current number of iterations (this function will be called more than + once during the optimization). + + max_iter : int + Maximum number of gradient descent iterations. + + n_iter_check : int, default=1 + Number of iterations before evaluating the global error. If the error + is sufficiently low, we abort the optimization. + + n_iter_without_progress : int, default=300 + Maximum number of iterations without progress before we abort the + optimization. + + momentum : float within (0.0, 1.0), default=0.8 + The momentum generates a weight for previous gradients that decays + exponentially. 
+ + learning_rate : float, default=200.0 + The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If + the learning rate is too high, the data may look like a 'ball' with any + point approximately equidistant from its nearest neighbours. If the + learning rate is too low, most points may look compressed in a dense + cloud with few outliers. + + min_gain : float, default=0.01 + Minimum individual gain for each parameter. + + min_grad_norm : float, default=1e-7 + If the gradient norm is below this threshold, the optimization will + be aborted. + + verbose : int, default=0 + Verbosity level. + + args : sequence, default=None + Arguments to pass to objective function. + + kwargs : dict, default=None + Keyword arguments to pass to objective function. + + Returns + ------- + p : ndarray of shape (n_params,) + Optimum parameters. + + error : float + Optimum. + + i : int + Last iteration. + """ + if args is None: + args = [] + if kwargs is None: + kwargs = {} + + p = p0.copy().ravel() + update = np.zeros_like(p) + gains = np.ones_like(p) + error = np.finfo(float).max + best_error = np.finfo(float).max + best_iter = i = it + + tic = time() + for i in range(it, max_iter): + check_convergence = (i + 1) % n_iter_check == 0 + # only compute the error when needed + kwargs["compute_error"] = check_convergence or i == max_iter - 1 + + error, grad = objective(p, *args, **kwargs) + + inc = update * grad < 0.0 + dec = np.invert(inc) + gains[inc] += 0.2 + gains[dec] *= 0.8 + np.clip(gains, min_gain, np.inf, out=gains) + grad *= gains + update = momentum * update - learning_rate * grad + p += update + + if check_convergence: + toc = time() + duration = toc - tic + tic = toc + grad_norm = linalg.norm(grad) + + if verbose >= 2: + print( + "[t-SNE] Iteration %d: error = %.7f," + " gradient norm = %.7f" + " (%s iterations in %0.3fs)" + % (i + 1, error, grad_norm, n_iter_check, duration) + ) + + if error < best_error: + best_error = error + best_iter = i + elif i - best_iter > 
n_iter_without_progress:
+                if verbose >= 2:
+                    print(
+                        "[t-SNE] Iteration %d: did not make any progress "
+                        "during the last %d episodes. Finished."
+                        % (i + 1, n_iter_without_progress)
+                    )
+                break
+            if grad_norm <= min_grad_norm:
+                if verbose >= 2:
+                    print(
+                        "[t-SNE] Iteration %d: gradient norm %f. Finished."
+                        % (i + 1, grad_norm)
+                    )
+                break
+
+    return p, error, i
+
+
+@validate_params(
+    {
+        "X": ["array-like", "sparse matrix"],
+        "X_embedded": ["array-like", "sparse matrix"],
+        "n_neighbors": [Interval(Integral, 1, None, closed="left")],
+        "metric": [StrOptions(set(_VALID_METRICS) | {"precomputed"}), callable],
+    },
+    prefer_skip_nested_validation=True,
+)
+def trustworthiness(X, X_embedded, *, n_neighbors=5, metric="euclidean"):
+    r"""Indicate to what extent the local structure is retained.
+
+    The trustworthiness is within [0, 1]. It is defined as
+
+    .. math::
+
+        T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1}
+            \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))
+
+    where for each sample i, :math:`\mathcal{N}_{i}^{k}` are its k nearest
+    neighbors in the output space, and every sample j is its :math:`r(i, j)`-th
+    nearest neighbor in the input space. In other words, any unexpected nearest
+    neighbors in the output space are penalised in proportion to their rank in
+    the input space.
+
+    Parameters
+    ----------
+    X : {array-like, sparse matrix} of shape (n_samples, n_features) or \
+            (n_samples, n_samples)
+        If the metric is 'precomputed' X must be a square distance
+        matrix. Otherwise it contains a sample per row.
+
+    X_embedded : {array-like, sparse matrix} of shape (n_samples, n_components)
+        Embedding of the training data in low-dimensional space.
+
+    n_neighbors : int, default=5
+        The number of neighbors that will be considered. Should be fewer than
+        `n_samples / 2` to ensure the trustworthiness lies within [0, 1], as
+        mentioned in [1]_. An error will be raised otherwise.
+ + metric : str or callable, default='euclidean' + Which metric to use for computing pairwise distances between samples + from the original input space. If metric is 'precomputed', X must be a + matrix of pairwise distances or squared distances. Otherwise, for a list + of available metrics, see the documentation of argument metric in + `sklearn.pairwise.pairwise_distances` and metrics listed in + `sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS`. Note that the + "cosine" metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`. + + .. versionadded:: 0.20 + + Returns + ------- + trustworthiness : float + Trustworthiness of the low-dimensional embedding. + + References + ---------- + .. [1] Jarkko Venna and Samuel Kaski. 2001. Neighborhood + Preservation in Nonlinear Projection Methods: An Experimental Study. + In Proceedings of the International Conference on Artificial Neural Networks + (ICANN '01). Springer-Verlag, Berlin, Heidelberg, 485-491. + + .. [2] Laurens van der Maaten. Learning a Parametric Embedding by Preserving + Local Structure. Proceedings of the Twelfth International Conference on + Artificial Intelligence and Statistics, PMLR 5:384-391, 2009. 
+ + Examples + -------- + >>> from sklearn.datasets import make_blobs + >>> from sklearn.decomposition import PCA + >>> from sklearn.manifold import trustworthiness + >>> X, _ = make_blobs(n_samples=100, n_features=10, centers=3, random_state=42) + >>> X_embedded = PCA(n_components=2).fit_transform(X) + >>> print(f"{trustworthiness(X, X_embedded, n_neighbors=5):.2f}") + 0.92 + """ + n_samples = _num_samples(X) + if n_neighbors >= n_samples / 2: + raise ValueError( + f"n_neighbors ({n_neighbors}) should be less than n_samples / 2" + f" ({n_samples / 2})" + ) + dist_X = pairwise_distances(X, metric=metric) + if metric == "precomputed": + dist_X = dist_X.copy() + # we set the diagonal to np.inf to exclude the points themselves from + # their own neighborhood + np.fill_diagonal(dist_X, np.inf) + ind_X = np.argsort(dist_X, axis=1) + # `ind_X[i]` is the index of sorted distances between i and other samples + ind_X_embedded = ( + NearestNeighbors(n_neighbors=n_neighbors) + .fit(X_embedded) + .kneighbors(return_distance=False) + ) + + # We build an inverted index of neighbors in the input space: For sample i, + # we define `inverted_index[i]` as the inverted index of sorted distances: + # inverted_index[i][ind_X[i]] = np.arange(1, n_sample + 1) + inverted_index = np.zeros((n_samples, n_samples), dtype=int) + ordered_indices = np.arange(n_samples + 1) + inverted_index[ordered_indices[:-1, np.newaxis], ind_X] = ordered_indices[1:] + ranks = ( + inverted_index[ordered_indices[:-1, np.newaxis], ind_X_embedded] - n_neighbors + ) + t = np.sum(ranks[ranks > 0]) + t = 1.0 - t * ( + 2.0 / (n_samples * n_neighbors * (2.0 * n_samples - 3.0 * n_neighbors - 1.0)) + ) + return t + + +class TSNE(ClassNamePrefixFeaturesOutMixin, TransformerMixin, BaseEstimator): + """T-distributed Stochastic Neighbor Embedding. + + t-SNE [1] is a tool to visualize high-dimensional data. 
It converts + similarities between data points to joint probabilities and tries + to minimize the Kullback-Leibler divergence between the joint + probabilities of the low-dimensional embedding and the + high-dimensional data. t-SNE has a cost function that is not convex, + i.e. with different initializations we can get different results. + + It is highly recommended to use another dimensionality reduction + method (e.g. PCA for dense data or TruncatedSVD for sparse data) + to reduce the number of dimensions to a reasonable amount (e.g. 50) + if the number of features is very high. This will suppress some + noise and speed up the computation of pairwise distances between + samples. For more tips see Laurens van der Maaten's FAQ [2]. + + Read more in the :ref:`User Guide `. + + Parameters + ---------- + n_components : int, default=2 + Dimension of the embedded space. + + perplexity : float, default=30.0 + The perplexity is related to the number of nearest neighbors that + is used in other manifold learning algorithms. Larger datasets + usually require a larger perplexity. Consider selecting a value + between 5 and 50. Different values can result in significantly + different results. The perplexity must be less than the number + of samples. + + early_exaggeration : float, default=12.0 + Controls how tight natural clusters in the original space are in + the embedded space and how much space will be between them. For + larger values, the space between natural clusters will be larger + in the embedded space. Again, the choice of this parameter is not + very critical. If the cost function increases during initial + optimization, the early exaggeration factor or the learning rate + might be too high. + + learning_rate : float or "auto", default="auto" + The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If + the learning rate is too high, the data may look like a 'ball' with any + point approximately equidistant from its nearest neighbours. 
If the + learning rate is too low, most points may look compressed in a dense + cloud with few outliers. If the cost function gets stuck in a bad local + minimum increasing the learning rate may help. + Note that many other t-SNE implementations (bhtsne, FIt-SNE, openTSNE, + etc.) use a definition of learning_rate that is 4 times smaller than + ours. So our learning_rate=200 corresponds to learning_rate=800 in + those other implementations. The 'auto' option sets the learning_rate + to `max(N / early_exaggeration / 4, 50)` where N is the sample size, + following [4] and [5]. + + .. versionchanged:: 1.2 + The default value changed to `"auto"`. + + max_iter : int, default=1000 + Maximum number of iterations for the optimization. Should be at + least 250. + + .. versionchanged:: 1.5 + Parameter name changed from `n_iter` to `max_iter`. + + n_iter_without_progress : int, default=300 + Maximum number of iterations without progress before we abort the + optimization, used after 250 initial iterations with early + exaggeration. Note that progress is only checked every 50 iterations so + this value is rounded to the next multiple of 50. + + .. versionadded:: 0.17 + parameter *n_iter_without_progress* to control stopping criteria. + + min_grad_norm : float, default=1e-7 + If the gradient norm is below this threshold, the optimization will + be stopped. + + metric : str or callable, default='euclidean' + The metric to use when calculating distance between instances in a + feature array. If metric is a string, it must be one of the options + allowed by scipy.spatial.distance.pdist for its metric parameter, or + a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. + If metric is "precomputed", X is assumed to be a distance matrix. + Alternatively, if metric is a callable function, it is called on each + pair of instances (rows) and the resulting value recorded. The callable + should take two arrays from X as input and return a value indicating + the distance between them. 
The default is "euclidean" which is
+        interpreted as squared euclidean distance.
+
+    metric_params : dict, default=None
+        Additional keyword arguments for the metric function.
+
+        .. versionadded:: 1.1
+
+    init : {"random", "pca"} or ndarray of shape (n_samples, n_components), \
+            default="pca"
+        Initialization of embedding.
+        PCA initialization cannot be used with precomputed distances and is
+        usually more globally stable than random initialization.
+
+        .. versionchanged:: 1.2
+           The default value changed to `"pca"`.
+
+    verbose : int, default=0
+        Verbosity level.
+
+    random_state : int, RandomState instance or None, default=None
+        Determines the random number generator. Pass an int for reproducible
+        results across multiple function calls. Note that different
+        initializations might result in different local minima of the cost
+        function. See :term:`Glossary <random_state>`.
+
+    method : {'barnes_hut', 'exact'}, default='barnes_hut'
+        By default the gradient calculation algorithm uses Barnes-Hut
+        approximation running in O(NlogN) time. method='exact'
+        will run on the slower, but exact, algorithm in O(N^2) time. The
+        exact algorithm should be used when nearest-neighbor errors need
+        to be better than 3%. However, the exact method cannot scale to
+        millions of examples.
+
+        .. versionadded:: 0.17
+           Approximate optimization *method* via the Barnes-Hut.
+
+    angle : float, default=0.5
+        Only used if method='barnes_hut'.
+        This is the trade-off between speed and accuracy for Barnes-Hut T-SNE.
+        'angle' is the angular size (referred to as theta in [3]) of a distant
+        node as measured from a point. If this size is below 'angle' then it is
+        used as a summary node of all points contained within it.
+        This method is not very sensitive to changes in this parameter
+        in the range of 0.2 - 0.8. Angle less than 0.2 has quickly increasing
+        computation time and angle greater than 0.8 has quickly increasing error.
+ + n_jobs : int, default=None + The number of parallel jobs to run for neighbors search. This parameter + has no impact when ``metric="precomputed"`` or + (``metric="euclidean"`` and ``method="exact"``). + ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. + ``-1`` means using all processors. See :term:`Glossary ` + for more details. + + .. versionadded:: 0.22 + + n_iter : int + Maximum number of iterations for the optimization. Should be at + least 250. + + .. deprecated:: 1.5 + `n_iter` was deprecated in version 1.5 and will be removed in 1.7. + Please use `max_iter` instead. + + Attributes + ---------- + embedding_ : array-like of shape (n_samples, n_components) + Stores the embedding vectors. + + kl_divergence_ : float + Kullback-Leibler divergence after optimization. + + n_features_in_ : int + Number of features seen during :term:`fit`. + + .. versionadded:: 0.24 + + feature_names_in_ : ndarray of shape (`n_features_in_`,) + Names of features seen during :term:`fit`. Defined only when `X` + has feature names that are all strings. + + .. versionadded:: 1.0 + + learning_rate_ : float + Effective learning rate. + + .. versionadded:: 1.2 + + n_iter_ : int + Number of iterations run. + + See Also + -------- + sklearn.decomposition.PCA : Principal component analysis that is a linear + dimensionality reduction method. + sklearn.decomposition.KernelPCA : Non-linear dimensionality reduction using + kernels and PCA. + MDS : Manifold learning using multidimensional scaling. + Isomap : Manifold learning based on Isometric Mapping. + LocallyLinearEmbedding : Manifold learning using Locally Linear Embedding. + SpectralEmbedding : Spectral embedding for non-linear dimensionality. + + Notes + ----- + For an example of using :class:`~sklearn.manifold.TSNE` in combination with + :class:`~sklearn.neighbors.KNeighborsTransformer` see + :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`. 
+ + References + ---------- + + [1] van der Maaten, L.J.P.; Hinton, G.E. Visualizing High-Dimensional Data + Using t-SNE. Journal of Machine Learning Research 9:2579-2605, 2008. + + [2] van der Maaten, L.J.P. t-Distributed Stochastic Neighbor Embedding + https://lvdmaaten.github.io/tsne/ + + [3] L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. + Journal of Machine Learning Research 15(Oct):3221-3245, 2014. + https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf + + [4] Belkina, A. C., Ciccolella, C. O., Anno, R., Halpert, R., Spidlen, J., + & Snyder-Cappione, J. E. (2019). Automated optimized parameters for + T-distributed stochastic neighbor embedding improve visualization + and analysis of large datasets. Nature Communications, 10(1), 1-12. + + [5] Kobak, D., & Berens, P. (2019). The art of using t-SNE for single-cell + transcriptomics. Nature Communications, 10(1), 1-14. + + Examples + -------- + >>> import numpy as np + >>> from sklearn.manifold import TSNE + >>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) + >>> X_embedded = TSNE(n_components=2, learning_rate='auto', + ... 
init='random', perplexity=3).fit_transform(X) + >>> X_embedded.shape + (4, 2) + """ + + _parameter_constraints: dict = { + "n_components": [Interval(Integral, 1, None, closed="left")], + "perplexity": [Interval(Real, 0, None, closed="neither")], + "early_exaggeration": [Interval(Real, 1, None, closed="left")], + "learning_rate": [ + StrOptions({"auto"}), + Interval(Real, 0, None, closed="neither"), + ], + "max_iter": [Interval(Integral, 250, None, closed="left"), None], + "n_iter_without_progress": [Interval(Integral, -1, None, closed="left")], + "min_grad_norm": [Interval(Real, 0, None, closed="left")], + "metric": [StrOptions(set(_VALID_METRICS) | {"precomputed"}), callable], + "metric_params": [dict, None], + "init": [ + StrOptions({"pca", "random"}), + np.ndarray, + ], + "verbose": ["verbose"], + "random_state": ["random_state"], + "method": [StrOptions({"barnes_hut", "exact"})], + "angle": [Interval(Real, 0, 1, closed="both")], + "n_jobs": [None, Integral], + "n_iter": [ + Interval(Integral, 250, None, closed="left"), + Hidden(StrOptions({"deprecated"})), + ], + } + + # Control the number of exploration iterations with early_exaggeration on + _EXPLORATION_MAX_ITER = 250 + + # Control the number of iterations between progress checks + _N_ITER_CHECK = 50 + + def __init__( + self, + n_components=2, + *, + perplexity=30.0, + early_exaggeration=12.0, + learning_rate="auto", + max_iter=None, # TODO(1.7): set to 1000 + n_iter_without_progress=300, + min_grad_norm=1e-7, + metric="euclidean", + metric_params=None, + init="pca", + verbose=0, + random_state=None, + method="barnes_hut", + angle=0.5, + n_jobs=None, + n_iter="deprecated", + ): + self.n_components = n_components + self.perplexity = perplexity + self.early_exaggeration = early_exaggeration + self.learning_rate = learning_rate + self.max_iter = max_iter + self.n_iter_without_progress = n_iter_without_progress + self.min_grad_norm = min_grad_norm + self.metric = metric + self.metric_params = metric_params + 
self.init = init + self.verbose = verbose + self.random_state = random_state + self.method = method + self.angle = angle + self.n_jobs = n_jobs + self.n_iter = n_iter + + def _check_params_vs_input(self, X): + if self.perplexity >= X.shape[0]: + raise ValueError("perplexity must be less than n_samples") + + def _fit(self, X, skip_num_points=0): + """Private function to fit the model using X as training data.""" + + if isinstance(self.init, str) and self.init == "pca" and issparse(X): + raise TypeError( + "PCA initialization is currently not supported " + "with the sparse input matrix. Use " + 'init="random" instead.' + ) + + if self.learning_rate == "auto": + # See issue #18018 + self.learning_rate_ = X.shape[0] / self.early_exaggeration / 4 + self.learning_rate_ = np.maximum(self.learning_rate_, 50) + else: + self.learning_rate_ = self.learning_rate + + if self.method == "barnes_hut": + X = validate_data( + self, + X, + accept_sparse=["csr"], + ensure_min_samples=2, + dtype=[np.float32, np.float64], + ) + else: + X = validate_data( + self, + X, + accept_sparse=["csr", "csc", "coo"], + dtype=[np.float32, np.float64], + ) + if self.metric == "precomputed": + if isinstance(self.init, str) and self.init == "pca": + raise ValueError( + 'The parameter init="pca" cannot be used with metric="precomputed".' + ) + if X.shape[0] != X.shape[1]: + raise ValueError("X should be a square distance matrix") + + check_non_negative( + X, + ( + "TSNE.fit(). With metric='precomputed', X " + "should contain positive distances." + ), + ) + + if self.method == "exact" and issparse(X): + raise TypeError( + 'TSNE with method="exact" does not accept sparse ' + 'precomputed distance matrix. Use method="barnes_hut" ' + "or provide the dense distance matrix." + ) + + if self.method == "barnes_hut" and self.n_components > 3: + raise ValueError( + "'n_components' should be inferior to 4 for the " + "barnes_hut algorithm as it relies on " + "quad-tree or oct-tree." 
+            )
+        random_state = check_random_state(self.random_state)
+
+        n_samples = X.shape[0]
+
+        neighbors_nn = None
+        if self.method == "exact":
+            # Retrieve the distance matrix, either using the precomputed one or
+            # computing it.
+            if self.metric == "precomputed":
+                distances = X
+            else:
+                if self.verbose:
+                    print("[t-SNE] Computing pairwise distances...")
+
+                if self.metric == "euclidean":
+                    # Euclidean is squared here, rather than using **= 2,
+                    # because euclidean_distances already calculates
+                    # squared distances, and returns np.sqrt(dist) for
+                    # squared=False.
+                    # Also, Euclidean is slower for n_jobs>1, so don't set here
+                    distances = pairwise_distances(X, metric=self.metric, squared=True)
+                else:
+                    metric_params_ = self.metric_params or {}
+                    distances = pairwise_distances(
+                        X, metric=self.metric, n_jobs=self.n_jobs, **metric_params_
+                    )
+
+            if np.any(distances < 0):
+                raise ValueError(
+                    "All distances should be positive, the metric given is not correct"
+                )
+
+            if self.metric != "euclidean":
+                distances **= 2
+
+            # compute the joint probability distribution for the input space
+            P = _joint_probabilities(distances, self.perplexity, self.verbose)
+            assert np.all(np.isfinite(P)), "All probabilities should be finite"
+            assert np.all(P >= 0), "All probabilities should be non-negative"
+            assert np.all(
+                P <= 1
+            ), "All probabilities should be less than or equal to one"
+
+        else:
+            # Compute the number of nearest neighbors to find.
+            # LvdM uses 3 * perplexity as the number of neighbors.
+            # In the event that we have very small # of points
+            # set the neighbors to n - 1.
+            n_neighbors = min(n_samples - 1, int(3.0 * self.perplexity + 1))
+
+            if self.verbose:
+                print("[t-SNE] Computing {} nearest neighbors...".format(n_neighbors))
+
+            # Find the nearest neighbors for every point
+            knn = NearestNeighbors(
+                algorithm="auto",
+                n_jobs=self.n_jobs,
+                n_neighbors=n_neighbors,
+                metric=self.metric,
+                metric_params=self.metric_params,
+            )
+            t0 = time()
+            knn.fit(X)
+            duration = time() - t0
+            if self.verbose:
+                print(
+                    "[t-SNE] Indexed {} samples in {:.3f}s...".format(
+                        n_samples, duration
+                    )
+                )
+
+            t0 = time()
+            distances_nn = knn.kneighbors_graph(mode="distance")
+            duration = time() - t0
+            if self.verbose:
+                print(
+                    "[t-SNE] Computed neighbors for {} samples in {:.3f}s...".format(
+                        n_samples, duration
+                    )
+                )
+
+            # Free the memory used by the ball_tree
+            del knn
+
+            # knn returns the euclidean distance but we need it squared
+            # to be consistent with the 'exact' method. Note that the
+            # method was derived using the euclidean metric in the
+            # input space. Not sure of the implication of using a different
+            # metric.
+            distances_nn.data **= 2
+
+            # compute the joint probability distribution for the input space
+            P = _joint_probabilities_nn(distances_nn, self.perplexity, self.verbose)
+
+        if isinstance(self.init, np.ndarray):
+            X_embedded = self.init
+        elif self.init == "pca":
+            pca = PCA(
+                n_components=self.n_components,
+                svd_solver="randomized",
+                random_state=random_state,
+            )
+            # Always output a numpy array, no matter what is configured globally
+            pca.set_output(transform="default")
+            X_embedded = pca.fit_transform(X).astype(np.float32, copy=False)
+            # PCA is rescaled so that PC1 has standard deviation 1e-4 which is
+            # the default value for random initialization. See issue #18018.
+            X_embedded = X_embedded / np.std(X_embedded[:, 0]) * 1e-4
+        elif self.init == "random":
+            # The embedding is initialized with iid samples from Gaussians with
+            # standard deviation 1e-4.
+            X_embedded = 1e-4 * random_state.standard_normal(
+                size=(n_samples, self.n_components)
+            ).astype(np.float32)
+
+        # Degrees of freedom of the Student's t-distribution. The suggestion
+        # degrees_of_freedom = n_components - 1 comes from
+        # "Learning a Parametric Embedding by Preserving Local Structure"
+        # Laurens van der Maaten, 2009.
+        degrees_of_freedom = max(self.n_components - 1, 1)
+
+        return self._tsne(
+            P,
+            degrees_of_freedom,
+            n_samples,
+            X_embedded=X_embedded,
+            neighbors=neighbors_nn,
+            skip_num_points=skip_num_points,
+        )
+
+    def _tsne(
+        self,
+        P,
+        degrees_of_freedom,
+        n_samples,
+        X_embedded,
+        neighbors=None,
+        skip_num_points=0,
+    ):
+        """Runs t-SNE."""
+        # t-SNE minimizes the Kullback-Leibler divergence of the Gaussians P
+        # and the Student's t-distributions Q. The optimization algorithm that
+        # we use is batch gradient descent with two stages:
+        # * initial optimization with early exaggeration and momentum at 0.5
+        # * final optimization with momentum at 0.8
+        params = X_embedded.ravel()
+
+        opt_args = {
+            "it": 0,
+            "n_iter_check": self._N_ITER_CHECK,
+            "min_grad_norm": self.min_grad_norm,
+            "learning_rate": self.learning_rate_,
+            "verbose": self.verbose,
+            "kwargs": dict(skip_num_points=skip_num_points),
+            "args": [P, degrees_of_freedom, n_samples, self.n_components],
+            "n_iter_without_progress": self._EXPLORATION_MAX_ITER,
+            "max_iter": self._EXPLORATION_MAX_ITER,
+            "momentum": 0.5,
+        }
+        if self.method == "barnes_hut":
+            obj_func = _kl_divergence_bh
+            opt_args["kwargs"]["angle"] = self.angle
+            # Repeat verbose argument for _kl_divergence_bh
+            opt_args["kwargs"]["verbose"] = self.verbose
+            # Get the number of threads for gradient computation here to
+            # avoid recomputing it at each iteration.
+            opt_args["kwargs"]["num_threads"] = _openmp_effective_n_threads()
+        else:
+            obj_func = _kl_divergence
+
+        # Learning schedule (part 1): do 250 iterations with lower momentum but
+        # higher learning rate controlled via the early exaggeration parameter
+        P *= self.early_exaggeration
+        params, kl_divergence, it = _gradient_descent(obj_func, params, **opt_args)
+        if self.verbose:
+            print(
+                "[t-SNE] KL divergence after %d iterations with early exaggeration: %f"
+                % (it + 1, kl_divergence)
+            )
+
+        # Learning schedule (part 2): disable early exaggeration and finish
+        # optimization with a higher momentum at 0.8
+        P /= self.early_exaggeration
+        remaining = self._max_iter - self._EXPLORATION_MAX_ITER
+        if it < self._EXPLORATION_MAX_ITER or remaining > 0:
+            opt_args["max_iter"] = self._max_iter
+            opt_args["it"] = it + 1
+            opt_args["momentum"] = 0.8
+            opt_args["n_iter_without_progress"] = self.n_iter_without_progress
+            params, kl_divergence, it = _gradient_descent(obj_func, params, **opt_args)
+
+        # Save the final number of iterations
+        self.n_iter_ = it
+
+        if self.verbose:
+            print(
+                "[t-SNE] KL divergence after %d iterations: %f"
+                % (it + 1, kl_divergence)
+            )
+
+        X_embedded = params.reshape(n_samples, self.n_components)
+        self.kl_divergence_ = kl_divergence
+
+        return X_embedded
+
+    @_fit_context(
+        # TSNE.metric is not validated yet
+        prefer_skip_nested_validation=False
+    )
+    def fit_transform(self, X, y=None):
+        """Fit X into an embedded space and return that transformed output.
+
+        Parameters
+        ----------
+        X : {array-like, sparse matrix} of shape (n_samples, n_features) or \
+                (n_samples, n_samples)
+            If the metric is 'precomputed' X must be a square distance
+            matrix. Otherwise it contains a sample per row. If the method
+            is 'exact', X may be a sparse matrix of type 'csr', 'csc'
+            or 'coo'. If the method is 'barnes_hut' and the metric is
+            'precomputed', X may be a precomputed sparse graph.
+
+        y : None
+            Ignored.
+ + Returns + ------- + X_new : ndarray of shape (n_samples, n_components) + Embedding of the training data in low-dimensional space. + """ + # TODO(1.7): remove + # Also make sure to change `max_iter` default back to 1000 and deprecate None + if self.n_iter != "deprecated": + if self.max_iter is not None: + raise ValueError( + "Both 'n_iter' and 'max_iter' attributes were set. Attribute" + " 'n_iter' was deprecated in version 1.5 and will be removed in" + " 1.7. To avoid this error, only set the 'max_iter' attribute." + ) + warnings.warn( + ( + "'n_iter' was renamed to 'max_iter' in version 1.5 and " + "will be removed in 1.7." + ), + FutureWarning, + ) + self._max_iter = self.n_iter + elif self.max_iter is None: + self._max_iter = 1000 + else: + self._max_iter = self.max_iter + + self._check_params_vs_input(X) + embedding = self._fit(X) + self.embedding_ = embedding + return self.embedding_ + + @_fit_context( + # TSNE.metric is not validated yet + prefer_skip_nested_validation=False + ) + def fit(self, X, y=None): + """Fit X into an embedded space. + + Parameters + ---------- + X : {array-like, sparse matrix} of shape (n_samples, n_features) or \ + (n_samples, n_samples) + If the metric is 'precomputed' X must be a square distance + matrix. Otherwise it contains a sample per row. If the method + is 'exact', X may be a sparse matrix of type 'csr', 'csc' + or 'coo'. If the method is 'barnes_hut' and the metric is + 'precomputed', X may be a precomputed sparse graph. + + y : None + Ignored. + + Returns + ------- + self : object + Fitted estimator. 
+ """ + self.fit_transform(X) + return self + + @property + def _n_features_out(self): + """Number of transformed output features.""" + return self.embedding_.shape[1] + + def __sklearn_tags__(self): + tags = super().__sklearn_tags__() + tags.input_tags.pairwise = self.metric == "precomputed" + return tags diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_utils.pyx b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_utils.pyx new file mode 100644 index 0000000000000000000000000000000000000000..be3a1d2f91f6670cea8eee130990becc3fc4b8bb --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/_utils.pyx @@ -0,0 +1,120 @@ +import numpy as np + +from libc cimport math +from libc.math cimport INFINITY + +from ..utils._typedefs cimport float32_t, float64_t + + +cdef float EPSILON_DBL = 1e-8 +cdef float PERPLEXITY_TOLERANCE = 1e-5 + + +# TODO: have this function support float32 and float64 and preserve inputs' dtypes. +def _binary_search_perplexity( + const float32_t[:, :] sqdistances, + float desired_perplexity, + int verbose): + """Binary search for sigmas of conditional Gaussians. + + This approximation reduces the computational complexity from O(N^2) to + O(uN). + + Parameters + ---------- + sqdistances : ndarray of shape (n_samples, n_neighbors), dtype=np.float32 + Distances between training samples and their k nearest neighbors. + When using the exact method, this is a square (n_samples, n_samples) + distance matrix. The TSNE default metric is "euclidean" which is + interpreted as squared euclidean distance. + + desired_perplexity : float + Desired perplexity (2^entropy) of the conditional Gaussians. + + verbose : int + Verbosity level. + + Returns + ------- + P : ndarray of shape (n_samples, n_samples), dtype=np.float64 + Probabilities of conditional Gaussian distributions p_i|j. 
+ """ + # Maximum number of binary search steps + cdef long n_steps = 100 + + cdef long n_samples = sqdistances.shape[0] + cdef long n_neighbors = sqdistances.shape[1] + cdef int using_neighbors = n_neighbors < n_samples + # Precisions of conditional Gaussian distributions + cdef double beta + cdef double beta_min + cdef double beta_max + cdef double beta_sum = 0.0 + + # Use log scale + cdef double desired_entropy = math.log(desired_perplexity) + cdef double entropy_diff + + cdef double entropy + cdef double sum_Pi + cdef double sum_disti_Pi + cdef long i, j, l + + # This array is later used as a 32bit array. It has multiple intermediate + # floating point additions that benefit from the extra precision + cdef float64_t[:, :] P = np.zeros( + (n_samples, n_neighbors), dtype=np.float64) + + for i in range(n_samples): + beta_min = -INFINITY + beta_max = INFINITY + beta = 1.0 + + # Binary search of precision for i-th conditional distribution + for l in range(n_steps): + # Compute current entropy and corresponding probabilities + # computed just over the nearest neighbors or over all data + # if we're not using neighbors + sum_Pi = 0.0 + for j in range(n_neighbors): + if j != i or using_neighbors: + P[i, j] = math.exp(-sqdistances[i, j] * beta) + sum_Pi += P[i, j] + + if sum_Pi == 0.0: + sum_Pi = EPSILON_DBL + sum_disti_Pi = 0.0 + + for j in range(n_neighbors): + P[i, j] /= sum_Pi + sum_disti_Pi += sqdistances[i, j] * P[i, j] + + entropy = math.log(sum_Pi) + beta * sum_disti_Pi + entropy_diff = entropy - desired_entropy + + if math.fabs(entropy_diff) <= PERPLEXITY_TOLERANCE: + break + + if entropy_diff > 0.0: + beta_min = beta + if beta_max == INFINITY: + beta *= 2.0 + else: + beta = (beta + beta_max) / 2.0 + else: + beta_max = beta + if beta_min == -INFINITY: + beta /= 2.0 + else: + beta = (beta + beta_min) / 2.0 + + beta_sum += beta + + if verbose and ((i + 1) % 1000 == 0 or i + 1 == n_samples): + print("[t-SNE] Computed conditional probabilities for sample " + "%d / 
%d" % (i + 1, n_samples)) + + if verbose: + print("[t-SNE] Mean sigma: %f" + % np.mean(math.sqrt(n_samples / beta_sum))) + return np.asarray(P) diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/__init__.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/__pycache__/test_locally_linear.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/__pycache__/test_locally_linear.cpython-310.pyc new file mode 100644 index 0000000000000000000000000000000000000000..8b9a4d971e4f1ce7de11a7d29c99b4df8ae24128 Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/__pycache__/test_locally_linear.cpython-310.pyc differ diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/test_spectral_embedding.py b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/test_spectral_embedding.py new file mode 100644 index 0000000000000000000000000000000000000000..d63f6bd33fc96e333dc7a13ab5556364645c9318 --- /dev/null +++ b/evalkit_tf437/lib/python3.10/site-packages/sklearn/manifold/tests/test_spectral_embedding.py @@ -0,0 +1,502 @@ +from unittest.mock import Mock + +import numpy as np +import pytest +from scipy import sparse +from scipy.linalg import eigh +from scipy.sparse.linalg import eigsh, lobpcg + +from sklearn.cluster import KMeans +from sklearn.datasets import make_blobs +from sklearn.manifold import SpectralEmbedding, _spectral_embedding, spectral_embedding +from sklearn.manifold._spectral_embedding import ( + _graph_connected_component, + _graph_is_connected, +) +from sklearn.metrics import normalized_mutual_info_score, pairwise_distances +from sklearn.metrics.pairwise import rbf_kernel +from sklearn.neighbors import NearestNeighbors +from 
sklearn.utils._testing import assert_array_almost_equal, assert_array_equal +from sklearn.utils.extmath import _deterministic_vector_sign_flip +from sklearn.utils.fixes import ( + COO_CONTAINERS, + CSC_CONTAINERS, + CSR_CONTAINERS, + parse_version, + sp_version, +) +from sklearn.utils.fixes import laplacian as csgraph_laplacian + +try: + from pyamg import smoothed_aggregation_solver # noqa + + pyamg_available = True +except ImportError: + pyamg_available = False +skip_if_no_pyamg = pytest.mark.skipif( + not pyamg_available, reason="PyAMG is required for the tests in this function." +) + +# non centered, sparse centers to check the +centers = np.array( + [ + [0.0, 5.0, 0.0, 0.0, 0.0], + [0.0, 0.0, 4.0, 0.0, 0.0], + [1.0, 0.0, 0.0, 5.0, 1.0], + ] +) +n_samples = 1000 +n_clusters, n_features = centers.shape +S, true_labels = make_blobs( + n_samples=n_samples, centers=centers, cluster_std=1.0, random_state=42 +) + + +def _assert_equal_with_sign_flipping(A, B, tol=0.0): + """Check array A and B are equal with possible sign flipping on + each column""" + tol_squared = tol**2 + for A_col, B_col in zip(A.T, B.T): + assert ( + np.max((A_col - B_col) ** 2) <= tol_squared + or np.max((A_col + B_col) ** 2) <= tol_squared + ) + + +@pytest.mark.parametrize("coo_container", COO_CONTAINERS) +def test_sparse_graph_connected_component(coo_container): + rng = np.random.RandomState(42) + n_samples = 300 + boundaries = [0, 42, 121, 200, n_samples] + p = rng.permutation(n_samples) + connections = [] + + for start, stop in zip(boundaries[:-1], boundaries[1:]): + group = p[start:stop] + # Connect all elements within the group at least once via an + # arbitrary path that spans the group. 
+ for i in range(len(group) - 1): + connections.append((group[i], group[i + 1])) + + # Add some more random connections within the group + min_idx, max_idx = 0, len(group) - 1 + n_random_connections = 1000 + source = rng.randint(min_idx, max_idx, size=n_random_connections) + target = rng.randint(min_idx, max_idx, size=n_random_connections) + connections.extend(zip(group[source], group[target])) + + # Build a symmetric affinity matrix + row_idx, column_idx = tuple(np.array(connections).T) + data = rng.uniform(0.1, 42, size=len(connections)) + affinity = coo_container((data, (row_idx, column_idx))) + affinity = 0.5 * (affinity + affinity.T) + + for start, stop in zip(boundaries[:-1], boundaries[1:]): + component_1 = _graph_connected_component(affinity, p[start]) + component_size = stop - start + assert component_1.sum() == component_size + + # We should retrieve the same component mask by starting by both ends + # of the group + component_2 = _graph_connected_component(affinity, p[stop - 1]) + assert component_2.sum() == component_size + assert_array_equal(component_1, component_2) + + +# TODO: investigate why this test is seed-sensitive on 32-bit Python +# runtimes. Is this revealing a numerical stability problem ? Or is it +# expected from the test numerical design ? In the latter case the test +# should be made less seed-sensitive instead. 
+@pytest.mark.parametrize( + "eigen_solver", + [ + "arpack", + "lobpcg", + pytest.param("amg", marks=skip_if_no_pyamg), + ], +) +@pytest.mark.parametrize("dtype", [np.float32, np.float64]) +def test_spectral_embedding_two_components(eigen_solver, dtype, seed=0): + # Test spectral embedding with two components + random_state = np.random.RandomState(seed) + n_sample = 100 + affinity = np.zeros(shape=[n_sample * 2, n_sample * 2]) + # first component + affinity[0:n_sample, 0:n_sample] = ( + np.abs(random_state.randn(n_sample, n_sample)) + 2 + ) + # second component + affinity[n_sample::, n_sample::] = ( + np.abs(random_state.randn(n_sample, n_sample)) + 2 + ) + + # Test of internal _graph_connected_component before connection + component = _graph_connected_component(affinity, 0) + assert component[:n_sample].all() + assert not component[n_sample:].any() + component = _graph_connected_component(affinity, -1) + assert not component[:n_sample].any() + assert component[n_sample:].all() + + # connection + affinity[0, n_sample + 1] = 1 + affinity[n_sample + 1, 0] = 1 + affinity.flat[:: 2 * n_sample + 1] = 0 + affinity = 0.5 * (affinity + affinity.T) + + true_label = np.zeros(shape=2 * n_sample) + true_label[0:n_sample] = 1 + + se_precomp = SpectralEmbedding( + n_components=1, + affinity="precomputed", + random_state=np.random.RandomState(seed), + eigen_solver=eigen_solver, + ) + + embedded_coordinate = se_precomp.fit_transform(affinity.astype(dtype)) + # thresholding on the first components using 0. 
+ label_ = np.array(embedded_coordinate.ravel() < 0, dtype=np.int64) + assert normalized_mutual_info_score(true_label, label_) == pytest.approx(1.0) + + +@pytest.mark.parametrize("sparse_container", [None, *CSR_CONTAINERS]) +@pytest.mark.parametrize( + "eigen_solver", + [ + "arpack", + "lobpcg", + pytest.param("amg", marks=skip_if_no_pyamg), + ], +) +@pytest.mark.parametrize("dtype", (np.float32, np.float64)) +def test_spectral_embedding_precomputed_affinity( + sparse_container, eigen_solver, dtype, seed=36 +): + # Test spectral embedding with precomputed kernel + gamma = 1.0 + X = S if sparse_container is None else sparse_container(S) + + se_precomp = SpectralEmbedding( + n_components=2, + affinity="precomputed", + random_state=np.random.RandomState(seed), + eigen_solver=eigen_solver, + ) + se_rbf = SpectralEmbedding( + n_components=2, + affinity="rbf", + gamma=gamma, + random_state=np.random.RandomState(seed), + eigen_solver=eigen_solver, + ) + embed_precomp = se_precomp.fit_transform(rbf_kernel(X.astype(dtype), gamma=gamma)) + embed_rbf = se_rbf.fit_transform(X.astype(dtype)) + assert_array_almost_equal(se_precomp.affinity_matrix_, se_rbf.affinity_matrix_) + _assert_equal_with_sign_flipping(embed_precomp, embed_rbf, 0.05) + + +def test_precomputed_nearest_neighbors_filtering(): + # Test precomputed graph filtering when containing too many neighbors + n_neighbors = 2 + results = [] + for additional_neighbors in [0, 10]: + nn = NearestNeighbors(n_neighbors=n_neighbors + additional_neighbors).fit(S) + graph = nn.kneighbors_graph(S, mode="connectivity") + embedding = ( + SpectralEmbedding( + random_state=0, + n_components=2, + affinity="precomputed_nearest_neighbors", + n_neighbors=n_neighbors, + ) + .fit(graph) + .embedding_ + ) + results.append(embedding) + + assert_array_equal(results[0], results[1]) + + +@pytest.mark.parametrize("sparse_container", [None, *CSR_CONTAINERS]) +def test_spectral_embedding_callable_affinity(sparse_container, seed=36): + # Test 
spectral embedding with callable affinity + gamma = 0.9 + kern = rbf_kernel(S, gamma=gamma) + X = S if sparse_container is None else sparse_container(S) + + se_callable = SpectralEmbedding( + n_components=2, + affinity=(lambda x: rbf_kernel(x, gamma=gamma)), + gamma=gamma, + random_state=np.random.RandomState(seed), + ) + se_rbf = SpectralEmbedding( + n_components=2, + affinity="rbf", + gamma=gamma, + random_state=np.random.RandomState(seed), + ) + embed_rbf = se_rbf.fit_transform(X) + embed_callable = se_callable.fit_transform(X) + assert_array_almost_equal(se_callable.affinity_matrix_, se_rbf.affinity_matrix_) + assert_array_almost_equal(kern, se_rbf.affinity_matrix_) + _assert_equal_with_sign_flipping(embed_rbf, embed_callable, 0.05) + + +@pytest.mark.skipif( + not pyamg_available, reason="PyAMG is required for the tests in this function." +) +@pytest.mark.parametrize("dtype", (np.float32, np.float64)) +@pytest.mark.parametrize("coo_container", COO_CONTAINERS) +def test_spectral_embedding_amg_solver(dtype, coo_container, seed=36): + se_amg = SpectralEmbedding( + n_components=2, + affinity="nearest_neighbors", + eigen_solver="amg", + n_neighbors=5, + random_state=np.random.RandomState(seed), + ) + se_arpack = SpectralEmbedding( + n_components=2, + affinity="nearest_neighbors", + eigen_solver="arpack", + n_neighbors=5, + random_state=np.random.RandomState(seed), + ) + embed_amg = se_amg.fit_transform(S.astype(dtype)) + embed_arpack = se_arpack.fit_transform(S.astype(dtype)) + _assert_equal_with_sign_flipping(embed_amg, embed_arpack, 1e-5) + + # same with special case in which amg is not actually used + # regression test for #10715 + # affinity between nodes + row = np.array([0, 0, 1, 2, 3, 3, 4], dtype=np.int32) + col = np.array([1, 2, 2, 3, 4, 5, 5], dtype=np.int32) + val = np.array([100, 100, 100, 1, 100, 100, 100], dtype=np.int64) + + affinity = coo_container( + (np.hstack([val, val]), (np.hstack([row, col]), np.hstack([col, row]))), + shape=(6, 6), + ) + 
+    se_amg.affinity = "precomputed"
+    se_arpack.affinity = "precomputed"
+    embed_amg = se_amg.fit_transform(affinity.astype(dtype))
+    embed_arpack = se_arpack.fit_transform(affinity.astype(dtype))
+    _assert_equal_with_sign_flipping(embed_amg, embed_arpack, 1e-5)
+
+    # Check that passing a sparse matrix with `np.int64` indices dtype raises an error
+    # or is successful based on the version of SciPy which is installed.
+    # Use a CSR matrix to avoid any conversion during the validation
+    affinity = affinity.tocsr()
+    affinity.indptr = affinity.indptr.astype(np.int64)
+    affinity.indices = affinity.indices.astype(np.int64)
+
+    # PR: https://github.com/scipy/scipy/pull/18913
+    # First integration in 1.11.3: https://github.com/scipy/scipy/pull/19279
+    scipy_graph_traversal_supports_int64_index = sp_version >= parse_version("1.11.3")
+    if scipy_graph_traversal_supports_int64_index:
+        se_amg.fit_transform(affinity)
+    else:
+        err_msg = "Only sparse matrices with 32-bit integer indices are accepted"
+        with pytest.raises(ValueError, match=err_msg):
+            se_amg.fit_transform(affinity)
+
+
+@pytest.mark.skipif(
+    not pyamg_available, reason="PyAMG is required for the tests in this function."
+)
+@pytest.mark.parametrize("dtype", (np.float32, np.float64))
+def test_spectral_embedding_amg_solver_failure(dtype, seed=36):
+    # Non-regression test for amg solver failure (issue #13393 on github)
+    num_nodes = 100
+    X = sparse.rand(num_nodes, num_nodes, density=0.1, random_state=seed)
+    X = X.astype(dtype)
+    upper = sparse.triu(X) - sparse.diags(X.diagonal())
+    sym_matrix = upper + upper.T
+    embedding = spectral_embedding(
+        sym_matrix, n_components=10, eigen_solver="amg", random_state=0
+    )
+
+    # Check that the learned embedding is stable w.r.t. random solver init:
+    for i in range(3):
+        new_embedding = spectral_embedding(
+            sym_matrix, n_components=10, eigen_solver="amg", random_state=i + 1
+        )
+        _assert_equal_with_sign_flipping(embedding, new_embedding, tol=0.05)
+
+
+def test_pipeline_spectral_clustering(seed=36):
+    # Test using pipeline to do spectral clustering
+    random_state = np.random.RandomState(seed)
+    se_rbf = SpectralEmbedding(
+        n_components=n_clusters, affinity="rbf", random_state=random_state
+    )
+    se_knn = SpectralEmbedding(
+        n_components=n_clusters,
+        affinity="nearest_neighbors",
+        n_neighbors=5,
+        random_state=random_state,
+    )
+    for se in [se_rbf, se_knn]:
+        km = KMeans(n_clusters=n_clusters, random_state=random_state, n_init=10)
+        km.fit(se.fit_transform(S))
+        assert_array_almost_equal(
+            normalized_mutual_info_score(km.labels_, true_labels), 1.0, 2
+        )
+
+
+def test_connectivity(seed=36):
+    # Test that graph connectivity test works as expected
+    graph = np.array(
+        [
+            [1, 0, 0, 0, 0],
+            [0, 1, 1, 0, 0],
+            [0, 1, 1, 1, 0],
+            [0, 0, 1, 1, 1],
+            [0, 0, 0, 1, 1],
+        ]
+    )
+    assert not _graph_is_connected(graph)
+    for csr_container in CSR_CONTAINERS:
+        assert not _graph_is_connected(csr_container(graph))
+    for csc_container in CSC_CONTAINERS:
+        assert not _graph_is_connected(csc_container(graph))
+
+    graph = np.array(
+        [
+            [1, 1, 0, 0, 0],
+            [1, 1, 1, 0, 0],
+            [0, 1, 1, 1, 0],
+            [0, 0, 1, 1, 1],
+            [0, 0, 0, 1, 1],
+        ]
+    )
+    assert _graph_is_connected(graph)
+    for csr_container in CSR_CONTAINERS:
+        assert _graph_is_connected(csr_container(graph))
+    for csc_container in CSC_CONTAINERS:
+        assert _graph_is_connected(csc_container(graph))
+
+
+def test_spectral_embedding_deterministic():
+    # Test that Spectral Embedding is deterministic
+    random_state = np.random.RandomState(36)
+    data = random_state.randn(10, 30)
+    sims = rbf_kernel(data)
+    embedding_1 = spectral_embedding(sims)
+    embedding_2 = spectral_embedding(sims)
+    assert_array_almost_equal(embedding_1, embedding_2)
+
+
+def test_spectral_embedding_unnormalized():
+    # Test that spectral_embedding is also processing unnormalized laplacian
+    # correctly
+    random_state = np.random.RandomState(36)
+    data = random_state.randn(10, 30)
+    sims = rbf_kernel(data)
+    n_components = 8
+    embedding_1 = spectral_embedding(
+        sims, norm_laplacian=False, n_components=n_components, drop_first=False
+    )
+
+    # Verify using manual computation with dense eigh
+    laplacian, dd = csgraph_laplacian(sims, normed=False, return_diag=True)
+    _, diffusion_map = eigh(laplacian)
+    embedding_2 = diffusion_map.T[:n_components]
+    embedding_2 = _deterministic_vector_sign_flip(embedding_2).T
+
+    assert_array_almost_equal(embedding_1, embedding_2)
+
+
+def test_spectral_embedding_first_eigen_vector():
+    # Test that the first eigenvector of spectral_embedding
+    # is constant and that the second is not (for a connected graph)
+    random_state = np.random.RandomState(36)
+    data = random_state.randn(10, 30)
+    sims = rbf_kernel(data)
+    n_components = 2
+
+    for seed in range(10):
+        embedding = spectral_embedding(
+            sims,
+            norm_laplacian=False,
+            n_components=n_components,
+            drop_first=False,
+            random_state=seed,
+        )
+
+        assert np.std(embedding[:, 0]) == pytest.approx(0)
+        assert np.std(embedding[:, 1]) > 1e-3
+
+
+@pytest.mark.parametrize(
+    "eigen_solver",
+    [
+        "arpack",
+        "lobpcg",
+        pytest.param("amg", marks=skip_if_no_pyamg),
+    ],
+)
+@pytest.mark.parametrize("dtype", [np.float32, np.float64])
+def test_spectral_embedding_preserves_dtype(eigen_solver, dtype):
+    """Check that `SpectralEmbedding` preserves the dtype of the fitted
+    attributes and transformed data.
+
+    Ideally, this test should be covered by the common test
+    `check_transformer_preserve_dtypes`. However, that test only runs
+    with transformers implementing `transform`, while `SpectralEmbedding`
+    implements only `fit_transform`.
+    """
+    X = S.astype(dtype)
+    se = SpectralEmbedding(
+        n_components=2, affinity="rbf", eigen_solver=eigen_solver, random_state=0
+    )
+    X_trans = se.fit_transform(X)
+
+    assert X_trans.dtype == dtype
+    assert se.embedding_.dtype == dtype
+    assert se.affinity_matrix_.dtype == dtype
+
+
+@pytest.mark.skipif(
+    pyamg_available,
+    reason="PyAMG is installed and we should not test for an error.",
+)
+def test_error_pyamg_not_available():
+    se_precomp = SpectralEmbedding(
+        n_components=2,
+        affinity="rbf",
+        eigen_solver="amg",
+    )
+    err_msg = "The eigen_solver was set to 'amg', but pyamg is not available."
+    with pytest.raises(ValueError, match=err_msg):
+        se_precomp.fit_transform(S)
+
+
+@pytest.mark.parametrize("solver", ["arpack", "amg", "lobpcg"])
+@pytest.mark.parametrize("csr_container", CSR_CONTAINERS)
+def test_spectral_eigen_tol_auto(monkeypatch, solver, csr_container):
+    """Test that `eigen_tol="auto"` is resolved correctly"""
+    if solver == "amg" and not pyamg_available:
+        pytest.skip("PyAMG is not available.")
+    X, _ = make_blobs(
+        n_samples=200, random_state=0, centers=[[1, 1], [-1, -1]], cluster_std=0.01
+    )
+    D = pairwise_distances(X)  # Distance matrix
+    S = np.max(D) - D  # Similarity matrix
+
+    solver_func = eigsh if solver == "arpack" else lobpcg
+    default_value = 0 if solver == "arpack" else None
+    if solver == "amg":
+        S = csr_container(S)
+
+    mocked_solver = Mock(side_effect=solver_func)
+
+    monkeypatch.setattr(_spectral_embedding, solver_func.__qualname__, mocked_solver)
+
+    spectral_embedding(S, random_state=42, eigen_solver=solver, eigen_tol="auto")
+    mocked_solver.assert_called()
+
+    _, kwargs = mocked_solver.call_args
+    assert kwargs["tol"] == default_value
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/__init__.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..a72427fbe551968a39576728c44fcbd26bcb184e
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/__init__.cpython-310.pyc differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_base.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_base.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..935406f4c123c3838fb486b2d9ecccf0d2cf9f64
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_base.cpython-310.pyc differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_multilayer_perceptron.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_multilayer_perceptron.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e93779c017bdd71ab46c8888ded0db8efe774620
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_multilayer_perceptron.cpython-310.pyc differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_rbm.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_rbm.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..bd2eaf3850f224c398f55b5bc0e7522e859bab9b
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_rbm.cpython-310.pyc differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_stochastic_optimizers.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_stochastic_optimizers.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..4d54571ea7c09128847bbffdc38c73ea690ab629
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/__pycache__/_stochastic_optimizers.cpython-310.pyc differ
diff --git a/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/tests/__pycache__/test_mlp.cpython-310.pyc b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/tests/__pycache__/test_mlp.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d9ec7c0b59cab07b71100ffdd0f25714b478853d
Binary files /dev/null and b/evalkit_tf437/lib/python3.10/site-packages/sklearn/neural_network/tests/__pycache__/test_mlp.cpython-310.pyc differ
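The `test_spectral_embedding_first_eigen_vector` test above relies on a standard property of the unnormalized graph Laplacian `L = D - W`: for a connected graph, the constant vector spans its null space, which is why the first embedding component (with `drop_first=False`) has zero variance. A minimal NumPy sketch of that property, using a made-up 3-node graph for illustration (not taken from the diff):

```python
import numpy as np

# Similarity (adjacency) matrix of a small, fully connected 3-node graph.
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# Unnormalized graph Laplacian: L = D - W, where D is the degree matrix.
L = np.diag(W.sum(axis=1)) - W

# Each row of L sums to zero, so the constant vector lies in its null space.
print(np.allclose(L @ np.ones(3), 0.0))  # -> True
```

Disconnected graphs add one constant-on-each-component null vector per component, which is the same fact `_graph_is_connected` exploits when it inspects the graph structure.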