ZTWHHH committed · verified
Commit 876cf05 · 1 parent: 500fd90

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.

Files changed (50):
  1. .gitattributes +1 -0
  2. llava/lib/python3.10/site-packages/pip/_vendor/distlib/compat.py +1137 -0
  3. llava/lib/python3.10/site-packages/pip/_vendor/distlib/database.py +1329 -0
  4. llava/lib/python3.10/site-packages/pip/_vendor/distlib/locators.py +1295 -0
  5. llava/lib/python3.10/site-packages/pip/_vendor/distlib/w32.exe +0 -0
  6. llava/lib/python3.10/site-packages/pip/_vendor/idna/__pycache__/idnadata.cpython-310.pyc +3 -0
  7. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/__init__.cpython-310.pyc +0 -0
  8. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/macos.cpython-310.pyc +0 -0
  9. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/unix.cpython-310.pyc +0 -0
  10. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/version.cpython-310.pyc +0 -0
  11. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/windows.cpython-310.pyc +0 -0
  12. llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/android.py +249 -0
  13. llava/lib/python3.10/site-packages/pip/_vendor/requests/__init__.py +179 -0
  14. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/__version__.cpython-310.pyc +0 -0
  15. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/_internal_utils.cpython-310.pyc +0 -0
  16. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/adapters.cpython-310.pyc +0 -0
  17. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/api.cpython-310.pyc +0 -0
  18. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/auth.cpython-310.pyc +0 -0
  19. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/certs.cpython-310.pyc +0 -0
  20. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/cookies.cpython-310.pyc +0 -0
  21. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/exceptions.cpython-310.pyc +0 -0
  22. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/help.cpython-310.pyc +0 -0
  23. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/models.cpython-310.pyc +0 -0
  24. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/packages.cpython-310.pyc +0 -0
  25. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/sessions.cpython-310.pyc +0 -0
  26. llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/status_codes.cpython-310.pyc +0 -0
  27. llava/lib/python3.10/site-packages/pip/_vendor/requests/_internal_utils.py +50 -0
  28. llava/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py +719 -0
  29. llava/lib/python3.10/site-packages/pip/_vendor/requests/auth.py +314 -0
  30. llava/lib/python3.10/site-packages/pip/_vendor/requests/compat.py +78 -0
  31. llava/lib/python3.10/site-packages/pip/_vendor/requests/help.py +127 -0
  32. llava/lib/python3.10/site-packages/pip/_vendor/requests/hooks.py +33 -0
  33. minigpt2/lib/python3.10/site-packages/Crypto/Hash/MD5.pyi +19 -0
  34. minigpt2/lib/python3.10/site-packages/Crypto/Hash/RIPEMD160.py +169 -0
  35. minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA.py +24 -0
  36. minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA256.pyi +18 -0
  37. minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA3_224.pyi +19 -0
  38. minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHAKE256.py +130 -0
  39. minigpt2/lib/python3.10/site-packages/Crypto/Hash/TurboSHAKE128.pyi +17 -0
  40. minigpt2/lib/python3.10/site-packages/Crypto/Hash/_BLAKE2b.abi3.so +0 -0
  41. minigpt2/lib/python3.10/site-packages/Crypto/Hash/_MD5.abi3.so +0 -0
  42. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/BLAKE2b.cpython-310.pyc +0 -0
  43. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/BLAKE2s.cpython-310.pyc +0 -0
  44. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/CMAC.cpython-310.pyc +0 -0
  45. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/HMAC.cpython-310.pyc +0 -0
  46. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KMAC128.cpython-310.pyc +0 -0
  47. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KMAC256.cpython-310.pyc +0 -0
  48. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KangarooTwelve.cpython-310.pyc +0 -0
  49. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/MD2.cpython-310.pyc +0 -0
  50. minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/MD4.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -1345,3 +1345,4 @@ xverse/lib/python3.10/site-packages/clang/native/libclang.so filter=lfs diff=lfs
 minigpt2/lib/python3.10/site-packages/ray/data/__pycache__/dataset.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
 minigpt2/lib/python3.10/site-packages/ray/data/__pycache__/read_api.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
 llava/lib/python3.10/site-packages/pip/_vendor/idna/__pycache__/uts46data.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+llava/lib/python3.10/site-packages/pip/_vendor/idna/__pycache__/idnadata.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
llava/lib/python3.10/site-packages/pip/_vendor/distlib/compat.py ADDED
@@ -0,0 +1,1137 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2013-2017 Vinay Sajip.
+# Licensed to the Python Software Foundation under a contributor agreement.
+# See LICENSE.txt and CONTRIBUTORS.txt.
+#
+from __future__ import absolute_import
+
+import os
+import re
+import shutil
+import sys
+
+try:
+    import ssl
+except ImportError:  # pragma: no cover
+    ssl = None
+
+if sys.version_info[0] < 3:  # pragma: no cover
+    from StringIO import StringIO
+    string_types = basestring,
+    text_type = unicode
+    from types import FileType as file_type
+    import __builtin__ as builtins
+    import ConfigParser as configparser
+    from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit
+    from urllib import (urlretrieve, quote as _quote, unquote, url2pathname,
+                        pathname2url, ContentTooShortError, splittype)
+
+    def quote(s):
+        if isinstance(s, unicode):
+            s = s.encode('utf-8')
+        return _quote(s)
+
+    import urllib2
+    from urllib2 import (Request, urlopen, URLError, HTTPError,
+                         HTTPBasicAuthHandler, HTTPPasswordMgr, HTTPHandler,
+                         HTTPRedirectHandler, build_opener)
+    if ssl:
+        from urllib2 import HTTPSHandler
+    import httplib
+    import xmlrpclib
+    import Queue as queue
+    from HTMLParser import HTMLParser
+    import htmlentitydefs
+    raw_input = raw_input
+    from itertools import ifilter as filter
+    from itertools import ifilterfalse as filterfalse
+
+    # Leaving this around for now, in case it needs resurrecting in some way
+    # _userprog = None
+    # def splituser(host):
+    #     """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'."""
+    #     global _userprog
+    #     if _userprog is None:
+    #         import re
+    #         _userprog = re.compile('^(.*)@(.*)$')
+
+    #     match = _userprog.match(host)
+    #     if match: return match.group(1, 2)
+    #     return None, host
+
+else:  # pragma: no cover
+    from io import StringIO
+    string_types = str,
+    text_type = str
+    from io import TextIOWrapper as file_type
+    import builtins
+    import configparser
+    from urllib.parse import (urlparse, urlunparse, urljoin, quote, unquote,
+                              urlsplit, urlunsplit, splittype)
+    from urllib.request import (urlopen, urlretrieve, Request, url2pathname,
+                                pathname2url, HTTPBasicAuthHandler,
+                                HTTPPasswordMgr, HTTPHandler,
+                                HTTPRedirectHandler, build_opener)
+    if ssl:
+        from urllib.request import HTTPSHandler
+    from urllib.error import HTTPError, URLError, ContentTooShortError
+    import http.client as httplib
+    import urllib.request as urllib2
+    import xmlrpc.client as xmlrpclib
+    import queue
+    from html.parser import HTMLParser
+    import html.entities as htmlentitydefs
+    raw_input = input
+    from itertools import filterfalse
+    filter = filter
+
+try:
+    from ssl import match_hostname, CertificateError
+except ImportError:  # pragma: no cover
+
+    class CertificateError(ValueError):
+        pass
+
+    def _dnsname_match(dn, hostname, max_wildcards=1):
+        """Matching according to RFC 6125, section 6.4.3
+
+        http://tools.ietf.org/html/rfc6125#section-6.4.3
+        """
+        pats = []
+        if not dn:
+            return False
+
+        parts = dn.split('.')
+        leftmost, remainder = parts[0], parts[1:]
+
+        wildcards = leftmost.count('*')
+        if wildcards > max_wildcards:
+            # Issue #17980: avoid denials of service by refusing more
+            # than one wildcard per fragment.  A survey of established
+            # policy among SSL implementations showed it to be a
+            # reasonable choice.
+            raise CertificateError(
+                "too many wildcards in certificate DNS name: " + repr(dn))
+
+        # speed up common case w/o wildcards
+        if not wildcards:
+            return dn.lower() == hostname.lower()
+
+        # RFC 6125, section 6.4.3, subitem 1.
+        # The client SHOULD NOT attempt to match a presented identifier in which
+        # the wildcard character comprises a label other than the left-most label.
+        if leftmost == '*':
+            # When '*' is a fragment by itself, it matches a non-empty dotless
+            # fragment.
+            pats.append('[^.]+')
+        elif leftmost.startswith('xn--') or hostname.startswith('xn--'):
+            # RFC 6125, section 6.4.3, subitem 3.
+            # The client SHOULD NOT attempt to match a presented identifier
+            # where the wildcard character is embedded within an A-label or
+            # U-label of an internationalized domain name.
+            pats.append(re.escape(leftmost))
+        else:
+            # Otherwise, '*' matches any dotless string, e.g. www*
+            pats.append(re.escape(leftmost).replace(r'\*', '[^.]*'))
+
+        # add the remaining fragments, ignore any wildcards
+        for frag in remainder:
+            pats.append(re.escape(frag))
+
+        pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE)
+        return pat.match(hostname)
+
+    def match_hostname(cert, hostname):
+        """Verify that *cert* (in decoded format as returned by
+        SSLSocket.getpeercert()) matches the *hostname*.  RFC 2818 and RFC 6125
+        rules are followed, but IP addresses are not accepted for *hostname*.
+
+        CertificateError is raised on failure. On success, the function
+        returns nothing.
+        """
+        if not cert:
+            raise ValueError("empty or no certificate, match_hostname needs a "
+                             "SSL socket or SSL context with either "
+                             "CERT_OPTIONAL or CERT_REQUIRED")
+        dnsnames = []
+        san = cert.get('subjectAltName', ())
+        for key, value in san:
+            if key == 'DNS':
+                if _dnsname_match(value, hostname):
+                    return
+                dnsnames.append(value)
+        if not dnsnames:
+            # The subject is only checked when there is no dNSName entry
+            # in subjectAltName
+            for sub in cert.get('subject', ()):
+                for key, value in sub:
+                    # XXX according to RFC 2818, the most specific Common Name
+                    # must be used.
+                    if key == 'commonName':
+                        if _dnsname_match(value, hostname):
+                            return
+                        dnsnames.append(value)
+        if len(dnsnames) > 1:
+            raise CertificateError("hostname %r "
+                                   "doesn't match either of %s" %
+                                   (hostname, ', '.join(map(repr, dnsnames))))
+        elif len(dnsnames) == 1:
+            raise CertificateError("hostname %r "
+                                   "doesn't match %r" %
+                                   (hostname, dnsnames[0]))
+        else:
+            raise CertificateError("no appropriate commonName or "
+                                   "subjectAltName fields were found")
+
+
+try:
+    from types import SimpleNamespace as Container
+except ImportError:  # pragma: no cover
+
+    class Container(object):
+        """
+        A generic container for when multiple values need to be returned
+        """
+
+        def __init__(self, **kwargs):
+            self.__dict__.update(kwargs)
+
+
+try:
+    from shutil import which
+except ImportError:  # pragma: no cover
+    # Implementation from Python 3.3
+    def which(cmd, mode=os.F_OK | os.X_OK, path=None):
+        """Given a command, mode, and a PATH string, return the path which
+        conforms to the given mode on the PATH, or None if there is no such
+        file.
+
+        `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
+        of os.environ.get("PATH"), or can be overridden with a custom search
+        path.
+
+        """
+
+        # Check that a given file can be accessed with the correct mode.
+        # Additionally check that `file` is not a directory, as on Windows
+        # directories pass the os.access check.
+        def _access_check(fn, mode):
+            return (os.path.exists(fn) and os.access(fn, mode) and not os.path.isdir(fn))
+
+        # If we're given a path with a directory part, look it up directly rather
+        # than referring to PATH directories. This includes checking relative to the
+        # current directory, e.g. ./script
+        if os.path.dirname(cmd):
+            if _access_check(cmd, mode):
+                return cmd
+            return None
+
+        if path is None:
+            path = os.environ.get("PATH", os.defpath)
+        if not path:
+            return None
+        path = path.split(os.pathsep)
+
+        if sys.platform == "win32":
+            # The current directory takes precedence on Windows.
+            if os.curdir not in path:
+                path.insert(0, os.curdir)
+
+            # PATHEXT is necessary to check on Windows.
+            pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
+            # See if the given file matches any of the expected path extensions.
+            # This will allow us to short circuit when given "python.exe".
+            # If it does match, only test that one, otherwise we have to try
+            # others.
+            if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
+                files = [cmd]
+            else:
+                files = [cmd + ext for ext in pathext]
+        else:
+            # On other platforms you don't have things like PATHEXT to tell you
+            # what file suffixes are executable, so just pass on cmd as-is.
+            files = [cmd]
+
+        seen = set()
+        for dir in path:
+            normdir = os.path.normcase(dir)
+            if normdir not in seen:
+                seen.add(normdir)
+                for thefile in files:
+                    name = os.path.join(dir, thefile)
+                    if _access_check(name, mode):
+                        return name
+        return None
+
+
+# ZipFile is a context manager in 2.7, but not in 2.6
+
+from zipfile import ZipFile as BaseZipFile
+
+if hasattr(BaseZipFile, '__enter__'):  # pragma: no cover
+    ZipFile = BaseZipFile
+else:  # pragma: no cover
+    from zipfile import ZipExtFile as BaseZipExtFile
+
+    class ZipExtFile(BaseZipExtFile):
+
+        def __init__(self, base):
+            self.__dict__.update(base.__dict__)
+
+        def __enter__(self):
+            return self
+
+        def __exit__(self, *exc_info):
+            self.close()
+            # return None, so if an exception occurred, it will propagate
+
+    class ZipFile(BaseZipFile):
+
+        def __enter__(self):
+            return self
+
+        def __exit__(self, *exc_info):
+            self.close()
+            # return None, so if an exception occurred, it will propagate
+
+        def open(self, *args, **kwargs):
+            base = BaseZipFile.open(self, *args, **kwargs)
+            return ZipExtFile(base)
+
+
+try:
+    from platform import python_implementation
+except ImportError:  # pragma: no cover
+
+    def python_implementation():
+        """Return a string identifying the Python implementation."""
+        if 'PyPy' in sys.version:
+            return 'PyPy'
+        if os.name == 'java':
+            return 'Jython'
+        if sys.version.startswith('IronPython'):
+            return 'IronPython'
+        return 'CPython'
+
+
+import sysconfig
+
+try:
+    callable = callable
+except NameError:  # pragma: no cover
+    from collections.abc import Callable
+
+    def callable(obj):
+        return isinstance(obj, Callable)
+
+
+try:
+    fsencode = os.fsencode
+    fsdecode = os.fsdecode
+except AttributeError:  # pragma: no cover
+    # Issue #99: on some systems (e.g. containerised),
+    # sys.getfilesystemencoding() returns None, and we need a real value,
+    # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and
+    # sys.getfilesystemencoding(): the return value is "the user’s preference
+    # according to the result of nl_langinfo(CODESET), or None if the
+    # nl_langinfo(CODESET) failed."
+    _fsencoding = sys.getfilesystemencoding() or 'utf-8'
+    if _fsencoding == 'mbcs':
+        _fserrors = 'strict'
+    else:
+        _fserrors = 'surrogateescape'
+
+    def fsencode(filename):
+        if isinstance(filename, bytes):
+            return filename
+        elif isinstance(filename, text_type):
+            return filename.encode(_fsencoding, _fserrors)
+        else:
+            raise TypeError("expect bytes or str, not %s" %
+                            type(filename).__name__)
+
+    def fsdecode(filename):
+        if isinstance(filename, text_type):
+            return filename
+        elif isinstance(filename, bytes):
+            return filename.decode(_fsencoding, _fserrors)
+        else:
+            raise TypeError("expect bytes or str, not %s" %
+                            type(filename).__name__)
+
+
+try:
+    from tokenize import detect_encoding
+except ImportError:  # pragma: no cover
+    from codecs import BOM_UTF8, lookup
+
+    cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)")
+
+    def _get_normal_name(orig_enc):
+        """Imitates get_normal_name in tokenizer.c."""
+        # Only care about the first 12 characters.
+        enc = orig_enc[:12].lower().replace("_", "-")
+        if enc == "utf-8" or enc.startswith("utf-8-"):
+            return "utf-8"
+        if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
+           enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
+            return "iso-8859-1"
+        return orig_enc
+
+    def detect_encoding(readline):
+        """
+        The detect_encoding() function is used to detect the encoding that should
+        be used to decode a Python source file.  It requires one argument, readline,
+        in the same way as the tokenize() generator.
+
+        It will call readline a maximum of twice, and return the encoding used
+        (as a string) and a list of any lines (left as bytes) it has read in.
+
+        It detects the encoding from the presence of a utf-8 bom or an encoding
+        cookie as specified in pep-0263.  If both a bom and a cookie are present,
+        but disagree, a SyntaxError will be raised.  If the encoding cookie is an
+        invalid charset, raise a SyntaxError.  Note that if a utf-8 bom is found,
+        'utf-8-sig' is returned.
+
+        If no encoding is specified, then the default of 'utf-8' will be returned.
+        """
+        try:
+            filename = readline.__self__.name
+        except AttributeError:
+            filename = None
+        bom_found = False
+        encoding = None
+        default = 'utf-8'
+
+        def read_or_stop():
+            try:
+                return readline()
+            except StopIteration:
+                return b''
+
+        def find_cookie(line):
+            try:
+                # Decode as UTF-8. Either the line is an encoding declaration,
+                # in which case it should be pure ASCII, or it must be UTF-8
+                # per default encoding.
+                line_string = line.decode('utf-8')
+            except UnicodeDecodeError:
+                msg = "invalid or missing encoding declaration"
+                if filename is not None:
+                    msg = '{} for {!r}'.format(msg, filename)
+                raise SyntaxError(msg)
+
+            matches = cookie_re.findall(line_string)
+            if not matches:
+                return None
+            encoding = _get_normal_name(matches[0])
+            try:
+                codec = lookup(encoding)
+            except LookupError:
+                # This behaviour mimics the Python interpreter
+                if filename is None:
+                    msg = "unknown encoding: " + encoding
+                else:
+                    msg = "unknown encoding for {!r}: {}".format(
+                        filename, encoding)
+                raise SyntaxError(msg)
+
+            if bom_found:
+                if codec.name != 'utf-8':
+                    # This behaviour mimics the Python interpreter
+                    if filename is None:
+                        msg = 'encoding problem: utf-8'
+                    else:
+                        msg = 'encoding problem for {!r}: utf-8'.format(
+                            filename)
+                    raise SyntaxError(msg)
+                encoding += '-sig'
+            return encoding
+
+        first = read_or_stop()
+        if first.startswith(BOM_UTF8):
+            bom_found = True
+            first = first[3:]
+            default = 'utf-8-sig'
+        if not first:
+            return default, []
+
+        encoding = find_cookie(first)
+        if encoding:
+            return encoding, [first]
+
+        second = read_or_stop()
+        if not second:
+            return default, [first]
+
+        encoding = find_cookie(second)
+        if encoding:
+            return encoding, [first, second]
+
+        return default, [first, second]
+
+
+# For converting & <-> &amp; etc.
+try:
+    from html import escape
+except ImportError:
+    from cgi import escape
+if sys.version_info[:2] < (3, 4):
+    unescape = HTMLParser().unescape
+else:
+    from html import unescape
+
+try:
+    from collections import ChainMap
+except ImportError:  # pragma: no cover
+    from collections import MutableMapping
+
+    try:
+        from reprlib import recursive_repr as _recursive_repr
+    except ImportError:
+
+        def _recursive_repr(fillvalue='...'):
+            '''
+            Decorator to make a repr function return fillvalue for a recursive
+            call
+            '''
+
+            def decorating_function(user_function):
+                repr_running = set()
+
+                def wrapper(self):
+                    key = id(self), get_ident()
+                    if key in repr_running:
+                        return fillvalue
+                    repr_running.add(key)
+                    try:
+                        result = user_function(self)
+                    finally:
+                        repr_running.discard(key)
+                    return result
+
+                # Can't use functools.wraps() here because of bootstrap issues
+                wrapper.__module__ = getattr(user_function, '__module__')
+                wrapper.__doc__ = getattr(user_function, '__doc__')
+                wrapper.__name__ = getattr(user_function, '__name__')
+                wrapper.__annotations__ = getattr(user_function,
+                                                  '__annotations__', {})
+                return wrapper
+
+            return decorating_function
+
+    class ChainMap(MutableMapping):
+        '''
+        A ChainMap groups multiple dicts (or other mappings) together
+        to create a single, updateable view.
+
+        The underlying mappings are stored in a list.  That list is public and can
+        accessed or updated using the *maps* attribute.  There is no other state.
+
+        Lookups search the underlying mappings successively until a key is found.
+        In contrast, writes, updates, and deletions only operate on the first
+        mapping.
+        '''
+
+        def __init__(self, *maps):
+            '''Initialize a ChainMap by setting *maps* to the given mappings.
+            If no mappings are provided, a single empty dictionary is used.
+
+            '''
+            self.maps = list(maps) or [{}]  # always at least one map
+
+        def __missing__(self, key):
+            raise KeyError(key)
+
+        def __getitem__(self, key):
+            for mapping in self.maps:
+                try:
+                    return mapping[key]  # can't use 'key in mapping' with defaultdict
+                except KeyError:
+                    pass
+            return self.__missing__(key)  # support subclasses that define __missing__
+
+        def get(self, key, default=None):
+            return self[key] if key in self else default
+
+        def __len__(self):
+            return len(set().union(*self.maps))  # reuses stored hash values if possible
+
+        def __iter__(self):
+            return iter(set().union(*self.maps))
+
+        def __contains__(self, key):
+            return any(key in m for m in self.maps)
+
+        def __bool__(self):
+            return any(self.maps)
+
+        @_recursive_repr()
+        def __repr__(self):
+            return '{0.__class__.__name__}({1})'.format(
+                self, ', '.join(map(repr, self.maps)))
+
+        @classmethod
+        def fromkeys(cls, iterable, *args):
+            'Create a ChainMap with a single dict created from the iterable.'
+            return cls(dict.fromkeys(iterable, *args))
+
+        def copy(self):
+            'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]'
+            return self.__class__(self.maps[0].copy(), *self.maps[1:])
+
+        __copy__ = copy
+
+        def new_child(self):  # like Django's Context.push()
+            'New ChainMap with a new dict followed by all previous maps.'
+            return self.__class__({}, *self.maps)
+
+        @property
+        def parents(self):  # like Django's Context.pop()
+            'New ChainMap from maps[1:].'
+            return self.__class__(*self.maps[1:])
+
+        def __setitem__(self, key, value):
+            self.maps[0][key] = value
+
+        def __delitem__(self, key):
+            try:
+                del self.maps[0][key]
+            except KeyError:
+                raise KeyError(
+                    'Key not found in the first mapping: {!r}'.format(key))
+
+        def popitem(self):
+            'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.'
+            try:
+                return self.maps[0].popitem()
+            except KeyError:
+                raise KeyError('No keys found in the first mapping.')
+
+        def pop(self, key, *args):
+            'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].'
+            try:
+                return self.maps[0].pop(key, *args)
+            except KeyError:
+                raise KeyError(
+                    'Key not found in the first mapping: {!r}'.format(key))
+
+        def clear(self):
+            'Clear maps[0], leaving maps[1:] intact.'
+            self.maps[0].clear()
+
+
+try:
+    from importlib.util import cache_from_source  # Python >= 3.4
+except ImportError:  # pragma: no cover
+
+    def cache_from_source(path, debug_override=None):
+        assert path.endswith('.py')
+        if debug_override is None:
+            debug_override = __debug__
+        if debug_override:
+            suffix = 'c'
+        else:
+            suffix = 'o'
+        return path + suffix
+
+
+try:
+    from collections import OrderedDict
+except ImportError:  # pragma: no cover
+    # {{{ http://code.activestate.com/recipes/576693/ (r9)
+    # Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy.
+    # Passes Python2.7's test suite and incorporates all the latest updates.
+    try:
+        from thread import get_ident as _get_ident
+    except ImportError:
+        from dummy_thread import get_ident as _get_ident
+
+    try:
+        from _abcoll import KeysView, ValuesView, ItemsView
+    except ImportError:
+        pass
+
+    class OrderedDict(dict):
+        'Dictionary that remembers insertion order'
+
+        # An inherited dict maps keys to values.
+        # The inherited dict provides __getitem__, __len__, __contains__, and get.
+        # The remaining methods are order-aware.
+        # Big-O running times for all methods are the same as for regular dictionaries.
+
+        # The internal self.__map dictionary maps keys to links in a doubly linked list.
+        # The circular doubly linked list starts and ends with a sentinel element.
+        # The sentinel element never gets deleted (this simplifies the algorithm).
+        # Each link is stored as a list of length three:  [PREV, NEXT, KEY].
+
+        def __init__(self, *args, **kwds):
+            '''Initialize an ordered dictionary.  Signature is the same as for
+            regular dictionaries, but keyword arguments are not recommended
+            because their insertion order is arbitrary.
+
+            '''
+            if len(args) > 1:
+                raise TypeError('expected at most 1 arguments, got %d' %
+                                len(args))
+            try:
+                self.__root
+            except AttributeError:
+                self.__root = root = []  # sentinel node
+                root[:] = [root, root, None]
+                self.__map = {}
+            self.__update(*args, **kwds)
+
+        def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
+            'od.__setitem__(i, y) <==> od[i]=y'
+            # Setting a new item creates a new link which goes at the end of the linked
+            # list, and the inherited dictionary is updated with the new key/value pair.
+            if key not in self:
+                root = self.__root
+                last = root[0]
+                last[1] = root[0] = self.__map[key] = [last, root, key]
+            dict_setitem(self, key, value)
+
+        def __delitem__(self, key, dict_delitem=dict.__delitem__):
+            'od.__delitem__(y) <==> del od[y]'
+            # Deleting an existing item uses self.__map to find the link which is
+            # then removed by updating the links in the predecessor and successor nodes.
+            dict_delitem(self, key)
+            link_prev, link_next, key = self.__map.pop(key)
+            link_prev[1] = link_next
+            link_next[0] = link_prev
+
+        def __iter__(self):
+            'od.__iter__() <==> iter(od)'
+            root = self.__root
+            curr = root[1]
+            while curr is not root:
+                yield curr[2]
+                curr = curr[1]
+
+        def __reversed__(self):
+            'od.__reversed__() <==> reversed(od)'
+            root = self.__root
+            curr = root[0]
+            while curr is not root:
+                yield curr[2]
+                curr = curr[0]
+
+        def clear(self):
+            'od.clear() -> None.  Remove all items from od.'
+            try:
+                for node in self.__map.itervalues():
+                    del node[:]
+                root = self.__root
+                root[:] = [root, root, None]
+                self.__map.clear()
+            except AttributeError:
+                pass
+            dict.clear(self)
+
+        def popitem(self, last=True):
+            '''od.popitem() -> (k, v), return and remove a (key, value) pair.
+            Pairs are returned in LIFO order if last is true or FIFO order if false.
+
+            '''
+            if not self:
+                raise KeyError('dictionary is empty')
+            root = self.__root
+            if last:
+                link = root[0]
+                link_prev = link[0]
+                link_prev[1] = root
+                root[0] = link_prev
+            else:
+                link = root[1]
+                link_next = link[1]
+                root[1] = link_next
+                link_next[0] = root
+            key = link[2]
+            del self.__map[key]
+            value = dict.pop(self, key)
+            return key, value
+
+        # -- the following methods do not depend on the internal structure --
+
+        def keys(self):
+            'od.keys() -> list of keys in od'
+            return list(self)
+
+        def values(self):
+            'od.values() -> list of values in od'
+            return [self[key] for key in self]
+
+        def items(self):
+            'od.items() -> list of (key, value) pairs in od'
+            return [(key, self[key]) for key in self]
+
+        def iterkeys(self):
+            'od.iterkeys() -> an iterator over the keys in od'
+            return iter(self)
+
+        def itervalues(self):
+            'od.itervalues -> an iterator over the values in od'
+            for k in self:
+                yield self[k]
+
+        def iteritems(self):
+            'od.iteritems -> an iterator over the (key, value) items in od'
+            for k in self:
+                yield (k, self[k])
+
+        def update(*args, **kwds):
+            '''od.update(E, **F) -> None.  Update od from dict/iterable E and F.
+
+            If E is a dict instance, does:           for k in E:  od[k] = E[k]
791
+ If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
792
+ Or if E is an iterable of items, does: for k, v in E: od[k] = v
793
+ In either case, this is followed by: for k, v in F.items(): od[k] = v
794
+
795
+ '''
796
+ if len(args) > 2:
797
+ raise TypeError('update() takes at most 2 positional '
798
+ 'arguments (%d given)' % (len(args), ))
799
+ elif not args:
800
+ raise TypeError('update() takes at least 1 argument (0 given)')
801
+ self = args[0]
802
+ # Make progressively weaker assumptions about "other"
803
+ other = ()
804
+ if len(args) == 2:
805
+ other = args[1]
806
+ if isinstance(other, dict):
807
+ for key in other:
808
+ self[key] = other[key]
809
+ elif hasattr(other, 'keys'):
810
+ for key in other.keys():
811
+ self[key] = other[key]
812
+ else:
813
+ for key, value in other:
814
+ self[key] = value
815
+ for key, value in kwds.items():
816
+ self[key] = value
817
+
818
+ __update = update # let subclasses override update without breaking __init__
819
+
820
+ __marker = object()
821
+
822
+ def pop(self, key, default=__marker):
823
+ '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value.
824
+ If key is not found, d is returned if given, otherwise KeyError is raised.
825
+
826
+ '''
827
+ if key in self:
828
+ result = self[key]
829
+ del self[key]
830
+ return result
831
+ if default is self.__marker:
832
+ raise KeyError(key)
833
+ return default
834
+
835
+ def setdefault(self, key, default=None):
836
+ 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
837
+ if key in self:
838
+ return self[key]
839
+ self[key] = default
840
+ return default
841
+
842
+ def __repr__(self, _repr_running=None):
843
+ 'od.__repr__() <==> repr(od)'
844
+ if not _repr_running:
845
+ _repr_running = {}
846
+ call_key = id(self), _get_ident()
847
+ if call_key in _repr_running:
848
+ return '...'
849
+ _repr_running[call_key] = 1
850
+ try:
851
+ if not self:
852
+ return '%s()' % (self.__class__.__name__, )
853
+ return '%s(%r)' % (self.__class__.__name__, self.items())
854
+ finally:
855
+ del _repr_running[call_key]
856
+
857
+ def __reduce__(self):
858
+ 'Return state information for pickling'
859
+ items = [[k, self[k]] for k in self]
860
+ inst_dict = vars(self).copy()
861
+ for k in vars(OrderedDict()):
862
+ inst_dict.pop(k, None)
863
+ if inst_dict:
864
+ return (self.__class__, (items, ), inst_dict)
865
+ return self.__class__, (items, )
866
+
867
+ def copy(self):
868
+ 'od.copy() -> a shallow copy of od'
869
+ return self.__class__(self)
870
+
871
+ @classmethod
872
+ def fromkeys(cls, iterable, value=None):
873
+ '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S
874
+ and values equal to v (which defaults to None).
875
+
876
+ '''
877
+ d = cls()
878
+ for key in iterable:
879
+ d[key] = value
880
+ return d
881
+
882
+ def __eq__(self, other):
883
+ '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
884
+ while comparison to a regular mapping is order-insensitive.
885
+
886
+ '''
887
+ if isinstance(other, OrderedDict):
888
+ return len(self) == len(
889
+ other) and self.items() == other.items()
890
+ return dict.__eq__(self, other)
891
+
892
+ def __ne__(self, other):
893
+ return not self == other
894
+
895
+ # -- the following methods are only used in Python 2.7 --
896
+
897
+ def viewkeys(self):
898
+ "od.viewkeys() -> a set-like object providing a view on od's keys"
899
+ return KeysView(self)
900
+
901
+ def viewvalues(self):
902
+ "od.viewvalues() -> an object providing a view on od's values"
903
+ return ValuesView(self)
904
+
905
+ def viewitems(self):
906
+ "od.viewitems() -> a set-like object providing a view on od's items"
907
+ return ItemsView(self)
908
+
909
+
910
+ try:
911
+ from logging.config import BaseConfigurator, valid_ident
912
+ except ImportError: # pragma: no cover
913
+ IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I)
914
+
915
+ def valid_ident(s):
916
+ m = IDENTIFIER.match(s)
917
+ if not m:
918
+ raise ValueError('Not a valid Python identifier: %r' % s)
919
+ return True
920
+
921
+ # The ConvertingXXX classes are wrappers around standard Python containers,
922
+ # and they serve to convert any suitable values in the container. The
923
+ # conversion converts base dicts, lists and tuples to their wrapped
924
+ # equivalents, whereas strings which match a conversion format are converted
925
+ # appropriately.
926
+ #
927
+ # Each wrapper should have a configurator attribute holding the actual
928
+ # configurator to use for conversion.
929
+
930
+ class ConvertingDict(dict):
931
+ """A converting dictionary wrapper."""
932
+
933
+ def __getitem__(self, key):
934
+ value = dict.__getitem__(self, key)
935
+ result = self.configurator.convert(value)
936
+ # If the converted value is different, save for next time
937
+ if value is not result:
938
+ self[key] = result
939
+ if type(result) in (ConvertingDict, ConvertingList,
940
+ ConvertingTuple):
941
+ result.parent = self
942
+ result.key = key
943
+ return result
944
+
945
+ def get(self, key, default=None):
946
+ value = dict.get(self, key, default)
947
+ result = self.configurator.convert(value)
948
+ # If the converted value is different, save for next time
949
+ if value is not result:
950
+ self[key] = result
951
+ if type(result) in (ConvertingDict, ConvertingList,
952
+ ConvertingTuple):
953
+ result.parent = self
954
+ result.key = key
955
+ return result
956
+
957
+ def pop(self, key, default=None):
958
+ value = dict.pop(self, key, default)
959
+ result = self.configurator.convert(value)
960
+ if value is not result:
961
+ if type(result) in (ConvertingDict, ConvertingList,
962
+ ConvertingTuple):
963
+ result.parent = self
964
+ result.key = key
965
+ return result
966
+
967
+ class ConvertingList(list):
968
+ """A converting list wrapper."""
969
+
970
+ def __getitem__(self, key):
971
+ value = list.__getitem__(self, key)
972
+ result = self.configurator.convert(value)
973
+ # If the converted value is different, save for next time
974
+ if value is not result:
975
+ self[key] = result
976
+ if type(result) in (ConvertingDict, ConvertingList,
977
+ ConvertingTuple):
978
+ result.parent = self
979
+ result.key = key
980
+ return result
981
+
982
+ def pop(self, idx=-1):
983
+ value = list.pop(self, idx)
984
+ result = self.configurator.convert(value)
985
+ if value is not result:
986
+ if type(result) in (ConvertingDict, ConvertingList,
987
+ ConvertingTuple):
988
+ result.parent = self
989
+ return result
990
+
991
+ class ConvertingTuple(tuple):
992
+ """A converting tuple wrapper."""
993
+
994
+ def __getitem__(self, key):
995
+ value = tuple.__getitem__(self, key)
996
+ result = self.configurator.convert(value)
997
+ if value is not result:
998
+ if type(result) in (ConvertingDict, ConvertingList,
999
+ ConvertingTuple):
1000
+ result.parent = self
1001
+ result.key = key
1002
+ return result
1003
+
1004
+ class BaseConfigurator(object):
1005
+ """
1006
+ The configurator base class which defines some useful defaults.
1007
+ """
1008
+
1009
+ CONVERT_PATTERN = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')
1010
+
1011
+ WORD_PATTERN = re.compile(r'^\s*(\w+)\s*')
1012
+ DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*')
1013
+ INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*')
1014
+ DIGIT_PATTERN = re.compile(r'^\d+$')
1015
+
1016
+ value_converters = {
1017
+ 'ext': 'ext_convert',
1018
+ 'cfg': 'cfg_convert',
1019
+ }
1020
+
1021
+ # We might want to use a different one, e.g. importlib
1022
+ importer = staticmethod(__import__)
1023
+
1024
+ def __init__(self, config):
1025
+ self.config = ConvertingDict(config)
1026
+ self.config.configurator = self
1027
+
1028
+ def resolve(self, s):
1029
+ """
1030
+ Resolve strings to objects using standard import and attribute
1031
+ syntax.
1032
+ """
1033
+ name = s.split('.')
1034
+ used = name.pop(0)
1035
+ try:
1036
+ found = self.importer(used)
1037
+ for frag in name:
1038
+ used += '.' + frag
1039
+ try:
1040
+ found = getattr(found, frag)
1041
+ except AttributeError:
1042
+ self.importer(used)
1043
+ found = getattr(found, frag)
1044
+ return found
1045
+ except ImportError:
1046
+ e, tb = sys.exc_info()[1:]
1047
+ v = ValueError('Cannot resolve %r: %s' % (s, e))
1048
+ v.__cause__, v.__traceback__ = e, tb
1049
+ raise v
1050
+
1051
+ def ext_convert(self, value):
1052
+ """Default converter for the ext:// protocol."""
1053
+ return self.resolve(value)
1054
+
1055
+ def cfg_convert(self, value):
1056
+ """Default converter for the cfg:// protocol."""
1057
+ rest = value
1058
+ m = self.WORD_PATTERN.match(rest)
1059
+ if m is None:
1060
+ raise ValueError("Unable to convert %r" % value)
1061
+ else:
1062
+ rest = rest[m.end():]
1063
+ d = self.config[m.groups()[0]]
1064
+ while rest:
1065
+ m = self.DOT_PATTERN.match(rest)
1066
+ if m:
1067
+ d = d[m.groups()[0]]
1068
+ else:
1069
+ m = self.INDEX_PATTERN.match(rest)
1070
+ if m:
1071
+ idx = m.groups()[0]
1072
+ if not self.DIGIT_PATTERN.match(idx):
1073
+ d = d[idx]
1074
+ else:
1075
+ try:
1076
+ n = int(
1077
+ idx
1078
+ ) # try as number first (most likely)
1079
+ d = d[n]
1080
+ except TypeError:
1081
+ d = d[idx]
1082
+ if m:
1083
+ rest = rest[m.end():]
1084
+ else:
1085
+ raise ValueError('Unable to convert '
1086
+ '%r at %r' % (value, rest))
1087
+ # rest should be empty
1088
+ return d
1089
+
1090
+ def convert(self, value):
1091
+ """
1092
+ Convert values to an appropriate type. dicts, lists and tuples are
1093
+ replaced by their converting alternatives. Strings are checked to
1094
+ see if they have a conversion format and are converted if they do.
1095
+ """
1096
+ if not isinstance(value, ConvertingDict) and isinstance(
1097
+ value, dict):
1098
+ value = ConvertingDict(value)
1099
+ value.configurator = self
1100
+ elif not isinstance(value, ConvertingList) and isinstance(
1101
+ value, list):
1102
+ value = ConvertingList(value)
1103
+ value.configurator = self
1104
+ elif not isinstance(value, ConvertingTuple) and isinstance(value, tuple):
1105
+ value = ConvertingTuple(value)
1106
+ value.configurator = self
1107
+ elif isinstance(value, string_types):
1108
+ m = self.CONVERT_PATTERN.match(value)
1109
+ if m:
1110
+ d = m.groupdict()
1111
+ prefix = d['prefix']
1112
+ converter = self.value_converters.get(prefix, None)
1113
+ if converter:
1114
+ suffix = d['suffix']
1115
+ converter = getattr(self, converter)
1116
+ value = converter(suffix)
1117
+ return value
1118
+
1119
+ def configure_custom(self, config):
1120
+ """Configure an object with a user-supplied factory."""
1121
+ c = config.pop('()')
1122
+ if not callable(c):
1123
+ c = self.resolve(c)
1124
+ props = config.pop('.', None)
1125
+ # Check for valid identifiers
1126
+ kwargs = dict([(k, config[k]) for k in config if valid_ident(k)])
1127
+ result = c(**kwargs)
1128
+ if props:
1129
+ for name, value in props.items():
1130
+ setattr(result, name, value)
1131
+ return result
1132
+
1133
+ def as_tuple(self, value):
1134
+ """Utility function which converts lists to tuples."""
1135
+ if isinstance(value, list):
1136
+ value = tuple(value)
1137
+ return value
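
The `OrderedDict` backport above mirrors the standard-library `collections.OrderedDict`, so its linked-list behaviour can be illustrated with the stdlib class directly; in particular, `popitem(last=False)` pops from the front of the sentinel-based circular list (FIFO) while the default pops from the back (LIFO), matching the two branches of the backport's `popitem`:

```python
from collections import OrderedDict  # stdlib class the backport mirrors

od = OrderedDict()
od['a'] = 1
od['b'] = 2
od['c'] = 3

# last=False unlinks root[1] (the oldest link); the default unlinks root[0].
first = od.popitem(last=False)
last = od.popitem()
print(first, last, list(od))  # ('a', 1) ('c', 3) ['b']
```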
llava/lib/python3.10/site-packages/pip/_vendor/distlib/database.py ADDED
@@ -0,0 +1,1329 @@
1
+ # -*- coding: utf-8 -*-
2
+ #
3
+ # Copyright (C) 2012-2023 The Python Software Foundation.
4
+ # See LICENSE.txt and CONTRIBUTORS.txt.
5
+ #
6
+ """PEP 376 implementation."""
7
+
8
+ from __future__ import unicode_literals
9
+
10
+ import base64
11
+ import codecs
12
+ import contextlib
13
+ import hashlib
14
+ import logging
15
+ import os
16
+ import posixpath
17
+ import sys
18
+ import zipimport
19
+
20
+ from . import DistlibException, resources
21
+ from .compat import StringIO
22
+ from .version import get_scheme, UnsupportedVersionError
23
+ from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME)
24
+ from .util import (parse_requirement, cached_property, parse_name_and_version, read_exports, write_exports, CSVReader,
25
+ CSVWriter)
26
+
27
+ __all__ = [
28
+ 'Distribution', 'BaseInstalledDistribution', 'InstalledDistribution', 'EggInfoDistribution', 'DistributionPath'
29
+ ]
30
+
31
+ logger = logging.getLogger(__name__)
32
+
33
+ EXPORTS_FILENAME = 'pydist-exports.json'
34
+ COMMANDS_FILENAME = 'pydist-commands.json'
35
+
36
+ DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', 'RESOURCES', EXPORTS_FILENAME, 'SHARED')
37
+
38
+ DISTINFO_EXT = '.dist-info'
39
+
40
+
41
+ class _Cache(object):
42
+ """
43
+ A simple cache mapping names and .dist-info paths to distributions
44
+ """
45
+
46
+ def __init__(self):
47
+ """
48
+ Initialise an instance. There is normally one for each DistributionPath.
49
+ """
50
+ self.name = {}
51
+ self.path = {}
52
+ self.generated = False
53
+
54
+ def clear(self):
55
+ """
56
+ Clear the cache, setting it to its initial state.
57
+ """
58
+ self.name.clear()
59
+ self.path.clear()
60
+ self.generated = False
61
+
62
+ def add(self, dist):
63
+ """
64
+ Add a distribution to the cache.
65
+ :param dist: The distribution to add.
66
+ """
67
+ if dist.path not in self.path:
68
+ self.path[dist.path] = dist
69
+ self.name.setdefault(dist.key, []).append(dist)
70
+
71
+
72
+ class DistributionPath(object):
73
+ """
74
+ Represents a set of distributions installed on a path (typically sys.path).
75
+ """
76
+
77
+ def __init__(self, path=None, include_egg=False):
78
+ """
79
+ Create an instance from a path, optionally including legacy (distutils/
80
+ setuptools/distribute) distributions.
81
+ :param path: The path to use, as a list of directories. If not specified,
82
+ sys.path is used.
83
+ :param include_egg: If True, this instance will look for and return legacy
84
+ distributions as well as those based on PEP 376.
85
+ """
86
+ if path is None:
87
+ path = sys.path
88
+ self.path = path
89
+ self._include_dist = True
90
+ self._include_egg = include_egg
91
+
92
+ self._cache = _Cache()
93
+ self._cache_egg = _Cache()
94
+ self._cache_enabled = True
95
+ self._scheme = get_scheme('default')
96
+
97
+ def _get_cache_enabled(self):
98
+ return self._cache_enabled
99
+
100
+ def _set_cache_enabled(self, value):
101
+ self._cache_enabled = value
102
+
103
+ cache_enabled = property(_get_cache_enabled, _set_cache_enabled)
104
+
105
+ def clear_cache(self):
106
+ """
107
+ Clears the internal cache.
108
+ """
109
+ self._cache.clear()
110
+ self._cache_egg.clear()
111
+
112
+ def _yield_distributions(self):
113
+ """
114
+ Yield .dist-info and/or .egg(-info) distributions.
115
+ """
116
+ # We need to check if we've seen some resources already, because on
117
+ # some Linux systems (e.g. some Debian/Ubuntu variants) there are
118
+ # symlinks which alias other files in the environment.
119
+ seen = set()
120
+ for path in self.path:
121
+ finder = resources.finder_for_path(path)
122
+ if finder is None:
123
+ continue
124
+ r = finder.find('')
125
+ if not r or not r.is_container:
126
+ continue
127
+ rset = sorted(r.resources)
128
+ for entry in rset:
129
+ r = finder.find(entry)
130
+ if not r or r.path in seen:
131
+ continue
132
+ try:
133
+ if self._include_dist and entry.endswith(DISTINFO_EXT):
134
+ possible_filenames = [METADATA_FILENAME, WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME]
135
+ for metadata_filename in possible_filenames:
136
+ metadata_path = posixpath.join(entry, metadata_filename)
137
+ pydist = finder.find(metadata_path)
138
+ if pydist:
139
+ break
140
+ else:
141
+ continue
142
+
143
+ with contextlib.closing(pydist.as_stream()) as stream:
144
+ metadata = Metadata(fileobj=stream, scheme='legacy')
145
+ logger.debug('Found %s', r.path)
146
+ seen.add(r.path)
147
+ yield new_dist_class(r.path, metadata=metadata, env=self)
148
+ elif self._include_egg and entry.endswith(('.egg-info', '.egg')):
149
+ logger.debug('Found %s', r.path)
150
+ seen.add(r.path)
151
+ yield old_dist_class(r.path, self)
152
+ except Exception as e:
153
+ msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s'
154
+ logger.warning(msg, r.path, e)
155
+ import warnings
156
+ warnings.warn(msg % (r.path, e), stacklevel=2)
157
+
158
+ def _generate_cache(self):
159
+ """
160
+ Scan the path for distributions and populate the cache with
161
+ those that are found.
162
+ """
163
+ gen_dist = not self._cache.generated
164
+ gen_egg = self._include_egg and not self._cache_egg.generated
165
+ if gen_dist or gen_egg:
166
+ for dist in self._yield_distributions():
167
+ if isinstance(dist, InstalledDistribution):
168
+ self._cache.add(dist)
169
+ else:
170
+ self._cache_egg.add(dist)
171
+
172
+ if gen_dist:
173
+ self._cache.generated = True
174
+ if gen_egg:
175
+ self._cache_egg.generated = True
176
+
177
+ @classmethod
178
+ def distinfo_dirname(cls, name, version):
179
+ """
180
+ The *name* and *version* parameters are converted into their
181
+ filename-escaped form, i.e. any ``'-'`` characters are replaced
182
+ with ``'_'`` other than the one in ``'dist-info'`` and the one
183
+ separating the name from the version number.
184
+
185
+ :parameter name: is converted to a standard distribution name by replacing
186
+ any runs of non- alphanumeric characters with a single
187
+ ``'-'``.
188
+ :type name: string
189
+ :parameter version: is converted to a standard version string. Spaces
190
+ become dots, and all other non-alphanumeric characters
191
+ (except dots) become dashes, with runs of multiple
192
+ dashes condensed to a single dash.
193
+ :type version: string
194
+ :returns: directory name
195
+ :rtype: string"""
196
+ name = name.replace('-', '_')
197
+ return '-'.join([name, version]) + DISTINFO_EXT
198
+
199
+ def get_distributions(self):
200
+ """
201
+ Provides an iterator that looks for distributions and returns
202
+ :class:`InstalledDistribution` or
203
+ :class:`EggInfoDistribution` instances for each one of them.
204
+
205
+ :rtype: iterator of :class:`InstalledDistribution` and
206
+ :class:`EggInfoDistribution` instances
207
+ """
208
+ if not self._cache_enabled:
209
+ for dist in self._yield_distributions():
210
+ yield dist
211
+ else:
212
+ self._generate_cache()
213
+
214
+ for dist in self._cache.path.values():
215
+ yield dist
216
+
217
+ if self._include_egg:
218
+ for dist in self._cache_egg.path.values():
219
+ yield dist
220
+
221
+ def get_distribution(self, name):
222
+ """
223
+ Looks for a named distribution on the path.
224
+
225
+ This function only returns the first result found, as no more than one
226
+ value is expected. If nothing is found, ``None`` is returned.
227
+
228
+ :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution`
229
+ or ``None``
230
+ """
231
+ result = None
232
+ name = name.lower()
233
+ if not self._cache_enabled:
234
+ for dist in self._yield_distributions():
235
+ if dist.key == name:
236
+ result = dist
237
+ break
238
+ else:
239
+ self._generate_cache()
240
+
241
+ if name in self._cache.name:
242
+ result = self._cache.name[name][0]
243
+ elif self._include_egg and name in self._cache_egg.name:
244
+ result = self._cache_egg.name[name][0]
245
+ return result
246
+
247
+ def provides_distribution(self, name, version=None):
248
+ """
249
+ Iterates over all distributions to find which distributions provide *name*.
250
+ If a *version* is provided, it will be used to filter the results.
251
+
252
+ This function only returns the first result found, since no more than
253
+ one values are expected. If the directory is not found, returns ``None``.
254
+
255
+ :parameter version: a version specifier that indicates the version
256
+ required, conforming to the format in ``PEP-345``
257
+
258
+ :type name: string
259
+ :type version: string
260
+ """
261
+ matcher = None
262
+ if version is not None:
263
+ try:
264
+ matcher = self._scheme.matcher('%s (%s)' % (name, version))
265
+ except ValueError:
266
+ raise DistlibException('invalid name or version: %r, %r' % (name, version))
267
+
268
+ for dist in self.get_distributions():
269
+ # We hit a problem on Travis where enum34 was installed and doesn't
270
+ # have a provides attribute ...
271
+ if not hasattr(dist, 'provides'):
272
+ logger.debug('No "provides": %s', dist)
273
+ else:
274
+ provided = dist.provides
275
+
276
+ for p in provided:
277
+ p_name, p_ver = parse_name_and_version(p)
278
+ if matcher is None:
279
+ if p_name == name:
280
+ yield dist
281
+ break
282
+ else:
283
+ if p_name == name and matcher.match(p_ver):
284
+ yield dist
285
+ break
286
+
287
+ def get_file_path(self, name, relative_path):
288
+ """
289
+ Return the path to a resource file.
290
+ """
291
+ dist = self.get_distribution(name)
292
+ if dist is None:
293
+ raise LookupError('no distribution named %r found' % name)
294
+ return dist.get_resource_path(relative_path)
295
+
296
+ def get_exported_entries(self, category, name=None):
297
+ """
298
+ Return all of the exported entries in a particular category.
299
+
300
+ :param category: The category to search for entries.
301
+ :param name: If specified, only entries with that name are returned.
302
+ """
303
+ for dist in self.get_distributions():
304
+ r = dist.exports
305
+ if category in r:
306
+ d = r[category]
307
+ if name is not None:
308
+ if name in d:
309
+ yield d[name]
310
+ else:
311
+ for v in d.values():
312
+ yield v
313
+
314
+
315
+ class Distribution(object):
316
+ """
317
+ A base class for distributions, whether installed or from indexes.
318
+ Either way, it must have some metadata, so that's all that's needed
319
+ for construction.
320
+ """
321
+
322
+ build_time_dependency = False
323
+ """
324
+ Set to True if it's known to be only a build-time dependency (i.e.
325
+ not needed after installation).
326
+ """
327
+
328
+ requested = False
329
+ """A boolean that indicates whether the ``REQUESTED`` metadata file is
330
+ present (in other words, whether the package was installed by user
331
+ request or it was installed as a dependency)."""
332
+
333
+ def __init__(self, metadata):
334
+ """
335
+ Initialise an instance.
336
+ :param metadata: The instance of :class:`Metadata` describing this
337
+ distribution.
338
+ """
339
+ self.metadata = metadata
340
+ self.name = metadata.name
341
+ self.key = self.name.lower() # for case-insensitive comparisons
342
+ self.version = metadata.version
343
+ self.locator = None
344
+ self.digest = None
345
+ self.extras = None # additional features requested
346
+ self.context = None # environment marker overrides
347
+ self.download_urls = set()
348
+ self.digests = {}
349
+
350
+ @property
351
+ def source_url(self):
352
+ """
353
+ The source archive download URL for this distribution.
354
+ """
355
+ return self.metadata.source_url
356
+
357
+ download_url = source_url # Backward compatibility
358
+
359
+ @property
360
+ def name_and_version(self):
361
+ """
362
+ A utility property which displays the name and version in parentheses.
363
+ """
364
+ return '%s (%s)' % (self.name, self.version)
365
+
366
+ @property
367
+ def provides(self):
368
+ """
369
+ A set of distribution names and versions provided by this distribution.
370
+ :return: A set of "name (version)" strings.
371
+ """
372
+ plist = self.metadata.provides
373
+ s = '%s (%s)' % (self.name, self.version)
374
+ if s not in plist:
375
+ plist.append(s)
376
+ return plist
377
+
378
+ def _get_requirements(self, req_attr):
379
+ md = self.metadata
380
+ reqts = getattr(md, req_attr)
381
+ logger.debug('%s: got requirements %r from metadata: %r', self.name, req_attr, reqts)
382
+ return set(md.get_requirements(reqts, extras=self.extras, env=self.context))
383
+
384
+ @property
385
+ def run_requires(self):
386
+ return self._get_requirements('run_requires')
387
+
388
+ @property
389
+ def meta_requires(self):
390
+ return self._get_requirements('meta_requires')
391
+
392
+ @property
393
+ def build_requires(self):
394
+ return self._get_requirements('build_requires')
395
+
396
+ @property
397
+ def test_requires(self):
398
+ return self._get_requirements('test_requires')
399
+
400
+ @property
401
+ def dev_requires(self):
402
+ return self._get_requirements('dev_requires')
403
+
404
+ def matches_requirement(self, req):
405
+ """
406
+ Say if this instance matches (fulfills) a requirement.
407
+ :param req: The requirement to match.
408
+ :rtype req: str
409
+ :return: True if it matches, else False.
410
+ """
411
+ # Requirement may contain extras - parse to lose those
412
+ # from what's passed to the matcher
413
+ r = parse_requirement(req)
414
+ scheme = get_scheme(self.metadata.scheme)
415
+ try:
416
+ matcher = scheme.matcher(r.requirement)
417
+ except UnsupportedVersionError:
418
+ # XXX compat-mode if cannot read the version
419
+ logger.warning('could not read version %r - using name only', req)
420
+ name = req.split()[0]
421
+ matcher = scheme.matcher(name)
422
+
423
+ name = matcher.key # case-insensitive
424
+
425
+ result = False
426
+ for p in self.provides:
427
+ p_name, p_ver = parse_name_and_version(p)
428
+ if p_name != name:
429
+ continue
430
+ try:
431
+ result = matcher.match(p_ver)
432
+ break
433
+ except UnsupportedVersionError:
434
+ pass
435
+ return result
436
+
437
+ def __repr__(self):
438
+ """
439
+ Return a textual representation of this instance,
440
+ """
441
+ if self.source_url:
+ suffix = ' [%s]' % self.source_url
+ else:
+ suffix = ''
+ return '<Distribution %s (%s)%s>' % (self.name, self.version, suffix)
+
+ def __eq__(self, other):
+ """
+ See if this distribution is the same as another.
+ :param other: The distribution to compare with. To be equal to one
+ another, distributions must have the same type, name,
+ version and source_url.
+ :return: True if it is the same, else False.
+ """
+ if type(other) is not type(self):
+ result = False
+ else:
+ result = (self.name == other.name and self.version == other.version and self.source_url == other.source_url)
+ return result
+
+ def __hash__(self):
+ """
+ Compute hash in a way which matches the equality test.
+ """
+ return hash(self.name) + hash(self.version) + hash(self.source_url)
+
+
+ class BaseInstalledDistribution(Distribution):
+ """
+ This is the base class for installed distributions (whether PEP 376 or
+ legacy).
+ """
+
+ hasher = None
+
+ def __init__(self, metadata, path, env=None):
+ """
+ Initialise an instance.
+ :param metadata: An instance of :class:`Metadata` which describes the
+ distribution. This will normally have been initialised
+ from a metadata file in the ``path``.
+ :param path: The path of the ``.dist-info`` or ``.egg-info``
+ directory for the distribution.
+ :param env: This is normally the :class:`DistributionPath`
+ instance where this distribution was found.
+ """
+ super(BaseInstalledDistribution, self).__init__(metadata)
+ self.path = path
+ self.dist_path = env
+
+ def get_hash(self, data, hasher=None):
+ """
+ Get the hash of some data, using a particular hash algorithm, if
+ specified.
+
+ :param data: The data to be hashed.
+ :type data: bytes
+ :param hasher: The name of a hash implementation, supported by hashlib,
+ or ``None``. Examples of valid values are ``'sha1'``,
+ ``'sha224'``, ``'sha384'``, ``'sha256'``, ``'md5'`` and
+ ``'sha512'``. If no hasher is specified, the ``hasher``
+ attribute of the :class:`InstalledDistribution` instance
+ is used. If the hasher is determined to be ``None``, MD5
+ is used as the hashing algorithm.
+ :returns: The hash of the data. If a hasher was explicitly specified,
+ the returned hash will be prefixed with the specified hasher
+ followed by '='.
+ :rtype: str
+ """
+ if hasher is None:
+ hasher = self.hasher
+ if hasher is None:
+ hasher = hashlib.md5
+ prefix = ''
+ else:
+ hasher = getattr(hashlib, hasher)
+ prefix = '%s=' % self.hasher
+ digest = hasher(data).digest()
+ digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii')
+ return '%s%s' % (prefix, digest)
+
+
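The ``get_hash`` method above produces pip/distlib-style RECORD hashes: a urlsafe base64 encoding of the digest with the ``=`` padding stripped, optionally prefixed with the algorithm name. A minimal self-contained sketch of that format (the ``record_hash`` name is illustrative, not part of distlib's API, and it uses ``hashlib.new`` rather than the ``getattr`` lookup used above):

```python
import base64
import hashlib


def record_hash(data, algorithm='sha256'):
    # Hypothetical helper mirroring the format produced by get_hash():
    # '<algorithm>=' prefix, urlsafe base64 digest, '=' padding stripped.
    h = hashlib.new(algorithm)
    h.update(data)
    b64 = base64.urlsafe_b64encode(h.digest()).rstrip(b'=').decode('ascii')
    return '%s=%s' % (algorithm, b64)


h = record_hash(b'hello')
```

A SHA-256 digest is 32 bytes, so the base64 part is always 43 characters once the single padding character is stripped.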
+ class InstalledDistribution(BaseInstalledDistribution):
+ """
+ Created with the *path* of the ``.dist-info`` directory provided to the
+ constructor. It reads the metadata contained in ``pydist.json`` when it is
+ instantiated, or uses a passed-in Metadata instance (useful for when
+ dry-run mode is being used).
+ """
+
+ hasher = 'sha256'
+
+ def __init__(self, path, metadata=None, env=None):
+ self.modules = []
+ self.finder = finder = resources.finder_for_path(path)
+ if finder is None:
+ raise ValueError('finder unavailable for %s' % path)
+ if env and env._cache_enabled and path in env._cache.path:
+ metadata = env._cache.path[path].metadata
+ elif metadata is None:
+ r = finder.find(METADATA_FILENAME)
+ # Temporary - for Wheel 0.23 support
+ if r is None:
+ r = finder.find(WHEEL_METADATA_FILENAME)
+ # Temporary - for legacy support
+ if r is None:
+ r = finder.find(LEGACY_METADATA_FILENAME)
+ if r is None:
+ raise ValueError('no %s found in %s' % (METADATA_FILENAME, path))
+ with contextlib.closing(r.as_stream()) as stream:
+ metadata = Metadata(fileobj=stream, scheme='legacy')
+
+ super(InstalledDistribution, self).__init__(metadata, path, env)
+
+ if env and env._cache_enabled:
+ env._cache.add(self)
+
+ r = finder.find('REQUESTED')
+ self.requested = r is not None
+ p = os.path.join(path, 'top_level.txt')
+ if os.path.exists(p):
+ with open(p, 'rb') as f:
+ data = f.read().decode('utf-8')
+ self.modules = data.splitlines()
+
+ def __repr__(self):
+ return '<InstalledDistribution %r %s at %r>' % (self.name, self.version, self.path)
+
+ def __str__(self):
+ return "%s %s" % (self.name, self.version)
+
+ def _get_records(self):
+ """
+ Get the list of installed files for the distribution.
+ :return: A list of tuples of path, hash and size. Note that hash and
+ size might be ``None`` for some entries. The path is exactly
+ as stored in the file (which is as in PEP 376).
+ """
+ results = []
+ r = self.get_distinfo_resource('RECORD')
+ with contextlib.closing(r.as_stream()) as stream:
+ with CSVReader(stream=stream) as record_reader:
+ # Base location is parent dir of .dist-info dir
+ # base_location = os.path.dirname(self.path)
+ # base_location = os.path.abspath(base_location)
+ for row in record_reader:
+ missing = [None for i in range(len(row), 3)]
+ path, checksum, size = row + missing
+ # if not os.path.isabs(path):
+ # path = path.replace('/', os.sep)
+ # path = os.path.join(base_location, path)
+ results.append((path, checksum, size))
+ return results
+
+ @cached_property
+ def exports(self):
+ """
+ Return the information exported by this distribution.
+ :return: A dictionary of exports, mapping an export category to a dict
+ of :class:`ExportEntry` instances describing the individual
+ export entries, and keyed by name.
+ """
+ result = {}
+ r = self.get_distinfo_resource(EXPORTS_FILENAME)
+ if r:
+ result = self.read_exports()
+ return result
+
+ def read_exports(self):
+ """
+ Read exports data from a file in .ini format.
+
+ :return: A dictionary of exports, mapping an export category to a list
+ of :class:`ExportEntry` instances describing the individual
+ export entries.
+ """
+ result = {}
+ r = self.get_distinfo_resource(EXPORTS_FILENAME)
+ if r:
+ with contextlib.closing(r.as_stream()) as stream:
+ result = read_exports(stream)
+ return result
+
+ def write_exports(self, exports):
+ """
+ Write a dictionary of exports to a file in .ini format.
+ :param exports: A dictionary of exports, mapping an export category to
+ a list of :class:`ExportEntry` instances describing the
+ individual export entries.
+ """
+ rf = self.get_distinfo_file(EXPORTS_FILENAME)
+ with open(rf, 'w') as f:
+ write_exports(exports, f)
+
+ def get_resource_path(self, relative_path):
+ """
+ NOTE: This API may change in the future.
+
+ Return the absolute path to a resource file with the given relative
+ path.
+
+ :param relative_path: The path, relative to .dist-info, of the resource
+ of interest.
+ :return: The absolute path where the resource is to be found.
+ """
+ r = self.get_distinfo_resource('RESOURCES')
+ with contextlib.closing(r.as_stream()) as stream:
+ with CSVReader(stream=stream) as resources_reader:
+ for relative, destination in resources_reader:
+ if relative == relative_path:
+ return destination
+ raise KeyError('no resource file with relative path %r '
+ 'is installed' % relative_path)
+
+ def list_installed_files(self):
+ """
+ Iterates over the ``RECORD`` entries and returns a tuple
+ ``(path, hash, size)`` for each line.
+
+ :returns: iterator of (path, hash, size)
+ """
+ for result in self._get_records():
+ yield result
+
+ def write_installed_files(self, paths, prefix, dry_run=False):
+ """
+ Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any
+ existing ``RECORD`` file is silently overwritten.
+
+ prefix is used to determine when to write absolute paths.
+ """
+ prefix = os.path.join(prefix, '')
+ base = os.path.dirname(self.path)
+ base_under_prefix = base.startswith(prefix)
+ base = os.path.join(base, '')
+ record_path = self.get_distinfo_file('RECORD')
+ logger.info('creating %s', record_path)
+ if dry_run:
+ return None
+ with CSVWriter(record_path) as writer:
+ for path in paths:
+ if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')):
+ # do not put size and hash, as in PEP-376
+ hash_value = size = ''
+ else:
+ size = '%d' % os.path.getsize(path)
+ with open(path, 'rb') as fp:
+ hash_value = self.get_hash(fp.read())
+ if path.startswith(base) or (base_under_prefix and path.startswith(prefix)):
+ path = os.path.relpath(path, base)
+ writer.writerow((path, hash_value, size))
+
+ # add the RECORD file itself
+ if record_path.startswith(base):
+ record_path = os.path.relpath(record_path, base)
+ writer.writerow((record_path, '', ''))
+ return record_path
+
+ def check_installed_files(self):
+ """
+ Checks that the hashes and sizes of the files in ``RECORD`` are
+ matched by the files themselves. Returns a (possibly empty) list of
+ mismatches. Each entry in the mismatch list will be a tuple consisting
+ of the path, 'exists', 'size' or 'hash' according to what didn't match
+ (existence is checked first, then size, then hash), the expected
+ value and the actual value.
+ """
+ mismatches = []
+ base = os.path.dirname(self.path)
+ record_path = self.get_distinfo_file('RECORD')
+ for path, hash_value, size in self.list_installed_files():
+ if not os.path.isabs(path):
+ path = os.path.join(base, path)
+ if path == record_path:
+ continue
+ if not os.path.exists(path):
+ mismatches.append((path, 'exists', True, False))
+ elif os.path.isfile(path):
+ actual_size = str(os.path.getsize(path))
+ if size and actual_size != size:
+ mismatches.append((path, 'size', size, actual_size))
+ elif hash_value:
+ if '=' in hash_value:
+ hasher = hash_value.split('=', 1)[0]
+ else:
+ hasher = None
+
+ with open(path, 'rb') as f:
+ actual_hash = self.get_hash(f.read(), hasher)
+ if actual_hash != hash_value:
+ mismatches.append((path, 'hash', hash_value, actual_hash))
+ return mismatches
+
+ @cached_property
+ def shared_locations(self):
+ """
+ A dictionary of shared locations whose keys are in the set 'prefix',
+ 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'.
+ The corresponding value is the absolute path of that category for
+ this distribution, and takes into account any paths selected by the
+ user at installation time (e.g. via command-line arguments). In the
+ case of the 'namespace' key, this would be a list of absolute paths
+ for the roots of namespace packages in this distribution.
+
+ The first time this property is accessed, the relevant information is
+ read from the SHARED file in the .dist-info directory.
+ """
+ result = {}
+ shared_path = os.path.join(self.path, 'SHARED')
+ if os.path.isfile(shared_path):
+ with codecs.open(shared_path, 'r', encoding='utf-8') as f:
+ lines = f.read().splitlines()
+ for line in lines:
+ key, value = line.split('=', 1)
+ if key == 'namespace':
+ result.setdefault(key, []).append(value)
+ else:
+ result[key] = value
+ return result
+
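The ``shared_locations`` property above reads a simple line-oriented format from the ``SHARED`` file: one ``key=value`` pair per line, where the ``namespace`` key may repeat and is accumulated into a list. A minimal sketch of that parsing logic in isolation (``parse_shared`` and the sample paths are illustrative, not distlib API):

```python
def parse_shared(text):
    # Hypothetical stand-alone version of the SHARED parsing loop above:
    # 'key=value' per line; repeated 'namespace' keys collect into a list.
    result = {}
    for line in text.splitlines():
        key, value = line.split('=', 1)
        if key == 'namespace':
            result.setdefault(key, []).append(value)
        else:
            result[key] = value
    return result


sample = ('prefix=/usr/local\n'
          'namespace=/usr/local/lib/python/ns1\n'
          'namespace=/usr/local/lib/python/ns2')
parsed = parse_shared(sample)
```

Note that ``split('=', 1)`` keeps any further ``=`` characters inside the value, which matters for paths containing ``=``.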
+ def write_shared_locations(self, paths, dry_run=False):
+ """
+ Write shared location information to the SHARED file in .dist-info.
+ :param paths: A dictionary as described in the documentation for
+ :meth:`shared_locations`.
+ :param dry_run: If True, the action is logged but no file is actually
+ written.
+ :return: The path of the file written to.
+ """
+ shared_path = os.path.join(self.path, 'SHARED')
+ logger.info('creating %s', shared_path)
+ if dry_run:
+ return None
+ lines = []
+ for key in ('prefix', 'lib', 'headers', 'scripts', 'data'):
+ path = paths[key]
+ if os.path.isdir(paths[key]):
+ lines.append('%s=%s' % (key, path))
+ for ns in paths.get('namespace', ()):
+ lines.append('namespace=%s' % ns)
+
+ with codecs.open(shared_path, 'w', encoding='utf-8') as f:
+ f.write('\n'.join(lines))
+ return shared_path
+
+ def get_distinfo_resource(self, path):
+ if path not in DIST_FILES:
+ raise DistlibException('invalid path for a dist-info file: '
+ '%r at %r' % (path, self.path))
+ finder = resources.finder_for_path(self.path)
+ if finder is None:
+ raise DistlibException('Unable to get a finder for %s' % self.path)
+ return finder.find(path)
+
+ def get_distinfo_file(self, path):
+ """
+ Returns a path located under the ``.dist-info`` directory. Returns a
+ string representing the path.
+
+ :parameter path: a ``'/'``-separated path relative to the
+ ``.dist-info`` directory or an absolute path;
+ If *path* is an absolute path and doesn't start
+ with the ``.dist-info`` directory path,
+ a :class:`DistlibException` is raised
+ :type path: str
+ :rtype: str
+ """
+ # Check if it is an absolute path # XXX use relpath, add tests
+ if path.find(os.sep) >= 0:
+ # it's an absolute path?
+ distinfo_dirname, path = path.split(os.sep)[-2:]
+ if distinfo_dirname != self.path.split(os.sep)[-1]:
+ raise DistlibException('dist-info file %r does not belong to the %r %s '
+ 'distribution' % (path, self.name, self.version))
+
+ # The file must be relative
+ if path not in DIST_FILES:
+ raise DistlibException('invalid path for a dist-info file: '
+ '%r at %r' % (path, self.path))
+
+ return os.path.join(self.path, path)
+
+ def list_distinfo_files(self):
+ """
+ Iterates over the ``RECORD`` entries and returns paths for each line if
+ the path is pointing to a file located in the ``.dist-info`` directory
+ or one of its subdirectories.
+
+ :returns: iterator of paths
+ """
+ base = os.path.dirname(self.path)
+ for path, checksum, size in self._get_records():
+ # XXX add separator or use real relpath algo
+ if not os.path.isabs(path):
+ path = os.path.join(base, path)
+ if path.startswith(self.path):
+ yield path
+
+ def __eq__(self, other):
+ return (isinstance(other, InstalledDistribution) and self.path == other.path)
+
+ # See http://docs.python.org/reference/datamodel#object.__hash__
+ __hash__ = object.__hash__
+
+
+ class EggInfoDistribution(BaseInstalledDistribution):
+ """Created with the *path* of the ``.egg-info`` directory or file provided
+ to the constructor. It reads the metadata contained in the file itself, or
+ if the given path happens to be a directory, the metadata is read from the
+ file ``PKG-INFO`` under that directory."""
+
+ requested = True # as we have no way of knowing, assume it was
+ shared_locations = {}
+
+ def __init__(self, path, env=None):
+
+ def set_name_and_version(s, n, v):
+ s.name = n
+ s.key = n.lower() # for case-insensitive comparisons
+ s.version = v
+
+ self.path = path
+ self.dist_path = env
+ if env and env._cache_enabled and path in env._cache_egg.path:
+ metadata = env._cache_egg.path[path].metadata
+ set_name_and_version(self, metadata.name, metadata.version)
+ else:
+ metadata = self._get_metadata(path)
+
+ # Need to be set before caching
+ set_name_and_version(self, metadata.name, metadata.version)
+
+ if env and env._cache_enabled:
+ env._cache_egg.add(self)
+ super(EggInfoDistribution, self).__init__(metadata, path, env)
+
+ def _get_metadata(self, path):
+ requires = None
+
+ def parse_requires_data(data):
+ """Create a list of dependencies from a requires.txt file.
+
+ *data*: the contents of a setuptools-produced requires.txt file.
+ """
+ reqs = []
+ lines = data.splitlines()
+ for line in lines:
+ line = line.strip()
+ # sectioned files have bare newlines (separating sections)
+ if not line: # pragma: no cover
+ continue
+ if line.startswith('['): # pragma: no cover
+ logger.warning('Unexpected line: quitting requirement scan: %r', line)
+ break
+ r = parse_requirement(line)
+ if not r: # pragma: no cover
+ logger.warning('Not recognised as a requirement: %r', line)
+ continue
+ if r.extras: # pragma: no cover
+ logger.warning('extra requirements in requires.txt are '
+ 'not supported')
+ if not r.constraints:
+ reqs.append(r.name)
+ else:
+ cons = ', '.join('%s%s' % c for c in r.constraints)
+ reqs.append('%s (%s)' % (r.name, cons))
+ return reqs
+
+ def parse_requires_path(req_path):
+ """Create a list of dependencies from a requires.txt file.
+
+ *req_path*: the path to a setuptools-produced requires.txt file.
+ """
+
+ reqs = []
+ try:
+ with codecs.open(req_path, 'r', 'utf-8') as fp:
+ reqs = parse_requires_data(fp.read())
+ except IOError:
+ pass
+ return reqs
+
+ tl_path = tl_data = None
+ if path.endswith('.egg'):
+ if os.path.isdir(path):
+ p = os.path.join(path, 'EGG-INFO')
+ meta_path = os.path.join(p, 'PKG-INFO')
+ metadata = Metadata(path=meta_path, scheme='legacy')
+ req_path = os.path.join(p, 'requires.txt')
+ tl_path = os.path.join(p, 'top_level.txt')
+ requires = parse_requires_path(req_path)
+ else:
+ # FIXME handle the case where zipfile is not available
+ zipf = zipimport.zipimporter(path)
+ fileobj = StringIO(zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8'))
+ metadata = Metadata(fileobj=fileobj, scheme='legacy')
+ try:
+ data = zipf.get_data('EGG-INFO/requires.txt')
+ tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8')
+ requires = parse_requires_data(data.decode('utf-8'))
+ except IOError:
+ requires = None
+ elif path.endswith('.egg-info'):
+ if os.path.isdir(path):
+ req_path = os.path.join(path, 'requires.txt')
+ requires = parse_requires_path(req_path)
+ path = os.path.join(path, 'PKG-INFO')
+ tl_path = os.path.join(path, 'top_level.txt')
+ metadata = Metadata(path=path, scheme='legacy')
+ else:
+ raise DistlibException('path must end with .egg-info or .egg, '
+ 'got %r' % path)
+
+ if requires:
+ metadata.add_requirements(requires)
+ # look for top-level modules in top_level.txt, if present
+ if tl_data is None:
+ if tl_path is not None and os.path.exists(tl_path):
+ with open(tl_path, 'rb') as f:
+ tl_data = f.read().decode('utf-8')
+ if not tl_data:
+ tl_data = []
+ else:
+ tl_data = tl_data.splitlines()
+ self.modules = tl_data
+ return metadata
+
+ def __repr__(self):
+ return '<EggInfoDistribution %r %s at %r>' % (self.name, self.version, self.path)
+
+ def __str__(self):
+ return "%s %s" % (self.name, self.version)
+
+ def check_installed_files(self):
+ """
+ Checks that the hashes and sizes of the files in ``RECORD`` are
+ matched by the files themselves. Returns a (possibly empty) list of
+ mismatches. Each entry in the mismatch list will be a tuple consisting
+ of the path, 'exists', 'size' or 'hash' according to what didn't match
+ (existence is checked first, then size, then hash), the expected
+ value and the actual value.
+ """
+ mismatches = []
+ record_path = os.path.join(self.path, 'installed-files.txt')
+ if os.path.exists(record_path):
+ for path, _, _ in self.list_installed_files():
+ if path == record_path:
+ continue
+ if not os.path.exists(path):
+ mismatches.append((path, 'exists', True, False))
+ return mismatches
+
+ def list_installed_files(self):
+ """
+ Iterates over the ``installed-files.txt`` entries and returns a tuple
+ ``(path, hash, size)`` for each line.
+
+ :returns: a list of (path, hash, size)
+ """
+
+ def _md5(path):
+ f = open(path, 'rb')
+ try:
+ content = f.read()
+ finally:
+ f.close()
+ return hashlib.md5(content).hexdigest()
+
+ def _size(path):
+ return os.stat(path).st_size
+
+ record_path = os.path.join(self.path, 'installed-files.txt')
+ result = []
+ if os.path.exists(record_path):
+ with codecs.open(record_path, 'r', encoding='utf-8') as f:
+ for line in f:
+ line = line.strip()
+ p = os.path.normpath(os.path.join(self.path, line))
+ # "./" is present as a marker between installed files
+ # and installation metadata files
+ if not os.path.exists(p):
+ logger.warning('Non-existent file: %s', p)
+ if p.endswith(('.pyc', '.pyo')):
+ continue
+ # otherwise fall through and fail
+ if not os.path.isdir(p):
+ result.append((p, _md5(p), _size(p)))
+ result.append((record_path, None, None))
+ return result
+
+ def list_distinfo_files(self, absolute=False):
+ """
+ Iterates over the ``installed-files.txt`` entries and returns paths for
+ each line if the path is pointing to a file located in the
+ ``.egg-info`` directory or one of its subdirectories.
+
+ :parameter absolute: If *absolute* is ``True``, each returned path is
+ transformed into a local absolute path. Otherwise the
+ raw value from ``installed-files.txt`` is returned.
+ :type absolute: boolean
+ :returns: iterator of paths
+ """
+ record_path = os.path.join(self.path, 'installed-files.txt')
+ if os.path.exists(record_path):
+ skip = True
+ with codecs.open(record_path, 'r', encoding='utf-8') as f:
+ for line in f:
+ line = line.strip()
+ if line == './':
+ skip = False
+ continue
+ if not skip:
+ p = os.path.normpath(os.path.join(self.path, line))
+ if p.startswith(self.path):
+ if absolute:
+ yield p
+ else:
+ yield line
+
+ def __eq__(self, other):
+ return (isinstance(other, EggInfoDistribution) and self.path == other.path)
+
+ # See http://docs.python.org/reference/datamodel#object.__hash__
+ __hash__ = object.__hash__
+
+
+ new_dist_class = InstalledDistribution
+ old_dist_class = EggInfoDistribution
+
+
+ class DependencyGraph(object):
+ """
+ Represents a dependency graph between distributions.
+
+ The dependency relationships are stored in an ``adjacency_list`` that maps
+ distributions to a list of ``(other, label)`` tuples where ``other``
+ is a distribution and the edge is labeled with ``label`` (i.e. the version
+ specifier, if such was provided). Also, for more efficient traversal, for
+ every distribution ``x``, a list of predecessors is kept in
+ ``reverse_list[x]``. An edge from distribution ``a`` to
+ distribution ``b`` means that ``a`` depends on ``b``. If any missing
+ dependencies are found, they are stored in ``missing``, which is a
+ dictionary that maps distributions to a list of requirements that were not
+ provided by any other distributions.
+ """
+
+ def __init__(self):
+ self.adjacency_list = {}
+ self.reverse_list = {}
+ self.missing = {}
+
+ def add_distribution(self, distribution):
+ """Add the *distribution* to the graph.
+
+ :type distribution: :class:`distutils2.database.InstalledDistribution`
+ or :class:`distutils2.database.EggInfoDistribution`
+ """
+ self.adjacency_list[distribution] = []
+ self.reverse_list[distribution] = []
+ # self.missing[distribution] = []
+
+ def add_edge(self, x, y, label=None):
+ """Add an edge from distribution *x* to distribution *y* with the given
+ *label*.
+
+ :type x: :class:`distutils2.database.InstalledDistribution` or
+ :class:`distutils2.database.EggInfoDistribution`
+ :type y: :class:`distutils2.database.InstalledDistribution` or
+ :class:`distutils2.database.EggInfoDistribution`
+ :type label: ``str`` or ``None``
+ """
+ self.adjacency_list[x].append((y, label))
+ # multiple edges are allowed, so be careful
+ if x not in self.reverse_list[y]:
+ self.reverse_list[y].append(x)
+
+ def add_missing(self, distribution, requirement):
+ """
+ Add a missing *requirement* for the given *distribution*.
+
+ :type distribution: :class:`distutils2.database.InstalledDistribution`
+ or :class:`distutils2.database.EggInfoDistribution`
+ :type requirement: ``str``
+ """
+ logger.debug('%s missing %r', distribution, requirement)
+ self.missing.setdefault(distribution, []).append(requirement)
+
+ def _repr_dist(self, dist):
+ return '%s %s' % (dist.name, dist.version)
+
+ def repr_node(self, dist, level=1):
+ """Prints only a subgraph"""
+ output = [self._repr_dist(dist)]
+ for other, label in self.adjacency_list[dist]:
+ dist = self._repr_dist(other)
+ if label is not None:
+ dist = '%s [%s]' % (dist, label)
+ output.append(' ' * level + str(dist))
+ suboutput = self.repr_node(other, level + 1)
+ subs = suboutput.split('\n')
+ output.extend(subs[1:])
+ return '\n'.join(output)
+
+ def to_dot(self, f, skip_disconnected=True):
+ """Writes a DOT output for the graph to the provided file *f*.
+
+ If *skip_disconnected* is set to ``True``, then all distributions
+ that are not dependent on any other distribution are skipped.
+
+ :type f: has to support ``file``-like operations
+ :type skip_disconnected: ``bool``
+ """
+ disconnected = []
+
+ f.write("digraph dependencies {\n")
+ for dist, adjs in self.adjacency_list.items():
+ if len(adjs) == 0 and not skip_disconnected:
+ disconnected.append(dist)
+ for other, label in adjs:
+ if label is not None:
+ f.write('"%s" -> "%s" [label="%s"]\n' % (dist.name, other.name, label))
+ else:
+ f.write('"%s" -> "%s"\n' % (dist.name, other.name))
+ if not skip_disconnected and len(disconnected) > 0:
+ f.write('subgraph disconnected {\n')
+ f.write('label = "Disconnected"\n')
+ f.write('bgcolor = red\n')
+
+ for dist in disconnected:
+ f.write('"%s"' % dist.name)
+ f.write('\n')
+ f.write('}\n')
+ f.write('}\n')
+
+ def topological_sort(self):
+ """
+ Perform a topological sort of the graph.
+ :return: A tuple, the first element of which is a topologically sorted
+ list of distributions, and the second element of which is a
+ list of distributions that cannot be sorted because they have
+ circular dependencies and so form a cycle.
+ """
+ result = []
+ # Make a shallow copy of the adjacency list
+ alist = {}
+ for k, v in self.adjacency_list.items():
+ alist[k] = v[:]
+ while True:
+ # See what we can remove in this run
+ to_remove = []
+ for k, v in list(alist.items())[:]:
+ if not v:
+ to_remove.append(k)
+ del alist[k]
+ if not to_remove:
+ # What's left in alist (if anything) is a cycle.
+ break
+ # Remove from the adjacency list of others
+ for k, v in alist.items():
+ alist[k] = [(d, r) for d, r in v if d not in to_remove]
+ logger.debug('Moving to result: %s', ['%s (%s)' % (d.name, d.version) for d in to_remove])
+ result.extend(to_remove)
+ return result, list(alist.keys())
+
+ def __repr__(self):
+ """Representation of the graph"""
+ output = []
+ for dist, adjs in self.adjacency_list.items():
+ output.append(self.repr_node(dist))
+ return '\n'.join(output)
+
+
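The ``topological_sort`` method above repeatedly peels off nodes with no outstanding dependency edges; whatever survives the loop is part of a cycle. A minimal, self-contained sketch of the same loop on a plain adjacency list (``topo_sort`` and the sample graph are illustrative, not distlib API):

```python
def topo_sort(adjacency_list):
    # Mirrors the peeling loop in DependencyGraph.topological_sort():
    # repeatedly emit nodes whose edge list is empty, then drop the
    # emitted nodes from everyone else's edges.
    result = []
    alist = {k: v[:] for k, v in adjacency_list.items()}  # shallow copy
    while True:
        to_remove = [k for k, v in list(alist.items()) if not v]
        for k in to_remove:
            del alist[k]
        if not to_remove:
            break  # anything left in alist is part of a cycle
        for k, v in alist.items():
            alist[k] = [(d, r) for d, r in v if d not in to_remove]
        result.extend(to_remove)
    return result, list(alist.keys())


# 'a' depends on 'b', 'b' depends on 'c'; 'x' and 'y' form a cycle
graph = {
    'a': [('b', None)],
    'b': [('c', None)],
    'c': [],
    'x': [('y', None)],
    'y': [('x', None)],
}
ordered, cyclic = topo_sort(graph)
# ordered → ['c', 'b', 'a']; cyclic contains 'x' and 'y'
```

Because edges point from a distribution to what it depends on, dependencies come out before their dependents, which is the order you would install them in.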
+ def make_graph(dists, scheme='default'):
+ """Makes a dependency graph from the given distributions.
+
+ :parameter dists: a list of distributions
+ :type dists: list of :class:`distutils2.database.InstalledDistribution` and
+ :class:`distutils2.database.EggInfoDistribution` instances
+ :rtype: a :class:`DependencyGraph` instance
+ """
+ scheme = get_scheme(scheme)
+ graph = DependencyGraph()
+ provided = {} # maps names to lists of (version, dist) tuples
+
+ # first, build the graph and find out what's provided
+ for dist in dists:
+ graph.add_distribution(dist)
+
+ for p in dist.provides:
+ name, version = parse_name_and_version(p)
+ logger.debug('Add to provided: %s, %s, %s', name, version, dist)
+ provided.setdefault(name, []).append((version, dist))
+
+ # now make the edges
+ for dist in dists:
+ requires = (dist.run_requires | dist.meta_requires | dist.build_requires | dist.dev_requires)
+ for req in requires:
+ try:
+ matcher = scheme.matcher(req)
+ except UnsupportedVersionError:
+ # XXX compat-mode if cannot read the version
+ logger.warning('could not read version %r - using name only', req)
+ name = req.split()[0]
+ matcher = scheme.matcher(name)
+
+ name = matcher.key # case-insensitive
+
+ matched = False
+ if name in provided:
+ for version, provider in provided[name]:
+ try:
+ match = matcher.match(version)
+ except UnsupportedVersionError:
+ match = False
+
+ if match:
+ graph.add_edge(dist, provider, req)
+ matched = True
+ break
+ if not matched:
+ graph.add_missing(dist, req)
+ return graph
+
+
+ def get_dependent_dists(dists, dist):
+ """Recursively generate a list of distributions from *dists* that are
+ dependent on *dist*.
+
+ :param dists: a list of distributions
+ :param dist: a distribution, member of *dists* for which we are interested
+ """
+ if dist not in dists:
+ raise DistlibException('given distribution %r is not a member '
+ 'of the list' % dist.name)
+ graph = make_graph(dists)
+
+ dep = [dist] # dependent distributions
+ todo = graph.reverse_list[dist] # list of nodes we should inspect
+
+ while todo:
+ d = todo.pop()
+ dep.append(d)
+ for succ in graph.reverse_list[d]:
+ if succ not in dep:
+ todo.append(succ)
+
+ dep.pop(0) # remove dist from dep, was there to prevent infinite loops
+ return dep
+
+
+ def get_required_dists(dists, dist):
+ """Recursively generate a list of distributions from *dists* that are
+ required by *dist*.
+
+ :param dists: a list of distributions
+ :param dist: a distribution, member of *dists* for which we are interested
+ in finding the dependencies.
+ """
+ if dist not in dists:
+ raise DistlibException('given distribution %r is not a member '
+ 'of the list' % dist.name)
+ graph = make_graph(dists)
+
+ req = set() # required distributions
+ todo = graph.adjacency_list[dist] # list of nodes we should inspect
+ seen = set(t[0] for t in todo) # already added to todo
+
+ while todo:
+ d = todo.pop()[0]
+ req.add(d)
+ pred_list = graph.adjacency_list[d]
+ for pred in pred_list:
+ d = pred[0]
+ if d not in req and d not in seen:
+ seen.add(d)
+ todo.append(pred)
+ return req
+
+
+ def make_dist(name, version, **kwargs):
+ """
+ A convenience method for making a dist given just a name and version.
+ """
+ summary = kwargs.pop('summary', 'Placeholder for summary')
+ md = Metadata(**kwargs)
+ md.name = name
+ md.version = version
+ md.summary = summary or 'Placeholder for summary'
+ return Distribution(md)
llava/lib/python3.10/site-packages/pip/_vendor/distlib/locators.py ADDED
@@ -0,0 +1,1295 @@
1
+ # -*- coding: utf-8 -*-
2
+ #
3
+ # Copyright (C) 2012-2023 Vinay Sajip.
4
+ # Licensed to the Python Software Foundation under a contributor agreement.
5
+ # See LICENSE.txt and CONTRIBUTORS.txt.
6
+ #
7
+
8
+ import gzip
9
+ from io import BytesIO
10
+ import json
11
+ import logging
12
+ import os
13
+ import posixpath
14
+ import re
15
+ try:
16
+ import threading
17
+ except ImportError: # pragma: no cover
18
+ import dummy_threading as threading
19
+ import zlib
20
+
21
+ from . import DistlibException
22
+ from .compat import (urljoin, urlparse, urlunparse, url2pathname, pathname2url, queue, quote, unescape, build_opener,
23
+ HTTPRedirectHandler as BaseRedirectHandler, text_type, Request, HTTPError, URLError)
24
+ from .database import Distribution, DistributionPath, make_dist
25
+ from .metadata import Metadata, MetadataInvalidError
26
+ from .util import (cached_property, ensure_slash, split_filename, get_project_data, parse_requirement,
27
+ parse_name_and_version, ServerProxy, normalize_name)
28
+ from .version import get_scheme, UnsupportedVersionError
29
+ from .wheel import Wheel, is_compatible
30
+
31
+ logger = logging.getLogger(__name__)
32
+
33
+ HASHER_HASH = re.compile(r'^(\w+)=([a-f0-9]+)')
34
+ CHARSET = re.compile(r';\s*charset\s*=\s*(.*)\s*$', re.I)
35
+ HTML_CONTENT_TYPE = re.compile('text/html|application/x(ht)?ml')
36
+ DEFAULT_INDEX = 'https://pypi.org/pypi'
37
+
38
+
39
+ def get_all_distribution_names(url=None):
40
+ """
41
+ Return all distribution names known by an index.
42
+ :param url: The URL of the index.
43
+ :return: A list of all known distribution names.
44
+ """
45
+ if url is None:
46
+ url = DEFAULT_INDEX
47
+ client = ServerProxy(url, timeout=3.0)
48
+ try:
49
+ return client.list_packages()
50
+ finally:
51
+ client('close')()
52
+
53
+
54
+ class RedirectHandler(BaseRedirectHandler):
55
+ """
56
+ A class to work around a bug in some Python 3.2.x releases.
57
+ """
58
+
59
+ # There's a bug in the base version for some 3.2.x
60
+ # (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header
61
+ # returns e.g. /abc, it bails because it says the scheme ''
62
+ # is bogus, when actually it should use the request's
63
+ # URL for the scheme. See Python issue #13696.
64
+ def http_error_302(self, req, fp, code, msg, headers):
65
+ # Some servers (incorrectly) return multiple Location headers
66
+ # (so probably same goes for URI). Use first header.
67
+ newurl = None
68
+ for key in ('location', 'uri'):
69
+ if key in headers:
70
+ newurl = headers[key]
71
+ break
72
+ if newurl is None: # pragma: no cover
73
+ return
74
+ urlparts = urlparse(newurl)
75
+ if urlparts.scheme == '':
76
+ newurl = urljoin(req.get_full_url(), newurl)
77
+ if hasattr(headers, 'replace_header'):
78
+ headers.replace_header(key, newurl)
79
+ else:
80
+ headers[key] = newurl
81
+ return BaseRedirectHandler.http_error_302(self, req, fp, code, msg, headers)
82
+
83
+ http_error_301 = http_error_303 = http_error_307 = http_error_302
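The handler's key repair is resolving a scheme-less `Location` header (e.g. `/abc`) against the originating request's URL. A stdlib-only sketch of that resolution step (`resolve_location` is an illustrative name, not part of distlib):

```python
# Sketch of the relative-Location repair above: a redirect target with no
# scheme is joined onto the request URL instead of being rejected.
from urllib.parse import urljoin, urlparse

def resolve_location(request_url, location):
    if urlparse(location).scheme == '':
        location = urljoin(request_url, location)
    return location

print(resolve_location('https://pypi.org/simple/foo/', '/abc'))
# https://pypi.org/abc
```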
84
+
85
+
86
+ class Locator(object):
87
+ """
88
+ A base class for locators - things that locate distributions.
89
+ """
90
+ source_extensions = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz')
91
+ binary_extensions = ('.egg', '.exe', '.whl')
92
+ excluded_extensions = ('.pdf', )
93
+
94
+ # A list of tags indicating which wheels you want to match. The default
95
+ # value of None matches against the tags compatible with the running
96
+ # Python. If you want to match other values, set wheel_tags on a locator
97
+ # instance to a list of tuples (pyver, abi, arch) which you want to match.
98
+ wheel_tags = None
99
+
100
+ downloadable_extensions = source_extensions + ('.whl', )
101
+
102
+ def __init__(self, scheme='default'):
103
+ """
104
+ Initialise an instance.
105
+ :param scheme: Because locators look for most recent versions, they
106
+ need to know the version scheme to use. This specifies
107
+ the current PEP-recommended scheme - use ``'legacy'``
108
+ if you need to support existing distributions on PyPI.
109
+ """
110
+ self._cache = {}
111
+ self.scheme = scheme
112
+ # Because of bugs in some of the handlers on some of the platforms,
113
+ # we use our own opener rather than just using urlopen.
114
+ self.opener = build_opener(RedirectHandler())
115
+ # If get_project() is called from locate(), the matcher instance
116
+ # is set from the requirement passed to locate(). See issue #18 for
117
+ # why this can be useful to know.
118
+ self.matcher = None
119
+ self.errors = queue.Queue()
120
+
121
+ def get_errors(self):
122
+ """
123
+ Return any errors which have occurred.
124
+ """
125
+ result = []
126
+ while not self.errors.empty(): # pragma: no cover
127
+ try:
128
+ e = self.errors.get(False)
129
+ result.append(e)
130
+ except queue.Empty: # the Empty exception lives on the queue module, not the instance
131
+ continue
132
+ self.errors.task_done()
133
+ return result
134
+
135
+ def clear_errors(self):
136
+ """
137
+ Clear any errors which may have been logged.
138
+ """
139
+ # Just get the errors and throw them away
140
+ self.get_errors()
141
+
142
+ def clear_cache(self):
143
+ self._cache.clear()
144
+
145
+ def _get_scheme(self):
146
+ return self._scheme
147
+
148
+ def _set_scheme(self, value):
149
+ self._scheme = value
150
+
151
+ scheme = property(_get_scheme, _set_scheme)
152
+
153
+ def _get_project(self, name):
154
+ """
155
+ For a given project, get a dictionary mapping available versions to Distribution
156
+ instances.
157
+
158
+ This should be implemented in subclasses.
159
+
160
+ If called from a locate() request, self.matcher will be set to a
161
+ matcher for the requirement to satisfy, otherwise it will be None.
162
+ """
163
+ raise NotImplementedError('Please implement in the subclass')
164
+
165
+ def get_distribution_names(self):
166
+ """
167
+ Return all the distribution names known to this locator.
168
+ """
169
+ raise NotImplementedError('Please implement in the subclass')
170
+
171
+ def get_project(self, name):
172
+ """
173
+ For a given project, get a dictionary mapping available versions to Distribution
174
+ instances.
175
+
176
+ This calls _get_project to do all the work, and just implements a caching layer on top.
177
+ """
178
+ if self._cache is None: # pragma: no cover
179
+ result = self._get_project(name)
180
+ elif name in self._cache:
181
+ result = self._cache[name]
182
+ else:
183
+ self.clear_errors()
184
+ result = self._get_project(name)
185
+ self._cache[name] = result
186
+ return result
187
+
188
+ def score_url(self, url):
189
+ """
190
+ Give a URL a score which can be used to choose preferred URLs
191
+ for a given project release.
192
+ """
193
+ t = urlparse(url)
194
+ basename = posixpath.basename(t.path)
195
+ compatible = True
196
+ is_wheel = basename.endswith('.whl')
197
+ is_downloadable = basename.endswith(self.downloadable_extensions)
198
+ if is_wheel:
199
+ compatible = is_compatible(Wheel(basename), self.wheel_tags)
200
+ return (t.scheme == 'https', 'pypi.org' in t.netloc, is_downloadable, is_wheel, compatible, basename)
201
+
202
+ def prefer_url(self, url1, url2):
203
+ """
204
+ Choose one of two URLs where both are candidates for distribution
205
+ archives for the same version of a distribution (for example,
206
+ .tar.gz vs. zip).
207
+
208
+ The current implementation favours https:// URLs over http://, archives
209
+ from PyPI over those from other locations, wheel compatibility (if a
210
+ wheel) and then the archive name.
211
+ """
212
+ result = url2
213
+ if url1:
214
+ s1 = self.score_url(url1)
215
+ s2 = self.score_url(url2)
216
+ if s1 > s2:
217
+ result = url1
218
+ if result != url2:
219
+ logger.debug('Not replacing %r with %r', url1, url2)
220
+ else:
221
+ logger.debug('Replacing %r with %r', url1, url2)
222
+ return result
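Because `score_url` returns a tuple, `prefer_url` can rank candidates with plain lexicographic comparison: https beats http, pypi.org hosting beats mirrors, and so on down the tuple. A simplified standalone sketch (the extension list is an assumption for the example):

```python
# Standalone sketch of the URL-scoring idea above: scores are tuples
# compared element by element, so earlier criteria dominate later ones.
from urllib.parse import urlparse
import posixpath

def score_url(url, downloadable=('.tar.gz', '.zip', '.whl')):
    t = urlparse(url)
    basename = posixpath.basename(t.path)
    return (t.scheme == 'https', 'pypi.org' in t.netloc,
            basename.endswith(downloadable), basename)

u1 = 'https://pypi.org/packages/foo-1.0.tar.gz'
u2 = 'http://example.com/foo-1.0.tar.gz'
assert score_url(u1) > score_url(u2)  # https + pypi.org outranks an http mirror
```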
223
+
224
+ def split_filename(self, filename, project_name):
225
+ """
226
+ Attempt to split a filename in project name, version and Python version.
227
+ """
228
+ return split_filename(filename, project_name)
229
+
230
+ def convert_url_to_download_info(self, url, project_name):
231
+ """
232
+ See if a URL is a candidate for a download URL for a project (the URL
233
+ has typically been scraped from an HTML page).
234
+
235
+ If it is, a dictionary is returned with keys "name", "version",
236
+ "filename" and "url"; otherwise, None is returned.
237
+ """
238
+
239
+ def same_project(name1, name2):
240
+ return normalize_name(name1) == normalize_name(name2)
241
+
242
+ result = None
243
+ scheme, netloc, path, params, query, frag = urlparse(url)
244
+ if frag.lower().startswith('egg='): # pragma: no cover
245
+ logger.debug('%s: version hint in fragment: %r', project_name, frag)
246
+ m = HASHER_HASH.match(frag)
247
+ if m:
248
+ algo, digest = m.groups()
249
+ else:
250
+ algo, digest = None, None
251
+ origpath = path
252
+ if path and path[-1] == '/': # pragma: no cover
253
+ path = path[:-1]
254
+ if path.endswith('.whl'):
255
+ try:
256
+ wheel = Wheel(path)
257
+ if not is_compatible(wheel, self.wheel_tags):
258
+ logger.debug('Wheel not compatible: %s', path)
259
+ else:
260
+ if project_name is None:
261
+ include = True
262
+ else:
263
+ include = same_project(wheel.name, project_name)
264
+ if include:
265
+ result = {
266
+ 'name': wheel.name,
267
+ 'version': wheel.version,
268
+ 'filename': wheel.filename,
269
+ 'url': urlunparse((scheme, netloc, origpath, params, query, '')),
270
+ 'python-version': ', '.join(['.'.join(list(v[2:])) for v in wheel.pyver]),
271
+ }
272
+ except Exception: # pragma: no cover
273
+ logger.warning('invalid path for wheel: %s', path)
274
+ elif not path.endswith(self.downloadable_extensions): # pragma: no cover
275
+ logger.debug('Not downloadable: %s', path)
276
+ else: # downloadable extension
277
+ path = filename = posixpath.basename(path)
278
+ for ext in self.downloadable_extensions:
279
+ if path.endswith(ext):
280
+ path = path[:-len(ext)]
281
+ t = self.split_filename(path, project_name)
282
+ if not t: # pragma: no cover
283
+ logger.debug('No match for project/version: %s', path)
284
+ else:
285
+ name, version, pyver = t
286
+ if not project_name or same_project(project_name, name):
287
+ result = {
288
+ 'name': name,
289
+ 'version': version,
290
+ 'filename': filename,
291
+ 'url': urlunparse((scheme, netloc, origpath, params, query, '')),
292
+ }
293
+ if pyver: # pragma: no cover
294
+ result['python-version'] = pyver
295
+ break
296
+ if result and algo:
297
+ result['%s_digest' % algo] = digest
298
+ return result
299
+
300
+ def _get_digest(self, info):
301
+ """
302
+ Get a digest from a dictionary by looking at a "digests" dictionary
303
+ or keys of the form 'algo_digest'.
304
+
305
+ Returns a 2-tuple (algo, digest) if found, else None. Currently
306
+ looks only for SHA256, then MD5.
307
+ """
308
+ result = None
309
+ if 'digests' in info:
310
+ digests = info['digests']
311
+ for algo in ('sha256', 'md5'):
312
+ if algo in digests:
313
+ result = (algo, digests[algo])
314
+ break
315
+ if not result:
316
+ for algo in ('sha256', 'md5'):
317
+ key = '%s_digest' % algo
318
+ if key in info:
319
+ result = (algo, info[key])
320
+ break
321
+ return result
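The lookup order above (a nested `digests` dict first, then flat `<algo>_digest` keys, SHA256 preferred over MD5) can be sketched standalone:

```python
# Standalone sketch of the digest lookup above: prefer the "digests"
# dictionary, SHA256 before MD5, then fall back to "<algo>_digest" keys.
def get_digest(info):
    digests = info.get('digests', {})
    for algo in ('sha256', 'md5'):
        if algo in digests:
            return (algo, digests[algo])
    for algo in ('sha256', 'md5'):
        key = '%s_digest' % algo
        if key in info:
            return (algo, info[key])
    return None

info = {'digests': {'md5': 'abc', 'sha256': 'def'}}
assert get_digest(info) == ('sha256', 'def')            # sha256 wins over md5
assert get_digest({'md5_digest': 'abc'}) == ('md5', 'abc')
assert get_digest({}) is None
```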
322
+
323
+ def _update_version_data(self, result, info):
324
+ """
325
+ Update a result dictionary (the final result from _get_project) with a
326
+ dictionary for a specific version, which typically holds information
327
+ gleaned from a filename or URL for an archive for the distribution.
328
+ """
329
+ name = info.pop('name')
330
+ version = info.pop('version')
331
+ if version in result:
332
+ dist = result[version]
333
+ md = dist.metadata
334
+ else:
335
+ dist = make_dist(name, version, scheme=self.scheme)
336
+ md = dist.metadata
337
+ dist.digest = digest = self._get_digest(info)
338
+ url = info['url']
339
+ result['digests'][url] = digest
340
+ if md.source_url != info['url']:
341
+ md.source_url = self.prefer_url(md.source_url, url)
342
+ result['urls'].setdefault(version, set()).add(url)
343
+ dist.locator = self
344
+ result[version] = dist
345
+
346
+ def locate(self, requirement, prereleases=False):
347
+ """
348
+ Find the most recent distribution which matches the given
349
+ requirement.
350
+
351
+ :param requirement: A requirement of the form 'foo (1.0)' or perhaps
352
+ 'foo (>= 1.0, < 2.0, != 1.3)'
353
+ :param prereleases: If ``True``, allow pre-release versions
354
+ to be located. Otherwise, pre-release versions
355
+ are not returned.
356
+ :return: A :class:`Distribution` instance, or ``None`` if no such
357
+ distribution could be located.
358
+ """
359
+ result = None
360
+ r = parse_requirement(requirement)
361
+ if r is None: # pragma: no cover
362
+ raise DistlibException('Not a valid requirement: %r' % requirement)
363
+ scheme = get_scheme(self.scheme)
364
+ self.matcher = matcher = scheme.matcher(r.requirement)
365
+ logger.debug('matcher: %s (%s)', matcher, type(matcher).__name__)
366
+ versions = self.get_project(r.name)
367
+ if len(versions) > 2: # urls and digests keys are present
368
+ # sometimes, versions are invalid
369
+ slist = []
370
+ vcls = matcher.version_class
371
+ for k in versions:
372
+ if k in ('urls', 'digests'):
373
+ continue
374
+ try:
375
+ if not matcher.match(k):
376
+ pass # logger.debug('%s did not match %r', matcher, k)
377
+ else:
378
+ if prereleases or not vcls(k).is_prerelease:
379
+ slist.append(k)
380
+ except Exception: # pragma: no cover
381
+ logger.warning('error matching %s with %r', matcher, k)
382
+ pass # slist.append(k)
383
+ if len(slist) > 1:
384
+ slist = sorted(slist, key=scheme.key)
385
+ if slist:
386
+ logger.debug('sorted list: %s', slist)
387
+ version = slist[-1]
388
+ result = versions[version]
389
+ if result:
390
+ if r.extras:
391
+ result.extras = r.extras
392
+ result.download_urls = versions.get('urls', {}).get(version, set())
393
+ d = {}
394
+ sd = versions.get('digests', {})
395
+ for url in result.download_urls:
396
+ if url in sd: # pragma: no cover
397
+ d[url] = sd[url]
398
+ result.digests = d
399
+ self.matcher = None
400
+ return result
401
+
402
+
403
+ class PyPIRPCLocator(Locator):
404
+ """
405
+ This locator uses XML-RPC to locate distributions. It therefore
406
+ cannot be used with simple mirrors (that only mirror file content).
407
+ """
408
+
409
+ def __init__(self, url, **kwargs):
410
+ """
411
+ Initialise an instance.
412
+
413
+ :param url: The URL to use for XML-RPC.
414
+ :param kwargs: Passed to the superclass constructor.
415
+ """
416
+ super(PyPIRPCLocator, self).__init__(**kwargs)
417
+ self.base_url = url
418
+ self.client = ServerProxy(url, timeout=3.0)
419
+
420
+ def get_distribution_names(self):
421
+ """
422
+ Return all the distribution names known to this locator.
423
+ """
424
+ return set(self.client.list_packages())
425
+
426
+ def _get_project(self, name):
427
+ result = {'urls': {}, 'digests': {}}
428
+ versions = self.client.package_releases(name, True)
429
+ for v in versions:
430
+ urls = self.client.release_urls(name, v)
431
+ data = self.client.release_data(name, v)
432
+ metadata = Metadata(scheme=self.scheme)
433
+ metadata.name = data['name']
434
+ metadata.version = data['version']
435
+ metadata.license = data.get('license')
436
+ metadata.keywords = data.get('keywords', [])
437
+ metadata.summary = data.get('summary')
438
+ dist = Distribution(metadata)
439
+ if urls:
440
+ info = urls[0]
441
+ metadata.source_url = info['url']
442
+ dist.digest = self._get_digest(info)
443
+ dist.locator = self
444
+ result[v] = dist
445
+ for info in urls:
446
+ url = info['url']
447
+ digest = self._get_digest(info)
448
+ result['urls'].setdefault(v, set()).add(url)
449
+ result['digests'][url] = digest
450
+ return result
451
+
452
+
453
+ class PyPIJSONLocator(Locator):
454
+ """
455
+ This locator uses PyPI's JSON interface. It's very limited in functionality
456
+ and probably not worth using.
457
+ """
458
+
459
+ def __init__(self, url, **kwargs):
460
+ super(PyPIJSONLocator, self).__init__(**kwargs)
461
+ self.base_url = ensure_slash(url)
462
+
463
+ def get_distribution_names(self):
464
+ """
465
+ Return all the distribution names known to this locator.
466
+ """
467
+ raise NotImplementedError('Not available from this locator')
468
+
469
+ def _get_project(self, name):
470
+ result = {'urls': {}, 'digests': {}}
471
+ url = urljoin(self.base_url, '%s/json' % quote(name))
472
+ try:
473
+ resp = self.opener.open(url)
474
+ data = resp.read().decode() # for now
475
+ d = json.loads(data)
476
+ md = Metadata(scheme=self.scheme)
477
+ data = d['info']
478
+ md.name = data['name']
479
+ md.version = data['version']
480
+ md.license = data.get('license')
481
+ md.keywords = data.get('keywords', [])
482
+ md.summary = data.get('summary')
483
+ dist = Distribution(md)
484
+ dist.locator = self
485
+ # urls = d['urls']
486
+ result[md.version] = dist
487
+ for info in d['urls']:
488
+ url = info['url']
489
+ dist.download_urls.add(url)
490
+ dist.digests[url] = self._get_digest(info)
491
+ result['urls'].setdefault(md.version, set()).add(url)
492
+ result['digests'][url] = self._get_digest(info)
493
+ # Now get other releases
494
+ for version, infos in d['releases'].items():
495
+ if version == md.version:
496
+ continue # already done
497
+ omd = Metadata(scheme=self.scheme)
498
+ omd.name = md.name
499
+ omd.version = version
500
+ odist = Distribution(omd)
501
+ odist.locator = self
502
+ result[version] = odist
503
+ for info in infos:
504
+ url = info['url']
505
+ odist.download_urls.add(url)
506
+ odist.digests[url] = self._get_digest(info)
507
+ result['urls'].setdefault(version, set()).add(url)
508
+ result['digests'][url] = self._get_digest(info)
509
+
510
+
511
+ # for info in urls:
512
+ # md.source_url = info['url']
513
+ # dist.digest = self._get_digest(info)
514
+ # dist.locator = self
515
+ # for info in urls:
516
+ # url = info['url']
517
+ # result['urls'].setdefault(md.version, set()).add(url)
518
+ # result['digests'][url] = self._get_digest(info)
519
+ except Exception as e:
520
+ self.errors.put(text_type(e))
521
+ logger.exception('JSON fetch failed: %s', e)
522
+ return result
523
+
524
+
525
+ class Page(object):
526
+ """
527
+ This class represents a scraped HTML page.
528
+ """
529
+ # The following slightly hairy-looking regex just looks for the contents of
530
+ # an anchor link, which has an attribute "href" either immediately preceded
531
+ # or immediately followed by a "rel" attribute. The attribute values can be
532
+ # declared with double quotes, single quotes or no quotes - which leads to
533
+ # the length of the expression.
534
+ _href = re.compile(
535
+ """
536
+ (rel\\s*=\\s*(?:"(?P<rel1>[^"]*)"|'(?P<rel2>[^']*)'|(?P<rel3>[^>\\s\n]*))\\s+)?
537
+ href\\s*=\\s*(?:"(?P<url1>[^"]*)"|'(?P<url2>[^']*)'|(?P<url3>[^>\\s\n]*))
538
+ (\\s+rel\\s*=\\s*(?:"(?P<rel4>[^"]*)"|'(?P<rel5>[^']*)'|(?P<rel6>[^>\\s\n]*)))?
539
+ """, re.I | re.S | re.X)
540
+ _base = re.compile(r"""<base\s+href\s*=\s*['"]?([^'">]+)""", re.I | re.S)
541
+
542
+ def __init__(self, data, url):
543
+ """
544
+ Initialise an instance with the Unicode page contents and the URL they
545
+ came from.
546
+ """
547
+ self.data = data
548
+ self.base_url = self.url = url
549
+ m = self._base.search(self.data)
550
+ if m:
551
+ self.base_url = m.group(1)
552
+
553
+ _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I)
554
+
555
+ @cached_property
556
+ def links(self):
557
+ """
558
+ Return the URLs of all the links on a page together with information
559
+ about their "rel" attribute, for determining which ones to treat as
560
+ downloads and which ones to queue for further scraping.
561
+ """
562
+
563
+ def clean(url):
564
+ "Tidy up a URL."
565
+ scheme, netloc, path, params, query, frag = urlparse(url)
566
+ return urlunparse((scheme, netloc, quote(path), params, query, frag))
567
+
568
+ result = set()
569
+ for match in self._href.finditer(self.data):
570
+ d = match.groupdict('')
571
+ rel = (d['rel1'] or d['rel2'] or d['rel3'] or d['rel4'] or d['rel5'] or d['rel6'])
572
+ url = d['url1'] or d['url2'] or d['url3']
573
+ url = urljoin(self.base_url, url)
574
+ url = unescape(url)
575
+ url = self._clean_re.sub(lambda m: '%%%2x' % ord(m.group(0)), url)
576
+ result.add((url, rel))
577
+ # We sort the result, hoping to bring the most recent versions
578
+ # to the front
579
+ result = sorted(result, key=lambda t: t[0], reverse=True)
580
+ return result
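The `links` property above pairs each anchor's `href` with its optional `rel` attribute and resolves it against the page's base URL. A simplified sketch of that extraction, handling only double-quoted attributes (the real regex also copes with single-quoted and unquoted values):

```python
# Simplified sketch of the link extraction above: pull href (and an
# optional rel on either side) out of anchor markup, then resolve the
# URL against the page's base URL.
import re
from urllib.parse import urljoin

HREF = re.compile(r'(?:rel\s*=\s*"(?P<rel1>[^"]*)"\s+)?'
                  r'href\s*=\s*"(?P<url>[^"]*)"'
                  r'(?:\s+rel\s*=\s*"(?P<rel2>[^"]*)")?', re.I)

def links(data, base_url):
    result = set()
    for m in HREF.finditer(data):
        d = m.groupdict('')
        result.add((urljoin(base_url, d['url']), d['rel1'] or d['rel2']))
    return sorted(result, reverse=True)

page = '<a href="foo-1.0.tar.gz" rel="download">foo</a>'
print(links(page, 'https://example.com/simple/foo/'))
```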
581
+
582
+
583
+ class SimpleScrapingLocator(Locator):
584
+ """
585
+ A locator which scrapes HTML pages to locate downloads for a distribution.
586
+ This runs multiple threads to do the I/O; performance is at least as good
587
+ as pip's PackageFinder, which works in an analogous fashion.
588
+ """
589
+
590
+ # These are used to deal with various Content-Encoding schemes.
591
+ decoders = {
592
+ 'deflate': zlib.decompress,
593
+ 'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(),
594
+ 'none': lambda b: b,
595
+ }
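Each `Content-Encoding` value maps to a callable that turns the raw response body back into page bytes, which a quick round-trip demonstrates:

```python
# Sketch of the Content-Encoding table above: each encoding maps to a
# callable that decompresses the raw body bytes.
import gzip
import zlib
from io import BytesIO

decoders = {
    'deflate': zlib.decompress,
    'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(),
    'none': lambda b: b,
}

body = b'<html>index</html>'
assert decoders['gzip'](gzip.compress(body)) == body
assert decoders['deflate'](zlib.compress(body)) == body
assert decoders['none'](body) == body
```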
596
+
597
+ def __init__(self, url, timeout=None, num_workers=10, **kwargs):
598
+ """
599
+ Initialise an instance.
600
+ :param url: The root URL to use for scraping.
601
+ :param timeout: The timeout, in seconds, to be applied to requests.
602
+ This defaults to ``None`` (no timeout specified).
603
+ :param num_workers: The number of worker threads you want to do I/O,
604
+ This defaults to 10.
605
+ :param kwargs: Passed to the superclass.
606
+ """
607
+ super(SimpleScrapingLocator, self).__init__(**kwargs)
608
+ self.base_url = ensure_slash(url)
609
+ self.timeout = timeout
610
+ self._page_cache = {}
611
+ self._seen = set()
612
+ self._to_fetch = queue.Queue()
613
+ self._bad_hosts = set()
614
+ self.skip_externals = False
615
+ self.num_workers = num_workers
616
+ self._lock = threading.RLock()
617
+ # See issue #45: we need to be resilient when the locator is used
618
+ # in a thread, e.g. with concurrent.futures. We can't use self._lock
619
+ # as it is for coordinating our internal threads - the ones created
620
+ # in _prepare_threads.
621
+ self._gplock = threading.RLock()
622
+ self.platform_check = False # See issue #112
623
+
624
+ def _prepare_threads(self):
625
+ """
626
+ Threads are created only when get_project is called, and terminate
627
+ before it returns. They are there primarily to parallelise I/O (i.e.
628
+ fetching web pages).
629
+ """
630
+ self._threads = []
631
+ for i in range(self.num_workers):
632
+ t = threading.Thread(target=self._fetch)
633
+ t.daemon = True
634
+ t.start()
635
+ self._threads.append(t)
636
+
637
+ def _wait_threads(self):
638
+ """
639
+ Tell all the threads to terminate (by sending a sentinel value) and
640
+ wait for them to do so.
641
+ """
642
+ # Note that you need two loops, since you can't say which
643
+ # thread will get each sentinel
644
+ for t in self._threads:
645
+ self._to_fetch.put(None) # sentinel
646
+ for t in self._threads:
647
+ t.join()
648
+ self._threads = []
649
+
650
+ def _get_project(self, name):
651
+ result = {'urls': {}, 'digests': {}}
652
+ with self._gplock:
653
+ self.result = result
654
+ self.project_name = name
655
+ url = urljoin(self.base_url, '%s/' % quote(name))
656
+ self._seen.clear()
657
+ self._page_cache.clear()
658
+ self._prepare_threads()
659
+ try:
660
+ logger.debug('Queueing %s', url)
661
+ self._to_fetch.put(url)
662
+ self._to_fetch.join()
663
+ finally:
664
+ self._wait_threads()
665
+ del self.result
666
+ return result
667
+
668
+ platform_dependent = re.compile(r'\b(linux_(i\d86|x86_64|arm\w+)|'
669
+ r'win(32|_amd64)|macosx_?\d+)\b', re.I)
670
+
671
+ def _is_platform_dependent(self, url):
672
+ """
673
+ Does a URL refer to a platform-specific download?
674
+ """
675
+ return self.platform_dependent.search(url)
676
+
677
+ def _process_download(self, url):
678
+ """
679
+ See if a URL is a suitable download for a project.
680
+
681
+ If it is, register information in the result dictionary (for
682
+ _get_project) about the specific version it's for.
683
+
684
+ Note that the return value isn't actually used other than as a boolean
685
+ value.
686
+ """
687
+ if self.platform_check and self._is_platform_dependent(url):
688
+ info = None
689
+ else:
690
+ info = self.convert_url_to_download_info(url, self.project_name)
691
+ logger.debug('process_download: %s -> %s', url, info)
692
+ if info:
693
+ with self._lock: # needed because self.result is shared
694
+ self._update_version_data(self.result, info)
695
+ return info
696
+
697
+ def _should_queue(self, link, referrer, rel):
698
+ """
699
+ Determine whether a link URL from a referring page and with a
700
+ particular "rel" attribute should be queued for scraping.
701
+ """
702
+ scheme, netloc, path, _, _, _ = urlparse(link)
703
+ if path.endswith(self.source_extensions + self.binary_extensions + self.excluded_extensions):
704
+ result = False
705
+ elif self.skip_externals and not link.startswith(self.base_url):
706
+ result = False
707
+ elif not referrer.startswith(self.base_url):
708
+ result = False
709
+ elif rel not in ('homepage', 'download'):
710
+ result = False
711
+ elif scheme not in ('http', 'https', 'ftp'):
712
+ result = False
713
+ elif self._is_platform_dependent(link):
714
+ result = False
715
+ else:
716
+ host = netloc.split(':', 1)[0]
717
+ if host.lower() == 'localhost':
718
+ result = False
719
+ else:
720
+ result = True
721
+ logger.debug('should_queue: %s (%s) from %s -> %s', link, rel, referrer, result)
722
+ return result
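The decision chain above is a pure filter: a link is queued for further scraping only if it is not an archive, its referrer is on the index, its `rel` marks it as a homepage or download page, its scheme is allowed, and it does not point at localhost. A condensed standalone sketch (the extension list is an assumption for the example):

```python
# Standalone sketch of the queueing filter above: every branch that
# returns False mirrors one rejection rule in _should_queue.
from urllib.parse import urlparse

ARCHIVES = ('.tar.gz', '.zip', '.whl', '.egg', '.exe', '.pdf')

def should_queue(link, referrer, rel, base_url):
    scheme, netloc, path = urlparse(link)[:3]
    if path.endswith(ARCHIVES):
        return False                       # already a download, not a page
    if not referrer.startswith(base_url):
        return False                       # only follow links from the index
    if rel not in ('homepage', 'download'):
        return False
    if scheme not in ('http', 'https', 'ftp'):
        return False
    return netloc.split(':', 1)[0].lower() != 'localhost'

base = 'https://example.com/simple/'
assert should_queue('https://example.com/simple/foo/', base, 'homepage', base)
assert not should_queue('http://localhost:8000/foo/', base, 'homepage', base)
assert not should_queue('https://example.com/foo-1.0.tar.gz', base, 'download', base)
```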
723
+
724
+ def _fetch(self):
725
+ """
726
+ Get a URL to fetch from the work queue, get the HTML page, examine its
727
+ links for download candidates and candidates for further scraping.
728
+
729
+ This is a handy method to run in a thread.
730
+ """
731
+ while True:
732
+ url = self._to_fetch.get()
733
+ try:
734
+ if url:
735
+ page = self.get_page(url)
736
+ if page is None: # e.g. after an error
737
+ continue
738
+ for link, rel in page.links:
739
+ if link not in self._seen:
740
+ try:
741
+ self._seen.add(link)
742
+ if (not self._process_download(link) and self._should_queue(link, url, rel)):
743
+ logger.debug('Queueing %s from %s', link, url)
744
+ self._to_fetch.put(link)
745
+ except MetadataInvalidError: # e.g. invalid versions
746
+ pass
747
+ except Exception as e: # pragma: no cover
748
+ self.errors.put(text_type(e))
749
+ finally:
750
+ # always do this, to avoid hangs :-)
751
+ self._to_fetch.task_done()
752
+ if not url:
753
+ # logger.debug('Sentinel seen, quitting.')
754
+ break
755
+
756
+ def get_page(self, url):
757
+ """
758
+ Get the HTML for a URL, possibly from an in-memory cache.
759
+
760
+ XXX TODO Note: this cache is never actually cleared. It's assumed that
761
+ the data won't get stale over the lifetime of a locator instance (not
762
+ necessarily true for the default_locator).
763
+ """
764
+ # http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api
765
+ scheme, netloc, path, _, _, _ = urlparse(url)
766
+ if scheme == 'file' and os.path.isdir(url2pathname(path)):
767
+ url = urljoin(ensure_slash(url), 'index.html')
768
+
769
+ if url in self._page_cache:
770
+ result = self._page_cache[url]
771
+ logger.debug('Returning %s from cache: %s', url, result)
772
+ else:
773
+ host = netloc.split(':', 1)[0]
774
+ result = None
775
+ if host in self._bad_hosts:
776
+ logger.debug('Skipping %s due to bad host %s', url, host)
777
+ else:
778
+ req = Request(url, headers={'Accept-encoding': 'identity'})
779
+ try:
780
+ logger.debug('Fetching %s', url)
781
+ resp = self.opener.open(req, timeout=self.timeout)
782
+ logger.debug('Fetched %s', url)
783
+ headers = resp.info()
784
+ content_type = headers.get('Content-Type', '')
785
+ if HTML_CONTENT_TYPE.match(content_type):
786
+ final_url = resp.geturl()
787
+ data = resp.read()
788
+ encoding = headers.get('Content-Encoding')
789
+ if encoding:
790
+ decoder = self.decoders[encoding] # fail if not found
791
+ data = decoder(data)
792
+ encoding = 'utf-8'
793
+ m = CHARSET.search(content_type)
794
+ if m:
795
+ encoding = m.group(1)
796
+ try:
797
+ data = data.decode(encoding)
798
+ except UnicodeError: # pragma: no cover
799
+ data = data.decode('latin-1') # fallback
800
+ result = Page(data, final_url)
801
+ self._page_cache[final_url] = result
802
+ except HTTPError as e:
803
+ if e.code != 404:
804
+ logger.exception('Fetch failed: %s: %s', url, e)
805
+ except URLError as e: # pragma: no cover
806
+ logger.exception('Fetch failed: %s: %s', url, e)
807
+ with self._lock:
808
+ self._bad_hosts.add(host)
809
+ except Exception as e: # pragma: no cover
810
+ logger.exception('Fetch failed: %s: %s', url, e)
811
+ finally:
812
+ self._page_cache[url] = result # even if None (failure)
813
+ return result
814
+
815
+ _distname_re = re.compile('<a href=[^>]*>([^<]+)<')
816
+
817
+ def get_distribution_names(self):
818
+ """
819
+ Return all the distribution names known to this locator.
820
+ """
821
+ result = set()
822
+ page = self.get_page(self.base_url)
823
+ if not page:
824
+ raise DistlibException('Unable to get %s' % self.base_url)
825
+ for match in self._distname_re.finditer(page.data):
826
+ result.add(match.group(1))
827
+ return result
828
+
829
+
830
+ class DirectoryLocator(Locator):
831
+ """
832
+ This class locates distributions in a directory tree.
833
+ """
834
+
835
+ def __init__(self, path, **kwargs):
836
+ """
837
+ Initialise an instance.
838
+ :param path: The root of the directory tree to search.
839
+ :param kwargs: Passed to the superclass constructor,
840
+ except for:
841
+ * recursive - if True (the default), subdirectories are
842
+ recursed into. If False, only the top-level directory
843
+ is searched,
844
+ """
845
+ self.recursive = kwargs.pop('recursive', True)
846
+ super(DirectoryLocator, self).__init__(**kwargs)
847
+ path = os.path.abspath(path)
848
+ if not os.path.isdir(path): # pragma: no cover
849
+ raise DistlibException('Not a directory: %r' % path)
850
+ self.base_dir = path
851
+
852
+ def should_include(self, filename, parent):
853
+ """
854
+ Should a filename be considered as a candidate for a distribution
855
+ archive? As well as the filename, the directory which contains it
856
+ is provided, though not used by the current implementation.
857
+ """
858
+ return filename.endswith(self.downloadable_extensions)
859
+
860
+ def _get_project(self, name):
861
+ result = {'urls': {}, 'digests': {}}
862
+ for root, dirs, files in os.walk(self.base_dir):
863
+ for fn in files:
864
+ if self.should_include(fn, root):
865
+ fn = os.path.join(root, fn)
866
+ url = urlunparse(('file', '', pathname2url(os.path.abspath(fn)), '', '', ''))
867
+ info = self.convert_url_to_download_info(url, name)
868
+ if info:
869
+ self._update_version_data(result, info)
870
+ if not self.recursive:
871
+ break
872
+ return result
873
+
874
+ def get_distribution_names(self):
875
+ """
876
+ Return all the distribution names known to this locator.
877
+ """
878
+ result = set()
879
+ for root, dirs, files in os.walk(self.base_dir):
880
+ for fn in files:
881
+ if self.should_include(fn, root):
882
+ fn = os.path.join(root, fn)
883
+ url = urlunparse(('file', '', pathname2url(os.path.abspath(fn)), '', '', ''))
884
+ info = self.convert_url_to_download_info(url, None)
885
+ if info:
886
+ result.add(info['name'])
887
+ if not self.recursive:
888
+ break
889
+ return result
890
+
891
+
892
+ class JSONLocator(Locator):
893
+ """
894
+ This locator uses special extended metadata (not available on PyPI) and is
895
+ the basis of performant dependency resolution in distlib. Other locators
896
+ require archive downloads before dependencies can be determined! As you
897
+ might imagine, that can be slow.
898
+ """
899
+
900
+ def get_distribution_names(self):
901
+ """
902
+ Return all the distribution names known to this locator.
903
+ """
904
+ raise NotImplementedError('Not available from this locator')
905
+
906
+ def _get_project(self, name):
907
+ result = {'urls': {}, 'digests': {}}
908
+ data = get_project_data(name)
909
+ if data:
910
+ for info in data.get('files', []):
911
+ if info['ptype'] != 'sdist' or info['pyversion'] != 'source':
912
+ continue
913
+ # We don't store summary in project metadata as it makes
914
+ # the data bigger for no benefit during dependency
915
+ # resolution
916
+ dist = make_dist(data['name'],
917
+ info['version'],
918
+ summary=data.get('summary', 'Placeholder for summary'),
919
+ scheme=self.scheme)
920
+ md = dist.metadata
921
+ md.source_url = info['url']
922
+ # TODO SHA256 digest
923
+ if 'digest' in info and info['digest']:
924
+ dist.digest = ('md5', info['digest'])
925
+ md.dependencies = info.get('requirements', {})
926
+ dist.exports = info.get('exports', {})
927
+ result[dist.version] = dist
928
+ result['urls'].setdefault(dist.version, set()).add(info['url'])
929
+ return result
930
+
931
+
932
+ class DistPathLocator(Locator):
933
+ """
934
+ This locator finds installed distributions in a path. It can be useful for
935
+ adding to an :class:`AggregatingLocator`.
936
+ """
937
+
938
+ def __init__(self, distpath, **kwargs):
939
+ """
940
+ Initialise an instance.
941
+
942
+ :param distpath: A :class:`DistributionPath` instance to search.
943
+ """
944
+ super(DistPathLocator, self).__init__(**kwargs)
945
+ assert isinstance(distpath, DistributionPath)
946
+ self.distpath = distpath
947
+
948
+ def _get_project(self, name):
949
+ dist = self.distpath.get_distribution(name)
950
+ if dist is None:
951
+ result = {'urls': {}, 'digests': {}}
952
+ else:
953
+ result = {
954
+ dist.version: dist,
955
+ 'urls': {
956
+ dist.version: set([dist.source_url])
957
+ },
958
+ 'digests': {
959
+ dist.version: set([None])
960
+ }
961
+ }
962
+ return result
963
+
964
+
965
+ class AggregatingLocator(Locator):
966
+ """
967
+ This class allows you to chain and/or merge a list of locators.
968
+ """
969
+
970
+ def __init__(self, *locators, **kwargs):
971
+ """
972
+ Initialise an instance.
973
+
974
+ :param locators: The list of locators to search.
975
+ :param kwargs: Passed to the superclass constructor,
976
+ except for:
977
+ * merge - if False (the default), the first successful
978
+ search from any of the locators is returned. If True,
979
+ the results from all locators are merged (this can be
980
+ slow).
981
+ """
982
+ self.merge = kwargs.pop('merge', False)
983
+ self.locators = locators
984
+ super(AggregatingLocator, self).__init__(**kwargs)
985
+
986
+ def clear_cache(self):
987
+ super(AggregatingLocator, self).clear_cache()
988
+ for locator in self.locators:
989
+ locator.clear_cache()
990
+
991
+ def _set_scheme(self, value):
992
+ self._scheme = value
993
+ for locator in self.locators:
994
+ locator.scheme = value
995
+
996
+ scheme = property(Locator.scheme.fget, _set_scheme)
997
+
998
+ def _get_project(self, name):
999
+ result = {}
1000
+ for locator in self.locators:
1001
+ d = locator.get_project(name)
1002
+ if d:
1003
+ if self.merge:
1004
+ files = result.get('urls', {})
1005
+ digests = result.get('digests', {})
1006
+ # next line could overwrite result['urls'], result['digests']
1007
+ result.update(d)
1008
+ df = result.get('urls')
1009
+ if files and df:
1010
+ for k, v in files.items():
1011
+ if k in df:
1012
+ df[k] |= v
1013
+ else:
1014
+ df[k] = v
1015
+ dd = result.get('digests')
1016
+ if digests and dd:
1017
+ dd.update(digests)
1018
+ else:
1019
+ # See issue #18. If any dists are found and we're looking
1020
+ # for specific constraints, we only return something if
1021
+ # a match is found. For example, if a DirectoryLocator
1022
+ # returns just foo (1.0) while we're looking for
1023
+ # foo (>= 2.0), we'll pretend there was nothing there so
1024
+ # that subsequent locators can be queried. Otherwise we
1025
+ # would just return foo (1.0) which would then lead to a
1026
+ # failure to find foo (>= 2.0), because other locators
1027
+ # weren't searched. Note that this only matters when
1028
+ # merge=False.
1029
+ if self.matcher is None:
1030
+ found = True
1031
+ else:
1032
+ found = False
1033
+ for k in d:
1034
+ if self.matcher.match(k):
1035
+ found = True
1036
+ break
1037
+ if found:
1038
+ result = d
1039
+ break
1040
+ return result
1041
+
1042
+ def get_distribution_names(self):
1043
+ """
1044
+ Return all the distribution names known to this locator.
1045
+ """
1046
+ result = set()
1047
+ for locator in self.locators:
1048
+ try:
1049
+ result |= locator.get_distribution_names()
1050
+ except NotImplementedError:
1051
+ pass
1052
+ return result
1053
+
1054
+
1055
+ # We use a legacy scheme simply because most of the dists on PyPI use legacy
1056
+ # versions which don't conform to PEP 440.
1057
+ default_locator = AggregatingLocator(
1058
+ # JSONLocator(), # don't use as PEP 426 is withdrawn
1059
+ SimpleScrapingLocator('https://pypi.org/simple/', timeout=3.0),
1060
+ scheme='legacy')
1061
+
1062
+ locate = default_locator.locate
1063
+
1064
+
1065
+ class DependencyFinder(object):
1066
+ """
1067
+ Locate dependencies for distributions.
1068
+ """
1069
+
1070
+ def __init__(self, locator=None):
1071
+ """
1072
+ Initialise an instance, using the specified locator
1073
+ to locate distributions.
1074
+ """
1075
+ self.locator = locator or default_locator
1076
+ self.scheme = get_scheme(self.locator.scheme)
1077
+
1078
+ def add_distribution(self, dist):
1079
+ """
1080
+ Add a distribution to the finder. This will update internal information
1081
+ about who provides what.
1082
+ :param dist: The distribution to add.
1083
+ """
1084
+ logger.debug('adding distribution %s', dist)
1085
+ name = dist.key
1086
+ self.dists_by_name[name] = dist
1087
+ self.dists[(name, dist.version)] = dist
1088
+ for p in dist.provides:
1089
+ name, version = parse_name_and_version(p)
1090
+ logger.debug('Add to provided: %s, %s, %s', name, version, dist)
1091
+ self.provided.setdefault(name, set()).add((version, dist))
1092
+
1093
+ def remove_distribution(self, dist):
1094
+ """
1095
+ Remove a distribution from the finder. This will update internal
1096
+ information about who provides what.
1097
+ :param dist: The distribution to remove.
1098
+ """
1099
+ logger.debug('removing distribution %s', dist)
1100
+ name = dist.key
1101
+ del self.dists_by_name[name]
1102
+ del self.dists[(name, dist.version)]
1103
+ for p in dist.provides:
1104
+ name, version = parse_name_and_version(p)
1105
+ logger.debug('Remove from provided: %s, %s, %s', name, version, dist)
1106
+ s = self.provided[name]
1107
+ s.remove((version, dist))
1108
+ if not s:
1109
+ del self.provided[name]
1110
+
1111
+ def get_matcher(self, reqt):
1112
+ """
1113
+ Get a version matcher for a requirement.
1114
+ :param reqt: The requirement
1115
+ :type reqt: str
1116
+ :return: A version matcher (an instance of
1117
+ :class:`distlib.version.Matcher`).
1118
+ """
1119
+ try:
1120
+ matcher = self.scheme.matcher(reqt)
1121
+ except UnsupportedVersionError: # pragma: no cover
1122
+ # XXX compat-mode if cannot read the version
1123
+ name = reqt.split()[0]
1124
+ matcher = self.scheme.matcher(name)
1125
+ return matcher
1126
+
1127
+ def find_providers(self, reqt):
1128
+ """
1129
+ Find the distributions which can fulfill a requirement.
1130
+
1131
+ :param reqt: The requirement.
1132
+ :type reqt: str
1133
+ :return: A set of distribution which can fulfill the requirement.
1134
+ """
1135
+ matcher = self.get_matcher(reqt)
1136
+ name = matcher.key # case-insensitive
1137
+ result = set()
1138
+ provided = self.provided
1139
+ if name in provided:
1140
+ for version, provider in provided[name]:
1141
+ try:
1142
+ match = matcher.match(version)
1143
+ except UnsupportedVersionError:
1144
+ match = False
1145
+
1146
+ if match:
1147
+ result.add(provider)
1148
+ break
1149
+ return result
1150
+
1151
+ def try_to_replace(self, provider, other, problems):
1152
+ """
1153
+ Attempt to replace one provider with another. This is typically used
1154
+ when resolving dependencies from multiple sources, e.g. A requires
1155
+ (B >= 1.0) while C requires (B >= 1.1).
1156
+
1157
+ For successful replacement, ``provider`` must meet all the requirements
1158
+ which ``other`` fulfills.
1159
+
1160
+ :param provider: The provider we are trying to replace with.
1161
+ :param other: The provider we're trying to replace.
1162
+ :param problems: If False is returned, this will contain what
1163
+ problems prevented replacement. This is currently
1164
+ a tuple of the literal string 'cantreplace',
1165
+ ``provider``, ``other`` and the set of requirements
1166
+ that ``provider`` couldn't fulfill.
1167
+ :return: True if we can replace ``other`` with ``provider``, else
1168
+ False.
1169
+ """
1170
+ rlist = self.reqts[other]
1171
+ unmatched = set()
1172
+ for s in rlist:
1173
+ matcher = self.get_matcher(s)
1174
+ if not matcher.match(provider.version):
1175
+ unmatched.add(s)
1176
+ if unmatched:
1177
+ # can't replace other with provider
1178
+ problems.add(('cantreplace', provider, other, frozenset(unmatched)))
1179
+ result = False
1180
+ else:
1181
+ # can replace other with provider
1182
+ self.remove_distribution(other)
1183
+ del self.reqts[other]
1184
+ for s in rlist:
1185
+ self.reqts.setdefault(provider, set()).add(s)
1186
+ self.add_distribution(provider)
1187
+ result = True
1188
+ return result
1189
+
1190
+ def find(self, requirement, meta_extras=None, prereleases=False):
1191
+ """
1192
+ Find a distribution and all distributions it depends on.
1193
+
1194
+ :param requirement: The requirement specifying the distribution to
1195
+ find, or a Distribution instance.
1196
+ :param meta_extras: A list of meta extras such as :test:, :build: and
1197
+ so on.
1198
+ :param prereleases: If ``True``, allow pre-release versions to be
1199
+ returned - otherwise, don't return prereleases
1200
+ unless they're all that's available.
1201
+
1202
+ Return a set of :class:`Distribution` instances and a set of
1203
+ problems.
1204
+
1205
+ The distributions returned should be such that they have the
1206
+ :attr:`required` attribute set to ``True`` if they were
1207
+ from the ``requirement`` passed to ``find()``, and they have the
1208
+ :attr:`build_time_dependency` attribute set to ``True`` unless they
1209
+ are post-installation dependencies of the ``requirement``.
1210
+
1211
+ The problems should be a tuple consisting of the string
1212
+ ``'unsatisfied'`` and the requirement which couldn't be satisfied
1213
+ by any distribution known to the locator.
1214
+ """
1215
+
1216
+ self.provided = {}
1217
+ self.dists = {}
1218
+ self.dists_by_name = {}
1219
+ self.reqts = {}
1220
+
1221
+ meta_extras = set(meta_extras or [])
1222
+ if ':*:' in meta_extras:
1223
+ meta_extras.remove(':*:')
1224
+ # :meta: and :run: are implicitly included
1225
+ meta_extras |= set([':test:', ':build:', ':dev:'])
1226
+
1227
+ if isinstance(requirement, Distribution):
1228
+ dist = odist = requirement
1229
+ logger.debug('passed %s as requirement', odist)
1230
+ else:
1231
+ dist = odist = self.locator.locate(requirement, prereleases=prereleases)
1232
+ if dist is None:
1233
+ raise DistlibException('Unable to locate %r' % requirement)
1234
+ logger.debug('located %s', odist)
1235
+ dist.requested = True
1236
+ problems = set()
1237
+ todo = set([dist])
1238
+ install_dists = set([odist])
1239
+ while todo:
1240
+ dist = todo.pop()
1241
+ name = dist.key # case-insensitive
1242
+ if name not in self.dists_by_name:
1243
+ self.add_distribution(dist)
1244
+ else:
1245
+ # import pdb; pdb.set_trace()
1246
+ other = self.dists_by_name[name]
1247
+ if other != dist:
1248
+ self.try_to_replace(dist, other, problems)
1249
+
1250
+ ireqts = dist.run_requires | dist.meta_requires
1251
+ sreqts = dist.build_requires
1252
+ ereqts = set()
1253
+ if meta_extras and dist in install_dists:
1254
+ for key in ('test', 'build', 'dev'):
1255
+ e = ':%s:' % key
1256
+ if e in meta_extras:
1257
+ ereqts |= getattr(dist, '%s_requires' % key)
1258
+ all_reqts = ireqts | sreqts | ereqts
1259
+ for r in all_reqts:
1260
+ providers = self.find_providers(r)
1261
+ if not providers:
1262
+ logger.debug('No providers found for %r', r)
1263
+ provider = self.locator.locate(r, prereleases=prereleases)
1264
+ # If no provider is found and we didn't consider
1265
+ # prereleases, consider them now.
1266
+ if provider is None and not prereleases:
1267
+ provider = self.locator.locate(r, prereleases=True)
1268
+ if provider is None:
1269
+ logger.debug('Cannot satisfy %r', r)
1270
+ problems.add(('unsatisfied', r))
1271
+ else:
1272
+ n, v = provider.key, provider.version
1273
+ if (n, v) not in self.dists:
1274
+ todo.add(provider)
1275
+ providers.add(provider)
1276
+ if r in ireqts and dist in install_dists:
1277
+ install_dists.add(provider)
1278
+ logger.debug('Adding %s to install_dists', provider.name_and_version)
1279
+ for p in providers:
1280
+ name = p.key
1281
+ if name not in self.dists_by_name:
1282
+ self.reqts.setdefault(p, set()).add(r)
1283
+ else:
1284
+ other = self.dists_by_name[name]
1285
+ if other != p:
1286
+ # see if other can be replaced by p
1287
+ self.try_to_replace(p, other, problems)
1288
+
1289
+ dists = set(self.dists.values())
1290
+ for dist in dists:
1291
+ dist.build_time_dependency = dist not in install_dists
1292
+ if dist.build_time_dependency:
1293
+ logger.debug('%s is a build-time dependency only.', dist.name_and_version)
1294
+ logger.debug('find done for %s', odist)
1295
+ return dists, problems
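The merge branch of `AggregatingLocator._get_project` in the file above combines per-version URL sets from several locators. A minimal standalone sketch of that set-union merge, using plain dicts so it runs without distlib (the URLs and the `merge_project_data` name are illustrative, not part of the vendored API):

```python
def merge_project_data(result, d):
    """Merge locator output `d` into `result`, mirroring the set-union
    logic of AggregatingLocator._get_project when merge=True."""
    files = result.get('urls', {})
    digests = result.get('digests', {})
    # update() may overwrite result['urls'] / result['digests'] wholesale...
    result.update(d)
    df = result.get('urls')
    if files and df:
        for k, v in files.items():
            if k in df:
                df[k] |= v   # ...so re-union URL sets for versions seen in both
            else:
                df[k] = v
    dd = result.get('digests')
    if digests and dd:
        dd.update(digests)
    return result

a = {'urls': {'1.0': {'http://a/foo-1.0.tar.gz'}}, 'digests': {}}
b = {'urls': {'1.0': {'http://b/foo-1.0.tar.gz'},
              '2.0': {'http://b/foo-2.0.tar.gz'}}, 'digests': {}}
merged = merge_project_data(a, b)
print(sorted(merged['urls']['1.0']))
```

Note the comment in the vendored code itself: without the re-union step, `result.update(d)` would silently drop URLs already collected for a version by an earlier locator.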
llava/lib/python3.10/site-packages/pip/_vendor/distlib/w32.exe ADDED
Binary file (91.6 kB). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/idna/__pycache__/idnadata.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c4b791a4898b259bd3d7412e86c80461d4bf79be88814303662a9eeb9d80daee
3
+ size 194427
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (15.8 kB). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/macos.cpython-310.pyc ADDED
Binary file (6.5 kB). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/unix.cpython-310.pyc ADDED
Binary file (10.5 kB). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/version.cpython-310.pyc ADDED
Binary file (497 Bytes). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/__pycache__/windows.cpython-310.pyc ADDED
Binary file (9.07 kB). View file
 
llava/lib/python3.10/site-packages/pip/_vendor/platformdirs/android.py ADDED
@@ -0,0 +1,249 @@
1
+ """Android."""
2
+
3
+ from __future__ import annotations
4
+
5
+ import os
6
+ import re
7
+ import sys
8
+ from functools import lru_cache
9
+ from typing import TYPE_CHECKING, cast
10
+
11
+ from .api import PlatformDirsABC
12
+
13
+
14
+ class Android(PlatformDirsABC):
15
+ """
16
+ Follows the guidance `from here <https://android.stackexchange.com/a/216132>`_.
17
+
18
+ Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`, `version
19
+ <platformdirs.api.PlatformDirsABC.version>`, `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
20
+
21
+ """
22
+
23
+ @property
24
+ def user_data_dir(self) -> str:
25
+ """:return: data directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/files/<AppName>``"""
26
+ return self._append_app_name_and_version(cast(str, _android_folder()), "files")
27
+
28
+ @property
29
+ def site_data_dir(self) -> str:
30
+ """:return: data directory shared by users, same as `user_data_dir`"""
31
+ return self.user_data_dir
32
+
33
+ @property
34
+ def user_config_dir(self) -> str:
35
+ """
36
+ :return: config directory tied to the user, e.g. \
37
+ ``/data/user/<userid>/<packagename>/shared_prefs/<AppName>``
38
+ """
39
+ return self._append_app_name_and_version(cast(str, _android_folder()), "shared_prefs")
40
+
41
+ @property
42
+ def site_config_dir(self) -> str:
43
+ """:return: config directory shared by the users, same as `user_config_dir`"""
44
+ return self.user_config_dir
45
+
46
+ @property
47
+ def user_cache_dir(self) -> str:
48
+ """:return: cache directory tied to the user, e.g.,``/data/user/<userid>/<packagename>/cache/<AppName>``"""
49
+ return self._append_app_name_and_version(cast(str, _android_folder()), "cache")
50
+
51
+ @property
52
+ def site_cache_dir(self) -> str:
53
+ """:return: cache directory shared by users, same as `user_cache_dir`"""
54
+ return self.user_cache_dir
55
+
56
+ @property
57
+ def user_state_dir(self) -> str:
58
+ """:return: state directory tied to the user, same as `user_data_dir`"""
59
+ return self.user_data_dir
60
+
61
+ @property
62
+ def user_log_dir(self) -> str:
63
+ """
64
+ :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it,
65
+ e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/log``
66
+ """
67
+ path = self.user_cache_dir
68
+ if self.opinion:
69
+ path = os.path.join(path, "log") # noqa: PTH118
70
+ return path
71
+
72
+ @property
73
+ def user_documents_dir(self) -> str:
74
+ """:return: documents directory tied to the user e.g. ``/storage/emulated/0/Documents``"""
75
+ return _android_documents_folder()
76
+
77
+ @property
78
+ def user_downloads_dir(self) -> str:
79
+ """:return: downloads directory tied to the user e.g. ``/storage/emulated/0/Downloads``"""
80
+ return _android_downloads_folder()
81
+
82
+ @property
83
+ def user_pictures_dir(self) -> str:
84
+ """:return: pictures directory tied to the user e.g. ``/storage/emulated/0/Pictures``"""
85
+ return _android_pictures_folder()
86
+
87
+ @property
88
+ def user_videos_dir(self) -> str:
89
+ """:return: videos directory tied to the user e.g. ``/storage/emulated/0/DCIM/Camera``"""
90
+ return _android_videos_folder()
91
+
92
+ @property
93
+ def user_music_dir(self) -> str:
94
+ """:return: music directory tied to the user e.g. ``/storage/emulated/0/Music``"""
95
+ return _android_music_folder()
96
+
97
+ @property
98
+ def user_desktop_dir(self) -> str:
99
+ """:return: desktop directory tied to the user e.g. ``/storage/emulated/0/Desktop``"""
100
+ return "/storage/emulated/0/Desktop"
101
+
102
+ @property
103
+ def user_runtime_dir(self) -> str:
104
+ """
105
+ :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it,
106
+ e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/tmp``
107
+ """
108
+ path = self.user_cache_dir
109
+ if self.opinion:
110
+ path = os.path.join(path, "tmp") # noqa: PTH118
111
+ return path
112
+
113
+ @property
114
+ def site_runtime_dir(self) -> str:
115
+ """:return: runtime directory shared by users, same as `user_runtime_dir`"""
116
+ return self.user_runtime_dir
117
+
118
+
119
+ @lru_cache(maxsize=1)
120
+ def _android_folder() -> str | None: # noqa: C901
121
+ """:return: base folder for the Android OS or None if it cannot be found"""
122
+ result: str | None = None
123
+ # type checker isn't happy with our "import android", just don't do this when type checking see
124
+ # https://stackoverflow.com/a/61394121
125
+ if not TYPE_CHECKING:
126
+ try:
127
+ # First try to get a path to android app using python4android (if available)...
128
+ from android import mActivity # noqa: PLC0415
129
+
130
+ context = cast("android.content.Context", mActivity.getApplicationContext()) # noqa: F821
131
+ result = context.getFilesDir().getParentFile().getAbsolutePath()
132
+ except Exception: # noqa: BLE001
133
+ result = None
134
+ if result is None:
135
+ try:
136
+ # ...and fall back to using plain pyjnius, if python4android isn't available or doesn't deliver any useful
137
+ # result...
138
+ from jnius import autoclass # noqa: PLC0415
139
+
140
+ context = autoclass("android.content.Context")
141
+ result = context.getFilesDir().getParentFile().getAbsolutePath()
142
+ except Exception: # noqa: BLE001
143
+ result = None
144
+ if result is None:
145
+ # and if that fails, too, find an android folder looking at path on the sys.path
146
+ # warning: only works for apps installed under /data, not adopted storage etc.
147
+ pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files")
148
+ for path in sys.path:
149
+ if pattern.match(path):
150
+ result = path.split("/files")[0]
151
+ break
152
+ else:
153
+ result = None
154
+ if result is None:
155
+ # one last try: find an android folder looking at path on the sys.path taking adopted storage paths into
156
+ # account
157
+ pattern = re.compile(r"/mnt/expand/[a-fA-F0-9-]{36}/(data|user/\d+)/(.+)/files")
158
+ for path in sys.path:
159
+ if pattern.match(path):
160
+ result = path.split("/files")[0]
161
+ break
162
+ else:
163
+ result = None
164
+ return result
165
+
166
+
167
+ @lru_cache(maxsize=1)
168
+ def _android_documents_folder() -> str:
169
+ """:return: documents folder for the Android OS"""
170
+ # Get directories with pyjnius
171
+ try:
172
+ from jnius import autoclass # noqa: PLC0415
173
+
174
+ context = autoclass("android.content.Context")
175
+ environment = autoclass("android.os.Environment")
176
+ documents_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOCUMENTS).getAbsolutePath()
177
+ except Exception: # noqa: BLE001
178
+ documents_dir = "/storage/emulated/0/Documents"
179
+
180
+ return documents_dir
181
+
182
+
183
+ @lru_cache(maxsize=1)
184
+ def _android_downloads_folder() -> str:
185
+ """:return: downloads folder for the Android OS"""
186
+ # Get directories with pyjnius
187
+ try:
188
+ from jnius import autoclass # noqa: PLC0415
189
+
190
+ context = autoclass("android.content.Context")
191
+ environment = autoclass("android.os.Environment")
192
+ downloads_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOWNLOADS).getAbsolutePath()
193
+ except Exception: # noqa: BLE001
194
+ downloads_dir = "/storage/emulated/0/Downloads"
195
+
196
+ return downloads_dir
197
+
198
+
199
+ @lru_cache(maxsize=1)
200
+ def _android_pictures_folder() -> str:
201
+ """:return: pictures folder for the Android OS"""
202
+ # Get directories with pyjnius
203
+ try:
204
+ from jnius import autoclass # noqa: PLC0415
205
+
206
+ context = autoclass("android.content.Context")
207
+ environment = autoclass("android.os.Environment")
208
+ pictures_dir: str = context.getExternalFilesDir(environment.DIRECTORY_PICTURES).getAbsolutePath()
209
+ except Exception: # noqa: BLE001
210
+ pictures_dir = "/storage/emulated/0/Pictures"
211
+
212
+ return pictures_dir
213
+
214
+
215
+ @lru_cache(maxsize=1)
216
+ def _android_videos_folder() -> str:
217
+ """:return: videos folder for the Android OS"""
218
+ # Get directories with pyjnius
219
+ try:
220
+ from jnius import autoclass # noqa: PLC0415
221
+
222
+ context = autoclass("android.content.Context")
223
+ environment = autoclass("android.os.Environment")
224
+ videos_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DCIM).getAbsolutePath()
225
+ except Exception: # noqa: BLE001
226
+ videos_dir = "/storage/emulated/0/DCIM/Camera"
227
+
228
+ return videos_dir
229
+
230
+
231
+ @lru_cache(maxsize=1)
232
+ def _android_music_folder() -> str:
233
+ """:return: music folder for the Android OS"""
234
+ # Get directories with pyjnius
235
+ try:
236
+ from jnius import autoclass # noqa: PLC0415
237
+
238
+ context = autoclass("android.content.Context")
239
+ environment = autoclass("android.os.Environment")
240
+ music_dir: str = context.getExternalFilesDir(environment.DIRECTORY_MUSIC).getAbsolutePath()
241
+ except Exception: # noqa: BLE001
242
+ music_dir = "/storage/emulated/0/Music"
243
+
244
+ return music_dir
245
+
246
+
247
+ __all__ = [
248
+ "Android",
249
+ ]
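The last-resort detection in `_android_folder` above scans `sys.path` with two regexes (one for normal `/data` installs, one for adopted storage under `/mnt/expand`). A standalone check of that matching, with a hypothetical package name for the example path:

```python
import re

# Same patterns as _android_folder's sys.path fallbacks above.
_DATA = re.compile(r"/data/(data|user/\d+)/(.+)/files")
_ADOPTED = re.compile(r"/mnt/expand/[a-fA-F0-9-]{36}/(data|user/\d+)/(.+)/files")

def base_from_path(path):
    """Return the app's base folder if `path` looks like an Android
    files directory, else None."""
    if _DATA.match(path) or _ADOPTED.match(path):
        return path.split("/files")[0]
    return None

print(base_from_path("/data/user/0/com.example.app/files"))
```

As the vendored comment warns, this fallback only works for apps whose code lives under `/data` (or the adopted-storage mount); it is why the pyjnius/python4android paths are tried first.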
llava/lib/python3.10/site-packages/pip/_vendor/requests/__init__.py ADDED
@@ -0,0 +1,179 @@
1
+ # __
2
+ # /__) _ _ _ _ _/ _
3
+ # / ( (- (/ (/ (- _) / _)
4
+ # /
5
+
6
+ """
7
+ Requests HTTP Library
8
+ ~~~~~~~~~~~~~~~~~~~~~
9
+
10
+ Requests is an HTTP library, written in Python, for human beings.
11
+ Basic GET usage:
12
+
13
+ >>> import requests
14
+ >>> r = requests.get('https://www.python.org')
15
+ >>> r.status_code
16
+ 200
17
+ >>> b'Python is a programming language' in r.content
18
+ True
19
+
20
+ ... or POST:
21
+
22
+ >>> payload = dict(key1='value1', key2='value2')
23
+ >>> r = requests.post('https://httpbin.org/post', data=payload)
24
+ >>> print(r.text)
25
+ {
26
+ ...
27
+ "form": {
28
+ "key1": "value1",
29
+ "key2": "value2"
30
+ },
31
+ ...
32
+ }
33
+
34
+ The other HTTP methods are supported - see `requests.api`. Full documentation
35
+ is at <https://requests.readthedocs.io>.
36
+
37
+ :copyright: (c) 2017 by Kenneth Reitz.
38
+ :license: Apache 2.0, see LICENSE for more details.
39
+ """
40
+
41
+ import warnings
42
+
43
+ from pip._vendor import urllib3
44
+
45
+ from .exceptions import RequestsDependencyWarning
46
+
47
+ charset_normalizer_version = None
48
+ chardet_version = None
49
+
50
+
51
+ def check_compatibility(urllib3_version, chardet_version, charset_normalizer_version):
52
+ urllib3_version = urllib3_version.split(".")
53
+ assert urllib3_version != ["dev"] # Verify urllib3 isn't installed from git.
54
+
55
+ # Sometimes, urllib3 only reports its version as 16.1.
56
+ if len(urllib3_version) == 2:
57
+ urllib3_version.append("0")
58
+
59
+ # Check urllib3 for compatibility.
60
+ major, minor, patch = urllib3_version # noqa: F811
61
+ major, minor, patch = int(major), int(minor), int(patch)
62
+ # urllib3 >= 1.21.1
63
+ assert major >= 1
64
+ if major == 1:
65
+ assert minor >= 21
66
+
67
+ # Check charset_normalizer for compatibility.
68
+ if chardet_version:
69
+ major, minor, patch = chardet_version.split(".")[:3]
70
+ major, minor, patch = int(major), int(minor), int(patch)
71
+ # chardet_version >= 3.0.2, < 6.0.0
72
+ assert (3, 0, 2) <= (major, minor, patch) < (6, 0, 0)
73
+ elif charset_normalizer_version:
74
+ major, minor, patch = charset_normalizer_version.split(".")[:3]
75
+ major, minor, patch = int(major), int(minor), int(patch)
76
+ # charset_normalizer >= 2.0.0 < 4.0.0
77
+ assert (2, 0, 0) <= (major, minor, patch) < (4, 0, 0)
78
+ else:
79
+ # pip does not need or use character detection
80
+ pass
81
+
82
+
83
+ def _check_cryptography(cryptography_version):
84
+ # cryptography < 1.3.4
85
+ try:
86
+ cryptography_version = list(map(int, cryptography_version.split(".")))
87
+ except ValueError:
88
+ return
89
+
90
+ if cryptography_version < [1, 3, 4]:
91
+ warning = "Old version of cryptography ({}) may cause slowdown.".format(
92
+ cryptography_version
93
+ )
94
+ warnings.warn(warning, RequestsDependencyWarning)
95
+
96
+
97
+ # Check imported dependencies for compatibility.
98
+ try:
99
+ check_compatibility(
100
+ urllib3.__version__, chardet_version, charset_normalizer_version
101
+ )
102
+ except (AssertionError, ValueError):
103
+ warnings.warn(
104
+ "urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
105
+ "version!".format(
106
+ urllib3.__version__, chardet_version, charset_normalizer_version
107
+ ),
108
+ RequestsDependencyWarning,
109
+ )
110
+
111
+ # Attempt to enable urllib3's fallback for SNI support
112
+ # if the standard library doesn't support SNI or the
113
+ # 'ssl' library isn't available.
114
+ try:
115
+ # Note: This logic prevents upgrading cryptography on Windows, if imported
116
+ # as part of pip.
117
+ from pip._internal.utils.compat import WINDOWS
118
+ if not WINDOWS:
119
+ raise ImportError("pip internals: don't import cryptography on Windows")
120
+ try:
121
+ import ssl
122
+ except ImportError:
123
+ ssl = None
124
+
125
+ if not getattr(ssl, "HAS_SNI", False):
126
+ from pip._vendor.urllib3.contrib import pyopenssl
127
+
128
+ pyopenssl.inject_into_urllib3()
129
+
130
+ # Check cryptography version
131
+ from cryptography import __version__ as cryptography_version
132
+
133
+ _check_cryptography(cryptography_version)
134
+ except ImportError:
135
+ pass
136
+
137
+ # urllib3's DependencyWarnings should be silenced.
138
+ from pip._vendor.urllib3.exceptions import DependencyWarning
139
+
140
+ warnings.simplefilter("ignore", DependencyWarning)
141
+
142
+ # Set default logging handler to avoid "No handler found" warnings.
143
+ import logging
144
+ from logging import NullHandler
145
+
146
+ from . import packages, utils
147
+ from .__version__ import (
148
+ __author__,
149
+ __author_email__,
150
+ __build__,
151
+ __cake__,
152
+ __copyright__,
153
+ __description__,
154
+ __license__,
155
+ __title__,
156
+ __url__,
157
+ __version__,
158
+ )
159
+ from .api import delete, get, head, options, patch, post, put, request
160
+ from .exceptions import (
161
+ ConnectionError,
162
+ ConnectTimeout,
163
+ FileModeWarning,
164
+ HTTPError,
165
+ JSONDecodeError,
166
+ ReadTimeout,
167
+ RequestException,
168
+ Timeout,
169
+ TooManyRedirects,
170
+ URLRequired,
171
+ )
172
+ from .models import PreparedRequest, Request, Response
173
+ from .sessions import Session, session
174
+ from .status_codes import codes
175
+
176
+ logging.getLogger(__name__).addHandler(NullHandler())
177
+
178
+ # FileModeWarnings go off per the default.
179
+ warnings.simplefilter("default", FileModeWarning, append=True)
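`check_compatibility` above parses dotted version strings into int triples, padding a two-part urllib3 version (e.g. "1.26") with a trailing zero before asserting `>= 1.21`. A standalone sketch of that normalisation and check (function names here are illustrative, not from the vendored module):

```python
def parse_version_triple(version):
    """Normalise "1.26" or "1.26.15" into a (major, minor, patch) int
    tuple, padding a missing patch with 0 as check_compatibility does."""
    parts = version.split(".")
    if len(parts) == 2:  # urllib3 sometimes reports only major.minor
        parts.append("0")
    return tuple(int(p) for p in parts[:3])

def urllib3_supported(version):
    """Mirror of the urllib3 assertions: major >= 1, and minor >= 21
    when major == 1."""
    major, minor, _patch = parse_version_triple(version)
    return major > 1 or (major == 1 and minor >= 21)

print(urllib3_supported("1.26"))
```

If the assertions in the real `check_compatibility` fail, requests does not abort: the surrounding `try`/`except` converts the failure into a `RequestsDependencyWarning`.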
llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/__version__.cpython-310.pyc ADDED
Binary file (529 Bytes)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/_internal_utils.cpython-310.pyc ADDED
Binary file (1.61 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/adapters.cpython-310.pyc ADDED
Binary file (22.1 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/api.cpython-310.pyc ADDED
Binary file (6.71 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/auth.cpython-310.pyc ADDED
Binary file (8.1 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/certs.cpython-310.pyc ADDED
Binary file (618 Bytes)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/cookies.cpython-310.pyc ADDED
Binary file (18.7 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/exceptions.cpython-310.pyc ADDED
Binary file (6.22 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/help.cpython-310.pyc ADDED
Binary file (2.8 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/models.cpython-310.pyc ADDED
Binary file (24.3 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/packages.cpython-310.pyc ADDED
Binary file (719 Bytes)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/sessions.cpython-310.pyc ADDED
Binary file (19.7 kB)

llava/lib/python3.10/site-packages/pip/_vendor/requests/__pycache__/status_codes.cpython-310.pyc ADDED
Binary file (4.72 kB)
 
llava/lib/python3.10/site-packages/pip/_vendor/requests/_internal_utils.py ADDED
@@ -0,0 +1,50 @@
+"""
+requests._internal_utils
+~~~~~~~~~~~~~~
+
+Provides utility functions that are consumed internally by Requests
+which depend on extremely few external helpers (such as compat)
+"""
+import re
+
+from .compat import builtin_str
+
+_VALID_HEADER_NAME_RE_BYTE = re.compile(rb"^[^:\s][^:\r\n]*$")
+_VALID_HEADER_NAME_RE_STR = re.compile(r"^[^:\s][^:\r\n]*$")
+_VALID_HEADER_VALUE_RE_BYTE = re.compile(rb"^\S[^\r\n]*$|^$")
+_VALID_HEADER_VALUE_RE_STR = re.compile(r"^\S[^\r\n]*$|^$")
+
+_HEADER_VALIDATORS_STR = (_VALID_HEADER_NAME_RE_STR, _VALID_HEADER_VALUE_RE_STR)
+_HEADER_VALIDATORS_BYTE = (_VALID_HEADER_NAME_RE_BYTE, _VALID_HEADER_VALUE_RE_BYTE)
+HEADER_VALIDATORS = {
+    bytes: _HEADER_VALIDATORS_BYTE,
+    str: _HEADER_VALIDATORS_STR,
+}
+
+
+def to_native_string(string, encoding="ascii"):
+    """Given a string object, regardless of type, returns a representation of
+    that string in the native string type, encoding and decoding where
+    necessary. This assumes ASCII unless told otherwise.
+    """
+    if isinstance(string, builtin_str):
+        out = string
+    else:
+        out = string.decode(encoding)
+
+    return out
+
+
+def unicode_is_ascii(u_string):
+    """Determine if unicode string only contains ASCII characters.
+
+    :param str u_string: unicode string to check. Must be unicode
+    and not Python 2 `str`.
+    :rtype: bool
+    """
+    assert isinstance(u_string, str)
+    try:
+        u_string.encode("ascii")
+        return True
+    except UnicodeEncodeError:
+        return False
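The validator regexes in `_internal_utils.py` above reject header names containing `:` or whitespace and header values containing bare CR/LF, which is the classic header-injection vector. A self-contained sketch, reproducing the same patterns so it runs without the vendored package (the helper `is_valid_header` is illustrative, not part of requests):

```python
import re

# Same patterns as requests._internal_utils, copied so this runs standalone.
_NAME_RE = re.compile(r"^[^:\s][^:\r\n]*$")
_VALUE_RE = re.compile(r"^\S[^\r\n]*$|^$")


def is_valid_header(name: str, value: str) -> bool:
    # Reject names with ":" or whitespace, and values that smuggle CR/LF
    # (header injection); an empty value is explicitly allowed by _VALUE_RE.
    return bool(_NAME_RE.match(name)) and bool(_VALUE_RE.match(value))


assert is_valid_header("Content-Type", "text/plain")
assert is_valid_header("X-Empty", "")
assert not is_valid_header("Bad:Name", "x")
assert not is_valid_header("X-Test", "evil\r\nInjected: 1")
```

Note that `re.match` anchors only at the start, so the trailing `$` in each pattern is what stops a CR/LF from slipping through mid-value.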
llava/lib/python3.10/site-packages/pip/_vendor/requests/adapters.py ADDED
@@ -0,0 +1,719 @@
+"""
+requests.adapters
+~~~~~~~~~~~~~~~~~
+
+This module contains the transport adapters that Requests uses to define
+and maintain connections.
+"""
+
+import os.path
+import socket  # noqa: F401
+import typing
+import warnings
+
+from pip._vendor.urllib3.exceptions import ClosedPoolError, ConnectTimeoutError
+from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError
+from pip._vendor.urllib3.exceptions import InvalidHeader as _InvalidHeader
+from pip._vendor.urllib3.exceptions import (
+    LocationValueError,
+    MaxRetryError,
+    NewConnectionError,
+    ProtocolError,
+)
+from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError
+from pip._vendor.urllib3.exceptions import ReadTimeoutError, ResponseError
+from pip._vendor.urllib3.exceptions import SSLError as _SSLError
+from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url
+from pip._vendor.urllib3.util import Timeout as TimeoutSauce
+from pip._vendor.urllib3.util import parse_url
+from pip._vendor.urllib3.util.retry import Retry
+from pip._vendor.urllib3.util.ssl_ import create_urllib3_context
+
+from .auth import _basic_auth_str
+from .compat import basestring, urlparse
+from .cookies import extract_cookies_to_jar
+from .exceptions import (
+    ConnectionError,
+    ConnectTimeout,
+    InvalidHeader,
+    InvalidProxyURL,
+    InvalidSchema,
+    InvalidURL,
+    ProxyError,
+    ReadTimeout,
+    RetryError,
+    SSLError,
+)
+from .models import Response
+from .structures import CaseInsensitiveDict
+from .utils import (
+    DEFAULT_CA_BUNDLE_PATH,
+    extract_zipped_paths,
+    get_auth_from_url,
+    get_encoding_from_headers,
+    prepend_scheme_if_needed,
+    select_proxy,
+    urldefragauth,
+)
+
+try:
+    from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager
+except ImportError:
+
+    def SOCKSProxyManager(*args, **kwargs):
+        raise InvalidSchema("Missing dependencies for SOCKS support.")
+
+
+if typing.TYPE_CHECKING:
+    from .models import PreparedRequest
+
+
+DEFAULT_POOLBLOCK = False
+DEFAULT_POOLSIZE = 10
+DEFAULT_RETRIES = 0
+DEFAULT_POOL_TIMEOUT = None
+
+
+try:
+    import ssl  # noqa: F401
+
+    _preloaded_ssl_context = create_urllib3_context()
+    _preloaded_ssl_context.load_verify_locations(
+        extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH)
+    )
+except ImportError:
+    # Bypass default SSLContext creation when Python
+    # interpreter isn't built with the ssl module.
+    _preloaded_ssl_context = None
+
+
+def _urllib3_request_context(
+    request: "PreparedRequest",
+    verify: "bool | str | None",
+    client_cert: "typing.Tuple[str, str] | str | None",
+    poolmanager: "PoolManager",
+) -> "(typing.Dict[str, typing.Any], typing.Dict[str, typing.Any])":
+    host_params = {}
+    pool_kwargs = {}
+    parsed_request_url = urlparse(request.url)
+    scheme = parsed_request_url.scheme.lower()
+    port = parsed_request_url.port
+
+    # Determine if we have and should use our default SSLContext
+    # to optimize performance on standard requests.
+    poolmanager_kwargs = getattr(poolmanager, "connection_pool_kw", {})
+    has_poolmanager_ssl_context = poolmanager_kwargs.get("ssl_context")
+    should_use_default_ssl_context = (
+        _preloaded_ssl_context is not None and not has_poolmanager_ssl_context
+    )
+
+    cert_reqs = "CERT_REQUIRED"
+    if verify is False:
+        cert_reqs = "CERT_NONE"
+    elif verify is True and should_use_default_ssl_context:
+        pool_kwargs["ssl_context"] = _preloaded_ssl_context
+    elif isinstance(verify, str):
+        if not os.path.isdir(verify):
+            pool_kwargs["ca_certs"] = verify
+        else:
+            pool_kwargs["ca_cert_dir"] = verify
+    pool_kwargs["cert_reqs"] = cert_reqs
+    if client_cert is not None:
+        if isinstance(client_cert, tuple) and len(client_cert) == 2:
+            pool_kwargs["cert_file"] = client_cert[0]
+            pool_kwargs["key_file"] = client_cert[1]
+        else:
+            # According to our docs, we allow users to specify just the client
+            # cert path
+            pool_kwargs["cert_file"] = client_cert
+    host_params = {
+        "scheme": scheme,
+        "host": parsed_request_url.hostname,
+        "port": port,
+    }
+    return host_params, pool_kwargs
+
+
+class BaseAdapter:
+    """The Base Transport Adapter"""
+
+    def __init__(self):
+        super().__init__()
+
+    def send(
+        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
+    ):
+        """Sends PreparedRequest object. Returns Response object.
+
+        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
+        :param stream: (optional) Whether to stream the request content.
+        :param timeout: (optional) How long to wait for the server to send
+            data before giving up, as a float, or a :ref:`(connect timeout,
+            read timeout) <timeouts>` tuple.
+        :type timeout: float or tuple
+        :param verify: (optional) Either a boolean, in which case it controls whether we verify
+            the server's TLS certificate, or a string, in which case it must be a path
+            to a CA bundle to use
+        :param cert: (optional) Any user-provided SSL certificate to be trusted.
+        :param proxies: (optional) The proxies dictionary to apply to the request.
+        """
+        raise NotImplementedError
+
+    def close(self):
+        """Cleans up adapter specific items."""
+        raise NotImplementedError
+
+
+class HTTPAdapter(BaseAdapter):
+    """The built-in HTTP Adapter for urllib3.
+
+    Provides a general-case interface for Requests sessions to contact HTTP and
+    HTTPS urls by implementing the Transport Adapter interface. This class will
+    usually be created by the :class:`Session <Session>` class under the
+    covers.
+
+    :param pool_connections: The number of urllib3 connection pools to cache.
+    :param pool_maxsize: The maximum number of connections to save in the pool.
+    :param max_retries: The maximum number of retries each connection
+        should attempt. Note, this applies only to failed DNS lookups, socket
+        connections and connection timeouts, never to requests where data has
+        made it to the server. By default, Requests does not retry failed
+        connections. If you need granular control over the conditions under
+        which we retry a request, import urllib3's ``Retry`` class and pass
+        that instead.
+    :param pool_block: Whether the connection pool should block for connections.
+
+    Usage::
+
+      >>> import requests
+      >>> s = requests.Session()
+      >>> a = requests.adapters.HTTPAdapter(max_retries=3)
+      >>> s.mount('http://', a)
+    """
+
+    __attrs__ = [
+        "max_retries",
+        "config",
+        "_pool_connections",
+        "_pool_maxsize",
+        "_pool_block",
+    ]
+
+    def __init__(
+        self,
+        pool_connections=DEFAULT_POOLSIZE,
+        pool_maxsize=DEFAULT_POOLSIZE,
+        max_retries=DEFAULT_RETRIES,
+        pool_block=DEFAULT_POOLBLOCK,
+    ):
+        if max_retries == DEFAULT_RETRIES:
+            self.max_retries = Retry(0, read=False)
+        else:
+            self.max_retries = Retry.from_int(max_retries)
+        self.config = {}
+        self.proxy_manager = {}
+
+        super().__init__()
+
+        self._pool_connections = pool_connections
+        self._pool_maxsize = pool_maxsize
+        self._pool_block = pool_block
+
+        self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block)
+
+    def __getstate__(self):
+        return {attr: getattr(self, attr, None) for attr in self.__attrs__}
+
+    def __setstate__(self, state):
+        # Can't handle by adding 'proxy_manager' to self.__attrs__ because
+        # self.poolmanager uses a lambda function, which isn't pickleable.
+        self.proxy_manager = {}
+        self.config = {}
+
+        for attr, value in state.items():
+            setattr(self, attr, value)
+
+        self.init_poolmanager(
+            self._pool_connections, self._pool_maxsize, block=self._pool_block
+        )
+
+    def init_poolmanager(
+        self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs
+    ):
+        """Initializes a urllib3 PoolManager.
+
+        This method should not be called from user code, and is only
+        exposed for use when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param connections: The number of urllib3 connection pools to cache.
+        :param maxsize: The maximum number of connections to save in the pool.
+        :param block: Block when no free connections are available.
+        :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
+        """
+        # save these values for pickling
+        self._pool_connections = connections
+        self._pool_maxsize = maxsize
+        self._pool_block = block
+
+        self.poolmanager = PoolManager(
+            num_pools=connections,
+            maxsize=maxsize,
+            block=block,
+            **pool_kwargs,
+        )
+
+    def proxy_manager_for(self, proxy, **proxy_kwargs):
+        """Return urllib3 ProxyManager for the given proxy.
+
+        This method should not be called from user code, and is only
+        exposed for use when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param proxy: The proxy to return a urllib3 ProxyManager for.
+        :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager.
+        :returns: ProxyManager
+        :rtype: urllib3.ProxyManager
+        """
+        if proxy in self.proxy_manager:
+            manager = self.proxy_manager[proxy]
+        elif proxy.lower().startswith("socks"):
+            username, password = get_auth_from_url(proxy)
+            manager = self.proxy_manager[proxy] = SOCKSProxyManager(
+                proxy,
+                username=username,
+                password=password,
+                num_pools=self._pool_connections,
+                maxsize=self._pool_maxsize,
+                block=self._pool_block,
+                **proxy_kwargs,
+            )
+        else:
+            proxy_headers = self.proxy_headers(proxy)
+            manager = self.proxy_manager[proxy] = proxy_from_url(
+                proxy,
+                proxy_headers=proxy_headers,
+                num_pools=self._pool_connections,
+                maxsize=self._pool_maxsize,
+                block=self._pool_block,
+                **proxy_kwargs,
+            )
+
+        return manager
+
+    def cert_verify(self, conn, url, verify, cert):
+        """Verify an SSL certificate. This method should not be called from user
+        code, and is only exposed for use when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param conn: The urllib3 connection object associated with the cert.
+        :param url: The requested URL.
+        :param verify: Either a boolean, in which case it controls whether we verify
+            the server's TLS certificate, or a string, in which case it must be a path
+            to a CA bundle to use
+        :param cert: The SSL certificate to verify.
+        """
+        if url.lower().startswith("https") and verify:
+            conn.cert_reqs = "CERT_REQUIRED"
+
+            # Only load the CA certificates if 'verify' is a string indicating the CA bundle to use.
+            # Otherwise, if verify is a boolean, we don't load anything since
+            # the connection will be using a context with the default certificates already loaded,
+            # and this avoids a call to the slow load_verify_locations()
+            if verify is not True:
+                # `verify` must be a str with a path then
+                cert_loc = verify
+
+                if not os.path.exists(cert_loc):
+                    raise OSError(
+                        f"Could not find a suitable TLS CA certificate bundle, "
+                        f"invalid path: {cert_loc}"
+                    )
+
+                if not os.path.isdir(cert_loc):
+                    conn.ca_certs = cert_loc
+                else:
+                    conn.ca_cert_dir = cert_loc
+        else:
+            conn.cert_reqs = "CERT_NONE"
+            conn.ca_certs = None
+            conn.ca_cert_dir = None
+
+        if cert:
+            if not isinstance(cert, basestring):
+                conn.cert_file = cert[0]
+                conn.key_file = cert[1]
+            else:
+                conn.cert_file = cert
+                conn.key_file = None
+            if conn.cert_file and not os.path.exists(conn.cert_file):
+                raise OSError(
+                    f"Could not find the TLS certificate file, "
+                    f"invalid path: {conn.cert_file}"
+                )
+            if conn.key_file and not os.path.exists(conn.key_file):
+                raise OSError(
+                    f"Could not find the TLS key file, invalid path: {conn.key_file}"
+                )
+
+    def build_response(self, req, resp):
+        """Builds a :class:`Response <requests.Response>` object from a urllib3
+        response. This should not be called from user code, and is only exposed
+        for use when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`
+
+        :param req: The :class:`PreparedRequest <PreparedRequest>` used to generate the response.
+        :param resp: The urllib3 response object.
+        :rtype: requests.Response
+        """
+        response = Response()
+
+        # Fallback to None if there's no status_code, for whatever reason.
+        response.status_code = getattr(resp, "status", None)
+
+        # Make headers case-insensitive.
+        response.headers = CaseInsensitiveDict(getattr(resp, "headers", {}))
+
+        # Set encoding.
+        response.encoding = get_encoding_from_headers(response.headers)
+        response.raw = resp
+        response.reason = response.raw.reason
+
+        if isinstance(req.url, bytes):
+            response.url = req.url.decode("utf-8")
+        else:
+            response.url = req.url
+
+        # Add new cookies from the server.
+        extract_cookies_to_jar(response.cookies, req, resp)
+
+        # Give the Response some context.
+        response.request = req
+        response.connection = self
+
+        return response
+
+    def build_connection_pool_key_attributes(self, request, verify, cert=None):
+        """Build the PoolKey attributes used by urllib3 to return a connection.
+
+        This looks at the PreparedRequest, the user-specified verify value,
+        and the value of the cert parameter to determine what PoolKey values
+        to use to select a connection from a given urllib3 Connection Pool.
+
+        The SSL related pool key arguments are not consistently set. As of
+        this writing, use the following to determine what keys may be in that
+        dictionary:
+
+        * If ``verify`` is ``True``, ``"ssl_context"`` will be set and will be the
+          default Requests SSL Context
+        * If ``verify`` is ``False``, ``"ssl_context"`` will not be set but
+          ``"cert_reqs"`` will be set
+        * If ``verify`` is a string, (i.e., it is a user-specified trust bundle)
+          ``"ca_certs"`` will be set if the string is not a directory recognized
+          by :py:func:`os.path.isdir`, otherwise ``"ca_cert_dir"`` will be
+          set.
+        * If ``"cert"`` is specified, ``"cert_file"`` will always be set. If
+          ``"cert"`` is a tuple with a second item, ``"key_file"`` will also
+          be present
+
+        To override these settings, one may subclass this class, call this
+        method and use the above logic to change parameters as desired. For
+        example, if one wishes to use a custom :py:class:`ssl.SSLContext` one
+        must both set ``"ssl_context"`` and based on what else they require,
+        alter the other keys to ensure the desired behaviour.
+
+        :param request:
+            The PreparedRequest being sent over the connection.
+        :type request:
+            :class:`~requests.models.PreparedRequest`
+        :param verify:
+            Either a boolean, in which case it controls whether
+            we verify the server's TLS certificate, or a string, in which case it
+            must be a path to a CA bundle to use.
+        :param cert:
+            (optional) Any user-provided SSL certificate for client
+            authentication (a.k.a., mTLS). This may be a string (i.e., just
+            the path to a file which holds both certificate and key) or a
+            tuple of length 2 with the certificate file path and key file
+            path.
+        :returns:
+            A tuple of two dictionaries. The first is the "host parameters"
+            portion of the Pool Key including scheme, hostname, and port. The
+            second is a dictionary of SSLContext related parameters.
+        """
+        return _urllib3_request_context(request, verify, cert, self.poolmanager)
+
+    def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
+        """Returns a urllib3 connection for the given request and TLS settings.
+        This should not be called from user code, and is only exposed for use
+        when subclassing the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param request:
+            The :class:`PreparedRequest <PreparedRequest>` object to be sent
+            over the connection.
+        :param verify:
+            Either a boolean, in which case it controls whether we verify the
+            server's TLS certificate, or a string, in which case it must be a
+            path to a CA bundle to use.
+        :param proxies:
+            (optional) The proxies dictionary to apply to the request.
+        :param cert:
+            (optional) Any user-provided SSL certificate to be used for client
+            authentication (a.k.a., mTLS).
+        :rtype:
+            urllib3.ConnectionPool
+        """
+        proxy = select_proxy(request.url, proxies)
+        try:
+            host_params, pool_kwargs = self.build_connection_pool_key_attributes(
+                request,
+                verify,
+                cert,
+            )
+        except ValueError as e:
+            raise InvalidURL(e, request=request)
+        if proxy:
+            proxy = prepend_scheme_if_needed(proxy, "http")
+            proxy_url = parse_url(proxy)
+            if not proxy_url.host:
+                raise InvalidProxyURL(
+                    "Please check proxy URL. It is malformed "
+                    "and could be missing the host."
+                )
+            proxy_manager = self.proxy_manager_for(proxy)
+            conn = proxy_manager.connection_from_host(
+                **host_params, pool_kwargs=pool_kwargs
+            )
+        else:
+            # Only scheme should be lower case
+            conn = self.poolmanager.connection_from_host(
+                **host_params, pool_kwargs=pool_kwargs
+            )
+
+        return conn
+
+    def get_connection(self, url, proxies=None):
+        """DEPRECATED: Users should move to `get_connection_with_tls_context`
+        for all subclasses of HTTPAdapter using Requests>=2.32.2.
+
+        Returns a urllib3 connection for the given URL. This should not be
+        called from user code, and is only exposed for use when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param url: The URL to connect to.
+        :param proxies: (optional) A Requests-style dictionary of proxies used on this request.
+        :rtype: urllib3.ConnectionPool
+        """
+        warnings.warn(
+            (
+                "`get_connection` has been deprecated in favor of "
+                "`get_connection_with_tls_context`. Custom HTTPAdapter subclasses "
+                "will need to migrate for Requests>=2.32.2. Please see "
+                "https://github.com/psf/requests/pull/6710 for more details."
+            ),
+            DeprecationWarning,
+        )
+        proxy = select_proxy(url, proxies)
+
+        if proxy:
+            proxy = prepend_scheme_if_needed(proxy, "http")
+            proxy_url = parse_url(proxy)
+            if not proxy_url.host:
+                raise InvalidProxyURL(
+                    "Please check proxy URL. It is malformed "
+                    "and could be missing the host."
+                )
+            proxy_manager = self.proxy_manager_for(proxy)
+            conn = proxy_manager.connection_from_url(url)
+        else:
+            # Only scheme should be lower case
+            parsed = urlparse(url)
+            url = parsed.geturl()
+            conn = self.poolmanager.connection_from_url(url)
+
+        return conn
+
+    def close(self):
+        """Disposes of any internal state.
+
+        Currently, this closes the PoolManager and any active ProxyManager,
+        which closes any pooled connections.
+        """
+        self.poolmanager.clear()
+        for proxy in self.proxy_manager.values():
+            proxy.clear()
+
+    def request_url(self, request, proxies):
+        """Obtain the url to use when making the final request.
+
+        If the message is being sent through a HTTP proxy, the full URL has to
+        be used. Otherwise, we should only use the path portion of the URL.
+
+        This should not be called from user code, and is only exposed for use
+        when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
+        :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs.
+        :rtype: str
+        """
+        proxy = select_proxy(request.url, proxies)
+        scheme = urlparse(request.url).scheme
+
+        is_proxied_http_request = proxy and scheme != "https"
+        using_socks_proxy = False
+        if proxy:
+            proxy_scheme = urlparse(proxy).scheme.lower()
+            using_socks_proxy = proxy_scheme.startswith("socks")
+
+        url = request.path_url
+        if url.startswith("//"):  # Don't confuse urllib3
+            url = f"/{url.lstrip('/')}"
+
+        if is_proxied_http_request and not using_socks_proxy:
+            url = urldefragauth(request.url)
+
+        return url
+
+    def add_headers(self, request, **kwargs):
+        """Add any headers needed by the connection. As of v2.0 this does
+        nothing by default, but is left for overriding by users that subclass
+        the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        This should not be called from user code, and is only exposed for use
+        when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param request: The :class:`PreparedRequest <PreparedRequest>` to add headers to.
+        :param kwargs: The keyword arguments from the call to send().
+        """
+        pass
+
+    def proxy_headers(self, proxy):
+        """Returns a dictionary of the headers to add to any request sent
+        through a proxy. This works with urllib3 magic to ensure that they are
+        correctly sent to the proxy, rather than in a tunnelled request if
+        CONNECT is being used.
+
+        This should not be called from user code, and is only exposed for use
+        when subclassing the
+        :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
+
+        :param proxy: The url of the proxy being used for this request.
+        :rtype: dict
+        """
+        headers = {}
+        username, password = get_auth_from_url(proxy)
+
+        if username:
+            headers["Proxy-Authorization"] = _basic_auth_str(username, password)
+
+        return headers
+
+    def send(
+        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
+    ):
+        """Sends PreparedRequest object. Returns Response object.
+
+        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
+        :param stream: (optional) Whether to stream the request content.
+        :param timeout: (optional) How long to wait for the server to send
+            data before giving up, as a float, or a :ref:`(connect timeout,
+            read timeout) <timeouts>` tuple.
+        :type timeout: float or tuple or urllib3 Timeout object
+        :param verify: (optional) Either a boolean, in which case it controls whether
+            we verify the server's TLS certificate, or a string, in which case it
+            must be a path to a CA bundle to use
+        :param cert: (optional) Any user-provided SSL certificate to be trusted.
+        :param proxies: (optional) The proxies dictionary to apply to the request.
+        :rtype: requests.Response
+        """
+
+        try:
+            conn = self.get_connection_with_tls_context(
+                request, verify, proxies=proxies, cert=cert
+            )
+        except LocationValueError as e:
+            raise InvalidURL(e, request=request)
+
+        self.cert_verify(conn, request.url, verify, cert)
+        url = self.request_url(request, proxies)
+        self.add_headers(
+            request,
+            stream=stream,
+            timeout=timeout,
+            verify=verify,
+            cert=cert,
+            proxies=proxies,
+        )
+
+        chunked = not (request.body is None or "Content-Length" in request.headers)
+
+        if isinstance(timeout, tuple):
+            try:
+                connect, read = timeout
+                timeout = TimeoutSauce(connect=connect, read=read)
+            except ValueError:
+                raise ValueError(
+                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
+                    f"or a single float to set both timeouts to the same value."
+                )
+        elif isinstance(timeout, TimeoutSauce):
+            pass
+        else:
+            timeout = TimeoutSauce(connect=timeout, read=timeout)
+
+        try:
+            resp = conn.urlopen(
+                method=request.method,
+                url=url,
+                body=request.body,
+                headers=request.headers,
+                redirect=False,
+                assert_same_host=False,
+                preload_content=False,
+                decode_content=False,
+                retries=self.max_retries,
+                timeout=timeout,
+                chunked=chunked,
+            )
+
+        except (ProtocolError, OSError) as err:
+            raise ConnectionError(err, request=request)
+
+        except MaxRetryError as e:
+            if isinstance(e.reason, ConnectTimeoutError):
+                # TODO: Remove this in 3.0.0: see #2811
+                if not isinstance(e.reason, NewConnectionError):
+                    raise ConnectTimeout(e, request=request)
+
+            if isinstance(e.reason, ResponseError):
+                raise RetryError(e, request=request)
+
+            if isinstance(e.reason, _ProxyError):
+                raise ProxyError(e, request=request)
+
+            if isinstance(e.reason, _SSLError):
+                # This branch is for urllib3 v1.22 and later.
+                raise SSLError(e, request=request)
+
+            raise ConnectionError(e, request=request)
+
+        except ClosedPoolError as e:
+            raise ConnectionError(e, request=request)
+
+        except _ProxyError as e:
+            raise ProxyError(e)
+
+        except (_SSLError, _HTTPError) as e:
+            if isinstance(e, _SSLError):
+                # This branch is for urllib3 versions earlier than v1.22
+                raise SSLError(e, request=request)
+            elif isinstance(e, ReadTimeoutError):
+                raise ReadTimeout(e, request=request)
+            elif isinstance(e, _InvalidHeader):
+                raise InvalidHeader(e, request=request)
+            else:
+                raise
+
+        return self.build_response(request, resp)
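`HTTPAdapter.send()` above normalizes its `timeout` argument: a `(connect, read)` tuple is split into the two values, an existing urllib3 `Timeout` passes through unchanged, and a bare scalar sets both. A self-contained sketch of that branch, using a minimal stand-in `Timeout` class rather than the real `urllib3.util.Timeout` so it runs anywhere (class and function names here are illustrative):

```python
class Timeout:
    """Minimal stand-in for urllib3's Timeout, holding connect/read values."""

    def __init__(self, connect=None, read=None):
        self.connect = connect
        self.read = read


def normalize_timeout(timeout):
    """Mirror the normalization branch in HTTPAdapter.send()."""
    if isinstance(timeout, tuple):
        try:
            # A wrong-sized tuple raises ValueError during unpacking.
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) tuple, "
                "or a single float to set both timeouts to the same value."
            )
        return Timeout(connect=connect, read=read)
    if isinstance(timeout, Timeout):
        return timeout  # already normalized, pass through
    # Scalar (or None): apply the same value to both phases.
    return Timeout(connect=timeout, read=timeout)


t = normalize_timeout((3.05, 27))
assert (t.connect, t.read) == (3.05, 27)
```

Splitting connect and read matters in practice: the connect timeout bounds the TCP handshake, while the read timeout bounds each wait between bytes of the response.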
llava/lib/python3.10/site-packages/pip/_vendor/requests/auth.py ADDED
@@ -0,0 +1,314 @@
+"""
+requests.auth
+~~~~~~~~~~~~~
+
+This module contains the authentication handlers for Requests.
+"""
+
+import hashlib
+import os
+import re
+import threading
+import time
+import warnings
+from base64 import b64encode
+
+from ._internal_utils import to_native_string
+from .compat import basestring, str, urlparse
+from .cookies import extract_cookies_to_jar
+from .utils import parse_dict_header
+
+CONTENT_TYPE_FORM_URLENCODED = "application/x-www-form-urlencoded"
+CONTENT_TYPE_MULTI_PART = "multipart/form-data"
+
+
+def _basic_auth_str(username, password):
+    """Returns a Basic Auth string."""
+
+    # "I want us to put a big-ol' comment on top of it that
+    # says that this behaviour is dumb but we need to preserve
+    # it because people are relying on it."
+    # - Lukasa
+    #
+    # These are here solely to maintain backwards compatibility
+    # for things like ints. This will be removed in 3.0.0.
+    if not isinstance(username, basestring):
+        warnings.warn(
+            "Non-string usernames will no longer be supported in Requests "
+            "3.0.0. Please convert the object you've passed in ({!r}) to "
+            "a string or bytes object in the near future to avoid "
+            "problems.".format(username),
+            category=DeprecationWarning,
+        )
+        username = str(username)
+
+    if not isinstance(password, basestring):
+        warnings.warn(
+            "Non-string passwords will no longer be supported in Requests "
+            "3.0.0. Please convert the object you've passed in ({!r}) to "
+            "a string or bytes object in the near future to avoid "
+            "problems.".format(type(password)),
+            category=DeprecationWarning,
+        )
+        password = str(password)
+    # -- End Removal --
+
+    if isinstance(username, str):
+        username = username.encode("latin1")
+
+    if isinstance(password, str):
+        password = password.encode("latin1")
+
+    authstr = "Basic " + to_native_string(
+        b64encode(b":".join((username, password))).strip()
+    )
+
+    return authstr
+
+
+class AuthBase:
+    """Base class that all auth implementations derive from"""
+
+    def __call__(self, r):
+        raise NotImplementedError("Auth hooks must be callable.")
+
+
+class HTTPBasicAuth(AuthBase):
+    """Attaches HTTP Basic Authentication to the given Request object."""
+
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+
+    def __eq__(self, other):
+        return all(
+            [
+                self.username == getattr(other, "username", None),
+                self.password == getattr(other, "password", None),
+            ]
+        )
+
+    def __ne__(self, other):
+        return not self == other
+
+    def __call__(self, r):
+        r.headers["Authorization"] = _basic_auth_str(self.username, self.password)
+        return r
+
+
+class HTTPProxyAuth(HTTPBasicAuth):
+    """Attaches HTTP Proxy Authentication to a given Request object."""
+
+    def __call__(self, r):
+        r.headers["Proxy-Authorization"] = _basic_auth_str(self.username, self.password)
+        return r
+
+
+class HTTPDigestAuth(AuthBase):
+    """Attaches HTTP Digest Authentication to the given Request object."""
+
+    def __init__(self, username, password):
+        self.username = username
+        self.password = password
+        # Keep state in per-thread local storage
+        self._thread_local = threading.local()
+
+    def init_per_thread_state(self):
+        # Ensure state is initialized just once per-thread
+        if not hasattr(self._thread_local, "init"):
+            self._thread_local.init = True
+            self._thread_local.last_nonce = ""
+            self._thread_local.nonce_count = 0
+            self._thread_local.chal = {}
+            self._thread_local.pos = None
+            self._thread_local.num_401_calls = None
+
+    def build_digest_header(self, method, url):
+        """
+        :rtype: str
+        """
+
+        realm = self._thread_local.chal["realm"]
+        nonce = self._thread_local.chal["nonce"]
+        qop = self._thread_local.chal.get("qop")
+        algorithm = self._thread_local.chal.get("algorithm")
+        opaque = self._thread_local.chal.get("opaque")
+        hash_utf8 = None
+
+        if algorithm is None:
+            _algorithm = "MD5"
+        else:
+            _algorithm = algorithm.upper()
+        # lambdas assume digest modules are imported at the top level
+        if _algorithm == "MD5" or _algorithm == "MD5-SESS":
+
+            def md5_utf8(x):
+                if isinstance(x, str):
+                    x = x.encode("utf-8")
+                return hashlib.md5(x).hexdigest()
+
+            hash_utf8 = md5_utf8
+        elif _algorithm == "SHA":
+
+            def sha_utf8(x):
+                if isinstance(x, str):
+                    x = x.encode("utf-8")
+                return hashlib.sha1(x).hexdigest()
+
+            hash_utf8 = sha_utf8
+        elif _algorithm == "SHA-256":
+
+            def sha256_utf8(x):
+                if isinstance(x, str):
+                    x = x.encode("utf-8")
+                return hashlib.sha256(x).hexdigest()
+
+            hash_utf8 = sha256_utf8
+        elif _algorithm == "SHA-512":
+
+            def sha512_utf8(x):
+                if isinstance(x, str):
+                    x = x.encode("utf-8")
+                return hashlib.sha512(x).hexdigest()
+
+            hash_utf8 = sha512_utf8
+
+        KD = lambda s, d: hash_utf8(f"{s}:{d}")  # noqa:E731
+
+        if hash_utf8 is None:
+            return None
+
+        # XXX not implemented yet
+        entdig = None
+        p_parsed = urlparse(url)
+        #: path is request-uri defined in RFC 2616 which should not be empty
+        path = p_parsed.path or "/"
+        if p_parsed.query:
+            path += f"?{p_parsed.query}"
+
+        A1 = f"{self.username}:{realm}:{self.password}"
+        A2 = f"{method}:{path}"
+
+        HA1 = hash_utf8(A1)
+        HA2 = hash_utf8(A2)
+
+        if nonce == self._thread_local.last_nonce:
+            self._thread_local.nonce_count += 1
+        else:
+            self._thread_local.nonce_count = 1
+        ncvalue = f"{self._thread_local.nonce_count:08x}"
+        s = str(self._thread_local.nonce_count).encode("utf-8")
+        s += nonce.encode("utf-8")
+        s += time.ctime().encode("utf-8")
+        s += os.urandom(8)
+
+        cnonce = hashlib.sha1(s).hexdigest()[:16]
+        if _algorithm == "MD5-SESS":
+            HA1 = hash_utf8(f"{HA1}:{nonce}:{cnonce}")
+
+        if not qop:
+            respdig = KD(HA1, f"{nonce}:{HA2}")
+        elif qop == "auth" or "auth" in qop.split(","):
+            noncebit = f"{nonce}:{ncvalue}:{cnonce}:auth:{HA2}"
+            respdig = KD(HA1, noncebit)
+        else:
+            # XXX handle auth-int.
+            return None
+
+        self._thread_local.last_nonce = nonce
+
+        # XXX should the partial digests be encoded too?
+        base = (
+            f'username="{self.username}", realm="{realm}", nonce="{nonce}", '
+            f'uri="{path}", response="{respdig}"'
+        )
+        if opaque:
+            base += f', opaque="{opaque}"'
+        if algorithm:
+            base += f', algorithm="{algorithm}"'
+        if entdig:
+            base += f', digest="{entdig}"'
+        if qop:
+            base += f', qop="auth", nc={ncvalue}, cnonce="{cnonce}"'
+
+        return f"Digest {base}"
+
+    def handle_redirect(self, r, **kwargs):
+        """Reset num_401_calls counter on redirects."""
+        if r.is_redirect:
+            self._thread_local.num_401_calls = 1
+
+    def handle_401(self, r, **kwargs):
+        """
+        Takes the given response and tries digest-auth, if needed.
+
+        :rtype: requests.Response
+        """
+
+        # If response is not 4xx, do not auth
+        # See https://github.com/psf/requests/issues/3772
+        if not 400 <= r.status_code < 500:
+            self._thread_local.num_401_calls = 1
+            return r
+
+        if self._thread_local.pos is not None:
+            # Rewind the file position indicator of the body to where
+            # it was to resend the request.
+            r.request.body.seek(self._thread_local.pos)
+        s_auth = r.headers.get("www-authenticate", "")
+
+        if "digest" in s_auth.lower() and self._thread_local.num_401_calls < 2:
+            self._thread_local.num_401_calls += 1
+            pat = re.compile(r"digest ", flags=re.IGNORECASE)
+            self._thread_local.chal = parse_dict_header(pat.sub("", s_auth, count=1))
+
+            # Consume content and release the original connection
+            # to allow our new request to reuse the same one.
+            r.content
+            r.close()
+            prep = r.request.copy()
+            extract_cookies_to_jar(prep._cookies, r.request, r.raw)
+            prep.prepare_cookies(prep._cookies)
+
+            prep.headers["Authorization"] = self.build_digest_header(
+                prep.method, prep.url
+            )
+            _r = r.connection.send(prep, **kwargs)
+            _r.history.append(r)
+            _r.request = prep
+
+            return _r
+
+        self._thread_local.num_401_calls = 1
+        return r
+
+    def __call__(self, r):
+        # Initialize per-thread state, if needed
+        self.init_per_thread_state()
+        # If we have a saved nonce, skip the 401
+        if self._thread_local.last_nonce:
+            r.headers["Authorization"] = self.build_digest_header(r.method, r.url)
+        try:
+            self._thread_local.pos = r.body.tell()
+        except AttributeError:
+            # In the case of HTTPDigestAuth being reused and the body of
+            # the previous request was a file-like object, pos has the
+            # file position of the previous body. Ensure it's set to
+            # None.
+            self._thread_local.pos = None
+        r.register_hook("response", self.handle_401)
+        r.register_hook("response", self.handle_redirect)
+        self._thread_local.num_401_calls = 1
+
+        return r
+
+    def __eq__(self, other):
+        return all(
+            [
+                self.username == getattr(other, "username", None),
+                self.password == getattr(other, "password", None),
+            ]
+        )
+
+    def __ne__(self, other):
+        return not self == other
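For reference, the header that `_basic_auth_str` produces can be reproduced with the standard library alone. A minimal sketch (the `basic_auth_header` name is ours, not part of Requests), mirroring its latin-1 encoding and base64 step:

```python
from base64 import b64encode


def basic_auth_header(username: str, password: str) -> str:
    # Mirror _basic_auth_str: latin-1 encode, join with ":", base64-encode,
    # and prefix with the "Basic " scheme token.
    creds = b":".join((username.encode("latin1"), password.encode("latin1")))
    return "Basic " + b64encode(creds).decode("ascii")


print(basic_auth_header("user", "pass"))  # → Basic dXNlcjpwYXNz
```

This is exactly the value `HTTPBasicAuth` places in the `Authorization` header.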
llava/lib/python3.10/site-packages/pip/_vendor/requests/compat.py ADDED
@@ -0,0 +1,78 @@
+"""
+requests.compat
+~~~~~~~~~~~~~~~
+
+This module previously handled import compatibility issues
+between Python 2 and Python 3. It remains for backwards
+compatibility until the next major version.
+"""
+
+import sys
+
+# -------------------
+# Character Detection
+# -------------------
+
+
+def _resolve_char_detection():
+    """Find supported character detection libraries."""
+    chardet = None
+    return chardet
+
+
+chardet = _resolve_char_detection()
+
+# -------
+# Pythons
+# -------
+
+# Syntax sugar.
+_ver = sys.version_info
+
+#: Python 2.x?
+is_py2 = _ver[0] == 2
+
+#: Python 3.x?
+is_py3 = _ver[0] == 3
+
+# Note: We've patched out simplejson support in pip because it prevents
+# upgrading simplejson on Windows.
+import json
+from json import JSONDecodeError
+
+# Keep OrderedDict for backwards compatibility.
+from collections import OrderedDict
+from collections.abc import Callable, Mapping, MutableMapping
+from http import cookiejar as cookielib
+from http.cookies import Morsel
+from io import StringIO
+
+# --------------
+# Legacy Imports
+# --------------
+from urllib.parse import (
+    quote,
+    quote_plus,
+    unquote,
+    unquote_plus,
+    urldefrag,
+    urlencode,
+    urljoin,
+    urlparse,
+    urlsplit,
+    urlunparse,
+)
+from urllib.request import (
+    getproxies,
+    getproxies_environment,
+    parse_http_list,
+    proxy_bypass,
+    proxy_bypass_environment,
+)
+
+builtin_str = str
+str = str
+bytes = bytes
+basestring = (str, bytes)
+numeric_types = (int, float)
+integer_types = (int,)
llava/lib/python3.10/site-packages/pip/_vendor/requests/help.py ADDED
@@ -0,0 +1,127 @@
+"""Module containing bug report helper(s)."""
+
+import json
+import platform
+import ssl
+import sys
+
+from pip._vendor import idna
+from pip._vendor import urllib3
+
+from . import __version__ as requests_version
+
+charset_normalizer = None
+chardet = None
+
+try:
+    from pip._vendor.urllib3.contrib import pyopenssl
+except ImportError:
+    pyopenssl = None
+    OpenSSL = None
+    cryptography = None
+else:
+    import cryptography
+    import OpenSSL
+
+
+def _implementation():
+    """Return a dict with the Python implementation and version.
+
+    Provide both the name and the version of the Python implementation
+    currently running. For example, on CPython 3.10.3 it will return
+    {'name': 'CPython', 'version': '3.10.3'}.
+
+    This function works best on CPython and PyPy: in particular, it probably
+    doesn't work for Jython or IronPython. Future investigation should be done
+    to work out the correct shape of the code for those platforms.
+    """
+    implementation = platform.python_implementation()
+
+    if implementation == "CPython":
+        implementation_version = platform.python_version()
+    elif implementation == "PyPy":
+        implementation_version = "{}.{}.{}".format(
+            sys.pypy_version_info.major,
+            sys.pypy_version_info.minor,
+            sys.pypy_version_info.micro,
+        )
+        if sys.pypy_version_info.releaselevel != "final":
+            implementation_version = "".join(
+                [implementation_version, sys.pypy_version_info.releaselevel]
+            )
+    elif implementation == "Jython":
+        implementation_version = platform.python_version()  # Complete Guess
+    elif implementation == "IronPython":
+        implementation_version = platform.python_version()  # Complete Guess
+    else:
+        implementation_version = "Unknown"
+
+    return {"name": implementation, "version": implementation_version}
+
+
+def info():
+    """Generate information for a bug report."""
+    try:
+        platform_info = {
+            "system": platform.system(),
+            "release": platform.release(),
+        }
+    except OSError:
+        platform_info = {
+            "system": "Unknown",
+            "release": "Unknown",
+        }
+
+    implementation_info = _implementation()
+    urllib3_info = {"version": urllib3.__version__}
+    charset_normalizer_info = {"version": None}
+    chardet_info = {"version": None}
+    if charset_normalizer:
+        charset_normalizer_info = {"version": charset_normalizer.__version__}
+    if chardet:
+        chardet_info = {"version": chardet.__version__}
+
+    pyopenssl_info = {
+        "version": None,
+        "openssl_version": "",
+    }
+    if OpenSSL:
+        pyopenssl_info = {
+            "version": OpenSSL.__version__,
+            "openssl_version": f"{OpenSSL.SSL.OPENSSL_VERSION_NUMBER:x}",
+        }
+    cryptography_info = {
+        "version": getattr(cryptography, "__version__", ""),
+    }
+    idna_info = {
+        "version": getattr(idna, "__version__", ""),
+    }
+
+    system_ssl = ssl.OPENSSL_VERSION_NUMBER
+    system_ssl_info = {"version": f"{system_ssl:x}" if system_ssl is not None else ""}
+
+    return {
+        "platform": platform_info,
+        "implementation": implementation_info,
+        "system_ssl": system_ssl_info,
+        "using_pyopenssl": pyopenssl is not None,
+        "using_charset_normalizer": chardet is None,
+        "pyOpenSSL": pyopenssl_info,
+        "urllib3": urllib3_info,
+        "chardet": chardet_info,
+        "charset_normalizer": charset_normalizer_info,
+        "cryptography": cryptography_info,
+        "idna": idna_info,
+        "requests": {
+            "version": requests_version,
+        },
+    }
+
+
+def main():
+    """Pretty-print the bug information as JSON."""
+    print(json.dumps(info(), sort_keys=True, indent=2))
+
+
+if __name__ == "__main__":
+    main()
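The implementation-detection logic in `_implementation()` above is self-contained and can be exercised outside pip's vendored tree. A minimal standalone sketch (the `implementation_info` name is ours) covering the CPython and PyPy branches:

```python
import platform
import sys


def implementation_info():
    # Same shape as help._implementation(): interpreter name plus a
    # best-effort version string.
    name = platform.python_implementation()
    if name == "CPython":
        version = platform.python_version()
    elif name == "PyPy":
        v = sys.pypy_version_info  # only present on PyPy
        version = f"{v.major}.{v.minor}.{v.micro}"
        if v.releaselevel != "final":
            version += v.releaselevel
    else:
        version = "Unknown"
    return {"name": name, "version": version}


print(implementation_info())
```

On CPython this returns e.g. `{'name': 'CPython', 'version': '3.10.3'}`, matching the docstring in the module above.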
llava/lib/python3.10/site-packages/pip/_vendor/requests/hooks.py ADDED
@@ -0,0 +1,33 @@
+"""
+requests.hooks
+~~~~~~~~~~~~~~
+
+This module provides the capabilities for the Requests hooks system.
+
+Available hooks:
+
+``response``:
+    The response generated from a Request.
+"""
+HOOKS = ["response"]
+
+
+def default_hooks():
+    return {event: [] for event in HOOKS}
+
+
+# TODO: response is the only one
+
+
+def dispatch_hook(key, hooks, hook_data, **kwargs):
+    """Dispatches a hook dictionary on a given piece of data."""
+    hooks = hooks or {}
+    hooks = hooks.get(key)
+    if hooks:
+        if hasattr(hooks, "__call__"):
+            hooks = [hooks]
+        for hook in hooks:
+            _hook_data = hook(hook_data, **kwargs)
+            if _hook_data is not None:
+                hook_data = _hook_data
+    return hook_data
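The dispatch logic above chains hooks: each hook may transform the data, and a hook that returns `None` leaves the previous value in place. A standalone illustration of that behavior (reimplemented here for demonstration, not imported from Requests):

```python
def dispatch_hook(key, hooks, hook_data, **kwargs):
    # Same logic as requests.hooks.dispatch_hook: accept a single callable
    # or a list of callables; each may transform the data or return None
    # to keep the previous value.
    hooks = hooks or {}
    hooks = hooks.get(key)
    if hooks:
        if callable(hooks):
            hooks = [hooks]
        for hook in hooks:
            _hook_data = hook(hook_data, **kwargs)
            if _hook_data is not None:
                hook_data = _hook_data
    return hook_data


hooks = {
    "response": [
        lambda r, **kw: r + 1,   # transforms the data
        lambda r, **kw: None,    # returns None, so the value is kept
    ]
}
print(dispatch_hook("response", hooks, 41))  # → 42
```

This is why a Requests response hook can either return a replacement response or mutate the response in place and return nothing.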
minigpt2/lib/python3.10/site-packages/Crypto/Hash/MD5.pyi ADDED
@@ -0,0 +1,19 @@
+from typing import Union
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class MD5Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+
+    def __init__(self, data: Buffer = ...) -> None: ...
+    def update(self, data: Buffer) -> None: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> MD5Hash: ...
+    def new(self, data: Buffer = ...) -> MD5Hash: ...
+
+def new(data: Buffer = ...) -> MD5Hash: ...
+digest_size: int
+block_size: int
minigpt2/lib/python3.10/site-packages/Crypto/Hash/RIPEMD160.py ADDED
@@ -0,0 +1,169 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bord
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  VoidPointer, SmartPointer,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t,
+                                  c_uint8_ptr)
+
+_raw_ripemd160_lib = load_pycryptodome_raw_lib(
+                        "Crypto.Hash._RIPEMD160",
+                        """
+                        int ripemd160_init(void **shaState);
+                        int ripemd160_destroy(void *shaState);
+                        int ripemd160_update(void *hs,
+                                             const uint8_t *buf,
+                                             size_t len);
+                        int ripemd160_digest(const void *shaState,
+                                             uint8_t digest[20]);
+                        int ripemd160_copy(const void *src, void *dst);
+                        """)
+
+
+class RIPEMD160Hash(object):
+    """A RIPEMD-160 hash object.
+    Do not instantiate directly.
+    Use the :func:`new` function.
+
+    :ivar oid: ASN.1 Object ID
+    :vartype oid: string
+
+    :ivar block_size: the size in bytes of the internal message block,
+                      input to the compression function
+    :vartype block_size: integer
+
+    :ivar digest_size: the size in bytes of the resulting hash
+    :vartype digest_size: integer
+    """
+
+    # The size of the resulting hash in bytes.
+    digest_size = 20
+    # The internal block size of the hash algorithm in bytes.
+    block_size = 64
+    # ASN.1 Object ID
+    oid = "1.3.36.3.2.1"
+
+    def __init__(self, data=None):
+        state = VoidPointer()
+        result = _raw_ripemd160_lib.ripemd160_init(state.address_of())
+        if result:
+            raise ValueError("Error %d while instantiating RIPEMD160"
+                             % result)
+        self._state = SmartPointer(state.get(),
+                                   _raw_ripemd160_lib.ripemd160_destroy)
+        if data:
+            self.update(data)
+
+    def update(self, data):
+        """Continue hashing of a message by consuming the next chunk of data.
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        result = _raw_ripemd160_lib.ripemd160_update(self._state.get(),
+                                                     c_uint8_ptr(data),
+                                                     c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while instantiating ripemd160"
+                             % result)
+
+    def digest(self):
+        """Return the **binary** (non-printable) digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+                 Binary form.
+        :rtype: byte string
+        """
+
+        bfr = create_string_buffer(self.digest_size)
+        result = _raw_ripemd160_lib.ripemd160_digest(self._state.get(),
+                                                     bfr)
+        if result:
+            raise ValueError("Error %d while instantiating ripemd160"
+                             % result)
+
+        return get_raw_buffer(bfr)
+
+    def hexdigest(self):
+        """Return the **printable** digest of the message that has been hashed so far.
+
+        :return: The hash digest, computed over the data processed so far.
+                 Hexadecimal encoded.
+        :rtype: string
+        """
+
+        return "".join(["%02x" % bord(x) for x in self.digest()])
+
+    def copy(self):
+        """Return a copy ("clone") of the hash object.
+
+        The copy will have the same internal state as the original hash
+        object.
+        This can be used to efficiently compute the digests of strings that
+        share a common initial substring.
+
+        :return: A hash object of the same type
+        """
+
+        clone = RIPEMD160Hash()
+        result = _raw_ripemd160_lib.ripemd160_copy(self._state.get(),
+                                                   clone._state.get())
+        if result:
+            raise ValueError("Error %d while copying ripemd160" % result)
+        return clone
+
+    def new(self, data=None):
+        """Create a fresh RIPEMD-160 hash object."""
+
+        return RIPEMD160Hash(data)
+
+
+def new(data=None):
+    """Create a new hash object.
+
+    :parameter data:
+        Optional. The very first chunk of the message to hash.
+        It is equivalent to an early call to :meth:`RIPEMD160Hash.update`.
+    :type data: byte string/byte array/memoryview
+
+    :Return: A :class:`RIPEMD160Hash` hash object
+    """
+
+    return RIPEMD160Hash().new(data)
+
+# The size of the resulting hash in bytes.
+digest_size = RIPEMD160Hash.digest_size
+
+# The internal block size of the hash algorithm in bytes.
+block_size = RIPEMD160Hash.block_size
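The `copy()` method above enables the common-prefix optimization its docstring describes: hash the shared prefix once, then fork the state per message. RIPEMD-160 itself may be missing from stock `hashlib` builds (OpenSSL 3 moved it to the legacy provider), so this sketch demonstrates the same pattern with SHA-1 from the standard library:

```python
import hashlib

# Hash the shared prefix once.
prefix = hashlib.sha1(b"common header|")

# Fork the internal state for each record; the prefix is not re-hashed.
h1 = prefix.copy()
h1.update(b"record one")

h2 = prefix.copy()
h2.update(b"record two")

# Each forked digest equals hashing the full concatenated message.
print(h1.hexdigest() == hashlib.sha1(b"common header|record one").hexdigest())  # → True
```

The same pattern works with `RIPEMD160Hash.copy()` when the pycryptodome extension module is available.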
minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA.py ADDED
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+# This file exists for backward compatibility with old code that refers to
+# Crypto.Hash.SHA
+
+from Crypto.Hash.SHA1 import __doc__, new, block_size, digest_size
minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA256.pyi ADDED
@@ -0,0 +1,18 @@
+from typing import Union, Optional
+
+
+class SHA256Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+    def __init__(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> None: ...
+    def update(self, data: Union[bytes, bytearray, memoryview]) -> None: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> SHA256Hash: ...
+    def new(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ...
+
+def new(data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ...
+
+digest_size: int
+block_size: int
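The stub above mirrors the standard hash-object interface, so `hashlib.sha256` exercises the same API surface and constants. A quick check against the well-known NIST test vector for SHA-256("abc"):

```python
import hashlib

# hashlib exposes the same attributes the SHA256Hash stub declares.
h = hashlib.sha256(b"abc")
digest_hex = h.hexdigest()

print(digest_hex)
# → ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
print(h.digest_size, h.block_size)  # → 32 64
```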
minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHA3_224.pyi ADDED
@@ -0,0 +1,19 @@
+from typing import Union, Optional
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class SHA3_224_Hash(object):
+    digest_size: int
+    block_size: int
+    oid: str
+    def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ...
+    def update(self, data: Buffer) -> SHA3_224_Hash: ...
+    def digest(self) -> bytes: ...
+    def hexdigest(self) -> str: ...
+    def copy(self) -> SHA3_224_Hash: ...
+    def new(self, data: Optional[Buffer]) -> SHA3_224_Hash: ...
+
+def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_224_Hash: ...
+
+digest_size: int
+block_size: int
minigpt2/lib/python3.10/site-packages/Crypto/Hash/SHAKE256.py ADDED
@@ -0,0 +1,130 @@
+# ===================================================================
+#
+# Copyright (c) 2015, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Crypto.Util.py3compat import bord
+
+from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
+                                  VoidPointer, SmartPointer,
+                                  create_string_buffer,
+                                  get_raw_buffer, c_size_t,
+                                  c_uint8_ptr, c_ubyte)
+
+from Crypto.Hash.keccak import _raw_keccak_lib
+
+
+class SHAKE256_XOF(object):
+    """A SHAKE256 hash object.
+    Do not instantiate directly.
+    Use the :func:`new` function.
+
+    :ivar oid: ASN.1 Object ID
+    :vartype oid: string
+    """
+
+    # ASN.1 Object ID
+    oid = "2.16.840.1.101.3.4.2.12"
+
+    def __init__(self, data=None):
+        state = VoidPointer()
+        result = _raw_keccak_lib.keccak_init(state.address_of(),
+                                             c_size_t(64),
+                                             c_ubyte(24))
+        if result:
+            raise ValueError("Error %d while instantiating SHAKE256"
+                             % result)
+        self._state = SmartPointer(state.get(),
+                                   _raw_keccak_lib.keccak_destroy)
+        self._is_squeezing = False
+        self._padding = 0x1F
+
+        if data:
+            self.update(data)
+
+    def update(self, data):
+        """Continue hashing of a message by consuming the next chunk of data.
+
+        Args:
+            data (byte string/byte array/memoryview): The next chunk of the message being hashed.
+        """
+
+        if self._is_squeezing:
+            raise TypeError("You cannot call 'update' after the first 'read'")
+
+        result = _raw_keccak_lib.keccak_absorb(self._state.get(),
+                                               c_uint8_ptr(data),
+                                               c_size_t(len(data)))
+        if result:
+            raise ValueError("Error %d while updating SHAKE256 state"
+                             % result)
+        return self
+
+    def read(self, length):
+        """
+        Compute the next piece of XOF output.
+
+        .. note::
+            You cannot use :meth:`update` anymore after the first call to
+            :meth:`read`.
+
+        Args:
+            length (integer): the amount of bytes this method must return
+
+        :return: the next piece of XOF output (of the given length)
+        :rtype: byte string
+        """
+
+        self._is_squeezing = True
+        bfr = create_string_buffer(length)
+        result = _raw_keccak_lib.keccak_squeeze(self._state.get(),
+                                                bfr,
+                                                c_size_t(length),
+                                                c_ubyte(self._padding))
+        if result:
+            raise ValueError("Error %d while extracting from SHAKE256"
+                             % result)
+
+        return get_raw_buffer(bfr)
+
+    def new(self, data=None):
+        return type(self)(data=data)
+
+
+def new(data=None):
+    """Return a fresh instance of a SHAKE256 object.
+
+    Args:
+        data (bytes/bytearray/memoryview):
+            The very first chunk of the message to hash.
+            It is equivalent to an early call to :meth:`update`.
+            Optional.
+
+    :Return: A :class:`SHAKE256_XOF` object
+    """
+
+    return SHAKE256_XOF(data=data)
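SHAKE256 is an extendable-output function (XOF): after absorbing the message you can squeeze any number of output bytes, and shorter outputs are prefixes of longer ones. The standard library's `hashlib.shake_256` exposes the same construction (via a `hexdigest(length)` parameter rather than the incremental `read()` above), which makes the prefix property easy to see:

```python
import hashlib

xof = hashlib.shake_256(b"arbitrary-length input")

out16 = xof.hexdigest(16)  # 16 bytes of output, hex-encoded (32 hex chars)
out32 = xof.hexdigest(32)  # a longer read over the same absorbed input

# SHAKE outputs of different lengths share a common prefix.
print(out32.startswith(out16))  # → True
```

Note the semantics differ slightly: `hashlib` re-squeezes from the start for each `hexdigest(n)` call, while `SHAKE256_XOF.read()` above is stateful and returns successive chunks.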
minigpt2/lib/python3.10/site-packages/Crypto/Hash/TurboSHAKE128.pyi ADDED
@@ -0,0 +1,17 @@
+from typing import Union, Optional
+from typing_extensions import TypedDict, Unpack, NotRequired
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+class TurboSHAKE(object):
+
+    def __init__(self, capacity: int, domain_separation: int, data: Union[Buffer, None]) -> None: ...
+    def update(self, data: Buffer) -> TurboSHAKE : ...
+    def read(self, length: int) -> bytes: ...
+    def new(self, data: Optional[Buffer]=None) -> TurboSHAKE: ...
+
+class Args(TypedDict):
+    domain: NotRequired[int]
+    data: NotRequired[Buffer]
+
+def new(**kwargs: Unpack[Args]) -> TurboSHAKE: ...
minigpt2/lib/python3.10/site-packages/Crypto/Hash/_BLAKE2b.abi3.so ADDED
Binary file (27.4 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/_MD5.abi3.so ADDED
Binary file (32 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/BLAKE2b.cpython-310.pyc ADDED
Binary file (7.29 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/BLAKE2s.cpython-310.pyc ADDED
Binary file (7.3 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/CMAC.cpython-310.pyc ADDED
Binary file (7.93 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/HMAC.cpython-310.pyc ADDED
Binary file (6.36 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KMAC128.cpython-310.pyc ADDED
Binary file (4.94 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KMAC256.cpython-310.pyc ADDED
Binary file (1.53 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/KangarooTwelve.cpython-310.pyc ADDED
Binary file (4.01 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/MD2.cpython-310.pyc ADDED
Binary file (4.5 kB). View file
 
minigpt2/lib/python3.10/site-packages/Crypto/Hash/__pycache__/MD4.cpython-310.pyc ADDED
Binary file (4.99 kB). View file