ZTWHHH committed · verified
Commit 6028258 · 1 Parent(s): 25372cf

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_archive_util.cpython-310.pyc +0 -0
  2. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_bdist.cpython-310.pyc +0 -0
  3. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_bdist_dumb.cpython-310.pyc +0 -0
  4. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_ext.cpython-310.pyc +0 -0
  5. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_py.cpython-310.pyc +0 -0
  6. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_scripts.cpython-310.pyc +0 -0
  7. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_ccompiler.cpython-310.pyc +0 -0
  8. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_install_data.cpython-310.pyc +0 -0
  9. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_install_scripts.cpython-310.pyc +0 -0
  10. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_mingwccompiler.cpython-310.pyc +0 -0
  11. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_msvccompiler.cpython-310.pyc +0 -0
  12. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_sysconfig.cpython-310.pyc +0 -0
  13. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_version.cpython-310.pyc +0 -0
  14. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_bdist.py +47 -0
  15. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_build_clib.py +134 -0
  16. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_check.py +194 -0
  17. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_clean.py +45 -0
  18. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_config_cmd.py +87 -0
  19. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_dir_util.py +139 -0
  20. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_install_scripts.py +52 -0
  21. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_log.py +12 -0
  22. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_mingwccompiler.py +56 -0
  23. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_spawn.py +141 -0
  24. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_versionpredicate.py +0 -0
  25. llava/lib/python3.10/site-packages/setuptools/_distutils/tests/unix_compat.py +17 -0
  26. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/INSTALLER +1 -0
  27. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/LICENSE +20 -0
  28. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/METADATA +105 -0
  29. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/RECORD +85 -0
  30. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/REQUESTED +0 -0
  31. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/WHEEL +5 -0
  32. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/entry_points.txt +2 -0
  33. minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/top_level.txt +1 -0
  34. minigpt2/lib/python3.10/site-packages/ftfy/__init__.py +802 -0
  35. minigpt2/lib/python3.10/site-packages/ftfy/chardata.py +691 -0
  36. minigpt2/lib/python3.10/site-packages/pydub-0.25.1.dist-info/AUTHORS +98 -0
  37. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE +3 -0
  38. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE.APACHE2 +202 -0
  39. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE.MIT +20 -0
  40. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/METADATA +104 -0
  41. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/REQUESTED +0 -0
  42. minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/WHEEL +5 -0
  43. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/__init__.cpython-310.pyc +0 -0
  44. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/autograd.cpython-310.pyc +0 -0
  45. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/dispatcher.cpython-310.pyc +0 -0
  46. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/functionalization.cpython-310.pyc +0 -0
  47. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/lazy.cpython-310.pyc +0 -0
  48. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/meta.cpython-310.pyc +0 -0
  49. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/native.cpython-310.pyc +0 -0
  50. minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/python.cpython-310.pyc +0 -0
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_archive_util.cpython-310.pyc ADDED
Binary file (10.7 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_bdist.cpython-310.pyc ADDED
Binary file (1.3 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_bdist_dumb.cpython-310.pyc ADDED
Binary file (2.09 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_ext.cpython-310.pyc ADDED
Binary file (13.7 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_py.cpython-310.pyc ADDED
Binary file (5.57 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_build_scripts.cpython-310.pyc ADDED
Binary file (3.11 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_ccompiler.cpython-310.pyc ADDED
Binary file (2.75 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_install_data.cpython-310.pyc ADDED
Binary file (1.85 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_install_scripts.cpython-310.pyc ADDED
Binary file (1.67 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_mingwccompiler.cpython-310.pyc ADDED
Binary file (2.56 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_msvccompiler.cpython-310.pyc ADDED
Binary file (5.18 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_sysconfig.cpython-310.pyc ADDED
Binary file (10.9 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/__pycache__/test_version.cpython-310.pyc ADDED
Binary file (2.46 kB)

llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_bdist.py ADDED
@@ -0,0 +1,47 @@
+"""Tests for distutils.command.bdist."""
+
+from distutils.command.bdist import bdist
+from distutils.tests import support
+
+
+class TestBuild(support.TempdirManager):
+    def test_formats(self):
+        # let's create a command and make sure
+        # we can set the format
+        dist = self.create_dist()[1]
+        cmd = bdist(dist)
+        cmd.formats = ['gztar']
+        cmd.ensure_finalized()
+        assert cmd.formats == ['gztar']
+
+        # what formats does bdist offer?
+        formats = [
+            'bztar',
+            'gztar',
+            'rpm',
+            'tar',
+            'xztar',
+            'zip',
+            'ztar',
+        ]
+        found = sorted(cmd.format_commands)
+        assert found == formats
+
+    def test_skip_build(self):
+        # bug #10946: bdist --skip-build should trickle down to subcommands
+        dist = self.create_dist()[1]
+        cmd = bdist(dist)
+        cmd.skip_build = True
+        cmd.ensure_finalized()
+        dist.command_obj['bdist'] = cmd
+
+        names = [
+            'bdist_dumb',
+        ]  # bdist_rpm does not support --skip-build
+
+        for name in names:
+            subcmd = cmd.get_finalized_command(name)
+            if getattr(subcmd, '_unsupported', False):
+                # command is not supported on this build
+                continue
+            assert subcmd.skip_build, f'{name} should take --skip-build from bdist'
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_build_clib.py ADDED
@@ -0,0 +1,134 @@
+"""Tests for distutils.command.build_clib."""
+
+import os
+from distutils.command.build_clib import build_clib
+from distutils.errors import DistutilsSetupError
+from distutils.tests import missing_compiler_executable, support
+
+import pytest
+
+
+class TestBuildCLib(support.TempdirManager):
+    def test_check_library_dist(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = build_clib(dist)
+
+        # 'libraries' option must be a list
+        with pytest.raises(DistutilsSetupError):
+            cmd.check_library_list('foo')
+
+        # each element of 'libraries' must be a 2-tuple
+        with pytest.raises(DistutilsSetupError):
+            cmd.check_library_list(['foo1', 'foo2'])
+
+        # first element of each tuple in 'libraries'
+        # must be a string (the library name)
+        with pytest.raises(DistutilsSetupError):
+            cmd.check_library_list([(1, 'foo1'), ('name', 'foo2')])
+
+        # library name may not contain directory separators
+        with pytest.raises(DistutilsSetupError):
+            cmd.check_library_list(
+                [('name', 'foo1'), ('another/name', 'foo2')],
+            )
+
+        # second element of each tuple must be a dictionary (build info)
+        with pytest.raises(DistutilsSetupError):
+            cmd.check_library_list(
+                [('name', {}), ('another', 'foo2')],
+            )
+
+        # those work
+        libs = [('name', {}), ('name', {'ok': 'good'})]
+        cmd.check_library_list(libs)
+
+    def test_get_source_files(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = build_clib(dist)
+
+        # in the 'libraries' option, 'sources' must be present and must be
+        # a list of source filenames
+        cmd.libraries = [('name', {})]
+        with pytest.raises(DistutilsSetupError):
+            cmd.get_source_files()
+
+        cmd.libraries = [('name', {'sources': 1})]
+        with pytest.raises(DistutilsSetupError):
+            cmd.get_source_files()
+
+        cmd.libraries = [('name', {'sources': ['a', 'b']})]
+        assert cmd.get_source_files() == ['a', 'b']
+
+        cmd.libraries = [('name', {'sources': ('a', 'b')})]
+        assert cmd.get_source_files() == ['a', 'b']
+
+        cmd.libraries = [
+            ('name', {'sources': ('a', 'b')}),
+            ('name2', {'sources': ['c', 'd']}),
+        ]
+        assert cmd.get_source_files() == ['a', 'b', 'c', 'd']
+
+    def test_build_libraries(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = build_clib(dist)
+
+        class FakeCompiler:
+            def compile(*args, **kw):
+                pass
+
+            create_static_lib = compile
+
+        cmd.compiler = FakeCompiler()
+
+        # build_libraries is also doing a bit of type checking
+        lib = [('name', {'sources': 'notvalid'})]
+        with pytest.raises(DistutilsSetupError):
+            cmd.build_libraries(lib)
+
+        lib = [('name', {'sources': list()})]
+        cmd.build_libraries(lib)
+
+        lib = [('name', {'sources': tuple()})]
+        cmd.build_libraries(lib)
+
+    def test_finalize_options(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = build_clib(dist)
+
+        cmd.include_dirs = 'one-dir'
+        cmd.finalize_options()
+        assert cmd.include_dirs == ['one-dir']
+
+        cmd.include_dirs = None
+        cmd.finalize_options()
+        assert cmd.include_dirs == []
+
+        cmd.distribution.libraries = 'WONTWORK'
+        with pytest.raises(DistutilsSetupError):
+            cmd.finalize_options()
+
+    @pytest.mark.skipif('platform.system() == "Windows"')
+    def test_run(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = build_clib(dist)
+
+        foo_c = os.path.join(pkg_dir, 'foo.c')
+        self.write_file(foo_c, 'int main(void) { return 1;}\n')
+        cmd.libraries = [('foo', {'sources': [foo_c]})]
+
+        build_temp = os.path.join(pkg_dir, 'build')
+        os.mkdir(build_temp)
+        cmd.build_temp = build_temp
+        cmd.build_clib = build_temp
+
+        # Before we run the command, we want to make sure
+        # all commands are present on the system.
+        ccmd = missing_compiler_executable()
+        if ccmd is not None:
+            self.skipTest(f'The {ccmd!r} command is not found')
+
+        # this should work
+        cmd.run()
+
+        # let's check the result
+        assert 'libfoo.a' in os.listdir(build_temp)
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_check.py ADDED
@@ -0,0 +1,194 @@
+"""Tests for distutils.command.check."""
+
+import os
+import textwrap
+from distutils.command.check import check
+from distutils.errors import DistutilsSetupError
+from distutils.tests import support
+
+import pytest
+
+try:
+    import pygments
+except ImportError:
+    pygments = None
+
+
+HERE = os.path.dirname(__file__)
+
+
+@support.combine_markers
+class TestCheck(support.TempdirManager):
+    def _run(self, metadata=None, cwd=None, **options):
+        if metadata is None:
+            metadata = {}
+        if cwd is not None:
+            old_dir = os.getcwd()
+            os.chdir(cwd)
+        pkg_info, dist = self.create_dist(**metadata)
+        cmd = check(dist)
+        cmd.initialize_options()
+        for name, value in options.items():
+            setattr(cmd, name, value)
+        cmd.ensure_finalized()
+        cmd.run()
+        if cwd is not None:
+            os.chdir(old_dir)
+        return cmd
+
+    def test_check_metadata(self):
+        # let's run the command with no metadata at all
+        # by default, check is checking the metadata
+        # should have some warnings
+        cmd = self._run()
+        assert cmd._warnings == 1
+
+        # now let's add the required fields
+        # and run it again, to make sure we don't get
+        # any warning anymore
+        metadata = {
+            'url': 'xxx',
+            'author': 'xxx',
+            'author_email': 'xxx',
+            'name': 'xxx',
+            'version': 'xxx',
+        }
+        cmd = self._run(metadata)
+        assert cmd._warnings == 0
+
+        # now with the strict mode, we should
+        # get an error if there are missing metadata
+        with pytest.raises(DistutilsSetupError):
+            self._run({}, **{'strict': 1})
+
+        # and of course, no error when all metadata are present
+        cmd = self._run(metadata, strict=True)
+        assert cmd._warnings == 0
+
+        # now a test with non-ASCII characters
+        metadata = {
+            'url': 'xxx',
+            'author': '\u00c9ric',
+            'author_email': 'xxx',
+            'name': 'xxx',
+            'version': 'xxx',
+            'description': 'Something about esszet \u00df',
+            'long_description': 'More things about esszet \u00df',
+        }
+        cmd = self._run(metadata)
+        assert cmd._warnings == 0
+
+    def test_check_author_maintainer(self):
+        for kind in ("author", "maintainer"):
+            # ensure no warning when author_email or maintainer_email is given
+            # (the spec allows these fields to take the form "Name <email>")
+            metadata = {
+                'url': 'xxx',
+                kind + '_email': 'Name <name@email.com>',
+                'name': 'xxx',
+                'version': 'xxx',
+            }
+            cmd = self._run(metadata)
+            assert cmd._warnings == 0
+
+            # the check should not warn if only email is given
+            metadata[kind + '_email'] = 'name@email.com'
+            cmd = self._run(metadata)
+            assert cmd._warnings == 0
+
+            # the check should not warn if only the name is given
+            metadata[kind] = "Name"
+            del metadata[kind + '_email']
+            cmd = self._run(metadata)
+            assert cmd._warnings == 0
+
+    def test_check_document(self):
+        pytest.importorskip('docutils')
+        pkg_info, dist = self.create_dist()
+        cmd = check(dist)
+
+        # let's see if it detects broken rest
+        broken_rest = 'title\n===\n\ntest'
+        msgs = cmd._check_rst_data(broken_rest)
+        assert len(msgs) == 1
+
+        # and non-broken rest
+        rest = 'title\n=====\n\ntest'
+        msgs = cmd._check_rst_data(rest)
+        assert len(msgs) == 0
+
+    def test_check_restructuredtext(self):
+        pytest.importorskip('docutils')
+        # let's see if it detects broken rest in long_description
+        broken_rest = 'title\n===\n\ntest'
+        pkg_info, dist = self.create_dist(long_description=broken_rest)
+        cmd = check(dist)
+        cmd.check_restructuredtext()
+        assert cmd._warnings == 1
+
+        # let's see if we have an error with strict=True
+        metadata = {
+            'url': 'xxx',
+            'author': 'xxx',
+            'author_email': 'xxx',
+            'name': 'xxx',
+            'version': 'xxx',
+            'long_description': broken_rest,
+        }
+        with pytest.raises(DistutilsSetupError):
+            self._run(metadata, **{'strict': 1, 'restructuredtext': 1})
+
+        # and non-broken rest, including a non-ASCII character to test #12114
+        metadata['long_description'] = 'title\n=====\n\ntest \u00df'
+        cmd = self._run(metadata, strict=True, restructuredtext=True)
+        assert cmd._warnings == 0
+
+        # check that includes work to test #31292
+        metadata['long_description'] = 'title\n=====\n\n.. include:: includetest.rst'
+        cmd = self._run(metadata, cwd=HERE, strict=True, restructuredtext=True)
+        assert cmd._warnings == 0
+
+    def test_check_restructuredtext_with_syntax_highlight(self):
+        pytest.importorskip('docutils')
+        # Don't fail if there is a `code` or `code-block` directive
+
+        example_rst_docs = [
+            textwrap.dedent(
+                """\
+                Here's some code:
+
+                .. code:: python
+
+                    def foo():
+                        pass
+                """
+            ),
+            textwrap.dedent(
+                """\
+                Here's some code:
+
+                .. code-block:: python
+
+                    def foo():
+                        pass
+                """
+            ),
+        ]
+
+        for rest_with_code in example_rst_docs:
+            pkg_info, dist = self.create_dist(long_description=rest_with_code)
+            cmd = check(dist)
+            cmd.check_restructuredtext()
+            msgs = cmd._check_rst_data(rest_with_code)
+            if pygments is not None:
+                assert len(msgs) == 0
+            else:
+                assert len(msgs) == 1
+                assert (
+                    str(msgs[0][1])
+                    == 'Cannot analyze code. Pygments package not found.'
+                )
+
+    def test_check_all(self):
+        with pytest.raises(DistutilsSetupError):
+            self._run({}, **{'strict': 1, 'restructuredtext': 1})
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_clean.py ADDED
@@ -0,0 +1,45 @@
+"""Tests for distutils.command.clean."""
+
+import os
+from distutils.command.clean import clean
+from distutils.tests import support
+
+
+class TestClean(support.TempdirManager):
+    def test_simple_run(self):
+        pkg_dir, dist = self.create_dist()
+        cmd = clean(dist)
+
+        # let's add some elements clean should remove
+        dirs = [
+            (d, os.path.join(pkg_dir, d))
+            for d in (
+                'build_temp',
+                'build_lib',
+                'bdist_base',
+                'build_scripts',
+                'build_base',
+            )
+        ]
+
+        for name, path in dirs:
+            os.mkdir(path)
+            setattr(cmd, name, path)
+            if name == 'build_base':
+                continue
+            for f in ('one', 'two', 'three'):
+                self.write_file(os.path.join(path, f))
+
+        # let's run the command
+        cmd.all = 1
+        cmd.ensure_finalized()
+        cmd.run()
+
+        # make sure the files were removed
+        for _name, path in dirs:
+            assert not os.path.exists(path), f'{path} was not removed'
+
+        # let's run the command again (should spit warnings but succeed)
+        cmd.all = 1
+        cmd.ensure_finalized()
+        cmd.run()
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_config_cmd.py ADDED
@@ -0,0 +1,87 @@
+"""Tests for distutils.command.config."""
+
+import os
+import sys
+from distutils._log import log
+from distutils.command.config import config, dump_file
+from distutils.tests import missing_compiler_executable, support
+
+import more_itertools
+import path
+import pytest
+
+
+@pytest.fixture(autouse=True)
+def info_log(request, monkeypatch):
+    self = request.instance
+    self._logs = []
+    monkeypatch.setattr(log, 'info', self._info)
+
+
+@support.combine_markers
+class TestConfig(support.TempdirManager):
+    def _info(self, msg, *args):
+        for line in msg.splitlines():
+            self._logs.append(line)
+
+    def test_dump_file(self):
+        this_file = path.Path(__file__).with_suffix('.py')
+        with this_file.open(encoding='utf-8') as f:
+            numlines = more_itertools.ilen(f)
+
+        dump_file(this_file, 'I am the header')
+        assert len(self._logs) == numlines + 1
+
+    @pytest.mark.skipif('platform.system() == "Windows"')
+    def test_search_cpp(self):
+        cmd = missing_compiler_executable(['preprocessor'])
+        if cmd is not None:
+            self.skipTest(f'The {cmd!r} command is not found')
+        pkg_dir, dist = self.create_dist()
+        cmd = config(dist)
+        cmd._check_compiler()
+        compiler = cmd.compiler
+        if sys.platform[:3] == "aix" and "xlc" in compiler.preprocessor[0].lower():
+            self.skipTest(
+                'xlc: The -E option overrides the -P, -o, and -qsyntaxonly options'
+            )
+
+        # simple pattern searches
+        match = cmd.search_cpp(pattern='xxx', body='/* xxx */')
+        assert match == 0
+
+        match = cmd.search_cpp(pattern='_configtest', body='/* xxx */')
+        assert match == 1
+
+    def test_finalize_options(self):
+        # finalize_options does a bit of transformation
+        # on options
+        pkg_dir, dist = self.create_dist()
+        cmd = config(dist)
+        cmd.include_dirs = f'one{os.pathsep}two'
+        cmd.libraries = 'one'
+        cmd.library_dirs = f'three{os.pathsep}four'
+        cmd.ensure_finalized()
+
+        assert cmd.include_dirs == ['one', 'two']
+        assert cmd.libraries == ['one']
+        assert cmd.library_dirs == ['three', 'four']
+
+    def test_clean(self):
+        # _clean removes files
+        tmp_dir = self.mkdtemp()
+        f1 = os.path.join(tmp_dir, 'one')
+        f2 = os.path.join(tmp_dir, 'two')
+
+        self.write_file(f1, 'xxx')
+        self.write_file(f2, 'xxx')
+
+        for f in (f1, f2):
+            assert os.path.exists(f)
+
+        pkg_dir, dist = self.create_dist()
+        cmd = config(dist)
+        cmd._clean(f1, f2)
+
+        for f in (f1, f2):
+            assert not os.path.exists(f)
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_dir_util.py ADDED
@@ -0,0 +1,139 @@
+"""Tests for distutils.dir_util."""
+
+import os
+import pathlib
+import stat
+import sys
+import unittest.mock as mock
+from distutils import dir_util, errors
+from distutils.dir_util import (
+    copy_tree,
+    create_tree,
+    ensure_relative,
+    mkpath,
+    remove_tree,
+)
+from distutils.tests import support
+
+import jaraco.path
+import path
+import pytest
+
+
+@pytest.fixture(autouse=True)
+def stuff(request, monkeypatch, distutils_managed_tempdir):
+    self = request.instance
+    tmp_dir = self.mkdtemp()
+    self.root_target = os.path.join(tmp_dir, 'deep')
+    self.target = os.path.join(self.root_target, 'here')
+    self.target2 = os.path.join(tmp_dir, 'deep2')
+
+
+class TestDirUtil(support.TempdirManager):
+    def test_mkpath_remove_tree_verbosity(self, caplog):
+        mkpath(self.target, verbose=False)
+        assert not caplog.records
+        remove_tree(self.root_target, verbose=False)
+
+        mkpath(self.target, verbose=True)
+        wanted = [f'creating {self.target}']
+        assert caplog.messages == wanted
+        caplog.clear()
+
+        remove_tree(self.root_target, verbose=True)
+        wanted = [f"removing '{self.root_target}' (and everything under it)"]
+        assert caplog.messages == wanted
+
+    @pytest.mark.skipif("platform.system() == 'Windows'")
+    def test_mkpath_with_custom_mode(self):
+        # Get and set the current umask value for testing mode bits.
+        umask = os.umask(0o002)
+        os.umask(umask)
+        mkpath(self.target, 0o700)
+        assert stat.S_IMODE(os.stat(self.target).st_mode) == 0o700 & ~umask
+        mkpath(self.target2, 0o555)
+        assert stat.S_IMODE(os.stat(self.target2).st_mode) == 0o555 & ~umask
+
+    def test_create_tree_verbosity(self, caplog):
+        create_tree(self.root_target, ['one', 'two', 'three'], verbose=False)
+        assert caplog.messages == []
+        remove_tree(self.root_target, verbose=False)
+
+        wanted = [f'creating {self.root_target}']
+        create_tree(self.root_target, ['one', 'two', 'three'], verbose=True)
+        assert caplog.messages == wanted
+
+        remove_tree(self.root_target, verbose=False)
+
+    def test_copy_tree_verbosity(self, caplog):
+        mkpath(self.target, verbose=False)
+
+        copy_tree(self.target, self.target2, verbose=False)
+        assert caplog.messages == []
+
+        remove_tree(self.root_target, verbose=False)
+
+        mkpath(self.target, verbose=False)
+        a_file = path.Path(self.target) / 'ok.txt'
+        jaraco.path.build({'ok.txt': 'some content'}, self.target)
+
+        wanted = [f'copying {a_file} -> {self.target2}']
+        copy_tree(self.target, self.target2, verbose=True)
+        assert caplog.messages == wanted
+
+        remove_tree(self.root_target, verbose=False)
+        remove_tree(self.target2, verbose=False)
+
+    def test_copy_tree_skips_nfs_temp_files(self):
+        mkpath(self.target, verbose=False)
+
+        jaraco.path.build({'ok.txt': 'some content', '.nfs123abc': ''}, self.target)
+
+        copy_tree(self.target, self.target2)
+        assert os.listdir(self.target2) == ['ok.txt']
+
+        remove_tree(self.root_target, verbose=False)
+        remove_tree(self.target2, verbose=False)
+
+    def test_ensure_relative(self):
+        if os.sep == '/':
+            assert ensure_relative('/home/foo') == 'home/foo'
+            assert ensure_relative('some/path') == 'some/path'
+        else:  # \\
+            assert ensure_relative('c:\\home\\foo') == 'c:home\\foo'
+            assert ensure_relative('home\\foo') == 'home\\foo'
+
+    def test_copy_tree_exception_in_listdir(self):
+        """
+        An exception in listdir should raise a DistutilsFileError
+        """
+        with (
+            mock.patch("os.listdir", side_effect=OSError()),
+            pytest.raises(errors.DistutilsFileError),
+        ):
+            src = self.tempdirs[-1]
+            dir_util.copy_tree(src, None)
+
+    def test_mkpath_exception_uncached(self, monkeypatch, tmp_path):
+        """
+        Caching should not remember failed attempts.
+
+        pypa/distutils#304
+        """
+
+        class FailPath(pathlib.Path):
+            def mkdir(self, *args, **kwargs):
+                raise OSError("Failed to create directory")
+
+            if sys.version_info < (3, 12):
+                _flavour = pathlib.Path()._flavour
+
+        target = tmp_path / 'foodir'
+
+        with pytest.raises(errors.DistutilsFileError):
+            mkpath(FailPath(target))
+
+        assert not target.exists()
+
+        mkpath(target)
+        assert target.exists()
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_install_scripts.py ADDED
@@ -0,0 +1,52 @@
+"""Tests for distutils.command.install_scripts."""
+
+import os
+from distutils.command.install_scripts import install_scripts
+from distutils.core import Distribution
+from distutils.tests import support
+
+from . import test_build_scripts
+
+
+class TestInstallScripts(support.TempdirManager):
+    def test_default_settings(self):
+        dist = Distribution()
+        dist.command_obj["build"] = support.DummyCommand(build_scripts="/foo/bar")
+        dist.command_obj["install"] = support.DummyCommand(
+            install_scripts="/splat/funk",
+            force=True,
+            skip_build=True,
+        )
+        cmd = install_scripts(dist)
+        assert not cmd.force
+        assert not cmd.skip_build
+        assert cmd.build_dir is None
+        assert cmd.install_dir is None
+
+        cmd.finalize_options()
+
+        assert cmd.force
+        assert cmd.skip_build
+        assert cmd.build_dir == "/foo/bar"
+        assert cmd.install_dir == "/splat/funk"
+
+    def test_installation(self):
+        source = self.mkdtemp()
+
+        expected = test_build_scripts.TestBuildScripts.write_sample_scripts(source)
+
+        target = self.mkdtemp()
+        dist = Distribution()
+        dist.command_obj["build"] = support.DummyCommand(build_scripts=source)
+        dist.command_obj["install"] = support.DummyCommand(
+            install_scripts=target,
+            force=True,
+            skip_build=True,
+        )
+        cmd = install_scripts(dist)
+        cmd.finalize_options()
+        cmd.run()
+
+        installed = os.listdir(target)
+        for name in expected:
+            assert name in installed
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_log.py ADDED
@@ -0,0 +1,12 @@
+"""Tests for distutils.log"""
+
+import logging
+from distutils._log import log
+
+
+class TestLog:
+    def test_non_ascii(self, caplog):
+        caplog.set_level(logging.DEBUG)
+        log.debug('Dεbug\tMėssãge')
+        log.fatal('Fαtal\tÈrrōr')
+        assert caplog.messages == ['Dεbug\tMėssãge', 'Fαtal\tÈrrōr']
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_mingwccompiler.py ADDED
@@ -0,0 +1,56 @@
+ from distutils import sysconfig
+ from distutils.errors import CCompilerError, DistutilsPlatformError
+ from distutils.util import is_mingw, split_quoted
+
+ import pytest
+
+
+ class TestMingw32CCompiler:
+     @pytest.mark.skipif(not is_mingw(), reason='not on mingw')
+     def test_compiler_type(self):
+         from distutils.cygwinccompiler import Mingw32CCompiler
+
+         compiler = Mingw32CCompiler()
+         assert compiler.compiler_type == 'mingw32'
+
+     @pytest.mark.skipif(not is_mingw(), reason='not on mingw')
+     def test_set_executables(self, monkeypatch):
+         from distutils.cygwinccompiler import Mingw32CCompiler
+
+         monkeypatch.setenv('CC', 'cc')
+         monkeypatch.setenv('CXX', 'c++')
+
+         compiler = Mingw32CCompiler()
+
+         assert compiler.compiler == split_quoted('cc -O -Wall')
+         assert compiler.compiler_so == split_quoted('cc -shared -O -Wall')
+         assert compiler.compiler_cxx == split_quoted('c++ -O -Wall')
+         assert compiler.linker_exe == split_quoted('cc')
+         assert compiler.linker_so == split_quoted('cc -shared')
+
+     @pytest.mark.skipif(not is_mingw(), reason='not on mingw')
+     def test_runtime_library_dir_option(self):
+         from distutils.cygwinccompiler import Mingw32CCompiler
+
+         compiler = Mingw32CCompiler()
+         with pytest.raises(DistutilsPlatformError):
+             compiler.runtime_library_dir_option('/usr/lib')
+
+     @pytest.mark.skipif(not is_mingw(), reason='not on mingw')
+     def test_cygwincc_error(self, monkeypatch):
+         import distutils.cygwinccompiler
+
+         monkeypatch.setattr(distutils.cygwinccompiler, 'is_cygwincc', lambda _: True)
+
+         with pytest.raises(CCompilerError):
+             distutils.cygwinccompiler.Mingw32CCompiler()
+
+     @pytest.mark.skipif('sys.platform == "cygwin"')
+     def test_customize_compiler_with_msvc_python(self):
+         from distutils.cygwinccompiler import Mingw32CCompiler
+
+         # In case we have an MSVC Python build, but still want to use
+         # Mingw32CCompiler, then customize_compiler() shouldn't fail at least.
+         # https://github.com/pypa/setuptools/issues/4456
+         compiler = Mingw32CCompiler()
+         sysconfig.customize_compiler(compiler)
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_spawn.py ADDED
@@ -0,0 +1,141 @@
+ """Tests for distutils.spawn."""
+
+ import os
+ import stat
+ import sys
+ import unittest.mock as mock
+ from distutils.errors import DistutilsExecError
+ from distutils.spawn import find_executable, spawn
+ from distutils.tests import support
+
+ import path
+ import pytest
+ from test.support import unix_shell
+
+ from .compat import py39 as os_helper
+
+
+ class TestSpawn(support.TempdirManager):
+     @pytest.mark.skipif("os.name not in ('nt', 'posix')")
+     def test_spawn(self):
+         tmpdir = self.mkdtemp()
+
+         # creating something executable
+         # through the shell that returns 1
+         if sys.platform != 'win32':
+             exe = os.path.join(tmpdir, 'foo.sh')
+             self.write_file(exe, f'#!{unix_shell}\nexit 1')
+         else:
+             exe = os.path.join(tmpdir, 'foo.bat')
+             self.write_file(exe, 'exit 1')
+
+         os.chmod(exe, 0o777)
+         with pytest.raises(DistutilsExecError):
+             spawn([exe])
+
+         # now something that works
+         if sys.platform != 'win32':
+             exe = os.path.join(tmpdir, 'foo.sh')
+             self.write_file(exe, f'#!{unix_shell}\nexit 0')
+         else:
+             exe = os.path.join(tmpdir, 'foo.bat')
+             self.write_file(exe, 'exit 0')
+
+         os.chmod(exe, 0o777)
+         spawn([exe])  # should work without any error
+
+     def test_find_executable(self, tmp_path):
+         program_path = self._make_executable(tmp_path, '.exe')
+         program = program_path.name
+         program_noeext = program_path.with_suffix('').name
+         filename = str(program_path)
+         tmp_dir = path.Path(tmp_path)
+
+         # test path parameter
+         rv = find_executable(program, path=tmp_dir)
+         assert rv == filename
+
+         if sys.platform == 'win32':
+             # test without ".exe" extension
+             rv = find_executable(program_noeext, path=tmp_dir)
+             assert rv == filename
+
+         # test find in the current directory
+         with tmp_dir:
+             rv = find_executable(program)
+             assert rv == program
+
+         # test non-existent program
+         dont_exist_program = "dontexist_" + program
+         rv = find_executable(dont_exist_program, path=tmp_dir)
+         assert rv is None
+
+         # PATH='': no match, except in the current directory
+         with os_helper.EnvironmentVarGuard() as env:
+             env['PATH'] = ''
+             with (
+                 mock.patch(
+                     'distutils.spawn.os.confstr', return_value=tmp_dir, create=True
+                 ),
+                 mock.patch('distutils.spawn.os.defpath', tmp_dir),
+             ):
+                 rv = find_executable(program)
+                 assert rv is None
+
+                 # look in current directory
+                 with tmp_dir:
+                     rv = find_executable(program)
+                     assert rv == program
+
+         # PATH=':': explicitly looks in the current directory
+         with os_helper.EnvironmentVarGuard() as env:
+             env['PATH'] = os.pathsep
+             with (
+                 mock.patch('distutils.spawn.os.confstr', return_value='', create=True),
+                 mock.patch('distutils.spawn.os.defpath', ''),
+             ):
+                 rv = find_executable(program)
+                 assert rv is None
+
+                 # look in current directory
+                 with tmp_dir:
+                     rv = find_executable(program)
+                     assert rv == program
+
+         # missing PATH: test os.confstr("CS_PATH") and os.defpath
+         with os_helper.EnvironmentVarGuard() as env:
+             env.pop('PATH', None)
+
+             # without confstr
+             with (
+                 mock.patch(
+                     'distutils.spawn.os.confstr', side_effect=ValueError, create=True
+                 ),
+                 mock.patch('distutils.spawn.os.defpath', tmp_dir),
+             ):
+                 rv = find_executable(program)
+                 assert rv == filename
+
+             # with confstr
+             with (
+                 mock.patch(
+                     'distutils.spawn.os.confstr', return_value=tmp_dir, create=True
+                 ),
+                 mock.patch('distutils.spawn.os.defpath', ''),
+             ):
+                 rv = find_executable(program)
+                 assert rv == filename
+
+     @staticmethod
+     def _make_executable(tmp_path, ext):
+         # Give the temporary program a suffix regardless of platform.
+         # It's needed on Windows and not harmful on others.
+         program = tmp_path.joinpath('program').with_suffix(ext)
+         program.write_text("", encoding='utf-8')
+         program.chmod(stat.S_IXUSR)
+         return program
+
+     def test_spawn_missing_exe(self):
+         with pytest.raises(DistutilsExecError) as ctx:
+             spawn(['does-not-exist'])
+         assert "command 'does-not-exist' failed" in str(ctx.value)
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/test_versionpredicate.py ADDED
File without changes
llava/lib/python3.10/site-packages/setuptools/_distutils/tests/unix_compat.py ADDED
@@ -0,0 +1,17 @@
+ import sys
+
+ try:
+     import grp
+     import pwd
+ except ImportError:
+     grp = pwd = None
+
+ import pytest
+
+ UNIX_ID_SUPPORT = grp and pwd
+ UID_0_SUPPORT = UNIX_ID_SUPPORT and sys.platform != "cygwin"
+
+ require_unix_id = pytest.mark.skipif(
+     not UNIX_ID_SUPPORT, reason="Requires grp and pwd support"
+ )
+ require_uid_0 = pytest.mark.skipif(not UID_0_SUPPORT, reason="Requires UID 0 support")
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
+ pip
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/LICENSE ADDED
@@ -0,0 +1,20 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2018 Alex Grönholm
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of
+ this software and associated documentation files (the "Software"), to deal in
+ the Software without restriction, including without limitation the rights to
+ use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+ the Software, and to permit persons to whom the Software is furnished to do so,
+ subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+ COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+ IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/METADATA ADDED
@@ -0,0 +1,105 @@
+ Metadata-Version: 2.1
+ Name: anyio
+ Version: 4.7.0
+ Summary: High level compatibility layer for multiple asynchronous event loop implementations
+ Author-email: Alex Grönholm <alex.gronholm@nextday.fi>
+ License: MIT
+ Project-URL: Documentation, https://anyio.readthedocs.io/en/latest/
+ Project-URL: Changelog, https://anyio.readthedocs.io/en/stable/versionhistory.html
+ Project-URL: Source code, https://github.com/agronholm/anyio
+ Project-URL: Issue tracker, https://github.com/agronholm/anyio/issues
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Framework :: AnyIO
+ Classifier: Typing :: Typed
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.9
+ Description-Content-Type: text/x-rst
+ License-File: LICENSE
+ Requires-Dist: exceptiongroup>=1.0.2; python_version < "3.11"
+ Requires-Dist: idna>=2.8
+ Requires-Dist: sniffio>=1.1
+ Requires-Dist: typing_extensions>=4.5; python_version < "3.13"
+ Provides-Extra: trio
+ Requires-Dist: trio>=0.26.1; extra == "trio"
+ Provides-Extra: test
+ Requires-Dist: anyio[trio]; extra == "test"
+ Requires-Dist: coverage[toml]>=7; extra == "test"
+ Requires-Dist: exceptiongroup>=1.2.0; extra == "test"
+ Requires-Dist: hypothesis>=4.0; extra == "test"
+ Requires-Dist: psutil>=5.9; extra == "test"
+ Requires-Dist: pytest>=7.0; extra == "test"
+ Requires-Dist: pytest-mock>=3.6.1; extra == "test"
+ Requires-Dist: trustme; extra == "test"
+ Requires-Dist: truststore>=0.9.1; python_version >= "3.10" and extra == "test"
+ Requires-Dist: uvloop>=0.21; (platform_python_implementation == "CPython" and platform_system != "Windows") and extra == "test"
+ Provides-Extra: doc
+ Requires-Dist: packaging; extra == "doc"
+ Requires-Dist: Sphinx~=7.4; extra == "doc"
+ Requires-Dist: sphinx_rtd_theme; extra == "doc"
+ Requires-Dist: sphinx-autodoc-typehints>=1.2.0; extra == "doc"
+
+ .. image:: https://github.com/agronholm/anyio/actions/workflows/test.yml/badge.svg
+   :target: https://github.com/agronholm/anyio/actions/workflows/test.yml
+   :alt: Build Status
+ .. image:: https://coveralls.io/repos/github/agronholm/anyio/badge.svg?branch=master
+   :target: https://coveralls.io/github/agronholm/anyio?branch=master
+   :alt: Code Coverage
+ .. image:: https://readthedocs.org/projects/anyio/badge/?version=latest
+   :target: https://anyio.readthedocs.io/en/latest/?badge=latest
+   :alt: Documentation
+ .. image:: https://badges.gitter.im/gitterHQ/gitter.svg
+   :target: https://gitter.im/python-trio/AnyIO
+   :alt: Gitter chat
+
+ AnyIO is an asynchronous networking and concurrency library that works on top of either asyncio_ or
+ trio_. It implements trio-like `structured concurrency`_ (SC) on top of asyncio and works in harmony
+ with the native SC of trio itself.
+
+ Applications and libraries written against AnyIO's API will run unmodified on either asyncio_ or
+ trio_. AnyIO can also be adopted into a library or application incrementally – bit by bit, no full
+ refactoring necessary. It will blend in with the native libraries of your chosen backend.
+
+ Documentation
+ -------------
+
+ View full documentation at: https://anyio.readthedocs.io/
+
+ Features
+ --------
+
+ AnyIO offers the following functionality:
+
+ * Task groups (nurseries_ in trio terminology)
+ * High-level networking (TCP, UDP and UNIX sockets)
+
+   * `Happy eyeballs`_ algorithm for TCP connections (more robust than that of asyncio on Python
+     3.8)
+   * async/await style UDP sockets (unlike asyncio where you still have to use Transports and
+     Protocols)
+
+ * A versatile API for byte streams and object streams
+ * Inter-task synchronization and communication (locks, conditions, events, semaphores, object
+   streams)
+ * Worker threads
+ * Subprocesses
+ * Asynchronous file I/O (using worker threads)
+ * Signal handling
+
+ AnyIO also comes with its own pytest_ plugin which also supports asynchronous fixtures.
+ It even works with the popular Hypothesis_ library.
+
+ .. _asyncio: https://docs.python.org/3/library/asyncio.html
+ .. _trio: https://github.com/python-trio/trio
+ .. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
+ .. _nurseries: https://trio.readthedocs.io/en/stable/reference-core.html#nurseries-and-spawning
+ .. _Happy eyeballs: https://en.wikipedia.org/wiki/Happy_Eyeballs
+ .. _pytest: https://docs.pytest.org/en/latest/
+ .. _Hypothesis: https://hypothesis.works/
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/RECORD ADDED
@@ -0,0 +1,85 @@
+ anyio-4.7.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+ anyio-4.7.0.dist-info/LICENSE,sha256=U2GsncWPLvX9LpsJxoKXwX8ElQkJu8gCO9uC6s8iwrA,1081
+ anyio-4.7.0.dist-info/METADATA,sha256=A-A-n0m-esMw8lYv8a9kqsr84J2aFh8MvqcTq2Xx_so,4653
+ anyio-4.7.0.dist-info/RECORD,,
+ anyio-4.7.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ anyio-4.7.0.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
+ anyio-4.7.0.dist-info/entry_points.txt,sha256=_d6Yu6uiaZmNe0CydowirE9Cmg7zUL2g08tQpoS3Qvc,39
+ anyio-4.7.0.dist-info/top_level.txt,sha256=QglSMiWX8_5dpoVAEIHdEYzvqFMdSYWmCj6tYw2ITkQ,6
+ anyio/__init__.py,sha256=5NCKQNJueCeIJqVbOpAQdho2HIQrQvcnfQjuEhAiZcc,4433
+ anyio/__pycache__/__init__.cpython-310.pyc,,
+ anyio/__pycache__/from_thread.cpython-310.pyc,,
+ anyio/__pycache__/lowlevel.cpython-310.pyc,,
+ anyio/__pycache__/pytest_plugin.cpython-310.pyc,,
+ anyio/__pycache__/to_process.cpython-310.pyc,,
+ anyio/__pycache__/to_thread.cpython-310.pyc,,
+ anyio/_backends/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ anyio/_backends/__pycache__/__init__.cpython-310.pyc,,
+ anyio/_backends/__pycache__/_asyncio.cpython-310.pyc,,
+ anyio/_backends/__pycache__/_trio.cpython-310.pyc,,
+ anyio/_backends/_asyncio.py,sha256=i5Qe4IBdiWRlww0qIUAVsF-K0z30bgZakuMePpNbdro,94051
+ anyio/_backends/_trio.py,sha256=7oGxbqeveiesGm2pAnCRBydqy-Gbistn_xfsmKhSLLg,40371
+ anyio/_core/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ anyio/_core/__pycache__/__init__.cpython-310.pyc,,
+ anyio/_core/__pycache__/_asyncio_selector_thread.cpython-310.pyc,,
+ anyio/_core/__pycache__/_eventloop.cpython-310.pyc,,
+ anyio/_core/__pycache__/_exceptions.cpython-310.pyc,,
+ anyio/_core/__pycache__/_fileio.cpython-310.pyc,,
+ anyio/_core/__pycache__/_resources.cpython-310.pyc,,
+ anyio/_core/__pycache__/_signals.cpython-310.pyc,,
+ anyio/_core/__pycache__/_sockets.cpython-310.pyc,,
+ anyio/_core/__pycache__/_streams.cpython-310.pyc,,
+ anyio/_core/__pycache__/_subprocesses.cpython-310.pyc,,
+ anyio/_core/__pycache__/_synchronization.cpython-310.pyc,,
+ anyio/_core/__pycache__/_tasks.cpython-310.pyc,,
+ anyio/_core/__pycache__/_testing.cpython-310.pyc,,
+ anyio/_core/__pycache__/_typedattr.cpython-310.pyc,,
+ anyio/_core/_asyncio_selector_thread.py,sha256=vTdZBWaxRgVcgUaRb5uBwQ_VGgY3qPKF7l91IJ5Mqzo,4773
+ anyio/_core/_eventloop.py,sha256=t_tAwBFPjF8jrZGjlJ6bbYy6KA3bjsbZxV9mvh9t1i0,4695
+ anyio/_core/_exceptions.py,sha256=bKPr2QbkYG7nIb425L5JePUie9bGc9XfkY0y4JKWvFM,2488
+ anyio/_core/_fileio.py,sha256=DqnG_zvQFMqiIFaUeDRC1Ts3LT0FWHkWtGgm-684hvQ,20957
+ anyio/_core/_resources.py,sha256=NbmU5O5UX3xEyACnkmYX28Fmwdl-f-ny0tHym26e0w0,435
+ anyio/_core/_signals.py,sha256=vulT1M1xdLYtAR-eY5TamIgaf1WTlOwOrMGwswlTTr8,905
+ anyio/_core/_sockets.py,sha256=vQ5GnSDLHjEhHhV2yvsdiPs5wmPxxb1kRsv3RM5lbQk,26951
+ anyio/_core/_streams.py,sha256=OnaKgoDD-FcMSwLvkoAUGP51sG2ZdRvMpxt9q2w1gYA,1804
+ anyio/_core/_subprocesses.py,sha256=WquR6sHrnaZofaeqnL8U4Yv___msVW_WqivleLHK4zI,7760
+ anyio/_core/_synchronization.py,sha256=tct5FJFdgYjiEMtUeg5NGG15tf-2Qd7VaWuSgzS5dIU,20347
+ anyio/_core/_tasks.py,sha256=pvVEX2Fw159sf0ypAPerukKsZgRRwvFFedVW52nR2Vk,4764
+ anyio/_core/_testing.py,sha256=YUGwA5cgFFbUTv4WFd7cv_BSVr4ryTtPp8owQA3JdWE,2118
+ anyio/_core/_typedattr.py,sha256=P4ozZikn3-DbpoYcvyghS_FOYAgbmUxeoU8-L_07pZM,2508
+ anyio/abc/__init__.py,sha256=c2OQbTCS_fQowviMXanLPh8m29ccwkXmpDr7uyNZYOo,2652
+ anyio/abc/__pycache__/__init__.cpython-310.pyc,,
+ anyio/abc/__pycache__/_eventloop.cpython-310.pyc,,
+ anyio/abc/__pycache__/_resources.cpython-310.pyc,,
+ anyio/abc/__pycache__/_sockets.cpython-310.pyc,,
+ anyio/abc/__pycache__/_streams.cpython-310.pyc,,
+ anyio/abc/__pycache__/_subprocesses.cpython-310.pyc,,
+ anyio/abc/__pycache__/_tasks.cpython-310.pyc,,
+ anyio/abc/__pycache__/_testing.cpython-310.pyc,,
+ anyio/abc/_eventloop.py,sha256=Wd_3C3hLm0ex5z_eHHWGqvLle2OKCSexJSZVnwQNGV4,9658
+ anyio/abc/_resources.py,sha256=DrYvkNN1hH6Uvv5_5uKySvDsnknGVDe8FCKfko0VtN8,783
+ anyio/abc/_sockets.py,sha256=KhWtJxan8jpBXKwPaFeQzI4iRXdFaOIn0HXtDZnaO7U,6262
+ anyio/abc/_streams.py,sha256=GzST5Q2zQmxVzdrAqtbSyHNxkPlIC9AzeZJg_YyPAXw,6598
+ anyio/abc/_subprocesses.py,sha256=cumAPJTktOQtw63IqG0lDpyZqu_l1EElvQHMiwJgL08,2067
+ anyio/abc/_tasks.py,sha256=yJWbMwowvqjlAX4oJ3l9Is1w-zwynr2lX1Z02AWJqsY,3080
+ anyio/abc/_testing.py,sha256=tBJUzkSfOXJw23fe8qSJ03kJlShOYjjaEyFB6k6MYT8,1821
+ anyio/from_thread.py,sha256=dbi5TUH45_Sg_jZ8Vv1NJWVohe0WeQ_OaCvXIKveAGg,17478
+ anyio/lowlevel.py,sha256=nkgmW--SdxGVp0cmLUYazjkigveRm5HY7-gW8Bpp9oY,4169
+ anyio/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ anyio/pytest_plugin.py,sha256=vjGhGRHD31OyMgJRFQrMvExhx3Ea8KbyDqYKmiSDdXA,6712
+ anyio/streams/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ anyio/streams/__pycache__/__init__.cpython-310.pyc,,
+ anyio/streams/__pycache__/buffered.cpython-310.pyc,,
+ anyio/streams/__pycache__/file.cpython-310.pyc,,
+ anyio/streams/__pycache__/memory.cpython-310.pyc,,
+ anyio/streams/__pycache__/stapled.cpython-310.pyc,,
+ anyio/streams/__pycache__/text.cpython-310.pyc,,
+ anyio/streams/__pycache__/tls.cpython-310.pyc,,
+ anyio/streams/buffered.py,sha256=UCldKC168YuLvT7n3HtNPnQ2iWAMSTYQWbZvzLwMwkM,4500
+ anyio/streams/file.py,sha256=6uoTNb5KbMoj-6gS3_xrrL8uZN8Q4iIvOS1WtGyFfKw,4383
+ anyio/streams/memory.py,sha256=j8AyOExK4-UPaon_Xbhwax25Vqs0DwFg3ZXc-EIiHjY,10550
+ anyio/streams/stapled.py,sha256=U09pCrmOw9kkNhe6tKopsm1QIMT1lFTFvtb-A7SIe4k,4302
+ anyio/streams/text.py,sha256=6x8w8xlfCZKTUWQoJiMPoMhSSJFUBRKgoBNSBtbd9yg,5094
+ anyio/streams/tls.py,sha256=m3AE2LVSpoRHSIwSoSCupiOVL54EvOFoY3CcwTxcZfg,12742
+ anyio/to_process.py,sha256=cR4n7TssbbJowE_9cWme49zaeuoBuMzqgZ6cBIs0YIs,9571
+ anyio/to_thread.py,sha256=WM2JQ2MbVsd5D5CM08bQiTwzZIvpsGjfH1Fy247KoDQ,2396
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/REQUESTED ADDED
File without changes
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (75.6.0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [pytest11]
+ anyio = anyio.pytest_plugin
minigpt2/lib/python3.10/site-packages/anyio-4.7.0.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ anyio
minigpt2/lib/python3.10/site-packages/ftfy/__init__.py ADDED
@@ -0,0 +1,802 @@
+ """
+ ftfy: fixes text for you
+
+ This is a module for making text less broken. See the `fix_text` function
+ for more information.
+ """
+
+ from __future__ import annotations
+
+ import unicodedata
+ import warnings
+ from collections.abc import Iterator
+ from typing import (
+     Any,
+     BinaryIO,
+     Callable,
+     Literal,
+     NamedTuple,
+     TextIO,
+     cast,
+ )
+
+ from ftfy import bad_codecs, chardata, fixes
+ from ftfy.badness import is_bad
+ from ftfy.formatting import display_ljust
+
+ __version__ = "6.3.1"
+
+
+ # Though this function does nothing, it lets linters know that we're using
+ # ftfy.bad_codecs. See the docstring in `bad_codecs/__init__.py` for more.
+ bad_codecs.ok()
+
+
+ class ExplanationStep(NamedTuple):
+     """
+     A step in an ExplainedText, explaining how to decode text.
+
+     The possible actions are:
+
+     - "encode": take in a string and encode it as bytes, with the given encoding
+     - "decode": take in bytes and decode them as a string, with the given encoding
+     - "transcode": convert bytes to bytes with a particular named function
+     - "apply": convert str to str with a particular named function
+
+     The `parameter` is the name of the encoding or function to use. If it's a
+     function, it must appear in the FIXERS dictionary.
+     """
+
+     action: str
+     parameter: str
+
+     def __repr__(self) -> str:
+         """
+         Get the string representation of an ExplanationStep. We output the
+         representation of the equivalent tuple, for simplicity.
+         """
+         return repr(tuple(self))
+
+
+ class ExplainedText(NamedTuple):
+     """
+     The return type from ftfy's functions that provide an "explanation" of which
+     steps it applied to fix the text, such as :func:`fix_and_explain()`.
+
+     When the 'explain' option is disabled, these functions return the same
+     type, but the `explanation` will be None.
+     """
+
+     text: str
+     explanation: list[ExplanationStep] | None
+
+
+ # Functions that can be applied using `apply_plan`.
+ FIXERS: dict[str, Callable] = {  # type: ignore[type-arg]
+     "unescape_html": fixes.unescape_html,
+     "remove_terminal_escapes": fixes.remove_terminal_escapes,
+     "restore_byte_a0": fixes.restore_byte_a0,
+     "replace_lossy_sequences": fixes.replace_lossy_sequences,
+     "decode_inconsistent_utf8": fixes.decode_inconsistent_utf8,
+     "fix_c1_controls": fixes.fix_c1_controls,
+     "fix_latin_ligatures": fixes.fix_latin_ligatures,
+     "fix_character_width": fixes.fix_character_width,
+     "uncurl_quotes": fixes.uncurl_quotes,
+     "fix_line_breaks": fixes.fix_line_breaks,
+     "fix_surrogates": fixes.fix_surrogates,
+     "remove_control_chars": fixes.remove_control_chars,
+ }
+
+
+ class TextFixerConfig(NamedTuple):
+     r"""
+     A TextFixerConfig object stores configuration options for ftfy.
+
+     It's implemented as a namedtuple with defaults, so you can instantiate
+     it by providing the values to change from their defaults as keyword arguments.
+     For example, to disable 'unescape_html' and keep the rest of the defaults::
+
+         TextFixerConfig(unescape_html=False)
+
+     Here are the options and their default values:
+
+     - `unescape_html`: "auto"
+
+       Configures whether to replace HTML entities such as &amp; with the character
+       they represent. "auto" says to do this by default, but disable it when a
+       literal < character appears, indicating that the input is actual HTML and
+       entities should be preserved. The value can be True, to always enable this
+       fixer, or False, to always disable it.
+
+     - `remove_terminal_escapes`: True
+
+       Removes "ANSI" terminal escapes, such as for changing the color of text in a
+       terminal window.
+
+     - `fix_encoding`: True
+
+       Detect mojibake and attempt to fix it by decoding the text in a different
+       encoding standard.
+
+       The following four options affect how `fix_encoding` works, and do nothing if
+       `fix_encoding` is False:
+
+       - `restore_byte_a0`: True
+
+         Allow a literal space (U+20) to be interpreted as a non-breaking space
+         (U+A0) when that would make it part of a fixable mojibake string.
+
+         Because spaces are very common characters, this could lead to false
+         positives, but we try to apply it only when there's strong evidence for
+         mojibake. Disabling `restore_byte_a0` is safer from false positives,
+         but creates false negatives.
+
+       - `replace_lossy_sequences`: True
+
+         Detect mojibake that has been partially replaced by the characters
+         '�' or '?'. If the mojibake could be decoded otherwise, replace the
+         detected sequence with '�'.
+
+       - `decode_inconsistent_utf8`: True
+
+         When we see sequences that distinctly look like UTF-8 mojibake, but
+         there's no consistent way to reinterpret the string in a new encoding,
+         replace the mojibake with the appropriate UTF-8 characters anyway.
+
+         This helps to decode strings that are concatenated from different
+         encodings.
+
+       - `fix_c1_controls`: True
+
+         Replace C1 control characters (the useless characters U+80 - U+9B that
+         come from Latin-1) with their Windows-1252 equivalents, like HTML5 does,
+         even if the whole string doesn't decode as Latin-1.
+
+     - `fix_latin_ligatures`: True
+
+       Replace common Latin-alphabet ligatures, such as ``ﬁ``, with the
+       letters they're made of.
+
+     - `fix_character_width`: True
+
+       Replace fullwidth Latin characters and halfwidth Katakana with
+       their more standard widths.
+
+     - `uncurl_quotes`: True
+
+       Replace curly quotes with straight quotes.
+
+     - `fix_line_breaks`: True
+
+       Replace various forms of line breaks with the standard Unix line
+       break, ``\n``.
+
+     - `fix_surrogates`: True
+
+       Replace sequences of UTF-16 surrogate codepoints with the character
+       they were meant to encode. This fixes text that was decoded with the
+       obsolete UCS-2 standard, and allows it to support high-numbered
+       codepoints such as emoji.
+
+     - `remove_control_chars`: True
+
+       Remove certain control characters that have no displayed effect on text.
+
+     - `normalization`: "NFC"
+
+       Choose what kind of Unicode normalization is applied. Usually, we apply
+       NFC normalization, so that letters followed by combining characters become
+       single combined characters.
+
+       Changing this to "NFKC" applies more compatibility conversions, such as
+       replacing the 'micro sign' with a standard Greek lowercase mu, which looks
+       identical. However, some NFKC normalizations change the meaning of text,
+       such as converting "10³" to "103".
+
+       `normalization` can be None, to apply no normalization.
+
+     - `max_decode_length`: 1_000_000
+
+       The maximum size of "segment" that ftfy will try to fix all at once.
+
+     - `explain`: True
+
+       Whether to compute 'explanations', lists describing what ftfy changed.
+       When this is False, the explanation will be None, and the code that
+       builds the explanation will be skipped, possibly saving time.
+
+       Functions that accept TextFixerConfig and don't return an explanation
+       will automatically set `explain` to False.
+     """
+
+     unescape_html: str | bool = "auto"
+     remove_terminal_escapes: bool = True
+     fix_encoding: bool = True
+     restore_byte_a0: bool = True
+     replace_lossy_sequences: bool = True
+     decode_inconsistent_utf8: bool = True
+     fix_c1_controls: bool = True
+     fix_latin_ligatures: bool = True
+     fix_character_width: bool = True
+     uncurl_quotes: bool = True
+     fix_line_breaks: bool = True
+     fix_surrogates: bool = True
+     remove_control_chars: bool = True
+     normalization: Literal["NFC", "NFD", "NFKC", "NFKD"] | None = "NFC"
+     max_decode_length: int = 1000000
+     explain: bool = True
+
+
+ def _config_from_kwargs(
+     config: TextFixerConfig, kwargs: dict[str, Any]
+ ) -> TextFixerConfig:
+     """
+     Handle parameters provided as keyword arguments to ftfy's top-level
+     functions, converting them into a TextFixerConfig.
+     """
+     if "fix_entities" in kwargs:
+         warnings.warn(
+             "`fix_entities` has been renamed to `unescape_html`",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+         kwargs = kwargs.copy()
+         kwargs["unescape_html"] = kwargs["fix_entities"]
+         del kwargs["fix_entities"]
+     config = config._replace(**kwargs)
+     return config
+
+
+ BYTES_ERROR_TEXT = """Hey wait, this isn't Unicode.
+
+ ftfy is designed to fix problems with text. Treating bytes like they're
+ interchangeable with Unicode text is usually something that introduces
+ problems with text.
+
+ You should first decode these bytes from the encoding you think they're in.
+ If you're not sure what encoding they're in:
+
+ - First, try to find out. 'utf-8' is a good assumption.
+ - If the encoding is simply unknowable, try running your bytes through
+   ftfy.guess_bytes. As the name implies, this may not always be accurate.
+
+ For more information on the distinction between bytes and text, read the
+ Python Unicode HOWTO:
+
+     http://docs.python.org/3/howto/unicode.html
+ """
+
+
+ def _try_fix(
+     fixer_name: str,
+     text: str,
+     config: TextFixerConfig,
+     steps: list[ExplanationStep] | None,
+ ) -> str:
+     """
+     A helper function used across several 'fixer' steps, deciding whether to
+     apply the fix and whether to record the fix in `steps`.
+     """
+     if getattr(config, fixer_name):
+         fixer = FIXERS[fixer_name]
+         fixed = fixer(text)
+         if steps is not None and fixed != text:
+             steps.append(ExplanationStep("apply", fixer_name))
+         return cast(str, fixed)
+
+     return text
+
+
+ def fix_text(text: str, config: TextFixerConfig | None = None, **kwargs: Any) -> str:
+     r"""
+     Given Unicode text as input, fix inconsistencies and glitches in it,
+     such as mojibake (text that was decoded in the wrong encoding).
+
+     Let's start with some examples:
+
+         >>> fix_text('✔ No problems')
+         '✔ No problems'
+
+         >>> print(fix_text("&macr;\\_(ã\x83\x84)_/&macr;"))
+         ¯\_(ツ)_/¯
+
+         >>> fix_text('Broken text&hellip; it&#x2019;s flubberific!')
+         "Broken text... it's flubberific!"
+
+         >>> fix_text('LOUD NOISES')
+         'LOUD NOISES'
+
+     ftfy applies a number of different fixes to the text, and can accept
+     configuration to select which fixes to apply.
+
+     The configuration takes the form of a :class:`TextFixerConfig` object,
+     and you can see a description of the options in that class's docstring
+     or in the full documentation at ftfy.readthedocs.org.
+
+     For convenience and backward compatibility, the configuration can also
+     take the form of keyword arguments, which will set the equivalently-named
+     fields of the TextFixerConfig object.
+
+     For example, here are two ways to fix text but skip the "uncurl_quotes"
+     step::
+
+         fix_text(text, TextFixerConfig(uncurl_quotes=False))
+         fix_text(text, uncurl_quotes=False)
+
+     This function fixes text in independent segments, which are usually lines
+     of text, or arbitrarily broken up every 1 million codepoints (configurable
+     with `config.max_decode_length`) if there aren't enough line breaks. The
+     bound on segment lengths helps to avoid unbounded slowdowns.
+
+     ftfy can also provide an 'explanation', a list of transformations it applied
+     to the text that would fix more text like it. This function doesn't provide
+     explanations (because there may be different fixes for different segments
+     of text).
+
+     To get an explanation, use the :func:`fix_and_explain()` function, which
+     fixes the string in one segment and explains what it fixed.
338
+ """
339
+
340
+ if config is None:
341
+ config = TextFixerConfig(explain=False)
342
+ config = _config_from_kwargs(config, kwargs)
343
+ if isinstance(text, bytes):
344
+ raise UnicodeError(BYTES_ERROR_TEXT)
345
+
346
+ out = []
347
+ pos = 0
348
+ while pos < len(text):
349
+ textbreak = text.find("\n", pos) + 1
350
+ if textbreak == 0:
351
+ textbreak = len(text)
352
+ if (textbreak - pos) > config.max_decode_length:
353
+ textbreak = pos + config.max_decode_length
354
+
355
+ segment = text[pos:textbreak]
356
+ if config.unescape_html == "auto" and "<" in segment:
357
+ config = config._replace(unescape_html=False)
358
+ fixed_segment, _ = fix_and_explain(segment, config)
359
+ out.append(fixed_segment)
360
+ pos = textbreak
361
+ return "".join(out)
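The core mistake `fix_text` repairs can be reproduced with the standard library alone. A minimal sketch (plain Python, not ftfy itself): UTF-8 bytes misread as Latin-1 produce mojibake, and the reverse encode/decode pair undoes it.

```python
# "schön" written out as UTF-8 but read back as Latin-1 becomes mojibake.
good = "schön"
mojibake = good.encode("utf-8").decode("latin-1")
print(mojibake)  # "sch" + U+00C3 U+00B6 + "n", i.e. "schÃ¶n"

# Undoing the mistake is the reverse pair -- exactly the
# [('encode', 'latin-1'), ('decode', 'utf-8')] plan ftfy would report.
assert mojibake.encode("latin-1").decode("utf-8") == good
```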
+
+
+ def fix_and_explain(
+     text: str, config: TextFixerConfig | None = None, **kwargs: Any
+ ) -> ExplainedText:
+     """
+     Fix text as a single segment, returning the fixed text and an explanation
+     of what was fixed.
+
+     The explanation is a list of steps that can be applied with
+     :func:`apply_plan`, or if config.explain is False, it will be None.
+     """
+     if config is None:
+         config = TextFixerConfig()
+     if isinstance(text, bytes):
+         raise UnicodeError(BYTES_ERROR_TEXT)
+     config = _config_from_kwargs(config, kwargs)
+
+     if config.unescape_html == "auto" and "<" in text:
+         config = config._replace(unescape_html=False)
+
+     if config.explain:
+         steps: list[ExplanationStep] | None = []
+     else:
+         # If explanations aren't desired, `steps` will be None
+         steps = None
+
+     while True:
+         origtext = text
+
+         text = _try_fix("unescape_html", text, config, steps)
+
+         if config.fix_encoding:
+             if steps is None:
+                 text = fix_encoding(text)
+             else:
+                 text, encoding_steps = fix_encoding_and_explain(text, config)
+                 if encoding_steps is not None:
+                     steps.extend(encoding_steps)
+
+         for fixer in [
+             "fix_c1_controls",
+             "fix_latin_ligatures",
+             "fix_character_width",
+             "uncurl_quotes",
+             "fix_line_breaks",
+             "fix_surrogates",
+             "remove_terminal_escapes",
+             "remove_control_chars",
+         ]:
+             text = _try_fix(fixer, text, config, steps)
+
+         if config.normalization is not None:
+             fixed = unicodedata.normalize(config.normalization, text)
+             if steps is not None and fixed != text:
+                 steps.append(ExplanationStep("normalize", config.normalization))
+             text = fixed
+
+         if text == origtext:
+             return ExplainedText(text, steps)
+
+
+ def fix_encoding_and_explain(
+     text: str, config: TextFixerConfig | None = None, **kwargs: Any
+ ) -> ExplainedText:
+     """
+     Apply the steps of ftfy that detect mojibake and fix it. Returns the fixed
+     text and a list explaining what was fixed.
+
+     This includes fixing text by encoding and decoding it in different encodings,
+     as well as the subordinate fixes `restore_byte_a0`, `replace_lossy_sequences`,
+     `decode_inconsistent_utf8`, and `fix_c1_controls`.
+
+     Examples::
+
+         >>> fix_encoding_and_explain("sÃ³")
+         ExplainedText(text='só', explanation=[('encode', 'latin-1'), ('decode', 'utf-8')])
+
+         >>> result = fix_encoding_and_explain("voilÃ le travail")
+         >>> result.text
+         'voilà le travail'
+         >>> result.explanation
+         [('encode', 'latin-1'), ('transcode', 'restore_byte_a0'), ('decode', 'utf-8')]
+
+     """
+     if config is None:
+         config = TextFixerConfig()
+     if isinstance(text, bytes):
+         raise UnicodeError(BYTES_ERROR_TEXT)
+     config = _config_from_kwargs(config, kwargs)
+
+     if not config.fix_encoding:
+         # A weird trivial case: we're asked to fix the encoding, but skip
+         # fixing the encoding
+         return ExplainedText(text, [])
+
+     plan_so_far: list[ExplanationStep] = []
+     while True:
+         prevtext = text
+         text, plan = _fix_encoding_one_step_and_explain(text, config)
+         if plan is not None:
+             plan_so_far.extend(plan)
+         if text == prevtext:
+             return ExplainedText(text, plan_so_far)
+
+
+ def _fix_encoding_one_step_and_explain(
+     text: str, config: TextFixerConfig
+ ) -> ExplainedText:
+     """
+     Perform one step of fixing the encoding of text.
+     """
+     if config is None:
+         config = TextFixerConfig()
+
+     if len(text) == 0:
+         return ExplainedText(text, [])
+
+     # The first plan is to return ASCII text unchanged, as well as text
+     # that doesn't look like it contains mojibake
+     if chardata.possible_encoding(text, "ascii") or not is_bad(text):
+         return ExplainedText(text, [])
+
+     # As we go through the next step, remember the possible encodings
+     # that we encounter but don't successfully fix yet. We may need them
+     # later.
+     possible_1byte_encodings = []
+
+     # Suppose the text was supposed to be UTF-8, but it was decoded using
+     # a single-byte encoding instead. When these cases can be fixed, they
+     # are usually the correct thing to do, so try them next.
+     for encoding in chardata.CHARMAP_ENCODINGS:
+         if chardata.possible_encoding(text, encoding):
+             possible_1byte_encodings.append(encoding)
+             encoded_bytes = text.encode(encoding)
+             encode_step = ExplanationStep("encode", encoding)
+             transcode_steps = []
+
+             # Now, find out if it's UTF-8 (or close enough). Otherwise,
+             # remember the encoding for later.
+             try:
+                 decoding = "utf-8"
+                 # Check encoded_bytes for sequences that would be UTF-8,
+                 # except they have b' ' where b'\xa0' would belong.
+                 #
+                 # Don't do this in the macroman encoding, where it would match
+                 # an en dash followed by a space, leading to false positives.
+                 if (
+                     config.restore_byte_a0
+                     and encoding != "macroman"
+                     and chardata.ALTERED_UTF8_RE.search(encoded_bytes)
+                 ):
+                     replaced_bytes = fixes.restore_byte_a0(encoded_bytes)
+                     if replaced_bytes != encoded_bytes:
+                         transcode_steps.append(
+                             ExplanationStep("transcode", "restore_byte_a0")
+                         )
+                         encoded_bytes = replaced_bytes
+
+                 # Replace sequences where information has been lost
+                 if config.replace_lossy_sequences and encoding.startswith("sloppy"):
+                     replaced_bytes = fixes.replace_lossy_sequences(encoded_bytes)
+                     if replaced_bytes != encoded_bytes:
+                         transcode_steps.append(
+                             ExplanationStep("transcode", "replace_lossy_sequences")
+                         )
+                         encoded_bytes = replaced_bytes
+
+                 if 0xED in encoded_bytes or 0xC0 in encoded_bytes:
+                     decoding = "utf-8-variants"
+
+                 decode_step = ExplanationStep("decode", decoding)
+                 steps = [encode_step] + transcode_steps + [decode_step]
+                 fixed = encoded_bytes.decode(decoding)
+                 return ExplainedText(fixed, steps)
+
+             except UnicodeDecodeError:
+                 pass
+
+     # Look for a-hat-euro sequences that remain, and fix them in isolation.
+     if config.decode_inconsistent_utf8 and chardata.UTF8_DETECTOR_RE.search(text):
+         steps = [ExplanationStep("apply", "decode_inconsistent_utf8")]
+         fixed = fixes.decode_inconsistent_utf8(text)
+         if fixed != text:
+             return ExplainedText(fixed, steps)
+
+     # The next most likely case is that this is Latin-1 that was intended to
+     # be read as Windows-1252, because those two encodings in particular are
+     # easily confused.
+     if "latin-1" in possible_1byte_encodings:
+         if "windows-1252" in possible_1byte_encodings:
+             # This text is in the intersection of Latin-1 and
+             # Windows-1252, so it's probably legit.
+             return ExplainedText(text, [])
+         else:
+             # Otherwise, it means we have characters that are in Latin-1 but
+             # not in Windows-1252. Those are C1 control characters. Nobody
+             # wants those. Assume they were meant to be Windows-1252.
+             try:
+                 fixed = text.encode("latin-1").decode("windows-1252")
+                 if fixed != text:
+                     steps = [
+                         ExplanationStep("encode", "latin-1"),
+                         ExplanationStep("decode", "windows-1252"),
+                     ]
+                     return ExplainedText(fixed, steps)
+             except UnicodeDecodeError:
+                 pass
+
+     # Fix individual characters of Latin-1 with a less satisfying explanation
+     if config.fix_c1_controls and chardata.C1_CONTROL_RE.search(text):
+         steps = [ExplanationStep("transcode", "fix_c1_controls")]
+         fixed = fixes.fix_c1_controls(text)
+         return ExplainedText(fixed, steps)
+
+     # The cases that remain are mixups between two different single-byte
+     # encodings, and not the common case of Latin-1 vs. Windows-1252.
+     #
+     # With the new heuristic in 6.0, it's possible that we're closer to solving
+     # these in some cases. It would require a lot of testing and tuning, though.
+     # For now, we leave the text unchanged in these cases.
+     return ExplainedText(text, [])
+
+
+ def fix_encoding(
+     text: str, config: TextFixerConfig | None = None, **kwargs: Any
+ ) -> str:
+     """
+     Apply just the encoding-fixing steps of ftfy to this text. Returns the
+     fixed text, discarding the explanation.
+
+     >>> fix_encoding("Ã³")
+     'ó'
+     >>> fix_encoding("&ATILDE;&SUP3;")
+     '&ATILDE;&SUP3;'
+     """
+     if config is None:
+         config = TextFixerConfig(explain=False)
+     config = _config_from_kwargs(config, kwargs)
+     fixed, _explan = fix_encoding_and_explain(text, config)
+     return fixed
+
+
+ # Some alternate names for the main functions
+ ftfy = fix_text
+
+
+ def fix_text_segment(
+     text: str, config: TextFixerConfig | None = None, **kwargs: Any
+ ) -> str:
+     """
+     Fix text as a single segment, with a consistent sequence of steps that
+     are applied to fix the text. Discard the explanation.
+     """
+     if config is None:
+         config = TextFixerConfig(explain=False)
+     config = _config_from_kwargs(config, kwargs)
+     fixed, _explan = fix_and_explain(text, config)
+     return fixed
+
+
+ def fix_file(
+     input_file: TextIO | BinaryIO,
+     encoding: str | None = None,
+     config: TextFixerConfig | None = None,
+     **kwargs: Any,
+ ) -> Iterator[str]:
+     """
+     Fix text that is found in a file.
+
+     If the file is being read as Unicode text, use that. If it's being read as
+     bytes, then we hope an encoding was supplied. If not, unfortunately, we
+     have to guess what encoding it is. We'll try a few common encodings, but we
+     make no promises. See the `guess_bytes` function for how this is done.
+
+     The output is a stream of fixed lines of text.
+     """
+     if config is None:
+         config = TextFixerConfig()
+     config = _config_from_kwargs(config, kwargs)
+
+     for line in input_file:
+         if isinstance(line, bytes):
+             if encoding is None:
+                 line, encoding = guess_bytes(line)
+             else:
+                 line = line.decode(encoding)
+         if config.unescape_html == "auto" and "<" in line:
+             config = config._replace(unescape_html=False)
+
+         fixed_line, _explan = fix_and_explain(line, config)
+         yield fixed_line
+
+
+ def guess_bytes(bstring: bytes) -> tuple[str, str]:
+     """
+     NOTE: Using `guess_bytes` is not the recommended way of using ftfy. ftfy
+     is not designed to be an encoding detector.
+
+     In the unfortunate situation that you have some bytes in an unknown
+     encoding, ftfy can guess a reasonable strategy for decoding them, by trying
+     a few common encodings that can be distinguished from each other.
+
+     Unlike the rest of ftfy, this may not be accurate, and it may *create*
+     Unicode problems instead of solving them!
+
+     The encodings we try here are:
+
+     - UTF-16 with a byte order mark, because a UTF-16 byte order mark looks
+       like nothing else
+     - UTF-8, because it's the global standard, which has been used by a
+       majority of the Web since 2008
+     - "utf-8-variants", or buggy implementations of UTF-8
+     - MacRoman, because Microsoft Office thinks it's still a thing, and it
+       can be distinguished by its line breaks. (If there are no line breaks in
+       the string, though, you're out of luck.)
+     - "sloppy-windows-1252", the Latin-1-like encoding that is the most common
+       single-byte encoding.
+     """
+     if isinstance(bstring, str):
+         raise UnicodeError(
+             "This string was already decoded as Unicode. You should pass "
+             "bytes to guess_bytes, not Unicode."
+         )
+
+     if bstring.startswith(b"\xfe\xff") or bstring.startswith(b"\xff\xfe"):
+         return bstring.decode("utf-16"), "utf-16"
+
+     byteset = set(bstring)
+     try:
+         if 0xED in byteset or 0xC0 in byteset:
+             # Byte 0xed can be used to encode a range of codepoints that
+             # are UTF-16 surrogates. UTF-8 does not use UTF-16 surrogates,
+             # so when we see 0xed, it's very likely we're being asked to
+             # decode CESU-8, the variant that encodes UTF-16 surrogates
+             # instead of the original characters themselves.
+             #
+             # This will occasionally trigger on standard UTF-8, as there
+             # are some Korean characters that also use byte 0xed, but that's
+             # not harmful because standard UTF-8 characters will decode the
+             # same way in our 'utf-8-variants' codec.
+             #
+             # Byte 0xc0 is impossible because, numerically, it would only
+             # encode characters lower than U+0040. Those already have
+             # single-byte representations, and UTF-8 requires using the
+             # shortest possible representation. However, Java hides the null
+             # codepoint, U+0000, in a non-standard longer representation -- it
+             # encodes it as 0xc0 0x80 instead of 0x00, guaranteeing that 0x00
+             # will never appear in the encoded bytes.
+             #
+             # The 'utf-8-variants' decoder can handle both of these cases, as
+             # well as standard UTF-8, at the cost of a bit of speed.
+             return bstring.decode("utf-8-variants"), "utf-8-variants"
+         else:
+             return bstring.decode("utf-8"), "utf-8"
+     except UnicodeDecodeError:
+         pass
+
+     if 0x0D in byteset and 0x0A not in byteset:
+         # Files that contain CR and not LF are likely to be MacRoman.
+         return bstring.decode("macroman"), "macroman"
+
+     return bstring.decode("sloppy-windows-1252"), "sloppy-windows-1252"
+
+
+ def apply_plan(text: str, plan: list[tuple[str, str]]) -> str:
+     """
+     Apply a plan for fixing the encoding of text.
+
+     The plan is a list of tuples of the form (operation, arg).
+
+     `operation` is one of:
+
+     - `'encode'`: convert a string to bytes, using `arg` as the encoding
+     - `'decode'`: convert bytes to a string, using `arg` as the encoding
+     - `'transcode'`: convert bytes to bytes, using the function named `arg`
+     - `'apply'`: convert a string to a string, using the function named `arg`
+
+     The functions that can be applied by 'transcode' and 'apply' are
+     specifically those that appear in the dictionary named `FIXERS`. They
+     can also be imported from the `ftfy.fixes` module.
+
+     Example::
+
+         >>> mojibake = "schÃ¶n"
+         >>> text, plan = fix_and_explain(mojibake)
+         >>> apply_plan(mojibake, plan)
+         'schön'
+     """
+     obj = text
+     for operation, encoding in plan:
+         if operation == "encode":
+             obj = obj.encode(encoding)  # type: ignore
+         elif operation == "decode":
+             obj = obj.decode(encoding)  # type: ignore
+         elif operation in ("transcode", "apply"):
+             if encoding in FIXERS:
+                 obj = FIXERS[encoding](obj)
+             else:
+                 raise ValueError(f"Unknown function to apply: {encoding}")
+         else:
+             raise ValueError(f"Unknown plan step: {operation}")
+
+     return obj
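The encode/decode arm of the plan format can be exercised without ftfy's `FIXERS` table. `run_plan` below is a hypothetical, reduced stand-in for `apply_plan` that only supports the two built-in step types:

```python
def run_plan(text: str, plan: list[tuple[str, str]]) -> str:
    # Reduced apply_plan: 'transcode' and 'apply' are omitted because
    # they dispatch into ftfy's FIXERS dictionary.
    obj = text
    for operation, arg in plan:
        if operation == "encode":
            obj = obj.encode(arg)
        elif operation == "decode":
            obj = obj.decode(arg)
        else:
            raise ValueError(f"unsupported step: {operation}")
    return obj

# "sch\u00c3\u00b6n" is the mojibake rendering of "schön" ("schÃ¶n")
print(run_plan("sch\u00c3\u00b6n", [("encode", "latin-1"), ("decode", "utf-8")]))  # schön
```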
+
+
+ def explain_unicode(text: str) -> None:
+     """
+     A utility method that's useful for debugging mysterious Unicode.
+
+     It breaks down a string, showing you for each codepoint its number in
+     hexadecimal, its glyph, its category in the Unicode standard, and its name
+     in the Unicode standard.
+
+     >>> explain_unicode('(╯°□°)╯︵ ┻━┻')
+     U+0028 (       [Ps] LEFT PARENTHESIS
+     U+256F ╯       [So] BOX DRAWINGS LIGHT ARC UP AND LEFT
+     U+00B0 °       [So] DEGREE SIGN
+     U+25A1 □       [So] WHITE SQUARE
+     U+00B0 °       [So] DEGREE SIGN
+     U+0029 )       [Pe] RIGHT PARENTHESIS
+     U+256F ╯       [So] BOX DRAWINGS LIGHT ARC UP AND LEFT
+     U+FE35 ︵      [Ps] PRESENTATION FORM FOR VERTICAL LEFT PARENTHESIS
+     U+0020         [Zs] SPACE
+     U+253B ┻       [So] BOX DRAWINGS HEAVY UP AND HORIZONTAL
+     U+2501 ━       [So] BOX DRAWINGS HEAVY HORIZONTAL
+     U+253B ┻       [So] BOX DRAWINGS HEAVY UP AND HORIZONTAL
+     """
+     for char in text:
+         if char.isprintable():
+             display = char
+         else:
+             display = char.encode("unicode-escape").decode("ascii")
+         print(
+             "U+{code:04X} {display} [{category}] {name}".format(
+                 display=display_ljust(display, 7),
+                 code=ord(char),
+                 category=unicodedata.category(char),
+                 name=unicodedata.name(char, "<unknown>"),
+             )
+         )
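Everything `explain_unicode` prints comes straight from the stdlib `unicodedata` module; for a single codepoint, the breakdown looks like this (a sketch, not the function above):

```python
import unicodedata

# The same per-codepoint data explain_unicode formats, for one character:
ch = "ツ"  # the katakana from the shrug doctest in fix_text
print(f"U+{ord(ch):04X} {ch} [{unicodedata.category(ch)}] {unicodedata.name(ch)}")
# U+30C4 ツ [Lo] KATAKANA LETTER TU
```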
minigpt2/lib/python3.10/site-packages/ftfy/chardata.py ADDED
@@ -0,0 +1,691 @@
+ """
+ This gives other modules access to the gritty details about characters and the
+ encodings that use them.
+ """
+
+ from __future__ import annotations
+
+ import html
+ import itertools
+ import re
+ import unicodedata
+
+ # These are the encodings we will try to fix in ftfy, in the
+ # order that they should be tried.
+ CHARMAP_ENCODINGS = [
+     "latin-1",
+     "sloppy-windows-1252",
+     "sloppy-windows-1251",
+     "sloppy-windows-1250",
+     "sloppy-windows-1253",
+     "sloppy-windows-1254",
+     "sloppy-windows-1257",
+     "iso-8859-2",
+     "macroman",
+     "cp437",
+ ]
+
+ SINGLE_QUOTE_RE = re.compile("[\u02bc\u2018-\u201b]")
+ DOUBLE_QUOTE_RE = re.compile("[\u201c-\u201f]")
+
+
+ def _build_regexes() -> dict[str, re.Pattern[str]]:
+     """
+     ENCODING_REGEXES contain reasonably fast ways to detect if we
+     could represent a given string in a given encoding. The simplest one is
+     the 'ascii' detector, which of course just determines if all characters
+     are between U+0000 and U+007F.
+     """
+     # Define a regex that matches ASCII text.
+     encoding_regexes = {"ascii": re.compile("^[\x00-\x7f]*$")}
+
+     for encoding in CHARMAP_ENCODINGS:
+         # Make a sequence of characters that bytes \x80 to \xFF decode to
+         # in each encoding, as well as byte \x1A, which is used to represent
+         # the replacement character � in the sloppy-* encodings.
+         byte_range = bytes(list(range(0x80, 0x100)) + [0x1A])
+         charlist = byte_range.decode(encoding)
+
+         # The rest of the ASCII bytes -- bytes \x00 to \x19 and \x1B
+         # to \x7F -- will decode as those ASCII characters in any encoding we
+         # support, so we can just include them as ranges. This also lets us
+         # not worry about escaping regex special characters, because all of
+         # them are in the \x1B to \x7F range.
+         regex = f"^[\x00-\x19\x1b-\x7f{charlist}]*$"
+         encoding_regexes[encoding] = re.compile(regex)
+     return encoding_regexes
+
+
+ ENCODING_REGEXES = _build_regexes()
+
+
+ def _build_html_entities() -> dict[str, str]:
+     entities = {}
+     # Create a dictionary based on the built-in HTML5 entity dictionary.
+     # Add a limited set of HTML entities that we'll also decode if they've
+     # been case-folded to uppercase, such as decoding &NTILDE; as "Ñ".
+     for name, char in html.entities.html5.items():  # type: ignore
+         if name.endswith(";"):
+             entities["&" + name] = char
+
+             # Restrict the set of characters we can attempt to decode if their
+             # name has been uppercased. If we tried to handle all entity names,
+             # the results would be ambiguous.
+             if name == name.lower():
+                 name_upper = name.upper()
+                 entity_upper = "&" + name_upper
+                 if html.unescape(entity_upper) == entity_upper:
+                     entities[entity_upper] = char.upper()
+     return entities
+
+
+ HTML_ENTITY_RE = re.compile(r"&#?[0-9A-Za-z]{1,24};")
+ HTML_ENTITIES = _build_html_entities()
+
+
+ def possible_encoding(text: str, encoding: str) -> bool:
+     """
+     Given text and a single-byte encoding, check whether that text could have
+     been decoded from that single-byte encoding.
+
+     In other words, check whether it can be encoded in that encoding, possibly
+     sloppily.
+     """
+     return bool(ENCODING_REGEXES[encoding].match(text))
+
+
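The detector construction in `_build_regexes` can be sketched with a stdlib codec alone. This builds the analogous detector for plain latin-1 only (the sloppy-* codecs are ftfy-specific and not assumed here):

```python
import re

# A character class containing every character that latin-1 bytes can
# decode to: ASCII plus the characters for bytes 0x80-0xFF.
byte_range = bytes(range(0x80, 0x100))
charlist = byte_range.decode("latin-1")
latin1_re = re.compile(f"^[\x00-\x7f{charlist}]*$")

print(bool(latin1_re.match("café")))    # True: representable in latin-1
print(bool(latin1_re.match("\u2212")))  # False: MINUS SIGN is not
```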
+ def _build_control_char_mapping() -> dict[int, None]:
+     """
+     Build a translate mapping that strips likely-unintended control characters.
+     See :func:`ftfy.fixes.remove_control_chars` for a description of these
+     codepoint ranges and why they should be removed.
+     """
+     control_chars: dict[int, None] = {}
+
+     for i in itertools.chain(
+         range(0x00, 0x09),
+         [0x0B],
+         range(0x0E, 0x20),
+         [0x7F],
+         range(0x206A, 0x2070),
+         [0xFEFF],
+         range(0xFFF9, 0xFFFD),
+     ):
+         control_chars[i] = None
+
+     return control_chars
+
+
+ CONTROL_CHARS = _build_control_char_mapping()
+
+
+ # Recognize UTF-8 sequences that would be valid if it weren't for a b'\xa0'
+ # that some Windows-1252 program converted to a plain space.
+ #
+ # The smaller values are included on a case-by-case basis, because we don't want
+ # to decode likely input sequences to unlikely characters. These are the ones
+ # that *do* form likely characters before 0xa0:
+ #
+ #   0xc2 -> U+A0 NO-BREAK SPACE
+ #   0xc3 -> U+E0 LATIN SMALL LETTER A WITH GRAVE
+ #   0xc5 -> U+160 LATIN CAPITAL LETTER S WITH CARON
+ #   0xce -> U+3A0 GREEK CAPITAL LETTER PI
+ #   0xd0 -> U+420 CYRILLIC CAPITAL LETTER ER
+ #   0xd9 -> U+660 ARABIC-INDIC DIGIT ZERO
+ #
+ # In three-character sequences, we exclude some lead bytes in some cases.
+ #
+ # When the lead byte is immediately followed by 0xA0, we shouldn't accept
+ # a space there, because it leads to some less-likely character ranges:
+ #
+ #   0xe0 -> Samaritan script
+ #   0xe1 -> Mongolian script (corresponds to Latin-1 'á' which is too common)
+ #
+ # We accept 0xe2 and 0xe3, which cover many scripts. Bytes 0xe4 and
+ # higher point mostly to CJK characters, which we generally don't want to
+ # decode near Latin lowercase letters.
+ #
+ # In four-character sequences, the lead byte must be F0, because that accounts
+ # for almost all of the usage of high-numbered codepoints (tag characters whose
+ # UTF-8 starts with the byte F3 are only used in some rare new emoji sequences).
+ #
+ # This is meant to be applied to encodings of text that tests true for `is_bad`.
+ # Any of these could represent characters that legitimately appear surrounded by
+ # spaces, particularly U+C5 (Å), which is a word in multiple languages!
+ #
+ # We should consider checking for b'\x85' being converted to ... in the future.
+ # I've seen it once, but the text still wasn't recoverable.
+
+ ALTERED_UTF8_RE = re.compile(
+     b"[\xc2\xc3\xc5\xce\xd0\xd9][ ]"
+     b"|[\xe2\xe3][ ][\x80-\x84\x86-\x9f\xa1-\xbf]"
+     b"|[\xe0-\xe3][\x80-\x84\x86-\x9f\xa1-\xbf][ ]"
+     b"|[\xf0][ ][\x80-\xbf][\x80-\xbf]"
+     b"|[\xf0][\x80-\xbf][ ][\x80-\xbf]"
+     b"|[\xf0][\x80-\xbf][\x80-\xbf][ ]"
+ )
+
+
+ # This expression matches UTF-8 and CESU-8 sequences where some of the
+ # continuation bytes have been lost. The byte 0x1a (sometimes written as ^Z) is
+ # used within ftfy to represent a byte that produced the replacement character
+ # \ufffd. We don't know which byte it was, but we can at least decode the UTF-8
+ # sequence as \ufffd instead of failing to re-decode it at all.
+ #
+ # In some cases, we allow the ASCII '?' in place of \ufffd, but at most once per
+ # sequence.
+ LOSSY_UTF8_RE = re.compile(
+     b"[\xc2-\xdf][\x1a]"
+     b"|[\xc2-\xc3][?]"
+     b"|\xed[\xa0-\xaf][\x1a?]\xed[\xb0-\xbf][\x1a?\x80-\xbf]"
+     b"|\xed[\xa0-\xaf][\x1a?\x80-\xbf]\xed[\xb0-\xbf][\x1a?]"
+     b"|[\xe0-\xef][\x1a?][\x1a\x80-\xbf]"
+     b"|[\xe0-\xef][\x1a\x80-\xbf][\x1a?]"
+     b"|[\xf0-\xf4][\x1a?][\x1a\x80-\xbf][\x1a\x80-\xbf]"
+     b"|[\xf0-\xf4][\x1a\x80-\xbf][\x1a?][\x1a\x80-\xbf]"
+     b"|[\xf0-\xf4][\x1a\x80-\xbf][\x1a\x80-\xbf][\x1a?]"
+     b"|\x1a"
+ )
+
+
+ # This regex matches C1 control characters, which occupy some of the positions
+ # in the Latin-1 character map that Windows assigns to other characters instead.
+ C1_CONTROL_RE = re.compile(r"[\x80-\x9f]")
+
+
+ # A translate mapping that breaks ligatures made of Latin letters. While
+ # ligatures may be important to the representation of other languages, in Latin
+ # letters they tend to represent a copy/paste error. It omits ligatures such
+ # as æ that are frequently used intentionally.
+ #
+ # This list additionally includes some Latin digraphs that represent two
+ # characters for legacy encoding reasons, not for typographical reasons.
+ #
+ # Ligatures and digraphs may also be separated by NFKC normalization, but that
+ # is sometimes more normalization than you want.
+
+ LIGATURES = {
+     ord("Ĳ"): "IJ",  # Dutch ligatures
+     ord("ĳ"): "ij",
+     ord("ŉ"): "ʼn",  # Afrikaans digraph meant to avoid auto-curled quote
+     ord("Ǳ"): "DZ",  # Serbian/Croatian digraphs for Cyrillic conversion
+     ord("ǲ"): "Dz",
+     ord("ǳ"): "dz",
+     ord("Ǆ"): "DŽ",
+     ord("ǅ"): "Dž",
+     ord("ǆ"): "dž",
+     ord("Ǉ"): "LJ",
+     ord("ǈ"): "Lj",
+     ord("ǉ"): "lj",
+     ord("Ǌ"): "NJ",
+     ord("ǋ"): "Nj",
+     ord("ǌ"): "nj",
+     ord("ﬀ"): "ff",  # Latin typographical ligatures
+     ord("ﬁ"): "fi",
+     ord("ﬂ"): "fl",
+     ord("ﬃ"): "ffi",
+     ord("ﬄ"): "ffl",
+     ord("ﬅ"): "ſt",
+     ord("ﬆ"): "st",
+ }
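A table like `LIGATURES` is consumed by `str.translate`; a two-entry sketch with the same key/value shape:

```python
# str.translate maps each ligature codepoint to its multi-letter replacement.
ligatures = {ord("ﬁ"): "fi", ord("ﬂ"): "fl"}
print("ﬁle ﬂow".translate(ligatures))  # file flow
```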
+
+
+ def _build_width_map() -> dict[int, str]:
+     """
+     Build a translate mapping that replaces halfwidth and fullwidth forms
+     with their standard-width forms.
+     """
+     # Though it's not listed as a fullwidth character, we'll want to convert
+     # U+3000 IDEOGRAPHIC SPACE to U+20 SPACE on the same principle, so start
+     # with that in the dictionary.
+     width_map = {0x3000: " "}
+     for i in range(0xFF01, 0xFFF0):
+         char = chr(i)
+         alternate = unicodedata.normalize("NFKC", char)
+         if alternate != char:
+             width_map[i] = alternate
+     return width_map
+
+
+ WIDTH_MAP = _build_width_map()
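`_build_width_map` precomputes what NFKC normalization already does for the halfwidth/fullwidth block; the stdlib call shows the same folding:

```python
import unicodedata

# Fullwidth Latin folds to ASCII; halfwidth katakana folds to full katakana.
print(unicodedata.normalize("NFKC", "Ｈｅｌｌｏ！"))  # Hello!
print(unicodedata.normalize("NFKC", "ﾃｽﾄ"))  # テスト
```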
+
+
+ # Character classes that help us pinpoint embedded mojibake. These can
+ # include common characters, because we'll also check them for 'badness'.
+ #
+ # Though they go on for many lines, the members of this dictionary are
+ # single concatenated strings.
+ #
+ # This code is generated using scripts/char_data_table.py.
+ UTF8_CLUES: dict[str, str] = {
+     # Letters that decode to 0xC2 - 0xDF in a Latin-1-like encoding
+     "utf8_first_of_2": (
+         "\N{LATIN CAPITAL LETTER A WITH BREVE}"  # windows-1250:C3
+         "\N{LATIN CAPITAL LETTER A WITH CIRCUMFLEX}"  # latin-1:C2
+         "\N{LATIN CAPITAL LETTER A WITH DIAERESIS}"  # latin-1:C4
+         "\N{LATIN CAPITAL LETTER A WITH MACRON}"  # windows-1257:C2
+         "\N{LATIN CAPITAL LETTER A WITH RING ABOVE}"  # latin-1:C5
+         "\N{LATIN CAPITAL LETTER A WITH TILDE}"  # latin-1:C3
+         "\N{LATIN CAPITAL LETTER AE}"  # latin-1:C6
+         "\N{LATIN CAPITAL LETTER C WITH ACUTE}"  # windows-1250:C6
+         "\N{LATIN CAPITAL LETTER C WITH CARON}"  # windows-1250:C8
+         "\N{LATIN CAPITAL LETTER C WITH CEDILLA}"  # latin-1:C7
+         "\N{LATIN CAPITAL LETTER D WITH CARON}"  # windows-1250:CF
+         "\N{LATIN CAPITAL LETTER D WITH STROKE}"  # windows-1250:D0
+         "\N{LATIN CAPITAL LETTER E WITH ACUTE}"  # latin-1:C9
+         "\N{LATIN CAPITAL LETTER E WITH CARON}"  # windows-1250:CC
+         "\N{LATIN CAPITAL LETTER E WITH CIRCUMFLEX}"  # latin-1:CA
+         "\N{LATIN CAPITAL LETTER E WITH DIAERESIS}"  # latin-1:CB
+         "\N{LATIN CAPITAL LETTER E WITH DOT ABOVE}"  # windows-1257:CB
+         "\N{LATIN CAPITAL LETTER E WITH GRAVE}"  # latin-1:C8
+         "\N{LATIN CAPITAL LETTER E WITH MACRON}"  # windows-1257:C7
+         "\N{LATIN CAPITAL LETTER E WITH OGONEK}"  # windows-1250:CA
+         "\N{LATIN CAPITAL LETTER ETH}"  # latin-1:D0
+         "\N{LATIN CAPITAL LETTER G WITH BREVE}"  # windows-1254:D0
+         "\N{LATIN CAPITAL LETTER G WITH CEDILLA}"  # windows-1257:CC
+         "\N{LATIN CAPITAL LETTER I WITH ACUTE}"  # latin-1:CD
+         "\N{LATIN CAPITAL LETTER I WITH CIRCUMFLEX}"  # latin-1:CE
+         "\N{LATIN CAPITAL LETTER I WITH DIAERESIS}"  # latin-1:CF
+         "\N{LATIN CAPITAL LETTER I WITH DOT ABOVE}"  # windows-1254:DD
+         "\N{LATIN CAPITAL LETTER I WITH GRAVE}"  # latin-1:CC
+         "\N{LATIN CAPITAL LETTER I WITH MACRON}"  # windows-1257:CE
+         "\N{LATIN CAPITAL LETTER K WITH CEDILLA}"  # windows-1257:CD
+         "\N{LATIN CAPITAL LETTER L WITH ACUTE}"  # windows-1250:C5
+         "\N{LATIN CAPITAL LETTER L WITH CEDILLA}"  # windows-1257:CF
+         "\N{LATIN CAPITAL LETTER L WITH STROKE}"  # windows-1257:D9
+         "\N{LATIN CAPITAL LETTER N WITH ACUTE}"  # windows-1250:D1
+         "\N{LATIN CAPITAL LETTER N WITH CARON}"  # windows-1250:D2
+         "\N{LATIN CAPITAL LETTER N WITH CEDILLA}"  # windows-1257:D2
+         "\N{LATIN CAPITAL LETTER N WITH TILDE}"  # latin-1:D1
+         "\N{LATIN CAPITAL LETTER O WITH ACUTE}"  # latin-1:D3
+         "\N{LATIN CAPITAL LETTER O WITH CIRCUMFLEX}"  # latin-1:D4
+         "\N{LATIN CAPITAL LETTER O WITH DIAERESIS}"  # latin-1:D6
+         "\N{LATIN CAPITAL LETTER O WITH DOUBLE ACUTE}"  # windows-1250:D5
+         "\N{LATIN CAPITAL LETTER O WITH GRAVE}"  # latin-1:D2
+         "\N{LATIN CAPITAL LETTER O WITH MACRON}"  # windows-1257:D4
+         "\N{LATIN CAPITAL LETTER O WITH STROKE}"  # latin-1:D8
+         "\N{LATIN CAPITAL LETTER O WITH TILDE}"  # latin-1:D5
+         "\N{LATIN CAPITAL LETTER R WITH CARON}"  # windows-1250:D8
+         "\N{LATIN CAPITAL LETTER S WITH ACUTE}"  # windows-1257:DA
+         "\N{LATIN CAPITAL LETTER S WITH CARON}"  # windows-1257:D0
+         "\N{LATIN CAPITAL LETTER S WITH CEDILLA}"  # windows-1254:DE
+         "\N{LATIN CAPITAL LETTER T WITH CEDILLA}"  # windows-1250:DE
313
+ "\N{LATIN CAPITAL LETTER THORN}" # latin-1:DE
314
+ "\N{LATIN CAPITAL LETTER U WITH ACUTE}" # latin-1:DA
315
+ "\N{LATIN CAPITAL LETTER U WITH CIRCUMFLEX}" # latin-1:DB
316
+ "\N{LATIN CAPITAL LETTER U WITH DIAERESIS}" # latin-1:DC
317
+ "\N{LATIN CAPITAL LETTER U WITH DOUBLE ACUTE}" # windows-1250:DB
318
+ "\N{LATIN CAPITAL LETTER U WITH GRAVE}" # latin-1:D9
319
+ "\N{LATIN CAPITAL LETTER U WITH MACRON}" # windows-1257:DB
320
+ "\N{LATIN CAPITAL LETTER U WITH OGONEK}" # windows-1257:D8
321
+ "\N{LATIN CAPITAL LETTER U WITH RING ABOVE}" # windows-1250:D9
322
+ "\N{LATIN CAPITAL LETTER Y WITH ACUTE}" # latin-1:DD
323
+ "\N{LATIN CAPITAL LETTER Z WITH ACUTE}" # windows-1257:CA
324
+ "\N{LATIN CAPITAL LETTER Z WITH CARON}" # windows-1257:DE
325
+ "\N{LATIN CAPITAL LETTER Z WITH DOT ABOVE}" # windows-1257:DD
326
+ "\N{LATIN SMALL LETTER SHARP S}" # latin-1:DF
327
+ "\N{MULTIPLICATION SIGN}" # latin-1:D7
328
+ "\N{GREEK CAPITAL LETTER BETA}" # windows-1253:C2
329
+ "\N{GREEK CAPITAL LETTER GAMMA}" # windows-1253:C3
330
+ "\N{GREEK CAPITAL LETTER DELTA}" # windows-1253:C4
331
+ "\N{GREEK CAPITAL LETTER EPSILON}" # windows-1253:C5
332
+ "\N{GREEK CAPITAL LETTER ZETA}" # windows-1253:C6
333
+ "\N{GREEK CAPITAL LETTER ETA}" # windows-1253:C7
334
+ "\N{GREEK CAPITAL LETTER THETA}" # windows-1253:C8
335
+ "\N{GREEK CAPITAL LETTER IOTA}" # windows-1253:C9
336
+ "\N{GREEK CAPITAL LETTER KAPPA}" # windows-1253:CA
337
+ "\N{GREEK CAPITAL LETTER LAMDA}" # windows-1253:CB
338
+ "\N{GREEK CAPITAL LETTER MU}" # windows-1253:CC
339
+ "\N{GREEK CAPITAL LETTER NU}" # windows-1253:CD
340
+ "\N{GREEK CAPITAL LETTER XI}" # windows-1253:CE
341
+ "\N{GREEK CAPITAL LETTER OMICRON}" # windows-1253:CF
342
+ "\N{GREEK CAPITAL LETTER PI}" # windows-1253:D0
343
+ "\N{GREEK CAPITAL LETTER RHO}" # windows-1253:D1
344
+ "\N{GREEK CAPITAL LETTER SIGMA}" # windows-1253:D3
345
+ "\N{GREEK CAPITAL LETTER TAU}" # windows-1253:D4
346
+ "\N{GREEK CAPITAL LETTER UPSILON}" # windows-1253:D5
347
+ "\N{GREEK CAPITAL LETTER PHI}" # windows-1253:D6
348
+ "\N{GREEK CAPITAL LETTER CHI}" # windows-1253:D7
349
+ "\N{GREEK CAPITAL LETTER PSI}" # windows-1253:D8
350
+ "\N{GREEK CAPITAL LETTER OMEGA}" # windows-1253:D9
351
+ "\N{GREEK CAPITAL LETTER IOTA WITH DIALYTIKA}" # windows-1253:DA
352
+ "\N{GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA}" # windows-1253:DB
353
+ "\N{GREEK SMALL LETTER ALPHA WITH TONOS}" # windows-1253:DC
354
+ "\N{GREEK SMALL LETTER EPSILON WITH TONOS}" # windows-1253:DD
355
+ "\N{GREEK SMALL LETTER ETA WITH TONOS}" # windows-1253:DE
356
+ "\N{GREEK SMALL LETTER IOTA WITH TONOS}" # windows-1253:DF
357
+ "\N{CYRILLIC CAPITAL LETTER VE}" # windows-1251:C2
358
+ "\N{CYRILLIC CAPITAL LETTER GHE}" # windows-1251:C3
359
+ "\N{CYRILLIC CAPITAL LETTER DE}" # windows-1251:C4
360
+ "\N{CYRILLIC CAPITAL LETTER IE}" # windows-1251:C5
361
+ "\N{CYRILLIC CAPITAL LETTER ZHE}" # windows-1251:C6
362
+ "\N{CYRILLIC CAPITAL LETTER ZE}" # windows-1251:C7
363
+ "\N{CYRILLIC CAPITAL LETTER I}" # windows-1251:C8
364
+ "\N{CYRILLIC CAPITAL LETTER SHORT I}" # windows-1251:C9
365
+ "\N{CYRILLIC CAPITAL LETTER KA}" # windows-1251:CA
366
+ "\N{CYRILLIC CAPITAL LETTER EL}" # windows-1251:CB
367
+ "\N{CYRILLIC CAPITAL LETTER EM}" # windows-1251:CC
368
+ "\N{CYRILLIC CAPITAL LETTER EN}" # windows-1251:CD
369
+ "\N{CYRILLIC CAPITAL LETTER O}" # windows-1251:CE
370
+ "\N{CYRILLIC CAPITAL LETTER PE}" # windows-1251:CF
371
+ "\N{CYRILLIC CAPITAL LETTER ER}" # windows-1251:D0
372
+ "\N{CYRILLIC CAPITAL LETTER ES}" # windows-1251:D1
373
+ "\N{CYRILLIC CAPITAL LETTER TE}" # windows-1251:D2
374
+ "\N{CYRILLIC CAPITAL LETTER U}" # windows-1251:D3
375
+ "\N{CYRILLIC CAPITAL LETTER EF}" # windows-1251:D4
376
+ "\N{CYRILLIC CAPITAL LETTER HA}" # windows-1251:D5
377
+ "\N{CYRILLIC CAPITAL LETTER TSE}" # windows-1251:D6
378
+ "\N{CYRILLIC CAPITAL LETTER CHE}" # windows-1251:D7
379
+ "\N{CYRILLIC CAPITAL LETTER SHA}" # windows-1251:D8
380
+ "\N{CYRILLIC CAPITAL LETTER SHCHA}" # windows-1251:D9
381
+ "\N{CYRILLIC CAPITAL LETTER HARD SIGN}" # windows-1251:DA
382
+ "\N{CYRILLIC CAPITAL LETTER YERU}" # windows-1251:DB
383
+ "\N{CYRILLIC CAPITAL LETTER SOFT SIGN}" # windows-1251:DC
384
+ "\N{CYRILLIC CAPITAL LETTER E}" # windows-1251:DD
385
+ "\N{CYRILLIC CAPITAL LETTER YU}" # windows-1251:DE
386
+ "\N{CYRILLIC CAPITAL LETTER YA}" # windows-1251:DF
387
+ ),
388
+ # Letters that decode to 0xE0 - 0xEF in a Latin-1-like encoding
389
+ "utf8_first_of_3": (
390
+ "\N{LATIN SMALL LETTER A WITH ACUTE}" # latin-1:E1
391
+ "\N{LATIN SMALL LETTER A WITH BREVE}" # windows-1250:E3
392
+ "\N{LATIN SMALL LETTER A WITH CIRCUMFLEX}" # latin-1:E2
393
+ "\N{LATIN SMALL LETTER A WITH DIAERESIS}" # latin-1:E4
394
+ "\N{LATIN SMALL LETTER A WITH GRAVE}" # latin-1:E0
395
+ "\N{LATIN SMALL LETTER A WITH MACRON}" # windows-1257:E2
396
+ "\N{LATIN SMALL LETTER A WITH OGONEK}" # windows-1257:E0
397
+ "\N{LATIN SMALL LETTER A WITH RING ABOVE}" # latin-1:E5
398
+ "\N{LATIN SMALL LETTER A WITH TILDE}" # latin-1:E3
399
+ "\N{LATIN SMALL LETTER AE}" # latin-1:E6
400
+ "\N{LATIN SMALL LETTER C WITH ACUTE}" # windows-1250:E6
401
+ "\N{LATIN SMALL LETTER C WITH CARON}" # windows-1250:E8
402
+ "\N{LATIN SMALL LETTER C WITH CEDILLA}" # latin-1:E7
403
+ "\N{LATIN SMALL LETTER D WITH CARON}" # windows-1250:EF
404
+ "\N{LATIN SMALL LETTER E WITH ACUTE}" # latin-1:E9
405
+ "\N{LATIN SMALL LETTER E WITH CARON}" # windows-1250:EC
406
+ "\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}" # latin-1:EA
407
+ "\N{LATIN SMALL LETTER E WITH DIAERESIS}" # latin-1:EB
408
+ "\N{LATIN SMALL LETTER E WITH DOT ABOVE}" # windows-1257:EB
409
+ "\N{LATIN SMALL LETTER E WITH GRAVE}" # latin-1:E8
410
+ "\N{LATIN SMALL LETTER E WITH MACRON}" # windows-1257:E7
411
+ "\N{LATIN SMALL LETTER E WITH OGONEK}" # windows-1250:EA
412
+ "\N{LATIN SMALL LETTER E WITH OGONEK}" # windows-1250:EA
413
+ "\N{LATIN SMALL LETTER G WITH CEDILLA}" # windows-1257:EC
414
+ "\N{LATIN SMALL LETTER I WITH ACUTE}" # latin-1:ED
415
+ "\N{LATIN SMALL LETTER I WITH CIRCUMFLEX}" # latin-1:EE
416
+ "\N{LATIN SMALL LETTER I WITH DIAERESIS}" # latin-1:EF
417
+ "\N{LATIN SMALL LETTER I WITH GRAVE}" # latin-1:EC
418
+ "\N{LATIN SMALL LETTER I WITH MACRON}" # windows-1257:EE
419
+ "\N{LATIN SMALL LETTER I WITH OGONEK}" # windows-1257:E1
420
+ "\N{LATIN SMALL LETTER K WITH CEDILLA}" # windows-1257:ED
421
+ "\N{LATIN SMALL LETTER L WITH ACUTE}" # windows-1250:E5
422
+ "\N{LATIN SMALL LETTER L WITH CEDILLA}" # windows-1257:EF
423
+ "\N{LATIN SMALL LETTER R WITH ACUTE}" # windows-1250:E0
424
+ "\N{LATIN SMALL LETTER Z WITH ACUTE}" # windows-1257:EA
425
+ "\N{GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS}" # windows-1253:E0
426
+ "\N{GREEK SMALL LETTER ALPHA}" # windows-1253:E1
427
+ "\N{GREEK SMALL LETTER BETA}" # windows-1253:E2
428
+ "\N{GREEK SMALL LETTER GAMMA}" # windows-1253:E3
429
+ "\N{GREEK SMALL LETTER DELTA}" # windows-1253:E4
430
+ "\N{GREEK SMALL LETTER EPSILON}" # windows-1253:E5
431
+ "\N{GREEK SMALL LETTER ZETA}" # windows-1253:E6
432
+ "\N{GREEK SMALL LETTER ETA}" # windows-1253:E7
433
+ "\N{GREEK SMALL LETTER THETA}" # windows-1253:E8
434
+ "\N{GREEK SMALL LETTER IOTA}" # windows-1253:E9
435
+ "\N{GREEK SMALL LETTER KAPPA}" # windows-1253:EA
436
+ "\N{GREEK SMALL LETTER LAMDA}" # windows-1253:EB
437
+ "\N{GREEK SMALL LETTER MU}" # windows-1253:EC
438
+ "\N{GREEK SMALL LETTER NU}" # windows-1253:ED
439
+ "\N{GREEK SMALL LETTER XI}" # windows-1253:EE
440
+ "\N{GREEK SMALL LETTER OMICRON}" # windows-1253:EF
441
+ "\N{CYRILLIC SMALL LETTER A}" # windows-1251:E0
442
+ "\N{CYRILLIC SMALL LETTER BE}" # windows-1251:E1
443
+ "\N{CYRILLIC SMALL LETTER VE}" # windows-1251:E2
444
+ "\N{CYRILLIC SMALL LETTER GHE}" # windows-1251:E3
445
+ "\N{CYRILLIC SMALL LETTER DE}" # windows-1251:E4
446
+ "\N{CYRILLIC SMALL LETTER IE}" # windows-1251:E5
447
+ "\N{CYRILLIC SMALL LETTER ZHE}" # windows-1251:E6
448
+ "\N{CYRILLIC SMALL LETTER ZE}" # windows-1251:E7
449
+ "\N{CYRILLIC SMALL LETTER I}" # windows-1251:E8
450
+ "\N{CYRILLIC SMALL LETTER SHORT I}" # windows-1251:E9
451
+ "\N{CYRILLIC SMALL LETTER KA}" # windows-1251:EA
452
+ "\N{CYRILLIC SMALL LETTER EL}" # windows-1251:EB
453
+ "\N{CYRILLIC SMALL LETTER EM}" # windows-1251:EC
454
+ "\N{CYRILLIC SMALL LETTER EN}" # windows-1251:ED
455
+ "\N{CYRILLIC SMALL LETTER O}" # windows-1251:EE
456
+ "\N{CYRILLIC SMALL LETTER PE}" # windows-1251:EF
457
+ ),
458
+ # Letters that decode to 0xF0 or 0xF3 in a Latin-1-like encoding.
459
+ # (Other leading bytes correspond only to unassigned codepoints)
460
+ "utf8_first_of_4": (
461
+ "\N{LATIN SMALL LETTER D WITH STROKE}" # windows-1250:F0
462
+ "\N{LATIN SMALL LETTER ETH}" # latin-1:F0
463
+ "\N{LATIN SMALL LETTER G WITH BREVE}" # windows-1254:F0
464
+ "\N{LATIN SMALL LETTER O WITH ACUTE}" # latin-1:F3
465
+ "\N{LATIN SMALL LETTER S WITH CARON}" # windows-1257:F0
466
+ "\N{GREEK SMALL LETTER PI}" # windows-1253:F0
467
+ "\N{GREEK SMALL LETTER SIGMA}" # windows-1253:F3
468
+ "\N{CYRILLIC SMALL LETTER ER}" # windows-1251:F0
469
+ "\N{CYRILLIC SMALL LETTER U}" # windows-1251:F3
470
+ ),
471
+ # Letters that decode to 0x80 - 0xBF in a Latin-1-like encoding,
472
+ # including a space standing in for 0xA0
473
+ "utf8_continuation": (
474
+ "\x80-\xbf"
475
+ "\N{SPACE}" # modification of latin-1:A0, NO-BREAK SPACE
476
+ "\N{LATIN CAPITAL LETTER A WITH OGONEK}" # windows-1250:A5
477
+ "\N{LATIN CAPITAL LETTER AE}" # windows-1257:AF
478
+ "\N{LATIN CAPITAL LETTER L WITH CARON}" # windows-1250:BC
479
+ "\N{LATIN CAPITAL LETTER L WITH STROKE}" # windows-1250:A3
480
+ "\N{LATIN CAPITAL LETTER O WITH STROKE}" # windows-1257:A8
481
+ "\N{LATIN CAPITAL LETTER R WITH CEDILLA}" # windows-1257:AA
482
+ "\N{LATIN CAPITAL LETTER S WITH ACUTE}" # windows-1250:8C
483
+ "\N{LATIN CAPITAL LETTER S WITH CARON}" # windows-1252:8A
484
+ "\N{LATIN CAPITAL LETTER S WITH CEDILLA}" # windows-1250:AA
485
+ "\N{LATIN CAPITAL LETTER T WITH CARON}" # windows-1250:8D
486
+ "\N{LATIN CAPITAL LETTER Y WITH DIAERESIS}" # windows-1252:9F
487
+ "\N{LATIN CAPITAL LETTER Z WITH ACUTE}" # windows-1250:8F
488
+ "\N{LATIN CAPITAL LETTER Z WITH CARON}" # windows-1252:8E
489
+ "\N{LATIN CAPITAL LETTER Z WITH DOT ABOVE}" # windows-1250:AF
490
+ "\N{LATIN CAPITAL LIGATURE OE}" # windows-1252:8C
491
+ "\N{LATIN SMALL LETTER A WITH OGONEK}" # windows-1250:B9
492
+ "\N{LATIN SMALL LETTER AE}" # windows-1257:BF
493
+ "\N{LATIN SMALL LETTER F WITH HOOK}" # windows-1252:83
494
+ "\N{LATIN SMALL LETTER L WITH CARON}" # windows-1250:BE
495
+ "\N{LATIN SMALL LETTER L WITH STROKE}" # windows-1250:B3
496
+ "\N{LATIN SMALL LETTER O WITH STROKE}" # windows-1257:B8
497
+ "\N{LATIN SMALL LETTER R WITH CEDILLA}" # windows-1257:BA
498
+ "\N{LATIN SMALL LETTER S WITH ACUTE}" # windows-1250:9C
499
+ "\N{LATIN SMALL LETTER S WITH CARON}" # windows-1252:9A
500
+ "\N{LATIN SMALL LETTER S WITH CEDILLA}" # windows-1250:BA
501
+ "\N{LATIN SMALL LETTER T WITH CARON}" # windows-1250:9D
502
+ "\N{LATIN SMALL LETTER Z WITH ACUTE}" # windows-1250:9F
503
+ "\N{LATIN SMALL LETTER Z WITH CARON}" # windows-1252:9E
504
+ "\N{LATIN SMALL LETTER Z WITH DOT ABOVE}" # windows-1250:BF
505
+ "\N{LATIN SMALL LIGATURE OE}" # windows-1252:9C
506
+ "\N{MODIFIER LETTER CIRCUMFLEX ACCENT}" # windows-1252:88
507
+ "\N{CARON}" # windows-1250:A1
508
+ "\N{BREVE}" # windows-1250:A2
509
+ "\N{OGONEK}" # windows-1250:B2
510
+ "\N{SMALL TILDE}" # windows-1252:98
511
+ "\N{DOUBLE ACUTE ACCENT}" # windows-1250:BD
512
+ "\N{GREEK TONOS}" # windows-1253:B4
513
+ "\N{GREEK DIALYTIKA TONOS}" # windows-1253:A1
514
+ "\N{GREEK CAPITAL LETTER ALPHA WITH TONOS}" # windows-1253:A2
515
+ "\N{GREEK CAPITAL LETTER EPSILON WITH TONOS}" # windows-1253:B8
516
+ "\N{GREEK CAPITAL LETTER ETA WITH TONOS}" # windows-1253:B9
517
+ "\N{GREEK CAPITAL LETTER IOTA WITH TONOS}" # windows-1253:BA
518
+ "\N{GREEK CAPITAL LETTER OMICRON WITH TONOS}" # windows-1253:BC
519
+ "\N{GREEK CAPITAL LETTER UPSILON WITH TONOS}" # windows-1253:BE
520
+ "\N{GREEK CAPITAL LETTER OMEGA WITH TONOS}" # windows-1253:BF
521
+ "\N{CYRILLIC CAPITAL LETTER IO}" # windows-1251:A8
522
+ "\N{CYRILLIC CAPITAL LETTER DJE}" # windows-1251:80
523
+ "\N{CYRILLIC CAPITAL LETTER GJE}" # windows-1251:81
524
+ "\N{CYRILLIC CAPITAL LETTER UKRAINIAN IE}" # windows-1251:AA
525
+ "\N{CYRILLIC CAPITAL LETTER DZE}" # windows-1251:BD
526
+ "\N{CYRILLIC CAPITAL LETTER BYELORUSSIAN-UKRAINIAN I}" # windows-1251:B2
527
+ "\N{CYRILLIC CAPITAL LETTER YI}" # windows-1251:AF
528
+ "\N{CYRILLIC CAPITAL LETTER JE}" # windows-1251:A3
529
+ "\N{CYRILLIC CAPITAL LETTER LJE}" # windows-1251:8A
530
+ "\N{CYRILLIC CAPITAL LETTER NJE}" # windows-1251:8C
531
+ "\N{CYRILLIC CAPITAL LETTER TSHE}" # windows-1251:8E
532
+ "\N{CYRILLIC CAPITAL LETTER KJE}" # windows-1251:8D
533
+ "\N{CYRILLIC CAPITAL LETTER SHORT U}" # windows-1251:A1
534
+ "\N{CYRILLIC CAPITAL LETTER DZHE}" # windows-1251:8F
535
+ "\N{CYRILLIC SMALL LETTER IO}" # windows-1251:B8
536
+ "\N{CYRILLIC SMALL LETTER DJE}" # windows-1251:90
537
+ "\N{CYRILLIC SMALL LETTER GJE}" # windows-1251:83
538
+ "\N{CYRILLIC SMALL LETTER UKRAINIAN IE}" # windows-1251:BA
539
+ "\N{CYRILLIC SMALL LETTER DZE}" # windows-1251:BE
540
+ "\N{CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I}" # windows-1251:B3
541
+ "\N{CYRILLIC SMALL LETTER YI}" # windows-1251:BF
542
+ "\N{CYRILLIC SMALL LETTER JE}" # windows-1251:BC
543
+ "\N{CYRILLIC SMALL LETTER LJE}" # windows-1251:9A
544
+ "\N{CYRILLIC SMALL LETTER NJE}" # windows-1251:9C
545
+ "\N{CYRILLIC SMALL LETTER TSHE}" # windows-1251:9E
546
+ "\N{CYRILLIC SMALL LETTER KJE}" # windows-1251:9D
547
+ "\N{CYRILLIC SMALL LETTER SHORT U}" # windows-1251:A2
548
+ "\N{CYRILLIC SMALL LETTER DZHE}" # windows-1251:9F
549
+ "\N{CYRILLIC CAPITAL LETTER GHE WITH UPTURN}" # windows-1251:A5
550
+ "\N{CYRILLIC SMALL LETTER GHE WITH UPTURN}" # windows-1251:B4
551
+ "\N{EN DASH}" # windows-1252:96
552
+ "\N{EM DASH}" # windows-1252:97
553
+ "\N{HORIZONTAL BAR}" # windows-1253:AF
554
+ "\N{LEFT SINGLE QUOTATION MARK}" # windows-1252:91
555
+ "\N{RIGHT SINGLE QUOTATION MARK}" # windows-1252:92
556
+ "\N{SINGLE LOW-9 QUOTATION MARK}" # windows-1252:82
557
+ "\N{LEFT DOUBLE QUOTATION MARK}" # windows-1252:93
558
+ "\N{RIGHT DOUBLE QUOTATION MARK}" # windows-1252:94
559
+ "\N{DOUBLE LOW-9 QUOTATION MARK}" # windows-1252:84
560
+ "\N{DAGGER}" # windows-1252:86
561
+ "\N{DOUBLE DAGGER}" # windows-1252:87
562
+ "\N{BULLET}" # windows-1252:95
563
+ "\N{HORIZONTAL ELLIPSIS}" # windows-1252:85
564
+ "\N{PER MILLE SIGN}" # windows-1252:89
565
+ "\N{SINGLE LEFT-POINTING ANGLE QUOTATION MARK}" # windows-1252:8B
566
+ "\N{SINGLE RIGHT-POINTING ANGLE QUOTATION MARK}" # windows-1252:9B
567
+ "\N{EURO SIGN}" # windows-1252:80
568
+ "\N{NUMERO SIGN}" # windows-1251:B9
569
+ "\N{TRADE MARK SIGN}" # windows-1252:99
570
+ ),
571
+ # Letters that decode to 0x80 - 0xBF in a Latin-1-like encoding,
572
+ # and don't usually stand for themselves when adjacent to mojibake.
573
+ # This excludes spaces, dashes, 'bullet', quotation marks, and ellipses.
574
+ "utf8_continuation_strict": (
575
+ "\x80-\xbf"
576
+ "\N{LATIN CAPITAL LETTER A WITH OGONEK}" # windows-1250:A5
577
+ "\N{LATIN CAPITAL LETTER AE}" # windows-1257:AF
578
+ "\N{LATIN CAPITAL LETTER L WITH CARON}" # windows-1250:BC
579
+ "\N{LATIN CAPITAL LETTER L WITH STROKE}" # windows-1250:A3
580
+ "\N{LATIN CAPITAL LETTER O WITH STROKE}" # windows-1257:A8
581
+ "\N{LATIN CAPITAL LETTER R WITH CEDILLA}" # windows-1257:AA
582
+ "\N{LATIN CAPITAL LETTER S WITH ACUTE}" # windows-1250:8C
583
+ "\N{LATIN CAPITAL LETTER S WITH CARON}" # windows-1252:8A
584
+ "\N{LATIN CAPITAL LETTER S WITH CEDILLA}" # windows-1250:AA
585
+ "\N{LATIN CAPITAL LETTER T WITH CARON}" # windows-1250:8D
586
+ "\N{LATIN CAPITAL LETTER Y WITH DIAERESIS}" # windows-1252:9F
587
+ "\N{LATIN CAPITAL LETTER Z WITH ACUTE}" # windows-1250:8F
588
+ "\N{LATIN CAPITAL LETTER Z WITH CARON}" # windows-1252:8E
589
+ "\N{LATIN CAPITAL LETTER Z WITH DOT ABOVE}" # windows-1250:AF
590
+ "\N{LATIN CAPITAL LIGATURE OE}" # windows-1252:8C
591
+ "\N{LATIN SMALL LETTER A WITH OGONEK}" # windows-1250:B9
592
+ "\N{LATIN SMALL LETTER AE}" # windows-1257:BF
593
+ "\N{LATIN SMALL LETTER F WITH HOOK}" # windows-1252:83
594
+ "\N{LATIN SMALL LETTER L WITH CARON}" # windows-1250:BE
595
+ "\N{LATIN SMALL LETTER L WITH STROKE}" # windows-1250:B3
596
+ "\N{LATIN SMALL LETTER O WITH STROKE}" # windows-1257:B8
597
+ "\N{LATIN SMALL LETTER R WITH CEDILLA}" # windows-1257:BA
598
+ "\N{LATIN SMALL LETTER S WITH ACUTE}" # windows-1250:9C
599
+ "\N{LATIN SMALL LETTER S WITH CARON}" # windows-1252:9A
600
+ "\N{LATIN SMALL LETTER S WITH CEDILLA}" # windows-1250:BA
601
+ "\N{LATIN SMALL LETTER T WITH CARON}" # windows-1250:9D
602
+ "\N{LATIN SMALL LETTER Z WITH ACUTE}" # windows-1250:9F
603
+ "\N{LATIN SMALL LETTER Z WITH CARON}" # windows-1252:9E
604
+ "\N{LATIN SMALL LETTER Z WITH DOT ABOVE}" # windows-1250:BF
605
+ "\N{LATIN SMALL LIGATURE OE}" # windows-1252:9C
606
+ "\N{MODIFIER LETTER CIRCUMFLEX ACCENT}" # windows-1252:88
607
+ "\N{CARON}" # windows-1250:A1
608
+ "\N{BREVE}" # windows-1250:A2
609
+ "\N{OGONEK}" # windows-1250:B2
610
+ "\N{SMALL TILDE}" # windows-1252:98
611
+ "\N{DOUBLE ACUTE ACCENT}" # windows-1250:BD
612
+ "\N{GREEK TONOS}" # windows-1253:B4
613
+ "\N{GREEK DIALYTIKA TONOS}" # windows-1253:A1
614
+ "\N{GREEK CAPITAL LETTER ALPHA WITH TONOS}" # windows-1253:A2
615
+ "\N{GREEK CAPITAL LETTER EPSILON WITH TONOS}" # windows-1253:B8
616
+ "\N{GREEK CAPITAL LETTER ETA WITH TONOS}" # windows-1253:B9
617
+ "\N{GREEK CAPITAL LETTER IOTA WITH TONOS}" # windows-1253:BA
618
+ "\N{GREEK CAPITAL LETTER OMICRON WITH TONOS}" # windows-1253:BC
619
+ "\N{GREEK CAPITAL LETTER UPSILON WITH TONOS}" # windows-1253:BE
620
+ "\N{GREEK CAPITAL LETTER OMEGA WITH TONOS}" # windows-1253:BF
621
+ "\N{CYRILLIC CAPITAL LETTER IO}" # windows-1251:A8
622
+ "\N{CYRILLIC CAPITAL LETTER DJE}" # windows-1251:80
623
+ "\N{CYRILLIC CAPITAL LETTER GJE}" # windows-1251:81
624
+ "\N{CYRILLIC CAPITAL LETTER UKRAINIAN IE}" # windows-1251:AA
625
+ "\N{CYRILLIC CAPITAL LETTER DZE}" # windows-1251:BD
626
+ "\N{CYRILLIC CAPITAL LETTER BYELORUSSIAN-UKRAINIAN I}" # windows-1251:B2
627
+ "\N{CYRILLIC CAPITAL LETTER YI}" # windows-1251:AF
628
+ "\N{CYRILLIC CAPITAL LETTER JE}" # windows-1251:A3
629
+ "\N{CYRILLIC CAPITAL LETTER LJE}" # windows-1251:8A
630
+ "\N{CYRILLIC CAPITAL LETTER NJE}" # windows-1251:8C
631
+ "\N{CYRILLIC CAPITAL LETTER TSHE}" # windows-1251:8E
632
+ "\N{CYRILLIC CAPITAL LETTER KJE}" # windows-1251:8D
633
+ "\N{CYRILLIC CAPITAL LETTER SHORT U}" # windows-1251:A1
634
+ "\N{CYRILLIC CAPITAL LETTER DZHE}" # windows-1251:8F
635
+ "\N{CYRILLIC SMALL LETTER IO}" # windows-1251:B8
636
+ "\N{CYRILLIC SMALL LETTER DJE}" # windows-1251:90
637
+ "\N{CYRILLIC SMALL LETTER GJE}" # windows-1251:83
638
+ "\N{CYRILLIC SMALL LETTER UKRAINIAN IE}" # windows-1251:BA
639
+ "\N{CYRILLIC SMALL LETTER DZE}" # windows-1251:BE
640
+ "\N{CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I}" # windows-1251:B3
641
+ "\N{CYRILLIC SMALL LETTER YI}" # windows-1251:BF
642
+ "\N{CYRILLIC SMALL LETTER JE}" # windows-1251:BC
643
+ "\N{CYRILLIC SMALL LETTER LJE}" # windows-1251:9A
644
+ "\N{CYRILLIC SMALL LETTER NJE}" # windows-1251:9C
645
+ "\N{CYRILLIC SMALL LETTER TSHE}" # windows-1251:9E
646
+ "\N{CYRILLIC SMALL LETTER KJE}" # windows-1251:9D
647
+ "\N{CYRILLIC SMALL LETTER SHORT U}" # windows-1251:A2
648
+ "\N{CYRILLIC SMALL LETTER DZHE}" # windows-1251:9F
649
+ "\N{CYRILLIC CAPITAL LETTER GHE WITH UPTURN}" # windows-1251:A5
650
+ "\N{CYRILLIC SMALL LETTER GHE WITH UPTURN}" # windows-1251:B4
651
+ "\N{DAGGER}" # windows-1252:86
652
+ "\N{DOUBLE DAGGER}" # windows-1252:87
653
+ "\N{PER MILLE SIGN}" # windows-1252:89
654
+ "\N{SINGLE LEFT-POINTING ANGLE QUOTATION MARK}" # windows-1252:8B
655
+ "\N{SINGLE RIGHT-POINTING ANGLE QUOTATION MARK}" # windows-1252:9B
656
+ "\N{EURO SIGN}" # windows-1252:80
657
+ "\N{NUMERO SIGN}" # windows-1251:B9
658
+ "\N{TRADE MARK SIGN}" # windows-1252:99
659
+ ),
660
+ }
661
+
662
+ # This regex uses UTF8_CLUES to find sequences of likely mojibake.
663
+ # It matches them with + so that several adjacent UTF-8-looking sequences
664
+ # get coalesced into one, allowing them to be fixed more efficiently
665
+ # and not requiring every individual subsequence to be detected as 'badness'.
666
+ #
667
+ # We accept spaces in place of "utf8_continuation", because spaces might have
668
+ # been intended to be U+A0 NO-BREAK SPACE.
669
+ #
670
+ # We do a lookbehind to make sure the previous character isn't a
671
+ # "utf8_continuation_strict" character, so that we don't fix just a few
672
+ # characters in a huge garble and make the situation worse.
673
+ #
674
+ # Unfortunately, the matches to this regular expression won't show their
675
+ # surrounding context, and including context would make the expression much
676
+ # less efficient. The 'badness' rules that require context, such as a preceding
677
+ # lowercase letter, will prevent some cases of inconsistent UTF-8 from being
678
+ # fixed when they don't see it.
679
+ UTF8_DETECTOR_RE = re.compile(
680
+ """
681
+ (?<! [{utf8_continuation_strict}])
682
+ (
683
+ [{utf8_first_of_2}] [{utf8_continuation}]
684
+ |
685
+ [{utf8_first_of_3}] [{utf8_continuation}]{{2}}
686
+ |
687
+ [{utf8_first_of_4}] [{utf8_continuation}]{{3}}
688
+ )+
689
+ """.format(**UTF8_CLUES),
690
+ re.VERBOSE,
691
+ )
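The same clue-class-plus-regex idea can be shown in miniature. The sketch below drastically trims the clue strings to two windows-1252/Latin-1 lead characters and omits the real pattern's space tolerance and lookbehind, but it detects the most common two-byte mojibake and shows the standard repair step (re-encode as Latin-1, decode as UTF-8); `find_mojibake` and `fix_span` are illustrative names, not part of this module:

```python
import re

# Miniature clue set: U+00C3 'Ã' and U+00C2 'Â' are what the UTF-8 lead
# bytes 0xC3/0xC2 look like after a Latin-1 misdecode.
MINI_CLUES = {
    "utf8_first_of_2": "\N{LATIN CAPITAL LETTER A WITH TILDE}"
    "\N{LATIN CAPITAL LETTER A WITH CIRCUMFLEX}",
    "utf8_continuation": "\x80-\xbf",
}

# Like the real detector, '+' coalesces adjacent two-byte sequences into
# one match so a run of mojibake is fixed in a single span.
MINI_DETECTOR = re.compile(
    "([{utf8_first_of_2}][{utf8_continuation}])+".format(**MINI_CLUES)
)

def find_mojibake(text):
    return [m.group(0) for m in MINI_DETECTOR.finditer(text)]

def fix_span(span):
    # Undo the misdecode: back to the original bytes, then decode as UTF-8.
    return span.encode("latin-1").decode("utf-8")

# 'voilà' encoded as UTF-8 but decoded as Latin-1 becomes 'voilÃ\xa0'.
spans = find_mojibake("voil\u00c3\u00a0")
print([fix_span(s) for s in spans])  # the span decodes back to 'à'
```

Coalescing matters for efficiency: fixing one long span is one encode/decode round trip instead of one per character pair.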
minigpt2/lib/python3.10/site-packages/pydub-0.25.1.dist-info/AUTHORS ADDED
@@ -0,0 +1,98 @@
1
+ James Robert
2
+ github: jiaaro
3
+ twitter: @jiaaro
4
+ web: jiaaro.com
5
+ email: pydub@jiaaro.com
6
+
7
+ Marc Webbie
8
+ github: marcwebbie
9
+
10
+ Jean-philippe Serafin
11
+ github: jeanphix
12
+
13
+ Anurag Ramdasan
14
+ github: AnuragRamdasan
15
+
16
+ Choongmin Lee
17
+ github: clee704
18
+
19
+ Patrick Pittman
20
+ github: ptpittman
21
+
22
+ Hunter Lang
23
+ github: hunterlang
24
+
25
+ Alexey
26
+ github: nihisil
27
+
28
+ Jaymz Campbell
29
+ github: jaymzcd
30
+
31
+ Ross McFarland
32
+ github: ross
33
+
34
+ John McMellen
35
+ github: jmcmellen
36
+
37
+ Johan Lövgren
38
+ github: dashj
39
+
40
+ Joachim Krüger
41
+ github: jkrgr
42
+
43
+ Shichao An
44
+ github: shichao-an
45
+
46
+ Michael Bortnyck
47
+ github: mbortnyck
48
+
49
+ André Cloete
50
+ github: aj-cloete
51
+
52
+ David Acacio
53
+ github: dacacioa
54
+
55
+ Thiago Abdnur
56
+ github: bolaum
57
+
58
+ Aurélien Ooms
59
+ github: aureooms
60
+
61
+ Mike Mattozzi
62
+ github: mmattozzi
63
+
64
+ Marcio Mazza
65
+ github: marciomazza
66
+
67
+ Sungsu Lim
68
+ github: proflim
69
+
70
+ Evandro Myller
71
+ github: emyller
72
+
73
+ Sérgio Agostinho
74
+ github: SergioRAgostinho
75
+
76
+ Antonio Larrosa
77
+ github: antlarr
78
+
79
+ Aaron Craig
80
+ github: craigthelinguist
81
+
82
+ Carlos del Castillo
83
+ github: greyalien502
84
+
85
+ Yudong Sun
86
+ github: sunjerry019
87
+
88
+ Jorge Perianez
89
+ github: JPery
90
+
91
+ Chendi Luo
92
+ github: Creonalia
93
+
94
+ Daniel Lefevre
95
+ gitHub: dplefevre
96
+
97
+ Grzegorz Kotfis
98
+ github: gkotfis
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE ADDED
@@ -0,0 +1,3 @@
1
+ This software is made available under the terms of *either* of the
2
+ licenses found in LICENSE.APACHE2 or LICENSE.MIT. Contributions to this software are
3
+ made under the terms of *both* these licenses.
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE.APACHE2 ADDED
@@ -0,0 +1,202 @@
1
+
2
+ Apache License
3
+ Version 2.0, January 2004
4
+ http://www.apache.org/licenses/
5
+
6
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7
+
8
+ 1. Definitions.
9
+
10
+ "License" shall mean the terms and conditions for use, reproduction,
11
+ and distribution as defined by Sections 1 through 9 of this document.
12
+
13
+ "Licensor" shall mean the copyright owner or entity authorized by
14
+ the copyright owner that is granting the License.
15
+
16
+ "Legal Entity" shall mean the union of the acting entity and all
17
+ other entities that control, are controlled by, or are under common
18
+ control with that entity. For the purposes of this definition,
19
+ "control" means (i) the power, direct or indirect, to cause the
20
+ direction or management of such entity, whether by contract or
21
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
22
+ outstanding shares, or (iii) beneficial ownership of such entity.
23
+
24
+ "You" (or "Your") shall mean an individual or Legal Entity
25
+ exercising permissions granted by this License.
26
+
27
+ "Source" form shall mean the preferred form for making modifications,
28
+ including but not limited to software source code, documentation
29
+ source, and configuration files.
30
+
31
+ "Object" form shall mean any form resulting from mechanical
32
+ transformation or translation of a Source form, including but
33
+ not limited to compiled object code, generated documentation,
34
+ and conversions to other media types.
35
+
36
+ "Work" shall mean the work of authorship, whether in Source or
37
+ Object form, made available under the License, as indicated by a
38
+ copyright notice that is included in or attached to the work
39
+ (an example is provided in the Appendix below).
40
+
41
+ "Derivative Works" shall mean any work, whether in Source or Object
42
+ form, that is based on (or derived from) the Work and for which the
43
+ editorial revisions, annotations, elaborations, or other modifications
44
+ represent, as a whole, an original work of authorship. For the purposes
45
+ of this License, Derivative Works shall not include works that remain
46
+ separable from, or merely link (or bind by name) to the interfaces of,
47
+ the Work and Derivative Works thereof.
48
+
49
+ "Contribution" shall mean any work of authorship, including
50
+ the original version of the Work and any modifications or additions
51
+ to that Work or Derivative Works thereof, that is intentionally
52
+ submitted to Licensor for inclusion in the Work by the copyright owner
53
+ or by an individual or Legal Entity authorized to submit on behalf of
54
+ the copyright owner. For the purposes of this definition, "submitted"
55
+ means any form of electronic, verbal, or written communication sent
56
+ to the Licensor or its representatives, including but not limited to
57
+ communication on electronic mailing lists, source code control systems,
58
+ and issue tracking systems that are managed by, or on behalf of, the
59
+ Licensor for the purpose of discussing and improving the Work, but
60
+ excluding communication that is conspicuously marked or otherwise
61
+ designated in writing by the copyright owner as "Not a Contribution."
62
+
63
+ "Contributor" shall mean Licensor and any individual or Legal Entity
64
+ on behalf of whom a Contribution has been received by Licensor and
65
+ subsequently incorporated within the Work.
66
+
67
+ 2. Grant of Copyright License. Subject to the terms and conditions of
68
+ this License, each Contributor hereby grants to You a perpetual,
69
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70
+ copyright license to reproduce, prepare Derivative Works of,
71
+ publicly display, publicly perform, sublicense, and distribute the
72
+ Work and such Derivative Works in Source or Object form.
73
+
74
+ 3. Grant of Patent License. Subject to the terms and conditions of
75
+ this License, each Contributor hereby grants to You a perpetual,
76
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/LICENSE.MIT ADDED
@@ -0,0 +1,20 @@
+ The MIT License (MIT)
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/METADATA ADDED
@@ -0,0 +1,104 @@
+ Metadata-Version: 2.1
+ Name: sniffio
+ Version: 1.3.1
+ Summary: Sniff out which async library your code is running under
+ Author-email: "Nathaniel J. Smith" <njs@pobox.com>
+ License: MIT OR Apache-2.0
+ Project-URL: Homepage, https://github.com/python-trio/sniffio
+ Project-URL: Documentation, https://sniffio.readthedocs.io/
+ Project-URL: Changelog, https://sniffio.readthedocs.io/en/latest/history.html
+ Keywords: async,trio,asyncio
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Framework :: Trio
+ Classifier: Framework :: AsyncIO
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Operating System :: MacOS :: MacOS X
+ Classifier: Operating System :: Microsoft :: Windows
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Programming Language :: Python :: Implementation :: CPython
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
+ Classifier: Intended Audience :: Developers
+ Classifier: Development Status :: 5 - Production/Stable
+ Requires-Python: >=3.7
+ Description-Content-Type: text/x-rst
+ License-File: LICENSE
+ License-File: LICENSE.APACHE2
+ License-File: LICENSE.MIT
+
+ .. image:: https://img.shields.io/badge/chat-join%20now-blue.svg
+    :target: https://gitter.im/python-trio/general
+    :alt: Join chatroom
+
+ .. image:: https://img.shields.io/badge/docs-read%20now-blue.svg
+    :target: https://sniffio.readthedocs.io/en/latest/?badge=latest
+    :alt: Documentation Status
+
+ .. image:: https://img.shields.io/pypi/v/sniffio.svg
+    :target: https://pypi.org/project/sniffio
+    :alt: Latest PyPi version
+
+ .. image:: https://img.shields.io/conda/vn/conda-forge/sniffio.svg
+    :target: https://anaconda.org/conda-forge/sniffio
+    :alt: Latest conda-forge version
+
+ .. image:: https://travis-ci.org/python-trio/sniffio.svg?branch=master
+    :target: https://travis-ci.org/python-trio/sniffio
+    :alt: Automated test status
+
+ .. image:: https://codecov.io/gh/python-trio/sniffio/branch/master/graph/badge.svg
+    :target: https://codecov.io/gh/python-trio/sniffio
+    :alt: Test coverage
+
+ =================================================================
+ sniffio: Sniff out which async library your code is running under
+ =================================================================
+
+ You're writing a library. You've decided to be ambitious, and support
+ multiple async I/O packages, like `Trio
+ <https://trio.readthedocs.io>`__, and `asyncio
+ <https://docs.python.org/3/library/asyncio.html>`__, and ... You've
+ written a bunch of clever code to handle all the differences. But...
+ how do you know *which* piece of clever code to run?
+
+ This is a tiny package whose only purpose is to let you detect which
+ async library your code is running under.
+
+ * Documentation: https://sniffio.readthedocs.io
+
+ * Bug tracker and source code: https://github.com/python-trio/sniffio
+
+ * License: MIT or Apache License 2.0, your choice
+
+ * Contributor guide: https://trio.readthedocs.io/en/latest/contributing.html
+
+ * Code of conduct: Contributors are requested to follow our `code of
+   conduct
+   <https://trio.readthedocs.io/en/latest/code-of-conduct.html>`_
+   in all project spaces.
+
+ This library is maintained by the Trio project, as a service to the
+ async Python community as a whole.
+
+
+ Quickstart
+ ----------
+
+ .. code-block:: python3
+
+    from sniffio import current_async_library
+    import trio
+    import asyncio
+
+    async def print_library():
+        library = current_async_library()
+        print("This is:", library)
+
+    # Prints "This is trio"
+    trio.run(print_library)
+
+    # Prints "This is asyncio"
+    asyncio.run(print_library())
+
+ For more details, including how to add support to new async libraries,
+ `please peruse our fine manual <https://sniffio.readthedocs.io>`__.
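The quickstart above only shows detection inside a running event loop. One detail worth knowing: when no recognized async library is running, `current_async_library()` raises `sniffio.AsyncLibraryNotFoundError`, which is how library code usually falls back to a synchronous path. A minimal sketch, assuming sniffio is installed and using only the standard-library asyncio (no Trio required):

```python
import asyncio

import sniffio


def detect() -> str:
    # Outside of any recognized async context, sniffio cannot tell
    # which library is in charge and raises AsyncLibraryNotFoundError.
    try:
        return sniffio.current_async_library()
    except sniffio.AsyncLibraryNotFoundError:
        return "none"


async def detect_async() -> str:
    # Inside a coroutine driven by asyncio's event loop,
    # detection returns the string "asyncio".
    return sniffio.current_async_library()


print(detect())                     # no event loop running here
print(asyncio.run(detect_async()))  # detected under asyncio
```

The try/except fallback is the common pattern for libraries that accept both sync and async callers: probe once at the entry point, then dispatch to the matching implementation.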
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/REQUESTED ADDED
File without changes
minigpt2/lib/python3.10/site-packages/sniffio-1.3.1.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: bdist_wheel (0.42.0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (166 Bytes). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/autograd.cpython-310.pyc ADDED
Binary file (17.6 kB). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/dispatcher.cpython-310.pyc ADDED
Binary file (2.88 kB). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/functionalization.cpython-310.pyc ADDED
Binary file (3.8 kB). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/lazy.cpython-310.pyc ADDED
Binary file (11.5 kB). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/meta.cpython-310.pyc ADDED
Binary file (406 Bytes). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/native.cpython-310.pyc ADDED
Binary file (3.36 kB). View file
 
minigpt2/lib/python3.10/site-packages/torchgen/api/__pycache__/python.cpython-310.pyc ADDED
Binary file (28.6 kB). View file