Matt300209 committed
Commit a53b43b · verified · Parent(s): 971728b

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. venv/lib/python3.10/site-packages/__pycache__/cython.cpython-310.pyc +0 -0
  2. venv/lib/python3.10/site-packages/__pycache__/decorator.cpython-310.pyc +0 -0
  3. venv/lib/python3.10/site-packages/__pycache__/google_auth_httplib2.cpython-310.pyc +0 -0
  4. venv/lib/python3.10/site-packages/__pycache__/isympy.cpython-310.pyc +0 -0
  5. venv/lib/python3.10/site-packages/__pycache__/nest_asyncio.cpython-310.pyc +0 -0
  6. venv/lib/python3.10/site-packages/__pycache__/py.cpython-310.pyc +0 -0
  7. venv/lib/python3.10/site-packages/__pycache__/six.cpython-310.pyc +0 -0
  8. venv/lib/python3.10/site-packages/__pycache__/sqlitedict.cpython-310.pyc +0 -0
  9. venv/lib/python3.10/site-packages/__pycache__/threadpoolctl.cpython-310.pyc +0 -0
  10. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/INSTALLER +1 -0
  11. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/METADATA +251 -0
  12. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/RECORD +138 -0
  13. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/WHEEL +6 -0
  14. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/licenses/LICENSE.txt +13 -0
  15. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/licenses/vendor/llhttp/LICENSE +22 -0
  16. venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/top_level.txt +1 -0
  17. venv/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py +302 -0
  18. venv/lib/python3.10/site-packages/antlr4/CommonTokenFactory.py +61 -0
  19. venv/lib/python3.10/site-packages/antlr4/CommonTokenStream.py +87 -0
  20. venv/lib/python3.10/site-packages/antlr4/FileStream.py +27 -0
  21. venv/lib/python3.10/site-packages/antlr4/InputStream.py +87 -0
  22. venv/lib/python3.10/site-packages/antlr4/IntervalSet.py +180 -0
  23. venv/lib/python3.10/site-packages/antlr4/LL1Analyzer.py +173 -0
  24. venv/lib/python3.10/site-packages/antlr4/Lexer.py +329 -0
  25. venv/lib/python3.10/site-packages/antlr4/ListTokenSource.py +144 -0
  26. venv/lib/python3.10/site-packages/antlr4/Parser.py +580 -0
  27. venv/lib/python3.10/site-packages/antlr4/ParserInterpreter.py +170 -0
  28. venv/lib/python3.10/site-packages/antlr4/ParserRuleContext.py +186 -0
  29. venv/lib/python3.10/site-packages/antlr4/PredictionContext.py +623 -0
  30. venv/lib/python3.10/site-packages/antlr4/Recognizer.py +147 -0
  31. venv/lib/python3.10/site-packages/antlr4/RuleContext.py +227 -0
  32. venv/lib/python3.10/site-packages/antlr4/StdinStream.py +11 -0
  33. venv/lib/python3.10/site-packages/antlr4/Token.py +155 -0
  34. venv/lib/python3.10/site-packages/antlr4/TokenStreamRewriter.py +255 -0
  35. venv/lib/python3.10/site-packages/antlr4/Utils.py +33 -0
  36. venv/lib/python3.10/site-packages/antlr4/__init__.py +21 -0
  37. venv/lib/python3.10/site-packages/antlr4/__pycache__/BufferedTokenStream.cpython-310.pyc +0 -0
  38. venv/lib/python3.10/site-packages/antlr4/__pycache__/CommonTokenFactory.cpython-310.pyc +0 -0
  39. venv/lib/python3.10/site-packages/antlr4/__pycache__/CommonTokenStream.cpython-310.pyc +0 -0
  40. venv/lib/python3.10/site-packages/antlr4/__pycache__/FileStream.cpython-310.pyc +0 -0
  41. venv/lib/python3.10/site-packages/antlr4/__pycache__/InputStream.cpython-310.pyc +0 -0
  42. venv/lib/python3.10/site-packages/antlr4/__pycache__/IntervalSet.cpython-310.pyc +0 -0
  43. venv/lib/python3.10/site-packages/antlr4/__pycache__/LL1Analyzer.cpython-310.pyc +0 -0
  44. venv/lib/python3.10/site-packages/antlr4/__pycache__/Lexer.cpython-310.pyc +0 -0
  45. venv/lib/python3.10/site-packages/antlr4/__pycache__/ListTokenSource.cpython-310.pyc +0 -0
  46. venv/lib/python3.10/site-packages/antlr4/__pycache__/Parser.cpython-310.pyc +0 -0
  47. venv/lib/python3.10/site-packages/antlr4/__pycache__/ParserInterpreter.cpython-310.pyc +0 -0
  48. venv/lib/python3.10/site-packages/antlr4/__pycache__/ParserRuleContext.cpython-310.pyc +0 -0
  49. venv/lib/python3.10/site-packages/antlr4/__pycache__/PredictionContext.cpython-310.pyc +0 -0
  50. venv/lib/python3.10/site-packages/antlr4/__pycache__/Recognizer.cpython-310.pyc +0 -0
venv/lib/python3.10/site-packages/__pycache__/cython.cpython-310.pyc ADDED
Binary file (691 Bytes)
 
venv/lib/python3.10/site-packages/__pycache__/decorator.cpython-310.pyc ADDED
Binary file (13.9 kB)
 
venv/lib/python3.10/site-packages/__pycache__/google_auth_httplib2.cpython-310.pyc ADDED
Binary file (8.65 kB)
 
venv/lib/python3.10/site-packages/__pycache__/isympy.cpython-310.pyc ADDED
Binary file (9.5 kB)
 
venv/lib/python3.10/site-packages/__pycache__/nest_asyncio.cpython-310.pyc ADDED
Binary file (6.59 kB)
 
venv/lib/python3.10/site-packages/__pycache__/py.cpython-310.pyc ADDED
Binary file (466 Bytes)
 
venv/lib/python3.10/site-packages/__pycache__/six.cpython-310.pyc ADDED
Binary file (27.7 kB)
 
venv/lib/python3.10/site-packages/__pycache__/sqlitedict.cpython-310.pyc ADDED
Binary file (18.8 kB)
 
venv/lib/python3.10/site-packages/__pycache__/threadpoolctl.cpython-310.pyc ADDED
Binary file (44.2 kB)
 
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
+ pip
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/METADATA ADDED
@@ -0,0 +1,251 @@
+ Metadata-Version: 2.4
+ Name: aiohttp
+ Version: 3.12.15
+ Summary: Async http client/server framework (asyncio)
+ Home-page: https://github.com/aio-libs/aiohttp
+ Maintainer: aiohttp team <team@aiohttp.org>
+ Maintainer-email: team@aiohttp.org
+ License: Apache-2.0 AND MIT
+ Project-URL: Chat: Matrix, https://matrix.to/#/#aio-libs:matrix.org
+ Project-URL: Chat: Matrix Space, https://matrix.to/#/#aio-libs-space:matrix.org
+ Project-URL: CI: GitHub Actions, https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI
+ Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/aiohttp
+ Project-URL: Docs: Changelog, https://docs.aiohttp.org/en/stable/changes.html
+ Project-URL: Docs: RTD, https://docs.aiohttp.org
+ Project-URL: GitHub: issues, https://github.com/aio-libs/aiohttp/issues
+ Project-URL: GitHub: repo, https://github.com/aio-libs/aiohttp
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Framework :: AsyncIO
+ Classifier: Intended Audience :: Developers
+ Classifier: Operating System :: POSIX
+ Classifier: Operating System :: MacOS :: MacOS X
+ Classifier: Operating System :: Microsoft :: Windows
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Internet :: WWW/HTTP
+ Requires-Python: >=3.9
+ Description-Content-Type: text/x-rst
+ License-File: LICENSE.txt
+ License-File: vendor/llhttp/LICENSE
+ Requires-Dist: aiohappyeyeballs>=2.5.0
+ Requires-Dist: aiosignal>=1.4.0
+ Requires-Dist: async-timeout<6.0,>=4.0; python_version < "3.11"
+ Requires-Dist: attrs>=17.3.0
+ Requires-Dist: frozenlist>=1.1.1
+ Requires-Dist: multidict<7.0,>=4.5
+ Requires-Dist: propcache>=0.2.0
+ Requires-Dist: yarl<2.0,>=1.17.0
+ Provides-Extra: speedups
+ Requires-Dist: aiodns>=3.3.0; extra == "speedups"
+ Requires-Dist: Brotli; platform_python_implementation == "CPython" and extra == "speedups"
+ Requires-Dist: brotlicffi; platform_python_implementation != "CPython" and extra == "speedups"
+ Dynamic: license-file
+
+ ==================================
+ Async http client/server framework
+ ==================================
+
+ .. image:: https://raw.githubusercontent.com/aio-libs/aiohttp/master/docs/aiohttp-plain.svg
+    :height: 64px
+    :width: 64px
+    :alt: aiohttp logo
+
+ |
+
+ .. image:: https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg
+    :target: https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI
+    :alt: GitHub Actions status for master branch
+
+ .. image:: https://codecov.io/gh/aio-libs/aiohttp/branch/master/graph/badge.svg
+    :target: https://codecov.io/gh/aio-libs/aiohttp
+    :alt: codecov.io status for master branch
+
+ .. image:: https://img.shields.io/endpoint?url=https://codspeed.io/badge.json
+    :target: https://codspeed.io/aio-libs/aiohttp
+    :alt: Codspeed.io status for aiohttp
+
+ .. image:: https://badge.fury.io/py/aiohttp.svg
+    :target: https://pypi.org/project/aiohttp
+    :alt: Latest PyPI package version
+
+ .. image:: https://readthedocs.org/projects/aiohttp/badge/?version=latest
+    :target: https://docs.aiohttp.org/
+    :alt: Latest Read The Docs
+
+ .. image:: https://img.shields.io/matrix/aio-libs:matrix.org?label=Discuss%20on%20Matrix%20at%20%23aio-libs%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
+    :target: https://matrix.to/#/%23aio-libs:matrix.org
+    :alt: Matrix Room — #aio-libs:matrix.org
+
+ .. image:: https://img.shields.io/matrix/aio-libs-space:matrix.org?label=Discuss%20on%20Matrix%20at%20%23aio-libs-space%3Amatrix.org&logo=matrix&server_fqdn=matrix.org&style=flat
+    :target: https://matrix.to/#/%23aio-libs-space:matrix.org
+    :alt: Matrix Space — #aio-libs-space:matrix.org
+
+
+ Key Features
+ ============
+
+ - Supports both client and server side of HTTP protocol.
+ - Supports both client and server Web-Sockets out-of-the-box and avoids
+   Callback Hell.
+ - Provides Web-server with middleware and pluggable routing.
+
+
+ Getting started
+ ===============
+
+ Client
+ ------
+
+ To get something from the web:
+
+ .. code-block:: python
+
+   import aiohttp
+   import asyncio
+
+   async def main():
+
+       async with aiohttp.ClientSession() as session:
+           async with session.get('http://python.org') as response:
+
+               print("Status:", response.status)
+               print("Content-type:", response.headers['content-type'])
+
+               html = await response.text()
+               print("Body:", html[:15], "...")
+
+   asyncio.run(main())
+
+ This prints:
+
+ .. code-block::
+
+    Status: 200
+    Content-type: text/html; charset=utf-8
+    Body: <!doctype html> ...
+
+ Coming from `requests <https://requests.readthedocs.io/>`_ ? Read `why we need so many lines <https://aiohttp.readthedocs.io/en/latest/http_request_lifecycle.html>`_.
+
+ Server
+ ------
+
+ An example using a simple server:
+
+ .. code-block:: python
+
+     # examples/server_simple.py
+     from aiohttp import web
+
+     async def handle(request):
+         name = request.match_info.get('name', "Anonymous")
+         text = "Hello, " + name
+         return web.Response(text=text)
+
+     async def wshandle(request):
+         ws = web.WebSocketResponse()
+         await ws.prepare(request)
+
+         async for msg in ws:
+             if msg.type == web.WSMsgType.text:
+                 await ws.send_str("Hello, {}".format(msg.data))
+             elif msg.type == web.WSMsgType.binary:
+                 await ws.send_bytes(msg.data)
+             elif msg.type == web.WSMsgType.close:
+                 break
+
+         return ws
+
+
+     app = web.Application()
+     app.add_routes([web.get('/', handle),
+                     web.get('/echo', wshandle),
+                     web.get('/{name}', handle)])
+
+     if __name__ == '__main__':
+         web.run_app(app)
+
+
+ Documentation
+ =============
+
+ https://aiohttp.readthedocs.io/
+
+
+ Demos
+ =====
+
+ https://github.com/aio-libs/aiohttp-demos
+
+
+ External links
+ ==============
+
+ * `Third party libraries
+   <http://aiohttp.readthedocs.io/en/latest/third_party.html>`_
+ * `Built with aiohttp
+   <http://aiohttp.readthedocs.io/en/latest/built_with.html>`_
+ * `Powered by aiohttp
+   <http://aiohttp.readthedocs.io/en/latest/powered_by.html>`_
+
+ Feel free to make a Pull Request for adding your link to these pages!
+
+
+ Communication channels
+ ======================
+
+ *aio-libs Discussions*: https://github.com/aio-libs/aiohttp/discussions
+
+ *Matrix*: `#aio-libs:matrix.org <https://matrix.to/#/#aio-libs:matrix.org>`_
+
+ We support `Stack Overflow
+ <https://stackoverflow.com/questions/tagged/aiohttp>`_.
+ Please add *aiohttp* tag to your question there.
+
+ Requirements
+ ============
+
+ - attrs_
+ - multidict_
+ - yarl_
+ - frozenlist_
+
+ Optionally you may install the aiodns_ library (highly recommended for sake of speed).
+
+ .. _aiodns: https://pypi.python.org/pypi/aiodns
+ .. _attrs: https://github.com/python-attrs/attrs
+ .. _multidict: https://pypi.python.org/pypi/multidict
+ .. _frozenlist: https://pypi.org/project/frozenlist/
+ .. _yarl: https://pypi.python.org/pypi/yarl
+ .. _async-timeout: https://pypi.python.org/pypi/async_timeout
+
+ License
+ =======
+
+ ``aiohttp`` is offered under the Apache 2 license.
+
+
+ Keepsafe
+ ========
+
+ The aiohttp community would like to thank Keepsafe
+ (https://www.getkeepsafe.com) for its support in the early days of
+ the project.
+
+
+ Source code
+ ===========
+
+ The latest developer version is available in a GitHub repository:
+ https://github.com/aio-libs/aiohttp
+
+ Benchmarks
+ ==========
+
+ If you are interested in efficiency, the AsyncIO community maintains a
+ list of benchmarks on the official wiki:
+ https://github.com/python/asyncio/wiki/Benchmarks
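A brief aside on the file just added: core-metadata files such as this METADATA use an RFC 822 style header block, so the standard library's email parser can read the fields. The sketch below is editorial (not part of the commit) and uses a shortened, hypothetical excerpt:

```python
# Read core-metadata key/value headers with the stdlib email parser.
# `sample` is a shortened, hypothetical excerpt of a METADATA file.
from email.parser import HeaderParser

sample = """\
Metadata-Version: 2.4
Name: aiohttp
Version: 3.12.15
Requires-Python: >=3.9
Requires-Dist: attrs>=17.3.0
Requires-Dist: frozenlist>=1.1.1
"""

headers = HeaderParser().parsestr(sample)
print(headers["Name"], headers["Version"])   # single-valued fields
print(headers.get_all("Requires-Dist"))      # fields that may repeat
```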
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/RECORD ADDED
@@ -0,0 +1,138 @@
+ aiohttp-3.12.15.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+ aiohttp-3.12.15.dist-info/METADATA,sha256=uZt4MMQKfAQbpmxyQ63yHApgfILSD4loMGxE79YfMKE,7657
+ aiohttp-3.12.15.dist-info/RECORD,,
+ aiohttp-3.12.15.dist-info/WHEEL,sha256=DTnKjM5OInJxWADod3iQyWxWcdG-eRwxzGww236swpY,151
+ aiohttp-3.12.15.dist-info/licenses/LICENSE.txt,sha256=n4DQ2311WpQdtFchcsJw7L2PCCuiFd3QlZhZQu2Uqes,588
+ aiohttp-3.12.15.dist-info/licenses/vendor/llhttp/LICENSE,sha256=68qFTgE0zSVtZzYnwgSZ9CV363S6zwi58ltianPJEnc,1105
+ aiohttp-3.12.15.dist-info/top_level.txt,sha256=iv-JIaacmTl-hSho3QmphcKnbRRYx1st47yjz_178Ro,8
+ aiohttp/.hash/_cparser.pxd.hash,sha256=pjs-sEXNw_eijXGAedwG-BHnlFp8B7sOCgUagIWaU2A,121
+ aiohttp/.hash/_find_header.pxd.hash,sha256=_mbpD6vM-CVCKq3ulUvsOAz5Wdo88wrDzfpOsMQaMNA,125
+ aiohttp/.hash/_http_parser.pyx.hash,sha256=8LCTs_O4fFH1HswgQLgjUn8gknOO8Z8V63c_hQ4fNnM,125
+ aiohttp/.hash/_http_writer.pyx.hash,sha256=uhOanbDG8R2Pxria3xMb15h7biBeeT3ioBoQNwqKYp8,125
+ aiohttp/.hash/hdrs.py.hash,sha256=v6IaKbsxjsdQxBzhb5AjP0x_9G3rUe84D7avf7AI4cs,116
+ aiohttp/__init__.py,sha256=_g5Icol1-XPd2n0J5OVA9f4m9MPY5dD1wUgj7rzsAUA,8303
+ aiohttp/__pycache__/__init__.cpython-310.pyc,,
+ aiohttp/__pycache__/_cookie_helpers.cpython-310.pyc,,
+ aiohttp/__pycache__/abc.cpython-310.pyc,,
+ aiohttp/__pycache__/base_protocol.cpython-310.pyc,,
+ aiohttp/__pycache__/client.cpython-310.pyc,,
+ aiohttp/__pycache__/client_exceptions.cpython-310.pyc,,
+ aiohttp/__pycache__/client_middleware_digest_auth.cpython-310.pyc,,
+ aiohttp/__pycache__/client_middlewares.cpython-310.pyc,,
+ aiohttp/__pycache__/client_proto.cpython-310.pyc,,
+ aiohttp/__pycache__/client_reqrep.cpython-310.pyc,,
+ aiohttp/__pycache__/client_ws.cpython-310.pyc,,
+ aiohttp/__pycache__/compression_utils.cpython-310.pyc,,
+ aiohttp/__pycache__/connector.cpython-310.pyc,,
+ aiohttp/__pycache__/cookiejar.cpython-310.pyc,,
+ aiohttp/__pycache__/formdata.cpython-310.pyc,,
+ aiohttp/__pycache__/hdrs.cpython-310.pyc,,
+ aiohttp/__pycache__/helpers.cpython-310.pyc,,
+ aiohttp/__pycache__/http.cpython-310.pyc,,
+ aiohttp/__pycache__/http_exceptions.cpython-310.pyc,,
+ aiohttp/__pycache__/http_parser.cpython-310.pyc,,
+ aiohttp/__pycache__/http_websocket.cpython-310.pyc,,
+ aiohttp/__pycache__/http_writer.cpython-310.pyc,,
+ aiohttp/__pycache__/log.cpython-310.pyc,,
+ aiohttp/__pycache__/multipart.cpython-310.pyc,,
+ aiohttp/__pycache__/payload.cpython-310.pyc,,
+ aiohttp/__pycache__/payload_streamer.cpython-310.pyc,,
+ aiohttp/__pycache__/pytest_plugin.cpython-310.pyc,,
+ aiohttp/__pycache__/resolver.cpython-310.pyc,,
+ aiohttp/__pycache__/streams.cpython-310.pyc,,
+ aiohttp/__pycache__/tcp_helpers.cpython-310.pyc,,
+ aiohttp/__pycache__/test_utils.cpython-310.pyc,,
+ aiohttp/__pycache__/tracing.cpython-310.pyc,,
+ aiohttp/__pycache__/typedefs.cpython-310.pyc,,
+ aiohttp/__pycache__/web.cpython-310.pyc,,
+ aiohttp/__pycache__/web_app.cpython-310.pyc,,
+ aiohttp/__pycache__/web_exceptions.cpython-310.pyc,,
+ aiohttp/__pycache__/web_fileresponse.cpython-310.pyc,,
+ aiohttp/__pycache__/web_log.cpython-310.pyc,,
+ aiohttp/__pycache__/web_middlewares.cpython-310.pyc,,
+ aiohttp/__pycache__/web_protocol.cpython-310.pyc,,
+ aiohttp/__pycache__/web_request.cpython-310.pyc,,
+ aiohttp/__pycache__/web_response.cpython-310.pyc,,
+ aiohttp/__pycache__/web_routedef.cpython-310.pyc,,
+ aiohttp/__pycache__/web_runner.cpython-310.pyc,,
+ aiohttp/__pycache__/web_server.cpython-310.pyc,,
+ aiohttp/__pycache__/web_urldispatcher.cpython-310.pyc,,
+ aiohttp/__pycache__/web_ws.cpython-310.pyc,,
+ aiohttp/__pycache__/worker.cpython-310.pyc,,
+ aiohttp/_cookie_helpers.py,sha256=xjCVZKrQIfH1bwN5UeNrem8kevnXwZcBoNY94yyk8Qc,12418
+ aiohttp/_cparser.pxd,sha256=UnbUYCHg4NdXfgyRVYAMv2KTLWClB4P-xCrvtj_r7ew,4295
+ aiohttp/_find_header.pxd,sha256=0GfwFCPN2zxEKTO1_MA5sYq2UfzsG8kcV3aTqvwlz3g,68
+ aiohttp/_headers.pxi,sha256=n701k28dVPjwRnx5j6LpJhLTfj7dqu2vJt7f0O60Oyg,2007
+ aiohttp/_http_parser.cpython-310-x86_64-linux-gnu.so,sha256=rtLjHoDA9H23E7B3g3d6OpEnIaWOjlzRnNm4dNgoiRo,2736176
+ aiohttp/_http_parser.pyx,sha256=1L07PKuJjgDGQuqlmy965a5aoTdOaYWX99gFowLyPiE,28239
+ aiohttp/_http_writer.cpython-310-x86_64-linux-gnu.so,sha256=ZL4K9tshhZQDS7lIOCB-WPfcYgEJssQtnBGLlIDYQ0U,476088
+ aiohttp/_http_writer.pyx,sha256=96seJigne4J3LVnB3DAzwTSV12nfZ7HR1JsaR0p13VI,4561
+ aiohttp/_websocket/.hash/mask.pxd.hash,sha256=Y0zBddk_ck3pi9-BFzMcpkcvCKvwvZ4GTtZFb9u1nxQ,128
+ aiohttp/_websocket/.hash/mask.pyx.hash,sha256=90owpXYM8_kIma4KUcOxhWSk-Uv4NVMBoCYeFM1B3d0,128
+ aiohttp/_websocket/.hash/reader_c.pxd.hash,sha256=5xf3oobk6vx4xbJm-xtZ1_QufB8fYFtLQV2MNdqUc1w,132
+ aiohttp/_websocket/__init__.py,sha256=Mar3R9_vBN_Ea4lsW7iTAVXD7OKswKPGqF5xgSyt77k,44
+ aiohttp/_websocket/__pycache__/__init__.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/helpers.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/models.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/reader.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/reader_c.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/reader_py.cpython-310.pyc,,
+ aiohttp/_websocket/__pycache__/writer.cpython-310.pyc,,
+ aiohttp/_websocket/helpers.py,sha256=P-XLv8IUaihKzDenVUqfKU5DJbWE5HvG8uhvUZK8Ic4,5038
+ aiohttp/_websocket/mask.cpython-310-x86_64-linux-gnu.so,sha256=m6O4IsGUsEoLFkKWVXojt_a03R_7n8eH5mB0tZPtdPY,220664
+ aiohttp/_websocket/mask.pxd,sha256=sBmZ1Amym9kW4Ge8lj1fLZ7mPPya4LzLdpkQExQXv5M,112
+ aiohttp/_websocket/mask.pyx,sha256=BHjOtV0O0w7xp9p0LNADRJvGmgfPn9sGeJvSs0fL__4,1397
+ aiohttp/_websocket/models.py,sha256=XAzjs_8JYszWXIgZ6R3ZRrF-tX9Q_6LiD49WRYojopM,2121
+ aiohttp/_websocket/reader.py,sha256=eC4qS0c5sOeQ2ebAHLaBpIaTVFaSKX79pY2xvh3Pqyw,1030
+ aiohttp/_websocket/reader_c.cpython-310-x86_64-linux-gnu.so,sha256=hLuO10HvKQ11dZd7oTrecTP33H4Ezvfmty6rzcn89zY,1694720
+ aiohttp/_websocket/reader_c.pxd,sha256=nl_njtDrzlQU0rjgGGjZDB-swguE0tX_bCPobkShVa4,2625
+ aiohttp/_websocket/reader_c.py,sha256=gSsE_iSBr7-ORvOmgkCT7Jpj4_j3854i_Cp88Se1_6E,18791
+ aiohttp/_websocket/reader_py.py,sha256=gSsE_iSBr7-ORvOmgkCT7Jpj4_j3854i_Cp88Se1_6E,18791
+ aiohttp/_websocket/writer.py,sha256=9qCnQnCFwPmvf6U6i_7VfTldjpcDfQ_ojeCv5mXoMkw,7139
+ aiohttp/abc.py,sha256=jA2jRYAxc217gO96C-wDXcAPcDWjVJpqXrTGfa7uwqM,7148
+ aiohttp/base_protocol.py,sha256=Tp8cxUPQvv9kUPk3w6lAzk6d2MAzV3scwI_3Go3C47c,3025
+ aiohttp/client.py,sha256=UmwwoDurmDDvxTwa4e1VElko4mc8_Snsvs3CA6SE-kc,57584
+ aiohttp/client_exceptions.py,sha256=uyKbxI2peZhKl7lELBMx3UeusNkfpemPWpGFq0r6JeM,11367
+ aiohttp/client_middleware_digest_auth.py,sha256=BIoQJ5eWL5NNkPOmezTGrceWIho8ETDvS8NKvX-3Xdw,17088
+ aiohttp/client_middlewares.py,sha256=kP5N9CMzQPMGPIEydeVUiLUTLsw8Vl8Gr4qAWYdu3vM,1918
+ aiohttp/client_proto.py,sha256=56_WtLStZGBFPYKzgEgY6v24JkhV1y6JEmmuxeJT2So,12110
+ aiohttp/client_reqrep.py,sha256=5IZIhC016PMwmEg1EyGdP2byKcY8n28Dc_TLAzU2e1o,53531
+ aiohttp/client_ws.py,sha256=1CIjIXwyzOMIYw6AjUES4-qUwbyVHW1seJKQfg_Rta8,15109
+ aiohttp/compression_utils.py,sha256=LDUVfDiChHNb_ojMEITJuoSEbOAQ4Qznu07vTHL-_pY,8868
+ aiohttp/connector.py,sha256=WQetKoSW7XnHA9r4o9OWwO3-n7ymOwBd2Tg_xHNw0Bs,68456
+ aiohttp/cookiejar.py,sha256=e28ZMQwJ5P0vbPX1OX4Se7-k3zeGvocFEqzGhwpG53k,18922
+ aiohttp/formdata.py,sha256=dRmQY8LA6WSj5HzqF9tUzu_SNe6mzZ1DqXXkyg4ga20,6410
+ aiohttp/hdrs.py,sha256=2rj5MyA-6yRdYPhW5UKkW4iNWhEAlGIOSBH5D4FmKNE,5111
+ aiohttp/helpers.py,sha256=bblNEhp4hFimEmxMdPNxEluBY17L5YUArHYvoxzoEe4,29614
+ aiohttp/http.py,sha256=8o8j8xH70OWjnfTWA9V44NR785QPxEPrUtzMXiAVpwc,1842
+ aiohttp/http_exceptions.py,sha256=AZafFHgtAkAgrKZf8zYPU8VX2dq32-VAoP-UZxBLU0c,2960
+ aiohttp/http_parser.py,sha256=SRADKjgUtYJxUgvvYTyJA0wB8WpKjTcKpzIT8fsE1aE,36896
+ aiohttp/http_websocket.py,sha256=8VXFKw6KQUEmPg48GtRMB37v0gTK7A0inoxXuDxMZEc,842
+ aiohttp/http_writer.py,sha256=fbRtKPYSqRbtAdr_gqpjF2-4sI1ESL8dPDF-xY_mAMY,12446
+ aiohttp/log.py,sha256=BbNKx9e3VMIm0xYjZI0IcBBoS7wjdeIeSaiJE7-qK2g,325
+ aiohttp/multipart.py,sha256=vNIFEgZUdVzYYU0wsowcx7CvsUTkqPo-LWgzupsPnL8,39901
+ aiohttp/payload.py,sha256=O6nsYNULL7AeM2cyJ6TYX73ncVnL5xJwt5AegxwMKqw,40874
+ aiohttp/payload_streamer.py,sha256=ZzEYyfzcjGWkVkK3XR2pBthSCSIykYvY3Wr5cGQ2eTc,2211
+ aiohttp/py.typed,sha256=sow9soTwP9T_gEAQSVh7Gb8855h04Nwmhs2We-JRgZM,7
+ aiohttp/pytest_plugin.py,sha256=z4XwqmsKdyJCKxbGiA5kFf90zcedvomqk4RqjZbhKNk,12901
+ aiohttp/resolver.py,sha256=gsrfUpFf8iHlcHfJvY-1fiBHW3PRvRVNb5lNZBg3zlY,10031
+ aiohttp/streams.py,sha256=U-qTkuAqIfpJChuKEy-vYn8nQ_Z1MVcW0WO2DHiJz_o,22329
+ aiohttp/tcp_helpers.py,sha256=BSadqVWaBpMFDRWnhaaR941N9MiDZ7bdTrxgCb0CW-M,961
+ aiohttp/test_utils.py,sha256=ZJSzZWjC76KSbtwddTKcP6vHpUl_ozfAf3F93ewmHRU,23016
+ aiohttp/tracing.py,sha256=-6aaW6l0J9uJD45LzR4cijYH0j62pt0U_nn_aVzFku4,14558
+ aiohttp/typedefs.py,sha256=wUlqwe9Mw9W8jT3HsYJcYk00qP3EMPz3nTkYXmeNN48,1657
+ aiohttp/web.py,sha256=sG_U41AY4S_LBY9sReiBzXKJRZpXk8xgiE_l5S_UPPg,18390
+ aiohttp/web_app.py,sha256=lGU_aAMN-h3wy-LTTHi6SeKH8ydt1G51BXcCspgD5ZA,19452
+ aiohttp/web_exceptions.py,sha256=7nIuiwhZ39vJJ9KrWqArA5QcWbUdqkz2CLwEpJapeN8,10360
+ aiohttp/web_fileresponse.py,sha256=EtDuw5mF7uGkjrrwSBaDQk6F1FJW4pnwE2pZGv3T1QI,16474
+ aiohttp/web_log.py,sha256=rX5D7xLOX2B6BMdiZ-chme_KfJfW5IXEoFwLfkfkajs,7865
+ aiohttp/web_middlewares.py,sha256=sFI0AgeNjdyAjuz92QtMIpngmJSOxrqe2Jfbs4BNUu0,4165
+ aiohttp/web_protocol.py,sha256=c8a0PKGqfhIAiq2RboMsy1NRza4dnj6gnXIWvJUeCF0,27015
+ aiohttp/web_request.py,sha256=zN96OlMRlrCFOMRpdh7y9rvHP0Hm8zavC0OFCj0wlSg,29833
+ aiohttp/web_response.py,sha256=PKcziNU4LmftXqKVvoRMrAbOeVClpSN-iznHsiWezmU,29341
+ aiohttp/web_routedef.py,sha256=VT1GAx6BrawoDh5RwBwBu5wSABSqgWwAe74AUCyZAEo,6110
+ aiohttp/web_runner.py,sha256=v1G1nKiOOQgFnTSR4IMc6I9ReEFDMaHtMLvO_roDM-A,11786
+ aiohttp/web_server.py,sha256=-9WDKUAiR9ll-rSdwXSqG6YjaoW79d1R4y0BGSqgUMA,2888
+ aiohttp/web_urldispatcher.py,sha256=sFkcsa8qLFkDp47_oW7Z7fiq7DcVXiff1Etn0QN8DJA,44000
+ aiohttp/web_ws.py,sha256=lItgmyatkXh0M6EY7JoZnSZkUl6R0wv8B88X4ILqQbU,22739
+ aiohttp/worker.py,sha256=zT0iWN5Xze194bO6_VjHou0x7lR_k0MviN6Kadnk22g,8152
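A brief aside on the RECORD format above: each row is `path,hash,size`, where the hash field is `sha256=` followed by an unpadded urlsafe-base64 digest, and both hash and size are left empty for files such as RECORD itself and the `*.pyc` entries. The sketch below is editorial (not part of the commit); the row is copied from the file above:

```python
# Parse one RECORD row: "path,hash,size" with an unpadded urlsafe-base64 digest.
import base64
import csv
import io

row = "aiohttp-3.12.15.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4"
path, hash_field, size = next(csv.reader(io.StringIO(row)))

algo, _, b64digest = hash_field.partition("=")
# Restore the base64 padding that the RECORD format strips, then decode.
digest = base64.urlsafe_b64decode(b64digest + "=" * (-len(b64digest) % 4))

print(path, algo, len(digest), size)  # a sha256 digest is 32 bytes
```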
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/WHEEL ADDED
@@ -0,0 +1,6 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (80.9.0)
+ Root-Is-Purelib: false
+ Tag: cp310-cp310-manylinux_2_17_x86_64
+ Tag: cp310-cp310-manylinux2014_x86_64
+
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/licenses/LICENSE.txt ADDED
@@ -0,0 +1,13 @@
+ Copyright aio-libs contributors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/licenses/vendor/llhttp/LICENSE ADDED
@@ -0,0 +1,22 @@
+ This software is licensed under the MIT License.
+
+ Copyright Fedor Indutny, 2018.
+
+ Permission is hereby granted, free of charge, to any person obtaining a
+ copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to permit
+ persons to whom the Software is furnished to do so, subject to the
+ following conditions:
+
+ The above copyright notice and this permission notice shall be included
+ in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+ NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+ DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ USE OR OTHER DEALINGS IN THE SOFTWARE.
venv/lib/python3.10/site-packages/aiohttp-3.12.15.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ aiohttp
venv/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py ADDED
@@ -0,0 +1,302 @@
+ #
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+
+ # This implementation of {@link TokenStream} loads tokens from a
+ # {@link TokenSource} on-demand, and places the tokens in a buffer to provide
+ # access to any previous token by index.
+ #
+ # <p>
+ # This token stream ignores the value of {@link Token#getChannel}. If your
+ # parser requires the token stream filter tokens to only those on a particular
+ # channel, such as {@link Token#DEFAULT_CHANNEL} or
+ # {@link Token#HIDDEN_CHANNEL}, use a filtering token stream such a
+ # {@link CommonTokenStream}.</p>
+ from io import StringIO
+ from antlr4.Token import Token
+ from antlr4.error.Errors import IllegalStateException
+
+ # need forward declaration
+ Lexer = None
+
+ # this is just to keep meaningful parameter types to Parser
+ class TokenStream(object):
+
+     pass
+
+
+ class BufferedTokenStream(TokenStream):
+     __slots__ = ('tokenSource', 'tokens', 'index', 'fetchedEOF')
+
+     def __init__(self, tokenSource:Lexer):
+         # The {@link TokenSource} from which tokens for this stream are fetched.
+         self.tokenSource = tokenSource
+
+         # A collection of all tokens fetched from the token source. The list is
+         # considered a complete view of the input once {@link #fetchedEOF} is set
+         # to {@code true}.
+         self.tokens = []
+
+         # The index into {@link #tokens} of the current token (next token to
+         # {@link #consume}). {@link #tokens}{@code [}{@link #p}{@code ]} should be
+         # {@link #LT LT(1)}.
+         #
+         # <p>This field is set to -1 when the stream is first constructed or when
+         # {@link #setTokenSource} is called, indicating that the first token has
+         # not yet been fetched from the token source. For additional information,
+         # see the documentation of {@link IntStream} for a description of
+         # Initializing Methods.</p>
+         self.index = -1
+
+         # Indicates whether the {@link Token#EOF} token has been fetched from
+         # {@link #tokenSource} and added to {@link #tokens}. This field improves
+         # performance for the following cases:
+         #
+         # <ul>
+         # <li>{@link #consume}: The lookahead check in {@link #consume} to prevent
+         # consuming the EOF symbol is optimized by checking the values of
+         # {@link #fetchedEOF} and {@link #p} instead of calling {@link #LA}.</li>
+         # <li>{@link #fetch}: The check to prevent adding multiple EOF symbols into
+         # {@link #tokens} is trivial with this field.</li>
+         # <ul>
+         self.fetchedEOF = False
+
+     def mark(self):
+         return 0
+
+     def release(self, marker:int):
+         # no resources to release
+         pass
+
+     def reset(self):
+         self.seek(0)
+
+     def seek(self, index:int):
+         self.lazyInit()
+         self.index = self.adjustSeekIndex(index)
+
+     def get(self, index:int):
+         self.lazyInit()
+         return self.tokens[index]
+
+     def consume(self):
+         skipEofCheck = False
+         if self.index >= 0:
+             if self.fetchedEOF:
+                 # the last token in tokens is EOF. skip check if p indexes any
+                 # fetched token except the last.
+                 skipEofCheck = self.index < len(self.tokens) - 1
+             else:
+                 # no EOF token in tokens. skip check if p indexes a fetched token.
+                 skipEofCheck = self.index < len(self.tokens)
+         else:
+             # not yet initialized
+             skipEofCheck = False
+
+         if not skipEofCheck and self.LA(1) == Token.EOF:
+             raise IllegalStateException("cannot consume EOF")
+
+         if self.sync(self.index + 1):
+             self.index = self.adjustSeekIndex(self.index + 1)
+
+     # Make sure index {@code i} in tokens has a token.
+     #
+     # @return {@code true} if a token is located at index {@code i}, otherwise
+     #    {@code false}.
+     # @see #get(int i)
+     #/
+     def sync(self, i:int):
+         n = i - len(self.tokens) + 1 # how many more elements we need?
+         if n > 0 :
+             fetched = self.fetch(n)
+             return fetched >= n
+         return True
+
+     # Add {@code n} elements to buffer.
+     #
+     # @return The actual number of elements added to the buffer.
+     #/
+     def fetch(self, n:int):
+         if self.fetchedEOF:
+             return 0
+         for i in range(0, n):
+             t = self.tokenSource.nextToken()
+             t.tokenIndex = len(self.tokens)
+             self.tokens.append(t)
+             if t.type==Token.EOF:
+                 self.fetchedEOF = True
+                 return i + 1
+         return n
+
+
+     # Get all tokens from start..stop inclusively#/
+     def getTokens(self, start:int, stop:int, types:set=None):
+         if start<0 or stop<0:
+             return None
+         self.lazyInit()
+         subset = []
+         if stop >= len(self.tokens):
+             stop = len(self.tokens)-1
+         for i in range(start, stop):
+             t = self.tokens[i]
+             if t.type==Token.EOF:
+                 break
+             if types is None or t.type in types:
+                 subset.append(t)
+         return subset
+
+     def LA(self, i:int):
+         return self.LT(i).type
+
+     def LB(self, k:int):
+         if (self.index-k) < 0:
+             return None
+         return self.tokens[self.index-k]
+
+     def LT(self, k:int):
+         self.lazyInit()
+         if k==0:
+             return None
+         if k < 0:
+             return self.LB(-k)
163
+ i = self.index + k - 1
164
+ self.sync(i)
165
+ if i >= len(self.tokens): # return EOF token
166
+ # EOF must be last token
167
+ return self.tokens[len(self.tokens)-1]
168
+ return self.tokens[i]
169
+
170
+ # Allowed derived classes to modify the behavior of operations which change
171
+ # the current stream position by adjusting the target token index of a seek
172
+ # operation. The default implementation simply returns {@code i}. If an
173
+ # exception is thrown in this method, the current stream index should not be
174
+ # changed.
175
+ #
176
+ # <p>For example, {@link CommonTokenStream} overrides this method to ensure that
177
+ # the seek target is always an on-channel token.</p>
178
+ #
179
+ # @param i The target token index.
180
+ # @return The adjusted target token index.
181
+
182
+ def adjustSeekIndex(self, i:int):
183
+ return i
184
+
185
+ def lazyInit(self):
186
+ if self.index == -1:
187
+ self.setup()
188
+
189
+ def setup(self):
190
+ self.sync(0)
191
+ self.index = self.adjustSeekIndex(0)
192
+
193
+ # Reset this token stream by setting its token source.#/
194
+ def setTokenSource(self, tokenSource:Lexer):
195
+ self.tokenSource = tokenSource
196
+ self.tokens = []
197
+ self.index = -1
198
+ self.fetchedEOF = False
199
+
200
+
201
+ # Given a starting index, return the index of the next token on channel.
202
+ # Return i if tokens[i] is on channel. Return the index of the EOF token
203
+ # if there are no tokens on channel between i and EOF.
204
+ #/
205
+ def nextTokenOnChannel(self, i:int, channel:int):
206
+ self.sync(i)
207
+ if i>=len(self.tokens):
208
+ return len(self.tokens) - 1
209
+ token = self.tokens[i]
210
+ while token.channel!=channel:
211
+ if token.type==Token.EOF:
212
+ return i
213
+ i += 1
214
+ self.sync(i)
215
+ token = self.tokens[i]
216
+ return i
217
+
218
+ # Given a starting index, return the index of the previous token on channel.
219
+ # Return i if tokens[i] is on channel. Return -1 if there are no tokens
220
+ # on channel between i and 0.
221
+ def previousTokenOnChannel(self, i:int, channel:int):
222
+ while i>=0 and self.tokens[i].channel!=channel:
223
+ i -= 1
224
+ return i
225
+
226
+ # Collect all tokens on specified channel to the right of
227
+ # the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or
228
+ # EOF. If channel is -1, find any non default channel token.
229
+ def getHiddenTokensToRight(self, tokenIndex:int, channel:int=-1):
230
+ self.lazyInit()
231
+ if tokenIndex<0 or tokenIndex>=len(self.tokens):
232
+ raise Exception(str(tokenIndex) + " not in 0.." + str(len(self.tokens)-1))
233
+ from antlr4.Lexer import Lexer
234
+ nextOnChannel = self.nextTokenOnChannel(tokenIndex + 1, Lexer.DEFAULT_TOKEN_CHANNEL)
235
+ from_ = tokenIndex+1
236
+ # if none onchannel to right, nextOnChannel=-1 so set to = last token
237
+ to = (len(self.tokens)-1) if nextOnChannel==-1 else nextOnChannel
238
+ return self.filterForChannel(from_, to, channel)
239
+
240
+
241
+ # Collect all tokens on specified channel to the left of
242
+ # the current token up until we see a token on DEFAULT_TOKEN_CHANNEL.
243
+ # If channel is -1, find any non default channel token.
244
+ def getHiddenTokensToLeft(self, tokenIndex:int, channel:int=-1):
245
+ self.lazyInit()
246
+ if tokenIndex<0 or tokenIndex>=len(self.tokens):
247
+ raise Exception(str(tokenIndex) + " not in 0.." + str(len(self.tokens)-1))
248
+ from antlr4.Lexer import Lexer
249
+ prevOnChannel = self.previousTokenOnChannel(tokenIndex - 1, Lexer.DEFAULT_TOKEN_CHANNEL)
250
+ if prevOnChannel == tokenIndex - 1:
251
+ return None
252
+ # if none on channel to left, prevOnChannel=-1 then from=0
253
+ from_ = prevOnChannel+1
254
+ to = tokenIndex-1
255
+ return self.filterForChannel(from_, to, channel)
256
+
257
+
258
+ def filterForChannel(self, left:int, right:int, channel:int):
259
+ hidden = []
260
+ for i in range(left, right+1):
261
+ t = self.tokens[i]
262
+ if channel==-1:
263
+ from antlr4.Lexer import Lexer
264
+ if t.channel!= Lexer.DEFAULT_TOKEN_CHANNEL:
265
+ hidden.append(t)
266
+ elif t.channel==channel:
267
+ hidden.append(t)
268
+ if len(hidden)==0:
269
+ return None
270
+ return hidden
271
+
272
+ def getSourceName(self):
273
+ return self.tokenSource.getSourceName()
274
+
275
+ # Get the text of all tokens in this buffer.#/
276
+ def getText(self, start:int=None, stop:int=None):
277
+ self.lazyInit()
278
+ self.fill()
279
+ if isinstance(start, Token):
280
+ start = start.tokenIndex
281
+ elif start is None:
282
+ start = 0
283
+ if isinstance(stop, Token):
284
+ stop = stop.tokenIndex
285
+ elif stop is None or stop >= len(self.tokens):
286
+ stop = len(self.tokens) - 1
287
+ if start < 0 or stop < 0 or stop < start:
288
+ return ""
289
+ with StringIO() as buf:
290
+ for i in range(start, stop+1):
291
+ t = self.tokens[i]
292
+ if t.type==Token.EOF:
293
+ break
294
+ buf.write(t.text)
295
+ return buf.getvalue()
296
+
297
+
298
+ # Get all tokens from lexer until EOF#/
299
+ def fill(self):
300
+ self.lazyInit()
301
+ while self.fetch(1000)==1000:
302
+ pass
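The sync/fetch pattern above — pull tokens from the source only when an index past the end of the buffer is requested, and latch an EOF flag so the source is never polled again — can be sketched independently of the ANTLR runtime. The `TokenBuffer` class and `EOF` sentinel below are illustrative names, not part of the library:

```python
EOF = object()  # sentinel standing in for Token.EOF

class TokenBuffer:
    """Minimal sketch of BufferedTokenStream's lazy fetch/sync logic."""
    def __init__(self, source):
        self._source = iter(source)  # token source, pulled on demand
        self.tokens = []
        self.fetchedEOF = False

    def sync(self, i):
        # make sure index i has a token; report whether it does
        n = i - len(self.tokens) + 1
        if n > 0:
            return self.fetch(n) >= n
        return True

    def fetch(self, n):
        # add up to n tokens to the buffer; stop permanently at EOF
        if self.fetchedEOF:
            return 0
        for k in range(n):
            t = next(self._source, EOF)
            self.tokens.append(t)
            if t is EOF:
                self.fetchedEOF = True
                return k + 1
        return n

buf = TokenBuffer(["a", "b", "c"])
assert buf.sync(1) and buf.tokens == ["a", "b"]  # fetched only what was needed
buf.sync(10)                                     # runs off the end: EOF latched
assert buf.fetchedEOF and buf.tokens[-1] is EOF
assert buf.fetch(5) == 0                         # source is never polled again
```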
venv/lib/python3.10/site-packages/antlr4/CommonTokenFactory.py ADDED
@@ -0,0 +1,61 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+#
+# This default implementation of {@link TokenFactory} creates
+# {@link CommonToken} objects.
+#
+from antlr4.Token import CommonToken
+
+class TokenFactory(object):
+
+    pass
+
+class CommonTokenFactory(TokenFactory):
+    __slots__ = 'copyText'
+
+    #
+    # The default {@link CommonTokenFactory} instance.
+    #
+    # <p>
+    # This token factory does not explicitly copy token text when constructing
+    # tokens.</p>
+    #
+    DEFAULT = None
+
+    def __init__(self, copyText:bool=False):
+        # Indicates whether {@link CommonToken#setText} should be called after
+        # constructing tokens to explicitly set the text. This is useful for cases
+        # where the input stream might not be able to provide arbitrary substrings
+        # of text from the input after the lexer creates a token (e.g. the
+        # implementation of {@link CharStream#getText} in
+        # {@link UnbufferedCharStream} throws an
+        # {@link UnsupportedOperationException}). Explicitly setting the token text
+        # allows {@link Token#getText} to be called at any time regardless of the
+        # input stream implementation.
+        #
+        # <p>
+        # The default value is {@code false} to avoid the performance and memory
+        # overhead of copying text for every token unless explicitly requested.</p>
+        #
+        self.copyText = copyText
+
+    def create(self, source, type:int, text:str, channel:int, start:int, stop:int, line:int, column:int):
+        t = CommonToken(source, type, channel, start, stop)
+        t.line = line
+        t.column = column
+        if text is not None:
+            t.text = text
+        elif self.copyText and source[1] is not None:
+            t.text = source[1].getText(start,stop)
+        return t
+
+    def createThin(self, type:int, text:str):
+        t = CommonToken(type=type)
+        t.text = text
+        return t
+
+CommonTokenFactory.DEFAULT = CommonTokenFactory()
venv/lib/python3.10/site-packages/antlr4/CommonTokenStream.py ADDED
@@ -0,0 +1,87 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#/
+
+#
+# This class extends {@link BufferedTokenStream} with functionality to filter
+# token streams to tokens on a particular channel (tokens where
+# {@link Token#getChannel} returns a particular value).
+#
+# <p>
+# This token stream provides access to all tokens by index or when calling
+# methods like {@link #getText}. The channel filtering is only used for code
+# accessing tokens via the lookahead methods {@link #LA}, {@link #LT}, and
+# {@link #LB}.</p>
+#
+# <p>
+# By default, tokens are placed on the default channel
+# ({@link Token#DEFAULT_CHANNEL}), but may be reassigned by using the
+# {@code ->channel(HIDDEN)} lexer command, or by using an embedded action to
+# call {@link Lexer#setChannel}.
+# </p>
+#
+# <p>
+# Note: lexer rules which use the {@code ->skip} lexer command or call
+# {@link Lexer#skip} do not produce tokens at all, so input text matched by
+# such a rule will not be available as part of the token stream, regardless of
+# channel.</p>
+#/
+
+from antlr4.BufferedTokenStream import BufferedTokenStream
+from antlr4.Lexer import Lexer
+from antlr4.Token import Token
+
+
+class CommonTokenStream(BufferedTokenStream):
+    __slots__ = 'channel'
+
+    def __init__(self, lexer:Lexer, channel:int=Token.DEFAULT_CHANNEL):
+        super().__init__(lexer)
+        self.channel = channel
+
+    def adjustSeekIndex(self, i:int):
+        return self.nextTokenOnChannel(i, self.channel)
+
+    def LB(self, k:int):
+        if k==0 or (self.index-k)<0:
+            return None
+        i = self.index
+        n = 1
+        # find k good tokens looking backwards
+        while n <= k:
+            # skip off-channel tokens
+            i = self.previousTokenOnChannel(i - 1, self.channel)
+            n += 1
+        if i < 0:
+            return None
+        return self.tokens[i]
+
+    def LT(self, k:int):
+        self.lazyInit()
+        if k == 0:
+            return None
+        if k < 0:
+            return self.LB(-k)
+        i = self.index
+        n = 1 # we know tokens[pos] is a good one
+        # find k good tokens
+        while n < k:
+            # skip off-channel tokens, but make sure to not look past EOF
+            if self.sync(i + 1):
+                i = self.nextTokenOnChannel(i + 1, self.channel)
+            n += 1
+        return self.tokens[i]
+
+    # Count EOF just once.#/
+    def getNumberOfOnChannelTokens(self):
+        n = 0
+        self.fill()
+        for i in range(0, len(self.tokens)):
+            t = self.tokens[i]
+            if t.channel==self.channel:
+                n += 1
+            if t.type==Token.EOF:
+                break
+        return n
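The channel filtering that LT/LB perform can be shown on plain data — the scan below mirrors nextTokenOnChannel over (text, channel) pairs. The names and channel constants here are illustrative, not the runtime API:

```python
DEFAULT, HIDDEN = 0, 1  # channel numbers, analogous to default/hidden channels

tokens = [("x", DEFAULT), (" ", HIDDEN), ("=", DEFAULT), (" ", HIDDEN), ("1", DEFAULT)]

def next_on_channel(i, channel):
    # walk right from i until a token on the wanted channel is found
    while i < len(tokens) and tokens[i][1] != channel:
        i += 1
    return i

# LT(1)/LT(2)-style lookahead from index 0 skips the hidden whitespace
first = next_on_channel(0, DEFAULT)
second = next_on_channel(first + 1, DEFAULT)
assert (tokens[first][0], tokens[second][0]) == ("x", "=")
```

Because the parser only ever looks ahead through these channel-aware scans, hidden tokens stay in the buffer (retrievable by index) without ever influencing parse decisions.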
venv/lib/python3.10/site-packages/antlr4/FileStream.py ADDED
@@ -0,0 +1,27 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+#
+# This is an InputStream that is loaded from a file all at once
+# when you construct the object.
+#
+
+import codecs
+from antlr4.InputStream import InputStream
+
+
+class FileStream(InputStream):
+    __slots__ = 'fileName'
+
+    def __init__(self, fileName:str, encoding:str='ascii', errors:str='strict'):
+        super().__init__(self.readDataFrom(fileName, encoding, errors))
+        self.fileName = fileName
+
+    def readDataFrom(self, fileName:str, encoding:str, errors:str='strict'):
+        # read binary to avoid line ending conversion
+        with open(fileName, 'rb') as file:
+            data = file.read()
+        return codecs.decode(data, encoding, errors)
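Reading in binary mode and decoding explicitly, as readDataFrom does, keeps line endings exactly as they appear on disk, whereas text mode with universal newlines would rewrite them. A quick standalone check (file name chosen here only for the demo):

```python
import codecs
import os
import tempfile

# write raw bytes containing a CRLF line ending
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "wb") as f:
    f.write(b"a\r\nb")

# binary read + codecs.decode leaves "\r\n" untouched
with open(path, "rb") as f:
    raw = f.read()
text = codecs.decode(raw, "ascii", "strict")
assert text == "a\r\nb"

# text mode, by contrast, normalizes the line ending to "\n"
with open(path, "r") as f:
    assert f.read() == "a\nb"
```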
venv/lib/python3.10/site-packages/antlr4/InputStream.py ADDED
@@ -0,0 +1,87 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+
+#
+# Vacuum all input from a string and then treat it like a buffer.
+#
+from antlr4.Token import Token
+
+
+class InputStream(object):
+    __slots__ = ('name', 'strdata', '_index', 'data', '_size')
+
+    def __init__(self, data: str):
+        self.name = "<empty>"
+        self.strdata = data
+        self._loadString()
+
+    def _loadString(self):
+        self._index = 0
+        self.data = [ord(c) for c in self.strdata]
+        self._size = len(self.data)
+
+    @property
+    def index(self):
+        return self._index
+
+    @property
+    def size(self):
+        return self._size
+
+    # Reset the stream so that it's in the same state it was
+    # when the object was created *except* the data array is not
+    # touched.
+    #
+    def reset(self):
+        self._index = 0
+
+    def consume(self):
+        if self._index >= self._size:
+            assert self.LA(1) == Token.EOF
+            raise Exception("cannot consume EOF")
+        self._index += 1
+
+    def LA(self, offset: int):
+        if offset==0:
+            return 0 # undefined
+        if offset<0:
+            offset += 1 # e.g., translate LA(-1) to use offset=0
+        pos = self._index + offset - 1
+        if pos < 0 or pos >= self._size: # invalid
+            return Token.EOF
+        return self.data[pos]
+
+    def LT(self, offset: int):
+        return self.LA(offset)
+
+    # mark/release do nothing; we have entire buffer
+    def mark(self):
+        return -1
+
+    def release(self, marker: int):
+        pass
+
+    # consume() ahead until p==_index; can't just set p=_index as we must
+    # update line and column. If we seek backwards, just set p
+    #
+    def seek(self, _index: int):
+        if _index<=self._index:
+            self._index = _index # just jump; don't update stream state (line, ...)
+            return
+        # seek forward
+        self._index = min(_index, self._size)
+
+    def getText(self, start: int, stop: int):
+        if stop >= self._size:
+            stop = self._size-1
+        if start >= self._size:
+            return ""
+        else:
+            return self.strdata[start:stop+1]
+
+    def __str__(self):
+        return self.strdata
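The 1-based lookahead convention above (LA(1) is the current character, LA(-1) the previous one, anything outside the buffer is EOF) is easy to check against a plain string. This is a small free-standing sketch of the same arithmetic, not the runtime class itself:

```python
EOF = -1  # mirrors the Token.EOF sentinel value

def LA(data, index, offset):
    # 1-based lookahead over a list of code points, as InputStream.LA does
    if offset == 0:
        return 0          # undefined by convention
    if offset < 0:
        offset += 1       # LA(-1) addresses the previous character
    pos = index + offset - 1
    if pos < 0 or pos >= len(data):
        return EOF        # out of bounds on either side
    return data[pos]

data = [ord(c) for c in "ab"]
assert LA(data, 0, 1) == ord("a")   # current char at start of stream
assert LA(data, 1, 1) == ord("b")
assert LA(data, 1, -1) == ord("a")  # one step back
assert LA(data, 2, 1) == EOF        # past the end
assert LA(data, 0, -1) == EOF       # before the beginning
```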
venv/lib/python3.10/site-packages/antlr4/IntervalSet.py ADDED
@@ -0,0 +1,180 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+from io import StringIO
+from antlr4.Token import Token
+
+# need forward declarations
+IntervalSet = None
+
+class IntervalSet(object):
+    __slots__ = ('intervals', 'readonly')
+
+    def __init__(self):
+        self.intervals = None
+        self.readonly = False
+
+    def __iter__(self):
+        if self.intervals is not None:
+            for i in self.intervals:
+                for c in i:
+                    yield c
+
+    def __getitem__(self, item):
+        i = 0
+        for k in self:
+            if i==item:
+                return k
+            else:
+                i += 1
+        return Token.INVALID_TYPE
+
+    def addOne(self, v:int):
+        self.addRange(range(v, v+1))
+
+    def addRange(self, v:range):
+        if self.intervals is None:
+            self.intervals = list()
+            self.intervals.append(v)
+        else:
+            # find insert pos
+            k = 0
+            for i in self.intervals:
+                # distinct range -> insert
+                if v.stop<i.start:
+                    self.intervals.insert(k, v)
+                    return
+                # contiguous range -> adjust
+                elif v.stop==i.start:
+                    self.intervals[k] = range(v.start, i.stop)
+                    return
+                # overlapping range -> adjust and reduce
+                elif v.start<=i.stop:
+                    self.intervals[k] = range(min(i.start,v.start), max(i.stop,v.stop))
+                    self.reduce(k)
+                    return
+                k += 1
+            # greater than any existing
+            self.intervals.append(v)
+
+    def addSet(self, other:IntervalSet):
+        if other.intervals is not None:
+            for i in other.intervals:
+                self.addRange(i)
+        return self
+
+    def reduce(self, k:int):
+        # only need to reduce if k is not the last
+        if k<len(self.intervals)-1:
+            l = self.intervals[k]
+            r = self.intervals[k+1]
+            # if r contained in l
+            if l.stop >= r.stop:
+                self.intervals.pop(k+1)
+                self.reduce(k)
+            elif l.stop >= r.start:
+                self.intervals[k] = range(l.start, r.stop)
+                self.intervals.pop(k+1)
+
+    def complement(self, start, stop):
+        result = IntervalSet()
+        result.addRange(range(start,stop+1))
+        if self.intervals is not None:
+            for i in self.intervals:
+                result.removeRange(i)
+        return result
+
+    def __contains__(self, item):
+        if self.intervals is None:
+            return False
+        else:
+            return any(item in i for i in self.intervals)
+
+    def __len__(self):
+        return sum(len(i) for i in self.intervals)
+
+    def removeRange(self, v):
+        if v.start==v.stop-1:
+            self.removeOne(v.start)
+        elif self.intervals is not None:
+            k = 0
+            for i in self.intervals:
+                # intervals are ordered
+                if v.stop<=i.start:
+                    return
+                # check for including range, split it
+                elif v.start>i.start and v.stop<i.stop:
+                    self.intervals[k] = range(i.start, v.start)
+                    x = range(v.stop, i.stop)
+                    # insert the upper half after the lower half to keep order
+                    self.intervals.insert(k+1, x)
+                    return
+                # check for included range, remove it
+                elif v.start<=i.start and v.stop>=i.stop:
+                    self.intervals.pop(k)
+                    k -= 1 # need another pass
+                # check for lower boundary
+                elif v.start<i.stop:
+                    self.intervals[k] = range(i.start, v.start)
+                # check for upper boundary
+                elif v.stop<i.stop:
+                    self.intervals[k] = range(v.stop, i.stop)
+                k += 1
+
+    def removeOne(self, v):
+        if self.intervals is not None:
+            k = 0
+            for i in self.intervals:
+                # intervals is ordered
+                if v<i.start:
+                    return
+                # check for single value range
+                elif v==i.start and v==i.stop-1:
+                    self.intervals.pop(k)
+                    return
+                # check for lower boundary
+                elif v==i.start:
+                    self.intervals[k] = range(i.start+1, i.stop)
+                    return
+                # check for upper boundary
+                elif v==i.stop-1:
+                    self.intervals[k] = range(i.start, i.stop-1)
+                    return
+                # split existing range
+                elif v<i.stop-1:
+                    x = range(i.start, v)
+                    self.intervals[k] = range(v + 1, i.stop)
+                    self.intervals.insert(k, x)
+                    return
+                k += 1
+
+
+    def toString(self, literalNames:list, symbolicNames:list):
+        if self.intervals is None:
+            return "{}"
+        with StringIO() as buf:
+            if len(self)>1:
+                buf.write("{")
+            first = True
+            for i in self.intervals:
+                for j in i:
+                    if not first:
+                        buf.write(", ")
+                    buf.write(self.elementName(literalNames, symbolicNames, j))
+                    first = False
+            if len(self)>1:
+                buf.write("}")
+            return buf.getvalue()
+
+    def elementName(self, literalNames:list, symbolicNames:list, a:int):
+        if a==Token.EOF:
+            return "<EOF>"
+        elif a==Token.EPSILON:
+            return "<EPSILON>"
+        else:
+            if a<len(literalNames) and literalNames[a] != "<INVALID>":
+                return literalNames[a]
+            if a<len(symbolicNames):
+                return symbolicNames[a]
+            return "<UNKNOWN>"
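addRange keeps the interval list sorted and coalesces touching or overlapping ranges. The same invariant can be demonstrated with plain `range` objects in a free-standing sketch (the `add_range` helper below is illustrative, not the runtime method — it folds trailing neighbors in a loop where the class above recurses via `reduce`):

```python
def add_range(intervals, v):
    # insert half-open range v, keeping the list sorted and coalesced
    for k, i in enumerate(intervals):
        if v.stop < i.start:                 # disjoint, strictly before
            intervals.insert(k, v)
            return
        if v.start <= i.stop:                # touching or overlapping: merge
            intervals[k] = range(min(i.start, v.start), max(i.stop, v.stop))
            # fold any following interval the merged one now reaches
            while k + 1 < len(intervals) and intervals[k].stop >= intervals[k + 1].start:
                nxt = intervals.pop(k + 1)
                intervals[k] = range(intervals[k].start, max(intervals[k].stop, nxt.stop))
            return
    intervals.append(v)                      # greater than everything present

iv = []
add_range(iv, range(1, 3))
add_range(iv, range(5, 7))
add_range(iv, range(3, 5))                   # bridges the two existing intervals
assert iv == [range(1, 7)]
```

Half-open `range` objects make the "contiguous" case a simple equality test (`v.stop == i.start`), which is why the runtime stores intervals this way.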
venv/lib/python3.10/site-packages/antlr4/LL1Analyzer.py ADDED
@@ -0,0 +1,173 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#/
+from antlr4.IntervalSet import IntervalSet
+from antlr4.Token import Token
+from antlr4.PredictionContext import PredictionContext, SingletonPredictionContext, PredictionContextFromRuleContext
+from antlr4.RuleContext import RuleContext
+from antlr4.atn.ATN import ATN
+from antlr4.atn.ATNConfig import ATNConfig
+from antlr4.atn.ATNState import ATNState, RuleStopState
+from antlr4.atn.Transition import WildcardTransition, NotSetTransition, AbstractPredicateTransition, RuleTransition
+
+
+class LL1Analyzer (object):
+    __slots__ = 'atn'
+
+    #* Special value added to the lookahead sets to indicate that we hit
+    #  a predicate during analysis if {@code seeThruPreds==false}.
+    #/
+    HIT_PRED = Token.INVALID_TYPE
+
+    def __init__(self, atn:ATN):
+        self.atn = atn
+
+    #*
+    # Calculates the SLL(1) expected lookahead set for each outgoing transition
+    # of an {@link ATNState}. The returned array has one element for each
+    # outgoing transition in {@code s}. If the closure from transition
+    # <em>i</em> leads to a semantic predicate before matching a symbol, the
+    # element at index <em>i</em> of the result will be {@code null}.
+    #
+    # @param s the ATN state
+    # @return the expected symbols for each outgoing transition of {@code s}.
+    #/
+    def getDecisionLookahead(self, s:ATNState):
+        if s is None:
+            return None
+
+        count = len(s.transitions)
+        look = [None] * count
+        for alt in range(0, count):
+            look[alt] = set()
+            lookBusy = set()
+            seeThruPreds = False # fail to get lookahead upon pred
+            self._LOOK(s.transitions[alt].target, None, PredictionContext.EMPTY,
+                  look[alt], lookBusy, set(), seeThruPreds, False)
+            # Wipe out lookahead for this alternative if we found nothing
+            # or we had a predicate when we !seeThruPreds
+            if len(look[alt])==0 or self.HIT_PRED in look[alt]:
+                look[alt] = None
+        return look
+
+    #*
+    # Compute set of tokens that can follow {@code s} in the ATN in the
+    # specified {@code ctx}.
+    #
+    # <p>If {@code ctx} is {@code null} and the end of the rule containing
+    # {@code s} is reached, {@link Token#EPSILON} is added to the result set.
+    # If {@code ctx} is not {@code null} and the end of the outermost rule is
+    # reached, {@link Token#EOF} is added to the result set.</p>
+    #
+    # @param s the ATN state
+    # @param stopState the ATN state to stop at. This can be a
+    # {@link BlockEndState} to detect epsilon paths through a closure.
+    # @param ctx the complete parser context, or {@code null} if the context
+    # should be ignored
+    #
+    # @return The set of tokens that can follow {@code s} in the ATN in the
+    # specified {@code ctx}.
+    #/
+    def LOOK(self, s:ATNState, stopState:ATNState=None, ctx:RuleContext=None):
+        r = IntervalSet()
+        seeThruPreds = True # ignore preds; get all lookahead
+        lookContext = PredictionContextFromRuleContext(s.atn, ctx) if ctx is not None else None
+        self._LOOK(s, stopState, lookContext, r, set(), set(), seeThruPreds, True)
+        return r
+
+    #*
+    # Compute set of tokens that can follow {@code s} in the ATN in the
+    # specified {@code ctx}.
+    #
+    # <p>If {@code ctx} is {@code null} and {@code stopState} or the end of the
+    # rule containing {@code s} is reached, {@link Token#EPSILON} is added to
+    # the result set. If {@code ctx} is not {@code null} and {@code addEOF} is
+    # {@code true} and {@code stopState} or the end of the outermost rule is
+    # reached, {@link Token#EOF} is added to the result set.</p>
+    #
+    # @param s the ATN state.
+    # @param stopState the ATN state to stop at. This can be a
+    # {@link BlockEndState} to detect epsilon paths through a closure.
+    # @param ctx The outer context, or {@code null} if the outer context should
+    # not be used.
+    # @param look The result lookahead set.
+    # @param lookBusy A set used for preventing epsilon closures in the ATN
+    # from causing a stack overflow. Outside code should pass
+    # {@code new HashSet<ATNConfig>} for this argument.
+    # @param calledRuleStack A set used for preventing left recursion in the
+    # ATN from causing a stack overflow. Outside code should pass
+    # {@code new BitSet()} for this argument.
+    # @param seeThruPreds {@code true} to treat semantic predicates as
+    # implicitly {@code true} and "see through them", otherwise {@code false}
+    # to treat semantic predicates as opaque and add {@link #HIT_PRED} to the
+    # result if one is encountered.
+    # @param addEOF Add {@link Token#EOF} to the result if the end of the
+    # outermost context is reached. This parameter has no effect if {@code ctx}
+    # is {@code null}.
+    #/
+    def _LOOK(self, s:ATNState, stopState:ATNState , ctx:PredictionContext, look:IntervalSet, lookBusy:set,
+                     calledRuleStack:set, seeThruPreds:bool, addEOF:bool):
+        c = ATNConfig(s, 0, ctx)
+
+        if c in lookBusy:
+            return
+        lookBusy.add(c)
+
+        if s == stopState:
+            if ctx is None:
+                look.addOne(Token.EPSILON)
+                return
+            elif ctx.isEmpty() and addEOF:
+                look.addOne(Token.EOF)
+                return
+
+        if isinstance(s, RuleStopState ):
+            if ctx is None:
+                look.addOne(Token.EPSILON)
+                return
+            elif ctx.isEmpty() and addEOF:
+                look.addOne(Token.EOF)
+                return
+
+            if ctx != PredictionContext.EMPTY:
+                removed = s.ruleIndex in calledRuleStack
+                try:
+                    calledRuleStack.discard(s.ruleIndex)
+                    # run thru all possible stack tops in ctx
+                    for i in range(0, len(ctx)):
+                        returnState = self.atn.states[ctx.getReturnState(i)]
+                        self._LOOK(returnState, stopState, ctx.getParent(i), look, lookBusy, calledRuleStack, seeThruPreds, addEOF)
+                finally:
+                    if removed:
+                        calledRuleStack.add(s.ruleIndex)
+                return
+
+        for t in s.transitions:
+            if type(t) == RuleTransition:
+                if t.target.ruleIndex in calledRuleStack:
+                    continue
+
+                newContext = SingletonPredictionContext.create(ctx, t.followState.stateNumber)
+
+                try:
+                    calledRuleStack.add(t.target.ruleIndex)
+                    self._LOOK(t.target, stopState, newContext, look, lookBusy, calledRuleStack, seeThruPreds, addEOF)
+                finally:
+                    calledRuleStack.remove(t.target.ruleIndex)
+            elif isinstance(t, AbstractPredicateTransition ):
+                if seeThruPreds:
+                    self._LOOK(t.target, stopState, ctx, look, lookBusy, calledRuleStack, seeThruPreds, addEOF)
+                else:
+                    look.addOne(self.HIT_PRED)
+            elif t.isEpsilon:
+                self._LOOK(t.target, stopState, ctx, look, lookBusy, calledRuleStack, seeThruPreds, addEOF)
+            elif type(t) == WildcardTransition:
+                look.addRange( range(Token.MIN_USER_TOKEN_TYPE, self.atn.maxTokenType + 1) )
+            else:
+                set_ = t.label
+                if set_ is not None:
+                    if isinstance(t, NotSetTransition):
+                        set_ = set_.complement(Token.MIN_USER_TOKEN_TYPE, self.atn.maxTokenType)
+                    look.addSet(set_)
@@ -0,0 +1,329 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
2
+ # Use of this file is governed by the BSD 3-clause license that
3
+ # can be found in the LICENSE.txt file in the project root.
4
+ #/
5
+
6
+ # A lexer is recognizer that draws input symbols from a character stream.
7
+ # lexer grammars result in a subclass of self object. A Lexer object
8
+ # uses simplified match() and error recovery mechanisms in the interest
9
+ # of speed.
10
+ #/
11
+ from io import StringIO
12
+
13
+ import sys
14
+ if sys.version_info[1] > 5:
15
+ from typing import TextIO
16
+ else:
17
+ from typing.io import TextIO
18
+ from antlr4.CommonTokenFactory import CommonTokenFactory
19
+ from antlr4.atn.LexerATNSimulator import LexerATNSimulator
20
+ from antlr4.InputStream import InputStream
21
+ from antlr4.Recognizer import Recognizer
22
+ from antlr4.Token import Token
23
+ from antlr4.error.Errors import IllegalStateException, LexerNoViableAltException, RecognitionException
24
+
25
+ class TokenSource(object):
26
+
27
+ pass
28
+
29
+
30
+ class Lexer(Recognizer, TokenSource):
31
+ __slots__ = (
32
+ '_input', '_output', '_factory', '_tokenFactorySourcePair', '_token',
33
+ '_tokenStartCharIndex', '_tokenStartLine', '_tokenStartColumn',
34
+ '_hitEOF', '_channel', '_type', '_modeStack', '_mode', '_text'
35
+ )
36
+
37
+ DEFAULT_MODE = 0
38
+ MORE = -2
39
+ SKIP = -3
40
+
41
+ DEFAULT_TOKEN_CHANNEL = Token.DEFAULT_CHANNEL
42
+ HIDDEN = Token.HIDDEN_CHANNEL
43
+ MIN_CHAR_VALUE = 0x0000
44
+ MAX_CHAR_VALUE = 0x10FFFF
45
+
46
+ def __init__(self, input:InputStream, output:TextIO = sys.stdout):
47
+ super().__init__()
48
+ self._input = input
49
+ self._output = output
50
+ self._factory = CommonTokenFactory.DEFAULT
51
+ self._tokenFactorySourcePair = (self, input)
52
+
53
+ self._interp = None # child classes must populate this
54
+
55
+ # The goal of all lexer rules/methods is to create a token object.
56
+ # self is an instance variable as multiple rules may collaborate to
57
+ # create a single token. nextToken will return self object after
58
+ # matching lexer rule(s). If you subclass to allow multiple token
59
+ # emissions, then set self to the last token to be matched or
60
+ # something nonnull so that the auto token emit mechanism will not
61
+ # emit another token.
62
+ self._token = None
63
+
64
+ # What character index in the stream did the current token start at?
65
+ # Needed, for example, to get the text for current token. Set at
66
+ # the start of nextToken.
67
+ self._tokenStartCharIndex = -1
68
+
69
+ # The line on which the first character of the token resides#/
70
+ self._tokenStartLine = -1
71
+
72
+ # The character position of first character within the line#/
73
+ self._tokenStartColumn = -1
74
+
75
+ # Once we see EOF on char stream, next token will be EOF.
76
+ # If you have DONE : EOF ; then you see DONE EOF.
77
+ self._hitEOF = False
78
+
79
+ # The channel number for the current token#/
80
+ self._channel = Token.DEFAULT_CHANNEL
81
+
82
+ # The token type for the current token#/
83
+ self._type = Token.INVALID_TYPE
84
+
85
+ self._modeStack = []
86
+ self._mode = self.DEFAULT_MODE
87
+
88
+ # You can set the text for the current token to override what is in
89
+ # the input char buffer. Use setText() or can set self instance var.
90
+ #/
91
+ self._text = None
92
+
93
+
94
+ def reset(self):
95
+ # wack Lexer state variables
96
+ if self._input is not None:
97
+ self._input.seek(0) # rewind the input
98
+ self._token = None
99
+ self._type = Token.INVALID_TYPE
100
+ self._channel = Token.DEFAULT_CHANNEL
101
+ self._tokenStartCharIndex = -1
102
+ self._tokenStartColumn = -1
103
+ self._tokenStartLine = -1
104
+ self._text = None
105
+
106
+ self._hitEOF = False
107
+ self._mode = Lexer.DEFAULT_MODE
108
+ self._modeStack = []
109
+
110
+ self._interp.reset()
111
+
112
+ # Return a token from self source; i.e., match a token on the char
113
+ # stream.
114
+ def nextToken(self):
115
+ if self._input is None:
116
+ raise IllegalStateException("nextToken requires a non-null input stream.")
117
+
118
+ # Mark start location in char stream so unbuffered streams are
119
+ # guaranteed at least have text of current token
120
+ tokenStartMarker = self._input.mark()
121
+ try:
122
+ while True:
123
+ if self._hitEOF:
124
+ self.emitEOF()
125
+ return self._token
126
+ self._token = None
127
+ self._channel = Token.DEFAULT_CHANNEL
128
+ self._tokenStartCharIndex = self._input.index
129
+ self._tokenStartColumn = self._interp.column
130
+ self._tokenStartLine = self._interp.line
131
+ self._text = None
132
+ continueOuter = False
133
+ while True:
134
+ self._type = Token.INVALID_TYPE
135
+ ttype = self.SKIP
136
+ try:
137
+ ttype = self._interp.match(self._input, self._mode)
138
+ except LexerNoViableAltException as e:
139
+ self.notifyListeners(e) # report error
140
+ self.recover(e)
141
+ if self._input.LA(1)==Token.EOF:
142
+ self._hitEOF = True
143
+ if self._type == Token.INVALID_TYPE:
144
+ self._type = ttype
145
+ if self._type == self.SKIP:
146
+ continueOuter = True
147
+ break
148
+ if self._type!=self.MORE:
149
+ break
150
+ if continueOuter:
151
+ continue
152
+ if self._token is None:
153
+ self.emit()
154
+ return self._token
155
+ finally:
156
+ # make sure we release marker after match or
157
+ # unbuffered char stream will keep buffering
158
+ self._input.release(tokenStartMarker)
159
+
160
+ # Instruct the lexer to skip creating a token for current lexer rule
161
+ # and look for another token. nextToken() knows to keep looking when
162
+ # a lexer rule finishes with token set to SKIP_TOKEN. Recall that
163
+ # if token==null at end of any token rule, it creates one for you
164
+ # and emits it.
165
+ #/
166
+ def skip(self):
167
+ self._type = self.SKIP
168
+
169
+ def more(self):
170
+ self._type = self.MORE
171
+
172
+ def mode(self, m:int):
173
+ self._mode = m
174
+
175
+ def pushMode(self, m:int):
176
+ if self._interp.debug:
177
+ print("pushMode " + str(m), file=self._output)
178
+ self._modeStack.append(self._mode)
179
+ self.mode(m)
180
+
181
+ def popMode(self):
182
+ if len(self._modeStack)==0:
183
+ raise Exception("Empty Stack")
184
+ if self._interp.debug:
185
+ print("popMode back to "+ self._modeStack[:-1], file=self._output)
186
+ self.mode( self._modeStack.pop() )
187
+ return self._mode
188
+
189
+ # Set the char stream and reset the lexer#/
190
+ @property
191
+ def inputStream(self):
192
+ return self._input
193
+
194
+ @inputStream.setter
195
+ def inputStream(self, input:InputStream):
196
+ self._input = None
197
+ self._tokenFactorySourcePair = (self, self._input)
198
+ self.reset()
199
+ self._input = input
200
+ self._tokenFactorySourcePair = (self, self._input)
201
+
202
+ @property
203
+ def sourceName(self):
204
+ return self._input.sourceName
205
+
206
+ # By default does not support multiple emits per nextToken invocation
207
+ # for efficiency reasons. Subclass and override self method, nextToken,
208
+ # and getToken (to push tokens into a list and pull from that list
209
+ # rather than a single variable as self implementation does).
210
+ #/
211
+ def emitToken(self, token:Token):
212
+ self._token = token
213
+
214
+ # The standard method called to automatically emit a token at the
215
+ # outermost lexical rule. The token object should point into the
216
+ # char buffer start..stop. If there is a text override in 'text',
217
+ # use that to set the token's text. Override self method to emit
218
+ # custom Token objects or provide a new factory.
219
+ #/
220
+ def emit(self):
221
+ t = self._factory.create(self._tokenFactorySourcePair, self._type, self._text, self._channel, self._tokenStartCharIndex,
222
+ self.getCharIndex()-1, self._tokenStartLine, self._tokenStartColumn)
223
+ self.emitToken(t)
224
+ return t
225
+
226
+ def emitEOF(self):
227
+ cpos = self.column
228
+ lpos = self.line
229
+ eof = self._factory.create(self._tokenFactorySourcePair, Token.EOF, None, Token.DEFAULT_CHANNEL, self._input.index,
230
+ self._input.index-1, lpos, cpos)
231
+ self.emitToken(eof)
232
+ return eof
233
+
234
+ @property
235
+ def type(self):
236
+ return self._type
237
+
238
+ @type.setter
239
+ def type(self, type:int):
240
+ self._type = type
241
+
242
+ @property
243
+ def line(self):
244
+ return self._interp.line
245
+
246
+ @line.setter
247
+ def line(self, line:int):
248
+ self._interp.line = line
249
+
250
+ @property
251
+ def column(self):
252
+ return self._interp.column
253
+
254
+ @column.setter
255
+ def column(self, column:int):
256
+ self._interp.column = column
257
+
258
+ # What is the index of the current character of lookahead?#/
259
+ def getCharIndex(self):
260
+ return self._input.index
261
+
262
+ # Return the text matched so far for the current token or any
263
+ # text override.
264
+ @property
265
+ def text(self):
266
+ if self._text is not None:
267
+ return self._text
268
+ else:
269
+ return self._interp.getText(self._input)
270
+
271
+ # Set the complete text of self token; it wipes any previous
272
+ # changes to the text.
273
+ @text.setter
274
+ def text(self, txt:str):
275
+ self._text = txt
276
+
277
+ # Return a list of all Token objects in input char stream.
278
+ # Forces load of all tokens. Does not include EOF token.
279
+ #/
280
+ def getAllTokens(self):
281
+ tokens = []
282
+ t = self.nextToken()
283
+ while t.type!=Token.EOF:
284
+ tokens.append(t)
285
+ t = self.nextToken()
286
+ return tokens
287
+
288
+ def notifyListeners(self, e:LexerNoViableAltException):
289
+ start = self._tokenStartCharIndex
290
+ stop = self._input.index
291
+ text = self._input.getText(start, stop)
292
+ msg = "token recognition error at: '" + self.getErrorDisplay(text) + "'"
293
+ listener = self.getErrorListenerDispatch()
294
+ listener.syntaxError(self, None, self._tokenStartLine, self._tokenStartColumn, msg, e)
295
+
296
+ def getErrorDisplay(self, s:str):
297
+ with StringIO() as buf:
298
+ for c in s:
299
+ buf.write(self.getErrorDisplayForChar(c))
300
+ return buf.getvalue()
301
+
302
+ def getErrorDisplayForChar(self, c:str):
303
+ if ord(c[0])==Token.EOF:
304
+ return "<EOF>"
305
+ elif c=='\n':
306
+ return "\\n"
307
+ elif c=='\t':
308
+ return "\\t"
309
+ elif c=='\r':
310
+ return "\\r"
311
+ else:
312
+ return c
313
+
314
+ def getCharErrorDisplay(self, c:str):
315
+ return "'" + self.getErrorDisplayForChar(c) + "'"
316
+
317
+ # Lexers can normally match any char in it's vocabulary after matching
318
+ # a token, so do the easy thing and just kill a character and hope
319
+ # it all works out. You can instead use the rule invocation stack
320
+ # to do sophisticated error recovery if you are in a fragment rule.
321
+ #/
322
+ def recover(self, re:RecognitionException):
323
+ if self._input.LA(1) != Token.EOF:
324
+ if isinstance(re, LexerNoViableAltException):
325
+ # skip a char and try again
326
+ self._interp.consume(self._input)
327
+ else:
328
+ # TODO: Do we lose character or line position information?
329
+ self._input.consume()
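The `nextToken()` loop above is the heart of the lexer: a rule may set the token type to `SKIP`, which restarts the outer loop without emitting anything, while any other type produces a token for the matched span. The control flow can be sketched in plain Python (this is an illustrative toy tokenizer, not the ANTLR API):

```python
# Minimal sketch of the nextToken() SKIP control flow: each "rule" consumes
# characters and reports a type; SKIP suppresses emission and restarts the
# loop (like continueOuter in nextToken()), any other type emits a token.
SKIP = -3

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        start = i
        if text[i].isspace():            # acts like a "WS -> skip" rule
            while i < len(text) and text[i].isspace():
                i += 1
            ttype = SKIP
        elif text[i].isdigit():          # an INT rule
            while i < len(text) and text[i].isdigit():
                i += 1
            ttype = "INT"
        else:                            # a one-character ID rule, for brevity
            i += 1
            ttype = "ID"
        if ttype == SKIP:
            continue                     # no token emitted; keep scanning
        tokens.append((ttype, text[start:i]))
    return tokens
```

For example, `tokenize("12 a 3")` yields `[("INT", "12"), ("ID", "a"), ("INT", "3")]` with the whitespace skipped, mirroring how skipped lexer rules never reach `emit()`.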
venv/lib/python3.10/site-packages/antlr4/ListTokenSource.py ADDED
@@ -0,0 +1,144 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+#
+# Provides an implementation of {@link TokenSource} as a wrapper around a list
+# of {@link Token} objects.
+#
+# <p>If the final token in the list is an {@link Token#EOF} token, it will be used
+# as the EOF token for every call to {@link #nextToken} after the end of the
+# list is reached. Otherwise, an EOF token will be created.</p>
+#
+from antlr4.CommonTokenFactory import CommonTokenFactory
+from antlr4.Lexer import TokenSource
+from antlr4.Token import Token
+
+
+class ListTokenSource(TokenSource):
+    __slots__ = ('tokens', 'sourceName', 'pos', 'eofToken', '_factory')
+
+    # Constructs a new {@link ListTokenSource} instance from the specified
+    # collection of {@link Token} objects and source name.
+    #
+    # @param tokens The collection of {@link Token} objects to provide as a
+    # {@link TokenSource}.
+    # @param sourceName The name of the {@link TokenSource}. If this value is
+    # {@code null}, {@link #getSourceName} will attempt to infer the name from
+    # the next {@link Token} (or the previous token if the end of the input has
+    # been reached).
+    #
+    # @exception NullPointerException if {@code tokens} is {@code null}
+    #
+    def __init__(self, tokens:list, sourceName:str=None):
+        if tokens is None:
+            raise ReferenceError("tokens cannot be null")
+        self.tokens = tokens
+        self.sourceName = sourceName
+        # The index into {@link #tokens} of the token to return by the next call to
+        # {@link #nextToken}. The end of the input is indicated by this value
+        # being greater than or equal to the number of items in {@link #tokens}.
+        self.pos = 0
+        # This field caches the EOF token for the token source.
+        self.eofToken = None
+        # This is the backing field for {@link #getTokenFactory} and
+        self._factory = CommonTokenFactory.DEFAULT
+
+
+    #
+    # {@inheritDoc}
+    #
+    @property
+    def column(self):
+        if self.pos < len(self.tokens):
+            return self.tokens[self.pos].column
+        elif self.eofToken is not None:
+            return self.eofToken.column
+        elif len(self.tokens) > 0:
+            # have to calculate the result from the line/column of the previous
+            # token, along with the text of the token.
+            lastToken = self.tokens[len(self.tokens) - 1]
+            tokenText = lastToken.text
+            if tokenText is not None:
+                lastNewLine = tokenText.rfind('\n')
+                if lastNewLine >= 0:
+                    return len(tokenText) - lastNewLine - 1
+            return lastToken.column + lastToken.stop - lastToken.start + 1
+
+        # only reach this if tokens is empty, meaning EOF occurs at the first
+        # position in the input
+        return 0
+
+    #
+    # {@inheritDoc}
+    #
+    def nextToken(self):
+        if self.pos >= len(self.tokens):
+            if self.eofToken is None:
+                start = -1
+                if len(self.tokens) > 0:
+                    previousStop = self.tokens[len(self.tokens) - 1].stop
+                    if previousStop != -1:
+                        start = previousStop + 1
+                stop = max(-1, start - 1)
+                self.eofToken = self._factory.create((self, self.getInputStream()),
+                        Token.EOF, "EOF", Token.DEFAULT_CHANNEL, start, stop, self.line, self.column)
+            return self.eofToken
+        t = self.tokens[self.pos]
+        if self.pos == len(self.tokens) - 1 and t.type == Token.EOF:
+            self.eofToken = t
+        self.pos += 1
+        return t
+
+    #
+    # {@inheritDoc}
+    #
+    @property
+    def line(self):
+        if self.pos < len(self.tokens):
+            return self.tokens[self.pos].line
+        elif self.eofToken is not None:
+            return self.eofToken.line
+        elif len(self.tokens) > 0:
+            # have to calculate the result from the line/column of the previous
+            # token, along with the text of the token.
+            lastToken = self.tokens[len(self.tokens) - 1]
+            line = lastToken.line
+            tokenText = lastToken.text
+            if tokenText is not None:
+                line += tokenText.count('\n')
+
+            # if no text is available, assume the token did not contain any newline characters.
+            return line
+
+        # only reach this if tokens is empty, meaning EOF occurs at the first
+        # position in the input
+        return 1
+
+    #
+    # {@inheritDoc}
+    #
+    def getInputStream(self):
+        if self.pos < len(self.tokens):
+            return self.tokens[self.pos].getInputStream()
+        elif self.eofToken is not None:
+            return self.eofToken.getInputStream()
+        elif len(self.tokens) > 0:
+            return self.tokens[len(self.tokens) - 1].getInputStream()
+        else:
+            # no input stream information is available
+            return None
+
+    #
+    # {@inheritDoc}
+    #
+    def getSourceName(self):
+        if self.sourceName is not None:
+            return self.sourceName
+        inputStream = self.getInputStream()
+        if inputStream is not None:
+            return inputStream.getSourceName()
+        else:
+            return "List"
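The `line`/`column` properties above fall back to deriving the EOF position from the last token in the list: the EOF line is the last token's line plus any newlines in its text, and the column is either the characters after the last newline or the last token's column plus its length. That arithmetic can be isolated in a small sketch (names here are illustrative, not ANTLR's API):

```python
# Sketch of the EOF line/column fallback used by ListTokenSource when the
# token list is exhausted and no EOF token is cached. `start`/`stop` are the
# last token's character bounds, `text` its (possibly None) matched text.
def eof_position(last_line, last_column, start, stop, text):
    line = last_line + (text.count('\n') if text else 0)
    if text and '\n' in text:
        # column counts characters after the last newline in the token text
        column = len(text) - text.rfind('\n') - 1
    else:
        # no newline: advance past the token's own width on the same line
        column = last_column + stop - start + 1
    return line, column
```

For a last token `"hello"` starting at line 1, column 0, this places EOF at `(1, 5)`; a token containing a newline, such as `"ab\ncde"`, moves EOF to the next line with the column counted from that newline.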
venv/lib/python3.10/site-packages/antlr4/Parser.py ADDED
@@ -0,0 +1,580 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+import sys
+if sys.version_info[1] > 5:
+    from typing import TextIO
+else:
+    from typing.io import TextIO
+from antlr4.BufferedTokenStream import TokenStream
+from antlr4.CommonTokenFactory import TokenFactory
+from antlr4.error.ErrorStrategy import DefaultErrorStrategy
+from antlr4.InputStream import InputStream
+from antlr4.Recognizer import Recognizer
+from antlr4.RuleContext import RuleContext
+from antlr4.ParserRuleContext import ParserRuleContext
+from antlr4.Token import Token
+from antlr4.Lexer import Lexer
+from antlr4.atn.ATNDeserializer import ATNDeserializer
+from antlr4.atn.ATNDeserializationOptions import ATNDeserializationOptions
+from antlr4.error.Errors import UnsupportedOperationException, RecognitionException
+from antlr4.tree.ParseTreePatternMatcher import ParseTreePatternMatcher
+from antlr4.tree.Tree import ParseTreeListener, TerminalNode, ErrorNode
+
+class TraceListener(ParseTreeListener):
+    __slots__ = '_parser'
+
+    def __init__(self, parser):
+        self._parser = parser
+
+    def enterEveryRule(self, ctx):
+        print("enter " + self._parser.ruleNames[ctx.getRuleIndex()] + ", LT(1)=" + self._parser._input.LT(1).text, file=self._parser._output)
+
+    def visitTerminal(self, node):
+        print("consume " + str(node.symbol) + " rule " + self._parser.ruleNames[self._parser._ctx.getRuleIndex()], file=self._parser._output)
+
+    def visitErrorNode(self, node):
+        pass
+
+    def exitEveryRule(self, ctx):
+        print("exit " + self._parser.ruleNames[ctx.getRuleIndex()] + ", LT(1)=" + self._parser._input.LT(1).text, file=self._parser._output)
+
+
+# This is all the parsing support code essentially; most of it is error recovery stuff.#
+class Parser (Recognizer):
+    __slots__ = (
+        '_input', '_output', '_errHandler', '_precedenceStack', '_ctx',
+        'buildParseTrees', '_tracer', '_parseListeners', '_syntaxErrors'
+    )
+
+    # This field maps from the serialized ATN string to the deserialized {@link ATN} with
+    # bypass alternatives.
+    #
+    # @see ATNDeserializationOptions#isGenerateRuleBypassTransitions()
+    #
+    bypassAltsAtnCache = dict()
+
+    def __init__(self, input:TokenStream, output:TextIO = sys.stdout):
+        super().__init__()
+        # The input stream.
+        self._input = None
+        self._output = output
+        # The error handling strategy for the parser. The default value is a new
+        # instance of {@link DefaultErrorStrategy}.
+        self._errHandler = DefaultErrorStrategy()
+        self._precedenceStack = list()
+        self._precedenceStack.append(0)
+        # The {@link ParserRuleContext} object for the currently executing rule.
+        # This is always non-null during the parsing process.
+        self._ctx = None
+        # Specifies whether or not the parser should construct a parse tree during
+        # the parsing process. The default value is {@code true}.
+        self.buildParseTrees = True
+        # When {@link #setTrace}{@code (true)} is called, a reference to the
+        # {@link TraceListener} is stored here so it can be easily removed in a
+        # later call to {@link #setTrace}{@code (false)}. The listener itself is
+        # implemented as a parser listener so this field is not directly used by
+        # other parser methods.
+        self._tracer = None
+        # The list of {@link ParseTreeListener} listeners registered to receive
+        # events during the parse.
+        self._parseListeners = None
+        # The number of syntax errors reported during parsing. This value is
+        # incremented each time {@link #notifyErrorListeners} is called.
+        self._syntaxErrors = 0
+        self.setInputStream(input)
+
+    # reset the parser's state#
+    def reset(self):
+        if self._input is not None:
+            self._input.seek(0)
+        self._errHandler.reset(self)
+        self._ctx = None
+        self._syntaxErrors = 0
+        self.setTrace(False)
+        self._precedenceStack = list()
+        self._precedenceStack.append(0)
+        if self._interp is not None:
+            self._interp.reset()
+
+    # Match current input symbol against {@code ttype}. If the symbol type
+    # matches, {@link ANTLRErrorStrategy#reportMatch} and {@link #consume} are
+    # called to complete the match process.
+    #
+    # <p>If the symbol type does not match,
+    # {@link ANTLRErrorStrategy#recoverInline} is called on the current error
+    # strategy to attempt recovery. If {@link #getBuildParseTree} is
+    # {@code true} and the token index of the symbol returned by
+    # {@link ANTLRErrorStrategy#recoverInline} is -1, the symbol is added to
+    # the parse tree by calling {@link ParserRuleContext#addErrorNode}.</p>
+    #
+    # @param ttype the token type to match
+    # @return the matched symbol
+    # @throws RecognitionException if the current input symbol did not match
+    # {@code ttype} and the error strategy could not recover from the
+    # mismatched symbol
+
+    def match(self, ttype:int):
+        t = self.getCurrentToken()
+        if t.type==ttype:
+            self._errHandler.reportMatch(self)
+            self.consume()
+        else:
+            t = self._errHandler.recoverInline(self)
+            if self.buildParseTrees and t.tokenIndex==-1:
+                # we must have conjured up a new token during single token insertion
+                # if it's not the current symbol
+                self._ctx.addErrorNode(t)
+        return t
+
+    # Match current input symbol as a wildcard. If the symbol type matches
+    # (i.e. has a value greater than 0), {@link ANTLRErrorStrategy#reportMatch}
+    # and {@link #consume} are called to complete the match process.
+    #
+    # <p>If the symbol type does not match,
+    # {@link ANTLRErrorStrategy#recoverInline} is called on the current error
+    # strategy to attempt recovery. If {@link #getBuildParseTree} is
+    # {@code true} and the token index of the symbol returned by
+    # {@link ANTLRErrorStrategy#recoverInline} is -1, the symbol is added to
+    # the parse tree by calling {@link ParserRuleContext#addErrorNode}.</p>
+    #
+    # @return the matched symbol
+    # @throws RecognitionException if the current input symbol did not match
+    # a wildcard and the error strategy could not recover from the mismatched
+    # symbol
+
+    def matchWildcard(self):
+        t = self.getCurrentToken()
+        if t.type > 0:
+            self._errHandler.reportMatch(self)
+            self.consume()
+        else:
+            t = self._errHandler.recoverInline(self)
+            if self.buildParseTrees and t.tokenIndex == -1:
+                # we must have conjured up a new token during single token insertion
+                # if it's not the current symbol
+                self._ctx.addErrorNode(t)
+
+        return t
+
+    def getParseListeners(self):
+        return list() if self._parseListeners is None else self._parseListeners
+
+    # Registers {@code listener} to receive events during the parsing process.
+    #
+    # <p>To support output-preserving grammar transformations (including but not
+    # limited to left-recursion removal, automated left-factoring, and
+    # optimized code generation), calls to listener methods during the parse
+    # may differ substantially from calls made by
+    # {@link ParseTreeWalker#DEFAULT} used after the parse is complete. In
+    # particular, rule entry and exit events may occur in a different order
+    # during the parse than after the parser. In addition, calls to certain
+    # rule entry methods may be omitted.</p>
+    #
+    # <p>With the following specific exceptions, calls to listener events are
+    # <em>deterministic</em>, i.e. for identical input the calls to listener
+    # methods will be the same.</p>
+    #
+    # <ul>
+    # <li>Alterations to the grammar used to generate code may change the
+    # behavior of the listener calls.</li>
+    # <li>Alterations to the command line options passed to ANTLR 4 when
+    # generating the parser may change the behavior of the listener calls.</li>
+    # <li>Changing the version of the ANTLR Tool used to generate the parser
+    # may change the behavior of the listener calls.</li>
+    # </ul>
+    #
+    # @param listener the listener to add
+    #
+    # @throws NullPointerException if {@code} listener is {@code null}
+    #
+    def addParseListener(self, listener:ParseTreeListener):
+        if listener is None:
+            raise ReferenceError("listener")
+        if self._parseListeners is None:
+            self._parseListeners = []
+        self._parseListeners.append(listener)
+
+    #
+    # Remove {@code listener} from the list of parse listeners.
+    #
+    # <p>If {@code listener} is {@code null} or has not been added as a parse
+    # listener, this method does nothing.</p>
+    # @param listener the listener to remove
+    #
+    def removeParseListener(self, listener:ParseTreeListener):
+        if self._parseListeners is not None:
+            self._parseListeners.remove(listener)
+            if len(self._parseListeners)==0:
+                self._parseListeners = None
+
+    # Remove all parse listeners.
+    def removeParseListeners(self):
+        self._parseListeners = None
+
+    # Notify any parse listeners of an enter rule event.
+    def triggerEnterRuleEvent(self):
+        if self._parseListeners is not None:
+            for listener in self._parseListeners:
+                listener.enterEveryRule(self._ctx)
+                self._ctx.enterRule(listener)
+
+    #
+    # Notify any parse listeners of an exit rule event.
+    #
+    # @see #addParseListener
+    #
+    def triggerExitRuleEvent(self):
+        if self._parseListeners is not None:
+            # reverse order walk of listeners
+            for listener in reversed(self._parseListeners):
+                self._ctx.exitRule(listener)
+                listener.exitEveryRule(self._ctx)
+
+
+    # Gets the number of syntax errors reported during parsing. This value is
+    # incremented each time {@link #notifyErrorListeners} is called.
+    #
+    # @see #notifyErrorListeners
+    #
+    def getNumberOfSyntaxErrors(self):
+        return self._syntaxErrors
+
+    def getTokenFactory(self):
+        return self._input.tokenSource._factory
+
+    # Tell our token source and error strategy about a new way to create tokens.#
+    def setTokenFactory(self, factory:TokenFactory):
+        self._input.tokenSource._factory = factory
+
+    # The ATN with bypass alternatives is expensive to create so we create it
+    # lazily.
+    #
+    # @throws UnsupportedOperationException if the current parser does not
+    # implement the {@link #getSerializedATN()} method.
+    #
+    def getATNWithBypassAlts(self):
+        serializedAtn = self.getSerializedATN()
+        if serializedAtn is None:
+            raise UnsupportedOperationException("The current parser does not support an ATN with bypass alternatives.")
+        result = self.bypassAltsAtnCache.get(serializedAtn, None)
+        if result is None:
+            deserializationOptions = ATNDeserializationOptions()
+            deserializationOptions.generateRuleBypassTransitions = True
+            result = ATNDeserializer(deserializationOptions).deserialize(serializedAtn)
+            self.bypassAltsAtnCache[serializedAtn] = result
+        return result
+
+    # The preferred method of getting a tree pattern. For example, here's a
+    # sample use:
+    #
+    # <pre>
+    # ParseTree t = parser.expr();
+    # ParseTreePattern p = parser.compileParseTreePattern("&lt;ID&gt;+0", MyParser.RULE_expr);
+    # ParseTreeMatch m = p.match(t);
+    # String id = m.get("ID");
+    # </pre>
+    #
+    def compileParseTreePattern(self, pattern:str, patternRuleIndex:int, lexer:Lexer = None):
+        if lexer is None:
+            if self.getTokenStream() is not None:
+                tokenSource = self.getTokenStream().tokenSource
+                if isinstance( tokenSource, Lexer ):
+                    lexer = tokenSource
+        if lexer is None:
+            raise UnsupportedOperationException("Parser can't discover a lexer to use")
+
+        m = ParseTreePatternMatcher(lexer, self)
+        return m.compile(pattern, patternRuleIndex)
+
+
+    def getInputStream(self):
+        return self.getTokenStream()
+
+    def setInputStream(self, input:InputStream):
+        self.setTokenStream(input)
+
+    def getTokenStream(self):
+        return self._input
+
+    # Set the token stream and reset the parser.#
+    def setTokenStream(self, input:TokenStream):
+        self._input = None
+        self.reset()
+        self._input = input
+
+    # Match needs to return the current input symbol, which gets put
+    # into the label for the associated token ref; e.g., x=ID.
+    #
+    def getCurrentToken(self):
+        return self._input.LT(1)
+
+    def notifyErrorListeners(self, msg:str, offendingToken:Token = None, e:RecognitionException = None):
+        if offendingToken is None:
+            offendingToken = self.getCurrentToken()
+        self._syntaxErrors += 1
+        line = offendingToken.line
+        column = offendingToken.column
+        listener = self.getErrorListenerDispatch()
+        listener.syntaxError(self, offendingToken, line, column, msg, e)
+
+    #
+    # Consume and return the {@linkplain #getCurrentToken current symbol}.
+    #
+    # <p>E.g., given the following input with {@code A} being the current
+    # lookahead symbol, this function moves the cursor to {@code B} and returns
+    # {@code A}.</p>
+    #
+    # <pre>
+    #  A B
+    #  ^
+    # </pre>
+    #
+    # If the parser is not in error recovery mode, the consumed symbol is added
+    # to the parse tree using {@link ParserRuleContext#addChild(Token)}, and
+    # {@link ParseTreeListener#visitTerminal} is called on any parse listeners.
+    # If the parser <em>is</em> in error recovery mode, the consumed symbol is
+    # added to the parse tree using
+    # {@link ParserRuleContext#addErrorNode(Token)}, and
+    # {@link ParseTreeListener#visitErrorNode} is called on any parse
+    # listeners.
+    #
+    def consume(self):
+        o = self.getCurrentToken()
+        if o.type != Token.EOF:
+            self.getInputStream().consume()
+        hasListener = self._parseListeners is not None and len(self._parseListeners)>0
+        if self.buildParseTrees or hasListener:
+            if self._errHandler.inErrorRecoveryMode(self):
+                node = self._ctx.addErrorNode(o)
+            else:
+                node = self._ctx.addTokenNode(o)
+            if hasListener:
+                for listener in self._parseListeners:
+                    if isinstance(node, ErrorNode):
+                        listener.visitErrorNode(node)
+                    elif isinstance(node, TerminalNode):
+                        listener.visitTerminal(node)
+        return o
+
+    def addContextToParseTree(self):
+        # add current context to parent if we have a parent
+        if self._ctx.parentCtx is not None:
+            self._ctx.parentCtx.addChild(self._ctx)
+
+    # Always called by generated parsers upon entry to a rule. Access field
+    # {@link #_ctx} get the current context.
+    #
+    def enterRule(self, localctx:ParserRuleContext , state:int , ruleIndex:int):
+        self.state = state
+        self._ctx = localctx
+        self._ctx.start = self._input.LT(1)
+        if self.buildParseTrees:
+            self.addContextToParseTree()
+        if self._parseListeners is not None:
+            self.triggerEnterRuleEvent()
+
+    def exitRule(self):
+        self._ctx.stop = self._input.LT(-1)
+        # trigger event on _ctx, before it reverts to parent
+        if self._parseListeners is not None:
+            self.triggerExitRuleEvent()
+        self.state = self._ctx.invokingState
+        self._ctx = self._ctx.parentCtx
+
+    def enterOuterAlt(self, localctx:ParserRuleContext, altNum:int):
+        localctx.setAltNumber(altNum)
+        # if we have new localctx, make sure we replace existing ctx
+        # that is previous child of parse tree
+        if self.buildParseTrees and self._ctx != localctx:
+            if self._ctx.parentCtx is not None:
+                self._ctx.parentCtx.removeLastChild()
+                self._ctx.parentCtx.addChild(localctx)
+        self._ctx = localctx
+
+    # Get the precedence level for the top-most precedence rule.
+    #
+    # @return The precedence level for the top-most precedence rule, or -1 if
+    # the parser context is not nested within a precedence rule.
+    #
+    def getPrecedence(self):
+        if len(self._precedenceStack)==0:
+            return -1
+        else:
+            return self._precedenceStack[-1]
+
+    def enterRecursionRule(self, localctx:ParserRuleContext, state:int, ruleIndex:int, precedence:int):
+        self.state = state
+        self._precedenceStack.append(precedence)
+        self._ctx = localctx
+        self._ctx.start = self._input.LT(1)
+        if self._parseListeners is not None:
+            self.triggerEnterRuleEvent() # simulates rule entry for left-recursive rules
+
+    #
+    # Like {@link #enterRule} but for recursive rules.
+    #
+    def pushNewRecursionContext(self, localctx:ParserRuleContext, state:int, ruleIndex:int):
+        previous = self._ctx
+        previous.parentCtx = localctx
+        previous.invokingState = state
+        previous.stop = self._input.LT(-1)
+
+        self._ctx = localctx
+        self._ctx.start = previous.start
+        if self.buildParseTrees:
+            self._ctx.addChild(previous)
+
+        if self._parseListeners is not None:
+            self.triggerEnterRuleEvent() # simulates rule entry for left-recursive rules
+
+    def unrollRecursionContexts(self, parentCtx:ParserRuleContext):
+        self._precedenceStack.pop()
+        self._ctx.stop = self._input.LT(-1)
+        retCtx = self._ctx # save current ctx (return value)
+        # unroll so _ctx is as it was before call to recursive method
+        if self._parseListeners is not None:
+            while self._ctx is not parentCtx:
+                self.triggerExitRuleEvent()
+                self._ctx = self._ctx.parentCtx
+        else:
+            self._ctx = parentCtx
+
+        # hook into tree
+        retCtx.parentCtx = parentCtx
+
+        if self.buildParseTrees and parentCtx is not None:
+            # add return ctx into invoking rule's tree
+            parentCtx.addChild(retCtx)
+
+    def getInvokingContext(self, ruleIndex:int):
+        ctx = self._ctx
+        while ctx is not None:
+            if ctx.getRuleIndex() == ruleIndex:
+                return ctx
+            ctx = ctx.parentCtx
+        return None
460
+
461
+
462
+ def precpred(self, localctx:RuleContext , precedence:int):
463
+ return precedence >= self._precedenceStack[-1]
464
+
465
+ def inContext(self, context:str):
466
+ # TODO: useful in parser?
467
+ return False
468
+
469
+ #
470
+ # Checks whether or not {@code symbol} can follow the current state in the
471
+ # ATN. The behavior of self method is equivalent to the following, but is
472
+ # implemented such that the complete context-sensitive follow set does not
473
+ # need to be explicitly constructed.
474
+ #
475
+ # <pre>
476
+ # return getExpectedTokens().contains(symbol);
477
+ # </pre>
478
+ #
479
+ # @param symbol the symbol type to check
480
+ # @return {@code true} if {@code symbol} can follow the current state in
481
+ # the ATN, otherwise {@code false}.
482
+ #
483
+ def isExpectedToken(self, symbol:int):
484
+ atn = self._interp.atn
485
+ ctx = self._ctx
486
+ s = atn.states[self.state]
487
+ following = atn.nextTokens(s)
488
+ if symbol in following:
489
+ return True
490
+ if not Token.EPSILON in following:
491
+ return False
492
+
493
+ while ctx is not None and ctx.invokingState>=0 and Token.EPSILON in following:
494
+ invokingState = atn.states[ctx.invokingState]
495
+ rt = invokingState.transitions[0]
496
+ following = atn.nextTokens(rt.followState)
497
+ if symbol in following:
498
+ return True
499
+ ctx = ctx.parentCtx
500
+
501
+ if Token.EPSILON in following and symbol == Token.EOF:
502
+ return True
503
+ else:
504
+ return False
505
+
506
+ # Computes the set of input symbols which could follow the current parser
507
+ # state and context, as given by {@link #getState} and {@link #getContext},
508
+ # respectively.
509
+ #
510
+ # @see ATN#getExpectedTokens(int, RuleContext)
511
+ #
512
+ def getExpectedTokens(self):
513
+ return self._interp.atn.getExpectedTokens(self.state, self._ctx)
514
+
515
+ def getExpectedTokensWithinCurrentRule(self):
516
+ atn = self._interp.atn
517
+ s = atn.states[self.state]
518
+ return atn.nextTokens(s)
519
+
520
+ # Get a rule's index (i.e., {@code RULE_ruleName} field) or -1 if not found.#
521
+ def getRuleIndex(self, ruleName:str):
522
+ ruleIndex = self.getRuleIndexMap().get(ruleName, None)
523
+ if ruleIndex is not None:
524
+ return ruleIndex
525
+ else:
526
+ return -1
527
+
528
+ # Return List&lt;String&gt; of the rule names in your parser instance
529
+ # leading up to a call to the current rule. You could override if
530
+ # you want more details such as the file/line info of where
531
+ # in the ATN a rule is invoked.
532
+ #
533
+ # this is very useful for error messages.
534
+ #
535
+ def getRuleInvocationStack(self, p:RuleContext=None):
536
+ if p is None:
537
+ p = self._ctx
538
+ stack = list()
539
+ while p is not None:
540
+ # compute what follows who invoked us
541
+ ruleIndex = p.getRuleIndex()
542
+ if ruleIndex<0:
543
+ stack.append("n/a")
544
+ else:
545
+ stack.append(self.ruleNames[ruleIndex])
546
+ p = p.parentCtx
547
+ return stack
548
+
549
+ # For debugging and other purposes.#
550
+ def getDFAStrings(self):
551
+ return [ str(dfa) for dfa in self._interp.decisionToDFA]
552
+
553
+ # For debugging and other purposes.#
554
+ def dumpDFA(self):
555
+ seenOne = False
556
+ for i in range(0, len(self._interp.decisionToDFA)):
557
+ dfa = self._interp.decisionToDFA[i]
558
+ if len(dfa.states)>0:
559
+ if seenOne:
560
+ print(file=self._output)
561
+ print("Decision " + str(dfa.decision) + ":", file=self._output)
562
+ print(dfa.toString(self.literalNames, self.symbolicNames), end='', file=self._output)
563
+ seenOne = True
564
+
565
+
566
+ def getSourceName(self):
567
+ return self._input.sourceName
568
+
569
+ # During a parse is sometimes useful to listen in on the rule entry and exit
570
+ # events as well as token matches. self is for quick and dirty debugging.
571
+ #
572
+ def setTrace(self, trace:bool):
573
+ if not trace:
574
+ self.removeParseListener(self._tracer)
575
+ self._tracer = None
576
+ else:
577
+ if self._tracer is not None:
578
+ self.removeParseListener(self._tracer)
579
+ self._tracer = TraceListener(self)
580
+ self.addParseListener(self._tracer)
venv/lib/python3.10/site-packages/antlr4/ParserInterpreter.py ADDED
@@ -0,0 +1,170 @@
+ #
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+ #
+
+ # A parser simulator that mimics what ANTLR's generated
+ # parser code does. A ParserATNSimulator is used to make
+ # predictions via adaptivePredict but this class moves a pointer through the
+ # ATN to simulate parsing. ParserATNSimulator just
+ # makes us efficient rather than having to backtrack, for example.
+ #
+ # This properly creates parse trees even for left recursive rules.
+ #
+ # We rely on the left recursive rule invocation and special predicate
+ # transitions to make left recursive rules work.
+ #
+ # See TestParserInterpreter for examples.
+ #
+ from antlr4.dfa.DFA import DFA
+ from antlr4.BufferedTokenStream import TokenStream
+ from antlr4.Lexer import Lexer
+ from antlr4.Parser import Parser
+ from antlr4.ParserRuleContext import InterpreterRuleContext, ParserRuleContext
+ from antlr4.Token import Token
+ from antlr4.atn.ATN import ATN
+ from antlr4.atn.ATNState import StarLoopEntryState, ATNState, LoopEndState
+ from antlr4.atn.ParserATNSimulator import ParserATNSimulator
+ from antlr4.PredictionContext import PredictionContextCache
+ from antlr4.atn.Transition import Transition
+ from antlr4.error.Errors import RecognitionException, UnsupportedOperationException, FailedPredicateException
+
+
+ class ParserInterpreter(Parser):
+     __slots__ = (
+         'grammarFileName', 'atn', 'tokenNames', 'ruleNames', 'decisionToDFA',
+         'sharedContextCache', '_parentContextStack',
+         'pushRecursionContextStates'
+     )
+
+     def __init__(self, grammarFileName:str, tokenNames:list, ruleNames:list, atn:ATN, input:TokenStream):
+         super().__init__(input)
+         self.grammarFileName = grammarFileName
+         self.atn = atn
+         self.tokenNames = tokenNames
+         self.ruleNames = ruleNames
+         self.decisionToDFA = [ DFA(state) for state in atn.decisionToState ]
+         self.sharedContextCache = PredictionContextCache()
+         self._parentContextStack = list()
+         # identify the ATN states where pushNewRecursionContext must be called
+         self.pushRecursionContextStates = set()
+         for state in atn.states:
+             if not isinstance(state, StarLoopEntryState):
+                 continue
+             if state.isPrecedenceDecision:
+                 self.pushRecursionContextStates.add(state.stateNumber)
+         # get atn simulator that knows how to do predictions
+         self._interp = ParserATNSimulator(self, atn, self.decisionToDFA, self.sharedContextCache)
+
+     # Begin parsing at startRuleIndex#
+     def parse(self, startRuleIndex:int):
+         startRuleStartState = self.atn.ruleToStartState[startRuleIndex]
+         rootContext = InterpreterRuleContext(None, ATNState.INVALID_STATE_NUMBER, startRuleIndex)
+         if startRuleStartState.isPrecedenceRule:
+             self.enterRecursionRule(rootContext, startRuleStartState.stateNumber, startRuleIndex, 0)
+         else:
+             self.enterRule(rootContext, startRuleStartState.stateNumber, startRuleIndex)
+         while True:
+             p = self.getATNState()
+             if p.stateType==ATNState.RULE_STOP :
+                 # pop; return from rule
+                 if len(self._ctx)==0:
+                     if startRuleStartState.isPrecedenceRule:
+                         result = self._ctx
+                         parentContext = self._parentContextStack.pop()
+                         self.unrollRecursionContexts(parentContext[0])
+                         return result
+                     else:
+                         self.exitRule()
+                         return rootContext
+                 self.visitRuleStopState(p)
+
+             else:
+                 try:
+                     self.visitState(p)
+                 except RecognitionException as e:
+                     self.state = self.atn.ruleToStopState[p.ruleIndex].stateNumber
+                     self._ctx.exception = e
+                     self._errHandler.reportError(self, e)
+                     self._errHandler.recover(self, e)
+
+     def enterRecursionRule(self, localctx:ParserRuleContext, state:int, ruleIndex:int, precedence:int):
+         self._parentContextStack.append((self._ctx, localctx.invokingState))
+         super().enterRecursionRule(localctx, state, ruleIndex, precedence)
+
+     def getATNState(self):
+         return self.atn.states[self.state]
+
+     def visitState(self, p:ATNState):
+         edge = 0
+         if len(p.transitions) > 1:
+             self._errHandler.sync(self)
+             edge = self._interp.adaptivePredict(self._input, p.decision, self._ctx)
+         else:
+             edge = 1
+
+         transition = p.transitions[edge - 1]
+         tt = transition.serializationType
+         if tt==Transition.EPSILON:
+
+             if p.stateNumber in self.pushRecursionContextStates and not isinstance(transition.target, LoopEndState):
+                 t = self._parentContextStack[-1]
+                 ctx = InterpreterRuleContext(t[0], t[1], self._ctx.ruleIndex)
+                 self.pushNewRecursionContext(ctx, self.atn.ruleToStartState[p.ruleIndex].stateNumber, self._ctx.ruleIndex)
+
+         elif tt==Transition.ATOM:
+
+             self.match(transition.label)
+
+         elif tt in [ Transition.RANGE, Transition.SET, Transition.NOT_SET]:
+
+             if not transition.matches(self._input.LA(1), Token.MIN_USER_TOKEN_TYPE, Lexer.MAX_CHAR_VALUE):
+                 self._errHandler.recoverInline(self)
+             self.matchWildcard()
+
+         elif tt==Transition.WILDCARD:
+
+             self.matchWildcard()
+
+         elif tt==Transition.RULE:
+
+             ruleStartState = transition.target
+             ruleIndex = ruleStartState.ruleIndex
+             ctx = InterpreterRuleContext(self._ctx, p.stateNumber, ruleIndex)
+             if ruleStartState.isPrecedenceRule:
+                 self.enterRecursionRule(ctx, ruleStartState.stateNumber, ruleIndex, transition.precedence)
+             else:
+                 self.enterRule(ctx, transition.target.stateNumber, ruleIndex)
+
+         elif tt==Transition.PREDICATE:
+
+             if not self.sempred(self._ctx, transition.ruleIndex, transition.predIndex):
+                 raise FailedPredicateException(self)
+
+         elif tt==Transition.ACTION:
+
+             self.action(self._ctx, transition.ruleIndex, transition.actionIndex)
+
+         elif tt==Transition.PRECEDENCE:
+
+             if not self.precpred(self._ctx, transition.precedence):
+                 msg = "precpred(_ctx, " + str(transition.precedence) + ")"
+                 raise FailedPredicateException(self, msg)
+
+         else:
+             raise UnsupportedOperationException("Unrecognized ATN transition type.")
+
+         self.state = transition.target.stateNumber
+
+     def visitRuleStopState(self, p:ATNState):
+         ruleStartState = self.atn.ruleToStartState[p.ruleIndex]
+         if ruleStartState.isPrecedenceRule:
+             parentContext = self._parentContextStack.pop()
+             self.unrollRecursionContexts(parentContext[0])
+             self.state = parentContext[1]
+         else:
+             self.exitRule()
+
+         ruleTransition = self.atn.states[self.state].transitions[0]
+         self.state = ruleTransition.followState.stateNumber
venv/lib/python3.10/site-packages/antlr4/ParserRuleContext.py ADDED
@@ -0,0 +1,186 @@
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+
+ #* A rule invocation record for parsing.
+ #
+ # Contains all of the information about the current rule not stored in the
+ # RuleContext. It handles parse tree children list, Any ATN state
+ # tracing, and the default values available for rule indications:
+ # start, stop, rule index, current alt number, current
+ # ATN state.
+ #
+ # Subclasses made for each rule and grammar track the parameters,
+ # return values, locals, and labels specific to that rule. These
+ # are the objects that are returned from rules.
+ #
+ # Note text is not an actual field of a rule return value; it is computed
+ # from start and stop using the input stream's toString() method. I
+ # could add a ctor to this so that we can pass in and store the input
+ # stream, but I'm not sure we want to do that. It would seem to be undefined
+ # to get the .text property anyway if the rule matches tokens from multiple
+ # input streams.
+ #
+ # I do not use getters for fields of objects that are used simply to
+ # group values such as this aggregate. The getters/setters are there to
+ # satisfy the superclass interface.
+
+ from antlr4.RuleContext import RuleContext
+ from antlr4.Token import Token
+ from antlr4.tree.Tree import ParseTreeListener, ParseTree, TerminalNodeImpl, ErrorNodeImpl, TerminalNode, \
+     INVALID_INTERVAL
+
+ # need forward declaration
+ ParserRuleContext = None
+
+ class ParserRuleContext(RuleContext):
+     __slots__ = ('children', 'start', 'stop', 'exception')
+     def __init__(self, parent:ParserRuleContext = None, invokingStateNumber:int = None ):
+         super().__init__(parent, invokingStateNumber)
+         #* If we are debugging or building a parse tree for a visitor,
+         # we need to track all of the tokens and rule invocations associated
+         # with this rule's context. This is empty for parsing w/o tree constr.
+         # operation because we don't the need to track the details about
+         # how we parse this rule.
+         #/
+         self.children = None
+         self.start = None
+         self.stop = None
+         # The exception that forced this rule to return. If the rule successfully
+         # completed, this is {@code null}.
+         self.exception = None
+
+     #* COPY a ctx (I'm deliberately not using copy constructor)#/
+     #
+     # This is used in the generated parser code to flip a generic XContext
+     # node for rule X to a YContext for alt label Y. In that sense, it is
+     # not really a generic copy function.
+     #
+     # If we do an error sync() at start of a rule, we might add error nodes
+     # to the generic XContext so this function must copy those nodes to
+     # the YContext as well else they are lost!
+     #/
+     def copyFrom(self, ctx:ParserRuleContext):
+         # from RuleContext
+         self.parentCtx = ctx.parentCtx
+         self.invokingState = ctx.invokingState
+         self.children = None
+         self.start = ctx.start
+         self.stop = ctx.stop
+
+         # copy any error nodes to alt label node
+         if ctx.children is not None:
+             self.children = []
+             # reset parent pointer for any error nodes
+             for child in ctx.children:
+                 if isinstance(child, ErrorNodeImpl):
+                     self.children.append(child)
+                     child.parentCtx = self
+
+     # Double dispatch methods for listeners
+     def enterRule(self, listener:ParseTreeListener):
+         pass
+
+     def exitRule(self, listener:ParseTreeListener):
+         pass
+
+     #* Does not set parent link; other add methods do that#/
+     def addChild(self, child:ParseTree):
+         if self.children is None:
+             self.children = []
+         self.children.append(child)
+         return child
+
+     #* Used by enterOuterAlt to toss out a RuleContext previously added as
+     # we entered a rule. If we have # label, we will need to remove
+     # generic ruleContext object.
+     #/
+     def removeLastChild(self):
+         if self.children is not None:
+             del self.children[len(self.children)-1]
+
+     def addTokenNode(self, token:Token):
+         node = TerminalNodeImpl(token)
+         self.addChild(node)
+         node.parentCtx = self
+         return node
+
+     def addErrorNode(self, badToken:Token):
+         node = ErrorNodeImpl(badToken)
+         self.addChild(node)
+         node.parentCtx = self
+         return node
+
+     def getChild(self, i:int, ttype:type = None):
+         if ttype is None:
+             return self.children[i] if len(self.children)>i else None
+         else:
+             for child in self.getChildren():
+                 if not isinstance(child, ttype):
+                     continue
+                 if i==0:
+                     return child
+                 i -= 1
+             return None
+
+     def getChildren(self, predicate = None):
+         if self.children is not None:
+             for child in self.children:
+                 if predicate is not None and not predicate(child):
+                     continue
+                 yield child
+
+     def getToken(self, ttype:int, i:int):
+         for child in self.getChildren():
+             if not isinstance(child, TerminalNode):
+                 continue
+             if child.symbol.type != ttype:
+                 continue
+             if i==0:
+                 return child
+             i -= 1
+         return None
+
+     def getTokens(self, ttype:int ):
+         if self.getChildren() is None:
+             return []
+         tokens = []
+         for child in self.getChildren():
+             if not isinstance(child, TerminalNode):
+                 continue
+             if child.symbol.type != ttype:
+                 continue
+             tokens.append(child)
+         return tokens
+
+     def getTypedRuleContext(self, ctxType:type, i:int):
+         return self.getChild(i, ctxType)
+
+     def getTypedRuleContexts(self, ctxType:type):
+         children = self.getChildren()
+         if children is None:
+             return []
+         contexts = []
+         for child in children:
+             if not isinstance(child, ctxType):
+                 continue
+             contexts.append(child)
+         return contexts
+
+     def getChildCount(self):
+         return len(self.children) if self.children else 0
+
+     def getSourceInterval(self):
+         if self.start is None or self.stop is None:
+             return INVALID_INTERVAL
+         else:
+             return (self.start.tokenIndex, self.stop.tokenIndex)
+
+
+ RuleContext.EMPTY = ParserRuleContext()
+
+ class InterpreterRuleContext(ParserRuleContext):
+
+     def __init__(self, parent:ParserRuleContext, invokingStateNumber:int, ruleIndex:int):
+         super().__init__(parent, invokingStateNumber)
+         self.ruleIndex = ruleIndex
venv/lib/python3.10/site-packages/antlr4/PredictionContext.py ADDED
@@ -0,0 +1,623 @@
+ #
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+ #/
+ from io import StringIO
+
+ from antlr4.error.Errors import IllegalStateException
+
+ from antlr4.RuleContext import RuleContext
+ from antlr4.atn.ATN import ATN
+ from antlr4.atn.ATNState import ATNState
+
+
+ class PredictionContext(object):
+
+     # Represents {@code $} in local context prediction, which means wildcard.
+     # {@code#+x =#}.
+     #/
+     EMPTY = None
+
+     # Represents {@code $} in an array in full context mode, when {@code $}
+     # doesn't mean wildcard: {@code $ + x = [$,x]}. Here,
+     # {@code $} = {@link #EMPTY_RETURN_STATE}.
+     #/
+     EMPTY_RETURN_STATE = 0x7FFFFFFF
+
+     globalNodeCount = 1
+     id = globalNodeCount
+
+     # Stores the computed hash code of this {@link PredictionContext}. The hash
+     # code is computed in parts to match the following reference algorithm.
+     #
+     # <pre>
+     # private int referenceHashCode() {
+     # int hash = {@link MurmurHash#initialize MurmurHash.initialize}({@link #INITIAL_HASH});
+     #
+     # for (int i = 0; i &lt; {@link #size()}; i++) {
+     # hash = {@link MurmurHash#update MurmurHash.update}(hash, {@link #getParent getParent}(i));
+     # }
+     #
+     # for (int i = 0; i &lt; {@link #size()}; i++) {
+     # hash = {@link MurmurHash#update MurmurHash.update}(hash, {@link #getReturnState getReturnState}(i));
+     # }
+     #
+     # hash = {@link MurmurHash#finish MurmurHash.finish}(hash, 2# {@link #size()});
+     # return hash;
+     # }
+     # </pre>
+     #/
+
+     def __init__(self, cachedHashCode:int):
+         self.cachedHashCode = cachedHashCode
+
+     def __len__(self):
+         return 0
+
+     # This means only the {@link #EMPTY} context is in set.
+     def isEmpty(self):
+         return self is self.EMPTY
+
+     def hasEmptyPath(self):
+         return self.getReturnState(len(self) - 1) == self.EMPTY_RETURN_STATE
+
+     def getReturnState(self, index:int):
+         raise IllegalStateException("illegal!")
+
+     def __hash__(self):
+         return self.cachedHashCode
+
+ def calculateHashCode(parent:PredictionContext, returnState:int):
+     return hash("") if parent is None else hash((hash(parent), returnState))
+
+ def calculateListsHashCode(parents:[], returnStates:[] ):
+     h = 0
+     for parent, returnState in zip(parents, returnStates):
+         h = hash((h, calculateHashCode(parent, returnState)))
+     return h
+
+ # Used to cache {@link PredictionContext} objects. Its used for the shared
+ # context cash associated with contexts in DFA states. This cache
+ # can be used for both lexers and parsers.
+
+ class PredictionContextCache(object):
+
+     def __init__(self):
+         self.cache = dict()
+
+     # Add a context to the cache and return it. If the context already exists,
+     # return that one instead and do not add a new context to the cache.
+     # Protect shared cache from unsafe thread access.
+     #
+     def add(self, ctx:PredictionContext):
+         if ctx==PredictionContext.EMPTY:
+             return PredictionContext.EMPTY
+         existing = self.cache.get(ctx, None)
+         if existing is not None:
+             return existing
+         self.cache[ctx] = ctx
+         return ctx
+
+     def get(self, ctx:PredictionContext):
+         return self.cache.get(ctx, None)
+
+     def __len__(self):
+         return len(self.cache)
+
+
+ class SingletonPredictionContext(PredictionContext):
+
+     @staticmethod
+     def create(parent:PredictionContext , returnState:int ):
+         if returnState == PredictionContext.EMPTY_RETURN_STATE and parent is None:
+             # someone can pass in the bits of an array ctx that mean $
+             return SingletonPredictionContext.EMPTY
+         else:
+             return SingletonPredictionContext(parent, returnState)
+
+     def __init__(self, parent:PredictionContext, returnState:int):
+         hashCode = calculateHashCode(parent, returnState)
+         super().__init__(hashCode)
+         self.parentCtx = parent
+         self.returnState = returnState
+
+     def __len__(self):
+         return 1
+
+     def getParent(self, index:int):
+         return self.parentCtx
+
+     def getReturnState(self, index:int):
+         return self.returnState
+
+     def __eq__(self, other):
+         if self is other:
+             return True
+         elif other is None:
+             return False
+         elif not isinstance(other, SingletonPredictionContext):
+             return False
+         else:
+             return self.returnState == other.returnState and self.parentCtx == other.parentCtx
+
+     def __hash__(self):
+         return self.cachedHashCode
+
+     def __str__(self):
+         up = "" if self.parentCtx is None else str(self.parentCtx)
+         if len(up)==0:
+             if self.returnState == self.EMPTY_RETURN_STATE:
+                 return "$"
+             else:
+                 return str(self.returnState)
+         else:
+             return str(self.returnState) + " " + up
+
+
+ class EmptyPredictionContext(SingletonPredictionContext):
+
+     def __init__(self):
+         super().__init__(None, PredictionContext.EMPTY_RETURN_STATE)
+
+     def isEmpty(self):
+         return True
+
+     def __eq__(self, other):
+         return self is other
+
+     def __hash__(self):
+         return self.cachedHashCode
+
+     def __str__(self):
+         return "$"
+
+
+ PredictionContext.EMPTY = EmptyPredictionContext()
+
+ class ArrayPredictionContext(PredictionContext):
+     # Parent can be null only if full ctx mode and we make an array
+     # from {@link #EMPTY} and non-empty. We merge {@link #EMPTY} by using null parent and
+     # returnState == {@link #EMPTY_RETURN_STATE}.
+
+     def __init__(self, parents:list, returnStates:list):
+         super().__init__(calculateListsHashCode(parents, returnStates))
+         self.parents = parents
+         self.returnStates = returnStates
+
+     def isEmpty(self):
+         # since EMPTY_RETURN_STATE can only appear in the last position, we
+         # don't need to verify that size==1
+         return self.returnStates[0]==PredictionContext.EMPTY_RETURN_STATE
+
+     def __len__(self):
+         return len(self.returnStates)
+
+     def getParent(self, index:int):
+         return self.parents[index]
+
+     def getReturnState(self, index:int):
+         return self.returnStates[index]
+
+     def __eq__(self, other):
+         if self is other:
+             return True
+         elif not isinstance(other, ArrayPredictionContext):
+             return False
+         elif hash(self) != hash(other):
+             return False # can't be same if hash is different
+         else:
+             return self.returnStates==other.returnStates and self.parents==other.parents
+
+     def __str__(self):
+         if self.isEmpty():
+             return "[]"
+         with StringIO() as buf:
+             buf.write("[")
+             for i in range(0,len(self.returnStates)):
+                 if i>0:
+                     buf.write(", ")
+                 if self.returnStates[i]==PredictionContext.EMPTY_RETURN_STATE:
+                     buf.write("$")
+                     continue
+                 buf.write(str(self.returnStates[i]))
+                 if self.parents[i] is not None:
+                     buf.write(' ')
+                     buf.write(str(self.parents[i]))
+                 else:
+                     buf.write("null")
+             buf.write("]")
+             return buf.getvalue()
+
+     def __hash__(self):
+         return self.cachedHashCode
+
+
+
+ # Convert a {@link RuleContext} tree to a {@link PredictionContext} graph.
+ # Return {@link #EMPTY} if {@code outerContext} is empty or null.
+ #/
+ def PredictionContextFromRuleContext(atn:ATN, outerContext:RuleContext=None):
+     if outerContext is None:
+         outerContext = RuleContext.EMPTY
+
+     # if we are in RuleContext of start rule, s, then PredictionContext
+     # is EMPTY. Nobody called us. (if we are empty, return empty)
+     if outerContext.parentCtx is None or outerContext is RuleContext.EMPTY:
+         return PredictionContext.EMPTY
+
+     # If we have a parent, convert it to a PredictionContext graph
+     parent = PredictionContextFromRuleContext(atn, outerContext.parentCtx)
+     state = atn.states[outerContext.invokingState]
+     transition = state.transitions[0]
+     return SingletonPredictionContext.create(parent, transition.followState.stateNumber)
+
+
+ def merge(a:PredictionContext, b:PredictionContext, rootIsWildcard:bool, mergeCache:dict):
+
+     # share same graph if both same
+     if a==b:
+         return a
+
+     if isinstance(a, SingletonPredictionContext) and isinstance(b, SingletonPredictionContext):
+         return mergeSingletons(a, b, rootIsWildcard, mergeCache)
+
+     # At least one of a or b is array
+     # If one is $ and rootIsWildcard, return $ as# wildcard
+     if rootIsWildcard:
+         if isinstance( a, EmptyPredictionContext ):
+             return a
+         if isinstance( b, EmptyPredictionContext ):
+             return b
+
+     # convert singleton so both are arrays to normalize
+     if isinstance( a, SingletonPredictionContext ):
+         a = ArrayPredictionContext([a.parentCtx], [a.returnState])
+     if isinstance( b, SingletonPredictionContext):
+         b = ArrayPredictionContext([b.parentCtx], [b.returnState])
+     return mergeArrays(a, b, rootIsWildcard, mergeCache)
+
+
+ #
+ # Merge two {@link SingletonPredictionContext} instances.
+ #
+ # <p>Stack tops equal, parents merge is same; return left graph.<br>
+ # <embed src="images/SingletonMerge_SameRootSamePar.svg" type="image/svg+xml"/></p>
+ #
+ # <p>Same stack top, parents differ; merge parents giving array node, then
+ # remainders of those graphs. A new root node is created to point to the
+ # merged parents.<br>
+ # <embed src="images/SingletonMerge_SameRootDiffPar.svg" type="image/svg+xml"/></p>
+ #
+ # <p>Different stack tops pointing to same parent. Make array node for the
+ # root where both element in the root point to the same (original)
+ # parent.<br>
+ # <embed src="images/SingletonMerge_DiffRootSamePar.svg" type="image/svg+xml"/></p>
+ #
+ # <p>Different stack tops pointing to different parents. Make array node for
+ # the root where each element points to the corresponding original
+ # parent.<br>
+ # <embed src="images/SingletonMerge_DiffRootDiffPar.svg" type="image/svg+xml"/></p>
+ #
+ # @param a the first {@link SingletonPredictionContext}
+ # @param b the second {@link SingletonPredictionContext}
+ # @param rootIsWildcard {@code true} if this is a local-context merge,
+ # otherwise false to indicate a full-context merge
+ # @param mergeCache
+ #/
+ def mergeSingletons(a:SingletonPredictionContext, b:SingletonPredictionContext, rootIsWildcard:bool, mergeCache:dict):
+     if mergeCache is not None:
+         previous = mergeCache.get((a,b), None)
+         if previous is not None:
+             return previous
+         previous = mergeCache.get((b,a), None)
+         if previous is not None:
+             return previous
+
+     merged = mergeRoot(a, b, rootIsWildcard)
+     if merged is not None:
+         if mergeCache is not None:
+             mergeCache[(a, b)] = merged
+         return merged
+
+     if a.returnState==b.returnState:
+         parent = merge(a.parentCtx, b.parentCtx, rootIsWildcard, mergeCache)
+         # if parent is same as existing a or b parent or reduced to a parent, return it
+         if parent == a.parentCtx:
+             return a # ax + bx = ax, if a=b
+         if parent == b.parentCtx:
+             return b # ax + bx = bx, if a=b
+         # else: ax + ay = a'[x,y]
+         # merge parents x and y, giving array node with x,y then remainders
+         # of those graphs. dup a, a' points at merged array
+         # new joined parent so create new singleton pointing to it, a'
+         merged = SingletonPredictionContext.create(parent, a.returnState)
+         if mergeCache is not None:
+             mergeCache[(a, b)] = merged
+         return merged
+     else: # a != b payloads differ
+         # see if we can collapse parents due to $+x parents if local ctx
+         singleParent = None
+         if a is b or (a.parentCtx is not None and a.parentCtx==b.parentCtx): # ax + bx = [a,b]x
+             singleParent = a.parentCtx
+         if singleParent is not None: # parents are same
+             # sort payloads and use same parent
345
+ payloads = [ a.returnState, b.returnState ]
346
+ if a.returnState > b.returnState:
347
+ payloads = [ b.returnState, a.returnState ]
348
+ parents = [singleParent, singleParent]
349
+ merged = ArrayPredictionContext(parents, payloads)
350
+ if mergeCache is not None:
351
+ mergeCache[(a, b)] = merged
352
+ return merged
353
+ # parents differ and can't merge them. Just pack together
354
+ # into array; can't merge.
355
+ # ax + by = [ax,by]
356
+ payloads = [ a.returnState, b.returnState ]
357
+ parents = [ a.parentCtx, b.parentCtx ]
358
+ if a.returnState > b.returnState: # sort by payload
359
+ payloads = [ b.returnState, a.returnState ]
360
+ parents = [ b.parentCtx, a.parentCtx ]
361
+ merged = ArrayPredictionContext(parents, payloads)
362
+ if mergeCache is not None:
363
+ mergeCache[(a, b)] = merged
364
+ return merged
365
+
366
+
367
+ #
368
+ # Handle case where at least one of {@code a} or {@code b} is
369
+ # {@link #EMPTY}. In the following diagrams, the symbol {@code $} is used
370
+ # to represent {@link #EMPTY}.
371
+ #
372
+ # <h2>Local-Context Merges</h2>
373
+ #
374
+ # <p>These local-context merge operations are used when {@code rootIsWildcard}
375
+ # is true.</p>
376
+ #
377
+ # <p>{@link #EMPTY} is superset of any graph; return {@link #EMPTY}.<br>
378
+ # <embed src="images/LocalMerge_EmptyRoot.svg" type="image/svg+xml"/></p>
379
+ #
380
+ # <p>{@link #EMPTY} and anything is {@code #EMPTY}, so merged parent is
381
+ # {@code #EMPTY}; return left graph.<br>
382
+ # <embed src="images/LocalMerge_EmptyParent.svg" type="image/svg+xml"/></p>
383
+ #
384
+ # <p>Special case of last merge if local context.<br>
385
+ # <embed src="images/LocalMerge_DiffRoots.svg" type="image/svg+xml"/></p>
386
+ #
387
+ # <h2>Full-Context Merges</h2>
388
+ #
389
+ # <p>These full-context merge operations are used when {@code rootIsWildcard}
390
+ # is false.</p>
391
+ #
392
+ # <p><embed src="images/FullMerge_EmptyRoots.svg" type="image/svg+xml"/></p>
393
+ #
394
+ # <p>Must keep all contexts; {@link #EMPTY} in array is a special value (and
395
+ # null parent).<br>
396
+ # <embed src="images/FullMerge_EmptyRoot.svg" type="image/svg+xml"/></p>
397
+ #
398
+ # <p><embed src="images/FullMerge_SameRoot.svg" type="image/svg+xml"/></p>
399
+ #
400
+ # @param a the first {@link SingletonPredictionContext}
401
+ # @param b the second {@link SingletonPredictionContext}
402
+ # @param rootIsWildcard {@code true} if this is a local-context merge,
403
+ # otherwise false to indicate a full-context merge
404
+ #/
405
+ def mergeRoot(a:SingletonPredictionContext, b:SingletonPredictionContext, rootIsWildcard:bool):
406
+ if rootIsWildcard:
407
+ if a == PredictionContext.EMPTY:
408
+ return PredictionContext.EMPTY ## + b =#
409
+ if b == PredictionContext.EMPTY:
410
+ return PredictionContext.EMPTY # a +# =#
411
+ else:
412
+ if a == PredictionContext.EMPTY and b == PredictionContext.EMPTY:
413
+ return PredictionContext.EMPTY # $ + $ = $
414
+ elif a == PredictionContext.EMPTY: # $ + x = [$,x]
415
+ payloads = [ b.returnState, PredictionContext.EMPTY_RETURN_STATE ]
416
+ parents = [ b.parentCtx, None ]
417
+ return ArrayPredictionContext(parents, payloads)
418
+ elif b == PredictionContext.EMPTY: # x + $ = [$,x] ($ is always first if present)
419
+ payloads = [ a.returnState, PredictionContext.EMPTY_RETURN_STATE ]
420
+ parents = [ a.parentCtx, None ]
421
+ return ArrayPredictionContext(parents, payloads)
422
+ return None
423
+
424
+
425
+ #
426
+ # Merge two {@link ArrayPredictionContext} instances.
427
+ #
428
+ # <p>Different tops, different parents.<br>
429
+ # <embed src="images/ArrayMerge_DiffTopDiffPar.svg" type="image/svg+xml"/></p>
430
+ #
431
+ # <p>Shared top, same parents.<br>
432
+ # <embed src="images/ArrayMerge_ShareTopSamePar.svg" type="image/svg+xml"/></p>
433
+ #
434
+ # <p>Shared top, different parents.<br>
435
+ # <embed src="images/ArrayMerge_ShareTopDiffPar.svg" type="image/svg+xml"/></p>
436
+ #
437
+ # <p>Shared top, all shared parents.<br>
438
+ # <embed src="images/ArrayMerge_ShareTopSharePar.svg" type="image/svg+xml"/></p>
439
+ #
440
+ # <p>Equal tops, merge parents and reduce top to
441
+ # {@link SingletonPredictionContext}.<br>
442
+ # <embed src="images/ArrayMerge_EqualTop.svg" type="image/svg+xml"/></p>
443
+ #/
444
+ def mergeArrays(a:ArrayPredictionContext, b:ArrayPredictionContext, rootIsWildcard:bool, mergeCache:dict):
445
+ if mergeCache is not None:
446
+ previous = mergeCache.get((a,b), None)
447
+ if previous is not None:
448
+ return previous
449
+ previous = mergeCache.get((b,a), None)
450
+ if previous is not None:
451
+ return previous
452
+
453
+ # merge sorted payloads a + b => M
454
+ i = 0 # walks a
455
+ j = 0 # walks b
456
+ k = 0 # walks target M array
457
+
458
+ mergedReturnStates = [None] * (len(a.returnStates) + len( b.returnStates))
459
+ mergedParents = [None] * len(mergedReturnStates)
460
+ # walk and merge to yield mergedParents, mergedReturnStates
461
+ while i<len(a.returnStates) and j<len(b.returnStates):
462
+ a_parent = a.parents[i]
463
+ b_parent = b.parents[j]
464
+ if a.returnStates[i]==b.returnStates[j]:
465
+ # same payload (stack tops are equal), must yield merged singleton
466
+ payload = a.returnStates[i]
467
+ # $+$ = $
468
+ bothDollars = payload == PredictionContext.EMPTY_RETURN_STATE and \
469
+ a_parent is None and b_parent is None
470
+ ax_ax = (a_parent is not None and b_parent is not None) and a_parent==b_parent # ax+ax -> ax
471
+ if bothDollars or ax_ax:
472
+ mergedParents[k] = a_parent # choose left
473
+ mergedReturnStates[k] = payload
474
+ else: # ax+ay -> a'[x,y]
475
+ mergedParent = merge(a_parent, b_parent, rootIsWildcard, mergeCache)
476
+ mergedParents[k] = mergedParent
477
+ mergedReturnStates[k] = payload
478
+ i += 1 # hop over left one as usual
479
+ j += 1 # but also skip one in right side since we merge
480
+ elif a.returnStates[i]<b.returnStates[j]: # copy a[i] to M
481
+ mergedParents[k] = a_parent
482
+ mergedReturnStates[k] = a.returnStates[i]
483
+ i += 1
484
+ else: # b > a, copy b[j] to M
485
+ mergedParents[k] = b_parent
486
+ mergedReturnStates[k] = b.returnStates[j]
487
+ j += 1
488
+ k += 1
489
+
490
+ # copy over any payloads remaining in either array
491
+ if i < len(a.returnStates):
492
+ for p in range(i, len(a.returnStates)):
493
+ mergedParents[k] = a.parents[p]
494
+ mergedReturnStates[k] = a.returnStates[p]
495
+ k += 1
496
+ else:
497
+ for p in range(j, len(b.returnStates)):
498
+ mergedParents[k] = b.parents[p]
499
+ mergedReturnStates[k] = b.returnStates[p]
500
+ k += 1
501
+
502
+ # trim merged if we combined a few that had same stack tops
503
+ if k < len(mergedParents): # write index < last position; trim
504
+ if k == 1: # for just one merged element, return singleton top
505
+ merged = SingletonPredictionContext.create(mergedParents[0], mergedReturnStates[0])
506
+ if mergeCache is not None:
507
+ mergeCache[(a,b)] = merged
508
+ return merged
509
+ mergedParents = mergedParents[0:k]
510
+ mergedReturnStates = mergedReturnStates[0:k]
511
+
512
+ merged = ArrayPredictionContext(mergedParents, mergedReturnStates)
513
+
514
+ # if we created same array as a or b, return that instead
515
+ # TODO: track whether this is possible above during merge sort for speed
516
+ if merged==a:
517
+ if mergeCache is not None:
518
+ mergeCache[(a,b)] = a
519
+ return a
520
+ if merged==b:
521
+ if mergeCache is not None:
522
+ mergeCache[(a,b)] = b
523
+ return b
524
+ combineCommonParents(mergedParents)
525
+
526
+ if mergeCache is not None:
527
+ mergeCache[(a,b)] = merged
528
+ return merged
529
+
530
+
531
+ #
532
+ # Make pass over all <em>M</em> {@code parents}; merge any {@code equals()}
533
+ # ones.
534
+ #/
535
+ def combineCommonParents(parents:list):
536
+ uniqueParents = dict()
537
+
538
+ for p in range(0, len(parents)):
539
+ parent = parents[p]
540
+ if uniqueParents.get(parent, None) is None:
541
+ uniqueParents[parent] = parent
542
+
543
+ for p in range(0, len(parents)):
544
+ parents[p] = uniqueParents[parents[p]]
545
+
546
+ def getCachedPredictionContext(context:PredictionContext, contextCache:PredictionContextCache, visited:dict):
547
+ if context.isEmpty():
548
+ return context
549
+ existing = visited.get(context)
550
+ if existing is not None:
551
+ return existing
552
+ existing = contextCache.get(context)
553
+ if existing is not None:
554
+ visited[context] = existing
555
+ return existing
556
+ changed = False
557
+ parents = [None] * len(context)
558
+ for i in range(0, len(parents)):
559
+ parent = getCachedPredictionContext(context.getParent(i), contextCache, visited)
560
+ if changed or parent is not context.getParent(i):
561
+ if not changed:
562
+ parents = [context.getParent(j) for j in range(len(context))]
563
+ changed = True
564
+ parents[i] = parent
565
+ if not changed:
566
+ contextCache.add(context)
567
+ visited[context] = context
568
+ return context
569
+
570
+ updated = None
571
+ if len(parents) == 0:
572
+ updated = PredictionContext.EMPTY
573
+ elif len(parents) == 1:
574
+ updated = SingletonPredictionContext.create(parents[0], context.getReturnState(0))
575
+ else:
576
+ updated = ArrayPredictionContext(parents, context.returnStates)
577
+
578
+ contextCache.add(updated)
579
+ visited[updated] = updated
580
+ visited[context] = updated
581
+
582
+ return updated
583
+
584
+
585
+ # # extra structures, but cut/paste/morphed works, so leave it.
586
+ # # seems to do a breadth-first walk
587
+ # public static List<PredictionContext> getAllNodes(PredictionContext context) {
588
+ # Map<PredictionContext, PredictionContext> visited =
589
+ # new IdentityHashMap<PredictionContext, PredictionContext>();
590
+ # Deque<PredictionContext> workList = new ArrayDeque<PredictionContext>();
591
+ # workList.add(context);
592
+ # visited.put(context, context);
593
+ # List<PredictionContext> nodes = new ArrayList<PredictionContext>();
594
+ # while (!workList.isEmpty()) {
595
+ # PredictionContext current = workList.pop();
596
+ # nodes.add(current);
597
+ # for (int i = 0; i < current.size(); i++) {
598
+ # PredictionContext parent = current.getParent(i);
599
+ # if ( parent!=null && visited.put(parent, parent) == null) {
600
+ # workList.push(parent);
601
+ # }
602
+ # }
603
+ # }
604
+ # return nodes;
605
+ # }
606
+
607
+ # ter's recursive version of Sam's getAllNodes()
608
+ def getAllContextNodes(context:PredictionContext, nodes:list=None, visited:dict=None):
609
+ if nodes is None:
610
+ nodes = list()
611
+ return getAllContextNodes(context, nodes, visited)
612
+ elif visited is None:
613
+ visited = dict()
614
+ return getAllContextNodes(context, nodes, visited)
615
+ else:
616
+ if context is None or visited.get(context, None) is not None:
617
+ return nodes
618
+ visited.put(context, context)
619
+ nodes.add(context)
620
+ for i in range(0, len(context)):
621
+ getAllContextNodes(context.getParent(i), nodes, visited)
622
+ return nodes
623
+
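The two-pointer walk in `mergeArrays` above can be sketched standalone. This is an illustrative simplification, not the runtime API: parents are modeled as plain strings, and unequal parents under an equal return state are paired up rather than recursively merged.

```python
def merge_sorted_payloads(a_states, a_parents, b_states, b_parents):
    """Merge two (returnState, parent) lists sorted by returnState.

    Equal return states collapse to a single slot, as in the
    ax+ax -> ax case; all other entries are kept in sorted order.
    """
    i = j = 0
    states, parents = [], []
    while i < len(a_states) and j < len(b_states):
        if a_states[i] == b_states[j]:
            # same stack top: one slot (the real code merges the parents)
            states.append(a_states[i])
            parents.append(a_parents[i] if a_parents[i] == b_parents[j]
                           else (a_parents[i], b_parents[j]))
            i += 1
            j += 1
        elif a_states[i] < b_states[j]:  # copy a[i]
            states.append(a_states[i])
            parents.append(a_parents[i])
            i += 1
        else:                            # copy b[j]
            states.append(b_states[j])
            parents.append(b_parents[j])
            j += 1
    # copy over any payloads remaining in either array
    states.extend(a_states[i:]); parents.extend(a_parents[i:])
    states.extend(b_states[j:]); parents.extend(b_parents[j:])
    return states, parents

print(merge_sorted_payloads([1, 3, 5], ["x", "y", "z"], [3, 4], ["y", "w"]))
# ([1, 3, 4, 5], ['x', 'y', 'w', 'z'])
```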
venv/lib/python3.10/site-packages/antlr4/Recognizer.py ADDED
@@ -0,0 +1,147 @@
+ #
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+ #
+ from antlr4.RuleContext import RuleContext
+ from antlr4.Token import Token
+ from antlr4.error.ErrorListener import ProxyErrorListener, ConsoleErrorListener
+
+ # need forward declaration
+ RecognitionException = None
+
+ class Recognizer(object):
+     __slots__ = ('_listeners', '_interp', '_stateNumber')
+
+     tokenTypeMapCache = dict()
+     ruleIndexMapCache = dict()
+
+     def __init__(self):
+         self._listeners = [ ConsoleErrorListener.INSTANCE ]
+         self._interp = None
+         self._stateNumber = -1
+
+     def extractVersion(self, version):
+         pos = version.find(".")
+         major = version[0:pos]
+         version = version[pos+1:]
+         pos = version.find(".")
+         if pos==-1:
+             pos = version.find("-")
+         if pos==-1:
+             pos = len(version)
+         minor = version[0:pos]
+         return major, minor
+
+     def checkVersion(self, toolVersion):
+         runtimeVersion = "4.11.0"
+         rvmajor, rvminor = self.extractVersion(runtimeVersion)
+         tvmajor, tvminor = self.extractVersion(toolVersion)
+         if rvmajor!=tvmajor or rvminor!=tvminor:
+             print("ANTLR runtime and generated code versions disagree: "+runtimeVersion+"!="+toolVersion)
+
+     def addErrorListener(self, listener):
+         self._listeners.append(listener)
+
+     def removeErrorListener(self, listener):
+         self._listeners.remove(listener)
+
+     def removeErrorListeners(self):
+         self._listeners = []
+
+     def getTokenTypeMap(self):
+         tokenNames = self.getTokenNames()
+         if tokenNames is None:
+             from antlr4.error.Errors import UnsupportedOperationException
+             raise UnsupportedOperationException("The current recognizer does not provide a list of token names.")
+         result = self.tokenTypeMapCache.get(tokenNames, None)
+         if result is None:
+             result = dict(zip(tokenNames, range(0, len(tokenNames))))
+             result["EOF"] = Token.EOF
+             self.tokenTypeMapCache[tokenNames] = result
+         return result
+
+     # Get a map from rule names to rule indexes.
+     #
+     # <p>Used for XPath and tree pattern compilation.</p>
+     #
+     def getRuleIndexMap(self):
+         ruleNames = self.getRuleNames()
+         if ruleNames is None:
+             from antlr4.error.Errors import UnsupportedOperationException
+             raise UnsupportedOperationException("The current recognizer does not provide a list of rule names.")
+         result = self.ruleIndexMapCache.get(ruleNames, None)
+         if result is None:
+             result = dict(zip(ruleNames, range(0, len(ruleNames))))
+             self.ruleIndexMapCache[ruleNames] = result
+         return result
+
+     def getTokenType(self, tokenName:str):
+         ttype = self.getTokenTypeMap().get(tokenName, None)
+         if ttype is not None:
+             return ttype
+         else:
+             return Token.INVALID_TYPE
+
+
+     # What is the error header, normally line/character position information?
+     def getErrorHeader(self, e:RecognitionException):
+         line = e.getOffendingToken().line
+         column = e.getOffendingToken().column
+         return "line "+str(line)+":"+str(column)
+
+
+     # How should a token be displayed in an error message? The default
+     # is to display just the text, but during development you might
+     # want to have a lot of information spit out. Override in that case
+     # to use t.toString() (which, for CommonToken, dumps everything about
+     # the token). This is better than forcing you to override a method in
+     # your token objects because you don't have to go modify your lexer
+     # so that it creates a new token type.
+     #
+     # @deprecated This method is not called by the ANTLR 4 Runtime. Specific
+     # implementations of {@link ANTLRErrorStrategy} may provide a similar
+     # feature when necessary. For example, see
+     # {@link DefaultErrorStrategy#getTokenErrorDisplay}.
+     #
+     def getTokenErrorDisplay(self, t:Token):
+         if t is None:
+             return "<no token>"
+         s = t.text
+         if s is None:
+             if t.type==Token.EOF:
+                 s = "<EOF>"
+             else:
+                 s = "<" + str(t.type) + ">"
+         s = s.replace("\n","\\n")
+         s = s.replace("\r","\\r")
+         s = s.replace("\t","\\t")
+         return "'" + s + "'"
+
+     def getErrorListenerDispatch(self):
+         return ProxyErrorListener(self._listeners)
+
+     # subclass needs to override these if there are sempreds or actions
+     # that the ATN interp needs to execute
+     def sempred(self, localctx:RuleContext, ruleIndex:int, actionIndex:int):
+         return True
+
+     def precpred(self, localctx:RuleContext , precedence:int):
+         return True
+
+     @property
+     def state(self):
+         return self._stateNumber
+
+     # Indicate that the recognizer has changed internal state that is
+     # consistent with the ATN state passed in. This way we always know
+     # where we are in the ATN as the parser goes along. The rule
+     # context objects form a stack that lets us see the stack of
+     # invoking rules. Combine this and we have complete ATN
+     # configuration information.
+
+     @state.setter
+     def state(self, atnState:int):
+         self._stateNumber = atnState
+
+ del RecognitionException
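The major/minor parsing done by `Recognizer.extractVersion` above can be sketched as a standalone function (illustrative name, same logic): take the text up to the first ".", then up to the next "." or "-", or the end of the string.

```python
def extract_version(version: str):
    """Split a version string into (major, minor), mirroring
    Recognizer.extractVersion: minor ends at the next '.' or '-'."""
    pos = version.find(".")
    major = version[0:pos]
    version = version[pos + 1:]
    pos = version.find(".")
    if pos == -1:
        pos = version.find("-")   # e.g. "4.9-SNAPSHOT"
    if pos == -1:
        pos = len(version)        # e.g. "4.9"
    minor = version[0:pos]
    return major, minor

print(extract_version("4.11.0"))       # ('4', '11')
print(extract_version("4.9-SNAPSHOT")) # ('4', '9')
```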
venv/lib/python3.10/site-packages/antlr4/RuleContext.py ADDED
@@ -0,0 +1,227 @@
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+ # Use of this file is governed by the BSD 3-clause license that
+ # can be found in the LICENSE.txt file in the project root.
+ #/
+
+
+ # A rule context is a record of a single rule invocation. It knows
+ # which context invoked it, if any. If there is no parent context, then
+ # naturally the invoking state is not valid. The parent link
+ # provides a chain upwards from the current rule invocation to the root
+ # of the invocation tree, forming a stack. We actually carry no
+ # information about the rule associated with this context (except
+ # when parsing). We keep only the state number of the invoking state from
+ # the ATN submachine that invoked this. Contrast this with the s
+ # pointer inside ParserRuleContext that tracks the current state
+ # being "executed" for the current rule.
+ #
+ # The parent contexts are useful for computing lookahead sets and
+ # getting error information.
+ #
+ # These objects are used during parsing and prediction.
+ # For the special case of parsers, we use the subclass
+ # ParserRuleContext.
+ #
+ # @see ParserRuleContext
+ #/
+ from io import StringIO
+ from antlr4.tree.Tree import RuleNode, INVALID_INTERVAL, ParseTreeVisitor
+ from antlr4.tree.Trees import Trees
+
+ # need forward declarations
+ RuleContext = None
+ Parser = None
+
+ class RuleContext(RuleNode):
+     __slots__ = ('parentCtx', 'invokingState')
+     EMPTY = None
+
+     def __init__(self, parent:RuleContext=None, invokingState:int=-1):
+         super().__init__()
+         # What context invoked this rule?
+         self.parentCtx = parent
+         # What state invoked the rule associated with this context?
+         # The "return address" is the followState of invokingState
+         # If parent is null, this should be -1.
+         self.invokingState = invokingState
+
+
+     def depth(self):
+         n = 0
+         p = self
+         while p is not None:
+             p = p.parentCtx
+             n += 1
+         return n
+
+     # A context is empty if there is no invoking state, meaning nobody
+     # called the current context.
+     def isEmpty(self):
+         return self.invokingState == -1
+
+     # satisfy the ParseTree / SyntaxTree interface
+
+     def getSourceInterval(self):
+         return INVALID_INTERVAL
+
+     def getRuleContext(self):
+         return self
+
+     def getPayload(self):
+         return self
+
+     # Return the combined text of all child nodes. This method only considers
+     # tokens which have been added to the parse tree.
+     # <p>
+     # Since tokens on hidden channels (e.g. whitespace or comments) are not
+     # added to the parse trees, they will not appear in the output of this
+     # method.
+     #/
+     def getText(self):
+         if self.getChildCount() == 0:
+             return ""
+         with StringIO() as builder:
+             for child in self.getChildren():
+                 builder.write(child.getText())
+             return builder.getvalue()
+
+     def getRuleIndex(self):
+         return -1
+
+     # For rule associated with this parse tree internal node, return
+     # the outer alternative number used to match the input. Default
+     # implementation does not compute nor store this alt num. Create
+     # a subclass of ParserRuleContext with backing field and set
+     # option contextSuperClass to set it.
+     def getAltNumber(self):
+         return 0 # should use ATN.INVALID_ALT_NUMBER but won't compile
+
+     # Set the outer alternative number for this context node. Default
+     # implementation does nothing to avoid backing field overhead for
+     # trees that don't need it. Create
+     # a subclass of ParserRuleContext with backing field and set
+     # option contextSuperClass.
+     def setAltNumber(self, altNumber:int):
+         pass
+
+     def getChild(self, i:int):
+         return None
+
+     def getChildCount(self):
+         return 0
+
+     def getChildren(self):
+         for c in []:
+             yield c
+
+     def accept(self, visitor:ParseTreeVisitor):
+         return visitor.visitChildren(self)
+
+     # # Call this method to view a parse tree in a dialog box visually.#/
+     # public Future<JDialog> inspect(@Nullable Parser parser) {
+     #     List<String> ruleNames = parser != null ? Arrays.asList(parser.getRuleNames()) : null;
+     #     return inspect(ruleNames);
+     # }
+     #
+     # public Future<JDialog> inspect(@Nullable List<String> ruleNames) {
+     #     TreeViewer viewer = new TreeViewer(ruleNames, this);
+     #     return viewer.open();
+     # }
+     #
+     # # Save this tree in a postscript file#/
+     # public void save(@Nullable Parser parser, String fileName)
+     #     throws IOException, PrintException
+     # {
+     #     List<String> ruleNames = parser != null ? Arrays.asList(parser.getRuleNames()) : null;
+     #     save(ruleNames, fileName);
+     # }
+     #
+     # # Save this tree in a postscript file using a particular font name and size#/
+     # public void save(@Nullable Parser parser, String fileName,
+     #     String fontName, int fontSize)
+     #     throws IOException
+     # {
+     #     List<String> ruleNames = parser != null ? Arrays.asList(parser.getRuleNames()) : null;
+     #     save(ruleNames, fileName, fontName, fontSize);
+     # }
+     #
+     # # Save this tree in a postscript file#/
+     # public void save(@Nullable List<String> ruleNames, String fileName)
+     #     throws IOException, PrintException
+     # {
+     #     Trees.writePS(this, ruleNames, fileName);
+     # }
+     #
+     # # Save this tree in a postscript file using a particular font name and size#/
+     # public void save(@Nullable List<String> ruleNames, String fileName,
+     #     String fontName, int fontSize)
+     #     throws IOException
+     # {
+     #     Trees.writePS(this, ruleNames, fileName, fontName, fontSize);
+     # }
+     #
+     # # Print out a whole tree, not just a node, in LISP format
+     # # (root child1 .. childN). Print just a node if this is a leaf.
+     # # We have to know the recognizer so we can get rule names.
+     # #/
+     # @Override
+     # public String toStringTree(@Nullable Parser recog) {
+     #     return Trees.toStringTree(this, recog);
+     # }
+     #
+     # Print out a whole tree, not just a node, in LISP format
+     # (root child1 .. childN). Print just a node if this is a leaf.
+     #
+     def toStringTree(self, ruleNames:list=None, recog:Parser=None):
+         return Trees.toStringTree(self, ruleNames=ruleNames, recog=recog)
+     # }
+     #
+     # @Override
+     # public String toStringTree() {
+     #     return toStringTree((List<String>)null);
+     # }
+     #
+     def __str__(self):
+         return self.toString(None, None)
+
+     # @Override
+     # public String toString() {
+     #     return toString((List<String>)null, (RuleContext)null);
+     # }
+     #
+     # public final String toString(@Nullable Recognizer<?,?> recog) {
+     #     return toString(recog, ParserRuleContext.EMPTY);
+     # }
+     #
+     # public final String toString(@Nullable List<String> ruleNames) {
+     #     return toString(ruleNames, null);
+     # }
+     #
+     # // recog null unless ParserRuleContext, in which case we use subclass toString(...)
+     # public String toString(@Nullable Recognizer<?,?> recog, @Nullable RuleContext stop) {
+     #     String[] ruleNames = recog != null ? recog.getRuleNames() : null;
+     #     List<String> ruleNamesList = ruleNames != null ? Arrays.asList(ruleNames) : null;
+     #     return toString(ruleNamesList, stop);
+     # }
+
+     def toString(self, ruleNames:list, stop:RuleContext)->str:
+         with StringIO() as buf:
+             p = self
+             buf.write("[")
+             while p is not None and p is not stop:
+                 if ruleNames is None:
+                     if not p.isEmpty():
+                         buf.write(str(p.invokingState))
+                 else:
+                     ri = p.getRuleIndex()
+                     ruleName = ruleNames[ri] if ri >= 0 and ri < len(ruleNames) else str(ri)
+                     buf.write(ruleName)
+
+                 if p.parentCtx is not None and (ruleNames is not None or not p.parentCtx.isEmpty()):
+                     buf.write(" ")
+
+                 p = p.parentCtx
+
+             buf.write("]")
+             return buf.getvalue()
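The parent-chain walk behind `RuleContext.depth()` and `toString()` above can be sketched with a minimal stand-in class (the name `Ctx` and its attributes are illustrative, not the runtime API): each context points at the context that invoked it, forming a stack whose empty root has `invokingState == -1`.

```python
class Ctx:
    """Minimal parent-chain model of RuleContext."""
    def __init__(self, parent=None, invoking_state=-1):
        self.parent = parent
        self.invoking_state = invoking_state

    def depth(self):
        # count links up to and including the root
        n, p = 0, self
        while p is not None:
            p = p.parent
            n += 1
        return n

    def to_string(self):
        # innermost invoking state first, skipping the empty root,
        # as RuleContext.toString does with ruleNames=None
        parts, p = [], self
        while p is not None:
            if p.invoking_state != -1:
                parts.append(str(p.invoking_state))
            p = p.parent
        return "[" + " ".join(parts) + "]"

root = Ctx()
child = Ctx(root, 10)
grandchild = Ctx(child, 25)
print(grandchild.depth())      # 3
print(grandchild.to_string())  # [25 10]
```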
venv/lib/python3.10/site-packages/antlr4/StdinStream.py ADDED
@@ -0,0 +1,11 @@
+ import codecs
+ import sys
+
+ from antlr4.InputStream import InputStream
+
+
+ class StdinStream(InputStream):
+     def __init__(self, encoding:str='ascii', errors:str='strict') -> None:
+         bytes = sys.stdin.buffer.read()
+         data = codecs.decode(bytes, encoding, errors)
+         super().__init__(data)
venv/lib/python3.10/site-packages/antlr4/Token.py ADDED
@@ -0,0 +1,155 @@
1
+ # Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
2
+ # Use of this file is governed by the BSD 3-clause license that
3
+ # can be found in the LICENSE.txt file in the project root.
4
+ #
5
+
6
+ # A token has properties: text, type, line, character position in the line
7
+ # (so we can ignore tabs), token channel, index, and source from which
8
+ # we obtained this token.
9
+ from io import StringIO
10
+
11
+
12
+ class Token (object):
13
+ __slots__ = ('source', 'type', 'channel', 'start', 'stop', 'tokenIndex', 'line', 'column', '_text')
14
+
15
+ INVALID_TYPE = 0
16
+
17
+ # During lookahead operations, this "token" signifies we hit rule end ATN state
18
+ # and did not follow it despite needing to.
19
+ EPSILON = -2
20
+
21
+ MIN_USER_TOKEN_TYPE = 1
22
+
23
+ EOF = -1
24
+
25
+ # All tokens go to the parser (unless skip() is called in that rule)
26
+ # on a particular "channel". The parser tunes to a particular channel
27
+ # so that whitespace etc... can go to the parser on a "hidden" channel.
28
+
29
+ DEFAULT_CHANNEL = 0
30
+
31
+ # Anything on different channel than DEFAULT_CHANNEL is not parsed
32
+ # by parser.
33
+
34
+ HIDDEN_CHANNEL = 1
35
+
36
+ def __init__(self):
37
+ self.source = None
38
+ self.type = None # token type of the token
39
+ self.channel = None # The parser ignores everything not on DEFAULT_CHANNEL
40
+ self.start = None # optional; return -1 if not implemented.
41
+ self.stop = None # optional; return -1 if not implemented.
42
+ self.tokenIndex = None # from 0..n-1 of the token object in the input stream
43
+ self.line = None # line=1..n of the 1st character
44
+ self.column = None # beginning of the line at which it occurs, 0..n-1
45
+ self._text = None # text of the token.
46
+
47
+ @property
48
+    def text(self):
+        return self._text
+
+    # Explicitly set the text for this token. If {code text} is not
+    # {@code null}, then {@link #getText} will return this value rather than
+    # extracting the text from the input.
+    #
+    # @param text The explicit text of the token, or {@code null} if the text
+    # should be obtained from the input along with the start and stop indexes
+    # of the token.
+
+    @text.setter
+    def text(self, text:str):
+        self._text = text
+
+
+    def getTokenSource(self):
+        return self.source[0]
+
+    def getInputStream(self):
+        return self.source[1]
+
+class CommonToken(Token):
+
+    # An empty {@link Pair} which is used as the default value of
+    # {@link #source} for tokens that do not have a source.
+    EMPTY_SOURCE = (None, None)
+
+    def __init__(self, source:tuple = EMPTY_SOURCE, type:int = None, channel:int=Token.DEFAULT_CHANNEL, start:int=-1, stop:int=-1):
+        super().__init__()
+        self.source = source
+        self.type = type
+        self.channel = channel
+        self.start = start
+        self.stop = stop
+        self.tokenIndex = -1
+        if source[0] is not None:
+            self.line = source[0].line
+            self.column = source[0].column
+        else:
+            self.column = -1
+
+    # Constructs a new {@link CommonToken} as a copy of another {@link Token}.
+    #
+    # <p>
+    # If {@code oldToken} is also a {@link CommonToken} instance, the newly
+    # constructed token will share a reference to the {@link #text} field and
+    # the {@link Pair} stored in {@link #source}. Otherwise, {@link #text} will
+    # be assigned the result of calling {@link #getText}, and {@link #source}
+    # will be constructed from the result of {@link Token#getTokenSource} and
+    # {@link Token#getInputStream}.</p>
+    #
+    # @param oldToken The token to copy.
+    #
+    def clone(self):
+        t = CommonToken(self.source, self.type, self.channel, self.start, self.stop)
+        t.tokenIndex = self.tokenIndex
+        t.line = self.line
+        t.column = self.column
+        t.text = self.text
+        return t
+
+    @property
+    def text(self):
+        if self._text is not None:
+            return self._text
+        input = self.getInputStream()
+        if input is None:
+            return None
+        n = input.size
+        if self.start < n and self.stop < n:
+            return input.getText(self.start, self.stop)
+        else:
+            return "<EOF>"
+
+    @text.setter
+    def text(self, text:str):
+        self._text = text
+
+    def __str__(self):
+        with StringIO() as buf:
+            buf.write("[@")
+            buf.write(str(self.tokenIndex))
+            buf.write(",")
+            buf.write(str(self.start))
+            buf.write(":")
+            buf.write(str(self.stop))
+            buf.write("='")
+            txt = self.text
+            if txt is not None:
+                txt = txt.replace("\n","\\n")
+                txt = txt.replace("\r","\\r")
+                txt = txt.replace("\t","\\t")
+            else:
+                txt = "<no text>"
+            buf.write(txt)
+            buf.write("',<")
+            buf.write(str(self.type))
+            buf.write(">")
+            if self.channel > 0:
+                buf.write(",channel=")
+                buf.write(str(self.channel))
+            buf.write(",")
+            buf.write(str(self.line))
+            buf.write(":")
+            buf.write(str(self.column))
+            buf.write("]")
+            return buf.getvalue()
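
The `CommonToken.text` property above resolves lazily: an explicitly assigned `_text` wins, otherwise the token slices its `[start, stop]` range (a closed interval) out of the input stream, falling back to `"<EOF>"` when the offsets fall outside the stream. A minimal standalone sketch of that lookup order, using hypothetical stand-in classes rather than the antlr4 API:

```python
# Sketch of CommonToken's lazy text resolution. FakeInputStream and
# MiniToken are illustrative stand-ins, not part of antlr4.

class FakeInputStream:
    """Wraps a string, mimicking InputStream's size/getText surface."""
    def __init__(self, data):
        self.data = data
        self.size = len(data)

    def getText(self, start, stop):
        return self.data[start:stop + 1]   # closed interval, as in antlr4

class MiniToken:
    def __init__(self, stream, start, stop):
        self.stream, self.start, self.stop = stream, start, stop
        self._text = None

    @property
    def text(self):
        if self._text is not None:          # explicit override wins
            return self._text
        if self.start < self.stream.size and self.stop < self.stream.size:
            return self.stream.getText(self.start, self.stop)
        return "<EOF>"                      # offsets outside the stream

    @text.setter
    def text(self, value):
        self._text = value

stream = FakeInputStream("int x = 1;")
tok = MiniToken(stream, 4, 4)
print(tok.text)        # "x" — sliced lazily from the stream
tok.text = "y"
print(tok.text)        # "y" — explicit text overrides the stream
```

This is why `clone()` can copy `t.text = self.text`: if the original never had explicit text, the copy materializes the same slice on demand.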
venv/lib/python3.10/site-packages/antlr4/TokenStreamRewriter.py ADDED
@@ -0,0 +1,255 @@
+#
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+from io import StringIO
+from antlr4.Token import Token
+
+from antlr4.CommonTokenStream import CommonTokenStream
+
+
+class TokenStreamRewriter(object):
+    __slots__ = ('tokens', 'programs', 'lastRewriteTokenIndexes')
+
+    DEFAULT_PROGRAM_NAME = "default"
+    PROGRAM_INIT_SIZE = 100
+    MIN_TOKEN_INDEX = 0
+
+    def __init__(self, tokens):
+        """
+        :type tokens: antlr4.BufferedTokenStream.BufferedTokenStream
+        :param tokens:
+        :return:
+        """
+        super(TokenStreamRewriter, self).__init__()
+        self.tokens = tokens
+        self.programs = {self.DEFAULT_PROGRAM_NAME: []}
+        self.lastRewriteTokenIndexes = {}
+
+    def getTokenStream(self):
+        return self.tokens
+
+    def rollback(self, instruction_index, program_name):
+        ins = self.programs.get(program_name, None)
+        if ins:
+            self.programs[program_name] = ins[self.MIN_TOKEN_INDEX: instruction_index]
+
+    def deleteProgram(self, program_name=DEFAULT_PROGRAM_NAME):
+        self.rollback(self.MIN_TOKEN_INDEX, program_name)
+
+    def insertAfterToken(self, token, text, program_name=DEFAULT_PROGRAM_NAME):
+        self.insertAfter(token.tokenIndex, text, program_name)
+
+    def insertAfter(self, index, text, program_name=DEFAULT_PROGRAM_NAME):
+        op = self.InsertAfterOp(self.tokens, index + 1, text)
+        rewrites = self.getProgram(program_name)
+        op.instructionIndex = len(rewrites)
+        rewrites.append(op)
+
+    def insertBeforeIndex(self, index, text):
+        self.insertBefore(self.DEFAULT_PROGRAM_NAME, index, text)
+
+    def insertBeforeToken(self, token, text, program_name=DEFAULT_PROGRAM_NAME):
+        self.insertBefore(program_name, token.tokenIndex, text)
+
+    def insertBefore(self, program_name, index, text):
+        op = self.InsertBeforeOp(self.tokens, index, text)
+        rewrites = self.getProgram(program_name)
+        op.instructionIndex = len(rewrites)
+        rewrites.append(op)
+
+    def replaceIndex(self, index, text):
+        self.replace(self.DEFAULT_PROGRAM_NAME, index, index, text)
+
+    def replaceRange(self, from_idx, to_idx, text):
+        self.replace(self.DEFAULT_PROGRAM_NAME, from_idx, to_idx, text)
+
+    def replaceSingleToken(self, token, text):
+        self.replace(self.DEFAULT_PROGRAM_NAME, token.tokenIndex, token.tokenIndex, text)
+
+    def replaceRangeTokens(self, from_token, to_token, text, program_name=DEFAULT_PROGRAM_NAME):
+        self.replace(program_name, from_token.tokenIndex, to_token.tokenIndex, text)
+
+    def replace(self, program_name, from_idx, to_idx, text):
+        if any((from_idx > to_idx, from_idx < 0, to_idx < 0, to_idx >= len(self.tokens.tokens))):
+            raise ValueError(
+                'replace: range invalid: {}..{}(size={})'.format(from_idx, to_idx, len(self.tokens.tokens)))
+        op = self.ReplaceOp(from_idx, to_idx, self.tokens, text)
+        rewrites = self.getProgram(program_name)
+        op.instructionIndex = len(rewrites)
+        rewrites.append(op)
+
+    def deleteToken(self, token):
+        self.delete(self.DEFAULT_PROGRAM_NAME, token, token)
+
+    def deleteIndex(self, index):
+        self.delete(self.DEFAULT_PROGRAM_NAME, index, index)
+
+    def delete(self, program_name, from_idx, to_idx):
+        if isinstance(from_idx, Token):
+            self.replace(program_name, from_idx.tokenIndex, to_idx.tokenIndex, "")
+        else:
+            self.replace(program_name, from_idx, to_idx, "")
+
+    def lastRewriteTokenIndex(self, program_name=DEFAULT_PROGRAM_NAME):
+        return self.lastRewriteTokenIndexes.get(program_name, -1)
+
+    def setLastRewriteTokenIndex(self, program_name, i):
+        self.lastRewriteTokenIndexes[program_name] = i
+
+    def getProgram(self, program_name):
+        return self.programs.setdefault(program_name, [])
+
+    def getDefaultText(self):
+        return self.getText(self.DEFAULT_PROGRAM_NAME, 0, len(self.tokens.tokens) - 1)
+
+    def getText(self, program_name, start:int, stop:int):
+        """
+        :return: the text in tokens[start, stop](closed interval)
+        """
+        rewrites = self.programs.get(program_name)
+
+        # ensure start/end are in range
+        if stop > len(self.tokens.tokens) - 1:
+            stop = len(self.tokens.tokens) - 1
+        if start < 0:
+            start = 0
+
+        # if no instructions to execute
+        if not rewrites: return self.tokens.getText(start, stop)
+        buf = StringIO()
+        indexToOp = self._reduceToSingleOperationPerIndex(rewrites)
+        i = start
+        while all((i <= stop, i < len(self.tokens.tokens))):
+            op = indexToOp.pop(i, None)
+            token = self.tokens.get(i)
+            if op is None:
+                if token.type != Token.EOF: buf.write(token.text)
+                i += 1
+            else:
+                i = op.execute(buf)
+
+        if stop == len(self.tokens.tokens)-1:
+            for op in indexToOp.values():
+                if op.index >= len(self.tokens.tokens)-1: buf.write(op.text)
+
+        return buf.getvalue()
+
+    def _reduceToSingleOperationPerIndex(self, rewrites):
+        # Walk replaces
+        for i, rop in enumerate(rewrites):
+            if any((rop is None, not isinstance(rop, TokenStreamRewriter.ReplaceOp))):
+                continue
+            # Wipe prior inserts within range
+            inserts = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.InsertBeforeOp)]
+            for iop in inserts:
+                if iop.index == rop.index:
+                    rewrites[iop.instructionIndex] = None
+                    rop.text = '{}{}'.format(iop.text, rop.text)
+                elif all((iop.index > rop.index, iop.index <= rop.last_index)):
+                    rewrites[iop.instructionIndex] = None
+
+            # Drop any prior replaces contained within
+            prevReplaces = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.ReplaceOp)]
+            for prevRop in prevReplaces:
+                if all((prevRop.index >= rop.index, prevRop.last_index <= rop.last_index)):
+                    rewrites[prevRop.instructionIndex] = None
+                    continue
+                isDisjoint = any((prevRop.last_index<rop.index, prevRop.index>rop.last_index))
+                if all((prevRop.text is None, rop.text is None, not isDisjoint)):
+                    rewrites[prevRop.instructionIndex] = None
+                    rop.index = min(prevRop.index, rop.index)
+                    rop.last_index = min(prevRop.last_index, rop.last_index)
+                    print('New rop {}'.format(rop))
+                elif (not(isDisjoint)):
+                    raise ValueError("replace op boundaries of {} overlap with previous {}".format(rop, prevRop))
+
+        # Walk inserts
+        for i, iop in enumerate(rewrites):
+            if any((iop is None, not isinstance(iop, TokenStreamRewriter.InsertBeforeOp))):
+                continue
+            prevInserts = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.InsertBeforeOp)]
+            for prev_index, prevIop in enumerate(prevInserts):
+                if prevIop.index == iop.index and type(prevIop) is TokenStreamRewriter.InsertBeforeOp:
+                    iop.text += prevIop.text
+                    rewrites[prev_index] = None
+                elif prevIop.index == iop.index and type(prevIop) is TokenStreamRewriter.InsertAfterOp:
+                    iop.text = prevIop.text + iop.text
+                    rewrites[prev_index] = None
+            # look for replaces where iop.index is in range; error
+            prevReplaces = [op for op in rewrites[:i] if isinstance(op, TokenStreamRewriter.ReplaceOp)]
+            for rop in prevReplaces:
+                if iop.index == rop.index:
+                    rop.text = iop.text + rop.text
+                    rewrites[i] = None
+                    continue
+                if all((iop.index >= rop.index, iop.index <= rop.last_index)):
+                    raise ValueError("insert op {} within boundaries of previous {}".format(iop, rop))
+
+        reduced = {}
+        for i, op in enumerate(rewrites):
+            if op is None: continue
+            if reduced.get(op.index): raise ValueError('should be only one op per index')
+            reduced[op.index] = op
+
+        return reduced
+
+    class RewriteOperation(object):
+        __slots__ = ('tokens', 'index', 'text', 'instructionIndex')
+
+        def __init__(self, tokens, index, text=""):
+            """
+            :type tokens: CommonTokenStream
+            :param tokens:
+            :param index:
+            :param text:
+            :return:
+            """
+            self.tokens = tokens
+            self.index = index
+            self.text = text
+            self.instructionIndex = 0
+
+        def execute(self, buf):
+            """
+            :type buf: StringIO.StringIO
+            :param buf:
+            :return:
+            """
+            return self.index
+
+        def __str__(self):
+            return '<{}@{}:"{}">'.format(self.__class__.__name__, self.tokens.get(self.index), self.text)
+
+    class InsertBeforeOp(RewriteOperation):
+
+        def __init__(self, tokens, index, text=""):
+            super(TokenStreamRewriter.InsertBeforeOp, self).__init__(tokens, index, text)
+
+        def execute(self, buf):
+            buf.write(self.text)
+            if self.tokens.get(self.index).type != Token.EOF:
+                buf.write(self.tokens.get(self.index).text)
+            return self.index + 1
+
+    class InsertAfterOp(InsertBeforeOp):
+        pass
+
+    class ReplaceOp(RewriteOperation):
+        __slots__ = 'last_index'
+
+        def __init__(self, from_idx, to_idx, tokens, text):
+            super(TokenStreamRewriter.ReplaceOp, self).__init__(tokens, from_idx, text)
+            self.last_index = to_idx
+
+        def execute(self, buf):
+            if self.text:
+                buf.write(self.text)
+            return self.last_index + 1
+
+        def __str__(self):
+            if self.text:
+                return '<ReplaceOp@{}..{}:"{}">'.format(self.tokens.get(self.index), self.tokens.get(self.last_index),
+                                                        self.text)
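
The core idea in `TokenStreamRewriter` above is that edits never mutate the token stream: `insertBefore`/`replace`/`delete` just record operations keyed by token index, and `getText` replays them while walking the tokens. A standalone sketch of that render loop, under simplified assumptions (plain strings instead of Token objects, plain dicts instead of the op classes; the names here are illustrative, not the antlr4 API):

```python
# Minimal sketch of the deferred-edit rendering behind TokenStreamRewriter.
# Real antlr4 first reduces overlapping ops to one op per index; here we
# assume the caller already supplies non-overlapping edits.

def render(tokens, inserts_before=None, replaces=None):
    """tokens: list of token texts.
    inserts_before: {index: text} inserted before that token.
    replaces: {start: (stop, text)} replacing the closed range start..stop
    (an empty text deletes the range, as in rewriter.delete)."""
    inserts_before = inserts_before or {}
    replaces = replaces or {}
    out, i = [], 0
    while i < len(tokens):
        if i in inserts_before:
            out.append(inserts_before[i])
        if i in replaces:
            stop, text = replaces[i]
            out.append(text)
            i = stop + 1          # skip past the replaced range
        else:
            out.append(tokens[i])
            i += 1
    return "".join(out)

toks = ["int", " ", "x", " ", "=", " ", "1", ";"]
print(render(toks, replaces={6: (6, "42")}))          # int x = 42;
print(render(toks, inserts_before={0: "static "}))    # static int x = 1;
print(render(toks, replaces={3: (7, "")}))            # int x  (range deleted)
```

Because the original tokens are untouched, several named "programs" of edits can be rendered independently from the same stream, which is what the `program_name` parameter provides in the real class.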
venv/lib/python3.10/site-packages/antlr4/Utils.py ADDED
@@ -0,0 +1,33 @@
+# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+# Use of this file is governed by the BSD 3-clause license that
+# can be found in the LICENSE.txt file in the project root.
+#
+
+from io import StringIO
+
+def str_list(val):
+    with StringIO() as buf:
+        buf.write('[')
+        first = True
+        for item in val:
+            if not first:
+                buf.write(', ')
+            buf.write(str(item))
+            first = False
+        buf.write(']')
+        return buf.getvalue()
+
+def escapeWhitespace(s:str, escapeSpaces:bool):
+    with StringIO() as buf:
+        for c in s:
+            if c==' ' and escapeSpaces:
+                buf.write('\u00B7')
+            elif c=='\t':
+                buf.write("\\t")
+            elif c=='\n':
+                buf.write("\\n")
+            elif c=='\r':
+                buf.write("\\r")
+            else:
+                buf.write(c)
+        return buf.getvalue()
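
`escapeWhitespace` above makes tabs, newlines, and carriage returns visible as `\t`/`\n`/`\r`, and optionally renders spaces as U+00B7 (middle dot) so parse-tree dumps stay on one line. A table-driven re-implementation for illustration (equivalent logic, not the vendored function itself):

```python
# Illustrative re-implementation of antlr4's escapeWhitespace helper:
# map each whitespace character to a visible escape, pass everything
# else through unchanged.

def escape_whitespace(s: str, escape_spaces: bool = False) -> str:
    table = {'\t': '\\t', '\n': '\\n', '\r': '\\r'}
    if escape_spaces:
        table[' '] = '\u00B7'   # middle dot, as in the original
    return ''.join(table.get(c, c) for c in s)

print(escape_whitespace("a\tb\nc"))        # a\tb\nc  (escapes now literal text)
print(escape_whitespace("a b", True))      # a·b
```

A dict lookup per character replaces the original if/elif chain; the behavior is the same for the four handled characters.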
venv/lib/python3.10/site-packages/antlr4/__init__.py ADDED
@@ -0,0 +1,21 @@
+from antlr4.Token import Token
+from antlr4.InputStream import InputStream
+from antlr4.FileStream import FileStream
+from antlr4.StdinStream import StdinStream
+from antlr4.BufferedTokenStream import TokenStream
+from antlr4.CommonTokenStream import CommonTokenStream
+from antlr4.Lexer import Lexer
+from antlr4.Parser import Parser
+from antlr4.dfa.DFA import DFA
+from antlr4.atn.ATN import ATN
+from antlr4.atn.ATNDeserializer import ATNDeserializer
+from antlr4.atn.LexerATNSimulator import LexerATNSimulator
+from antlr4.atn.ParserATNSimulator import ParserATNSimulator
+from antlr4.atn.PredictionMode import PredictionMode
+from antlr4.PredictionContext import PredictionContextCache
+from antlr4.ParserRuleContext import RuleContext, ParserRuleContext
+from antlr4.tree.Tree import ParseTreeListener, ParseTreeVisitor, ParseTreeWalker, TerminalNode, ErrorNode, RuleNode
+from antlr4.error.Errors import RecognitionException, IllegalStateException, NoViableAltException
+from antlr4.error.ErrorStrategy import BailErrorStrategy
+from antlr4.error.DiagnosticErrorListener import DiagnosticErrorListener
+from antlr4.Utils import str_list
venv/lib/python3.10/site-packages/antlr4/__pycache__/BufferedTokenStream.cpython-310.pyc ADDED
Binary file (7.1 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/CommonTokenFactory.cpython-310.pyc ADDED
Binary file (1.42 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/CommonTokenStream.cpython-310.pyc ADDED
Binary file (1.97 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/FileStream.cpython-310.pyc ADDED
Binary file (1.06 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/InputStream.cpython-310.pyc ADDED
Binary file (2.83 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/IntervalSet.cpython-310.pyc ADDED
Binary file (4.79 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/LL1Analyzer.cpython-310.pyc ADDED
Binary file (3.35 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/Lexer.cpython-310.pyc ADDED
Binary file (7.78 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/ListTokenSource.cpython-310.pyc ADDED
Binary file (2.67 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/Parser.cpython-310.pyc ADDED
Binary file (13.4 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/ParserInterpreter.cpython-310.pyc ADDED
Binary file (5.12 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/ParserRuleContext.cpython-310.pyc ADDED
Binary file (4.73 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/PredictionContext.cpython-310.pyc ADDED
Binary file (11.8 kB). View file
 
venv/lib/python3.10/site-packages/antlr4/__pycache__/Recognizer.cpython-310.pyc ADDED
Binary file (4.51 kB). View file