ZTWHHH committed on
Commit ac48ba4 · verified · 1 parent: 7c70d58

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. evalkit_eagle/lib/python3.10/site-packages/blib2to3/Grammar.txt +256 -0
  2. evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/__init__.cpython-310.pyc +0 -0
  3. evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/pygram.cpython-310.pyc +0 -0
  4. evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/pytree.cpython-310.pyc +0 -0
  5. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/__init__.py +4 -0
  6. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/driver.cpython-310-x86_64-linux-gnu.so +0 -0
  7. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/driver.py +316 -0
  8. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/grammar.py +227 -0
  9. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/literals.cpython-310-x86_64-linux-gnu.so +0 -0
  10. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/parse.py +411 -0
  11. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/pgen.py +428 -0
  12. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/token.py +88 -0
  13. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pygram.py +200 -0
  14. evalkit_eagle/lib/python3.10/site-packages/blib2to3/pytree.py +983 -0
  15. janus/share/terminfo/b/beehive4 +0 -0
  16. janus/share/terminfo/b/bg1.25nv +0 -0
  17. janus/share/terminfo/h/h100bw +0 -0
  18. janus/share/terminfo/h/h19-a +0 -0
  19. janus/share/terminfo/h/h19a +0 -0
  20. janus/share/terminfo/h/h19us +0 -0
  21. janus/share/terminfo/h/h29a-nkc-uc +0 -0
  22. janus/share/terminfo/h/h80 +0 -0
  23. janus/share/terminfo/h/hds200 +0 -0
  24. janus/share/terminfo/h/he80 +0 -0
  25. janus/share/terminfo/h/heath-ansi +0 -0
  26. janus/share/terminfo/h/hp+pfk+arrows +0 -0
  27. janus/share/terminfo/h/hp110 +0 -0
  28. janus/share/terminfo/h/hp2621 +0 -0
  29. janus/share/terminfo/h/hp2621-ba +0 -0
  30. janus/share/terminfo/h/hp2621-fl +0 -0
  31. janus/share/terminfo/h/hp2621b +0 -0
  32. janus/share/terminfo/h/hp2621b-kx +0 -0
  33. janus/share/terminfo/h/hp2621b-kx-p +0 -0
  34. janus/share/terminfo/h/hp2621b-p +0 -0
  35. janus/share/terminfo/h/hp2623a +0 -0
  36. janus/share/terminfo/h/hp2624a-10p +0 -0
  37. janus/share/terminfo/h/hp2626-12-s +0 -0
  38. janus/share/terminfo/h/hp2626p +0 -0
  39. janus/share/terminfo/h/hp2627a +0 -0
  40. janus/share/terminfo/h/hp2640a +0 -0
  41. janus/share/terminfo/h/hp45 +0 -0
  42. janus/share/terminfo/h/hp9837 +0 -0
  43. janus/share/terminfo/h/hpterm +0 -0
  44. janus/share/terminfo/h/hz1000 +0 -0
  45. janus/share/terminfo/h/hz1500 +0 -0
  46. janus/share/terminfo/j/jaixterm-m +0 -0
  47. janus/share/terminfo/j/jfbterm +0 -0
  48. janus/share/terminfo/p/p14 +0 -0
  49. janus/share/terminfo/p/p5 +0 -0
  50. janus/share/terminfo/p/p9 +0 -0
evalkit_eagle/lib/python3.10/site-packages/blib2to3/Grammar.txt ADDED
@@ -0,0 +1,256 @@
+ # Grammar for 2to3. This grammar supports Python 2.x and 3.x.
+
+ # NOTE WELL: You should also follow all the steps listed at
+ # https://devguide.python.org/grammar/
+
+ # Start symbols for the grammar:
+ # file_input is a module or sequence of commands read from an input file;
+ # single_input is a single interactive statement;
+ # eval_input is the input for the eval() and input() functions.
+ # NB: compound_stmt in single_input is followed by extra NEWLINE!
+ file_input: (NEWLINE | stmt)* ENDMARKER
+ single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
+ eval_input: testlist NEWLINE* ENDMARKER
+
+ typevar: NAME [':' expr]
+ paramspec: '**' NAME
+ typevartuple: '*' NAME
+ typeparam: typevar | paramspec | typevartuple
+ typeparams: '[' typeparam (',' typeparam)* [','] ']'
+
+ decorator: '@' namedexpr_test NEWLINE
+ decorators: decorator+
+ decorated: decorators (classdef | funcdef | async_funcdef)
+ async_funcdef: ASYNC funcdef
+ funcdef: 'def' NAME [typeparams] parameters ['->' test] ':' suite
+ parameters: '(' [typedargslist] ')'
+
+ # The following definition for typedarglist is equivalent to this set of rules:
+ #
+ #     arguments = argument (',' argument)*
+ #     argument = tfpdef ['=' test]
+ #     kwargs = '**' tname [',']
+ #     args = '*' [tname_star]
+ #     kwonly_kwargs = (',' argument)* [',' [kwargs]]
+ #     args_kwonly_kwargs = args kwonly_kwargs | kwargs
+ #     poskeyword_args_kwonly_kwargs = arguments [',' [args_kwonly_kwargs]]
+ #     typedargslist_no_posonly = poskeyword_args_kwonly_kwargs | args_kwonly_kwargs
+ #     typedarglist = arguments ',' '/' [',' [typedargslist_no_posonly]])|(typedargslist_no_posonly)"
+ #
+ # It needs to be fully expanded to allow our LL(1) parser to work on it.
+
+ typedargslist: tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [
+                      ',' [((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])*
+                             [',' ['**' tname [',']]] | '**' tname [','])
+                            | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])]
+                ] | ((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])*
+                      [',' ['**' tname [',']]] | '**' tname [','])
+                     | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
+
+ tname: NAME [':' test]
+ tname_star: NAME [':' (test|star_expr)]
+ tfpdef: tname | '(' tfplist ')'
+ tfplist: tfpdef (',' tfpdef)* [',']
+
+ # The following definition for varargslist is equivalent to this set of rules:
+ #
+ #     arguments = argument (',' argument )*
+ #     argument = vfpdef ['=' test]
+ #     kwargs = '**' vname [',']
+ #     args = '*' [vname]
+ #     kwonly_kwargs = (',' argument )* [',' [kwargs]]
+ #     args_kwonly_kwargs = args kwonly_kwargs | kwargs
+ #     poskeyword_args_kwonly_kwargs = arguments [',' [args_kwonly_kwargs]]
+ #     vararglist_no_posonly = poskeyword_args_kwonly_kwargs | args_kwonly_kwargs
+ #     varargslist = arguments ',' '/' [','[(vararglist_no_posonly)]] | (vararglist_no_posonly)
+ #
+ # It needs to be fully expanded to allow our LL(1) parser to work on it.
+
+ varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [
+                      ((vfpdef ['=' test] ',')* ('*' [vname] (',' vname ['=' test])*
+                       [',' ['**' vname [',']]] | '**' vname [','])
+                       | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
+                      ]] | ((vfpdef ['=' test] ',')*
+                      ('*' [vname] (',' vname ['=' test])* [',' ['**' vname [',']]]| '**' vname [','])
+                      | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
+
+ vname: NAME
+ vfpdef: vname | '(' vfplist ')'
+ vfplist: vfpdef (',' vfpdef)* [',']
+
+ stmt: simple_stmt | compound_stmt
+ simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
+ small_stmt: (type_stmt | expr_stmt | del_stmt | pass_stmt | flow_stmt |
+              import_stmt | global_stmt | assert_stmt)
+ expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
+                      ('=' (yield_expr|testlist_star_expr))*)
+ annassign: ':' test ['=' (yield_expr|testlist_star_expr)]
+ testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
+ augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
+             '<<=' | '>>=' | '**=' | '//=')
+ # For normal and annotated assignments, additional restrictions enforced by the interpreter
+ del_stmt: 'del' exprlist
+ pass_stmt: 'pass'
+ flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
+ break_stmt: 'break'
+ continue_stmt: 'continue'
+ return_stmt: 'return' [testlist_star_expr]
+ yield_stmt: yield_expr
+ raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
+ import_stmt: import_name | import_from
+ import_name: 'import' dotted_as_names
+ import_from: ('from' ('.'* dotted_name | '.'+)
+               'import' ('*' | '(' import_as_names ')' | import_as_names))
+ import_as_name: NAME ['as' NAME]
+ dotted_as_name: dotted_name ['as' NAME]
+ import_as_names: import_as_name (',' import_as_name)* [',']
+ dotted_as_names: dotted_as_name (',' dotted_as_name)*
+ dotted_name: NAME ('.' NAME)*
+ global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
+ assert_stmt: 'assert' test [',' test]
+ type_stmt: "type" NAME [typeparams] '=' test
+
+ compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt | match_stmt
+ async_stmt: ASYNC (funcdef | with_stmt | for_stmt)
+ if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]
+ while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]
+ for_stmt: 'for' exprlist 'in' testlist_star_expr ':' suite ['else' ':' suite]
+ try_stmt: ('try' ':' suite
+            ((except_clause ':' suite)+
+             ['else' ':' suite]
+             ['finally' ':' suite] |
+            'finally' ':' suite))
+ with_stmt: 'with' asexpr_test (',' asexpr_test)* ':' suite
+
+ # NB compile.c makes sure that the default except clause is last
+ except_clause: 'except' ['*'] [test [(',' | 'as') test]]
+ suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
+
+ # Backward compatibility cruft to support:
+ # [ x for x in lambda: True, lambda: False if x() ]
+ # even while also allowing:
+ # lambda x: 5 if x else 2
+ # (But not a mix of the two)
+ testlist_safe: old_test [(',' old_test)+ [',']]
+ old_test: or_test | old_lambdef
+ old_lambdef: 'lambda' [varargslist] ':' old_test
+
+ namedexpr_test: asexpr_test [':=' asexpr_test]
+
+ # This is actually not a real rule, though since the parser is very
+ # limited in terms of the strategy about match/case rules, we are inserting
+ # a virtual case (<expr> as <expr>) as a valid expression. Unless a better
+ # approach is thought, the only side effect of this seem to be just allowing
+ # more stuff to be parser (which would fail on the ast).
+ asexpr_test: test ['as' test]
+
+ test: or_test ['if' or_test 'else' test] | lambdef
+ or_test: and_test ('or' and_test)*
+ and_test: not_test ('and' not_test)*
+ not_test: 'not' not_test | comparison
+ comparison: expr (comp_op expr)*
+ comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
+ star_expr: '*' expr
+ expr: xor_expr ('|' xor_expr)*
+ xor_expr: and_expr ('^' and_expr)*
+ and_expr: shift_expr ('&' shift_expr)*
+ shift_expr: arith_expr (('<<'|'>>') arith_expr)*
+ arith_expr: term (('+'|'-') term)*
+ term: factor (('*'|'@'|'/'|'%'|'//') factor)*
+ factor: ('+'|'-'|'~') factor | power
+ power: [AWAIT] atom trailer* ['**' factor]
+ atom: ('(' [yield_expr|testlist_gexp] ')' |
+        '[' [listmaker] ']' |
+        '{' [dictsetmaker] '}' |
+        '`' testlist1 '`' |
+        NAME | NUMBER | STRING+ | '.' '.' '.')
+ listmaker: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] )
+ testlist_gexp: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] )
+ lambdef: 'lambda' [varargslist] ':' test
+ trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
+ subscriptlist: (subscript|star_expr) (',' (subscript|star_expr))* [',']
+ subscript: test [':=' test] | [test] ':' [test] [sliceop]
+ sliceop: ':' [test]
+ exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
+ testlist: test (',' test)* [',']
+ dictsetmaker: ( ((test ':' asexpr_test | '**' expr)
+                  (comp_for | (',' (test ':' asexpr_test | '**' expr))* [','])) |
+                 ((test [':=' test] | star_expr)
+                  (comp_for | (',' (test [':=' test] | star_expr))* [','])) )
+
+ classdef: 'class' NAME [typeparams] ['(' [arglist] ')'] ':' suite
+
+ arglist: argument (',' argument)* [',']
+
+ # "test '=' test" is really "keyword '=' test", but we have no such token.
+ # These need to be in a single rule to avoid grammar that is ambiguous
+ # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
+ # we explicitly match '*' here, too, to give it proper precedence.
+ # Illegal combinations and orderings are blocked in ast.c:
+ # multiple (test comp_for) arguments are blocked; keyword unpackings
+ # that precede iterable unpackings are blocked; etc.
+ argument: ( test [comp_for] |
+             test ':=' test [comp_for] |
+             test 'as' test |
+             test '=' asexpr_test |
+             '**' test |
+             '*' test )
+
+ comp_iter: comp_for | comp_if
+ comp_for: [ASYNC] 'for' exprlist 'in' or_test [comp_iter]
+ comp_if: 'if' old_test [comp_iter]
+
+ # As noted above, testlist_safe extends the syntax allowed in list
+ # comprehensions and generators. We can't use it indiscriminately in all
+ # derivations using a comp_for-like pattern because the testlist_safe derivation
+ # contains comma which clashes with trailing comma in arglist.
+ #
+ # This was an issue because the parser would not follow the correct derivation
+ # when parsing syntactically valid Python code. Since testlist_safe was created
+ # specifically to handle list comprehensions and generator expressions enclosed
+ # with parentheses, it's safe to only use it in those. That avoids the issue; we
+ # can parse code like set(x for x in [],).
+ #
+ # The syntax supported by this set of rules is not a valid Python 3 syntax,
+ # hence the prefix "old".
+ #
+ # See https://bugs.python.org/issue27494
+ old_comp_iter: old_comp_for | old_comp_if
+ old_comp_for: [ASYNC] 'for' exprlist 'in' testlist_safe [old_comp_iter]
+ old_comp_if: 'if' old_test [old_comp_iter]
+
+ testlist1: test (',' test)*
+
+ # not used in grammar, but may appear in "node" passed from Parser to Compiler
+ encoding_decl: NAME
+
+ yield_expr: 'yield' [yield_arg]
+ yield_arg: 'from' test | testlist_star_expr
+
+
+ # 3.10 match statement definition
+
+ # PS: normally the grammar is much much more restricted, but
+ # at this moment for not trying to bother much with encoding the
+ # exact same DSL in a LL(1) parser, we will just accept an expression
+ # and let the ast.parse() step of the safe mode to reject invalid
+ # grammar.
+
+ # The reason why it is more restricted is that, patterns are some
+ # sort of a DSL (more advanced than our LHS on assignments, but
+ # still in a very limited python subset). They are not really
+ # expressions, but who cares. If we can parse them, that is enough
+ # to reformat them.
+
+ match_stmt: "match" subject_expr ':' NEWLINE INDENT case_block+ DEDENT
+
+ # This is more permissive than the actual version. For example it
+ # accepts `match *something:`, even though single-item starred expressions
+ # are forbidden.
+ subject_expr: (namedexpr_test|star_expr) (',' (namedexpr_test|star_expr))* [',']
+
+ # cases
+ case_block: "case" patterns [guard] ':' suite
+ guard: 'if' namedexpr_test
+ patterns: pattern (',' pattern)* [',']
+ pattern: (expr|star_expr) ['as' expr]
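Note: this Grammar.txt is the input that pgen2 (added below) compiles into parse tables. A minimal sketch of how it is consumed, assuming the blib2to3 package from this commit is importable; the `x = 1` snippet is just an arbitrary input:

    import os
    import blib2to3
    from blib2to3.pgen2 import driver

    # Locate the Grammar.txt added above, next to the package sources.
    gt = os.path.join(os.path.dirname(blib2to3.__file__), "Grammar.txt")
    # save=False keeps load_grammar from writing its pickle cache to disk.
    g = driver.load_grammar(gt, save=False)
    tree = driver.Driver(g).parse_string("x = 1\n")
    print(tree)  # the concrete syntax tree prints back as the source text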
evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (167 Bytes).
 
evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/pygram.cpython-310.pyc ADDED
Binary file (4.53 kB).
 
evalkit_eagle/lib/python3.10/site-packages/blib2to3/__pycache__/pytree.cpython-310.pyc ADDED
Binary file (27.9 kB).
 
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/__init__.py ADDED
@@ -0,0 +1,4 @@
+ # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """The pgen2 package."""
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/driver.cpython-310-x86_64-linux-gnu.so ADDED
Binary file (8.26 kB).
 
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/driver.py ADDED
@@ -0,0 +1,316 @@
+ # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ # Modifications:
+ # Copyright 2006 Google, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """Parser driver.
+
+ This provides a high-level interface to parse a file into a syntax tree.
+
+ """
+
+ __author__ = "Guido van Rossum <guido@python.org>"
+
+ __all__ = ["Driver", "load_grammar"]
+
+ # Python imports
+ import io
+ import logging
+ import os
+ import pkgutil
+ import sys
+ from contextlib import contextmanager
+ from dataclasses import dataclass, field
+ from logging import Logger
+ from typing import IO, Any, Iterable, Iterator, List, Optional, Tuple, Union, cast
+
+ from blib2to3.pgen2.grammar import Grammar
+ from blib2to3.pgen2.tokenize import GoodTokenInfo
+ from blib2to3.pytree import NL
+
+ # Pgen imports
+ from . import grammar, parse, pgen, token, tokenize
+
+ Path = Union[str, "os.PathLike[str]"]
+
+
+ @dataclass
+ class ReleaseRange:
+     start: int
+     end: Optional[int] = None
+     tokens: List[Any] = field(default_factory=list)
+
+     def lock(self) -> None:
+         total_eaten = len(self.tokens)
+         self.end = self.start + total_eaten
+
+
+ class TokenProxy:
+     def __init__(self, generator: Any) -> None:
+         self._tokens = generator
+         self._counter = 0
+         self._release_ranges: List[ReleaseRange] = []
+
+     @contextmanager
+     def release(self) -> Iterator["TokenProxy"]:
+         release_range = ReleaseRange(self._counter)
+         self._release_ranges.append(release_range)
+         try:
+             yield self
+         finally:
+             # Lock the last release range to the final position that
+             # has been eaten.
+             release_range.lock()
+
+     def eat(self, point: int) -> Any:
+         eaten_tokens = self._release_ranges[-1].tokens
+         if point < len(eaten_tokens):
+             return eaten_tokens[point]
+         else:
+             while point >= len(eaten_tokens):
+                 token = next(self._tokens)
+                 eaten_tokens.append(token)
+             return token
+
+     def __iter__(self) -> "TokenProxy":
+         return self
+
+     def __next__(self) -> Any:
+         # If the current position is already compromised (looked up)
+         # return the eaten token, if not just go further on the given
+         # token producer.
+         for release_range in self._release_ranges:
+             assert release_range.end is not None
+
+             start, end = release_range.start, release_range.end
+             if start <= self._counter < end:
+                 token = release_range.tokens[self._counter - start]
+                 break
+         else:
+             token = next(self._tokens)
+         self._counter += 1
+         return token
+
+     def can_advance(self, to: int) -> bool:
+         # Try to eat, fail if it can't. The eat operation is cached
+         # so there won't be any additional cost of eating here
+         try:
+             self.eat(to)
+         except StopIteration:
+             return False
+         else:
+             return True
+
+
+ class Driver:
+     def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None:
+         self.grammar = grammar
+         if logger is None:
+             logger = logging.getLogger(__name__)
+         self.logger = logger
+
+     def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) -> NL:
+         """Parse a series of tokens and return the syntax tree."""
+         # XXX Move the prefix computation into a wrapper around tokenize.
+         proxy = TokenProxy(tokens)
+
+         p = parse.Parser(self.grammar)
+         p.setup(proxy=proxy)
+
+         lineno = 1
+         column = 0
+         indent_columns: List[int] = []
+         type = value = start = end = line_text = None
+         prefix = ""
+
+         for quintuple in proxy:
+             type, value, start, end, line_text = quintuple
+             if start != (lineno, column):
+                 assert (lineno, column) <= start, ((lineno, column), start)
+                 s_lineno, s_column = start
+                 if lineno < s_lineno:
+                     prefix += "\n" * (s_lineno - lineno)
+                     lineno = s_lineno
+                     column = 0
+                 if column < s_column:
+                     prefix += line_text[column:s_column]
+                     column = s_column
+             if type in (tokenize.COMMENT, tokenize.NL):
+                 prefix += value
+                 lineno, column = end
+                 if value.endswith("\n"):
+                     lineno += 1
+                     column = 0
+                 continue
+             if type == token.OP:
+                 type = grammar.opmap[value]
+             if debug:
+                 assert type is not None
+                 self.logger.debug(
+                     "%s %r (prefix=%r)", token.tok_name[type], value, prefix
+                 )
+             if type == token.INDENT:
+                 indent_columns.append(len(value))
+                 _prefix = prefix + value
+                 prefix = ""
+                 value = ""
+             elif type == token.DEDENT:
+                 _indent_col = indent_columns.pop()
+                 prefix, _prefix = self._partially_consume_prefix(prefix, _indent_col)
+             if p.addtoken(cast(int, type), value, (prefix, start)):
+                 if debug:
+                     self.logger.debug("Stop.")
+                 break
+             prefix = ""
+             if type in {token.INDENT, token.DEDENT}:
+                 prefix = _prefix
+             lineno, column = end
+             if value.endswith("\n"):
+                 lineno += 1
+                 column = 0
+         else:
+             # We never broke out -- EOF is too soon (how can this happen???)
+             assert start is not None
+             raise parse.ParseError("incomplete input", type, value, (prefix, start))
+         assert p.rootnode is not None
+         return p.rootnode
+
+     def parse_stream_raw(self, stream: IO[str], debug: bool = False) -> NL:
+         """Parse a stream and return the syntax tree."""
+         tokens = tokenize.generate_tokens(stream.readline, grammar=self.grammar)
+         return self.parse_tokens(tokens, debug)
+
+     def parse_stream(self, stream: IO[str], debug: bool = False) -> NL:
+         """Parse a stream and return the syntax tree."""
+         return self.parse_stream_raw(stream, debug)
+
+     def parse_file(
+         self, filename: Path, encoding: Optional[str] = None, debug: bool = False
+     ) -> NL:
+         """Parse a file and return the syntax tree."""
+         with open(filename, encoding=encoding) as stream:
+             return self.parse_stream(stream, debug)
+
+     def parse_string(self, text: str, debug: bool = False) -> NL:
+         """Parse a string and return the syntax tree."""
+         tokens = tokenize.generate_tokens(
+             io.StringIO(text).readline, grammar=self.grammar
+         )
+         return self.parse_tokens(tokens, debug)
+
+     def _partially_consume_prefix(self, prefix: str, column: int) -> Tuple[str, str]:
+         lines: List[str] = []
+         current_line = ""
+         current_column = 0
+         wait_for_nl = False
+         for char in prefix:
+             current_line += char
+             if wait_for_nl:
+                 if char == "\n":
+                     if current_line.strip() and current_column < column:
+                         res = "".join(lines)
+                         return res, prefix[len(res) :]
+
+                     lines.append(current_line)
+                     current_line = ""
+                     current_column = 0
+                     wait_for_nl = False
+             elif char in " \t":
+                 current_column += 1
+             elif char == "\n":
+                 # unexpected empty line
+                 current_column = 0
+             elif char == "\f":
+                 current_column = 0
+             else:
+                 # indent is finished
+                 wait_for_nl = True
+         return "".join(lines), current_line
+
+
+ def _generate_pickle_name(gt: Path, cache_dir: Optional[Path] = None) -> str:
+     head, tail = os.path.splitext(gt)
+     if tail == ".txt":
+         tail = ""
+     name = head + tail + ".".join(map(str, sys.version_info)) + ".pickle"
+     if cache_dir:
+         return os.path.join(cache_dir, os.path.basename(name))
+     else:
+         return name
+
+
+ def load_grammar(
+     gt: str = "Grammar.txt",
+     gp: Optional[str] = None,
+     save: bool = True,
+     force: bool = False,
+     logger: Optional[Logger] = None,
+ ) -> Grammar:
+     """Load the grammar (maybe from a pickle)."""
+     if logger is None:
+         logger = logging.getLogger(__name__)
+     gp = _generate_pickle_name(gt) if gp is None else gp
+     if force or not _newer(gp, gt):
+         g: grammar.Grammar = pgen.generate_grammar(gt)
+         if save:
+             try:
+                 g.dump(gp)
+             except OSError:
+                 # Ignore error, caching is not vital.
+                 pass
+     else:
+         g = grammar.Grammar()
+         g.load(gp)
+     return g
+
+
+ def _newer(a: str, b: str) -> bool:
+     """Inquire whether file a was written since file b."""
+     if not os.path.exists(a):
+         return False
+     if not os.path.exists(b):
+         return True
+     return os.path.getmtime(a) >= os.path.getmtime(b)
+
+
+ def load_packaged_grammar(
+     package: str, grammar_source: str, cache_dir: Optional[Path] = None
+ ) -> grammar.Grammar:
+     """Normally, loads a pickled grammar by doing
+         pkgutil.get_data(package, pickled_grammar)
+     where *pickled_grammar* is computed from *grammar_source* by adding the
+     Python version and using a ``.pickle`` extension.
+
+     However, if *grammar_source* is an extant file, load_grammar(grammar_source)
+     is called instead. This facilitates using a packaged grammar file when needed
+     but preserves load_grammar's automatic regeneration behavior when possible.
+
+     """
+     if os.path.isfile(grammar_source):
+         gp = _generate_pickle_name(grammar_source, cache_dir) if cache_dir else None
+         return load_grammar(grammar_source, gp=gp)
+     pickled_name = _generate_pickle_name(os.path.basename(grammar_source), cache_dir)
+     data = pkgutil.get_data(package, pickled_name)
+     assert data is not None
+     g = grammar.Grammar()
+     g.loads(data)
+     return g
+
+
+ def main(*args: str) -> bool:
+     """Main program, when run as a script: produce grammar pickle files.
+
+     Calls load_grammar for each argument, a path to a grammar text file.
+     """
+     if not args:
+         args = tuple(sys.argv[1:])
+     logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(message)s")
+     for gt in args:
+         load_grammar(gt, save=True, force=True)
+     return True
+
+
+ if __name__ == "__main__":
+     sys.exit(int(not main()))
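Aside: the TokenProxy/ReleaseRange machinery above is what gives the parser cheap lookahead with replay. A minimal sketch of the behavior, using plain strings as stand-in tokens (real callers feed tokenizer quintuples):

    from blib2to3.pgen2.driver import TokenProxy

    proxy = TokenProxy(iter(["a", "b", "c"]))
    with proxy.release():
        print(proxy.eat(0), proxy.eat(1))  # peek ahead: a b
        print(proxy.can_advance(5))        # False -- the stream has only three tokens
    # After the release block, iteration replays the cached tokens first.
    print(next(proxy))  # a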
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/grammar.py ADDED
@@ -0,0 +1,227 @@
+ # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """This module defines the data structures used to represent a grammar.
+
+ These are a bit arcane because they are derived from the data
+ structures used by Python's 'pgen' parser generator.
+
+ There's also a table here mapping operators to their names in the
+ token module; the Python tokenize module reports all operators as the
+ fallback token code OP, but the parser needs the actual token code.
+
+ """
+
+ # Python imports
+ import os
+ import pickle
+ import tempfile
+ from typing import Any, Dict, List, Optional, Tuple, TypeVar, Union
+
+ # Local imports
+ from . import token
+
+ _P = TypeVar("_P", bound="Grammar")
+ Label = Tuple[int, Optional[str]]
+ DFA = List[List[Tuple[int, int]]]
+ DFAS = Tuple[DFA, Dict[int, int]]
+ Path = Union[str, "os.PathLike[str]"]
+
+
+ class Grammar:
+     """Pgen parsing tables conversion class.
+
+     Once initialized, this class supplies the grammar tables for the
+     parsing engine implemented by parse.py. The parsing engine
+     accesses the instance variables directly. The class here does not
+     provide initialization of the tables; several subclasses exist to
+     do this (see the conv and pgen modules).
+
+     The load() method reads the tables from a pickle file, which is
+     much faster than the other ways offered by subclasses. The pickle
+     file is written by calling dump() (after loading the grammar
+     tables using a subclass). The report() method prints a readable
+     representation of the tables to stdout, for debugging.
+
+     The instance variables are as follows:
+
+     symbol2number -- a dict mapping symbol names to numbers. Symbol
+                      numbers are always 256 or higher, to distinguish
+                      them from token numbers, which are between 0 and
+                      255 (inclusive).
+
+     number2symbol -- a dict mapping numbers to symbol names;
+                      these two are each other's inverse.
+
+     states        -- a list of DFAs, where each DFA is a list of
+                      states, each state is a list of arcs, and each
+                      arc is a (i, j) pair where i is a label and j is
+                      a state number. The DFA number is the index into
+                      this list. (This name is slightly confusing.)
+                      Final states are represented by a special arc of
+                      the form (0, j) where j is its own state number.
+
+     dfas          -- a dict mapping symbol numbers to (DFA, first)
+                      pairs, where DFA is an item from the states list
+                      above, and first is a set of tokens that can
+                      begin this grammar rule (represented by a dict
+                      whose values are always 1).
+
+     labels        -- a list of (x, y) pairs where x is either a token
+                      number or a symbol number, and y is either None
+                      or a string; the strings are keywords. The label
+                      number is the index in this list; label numbers
+                      are used to mark state transitions (arcs) in the
+                      DFAs.
+
+     start         -- the number of the grammar's start symbol.
+
+     keywords      -- a dict mapping keyword strings to arc labels.
+
+     tokens        -- a dict mapping token numbers to arc labels.
+
+     """
+
+     def __init__(self) -> None:
+         self.symbol2number: Dict[str, int] = {}
+         self.number2symbol: Dict[int, str] = {}
+         self.states: List[DFA] = []
+         self.dfas: Dict[int, DFAS] = {}
+         self.labels: List[Label] = [(0, "EMPTY")]
+         self.keywords: Dict[str, int] = {}
+         self.soft_keywords: Dict[str, int] = {}
+         self.tokens: Dict[int, int] = {}
+         self.symbol2label: Dict[str, int] = {}
+         self.version: Tuple[int, int] = (0, 0)
+         self.start = 256
+         # Python 3.7+ parses async as a keyword, not an identifier
+         self.async_keywords = False
+
+     def dump(self, filename: Path) -> None:
+         """Dump the grammar tables to a pickle file."""
+
+         # mypyc generates objects that don't have a __dict__, but they
+         # do have __getstate__ methods that will return an equivalent
+         # dictionary
+         if hasattr(self, "__dict__"):
+             d = self.__dict__
+         else:
+             d = self.__getstate__()  # type: ignore
+
+         with tempfile.NamedTemporaryFile(
+             dir=os.path.dirname(filename), delete=False
+         ) as f:
+             pickle.dump(d, f, pickle.HIGHEST_PROTOCOL)
+         os.replace(f.name, filename)
+
+     def _update(self, attrs: Dict[str, Any]) -> None:
+         for k, v in attrs.items():
+             setattr(self, k, v)
+
+     def load(self, filename: Path) -> None:
+         """Load the grammar tables from a pickle file."""
+         with open(filename, "rb") as f:
+             d = pickle.load(f)
+         self._update(d)
+
+     def loads(self, pkl: bytes) -> None:
+         """Load the grammar tables from a pickle bytes object."""
+         self._update(pickle.loads(pkl))
+
+     def copy(self: _P) -> _P:
+         """
+         Copy the grammar.
+         """
+         new = self.__class__()
+         for dict_attr in (
+             "symbol2number",
+             "number2symbol",
+             "dfas",
+             "keywords",
+             "soft_keywords",
+             "tokens",
+             "symbol2label",
+         ):
+             setattr(new, dict_attr, getattr(self, dict_attr).copy())
+         new.labels = self.labels[:]
+         new.states = self.states[:]
+         new.start = self.start
+         new.version = self.version
+         new.async_keywords = self.async_keywords
+         return new
+
+     def report(self) -> None:
+         """Dump the grammar tables to standard output, for debugging."""
+         from pprint import pprint
+
+         print("s2n")
+         pprint(self.symbol2number)
+         print("n2s")
+         pprint(self.number2symbol)
+         print("states")
+         pprint(self.states)
+         print("dfas")
+         pprint(self.dfas)
+         print("labels")
+         pprint(self.labels)
+         print("start", self.start)
+
+
+ # Map from operator to number (since tokenize doesn't do this)
+
+ opmap_raw = """
+ ( LPAR
+ ) RPAR
+ [ LSQB
+ ] RSQB
+ : COLON
+ , COMMA
+ ; SEMI
+ + PLUS
+ - MINUS
+ * STAR
+ / SLASH
+ | VBAR
+ & AMPER
+ < LESS
+ > GREATER
+ = EQUAL
+ . DOT
+ % PERCENT
+ ` BACKQUOTE
+ { LBRACE
+ } RBRACE
+ @ AT
+ @= ATEQUAL
+ == EQEQUAL
+ != NOTEQUAL
+ <> NOTEQUAL
+ <= LESSEQUAL
+ >= GREATEREQUAL
+ ~ TILDE
+ ^ CIRCUMFLEX
+ << LEFTSHIFT
+ >> RIGHTSHIFT
+ ** DOUBLESTAR
+ += PLUSEQUAL
+ -= MINEQUAL
+ *= STAREQUAL
+ /= SLASHEQUAL
+ %= PERCENTEQUAL
+ &= AMPEREQUAL
+ |= VBAREQUAL
+ ^= CIRCUMFLEXEQUAL
+ <<= LEFTSHIFTEQUAL
+ >>= RIGHTSHIFTEQUAL
+ **= DOUBLESTAREQUAL
+ // DOUBLESLASH
+ //= DOUBLESLASHEQUAL
+ -> RARROW
+ := COLONEQUAL
+ """
+
+ opmap = {}
+ for line in opmap_raw.splitlines():
+     if line:
+         op, name = line.split()
+         opmap[op] = getattr(token, name)
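For illustration, Grammar's pickle round trip can be exercised directly. A minimal sketch; the toy table entry is hypothetical, real tables come from pgen:

    import os
    import tempfile

    from blib2to3.pgen2.grammar import Grammar

    g = Grammar()
    g.symbol2number["file_input"] = 256  # hypothetical entry; pgen fills real tables
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "toy.pickle")
        g.dump(path)  # written atomically via NamedTemporaryFile + os.replace
        restored = Grammar()
        restored.load(path)
    print(restored.symbol2number)  # {'file_input': 256}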
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/literals.cpython-310-x86_64-linux-gnu.so ADDED
Binary file (8.26 kB).
 
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/parse.py ADDED
@@ -0,0 +1,411 @@
+ # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """Parser engine for the grammar tables generated by pgen.
+
+ The grammar table must be loaded first.
+
+ See Parser/parser.c in the Python distribution for additional info on
+ how this parsing engine works.
+
+ """
+ from contextlib import contextmanager
+ from typing import (
+     TYPE_CHECKING,
+     Any,
+     Callable,
+     Dict,
+     Iterator,
+     List,
+     Optional,
+     Set,
+     Tuple,
+     Union,
+     cast,
+ )
+
+ from blib2to3.pgen2.grammar import Grammar
+ from blib2to3.pytree import NL, Context, Leaf, Node, RawNode, convert
+
+ # Local imports
+ from . import grammar, token, tokenize
+
+ if TYPE_CHECKING:
+     from blib2to3.pgen2.driver import TokenProxy
+
+
+ Results = Dict[str, NL]
+ Convert = Callable[[Grammar, RawNode], Union[Node, Leaf]]
+ DFA = List[List[Tuple[int, int]]]
+ DFAS = Tuple[DFA, Dict[int, int]]
+
+
+ def lam_sub(grammar: Grammar, node: RawNode) -> NL:
+     assert node[3] is not None
+     return Node(type=node[0], children=node[3], context=node[2])
+
+
+ # A placeholder node, used when parser is backtracking.
+ DUMMY_NODE = (-1, None, None, None)
+
+
+ def stack_copy(
+     stack: List[Tuple[DFAS, int, RawNode]]
+ ) -> List[Tuple[DFAS, int, RawNode]]:
+     """Nodeless stack copy."""
+     return [(dfa, label, DUMMY_NODE) for dfa, label, _ in stack]
+
+
+ class Recorder:
+     def __init__(self, parser: "Parser", ilabels: List[int], context: Context) -> None:
+         self.parser = parser
+         self._ilabels = ilabels
+         self.context = context  # not really matter
+
+         self._dead_ilabels: Set[int] = set()
+         self._start_point = self.parser.stack
+         self._points = {ilabel: stack_copy(self._start_point) for ilabel in ilabels}
+
+     @property
+     def ilabels(self) -> Set[int]:
+         return self._dead_ilabels.symmetric_difference(self._ilabels)
+
+     @contextmanager
+     def switch_to(self, ilabel: int) -> Iterator[None]:
+         with self.backtrack():
+             self.parser.stack = self._points[ilabel]
+             try:
+                 yield
+             except ParseError:
+                 self._dead_ilabels.add(ilabel)
+             finally:
+                 self.parser.stack = self._start_point
+
+     @contextmanager
+     def backtrack(self) -> Iterator[None]:
+         """
+         Use the node-level invariant ones for basic parsing operations (push/pop/shift).
+         These still will operate on the stack; but they won't create any new nodes, or
+         modify the contents of any other existing nodes.
+
+         This saves us a ton of time when we are backtracking, since we
+         want to restore to the initial state as quick as possible, which
+         can only be done by having as little mutatations as possible.
+         """
+         is_backtracking = self.parser.is_backtracking
+         try:
+             self.parser.is_backtracking = True
+             yield
+         finally:
+             self.parser.is_backtracking = is_backtracking
+
+     def add_token(self, tok_type: int, tok_val: str, raw: bool = False) -> None:
+         func: Callable[..., Any]
+         if raw:
+             func = self.parser._addtoken
+         else:
+             func = self.parser.addtoken
+
+         for ilabel in self.ilabels:
+             with self.switch_to(ilabel):
+                 args = [tok_type, tok_val, self.context]
+                 if raw:
+                     args.insert(0, ilabel)
+                 func(*args)
+
+     def determine_route(
+         self, value: Optional[str] = None, force: bool = False
+     ) -> Optional[int]:
+         alive_ilabels = self.ilabels
+         if len(alive_ilabels) == 0:
+             *_, most_successful_ilabel = self._dead_ilabels
+             raise ParseError("bad input", most_successful_ilabel, value, self.context)
+
+         ilabel, *rest = alive_ilabels
+         if force or not rest:
+             return ilabel
+         else:
+             return None
+
+
+ class ParseError(Exception):
+     """Exception to signal the parser is stuck."""
+
+     def __init__(
+         self, msg: str, type: Optional[int], value: Optional[str], context: Context
+     ) -> None:
+         Exception.__init__(
+             self, f"{msg}: type={type!r}, value={value!r}, context={context!r}"
+         )
+         self.msg = msg
+         self.type = type
+         self.value = value
+         self.context = context
+
+
+ class Parser:
+     """Parser engine.
+
+     The proper usage sequence is:
+
+     p = Parser(grammar, [converter])  # create instance
+     p.setup([start])                  # prepare for parsing
+     <for each input token>:
+         if p.addtoken(...):           # parse a token; may raise ParseError
+             break
+     root = p.rootnode                 # root of abstract syntax tree
+
+     A Parser instance may be reused by calling setup() repeatedly.
+
+     A Parser instance contains state pertaining to the current token
+     sequence, and should not be used concurrently by different threads
+     to parse separate token sequences.
+
+     See driver.py for how to get input tokens by tokenizing a file or
+     string.
+
+     Parsing is complete when addtoken() returns True; the root of the
+     abstract syntax tree can then be retrieved from the rootnode
+     instance variable. When a syntax error occurs, addtoken() raises
+     the ParseError exception. There is no error recovery; the parser
+     cannot be used after a syntax error was reported (but it can be
+     reinitialized by calling setup()).
+
+     """
+
+     def __init__(self, grammar: Grammar, convert: Optional[Convert] = None) -> None:
+         """Constructor.
+
+         The grammar argument is a grammar.Grammar instance; see the
+         grammar module for more information.
+
+         The parser is not ready yet for parsing; you must call the
+         setup() method to get it started.
+
+         The optional convert argument is a function mapping concrete
+         syntax tree nodes to abstract syntax tree nodes. If not
+         given, no conversion is done and the syntax tree produced is
+         the concrete syntax tree. If given, it must be a function of
+         two arguments, the first being the grammar (a grammar.Grammar
+         instance), and the second being the concrete syntax tree node
+         to be converted. The syntax tree is converted from the bottom
+         up.
+
+         **post-note: the convert argument is ignored since for Black's
+         usage, convert will always be blib2to3.pytree.convert. Allowing
+         this to be dynamic hurts mypyc's ability to use early binding.
+         These docs are left for historical and informational value.
+
+         A concrete syntax tree node is a (type, value, context, nodes)
+         tuple, where type is the node type (a token or symbol number),
+         value is None for symbols and a string for tokens, context is
+         None or an opaque value used for error reporting (typically a
+         (lineno, offset) pair), and nodes is a list of children for
+         symbols, and None for tokens.
+
+         An abstract syntax tree node may be anything; this is entirely
+         up to the converter function.
+
+         """
+         self.grammar = grammar
+         # See note in docstring above. TL;DR this is ignored.
+         self.convert = convert or lam_sub
+         self.is_backtracking = False
+         self.last_token: Optional[int] = None
+
+     def setup(self, proxy: "TokenProxy", start: Optional[int] = None) -> None:
+         """Prepare for parsing.
+
+         This *must* be called before starting to parse.
+
+         The optional argument is an alternative start symbol; it
+         defaults to the grammar's start symbol.
+
+         You can use a Parser instance to parse any number of programs;
+         each time you call setup() the parser is reset to an initial
+         state determined by the (implicit or explicit) start symbol.
+
+         """
+         if start is None:
+             start = self.grammar.start
+         # Each stack entry is a tuple: (dfa, state, node).
+         # A node is a tuple: (type, value, context, children),
+         # where children is a list of nodes or None, and context may be None.
+         newnode: RawNode = (start, None, None, [])
+         stackentry = (self.grammar.dfas[start], 0, newnode)
+         self.stack: List[Tuple[DFAS, int, RawNode]] = [stackentry]
+         self.rootnode: Optional[NL] = None
+         self.used_names: Set[str] = set()
+         self.proxy = proxy
+         self.last_token = None
+
+     def addtoken(self, type: int, value: str, context: Context) -> bool:
+         """Add a token; return True iff this is the end of the program."""
+         # Map from token to label
+         ilabels = self.classify(type, value, context)
+         assert len(ilabels) >= 1
+
+         # If we have only one state to advance, we'll directly
+         # take it as is.
+         if len(ilabels) == 1:
+             [ilabel] = ilabels
+             return self._addtoken(ilabel, type, value, context)
+
+         # If there are multiple states which we can advance (only
+         # happen under soft-keywords), then we will try all of them
+         # in parallel and as soon as one state can reach further than
+         # the rest, we'll choose that one. This is a pretty hacky
+         # and hopefully temporary algorithm.
+         #
+         # For a more detailed explanation, check out this post:
+         # https://tree.science/what-the-backtracking.html
+
+         with self.proxy.release() as proxy:
+             counter, force = 0, False
+             recorder = Recorder(self, ilabels, context)
+             recorder.add_token(type, value, raw=True)
+
+             next_token_value = value
+             while recorder.determine_route(next_token_value) is None:
+                 if not proxy.can_advance(counter):
+                     force = True
+                     break
+
+                 next_token_type, next_token_value, *_ = proxy.eat(counter)
+                 if next_token_type in (tokenize.COMMENT, tokenize.NL):
+                     counter += 1
+                     continue
+
+                 if next_token_type == tokenize.OP:
+                     next_token_type = grammar.opmap[next_token_value]
+
+                 recorder.add_token(next_token_type, next_token_value)
+                 counter += 1
+
+             ilabel = cast(int, recorder.determine_route(next_token_value, force=force))
+             assert ilabel is not None
+
+         return self._addtoken(ilabel, type, value, context)
+
+     def _addtoken(self, ilabel: int, type: int, value: str, context: Context) -> bool:
+         # Loop until the token is shifted; may raise exceptions
+         while True:
+             dfa, state, node = self.stack[-1]
+             states, first = dfa
+             arcs = states[state]
+             # Look for a state with this label
+             for i, newstate in arcs:
+                 t = self.grammar.labels[i][0]
+                 if t >= 256:
+                     # See if it's a symbol and if we're in its first set
+                     itsdfa = self.grammar.dfas[t]
+                     itsstates, itsfirst = itsdfa
+                     if ilabel in itsfirst:
+                         # Push a symbol
+                         self.push(t, itsdfa, newstate, context)
+                         break  # To continue the outer while loop
+
+                 elif ilabel == i:
+                     # Look it up in the list of labels
+                     # Shift a token; we're done with it
+                     self.shift(type, value, newstate, context)
+                     # Pop while we are in an accept-only state
+                     state = newstate
+                     while states[state] == [(0, state)]:
+                         self.pop()
+                         if not self.stack:
+                             # Done parsing!
+                             return True
+                         dfa, state, node = self.stack[-1]
+                         states, first = dfa
+                     # Done with this token
+                     self.last_token = type
+                     return False
+
+             else:
+                 if (0, state) in arcs:
+                     # An accepting state, pop it and try something else
+                     self.pop()
+                     if not self.stack:
+                         # Done parsing, but another token is input
+                         raise ParseError("too much input", type, value, context)
+                 else:
+                     # No success finding a transition
+                     raise ParseError("bad input", type, value, context)
+
+     def classify(self, type: int, value: str, context: Context) -> List[int]:
+         """Turn a token into a label. (Internal)
+
+         Depending on whether the value is a soft-keyword or not,
+         this function may return multiple labels to choose from."""
+         if type == token.NAME:
+             # Keep a listing of all used names
+             self.used_names.add(value)
+             # Check for reserved words
+             if value in self.grammar.keywords:
+                 return [self.grammar.keywords[value]]
+             elif value in self.grammar.soft_keywords:
+                 assert type in self.grammar.tokens
+                 # Current soft keywords (match, case, type) can only appear at the
+                 # beginning of a statement. So as a shortcut, don't try to treat them
+                 # like keywords in any other context.
+                 # ('_' is also a soft keyword in the real grammar, but for our grammar
+                 # it's just an expression, so we don't need to treat it specially.)
+                 if self.last_token not in (
+                     None,
+                     token.INDENT,
+                     token.DEDENT,
+                     token.NEWLINE,
+                     token.SEMI,
+                     token.COLON,
+                 ):
+                     return [self.grammar.tokens[type]]
+                 return [
+                     self.grammar.tokens[type],
+                     self.grammar.soft_keywords[value],
+                 ]
+
+         ilabel = self.grammar.tokens.get(type)
+         if ilabel is None:
+             raise ParseError("bad token", type, value, context)
+         return [ilabel]
+
+     def shift(self, type: int, value: str, newstate: int, context: Context) -> None:
+         """Shift a token. (Internal)"""
+         if self.is_backtracking:
+             dfa, state, _ = self.stack[-1]
+             self.stack[-1] = (dfa, newstate, DUMMY_NODE)
+         else:
+             dfa, state, node = self.stack[-1]
+             rawnode: RawNode = (type, value, context, None)
+             newnode = convert(self.grammar, rawnode)
+             assert node[-1] is not None
+             node[-1].append(newnode)
+             self.stack[-1] = (dfa, newstate, node)
+
+     def push(self, type: int, newdfa: DFAS, newstate: int, context: Context) -> None:
+         """Push a nonterminal. (Internal)"""
+         if self.is_backtracking:
+             dfa, state, _ = self.stack[-1]
+             self.stack[-1] = (dfa, newstate, DUMMY_NODE)
+             self.stack.append((newdfa, 0, DUMMY_NODE))
+         else:
+             dfa, state, node = self.stack[-1]
+             newnode: RawNode = (type, None, context, [])
+             self.stack[-1] = (dfa, newstate, node)
+             self.stack.append((newdfa, 0, newnode))
+
+     def pop(self) -> None:
+         """Pop a nonterminal. (Internal)"""
+         if self.is_backtracking:
+             self.stack.pop()
+         else:
+             popdfa, popstate, popnode = self.stack.pop()
+             newnode = convert(self.grammar, popnode)
+             if self.stack:
+                 dfa, state, node = self.stack[-1]
+                 assert node[-1] is not None
+                 node[-1].append(newnode)
+             else:
+                 self.rootnode = newnode
+                 self.rootnode.used_names = self.used_names
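As the Parser docstring says, there is no error recovery: a syntax error surfaces as ParseError through the driver. A small sketch, assuming the same grammar setup as in the earlier example (the malformed def is arbitrary):

    import os
    import blib2to3
    from blib2to3.pgen2 import driver
    from blib2to3.pgen2.parse import ParseError

    gt = os.path.join(os.path.dirname(blib2to3.__file__), "Grammar.txt")
    d = driver.Driver(driver.load_grammar(gt, save=False))
    try:
        d.parse_string("def broken(:\n")
    except ParseError as err:
        print(err.msg, repr(err.value))  # bad input ':'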
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/pgen.py ADDED
@@ -0,0 +1,428 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
2
+ # Licensed to PSF under a Contributor Agreement.
3
+
4
+ import os
5
+ from typing import (
6
+ IO,
7
+ Any,
8
+ Dict,
9
+ Iterator,
10
+ List,
11
+ NoReturn,
12
+ Optional,
13
+ Sequence,
14
+ Tuple,
15
+ Union,
16
+ )
17
+
18
+ from blib2to3.pgen2 import grammar, token, tokenize
19
+ from blib2to3.pgen2.tokenize import GoodTokenInfo
20
+
21
+ Path = Union[str, "os.PathLike[str]"]
22
+
23
+
24
+ class PgenGrammar(grammar.Grammar):
25
+ pass
26
+
27
+
28
+ class ParserGenerator:
29
+ filename: Path
30
+ stream: IO[str]
31
+ generator: Iterator[GoodTokenInfo]
32
+ first: Dict[str, Optional[Dict[str, int]]]
33
+
34
+ def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
35
+ close_stream = None
36
+ if stream is None:
37
+ stream = open(filename, encoding="utf-8")
38
+ close_stream = stream.close
39
+ self.filename = filename
40
+ self.stream = stream
41
+ self.generator = tokenize.generate_tokens(stream.readline)
42
+ self.gettoken() # Initialize lookahead
43
+ self.dfas, self.startsymbol = self.parse()
44
+ if close_stream is not None:
45
+ close_stream()
46
+ self.first = {} # map from symbol name to set of tokens
47
+ self.addfirstsets()
48
+
49
+ def make_grammar(self) -> PgenGrammar:
50
+ c = PgenGrammar()
51
+ names = list(self.dfas.keys())
52
+ names.sort()
53
+ names.remove(self.startsymbol)
54
+ names.insert(0, self.startsymbol)
55
+ for name in names:
56
+ i = 256 + len(c.symbol2number)
57
+ c.symbol2number[name] = i
58
+ c.number2symbol[i] = name
59
+ for name in names:
60
+ dfa = self.dfas[name]
61
+ states = []
62
+ for state in dfa:
63
+ arcs = []
64
+ for label, next in sorted(state.arcs.items()):
65
+ arcs.append((self.make_label(c, label), dfa.index(next)))
66
+ if state.isfinal:
67
+ arcs.append((0, dfa.index(state)))
68
+ states.append(arcs)
69
+ c.states.append(states)
70
+ c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name))
71
+ c.start = c.symbol2number[self.startsymbol]
72
+ return c
73
+
74
+ def make_first(self, c: PgenGrammar, name: str) -> Dict[int, int]:
75
+ rawfirst = self.first[name]
76
+ assert rawfirst is not None
77
+ first = {}
78
+ for label in sorted(rawfirst):
79
+ ilabel = self.make_label(c, label)
80
+ ##assert ilabel not in first # XXX failed on <> ... !=
81
+ first[ilabel] = 1
82
+ return first
83
+
84
+ def make_label(self, c: PgenGrammar, label: str) -> int:
85
+ # XXX Maybe this should be a method on a subclass of converter?
86
+ ilabel = len(c.labels)
87
+ if label[0].isalpha():
88
+ # Either a symbol name or a named token
89
+ if label in c.symbol2number:
90
+ # A symbol name (a non-terminal)
91
+ if label in c.symbol2label:
92
+ return c.symbol2label[label]
93
+ else:
94
+ c.labels.append((c.symbol2number[label], None))
95
+ c.symbol2label[label] = ilabel
96
+ return ilabel
97
+ else:
98
+ # A named token (NAME, NUMBER, STRING)
99
+ itoken = getattr(token, label, None)
100
+ assert isinstance(itoken, int), label
101
+ assert itoken in token.tok_name, label
102
+ if itoken in c.tokens:
103
+ return c.tokens[itoken]
104
+ else:
105
+ c.labels.append((itoken, None))
106
+ c.tokens[itoken] = ilabel
107
+ return ilabel
108
+ else:
109
+ # Either a keyword or an operator
110
+ assert label[0] in ('"', "'"), label
111
+ value = eval(label)
112
+ if value[0].isalpha():
113
+ if label[0] == '"':
114
+ keywords = c.soft_keywords
115
+ else:
116
+ keywords = c.keywords
117
+
118
+ # A keyword
119
+ if value in keywords:
120
+ return keywords[value]
121
+ else:
122
+ c.labels.append((token.NAME, value))
123
+ keywords[value] = ilabel
124
+ return ilabel
125
+ else:
126
+ # An operator (any non-numeric token)
127
+ itoken = grammar.opmap[value] # Fails if unknown token
128
+ if itoken in c.tokens:
129
+ return c.tokens[itoken]
130
+ else:
131
+ c.labels.append((itoken, None))
132
+ c.tokens[itoken] = ilabel
133
+ return ilabel
134
+
135
+     def addfirstsets(self) -> None:
+         names = list(self.dfas.keys())
+         names.sort()
+         for name in names:
+             if name not in self.first:
+                 self.calcfirst(name)
+             # print name, self.first[name].keys()
+
+     def calcfirst(self, name: str) -> None:
+         dfa = self.dfas[name]
+         self.first[name] = None  # dummy to detect left recursion
+         state = dfa[0]
+         totalset: Dict[str, int] = {}
+         overlapcheck = {}
+         for label in state.arcs:
+             if label in self.dfas:
+                 if label in self.first:
+                     fset = self.first[label]
+                     if fset is None:
+                         raise ValueError("recursion for rule %r" % name)
+                 else:
+                     self.calcfirst(label)
+                     fset = self.first[label]
+                     assert fset is not None
+                 totalset.update(fset)
+                 overlapcheck[label] = fset
+             else:
+                 totalset[label] = 1
+                 overlapcheck[label] = {label: 1}
+         inverse: Dict[str, str] = {}
+         for label, itsfirst in overlapcheck.items():
+             for symbol in itsfirst:
+                 if symbol in inverse:
+                     raise ValueError(
+                         "rule %s is ambiguous; %s is in the first sets of %s as well"
+                         " as %s" % (name, symbol, label, inverse[symbol])
+                     )
+                 inverse[symbol] = label
+         self.first[name] = totalset
+
+     def parse(self) -> Tuple[Dict[str, List["DFAState"]], str]:
+         dfas = {}
+         startsymbol: Optional[str] = None
+         # MSTART: (NEWLINE | RULE)* ENDMARKER
+         while self.type != token.ENDMARKER:
+             while self.type == token.NEWLINE:
+                 self.gettoken()
+             # RULE: NAME ':' RHS NEWLINE
+             name = self.expect(token.NAME)
+             self.expect(token.OP, ":")
+             a, z = self.parse_rhs()
+             self.expect(token.NEWLINE)
+             # self.dump_nfa(name, a, z)
+             dfa = self.make_dfa(a, z)
+             # self.dump_dfa(name, dfa)
+             # oldlen = len(dfa)
+             self.simplify_dfa(dfa)
+             # newlen = len(dfa)
+             dfas[name] = dfa
+             # print name, oldlen, newlen
+             if startsymbol is None:
+                 startsymbol = name
+         assert startsymbol is not None
+         return dfas, startsymbol
+
+     def make_dfa(self, start: "NFAState", finish: "NFAState") -> List["DFAState"]:
+         # To turn an NFA into a DFA, we define the states of the DFA
+         # to correspond to *sets* of states of the NFA. Then do some
+         # state reduction. Let's represent sets as dicts with 1 for
+         # values.
+         assert isinstance(start, NFAState)
+         assert isinstance(finish, NFAState)
+
+         def closure(state: NFAState) -> Dict[NFAState, int]:
+             base: Dict[NFAState, int] = {}
+             addclosure(state, base)
+             return base
+
+         def addclosure(state: NFAState, base: Dict[NFAState, int]) -> None:
+             assert isinstance(state, NFAState)
+             if state in base:
+                 return
+             base[state] = 1
+             for label, next in state.arcs:
+                 if label is None:
+                     addclosure(next, base)
+
+         states = [DFAState(closure(start), finish)]
+         for state in states:  # NB states grows while we're iterating
+             arcs: Dict[str, Dict[NFAState, int]] = {}
+             for nfastate in state.nfaset:
+                 for label, next in nfastate.arcs:
+                     if label is not None:
+                         addclosure(next, arcs.setdefault(label, {}))
+             for label, nfaset in sorted(arcs.items()):
+                 for st in states:
+                     if st.nfaset == nfaset:
+                         break
+                 else:
+                     st = DFAState(nfaset, finish)
+                     states.append(st)
+                 state.addarc(st, label)
+         return states  # List of DFAState instances; first one is start
+
+     def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
+         print("Dump of NFA for", name)
+         todo = [start]
+         for i, state in enumerate(todo):
+             print("  State", i, state is finish and "(final)" or "")
+             for label, next in state.arcs:
+                 if next in todo:
+                     j = todo.index(next)
+                 else:
+                     j = len(todo)
+                     todo.append(next)
+                 if label is None:
+                     print("    -> %d" % j)
+                 else:
+                     print("    %s -> %d" % (label, j))
+
+     def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
+         print("Dump of DFA for", name)
+         for i, state in enumerate(dfa):
+             print("  State", i, state.isfinal and "(final)" or "")
+             for label, next in sorted(state.arcs.items()):
+                 print("    %s -> %d" % (label, dfa.index(next)))
+
+     def simplify_dfa(self, dfa: List["DFAState"]) -> None:
+         # This is not theoretically optimal, but works well enough.
+         # Algorithm: repeatedly look for two states that have the same
+         # set of arcs (same labels pointing to the same nodes) and
+         # unify them, until things stop changing.
+
+         # dfa is a list of DFAState instances
+         changes = True
+         while changes:
+             changes = False
+             for i, state_i in enumerate(dfa):
+                 for j in range(i + 1, len(dfa)):
+                     state_j = dfa[j]
+                     if state_i == state_j:
+                         # print "  unify", i, j
+                         del dfa[j]
+                         for state in dfa:
+                             state.unifystate(state_j, state_i)
+                         changes = True
+                         break
+
+     def parse_rhs(self) -> Tuple["NFAState", "NFAState"]:
+         # RHS: ALT ('|' ALT)*
+         a, z = self.parse_alt()
+         if self.value != "|":
+             return a, z
+         else:
+             aa = NFAState()
+             zz = NFAState()
+             aa.addarc(a)
+             z.addarc(zz)
+             while self.value == "|":
+                 self.gettoken()
+                 a, z = self.parse_alt()
+                 aa.addarc(a)
+                 z.addarc(zz)
+             return aa, zz
+
+     def parse_alt(self) -> Tuple["NFAState", "NFAState"]:
+         # ALT: ITEM+
+         a, b = self.parse_item()
+         while self.value in ("(", "[") or self.type in (token.NAME, token.STRING):
+             c, d = self.parse_item()
+             b.addarc(c)
+             b = d
+         return a, b
+
+     def parse_item(self) -> Tuple["NFAState", "NFAState"]:
+         # ITEM: '[' RHS ']' | ATOM ['+' | '*']
+         if self.value == "[":
+             self.gettoken()
+             a, z = self.parse_rhs()
+             self.expect(token.OP, "]")
+             a.addarc(z)
+             return a, z
+         else:
+             a, z = self.parse_atom()
+             value = self.value
+             if value not in ("+", "*"):
+                 return a, z
+             self.gettoken()
+             z.addarc(a)
+             if value == "+":
+                 return a, z
+             else:
+                 return a, a
+
+     def parse_atom(self) -> Tuple["NFAState", "NFAState"]:
+         # ATOM: '(' RHS ')' | NAME | STRING
+         if self.value == "(":
+             self.gettoken()
+             a, z = self.parse_rhs()
+             self.expect(token.OP, ")")
+             return a, z
+         elif self.type in (token.NAME, token.STRING):
+             a = NFAState()
+             z = NFAState()
+             a.addarc(z, self.value)
+             self.gettoken()
+             return a, z
+         else:
+             self.raise_error(
+                 "expected (...) or NAME or STRING, got %s/%s", self.type, self.value
+             )
+             raise AssertionError
+
+     def expect(self, type: int, value: Optional[Any] = None) -> str:
+         if self.type != type or (value is not None and self.value != value):
+             self.raise_error(
+                 "expected %s/%s, got %s/%s", type, value, self.type, self.value
+             )
+         value = self.value
+         self.gettoken()
+         return value
+
+     def gettoken(self) -> None:
+         tup = next(self.generator)
+         while tup[0] in (tokenize.COMMENT, tokenize.NL):
+             tup = next(self.generator)
+         self.type, self.value, self.begin, self.end, self.line = tup
+         # print token.tok_name[self.type], repr(self.value)
+
+     def raise_error(self, msg: str, *args: Any) -> NoReturn:
+         if args:
+             try:
+                 msg = msg % args
+             except Exception:
+                 msg = " ".join([msg] + list(map(str, args)))
+         raise SyntaxError(msg, (self.filename, self.end[0], self.end[1], self.line))
+
+
+ class NFAState:
+     arcs: List[Tuple[Optional[str], "NFAState"]]
+
+     def __init__(self) -> None:
+         self.arcs = []  # list of (label, NFAState) pairs
+
+     def addarc(self, next: "NFAState", label: Optional[str] = None) -> None:
+         assert label is None or isinstance(label, str)
+         assert isinstance(next, NFAState)
+         self.arcs.append((label, next))
+
+
+ class DFAState:
+     nfaset: Dict[NFAState, Any]
+     isfinal: bool
+     arcs: Dict[str, "DFAState"]
+
+     def __init__(self, nfaset: Dict[NFAState, Any], final: NFAState) -> None:
+         assert isinstance(nfaset, dict)
+         assert isinstance(next(iter(nfaset)), NFAState)
+         assert isinstance(final, NFAState)
+         self.nfaset = nfaset
+         self.isfinal = final in nfaset
+         self.arcs = {}  # map from label to DFAState
+
+     def addarc(self, next: "DFAState", label: str) -> None:
+         assert isinstance(label, str)
+         assert label not in self.arcs
+         assert isinstance(next, DFAState)
+         self.arcs[label] = next
+
+     def unifystate(self, old: "DFAState", new: "DFAState") -> None:
+         for label, next in self.arcs.items():
+             if next is old:
+                 self.arcs[label] = new
+
+     def __eq__(self, other: Any) -> bool:
+         # Equality test -- ignore the nfaset instance variable
+         assert isinstance(other, DFAState)
+         if self.isfinal != other.isfinal:
+             return False
+         # Can't just return self.arcs == other.arcs, because that
+         # would invoke this method recursively, with cycles...
+         if len(self.arcs) != len(other.arcs):
+             return False
+         for label, next in self.arcs.items():
+             if next is not other.arcs.get(label):
+                 return False
+         return True
+
+     __hash__: Any = None  # For Py3 compatibility.
+
+
+ def generate_grammar(filename: Path = "Grammar.txt") -> PgenGrammar:
+     p = ParserGenerator(filename)
+     return p.make_grammar()
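
For orientation: the ParserGenerator above turns each grammar rule into an NFA (parse_rhs/parse_alt/parse_item/parse_atom), determinizes it with make_dfa, reduces it with simplify_dfa, and packs the tables into a PgenGrammar. A minimal, hypothetical way to exercise it; the two-rule grammar and the temp-file path are invented for illustration and are not part of this commit:

import tempfile

from blib2to3.pgen2 import pgen

# A tiny EBNF grammar in the Grammar.txt dialect: NAME ':' RHS NEWLINE per
# rule; quoted strings become operator (or keyword) labels via make_label.
TINY_GRAMMAR = "expr: term (('+' | '-') term)*\nterm: NAME | NUMBER\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(TINY_GRAMMAR)

g = pgen.generate_grammar(f.name)  # ParserGenerator(...).make_grammar()
print(sorted(g.symbol2number))     # ['expr', 'term'], mapped to numbers >= 256
print(g.start == g.symbol2number["expr"])  # the start symbol is the first rule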
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pgen2/token.py ADDED
@@ -0,0 +1,88 @@
+ """Token constants (from "token.h")."""
+
+ from typing import Dict, Final
+
+ # Taken from Python (r53757) and modified to include some tokens
+ # originally monkeypatched in by pgen2.tokenize
+
+ # --start constants--
+ ENDMARKER: Final = 0
+ NAME: Final = 1
+ NUMBER: Final = 2
+ STRING: Final = 3
+ NEWLINE: Final = 4
+ INDENT: Final = 5
+ DEDENT: Final = 6
+ LPAR: Final = 7
+ RPAR: Final = 8
+ LSQB: Final = 9
+ RSQB: Final = 10
+ COLON: Final = 11
+ COMMA: Final = 12
+ SEMI: Final = 13
+ PLUS: Final = 14
+ MINUS: Final = 15
+ STAR: Final = 16
+ SLASH: Final = 17
+ VBAR: Final = 18
+ AMPER: Final = 19
+ LESS: Final = 20
+ GREATER: Final = 21
+ EQUAL: Final = 22
+ DOT: Final = 23
+ PERCENT: Final = 24
+ BACKQUOTE: Final = 25
+ LBRACE: Final = 26
+ RBRACE: Final = 27
+ EQEQUAL: Final = 28
+ NOTEQUAL: Final = 29
+ LESSEQUAL: Final = 30
+ GREATEREQUAL: Final = 31
+ TILDE: Final = 32
+ CIRCUMFLEX: Final = 33
+ LEFTSHIFT: Final = 34
+ RIGHTSHIFT: Final = 35
+ DOUBLESTAR: Final = 36
+ PLUSEQUAL: Final = 37
+ MINEQUAL: Final = 38
+ STAREQUAL: Final = 39
+ SLASHEQUAL: Final = 40
+ PERCENTEQUAL: Final = 41
+ AMPEREQUAL: Final = 42
+ VBAREQUAL: Final = 43
+ CIRCUMFLEXEQUAL: Final = 44
+ LEFTSHIFTEQUAL: Final = 45
+ RIGHTSHIFTEQUAL: Final = 46
+ DOUBLESTAREQUAL: Final = 47
+ DOUBLESLASH: Final = 48
+ DOUBLESLASHEQUAL: Final = 49
+ AT: Final = 50
+ ATEQUAL: Final = 51
+ OP: Final = 52
+ COMMENT: Final = 53
+ NL: Final = 54
+ RARROW: Final = 55
+ AWAIT: Final = 56
+ ASYNC: Final = 57
+ ERRORTOKEN: Final = 58
+ COLONEQUAL: Final = 59
+ N_TOKENS: Final = 60
+ NT_OFFSET: Final = 256
+ # --end constants--
+
+ tok_name: Final[Dict[int, str]] = {}
+ for _name, _value in list(globals().items()):
+     if type(_value) is int:
+         tok_name[_value] = _name
+
+
+ def ISTERMINAL(x: int) -> bool:
+     return x < NT_OFFSET
+
+
+ def ISNONTERMINAL(x: int) -> bool:
+     return x >= NT_OFFSET
+
+
+ def ISEOF(x: int) -> bool:
+     return x == ENDMARKER
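
The module is a flat constant table; a short illustrative check (not part of the commit) of the reverse mapping and the predicates defined above:

from blib2to3.pgen2 import token

assert token.tok_name[token.NAME] == "NAME"  # built by the globals() loop
assert token.ISTERMINAL(token.OP)            # token numbers are < 256
assert token.ISNONTERMINAL(token.NT_OFFSET)  # symbol numbers are >= 256
assert token.ISEOF(token.ENDMARKER)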
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pygram.py ADDED
@@ -0,0 +1,200 @@
+ # Copyright 2006 Google, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """Export the Python grammar and symbols."""
+
+ # Python imports
+ import os
+ from typing import Union
+
+ # Local imports
+ from .pgen2 import driver
+ from .pgen2.grammar import Grammar
+
+ # Moved into initialize because mypyc can't handle __file__ (XXX bug)
+ # # The grammar file
+ # _GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
+ # _PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__),
+ #                                      "PatternGrammar.txt")
+
+
+ class Symbols:
+     def __init__(self, grammar: Grammar) -> None:
+         """Initializer.
+
+         Creates an attribute for each grammar symbol (nonterminal),
+         whose value is the symbol's type (an int >= 256).
+         """
+         for name, symbol in grammar.symbol2number.items():
+             setattr(self, name, symbol)
+
+
+ class _python_symbols(Symbols):
+     and_expr: int
+     and_test: int
+     annassign: int
+     arglist: int
+     argument: int
+     arith_expr: int
+     asexpr_test: int
+     assert_stmt: int
+     async_funcdef: int
+     async_stmt: int
+     atom: int
+     augassign: int
+     break_stmt: int
+     case_block: int
+     classdef: int
+     comp_for: int
+     comp_if: int
+     comp_iter: int
+     comp_op: int
+     comparison: int
+     compound_stmt: int
+     continue_stmt: int
+     decorated: int
+     decorator: int
+     decorators: int
+     del_stmt: int
+     dictsetmaker: int
+     dotted_as_name: int
+     dotted_as_names: int
+     dotted_name: int
+     encoding_decl: int
+     eval_input: int
+     except_clause: int
+     expr: int
+     expr_stmt: int
+     exprlist: int
+     factor: int
+     file_input: int
+     flow_stmt: int
+     for_stmt: int
+     funcdef: int
+     global_stmt: int
+     guard: int
+     if_stmt: int
+     import_as_name: int
+     import_as_names: int
+     import_from: int
+     import_name: int
+     import_stmt: int
+     lambdef: int
+     listmaker: int
+     match_stmt: int
+     namedexpr_test: int
+     not_test: int
+     old_comp_for: int
+     old_comp_if: int
+     old_comp_iter: int
+     old_lambdef: int
+     old_test: int
+     or_test: int
+     parameters: int
+     paramspec: int
+     pass_stmt: int
+     pattern: int
+     patterns: int
+     power: int
+     raise_stmt: int
+     return_stmt: int
+     shift_expr: int
+     simple_stmt: int
+     single_input: int
+     sliceop: int
+     small_stmt: int
+     subject_expr: int
+     star_expr: int
+     stmt: int
+     subscript: int
+     subscriptlist: int
+     suite: int
+     term: int
+     test: int
+     testlist: int
+     testlist1: int
+     testlist_gexp: int
+     testlist_safe: int
+     testlist_star_expr: int
+     tfpdef: int
+     tfplist: int
+     tname: int
+     tname_star: int
+     trailer: int
+     try_stmt: int
+     type_stmt: int
+     typedargslist: int
+     typeparam: int
+     typeparams: int
+     typevar: int
+     typevartuple: int
+     varargslist: int
+     vfpdef: int
+     vfplist: int
+     vname: int
+     while_stmt: int
+     with_stmt: int
+     xor_expr: int
+     yield_arg: int
+     yield_expr: int
+     yield_stmt: int
+
+
+ class _pattern_symbols(Symbols):
+     Alternative: int
+     Alternatives: int
+     Details: int
+     Matcher: int
+     NegatedUnit: int
+     Repeater: int
+     Unit: int
+
+
+ python_grammar: Grammar
+ python_grammar_async_keywords: Grammar
+ python_grammar_soft_keywords: Grammar
+ pattern_grammar: Grammar
+ python_symbols: _python_symbols
+ pattern_symbols: _pattern_symbols
+
+
+ def initialize(cache_dir: Union[str, "os.PathLike[str]", None] = None) -> None:
+     global python_grammar
+     global python_grammar_async_keywords
+     global python_grammar_soft_keywords
+     global python_symbols
+     global pattern_grammar
+     global pattern_symbols
+
+     # The grammar file
+     _GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
+     _PATTERN_GRAMMAR_FILE = os.path.join(
+         os.path.dirname(__file__), "PatternGrammar.txt"
+     )
+
+     python_grammar = driver.load_packaged_grammar("blib2to3", _GRAMMAR_FILE, cache_dir)
+     assert "print" not in python_grammar.keywords
+     assert "exec" not in python_grammar.keywords
+
+     soft_keywords = python_grammar.soft_keywords.copy()
+     python_grammar.soft_keywords.clear()
+
+     python_symbols = _python_symbols(python_grammar)
+
+     # Python 3.0-3.6
+     python_grammar.version = (3, 0)
+
+     # Python 3.7+
+     python_grammar_async_keywords = python_grammar.copy()
+     python_grammar_async_keywords.async_keywords = True
+     python_grammar_async_keywords.version = (3, 7)
+
+     # Python 3.10+
+     python_grammar_soft_keywords = python_grammar_async_keywords.copy()
+     python_grammar_soft_keywords.soft_keywords = soft_keywords
+     python_grammar_soft_keywords.version = (3, 10)
+
+     pattern_grammar = driver.load_packaged_grammar(
+         "blib2to3", _PATTERN_GRAMMAR_FILE, cache_dir
+     )
+     pattern_symbols = _pattern_symbols(pattern_grammar)
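
Note that the grammar objects declared above are only annotations until initialize() runs. A sketch of typical use (illustrative, not part of the commit; assumes the packaged Grammar.txt and PatternGrammar.txt ship next to the module, as they do in this upload):

from blib2to3 import pygram

pygram.initialize(cache_dir=None)  # load both packaged grammars
print(pygram.python_grammar.version)                 # (3, 0)
print(pygram.python_grammar_async_keywords.version)  # (3, 7)
print(pygram.python_grammar_soft_keywords.version)   # (3, 10)
print(pygram.python_symbols.funcdef >= 256)          # nonterminals are >= 256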
evalkit_eagle/lib/python3.10/site-packages/blib2to3/pytree.py ADDED
@@ -0,0 +1,983 @@
+ # Copyright 2006 Google, Inc. All Rights Reserved.
+ # Licensed to PSF under a Contributor Agreement.
+
+ """
+ Python parse tree definitions.
+
+ This is a very concrete parse tree; we need to keep every token and
+ even the comments and whitespace between tokens.
+
+ There's also a pattern matching implementation here.
+ """
+
+ # mypy: allow-untyped-defs, allow-incomplete-defs
+
+ from typing import (
+     Any,
+     Dict,
+     Iterable,
+     Iterator,
+     List,
+     Optional,
+     Set,
+     Tuple,
+     TypeVar,
+     Union,
+ )
+
+ from blib2to3.pgen2.grammar import Grammar
+
+ __author__ = "Guido van Rossum <guido@python.org>"
+
+ import sys
+ from io import StringIO
+
+ HUGE: int = 0x7FFFFFFF  # maximum repeat count, default max
+
+ _type_reprs: Dict[int, Union[str, int]] = {}
+
+
+ def type_repr(type_num: int) -> Union[str, int]:
+     global _type_reprs
+     if not _type_reprs:
+         from .pygram import python_symbols
+
+         # printing tokens is possible but not as useful
+         # from .pgen2 import token // token.__dict__.items():
+         for name in dir(python_symbols):
+             val = getattr(python_symbols, name)
+             if type(val) == int:
+                 _type_reprs[val] = name
+     return _type_reprs.setdefault(type_num, type_num)
+
+
+ _P = TypeVar("_P", bound="Base")
+
+ NL = Union["Node", "Leaf"]
+ Context = Tuple[str, Tuple[int, int]]
+ RawNode = Tuple[int, Optional[str], Optional[Context], Optional[List[NL]]]
+
+
+ class Base:
+     """
+     Abstract base class for Node and Leaf.
+
+     This provides some default functionality and boilerplate using the
+     template pattern.
+
+     A node may be a subnode of at most one parent.
+     """
+
+     # Default values for instance variables
+     type: int  # int: token number (< 256) or symbol number (>= 256)
+     parent: Optional["Node"] = None  # Parent node pointer, or None
+     children: List[NL]  # List of subnodes
+     was_changed: bool = False
+     was_checked: bool = False
+
+     def __new__(cls, *args, **kwds):
+         """Constructor that prevents Base from being instantiated."""
+         assert cls is not Base, "Cannot instantiate Base"
+         return object.__new__(cls)
+
+     def __eq__(self, other: Any) -> bool:
+         """
+         Compare two nodes for equality.
+
+         This calls the method _eq().
+         """
+         if self.__class__ is not other.__class__:
+             return NotImplemented
+         return self._eq(other)
+
+     @property
+     def prefix(self) -> str:
+         raise NotImplementedError
+
+     def _eq(self: _P, other: _P) -> bool:
+         """
+         Compare two nodes for equality.
+
+         This is called by __eq__ and __ne__. It is only called if the two nodes
+         have the same type. This must be implemented by the concrete subclass.
+         Nodes should be considered equal if they have the same structure,
+         ignoring the prefix string and other context information.
+         """
+         raise NotImplementedError
+
+     def __deepcopy__(self: _P, memo: Any) -> _P:
+         return self.clone()
+
+     def clone(self: _P) -> _P:
+         """
+         Return a cloned (deep) copy of self.
+
+         This must be implemented by the concrete subclass.
+         """
+         raise NotImplementedError
+
+     def post_order(self) -> Iterator[NL]:
+         """
+         Return a post-order iterator for the tree.
+
+         This must be implemented by the concrete subclass.
+         """
+         raise NotImplementedError
+
+     def pre_order(self) -> Iterator[NL]:
+         """
+         Return a pre-order iterator for the tree.
+
+         This must be implemented by the concrete subclass.
+         """
+         raise NotImplementedError
+
+     def replace(self, new: Union[NL, List[NL]]) -> None:
+         """Replace this node with a new one in the parent."""
+         assert self.parent is not None, str(self)
+         assert new is not None
+         if not isinstance(new, list):
+             new = [new]
+         l_children = []
+         found = False
+         for ch in self.parent.children:
+             if ch is self:
+                 assert not found, (self.parent.children, self, new)
+                 if new is not None:
+                     l_children.extend(new)
+                 found = True
+             else:
+                 l_children.append(ch)
+         assert found, (self.children, self, new)
+         self.parent.children = l_children
+         self.parent.changed()
+         self.parent.invalidate_sibling_maps()
+         for x in new:
+             x.parent = self.parent
+         self.parent = None
+
+     def get_lineno(self) -> Optional[int]:
+         """Return the line number which generated the invocant node."""
+         node = self
+         while not isinstance(node, Leaf):
+             if not node.children:
+                 return None
+             node = node.children[0]
+         return node.lineno
+
+     def changed(self) -> None:
+         if self.was_changed:
+             return
+         if self.parent:
+             self.parent.changed()
+         self.was_changed = True
+
+     def remove(self) -> Optional[int]:
+         """
+         Remove the node from the tree. Returns the position of the node in its
+         parent's children before it was removed.
+         """
+         if self.parent:
+             for i, node in enumerate(self.parent.children):
+                 if node is self:
+                     del self.parent.children[i]
+                     self.parent.changed()
+                     self.parent.invalidate_sibling_maps()
+                     self.parent = None
+                     return i
+         return None
+
+     @property
+     def next_sibling(self) -> Optional[NL]:
+         """
+         The node immediately following the invocant in their parent's children
+         list. If the invocant does not have a next sibling, it is None
+         """
+         if self.parent is None:
+             return None
+
+         if self.parent.next_sibling_map is None:
+             self.parent.update_sibling_maps()
+         assert self.parent.next_sibling_map is not None
+         return self.parent.next_sibling_map[id(self)]
+
+     @property
+     def prev_sibling(self) -> Optional[NL]:
+         """
+         The node immediately preceding the invocant in their parent's children
+         list. If the invocant does not have a previous sibling, it is None.
+         """
+         if self.parent is None:
+             return None
+
+         if self.parent.prev_sibling_map is None:
+             self.parent.update_sibling_maps()
+         assert self.parent.prev_sibling_map is not None
+         return self.parent.prev_sibling_map[id(self)]
+
+     def leaves(self) -> Iterator["Leaf"]:
+         for child in self.children:
+             yield from child.leaves()
+
+     def depth(self) -> int:
+         if self.parent is None:
+             return 0
+         return 1 + self.parent.depth()
+
+     def get_suffix(self) -> str:
+         """
+         Return the string immediately following the invocant node. This is
+         effectively equivalent to node.next_sibling.prefix
+         """
+         next_sib = self.next_sibling
+         if next_sib is None:
+             return ""
+         prefix = next_sib.prefix
+         return prefix
+
+
+ class Node(Base):
+     """Concrete implementation for interior nodes."""
+
+     fixers_applied: Optional[List[Any]]
+     used_names: Optional[Set[str]]
+
+     def __init__(
+         self,
+         type: int,
+         children: List[NL],
+         context: Optional[Any] = None,
+         prefix: Optional[str] = None,
+         fixers_applied: Optional[List[Any]] = None,
+     ) -> None:
+         """
+         Initializer.
+
+         Takes a type constant (a symbol number >= 256), a sequence of
+         child nodes, and an optional context keyword argument.
+
+         As a side effect, the parent pointers of the children are updated.
+         """
+         assert type >= 256, type
+         self.type = type
+         self.children = list(children)
+         for ch in self.children:
+             assert ch.parent is None, repr(ch)
+             ch.parent = self
+         self.invalidate_sibling_maps()
+         if prefix is not None:
+             self.prefix = prefix
+         if fixers_applied:
+             self.fixers_applied = fixers_applied[:]
+         else:
+             self.fixers_applied = None
+
+     def __repr__(self) -> str:
+         """Return a canonical string representation."""
+         assert self.type is not None
+         return "{}({}, {!r})".format(
+             self.__class__.__name__,
+             type_repr(self.type),
+             self.children,
+         )
+
+     def __str__(self) -> str:
+         """
+         Return a pretty string representation.
+
+         This reproduces the input source exactly.
+         """
+         return "".join(map(str, self.children))
+
+     def _eq(self, other: Base) -> bool:
+         """Compare two nodes for equality."""
+         return (self.type, self.children) == (other.type, other.children)
+
+     def clone(self) -> "Node":
+         assert self.type is not None
+         """Return a cloned (deep) copy of self."""
+         return Node(
+             self.type,
+             [ch.clone() for ch in self.children],
+             fixers_applied=self.fixers_applied,
+         )
+
+     def post_order(self) -> Iterator[NL]:
+         """Return a post-order iterator for the tree."""
+         for child in self.children:
+             yield from child.post_order()
+         yield self
+
+     def pre_order(self) -> Iterator[NL]:
+         """Return a pre-order iterator for the tree."""
+         yield self
+         for child in self.children:
+             yield from child.pre_order()
+
+     @property
+     def prefix(self) -> str:
+         """
+         The whitespace and comments preceding this node in the input.
+         """
+         if not self.children:
+             return ""
+         return self.children[0].prefix
+
+     @prefix.setter
+     def prefix(self, prefix: str) -> None:
+         if self.children:
+             self.children[0].prefix = prefix
+
+     def set_child(self, i: int, child: NL) -> None:
+         """
+         Equivalent to 'node.children[i] = child'. This method also sets the
+         child's parent attribute appropriately.
+         """
+         child.parent = self
+         self.children[i].parent = None
+         self.children[i] = child
+         self.changed()
+         self.invalidate_sibling_maps()
+
+     def insert_child(self, i: int, child: NL) -> None:
+         """
+         Equivalent to 'node.children.insert(i, child)'. This method also sets
+         the child's parent attribute appropriately.
+         """
+         child.parent = self
+         self.children.insert(i, child)
+         self.changed()
+         self.invalidate_sibling_maps()
+
+     def append_child(self, child: NL) -> None:
+         """
+         Equivalent to 'node.children.append(child)'. This method also sets the
+         child's parent attribute appropriately.
+         """
+         child.parent = self
+         self.children.append(child)
+         self.changed()
+         self.invalidate_sibling_maps()
+
+     def invalidate_sibling_maps(self) -> None:
+         self.prev_sibling_map: Optional[Dict[int, Optional[NL]]] = None
+         self.next_sibling_map: Optional[Dict[int, Optional[NL]]] = None
+
+     def update_sibling_maps(self) -> None:
+         _prev: Dict[int, Optional[NL]] = {}
+         _next: Dict[int, Optional[NL]] = {}
+         self.prev_sibling_map = _prev
+         self.next_sibling_map = _next
+         previous: Optional[NL] = None
+         for current in self.children:
+             _prev[id(current)] = previous
+             _next[id(previous)] = current
+             previous = current
+         _next[id(current)] = None
+
+
+ class Leaf(Base):
+     """Concrete implementation for leaf nodes."""
+
+     # Default values for instance variables
+     value: str
+     fixers_applied: List[Any]
+     bracket_depth: int
+     # Changed later in brackets.py
+     opening_bracket: Optional["Leaf"] = None
+     used_names: Optional[Set[str]]
+     _prefix = ""  # Whitespace and comments preceding this token in the input
+     lineno: int = 0  # Line where this token starts in the input
+     column: int = 0  # Column where this token starts in the input
+     # If not None, this Leaf is created by converting a block of fmt off/skip
+     # code, and `fmt_pass_converted_first_leaf` points to the first Leaf in the
+     # converted code.
+     fmt_pass_converted_first_leaf: Optional["Leaf"] = None
+
+     def __init__(
+         self,
+         type: int,
+         value: str,
+         context: Optional[Context] = None,
+         prefix: Optional[str] = None,
+         fixers_applied: List[Any] = [],
+         opening_bracket: Optional["Leaf"] = None,
+         fmt_pass_converted_first_leaf: Optional["Leaf"] = None,
+     ) -> None:
+         """
+         Initializer.
+
+         Takes a type constant (a token number < 256), a string value, and an
+         optional context keyword argument.
+         """
+
+         assert 0 <= type < 256, type
+         if context is not None:
+             self._prefix, (self.lineno, self.column) = context
+         self.type = type
+         self.value = value
+         if prefix is not None:
+             self._prefix = prefix
+         self.fixers_applied: Optional[List[Any]] = fixers_applied[:]
+         self.children = []
+         self.opening_bracket = opening_bracket
+         self.fmt_pass_converted_first_leaf = fmt_pass_converted_first_leaf
+
+     def __repr__(self) -> str:
+         """Return a canonical string representation."""
+         from .pgen2.token import tok_name
+
+         assert self.type is not None
+         return "{}({}, {!r})".format(
+             self.__class__.__name__,
+             tok_name.get(self.type, self.type),
+             self.value,
+         )
+
+     def __str__(self) -> str:
+         """
+         Return a pretty string representation.
+
+         This reproduces the input source exactly.
+         """
+         return self._prefix + str(self.value)
+
+     def _eq(self, other: "Leaf") -> bool:
+         """Compare two nodes for equality."""
+         return (self.type, self.value) == (other.type, other.value)
+
+     def clone(self) -> "Leaf":
+         assert self.type is not None
+         """Return a cloned (deep) copy of self."""
+         return Leaf(
+             self.type,
+             self.value,
+             (self.prefix, (self.lineno, self.column)),
+             fixers_applied=self.fixers_applied,
+         )
+
+     def leaves(self) -> Iterator["Leaf"]:
+         yield self
+
+     def post_order(self) -> Iterator["Leaf"]:
+         """Return a post-order iterator for the tree."""
+         yield self
+
+     def pre_order(self) -> Iterator["Leaf"]:
+         """Return a pre-order iterator for the tree."""
+         yield self
+
+     @property
+     def prefix(self) -> str:
+         """
+         The whitespace and comments preceding this token in the input.
+         """
+         return self._prefix
+
+     @prefix.setter
+     def prefix(self, prefix: str) -> None:
+         self.changed()
+         self._prefix = prefix
+
+
+ def convert(gr: Grammar, raw_node: RawNode) -> NL:
+     """
+     Convert raw node information to a Node or Leaf instance.
+
+     This is passed to the parser driver which calls it whenever a reduction of a
+     grammar rule produces a new complete node, so that the tree is built
+     strictly bottom-up.
+     """
+     type, value, context, children = raw_node
+     if children or type in gr.number2symbol:
+         # If there's exactly one child, return that child instead of
+         # creating a new node.
+         assert children is not None
+         if len(children) == 1:
+             return children[0]
+         return Node(type, children, context=context)
+     else:
+         return Leaf(type, value or "", context=context)
+
+
+ _Results = Dict[str, NL]
+
+
+ class BasePattern:
+     """
+     A pattern is a tree matching pattern.
+
+     It looks for a specific node type (token or symbol), and
+     optionally for a specific content.
+
+     This is an abstract base class. There are three concrete
+     subclasses:
+
+     - LeafPattern matches a single leaf node;
+     - NodePattern matches a single node (usually non-leaf);
+     - WildcardPattern matches a sequence of nodes of variable length.
+     """
+
+     # Defaults for instance variables
+     type: Optional[int]
+     type = None  # Node type (token if < 256, symbol if >= 256)
+     content: Any = None  # Optional content matching pattern
+     name: Optional[str] = None  # Optional name used to store match in results dict
+
+     def __new__(cls, *args, **kwds):
+         """Constructor that prevents BasePattern from being instantiated."""
+         assert cls is not BasePattern, "Cannot instantiate BasePattern"
+         return object.__new__(cls)
+
+     def __repr__(self) -> str:
+         assert self.type is not None
+         args = [type_repr(self.type), self.content, self.name]
+         while args and args[-1] is None:
+             del args[-1]
+         return "{}({})".format(self.__class__.__name__, ", ".join(map(repr, args)))
+
+     def _submatch(self, node, results=None) -> bool:
+         raise NotImplementedError
+
+     def optimize(self) -> "BasePattern":
+         """
+         A subclass can define this as a hook for optimizations.
+
+         Returns either self or another node with the same effect.
+         """
+         return self
+
+     def match(self, node: NL, results: Optional[_Results] = None) -> bool:
+         """
+         Does this pattern exactly match a node?
+
+         Returns True if it matches, False if not.
+
+         If results is not None, it must be a dict which will be
+         updated with the nodes matching named subpatterns.
+
+         Default implementation for non-wildcard patterns.
+         """
+         if self.type is not None and node.type != self.type:
+             return False
+         if self.content is not None:
+             r: Optional[_Results] = None
+             if results is not None:
+                 r = {}
+             if not self._submatch(node, r):
+                 return False
+             if r:
+                 assert results is not None
+                 results.update(r)
+         if results is not None and self.name:
+             results[self.name] = node
+         return True
+
+     def match_seq(self, nodes: List[NL], results: Optional[_Results] = None) -> bool:
+         """
+         Does this pattern exactly match a sequence of nodes?
+
+         Default implementation for non-wildcard patterns.
+         """
+         if len(nodes) != 1:
+             return False
+         return self.match(nodes[0], results)
+
+     def generate_matches(self, nodes: List[NL]) -> Iterator[Tuple[int, _Results]]:
+         """
+         Generator yielding all matches for this pattern.
+
+         Default implementation for non-wildcard patterns.
+         """
+         r: _Results = {}
+         if nodes and self.match(nodes[0], r):
+             yield 1, r
+
+
+ class LeafPattern(BasePattern):
+     def __init__(
+         self,
+         type: Optional[int] = None,
+         content: Optional[str] = None,
+         name: Optional[str] = None,
+     ) -> None:
+         """
+         Initializer. Takes optional type, content, and name.
+
+         The type, if given, must be a token type (< 256). If not given,
+         this matches any *leaf* node; the content may still be required.
+
+         The content, if given, must be a string.
+
+         If a name is given, the matching node is stored in the results
+         dict under that key.
+         """
+         if type is not None:
+             assert 0 <= type < 256, type
+         if content is not None:
+             assert isinstance(content, str), repr(content)
+         self.type = type
+         self.content = content
+         self.name = name
+
+     def match(self, node: NL, results=None) -> bool:
+         """Override match() to insist on a leaf node."""
+         if not isinstance(node, Leaf):
+             return False
+         return BasePattern.match(self, node, results)
+
+     def _submatch(self, node, results=None):
+         """
+         Match the pattern's content to the node's children.
+
+         This assumes the node type matches and self.content is not None.
+
+         Returns True if it matches, False if not.
+
+         If results is not None, it must be a dict which will be
+         updated with the nodes matching named subpatterns.
+
+         When returning False, the results dict may still be updated.
+         """
+         return self.content == node.value
+
+
+ class NodePattern(BasePattern):
+     wildcards: bool = False
+
+     def __init__(
+         self,
+         type: Optional[int] = None,
+         content: Optional[Iterable[str]] = None,
+         name: Optional[str] = None,
+     ) -> None:
+         """
+         Initializer. Takes optional type, content, and name.
+
+         The type, if given, must be a symbol type (>= 256). If the
+         type is None this matches *any* single node (leaf or not),
+         except if content is not None, in which case it only matches
+         non-leaf nodes that also match the content pattern.
+
+         The content, if not None, must be a sequence of Patterns that
+         must match the node's children exactly. If the content is
+         given, the type must not be None.
+
+         If a name is given, the matching node is stored in the results
+         dict under that key.
+         """
+         if type is not None:
+             assert type >= 256, type
+         if content is not None:
+             assert not isinstance(content, str), repr(content)
+             newcontent = list(content)
+             for i, item in enumerate(newcontent):
+                 assert isinstance(item, BasePattern), (i, item)
+                 # I don't even think this code is used anywhere, but it does cause
+                 # unreachable errors from mypy. This function's signature does look
+                 # odd though *shrug*.
+                 if isinstance(item, WildcardPattern):  # type: ignore[unreachable]
+                     self.wildcards = True  # type: ignore[unreachable]
+         self.type = type
+         self.content = newcontent  # TODO: this is unbound when content is None
+         self.name = name
+
+     def _submatch(self, node, results=None) -> bool:
+         """
+         Match the pattern's content to the node's children.
+
+         This assumes the node type matches and self.content is not None.
+
+         Returns True if it matches, False if not.
+
+         If results is not None, it must be a dict which will be
+         updated with the nodes matching named subpatterns.
+
+         When returning False, the results dict may still be updated.
+         """
+         if self.wildcards:
+             for c, r in generate_matches(self.content, node.children):
+                 if c == len(node.children):
+                     if results is not None:
+                         results.update(r)
+                     return True
+             return False
+         if len(self.content) != len(node.children):
+             return False
+         for subpattern, child in zip(self.content, node.children):
+             if not subpattern.match(child, results):
+                 return False
+         return True
+
+
+ class WildcardPattern(BasePattern):
+     """
+     A wildcard pattern can match zero or more nodes.
+
+     This has all the flexibility needed to implement patterns like:
+
+     .* .+ .? .{m,n}
+     (a b c | d e | f)
+     (...)* (...)+ (...)? (...){m,n}
+
+     except it always uses non-greedy matching.
+     """
+
+     min: int
+     max: int
+
+     def __init__(
+         self,
+         content: Optional[str] = None,
+         min: int = 0,
+         max: int = HUGE,
+         name: Optional[str] = None,
+     ) -> None:
+         """
+         Initializer.
+
+         Args:
+             content: optional sequence of subsequences of patterns;
+                 if absent, matches one node;
+                 if present, each subsequence is an alternative [*]
+             min: optional minimum number of times to match, default 0
+             max: optional maximum number of times to match, default HUGE
+             name: optional name assigned to this match
+
+         [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is
+             equivalent to (a b c | d e | f g h); if content is None,
+             this is equivalent to '.' in regular expression terms.
+             The min and max parameters work as follows:
+                 min=0, max=maxint: .*
+                 min=1, max=maxint: .+
+                 min=0, max=1: .?
+                 min=1, max=1: .
+             If content is not None, replace the dot with the parenthesized
+             list of alternatives, e.g. (a b c | d e | f g h)*
+         """
+         assert 0 <= min <= max <= HUGE, (min, max)
+         if content is not None:
+             f = lambda s: tuple(s)
+             wrapped_content = tuple(map(f, content))  # Protect against alterations
+             # Check sanity of alternatives
+             assert len(wrapped_content), repr(
+                 wrapped_content
+             )  # Can't have zero alternatives
+             for alt in wrapped_content:
+                 assert len(alt), repr(alt)  # Can't have empty alternatives
+             self.content = wrapped_content
+         self.min = min
+         self.max = max
+         self.name = name
+
+     def optimize(self) -> Any:
+         """Optimize certain stacked wildcard patterns."""
+         subpattern = None
+         if (
+             self.content is not None
+             and len(self.content) == 1
+             and len(self.content[0]) == 1
+         ):
+             subpattern = self.content[0][0]
+         if self.min == 1 and self.max == 1:
+             if self.content is None:
+                 return NodePattern(name=self.name)
+             if subpattern is not None and self.name == subpattern.name:
+                 return subpattern.optimize()
+         if (
+             self.min <= 1
+             and isinstance(subpattern, WildcardPattern)
+             and subpattern.min <= 1
+             and self.name == subpattern.name
+         ):
+             return WildcardPattern(
+                 subpattern.content,
+                 self.min * subpattern.min,
+                 self.max * subpattern.max,
+                 subpattern.name,
+             )
+         return self
+
+     def match(self, node, results=None) -> bool:
+         """Does this pattern exactly match a node?"""
+         return self.match_seq([node], results)
+
+     def match_seq(self, nodes, results=None) -> bool:
+         """Does this pattern exactly match a sequence of nodes?"""
+         for c, r in self.generate_matches(nodes):
+             if c == len(nodes):
+                 if results is not None:
+                     results.update(r)
+                     if self.name:
+                         results[self.name] = list(nodes)
+                 return True
+         return False
+
+     def generate_matches(self, nodes) -> Iterator[Tuple[int, _Results]]:
+         """
+         Generator yielding matches for a sequence of nodes.
+
+         Args:
+             nodes: sequence of nodes
+
+         Yields:
+             (count, results) tuples where:
+             count: the match comprises nodes[:count];
+             results: dict containing named submatches.
+         """
+         if self.content is None:
+             # Shortcut for special case (see __init__.__doc__)
+             for count in range(self.min, 1 + min(len(nodes), self.max)):
+                 r = {}
+                 if self.name:
+                     r[self.name] = nodes[:count]
+                 yield count, r
+         elif self.name == "bare_name":
+             yield self._bare_name_matches(nodes)
+         else:
+             # The reason for this is that hitting the recursion limit usually
+             # results in some ugly messages about how RuntimeErrors are being
+             # ignored. We only have to do this on CPython, though, because other
+             # implementations don't have this nasty bug in the first place.
+             if hasattr(sys, "getrefcount"):
+                 save_stderr = sys.stderr
+                 sys.stderr = StringIO()
+             try:
+                 for count, r in self._recursive_matches(nodes, 0):
+                     if self.name:
+                         r[self.name] = nodes[:count]
+                     yield count, r
+             except RuntimeError:
+                 # We fall back to the iterative pattern matching scheme if the recursive
+                 # scheme hits the recursion limit.
+                 for count, r in self._iterative_matches(nodes):
+                     if self.name:
+                         r[self.name] = nodes[:count]
+                     yield count, r
+             finally:
+                 if hasattr(sys, "getrefcount"):
+                     sys.stderr = save_stderr
+
+     def _iterative_matches(self, nodes) -> Iterator[Tuple[int, _Results]]:
+         """Helper to iteratively yield the matches."""
+         nodelen = len(nodes)
+         if 0 >= self.min:
+             yield 0, {}
+
+         results = []
+         # generate matches that use just one alt from self.content
+         for alt in self.content:
+             for c, r in generate_matches(alt, nodes):
+                 yield c, r
+                 results.append((c, r))
+
+         # for each match, iterate down the nodes
+         while results:
+             new_results = []
+             for c0, r0 in results:
+                 # stop if the entire set of nodes has been matched
+                 if c0 < nodelen and c0 <= self.max:
+                     for alt in self.content:
+                         for c1, r1 in generate_matches(alt, nodes[c0:]):
+                             if c1 > 0:
+                                 r = {}
+                                 r.update(r0)
+                                 r.update(r1)
+                                 yield c0 + c1, r
+                                 new_results.append((c0 + c1, r))
+             results = new_results
+
+     def _bare_name_matches(self, nodes) -> Tuple[int, _Results]:
+         """Special optimized matcher for bare_name."""
+         count = 0
+         r = {}  # type: _Results
+         done = False
+         max = len(nodes)
+         while not done and count < max:
+             done = True
+             for leaf in self.content:
+                 if leaf[0].match(nodes[count], r):
+                     count += 1
+                     done = False
+                     break
+         assert self.name is not None
+         r[self.name] = nodes[:count]
+         return count, r
+
+     def _recursive_matches(self, nodes, count) -> Iterator[Tuple[int, _Results]]:
+         """Helper to recursively yield the matches."""
+         assert self.content is not None
+         if count >= self.min:
+             yield 0, {}
+         if count < self.max:
+             for alt in self.content:
+                 for c0, r0 in generate_matches(alt, nodes):
+                     for c1, r1 in self._recursive_matches(nodes[c0:], count + 1):
+                         r = {}
+                         r.update(r0)
+                         r.update(r1)
+                         yield c0 + c1, r
+
+
+ class NegatedPattern(BasePattern):
+     def __init__(self, content: Optional[BasePattern] = None) -> None:
+         """
+         Initializer.
+
+         The argument is either a pattern or None. If it is None, this
+         only matches an empty sequence (effectively '$' in regex
+         lingo). If it is not None, this matches whenever the argument
+         pattern doesn't have any matches.
+         """
+         if content is not None:
+             assert isinstance(content, BasePattern), repr(content)
+         self.content = content
+
+     def match(self, node, results=None) -> bool:
+         # We never match a node in its entirety
+         return False
+
+     def match_seq(self, nodes, results=None) -> bool:
+         # We only match an empty sequence of nodes in its entirety
+         return len(nodes) == 0
+
+     def generate_matches(self, nodes: List[NL]) -> Iterator[Tuple[int, _Results]]:
+         if self.content is None:
+             # Return a match if there is an empty sequence
+             if len(nodes) == 0:
+                 yield 0, {}
+         else:
+             # Return a match if the argument pattern has no matches
+             for c, r in self.content.generate_matches(nodes):
+                 return
+             yield 0, {}
+
+
+ def generate_matches(
+     patterns: List[BasePattern], nodes: List[NL]
+ ) -> Iterator[Tuple[int, _Results]]:
+     """
+     Generator yielding matches for a sequence of patterns and nodes.
+
+     Args:
+         patterns: a sequence of patterns
+         nodes: a sequence of nodes
+
+     Yields:
+         (count, results) tuples where:
+         count: the entire sequence of patterns matches nodes[:count];
+         results: dict containing named submatches.
+     """
+     if not patterns:
+         yield 0, {}
+     else:
+         p, rest = patterns[0], patterns[1:]
+         for c0, r0 in p.generate_matches(nodes):
+             if not rest:
+                 yield c0, r0
+             else:
+                 for c1, r1 in generate_matches(rest, nodes[c0:]):
+                     r = {}
+                     r.update(r0)
+                     r.update(r1)
+                     yield c0 + c1, r
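
The tree and pattern classes above can be exercised without running a parser. A sketch (illustrative, not part of the commit; the symbol number 256 and the capture names "lhs"/"rhs" are arbitrary choices):

from blib2to3 import pytree
from blib2to3.pgen2 import token

# Build `x = 1` by hand: prefixes carry the whitespace between tokens.
tree = pytree.Node(
    256,  # any symbol number >= 256 is accepted by Node.__init__
    [
        pytree.Leaf(token.NAME, "x"),
        pytree.Leaf(token.OP, "=", prefix=" "),
        pytree.Leaf(token.NUMBER, "1", prefix=" "),
    ],
)
assert str(tree) == "x = 1"  # __str__ reproduces the source exactly

pat = pytree.NodePattern(
    256,
    [
        pytree.LeafPattern(token.NAME, name="lhs"),
        pytree.LeafPattern(token.OP, "="),
        pytree.WildcardPattern(name="rhs"),  # non-greedy; matches the rest
    ],
)
results = {}
assert pat.match(tree, results)  # fills results["lhs"] and results["rhs"]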
janus/share/terminfo/b/beehive4 ADDED
Binary file (333 Bytes).
janus/share/terminfo/b/bg1.25nv ADDED
Binary file (570 Bytes).
janus/share/terminfo/h/h100bw ADDED
Binary file (689 Bytes).
janus/share/terminfo/h/h19-a ADDED
Binary file (671 Bytes).
janus/share/terminfo/h/h19a ADDED
Binary file (671 Bytes).
janus/share/terminfo/h/h19us ADDED
Binary file (633 Bytes).
janus/share/terminfo/h/h29a-nkc-uc ADDED
Binary file (1.69 kB).
janus/share/terminfo/h/h80 ADDED
Binary file (1.47 kB).
janus/share/terminfo/h/hds200 ADDED
Binary file (1.65 kB).
janus/share/terminfo/h/he80 ADDED
Binary file (1.47 kB).
janus/share/terminfo/h/heath-ansi ADDED
Binary file (671 Bytes).
janus/share/terminfo/h/hp+pfk+arrows ADDED
Binary file (266 Bytes).
janus/share/terminfo/h/hp110 ADDED
Binary file (528 Bytes).
janus/share/terminfo/h/hp2621 ADDED
Binary file (622 Bytes).
janus/share/terminfo/h/hp2621-ba ADDED
Binary file (606 Bytes).
janus/share/terminfo/h/hp2621-fl ADDED
Binary file (566 Bytes).
janus/share/terminfo/h/hp2621b ADDED
Binary file (716 Bytes).
janus/share/terminfo/h/hp2621b-kx ADDED
Binary file (742 Bytes).
janus/share/terminfo/h/hp2621b-kx-p ADDED
Binary file (781 Bytes).
janus/share/terminfo/h/hp2621b-p ADDED
Binary file (739 Bytes).
janus/share/terminfo/h/hp2623a ADDED
Binary file (1.2 kB).
janus/share/terminfo/h/hp2624a-10p ADDED
Binary file (1.29 kB).
janus/share/terminfo/h/hp2626-12-s ADDED
Binary file (1.38 kB).
janus/share/terminfo/h/hp2626p ADDED
Binary file (1.23 kB).
janus/share/terminfo/h/hp2627a ADDED
Binary file (633 Bytes).
janus/share/terminfo/h/hp2640a ADDED
Binary file (658 Bytes).
janus/share/terminfo/h/hp45 ADDED
Binary file (700 Bytes).
janus/share/terminfo/h/hp9837 ADDED
Binary file (562 Bytes).
janus/share/terminfo/h/hpterm ADDED
Binary file (1.39 kB).
janus/share/terminfo/h/hz1000 ADDED
Binary file (354 Bytes).
janus/share/terminfo/h/hz1500 ADDED
Binary file (452 Bytes).
janus/share/terminfo/j/jaixterm-m ADDED
Binary file (1.5 kB).
janus/share/terminfo/j/jfbterm ADDED
Binary file (1.62 kB).
janus/share/terminfo/p/p14 ADDED
Binary file (1.15 kB).
janus/share/terminfo/p/p5 ADDED
Binary file (743 Bytes).
janus/share/terminfo/p/p9 ADDED
Binary file (1.15 kB).