Columns:
  id: int64 (0 to 190k)
  prompt: string (length 21 to 13.4M)
  docstring: string (length 1 to 12k)
id: 187,782
```python
import math
import numbers
import random
from fractions import Fraction
from decimal import Decimal
from itertools import groupby, repeat
from bisect import bisect_left, bisect_right
from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum
from operator import itemgetter
from collections import Counter, namedtuple


class StatisticsError(ValueError):
    pass
```

The provided code snippet includes the necessary dependencies for implementing the `median_high` function. Write a Python function `def median_high(data)` to solve the following problem:

Return the high median of data. When the number of data points is odd, the middle value is returned. When it is even, the larger of the two middle values is returned.

    >>> median_high([1, 3, 5])
    3
    >>> median_high([1, 3, 5, 7])
    5

Here is the function:

```python
def median_high(data):
    """Return the high median of data.

    When the number of data points is odd, the middle value is returned.
    When it is even, the larger of the two middle values is returned.

    >>> median_high([1, 3, 5])
    3
    >>> median_high([1, 3, 5, 7])
    5
    """
    data = sorted(data)
    n = len(data)
    if n == 0:
        raise StatisticsError("no median for empty data")
    return data[n // 2]
```
Return the high median of data. When the number of data points is odd, the middle value is returned. When it is even, the larger of the two middle values is returned.

    >>> median_high([1, 3, 5])
    3
    >>> median_high([1, 3, 5, 7])
    5
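Since this is the implementation that ships as `statistics.median_high`, the even-length behaviour can be checked directly against the standard library; a minimal sketch contrasting it with its sibling functions `median_low` and `median`:

```python
import statistics

data = [1, 3, 5, 7]

# With an even number of points there is no single middle element:
# median_high takes the larger of the two middle values,
# median_low the smaller, and median their average.
high = statistics.median_high(data)  # 5
low = statistics.median_low(data)    # 3
mid = statistics.median(data)        # 4.0
```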
id: 187,783
```python
import math
import numbers
import random
from fractions import Fraction
from decimal import Decimal
from itertools import groupby, repeat
from bisect import bisect_left, bisect_right
from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum
from operator import itemgetter
from collections import Counter, namedtuple


class StatisticsError(ValueError):
    pass


def _find_lteq(a, x):
    'Locate the leftmost value exactly equal to x'
    i = bisect_left(a, x)
    if i != len(a) and a[i] == x:
        return i
    raise ValueError


def _find_rteq(a, l, x):
    'Locate the rightmost value exactly equal to x'
    i = bisect_right(a, x, lo=l)
    if i != (len(a) + 1) and a[i - 1] == x:
        return i - 1
    raise ValueError
```

The provided code snippet includes the necessary dependencies for implementing the `median_grouped` function. Write a Python function `def median_grouped(data, interval=1)` to solve the following problem:

Return the 50th percentile (median) of grouped continuous data.

    >>> median_grouped([1, 2, 2, 3, 4, 4, 4, 4, 4, 5])
    3.7
    >>> median_grouped([52, 52, 53, 54])
    52.5

This calculates the median as the 50th percentile, and should be used when your data is continuous and grouped. In the above example, the values 1, 2, 3, etc. actually represent the midpoint of classes 0.5-1.5, 1.5-2.5, 2.5-3.5, etc. The middle value falls somewhere in class 3.5-4.5, and interpolation is used to estimate it.

Optional argument ``interval`` represents the class interval, and defaults to 1. Changing the class interval naturally will change the interpolated 50th percentile value:

    >>> median_grouped([1, 3, 3, 5, 7], interval=1)
    3.25
    >>> median_grouped([1, 3, 3, 5, 7], interval=2)
    3.5

This function does not check whether the data points are at least ``interval`` apart.

Here is the function:

```python
def median_grouped(data, interval=1):
    """Return the 50th percentile (median) of grouped continuous data.

    >>> median_grouped([1, 2, 2, 3, 4, 4, 4, 4, 4, 5])
    3.7
    >>> median_grouped([52, 52, 53, 54])
    52.5

    This calculates the median as the 50th percentile, and should be
    used when your data is continuous and grouped. In the above example,
    the values 1, 2, 3, etc. actually represent the midpoint of classes
    0.5-1.5, 1.5-2.5, 2.5-3.5, etc. The middle value falls somewhere in
    class 3.5-4.5, and interpolation is used to estimate it.

    Optional argument ``interval`` represents the class interval, and
    defaults to 1. Changing the class interval naturally will change the
    interpolated 50th percentile value:

    >>> median_grouped([1, 3, 3, 5, 7], interval=1)
    3.25
    >>> median_grouped([1, 3, 3, 5, 7], interval=2)
    3.5

    This function does not check whether the data points are at least
    ``interval`` apart.
    """
    data = sorted(data)
    n = len(data)
    if n == 0:
        raise StatisticsError("no median for empty data")
    elif n == 1:
        return data[0]
    # Find the value at the midpoint. Remember this corresponds to the
    # centre of the class interval.
    x = data[n // 2]
    for obj in (x, interval):
        if isinstance(obj, (str, bytes)):
            raise TypeError('expected number but got %r' % obj)
    try:
        L = x - interval / 2  # The lower limit of the median interval.
    except TypeError:
        # Mixed type. For now we just coerce to float.
        L = float(x) - float(interval) / 2

    # Uses bisection search to search for x in data with log(n) time complexity
    # Find the position of leftmost occurrence of x in data
    l1 = _find_lteq(data, x)
    # Find the position of rightmost occurrence of x in data[l1...len(data)]
    # Assuming always l1 <= l2
    l2 = _find_rteq(data, l1, x)
    cf = l1
    f = l2 - l1 + 1
    return L + interval * (n / 2 - cf) / f
```
Return the 50th percentile (median) of grouped continuous data.

    >>> median_grouped([1, 2, 2, 3, 4, 4, 4, 4, 4, 5])
    3.7
    >>> median_grouped([52, 52, 53, 54])
    52.5

This calculates the median as the 50th percentile, and should be used when your data is continuous and grouped. In the above example, the values 1, 2, 3, etc. actually represent the midpoint of classes 0.5-1.5, 1.5-2.5, 2.5-3.5, etc. The middle value falls somewhere in class 3.5-4.5, and interpolation is used to estimate it.

Optional argument ``interval`` represents the class interval, and defaults to 1. Changing the class interval naturally will change the interpolated 50th percentile value:

    >>> median_grouped([1, 3, 3, 5, 7], interval=1)
    3.25
    >>> median_grouped([1, 3, 3, 5, 7], interval=2)
    3.5

This function does not check whether the data points are at least ``interval`` apart.
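The interpolation in the first doctest can be traced by hand. The sketch below recomputes the lower class limit `L`, the cumulative frequency `cf` below the median class, and the frequency `f` inside it (using plain `list.index`/`list.count` rather than the bisection helpers), then compares the estimate with the standard library's `statistics.median_grouped`:

```python
import statistics

data = sorted([1, 2, 2, 3, 4, 4, 4, 4, 4, 5])
n = len(data)                        # 10
x = data[n // 2]                     # 4: the class holding the midpoint
L = x - 1 / 2                        # 3.5: lower limit of the 3.5-4.5 class
cf = data.index(x)                   # 4 values fall below the median class
f = data.count(x)                    # 5 values fall inside it
estimate = L + 1 * (n / 2 - cf) / f  # 3.5 + (5 - 4) / 5 = 3.7
```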
id: 187,784
```python
import math
import numbers
import random
from fractions import Fraction
from decimal import Decimal
from itertools import groupby, repeat
from bisect import bisect_left, bisect_right
from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum
from operator import itemgetter
from collections import Counter, namedtuple


class StatisticsError(ValueError):
    pass


class Counter(dict):
    '''Dict subclass for counting hashable items.  Sometimes called a bag
    or multiset.  Elements are stored as dictionary keys and their counts
    are stored as dictionary values.

    >>> c = Counter('abcdeabcdabcaba')  # count elements from a string

    >>> c.most_common(3)                # three most common elements
    [('a', 5), ('b', 4), ('c', 3)]
    >>> sorted(c)                       # list all unique elements
    ['a', 'b', 'c', 'd', 'e']
    >>> ''.join(sorted(c.elements()))   # list elements with repetitions
    'aaaaabbbbcccdde'
    >>> sum(c.values())                 # total of all counts
    15

    >>> c['a']                          # count of letter 'a'
    5
    >>> for elem in 'shazam':           # update counts from an iterable
    ...     c[elem] += 1                # by adding 1 to each element's count
    >>> c['a']                          # now there are seven 'a'
    7
    >>> del c['b']                      # remove all 'b'
    >>> c['b']                          # now there are zero 'b'
    0

    >>> d = Counter('simsalabim')       # make another counter
    >>> c.update(d)                     # add in the second counter
    >>> c['a']                          # now there are nine 'a'
    9

    >>> c.clear()                       # empty the counter
    >>> c
    Counter()

    Note:  If a count is set to zero or reduced to zero, it will remain
    in the counter until the entry is deleted or the counter is cleared:

    >>> c = Counter('aaabbc')
    >>> c['b'] -= 2                     # reduce the count of 'b' by two
    >>> c.most_common()                 # 'b' is still in, but its count is zero
    [('a', 3), ('c', 1), ('b', 0)]

    '''
    # References:
    #   http://en.wikipedia.org/wiki/Multiset
    #   http://www.gnu.org/software/smalltalk/manual-base/html_node/Bag.html
    #   http://www.demo2s.com/Tutorial/Cpp/0380__set-multiset/Catalog0380__set-multiset.htm
    #   http://code.activestate.com/recipes/259174/
    #   Knuth, TAOCP Vol. II section 4.6.3

    def __init__(self, iterable=None, /, **kwds):
        '''Create a new, empty Counter object.  And if given, count elements
        from an input iterable.  Or, initialize the count from another mapping
        of elements to their counts.

        >>> c = Counter()                      # a new, empty counter
        >>> c = Counter('gallahad')            # a new counter from an iterable
        >>> c = Counter({'a': 4, 'b': 2})      # a new counter from a mapping
        >>> c = Counter(a=4, b=2)              # a new counter from keyword args

        '''
        super().__init__()
        self.update(iterable, **kwds)

    def __missing__(self, key):
        'The count of elements not in the Counter is zero.'
        # Needed so that self[missing_item] does not raise KeyError
        return 0

    def total(self):
        'Sum of the counts'
        return sum(self.values())

    def most_common(self, n=None):
        '''List the n most common elements and their counts from the most
        common to the least.  If n is None, then list all element counts.

        >>> Counter('abracadabra').most_common(3)
        [('a', 5), ('b', 2), ('r', 2)]

        '''
        # Emulate Bag.sortedByCount from Smalltalk
        if n is None:
            return sorted(self.items(), key=_itemgetter(1), reverse=True)

        # Lazy import to speedup Python startup time
        import heapq
        return heapq.nlargest(n, self.items(), key=_itemgetter(1))

    def elements(self):
        '''Iterator over elements repeating each as many times as its count.

        >>> c = Counter('ABCABC')
        >>> sorted(c.elements())
        ['A', 'A', 'B', 'B', 'C', 'C']

        # Knuth's example for prime factors of 1836:  2**2 * 3**3 * 17**1
        >>> prime_factors = Counter({2: 2, 3: 3, 17: 1})
        >>> product = 1
        >>> for factor in prime_factors.elements():     # loop over factors
        ...     product *= factor                       # and multiply them
        >>> product
        1836

        Note, if an element's count has been set to zero or is a negative
        number, elements() will ignore it.

        '''
        # Emulate Bag.do from Smalltalk and Multiset.begin from C++.
        return _chain.from_iterable(_starmap(_repeat, self.items()))

    # Override dict methods where necessary

    @classmethod
    def fromkeys(cls, iterable, v=None):
        # There is no equivalent method for counters because the semantics
        # would be ambiguous in cases such as Counter.fromkeys('aaabbc', v=2).
        # Initializing counters to zero values isn't necessary because zero
        # is already the default value for counter lookups.  Initializing
        # to one is easily accomplished with Counter(set(iterable)).  For
        # more exotic cases, create a dictionary first using a dictionary
        # comprehension or dict.fromkeys().
        raise NotImplementedError(
            'Counter.fromkeys() is undefined.  Use Counter(iterable) instead.')

    def update(self, iterable=None, /, **kwds):
        '''Like dict.update() but add counts instead of replacing them.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.update('witch')           # add elements from another iterable
        >>> d = Counter('watch')
        >>> c.update(d)                 # add elements from another counter
        >>> c['h']                      # four 'h' in which, witch, and watch
        4

        '''
        # The regular dict.update() operation makes no sense here because the
        # replace behavior results in some of the original untouched counts
        # being mixed-in with all of the other counts for a mishmash that
        # doesn't have a straight-forward interpretation in most counting
        # contexts.  Instead, we implement straight-addition.  Both the inputs
        # and outputs are allowed to contain zero and negative counts.

        if iterable is not None:
            if isinstance(iterable, _collections_abc.Mapping):
                if self:
                    self_get = self.get
                    for elem, count in iterable.items():
                        self[elem] = count + self_get(elem, 0)
                else:
                    # fast path when counter is empty
                    super().update(iterable)
            else:
                _count_elements(self, iterable)
        if kwds:
            self.update(kwds)

    def subtract(self, iterable=None, /, **kwds):
        '''Like dict.update() but subtracts counts instead of replacing them.
        Counts can be reduced below zero.  Both the inputs and outputs are
        allowed to contain zero and negative counts.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.subtract('witch')             # subtract elements from another iterable
        >>> c.subtract(Counter('watch'))    # subtract elements from another counter
        >>> c['h']                          # 2 in which, minus 1 in witch, minus 1 in watch
        0
        >>> c['w']                          # 1 in which, minus 1 in witch, minus 1 in watch
        -1

        '''
        if iterable is not None:
            self_get = self.get
            if isinstance(iterable, _collections_abc.Mapping):
                for elem, count in iterable.items():
                    self[elem] = self_get(elem, 0) - count
            else:
                for elem in iterable:
                    self[elem] = self_get(elem, 0) - 1
        if kwds:
            self.subtract(kwds)

    def copy(self):
        'Return a shallow copy.'
        return self.__class__(self)

    def __reduce__(self):
        return self.__class__, (dict(self),)

    def __delitem__(self, elem):
        'Like dict.__delitem__() but does not raise KeyError for missing values.'
        if elem in self:
            super().__delitem__(elem)

    def __eq__(self, other):
        'True if all counts agree. Missing counts are treated as zero.'
        if not isinstance(other, Counter):
            return NotImplemented
        return all(self[e] == other[e] for c in (self, other) for e in c)

    def __ne__(self, other):
        'True if any counts disagree. Missing counts are treated as zero.'
        if not isinstance(other, Counter):
            return NotImplemented
        return not self == other

    def __le__(self, other):
        'True if all counts in self are a subset of those in other.'
        if not isinstance(other, Counter):
            return NotImplemented
        return all(self[e] <= other[e] for c in (self, other) for e in c)

    def __lt__(self, other):
        'True if all counts in self are a proper subset of those in other.'
        if not isinstance(other, Counter):
            return NotImplemented
        return self <= other and self != other

    def __ge__(self, other):
        'True if all counts in self are a superset of those in other.'
        if not isinstance(other, Counter):
            return NotImplemented
        return all(self[e] >= other[e] for c in (self, other) for e in c)

    def __gt__(self, other):
        'True if all counts in self are a proper superset of those in other.'
        if not isinstance(other, Counter):
            return NotImplemented
        return self >= other and self != other

    def __repr__(self):
        if not self:
            return f'{self.__class__.__name__}()'
        try:
            # dict() preserves the ordering returned by most_common()
            d = dict(self.most_common())
        except TypeError:
            # handle case where values are not orderable
            d = dict(self)
        return f'{self.__class__.__name__}({d!r})'

    # Multiset-style mathematical operations discussed in:
    #       Knuth TAOCP Volume II section 4.6.3 exercise 19
    #       and at http://en.wikipedia.org/wiki/Multiset
    #
    # Outputs guaranteed to only include positive counts.
    #
    # To strip negative and zero counts, add-in an empty counter:
    #       c += Counter()
    #
    # Results are ordered according to when an element is first
    # encountered in the left operand and then by the order
    # encountered in the right operand.
    #
    # When the multiplicities are all zero or one, multiset operations
    # are guaranteed to be equivalent to the corresponding operations
    # for regular sets.
    #     Given counter multisets such as:
    #         cp = Counter(a=1, b=0, c=1)
    #         cq = Counter(c=1, d=0, e=1)
    #     The corresponding regular sets would be:
    #         sp = {'a', 'c'}
    #         sq = {'c', 'e'}
    #     All of the following relations would hold:
    #         set(cp + cq) == sp | sq
    #         set(cp - cq) == sp - sq
    #         set(cp | cq) == sp | sq
    #         set(cp & cq) == sp & sq
    #         (cp == cq) == (sp == sq)
    #         (cp != cq) == (sp != sq)
    #         (cp <= cq) == (sp <= sq)
    #         (cp < cq) == (sp < sq)
    #         (cp >= cq) == (sp >= sq)
    #         (cp > cq) == (sp > sq)

    def __add__(self, other):
        '''Add counts from two counters.

        >>> Counter('abbb') + Counter('bcc')
        Counter({'b': 4, 'c': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            newcount = count + other[elem]
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count > 0:
                result[elem] = count
        return result

    def __sub__(self, other):
        '''Subtract count, but keep only results with positive counts.

        >>> Counter('abbbc') - Counter('bccd')
        Counter({'b': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            newcount = count - other[elem]
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count < 0:
                result[elem] = 0 - count
        return result

    def __or__(self, other):
        '''Union is the maximum of value in either of the input counters.

        >>> Counter('abbb') | Counter('bcc')
        Counter({'b': 3, 'c': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            other_count = other[elem]
            newcount = other_count if count < other_count else count
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count > 0:
                result[elem] = count
        return result

    def __and__(self, other):
        '''Intersection is the minimum of corresponding counts.

        >>> Counter('abbb') & Counter('bcc')
        Counter({'b': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            other_count = other[elem]
            newcount = count if count < other_count else other_count
            if newcount > 0:
                result[elem] = newcount
        return result

    def __pos__(self):
        'Adds an empty counter, effectively stripping negative and zero counts'
        result = Counter()
        for elem, count in self.items():
            if count > 0:
                result[elem] = count
        return result

    def __neg__(self):
        '''Subtracts from an empty counter.  Strips positive and zero counts,
        and flips the sign on negative counts.

        '''
        result = Counter()
        for elem, count in self.items():
            if count < 0:
                result[elem] = 0 - count
        return result

    def _keep_positive(self):
        '''Internal method to strip elements with a negative or zero count'''
        nonpositive = [elem for elem, count in self.items() if not count > 0]
        for elem in nonpositive:
            del self[elem]
        return self

    def __iadd__(self, other):
        '''Inplace add from another counter, keeping only positive counts.

        >>> c = Counter('abbb')
        >>> c += Counter('bcc')
        >>> c
        Counter({'b': 4, 'c': 2, 'a': 1})

        '''
        for elem, count in other.items():
            self[elem] += count
        return self._keep_positive()

    def __isub__(self, other):
        '''Inplace subtract counter, but keep only results with positive counts.

        >>> c = Counter('abbbc')
        >>> c -= Counter('bccd')
        >>> c
        Counter({'b': 2, 'a': 1})

        '''
        for elem, count in other.items():
            self[elem] -= count
        return self._keep_positive()

    def __ior__(self, other):
        '''Inplace union is the maximum of value from either counter.

        >>> c = Counter('abbb')
        >>> c |= Counter('bcc')
        >>> c
        Counter({'b': 3, 'c': 2, 'a': 1})

        '''
        for elem, other_count in other.items():
            count = self[elem]
            if other_count > count:
                self[elem] = other_count
        return self._keep_positive()

    def __iand__(self, other):
        '''Inplace intersection is the minimum of corresponding counts.

        >>> c = Counter('abbb')
        >>> c &= Counter('bcc')
        >>> c
        Counter({'b': 1})

        '''
        for elem, count in self.items():
            other_count = other[elem]
            if other_count < count:
                self[elem] = other_count
        return self._keep_positive()
```

The provided code snippet includes the necessary dependencies for implementing the `mode` function. Write a Python function `def mode(data)` to solve the following problem:

Return the most common data point from discrete or nominal data. ``mode`` assumes discrete data, and returns a single value. This is the standard treatment of the mode as commonly taught in schools:

    >>> mode([1, 1, 2, 3, 3, 3, 3, 4])
    3

This also works with nominal (non-numeric) data:

    >>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
    'red'

If there are multiple modes with the same frequency, return the first one encountered:

    >>> mode(['red', 'red', 'green', 'blue', 'blue'])
    'red'

If *data* is empty, ``mode`` raises StatisticsError.

Here is the function:

```python
def mode(data):
    """Return the most common data point from discrete or nominal data.

    ``mode`` assumes discrete data, and returns a single value. This is the
    standard treatment of the mode as commonly taught in schools:

    >>> mode([1, 1, 2, 3, 3, 3, 3, 4])
    3

    This also works with nominal (non-numeric) data:

    >>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
    'red'

    If there are multiple modes with the same frequency, return the first
    one encountered:

    >>> mode(['red', 'red', 'green', 'blue', 'blue'])
    'red'

    If *data* is empty, ``mode`` raises StatisticsError.
    """
    pairs = Counter(iter(data)).most_common(1)
    try:
        return pairs[0][0]
    except IndexError:
        raise StatisticsError('no mode for empty data') from None
```
Return the most common data point from discrete or nominal data. ``mode`` assumes discrete data, and returns a single value. This is the standard treatment of the mode as commonly taught in schools:

    >>> mode([1, 1, 2, 3, 3, 3, 3, 4])
    3

This also works with nominal (non-numeric) data:

    >>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
    'red'

If there are multiple modes with the same frequency, return the first one encountered:

    >>> mode(['red', 'red', 'green', 'blue', 'blue'])
    'red'

If *data* is empty, ``mode`` raises StatisticsError.
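`mode` delegates the counting to `Counter.most_common(1)`, so the tie-breaking rule (the first mode encountered wins) follows from `Counter` preserving insertion order; a minimal trace of the tied-votes doctest:

```python
from collections import Counter

votes = ['red', 'red', 'green', 'blue', 'blue']

# most_common(1) returns the single highest-count pair. With equal
# counts, the first-encountered element is preferred, so the tie
# between 'red' (2) and 'blue' (2) resolves to 'red'.
pairs = Counter(votes).most_common(1)  # [('red', 2)]
winner = pairs[0][0]                   # 'red'
```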
id: 187,785
```python
import math
import numbers
import random
from fractions import Fraction
from decimal import Decimal
from itertools import groupby, repeat
from bisect import bisect_left, bisect_right
from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum
from operator import itemgetter
from collections import Counter, namedtuple


class itemgetter:
    """
    Return a callable object that fetches the given item(s) from its operand.
    After f = itemgetter(2), the call f(r) returns r[2].
    After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3])
    """
    __slots__ = ('_items', '_call')

    def __init__(self, item, *items):
        if not items:
            self._items = (item,)

            def func(obj):
                return obj[item]
            self._call = func
        else:
            self._items = items = (item,) + items

            def func(obj):
                return tuple(obj[i] for i in items)
            self._call = func

    def __call__(self, obj):
        return self._call(obj)

    def __repr__(self):
        return '%s.%s(%s)' % (self.__class__.__module__,
                              self.__class__.__name__,
                              ', '.join(map(repr, self._items)))

    def __reduce__(self):
        return self.__class__, self._items


# This record also restates the full ``Counter`` implementation verbatim
# from record 187,784; it is not duplicated here.
```

The provided code snippet includes the necessary dependencies for implementing the `multimode` function. Write a Python function `def multimode(data)` to solve the following problem:

Return a list of the most frequently occurring values. Will return more than one result if there are multiple modes or an empty list if *data* is empty.

    >>> multimode('aabbbbbbbbcc')
    ['b']
    >>> multimode('aabbbbccddddeeffffgg')
    ['b', 'd', 'f']
    >>> multimode('')
    []

Here is the function:

```python
def multimode(data):
    """Return a list of the most frequently occurring values.

    Will return more than one result if there are multiple modes
    or an empty list if *data* is empty.

    >>> multimode('aabbbbbbbbcc')
    ['b']
    >>> multimode('aabbbbccddddeeffffgg')
    ['b', 'd', 'f']
    >>> multimode('')
    []
    """
    counts = Counter(iter(data)).most_common()
    maxcount, mode_items = next(groupby(counts, key=itemgetter(1)), (0, []))
    return list(map(itemgetter(0), mode_items))
```
Return a list of the most frequently occurring values. Will return more than one result if there are multiple modes or an empty list if *data* is empty. >>> multimode('aabbbbbbbbcc') ['b'] >>> multimode('aabbbbccddddeeffffgg') ['b', 'd', 'f'] >>> multimode('') []
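The function above mirrors `statistics.multimode` from the standard library (Python 3.8+), so its behavior can be sanity-checked directly against the stdlib version:

```python
from statistics import multimode

# Three values are tied for the highest count (four occurrences each),
# so all three come back, in first-encountered order.
assert multimode('aabbbbccddddeeffffgg') == ['b', 'd', 'f']

# A single clear winner comes back as a one-element list.
assert multimode('aabbbbbbbbcc') == ['b']

# Unlike mode(), an empty input yields an empty list instead of raising.
assert multimode('') == []
```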
187,786
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple class StatisticsError(ValueError): pass The provided code snippet includes necessary dependencies for implementing the `quantiles` function. Write a Python function `def quantiles(data, *, n=4, method='exclusive')` to solve the following problem: Divide *data* into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cuts points that separate *data* in to 100 equal sized groups. The *data* can be any iterable containing sample. The cut points are linearly interpolated between data points. If *method* is set to *inclusive*, *data* is treated as population data. The minimum value is treated as the 0th percentile and the maximum value is treated as the 100th percentile. Here is the function: def quantiles(data, *, n=4, method='exclusive'): """Divide *data* into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cuts points that separate *data* in to 100 equal sized groups. The *data* can be any iterable containing sample. The cut points are linearly interpolated between data points. If *method* is set to *inclusive*, *data* is treated as population data. The minimum value is treated as the 0th percentile and the maximum value is treated as the 100th percentile. 
""" if n < 1: raise StatisticsError('n must be at least 1') data = sorted(data) ld = len(data) if ld < 2: raise StatisticsError('must have at least two data points') if method == 'inclusive': m = ld - 1 result = [] for i in range(1, n): j, delta = divmod(i * m, n) interpolated = (data[j] * (n - delta) + data[j + 1] * delta) / n result.append(interpolated) return result if method == 'exclusive': m = ld + 1 result = [] for i in range(1, n): j = i * m // n # rescale i to m/n j = 1 if j < 1 else ld-1 if j > ld-1 else j # clamp to 1 .. ld-1 delta = i*m - j*n # exact integer math interpolated = (data[j - 1] * (n - delta) + data[j] * delta) / n result.append(interpolated) return result raise ValueError(f'Unknown method: {method!r}')
Divide *data* into *n* continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set *n* to 4 for quartiles (the default). Set *n* to 10 for deciles. Set *n* to 100 for percentiles which gives the 99 cuts points that separate *data* in to 100 equal sized groups. The *data* can be any iterable containing sample. The cut points are linearly interpolated between data points. If *method* is set to *inclusive*, *data* is treated as population data. The minimum value is treated as the 0th percentile and the maximum value is treated as the 100th percentile.
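Since this is the stdlib `statistics.quantiles` (Python 3.8+), the two interpolation methods can be compared on a small sample; the expected cut points below follow from the integer arithmetic in the function body:

```python
from statistics import quantiles

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Default 'exclusive' method treats data as a sample: m = len(data) + 1,
# so the outer quartile cut points reach beyond the observed range.
assert quantiles(data) == [2.75, 5.5, 8.25]

# 'inclusive' treats data as a population: min and max are the 0th and
# 100th percentiles, pulling the outer cut points inward.
assert quantiles(data, method='inclusive') == [3.25, 5.5, 7.75]
```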
187,787
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple def pvariance(data, mu=None): """Return the population variance of ``data``. data should be a sequence or iterable of Real-valued numbers, with at least one value. The optional argument mu, if given, should be the mean of the data. If it is missing or None, the mean is automatically calculated. Use this function to calculate the variance from the entire population. To estimate the variance from a sample, the ``variance`` function is usually a better choice. Examples: >>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25] >>> pvariance(data) 1.25 If you have already calculated the mean of the data, you can pass it as the optional second argument to avoid recalculating it: >>> mu = mean(data) >>> pvariance(data, mu) 1.25 Decimals and Fractions are supported: >>> from decimal import Decimal as D >>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")]) Decimal('24.815') >>> from fractions import Fraction as F >>> pvariance([F(1, 4), F(5, 4), F(1, 2)]) Fraction(13, 72) """ if iter(data) is data: data = list(data) n = len(data) if n < 1: raise StatisticsError('pvariance requires at least one data point') T, ss = _ss(data, mu) return _convert(ss / n, T) The provided code snippet includes necessary dependencies for implementing the `pstdev` function. Write a Python function `def pstdev(data, mu=None)` to solve the following problem: Return the square root of the population variance. See ``pvariance`` for arguments and other details. >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 0.986893273527251 Here is the function: def pstdev(data, mu=None): """Return the square root of the population variance. 
See ``pvariance`` for arguments and other details. >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 0.986893273527251 """ # Fixme: Despite the exact sum of squared deviations, some inaccuracy # remain because there are two rounding steps. The first occurs in # the _convert() step for pvariance(), the second occurs in math.sqrt(). var = pvariance(data, mu) try: return var.sqrt() except AttributeError: return math.sqrt(var)
Return the square root of the population variance. See ``pvariance`` for arguments and other details. >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75]) 0.986893273527251
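As the code shows, `pstdev` is just the square root of `pvariance`, trying `var.sqrt()` first so that `Decimal` inputs keep their type before falling back to `math.sqrt`; a quick check using the stdlib versions:

```python
import math
from decimal import Decimal
from statistics import pstdev, pvariance

data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25]
assert pvariance(data) == 1.25
assert pstdev(data) == math.sqrt(1.25)

# Decimal inputs take the var.sqrt() branch and stay Decimal.
assert isinstance(pstdev([Decimal('1'), Decimal('3')]), Decimal)
```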
187,788
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple class StatisticsError(ValueError): pass The provided code snippet includes necessary dependencies for implementing the `covariance` function. Write a Python function `def covariance(x, y, /)` to solve the following problem: Covariance Return the sample covariance of two inputs *x* and *y*. Covariance is a measure of the joint variability of two inputs. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3] >>> covariance(x, y) 0.75 >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> covariance(x, z) -7.5 >>> covariance(z, x) -7.5 Here is the function: def covariance(x, y, /): """Covariance Return the sample covariance of two inputs *x* and *y*. Covariance is a measure of the joint variability of two inputs. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3] >>> covariance(x, y) 0.75 >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> covariance(x, z) -7.5 >>> covariance(z, x) -7.5 """ n = len(x) if len(y) != n: raise StatisticsError('covariance requires that both inputs have same number of data points') if n < 2: raise StatisticsError('covariance requires at least two data points') xbar = fsum(x) / n ybar = fsum(y) / n sxy = fsum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) return sxy / (n - 1)
Covariance Return the sample covariance of two inputs *x* and *y*. Covariance is a measure of the joint variability of two inputs. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3] >>> covariance(x, y) 0.75 >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> covariance(x, z) -7.5 >>> covariance(z, x) -7.5
187,789
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple class StatisticsError(ValueError): pass The provided code snippet includes necessary dependencies for implementing the `correlation` function. Write a Python function `def correlation(x, y, /)` to solve the following problem: Pearson's correlation coefficient Return the Pearson's correlation coefficient for two inputs. Pearson's correlation coefficient *r* takes values between -1 and +1. It measures the strength and direction of the linear relationship, where +1 means very strong, positive linear relationship, -1 very strong, negative linear relationship, and 0 no linear relationship. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> correlation(x, x) 1.0 >>> correlation(x, y) -1.0 Here is the function: def correlation(x, y, /): """Pearson's correlation coefficient Return the Pearson's correlation coefficient for two inputs. Pearson's correlation coefficient *r* takes values between -1 and +1. It measures the strength and direction of the linear relationship, where +1 means very strong, positive linear relationship, -1 very strong, negative linear relationship, and 0 no linear relationship. 
>>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> correlation(x, x) 1.0 >>> correlation(x, y) -1.0 """ n = len(x) if len(y) != n: raise StatisticsError('correlation requires that both inputs have same number of data points') if n < 2: raise StatisticsError('correlation requires at least two data points') xbar = fsum(x) / n ybar = fsum(y) / n sxy = fsum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) sxx = fsum((xi - xbar) ** 2.0 for xi in x) syy = fsum((yi - ybar) ** 2.0 for yi in y) try: return sxy / sqrt(sxx * syy) except ZeroDivisionError: raise StatisticsError('at least one of the inputs is constant')
Pearson's correlation coefficient Return the Pearson's correlation coefficient for two inputs. Pearson's correlation coefficient *r* takes values between -1 and +1. It measures the strength and direction of the linear relationship, where +1 means very strong, positive linear relationship, -1 very strong, negative linear relationship, and 0 no linear relationship. >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> correlation(x, x) 1.0 >>> correlation(x, y) -1.0
187,790
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple class StatisticsError(ValueError): pass LinearRegression = namedtuple('LinearRegression', ('slope', 'intercept')) The provided code snippet includes necessary dependencies for implementing the `linear_regression` function. Write a Python function `def linear_regression(x, y, /)` to solve the following problem: Slope and intercept for simple linear regression. Return the slope and intercept of simple linear regression parameters estimated using ordinary least squares. Simple linear regression describes relationship between an independent variable *x* and a dependent variable *y* in terms of linear function: y = slope * x + intercept + noise where *slope* and *intercept* are the regression parameters that are estimated, and noise represents the variability of the data that was not explained by the linear regression (it is equal to the difference between predicted and actual values of the dependent variable). The parameters are returned as a named tuple. >>> x = [1, 2, 3, 4, 5] >>> noise = NormalDist().samples(5, seed=42) >>> y = [3 * x[i] + 2 + noise[i] for i in range(5)] >>> linear_regression(x, y) #doctest: +ELLIPSIS LinearRegression(slope=3.09078914170..., intercept=1.75684970486...) Here is the function: def linear_regression(x, y, /): """Slope and intercept for simple linear regression. Return the slope and intercept of simple linear regression parameters estimated using ordinary least squares. 
Simple linear regression describes relationship between an independent variable *x* and a dependent variable *y* in terms of linear function: y = slope * x + intercept + noise where *slope* and *intercept* are the regression parameters that are estimated, and noise represents the variability of the data that was not explained by the linear regression (it is equal to the difference between predicted and actual values of the dependent variable). The parameters are returned as a named tuple. >>> x = [1, 2, 3, 4, 5] >>> noise = NormalDist().samples(5, seed=42) >>> y = [3 * x[i] + 2 + noise[i] for i in range(5)] >>> linear_regression(x, y) #doctest: +ELLIPSIS LinearRegression(slope=3.09078914170..., intercept=1.75684970486...) """ n = len(x) if len(y) != n: raise StatisticsError('linear regression requires that both inputs have same number of data points') if n < 2: raise StatisticsError('linear regression requires at least two data points') xbar = fsum(x) / n ybar = fsum(y) / n sxy = fsum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) sxx = fsum((xi - xbar) ** 2.0 for xi in x) try: slope = sxy / sxx # equivalent to: covariance(x, y) / variance(x) except ZeroDivisionError: raise StatisticsError('x is constant') intercept = ybar - slope * xbar return LinearRegression(slope=slope, intercept=intercept)
Slope and intercept for simple linear regression. Return the slope and intercept of simple linear regression parameters estimated using ordinary least squares. Simple linear regression describes relationship between an independent variable *x* and a dependent variable *y* in terms of linear function: y = slope * x + intercept + noise where *slope* and *intercept* are the regression parameters that are estimated, and noise represents the variability of the data that was not explained by the linear regression (it is equal to the difference between predicted and actual values of the dependent variable). The parameters are returned as a named tuple. >>> x = [1, 2, 3, 4, 5] >>> noise = NormalDist().samples(5, seed=42) >>> y = [3 * x[i] + 2 + noise[i] for i in range(5)] >>> linear_regression(x, y) #doctest: +ELLIPSIS LinearRegression(slope=3.09078914170..., intercept=1.75684970486...)
187,791
import math import numbers import random from fractions import Fraction from decimal import Decimal from itertools import groupby, repeat from bisect import bisect_left, bisect_right from math import hypot, sqrt, fabs, exp, erf, tau, log, fsum from operator import itemgetter from collections import Counter, namedtuple def _normal_dist_inv_cdf(p, mu, sigma): # There is no closed-form solution to the inverse CDF for the normal # distribution, so we use a rational approximation instead: # Wichura, M.J. (1988). "Algorithm AS241: The Percentage Points of the # Normal Distribution". Applied Statistics. Blackwell Publishing. 37 # (3): 477–484. doi:10.2307/2347330. JSTOR 2347330. q = p - 0.5 if fabs(q) <= 0.425: r = 0.180625 - q * q # Hash sum: 55.88319_28806_14901_4439 num = (((((((2.50908_09287_30122_6727e+3 * r + 3.34305_75583_58812_8105e+4) * r + 6.72657_70927_00870_0853e+4) * r + 4.59219_53931_54987_1457e+4) * r + 1.37316_93765_50946_1125e+4) * r + 1.97159_09503_06551_4427e+3) * r + 1.33141_66789_17843_7745e+2) * r + 3.38713_28727_96366_6080e+0) * q den = (((((((5.22649_52788_52854_5610e+3 * r + 2.87290_85735_72194_2674e+4) * r + 3.93078_95800_09271_0610e+4) * r + 2.12137_94301_58659_5867e+4) * r + 5.39419_60214_24751_1077e+3) * r + 6.87187_00749_20579_0830e+2) * r + 4.23133_30701_60091_1252e+1) * r + 1.0) x = num / den return mu + (x * sigma) r = p if q <= 0.0 else 1.0 - p r = sqrt(-log(r)) if r <= 5.0: r = r - 1.6 # Hash sum: 49.33206_50330_16102_89036 num = (((((((7.74545_01427_83414_07640e-4 * r + 2.27238_44989_26918_45833e-2) * r + 2.41780_72517_74506_11770e-1) * r + 1.27045_82524_52368_38258e+0) * r + 3.64784_83247_63204_60504e+0) * r + 5.76949_72214_60691_40550e+0) * r + 4.63033_78461_56545_29590e+0) * r + 1.42343_71107_49683_57734e+0) den = (((((((1.05075_00716_44416_84324e-9 * r + 5.47593_80849_95344_94600e-4) * r + 1.51986_66563_61645_71966e-2) * r + 1.48103_97642_74800_74590e-1) * r + 6.89767_33498_51000_04550e-1) * r + 1.67638_48301_83803_84940e+0) * r + 
2.05319_16266_37758_82187e+0) * r + 1.0) else: r = r - 5.0 # Hash sum: 47.52583_31754_92896_71629 num = (((((((2.01033_43992_92288_13265e-7 * r + 2.71155_55687_43487_57815e-5) * r + 1.24266_09473_88078_43860e-3) * r + 2.65321_89526_57612_30930e-2) * r + 2.96560_57182_85048_91230e-1) * r + 1.78482_65399_17291_33580e+0) * r + 5.46378_49111_64114_36990e+0) * r + 6.65790_46435_01103_77720e+0) den = (((((((2.04426_31033_89939_78564e-15 * r + 1.42151_17583_16445_88870e-7) * r + 1.84631_83175_10054_68180e-5) * r + 7.86869_13114_56132_59100e-4) * r + 1.48753_61290_85061_48525e-2) * r + 1.36929_88092_27358_05310e-1) * r + 5.99832_20655_58879_37690e-1) * r + 1.0) x = num / den if q < 0.0: x = -x return mu + (x * sigma)
null
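This rational approximation (Wichura's AS241) is what backs `NormalDist.inv_cdf` in the stdlib; a few properties that follow from the code structure can be verified without knowing the coefficients:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mu=0, sigma=1

# q = p - 0.5 is zero at the median, so the central polynomial is scaled to 0.
assert nd.inv_cdf(0.5) == 0.0

# The central branch is odd in q, so results are symmetric about p = 0.5.
assert nd.inv_cdf(0.25) == -nd.inv_cdf(0.75)

# Round-tripping through the CDF recovers p to high accuracy.
assert abs(nd.cdf(nd.inv_cdf(0.975)) - 0.975) < 1e-9
```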
187,812
import os import re import time import random import socket import datetime import urllib.parse from email._parseaddr import quote from email._parseaddr import AddressList as _AddressList from email._parseaddr import mktime_tz from email._parseaddr import parsedate, parsedate_tz, _parsedate_tz from email.charset import Charset def _parsedate_tz(data): """Convert date to extended time tuple. The last (additional) element is the time zone offset in seconds, except if the timezone was specified as -0000. In that case the last element is None. This indicates a UTC timestamp that explicitly declaims knowledge of the source timezone, as opposed to a +0000 timestamp that indicates the source timezone really was UTC. """ if not data: return None data = data.split() if not data: # This happens for whitespace-only input. return None # The FWS after the comma after the day-of-week is optional, so search and # adjust for this. if data[0].endswith(',') or data[0].lower() in _daynames: # There's a dayname here. Skip it del data[0] else: i = data[0].rfind(',') if i >= 0: data[0] = data[0][i+1:] if len(data) == 3: # RFC 850 date, deprecated stuff = data[0].split('-') if len(stuff) == 3: data = stuff + data[1:] if len(data) == 4: s = data[3] i = s.find('+') if i == -1: i = s.find('-') if i > 0: data[3:] = [s[:i], s[i:]] else: data.append('') # Dummy tz if len(data) < 5: return None data = data[:5] [dd, mm, yy, tm, tz] = data mm = mm.lower() if mm not in _monthnames: dd, mm = mm, dd.lower() if mm not in _monthnames: return None mm = _monthnames.index(mm) + 1 if mm > 12: mm -= 12 if dd[-1] == ',': dd = dd[:-1] i = yy.find(':') if i > 0: yy, tm = tm, yy if yy[-1] == ',': yy = yy[:-1] if not yy[0].isdigit(): yy, tz = tz, yy if tm[-1] == ',': tm = tm[:-1] tm = tm.split(':') if len(tm) == 2: [thh, tmm] = tm tss = '0' elif len(tm) == 3: [thh, tmm, tss] = tm elif len(tm) == 1 and '.' in tm[0]: # Some non-compliant MUAs use '.' to separate time elements. 
tm = tm[0].split('.') if len(tm) == 2: [thh, tmm] = tm tss = 0 elif len(tm) == 3: [thh, tmm, tss] = tm else: return None else: return None try: yy = int(yy) dd = int(dd) thh = int(thh) tmm = int(tmm) tss = int(tss) except ValueError: return None # Check for a yy specified in two-digit format, then convert it to the # appropriate four-digit format, according to the POSIX standard. RFC 822 # calls for a two-digit yy, but RFC 2822 (which obsoletes RFC 822) # mandates a 4-digit yy. For more information, see the documentation for # the time module. if yy < 100: # The year is between 1969 and 1999 (inclusive). if yy > 68: yy += 1900 # The year is between 2000 and 2068 (inclusive). else: yy += 2000 tzoffset = None tz = tz.upper() if tz in _timezones: tzoffset = _timezones[tz] else: try: tzoffset = int(tz) except ValueError: pass if tzoffset==0 and tz.startswith('-'): tzoffset = None # Convert a timezone offset into seconds ; -0500 -> -18000 if tzoffset: if tzoffset < 0: tzsign = -1 tzoffset = -tzoffset else: tzsign = 1 tzoffset = tzsign * ( (tzoffset//100)*3600 + (tzoffset % 100)*60) # Daylight Saving Time flag is set to -1, since DST is unknown. return [yy, mm, dd, thh, tmm, tss, 0, 1, -1, tzoffset] def parsedate_to_datetime(data): parsed_date_tz = _parsedate_tz(data) if parsed_date_tz is None: raise ValueError('Invalid date value or format "%s"' % str(data)) *dtuple, tz = parsed_date_tz if tz is None: return datetime.datetime(*dtuple[:6]) return datetime.datetime(*dtuple[:6], tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))
null
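The special-casing of `-0000` (the offset parses to 0, but the minus sign marks the timezone as unknown, so `tzoffset` is set back to `None`) is observable through the public `email.utils.parsedate_to_datetime`:

```python
import datetime
from email.utils import parsedate_to_datetime

# "-0000" declaims knowledge of the source timezone: the result is naive.
naive = parsedate_to_datetime('Fri, 09 Nov 2001 01:08:47 -0000')
assert naive.tzinfo is None

# A real offset becomes a fixed-offset timezone on the datetime.
aware = parsedate_to_datetime('Fri, 09 Nov 2001 01:08:47 +0530')
assert aware.utcoffset() == datetime.timedelta(hours=5, minutes=30)
assert (aware.year, aware.month, aware.day) == (2001, 11, 9)
```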
187,823
from base64 import b64encode
from binascii import b2a_base64, a2b_base64

NL = '\n'
EMPTYSTRING = ''

def decode(string):
    """Decode a raw base64 string, returning a bytes object.

    This function does not parse a full MIME header value encoded with
    base64 (like =?iso-8859-1?b?bmloISBuaWgh?=) -- please use the high
    level email.header class for that functionality.
    """
    if not string:
        return bytes()
    elif isinstance(string, str):
        return a2b_base64(string.encode('raw-unicode-escape'))
    else:
        return a2b_base64(string)
The provided code snippet includes necessary dependencies for implementing the `body_encode` function. Write a Python function `def body_encode(s, maxlinelen=76, eol=NL)` to solve the following problem: r"""Encode a string with base64. Each line will be wrapped at, at most, maxlinelen characters (defaults to 76 characters). Each line of encoded text will end with eol, which defaults to "\n". Set this to "\r\n" if you will be using the result of this function directly in an email. Here is the function: def body_encode(s, maxlinelen=76, eol=NL): r"""Encode a string with base64. Each line will be wrapped at, at most, maxlinelen characters (defaults to 76 characters). Each line of encoded text will end with eol, which defaults to "\n". Set this to "\r\n" if you will be using the result of this function directly in an email. """ if not s: return "" encvec = [] max_unencoded = maxlinelen * 3 // 4 for i in range(0, len(s), max_unencoded): # BAW: should encode() inherit b2a_base64()'s dubious behavior in # adding a newline to the encoded string? enc = b2a_base64(s[i:i + max_unencoded]).decode("ascii") if enc.endswith(NL) and eol != NL: enc = enc[:-1] + eol encvec.append(enc) return EMPTYSTRING.join(encvec)
r"""Encode a string with base64. Each line will be wrapped at, at most, maxlinelen characters (defaults to 76 characters). Each line of encoded text will end with eol, which defaults to "\n". Set this to "\r\n" if you will be using the result of this function directly in an email.
187,836
import re import base64 import binascii import functools from string import ascii_letters, digits from email import errors def decode(ew): """Decode encoded word and return (string, charset, lang, defects) tuple. An RFC 2047/2243 encoded word has the form: =?charset*lang?cte?encoded_string?= where '*lang' may be omitted but the other parts may not be. This function expects exactly such a string (that is, it does not check the syntax and may raise errors if the string is not well formed), and returns the encoded_string decoded first from its Content Transfer Encoding and then from the resulting bytes into unicode using the specified charset. If the cte-decoded string does not successfully decode using the specified character set, a defect is added to the defects list and the unknown octets are replaced by the unicode 'unknown' character \\uFDFF. The specified charset and language are returned. The default for language, which is rarely if ever encountered, is the empty string. """ _, charset, cte, cte_string, _ = ew.split('?') charset, _, lang = charset.partition('*') cte = cte.lower() # Recover the original bytes and do CTE decoding. bstring = cte_string.encode('ascii', 'surrogateescape') bstring, defects = _cte_decoders[cte](bstring) # Turn the CTE decoded bytes into unicode. try: string = bstring.decode(charset) except UnicodeDecodeError: defects.append(errors.UndecodableBytesDefect("Encoded word " f"contains bytes not decodable using {charset!r} charset")) string = bstring.decode(charset, 'surrogateescape') except (LookupError, UnicodeEncodeError): string = bstring.decode('ascii', 'surrogateescape') if charset.lower() != 'unknown-8bit': defects.append(errors.CharsetError(f"Unknown charset {charset!r} " f"in encoded word; decoded as unknown bytes")) return string, charset, lang, defects def encode_b(bstring): return base64.b64encode(bstring).decode('ascii')
null
187,838
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils CFWS_LEADER = WSP | set('(') class AddressList(TokenList): token_type = 'address-list' def addresses(self): return [x for x in self if x.token_type=='address'] def mailboxes(self): return sum((x.mailboxes for x in self if x.token_type=='address'), []) def all_mailboxes(self): return sum((x.all_mailboxes for x in self if x.token_type=='address'), []) class Address(TokenList): token_type = 'address' def display_name(self): if self[0].token_type == 'group': return self[0].display_name def mailboxes(self): if self[0].token_type == 'mailbox': return [self[0]] elif self[0].token_type == 'invalid-mailbox': return [] return self[0].mailboxes def all_mailboxes(self): if self[0].token_type == 'mailbox': return [self[0]] elif self[0].token_type == 'invalid-mailbox': return [self[0]] return self[0].all_mailboxes class ValueTerminal(Terminal): def value(self): return self def startswith_fws(self): return False def get_cfws(value): """CFWS = (1*([FWS] comment) [FWS]) / FWS """ cfws = CFWSList() while value and value[0] in CFWS_LEADER: if value[0] in WSP: token, value = get_fws(value) else: token, value = get_comment(value) cfws.append(token) return cfws, value def get_invalid_mailbox(value, endchars): """ Read everything up to one of the chars in endchars. This is outside the formal grammar. The InvalidMailbox TokenList that is returned acts like a Mailbox, but the data attributes are None. 
""" invalid_mailbox = InvalidMailbox() while value and value[0] not in endchars: if value[0] in PHRASE_ENDS: invalid_mailbox.append(ValueTerminal(value[0], 'misplaced-special')) value = value[1:] else: token, value = get_phrase(value) invalid_mailbox.append(token) return invalid_mailbox, value def get_address(value): """ address = mailbox / group Note that counter-intuitively, an address can be either a single address or a list of addresses (a group). This is why the returned Address object has a 'mailboxes' attribute which treats a single address as a list of length one. When you need to differentiate between to two cases, extract the single element, which is either a mailbox or a group token. """ # The formal grammar isn't very helpful when parsing an address. mailbox # and group, especially when allowing for obsolete forms, start off very # similarly. It is only when you reach one of @, <, or : that you know # what you've got. So, we try each one in turn, starting with the more # likely of the two. We could perhaps make this more efficient by looking # for a phrase and then branching based on the next character, but that # would be a premature optimization. address = Address() try: token, value = get_group(value) except errors.HeaderParseError: try: token, value = get_mailbox(value) except errors.HeaderParseError: raise errors.HeaderParseError( "expected address but found '{}'".format(value)) address.append(token) return address, value The provided code snippet includes necessary dependencies for implementing the `get_address_list` function. Write a Python function `def get_address_list(value)` to solve the following problem: address_list = (address *("," address)) / obs-addr-list obs-addr-list = *([CFWS] ",") address *("," [address / CFWS]) We depart from the formal grammar here by continuing to parse until the end of the input, assuming the input to be entirely composed of an address-list. 
This is always true in email parsing, and allows us to skip invalid addresses to parse additional valid ones. Here is the function: def get_address_list(value): """ address_list = (address *("," address)) / obs-addr-list obs-addr-list = *([CFWS] ",") address *("," [address / CFWS]) We depart from the formal grammar here by continuing to parse until the end of the input, assuming the input to be entirely composed of an address-list. This is always true in email parsing, and allows us to skip invalid addresses to parse additional valid ones. """ address_list = AddressList() while value: try: token, value = get_address(value) address_list.append(token) except errors.HeaderParseError as err: leader = None if value[0] in CFWS_LEADER: leader, value = get_cfws(value) if not value or value[0] == ',': address_list.append(leader) address_list.defects.append(errors.ObsoleteHeaderDefect( "address-list entry with no content")) else: token, value = get_invalid_mailbox(value, ',') if leader is not None: token[:0] = [leader] address_list.append(Address([token])) address_list.defects.append(errors.InvalidHeaderDefect( "invalid address in address-list")) elif value[0] == ',': address_list.defects.append(errors.ObsoleteHeaderDefect( "empty element in address-list")) else: token, value = get_invalid_mailbox(value, ',') if leader is not None: token[:0] = [leader] address_list.append(Address([token])) address_list.defects.append(errors.InvalidHeaderDefect( "invalid address in address-list")) if value and value[0] != ',': # Crap after address; treat it as an invalid mailbox. # The mailbox info will still be available. mailbox = address_list[-1][0] mailbox.token_type = 'invalid-mailbox' token, value = get_invalid_mailbox(value, ',') mailbox.extend(token) address_list.defects.append(errors.InvalidHeaderDefect( "invalid address in address-list")) if value: # Must be a , at this point. address_list.append(ValueTerminal(',', 'list-separator')) value = value[1:] return address_list, value
address_list = (address *("," address)) / obs-addr-list obs-addr-list = *([CFWS] ",") address *("," [address / CFWS]) We depart from the formal grammar here by continuing to parse until the end of the input, assuming the input to be entirely composed of an address-list. This is always true in email parsing, and allows us to skip invalid addresses to parse additional valid ones.
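In CPython, `get_address_list` is the value parser behind address headers under `email.policy.default`, so its tolerant parsing can be exercised through the public message API rather than the private `_header_value_parser` module:

```python
from email import message_from_string, policy

msg = message_from_string(
    'To: Fred <fred@example.com>, Jane <jane@example.com>\n\n',
    policy=policy.default,
)

# The header value was parsed with get_address_list under the hood.
addrs = msg['To'].addresses
assert [a.addr_spec for a in addrs] == ['fred@example.com', 'jane@example.com']
assert [a.display_name for a in addrs] == ['Fred', 'Jane']
```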
187,839
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils class MessageID(MsgID): token_type = 'message-id' class InvalidMessageID(MessageID): token_type = 'invalid-message-id' def get_unstructured(value): """unstructured = (*([FWS] vchar) *WSP) / obs-unstruct obs-unstruct = *((*LF *CR *(obs-utext) *LF *CR)) / FWS) obs-utext = %d0 / obs-NO-WS-CTL / LF / CR obs-NO-WS-CTL is control characters except WSP/CR/LF. So, basically, we have printable runs, plus control characters or nulls in the obsolete syntax, separated by whitespace. Since RFC 2047 uses the obsolete syntax in its specification, but requires whitespace on either side of the encoded words, I can see no reason to need to separate the non-printable-non-whitespace from the printable runs if they occur, so we parse this into xtext tokens separated by WSP tokens. Because an 'unstructured' value must by definition constitute the entire value, this 'get' routine does not return a remaining value, only the parsed TokenList. """ # XXX: but what about bare CR and LF? They might signal the start or # end of an encoded word. YAGNI for now, since our current parsers # will never send us strings with bare CR or LF. unstructured = UnstructuredTokenList() while value: if value[0] in WSP: token, value = get_fws(value) unstructured.append(token) continue valid_ew = True if value.startswith('=?'): try: token, value = get_encoded_word(value) except _InvalidEwError: valid_ew = False except errors.HeaderParseError: # XXX: Need to figure out how to register defects when # appropriate here. 
pass else: have_ws = True if len(unstructured) > 0: if unstructured[-1].token_type != 'fws': unstructured.defects.append(errors.InvalidHeaderDefect( "missing whitespace before encoded word")) have_ws = False if have_ws and len(unstructured) > 1: if unstructured[-2].token_type == 'encoded-word': unstructured[-1] = EWWhiteSpaceTerminal( unstructured[-1], 'fws') unstructured.append(token) continue tok, *remainder = _wsp_splitter(value, 1) # Split in the middle of an atom if there is a rfc2047 encoded word # which does not have WSP on both sides. The defect will be registered # the next time through the loop. # This needs to only be performed when the encoded word is valid; # otherwise, performing it on an invalid encoded word can cause # the parser to go in an infinite loop. if valid_ew and rfc2047_matcher.search(tok): tok, *remainder = value.partition('=?') vtext = ValueTerminal(tok, 'vtext') _validate_xtext(vtext) unstructured.append(vtext) value = ''.join(remainder) return unstructured def get_msg_id(value): """msg-id = [CFWS] "<" id-left '@' id-right ">" [CFWS] id-left = dot-atom-text / obs-id-left id-right = dot-atom-text / no-fold-literal / obs-id-right no-fold-literal = "[" *dtext "]" """ msg_id = MsgID() if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) msg_id.append(token) if not value or value[0] != '<': raise errors.HeaderParseError( "expected msg-id but found '{}'".format(value)) msg_id.append(ValueTerminal('<', 'msg-id-start')) value = value[1:] # Parse id-left. try: token, value = get_dot_atom_text(value) except errors.HeaderParseError: try: # obs-id-left is same as local-part of add-spec. 
token, value = get_obs_local_part(value) msg_id.defects.append(errors.ObsoleteHeaderDefect( "obsolete id-left in msg-id")) except errors.HeaderParseError: raise errors.HeaderParseError( "expected dot-atom-text or obs-id-left" " but found '{}'".format(value)) msg_id.append(token) if not value or value[0] != '@': msg_id.defects.append(errors.InvalidHeaderDefect( "msg-id with no id-right")) # Even though there is no id-right, if the local part # ends with `>` let's just parse it too and return # along with the defect. if value and value[0] == '>': msg_id.append(ValueTerminal('>', 'msg-id-end')) value = value[1:] return msg_id, value msg_id.append(ValueTerminal('@', 'address-at-symbol')) value = value[1:] # Parse id-right. try: token, value = get_dot_atom_text(value) except errors.HeaderParseError: try: token, value = get_no_fold_literal(value) except errors.HeaderParseError as e: try: token, value = get_domain(value) msg_id.defects.append(errors.ObsoleteHeaderDefect( "obsolete id-right in msg-id")) except errors.HeaderParseError: raise errors.HeaderParseError( "expected dot-atom-text, no-fold-literal or obs-id-right" " but found '{}'".format(value)) msg_id.append(token) if value and value[0] == '>': value = value[1:] else: msg_id.defects.append(errors.InvalidHeaderDefect( "missing trailing '>' on msg-id")) msg_id.append(ValueTerminal('>', 'msg-id-end')) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) msg_id.append(token) return msg_id, value The provided code snippet includes necessary dependencies for implementing the `parse_message_id` function. 
Write a Python function `def parse_message_id(value)` to solve the following problem: message-id = "Message-ID:" msg-id CRLF Here is the function: def parse_message_id(value): """message-id = "Message-ID:" msg-id CRLF """ message_id = MessageID() try: token, value = get_msg_id(value) message_id.append(token) except errors.HeaderParseError as ex: token = get_unstructured(value) message_id = InvalidMessageID(token) message_id.defects.append( errors.InvalidHeaderDefect("Invalid msg-id: {!r}".format(ex))) else: # Value after parsing a valid msg_id should be None. if value: message_id.defects.append(errors.InvalidHeaderDefect( "Unexpected {!r}".format(value))) return message_id
message-id = "Message-ID:" msg-id CRLF
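A short usage sketch, assuming a Python version recent enough to ship `parse_message_id` in the private `email._header_value_parser` module: a valid msg-id round-trips cleanly, while anything else comes back as an `InvalidMessageID` carrying a defect.

```python
# NOTE: parse_message_id lives in a private CPython module and is not
# part of the documented email API.
from email._header_value_parser import parse_message_id

# A syntactically valid msg-id round-trips and carries no defects.
mid = parse_message_id('<1234@local.machine.example>')
assert mid.token_type == 'message-id'
assert str(mid) == '<1234@local.machine.example>'
assert not mid.all_defects

# Anything else is wrapped in an InvalidMessageID with a defect attached.
bad = parse_message_id('not a msg-id')
assert bad.token_type == 'invalid-message-id'
assert bad.all_defects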
187,840
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils CFWS_LEADER = WSP | set('(') class MIMEVersion(TokenList): token_type = 'mime-version' major = None minor = None class ValueTerminal(Terminal): def value(self): return self def startswith_fws(self): return False def get_cfws(value): """CFWS = (1*([FWS] comment) [FWS]) / FWS """ cfws = CFWSList() while value and value[0] in CFWS_LEADER: if value[0] in WSP: token, value = get_fws(value) else: token, value = get_comment(value) cfws.append(token) return cfws, value The provided code snippet includes necessary dependencies for implementing the `parse_mime_version` function. Write a Python function `def parse_mime_version(value)` to solve the following problem: mime-version = [CFWS] 1*digit [CFWS] "." [CFWS] 1*digit [CFWS] Here is the function: def parse_mime_version(value): """ mime-version = [CFWS] 1*digit [CFWS] "." [CFWS] 1*digit [CFWS] """ # The [CFWS] is implicit in the RFC 2045 BNF. # XXX: This routine is a bit verbose, should factor out a get_int method. mime_version = MIMEVersion() if not value: mime_version.defects.append(errors.HeaderMissingRequiredValue( "Missing MIME version number (eg: 1.0)")) return mime_version if value[0] in CFWS_LEADER: token, value = get_cfws(value) mime_version.append(token) if not value: mime_version.defects.append(errors.HeaderMissingRequiredValue( "Expected MIME version number but found only CFWS")) digits = '' while value and value[0] != '.' 
and value[0] not in CFWS_LEADER: digits += value[0] value = value[1:] if not digits.isdigit(): mime_version.defects.append(errors.InvalidHeaderDefect( "Expected MIME major version number but found {!r}".format(digits))) mime_version.append(ValueTerminal(digits, 'xtext')) else: mime_version.major = int(digits) mime_version.append(ValueTerminal(digits, 'digits')) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mime_version.append(token) if not value or value[0] != '.': if mime_version.major is not None: mime_version.defects.append(errors.InvalidHeaderDefect( "Incomplete MIME version; found only major number")) if value: mime_version.append(ValueTerminal(value, 'xtext')) return mime_version mime_version.append(ValueTerminal('.', 'version-separator')) value = value[1:] if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mime_version.append(token) if not value: if mime_version.major is not None: mime_version.defects.append(errors.InvalidHeaderDefect( "Incomplete MIME version; found only major number")) return mime_version digits = '' while value and value[0] not in CFWS_LEADER: digits += value[0] value = value[1:] if not digits.isdigit(): mime_version.defects.append(errors.InvalidHeaderDefect( "Expected MIME minor version number but found {!r}".format(digits))) mime_version.append(ValueTerminal(digits, 'xtext')) else: mime_version.minor = int(digits) mime_version.append(ValueTerminal(digits, 'digits')) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mime_version.append(token) if value: mime_version.defects.append(errors.InvalidHeaderDefect( "Excess non-CFWS text after MIME version")) mime_version.append(ValueTerminal(value, 'xtext')) return mime_version
mime-version = [CFWS] 1*digit [CFWS] "." [CFWS] 1*digit [CFWS]
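The major/minor handling above can be checked with a quick sketch (again against the private `email._header_value_parser` module, so the interface is not guaranteed stable):

```python
from email._header_value_parser import parse_mime_version

mv = parse_mime_version('1.0')
assert (mv.major, mv.minor) == (1, 0)
assert not mv.all_defects

# A major number with no ".minor" part is kept but flagged as incomplete.
partial = parse_mime_version('1')
assert partial.major == 1 and partial.minor is None
assert partial.all_defects
```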
187,841
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils class ContentType(ParameterizedHeaderValue): token_type = 'content-type' as_ew_allowed = False maintype = 'text' subtype = 'plain' class ValueTerminal(Terminal): def value(self): return self def startswith_fws(self): return False def get_token(value): """token = [CFWS] 1*ttext [CFWS] The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or tspecials. We also exclude tabs even though the RFC doesn't. The RFC implies the CFWS but is not explicit about it in the BNF. """ mtoken = Token() if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) if value and value[0] in TOKEN_ENDS: raise errors.HeaderParseError( "expected token but found '{}'".format(value)) token, value = get_ttext(value) mtoken.append(token) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) return mtoken, value def parse_mime_parameters(value): """ parameter *( ";" parameter ) That BNF is meant to indicate this routine should only be called after finding and handling the leading ';'. There is no corresponding rule in the formal RFC grammar, but it is more convenient for us for the set of parameters to be treated as its own TokenList. This is 'parse' routine because it consumes the remaining value, but it would never be called to parse a full header. Instead it is called to parse everything after the non-parameter value of a specific MIME header. 
""" mime_parameters = MimeParameters() while value: try: token, value = get_parameter(value) mime_parameters.append(token) except errors.HeaderParseError as err: leader = None if value[0] in CFWS_LEADER: leader, value = get_cfws(value) if not value: mime_parameters.append(leader) return mime_parameters if value[0] == ';': if leader is not None: mime_parameters.append(leader) mime_parameters.defects.append(errors.InvalidHeaderDefect( "parameter entry with no content")) else: token, value = get_invalid_parameter(value) if leader: token[:0] = [leader] mime_parameters.append(token) mime_parameters.defects.append(errors.InvalidHeaderDefect( "invalid parameter {!r}".format(token))) if value and value[0] != ';': # Junk after the otherwise valid parameter. Mark it as # invalid, but it will have a value. param = mime_parameters[-1] param.token_type = 'invalid-parameter' token, value = get_invalid_parameter(value) param.extend(token) mime_parameters.defects.append(errors.InvalidHeaderDefect( "parameter with invalid trailing text {!r}".format(token))) if value: # Must be a ';' at this point. mime_parameters.append(ValueTerminal(';', 'parameter-separator')) value = value[1:] return mime_parameters def _find_mime_parameters(tokenlist, value): """Do our best to find the parameters in an invalid MIME header """ while value and value[0] != ';': if value[0] in PHRASE_ENDS: tokenlist.append(ValueTerminal(value[0], 'misplaced-special')) value = value[1:] else: token, value = get_phrase(value) tokenlist.append(token) if not value: return tokenlist.append(ValueTerminal(';', 'parameter-separator')) tokenlist.append(parse_mime_parameters(value[1:])) The provided code snippet includes necessary dependencies for implementing the `parse_content_type_header` function. Write a Python function `def parse_content_type_header(value)` to solve the following problem: maintype "/" subtype *( ";" parameter ) The maintype and substype are tokens. 
Theoretically they could be checked against the official IANA list + x-token, but we don't do that. Here is the function: def parse_content_type_header(value): """ maintype "/" subtype *( ";" parameter ) The maintype and substype are tokens. Theoretically they could be checked against the official IANA list + x-token, but we don't do that. """ ctype = ContentType() recover = False if not value: ctype.defects.append(errors.HeaderMissingRequiredValue( "Missing content type specification")) return ctype try: token, value = get_token(value) except errors.HeaderParseError: ctype.defects.append(errors.InvalidHeaderDefect( "Expected content maintype but found {!r}".format(value))) _find_mime_parameters(ctype, value) return ctype ctype.append(token) # XXX: If we really want to follow the formal grammar we should make # mantype and subtype specialized TokenLists here. Probably not worth it. if not value or value[0] != '/': ctype.defects.append(errors.InvalidHeaderDefect( "Invalid content type")) if value: _find_mime_parameters(ctype, value) return ctype ctype.maintype = token.value.strip().lower() ctype.append(ValueTerminal('/', 'content-type-separator')) value = value[1:] try: token, value = get_token(value) except errors.HeaderParseError: ctype.defects.append(errors.InvalidHeaderDefect( "Expected content subtype but found {!r}".format(value))) _find_mime_parameters(ctype, value) return ctype ctype.append(token) ctype.subtype = token.value.strip().lower() if not value: return ctype if value[0] != ';': ctype.defects.append(errors.InvalidHeaderDefect( "Only parameters are valid after content type, but " "found {!r}".format(value))) # The RFC requires that a syntactically invalid content-type be treated # as text/plain. Perhaps we should postel this, but we should probably # only do that if we were checking the subtype value against IANA. 
del ctype.maintype, ctype.subtype _find_mime_parameters(ctype, value) return ctype ctype.append(ValueTerminal(';', 'parameter-separator')) ctype.append(parse_mime_parameters(value[1:])) return ctype
maintype "/" subtype *( ";" parameter ) The maintype and subtype are tokens. Theoretically they could be checked against the official IANA list + x-token, but we don't do that.
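A usage sketch for the defect-tolerant behavior described above, assuming the private `email._header_value_parser` module and its internal `params` accessor (which yields `(name, value)` pairs):

```python
# NOTE: private CPython module; ct.params is an internal accessor that
# yields (name, value) pairs for the MIME parameters.
from email._header_value_parser import parse_content_type_header

ct = parse_content_type_header('text/plain; charset="utf-8"')
assert ct.maintype == 'text' and ct.subtype == 'plain'
assert dict(ct.params)['charset'] == 'utf-8'

# A value with no "/" is recorded as a defect rather than raising.
broken = parse_content_type_header('textplain')
assert broken.all_defects
```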
187,842
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils class ContentDisposition(ParameterizedHeaderValue): token_type = 'content-disposition' as_ew_allowed = False content_disposition = None class ValueTerminal(Terminal): def value(self): return self def startswith_fws(self): return False def get_token(value): """token = [CFWS] 1*ttext [CFWS] The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or tspecials. We also exclude tabs even though the RFC doesn't. The RFC implies the CFWS but is not explicit about it in the BNF. """ mtoken = Token() if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) if value and value[0] in TOKEN_ENDS: raise errors.HeaderParseError( "expected token but found '{}'".format(value)) token, value = get_ttext(value) mtoken.append(token) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) return mtoken, value def parse_mime_parameters(value): """ parameter *( ";" parameter ) That BNF is meant to indicate this routine should only be called after finding and handling the leading ';'. There is no corresponding rule in the formal RFC grammar, but it is more convenient for us for the set of parameters to be treated as its own TokenList. This is 'parse' routine because it consumes the remaining value, but it would never be called to parse a full header. Instead it is called to parse everything after the non-parameter value of a specific MIME header. 
""" mime_parameters = MimeParameters() while value: try: token, value = get_parameter(value) mime_parameters.append(token) except errors.HeaderParseError as err: leader = None if value[0] in CFWS_LEADER: leader, value = get_cfws(value) if not value: mime_parameters.append(leader) return mime_parameters if value[0] == ';': if leader is not None: mime_parameters.append(leader) mime_parameters.defects.append(errors.InvalidHeaderDefect( "parameter entry with no content")) else: token, value = get_invalid_parameter(value) if leader: token[:0] = [leader] mime_parameters.append(token) mime_parameters.defects.append(errors.InvalidHeaderDefect( "invalid parameter {!r}".format(token))) if value and value[0] != ';': # Junk after the otherwise valid parameter. Mark it as # invalid, but it will have a value. param = mime_parameters[-1] param.token_type = 'invalid-parameter' token, value = get_invalid_parameter(value) param.extend(token) mime_parameters.defects.append(errors.InvalidHeaderDefect( "parameter with invalid trailing text {!r}".format(token))) if value: # Must be a ';' at this point. mime_parameters.append(ValueTerminal(';', 'parameter-separator')) value = value[1:] return mime_parameters def _find_mime_parameters(tokenlist, value): """Do our best to find the parameters in an invalid MIME header """ while value and value[0] != ';': if value[0] in PHRASE_ENDS: tokenlist.append(ValueTerminal(value[0], 'misplaced-special')) value = value[1:] else: token, value = get_phrase(value) tokenlist.append(token) if not value: return tokenlist.append(ValueTerminal(';', 'parameter-separator')) tokenlist.append(parse_mime_parameters(value[1:])) The provided code snippet includes necessary dependencies for implementing the `parse_content_disposition_header` function. 
Write a Python function `def parse_content_disposition_header(value)` to solve the following problem: disposition-type *( ";" parameter ) Here is the function: def parse_content_disposition_header(value): """ disposition-type *( ";" parameter ) """ disp_header = ContentDisposition() if not value: disp_header.defects.append(errors.HeaderMissingRequiredValue( "Missing content disposition")) return disp_header try: token, value = get_token(value) except errors.HeaderParseError: disp_header.defects.append(errors.InvalidHeaderDefect( "Expected content disposition but found {!r}".format(value))) _find_mime_parameters(disp_header, value) return disp_header disp_header.append(token) disp_header.content_disposition = token.value.strip().lower() if not value: return disp_header if value[0] != ';': disp_header.defects.append(errors.InvalidHeaderDefect( "Only parameters are valid after content disposition, but " "found {!r}".format(value))) _find_mime_parameters(disp_header, value) return disp_header disp_header.append(ValueTerminal(';', 'parameter-separator')) disp_header.append(parse_mime_parameters(value[1:])) return disp_header
disposition-type *( ";" parameter )
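A sketch of the same pattern for Content-Disposition (private `email._header_value_parser` module; internal attributes may change):

```python
from email._header_value_parser import parse_content_disposition_header

cd = parse_content_disposition_header('attachment; filename="report.pdf"')
assert cd.content_disposition == 'attachment'   # normalized to lower case
assert dict(cd.params)['filename'] == 'report.pdf'

# A missing value is reported as a defect, not an exception.
empty = parse_content_disposition_header('')
assert empty.all_defects
```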
187,843
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils PHRASE_ENDS = SPECIALS - set('."(') class ContentTransferEncoding(TokenList): token_type = 'content-transfer-encoding' as_ew_allowed = False cte = '7bit' class ValueTerminal(Terminal): def value(self): return self def startswith_fws(self): return False def get_phrase(value): """ phrase = 1*word / obs-phrase obs-phrase = word *(word / "." / CFWS) This means a phrase can be a sequence of words, periods, and CFWS in any order as long as it starts with at least one word. If anything other than words is detected, an ObsoleteHeaderDefect is added to the token's defect list. We also accept a phrase that starts with CFWS followed by a dot; this is registered as an InvalidHeaderDefect, since it is not supported by even the obsolete grammar. """ phrase = Phrase() try: token, value = get_word(value) phrase.append(token) except errors.HeaderParseError: phrase.defects.append(errors.InvalidHeaderDefect( "phrase does not start with word")) while value and value[0] not in PHRASE_ENDS: if value[0]=='.': phrase.append(DOT) phrase.defects.append(errors.ObsoleteHeaderDefect( "period in 'phrase'")) value = value[1:] else: try: token, value = get_word(value) except errors.HeaderParseError: if value[0] in CFWS_LEADER: token, value = get_cfws(value) phrase.defects.append(errors.ObsoleteHeaderDefect( "comment found without atom")) else: raise phrase.append(token) return phrase, value def get_token(value): """token = [CFWS] 1*ttext [CFWS] The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or tspecials. We also exclude tabs even though the RFC doesn't. The RFC implies the CFWS but is not explicit about it in the BNF. 
""" mtoken = Token() if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) if value and value[0] in TOKEN_ENDS: raise errors.HeaderParseError( "expected token but found '{}'".format(value)) token, value = get_ttext(value) mtoken.append(token) if value and value[0] in CFWS_LEADER: token, value = get_cfws(value) mtoken.append(token) return mtoken, value The provided code snippet includes necessary dependencies for implementing the `parse_content_transfer_encoding_header` function. Write a Python function `def parse_content_transfer_encoding_header(value)` to solve the following problem: mechanism Here is the function: def parse_content_transfer_encoding_header(value): """ mechanism """ # We should probably validate the values, since the list is fixed. cte_header = ContentTransferEncoding() if not value: cte_header.defects.append(errors.HeaderMissingRequiredValue( "Missing content transfer encoding")) return cte_header try: token, value = get_token(value) except errors.HeaderParseError: cte_header.defects.append(errors.InvalidHeaderDefect( "Expected content transfer encoding but found {!r}".format(value))) else: cte_header.append(token) cte_header.cte = token.value.strip().lower() if not value: return cte_header while value: cte_header.defects.append(errors.InvalidHeaderDefect( "Extra text after content transfer encoding")) if value[0] in PHRASE_ENDS: cte_header.append(ValueTerminal(value[0], 'misplaced-special')) value = value[1:] else: token, value = get_phrase(value) cte_header.append(token) return cte_header
mechanism
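The parser lower-cases the mechanism token and records any trailing text as a defect while still keeping it in the token list. A quick sketch against the private `email._header_value_parser` module (internal API, subject to change):

```python
from email._header_value_parser import parse_content_transfer_encoding_header

cte = parse_content_transfer_encoding_header('Base64')
assert cte.cte == 'base64'          # normalized to lower case
assert not cte.all_defects

# Trailing junk is kept in the token list but flagged as a defect.
junk = parse_content_transfer_encoding_header('7bit oops')
assert junk.cte == '7bit'
assert junk.all_defects
```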
187,844
import re import sys import urllib.parse from string import hexdigits from operator import itemgetter from email import _encoded_words as _ew from email import errors from email import utils SPECIALS = set(r'()<>@,:;.\"[]') class Terminal(str): as_ew_allowed = True ew_combine_allowed = True syntactic_break = True def __new__(cls, value, token_type): self = super().__new__(cls, value) self.token_type = token_type self.defects = [] return self def __repr__(self): return "{}({})".format(self.__class__.__name__, super().__repr__()) def pprint(self): print(self.__class__.__name__ + '/' + self.token_type) def all_defects(self): return list(self.defects) def _pp(self, indent=''): return ["{}{}/{}({}){}".format( indent, self.__class__.__name__, self.token_type, super().__repr__(), '' if not self.defects else ' {}'.format(self.defects), )] def pop_trailing_ws(self): # This terminates the recursion. return None def comments(self): return [] def __getnewargs__(self): return(str(self), self.token_type) def _steal_trailing_WSP_if_exists(lines): wsp = '' if lines and lines[-1] and lines[-1][-1] in WSP: wsp = lines[-1][-1] lines[-1] = lines[-1][:-1] return wsp def _fold_as_ew(to_encode, lines, maxlen, last_ew, ew_combine_allowed, charset): """Fold string to_encode into lines as encoded word, combining if allowed. Return the new value for last_ew, or None if ew_combine_allowed is False. If there is already an encoded word in the last line of lines (indicated by a non-None value for last_ew) and ew_combine_allowed is true, decode the existing ew, combine it with to_encode, and re-encode. Otherwise, encode to_encode. In either case, split to_encode as necessary so that the encoded segments fit within maxlen. """ if last_ew is not None and ew_combine_allowed: to_encode = str( get_unstructured(lines[-1][last_ew:] + to_encode)) lines[-1] = lines[-1][:last_ew] if to_encode[0] in WSP: # We're joining this to non-encoded text, so don't encode # the leading blank. 
leading_wsp = to_encode[0] to_encode = to_encode[1:] if (len(lines[-1]) == maxlen): lines.append(_steal_trailing_WSP_if_exists(lines)) lines[-1] += leading_wsp trailing_wsp = '' if to_encode[-1] in WSP: # Likewise for the trailing space. trailing_wsp = to_encode[-1] to_encode = to_encode[:-1] new_last_ew = len(lines[-1]) if last_ew is None else last_ew encode_as = 'utf-8' if charset == 'us-ascii' else charset # The RFC2047 chrome takes up 7 characters plus the length # of the charset name. chrome_len = len(encode_as) + 7 if (chrome_len + 1) >= maxlen: raise errors.HeaderParseError( "max_line_length is too small to fit an encoded word") while to_encode: remaining_space = maxlen - len(lines[-1]) text_space = remaining_space - chrome_len if text_space <= 0: lines.append(' ') continue to_encode_word = to_encode[:text_space] encoded_word = _ew.encode(to_encode_word, charset=encode_as) excess = len(encoded_word) - remaining_space while excess > 0: # Since the chunk to encode is guaranteed to fit into less than 100 characters, # shrinking it by one at a time shouldn't take long. to_encode_word = to_encode_word[:-1] encoded_word = _ew.encode(to_encode_word, charset=encode_as) excess = len(encoded_word) - remaining_space lines[-1] += encoded_word to_encode = to_encode[len(to_encode_word):] if to_encode: lines.append(' ') new_last_ew = len(lines[-1]) lines[-1] += trailing_wsp return new_last_ew if ew_combine_allowed else None def _fold_mime_parameters(part, lines, maxlen, encoding): """Fold TokenList 'part' into the 'lines' list as mime parameters. Using the decoded list of parameters and values, format them according to the RFC rules, including using RFC2231 encoding if the value cannot be expressed in 'encoding' and/or the parameter+value is too long to fit within 'maxlen'. """ # Special case for RFC2231 encoding: start from decoded values and use # RFC2231 encoding iff needed. 
# # Note that the 1 and 2s being added to the length calculations are # accounting for the possibly-needed spaces and semicolons we'll be adding. # for name, value in part.params: # XXX What if this ';' puts us over maxlen the first time through the # loop? We should split the header value onto a newline in that case, # but to do that we need to recognize the need earlier or reparse the # header, so I'm going to ignore that bug for now. It'll only put us # one character over. if not lines[-1].rstrip().endswith(';'): lines[-1] += ';' charset = encoding error_handler = 'strict' try: value.encode(encoding) encoding_required = False except UnicodeEncodeError: encoding_required = True if utils._has_surrogates(value): charset = 'unknown-8bit' error_handler = 'surrogateescape' else: charset = 'utf-8' if encoding_required: encoded_value = urllib.parse.quote( value, safe='', errors=error_handler) tstr = "{}*={}''{}".format(name, charset, encoded_value) else: tstr = '{}={}'.format(name, quote_string(value)) if len(lines[-1]) + len(tstr) + 1 < maxlen: lines[-1] = lines[-1] + ' ' + tstr continue elif len(tstr) + 2 <= maxlen: lines.append(' ' + tstr) continue # We need multiple sections. We are allowed to mix encoded and # non-encoded sections, but we aren't going to. We'll encode them all. section = 0 extra_chrome = charset + "''" while value: chrome_len = len(name) + len(str(section)) + 3 + len(extra_chrome) if maxlen <= chrome_len + 3: # We need room for the leading blank, the trailing semicolon, # and at least one character of the value. If we don't # have that, we'd be stuck, so in that case fall back to # the RFC standard width. 
maxlen = 78 splitpoint = maxchars = maxlen - chrome_len - 2 while True: partial = value[:splitpoint] encoded_value = urllib.parse.quote( partial, safe='', errors=error_handler) if len(encoded_value) <= maxchars: break splitpoint -= 1 lines.append(" {}*{}*={}{}".format( name, section, extra_chrome, encoded_value)) extra_chrome = '' section += 1 value = value[splitpoint:] if value: lines[-1] += ';' The provided code snippet includes necessary dependencies for implementing the `_refold_parse_tree` function. Write a Python function `def _refold_parse_tree(parse_tree, *, policy)` to solve the following problem: Return string of contents of parse_tree folded according to RFC rules. Here is the function: def _refold_parse_tree(parse_tree, *, policy): """Return string of contents of parse_tree folded according to RFC rules. """ # max_line_length 0/None means no limit, ie: infinitely long. maxlen = policy.max_line_length or sys.maxsize encoding = 'utf-8' if policy.utf8 else 'us-ascii' lines = [''] last_ew = None wrap_as_ew_blocked = 0 want_encoding = False end_ew_not_allowed = Terminal('', 'wrap_as_ew_blocked') parts = list(parse_tree) while parts: part = parts.pop(0) if part is end_ew_not_allowed: wrap_as_ew_blocked -= 1 continue tstr = str(part) if part.token_type == 'ptext' and set(tstr) & SPECIALS: # Encode if tstr contains special characters. want_encoding = True try: tstr.encode(encoding) charset = encoding except UnicodeEncodeError: if any(isinstance(x, errors.UndecodableBytesDefect) for x in part.all_defects): charset = 'unknown-8bit' else: # If policy.utf8 is false this should really be taken from a # 'charset' property on the policy. charset = 'utf-8' want_encoding = True if part.token_type == 'mime-parameters': # Mime parameter folding (using RFC2231) is extra special. 
_fold_mime_parameters(part, lines, maxlen, encoding) continue if want_encoding and not wrap_as_ew_blocked: if not part.as_ew_allowed: want_encoding = False last_ew = None if part.syntactic_break: encoded_part = part.fold(policy=policy)[:-len(policy.linesep)] if policy.linesep not in encoded_part: # It fits on a single line if len(encoded_part) > maxlen - len(lines[-1]): # But not on this one, so start a new one. newline = _steal_trailing_WSP_if_exists(lines) # XXX what if encoded_part has no leading FWS? lines.append(newline) lines[-1] += encoded_part continue # Either this is not a major syntactic break, so we don't # want it on a line by itself even if it fits, or it # doesn't fit on a line by itself. Either way, fall through # to unpacking the subparts and wrapping them. if not hasattr(part, 'encode'): # It's not a Terminal, do each piece individually. parts = list(part) + parts else: # It's a terminal, wrap it as an encoded word, possibly # combining it with previously encoded words if allowed. last_ew = _fold_as_ew(tstr, lines, maxlen, last_ew, part.ew_combine_allowed, charset) want_encoding = False continue if len(tstr) <= maxlen - len(lines[-1]): lines[-1] += tstr continue # This part is too long to fit. The RFC wants us to break at # "major syntactic breaks", so unless we don't consider this # to be one, check if it will fit on the next line by itself. if (part.syntactic_break and len(tstr) + 1 <= maxlen): newline = _steal_trailing_WSP_if_exists(lines) if newline or part.startswith_fws(): lines.append(newline + tstr) last_ew = None continue if not hasattr(part, 'encode'): # It's not a terminal, try folding the subparts. newparts = list(part) if not part.as_ew_allowed: wrap_as_ew_blocked += 1 newparts.append(end_ew_not_allowed) parts = newparts + parts continue if part.as_ew_allowed and not wrap_as_ew_blocked: # It doesn't need CTE encoding, but encode it anyway so we can # wrap it. 
parts.insert(0, part) want_encoding = True continue # We can't figure out how to wrap, it, so give up. newline = _steal_trailing_WSP_if_exists(lines) if newline or part.startswith_fws(): lines.append(newline + tstr) else: # We can't fold it onto the next line either... lines[-1] += tstr return policy.linesep.join(lines) + policy.linesep
Return string of contents of parse_tree folded according to RFC rules.
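`_refold_parse_tree` is internal machinery; its effect is observable through the public `EmailPolicy.fold(name, value)` API, which drives this folder. A minimal sketch (the 40-character limit is an arbitrary choice for the demo):

```python
from email.policy import EmailPolicy

# Fold a long Subject header; the policy's folder breaks at whitespace
# ("major syntactic breaks") so each physical line fits max_line_length.
policy = EmailPolicy(max_line_length=40)
value = " ".join(["word"] * 20)
folded = policy.fold("Subject", value)
```

The result is the serialized header, spread over several physical lines joined by the policy's line separator.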
187,845
import abc
from email import header
from email import charset as _charset
from email.utils import _has_surrogates

def _append_doc(doc, added_doc):
    # Join the two docstrings: drop the base doc's final line and the
    # '+' marker line of the extension, then concatenate.
    doc = doc.rsplit('\n', 1)[0]
    added_doc = added_doc.split('\n', 1)[1]
    return doc + '\n' + added_doc

def _extend_docstrings(cls):
    if cls.__doc__ and cls.__doc__.startswith('+'):
        cls.__doc__ = _append_doc(cls.__bases__[0].__doc__, cls.__doc__)
    for name, attr in cls.__dict__.items():
        if attr.__doc__ and attr.__doc__.startswith('+'):
            for c in (c for base in cls.__bases__ for c in base.mro()):
                doc = getattr(getattr(c, name), '__doc__')
                if doc:
                    attr.__doc__ = _append_doc(doc, attr.__doc__)
                    break
    return cls
null
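The `'+'` convention lets a subclass docstring extend its base's. A self-contained sketch (re-deriving the helper so it runs on its own; the class names are illustrative only):

```python
def _append_doc(doc, added_doc):
    # Drop the base doc's last line and the '+' marker line, then join.
    doc = doc.rsplit('\n', 1)[0]
    added_doc = added_doc.split('\n', 1)[1]
    return doc + '\n' + added_doc

def _extend_docstrings(cls):
    # A class docstring starting with '+' is appended to the base's.
    if cls.__doc__ and cls.__doc__.startswith('+'):
        cls.__doc__ = _append_doc(cls.__bases__[0].__doc__, cls.__doc__)
    return cls

class Base:
    """Base class.

    Shared details.
    """

@_extend_docstrings
class Child(Base):
    """+

    Child-specific details.
    """
```

After decoration, `Child.__doc__` carries the base description followed by the child's additions.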
187,859
from builtins import open as _builtin_open from codecs import lookup, BOM_UTF8 import collections import functools from io import TextIOWrapper import itertools as _itertools import re import sys from token import * from token import EXACT_TOKEN_TYPES import token def group(*choices): return '(' + '|'.join(choices) + ')' def maybe(*choices): return group(*choices) + '?'
null
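These helpers simply compose alternation regexes; a quick illustration of how tokenize builds its token patterns from them (the number pattern here is a simplified stand-in):

```python
import re

def group(*choices):
    return '(' + '|'.join(choices) + ')'

def maybe(*choices):
    return group(*choices) + '?'

# Digits optionally followed by an exponent part.
Exponent = group('[eE][-+]?[0-9]+')
Number = '[0-9]+' + maybe(Exponent)
```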
187,860
from builtins import open as _builtin_open
from codecs import lookup, BOM_UTF8
import collections
import functools
from io import TextIOWrapper
import itertools as _itertools
import re
import sys
from token import *
from token import EXACT_TOKEN_TYPES
import token

def _all_string_prefixes():
    # The valid string prefixes. Only contain the lower case versions,
    # and don't contain any permutations (include 'fr', but not
    # 'rf'). The various permutations will be generated.
    _valid_string_prefixes = ['b', 'r', 'u', 'f', 'br', 'fr']
    # if we add binary f-strings, add: ['fb', 'fbr']
    result = {''}
    for prefix in _valid_string_prefixes:
        for t in _itertools.permutations(prefix):
            # create a list with upper and lower versions of each
            # character
            for u in _itertools.product(*[(c, c.upper()) for c in t]):
                result.add(''.join(u))
    return result

# The sets must exist, and the function be defined, before the loop
# below populates them with every valid string opener.
single_quoted = set()
triple_quoted = set()
for t in _all_string_prefixes():
    for u in (t + '"', t + "'"):
        single_quoted.add(u)
    for u in (t + '"""', t + "'''"):
        triple_quoted.add(u)
null
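A self-contained sketch of the same enumeration, independent of the tokenize module, showing that every case and ordering permutation of the canonical prefixes is generated:

```python
import itertools

def all_string_prefixes():
    # Mirror of tokenize._all_string_prefixes: start from the lower-case,
    # canonical-order prefixes and generate every permutation and casing.
    result = {''}
    for prefix in ['b', 'r', 'u', 'f', 'br', 'fr']:
        for t in itertools.permutations(prefix):
            for u in itertools.product(*[(c, c.upper()) for c in t]):
                result.add(''.join(u))
    return result

prefixes = all_string_prefixes()
```

The empty prefix plus 2 casings each of `b`, `r`, `u`, `f` and 8 variants each of `br` and `fr` give 25 prefixes in total.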
187,861
from builtins import open as _builtin_open from codecs import lookup, BOM_UTF8 import collections import functools from io import TextIOWrapper import itertools as _itertools import re import sys from token import * from token import EXACT_TOKEN_TYPES import token class Untokenizer: def __init__(self): self.tokens = [] self.prev_row = 1 self.prev_col = 0 self.encoding = None def add_whitespace(self, start): row, col = start if row < self.prev_row or row == self.prev_row and col < self.prev_col: raise ValueError("start ({},{}) precedes previous end ({},{})" .format(row, col, self.prev_row, self.prev_col)) row_offset = row - self.prev_row if row_offset: self.tokens.append("\\\n" * row_offset) self.prev_col = 0 col_offset = col - self.prev_col if col_offset: self.tokens.append(" " * col_offset) def untokenize(self, iterable): it = iter(iterable) indents = [] startline = False for t in it: if len(t) == 2: self.compat(t, it) break tok_type, token, start, end, line = t if tok_type == ENCODING: self.encoding = token continue if tok_type == ENDMARKER: break if tok_type == INDENT: indents.append(token) continue elif tok_type == DEDENT: indents.pop() self.prev_row, self.prev_col = end continue elif tok_type in (NEWLINE, NL): startline = True elif startline and indents: indent = indents[-1] if start[1] >= len(indent): self.tokens.append(indent) self.prev_col = len(indent) startline = False self.add_whitespace(start) self.tokens.append(token) self.prev_row, self.prev_col = end if tok_type in (NEWLINE, NL): self.prev_row += 1 self.prev_col = 0 return "".join(self.tokens) def compat(self, token, iterable): indents = [] toks_append = self.tokens.append startline = token[0] in (NEWLINE, NL) prevstring = False for tok in _itertools.chain([token], iterable): toknum, tokval = tok[:2] if toknum == ENCODING: self.encoding = tokval continue if toknum in (NAME, NUMBER): tokval += ' ' # Insert a space between two consecutive strings if toknum == STRING: if prevstring: tokval = ' ' + 
tokval prevstring = True else: prevstring = False if toknum == INDENT: indents.append(tokval) continue elif toknum == DEDENT: indents.pop() continue elif toknum in (NEWLINE, NL): startline = True elif startline and indents: toks_append(indents[-1]) startline = False toks_append(tokval) The provided code snippet includes necessary dependencies for implementing the `untokenize` function. Write a Python function `def untokenize(iterable)` to solve the following problem: Transform tokens back into Python source code. It returns a bytes object, encoded using the ENCODING token, which is the first token sequence output by tokenize. Each element returned by the iterable must be a token sequence with at least two elements, a token number and token value. If only two tokens are passed, the resulting output is poor. Round-trip invariant for full input: Untokenized source will match input source exactly Round-trip invariant for limited input: # Output bytes will tokenize back to the input t1 = [tok[:2] for tok in tokenize(f.readline)] newcode = untokenize(t1) readline = BytesIO(newcode).readline t2 = [tok[:2] for tok in tokenize(readline)] assert t1 == t2 Here is the function: def untokenize(iterable): """Transform tokens back into Python source code. It returns a bytes object, encoded using the ENCODING token, which is the first token sequence output by tokenize. Each element returned by the iterable must be a token sequence with at least two elements, a token number and token value. If only two tokens are passed, the resulting output is poor. 
Round-trip invariant for full input: Untokenized source will match input source exactly Round-trip invariant for limited input: # Output bytes will tokenize back to the input t1 = [tok[:2] for tok in tokenize(f.readline)] newcode = untokenize(t1) readline = BytesIO(newcode).readline t2 = [tok[:2] for tok in tokenize(readline)] assert t1 == t2 """ ut = Untokenizer() out = ut.untokenize(iterable) if ut.encoding is not None: out = out.encode(ut.encoding) return out
Transform tokens back into Python source code. It returns a bytes object, encoded using the ENCODING token, which is the first token sequence output by tokenize. Each element returned by the iterable must be a token sequence with at least two elements, a token number and token value. If only two tokens are passed, the resulting output is poor. Round-trip invariant for full input: Untokenized source will match input source exactly Round-trip invariant for limited input: # Output bytes will tokenize back to the input t1 = [tok[:2] for tok in tokenize(f.readline)] newcode = untokenize(t1) readline = BytesIO(newcode).readline t2 = [tok[:2] for tok in tokenize(readline)] assert t1 == t2
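Both round-trip invariants described in the docstring can be checked directly with the standard library:

```python
import io
import tokenize

source = b"x = 1\nprint(x)\n"

# Limited input: only (type, string) pairs survive, but retokenizing
# the regenerated source yields the same pairs.
t1 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(source).readline)]
newcode = tokenize.untokenize(t1)
t2 = [tok[:2] for tok in tokenize.tokenize(io.BytesIO(newcode).readline)]

# Full input: complete 5-tuples carry positions, so the original
# source is reproduced exactly.
full = list(tokenize.tokenize(io.BytesIO(source).readline))
exact = tokenize.untokenize(full)
```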
187,876
import re from tkinter import StringVar, TclError from idlelib.searchbase import SearchDialogBase from idlelib import searchengine def replace(text, insert_tags=None): """Create or reuse a singleton ReplaceDialog instance. The singleton dialog saves user entries and preferences across instances. Args: text: Text widget containing the text to be searched. """ root = text._root() engine = searchengine.get(root) if not hasattr(engine, "_replacedialog"): engine._replacedialog = ReplaceDialog(root, engine) dialog = engine._replacedialog dialog.open(text, insert_tags=insert_tags) class Toplevel(BaseWidget, Wm): """Toplevel widget, e.g. for dialogs.""" def __init__(self, master=None, cnf={}, **kw): """Construct a toplevel widget with the parent MASTER. Valid resource names: background, bd, bg, borderwidth, class, colormap, container, cursor, height, highlightbackground, highlightcolor, highlightthickness, menu, relief, screen, takefocus, use, visual, width.""" if kw: cnf = _cnfmerge((cnf, kw)) extra = () for wmkey in ['screen', 'class_', 'class', 'visual', 'colormap']: if wmkey in cnf: val = cnf[wmkey] # TBD: a hack needed because some keys # are not valid as keyword arguments if wmkey[-1] == '_': opt = '-'+wmkey[:-1] else: opt = '-'+wmkey extra = extra + (opt, val) del cnf[wmkey] BaseWidget.__init__(self, master, 'toplevel', cnf, {}, extra) root = self._root() self.iconname(root.iconname()) self.title(root.title()) self.protocol("WM_DELETE_WINDOW", self.destroy) class Text(Widget, XView, YView): """Text widget which can display text in various forms.""" def __init__(self, master=None, cnf={}, **kw): """Construct a text widget with the parent MASTER. 
STANDARD OPTIONS background, borderwidth, cursor, exportselection, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, padx, pady, relief, selectbackground, selectborderwidth, selectforeground, setgrid, takefocus, xscrollcommand, yscrollcommand, WIDGET-SPECIFIC OPTIONS autoseparators, height, maxundo, spacing1, spacing2, spacing3, state, tabs, undo, width, wrap, """ Widget.__init__(self, master, 'text', cnf, kw) def bbox(self, index): """Return a tuple of (x,y,width,height) which gives the bounding box of the visible part of the character at the given index.""" return self._getints( self.tk.call(self._w, 'bbox', index)) or None def compare(self, index1, op, index2): """Return whether between index INDEX1 and index INDEX2 the relation OP is satisfied. OP is one of <, <=, ==, >=, >, or !=.""" return self.tk.getboolean(self.tk.call( self._w, 'compare', index1, op, index2)) def count(self, index1, index2, *args): # new in Tk 8.5 """Counts the number of relevant things between the two indices. If index1 is after index2, the result will be a negative number (and this holds for each of the possible options). The actual items which are counted depends on the options given by args. The result is a list of integers, one for the result of each counting option given. Valid counting options are "chars", "displaychars", "displayindices", "displaylines", "indices", "lines", "xpixels" and "ypixels". 
There is an additional possible option "update", which if given then all subsequent options ensure that any possible out of date information is recalculated.""" args = ['-%s' % arg for arg in args if not arg.startswith('-')] args += [index1, index2] res = self.tk.call(self._w, 'count', *args) or None if res is not None and len(args) <= 3: return (res, ) else: return res def debug(self, boolean=None): """Turn on the internal consistency checks of the B-Tree inside the text widget according to BOOLEAN.""" if boolean is None: return self.tk.getboolean(self.tk.call(self._w, 'debug')) self.tk.call(self._w, 'debug', boolean) def delete(self, index1, index2=None): """Delete the characters between INDEX1 and INDEX2 (not included).""" self.tk.call(self._w, 'delete', index1, index2) def dlineinfo(self, index): """Return tuple (x,y,width,height,baseline) giving the bounding box and baseline position of the visible part of the line containing the character at INDEX.""" return self._getints(self.tk.call(self._w, 'dlineinfo', index)) def dump(self, index1, index2=None, command=None, **kw): """Return the contents of the widget between index1 and index2. The type of contents returned in filtered based on the keyword parameters; if 'all', 'image', 'mark', 'tag', 'text', or 'window' are given and true, then the corresponding items are returned. The result is a list of triples of the form (key, value, index). If none of the keywords are true then 'all' is used by default. If the 'command' argument is given, it is called once for each element of the list of triples, with the values of each triple serving as the arguments to the function. In this case the list is not returned.""" args = [] func_name = None result = None if not command: # Never call the dump command without the -command flag, since the # output could involve Tcl quoting and would be a pain to parse # right. Instead just set the command to build a list of triples # as if we had done the parsing. 
result = [] def append_triple(key, value, index, result=result): result.append((key, value, index)) command = append_triple try: if not isinstance(command, str): func_name = command = self._register(command) args += ["-command", command] for key in kw: if kw[key]: args.append("-" + key) args.append(index1) if index2: args.append(index2) self.tk.call(self._w, "dump", *args) return result finally: if func_name: self.deletecommand(func_name) ## new in tk8.4 def edit(self, *args): """Internal method This method controls the undo mechanism and the modified flag. The exact behavior of the command depends on the option argument that follows the edit argument. The following forms of the command are currently supported: edit_modified, edit_redo, edit_reset, edit_separator and edit_undo """ return self.tk.call(self._w, 'edit', *args) def edit_modified(self, arg=None): """Get or Set the modified flag If arg is not specified, returns the modified flag of the widget. The insert, delete, edit undo and edit redo commands or the user can set or clear the modified flag. If boolean is specified, sets the modified flag of the widget to arg. """ return self.edit("modified", arg) def edit_redo(self): """Redo the last undone edit When the undo option is true, reapplies the last undone edits provided no other edits were done since then. Generates an error when the redo stack is empty. Does nothing when the undo option is false. """ return self.edit("redo") def edit_reset(self): """Clears the undo and redo stacks """ return self.edit("reset") def edit_separator(self): """Inserts a separator (boundary) on the undo stack. Does nothing when the undo option is false """ return self.edit("separator") def edit_undo(self): """Undoes the last edit action If the undo option is true. An edit action is defined as all the insert and delete commands that are recorded on the undo stack in between two separators. Generates an error when the undo stack is empty. 
Does nothing when the undo option is false """ return self.edit("undo") def get(self, index1, index2=None): """Return the text from INDEX1 to INDEX2 (not included).""" return self.tk.call(self._w, 'get', index1, index2) # (Image commands are new in 8.0) def image_cget(self, index, option): """Return the value of OPTION of an embedded image at INDEX.""" if option[:1] != "-": option = "-" + option if option[-1:] == "_": option = option[:-1] return self.tk.call(self._w, "image", "cget", index, option) def image_configure(self, index, cnf=None, **kw): """Configure an embedded image at INDEX.""" return self._configure(('image', 'configure', index), cnf, kw) def image_create(self, index, cnf={}, **kw): """Create an embedded image at INDEX.""" return self.tk.call( self._w, "image", "create", index, *self._options(cnf, kw)) def image_names(self): """Return all names of embedded images in this widget.""" return self.tk.call(self._w, "image", "names") def index(self, index): """Return the index in the form line.char for INDEX.""" return str(self.tk.call(self._w, 'index', index)) def insert(self, index, chars, *args): """Insert CHARS before the characters at INDEX. An additional tag can be given in ARGS. Additional CHARS and tags can follow in ARGS.""" self.tk.call((self._w, 'insert', index, chars) + args) def mark_gravity(self, markName, direction=None): """Change the gravity of a mark MARKNAME to DIRECTION (LEFT or RIGHT). 
Return the current value if None is given for DIRECTION.""" return self.tk.call( (self._w, 'mark', 'gravity', markName, direction)) def mark_names(self): """Return all mark names.""" return self.tk.splitlist(self.tk.call( self._w, 'mark', 'names')) def mark_set(self, markName, index): """Set mark MARKNAME before the character at INDEX.""" self.tk.call(self._w, 'mark', 'set', markName, index) def mark_unset(self, *markNames): """Delete all marks in MARKNAMES.""" self.tk.call((self._w, 'mark', 'unset') + markNames) def mark_next(self, index): """Return the name of the next mark after INDEX.""" return self.tk.call(self._w, 'mark', 'next', index) or None def mark_previous(self, index): """Return the name of the previous mark before INDEX.""" return self.tk.call(self._w, 'mark', 'previous', index) or None def peer_create(self, newPathName, cnf={}, **kw): # new in Tk 8.5 """Creates a peer text widget with the given newPathName, and any optional standard configuration options. By default the peer will have the same start and end line as the parent widget, but these can be overridden with the standard configuration options.""" self.tk.call(self._w, 'peer', 'create', newPathName, *self._options(cnf, kw)) def peer_names(self): # new in Tk 8.5 """Returns a list of peers of this widget (this does not include the widget itself).""" return self.tk.splitlist(self.tk.call(self._w, 'peer', 'names')) def replace(self, index1, index2, chars, *args): # new in Tk 8.5 """Replaces the range of characters between index1 and index2 with the given characters and tags specified by args. 
See the method insert for some more information about args, and the method delete for information about the indices.""" self.tk.call(self._w, 'replace', index1, index2, chars, *args) def scan_mark(self, x, y): """Remember the current X, Y coordinates.""" self.tk.call(self._w, 'scan', 'mark', x, y) def scan_dragto(self, x, y): """Adjust the view of the text to 10 times the difference between X and Y and the coordinates given in scan_mark.""" self.tk.call(self._w, 'scan', 'dragto', x, y) def search(self, pattern, index, stopindex=None, forwards=None, backwards=None, exact=None, regexp=None, nocase=None, count=None, elide=None): """Search PATTERN beginning from INDEX until STOPINDEX. Return the index of the first character of a match or an empty string.""" args = [self._w, 'search'] if forwards: args.append('-forwards') if backwards: args.append('-backwards') if exact: args.append('-exact') if regexp: args.append('-regexp') if nocase: args.append('-nocase') if elide: args.append('-elide') if count: args.append('-count'); args.append(count) if pattern and pattern[0] == '-': args.append('--') args.append(pattern) args.append(index) if stopindex: args.append(stopindex) return str(self.tk.call(tuple(args))) def see(self, index): """Scroll such that the character at INDEX is visible.""" self.tk.call(self._w, 'see', index) def tag_add(self, tagName, index1, *args): """Add tag TAGNAME to all characters between INDEX1 and index2 in ARGS. Additional pairs of indices may follow in ARGS.""" self.tk.call( (self._w, 'tag', 'add', tagName, index1) + args) def tag_unbind(self, tagName, sequence, funcid=None): """Unbind for all characters with TAGNAME for event SEQUENCE the function identified with FUNCID.""" self.tk.call(self._w, 'tag', 'bind', tagName, sequence, '') if funcid: self.deletecommand(funcid) def tag_bind(self, tagName, sequence, func, add=None): """Bind to all characters with TAGNAME at event SEQUENCE a call to function FUNC. 
An additional boolean parameter ADD specifies whether FUNC will be called additionally to the other bound function or whether it will replace the previous function. See bind for the return value.""" return self._bind((self._w, 'tag', 'bind', tagName), sequence, func, add) def tag_cget(self, tagName, option): """Return the value of OPTION for tag TAGNAME.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'tag', 'cget', tagName, option) def tag_configure(self, tagName, cnf=None, **kw): """Configure a tag TAGNAME.""" return self._configure(('tag', 'configure', tagName), cnf, kw) tag_config = tag_configure def tag_delete(self, *tagNames): """Delete all tags in TAGNAMES.""" self.tk.call((self._w, 'tag', 'delete') + tagNames) def tag_lower(self, tagName, belowThis=None): """Change the priority of tag TAGNAME such that it is lower than the priority of BELOWTHIS.""" self.tk.call(self._w, 'tag', 'lower', tagName, belowThis) def tag_names(self, index=None): """Return a list of all tag names.""" return self.tk.splitlist( self.tk.call(self._w, 'tag', 'names', index)) def tag_nextrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. The text is searched forward from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'nextrange', tagName, index1, index2)) def tag_prevrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. 
The text is searched backwards from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'prevrange', tagName, index1, index2)) def tag_raise(self, tagName, aboveThis=None): """Change the priority of tag TAGNAME such that it is higher than the priority of ABOVETHIS.""" self.tk.call( self._w, 'tag', 'raise', tagName, aboveThis) def tag_ranges(self, tagName): """Return a list of ranges of text which have tag TAGNAME.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'ranges', tagName)) def tag_remove(self, tagName, index1, index2=None): """Remove tag TAGNAME from all characters between INDEX1 and INDEX2.""" self.tk.call( self._w, 'tag', 'remove', tagName, index1, index2) def window_cget(self, index, option): """Return the value of OPTION of an embedded window at INDEX.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'window', 'cget', index, option) def window_configure(self, index, cnf=None, **kw): """Configure an embedded window at INDEX.""" return self._configure(('window', 'configure', index), cnf, kw) window_config = window_configure def window_create(self, index, cnf={}, **kw): """Create a window at INDEX.""" self.tk.call( (self._w, 'window', 'create', index) + self._options(cnf, kw)) def window_names(self): """Return all names of embedded windows in this widget.""" return self.tk.splitlist( self.tk.call(self._w, 'window', 'names')) def yview_pickplace(self, *what): """Obsolete function, use see.""" self.tk.call((self._w, 'yview', '-pickplace') + what) class Button(Widget): """Ttk Button widget, displays a textual label and/or image, and evaluates a command when pressed.""" def __init__(self, master=None, **kw): """Construct a Ttk Button widget with the parent master. 
STANDARD OPTIONS class, compound, cursor, image, state, style, takefocus, text, textvariable, underline, width WIDGET-SPECIFIC OPTIONS command, default, width """ Widget.__init__(self, master, "ttk::button", kw) def invoke(self): """Invokes the command associated with the button.""" return self.tk.call(self._w, "invoke") class Frame(Widget): """Ttk Frame widget is a container, used to group other widgets together.""" def __init__(self, master=None, **kw): """Construct a Ttk Frame with parent master. STANDARD OPTIONS class, cursor, style, takefocus WIDGET-SPECIFIC OPTIONS borderwidth, relief, padding, width, height """ Widget.__init__(self, master, "ttk::frame", kw) def _replace_dialog(parent): # htest # from tkinter import Toplevel, Text, END, SEL from tkinter.ttk import Frame, Button top = Toplevel(parent) top.title("Test ReplaceDialog") x, y = map(int, parent.geometry().split('+')[1:]) top.geometry("+%d+%d" % (x, y + 175)) # mock undo delegator methods def undo_block_start(): pass def undo_block_stop(): pass frame = Frame(top) frame.pack() text = Text(frame, inactiveselectbackground='gray') text.undo_block_start = undo_block_start text.undo_block_stop = undo_block_stop text.pack() text.insert("insert","This is a sample sTring\nPlus MORE.") text.focus_set() def show_replace(): text.tag_add(SEL, "1.0", END) replace(text) text.tag_remove(SEL, "1.0", END) button = Button(frame, text="Replace", command=show_replace) button.pack()
null
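`replace()` caches one dialog per search engine. Stripped of the Tk machinery, the create-once pattern looks like this (the names here are illustrative, not idlelib's):

```python
class Engine:
    """Stand-in for idlelib's per-root SearchEngine."""

def get_dialog(engine, factory):
    # Create the dialog on first use and cache it on the engine --
    # the same hasattr() dance replace() performs above.
    if not hasattr(engine, "_replacedialog"):
        engine._replacedialog = factory()
    return engine._replacedialog

engine = Engine()
first = get_dialog(engine, dict)
second = get_dialog(engine, dict)
```

Caching on the engine (itself one per Tk root) means user entries and preferences persist across dialog invocations.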
187,877
from idlelib.delegator import Delegator from idlelib.redirector import WidgetRedirector class Percolator: def __init__(self, text): # XXX would be nice to inherit from Delegator self.text = text self.redir = WidgetRedirector(text) self.top = self.bottom = Delegator(text) self.bottom.insert = self.redir.register("insert", self.insert) self.bottom.delete = self.redir.register("delete", self.delete) self.filters = [] def close(self): while self.top is not self.bottom: self.removefilter(self.top) self.top = None self.bottom.setdelegate(None) self.bottom = None self.redir.close() self.redir = None self.text = None def insert(self, index, chars, tags=None): # Could go away if inheriting from Delegator self.top.insert(index, chars, tags) def delete(self, index1, index2=None): # Could go away if inheriting from Delegator self.top.delete(index1, index2) def insertfilter(self, filter): # Perhaps rename to pushfilter()? assert isinstance(filter, Delegator) assert filter.delegate is None filter.setdelegate(self.top) self.top = filter def insertfilterafter(self, filter, after): assert isinstance(filter, Delegator) assert isinstance(after, Delegator) assert filter.delegate is None f = self.top f.resetcache() while f is not after: assert f is not self.bottom f = f.delegate f.resetcache() filter.setdelegate(f.delegate) f.setdelegate(filter) def removefilter(self, filter): # XXX Perhaps should only support popfilter()? assert isinstance(filter, Delegator) assert filter.delegate is not None f = self.top if f is filter: self.top = filter.delegate filter.setdelegate(None) else: while f.delegate is not filter: assert f is not self.bottom f.resetcache() f = f.delegate f.setdelegate(filter.delegate) filter.setdelegate(None) class Delegator: def __init__(self, delegate=None): self.delegate = delegate self.__cache = set() # Cache is used to only remove added attributes # when changing the delegate. 
def __getattr__(self, name): attr = getattr(self.delegate, name) # May raise AttributeError setattr(self, name, attr) self.__cache.add(name) return attr def resetcache(self): "Removes added attributes while leaving original attributes." # Function is really about resetting delegator dict # to original state. Cache is just a means for key in self.__cache: try: delattr(self, key) except AttributeError: pass self.__cache.clear() def setdelegate(self, delegate): "Reset attributes and change delegate." self.resetcache() self.delegate = delegate def _percolator(parent): # htest # import tkinter as tk class Tracer(Delegator): def __init__(self, name): self.name = name Delegator.__init__(self, None) def insert(self, *args): print(self.name, ": insert", args) self.delegate.insert(*args) def delete(self, *args): print(self.name, ": delete", args) self.delegate.delete(*args) box = tk.Toplevel(parent) box.title("Test Percolator") x, y = map(int, parent.geometry().split('+')[1:]) box.geometry("+%d+%d" % (x, y + 175)) text = tk.Text(box) p = Percolator(text) pin = p.insertfilter pout = p.removefilter t1 = Tracer("t1") t2 = Tracer("t2") def toggle1(): (pin if var1.get() else pout)(t1) def toggle2(): (pin if var2.get() else pout)(t2) text.pack() var1 = tk.IntVar(parent) cb1 = tk.Checkbutton(box, text="Tracer1", command=toggle1, variable=var1) cb1.pack() var2 = tk.IntVar(parent) cb2 = tk.Checkbutton(box, text="Tracer2", command=toggle2, variable=var2) cb2.pack()
null
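Delegator's attribute caching works outside Tk as well. A minimal sketch with a stream delegate (the `Upper` filter is invented for illustration, standing in for a percolator filter):

```python
import io

class Delegator:
    # Condensed copy of idlelib's Delegator for a standalone demo.
    def __init__(self, delegate=None):
        self.delegate = delegate
        self._cache = set()

    def __getattr__(self, name):
        attr = getattr(self.delegate, name)  # may raise AttributeError
        setattr(self, name, attr)            # cache for later lookups
        self._cache.add(name)
        return attr

class Upper(Delegator):
    # A filter in the chain: transform, then pass downstream.
    def write(self, s):
        self.delegate.write(s.upper())

buf = io.StringIO()
chain = Upper(buf)
chain.write("hello")       # intercepted by the filter
result = chain.getvalue()  # not defined here, so delegated to buf
```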
187,891
import io import os import shlex import sys import tempfile import tokenize from tkinter import filedialog from tkinter import messagebox from tkinter.simpledialog import askstring import idlelib from idlelib.config import idleConf from idlelib.util import py_extensions class IOBinding: # One instance per editor Window so methods know which to save, close. # Open returns focus to self.editwin if aborted. # EditorWindow.open_module, others, belong here. def __init__(self, editwin): self.editwin = editwin self.text = editwin.text self.__id_open = self.text.bind("<<open-window-from-file>>", self.open) self.__id_save = self.text.bind("<<save-window>>", self.save) self.__id_saveas = self.text.bind("<<save-window-as-file>>", self.save_as) self.__id_savecopy = self.text.bind("<<save-copy-of-window-as-file>>", self.save_a_copy) self.fileencoding = 'utf-8' self.__id_print = self.text.bind("<<print-window>>", self.print_window) def close(self): # Undo command bindings self.text.unbind("<<open-window-from-file>>", self.__id_open) self.text.unbind("<<save-window>>", self.__id_save) self.text.unbind("<<save-window-as-file>>",self.__id_saveas) self.text.unbind("<<save-copy-of-window-as-file>>", self.__id_savecopy) self.text.unbind("<<print-window>>", self.__id_print) # Break cycles self.editwin = None self.text = None self.filename_change_hook = None def get_saved(self): return self.editwin.get_saved() def set_saved(self, flag): self.editwin.set_saved(flag) def reset_undo(self): self.editwin.reset_undo() filename_change_hook = None def set_filename_change_hook(self, hook): self.filename_change_hook = hook filename = None dirname = None def set_filename(self, filename): if filename and os.path.isdir(filename): self.filename = None self.dirname = filename else: self.filename = filename self.dirname = None self.set_saved(1) if self.filename_change_hook: self.filename_change_hook() def open(self, event=None, editFile=None): flist = self.editwin.flist # Save in case parent window is 
closed (ie, during askopenfile()). if flist: if not editFile: filename = self.askopenfile() else: filename=editFile if filename: # If editFile is valid and already open, flist.open will # shift focus to its existing window. # If the current window exists and is a fresh unnamed, # unmodified editor window (not an interpreter shell), # pass self.loadfile to flist.open so it will load the file # in the current window (if the file is not already open) # instead of a new window. if (self.editwin and not getattr(self.editwin, 'interp', None) and not self.filename and self.get_saved()): flist.open(filename, self.loadfile) else: flist.open(filename) else: if self.text: self.text.focus_set() return "break" # Code for use outside IDLE: if self.get_saved(): reply = self.maybesave() if reply == "cancel": self.text.focus_set() return "break" if not editFile: filename = self.askopenfile() else: filename=editFile if filename: self.loadfile(filename) else: self.text.focus_set() return "break" eol_convention = os.linesep # default def loadfile(self, filename): try: try: with tokenize.open(filename) as f: chars = f.read() fileencoding = f.encoding eol_convention = f.newlines converted = False except (UnicodeDecodeError, SyntaxError): # Wait for the editor window to appear self.editwin.text.update() enc = askstring( "Specify file encoding", "The file's encoding is invalid for Python 3.x.\n" "IDLE will convert it to UTF-8.\n" "What is the current encoding of the file?", initialvalue='utf-8', parent=self.editwin.text) with open(filename, encoding=enc) as f: chars = f.read() fileencoding = f.encoding eol_convention = f.newlines converted = True except OSError as err: messagebox.showerror("I/O Error", str(err), parent=self.text) return False except UnicodeDecodeError: messagebox.showerror("Decoding Error", "File %s\nFailed to Decode" % filename, parent=self.text) return False if not isinstance(eol_convention, str): # If the file does not contain line separators, it is None. 
# If the file contains mixed line separators, it is a tuple. if eol_convention is not None: messagebox.showwarning("Mixed Newlines", "Mixed newlines detected.\n" "The file will be changed on save.", parent=self.text) converted = True eol_convention = os.linesep # default self.text.delete("1.0", "end") self.set_filename(None) self.fileencoding = fileencoding self.eol_convention = eol_convention self.text.insert("1.0", chars) self.reset_undo() self.set_filename(filename) if converted: # We need to save the conversion results first # before being able to execute the code self.set_saved(False) self.text.mark_set("insert", "1.0") self.text.yview("insert") self.updaterecentfileslist(filename) return True def maybesave(self): if self.get_saved(): return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") confirm = messagebox.askyesnocancel( title="Save On Close", message=message, default=messagebox.YES, parent=self.text) if confirm: reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" elif confirm is None: reply = "cancel" else: reply = "no" self.text.focus_set() return reply def save(self, event): if not self.filename: self.save_as(event) else: if self.writefile(self.filename): self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell pass self.text.focus_set() return "break" def save_as(self, event): filename = self.asksavefile() if filename: if self.writefile(filename): self.set_filename(filename) self.set_saved(1) try: self.editwin.store_file_breaks() except AttributeError: pass self.text.focus_set() self.updaterecentfileslist(filename) return "break" def save_a_copy(self, event): filename = self.asksavefile() if filename: self.writefile(filename) self.text.focus_set() self.updaterecentfileslist(filename) return "break" def writefile(self, filename): text = self.fixnewlines() chars = self.encode(text) try: with open(filename, "wb") as f: f.write(chars) 
f.flush() os.fsync(f.fileno()) return True except OSError as msg: messagebox.showerror("I/O Error", str(msg), parent=self.text) return False def fixnewlines(self): "Return text with final \n if needed and os eols." if (self.text.get("end-2c") != '\n' and not hasattr(self.editwin, "interp")): # Not shell. self.text.insert("end-1c", "\n") text = self.text.get("1.0", "end-1c") if self.eol_convention != "\n": text = text.replace("\n", self.eol_convention) return text def encode(self, chars): if isinstance(chars, bytes): # This is either plain ASCII, or Tk was returning mixed-encoding # text to us. Don't try to guess further. return chars # Preserve a BOM that might have been present on opening if self.fileencoding == 'utf-8-sig': return chars.encode('utf-8-sig') # See whether there is anything non-ASCII in it. # If not, no need to figure out the encoding. try: return chars.encode('ascii') except UnicodeEncodeError: pass # Check if there is an encoding declared try: encoded = chars.encode('ascii', 'replace') enc, _ = tokenize.detect_encoding(io.BytesIO(encoded).readline) return chars.encode(enc) except SyntaxError as err: failed = str(err) except UnicodeEncodeError: failed = "Invalid encoding '%s'" % enc messagebox.showerror( "I/O Error", "%s.\nSaving as UTF-8" % failed, parent=self.text) # Fallback: save as UTF-8, with BOM - ignoring the incorrect # declared encoding return chars.encode('utf-8-sig') def print_window(self, event): confirm = messagebox.askokcancel( title="Print", message="Print to Default Printer", default=messagebox.OK, parent=self.text) if not confirm: self.text.focus_set() return "break" tempfilename = None saved = self.get_saved() if saved: filename = self.filename # shell undo is reset after every prompt, looks saved, probably isn't if not saved or filename is None: (tfd, tempfilename) = tempfile.mkstemp(prefix='IDLE_tmp_') filename = tempfilename os.close(tfd) if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" platform = 
os.name printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') command = command + " 2>&1" elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform printPlatform = False if printPlatform: #we can try to print for this platform command = command % shlex.quote(filename) pipe = os.popen(command, "r") # things can get ugly on NT if there is no printer available. output = pipe.read().strip() status = pipe.close() if status: output = "Printing failed (exit status 0x%x)\n" % \ status + output if output: output = "Printing command: %s\n" % repr(command) + output messagebox.showerror("Print status", output, parent=self.text) else: #no printing for this platform message = "Printing is not enabled for this platform: %s" % platform messagebox.showinfo("Print status", message, parent=self.text) if tempfilename: os.unlink(tempfilename) return "break" opendialog = None savedialog = None filetypes = ( ("Python files", py_extensions, "TEXT"), ("Text files", "*.txt", "TEXT"), ("All files", "*"), ) defaultextension = '.py' if sys.platform == 'darwin' else '' def askopenfile(self): dir, base = self.defaultfilename("open") if not self.opendialog: self.opendialog = filedialog.Open(parent=self.text, filetypes=self.filetypes) filename = self.opendialog.show(initialdir=dir, initialfile=base) return filename def defaultfilename(self, mode="open"): if self.filename: return os.path.split(self.filename) elif self.dirname: return self.dirname, "" else: try: pwd = os.getcwd() except OSError: pwd = "" return pwd, "" def asksavefile(self): dir, base = self.defaultfilename("save") if not self.savedialog: self.savedialog = filedialog.SaveAs( parent=self.text, filetypes=self.filetypes, defaultextension=self.defaultextension) filename = self.savedialog.show(initialdir=dir, initialfile=base) return filename def 
updaterecentfileslist(self,filename): "Update recent file list on all editor windows" if self.editwin.flist: self.editwin.update_recent_files_list(filename) class Toplevel(BaseWidget, Wm): """Toplevel widget, e.g. for dialogs.""" def __init__(self, master=None, cnf={}, **kw): """Construct a toplevel widget with the parent MASTER. Valid resource names: background, bd, bg, borderwidth, class, colormap, container, cursor, height, highlightbackground, highlightcolor, highlightthickness, menu, relief, screen, takefocus, use, visual, width.""" if kw: cnf = _cnfmerge((cnf, kw)) extra = () for wmkey in ['screen', 'class_', 'class', 'visual', 'colormap']: if wmkey in cnf: val = cnf[wmkey] # TBD: a hack needed because some keys # are not valid as keyword arguments if wmkey[-1] == '_': opt = '-'+wmkey[:-1] else: opt = '-'+wmkey extra = extra + (opt, val) del cnf[wmkey] BaseWidget.__init__(self, master, 'toplevel', cnf, {}, extra) root = self._root() self.iconname(root.iconname()) self.title(root.title()) self.protocol("WM_DELETE_WINDOW", self.destroy) class Text(Widget, XView, YView): """Text widget which can display text in various forms.""" def __init__(self, master=None, cnf={}, **kw): """Construct a text widget with the parent MASTER. 
STANDARD OPTIONS background, borderwidth, cursor, exportselection, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, padx, pady, relief, selectbackground, selectborderwidth, selectforeground, setgrid, takefocus, xscrollcommand, yscrollcommand, WIDGET-SPECIFIC OPTIONS autoseparators, height, maxundo, spacing1, spacing2, spacing3, state, tabs, undo, width, wrap, """ Widget.__init__(self, master, 'text', cnf, kw) def bbox(self, index): """Return a tuple of (x,y,width,height) which gives the bounding box of the visible part of the character at the given index.""" return self._getints( self.tk.call(self._w, 'bbox', index)) or None def compare(self, index1, op, index2): """Return whether between index INDEX1 and index INDEX2 the relation OP is satisfied. OP is one of <, <=, ==, >=, >, or !=.""" return self.tk.getboolean(self.tk.call( self._w, 'compare', index1, op, index2)) def count(self, index1, index2, *args): # new in Tk 8.5 """Counts the number of relevant things between the two indices. If index1 is after index2, the result will be a negative number (and this holds for each of the possible options). The actual items which are counted depends on the options given by args. The result is a list of integers, one for the result of each counting option given. Valid counting options are "chars", "displaychars", "displayindices", "displaylines", "indices", "lines", "xpixels" and "ypixels". 
There is an additional possible option "update", which if given then all subsequent options ensure that any possible out of date information is recalculated.""" args = ['-%s' % arg for arg in args if not arg.startswith('-')] args += [index1, index2] res = self.tk.call(self._w, 'count', *args) or None if res is not None and len(args) <= 3: return (res, ) else: return res def debug(self, boolean=None): """Turn on the internal consistency checks of the B-Tree inside the text widget according to BOOLEAN.""" if boolean is None: return self.tk.getboolean(self.tk.call(self._w, 'debug')) self.tk.call(self._w, 'debug', boolean) def delete(self, index1, index2=None): """Delete the characters between INDEX1 and INDEX2 (not included).""" self.tk.call(self._w, 'delete', index1, index2) def dlineinfo(self, index): """Return tuple (x,y,width,height,baseline) giving the bounding box and baseline position of the visible part of the line containing the character at INDEX.""" return self._getints(self.tk.call(self._w, 'dlineinfo', index)) def dump(self, index1, index2=None, command=None, **kw): """Return the contents of the widget between index1 and index2. The type of contents returned in filtered based on the keyword parameters; if 'all', 'image', 'mark', 'tag', 'text', or 'window' are given and true, then the corresponding items are returned. The result is a list of triples of the form (key, value, index). If none of the keywords are true then 'all' is used by default. If the 'command' argument is given, it is called once for each element of the list of triples, with the values of each triple serving as the arguments to the function. In this case the list is not returned.""" args = [] func_name = None result = None if not command: # Never call the dump command without the -command flag, since the # output could involve Tcl quoting and would be a pain to parse # right. Instead just set the command to build a list of triples # as if we had done the parsing. 
result = [] def append_triple(key, value, index, result=result): result.append((key, value, index)) command = append_triple try: if not isinstance(command, str): func_name = command = self._register(command) args += ["-command", command] for key in kw: if kw[key]: args.append("-" + key) args.append(index1) if index2: args.append(index2) self.tk.call(self._w, "dump", *args) return result finally: if func_name: self.deletecommand(func_name) ## new in tk8.4 def edit(self, *args): """Internal method This method controls the undo mechanism and the modified flag. The exact behavior of the command depends on the option argument that follows the edit argument. The following forms of the command are currently supported: edit_modified, edit_redo, edit_reset, edit_separator and edit_undo """ return self.tk.call(self._w, 'edit', *args) def edit_modified(self, arg=None): """Get or Set the modified flag If arg is not specified, returns the modified flag of the widget. The insert, delete, edit undo and edit redo commands or the user can set or clear the modified flag. If boolean is specified, sets the modified flag of the widget to arg. """ return self.edit("modified", arg) def edit_redo(self): """Redo the last undone edit When the undo option is true, reapplies the last undone edits provided no other edits were done since then. Generates an error when the redo stack is empty. Does nothing when the undo option is false. """ return self.edit("redo") def edit_reset(self): """Clears the undo and redo stacks """ return self.edit("reset") def edit_separator(self): """Inserts a separator (boundary) on the undo stack. Does nothing when the undo option is false """ return self.edit("separator") def edit_undo(self): """Undoes the last edit action If the undo option is true. An edit action is defined as all the insert and delete commands that are recorded on the undo stack in between two separators. Generates an error when the undo stack is empty. 
Does nothing when the undo option is false """ return self.edit("undo") def get(self, index1, index2=None): """Return the text from INDEX1 to INDEX2 (not included).""" return self.tk.call(self._w, 'get', index1, index2) # (Image commands are new in 8.0) def image_cget(self, index, option): """Return the value of OPTION of an embedded image at INDEX.""" if option[:1] != "-": option = "-" + option if option[-1:] == "_": option = option[:-1] return self.tk.call(self._w, "image", "cget", index, option) def image_configure(self, index, cnf=None, **kw): """Configure an embedded image at INDEX.""" return self._configure(('image', 'configure', index), cnf, kw) def image_create(self, index, cnf={}, **kw): """Create an embedded image at INDEX.""" return self.tk.call( self._w, "image", "create", index, *self._options(cnf, kw)) def image_names(self): """Return all names of embedded images in this widget.""" return self.tk.call(self._w, "image", "names") def index(self, index): """Return the index in the form line.char for INDEX.""" return str(self.tk.call(self._w, 'index', index)) def insert(self, index, chars, *args): """Insert CHARS before the characters at INDEX. An additional tag can be given in ARGS. Additional CHARS and tags can follow in ARGS.""" self.tk.call((self._w, 'insert', index, chars) + args) def mark_gravity(self, markName, direction=None): """Change the gravity of a mark MARKNAME to DIRECTION (LEFT or RIGHT). 
Return the current value if None is given for DIRECTION.""" return self.tk.call( (self._w, 'mark', 'gravity', markName, direction)) def mark_names(self): """Return all mark names.""" return self.tk.splitlist(self.tk.call( self._w, 'mark', 'names')) def mark_set(self, markName, index): """Set mark MARKNAME before the character at INDEX.""" self.tk.call(self._w, 'mark', 'set', markName, index) def mark_unset(self, *markNames): """Delete all marks in MARKNAMES.""" self.tk.call((self._w, 'mark', 'unset') + markNames) def mark_next(self, index): """Return the name of the next mark after INDEX.""" return self.tk.call(self._w, 'mark', 'next', index) or None def mark_previous(self, index): """Return the name of the previous mark before INDEX.""" return self.tk.call(self._w, 'mark', 'previous', index) or None def peer_create(self, newPathName, cnf={}, **kw): # new in Tk 8.5 """Creates a peer text widget with the given newPathName, and any optional standard configuration options. By default the peer will have the same start and end line as the parent widget, but these can be overridden with the standard configuration options.""" self.tk.call(self._w, 'peer', 'create', newPathName, *self._options(cnf, kw)) def peer_names(self): # new in Tk 8.5 """Returns a list of peers of this widget (this does not include the widget itself).""" return self.tk.splitlist(self.tk.call(self._w, 'peer', 'names')) def replace(self, index1, index2, chars, *args): # new in Tk 8.5 """Replaces the range of characters between index1 and index2 with the given characters and tags specified by args. 
See the method insert for some more information about args, and the method delete for information about the indices.""" self.tk.call(self._w, 'replace', index1, index2, chars, *args) def scan_mark(self, x, y): """Remember the current X, Y coordinates.""" self.tk.call(self._w, 'scan', 'mark', x, y) def scan_dragto(self, x, y): """Adjust the view of the text to 10 times the difference between X and Y and the coordinates given in scan_mark.""" self.tk.call(self._w, 'scan', 'dragto', x, y) def search(self, pattern, index, stopindex=None, forwards=None, backwards=None, exact=None, regexp=None, nocase=None, count=None, elide=None): """Search PATTERN beginning from INDEX until STOPINDEX. Return the index of the first character of a match or an empty string.""" args = [self._w, 'search'] if forwards: args.append('-forwards') if backwards: args.append('-backwards') if exact: args.append('-exact') if regexp: args.append('-regexp') if nocase: args.append('-nocase') if elide: args.append('-elide') if count: args.append('-count'); args.append(count) if pattern and pattern[0] == '-': args.append('--') args.append(pattern) args.append(index) if stopindex: args.append(stopindex) return str(self.tk.call(tuple(args))) def see(self, index): """Scroll such that the character at INDEX is visible.""" self.tk.call(self._w, 'see', index) def tag_add(self, tagName, index1, *args): """Add tag TAGNAME to all characters between INDEX1 and index2 in ARGS. Additional pairs of indices may follow in ARGS.""" self.tk.call( (self._w, 'tag', 'add', tagName, index1) + args) def tag_unbind(self, tagName, sequence, funcid=None): """Unbind for all characters with TAGNAME for event SEQUENCE the function identified with FUNCID.""" self.tk.call(self._w, 'tag', 'bind', tagName, sequence, '') if funcid: self.deletecommand(funcid) def tag_bind(self, tagName, sequence, func, add=None): """Bind to all characters with TAGNAME at event SEQUENCE a call to function FUNC. 
An additional boolean parameter ADD specifies whether FUNC will be called additionally to the other bound function or whether it will replace the previous function. See bind for the return value.""" return self._bind((self._w, 'tag', 'bind', tagName), sequence, func, add) def tag_cget(self, tagName, option): """Return the value of OPTION for tag TAGNAME.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'tag', 'cget', tagName, option) def tag_configure(self, tagName, cnf=None, **kw): """Configure a tag TAGNAME.""" return self._configure(('tag', 'configure', tagName), cnf, kw) tag_config = tag_configure def tag_delete(self, *tagNames): """Delete all tags in TAGNAMES.""" self.tk.call((self._w, 'tag', 'delete') + tagNames) def tag_lower(self, tagName, belowThis=None): """Change the priority of tag TAGNAME such that it is lower than the priority of BELOWTHIS.""" self.tk.call(self._w, 'tag', 'lower', tagName, belowThis) def tag_names(self, index=None): """Return a list of all tag names.""" return self.tk.splitlist( self.tk.call(self._w, 'tag', 'names', index)) def tag_nextrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. The text is searched forward from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'nextrange', tagName, index1, index2)) def tag_prevrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. 
The text is searched backwards from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'prevrange', tagName, index1, index2)) def tag_raise(self, tagName, aboveThis=None): """Change the priority of tag TAGNAME such that it is higher than the priority of ABOVETHIS.""" self.tk.call( self._w, 'tag', 'raise', tagName, aboveThis) def tag_ranges(self, tagName): """Return a list of ranges of text which have tag TAGNAME.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'ranges', tagName)) def tag_remove(self, tagName, index1, index2=None): """Remove tag TAGNAME from all characters between INDEX1 and INDEX2.""" self.tk.call( self._w, 'tag', 'remove', tagName, index1, index2) def window_cget(self, index, option): """Return the value of OPTION of an embedded window at INDEX.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'window', 'cget', index, option) def window_configure(self, index, cnf=None, **kw): """Configure an embedded window at INDEX.""" return self._configure(('window', 'configure', index), cnf, kw) window_config = window_configure def window_create(self, index, cnf={}, **kw): """Create a window at INDEX.""" self.tk.call( (self._w, 'window', 'create', index) + self._options(cnf, kw)) def window_names(self): """Return all names of embedded windows in this widget.""" return self.tk.splitlist( self.tk.call(self._w, 'window', 'names')) def yview_pickplace(self, *what): """Obsolete function, use see.""" self.tk.call((self._w, 'yview', '-pickplace') + what) def _io_binding(parent): # htest # from tkinter import Toplevel, Text root = Toplevel(parent) root.title("Test IOBinding") x, y = map(int, parent.geometry().split('+')[1:]) root.geometry("+%d+%d" % (x, y + 175)) class MyEditWin: def __init__(self, text): self.text = text self.flist = None self.text.bind("<Control-o>", self.open) self.text.bind('<Control-p>', self.print) self.text.bind("<Control-s>", self.save) 
self.text.bind("<Alt-s>", self.saveas) self.text.bind('<Control-c>', self.savecopy) def get_saved(self): return 0 def set_saved(self, flag): pass def reset_undo(self): pass def open(self, event): self.text.event_generate("<<open-window-from-file>>") def print(self, event): self.text.event_generate("<<print-window>>") def save(self, event): self.text.event_generate("<<save-window>>") def saveas(self, event): self.text.event_generate("<<save-window-as-file>>") def savecopy(self, event): self.text.event_generate("<<save-copy-of-window-as-file>>") text = Text(root) text.pack() text.focus_set() editwin = MyEditWin(text) IOBinding(editwin)
null
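The `encode` path in the IOBinding row above leans on `tokenize.detect_encoding` to honor a PEP 263 coding cookie before falling back to UTF-8. A standalone sketch of just that cookie check (the helper name `guess_save_encoding` is illustrative, not an idlelib name):

```python
import io
import tokenize

def guess_save_encoding(chars):
    # Re-encode to ASCII with replacement so detect_encoding can scan the
    # first lines for a "# -*- coding: ... -*-" cookie, mirroring what
    # IOBinding.encode does; with no cookie it reports the utf-8 default.
    encoded = chars.encode('ascii', 'replace')
    enc, _ = tokenize.detect_encoding(io.BytesIO(encoded).readline)
    return enc

print(guess_save_encoding("x = 1\n"))                             # utf-8
print(guess_save_encoding("# -*- coding: latin-1 -*-\nx = 1\n"))  # iso-8859-1
```

Note that `detect_encoding` normalizes cookie spellings, so `latin-1` comes back as `iso-8859-1`.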
187,892
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window The provided code snippet includes necessary dependencies for implementing the `_sphinx_version` function. Write a Python function `def _sphinx_version()` to solve the following problem: Format sys.version_info to produce the Sphinx version string used to install the chm docs Here is the function: def _sphinx_version(): "Format sys.version_info to produce the Sphinx version string used to install the chm docs" major, minor, micro, level, serial = sys.version_info release = '%s%s' % (major, minor) release += '%s' % (micro,) if level == 'candidate': release += 'rc%s' % (serial,) elif level != 'final': release += '%s%s' % (level[0], serial) return release
Format sys.version_info to produce the Sphinx version string used to install the chm docs
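The formatting rules above can be checked by restating the function to take the version tuple as an argument (a sketch; `sphinx_release` is not an idlelib name):

```python
def sphinx_release(version_info):
    # Same logic as _sphinx_version, parameterized for testing:
    # digits of major/minor/micro, plus "rcN" for candidates and
    # "<first letter of level>N" for alpha/beta.
    major, minor, micro, level, serial = version_info
    release = '%s%s' % (major, minor)
    release += '%s' % (micro,)
    if level == 'candidate':
        release += 'rc%s' % (serial,)
    elif level != 'final':
        release += '%s%s' % (level[0], serial)
    return release

print(sphinx_release((3, 12, 4, 'final', 0)))      # 3124
print(sphinx_release((3, 13, 0, 'candidate', 1)))  # 3130rc1
```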
187,893
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window def index2line(index): return int(float(index))
null
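`index2line` works because a Tk text index is a `"line.column"` string, and parsing it as a float then truncating keeps only the line part:

```python
def index2line(index):
    # "12.5" -> 12.5 -> 12: the column after the dot is discarded.
    return int(float(index))

print(index2line("12.5"))  # 12
print(index2line("1.0"))   # 1
```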
187,894
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window _line_indent_re = re.compile(r'[ \t]*') The provided code snippet includes necessary dependencies for implementing the `get_line_indent` function. Write a Python function `def get_line_indent(line, tabwidth)` to solve the following problem: Return a line's indentation as (# chars, effective # of spaces). The effective # of spaces is the length after properly "expanding" the tabs into spaces, as done by str.expandtabs(tabwidth). Here is the function: def get_line_indent(line, tabwidth): """Return a line's indentation as (# chars, effective # of spaces). The effective # of spaces is the length after properly "expanding" the tabs into spaces, as done by str.expandtabs(tabwidth). """ m = _line_indent_re.match(line) return m.end(), len(m.group().expandtabs(tabwidth))
Return a line's indentation as (# chars, effective # of spaces). The effective # of spaces is the length after properly "expanding" the tabs into spaces, as done by str.expandtabs(tabwidth).
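A self-contained run of the function above shows the difference between the raw character count and the tab-expanded width:

```python
import re

_line_indent_re = re.compile(r'[ \t]*')

def get_line_indent(line, tabwidth):
    # (# of leading whitespace chars, width after expanding tabs).
    m = _line_indent_re.match(line)
    return m.end(), len(m.group().expandtabs(tabwidth))

print(get_line_indent("\t  x = 1", 8))  # (3, 10): one tab = 8 columns
print(get_line_indent("    y", 8))      # (4, 4)
```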
187,895
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window def prepstr(s): # Helper to extract the underscore from a string, e.g. # prepstr("Co_py") returns (2, "Copy"). i = s.find('_') if i >= 0: s = s[:i] + s[i+1:] return i, s
null
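`prepstr` extracts the menu-accelerator underscore from a label, returning its index (for Tk's `underline` option) and the cleaned text; `-1` means no underscore was present:

```python
def prepstr(s):
    # "Co_py" -> (2, "Copy"); labels without '_' pass through unchanged.
    i = s.find('_')
    if i >= 0:
        s = s[:i] + s[i+1:]
    return i, s

print(prepstr("Co_py"))  # (2, 'Copy')
print(prepstr("Paste"))  # (-1, 'Paste')
```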
187,896
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window keynames = { 'bracketleft': '[', 'bracketright': ']', 'slash': '/', } def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 # if not keylist: if (not keylist) or (macosx.isCocoaTk() and eventname in { "<<open-module>>", "<<goto-line>>", "<<change-indentwidth>>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) s = re.sub(r"\b\w+\b", lambda m: keynames.get(m.group(), m.group()), s) s = re.sub("Key-", "", s) s = re.sub("Cancel","Ctrl-Break",s) # dscherer@cmu.edu s = re.sub("Control-", "Ctrl-", s) s = re.sub("-", "+", s) s = re.sub("><", " ", s) s = re.sub("<", "", s) s = re.sub(">", "", s) return s
null
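The substitution chain in `get_accelerator` can be isolated from the idlelib keydef/macosx plumbing and run on a single Tk binding string (a sketch; `accelerator_label` is an illustrative name):

```python
import re

keynames = {'bracketleft': '[', 'bracketright': ']', 'slash': '/'}

def accelerator_label(binding):
    # Same regex pipeline as get_accelerator, applied to one binding:
    # uppercase single letters, map key names, then turn Tk syntax
    # ("<Control-Key-x>") into a menu label ("Ctrl+X").
    s = binding
    s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s)
    s = re.sub(r"\b\w+\b", lambda m: keynames.get(m.group(), m.group()), s)
    s = re.sub("Key-", "", s)
    s = re.sub("Cancel", "Ctrl-Break", s)
    s = re.sub("Control-", "Ctrl-", s)
    s = re.sub("-", "+", s)
    s = re.sub("><", " ", s)
    s = re.sub("<", "", s)
    s = re.sub(">", "", s)
    return s

print(accelerator_label("<Control-Key-x>"))            # Ctrl+X
print(accelerator_label("<Control-Key-bracketleft>"))  # Ctrl+[
```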
187,897
import importlib.abc import importlib.util import os import platform import re import string import sys import tokenize import traceback import webbrowser from tkinter import * from tkinter.font import Font from tkinter.ttk import Scrollbar from tkinter import simpledialog from tkinter import messagebox from idlelib.config import idleConf from idlelib import configdialog from idlelib import grep from idlelib import help from idlelib import help_about from idlelib import macosx from idlelib.multicall import MultiCallCreator from idlelib import pyparse from idlelib import query from idlelib import replace from idlelib import search from idlelib.tree import wheel_event from idlelib.util import py_extensions from idlelib import window class EditorWindow: def __init__(self, flist=None, filename=None, key=None, root=None): def handle_winconfig(self, event=None): def set_width(self): def new_callback(self, event): def home_callback(self, event): def set_status_bar(self): def set_line_and_column(self, event=None): def createmenubar(self): def postwindowsmenu(self): def update_menu_label(self, menu, index, label): def update_menu_state(self, menu, index, state): def handle_yview(self, event, *args): def right_menu_event(self, event): def make_rmenu(self): def command(text=self.text, eventname=eventname): def rmenu_check_cut(self): def rmenu_check_copy(self): def rmenu_check_paste(self): def about_dialog(self, event=None): def config_dialog(self, event=None): def help_dialog(self, event=None): def python_docs(self, event=None): def cut(self,event): def copy(self,event): def paste(self,event): def select_all(self, event=None): def remove_selection(self, event=None): def move_at_edge_if_selection(self, edge_index): def move_at_edge(event): def del_word_left(self, event): def del_word_right(self, event): def find_event(self, event): def find_again_event(self, event): def find_selection_event(self, event): def find_in_files_event(self, event): def replace_event(self, event): def 
goto_line_event(self, event): def open_module(self): def open_module_event(self, event): def open_module_browser(self, event=None): def open_path_browser(self, event=None): def open_turtle_demo(self, event = None): def gotoline(self, lineno): def ispythonsource(self, filename): def close_hook(self): def set_close_hook(self, close_hook): def filename_change_hook(self): def _addcolorizer(self): def _rmcolorizer(self): def ResetColorizer(self): def colorize_syntax_error(self, text, pos): def update_cursor_blink(self): def ResetFont(self): def RemoveKeybindings(self): def ApplyKeybindings(self): def set_notabs_indentwidth(self): def reset_help_menu_entries(self): def __extra_help_callback(self, helpfile): def display_extra_help(helpfile=helpfile): def update_recent_files_list(self, new_file=None): def __recent_file_callback(self, file_name): def open_recent_file(fn_closure=file_name): def saved_change_hook(self): def get_saved(self): def set_saved(self, flag): def reset_undo(self): def short_title(self): def long_title(self): def center_insert_event(self, event): def center(self, mark="insert"): def getwindowlines(self): def getlineno(self, mark="insert"): def get_geometry(self): def close_event(self, event): def maybesave(self): def close(self): def _close(self): def load_extensions(self): def unload_extensions(self): def load_standard_extensions(self): def get_standard_extension_names(self): def load_extension(self, name): def apply_bindings(self, keydefs=None): def fill_menus(self, menudefs=None, keydefs=None): def command(text=text, eventname=eventname): def getvar(self, name): def setvar(self, name, value, vartype=None): def get_var_obj(self, name, vartype=None): def is_char_in_string(self, text_index): def get_selection_indices(self): def get_tk_tabwidth(self): def set_tk_tabwidth(self, newtabwidth): def set_indentation_params(self, is_py_src, guess=True): def smart_backspace_event(self, event): def smart_indent_event(self, event): def 
newline_and_indent_event(self, event): def _build_char_in_string_func(self, startindex): def inner(offset, _startindex=startindex, _icis=self.is_char_in_string): def _make_blanks(self, n): def reindent_to(self, column): def guess_indent(self): def toggle_line_numbers_event(self, event=None): def fixwordbreaks(root): def _editor_window(parent): # htest # # error if close master window first - timer event, after script root = parent fixwordbreaks(root) if sys.argv[1:]: filename = sys.argv[1] else: filename = None macosx.setupApp(root, None) edit = EditorWindow(root=root, filename=filename) text = edit.text text['height'] = 10 for i in range(20): text.insert('insert', ' '*i + str(i) + '\n') # text.bind("<<close-all-windows>>", edit.close_event) # Does not stop error, neither does following # edit.text.bind("<<close-window>>", edit.close_event)
null
187,898
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter The provided code snippet includes necessary dependencies for implementing the `handle_tk_events` function. Write a Python function `def handle_tk_events(tcl=tcl)` to solve the following problem: Process any tk events that are ready to be dispatched if tkinter has been imported, a tcl interpreter has been created and tk has been loaded. Here is the function: def handle_tk_events(tcl=tcl): """Process any tk events that are ready to be dispatched if tkinter has been imported, a tcl interpreter has been created and tk has been loaded.""" tcl.eval("update")
Process any tk events that are ready to be dispatched if tkinter has been imported, a tcl interpreter has been created and tk has been loaded.
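A minimal, self-contained sketch of the conditional event-pump pattern behind `handle_tk_events`. The `try/except` shape here is an assumption for illustration (the real `run.py` binds `tcl` via a default argument only when tkinter has been imported and tk loaded); note that a bare Tcl interpreter needs no display, and `update` is a core Tcl command that drains the pending event queue.

```python
# Sketch (hypothetical structure): if tkinter and its _tkinter extension
# are unavailable, the event pump degrades to a harmless no-op.
try:
    import tkinter
    _tcl = tkinter.Tcl()  # bare Tcl interpreter, no Tk window required

    def handle_tk_events(tcl=_tcl):
        # "update" processes every event currently queued in the interpreter.
        tcl.eval("update")
except ImportError:
    def handle_tk_events():
        pass
```

Either branch leaves `handle_tk_events()` callable with no arguments, which is why the caller never needs to know whether a GUI toolkit is present.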
187,899
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter exit_now = False def show_socket_error(err, address): class MyRPCServer(rpc.RPCServer): def handle_error(self, request, client_address): class MyHandler(rpc.RPCHandler): def handle(self): def exithook(self): def EOFhook(self): def decode_interrupthook(self): def manage_socket(address): for i in range(3): time.sleep(i) try: server = MyRPCServer(address, MyHandler) break except OSError as err: print("IDLE Subprocess: OSError: " + err.args[1] + ", retrying....", file=sys.__stderr__) socket_error = err else: print("IDLE Subprocess: Connection to " "IDLE GUI failed, exiting.", file=sys.__stderr__) show_socket_error(socket_error, address) global exit_now exit_now = True return server.handle_request() # A single request only
null
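The retry loop in `manage_socket` can be sketched with a plain socket instead of `MyRPCServer`. The helper name `connect_with_retry` is hypothetical; what it preserves from the original is the growing delay (`time.sleep(i)` for i = 0, 1, 2), keeping the last `OSError`, and giving up after three attempts.

```python
import socket
import time

def connect_with_retry(address, attempts=3):
    """Hypothetical stand-in mirroring manage_socket's shape: retry with
    a growing delay, remember the last OSError, re-raise it on failure."""
    last_err = None
    for i in range(attempts):
        time.sleep(i)  # 0s, 1s, 2s -- same backoff as manage_socket
        try:
            return socket.create_connection(address, timeout=2)
        except OSError as err:
            last_err = err
    raise last_err
```

The first attempt pays no delay at all, so the common case (server already listening) connects immediately.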
187,900
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter def get_message_lines(typ, exc, tb): "Return line composing the exception message." if typ in (AttributeError, NameError): # 3.10+ hints are not directly accessible from python (#44026). err = io.StringIO() with contextlib.redirect_stderr(err): sys.__excepthook__(typ, exc, tb) return [err.getvalue().split("\n")[-2] + "\n"] else: return traceback.format_exception_only(typ, exc) def cleanup_traceback(tb, exclude): "Remove excluded traces from beginning/end of tb; get cached lines" orig_tb = tb[:] while tb: for rpcfile in exclude: if tb[0][0].count(rpcfile): break # found an exclude, break for: and delete tb[0] else: break # no excludes, have left RPC code, break while: del tb[0] while tb: for rpcfile in exclude: if tb[-1][0].count(rpcfile): break else: break del tb[-1] if len(tb) == 0: # exception was in IDLE internals, don't prune! 
tb[:] = orig_tb[:] print("** IDLE Internal Exception: ", file=sys.stderr) rpchandler = rpc.objecttable['exec'].rpchandler for i in range(len(tb)): fn, ln, nm, line = tb[i] if nm == '?': nm = "-toplevel-" if not line and fn.startswith("<pyshell#"): line = rpchandler.remotecall('linecache', 'getline', (fn, ln), {}) tb[i] = fn, ln, nm, line def flush_stdout(): """XXX How to do this now?""" def print_exception(): import linecache linecache.checkcache() flush_stdout() efile = sys.stderr typ, val, tb = excinfo = sys.exc_info() sys.last_type, sys.last_value, sys.last_traceback = excinfo seen = set() def print_exc(typ, exc, tb): seen.add(id(exc)) context = exc.__context__ cause = exc.__cause__ if cause is not None and id(cause) not in seen: print_exc(type(cause), cause, cause.__traceback__) print("\nThe above exception was the direct cause " "of the following exception:\n", file=efile) elif (context is not None and not exc.__suppress_context__ and id(context) not in seen): print_exc(type(context), context, context.__traceback__) print("\nDuring handling of the above exception, " "another exception occurred:\n", file=efile) if tb: tbe = traceback.extract_tb(tb) print('Traceback (most recent call last):', file=efile) exclude = ("run.py", "rpc.py", "threading.py", "queue.py", "debugger_r.py", "bdb.py") cleanup_traceback(tbe, exclude) traceback.print_list(tbe, file=efile) lines = get_message_lines(typ, exc, tb) for line in lines: print(line, end='', file=efile) print_exc(typ, val, tb)
null
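The pruning logic in `cleanup_traceback` can be shown compactly on `traceback.FrameSummary` objects instead of the raw `(fn, ln, nm, line)` tuples the original walks. `prune_frames` is an illustrative name, not IDLE's API; it keeps the same two behaviors: strip excluded frames from both ends, and restore everything if pruning would leave nothing (an exception raised entirely inside IDLE internals).

```python
import traceback

def prune_frames(tb_list, exclude=("run.py", "rpc.py", "threading.py")):
    """Simplified sketch of cleanup_traceback: trim IDLE-internal frames
    from both ends, but never prune the list down to empty."""
    orig = tb_list[:]
    while tb_list and any(x in tb_list[0].filename for x in exclude):
        del tb_list[0]          # leading internal frames
    while tb_list and any(x in tb_list[-1].filename for x in exclude):
        del tb_list[-1]         # trailing internal frames
    if not tb_list:             # exception was entirely internal: keep it all
        tb_list[:] = orig
    return tb_list
```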
187,901
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter def capture_warnings(capture): "Replace warning.showwarning with idle_showwarning_subproc, or reverse." global _warnings_showwarning if capture: if _warnings_showwarning is None: _warnings_showwarning = warnings.showwarning warnings.showwarning = idle_showwarning_subproc else: if _warnings_showwarning is not None: warnings.showwarning = _warnings_showwarning _warnings_showwarning = None capture_warnings(True) capture_warnings(False) The provided code snippet includes necessary dependencies for implementing the `exit` function. Write a Python function `def exit()` to solve the following problem: Exit subprocess, possibly after first clearing exit functions. If config-main.cfg/.def 'General' 'delete-exitfunc' is True, then any functions registered with atexit will be removed before exiting. (VPython support) Here is the function: def exit(): """Exit subprocess, possibly after first clearing exit functions. If config-main.cfg/.def 'General' 'delete-exitfunc' is True, then any functions registered with atexit will be removed before exiting. (VPython support) """ if no_exitfunc: import atexit atexit._clear() capture_warnings(False) sys.exit(0)
Exit subprocess, possibly after first clearing exit functions. If config-main.cfg/.def 'General' 'delete-exitfunc' is True, then any functions registered with atexit will be removed before exiting. (VPython support)
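A runnable sketch of the same shutdown sequence, with `no_exitfunc` hard-coded in place of the config lookup and `exit_subprocess` as a hypothetical name. `atexit._clear()` is the CPython-private helper the real `run.exit()` calls; `sys.exit(0)` amounts to raising `SystemExit(0)`, which is done directly here.

```python
import atexit

no_exitfunc = True  # stand-in for the 'delete-exitfunc' config flag

def exit_subprocess(code=0):
    """Sketch of run.exit(): optionally drop every registered atexit
    callback, then leave via SystemExit (what sys.exit() raises)."""
    if no_exitfunc:
        atexit._clear()  # CPython-private, but used by IDLE itself
    raise SystemExit(code)
```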
187,902
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter def fixdoc(fun, text): tem = (fun.__doc__ + '\n\n') if fun.__doc__ is not None else '' fun.__doc__ = tem + textwrap.fill(textwrap.dedent(text)) RECURSIONLIMIT_DELTA = 30 The provided code snippet includes necessary dependencies for implementing the `install_recursionlimit_wrappers` function. Write a Python function `def install_recursionlimit_wrappers()` to solve the following problem: Install wrappers to always add 30 to the recursion limit. Here is the function: def install_recursionlimit_wrappers(): """Install wrappers to always add 30 to the recursion limit.""" # see: bpo-26806 @functools.wraps(sys.setrecursionlimit) def setrecursionlimit(*args, **kwargs): # mimic the original sys.setrecursionlimit()'s input handling if kwargs: raise TypeError( "setrecursionlimit() takes no keyword arguments") try: limit, = args except ValueError: raise TypeError(f"setrecursionlimit() takes exactly one " f"argument ({len(args)} given)") if not limit > 0: raise ValueError( "recursion limit must be greater or equal than 1") return setrecursionlimit.__wrapped__(limit + RECURSIONLIMIT_DELTA) fixdoc(setrecursionlimit, f"""\ This IDLE wrapper adds {RECURSIONLIMIT_DELTA} to prevent possible uninterruptible loops.""") @functools.wraps(sys.getrecursionlimit) def getrecursionlimit(): return getrecursionlimit.__wrapped__() - RECURSIONLIMIT_DELTA fixdoc(getrecursionlimit, f"""\ This IDLE wrapper subtracts {RECURSIONLIMIT_DELTA} to compensate for the {RECURSIONLIMIT_DELTA} IDLE adds when setting the limit.""") # add the delta to the default recursion limit, 
to compensate.
sys.setrecursionlimit(sys.getrecursionlimit() + RECURSIONLIMIT_DELTA)
sys.setrecursionlimit = setrecursionlimit
sys.getrecursionlimit = getrecursionlimit
Install wrappers to always add 30 to the recursion limit.
187,903
import contextlib import functools import io import linecache import queue import sys import textwrap import time import traceback import _thread as thread import threading import warnings import idlelib from idlelib import autocomplete from idlelib import calltip from idlelib import debugger_r from idlelib import debugobj_r from idlelib import iomenu from idlelib import rpc from idlelib import stackviewer import __main__ import tkinter RECURSIONLIMIT_DELTA = 30 The provided code snippet includes necessary dependencies for implementing the `uninstall_recursionlimit_wrappers` function. Write a Python function `def uninstall_recursionlimit_wrappers()` to solve the following problem: Uninstall the recursion limit wrappers from the sys module. IDLE only uses this for tests. Users can import run and call this to remove the wrapping. Here is the function: def uninstall_recursionlimit_wrappers(): """Uninstall the recursion limit wrappers from the sys module. IDLE only uses this for tests. Users can import run and call this to remove the wrapping. """ if ( getattr(sys.setrecursionlimit, '__wrapped__', None) and getattr(sys.getrecursionlimit, '__wrapped__', None) ): sys.setrecursionlimit = sys.setrecursionlimit.__wrapped__ sys.getrecursionlimit = sys.getrecursionlimit.__wrapped__ sys.setrecursionlimit(sys.getrecursionlimit() - RECURSIONLIMIT_DELTA)
Uninstall the recursion limit wrappers from the sys module. IDLE only uses this for tests. Users can import run and call this to remove the wrapping.
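The install/uninstall pair can be condensed into a small self-contained sketch (`install`/`uninstall` here are illustrative names, and input validation is omitted). The invariant it demonstrates: user code keeps seeing its own limit, while the interpreter really allows `delta` extra frames for IDLE's machinery; `functools.wraps` stashes the original function on `__wrapped__`, which is also how uninstall detects and undoes the wrapping.

```python
import functools
import sys

DELTA = 30  # same offset as RECURSIONLIMIT_DELTA

def install(delta=DELTA):
    """Wrap sys.set/getrecursionlimit so the visible limit is offset by delta."""
    @functools.wraps(sys.setrecursionlimit)
    def setrecursionlimit(limit):
        return setrecursionlimit.__wrapped__(limit + delta)

    @functools.wraps(sys.getrecursionlimit)
    def getrecursionlimit():
        return getrecursionlimit.__wrapped__() - delta

    # Raise the real limit first, so the user-visible value is unchanged.
    sys.setrecursionlimit(sys.getrecursionlimit() + delta)
    sys.setrecursionlimit = setrecursionlimit
    sys.getrecursionlimit = getrecursionlimit

def uninstall(delta=DELTA):
    """Restore the originals and remove the hidden delta from the limit."""
    if getattr(sys.setrecursionlimit, "__wrapped__", None):
        sys.setrecursionlimit = sys.setrecursionlimit.__wrapped__
        sys.getrecursionlimit = sys.getrecursionlimit.__wrapped__
        sys.setrecursionlimit(sys.getrecursionlimit() - delta)
```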
187,904
import os from tkinter import messagebox class FileList: def __init__(self, root): def open(self, filename, action=None): def gotofileline(self, filename, lineno=None): def new(self, filename=None): def close_all_callback(self, *args, **kwds): def unregister_maybe_terminate(self, edit): def filename_changed_edit(self, edit): def canonize(self, filename): class Tk(Misc, Wm): def __init__(self, screenName=None, baseName=None, className='Tk', useTk=True, sync=False, use=None): def loadtk(self): def _loadtk(self): def destroy(self): def readprofile(self, baseName, className): def report_callback_exception(self, exc, val, tb): def __getattr__(self, attr): def fixwordbreaks(root): def fix_scaling(root): def _test(): # TODO check and convert to htest from tkinter import Tk from idlelib.editor import fixwordbreaks from idlelib.run import fix_scaling root = Tk() fix_scaling(root) fixwordbreaks(root) root.withdraw() flist = FileList(root) flist.new() if flist.inversedict: root.mainloop()
null
187,905
import contextlib import functools import itertools import tkinter as tk from tkinter.font import Font from idlelib.config import idleConf from idlelib.delegator import Delegator from idlelib import macosx def get_lineno(text, index): """Return the line number of an index in a Tk text widget.""" text_index = text.index(index) return int(float(text_index)) if text_index else None The provided code snippet includes necessary dependencies for implementing the `get_end_linenumber` function. Write a Python function `def get_end_linenumber(text)` to solve the following problem: Return the number of the last line in a Tk text widget. Here is the function: def get_end_linenumber(text): """Return the number of the last line in a Tk text widget.""" return get_lineno(text, 'end-1c')
Return the number of the last line in a Tk text widget.
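The parsing trick inside `get_lineno` works without any Tk widget, since a Tk text index is just the string `'line.column'`. `lineno_from_index` is a hypothetical standalone version; the real functions first ask a live `Text` widget to resolve an index like `'end-1c'` into that form.

```python
def lineno_from_index(text_index):
    """int(float('12.5')) -> 12: float() reads 'line.column' as a number
    and int() truncates away the column part, leaving the line number."""
    return int(float(text_index)) if text_index else None
```

This is why `get_end_linenumber` can be one line: `'end-1c'` resolves to the last line's index, and the same truncation extracts its line number.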
187,906
import contextlib import functools import itertools import tkinter as tk from tkinter.font import Font from idlelib.config import idleConf from idlelib.delegator import Delegator from idlelib import macosx The provided code snippet includes necessary dependencies for implementing the `get_displaylines` function. Write a Python function `def get_displaylines(text, index)` to solve the following problem: Display height, in lines, of a logical line in a Tk text widget. Here is the function: def get_displaylines(text, index): """Display height, in lines, of a logical line in a Tk text widget.""" res = text.count(f"{index} linestart", f"{index} lineend", "displaylines") return res[0] if res else 0
Display height, in lines, of a logical line in a Tk text widget.
187,907
import contextlib import functools import itertools import tkinter as tk from tkinter.font import Font from idlelib.config import idleConf from idlelib.delegator import Delegator from idlelib import macosx The provided code snippet includes necessary dependencies for implementing the `get_widget_padding` function. Write a Python function `def get_widget_padding(widget)` to solve the following problem: Get the total padding of a Tk widget, including its border. Here is the function: def get_widget_padding(widget): """Get the total padding of a Tk widget, including its border.""" # TODO: use also in codecontext.py manager = widget.winfo_manager() if manager == 'pack': info = widget.pack_info() elif manager == 'grid': info = widget.grid_info() else: raise ValueError(f"Unsupported geometry manager: {manager}") # All values are passed through getint(), since some # values may be pixel objects, which can't simply be added to ints. padx = sum(map(widget.tk.getint, [ info['padx'], widget.cget('padx'), widget.cget('border'), ])) pady = sum(map(widget.tk.getint, [ info['pady'], widget.cget('pady'), widget.cget('border'), ])) return padx, pady
Get the total padding of a Tk widget, including its border.
187,908
import contextlib import functools import itertools import tkinter as tk from tkinter.font import Font from idlelib.config import idleConf from idlelib.delegator import Delegator from idlelib import macosx def temp_enable_text_widget(text): text.configure(state=tk.NORMAL) try: yield finally: text.configure(state=tk.DISABLED)
null
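`temp_enable_text_widget` is a generator context manager (the dump above omits the `@contextlib.contextmanager` decorator the module applies, but the module does import `contextlib`). The enable-then-always-restore pattern generalizes beyond Tk; `temp_attr` below is a hypothetical generic version of the same shape.

```python
import contextlib

@contextlib.contextmanager
def temp_attr(obj, name, value):
    """Generic sketch of the pattern: set an attribute for the duration
    of a with-block and restore it in finally, even on exceptions --
    exactly how the sidebar Text is flipped NORMAL and back to DISABLED."""
    old = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield obj
    finally:
        setattr(obj, name, old)
```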
187,909
import contextlib import functools import itertools import tkinter as tk from tkinter.font import Font from idlelib.config import idleConf from idlelib.delegator import Delegator from idlelib import macosx class LineNumbers(BaseSideBar): """Line numbers support for editor windows.""" def __init__(self, editwin): super().__init__(editwin) end_line_delegator = EndLineDelegator(self.update_sidebar_text) # Insert the delegator after the undo delegator, so that line numbers # are properly updated after undo and redo actions. self.editwin.per.insertfilterafter(end_line_delegator, after=self.editwin.undo) def init_widgets(self): _padx, pady = get_widget_padding(self.text) self.sidebar_text = tk.Text(self.parent, width=1, wrap=tk.NONE, padx=2, pady=pady, borderwidth=0, highlightthickness=0) self.sidebar_text.config(state=tk.DISABLED) self.prev_end = 1 self._sidebar_width_type = type(self.sidebar_text['width']) with temp_enable_text_widget(self.sidebar_text): self.sidebar_text.insert('insert', '1', 'linenumber') self.sidebar_text.config(takefocus=False, exportselection=False) self.sidebar_text.tag_config('linenumber', justify=tk.RIGHT) end = get_end_linenumber(self.text) self.update_sidebar_text(end) return self.sidebar_text def grid(self): self.sidebar_text.grid(row=1, column=0, sticky=tk.NSEW) def update_font(self): font = idleConf.GetFont(self.text, 'main', 'EditorWindow') self.sidebar_text['font'] = font def update_colors(self): """Update the sidebar text colors, usually after config changes.""" colors = idleConf.GetHighlight(idleConf.CurrentTheme(), 'linenumber') foreground = colors['foreground'] background = colors['background'] self.sidebar_text.config( fg=foreground, bg=background, selectforeground=foreground, selectbackground=background, inactiveselectbackground=background, ) def update_sidebar_text(self, end): """ Perform the following action: Each line sidebar_text contains the linenumber for that line Synchronize with editwin.text so that both sidebar_text and 
editwin.text contain the same number of lines""" if end == self.prev_end: return width_difference = len(str(end)) - len(str(self.prev_end)) if width_difference: cur_width = int(float(self.sidebar_text['width'])) new_width = cur_width + width_difference self.sidebar_text['width'] = self._sidebar_width_type(new_width) with temp_enable_text_widget(self.sidebar_text): if end > self.prev_end: new_text = '\n'.join(itertools.chain( [''], map(str, range(self.prev_end + 1, end + 1)), )) self.sidebar_text.insert(f'end -1c', new_text, 'linenumber') else: self.sidebar_text.delete(f'{end+1}.0 -1c', 'end -1c') self.prev_end = end def yscroll_event(self, *args, **kwargs): self.sidebar_text.yview_moveto(args[0]) return 'break' idleConf = IdleConf() class Dummy_editwin: def __init__(self, text): self.text = text self.text_frame = self.text.master self.per = Percolator(text) self.undo = Delegator() self.per.insertfilter(self.undo) def setvar(self, name, value): pass def getlineno(self, index): return int(float(self.text.index(index))) def _linenumbers_drag_scrolling(parent): # htest # from idlelib.idle_test.test_sidebar import Dummy_editwin toplevel = tk.Toplevel(parent) text_frame = tk.Frame(toplevel) text_frame.pack(side=tk.LEFT, fill=tk.BOTH, expand=True) text_frame.rowconfigure(1, weight=1) text_frame.columnconfigure(1, weight=1) font = idleConf.GetFont(toplevel, 'main', 'EditorWindow') text = tk.Text(text_frame, width=80, height=24, wrap=tk.NONE, font=font) text.grid(row=1, column=1, sticky=tk.NSEW) editwin = Dummy_editwin(text) editwin.vbar = tk.Scrollbar(text_frame) linenumbers = LineNumbers(editwin) linenumbers.show_sidebar() text.insert('1.0', '\n'.join('a'*i for i in range(1, 101)))
null
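The string that `update_sidebar_text` inserts when the buffer grows can be isolated into a tiny pure function (`appended_linenumbers` is an illustrative name). The subtle part is the leading `''` fed to `itertools.chain`: it makes the join start with a newline, so the new numbers land on fresh lines after the existing sidebar content.

```python
import itertools

def appended_linenumbers(prev_end, end):
    """Text appended to the sidebar when line count grows from prev_end to end."""
    return "\n".join(itertools.chain([""], map(str, range(prev_end + 1, end + 1))))
```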
187,914
import os import sys import webbrowser from platform import python_version, architecture from tkinter import Toplevel, Frame, Label, Button, PhotoImage from tkinter import SUNKEN, TOP, BOTTOM, LEFT, X, BOTH, W, EW, NSEW, E from idlelib import textview def architecture(executable=sys.executable, bits='', linkage=''): """ Queries the given executable (defaults to the Python interpreter binary) for various architecture information. Returns a tuple (bits, linkage) which contains information about the bit architecture and the linkage format used for the executable. Both values are returned as strings. Values that cannot be determined are returned as given by the parameter presets. If bits is given as '', the sizeof(pointer) (or sizeof(long) on Python version < 1.5.2) is used as indicator for the supported pointer size. The function relies on the system's "file" command to do the actual work. This is available on most if not all Unix platforms. On some non-Unix platforms where the "file" command does not exist and the executable is set to the Python interpreter binary defaults from _default_architecture are used. """ # Use the sizeof(pointer) as default number of bits if nothing # else is given as default. if not bits: import struct size = struct.calcsize('P') bits = str(size * 8) + 'bit' # Get data from the 'file' system command if executable: fileout = _syscmd_file(executable, '') else: fileout = '' if not fileout and \ executable == sys.executable: # "file" command did not return anything; we'll try to provide # some sensible defaults then... if sys.platform in _default_architecture: b, l = _default_architecture[sys.platform] if b: bits = b if l: linkage = l return bits, linkage if 'executable' not in fileout and 'shared object' not in fileout: # Format not supported return bits, linkage # Bits if '32-bit' in fileout: bits = '32bit' elif '64-bit' in fileout: bits = '64bit' # Linkage if 'ELF' in fileout: linkage = 'ELF' elif 'PE' in fileout: # E.g. 
Windows uses this format if 'Windows' in fileout: linkage = 'WindowsPE' else: linkage = 'PE' elif 'COFF' in fileout: linkage = 'COFF' elif 'MS-DOS' in fileout: linkage = 'MSDOS' else: # XXX the A.OUT format also falls under this class... pass return bits, linkage The provided code snippet includes necessary dependencies for implementing the `build_bits` function. Write a Python function `def build_bits()` to solve the following problem: Return bits for platform. Here is the function: def build_bits(): "Return bits for platform." if sys.platform == 'darwin': return '64' if sys.maxsize > 2**32 else '32' else: return architecture()[0][:2]
Return bits for platform.
187,928
import builtins import keyword import re import time from idlelib.config import idleConf from idlelib.delegator import Delegator def any(name, alternates): prog = make_pat() def make_pat(): kw = r"\b" + any("KEYWORD", keyword.kwlist) + r"\b" match_softkw = ( r"^[ \t]*" + # at beginning of line + possible indentation r"(?P<MATCH_SOFTKW>match)\b" + r"(?![ \t]*(?:" + "|".join([ # not followed by ... r"[:,;=^&|@~)\]}]", # a character which means it can't be a # pattern-matching statement r"\b(?:" + r"|".join(keyword.kwlist) + r")\b", # a keyword ]) + r"))" ) case_default = ( r"^[ \t]*" + # at beginning of line + possible indentation r"(?P<CASE_SOFTKW>case)" + r"[ \t]+(?P<CASE_DEFAULT_UNDERSCORE>_\b)" ) case_softkw_and_pattern = ( r"^[ \t]*" + # at beginning of line + possible indentation r"(?P<CASE_SOFTKW2>case)\b" + r"(?![ \t]*(?:" + "|".join([ # not followed by ... r"_\b", # a lone underscore r"[:,;=^&|@~)\]}]", # a character which means it can't be a # pattern-matching case r"\b(?:" + r"|".join(keyword.kwlist) + r")\b", # a keyword ]) + r"))" ) builtinlist = [str(name) for name in dir(builtins) if not name.startswith('_') and name not in keyword.kwlist] builtin = r"([^.'\"\\#]\b|^)" + any("BUILTIN", builtinlist) + r"\b" comment = any("COMMENT", [r"#[^\n]*"]) stringprefix = r"(?i:r|u|f|fr|rf|b|br|rb)?" sqstring = stringprefix + r"'[^'\\\n]*(\\.[^'\\\n]*)*'?" dqstring = stringprefix + r'"[^"\\\n]*(\\.[^"\\\n]*)*"?' sq3string = stringprefix + r"'''[^'\\]*((\\.|'(?!''))[^'\\]*)*(''')?" dq3string = stringprefix + r'"""[^"\\]*((\\.|"(?!""))[^"\\]*)*(""")?' string = any("STRING", [sq3string, dq3string, sqstring, dqstring]) prog = re.compile("|".join([ builtin, comment, string, kw, match_softkw, case_default, case_softkw_and_pattern, any("SYNC", [r"\n"]), ]), re.DOTALL | re.MULTILINE) return prog
null
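The `any()` helper that `make_pat` leans on is tiny: it wraps a list of alternatives in a single named group, so a match reports *which* syntax element fired. Below is that helper (renamed `any_group` to avoid shadowing the builtin) combined with two of the simpler sub-patterns from `make_pat`, joined with `|` the same way the real function joins all of its pieces.

```python
import re

def any_group(name, alternates):
    """Named group of alternatives: (?P<NAME>alt1|alt2|...)."""
    return "(?P<%s>" % name + "|".join(alternates) + ")"

comment = any_group("COMMENT", [r"#[^\n]*"])   # a '#' to end of line
sync = any_group("SYNC", [r"\n"])              # resynchronization points
prog = re.compile("|".join([comment, sync]), re.DOTALL | re.MULTILINE)
```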
187,929
import builtins import keyword import re import time from idlelib.config import idleConf from idlelib.delegator import Delegator The provided code snippet includes necessary dependencies for implementing the `matched_named_groups` function. Write a Python function `def matched_named_groups(re_match)` to solve the following problem: Get only the non-empty named groups from an re.Match object. Here is the function: def matched_named_groups(re_match): "Get only the non-empty named groups from an re.Match object." return ((k, v) for (k, v) in re_match.groupdict().items() if v)
Get only the non-empty named groups from an re.Match object.
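A short usage sketch of why the filter is needed: with an alternation of named groups, `Match.groupdict()` reports *every* name in the pattern, most of them `None`; `matched_named_groups` keeps only the branch that actually matched. The pattern below is illustrative, not one of the colorizer's real sub-patterns.

```python
import re

def matched_named_groups(re_match):
    "Get only the non-empty named groups from an re.Match object."
    return ((k, v) for (k, v) in re_match.groupdict().items() if v)

pat = re.compile(r"(?P<NUM>\d+)|(?P<WORD>[a-z]+)")
# pat.match("hello").groupdict() is {'NUM': None, 'WORD': 'hello'};
# the generator drops the None entry.
```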
187,930
import builtins import keyword import re import time from idlelib.config import idleConf from idlelib.delegator import Delegator def color_config(text): """Set color options of Text widget. If ColorDelegator is used, this should be called first. """ # Called from htest, TextFrame, Editor, and Turtledemo. # Not automatic because ColorDelegator does not know 'text'. theme = idleConf.CurrentTheme() normal_colors = idleConf.GetHighlight(theme, 'normal') cursor_color = idleConf.GetHighlight(theme, 'cursor')['foreground'] select_colors = idleConf.GetHighlight(theme, 'hilite') text.config( foreground=normal_colors['foreground'], background=normal_colors['background'], insertbackground=cursor_color, selectforeground=select_colors['foreground'], selectbackground=select_colors['background'], inactiveselectbackground=select_colors['background'], # new in 8.5 ) class ColorDelegator(Delegator): """Delegator for syntax highlighting (text coloring). Instance variables: delegate: Delegator below this one in the stack, meaning the one this one delegates to. Used to track state: after_id: Identifier for scheduled after event, which is a timer for colorizing the text. allow_colorizing: Boolean toggle for applying colorizing. colorizing: Boolean flag when colorizing is in process. stop_colorizing: Boolean flag to end an active colorizing process. """ def __init__(self): Delegator.__init__(self) self.init_state() self.prog = prog self.idprog = idprog self.LoadTagDefs() def init_state(self): "Initialize variables that track colorizing state." self.after_id = None self.allow_colorizing = True self.stop_colorizing = False self.colorizing = False def setdelegate(self, delegate): """Set the delegate for this instance. A delegate is an instance of a Delegator class and each delegate points to the next delegator in the stack. This allows multiple delegators to be chained together for a widget. The bottom delegate for a colorizer is a Text widget. 
If there is a delegate, also start the colorizing process. """ if self.delegate is not None: self.unbind("<<toggle-auto-coloring>>") Delegator.setdelegate(self, delegate) if delegate is not None: self.config_colors() self.bind("<<toggle-auto-coloring>>", self.toggle_colorize_event) self.notify_range("1.0", "end") else: # No delegate - stop any colorizing. self.stop_colorizing = True self.allow_colorizing = False def config_colors(self): "Configure text widget tags with colors from tagdefs." for tag, cnf in self.tagdefs.items(): self.tag_configure(tag, **cnf) self.tag_raise('sel') def LoadTagDefs(self): "Create dictionary of tag names to text colors." theme = idleConf.CurrentTheme() self.tagdefs = { "COMMENT": idleConf.GetHighlight(theme, "comment"), "KEYWORD": idleConf.GetHighlight(theme, "keyword"), "BUILTIN": idleConf.GetHighlight(theme, "builtin"), "STRING": idleConf.GetHighlight(theme, "string"), "DEFINITION": idleConf.GetHighlight(theme, "definition"), "SYNC": {'background': None, 'foreground': None}, "TODO": {'background': None, 'foreground': None}, "ERROR": idleConf.GetHighlight(theme, "error"), # "hit" is used by ReplaceDialog to mark matches. It shouldn't be changed by Colorizer, but # that currently isn't technically possible. This should be moved elsewhere in the future # when fixing the "hit" tag's visibility, or when the replace dialog is replaced with a # non-modal alternative. "hit": idleConf.GetHighlight(theme, "hit"), } if DEBUG: print('tagdefs', self.tagdefs) def insert(self, index, chars, tags=None): "Insert chars into widget at index and mark for colorizing." index = self.index(index) self.delegate.insert(index, chars, tags) self.notify_range(index, index + "+%dc" % len(chars)) def delete(self, index1, index2=None): "Delete chars between indexes and mark for colorizing." 
index1 = self.index(index1) self.delegate.delete(index1, index2) self.notify_range(index1) def notify_range(self, index1, index2=None): "Mark text changes for processing and restart colorizing, if active." self.tag_add("TODO", index1, index2) if self.after_id: if DEBUG: print("colorizing already scheduled") return if self.colorizing: self.stop_colorizing = True if DEBUG: print("stop colorizing") if self.allow_colorizing: if DEBUG: print("schedule colorizing") self.after_id = self.after(1, self.recolorize) return def close(self): if self.after_id: after_id = self.after_id self.after_id = None if DEBUG: print("cancel scheduled recolorizer") self.after_cancel(after_id) self.allow_colorizing = False self.stop_colorizing = True def toggle_colorize_event(self, event=None): """Toggle colorizing on and off. When toggling off, if colorizing is scheduled or is in process, it will be cancelled and/or stopped. When toggling on, colorizing will be scheduled. """ if self.after_id: after_id = self.after_id self.after_id = None if DEBUG: print("cancel scheduled recolorizer") self.after_cancel(after_id) if self.allow_colorizing and self.colorizing: if DEBUG: print("stop colorizing") self.stop_colorizing = True self.allow_colorizing = not self.allow_colorizing if self.allow_colorizing and not self.colorizing: self.after_id = self.after(1, self.recolorize) if DEBUG: print("auto colorizing turned", "on" if self.allow_colorizing else "off") return "break" def recolorize(self): """Timer event (every 1ms) to colorize text. Colorizing is only attempted when the text widget exists, when colorizing is toggled on, and when the colorizing process is not already running. After colorizing is complete, some cleanup is done to make sure that all the text has been colorized. 
""" self.after_id = None if not self.delegate: if DEBUG: print("no delegate") return if not self.allow_colorizing: if DEBUG: print("auto colorizing is off") return if self.colorizing: if DEBUG: print("already colorizing") return try: self.stop_colorizing = False self.colorizing = True if DEBUG: print("colorizing...") t0 = time.perf_counter() self.recolorize_main() t1 = time.perf_counter() if DEBUG: print("%.3f seconds" % (t1-t0)) finally: self.colorizing = False if self.allow_colorizing and self.tag_nextrange("TODO", "1.0"): if DEBUG: print("reschedule colorizing") self.after_id = self.after(1, self.recolorize) def recolorize_main(self): "Evaluate text and apply colorizing tags." next = "1.0" while todo_tag_range := self.tag_nextrange("TODO", next): self.tag_remove("SYNC", todo_tag_range[0], todo_tag_range[1]) sync_tag_range = self.tag_prevrange("SYNC", todo_tag_range[0]) head = sync_tag_range[1] if sync_tag_range else "1.0" chars = "" next = head lines_to_get = 1 ok = False while not ok: mark = next next = self.index(mark + "+%d lines linestart" % lines_to_get) lines_to_get = min(lines_to_get * 2, 100) ok = "SYNC" in self.tag_names(next + "-1c") line = self.get(mark, next) ##print head, "get", mark, next, "->", repr(line) if not line: return for tag in self.tagdefs: self.tag_remove(tag, mark, next) chars += line self._add_tags_in_section(chars, head) if "SYNC" in self.tag_names(next + "-1c"): head = next chars = "" else: ok = False if not ok: # We're in an inconsistent state, and the call to # update may tell us to stop. It may also change # the correct value for "next" (since this is a # line.col string, not a true mark). So leave a # crumb telling the next invocation to resume here # in case update tells us to leave. self.tag_add("TODO", next) self.update() if self.stop_colorizing: if DEBUG: print("colorizing stopped") return def _add_tag(self, start, end, head, matched_group_name): """Add a tag to a given range in the text widget. 
This is a utility function, receiving the range as `start` and `end` positions, each of which is a number of characters relative to the given `head` index in the text widget. The tag to add is determined by `matched_group_name`, which is the name of a regular expression "named group" as matched by by the relevant highlighting regexps. """ tag = prog_group_name_to_tag.get(matched_group_name, matched_group_name) self.tag_add(tag, f"{head}+{start:d}c", f"{head}+{end:d}c") def _add_tags_in_section(self, chars, head): """Parse and add highlighting tags to a given part of the text. `chars` is a string with the text to parse and to which highlighting is to be applied. `head` is the index in the text widget where the text is found. """ for m in self.prog.finditer(chars): for name, matched_text in matched_named_groups(m): a, b = m.span(name) self._add_tag(a, b, head, name) if matched_text in ("def", "class"): if m1 := self.idprog.match(chars, b): a, b = m1.span(1) self._add_tag(a, b, head, "DEFINITION") def removecolors(self): "Remove all colorizing tags." for tag in self.tagdefs: self.tag_remove(tag, "1.0", "end") class Toplevel(BaseWidget, Wm): """Toplevel widget, e.g. for dialogs.""" def __init__(self, master=None, cnf={}, **kw): """Construct a toplevel widget with the parent MASTER. 
Valid resource names: background, bd, bg, borderwidth, class, colormap, container, cursor, height, highlightbackground, highlightcolor, highlightthickness, menu, relief, screen, takefocus, use, visual, width.""" if kw: cnf = _cnfmerge((cnf, kw)) extra = () for wmkey in ['screen', 'class_', 'class', 'visual', 'colormap']: if wmkey in cnf: val = cnf[wmkey] # TBD: a hack needed because some keys # are not valid as keyword arguments if wmkey[-1] == '_': opt = '-'+wmkey[:-1] else: opt = '-'+wmkey extra = extra + (opt, val) del cnf[wmkey] BaseWidget.__init__(self, master, 'toplevel', cnf, {}, extra) root = self._root() self.iconname(root.iconname()) self.title(root.title()) self.protocol("WM_DELETE_WINDOW", self.destroy) class Text(Widget, XView, YView): """Text widget which can display text in various forms.""" def __init__(self, master=None, cnf={}, **kw): """Construct a text widget with the parent MASTER. STANDARD OPTIONS background, borderwidth, cursor, exportselection, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, padx, pady, relief, selectbackground, selectborderwidth, selectforeground, setgrid, takefocus, xscrollcommand, yscrollcommand, WIDGET-SPECIFIC OPTIONS autoseparators, height, maxundo, spacing1, spacing2, spacing3, state, tabs, undo, width, wrap, """ Widget.__init__(self, master, 'text', cnf, kw) def bbox(self, index): """Return a tuple of (x,y,width,height) which gives the bounding box of the visible part of the character at the given index.""" return self._getints( self.tk.call(self._w, 'bbox', index)) or None def compare(self, index1, op, index2): """Return whether between index INDEX1 and index INDEX2 the relation OP is satisfied. 
OP is one of <, <=, ==, >=, >, or !=.""" return self.tk.getboolean(self.tk.call( self._w, 'compare', index1, op, index2)) def count(self, index1, index2, *args): # new in Tk 8.5 """Counts the number of relevant things between the two indices. If index1 is after index2, the result will be a negative number (and this holds for each of the possible options). The actual items which are counted depends on the options given by args. The result is a list of integers, one for the result of each counting option given. Valid counting options are "chars", "displaychars", "displayindices", "displaylines", "indices", "lines", "xpixels" and "ypixels". There is an additional possible option "update", which if given then all subsequent options ensure that any possible out of date information is recalculated.""" args = ['-%s' % arg for arg in args if not arg.startswith('-')] args += [index1, index2] res = self.tk.call(self._w, 'count', *args) or None if res is not None and len(args) <= 3: return (res, ) else: return res def debug(self, boolean=None): """Turn on the internal consistency checks of the B-Tree inside the text widget according to BOOLEAN.""" if boolean is None: return self.tk.getboolean(self.tk.call(self._w, 'debug')) self.tk.call(self._w, 'debug', boolean) def delete(self, index1, index2=None): """Delete the characters between INDEX1 and INDEX2 (not included).""" self.tk.call(self._w, 'delete', index1, index2) def dlineinfo(self, index): """Return tuple (x,y,width,height,baseline) giving the bounding box and baseline position of the visible part of the line containing the character at INDEX.""" return self._getints(self.tk.call(self._w, 'dlineinfo', index)) def dump(self, index1, index2=None, command=None, **kw): """Return the contents of the widget between index1 and index2. The type of contents returned in filtered based on the keyword parameters; if 'all', 'image', 'mark', 'tag', 'text', or 'window' are given and true, then the corresponding items are returned. 
The result is a list of triples of the form (key, value, index). If none of the keywords are true then 'all' is used by default. If the 'command' argument is given, it is called once for each element of the list of triples, with the values of each triple serving as the arguments to the function. In this case the list is not returned.""" args = [] func_name = None result = None if not command: # Never call the dump command without the -command flag, since the # output could involve Tcl quoting and would be a pain to parse # right. Instead just set the command to build a list of triples # as if we had done the parsing. result = [] def append_triple(key, value, index, result=result): result.append((key, value, index)) command = append_triple try: if not isinstance(command, str): func_name = command = self._register(command) args += ["-command", command] for key in kw: if kw[key]: args.append("-" + key) args.append(index1) if index2: args.append(index2) self.tk.call(self._w, "dump", *args) return result finally: if func_name: self.deletecommand(func_name) ## new in tk8.4 def edit(self, *args): """Internal method This method controls the undo mechanism and the modified flag. The exact behavior of the command depends on the option argument that follows the edit argument. The following forms of the command are currently supported: edit_modified, edit_redo, edit_reset, edit_separator and edit_undo """ return self.tk.call(self._w, 'edit', *args) def edit_modified(self, arg=None): """Get or Set the modified flag If arg is not specified, returns the modified flag of the widget. The insert, delete, edit undo and edit redo commands or the user can set or clear the modified flag. If boolean is specified, sets the modified flag of the widget to arg. """ return self.edit("modified", arg) def edit_redo(self): """Redo the last undone edit When the undo option is true, reapplies the last undone edits provided no other edits were done since then. 
Generates an error when the redo stack is empty. Does nothing when the undo option is false. """ return self.edit("redo") def edit_reset(self): """Clears the undo and redo stacks """ return self.edit("reset") def edit_separator(self): """Inserts a separator (boundary) on the undo stack. Does nothing when the undo option is false """ return self.edit("separator") def edit_undo(self): """Undoes the last edit action If the undo option is true. An edit action is defined as all the insert and delete commands that are recorded on the undo stack in between two separators. Generates an error when the undo stack is empty. Does nothing when the undo option is false """ return self.edit("undo") def get(self, index1, index2=None): """Return the text from INDEX1 to INDEX2 (not included).""" return self.tk.call(self._w, 'get', index1, index2) # (Image commands are new in 8.0) def image_cget(self, index, option): """Return the value of OPTION of an embedded image at INDEX.""" if option[:1] != "-": option = "-" + option if option[-1:] == "_": option = option[:-1] return self.tk.call(self._w, "image", "cget", index, option) def image_configure(self, index, cnf=None, **kw): """Configure an embedded image at INDEX.""" return self._configure(('image', 'configure', index), cnf, kw) def image_create(self, index, cnf={}, **kw): """Create an embedded image at INDEX.""" return self.tk.call( self._w, "image", "create", index, *self._options(cnf, kw)) def image_names(self): """Return all names of embedded images in this widget.""" return self.tk.call(self._w, "image", "names") def index(self, index): """Return the index in the form line.char for INDEX.""" return str(self.tk.call(self._w, 'index', index)) def insert(self, index, chars, *args): """Insert CHARS before the characters at INDEX. An additional tag can be given in ARGS. 
Additional CHARS and tags can follow in ARGS.""" self.tk.call((self._w, 'insert', index, chars) + args) def mark_gravity(self, markName, direction=None): """Change the gravity of a mark MARKNAME to DIRECTION (LEFT or RIGHT). Return the current value if None is given for DIRECTION.""" return self.tk.call( (self._w, 'mark', 'gravity', markName, direction)) def mark_names(self): """Return all mark names.""" return self.tk.splitlist(self.tk.call( self._w, 'mark', 'names')) def mark_set(self, markName, index): """Set mark MARKNAME before the character at INDEX.""" self.tk.call(self._w, 'mark', 'set', markName, index) def mark_unset(self, *markNames): """Delete all marks in MARKNAMES.""" self.tk.call((self._w, 'mark', 'unset') + markNames) def mark_next(self, index): """Return the name of the next mark after INDEX.""" return self.tk.call(self._w, 'mark', 'next', index) or None def mark_previous(self, index): """Return the name of the previous mark before INDEX.""" return self.tk.call(self._w, 'mark', 'previous', index) or None def peer_create(self, newPathName, cnf={}, **kw): # new in Tk 8.5 """Creates a peer text widget with the given newPathName, and any optional standard configuration options. By default the peer will have the same start and end line as the parent widget, but these can be overridden with the standard configuration options.""" self.tk.call(self._w, 'peer', 'create', newPathName, *self._options(cnf, kw)) def peer_names(self): # new in Tk 8.5 """Returns a list of peers of this widget (this does not include the widget itself).""" return self.tk.splitlist(self.tk.call(self._w, 'peer', 'names')) def replace(self, index1, index2, chars, *args): # new in Tk 8.5 """Replaces the range of characters between index1 and index2 with the given characters and tags specified by args. 
See the method insert for some more information about args, and the method delete for information about the indices.""" self.tk.call(self._w, 'replace', index1, index2, chars, *args) def scan_mark(self, x, y): """Remember the current X, Y coordinates.""" self.tk.call(self._w, 'scan', 'mark', x, y) def scan_dragto(self, x, y): """Adjust the view of the text to 10 times the difference between X and Y and the coordinates given in scan_mark.""" self.tk.call(self._w, 'scan', 'dragto', x, y) def search(self, pattern, index, stopindex=None, forwards=None, backwards=None, exact=None, regexp=None, nocase=None, count=None, elide=None): """Search PATTERN beginning from INDEX until STOPINDEX. Return the index of the first character of a match or an empty string.""" args = [self._w, 'search'] if forwards: args.append('-forwards') if backwards: args.append('-backwards') if exact: args.append('-exact') if regexp: args.append('-regexp') if nocase: args.append('-nocase') if elide: args.append('-elide') if count: args.append('-count'); args.append(count) if pattern and pattern[0] == '-': args.append('--') args.append(pattern) args.append(index) if stopindex: args.append(stopindex) return str(self.tk.call(tuple(args))) def see(self, index): """Scroll such that the character at INDEX is visible.""" self.tk.call(self._w, 'see', index) def tag_add(self, tagName, index1, *args): """Add tag TAGNAME to all characters between INDEX1 and index2 in ARGS. Additional pairs of indices may follow in ARGS.""" self.tk.call( (self._w, 'tag', 'add', tagName, index1) + args) def tag_unbind(self, tagName, sequence, funcid=None): """Unbind for all characters with TAGNAME for event SEQUENCE the function identified with FUNCID.""" self.tk.call(self._w, 'tag', 'bind', tagName, sequence, '') if funcid: self.deletecommand(funcid) def tag_bind(self, tagName, sequence, func, add=None): """Bind to all characters with TAGNAME at event SEQUENCE a call to function FUNC. 
An additional boolean parameter ADD specifies whether FUNC will be called additionally to the other bound function or whether it will replace the previous function. See bind for the return value.""" return self._bind((self._w, 'tag', 'bind', tagName), sequence, func, add) def tag_cget(self, tagName, option): """Return the value of OPTION for tag TAGNAME.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'tag', 'cget', tagName, option) def tag_configure(self, tagName, cnf=None, **kw): """Configure a tag TAGNAME.""" return self._configure(('tag', 'configure', tagName), cnf, kw) tag_config = tag_configure def tag_delete(self, *tagNames): """Delete all tags in TAGNAMES.""" self.tk.call((self._w, 'tag', 'delete') + tagNames) def tag_lower(self, tagName, belowThis=None): """Change the priority of tag TAGNAME such that it is lower than the priority of BELOWTHIS.""" self.tk.call(self._w, 'tag', 'lower', tagName, belowThis) def tag_names(self, index=None): """Return a list of all tag names.""" return self.tk.splitlist( self.tk.call(self._w, 'tag', 'names', index)) def tag_nextrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. The text is searched forward from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'nextrange', tagName, index1, index2)) def tag_prevrange(self, tagName, index1, index2=None): """Return a list of start and end index for the first sequence of characters between INDEX1 and INDEX2 which all have tag TAGNAME. 
The text is searched backwards from INDEX1.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'prevrange', tagName, index1, index2)) def tag_raise(self, tagName, aboveThis=None): """Change the priority of tag TAGNAME such that it is higher than the priority of ABOVETHIS.""" self.tk.call( self._w, 'tag', 'raise', tagName, aboveThis) def tag_ranges(self, tagName): """Return a list of ranges of text which have tag TAGNAME.""" return self.tk.splitlist(self.tk.call( self._w, 'tag', 'ranges', tagName)) def tag_remove(self, tagName, index1, index2=None): """Remove tag TAGNAME from all characters between INDEX1 and INDEX2.""" self.tk.call( self._w, 'tag', 'remove', tagName, index1, index2) def window_cget(self, index, option): """Return the value of OPTION of an embedded window at INDEX.""" if option[:1] != '-': option = '-' + option if option[-1:] == '_': option = option[:-1] return self.tk.call(self._w, 'window', 'cget', index, option) def window_configure(self, index, cnf=None, **kw): """Configure an embedded window at INDEX.""" return self._configure(('window', 'configure', index), cnf, kw) window_config = window_configure def window_create(self, index, cnf={}, **kw): """Create a window at INDEX.""" self.tk.call( (self._w, 'window', 'create', index) + self._options(cnf, kw)) def window_names(self): """Return all names of embedded windows in this widget.""" return self.tk.splitlist( self.tk.call(self._w, 'window', 'names')) def yview_pickplace(self, *what): """Obsolete function, use see.""" self.tk.call((self._w, 'yview', '-pickplace') + what) source = textwrap.dedent("""\ if True: int ('1') # keyword, builtin, string, comment elif False: print(0) # 'string' in comment else: float(None) # if in comment if iF + If + IF: 'keyword matching must respect case' if'': x or'' # valid keyword-string no-space combinations async def f(): await g() # Strings should be entirely colored, including quotes. 
'x', '''x''', "x", \"""x\""" 'abc\\ def' '''abc\\ def''' # All valid prefixes for unicode and byte strings should be colored. r'x', u'x', R'x', U'x', f'x', F'x' fr'x', Fr'x', fR'x', FR'x', rf'x', rF'x', Rf'x', RF'x' b'x',B'x', br'x',Br'x',bR'x',BR'x', rb'x', rB'x',Rb'x',RB'x' # Invalid combinations of legal characters should be half colored. ur'x', ru'x', uf'x', fu'x', UR'x', ufr'x', rfu'x', xf'x', fx'x' match point: case (x, 0) as _: print(f"X={x}") case [_, [_], "_", _]: pass case _ if ("a" if _ else set()): pass case _: raise ValueError("Not a point _") ''' case _:''' "match x:" """) class Percolator: def __init__(self, text): # XXX would be nice to inherit from Delegator self.text = text self.redir = WidgetRedirector(text) self.top = self.bottom = Delegator(text) self.bottom.insert = self.redir.register("insert", self.insert) self.bottom.delete = self.redir.register("delete", self.delete) self.filters = [] def close(self): while self.top is not self.bottom: self.removefilter(self.top) self.top = None self.bottom.setdelegate(None) self.bottom = None self.redir.close() self.redir = None self.text = None def insert(self, index, chars, tags=None): # Could go away if inheriting from Delegator self.top.insert(index, chars, tags) def delete(self, index1, index2=None): # Could go away if inheriting from Delegator self.top.delete(index1, index2) def insertfilter(self, filter): # Perhaps rename to pushfilter()? assert isinstance(filter, Delegator) assert filter.delegate is None filter.setdelegate(self.top) self.top = filter def insertfilterafter(self, filter, after): assert isinstance(filter, Delegator) assert isinstance(after, Delegator) assert filter.delegate is None f = self.top f.resetcache() while f is not after: assert f is not self.bottom f = f.delegate f.resetcache() filter.setdelegate(f.delegate) f.setdelegate(filter) def removefilter(self, filter): # XXX Perhaps should only support popfilter()? 
assert isinstance(filter, Delegator) assert filter.delegate is not None f = self.top if f is filter: self.top = filter.delegate filter.setdelegate(None) else: while f.delegate is not filter: assert f is not self.bottom f.resetcache() f = f.delegate f.setdelegate(filter.delegate) filter.setdelegate(None) def _color_delegator(parent): # htest # from tkinter import Toplevel, Text from idlelib.idle_test.test_colorizer import source from idlelib.percolator import Percolator top = Toplevel(parent) top.title("Test ColorDelegator") x, y = map(int, parent.geometry().split('+')[1:]) top.geometry("700x550+%d+%d" % (x + 20, y + 175)) text = Text(top, background="white") text.pack(expand=1, fill="both") text.insert("insert", source) text.focus_set() color_config(text) p = Percolator(text) d = ColorDelegator() p.insertfilter(d)
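The `_add_tags_in_section` logic above walks one combined regexp and maps each matched "named group" to a highlighting tag. A minimal stand-alone sketch of that named-group technique, with a simplified pattern and a `matched_named_groups` helper written here for illustration (the real IDLE pattern and helper are more elaborate):

```python
import re

# Simplified stand-in for the colorizer's combined pattern: each named
# group identifies one category of text to be tagged.
prog = re.compile(r"\b(?P<KEYWORD>def|class|return)\b|(?P<COMMENT>#[^\n]*)")

def matched_named_groups(m):
    """Yield (group_name, matched_text) for the named groups that matched."""
    return ((name, text) for name, text in m.groupdict().items()
            if text is not None)

chars = "def f():  # a comment"
tags = []
for m in prog.finditer(chars):
    for name, text in matched_named_groups(m):
        # m.span(name) gives the (start, end) character range the
        # colorizer would convert into text-widget indices.
        tags.append((name, m.span(name)))

assert ("KEYWORD", (0, 3)) in tags
assert ("COMMENT", (10, 21)) in tags
```

In the real widget those character offsets are turned into Tk indices with `f"{head}+{start:d}c"`, so the same span arithmetic applies.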
187,933
import string from idlelib.delegator import Delegator class UndoDelegator(Delegator): def __init__(self): def setdelegate(self, delegate): def dump_event(self, event): def reset_undo(self): def set_saved(self, flag): def get_saved(self): def set_saved_change_hook(self, hook): def check_saved(self): def insert(self, index, chars, tags=None): def delete(self, index1, index2=None): def undo_block_start(self): def undo_block_stop(self): def addcmd(self, cmd, execute=True): def undo_event(self, event): def redo_event(self, event): class Toplevel(BaseWidget, Wm): def __init__(self, master=None, cnf={}, **kw): class Button(Widget): def __init__(self, master=None, cnf={}, **kw): def flash(self): def invoke(self): class Text(Widget, XView, YView): def __init__(self, master=None, cnf={}, **kw): def bbox(self, index): def compare(self, index1, op, index2): def count(self, index1, index2, *args): def debug(self, boolean=None): def delete(self, index1, index2=None): def dlineinfo(self, index): def dump(self, index1, index2=None, command=None, **kw): def append_triple(key, value, index, result=result): def edit(self, *args): def edit_modified(self, arg=None): def edit_redo(self): def edit_reset(self): def edit_separator(self): def edit_undo(self): def get(self, index1, index2=None): def image_cget(self, index, option): def image_configure(self, index, cnf=None, **kw): def image_create(self, index, cnf={}, **kw): def image_names(self): def index(self, index): def insert(self, index, chars, *args): def mark_gravity(self, markName, direction=None): def mark_names(self): def mark_set(self, markName, index): def mark_unset(self, *markNames): def mark_next(self, index): def mark_previous(self, index): def peer_create(self, newPathName, cnf={}, **kw): def peer_names(self): def replace(self, index1, index2, chars, *args): def scan_mark(self, x, y): def scan_dragto(self, x, y): def search(self, pattern, index, stopindex=None, forwards=None, backwards=None, exact=None, regexp=None, 
nocase=None, count=None, elide=None): def see(self, index): def tag_add(self, tagName, index1, *args): def tag_unbind(self, tagName, sequence, funcid=None): def tag_bind(self, tagName, sequence, func, add=None): def tag_cget(self, tagName, option): def tag_configure(self, tagName, cnf=None, **kw): def tag_delete(self, *tagNames): def tag_lower(self, tagName, belowThis=None): def tag_names(self, index=None): def tag_nextrange(self, tagName, index1, index2=None): def tag_prevrange(self, tagName, index1, index2=None): def tag_raise(self, tagName, aboveThis=None): def tag_ranges(self, tagName): def tag_remove(self, tagName, index1, index2=None): def window_cget(self, index, option): def window_configure(self, index, cnf=None, **kw): def window_create(self, index, cnf={}, **kw): def window_names(self): def yview_pickplace(self, *what): class Percolator: def __init__(self, text): def close(self): def insert(self, index, chars, tags=None): def delete(self, index1, index2=None): def insertfilter(self, filter): def insertfilterafter(self, filter, after): def removefilter(self, filter): def _undo_delegator(parent): # htest # from tkinter import Toplevel, Text, Button from idlelib.percolator import Percolator undowin = Toplevel(parent) undowin.title("Test UndoDelegator") x, y = map(int, parent.geometry().split('+')[1:]) undowin.geometry("+%d+%d" % (x, y + 175)) text = Text(undowin, height=10) text.pack() text.focus_set() p = Percolator(text) d = UndoDelegator() p.insertfilter(d) undo = Button(undowin, text="Undo", command=lambda:d.undo_event(None)) undo.pack(side='left') redo = Button(undowin, text="Redo", command=lambda:d.redo_event(None)) redo.pack(side='left') dump = Button(undowin, text="Dump", command=lambda:d.dump_event(None)) dump.pack(side='left')
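`UndoDelegator` works by sitting in a Percolator filter chain, where each filter sees `insert`/`delete` calls before passing them down to the next delegate. A minimal stand-in `Delegator` (the real one lives in `idlelib.delegator` and also caches attribute lookups) shows the percolation idea:

```python
class Delegator:
    # Minimal stand-in: unknown attributes are forwarded to the delegate
    # below this filter in the chain.
    def __init__(self, delegate=None):
        self.delegate = delegate

    def setdelegate(self, delegate):
        self.delegate = delegate

    def __getattr__(self, name):
        return getattr(self.delegate, name)

class Upper(Delegator):
    # Example filter: transform the text, then pass it down the chain,
    # just as UndoDelegator records a command before delegating.
    def insert(self, index, chars, tags=None):
        self.delegate.insert(index, chars.upper(), tags)

class Sink:
    # Stands in for the Text widget at the bottom of the chain.
    def __init__(self):
        self.data = []
    def insert(self, index, chars, tags=None):
        self.data.append(chars)

sink = Sink()
chain = Upper(Delegator(sink))
chain.insert("1.0", "hello")
assert sink.data == ["HELLO"]
```

`Percolator.insertfilter` splices such filters above the widget, which is why undo/redo can intercept every edit without subclassing `Text`.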
187,948
import os import sys import importlib.util import py_compile import struct import filecmp from functools import partial from pathlib import Path def compile_dir(dir, maxlevels=None, ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, workers=1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False, loader_override=None, strict_compile=False): """Byte-compile all modules in the given directory tree. Arguments (only dir is required): dir: the directory to byte-compile maxlevels: maximum recursion level (default `sys.getrecursionlimit()`) ddir: the directory that will be prepended to the path to the file as it is compiled into each byte-code file. force: if True, force compilation, even if timestamps are up-to-date quiet: full output with False or 0, errors only with 1, no output with 2 legacy: if True, produce legacy pyc paths instead of PEP 3147 paths optimize: int or list of optimization levels or -1 for level of the interpreter. Multiple levels leads to multiple compiled files each with one optimization level. workers: maximum number of parallel workers invalidation_mode: how the up-to-dateness of the pyc will be checked stripdir: part of path to left-strip from source file path prependdir: path to prepend to beginning of original file path, applied after stripdir limit_sl_dest: ignore symlinks if they are pointing outside of the defined path hardlink_dupes: hardlink duplicated pyc files loader_override: loader type to use instead of default SourceFileLoader strict_compile: Whether to use the strict compiler instead of the default. 
""" ProcessPoolExecutor = None if ddir is not None and (stripdir is not None or prependdir is not None): raise ValueError(("Destination dir (ddir) cannot be used " "in combination with stripdir or prependdir")) if ddir is not None: stripdir = dir prependdir = ddir ddir = None if workers < 0: raise ValueError('workers must be greater or equal to 0') if workers != 1: # Check if this is a system where ProcessPoolExecutor can function. from concurrent.futures.process import _check_system_limits try: _check_system_limits() except NotImplementedError: workers = 1 else: from concurrent.futures import ProcessPoolExecutor if maxlevels is None: maxlevels = sys.getrecursionlimit() files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels) success = True if workers != 1 and ProcessPoolExecutor is not None: # If workers == 0, let ProcessPoolExecutor choose workers = workers or None with ProcessPoolExecutor(max_workers=workers) as executor: results = executor.map(partial(compile_file, ddir=ddir, force=force, rx=rx, quiet=quiet, legacy=legacy, optimize=optimize, invalidation_mode=invalidation_mode, stripdir=stripdir, prependdir=prependdir, limit_sl_dest=limit_sl_dest, hardlink_dupes=hardlink_dupes, loader_override=loader_override, strict_compile=strict_compile), files) success = min(results, default=True) else: for file in files: if not compile_file(file, ddir, force, rx, quiet, legacy, optimize, invalidation_mode, stripdir=stripdir, prependdir=prependdir, limit_sl_dest=limit_sl_dest, hardlink_dupes=hardlink_dupes, loader_override=loader_override, strict_compile=strict_compile): success = False return success from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep, devnull) The provided code snippet includes necessary dependencies for implementing the `compile_path` function. 
Write a Python function `def compile_path(skip_curdir=1, maxlevels=0, force=False, quiet=0, legacy=False, optimize=-1, invalidation_mode=None, loader_override=None, strict_compile=False)` to solve the following problem: Byte-compile all module on sys.path. Arguments (all optional): skip_curdir: if true, skip current directory (default True) maxlevels: max recursion level (default 0) force: as for compile_dir() (default False) quiet: as for compile_dir() (default 0) legacy: as for compile_dir() (default False) optimize: as for compile_dir() (default -1) invalidation_mode: as for compiler_dir() loader_override: as for compiler_dir() Here is the function: def compile_path(skip_curdir=1, maxlevels=0, force=False, quiet=0, legacy=False, optimize=-1, invalidation_mode=None, loader_override=None, strict_compile=False): """Byte-compile all module on sys.path. Arguments (all optional): skip_curdir: if true, skip current directory (default True) maxlevels: max recursion level (default 0) force: as for compile_dir() (default False) quiet: as for compile_dir() (default 0) legacy: as for compile_dir() (default False) optimize: as for compile_dir() (default -1) invalidation_mode: as for compiler_dir() loader_override: as for compiler_dir() """ success = True for dir in sys.path: if (not dir or dir == os.curdir) and skip_curdir: if quiet < 2: print('Skipping current directory') else: success = success and compile_dir( dir, maxlevels, None, force, quiet=quiet, legacy=legacy, optimize=optimize, invalidation_mode=invalidation_mode, loader_override=loader_override, strict_compile=strict_compile, ) return success
Byte-compile all modules on sys.path. Arguments (all optional): skip_curdir: if true, skip current directory (default True) maxlevels: max recursion level (default 0) force: as for compile_dir() (default False) quiet: as for compile_dir() (default 0) legacy: as for compile_dir() (default False) optimize: as for compile_dir() (default -1) invalidation_mode: as for compile_dir() loader_override: as for compile_dir()
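The per-file work that `compile_dir()` and `compile_path()` fan out (possibly across a `ProcessPoolExecutor`) is ordinary byte-compilation. A minimal sketch using only the stdlib `py_compile` module, not the helpers above, against a throwaway module:

```python
import importlib.util
import os
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    # doraise=True turns compile errors into PyCompileError exceptions
    # instead of printing them, roughly what quiet-mode compileall needs.
    cached = py_compile.compile(src, doraise=True)
    # With no explicit cfile, the .pyc lands in the PEP 3147 location:
    # a __pycache__ directory next to the source.
    assert cached == importlib.util.cache_from_source(src)
    assert os.path.exists(cached)
```

Passing `legacy=True` to the compileall functions instead writes a `mod.pyc` beside the source rather than under `__pycache__`.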
187,949
import io import math import mmap import os import re import struct import sys import time from collections import deque, OrderedDict from importlib.util import spec_from_file_location, decode_source from os import path from types import CodeType def _float_equals(a, b): if math.isnan(a) and math.isnan(b): return True elif (a == 0 and b == 0 and math.copysign(1, a) != math.copysign(1, b)): return False else: return a == b
187,950
import io import math import mmap import os import re import struct import sys import time from collections import deque, OrderedDict from importlib.util import spec_from_file_location, decode_source from os import path from types import CodeType def _align_file(file, align=8): len = file.tell() padding = (((len + align - 1) & (~(align - 1))) - len) file.write(b'\x00' * padding)
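The mask expression in `_align_file` rounds the current offset up to the next multiple of `align`, which must be a power of two for the `& ~(align - 1)` trick to work. A copy of the helper run against an in-memory buffer (the local is renamed from `len` to `pos` here to avoid shadowing the builtin):

```python
import io

def _align_file(file, align=8):
    # Pad with NUL bytes so the next write begins on an `align`-byte
    # boundary; align is assumed to be a power of two.
    pos = file.tell()
    padding = ((pos + align - 1) & ~(align - 1)) - pos
    file.write(b"\x00" * padding)

buf = io.BytesIO()
buf.write(b"hello")      # 5 bytes written
_align_file(buf)
assert buf.tell() == 8   # padded up to the next multiple of 8
_align_file(buf)
assert buf.tell() == 8   # already aligned: no padding added
```

Alignment like this keeps the fixed-width structures that follow in the pack file readable with simple offset arithmetic (and mmap-friendly).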
187,951
import io import math import mmap import os import re import struct import sys import time from collections import deque, OrderedDict from importlib.util import spec_from_file_location, decode_source from os import path from types import CodeType class PyIceImporter: def __init__(self, import_path): self.path = import_path try: if (EXTENSION + '/') in import_path: # sys.path entry should be # 'path/to/compiled.icepack//relative/loc' components = import_path.split(EXTENSION + '/') pack_name = components[0] + EXTENSION if path.isfile(pack_name): self.disk_loc = components[1] self.breaker = IceBreaker(open(pack_name, 'rb'), self.disk_loc) return except IcePackError as e: print('failed to load ice pack (invalid)', e) except OSError as e: print('failed to load ice pack: ' + str(e), e) raise ImportError() def find_spec(self, fullname, target=None): if '\x00' in fullname: # Invalid module name, return None, and let the import machinery # report the module as not found. return None mod_info = self.breaker.find_module(fullname) if mod_info is None: return None mod, is_package, filename = mod_info disk_loc = path.join(self.disk_loc, fullname.replace('.', '/')) if filename: file_path = path.join(self.disk_loc, filename) try: mtime = os.stat(file_path).st_mtime if int(mtime) > self.breaker.timestamp: # the file on disk has been updated since the icepack was # generated, prefer the on-disk version. return None except OSError: # no file on disk, use the icepack pass else: # namespace package file_path = None if is_package: search = [self.path, disk_loc] else: search = None loader = PyIceLoader(mod, self, file_path, is_package) spec = spec_from_file_location(fullname, file_path, loader=loader, submodule_search_locations=search) if not file_path: spec.has_location = False return spec def install(): sys.path_hooks.append(PyIceImporter)
187,952
import io import math import mmap import os import re import struct import sys import time from collections import deque, OrderedDict from importlib.util import spec_from_file_location, decode_source from os import path from types import CodeType class PyIceImporter: def __init__(self, import_path): def find_spec(self, fullname, target=None): def uninstall(): sys.path_hooks.remove(PyIceImporter)
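`PyIceImporter.find_spec` follows the standard finder protocol: return a `ModuleSpec` for modules it owns, or `None` so the import machinery falls through to the next finder. A toy in-memory finder/loader pair illustrates that protocol without the icepack format; the module name `icedemo` and the `StringFinder` class are invented for this sketch:

```python
import importlib.util
import sys

class StringFinder:
    """Toy meta-path finder/loader serving module source from a dict,
    showing the find_spec/exec_module protocol PyIceImporter relies on."""
    def __init__(self, sources):
        self.sources = sources

    def find_spec(self, fullname, path=None, target=None):
        if fullname not in self.sources:
            return None  # not ours: let the next finder try
        return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return None      # request default module creation

    def exec_module(self, module):
        exec(self.sources[module.__name__], module.__dict__)

finder = StringFinder({"icedemo": "answer = 42\n"})
sys.meta_path.insert(0, finder)
try:
    import icedemo
    assert icedemo.answer == 42
finally:
    sys.meta_path.remove(finder)
```

PyIceImporter registers on `sys.path_hooks` rather than `sys.meta_path`, so it is only consulted for `sys.path` entries that name an icepack, but the spec-returning contract is the same.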
187,964
from datetime import tzinfo, timedelta, datetime import time as _time def first_sunday_on_or_after(dt): days_to_go = 6 - dt.weekday() if days_to_go: dt += timedelta(days_to_go) return dt DSTSTART_2007 = datetime(1, 3, 8, 2) DSTEND_2007 = datetime(1, 11, 1, 2) DSTSTART_1987_2006 = datetime(1, 4, 1, 2) DSTEND_1987_2006 = datetime(1, 10, 25, 2) DSTSTART_1967_1986 = datetime(1, 4, 24, 2) DSTEND_1967_1986 = DSTEND_1987_2006 class datetime(date): """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) The year, month and day arguments are required. tzinfo may be None, or an instance of a tzinfo subclass. The remaining arguments may be ints. """ __slots__ = date.__slots__ + time.__slots__ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0): if (isinstance(year, (bytes, str)) and len(year) == 10 and 1 <= ord(year[2:3])&0x7F <= 12): # Pickle support if isinstance(year, str): try: year = bytes(year, 'latin1') except UnicodeEncodeError: # More informative error message. raise ValueError( "Failed to encode latin1 string when unpickling " "a datetime object. 
" "pickle.load(data, encoding='latin1') is assumed.") self = object.__new__(cls) self.__setstate(year, month) self._hashcode = -1 return self year, month, day = _check_date_fields(year, month, day) hour, minute, second, microsecond, fold = _check_time_fields( hour, minute, second, microsecond, fold) _check_tzinfo_arg(tzinfo) self = object.__new__(cls) self._year = year self._month = month self._day = day self._hour = hour self._minute = minute self._second = second self._microsecond = microsecond self._tzinfo = tzinfo self._hashcode = -1 self._fold = fold return self # Read-only field accessors def hour(self): """hour (0-23)""" return self._hour def minute(self): """minute (0-59)""" return self._minute def second(self): """second (0-59)""" return self._second def microsecond(self): """microsecond (0-999999)""" return self._microsecond def tzinfo(self): """timezone info object""" return self._tzinfo def fold(self): return self._fold def _fromtimestamp(cls, t, utc, tz): """Construct a datetime from a POSIX timestamp (like time.time()). A timezone info object may be passed in as well. """ frac, t = _math.modf(t) us = round(frac * 1e6) if us >= 1000000: t += 1 us -= 1000000 elif us < 0: t -= 1 us += 1000000 converter = _time.gmtime if utc else _time.localtime y, m, d, hh, mm, ss, weekday, jday, dst = converter(t) ss = min(ss, 59) # clamp out leap seconds if the platform has them result = cls(y, m, d, hh, mm, ss, us, tz) if tz is None and not utc: # As of version 2015f max fold in IANA database is # 23 hours at 1969-09-30 13:00:00 in Kwajalein. # Let's probe 24 hours in the past to detect a transition: max_fold_seconds = 24 * 3600 # On Windows localtime_s throws an OSError for negative values, # thus we can't perform fold detection for values of time less # than the max time fold. See comments in _datetimemodule's # version of this method for more details. 
if t < max_fold_seconds and sys.platform.startswith("win"): return result y, m, d, hh, mm, ss = converter(t - max_fold_seconds)[:6] probe1 = cls(y, m, d, hh, mm, ss, us, tz) trans = result - probe1 - timedelta(0, max_fold_seconds) if trans.days < 0: y, m, d, hh, mm, ss = converter(t + trans // timedelta(0, 1))[:6] probe2 = cls(y, m, d, hh, mm, ss, us, tz) if probe2 == result: result._fold = 1 elif tz is not None: result = tz.fromutc(result) return result def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). A timezone info object may be passed in as well. """ _check_tzinfo_arg(tz) return cls._fromtimestamp(t, tz is not None, tz) def utcfromtimestamp(cls, t): """Construct a naive UTC datetime from a POSIX timestamp.""" return cls._fromtimestamp(t, True, None) def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) def combine(cls, date, time, tzinfo=True): "Construct a datetime from a given date and a given time." 
if not isinstance(date, _date_class): raise TypeError("date argument must be a date instance") if not isinstance(time, _time_class): raise TypeError("time argument must be a time instance") if tzinfo is True: tzinfo = time.tzinfo return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, tzinfo, fold=time.fold) def fromisoformat(cls, date_string): """Construct a datetime from the output of datetime.isoformat().""" if not isinstance(date_string, str): raise TypeError('fromisoformat: argument must be str') # Split this at the separator dstr = date_string[0:10] tstr = date_string[11:] try: date_components = _parse_isoformat_date(dstr) except ValueError: raise ValueError(f'Invalid isoformat string: {date_string!r}') if tstr: try: time_components = _parse_isoformat_time(tstr) except ValueError: raise ValueError(f'Invalid isoformat string: {date_string!r}') else: time_components = [0, 0, 0, 0, None] return cls(*(date_components + time_components)) def timetuple(self): "Return local time tuple compatible with time.localtime()." dst = self.dst() if dst is None: dst = -1 elif dst: dst = 1 else: dst = 0 return _build_struct_time(self.year, self.month, self.day, self.hour, self.minute, self.second, dst) def _mktime(self): """Return integer POSIX timestamp.""" epoch = datetime(1970, 1, 1) max_fold_seconds = 24 * 3600 t = (self - epoch) // timedelta(0, 1) def local(u): y, m, d, hh, mm, ss = _time.localtime(u)[:6] return (datetime(y, m, d, hh, mm, ss) - epoch) // timedelta(0, 1) # Our goal is to solve t = local(u) for u. a = local(t) - t u1 = t - a t1 = local(u1) if t1 == t: # We found one solution, but it may not be the one we need. # Look for an earlier solution (if `fold` is 0), or a # later one (if `fold` is 1). 
u2 = u1 + (-max_fold_seconds, max_fold_seconds)[self.fold] b = local(u2) - u2 if a == b: return u1 else: b = t1 - u1 assert a != b u2 = t - b t2 = local(u2) if t2 == t: return u2 if t1 == t: return u1 # We have found both offsets a and b, but neither t - a nor t - b is # a solution. This means t is in the gap. return (max, min)[self.fold](u1, u2) def timestamp(self): "Return POSIX timestamp as float" if self._tzinfo is None: s = self._mktime() return s + self.microsecond / 1e6 else: return (self - _EPOCH).total_seconds() def utctimetuple(self): "Return UTC time tuple compatible with time.gmtime()." offset = self.utcoffset() if offset: self -= offset y, m, d = self.year, self.month, self.day hh, mm, ss = self.hour, self.minute, self.second return _build_struct_time(y, m, d, hh, mm, ss, 0) def date(self): "Return the date part." return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." return time(self.hour, self.minute, self.second, self.microsecond, fold=self.fold) def timetz(self): "Return the time part, with same tzinfo." 
return time(self.hour, self.minute, self.second, self.microsecond, self._tzinfo, fold=self.fold) def replace(self, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=True, *, fold=None): """Return a new datetime with new values for the specified fields.""" if year is None: year = self.year if month is None: month = self.month if day is None: day = self.day if hour is None: hour = self.hour if minute is None: minute = self.minute if second is None: second = self.second if microsecond is None: microsecond = self.microsecond if tzinfo is True: tzinfo = self.tzinfo if fold is None: fold = self.fold return type(self)(year, month, day, hour, minute, second, microsecond, tzinfo, fold=fold) def _local_timezone(self): if self.tzinfo is None: ts = self._mktime() else: ts = (self - _EPOCH) // timedelta(seconds=1) localtm = _time.localtime(ts) local = datetime(*localtm[:6]) # Extract TZ data gmtoff = localtm.tm_gmtoff zone = localtm.tm_zone return timezone(timedelta(seconds=gmtoff), zone) def astimezone(self, tz=None): if tz is None: tz = self._local_timezone() elif not isinstance(tz, tzinfo): raise TypeError("tz argument must be an instance of tzinfo") mytz = self.tzinfo if mytz is None: mytz = self._local_timezone() myoffset = mytz.utcoffset(self) else: myoffset = mytz.utcoffset(self) if myoffset is None: mytz = self.replace(tzinfo=None)._local_timezone() myoffset = mytz.utcoffset(self) if tz is mytz: return self # Convert self to UTC, and attach the new time zone object. utc = (self - myoffset).replace(tzinfo=tz) # Convert from UTC to tz's local time. return tz.fromutc(utc) # Ways to produce a string. def ctime(self): "Return ctime() style string." weekday = self.toordinal() % 7 or 7 return "%s %s %2d %02d:%02d:%02d %04d" % ( _DAYNAMES[weekday], _MONTHNAMES[self._month], self._day, self._hour, self._minute, self._second, self._year) def isoformat(self, sep='T', timespec='auto'): """Return the time formatted according to ISO. 
The full format looks like 'YYYY-MM-DD HH:MM:SS.mmmmmm'. By default, the fractional part is omitted if self.microsecond == 0. If self.tzinfo is not None, the UTC offset is also attached, giving a full format of 'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM'. Optional argument sep specifies the separator between date and time, default 'T'. The optional argument timespec specifies the number of additional terms of the time to include. Valid options are 'auto', 'hours', 'minutes', 'seconds', 'milliseconds' and 'microseconds'. """ s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + _format_time(self._hour, self._minute, self._second, self._microsecond, timespec)) off = self.utcoffset() tz = _format_offset(off) if tz: s += tz return s def __repr__(self): """Convert to formal string, for repr().""" L = [self._year, self._month, self._day, # These are never zero self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: del L[-1] s = "%s.%s(%s)" % (self.__class__.__module__, self.__class__.__qualname__, ", ".join(map(str, L))) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" if self._fold: assert s[-1:] == ")" s = s[:-1] + ", fold=1)" return s def __str__(self): "Convert to string, for str()." return self.isoformat(sep=' ') def strptime(cls, date_string, format): 'string, format -> new datetime parsed from a string (like time.strptime()).' import _strptime return _strptime._strptime_datetime(cls, date_string, format) def utcoffset(self): """Return the timezone offset as timedelta positive east of UTC (negative west of UTC).""" if self._tzinfo is None: return None offset = self._tzinfo.utcoffset(self) _check_utc_offset("utcoffset", offset) return offset def tzname(self): """Return the timezone name. Note that the name is 100% informational -- there's no requirement that it mean anything in particular. 
For example, "GMT", "UTC", "-500", "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies. """ if self._tzinfo is None: return None name = self._tzinfo.tzname(self) _check_tzname(name) return name def dst(self): """Return 0 if DST is not in effect, or the DST offset (as timedelta positive eastward) if DST is in effect. This is purely informational; the DST offset has already been added to the UTC offset returned by utcoffset() if applicable, so there's no need to consult dst() unless you're interested in displaying the DST info. """ if self._tzinfo is None: return None offset = self._tzinfo.dst(self) _check_utc_offset("dst", offset) return offset # Comparisons of datetime objects with other. def __eq__(self, other): if isinstance(other, datetime): return self._cmp(other, allow_mixed=True) == 0 elif not isinstance(other, date): return NotImplemented else: return False def __le__(self, other): if isinstance(other, datetime): return self._cmp(other) <= 0 elif not isinstance(other, date): return NotImplemented else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, datetime): return self._cmp(other) < 0 elif not isinstance(other, date): return NotImplemented else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, datetime): return self._cmp(other) >= 0 elif not isinstance(other, date): return NotImplemented else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, datetime): return self._cmp(other) > 0 elif not isinstance(other, date): return NotImplemented else: _cmperror(self, other) def _cmp(self, other, allow_mixed=False): assert isinstance(other, datetime) mytz = self._tzinfo ottz = other._tzinfo myoff = otoff = None if mytz is ottz: base_compare = True else: myoff = self.utcoffset() otoff = other.utcoffset() # Assume that allow_mixed means that we are called from __eq__ if allow_mixed: if myoff != self.replace(fold=not self.fold).utcoffset(): return 2 if otoff != other.replace(fold=not 
other.fold).utcoffset(): return 2 base_compare = myoff == otoff if base_compare: return _cmp((self._year, self._month, self._day, self._hour, self._minute, self._second, self._microsecond), (other._year, other._month, other._day, other._hour, other._minute, other._second, other._microsecond)) if myoff is None or otoff is None: if allow_mixed: return 2 # arbitrary non-zero value else: raise TypeError("cannot compare naive and aware datetimes") # XXX What follows could be done more efficiently... diff = self - other # this will take offsets into account if diff.days < 0: return -1 return diff and 1 or 0 def __add__(self, other): "Add a datetime and a timedelta." if not isinstance(other, timedelta): return NotImplemented delta = timedelta(self.toordinal(), hours=self._hour, minutes=self._minute, seconds=self._second, microseconds=self._microsecond) delta += other hour, rem = divmod(delta.seconds, 3600) minute, second = divmod(rem, 60) if 0 < delta.days <= _MAXORDINAL: return type(self).combine(date.fromordinal(delta.days), time(hour, minute, second, delta.microseconds, tzinfo=self._tzinfo)) raise OverflowError("result out of range") __radd__ = __add__ def __sub__(self, other): "Subtract two datetimes, or a datetime and a timedelta." 
if not isinstance(other, datetime): if isinstance(other, timedelta): return self + -other return NotImplemented days1 = self.toordinal() days2 = other.toordinal() secs1 = self._second + self._minute * 60 + self._hour * 3600 secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self.utcoffset() otoff = other.utcoffset() if myoff == otoff: return base if myoff is None or otoff is None: raise TypeError("cannot mix naive and timezone-aware time") return base + otoff - myoff def __hash__(self): if self._hashcode == -1: if self.fold: t = self.replace(fold=0) else: t = self tzoff = t.utcoffset() if tzoff is None: self._hashcode = hash(t._getstate()[0]) else: days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + self.minute * 60 + self.second self._hashcode = hash(timedelta(days, seconds, self.microsecond) - tzoff) return self._hashcode # Pickle support. 
def _getstate(self, protocol=3): yhi, ylo = divmod(self._year, 256) us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) m = self._month if self._fold and protocol > 3: m += 128 basestate = bytes([yhi, ylo, m, self._day, self._hour, self._minute, self._second, us1, us2, us3]) if self._tzinfo is None: return (basestate,) else: return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): if tzinfo is not None and not isinstance(tzinfo, _tzinfo_class): raise TypeError("bad tzinfo state arg") (yhi, ylo, m, self._day, self._hour, self._minute, self._second, us1, us2, us3) = string if m > 127: self._fold = 1 self._month = m - 128 else: self._fold = 0 self._month = m self._year = yhi * 256 + ylo self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce_ex__(self, protocol): return (self.__class__, self._getstate(protocol)) def __reduce__(self): return self.__reduce_ex__(2) datetime.min = datetime(1, 1, 1) datetime.max = datetime(9999, 12, 31, 23, 59, 59, 999999) datetime.resolution = timedelta(microseconds=1) def us_dst_range(year): # Find start and end times for US DST. For years before 1967, return # start = end for no DST. if 2006 < year: dststart, dstend = DSTSTART_2007, DSTEND_2007 elif 1986 < year < 2007: dststart, dstend = DSTSTART_1987_2006, DSTEND_1987_2006 elif 1966 < year < 1987: dststart, dstend = DSTSTART_1967_1986, DSTEND_1967_1986 else: return (datetime(year, 1, 1), ) * 2 start = first_sunday_on_or_after(dststart.replace(year=year)) end = first_sunday_on_or_after(dstend.replace(year=year)) return start, end
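The conversion path implemented above (subtract `utcoffset()` to reach UTC, then hand the result to `tz.fromutc`) can be exercised directly with the stdlib; a minimal sketch using fixed-offset zones (the zone name "EST" here is only illustrative):

```python
from datetime import datetime, timedelta, timezone

# Convert an aware datetime between fixed-offset zones with astimezone(),
# which follows the "convert to UTC, then tz.fromutc" path shown above.
est = timezone(timedelta(hours=-5), "EST")
utc_dt = datetime(2021, 3, 1, 12, 0, tzinfo=timezone.utc)
local = utc_dt.astimezone(est)

print(local.hour)       # 7  (12:00 UTC is 07:00 at UTC-5)
print(local.tzname())   # EST
print(local == utc_dt)  # True: same instant, different wall-clock time
```

Aware datetimes compare by instant, so the equality holds even though the wall-clock fields differ.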
null
187,974
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict checkers = {} checker_props = {'severity': 1, 'falsepositives': False} The provided code snippet includes necessary dependencies for implementing the `checker` function. Write a Python function `def checker(*suffixes, **kwds)` to solve the following problem: Decorator to register a function as a checker. Here is the function: def checker(*suffixes, **kwds): """Decorator to register a function as a checker.""" def deco(func): for suffix in suffixes: checkers.setdefault(suffix, []).append(func) for prop in checker_props: setattr(func, prop, kwds.get(prop, checker_props[prop])) return func return deco
Decorator to register a function as a checker.
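To make the registration behaviour concrete, here is a self-contained sketch reusing the same `checkers`/`checker_props` structures as the snippet above (the toy `demo_checker` is mine, not from the source):

```python
# Registry structures, as in the snippet above.
checkers = {}
checker_props = {'severity': 1, 'falsepositives': False}

def checker(*suffixes, **kwds):
    """Decorator to register a function as a checker."""
    def deco(func):
        for suffix in suffixes:
            checkers.setdefault(suffix, []).append(func)
        for prop in checker_props:
            setattr(func, prop, kwds.get(prop, checker_props[prop]))
        return func
    return deco

@checker('.py', severity=2)
def demo_checker(fn, lines):
    yield 0, 'demo'

print(checkers['.py'][0] is demo_checker)  # True: registered under '.py'
print(demo_checker.severity)               # 2 (overridden via keyword)
print(demo_checker.falsepositives)         # False (default from checker_props)
```

Unspecified properties fall back to the defaults in `checker_props`, so every registered checker carries a full set of attributes.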
187,975
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict The provided code snippet includes necessary dependencies for implementing the `check_syntax` function. Write a Python function `def check_syntax(fn, lines)` to solve the following problem: Check Python examples for valid syntax. Here is the function: def check_syntax(fn, lines): """Check Python examples for valid syntax.""" code = ''.join(lines) if '\r' in code: if os.name != 'nt': yield 0, '\\r in code file' code = code.replace('\r', '') try: compile(code, fn, 'exec') except SyntaxError as err: yield err.lineno, 'not compilable: %s' % err
Check Python examples for valid syntax.
187,976
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict seems_directive_re = re.compile(r'(?<!\.)\.\. %s([^a-z:]|:(?!:))' % all_directives) default_role_re = re.compile(r'(^| )`\w([^`]*?\w)?`($| )') The provided code snippet includes necessary dependencies for implementing the `check_suspicious_constructs` function. Write a Python function `def check_suspicious_constructs(fn, lines)` to solve the following problem: Check for suspicious reST constructs. Here is the function: def check_suspicious_constructs(fn, lines): """Check for suspicious reST constructs.""" inprod = False for lno, line in enumerate(lines): if seems_directive_re.search(line): yield lno+1, 'comment seems to be intended as a directive' if '.. productionlist::' in line: inprod = True elif not inprod and default_role_re.search(line): yield lno+1, 'default role used' elif inprod and not line.strip(): inprod = False
Check for suspicious reST constructs.
187,977
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict The provided code snippet includes necessary dependencies for implementing the `check_whitespace` function. Write a Python function `def check_whitespace(fn, lines)` to solve the following problem: Check for whitespace and line length issues. Here is the function: def check_whitespace(fn, lines): """Check for whitespace and line length issues.""" for lno, line in enumerate(lines): if '\r' in line: yield lno+1, '\\r in line' if '\t' in line: yield lno+1, 'OMG TABS!!!1' if line[:-1].rstrip(' \t') != line[:-1]: yield lno+1, 'trailing whitespace'
Check for whitespace and line length issues.
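A quick self-contained run of the checker above on a few synthetic lines (the filename is only a placeholder; checkers receive `(fn, lines)` but this one ignores `fn`):

```python
def check_whitespace(fn, lines):
    """Check for whitespace and line length issues."""
    for lno, line in enumerate(lines):
        if '\r' in line:
            yield lno+1, '\\r in line'
        if '\t' in line:
            yield lno+1, 'OMG TABS!!!1'
        if line[:-1].rstrip(' \t') != line[:-1]:
            yield lno+1, 'trailing whitespace'

lines = ["clean line\n", "tabbed\tline\n", "trailing   \n"]
problems = list(check_whitespace("demo.rst", lines))
print(problems)  # [(2, 'OMG TABS!!!1'), (3, 'trailing whitespace')]
```

Note the 1-based line numbers in the output: `enumerate` starts at 0 and the checker yields `lno+1`.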
187,978
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict The provided code snippet includes necessary dependencies for implementing the `check_line_length` function. Write a Python function `def check_line_length(fn, lines)` to solve the following problem: Check for line length; this checker is not run by default. Here is the function: def check_line_length(fn, lines): """Check for line length; this checker is not run by default.""" for lno, line in enumerate(lines): if len(line) > 81: # don't complain about tables, links and function signatures if line.lstrip()[0] not in '+|' and \ 'http://' not in line and \ not line.lstrip().startswith(('.. function', '.. method', '.. cfunction')): yield lno+1, "line too long"
Check for line length; this checker is not run by default.
187,979
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict leaked_markup_re = re.compile(r'[a-z]::\s|`|\.\.\s*\w+:') The provided code snippet includes necessary dependencies for implementing the `check_leaked_markup` function. Write a Python function `def check_leaked_markup(fn, lines)` to solve the following problem: Check HTML files for leaked reST markup; this only works if the HTML files have been built. Here is the function: def check_leaked_markup(fn, lines): """Check HTML files for leaked reST markup; this only works if the HTML files have been built. """ for lno, line in enumerate(lines): if leaked_markup_re.search(line): yield lno+1, 'possibly leaked markup: %r' % line
Check HTML files for leaked reST markup; this only works if the HTML files have been built.
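The regex above flags any of three patterns: a word followed by `::` and whitespace, a stray backtick, or a directive-like `.. name:`. A self-contained sketch on made-up HTML lines:

```python
import re

leaked_markup_re = re.compile(r'[a-z]::\s|`|\.\.\s*\w+:')

def check_leaked_markup(fn, lines):
    for lno, line in enumerate(lines):
        if leaked_markup_re.search(line):
            yield lno+1, 'possibly leaked markup: %r' % line

html = ["<p>normal text</p>\n",
        "<p>see :func:`open` for details</p>\n"]
hits = list(check_leaked_markup("page.html", html))
print(hits[0][0])  # 2: the unrendered :func:`open` role leaks a backtick
```

Line 2 is caught by the backtick alternative; properly rendered HTML would have turned the role into a link and the backticks would be gone.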
187,980
import os import re import sys import getopt from string import ascii_letters from os.path import join, splitext, abspath, exists from collections import defaultdict def hide_literal_blocks(lines): """Tool to remove literal blocks from given lines. It yields empty lines in place of blocks, so line numbers are still meaningful. """ in_block = False for line in lines: if line.endswith("::\n"): in_block = True elif in_block: if line == "\n" or line.startswith(" "): line = "\n" else: in_block = False yield line def hide_comments(lines): """Tool to remove comments from given lines. It yields empty lines in place of comments, so line numbers are still meaningful. """ in_multiline_comment = False for line in lines: if line == "..\n": in_multiline_comment = True elif in_multiline_comment: if line == "\n" or line.startswith(" "): line = "\n" else: in_multiline_comment = False if line.startswith(".. ") and type_of_explicit_markup(line) == 'comment': line = "\n" yield line ascii_letters = ascii_lowercase + ascii_uppercase The provided code snippet includes necessary dependencies for implementing the `check_missing_surrogate_space_on_plural` function. Write a Python function `def check_missing_surrogate_space_on_plural(fn, lines)` to solve the following problem: r"""Check for missing 'backslash-space' between a code sample and a letter. Good: ``Point``\ s Bad: ``Point``s Here is the function: def check_missing_surrogate_space_on_plural(fn, lines): r"""Check for missing 'backslash-space' between a code sample and a letter. Good: ``Point``\ s Bad: ``Point``s """ in_code_sample = False check_next_one = False for lno, line in enumerate(hide_comments(hide_literal_blocks(lines))): tokens = line.split("``") for token_no, token in enumerate(tokens): if check_next_one: if token[0] in ascii_letters: yield lno + 1, f"Missing backslash-space between code sample and {token!r}." 
check_next_one = False if token_no == len(tokens) - 1: continue if in_code_sample: check_next_one = True in_code_sample = not in_code_sample
r"""Check for missing 'backslash-space' between a code sample and a letter. Good: ``Point``\ s Bad: ``Point``s
187,981
from pygments.lexer import RegexLexer, bygroups, include from pygments.token import Comment, Generic, Keyword, Name, Operator, Punctuation, Text from sphinx.highlighting import lexers class PEGLexer(RegexLexer): """Pygments Lexer for PEG grammar (.gram) files This lexer strips the following elements from the grammar: - Meta-tags - Variable assignments - Actions - Lookaheads - Rule types - Rule options - Rules named `invalid_*` or `incorrect_*` """ name = "PEG" aliases = ["peg"] filenames = ["*.gram"] _name = r"([^\W\d]\w*)" _text_ws = r"(\s*)" tokens = { "ws": [(r"\n", Text), (r"\s+", Text), (r"#.*$", Comment.Singleline),], "lookaheads": [ # Forced tokens (r"(&&)(?=\w+\s?)", bygroups(None)), (r"(&&)(?='.+'\s?)", bygroups(None)), (r'(&&)(?=".+"\s?)', bygroups(None)), (r"(&&)(?=\(.+\)\s?)", bygroups(None)), (r"(?<=\|\s)(&\w+\s?)", bygroups(None)), (r"(?<=\|\s)(&'.+'\s?)", bygroups(None)), (r'(?<=\|\s)(&".+"\s?)', bygroups(None)), (r"(?<=\|\s)(&\(.+\)\s?)", bygroups(None)), ], "metas": [ (r"(@\w+ '''(.|\n)+?''')", bygroups(None)), (r"^(@.*)$", bygroups(None)), ], "actions": [ (r"{(.|\n)+?}", bygroups(None)), ], "strings": [ (r"'\w+?'", Keyword), (r'"\w+?"', Keyword), (r"'\W+?'", Text), (r'"\W+?"', Text), ], "variables": [ (_name + _text_ws + "(=)", bygroups(None, None, None),), (_name + _text_ws + r"(\[[\w\d_\*]+?\])" + _text_ws + "(=)", bygroups(None, None, None, None, None),), ], "invalids": [ (r"^(\s+\|\s+.*invalid_\w+.*\n)", bygroups(None)), (r"^(\s+\|\s+.*incorrect_\w+.*\n)", bygroups(None)), (r"^(#.*invalid syntax.*(?:.|\n)*)", bygroups(None),), ], "root": [ include("invalids"), include("ws"), include("lookaheads"), include("metas"), include("actions"), include("strings"), include("variables"), (r"\b(?!(NULL|EXTRA))([A-Z_]+)\b\s*(?!\()", Text,), ( r"^\s*" + _name + r"\s*" + r"(\[.*\])?" + r"\s*" + r"(\(.+\))?" 
+ r"\s*(:)", bygroups(Name.Function, None, None, Punctuation), ), (_name, Name.Function), (r"[\||\.|\+|\*|\?]", Operator), (r"{|}|\(|\)|\[|\]", Punctuation), (r".", Text), ], } def setup(app): lexers["peg"] = PEGLexer() return {"version": "1.0", "parallel_read_safe": True}
null
187,982
import re import io from os import getenv, path from time import asctime from pprint import pformat from docutils.io import StringOutput from docutils.parsers.rst import Directive from docutils.utils import new_document from docutils import nodes, utils from sphinx import addnodes from sphinx.builders import Builder from sphinx.locale import translators from sphinx.util import status_iterator, logging from sphinx.util.nodes import split_explicit_title from sphinx.writers.text import TextWriter, TextTranslator from sphinx.writers.latex import LaTeXTranslator import suspicious from docutils.parsers.rst.states import Body def issue_role(typ, rawtext, text, lineno, inliner, options={}, content=[]): def gh_issue_role(typ, rawtext, text, lineno, inliner, options={}, content=[]): def source_role(typ, rawtext, text, lineno, inliner, options={}, content=[]): class ImplementationDetail(Directive): def run(self): class Availability(Directive): def run(self): def audit_events_purge(app, env, docname): def audit_events_merge(app, env, docnames, other): class AuditEvent(Directive): def logger(self): def run(self): def _do_args_match(self, args1, args2): class AuditEventListDirective(Directive): def run(self): class PyDecoratorFunction(PyDecoratorMixin, PyFunction): def run(self): class PyDecoratorMethod(PyDecoratorMixin, PyMethod): def run(self): class PyCoroutineFunction(PyCoroutineMixin, PyFunction): def run(self): class PyCoroutineMethod(PyCoroutineMixin, PyMethod): def run(self): class PyAwaitableFunction(PyAwaitableMixin, PyFunction): def run(self): class PyAwaitableMethod(PyAwaitableMixin, PyMethod): def run(self): class PyAbstractMethod(PyMethod): def handle_signature(self, sig, signode): def run(self): class DeprecatedRemoved(Directive): def run(self): class MiscNews(Directive): def run(self): class PydocTopicsBuilder(Builder): def init(self): def get_outdated_docs(self): def get_target_uri(self, docname, typ=None): def write(self, *ignored): def finish(self): def 
parse_opcode_signature(env, sig, signode): def parse_pdb_command(env, sig, signode): def process_audit_events(app, doctree, fromdocname): def setup(app): app.add_role('issue', issue_role) app.add_role('gh', gh_issue_role) app.add_role('source', source_role) app.add_directive('impl-detail', ImplementationDetail) app.add_directive('availability', Availability) app.add_directive('audit-event', AuditEvent) app.add_directive('audit-event-table', AuditEventListDirective) app.add_directive('deprecated-removed', DeprecatedRemoved) app.add_builder(PydocTopicsBuilder) app.add_builder(suspicious.CheckSuspiciousMarkupBuilder) app.add_object_type('opcode', 'opcode', '%s (opcode)', parse_opcode_signature) app.add_object_type('pdbcommand', 'pdbcmd', '%s (pdb command)', parse_pdb_command) app.add_object_type('2to3fixer', '2to3fixer', '%s (2to3 fixer)') app.add_directive_to_domain('py', 'decorator', PyDecoratorFunction) app.add_directive_to_domain('py', 'decoratormethod', PyDecoratorMethod) app.add_directive_to_domain('py', 'coroutinefunction', PyCoroutineFunction) app.add_directive_to_domain('py', 'coroutinemethod', PyCoroutineMethod) app.add_directive_to_domain('py', 'awaitablefunction', PyAwaitableFunction) app.add_directive_to_domain('py', 'awaitablemethod', PyAwaitableMethod) app.add_directive_to_domain('py', 'abstractmethod', PyAbstractMethod) app.add_directive('miscnews', MiscNews) app.connect('doctree-resolved', process_audit_events) app.connect('env-merge-info', audit_events_merge) app.connect('env-purge-doc', audit_events_purge) return {'version': '1.0', 'parallel_read_safe': True}
null
187,986
import json import os.path from docutils.nodes import definition_list_item from sphinx.addnodes import glossary from sphinx.util import logging def process_glossary_nodes(app, doctree, fromdocname): if app.builder.format != 'html': return terms = {} for node in doctree.traverse(glossary): for glossary_item in node.traverse(definition_list_item): term = glossary_item[0].astext().lower() definition = glossary_item[1] rendered = app.builder.render_partial(definition) terms[term] = { 'title': glossary_item[0].astext(), 'body': rendered['html_body'] } if hasattr(app.env, 'glossary_terms'): app.env.glossary_terms.update(terms) else: app.env.glossary_terms = terms def on_build_finish(app, exc): if not hasattr(app.env, 'glossary_terms'): return if not app.env.glossary_terms: return logger.info(f'Writing {JSON}', color='green') dest_dir = os.path.join(app.outdir, STATIC_DIR) os.makedirs(dest_dir, exist_ok=True) with open(os.path.join(dest_dir, JSON), 'w') as f: json.dump(app.env.glossary_terms, f) def setup(app): app.connect('doctree-resolved', process_glossary_nodes) app.connect('build-finished', on_build_finish) return {'version': '0.1', 'parallel_read_safe': True}
null
187,987
from os import path from docutils import nodes from docutils.parsers.rst import directives from docutils.parsers.rst import Directive from docutils.statemachine import StringList import csv from sphinx import addnodes from sphinx.domains.c import CObject def init_annotations(app): def setup(app): app.add_config_value('refcount_file', '', True) app.add_config_value('stable_abi_file', '', True) app.connect('builder-inited', init_annotations) # monkey-patch C object... CObject.option_spec = { 'noindex': directives.flag, 'stableabi': directives.flag, } old_handle_signature = CObject.handle_signature def new_handle_signature(self, sig, signode): signode.parent['stableabi'] = 'stableabi' in self.options return old_handle_signature(self, sig, signode) CObject.handle_signature = new_handle_signature return {'version': '1.0', 'parallel_read_safe': True}
null
187,988
import pathlib import re from html.entities import codepoint2name from sphinx.util.logging import getLogger def escape_for_chm(app, pagename, templatename, context, doctree): # only works for .chm output if getattr(app.builder, 'name', '') != 'htmlhelp': return # escape the `body` part to 7-bit ASCII body = context.get('body') if body is not None: context['body'] = _process(body) def fixup_keywords(app, exception): # only works for .chm output if getattr(app.builder, 'name', '') != 'htmlhelp' or exception: return getLogger(__name__).info('fixing HTML escapes in keywords file...') outdir = pathlib.Path(app.builder.outdir) outname = app.builder.config.htmlhelp_basename with open(outdir / (outname + '.hhk'), 'rb') as f: index = f.read() with open(outdir / (outname + '.hhk'), 'wb') as f: f.write(index.replace(b'&#x27;', b'&#39;')) def setup(app): # `html-page-context` event emitted when the HTML builder has # created a context dictionary to render a template with. app.connect('html-page-context', escape_for_chm) # `build-finished` event emitted when all the files have been # output. app.connect('build-finished', fixup_keywords) return {'version': '1.0', 'parallel_read_safe': True}
null
187,989
import os import sys from pathlib import Path from pygments.lexer import RegexLexer, bygroups, include, words from pygments.token import (Comment, Generic, Keyword, Name, Operator, Punctuation, Text) from asdl import builtin_types from sphinx.highlighting import lexers class ASDLLexer(RegexLexer): def setup(app): lexers["asdl"] = ASDLLexer() return {'version': '1.0', 'parallel_read_safe': True}
null
187,995
import copy import math import torch import torch.nn as nn import torch.nn.functional as F from torch.distributions import Normal def create_masks( input_size, hidden_size, n_hidden, input_order="sequential", input_degrees=None ): # MADE paper sec 4: # degrees of connections between layers -- ensure at most in_degree - 1 connections degrees = [] # set input degrees to what is provided in args (the flipped order of the previous layer in a stack of mades); # else init input degrees based on strategy in input_order (sequential or random) if input_order == "sequential": degrees += ( [torch.arange(input_size)] if input_degrees is None else [input_degrees] ) for _ in range(n_hidden + 1): degrees += [torch.arange(hidden_size) % (input_size - 1)] degrees += ( [torch.arange(input_size) % input_size - 1] if input_degrees is None else [input_degrees % input_size - 1] ) elif input_order == "random": degrees += ( [torch.randperm(input_size)] if input_degrees is None else [input_degrees] ) for _ in range(n_hidden + 1): min_prev_degree = min(degrees[-1].min().item(), input_size - 1) degrees += [torch.randint(min_prev_degree, input_size, (hidden_size,))] min_prev_degree = min(degrees[-1].min().item(), input_size - 1) degrees += ( [torch.randint(min_prev_degree, input_size, (input_size,)) - 1] if input_degrees is None else [input_degrees - 1] ) # construct masks masks = [] for (d0, d1) in zip(degrees[:-1], degrees[1:]): masks += [(d1.unsqueeze(-1) >= d0.unsqueeze(0)).float()] return masks, degrees[0]
null
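A NumPy sketch of the `sequential` branch of `create_masks` above, checking the MADE autoregressive property: composing the per-layer masks gives the input-to-output connectivity, which must be strictly lower-triangular (output i may only depend on inputs j < i). Sizes are illustrative, not from the source:

```python
import numpy as np

# Sequential-order degrees for input_size=3, hidden_size=4, n_hidden=1,
# mirroring create_masks: input degrees, two hidden layers, shifted output.
input_size, hidden_size, n_hidden = 3, 4, 1
degrees = [np.arange(input_size)]
for _ in range(n_hidden + 1):
    degrees.append(np.arange(hidden_size) % (input_size - 1))
degrees.append(np.arange(input_size) % input_size - 1)

# Mask entry (i, j) is 1 iff degree(out_i) >= degree(in_j),
# as in (d1.unsqueeze(-1) >= d0.unsqueeze(0)).float() above.
masks = [(d1[:, None] >= d0[None, :]).astype(float)
         for d0, d1 in zip(degrees[:-1], degrees[1:])]

# Compose masks into the overall input->output connectivity matrix.
conn = masks[-1]
for m in reversed(masks[:-1]):
    conn = conn @ m

print(np.all(np.triu(conn) == 0))  # True: strictly lower-triangular
```

The output degrees include -1, so the first output connects to nothing: in an autoregressive factorization, p(x_0) is unconditional.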
187,996
from functools import partial from inspect import isfunction import numpy as np import torch from torch import nn, einsum import torch.nn.functional as F def default(val, d): if val is not None: return val return d() if isfunction(d) else d
null
187,997
from functools import partial from inspect import isfunction import numpy as np import torch from torch import nn, einsum import torch.nn.functional as F def extract(a, t, x_shape): b, *_ = t.shape out = a.gather(-1, t) return out.reshape(b, *((1,) * (len(x_shape) - 1)))
null
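The `extract` helper above gathers a per-sample coefficient `a[t]` and reshapes it to broadcast against a batch of images. A NumPy analogue of the same gather-and-reshape pattern (`extract_np` and the sample values are mine, for illustration only):

```python
import numpy as np

def extract_np(a, t, x_shape):
    # Pick a[t] per batch element, then add trailing singleton axes so the
    # result broadcasts against x of shape x_shape, e.g. (b, c, h, w).
    b = t.shape[0]
    out = a[t]  # gather along the last axis, as torch's a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))

a = np.linspace(0.1, 1.0, 10)   # e.g. a schedule table over 10 timesteps
t = np.array([0, 4, 9])         # one timestep index per batch element
coeff = extract_np(a, t, (3, 3, 8, 8))
print(coeff.shape)              # (3, 1, 1, 1)
```

With this shape, `coeff * x` scales every pixel of sample k by the coefficient belonging to its own timestep `t[k]`.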
187,998
from functools import partial from inspect import isfunction import numpy as np import torch from torch import nn, einsum import torch.nn.functional as F def noise_like(shape, device, repeat=False): repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat( shape[0], *((1,) * (len(shape) - 1)) ) noise = lambda: torch.randn(shape, device=device) return repeat_noise() if repeat else noise()
null
187,999
from functools import partial from inspect import isfunction import numpy as np import torch from torch import nn, einsum import torch.nn.functional as F The provided code snippet includes necessary dependencies for implementing the `cosine_beta_schedule` function. Write a Python function `def cosine_beta_schedule(timesteps, s=0.008)` to solve the following problem: cosine schedule as proposed in https://openreview.net/forum?id=-NEXDKk8gZ Here is the function: def cosine_beta_schedule(timesteps, s=0.008): """ cosine schedule as proposed in https://openreview.net/forum?id=-NEXDKk8gZ """ steps = timesteps + 1 x = np.linspace(0, timesteps, steps) alphas_cumprod = np.cos(((x / timesteps) + s) / (1 + s) * np.pi * 0.5) ** 2 alphas_cumprod = alphas_cumprod / alphas_cumprod[0] betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) return np.clip(betas, 0, 0.999)
cosine schedule as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
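A small self-contained check of the schedule above (NumPy only; 1000 timesteps is just a common illustrative choice): the betas start near zero, grow toward the end of diffusion, and stay inside the clipped range.

```python
import numpy as np

def cosine_beta_schedule(timesteps, s=0.008):
    # Betas derived from ratios of consecutive cumulative alphas
    # under a squared-cosine decay, clipped to [0, 0.999].
    steps = timesteps + 1
    x = np.linspace(0, timesteps, steps)
    alphas_cumprod = np.cos(((x / timesteps) + s) / (1 + s) * np.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return np.clip(betas, 0, 0.999)

betas = cosine_beta_schedule(1000)
print(betas.shape)                            # (1000,)
print(betas[0] < 1e-3, betas[-1] > betas[0])  # noise grows over time
```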
188,000
import json import os from pathlib import Path from functools import lru_cache import numpy as np import pandas as pd from gluonts.dataset.field_names import FieldName from gluonts.dataset.repository._util import metadata, save_to_file from gluonts.time_feature.holiday import squared_exponential_kernel from pts.feature import CustomDateFeatureSet def generate_pts_m5_dataset( dataset_path: Path, pandas_freq: str, prediction_length: int = 28, alpha: float = 0.5, ): cal_path = f"{dataset_path}/calendar.csv" sales_path = f"{dataset_path}/sales_train_validation.csv" sales_test_path = f"{dataset_path}/sales_train_evaluation.csv" sell_prices_path = f"{dataset_path}/sell_prices.csv" if not os.path.exists(cal_path) or not os.path.exists(sales_path): raise RuntimeError( f"M5 data is available on Kaggle (https://www.kaggle.com/c/m5-forecasting-accuracy/data). " f"You first need to agree to the terms of the competition before being able to download the data. " f"After you have done that, please copy the files into {dataset_path}." 
) # Read M5 data from dataset_path calendar = pd.read_csv(cal_path, parse_dates=True) calendar.sort_index(inplace=True) calendar.date = pd.to_datetime(calendar.date) sales_train_validation = pd.read_csv( sales_path, index_col=["id", "item_id", "dept_id", "cat_id", "store_id", "state_id"], ) sales_train_validation.sort_index(inplace=True) sales_train_evaluation = pd.read_csv( sales_test_path, index_col=["id", "item_id", "dept_id", "cat_id", "store_id", "state_id"], ) sales_train_evaluation.sort_index(inplace=True) sell_prices = pd.read_csv(sell_prices_path, index_col=["item_id", "store_id"]) sell_prices.sort_index(inplace=True) @lru_cache(maxsize=None) def get_sell_price(item_id, store_id): return calendar.merge( sell_prices.loc[item_id, store_id], on=["wm_yr_wk"], how="left" ).sell_price # Build dynamic features kernel = squared_exponential_kernel(alpha=alpha) event_1 = CustomDateFeatureSet(calendar[calendar.event_name_1.notna()].date, kernel) event_2 = CustomDateFeatureSet(calendar[calendar.event_name_2.notna()].date, kernel) snap_CA = CustomDateFeatureSet(calendar[calendar.snap_CA == 1].date, kernel) snap_TX = CustomDateFeatureSet(calendar[calendar.snap_TX == 1].date, kernel) snap_WI = CustomDateFeatureSet(calendar[calendar.snap_WI == 1].date, kernel) time_index = pd.to_datetime(calendar.date) event_1_feature = event_1(time_index) event_2_feature = event_2(time_index) snap_CA_feature = snap_CA(time_index) snap_TX_feature = snap_TX(time_index) snap_WI_feature = snap_WI(time_index) # Build static features sales_train_validation["state"] = pd.CategoricalIndex( sales_train_validation.index.get_level_values(5) ).codes sales_train_validation["store"] = pd.CategoricalIndex( sales_train_validation.index.get_level_values(4) ).codes sales_train_validation["cat"] = pd.CategoricalIndex( sales_train_validation.index.get_level_values(3) ).codes sales_train_validation["dept"] = pd.CategoricalIndex( sales_train_validation.index.get_level_values(2) ).codes 
sales_train_validation["item"] = pd.CategoricalIndex( sales_train_validation.index.get_level_values(1) ).codes sales_train_evaluation["state"] = pd.CategoricalIndex( sales_train_evaluation.index.get_level_values(5) ).codes sales_train_evaluation["store"] = pd.CategoricalIndex( sales_train_evaluation.index.get_level_values(4) ).codes sales_train_evaluation["cat"] = pd.CategoricalIndex( sales_train_evaluation.index.get_level_values(3) ).codes sales_train_evaluation["dept"] = pd.CategoricalIndex( sales_train_evaluation.index.get_level_values(2) ).codes sales_train_evaluation["item"] = pd.CategoricalIndex( sales_train_evaluation.index.get_level_values(1) ).codes feat_static_cat = [ { "name": "state_id", "cardinality": len(sales_train_validation["state"].unique()), }, { "name": "store_id", "cardinality": len(sales_train_validation["store"].unique()), }, {"name": "cat_id", "cardinality": len(sales_train_validation["cat"].unique())}, { "name": "dept_id", "cardinality": len(sales_train_validation["dept"].unique()), }, { "name": "item_id", "cardinality": len(sales_train_validation["item"].unique()), }, ] feat_dynamic_real = [ {"name": "sell_price", "cardinality": 1}, {"name": "event_1", "cardinality": 1}, {"name": "event_2", "cardinality": 1}, {"name": "snap", "cardinality": 1}, ] # Build training set train_file = dataset_path / "train" / "data.json" train_ds = [] for index, item in sales_train_validation.iterrows(): id, item_id, dept_id, cat_id, store_id, state_id = index start_index = np.nonzero(item.iloc[:1913].values)[0][0] start_date = time_index[start_index] time_series = {} state_enc, store_enc, cat_enc, dept_enc, item_enc = item.iloc[1913:] time_series["start"] = str(start_date) time_series["item_id"] = id[:-11] time_series["feat_static_cat"] = [ state_enc, store_enc, cat_enc, dept_enc, item_enc, ] sell_price = get_sell_price(item_id, store_id) snap_feature = { "CA": snap_CA_feature, "TX": snap_TX_feature, "WI": snap_WI_feature, }[state_id] time_series["target"] = ( 
item.iloc[start_index:1913].values.astype(np.float32).tolist() ) time_series["feat_dynamic_real"] = ( np.concatenate( ( np.expand_dims(sell_price.iloc[start_index:1913].values, 0), event_1_feature[:, start_index:1913], event_2_feature[:, start_index:1913], snap_feature[:, start_index:1913], ), 0, ) .astype(np.float32) .tolist() ) train_ds.append(time_series.copy()) # Build training set train_file = dataset_path / "train" / "data.json" save_to_file(train_file, train_ds) # Create metadata file meta_file = dataset_path / "metadata.json" with open(meta_file, "w") as f: f.write( json.dumps( { "freq": pandas_freq, "prediction_length": prediction_length, "feat_static_cat": feat_static_cat, "feat_dynamic_real": feat_dynamic_real, "cardinality": len(train_ds), } ) ) # Build testing set test_file = dataset_path / "test" / "data.json" test_ds = [] for index, item in sales_train_evaluation.iterrows(): id, item_id, dept_id, cat_id, store_id, state_id = index start_index = np.nonzero(item.iloc[:1941].values)[0][0] start_date = time_index[start_index] time_series = {} state_enc, store_enc, cat_enc, dept_enc, item_enc = item.iloc[1941:] time_series["start"] = str(start_date) time_series["item_id"] = id[:-11] time_series["feat_static_cat"] = [ state_enc, store_enc, cat_enc, dept_enc, item_enc, ] sell_price = get_sell_price(item_id, store_id) snap_feature = { "CA": snap_CA_feature, "TX": snap_TX_feature, "WI": snap_WI_feature, }[state_id] time_series["target"] = ( item.iloc[start_index:1941].values.astype(np.float32).tolist() ) time_series["feat_dynamic_real"] = ( np.concatenate( ( np.expand_dims(sell_price.iloc[start_index:1941].values, 0), event_1_feature[:, start_index:1941], event_2_feature[:, start_index:1941], snap_feature[:, start_index:1941], ), 0, ) .astype(np.float32) .tolist() ) test_ds.append(time_series.copy()) save_to_file(test_file, test_ds)
null
188,001
from typing import List import numpy as np import pandas as pd from pandas.tseries.frequencies import to_offset from gluonts.core.component import validated from gluonts.time_feature import TimeFeature, norm_freq_str class FourierDateFeatures(TimeFeature): def __init__(self, freq: str) -> None: super().__init__() # reocurring freq freqs = [ "month", "day", "hour", "minute", "weekofyear", "weekday", "dayofweek", "dayofyear", "daysinmonth", ] assert freq in freqs self.freq = freq def __call__(self, index: pd.DatetimeIndex) -> np.ndarray: values = getattr(index, self.freq) num_values = max(values) + 1 steps = [x * 2.0 * np.pi / num_values for x in values] return np.vstack([np.cos(steps), np.sin(steps)]) def fourier_time_features_from_frequency(freq_str: str) -> List[TimeFeature]: offset = to_offset(freq_str) granularity = norm_freq_str(offset.name) features = { "M": ["weekofyear"], "W": ["daysinmonth", "weekofyear"], "D": ["dayofweek"], "B": ["dayofweek", "dayofyear"], "H": ["hour", "dayofweek"], "min": ["minute", "hour", "dayofweek"], "T": ["minute", "hour", "dayofweek"], } assert granularity in features, f"freq {granularity} not supported" feature_classes: List[TimeFeature] = [ FourierDateFeatures(freq=freq) for freq in features[granularity] ] return feature_classes
null
188,002
from typing import List, Optional from pandas.tseries.frequencies import to_offset def lags_for_fourier_time_features_from_frequency( freq_str: str, num_lags: Optional[int] = None ) -> List[int]: offset = to_offset(freq_str) multiple, granularity = offset.n, offset.name if granularity == "M": lags = [[1, 12]] elif granularity == "D": lags = [[1, 7, 14]] elif granularity == "B": lags = [[1, 2]] elif granularity == "H": lags = [[1, 24, 168]] elif granularity in ("T", "min"): lags = [[1, 4, 12, 24, 48]] else: lags = [[1]] # use less lags output_lags = list([int(lag) for sub_list in lags for lag in sub_list]) output_lags = sorted(list(set(output_lags))) return output_lags[:num_lags]
null
188,003
The provided code snippet includes necessary dependencies for implementing the `broadcast_shape` function. Write a Python function `def broadcast_shape(*shapes, **kwargs)` to solve the following problem: Similar to ``np.broadcast()`` but for shapes. Equivalent to ``np.broadcast(*map(np.empty, shapes)).shape``. :param tuple shapes: shapes of tensors. :param bool strict: whether to use extend-but-not-resize broadcasting. :returns: broadcasted shape :rtype: tuple :raises: ValueError Here is the function: def broadcast_shape(*shapes, **kwargs): """ Similar to ``np.broadcast()`` but for shapes. Equivalent to ``np.broadcast(*map(np.empty, shapes)).shape``. :param tuple shapes: shapes of tensors. :param bool strict: whether to use extend-but-not-resize broadcasting. :returns: broadcasted shape :rtype: tuple :raises: ValueError """ strict = kwargs.pop("strict", False) reversed_shape = [] for shape in shapes: for i, size in enumerate(reversed(shape)): if i >= len(reversed_shape): reversed_shape.append(size) elif reversed_shape[i] == 1 and not strict: reversed_shape[i] = size elif reversed_shape[i] != size and (size != 1 or strict): raise ValueError( "shape mismatch: objects cannot be broadcast to a single shape: {}".format( " vs ".join(map(str, shapes)) ) ) return tuple(reversed(reversed_shape))
Similar to ``np.broadcast()`` but for shapes. Equivalent to ``np.broadcast(*map(np.empty, shapes)).shape``. :param tuple shapes: shapes of tensors. :param bool strict: whether to use extend-but-not-resize broadcasting. :returns: broadcasted shape :rtype: tuple :raises: ValueError
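A few worked calls of `broadcast_shape` above (pure Python, no dependencies; the shapes are arbitrary examples): sizes are merged right-aligned as in NumPy broadcasting, and `strict=True` forbids resizing size-1 dims.

```python
def broadcast_shape(*shapes, **kwargs):
    # Right-align shapes and merge sizes dim by dim; a size of 1
    # may be stretched to match, unless strict=True.
    strict = kwargs.pop("strict", False)
    reversed_shape = []
    for shape in shapes:
        for i, size in enumerate(reversed(shape)):
            if i >= len(reversed_shape):
                reversed_shape.append(size)
            elif reversed_shape[i] == 1 and not strict:
                reversed_shape[i] = size
            elif reversed_shape[i] != size and (size != 1 or strict):
                raise ValueError(
                    "shape mismatch: objects cannot be broadcast to a single shape: {}".format(
                        " vs ".join(map(str, shapes))
                    )
                )
    return tuple(reversed(reversed_shape))

print(broadcast_shape((3, 1), (1, 4)))   # (3, 4)
print(broadcast_shape((5,), (2, 1, 5)))  # (2, 1, 5)
try:
    broadcast_shape((3, 1), (3, 4), strict=True)  # size 1 may not be resized
except ValueError as e:
    print("strict mode rejected:", e)
```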
188,004
import inspect from typing import Optional import torch import torch.nn as nn def get_module_forward_input_names(module: nn.Module): params = inspect.signature(module.forward).parameters param_names = [k for k, v in params.items() if not str(v).startswith("*")] return param_names
null
188,005
import inspect from typing import Optional import torch import torch.nn as nn The provided code snippet includes necessary dependencies for implementing the `weighted_average` function. Write a Python function `def weighted_average( x: torch.Tensor, weights: Optional[torch.Tensor] = None, dim=None ) -> torch.Tensor` to solve the following problem: Computes the weighted average of a given tensor across a given dim, masking values associated with weight zero, meaning instead of `nan * 0 = nan` you will get `0 * 0 = 0`. Parameters ---------- x Input tensor, of which the average must be computed. weights Weights tensor, of the same shape as `x`. dim The dim along which to average `x` Returns ------- Tensor: The tensor with values averaged along the specified `dim`. Here is the function: def weighted_average( x: torch.Tensor, weights: Optional[torch.Tensor] = None, dim=None ) -> torch.Tensor: """ Computes the weighted average of a given tensor across a given dim, masking values associated with weight zero, meaning instead of `nan * 0 = nan` you will get `0 * 0 = 0`. Parameters ---------- x Input tensor, of which the average must be computed. weights Weights tensor, of the same shape as `x`. dim The dim along which to average `x` Returns ------- Tensor: The tensor with values averaged along the specified `dim`. """ if weights is not None: weighted_tensor = torch.where(weights != 0, x * weights, torch.zeros_like(x)) sum_weights = torch.clamp( weights.sum(dim=dim) if dim else weights.sum(), min=1.0 ) return ( weighted_tensor.sum(dim=dim) if dim else weighted_tensor.sum() ) / sum_weights else: return x.mean(dim=dim)
Computes the weighted average of a given tensor across a given dim, masking values associated with weight zero, meaning instead of `nan * 0 = nan` you will get `0 * 0 = 0`. Parameters ---------- x Input tensor, of which the average must be computed. weights Weights tensor, of the same shape as `x`. dim The dim along which to average `x` Returns ------- Tensor: The tensor with values averaged along the specified `dim`.
188,006
from typing import List, Tuple import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from gluonts.time_feature import get_seasonality def linspace( backcast_length: int, forecast_length: int ) -> Tuple[np.ndarray, np.ndarray]: lin_space = np.linspace( -backcast_length, forecast_length, backcast_length + forecast_length, dtype=np.float32, ) b_ls = lin_space[:backcast_length] f_ls = lin_space[backcast_length:] return b_ls, f_ls
null
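To see what the helper above produces, here is a minimal run (NumPy only; the lengths 6 and 3 are arbitrary): one shared axis from -backcast to +forecast, split into the backcast and forecast halves.

```python
import numpy as np

def linspace(backcast_length, forecast_length):
    # One shared time axis covering [-backcast, forecast], split into
    # the backcast part and the forecast part (as used by N-BEATS bases).
    lin_space = np.linspace(
        -backcast_length,
        forecast_length,
        backcast_length + forecast_length,
        dtype=np.float32,
    )
    return lin_space[:backcast_length], lin_space[backcast_length:]

b_ls, f_ls = linspace(6, 3)
print(len(b_ls), len(f_ls))             # 6 3
print(float(b_ls[0]), float(f_ls[-1]))  # -6.0 3.0
```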
188,007
from typing import List, Optional, Tuple, Union import numpy as np import torch import torch.nn as nn from torch.distributions import Distribution from gluonts.core.component import validated from gluonts.torch.distributions.distribution_output import DistributionOutput from pts.model import weighted_average from pts.modules import MeanScaler, NOPScaler, FeatureEmbedder def prod(xs): p = 1 for x in xs: p *= x return p
null
188,008
from typing import List, Optional, Tuple, Union import numpy as np import torch import torch.nn as nn from torch.distributions import Distribution from gluonts.core.component import validated from gluonts.torch.modules.distribution_output import DistributionOutput from pts.model import weighted_average from pts.modules import MeanScaler, NOPScaler, FeatureEmbedder def prod(xs): p = 1 for x in xs: p *= x return p
null
188,009
from typing import List, Optional, Tuple import torch import torch.nn as nn from gluonts.core.component import validated from gluonts.torch.modules.distribution_output import DistributionOutput from pts.modules import MeanScaler, NOPScaler, FeatureEmbedder def prod(xs): p = 1 for x in xs: p *= x return p
null
188,010
from itertools import chain from typing import List, Optional, Dict import numpy as np import torch from gluonts.core.component import validated from gluonts.dataset.field_names import FieldName from gluonts.model.forecast_generator import QuantileForecastGenerator from gluonts.model.predictor import Predictor from gluonts.time_feature import ( TimeFeature, time_features_from_frequency_str, ) from gluonts.torch.model.predictor import PyTorchPredictor from gluonts.torch.util import copy_parameters from gluonts.transform import ( Transformation, Chain, ValidationSplitSampler, TestSplitSampler, ExpectedNumInstanceSampler, AddAgeFeature, AsNumpyArray, AddObservedValuesIndicator, AddTimeFeatures, VstackFeatures, SetField, ) from pts import Trainer from pts.model import PyTorchEstimator from pts.model.utils import get_module_forward_input_names from .tft_network import ( TemporalFusionTransformerPredictionNetwork, TemporalFusionTransformerTrainingNetwork, ) from .tft_transform import BroadcastTo, TFTInstanceSplitter def _default_feat_args(dims_or_cardinalities: List[int]): if dims_or_cardinalities: return dims_or_cardinalities return [1]
null
188,011
import os
import shutil
import subprocess
from os.path import join
import yaml
import tempfile
import argparse
from skimage.io import imread
import numpy as np
import librosa
from util import util
from tqdm import tqdm
import torch
from collections import OrderedDict
import cv2
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
from cog import BasePredictor, Input, Path
import scipy.io as sio
import albumentations as A
from options.test_audio2feature_options import TestOptions as FeatureOptions
from options.test_audio2headpose_options import TestOptions as HeadposeOptions
from options.test_feature2face_options import TestOptions as RenderOptions
from datasets import create_dataset
from models import create_model
from models.networks import APC_encoder
from util.visualizer import Visualizer
from funcs import utils, audio_funcs
from demo import write_video_with_audio
import warnings

def clean_folder(folder):
    for filename in os.listdir(folder):
        file_path = os.path.join(folder, filename)
        try:
            if os.path.isfile(file_path) or os.path.islink(file_path):
                os.unlink(file_path)
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
        except Exception as e:
            print('Failed to delete %s. Reason: %s' % (file_path, e))
null
188,012
import os import subprocess from os.path import join from tqdm import tqdm import numpy as np import torch from collections import OrderedDict import librosa from skimage.io import imread import cv2 import scipy.io as sio import argparse import yaml import albumentations as A import albumentations.pytorch from pathlib import Path from options.test_audio2feature_options import TestOptions as FeatureOptions from options.test_audio2headpose_options import TestOptions as HeadposeOptions from options.test_feature2face_options import TestOptions as RenderOptions from datasets import create_dataset from models import create_model from models.networks import APC_encoder import util.util as util from util.visualizer import Visualizer from funcs import utils from funcs import audio_funcs import warnings def write_video_with_audio(audio_path, output_path, prefix='pred_'): fps, fourcc = 60, cv2.VideoWriter_fourcc(*'DIVX') video_tmp_path = join(save_root, 'tmp.avi') out = cv2.VideoWriter(video_tmp_path, fourcc, fps, (Renderopt.loadSize, Renderopt.loadSize)) for j in tqdm(range(nframe), position=0, desc='writing video'): img = cv2.imread(join(save_root, prefix + str(j+1) + '.jpg')) out.write(img) out.release() cmd = 'ffmpeg -i "' + video_tmp_path + '" -i "' + audio_path + '" -codec copy -shortest "' + output_path + '"' subprocess.call(cmd, shell=True) os.remove(video_tmp_path) # remove the template video
null
188,013
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm The provided code snippet includes necessary dependencies for implementing the `compute_mel_one_sequence` function. Write a Python function `def compute_mel_one_sequence(audio, hop_length=int(16000/120), winlen=1/60, winstep=0.5/60, sr=16000, fps=60, device='cpu')` to solve the following problem: compute mel for an audio sequence. Here is the function: def compute_mel_one_sequence(audio, hop_length=int(16000/120), winlen=1/60, winstep=0.5/60, sr=16000, fps=60, device='cpu'): ''' compute mel for an audio sequence. ''' device = torch.device(device) Audio2Mel_torch = audio_funcs.Audio2Mel(n_fft=512, hop_length=int(16000/120), win_length=int(16000/60), sampling_rate=16000, n_mel_channels=80, mel_fmin=90, mel_fmax=7600.0).to(device) nframe = int(audio.shape[0] / 16000 * 60) mel_nframe = 2 * nframe mel_frame_len = int(sr * winlen) mel_frame_step = sr * winstep mel80s = np.zeros([mel_nframe, 80]) for i in range(mel_nframe): # for i in tqdm(range(mel_nframe)): st = int(i * mel_frame_step) audio_clip = audio[st : st + mel_frame_len] if len(audio_clip) < mel_frame_len: audio_clip = np.concatenate([audio_clip, np.zeros([mel_frame_len - len(audio_clip)])]) audio_clip_device = torch.from_numpy(audio_clip).unsqueeze(0).unsqueeze(0).to(device).float() mel80s[i] = Audio2Mel_torch(audio_clip_device).cpu().numpy()[0].T # [1, 80] return mel80s
compute mel for an audio sequence.
188,014
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm The provided code snippet includes necessary dependencies for implementing the `KNN` function. Write a Python function `def KNN(feats, feat_database, K=10)` to solve the following problem: compute KNN for feat in feat base Here is the function: def KNN(feats, feat_database, K=10): ''' compute KNN for feat in feat base ''' tree = KDTree(feat_database, leaf_size=100000) print('start computing KNN ...') st = time.time() dist, ind = tree.query(feats, k=K) et = time.time() print('Taken time: ', et-st) return dist, ind
compute KNN for feat in feat base
188,015
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm def KNN_with_torch(feats, feat_database, K=10): feats = torch.from_numpy(feats)#.cuda() feat_database = torch.from_numpy(feat_database)#.cuda() # Training feat_base_norm = (feat_database ** 2).sum(-1) # print('start computing KNN ...') # st = time.time() feats_norm = (feats ** 2).sum(-1) diss = (feats_norm.view(-1, 1) + feat_base_norm.view(1, -1) - 2 * feats @ feat_database.t() # Rely on cuBLAS for better performance! ) ind = diss.topk(K, dim=1, largest=False).indices # et = time.time() # print('Taken time: ', et-st) return ind.cpu().numpy()
null
188,016
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm def solve_LLE_projection(feat, feat_base): '''find LLE projection weights given feat base and target feat Args: feat: [ndim, ] target feat feat_base: [K, ndim] K-nearest feat base ======================================= We need to solve the following function ``` min|| feat - \sum_0^k{w_i} * feat_base_i ||, s.t. \sum_0^k{w_i}=1 ``` equals to: ft = w1*f1 + w2*f2 + ... + wk*fk, s.t. w1+w2+...+wk=1 = (1-w2-...-wk)*f1 + w2*f2 + ... + wk*fk ft-f1 = w2*(f2-f1) + w3*(f3-f1) + ... + wk*(fk-f1) ft-f1 = (f2-f1, f3-f1, ..., fk-f1) dot (w2, w3, ..., wk).T B = A dot w_, here, B: [ndim,] A: [ndim, k-1], w_: [k-1,] Finally, ft' = (1-w2-..wk, w2, ..., wk) dot (f1, f2, ..., fk) ======================================= Returns: w: [K,] linear weights, sums to 1 ft': [ndim,] reconstructed feats ''' K, ndim = feat_base.shape if K == 1: feat_fuse = feat_base[0] w = np.array([1]) else: w = np.zeros(K) B = feat - feat_base[0] # [ndim,] A = (feat_base[1:] - feat_base[0]).T # [ndim, K-1] AT = A.T w[1:] = solve(AT.dot(A), AT.dot(B)) w[0] = 1 - w[1:].sum() feat_fuse = w.dot(feat_base) return w, feat_fuse def compute_LLE_projection_frame(feats, feat_database, ind): nframe = feats.shape[0] feat_fuse = np.zeros_like(feats) w = np.zeros([nframe, ind.shape[1]]) current_K_feats = feat_database[ind] w, feat_fuse = solve_LLE_projection(feats, current_K_feats) return w, feat_fuse
null
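The constrained least-squares step in `solve_LLE_projection` above can be exercised on a toy 2-D case (NumPy only; the basis points are made up for illustration): when the target lies in the affine hull of the basis, the weights sum to one and reconstruct it exactly.

```python
import numpy as np
from numpy.linalg import solve

def solve_LLE_projection(feat, feat_base):
    # Minimise ||feat - w @ feat_base|| subject to w.sum() == 1 by
    # eliminating w[0] = 1 - sum(w[1:]) and solving the normal equations.
    K, ndim = feat_base.shape
    if K == 1:
        return np.array([1.0]), feat_base[0]
    w = np.zeros(K)
    B = feat - feat_base[0]               # [ndim,]
    A = (feat_base[1:] - feat_base[0]).T  # [ndim, K-1]
    w[1:] = solve(A.T.dot(A), A.T.dot(B))
    w[0] = 1 - w[1:].sum()
    return w, w.dot(feat_base)

base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # K=3 basis feats
target = np.array([0.25, 0.25])
w, recon = solve_LLE_projection(target, base)
print(w)                           # [0.5  0.25 0.25]
print(np.allclose(recon, target))  # True: target is in the affine hull
```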
188,017
import sys
from . import audio_funcs
import numpy as np
from math import cos, sin
import torch
from numpy.linalg import solve
from scipy.ndimage import gaussian_filter1d
from sklearn.neighbors import KDTree
import time
from tqdm import tqdm

def solve_LLE_projection(feat, feat_base):
    # find LLE projection weights for a [ndim,] target feat
    # given a [K, ndim] feat base; the weights sum to 1
    K, ndim = feat_base.shape
    if K == 1:
        feat_fuse = feat_base[0]
        w = np.array([1])
    else:
        w = np.zeros(K)
        B = feat - feat_base[0]               # [ndim,]
        A = (feat_base[1:] - feat_base[0]).T  # [ndim, K-1]
        AT = A.T
        w[1:] = solve(AT.dot(A), AT.dot(B))
        w[0] = 1 - w[1:].sum()
        feat_fuse = w.dot(feat_base)
    return w, feat_fuse

def compute_LLE_projection_all_frame(feats, feat_database, ind, nframe):
    nframe = feats.shape[0]
    feat_fuse = np.zeros_like(feats)
    w = np.zeros([nframe, ind.shape[1]])
    for i in tqdm(range(nframe), desc='LLE projection'):
        current_K_feats = feat_database[ind[i]]
        w[i], feat_fuse[i] = solve_LLE_projection(feats[i], current_K_feats)
    return w, feat_fuse
null
188,018
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm def angle2matrix(angles, gradient='false'): ''' get rotation matrix from three rotation angles(degree). right-handed. Args: angles: [3,]. x, y, z angles x: pitch. positive for looking down. y: yaw. positive for looking left. z: roll. positive for tilting head right. gradient(str): whether to compute gradient matrix: dR/d_x,y,z Returns: R: [3, 3]. rotation matrix. ''' x, y, z = np.deg2rad(angles[0]), np.deg2rad(angles[1]), np.deg2rad(angles[2]) # x Rx=np.array([[1, 0, 0], [0, cos(x), -sin(x)], [0, sin(x), cos(x)]]) # y Ry=np.array([[ cos(y), 0, sin(y)], [ 0, 1, 0], [-sin(y), 0, cos(y)]]) # z Rz=np.array([[cos(z), -sin(z), 0], [sin(z), cos(z), 0], [ 0, 0, 1]]) R=Rz.dot(Ry.dot(Rx)) #R=Rx.dot(Ry.dot(Rz)) if gradient != 'true': return R.astype(np.float32) elif gradient == 'true': # gradident matrix dRxdx = np.array([[0, 0, 0], [0, -sin(x), -cos(x)], [0, cos(x), -sin(x)]]) dRdx = Rz.dot(Ry.dot(dRxdx)) * np.pi/180 dRydy = np.array([[-sin(y), 0, cos(y)], [ 0, 0, 0], [-cos(y), 0, -sin(y)]]) dRdy = Rz.dot(dRydy.dot(Rx)) * np.pi/180 dRzdz = np.array([[-sin(z), -cos(z), 0], [ cos(z), -sin(z), 0], [ 0, 0, 0]]) dRdz = dRzdz.dot(Ry.dot(Rx)) * np.pi/180 return R.astype(np.float32), [dRdx.astype(np.float32), dRdy.astype(np.float32), dRdz.astype(np.float32)] The provided code snippet includes necessary dependencies for implementing the `project_landmarks` function. 
Write a Python function `def project_landmarks(camera_intrinsic, viewpoint_R, viewpoint_T, scale, headposes, pts_3d)` to solve the following problem: project 2d landmarks given predicted 3d landmarks & headposes and user-defined camera & viewpoint parameters Here is the function: def project_landmarks(camera_intrinsic, viewpoint_R, viewpoint_T, scale, headposes, pts_3d): ''' project 2d landmarks given predicted 3d landmarks & headposes and user-defined camera & viewpoint parameters ''' rot, trans = angle2matrix(headposes[:3]), headposes[3:][:, None] pts3d_headpose = scale * rot.dot(pts_3d.T) + trans pts3d_viewpoint = viewpoint_R.dot(pts3d_headpose) + viewpoint_T[:, None] pts2d_project = camera_intrinsic.dot(pts3d_viewpoint) pts2d_project[:2, :] /= pts2d_project[2, :] # divide z pts2d_project = pts2d_project[:2, :].T return pts2d_project, rot, trans
project 2d landmarks given predicted 3d landmarks & headposes and user-defined camera & viewpoint parameters
188,019
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm The provided code snippet includes necessary dependencies for implementing the `landmark_smooth_3d` function. Write a Python function `def landmark_smooth_3d(pts3d, smooth_sigma=0, area='only_mouth')` to solve the following problem: smooth the input 3d landmarks using gaussian filters on each dimension. Args: pts3d: [N, 73, 3] Here is the function: def landmark_smooth_3d(pts3d, smooth_sigma=0, area='only_mouth'): ''' smooth the input 3d landmarks using gaussian filters on each dimension. Args: pts3d: [N, 73, 3] ''' # per-landmark smooth if not smooth_sigma == 0: if area == 'all': pts3d = gaussian_filter1d(pts3d.reshape(-1, 73*3), smooth_sigma, axis=0).reshape(-1, 73, 3) elif area == 'only_mouth': mouth_pts3d = pts3d[:, 46:64, :].copy() mouth_pts3d = gaussian_filter1d(mouth_pts3d.reshape(-1, 18*3), smooth_sigma, axis=0).reshape(-1, 18, 3) pts3d = gaussian_filter1d(pts3d.reshape(-1, 73*3), smooth_sigma, axis=0).reshape(-1, 73, 3) pts3d[:, 46:64, :] = mouth_pts3d return pts3d
smooth the input 3d landmarks using gaussian filters on each dimension. Args: pts3d: [N, 73, 3]
188,020
import sys
from . import audio_funcs
import numpy as np
from math import cos, sin
import torch
from numpy.linalg import solve
from scipy.ndimage import gaussian_filter1d
from sklearn.neighbors import KDTree
import time
from tqdm import tqdm

lower_mouth = [53, 54, 55, 56, 57, 58, 59, 60]
upper_mouth = [46, 47, 48, 49, 50, 51, 52, 61, 62, 63]

The provided code snippet includes necessary dependencies for implementing the `mouth_pts_AMP` function. Write a Python function `def mouth_pts_AMP(pts3d, is_delta=True, method='XY', paras=[1,1])` to solve the following problem: mouth region AMP to control the reaction amplitude. method: 'XY', 'delta', 'XYZ', 'LowerMore' or 'CloseSmall' Here is the function:
def mouth_pts_AMP(pts3d, is_delta=True, method='XY', paras=[1,1]):
    '''
    mouth region AMP to control the reaction amplitude.
    method: 'XY', 'delta', 'XYZ', 'LowerMore' or 'CloseSmall'
    '''
    if method == 'XY':
        AMP_scale_x, AMP_scale_y = paras
        if is_delta:
            pts3d[:, 46:64, 0] *= AMP_scale_x
            pts3d[:, 46:64, 1] *= AMP_scale_y
        else:
            mean_mouth3d_xy = pts3d[:, 46:64, :2].mean(axis=0)
            pts3d[:, 46:64, 0] += (AMP_scale_x-1) * (pts3d[:, 46:64, 0] - mean_mouth3d_xy[:,0])
            pts3d[:, 46:64, 1] += (AMP_scale_y-1) * (pts3d[:, 46:64, 1] - mean_mouth3d_xy[:,1])
    elif method == 'delta':
        AMP_scale_x, AMP_scale_y = paras
        if is_delta:
            diff = AMP_scale_x * (pts3d[1:, 46:64] - pts3d[:-1, 46:64])
            pts3d[1:, 46:64] += diff
    elif method == 'XYZ':
        AMP_scale_x, AMP_scale_y, AMP_scale_z = paras
        if is_delta:
            pts3d[:, 46:64, 0] *= AMP_scale_x
            pts3d[:, 46:64, 1] *= AMP_scale_y
            pts3d[:, 46:64, 2] *= AMP_scale_z
    elif method == 'LowerMore':
        upper_x, upper_y, upper_z, lower_x, lower_y, lower_z = paras
        if is_delta:
            pts3d[:, upper_mouth, 0] *= upper_x
            pts3d[:, upper_mouth, 1] *= upper_y
            pts3d[:, upper_mouth, 2] *= upper_z
            pts3d[:, lower_mouth, 0] *= lower_x
            pts3d[:, lower_mouth, 1] *= lower_y
            pts3d[:, lower_mouth, 2] *= lower_z
    elif method == 'CloseSmall':
        open_x, open_y, open_z, close_x, close_y, close_z = paras
        nframe = pts3d.shape[0]
        for i in tqdm(range(nframe), desc='AMP mouth..'):
            if sum(pts3d[i, upper_mouth, 1] > 0) + sum(pts3d[i, lower_mouth, 1] < 0) > 16 * 0.3:
                # open
                pts3d[i, 46:64, 0] *= open_x
                pts3d[i, 46:64, 1] *= open_y
                pts3d[i, 46:64, 2] *= open_z
            else:
                # close: scale only the current frame i
                pts3d[i, 46:64, 0] *= close_x
                pts3d[i, 46:64, 1] *= close_y
                pts3d[i, 46:64, 2] *= close_z
    return pts3d
mouth region AMP to control the reaction amplitude. method: 'XY', 'delta', 'XYZ', 'LowerMore' or 'CloseSmall'
188,021
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm upper_outer_lip = list(range(47, 52)) upper_inner_lip = [63, 62, 61] lower_inner_lip = [58, 59, 60] lower_outer_lip = list(range(57, 52, -1)) The provided code snippet includes necessary dependencies for implementing the `solve_intersect_mouth` function. Write a Python function `def solve_intersect_mouth(pts3d)` to solve the following problem: solve the generated intersec lips, usually happens in mouth AMP usage. Args: pts3d: [N, 73, 3] Here is the function: def solve_intersect_mouth(pts3d): ''' solve the generated intersec lips, usually happens in mouth AMP usage. Args: pts3d: [N, 73, 3] ''' upper_inner = pts3d[:, upper_inner_lip] lower_inner = pts3d[:, lower_inner_lip] lower_inner_y = lower_inner[:,:,1] upper_inner_y = upper_inner[:,:,1] # all three inner lip flip flip = lower_inner_y > upper_inner_y flip = np.where(flip.sum(axis=1) == 3)[0] # flip frames inner_y_diff = lower_inner_y[flip] - upper_inner_y[flip] half_inner_y_diff = inner_y_diff * 0.5 # upper inner pts3d[flip[:,None], upper_inner_lip, 1] += half_inner_y_diff # lower inner pts3d[flip[:,None], lower_inner_lip, 1] -= half_inner_y_diff # upper outer pts3d[flip[:,None], upper_outer_lip, 1] += half_inner_y_diff.mean() # lower outer pts3d[flip[:,None], lower_outer_lip, 1] -= half_inner_y_diff.mean() return pts3d
solve the generated intersecting lips, which usually happen with mouth AMP usage. Args: pts3d: [N, 73, 3]
188,022
import sys from . import audio_funcs import numpy as np from math import cos, sin import torch from numpy.linalg import solve from scipy.ndimage import gaussian_filter1d from sklearn.neighbors import KDTree import time from tqdm import tqdm def headpose_smooth(headpose, smooth_sigmas=[0,0], method='gaussian'): rot_sigma, trans_sigma = smooth_sigmas rot = gaussian_filter1d(headpose.reshape(-1, 6)[:,:3], rot_sigma, axis=0).reshape(-1, 3) trans = gaussian_filter1d(headpose.reshape(-1, 6)[:,3:], trans_sigma, axis=0).reshape(-1, 3) headpose_smooth = np.concatenate([rot, trans], axis=1) return headpose_smooth
null
188,023
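The smoothing row above is easy to verify on a jittery pose track. The sketch below is a condensed, numpy-only version of `headpose_smooth` (the unused `method` parameter is dropped, and positive sigmas are passed since a Gaussian kernel needs a positive width):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def headpose_smooth(headpose, smooth_sigmas=(1.0, 1.0)):
    # condensed version of the row above: filter rotation and translation separately
    rot_sigma, trans_sigma = smooth_sigmas
    hp = headpose.reshape(-1, 6)
    rot = gaussian_filter1d(hp[:, :3], rot_sigma, axis=0)      # rotation part
    trans = gaussian_filter1d(hp[:, 3:], trans_sigma, axis=0)  # translation part
    return np.concatenate([rot, trans], axis=1)

# Jittery track: a constant pose plus per-frame noise.
rng = np.random.default_rng(0)
track = np.tile([0.1, 0.0, 0.0, 5.0, 2.0, 50.0], (100, 1))
track += 0.05 * rng.standard_normal((100, 6))
smoothed = headpose_smooth(track, smooth_sigmas=(2.0, 2.0))
# Smoothing should cut frame-to-frame jitter while keeping the mean pose.
print(np.abs(np.diff(track, axis=0)).mean(), np.abs(np.diff(smoothed, axis=0)).mean())
```

Filtering rotation and translation with separate sigmas is useful because the two have very different scales (radians vs. scene units), so they usually want different amounts of smoothing.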
import os
import os.path
import math
import torch
import torch.utils.data
import numpy as np
import librosa
from librosa.filters import mel as librosa_mel_fn
import torch.nn.functional as F

The provided code snippet includes necessary dependencies for implementing the `mu_law_encoding` function. Write a Python function `def mu_law_encoding(data, mu=255)` to solve the following problem: Encode the original audio via mu-law companding and mu-bit quantization. Here is the function:

def mu_law_encoding(data, mu=255):
    '''Encode the original audio via mu-law companding and mu-bit quantization.'''
    # mu-law companding
    mu_x = np.sign(data) * np.log(1 + mu * np.abs(data)) / np.log(mu + 1)
    # mu-bit quantization from [-1, 1] to [0, mu]
    mu_x = (mu_x + 1) / 2 * mu + 0.5
    return mu_x.astype(np.int32)
Encode the original audio via mu-law companding and mu-bit quantization.
188,024
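The encoding above is standard mu-law companding: the logarithm gives small amplitudes far more of the 256 codes than a linear quantizer would. A quick check (re-declaring the function so the block is self-contained):

```python
import numpy as np

def mu_law_encoding(data, mu=255):
    # mu-law companding followed by quantization to integer codes in [0, mu]
    mu_x = np.sign(data) * np.log(1 + mu * np.abs(data)) / np.log(mu + 1)
    return ((mu_x + 1) / 2 * mu + 0.5).astype(np.int32)

x = np.array([-1.0, -0.01, 0.0, 0.01, 1.0])
codes = mu_law_encoding(x)
print(codes)  # -> [  0  98 128 157 255]
```

Note that an input of only 1% of full scale already lands about 30 codes away from the midpoint (128), which is the companding gain for quiet signals.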
import os
import os.path
import math
import torch
import torch.utils.data
import numpy as np
import librosa
from librosa.filters import mel as librosa_mel_fn
import torch.nn.functional as F

The provided code snippet includes necessary dependencies for implementing the `mu_law_decoding` function. Write a Python function `def mu_law_decoding(data, mu=255)` to solve the following problem: Invert the mu-law compressed and quantized data. Here is the function:

def mu_law_decoding(data, mu=255):
    '''Invert the mu-law compressed and quantized data.'''
    # dequantization
    y = 2 * (data.astype(np.float32) / mu) - 1
    # inverse mu-law companding
    x = np.sign(y) * (1.0 / mu) * ((1.0 + mu)**abs(y) - 1.0)
    return x
Invert the mu-law compressed and quantized data.
188,025
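Decoding inverts both steps, so composing the two rows above should reconstruct a signal up to quantization error, which is largest near |x| = 1 where the mu-law steps are coarsest. A self-contained roundtrip check:

```python
import numpy as np

def mu_law_encoding(data, mu=255):
    mu_x = np.sign(data) * np.log(1 + mu * np.abs(data)) / np.log(mu + 1)
    return ((mu_x + 1) / 2 * mu + 0.5).astype(np.int32)

def mu_law_decoding(data, mu=255):
    y = 2 * (data.astype(np.float32) / mu) - 1
    return np.sign(y) * (1.0 / mu) * ((1.0 + mu) ** np.abs(y) - 1.0)

x = np.linspace(-1, 1, 1001)                 # full-range test signal
recon = mu_law_decoding(mu_law_encoding(x))
print(np.abs(recon - x).max())               # roundtrip error stays small
```

With mu = 255 the worst-case roundtrip error is on the order of 0.02 at full scale, while near-zero amplitudes come back with far finer resolution, which is exactly the trade-off companding is after.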
import os
import os.path
import math
import torch
import torch.utils.data
import numpy as np
import librosa
from librosa.filters import mel as librosa_mel_fn
import torch.nn.functional as F

The provided code snippet includes necessary dependencies for implementing the `inject_gaussian_noise` function. Write a Python function `def inject_gaussian_noise(data, noise_factor, use_torch=False)` to solve the following problem: Inject random Gaussian noise (mean=0, std=1) into an audio clip. In my tests, a reasonable factor range is [0, 0.01]; larger values add too much noise and smaller ones are negligible. Args: data: [n,] original audio sequence noise_factor (float): scale factor use_torch (bool): optional; if True, the input data and implementation use torch methods. Returns: augmented_data: [n,] noised audio clip Here is the function:

def inject_gaussian_noise(data, noise_factor, use_torch=False):
    '''
    Inject random Gaussian noise (mean=0, std=1) into an audio clip.

    In my tests, a reasonable factor range is [0, 0.01]; larger values
    add too much noise and smaller ones are negligible.

    Args:
        data: [n,] original audio sequence
        noise_factor (float): scale factor
        use_torch (bool): optional; if True, the input data and
            implementation use torch methods.
    Returns:
        augmented_data: [n,] noised audio clip
    '''
    if not use_torch:
        augmented_data = data + noise_factor * np.random.normal(0, 1, len(data))
        # cast back to the same data type
        augmented_data = augmented_data.astype(type(data[0]))
    # use torch
    else:
        augmented_data = data + noise_factor * torch.randn(1).cuda()
    return augmented_data
Inject random Gaussian noise (mean=0, std=1) into an audio clip.

In my tests, a reasonable factor range is [0, 0.01]; larger values add too much noise and smaller ones are negligible.

Args:
    data: [n,] original audio sequence
    noise_factor (float): scale factor
    use_torch (bool): optional; if True, the input data and implementation use torch methods.

Returns:
    augmented_data: [n,] noised audio clip
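The numpy branch above can be smoke-tested directly. The sketch below re-declares only that branch (the torch branch needs CUDA; note it also draws a single noise scalar via `torch.randn(1)` rather than per-sample noise, so the two branches are not equivalent):

```python
import numpy as np

def inject_gaussian_noise(data, noise_factor):
    # numpy-only branch of the function above
    augmented = data + noise_factor * np.random.normal(0, 1, len(data))
    # cast back to the input's element dtype (the sum is float64)
    return augmented.astype(type(data[0]))

# One second of a 1 Hz sine as a stand-in "audio" clip at 16 kHz.
clip = np.sin(np.linspace(0, 2 * np.pi, 16000)).astype(np.float32)
noisy = inject_gaussian_noise(clip, noise_factor=0.005)
print(noisy.dtype, float(np.abs(noisy - clip).max()))
```

With `noise_factor=0.005` the perturbation stays well inside the recommended [0, 0.01] range, and the `astype` keeps the augmented clip in the original `float32` dtype.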