<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tabs_or_spaces(physical_line, indent_char):
r"""Never mix tabs and spaces. The most popular way of indenting Python is with spaces only. The second-most popular way is with tabs only. Code indented with a mixture of tabs and spaces should be converted to using spaces exclusively. When invoking the Python command line interpreter with the -t option, it issues warnings about code that illegally mixes tabs and spaces. When using -tt these warnings become errors. These options are highly recommended! Okay: if a == 0:\n a = 1\n b = 1 E101: if a == 0:\n a = 1\n\tb = 1 """ |
indent = INDENT_REGEX.match(physical_line).group(1)
for offset, char in enumerate(indent):
if char != indent_char:
return offset, "E101 indentation contains mixed spaces and tabs" |
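As a quick illustration, here is a self-contained sketch of the check above; `INDENT_REGEX` is a module-level constant in pycodestyle and is repeated here (as `r'([ \t]*)'`, which matches its upstream definition) so the snippet runs on its own:

```python
import re

# INDENT_REGEX as defined at module level in pycodestyle,
# repeated here so the sketch is self-contained.
INDENT_REGEX = re.compile(r'([ \t]*)')

def tabs_or_spaces(physical_line, indent_char):
    # Return (offset, message) when the indentation mixes tabs and spaces.
    indent = INDENT_REGEX.match(physical_line).group(1)
    for offset, char in enumerate(indent):
        if char != indent_char:
            return offset, "E101 indentation contains mixed spaces and tabs"

# In a space-indented file (indent_char == ' '), a tab triggers E101:
print(tabs_or_spaces("    a = 1\n", " "))  # None: indentation is consistent
print(tabs_or_spaces("\tb = 1\n", " "))    # (0, 'E101 indentation contains mixed spaces and tabs')
```

Note that the returned offset points at the first character that disagrees with the file's dominant indent character.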
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tabs_obsolete(physical_line):
r"""For new projects, spaces-only are strongly recommended over tabs. Okay: if True:\n return W191: if True:\n\treturn """ |
indent = INDENT_REGEX.match(physical_line).group(1)
if '\t' in indent:
return indent.index('\t'), "W191 indentation contains tabs" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trailing_whitespace(physical_line):
r"""Trailing whitespace is superfluous. The warning returned varies on whether the line itself is blank, for easier filtering for those who want to indent their blank lines. Okay: spam(1)\n# W291: spam(1) \n# W293: class Foo(object):\n    \n    bang = 12 """ |
physical_line = physical_line.rstrip('\n') # chr(10), newline
physical_line = physical_line.rstrip('\r') # chr(13), carriage return
physical_line = physical_line.rstrip('\x0c') # chr(12), form feed, ^L
stripped = physical_line.rstrip(' \t\v')
if physical_line != stripped:
if stripped:
return len(stripped), "W291 trailing whitespace"
else:
return 0, "W293 blank line contains whitespace" |
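A self-contained sketch of the check above, showing how the blank/non-blank distinction selects between W291 and W293:

```python
def trailing_whitespace(physical_line):
    # Strip line terminators first so only intra-line whitespace remains.
    physical_line = physical_line.rstrip('\n')    # chr(10), newline
    physical_line = physical_line.rstrip('\r')    # chr(13), carriage return
    physical_line = physical_line.rstrip('\x0c')  # chr(12), form feed, ^L
    stripped = physical_line.rstrip(' \t\v')
    if physical_line != stripped:
        if stripped:
            return len(stripped), "W291 trailing whitespace"
        else:
            return 0, "W293 blank line contains whitespace"

print(trailing_whitespace("spam(1)\n"))    # None: nothing trailing
print(trailing_whitespace("spam(1)  \n"))  # (7, 'W291 trailing whitespace')
print(trailing_whitespace("   \n"))        # (0, 'W293 blank line contains whitespace')
```

Reporting W293 separately lets users who deliberately indent their blank lines filter it out without losing W291.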
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trailing_blank_lines(physical_line, lines, line_number, total_lines):
r"""Trailing blank lines are superfluous. Okay: spam(1) W391: spam(1)\n However the last line should end with a new line (warning W292). """ |
if line_number == total_lines:
stripped_last_line = physical_line.rstrip()
if not stripped_last_line:
return 0, "W391 blank line at end of file"
if stripped_last_line == physical_line:
return len(physical_line), "W292 no newline at end of file" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def maximum_line_length(physical_line, max_line_length, multiline, noqa):
r"""Limit all lines to a maximum of 79 characters. There are still many devices around that are limited to 80 character lines; plus, limiting windows to 80 characters makes it possible to have several windows side-by-side. The default wrapping on such devices looks ugly. Therefore, please limit all lines to a maximum of 79 characters. For flowing long blocks of text (docstrings or comments), limiting the length to 72 characters is recommended. Reports error E501. """ |
line = physical_line.rstrip()
length = len(line)
if length > max_line_length and not noqa:
# Special case for long URLs in multi-line docstrings or comments,
# but still report the error when the 72 first chars are whitespaces.
chunks = line.split()
if ((len(chunks) == 1 and multiline) or
(len(chunks) == 2 and chunks[0] == '#')) and \
len(line) - len(chunks[-1]) < max_line_length - 7:
return
if hasattr(line, 'decode'): # Python 2
# The line could contain multi-byte characters
try:
length = len(line.decode('utf-8'))
except UnicodeError:
pass
if length > max_line_length:
return (max_line_length, "E501 line too long "
"(%d > %d characters)" % (length, max_line_length)) |
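A simplified, self-contained sketch of the check above (the Python 2 `decode` branch is omitted here, so `length` is just the character count of the stripped line):

```python
def maximum_line_length(physical_line, max_line_length, multiline, noqa):
    line = physical_line.rstrip()
    length = len(line)
    if length > max_line_length and not noqa:
        # Special case: a single long chunk (e.g. a URL) inside a multiline
        # string, or a comment consisting of one long word, is tolerated
        # as long as it starts early enough on the line.
        chunks = line.split()
        if ((len(chunks) == 1 and multiline) or
                (len(chunks) == 2 and chunks[0] == '#')) and \
                len(line) - len(chunks[-1]) < max_line_length - 7:
            return
        return (max_line_length, "E501 line too long "
                "(%d > %d characters)" % (length, max_line_length))

print(maximum_line_length("x = 1", 79, False, False))    # None
print(maximum_line_length("x" * 100, 79, False, False))  # (79, 'E501 line too long (100 > 79 characters)')
# A long URL alone on a docstring line is tolerated:
print(maximum_line_length("http://example.com/" + "a" * 90, 79, True, False))  # None
```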
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def blank_lines(logical_line, blank_lines, indent_level, line_number, blank_before, previous_logical, previous_unindented_logical_line, previous_indent_level, lines):
r"""Separate top-level function and class definitions with two blank lines. Method definitions inside a class are separated by a single blank line. Extra blank lines may be used (sparingly) to separate groups of related functions. Blank lines may be omitted between a bunch of related one-liners (e.g. a set of dummy implementations). Use blank lines in functions, sparingly, to indicate logical sections. Okay: def a():\n    pass\n\n\ndef b():\n    pass Okay: def a():\n    pass\n\n\nasync def b():\n    pass Okay: def a():\n    pass\n\n\n# Foo\n# Bar\n\ndef b():\n    pass Okay: default = 1\nfoo = 1 Okay: classify = 1\nfoo = 1 E301: class Foo:\n    b = 0\n    def bar():\n        pass E302: def a():\n    pass\n\ndef b(n):\n    pass E302: def a():\n    pass\n\nasync def b(n):\n    pass E303: def a():\n    pass\n\n\n\ndef b(n):\n    pass E303: def a():\n\n\n\n    pass E304: @decorator\n\ndef a():\n    pass E305: def a():\n    pass\na() E306: def a():\n    def b():\n        pass\n    def c():\n        pass """ |
if line_number < 3 and not previous_logical:
return # Don't expect blank lines before the first line
if previous_logical.startswith('@'):
if blank_lines:
yield 0, "E304 blank lines found after function decorator"
elif blank_lines > 2 or (indent_level and blank_lines == 2):
yield 0, "E303 too many blank lines (%d)" % blank_lines
elif STARTSWITH_TOP_LEVEL_REGEX.match(logical_line):
if indent_level:
if not (blank_before or previous_indent_level < indent_level or
DOCSTRING_REGEX.match(previous_logical)):
ancestor_level = indent_level
nested = False
# Search backwards for a def ancestor or tree root (top level).
for line in lines[line_number - 2::-1]:
if line.strip() and expand_indent(line) < ancestor_level:
ancestor_level = expand_indent(line)
nested = line.lstrip().startswith('def ')
if nested or ancestor_level == 0:
break
if nested:
yield 0, "E306 expected 1 blank line before a " \
"nested definition, found 0"
else:
yield 0, "E301 expected 1 blank line, found 0"
elif blank_before != 2:
yield 0, "E302 expected 2 blank lines, found %d" % blank_before
elif (logical_line and not indent_level and blank_before != 2 and
previous_unindented_logical_line.startswith(('def ', 'class '))):
yield 0, "E305 expected 2 blank lines after " \
"class or function definition, found %d" % blank_before |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whitespace_around_keywords(logical_line):
r"""Avoid extraneous whitespace around keywords. Okay: True and False E271: True and  False E272: True  and False E273: True and\tFalse E274: True\tand False """ |
for match in KEYWORD_REGEX.finditer(logical_line):
before, after = match.groups()
if '\t' in before:
yield match.start(1), "E274 tab before keyword"
elif len(before) > 1:
yield match.start(1), "E272 multiple spaces before keyword"
if '\t' in after:
yield match.start(2), "E273 tab after keyword"
elif len(after) > 1:
yield match.start(2), "E271 multiple spaces after keyword" |
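A self-contained sketch of the check above. The pattern below is a simplified stand-in for pycodestyle's `KEYWORD_REGEX`, restricted to the operator-like keywords, and should be treated as an assumption rather than the upstream definition:

```python
import re

# Simplified approximation of KEYWORD_REGEX: capture the whitespace
# before (group 1) and after (group 2) an operator-like keyword.
KEYWORD_REGEX = re.compile(r'(\s*)\b(?:and|in|is|not|or)\b(\s*)')

def whitespace_around_keywords(logical_line):
    for match in KEYWORD_REGEX.finditer(logical_line):
        before, after = match.groups()
        if '\t' in before:
            yield match.start(1), "E274 tab before keyword"
        elif len(before) > 1:
            yield match.start(1), "E272 multiple spaces before keyword"
        if '\t' in after:
            yield match.start(2), "E273 tab after keyword"
        elif len(after) > 1:
            yield match.start(2), "E271 multiple spaces after keyword"

print(list(whitespace_around_keywords("True and False")))   # []
print(list(whitespace_around_keywords("True  and False")))  # E272 at offset 4
print(list(whitespace_around_keywords("True and  False")))  # E271 at offset 8
```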
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def missing_whitespace(logical_line):
r"""Each comma, semicolon or colon should be followed by whitespace. Okay: [a, b] Okay: (3,) Okay: a[1:4] Okay: a[:4] Okay: a[1:] Okay: a[1:4:2] E231: ['a','b'] E231: foo(bar,baz) E231: [{'a':'b'}] """ |
line = logical_line
for index in range(len(line) - 1):
char = line[index]
if char in ',;:' and line[index + 1] not in WHITESPACE:
before = line[:index]
if char == ':' and before.count('[') > before.count(']') and \
before.rfind('{') < before.rfind('['):
continue # Slice syntax, no space required
if char == ',' and line[index + 1] == ')':
continue # Allow tuple with only one element: (3,)
yield index, "E231 missing whitespace after '%s'" % char |
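A self-contained sketch of the check above; `WHITESPACE` is defined locally as pycodestyle's space-and-tab set so the snippet runs on its own:

```python
WHITESPACE = frozenset(' \t')  # as in pycodestyle

def missing_whitespace(logical_line):
    line = logical_line
    for index in range(len(line) - 1):
        char = line[index]
        if char in ',;:' and line[index + 1] not in WHITESPACE:
            before = line[:index]
            # An unclosed '[' that is more recent than any '{' means this
            # colon is slice syntax, where no space is required.
            if char == ':' and before.count('[') > before.count(']') and \
                    before.rfind('{') < before.rfind('['):
                continue
            if char == ',' and line[index + 1] == ')':
                continue  # allow one-element tuples: (3,)
            yield index, "E231 missing whitespace after '%s'" % char

print(list(missing_whitespace("['a','b']")))  # [(4, "E231 missing whitespace after ','")]
print(list(missing_whitespace("a[1:4]")))     # []: slice colon is exempt
print(list(missing_whitespace("(3,)")))       # []: one-element tuple is exempt
```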
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def indentation(logical_line, previous_logical, indent_char, indent_level, previous_indent_level):
r"""Use 4 spaces per indentation level. For really old code that you don't want to mess up, you can continue to use 8-space tabs. Okay: a = 1 Okay: if a == 0:\n    a = 1 E111:   a = 1 E114:   # a = 1 Okay: for item in items:\n    pass E112: for item in items:\npass E115: for item in items:\n# Hi\n    pass Okay: a = 1\nb = 2 E113: a = 1\n    b = 2 E116: a = 1\n    # b = 2 """ |
c = 0 if logical_line else 3
tmpl = "E11%d %s" if logical_line else "E11%d %s (comment)"
if indent_level % 4:
yield 0, tmpl % (1 + c, "indentation is not a multiple of four")
indent_expect = previous_logical.endswith(':')
if indent_expect and indent_level <= previous_indent_level:
yield 0, tmpl % (2 + c, "expected an indented block")
elif not indent_expect and indent_level > previous_indent_level:
yield 0, tmpl % (3 + c, "unexpected indentation") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whitespace_around_operator(logical_line):
r"""Avoid extraneous whitespace around an operator. Okay: a = 12 + 3 E221: a = 4  + 5 E222: a = 4 +  5 E223: a = 4\t+ 5 E224: a = 4 +\t5 """ |
for match in OPERATOR_REGEX.finditer(logical_line):
before, after = match.groups()
if '\t' in before:
yield match.start(1), "E223 tab before operator"
elif len(before) > 1:
yield match.start(1), "E221 multiple spaces before operator"
if '\t' in after:
yield match.start(2), "E224 tab after operator"
elif len(after) > 1:
yield match.start(2), "E222 multiple spaces after operator" |
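A self-contained sketch of the check above. The pattern below approximates pycodestyle's `OPERATOR_REGEX` (a non-space, non-comma character, captured whitespace, an operator, captured whitespace) and should be read as an assumption:

```python
import re

# Approximation of OPERATOR_REGEX: group 1 is the whitespace before the
# operator, group 2 the whitespace after it.
OPERATOR_REGEX = re.compile(r'(?:[^,\s])(\s*)(?:[-+*/|!<=>%&^]+)(\s*)')

def whitespace_around_operator(logical_line):
    for match in OPERATOR_REGEX.finditer(logical_line):
        before, after = match.groups()
        if '\t' in before:
            yield match.start(1), "E223 tab before operator"
        elif len(before) > 1:
            yield match.start(1), "E221 multiple spaces before operator"
        if '\t' in after:
            yield match.start(2), "E224 tab after operator"
        elif len(after) > 1:
            yield match.start(2), "E222 multiple spaces after operator"

print(list(whitespace_around_operator("a = 12 + 3")))  # []
print(list(whitespace_around_operator("a = 4  + 5")))  # E221 at offset 5
print(list(whitespace_around_operator("a = 4 +\t5")))  # E224 at offset 7
```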
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def missing_whitespace_around_operator(logical_line, tokens):
r"""Surround operators with a single space on either side. - Always surround these binary operators with a single space on either side: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <=, >=, in, not in, is, is not), Booleans (and, or, not). - If operators with different priorities are used, consider adding whitespace around the operators with the lowest priorities. Okay: i = i + 1 Okay: submitted += 1 Okay: x = x * 2 - 1 Okay: hypot2 = x * x + y * y Okay: c = (a + b) * (a - b) Okay: foo(bar, key='word', *args, **kwargs) Okay: alpha[:-i] E225: i=i+1 E225: submitted +=1 E225: x = x /2 - 1 E225: z = x **y E226: c = (a+b) * (a-b) E226: hypot2 = x*x + y*y E227: c = a|b E228: msg = fmt%(errno, errmsg) """ |
parens = 0
need_space = False
prev_type = tokenize.OP
prev_text = prev_end = None
for token_type, text, start, end, line in tokens:
if token_type in SKIP_COMMENTS:
continue
if text in ('(', 'lambda'):
parens += 1
elif text == ')':
parens -= 1
if need_space:
if start != prev_end:
# Found a (probably) needed space
if need_space is not True and not need_space[1]:
yield (need_space[0],
"E225 missing whitespace around operator")
need_space = False
elif text == '>' and prev_text in ('<', '-'):
# Tolerate the "<>" operator, even if running Python 3
# Deal with Python 3's annotated return value "->"
pass
else:
if need_space is True or need_space[1]:
# A needed trailing space was not found
yield prev_end, "E225 missing whitespace around operator"
elif prev_text != '**':
code, optype = 'E226', 'arithmetic'
if prev_text == '%':
code, optype = 'E228', 'modulo'
elif prev_text not in ARITHMETIC_OP:
code, optype = 'E227', 'bitwise or shift'
yield (need_space[0], "%s missing whitespace "
"around %s operator" % (code, optype))
need_space = False
elif token_type == tokenize.OP and prev_end is not None:
if text == '=' and parens:
# Allow keyword args or defaults: foo(bar=None).
pass
elif text in WS_NEEDED_OPERATORS:
need_space = True
elif text in UNARY_OPERATORS:
# Check if the operator is being used as a binary operator
# Allow unary operators: -123, -x, +1.
# Allow argument unpacking: foo(*args, **kwargs).
if (prev_text in '}])' if prev_type == tokenize.OP
else prev_text not in KEYWORDS):
need_space = None
elif text in WS_OPTIONAL_OPERATORS:
need_space = None
if need_space is None:
# Surrounding space is optional, but ensure that
# trailing space matches opening space
need_space = (prev_end, start != prev_end)
elif need_space and start == prev_end:
# A needed opening space was not found
yield prev_end, "E225 missing whitespace around operator"
need_space = False
prev_type = token_type
prev_text = text
prev_end = end |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whitespace_around_comma(logical_line):
r"""Avoid extraneous whitespace after a comma or a colon. Note: these checks are disabled by default Okay: a = (1, 2) E241: a = (1,  2) E242: a = (1,\t2) """ |
line = logical_line
for m in WHITESPACE_AFTER_COMMA_REGEX.finditer(line):
found = m.start() + 1
if '\t' in m.group():
yield found, "E242 tab after '%s'" % m.group()[0]
else:
yield found, "E241 multiple spaces after '%s'" % m.group()[0] |
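A self-contained sketch of the check above; the regex is repeated from pycodestyle's `WHITESPACE_AFTER_COMMA_REGEX` (treat the exact pattern as an assumption):

```python
import re

# Match a separator followed by either two spaces or a tab.
WHITESPACE_AFTER_COMMA_REGEX = re.compile(r'[,;:]\s*(?:  |\t)')

def whitespace_around_comma(logical_line):
    for m in WHITESPACE_AFTER_COMMA_REGEX.finditer(logical_line):
        found = m.start() + 1  # offset of the whitespace, not the separator
        if '\t' in m.group():
            yield found, "E242 tab after '%s'" % m.group()[0]
        else:
            yield found, "E241 multiple spaces after '%s'" % m.group()[0]

print(list(whitespace_around_comma("a = (1, 2)")))   # []
print(list(whitespace_around_comma("a = (1,  2)")))  # [(7, "E241 multiple spaces after ','")]
print(list(whitespace_around_comma("a = (1,\t2)")))  # [(7, "E242 tab after ','")]
```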
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whitespace_around_named_parameter_equals(logical_line, tokens):
r"""Don't use spaces around the '=' sign in function arguments. Don't use spaces around the '=' sign when used to indicate a keyword argument or a default parameter value. Okay: def complex(real, imag=0.0): Okay: return magic(r=real, i=imag) Okay: boolean(a == b) Okay: boolean(a != b) Okay: boolean(a <= b) Okay: boolean(a >= b) Okay: def foo(arg: int = 42): Okay: async def foo(arg: int = 42): E251: def complex(real, imag = 0.0): E251: return magic(r = real, i = imag) """ |
parens = 0
no_space = False
prev_end = None
annotated_func_arg = False
in_def = bool(STARTSWITH_DEF_REGEX.match(logical_line))
message = "E251 unexpected spaces around keyword / parameter equals"
for token_type, text, start, end, line in tokens:
if token_type == tokenize.NL:
continue
if no_space:
no_space = False
if start != prev_end:
yield (prev_end, message)
if token_type == tokenize.OP:
if text in '([':
parens += 1
elif text in ')]':
parens -= 1
elif in_def and text == ':' and parens == 1:
annotated_func_arg = True
elif parens == 1 and text == ',':
annotated_func_arg = False
elif parens and text == '=' and not annotated_func_arg:
no_space = True
if start != prev_end:
yield (prev_end, message)
if not parens:
annotated_func_arg = False
prev_end = end |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def whitespace_before_comment(logical_line, tokens):
r"""Separate inline comments by at least two spaces. An inline comment is a comment on the same line as a statement. Inline comments should be separated by at least two spaces from the statement. They should start with a # and a single space. Each line of a block comment starts with a # and a single space (unless it is indented text inside the comment). Okay: x = x + 1  # Increment x Okay: x = x + 1    # Increment x Okay: # Block comment E261: x = x + 1 # Increment x E262: x = x + 1  #Increment x E262: x = x + 1  #  Increment x E265: #Block comment E266: ### Block comment """ |
prev_end = (0, 0)
for token_type, text, start, end, line in tokens:
if token_type == tokenize.COMMENT:
inline_comment = line[:start[1]].strip()
if inline_comment:
if prev_end[0] == start[0] and start[1] < prev_end[1] + 2:
yield (prev_end,
"E261 at least two spaces before inline comment")
symbol, sp, comment = text.partition(' ')
bad_prefix = symbol not in '#:' and (symbol.lstrip('#')[:1] or '#')
if inline_comment:
if bad_prefix or comment[:1] in WHITESPACE:
yield start, "E262 inline comment should start with '# '"
elif bad_prefix and (bad_prefix != '!' or start[0] > 1):
if bad_prefix != '#':
yield start, "E265 block comment should start with '# '"
elif comment:
yield start, "E266 too many leading '#' for block comment"
elif token_type != tokenize.NL:
prev_end = end |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def imports_on_separate_lines(logical_line):
r"""Place imports on separate lines. Okay: import os\nimport sys E401: import sys, os Okay: from subprocess import Popen, PIPE Okay: from myclass import MyClass Okay: from foo.bar.yourclass import YourClass Okay: import myclass Okay: import foo.bar.yourclass """ |
line = logical_line
if line.startswith('import '):
found = line.find(',')
if -1 < found and ';' not in line[:found]:
yield found, "E401 multiple imports on one line" |
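A self-contained sketch of the check above, showing that only `import a, b` is flagged, while `from x import a, b` is allowed:

```python
def imports_on_separate_lines(logical_line):
    line = logical_line
    if line.startswith('import '):
        found = line.find(',')
        # A semicolon before the comma means the comma belongs to a later
        # statement on the same logical line, so don't flag it here.
        if -1 < found and ';' not in line[:found]:
            yield found, "E401 multiple imports on one line"

print(list(imports_on_separate_lines("import sys, os")))  # [(10, 'E401 multiple imports on one line')]
print(list(imports_on_separate_lines("import os")))       # []
print(list(imports_on_separate_lines("from subprocess import Popen, PIPE")))  # []: not flagged
```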
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def module_imports_on_top_of_file(logical_line, indent_level, checker_state, noqa):
r"""Place imports at the top of the file. Always put imports at the top of the file, just after any module comments and docstrings, and before module globals and constants. Okay: import os Okay: # this is a comment\nimport os Okay: '''this is a module docstring'''\nimport os Okay: r'''this is a module docstring'''\nimport os Okay: try:\n\timport x\nexcept ImportError:\n\tpass\nelse:\n\tpass\nimport y Okay: try:\n\timport x\nexcept ImportError:\n\tpass\nfinally:\n\tpass\nimport y E402: a=1\nimport os E402: 'One string'\n"Two string"\nimport os E402: a=1\nfrom sys import x Okay: if x:\n import os """ |
def is_string_literal(line):
if line[0] in 'uUbB':
line = line[1:]
if line and line[0] in 'rR':
line = line[1:]
return line and (line[0] == '"' or line[0] == "'")
allowed_try_keywords = ('try', 'except', 'else', 'finally')
if indent_level: # Allow imports in conditional statements or functions
return
if not logical_line: # Allow empty lines or comments
return
if noqa:
return
line = logical_line
if line.startswith('import ') or line.startswith('from '):
if checker_state.get('seen_non_imports', False):
yield 0, "E402 module level import not at top of file"
elif re.match(DUNDER_REGEX, line):
return
elif any(line.startswith(kw) for kw in allowed_try_keywords):
# Allow try, except, else, finally keywords intermixed with imports in
# order to support conditional importing
return
elif is_string_literal(line):
# The first literal is a docstring, allow it. Otherwise, report error.
if checker_state.get('seen_docstring', False):
checker_state['seen_non_imports'] = True
else:
checker_state['seen_docstring'] = True
else:
checker_state['seen_non_imports'] = True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def explicit_line_join(logical_line, tokens):
r"""Avoid explicit line join between brackets. The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation. E502: aaa = [123, \\n 123] E502: aaa = ("bbb " \\n "ccc") Okay: aaa = [123,\n 123] Okay: aaa = ("bbb "\n "ccc") Okay: aaa = "bbb " \\n "ccc" Okay: aaa = 123 # \\ """ |
prev_start = prev_end = parens = 0
comment = False
backslash = None
for token_type, text, start, end, line in tokens:
if token_type == tokenize.COMMENT:
comment = True
if start[0] != prev_start and parens and backslash and not comment:
yield backslash, "E502 the backslash is redundant between brackets"
if end[0] != prev_end:
if line.rstrip('\r\n').endswith('\\'):
backslash = (end[0], len(line.splitlines()[-1]) - 1)
else:
backslash = None
prev_start = prev_end = end[0]
else:
prev_start = start[0]
if token_type == tokenize.OP:
if text in '([{':
parens += 1
elif text in ')]}':
parens -= 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def break_around_binary_operator(logical_line, tokens):
r""" Avoid breaks before binary operators. The preferred place to break around a binary operator is after the operator, not before it. W503: (width == 0\n + height == 0) W503: (width == 0\n and height == 0) Okay: (width == 0 +\n height == 0) Okay: foo(\n -x) Okay: foo(x\n []) Okay: x = '''\n''' + '' Okay: foo(x,\n -y) Okay: foo(x, # comment\n -y) Okay: var = (1 &\n ~2) Okay: var = (1 /\n -2) Okay: var = (1 +\n -1 +\n -2) """ |
def is_binary_operator(token_type, text):
# The % character is strictly speaking a binary operator, but the
# common usage seems to be to put it next to the format parameters,
# after a line break.
return ((token_type == tokenize.OP or text in ['and', 'or']) and
text not in "()[]{},:.;@=%~")
line_break = False
unary_context = True
# Previous non-newline token types and text
previous_token_type = None
previous_text = None
for token_type, text, start, end, line in tokens:
if token_type == tokenize.COMMENT:
continue
if ('\n' in text or '\r' in text) and token_type != tokenize.STRING:
line_break = True
else:
if (is_binary_operator(token_type, text) and line_break and
not unary_context and
not is_binary_operator(previous_token_type,
previous_text)):
yield start, "W503 line break before binary operator"
unary_context = text in '([{,;'
line_break = False
previous_token_type = token_type
previous_text = text |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def comparison_to_singleton(logical_line, noqa):
r"""Comparison to singletons should use "is" or "is not". Comparisons to singletons like None should always be done with "is" or "is not", never the equality operators. Okay: if arg is not None: E711: if arg != None: E711: if None == arg: E712: if arg == True: E712: if False == arg: Also, beware of writing if x when you really mean if x is not None -- e.g. when testing whether a variable or argument that defaults to None was set to some other value. The other value might have a type (such as a container) that could be false in a boolean context! """ |
match = not noqa and COMPARE_SINGLETON_REGEX.search(logical_line)
if match:
singleton = match.group(1) or match.group(3)
same = (match.group(2) == '==')
msg = "'if cond is %s:'" % (('' if same else 'not ') + singleton)
if singleton in ('None',):
code = 'E711'
else:
code = 'E712'
nonzero = ((singleton == 'True' and same) or
(singleton == 'False' and not same))
msg += " or 'if %scond:'" % ('' if nonzero else 'not ')
yield match.start(2), ("%s comparison to %s should be %s" %
(code, singleton, msg)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def comparison_negative(logical_line):
r"""Negative comparison should be done using "not in" and "is not". Okay: if x not in y:\n    pass Okay: assert (X in Y or X is Z) Okay: if not (X in Y):\n    pass Okay: zz = x is not y E713: Z = not X in Y E713: if not X.B in Y:\n    pass E714: if not X is Y:\n    pass E714: Z = not X.B is Y """ |
match = COMPARE_NEGATIVE_REGEX.search(logical_line)
if match:
pos = match.start(1)
if match.group(2) == 'in':
yield pos, "E713 test for membership should be 'not in'"
else:
yield pos, "E714 test for object identity should be 'is not'" |
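A self-contained sketch of the check above; the regex is repeated from pycodestyle's `COMPARE_NEGATIVE_REGEX` (treat the exact pattern as an assumption):

```python
import re

# 'not <operand> in/is ...' where the operand contains no brackets or spaces.
COMPARE_NEGATIVE_REGEX = re.compile(r'\b(not)\s+[^][)(}{ ]+\s+(in|is)\s')

def comparison_negative(logical_line):
    match = COMPARE_NEGATIVE_REGEX.search(logical_line)
    if match:
        pos = match.start(1)
        if match.group(2) == 'in':
            yield pos, "E713 test for membership should be 'not in'"
        else:
            yield pos, "E714 test for object identity should be 'is not'"

print(list(comparison_negative("Z = not X in Y")))       # E713 at offset 4
print(list(comparison_negative("Z = not X.B is Y")))     # E714 at offset 4
print(list(comparison_negative("if x not in y: pass")))  # []: already idiomatic
```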
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bare_except(logical_line, noqa):
r"""When catching exceptions, mention specific exceptions whenever possible. Okay: except Exception: Okay: except BaseException: E722: except: """ |
if noqa:
return
regex = re.compile(r"except\s*:")
match = regex.match(logical_line)
if match:
yield match.start(), "E722 do not use bare 'except'" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ambiguous_identifier(logical_line, tokens):
r"""Never use the characters 'l', 'O', or 'I' as variable names. In some fonts, these characters are indistinguishable from the numerals one and zero. When tempted to use 'l', use 'L' instead. Okay: L = 0 Okay: o = 123 Okay: i = 42 E741: l = 0 E741: O = 123 E741: I = 42 Variables can be bound in several other contexts, including class and function definitions, 'global' and 'nonlocal' statements, exception handlers, and 'with' statements. Okay: except AttributeError as o: Okay: with lock as L: E741: except AttributeError as O: E741: with lock as l: E741: global I E741: nonlocal l E742: class I(object): E743: def l(x): """ |
idents_to_avoid = ('l', 'O', 'I')
prev_type, prev_text, prev_start, prev_end, __ = tokens[0]
for token_type, text, start, end, line in tokens[1:]:
ident = pos = None
# identifiers on the lhs of an assignment operator
if token_type == tokenize.OP and '=' in text:
if prev_text in idents_to_avoid:
ident = prev_text
pos = prev_start
# identifiers bound to a value with 'as', 'global', or 'nonlocal'
if prev_text in ('as', 'global', 'nonlocal'):
if text in idents_to_avoid:
ident = text
pos = start
if prev_text == 'class':
if text in idents_to_avoid:
yield start, "E742 ambiguous class definition '%s'" % text
if prev_text == 'def':
if text in idents_to_avoid:
yield start, "E743 ambiguous function definition '%s'" % text
if ident:
yield pos, "E741 ambiguous variable name '%s'" % ident
prev_text = text
prev_start = start |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expand_indent(line):
r"""Return the amount of indentation. Tabs are expanded to the next multiple of 8. >>> expand_indent('    ') 4 >>> expand_indent('\t') 8 >>> expand_indent('       \t') 8 >>> expand_indent('        \t') 16 """ |
if '\t' not in line:
return len(line) - len(line.lstrip())
result = 0
for char in line:
if char == '\t':
result = result // 8 * 8 + 8
elif char == ' ':
result += 1
else:
break
return result |
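A self-contained sketch of the function above, with the tab-stop behavior spelled out: each tab advances to the next multiple of 8, so spaces before a tab can be swallowed:

```python
def expand_indent(line):
    # Fast path: pure-space indentation is just its length.
    if '\t' not in line:
        return len(line) - len(line.lstrip())
    result = 0
    for char in line:
        if char == '\t':
            # Jump to the next multiple of 8.
            result = result // 8 * 8 + 8
        elif char == ' ':
            result += 1
        else:
            break
    return result

print(expand_indent('    x'))    # 4
print(expand_indent('\tx'))      # 8
print(expand_indent('    \tx'))  # 8: the tab absorbs the leading spaces
print(expand_indent('\t  x'))    # 10
```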
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_udiff(diff, patterns=None, parent='.'):
"""Return a dictionary of matching lines.""" |
# For each file of the diff, the entry key is the filename,
# and the value is a set of row numbers to consider.
rv = {}
path = nrows = None
for line in diff.splitlines():
if nrows:
if line[:1] != '-':
nrows -= 1
continue
if line[:3] == '@@ ':
hunk_match = HUNK_REGEX.match(line)
(row, nrows) = [int(g or '1') for g in hunk_match.groups()]
rv[path].update(range(row, row + nrows))
elif line[:3] == '+++':
path = line[4:].split('\t', 1)[0]
if path[:2] == 'b/':
path = path[2:]
rv[path] = set()
return dict([(os.path.join(parent, path), rows)
for (path, rows) in rv.items()
if rows and filename_match(path, patterns)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normalize_paths(value, parent=os.curdir):
"""Parse a comma-separated list of paths. Return a list of absolute paths. """ |
if not value:
return []
if isinstance(value, list):
return value
paths = []
for path in value.split(','):
path = path.strip()
if '/' in path:
path = os.path.abspath(os.path.join(parent, path))
paths.append(path.rstrip('/'))
return paths |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filename_match(filename, patterns, default=True):
"""Check if patterns contains a pattern that matches filename. If patterns is unspecified, this always returns True. """ |
if not patterns:
return default
return any(fnmatch(filename, pattern) for pattern in patterns) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_check(check, codes=None):
"""Register a new check object.""" |
def _add_check(check, kind, codes, args):
if check in _checks[kind]:
_checks[kind][check][0].extend(codes or [])
else:
_checks[kind][check] = (codes or [''], args)
if inspect.isfunction(check):
args = _get_parameters(check)
if args and args[0] in ('physical_line', 'logical_line'):
if codes is None:
codes = ERRORCODE_REGEX.findall(check.__doc__ or '')
_add_check(check, args[0], codes, args)
elif inspect.isclass(check):
if _get_parameters(check.__init__)[:2] == ['self', 'tree']:
_add_check(check, 'tree', codes, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_checks_registry():
"""Register all globally visible functions. The first argument name is either 'physical_line' or 'logical_line'. """ |
mod = inspect.getmodule(register_check)
for (name, function) in inspect.getmembers(mod, inspect.isfunction):
register_check(function) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_config(options, args, arglist, parser):
"""Read and parse configurations. If a config file is specified on the command line with the "--config" option, then only it is used for configuration. Otherwise, the user configuration (~/.config/pycodestyle) and any local configurations in the current directory or above will be merged together (in that order) using the read method of ConfigParser. """ |
config = RawConfigParser()
cli_conf = options.config
local_dir = os.curdir
if USER_CONFIG and os.path.isfile(USER_CONFIG):
if options.verbose:
print('user configuration: %s' % USER_CONFIG)
config.read(USER_CONFIG)
parent = tail = args and os.path.abspath(os.path.commonprefix(args))
while tail:
if config.read(os.path.join(parent, fn) for fn in PROJECT_CONFIG):
local_dir = parent
if options.verbose:
print('local configuration: in %s' % parent)
break
(parent, tail) = os.path.split(parent)
if cli_conf and os.path.isfile(cli_conf):
if options.verbose:
print('cli configuration: %s' % cli_conf)
config.read(cli_conf)
pycodestyle_section = None
if config.has_section(parser.prog):
pycodestyle_section = parser.prog
elif config.has_section('pep8'):
pycodestyle_section = 'pep8' # Deprecated
warnings.warn('[pep8] section is deprecated. Use [pycodestyle].')
if pycodestyle_section:
option_list = dict([(o.dest, o.type or o.action)
for o in parser.option_list])
# First, read the default values
(new_options, __) = parser.parse_args([])
# Second, parse the configuration
for opt in config.options(pycodestyle_section):
if opt.replace('_', '-') not in parser.config_options:
print(" unknown option '%s' ignored" % opt)
continue
if options.verbose > 1:
print(" %s = %s" % (opt,
config.get(pycodestyle_section, opt)))
normalized_opt = opt.replace('-', '_')
opt_type = option_list[normalized_opt]
if opt_type in ('int', 'count'):
value = config.getint(pycodestyle_section, opt)
elif opt_type in ('store_true', 'store_false'):
value = config.getboolean(pycodestyle_section, opt)
else:
value = config.get(pycodestyle_section, opt)
if normalized_opt == 'exclude':
value = normalize_paths(value, local_dir)
setattr(new_options, normalized_opt, value)
# Third, overwrite with the command-line options
(options, __) = parser.parse_args(arglist, values=new_options)
options.doctest = options.testsuite = False
return options |
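The merge order described above (user config first, then project-local files, then the explicit `--config` file) relies on `ConfigParser.read` letting later files override earlier ones. A minimal sketch of that behavior, using two throwaway temp files and an illustrative `[pycodestyle]` section (the file contents here are hypothetical, not a real pycodestyle config):

```python
import os
import tempfile
from configparser import RawConfigParser

# Two throwaway config files standing in for the user and project configs.
user_cfg = tempfile.NamedTemporaryFile('w', suffix='.cfg', delete=False)
user_cfg.write('[pycodestyle]\nmax-line-length = 79\nstatistics = true\n')
user_cfg.close()
project_cfg = tempfile.NamedTemporaryFile('w', suffix='.cfg', delete=False)
project_cfg.write('[pycodestyle]\nmax-line-length = 100\n')
project_cfg.close()

config = RawConfigParser()
# read() accepts a list and returns the files it could actually parse.
read_ok = config.read([user_cfg.name, project_cfg.name])
assert len(read_ok) == 2

# Later files win for overlapping keys; unique keys survive the merge.
assert config.getint('pycodestyle', 'max-line-length') == 100
assert config.getboolean('pycodestyle', 'statistics') is True

os.unlink(user_cfg.name)
os.unlink(project_cfg.name)
```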
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_multi_options(options, split_token=','):
r"""Split and strip and discard empties. Turns the following: A, B, into ["A", "B"] """ |
if options:
return [o.strip() for o in options.split(split_token) if o.strip()]
else:
return options |
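Since the helper is self-contained, its behavior is easy to pin down with a standalone copy (redefined here so the sketch runs on its own):

```python
def parse_multi_options(options, split_token=','):
    # Split on the token, strip whitespace, and drop empty pieces.
    if options:
        return [o.strip() for o in options.split(split_token) if o.strip()]
    return options

assert parse_multi_options('A, B,') == ['A', 'B']
assert parse_multi_options('   ') == []   # whitespace-only input yields []
assert parse_multi_options('') == ''      # falsy input is returned unchanged
```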
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def report_invalid_syntax(self):
"""Check if the syntax is valid.""" |
(exc_type, exc) = sys.exc_info()[:2]
if len(exc.args) > 1:
offset = exc.args[1]
if len(offset) > 2:
offset = offset[1:3]
else:
offset = (1, 0)
self.report_error(offset[0], offset[1] or 0,
'E901 %s: %s' % (exc_type.__name__, exc.args[0]),
self.report_invalid_syntax) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_check(self, check, argument_names):
"""Run a check plugin.""" |
arguments = []
for name in argument_names:
arguments.append(getattr(self, name))
return check(*arguments) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_checker_state(self, name, argument_names):
"""Prepare custom state for the specific checker plugin.""" |
if 'checker_state' in argument_names:
self.checker_state = self._checker_states.setdefault(name, {}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_ast(self):
"""Build the file's AST and run all AST checks.""" |
try:
tree = compile(''.join(self.lines), '', 'exec', PyCF_ONLY_AST)
except (ValueError, SyntaxError, TypeError):
return self.report_invalid_syntax()
for name, cls, __ in self._ast_checks:
checker = cls(tree, self.filename)
for lineno, offset, text, check in checker.run():
if not self.lines or not noqa(self.lines[lineno - 1]):
self.report_error(lineno, offset, text, check) |
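The `compile(..., PyCF_ONLY_AST)` call above returns an AST instead of a code object, and a parse failure is what routes into `report_invalid_syntax`. A small illustration using the `ast` module's copy of the flag:

```python
import ast

# With PyCF_ONLY_AST, compile() returns the parsed tree rather than bytecode.
tree = compile("x = 1\ny = x + 1\n", '<demo>', 'exec', ast.PyCF_ONLY_AST)
assert isinstance(tree, ast.Module)
assert len(tree.body) == 2

# Invalid source raises SyntaxError instead of producing a tree.
try:
    compile("def broken(:\n", '<demo>', 'exec', ast.PyCF_ONLY_AST)
    raised = False
except SyntaxError:
    raised = True
assert raised
```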
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_tokens(self):
"""Tokenize the file, run physical line checks and yield tokens.""" |
if self._io_error:
self.report_error(1, 0, 'E902 %s' % self._io_error, readlines)
tokengen = tokenize.generate_tokens(self.readline)
try:
for token in tokengen:
if token[2][0] > self.total_lines:
return
self.noqa = token[4] and noqa(token[4])
self.maybe_check_physical(token)
yield token
except (SyntaxError, tokenize.TokenError):
self.report_invalid_syntax() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_count(self, prefix=''):
"""Return the total count of errors and warnings.""" |
return sum([self.counters[key]
for key in self.messages if key.startswith(prefix)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_file_results(self):
"""Print the result and return the overall count for this file.""" |
self._deferred_print.sort()
for line_number, offset, code, text, doc in self._deferred_print:
print(self._fmt % {
'path': self.filename,
'row': self.line_offset + line_number, 'col': offset + 1,
'code': code, 'text': text,
})
if self._show_source:
if line_number > len(self.lines):
line = ''
else:
line = self.lines[line_number - 1]
print(line.rstrip())
print(re.sub(r'\S', ' ', line[:offset]) + '^')
if self._show_pep8 and doc:
print(' ' + doc.strip())
            # stdout is block-buffered when not stdout.isatty().
            # A line can be broken at a buffer boundary when other
            # processes write to the same file, so flush() after each
            # print() to keep lines intact. The typical buffer size is
            # 8192; a line is written safely when len(line) < 8192.
sys.stdout.flush()
return self.file_errors |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_report(self, reporter=None):
"""Initialize the report instance.""" |
self.options.report = (reporter or self.options.reporter)(self.options)
return self.options.report |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_files(self, paths=None):
"""Run all checks on the paths.""" |
if paths is None:
paths = self.paths
report = self.options.report
runner = self.runner
report.start()
try:
for path in paths:
if os.path.isdir(path):
self.input_dir(path)
elif not self.excluded(path):
runner(path)
except KeyboardInterrupt:
print('... stopped')
report.stop()
return report |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def input_dir(self, dirname):
"""Check all files in this directory and all subdirectories.""" |
dirname = dirname.rstrip('/')
if self.excluded(dirname):
return 0
counters = self.options.report.counters
verbose = self.options.verbose
filepatterns = self.options.filename
runner = self.runner
for root, dirs, files in os.walk(dirname):
if verbose:
print('directory ' + root)
counters['directories'] += 1
for subdir in sorted(dirs):
if self.excluded(subdir, root):
dirs.remove(subdir)
for filename in sorted(files):
            # Does the filename match a pattern and is it not excluded?
if ((filename_match(filename, filepatterns) and
not self.excluded(filename, root))):
runner(os.path.join(root, filename)) |
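The loop above removes excluded subdirectories from `dirs` while walking, which is what stops `os.walk` from descending into them. A self-contained sketch of that pruning against a throwaway tree:

```python
import os
import tempfile

# Build a throwaway tree: one excluded dot-directory, one real source dir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, '.git'))
os.makedirs(os.path.join(root, 'src'))
open(os.path.join(root, 'src', 'a.py'), 'w').close()

visited = []
for dirpath, dirs, files in os.walk(root):
    # Mutating `dirs` in place prunes the walk; rebinding it would not.
    dirs[:] = [d for d in dirs if not d.startswith('.')]
    visited.extend(files)

assert visited == ['a.py']
```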
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ignore_code(self, code):
"""Check if the error code should be ignored. If 'options.select' contains a prefix of the error code, return False. Else, if 'options.ignore' contains a prefix of the error code, return True. """ |
if len(code) < 4 and any(s.startswith(code)
for s in self.options.select):
return False
return (code.startswith(self.options.ignore) and
not code.startswith(self.options.select)) |
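`str.startswith` accepts a tuple of prefixes, which is what makes the expressions above work when `options.ignore` and `options.select` hold tuples of codes. A standalone sketch with illustrative settings (the ignore/select values here are made up for the example):

```python
# Hypothetical settings: ignore all E1xx and W codes, but select E501.
ignore = ('E1', 'W')
select = ('E501',)

def ignored(code):
    # A short code that prefixes any selected code is never ignored.
    if len(code) < 4 and any(s.startswith(code) for s in select):
        return False
    return code.startswith(ignore) and not code.startswith(select)

assert ignored('E121') is True    # matches the 'E1' ignore prefix
assert ignored('E501') is False   # explicitly selected
assert ignored('W605') is True    # matches the 'W' ignore prefix
assert ignored('E5') is False     # a prefix of a selected code
```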
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_checks(self, argument_name):
"""Get all the checks for this category. Find all globally visible functions where the first argument name starts with argument_name and which contain selected tests. """ |
checks = []
for check, attrs in _checks[argument_name].items():
(codes, args) = attrs
if any(not (code and self.ignore_code(code)) for code in codes):
checks.append((check.__name__, check, args))
return sorted(checks) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def loads(data, use_datetime=0):
"""data -> unmarshalled data, method name Convert an XML-RPC packet to unmarshalled data plus a method name (None if not present). If the XML-RPC packet represents a fault condition, this function raises a Fault exception. """ |
p, u = getparser(use_datetime=use_datetime)
p.feed(data)
p.close()
return u.close(), u.getmethodname() |
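In Python 3 the standard-library equivalent lives in `xmlrpc.client`; a round-trip shows the `(params, methodname)` shape the unmarshaller returns:

```python
from xmlrpc.client import dumps, loads

# dumps() builds a methodCall packet; loads() unmarshals it back.
packet = dumps((1, 2), methodname='add')
params, method = loads(packet)
assert params == (1, 2)
assert method == 'add'
```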
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform_import(self, node, results):
"""Transform for the basic import case. Replaces the old import name with a comma separated list of its replacements. """ |
import_mod = results.get("module")
pref = import_mod.prefix
names = []
# create a Node list of the replacement modules
for name in MAPPING[import_mod.value][:-1]:
names.extend([Name(name[0], prefix=pref), Comma()])
names.append(Name(MAPPING[import_mod.value][-1][0], prefix=pref))
import_mod.replace(names) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform_member(self, node, results):
"""Transform for imports of specific module elements. Replaces the module to be imported from with the appropriate new module. """ |
mod_member = results.get("mod_member")
pref = mod_member.prefix
member = results.get("member")
# Simple case with only a single member being imported
if member:
# this may be a list of length one, or just a node
if isinstance(member, list):
member = member[0]
new_name = None
for change in MAPPING[mod_member.value]:
if member.value in change[1]:
new_name = change[0]
break
if new_name:
mod_member.replace(Name(new_name, prefix=pref))
else:
self.cannot_convert(node, "This is an invalid module element")
# Multiple members being imported
else:
# a dictionary for replacements, order matters
modules = []
mod_dict = {}
members = results["members"]
for member in members:
# we only care about the actual members
if member.type == syms.import_as_name:
as_name = member.children[2].value
member_name = member.children[0].value
else:
member_name = member.value
as_name = None
if member_name != u",":
for change in MAPPING[mod_member.value]:
if member_name in change[1]:
if change[0] not in mod_dict:
modules.append(change[0])
mod_dict.setdefault(change[0], []).append(member)
new_nodes = []
indentation = find_indentation(node)
first = True
def handle_name(name, prefix):
if name.type == syms.import_as_name:
kids = [Name(name.children[0].value, prefix=prefix),
name.children[1].clone(),
name.children[2].clone()]
return [Node(syms.import_as_name, kids)]
return [Name(name.value, prefix=prefix)]
for module in modules:
elts = mod_dict[module]
names = []
for elt in elts[:-1]:
names.extend(handle_name(elt, pref))
names.append(Comma())
names.extend(handle_name(elts[-1], pref))
new = FromImport(module, names)
if not first or node.parent.prefix.endswith(indentation):
new.prefix = indentation
new_nodes.append(new)
first = False
if new_nodes:
nodes = []
for new_node in new_nodes[:-1]:
nodes.extend([new_node, Newline()])
nodes.append(new_nodes[-1])
node.replace(nodes)
else:
self.cannot_convert(node, "All module elements are invalid") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform_dot(self, node, results):
"""Transform for calls to module members in code.""" |
module_dot = results.get("bare_with_attr")
member = results.get("member")
new_name = None
if isinstance(member, list):
member = member[0]
for change in MAPPING[module_dot.value]:
if member.value in change[1]:
new_name = change[0]
break
if new_name:
module_dot.replace(Name(new_name,
prefix=module_dot.prefix))
else:
self.cannot_convert(node, "This is an invalid module element") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def call_only_once(func):
'''
To be used as a decorator
@call_only_once
def func():
        print('Calling func only this time')
Actually, in PyDev it must be called as:
func = call_only_once(func) to support older versions of Python.
'''
def new_func(*args, **kwargs):
if not new_func._called:
new_func._called = True
return func(*args, **kwargs)
new_func._called = False
return new_func |
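Usage sketch (the decorator is redefined here so the example is self-contained): the wrapped body runs only on the first call, and later calls fall through and return None.

```python
def call_only_once(func):
    def new_func(*args, **kwargs):
        if not new_func._called:
            new_func._called = True
            return func(*args, **kwargs)
    new_func._called = False
    return new_func

calls = []

@call_only_once
def greet():
    calls.append('hi')
    return 'hi'

assert greet() == 'hi'
assert greet() is None   # subsequent calls are silently skipped
assert calls == ['hi']
```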
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_fix_names(fixer_pkg, remove_prefix=True):
"""Return a sorted list of all available fix names in the given package.""" |
pkg = __import__(fixer_pkg, [], [], ["*"])
fixer_dir = os.path.dirname(pkg.__file__)
fix_names = []
for name in sorted(os.listdir(fixer_dir)):
if name.startswith("fix_") and name.endswith(".py"):
if remove_prefix:
name = name[4:]
fix_names.append(name[:-3])
return fix_names |
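The same prefix/suffix filtering can be exercised against a throwaway directory instead of a real fixer package:

```python
import os
import tempfile

# Fake fixer directory: two real fixers, one helper, one compiled file.
d = tempfile.mkdtemp()
for fn in ('fix_print.py', 'fix_dict.py', 'helper.py', 'fix_old.pyc'):
    open(os.path.join(d, fn), 'w').close()

names = []
for name in sorted(os.listdir(d)):
    if name.startswith('fix_') and name.endswith('.py'):
        names.append(name[4:][:-3])   # strip 'fix_' prefix and '.py' suffix
assert names == ['dict', 'print']
```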
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_head_types(pat):
""" Accepts a pytree Pattern Node and returns a set of the pattern types which will match first. """ |
if isinstance(pat, (pytree.NodePattern, pytree.LeafPattern)):
        # NodePatterns must either have no type and no content
        # or a type and content -- so they don't get any farther.
        # Always return leaf types
if pat.type is None:
raise _EveryNode
return set([pat.type])
if isinstance(pat, pytree.NegatedPattern):
if pat.content:
return _get_head_types(pat.content)
raise _EveryNode # Negated Patterns don't have a type
if isinstance(pat, pytree.WildcardPattern):
# Recurse on each node in content
r = set()
for p in pat.content:
for x in p:
r.update(_get_head_types(x))
return r
raise Exception("Oh no! I don't understand pattern %s" %(pat)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fixers(self):
"""Inspects the options to load the requested patterns and handlers. Returns: (pre_order, post_order), where pre_order is the list of fixers that want a pre-order AST traversal, and post_order is the list that want post-order traversal. """ |
pre_order_fixers = []
post_order_fixers = []
for fix_mod_path in self.fixers:
mod = __import__(fix_mod_path, {}, {}, ["*"])
fix_name = fix_mod_path.rsplit(".", 1)[-1]
if fix_name.startswith(self.FILE_PREFIX):
fix_name = fix_name[len(self.FILE_PREFIX):]
parts = fix_name.split("_")
class_name = self.CLASS_PREFIX + "".join([p.title() for p in parts])
try:
fix_class = getattr(mod, class_name)
except AttributeError:
raise FixerError("Can't find %s.%s" % (fix_name, class_name))
fixer = fix_class(self.options, self.fixer_log)
if fixer.explicit and self.explicit is not True and \
fix_mod_path not in self.explicit:
self.log_message("Skipping implicit fixer: %s", fix_name)
continue
self.log_debug("Adding transformation: %s", fix_name)
if fixer.order == "pre":
pre_order_fixers.append(fixer)
elif fixer.order == "post":
post_order_fixers.append(fixer)
else:
raise FixerError("Illegal fixer order: %r" % fixer.order)
key_func = operator.attrgetter("run_order")
pre_order_fixers.sort(key=key_func)
post_order_fixers.sort(key=key_func)
return (pre_order_fixers, post_order_fixers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_message(self, msg, *args):
"""Hook to log a message.""" |
if args:
msg = msg % args
self.logger.info(msg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refactor(self, items, write=False, doctests_only=False):
"""Refactor a list of files and directories.""" |
for dir_or_file in items:
if os.path.isdir(dir_or_file):
self.refactor_dir(dir_or_file, write, doctests_only)
else:
self.refactor_file(dir_or_file, write, doctests_only) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refactor_dir(self, dir_name, write=False, doctests_only=False):
"""Descends down a directory and refactor every Python file found. Python files are assumed to have a .py extension. Files and subdirectories starting with '.' are skipped. """ |
py_ext = os.extsep + "py"
for dirpath, dirnames, filenames in os.walk(dir_name):
self.log_debug("Descending into %s", dirpath)
dirnames.sort()
filenames.sort()
for name in filenames:
if (not name.startswith(".") and
os.path.splitext(name)[1] == py_ext):
fullname = os.path.join(dirpath, name)
self.refactor_file(fullname, write, doctests_only)
# Modify dirnames in-place to remove subdirs with leading dots
dirnames[:] = [dn for dn in dirnames if not dn.startswith(".")] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_python_source(self, filename):
""" Do our best to decode a Python source file correctly. """ |
try:
f = open(filename, "rb")
except IOError as err:
self.log_error("Can't open %s: %s", filename, err)
return None, None
try:
encoding = tokenize.detect_encoding(f.readline)[0]
finally:
f.close()
with _open_with_encoding(filename, "r", encoding=encoding) as f:
return _from_system_newlines(f.read()), encoding |
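`tokenize.detect_encoding` is the piece doing the real work above: it inspects at most the first two lines for a PEP 263 coding cookie (or a BOM) and falls back to UTF-8. A quick sketch against in-memory byte streams:

```python
import io
import tokenize

# A PEP 263 coding cookie on the first line is honored and normalized.
src = b'# -*- coding: latin-1 -*-\nx = 1\n'
enc1, consumed = tokenize.detect_encoding(io.BytesIO(src).readline)
assert enc1 == 'iso-8859-1'                        # 'latin-1' is normalized
assert consumed == [b'# -*- coding: latin-1 -*-\n']

# Without a cookie or BOM, the default source encoding applies.
enc2, _ = tokenize.detect_encoding(io.BytesIO(b'x = 1\n').readline)
assert enc2 == 'utf-8'
```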
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refactor_file(self, filename, write=False, doctests_only=False):
"""Refactors a file.""" |
input, encoding = self._read_python_source(filename)
if input is None:
# Reading the file failed.
return
input += u"\n" # Silence certain parse errors
if doctests_only:
self.log_debug("Refactoring doctests in %s", filename)
output = self.refactor_docstring(input, filename)
if self.write_unchanged_files or output != input:
self.processed_file(output, filename, input, write, encoding)
else:
self.log_debug("No doctest changes in %s", filename)
else:
tree = self.refactor_string(input, filename)
if self.write_unchanged_files or (tree and tree.was_changed):
# The [:-1] is to take off the \n we added earlier
self.processed_file(unicode(tree)[:-1], filename,
write=write, encoding=encoding)
else:
self.log_debug("No changes in %s", filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refactor_string(self, data, name):
"""Refactor a given input string. Args: data: a string holding the code to be refactored. name: a human-readable name for use in error/log messages. Returns: An AST corresponding to the refactored input stream; None if there were errors during the parse. """ |
features = _detect_future_features(data)
if "print_function" in features:
self.driver.grammar = pygram.python_grammar_no_print_statement
try:
tree = self.driver.parse_string(data)
except Exception as err:
self.log_error("Can't parse %s: %s: %s",
name, err.__class__.__name__, err)
return
finally:
self.driver.grammar = self.grammar
tree.future_features = features
self.log_debug("Refactoring %s", name)
self.refactor_tree(tree, name)
return tree |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def traverse_by(self, fixers, traversal):
"""Traverse an AST, applying a set of fixers to each node. This is a helper method for refactor_tree(). Args: fixers: a list of fixer instances. traversal: a generator that yields AST nodes. Returns: None """ |
if not fixers:
return
for node in traversal:
for fixer in fixers[node.type]:
results = fixer.match(node)
if results:
new = fixer.transform(node, results)
if new is not None:
node.replace(new)
node = new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def processed_file(self, new_text, filename, old_text=None, write=False, encoding=None):
""" Called when a file has been refactored and there may be changes. """ |
self.files.append(filename)
if old_text is None:
old_text = self._read_python_source(filename)[0]
if old_text is None:
return
equal = old_text == new_text
self.print_output(old_text, new_text, filename, equal)
if equal:
self.log_debug("No changes to %s", filename)
if not self.write_unchanged_files:
return
if write:
self.write_file(new_text, filename, old_text, encoding)
else:
self.log_debug("Not writing changes to %s", filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_file(self, new_text, filename, old_text, encoding=None):
"""Writes a string to a file. It first shows a unified diff between the old text and the new text, and then rewrites the file; the latter is only done if the write option is set. """ |
try:
f = _open_with_encoding(filename, "w", encoding=encoding)
except os.error as err:
self.log_error("Can't create %s: %s", filename, err)
return
try:
f.write(_to_system_newlines(new_text))
except os.error as err:
self.log_error("Can't write %s: %s", filename, err)
finally:
f.close()
self.log_debug("Wrote changes to %s", filename)
self.wrote = True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refactor_docstring(self, input, filename):
"""Refactors a docstring, looking for doctests. This returns a modified version of the input string. It looks (Unfortunately we can't use the doctest module's parser, since, like most parsers, it is not geared towards preserving the original source.) """ |
result = []
block = None
block_lineno = None
indent = None
lineno = 0
for line in input.splitlines(True):
lineno += 1
if line.lstrip().startswith(self.PS1):
if block is not None:
result.extend(self.refactor_doctest(block, block_lineno,
indent, filename))
block_lineno = lineno
block = [line]
i = line.find(self.PS1)
indent = line[:i]
elif (indent is not None and
(line.startswith(indent + self.PS2) or
line == indent + self.PS2.rstrip() + u"\n")):
block.append(line)
else:
if block is not None:
result.extend(self.refactor_doctest(block, block_lineno,
indent, filename))
block = None
indent = None
result.append(line)
if block is not None:
result.extend(self.refactor_doctest(block, block_lineno,
indent, filename))
return u"".join(result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_block(self, block, lineno, indent):
"""Parses a block into a tree. This is necessary to get correct line number / offset information in the parser diagnostics and embedded into the parse tree. """ |
tree = self.driver.parse_tokens(self.wrap_toks(block, lineno, indent))
tree.future_features = frozenset()
return tree |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gen_lines(self, block, indent):
"""Generates lines as expected by tokenize from a list of lines. This strips the first len(indent + self.PS1) characters off each line. """ |
prefix1 = indent + self.PS1
prefix2 = indent + self.PS2
prefix = prefix1
for line in block:
if line.startswith(prefix):
yield line[len(prefix):]
elif line == prefix.rstrip() + u"\n":
yield u"\n"
else:
raise AssertionError("line=%r, prefix=%r" % (line, prefix))
prefix = prefix2
while True:
yield "" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def var_to_xml(val, name, trim_if_too_big=True, additional_in_xml='', evaluate_full_value=True):
""" single variable or dictionary to xml representation """ |
type_name, type_qualifier, is_exception_on_eval, resolver, value = get_variable_details(
val, evaluate_full_value)
try:
name = quote(name, '/>_= ') # TODO: Fix PY-5834 without using quote
except:
pass
xml = '<var name="%s" type="%s" ' % (make_valid_xml_value(name), make_valid_xml_value(type_name))
if type_qualifier:
xml_qualifier = 'qualifier="%s"' % make_valid_xml_value(type_qualifier)
else:
xml_qualifier = ''
if value:
# cannot be too big... communication may not handle it.
if len(value) > MAXIMUM_VARIABLE_REPRESENTATION_SIZE and trim_if_too_big:
value = value[0:MAXIMUM_VARIABLE_REPRESENTATION_SIZE]
value += '...'
xml_value = ' value="%s"' % (make_valid_xml_value(quote(value, '/>_= ')))
else:
xml_value = ''
if is_exception_on_eval:
xml_container = ' isErrorOnEval="True"'
else:
if resolver is not None:
xml_container = ' isContainer="True"'
else:
xml_container = ''
return ''.join((xml, xml_qualifier, xml_value, xml_container, additional_in_xml, ' />\n')) |
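`quote(name, '/>_= ')` only escapes characters outside the default safe set plus the extra ones listed; a quick sketch of what passes through:

```python
from urllib.parse import quote

# Letters, digits, and '_.-~' are always safe; '/>_= ' adds a few more.
assert quote('obj.attr', '/>_= ') == 'obj.attr'
assert quote('a b=c', '/>_= ') == 'a b=c'     # space and '=' pass through
assert quote('50%', '/>_= ') == '50%25'       # '%' is still escaped
```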
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def do(self, arg):
".exploitable - Determine the approximate exploitability rating"
from winappdbg import Crash
event = self.debug.lastEvent
crash = Crash(event)
crash.fetch_extra_data(event)
status, rule, description = crash.isExploitable()
print "-" * 79
print "Exploitability: %s" % status
print "Matched rule: %s" % rule
print "Description: %s" % description
print "-" * 79 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cleanup(self):
""" Frees lots of non-textual information, such as the fonts and images and the objects that were needed to parse the PDF. """ |
self.device = None
self.doc = None
self.parser = None
self.resmgr = None
self.interpreter = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_app_name(app_name):
""" Returns a app name from new app config if is a class or the same app name if is not a class. """ |
type_ = locate(app_name)
if inspect.isclass(type_):
return type_.name
return app_name |
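`pydoc.locate` resolves a dotted path to whatever object it names (or None if it cannot), and `inspect.isclass` then distinguishes a class path from a module path:

```python
import inspect
from pydoc import locate

assert inspect.isclass(locate('collections.OrderedDict'))   # a class path
assert not inspect.isclass(locate('collections'))           # a module path
assert locate('no.such.path') is None                       # unresolvable
```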
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_role(user, roles):
"""Check if a user has any of the given roles.""" |
if user and user.is_superuser:
return True
if not isinstance(roles, list):
roles = [roles]
normalized_roles = []
for role in roles:
if not inspect.isclass(role):
role = RolesManager.retrieve_role(role)
normalized_roles.append(role)
user_roles = get_user_roles(user)
return any([role in user_roles for role in normalized_roles]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_permission(user, permission_name):
"""Check if a user has a given permission.""" |
if user and user.is_superuser:
return True
return permission_name in available_perm_names(user) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_object_permission(checker_name, user, obj):
"""Check if a user has permission to perform an action on an object.""" |
if user and user.is_superuser:
return True
checker = PermissionsManager.retrieve_checker(checker_name)
user_roles = get_user_roles(user)
if not user_roles:
user_roles = [None]
return any([checker(user_role, user, obj) for user_role in user_roles]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_or_create_permission(codename, name=camel_or_snake_to_title):
""" Get a Permission object from a permission name. @:param codename: permission code name @:param name: human-readable permissions name (str) or callable that takes codename as argument and returns str """ |
user_ct = ContentType.objects.get_for_model(get_user_model())
return Permission.objects.get_or_create(content_type=user_ct, codename=codename,
defaults={'name': name(codename) if callable(name) else name}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_user_roles(user):
"""Get a list of a users's roles.""" |
if user:
groups = user.groups.all() # Important! all() query may be cached on User with prefetch_related.
roles = (RolesManager.retrieve_role(group.name) for group in groups if group.name in RolesManager.get_roles_names())
        return sorted(roles, key=lambda r: r.get_name())
else:
return [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_roles(user):
"""Remove all roles from a user.""" |
roles = get_user_roles(user)
for role in roles:
role.remove_role_from_user(user)
return roles |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def available_perm_status(user):
""" Get a boolean map of the permissions available to a user based on that user's roles. """ |
roles = get_user_roles(user)
permission_hash = {}
for role in roles:
permission_names = role.permission_names_list()
for permission_name in permission_names:
permission_hash[permission_name] = get_permission(
permission_name) in user.user_permissions.all()
return permission_hash |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grant_permission(user, permission_name):
""" Grant a user a specified permission. Permissions are only granted if they are in the scope any of the user's roles. If the permission is out of scope, a RolePermissionScopeException is raised. """ |
roles = get_user_roles(user)
for role in roles:
if permission_name in role.permission_names_list():
permission = get_permission(permission_name)
user.user_permissions.add(permission)
return
raise RolePermissionScopeException(
"This permission isn't in the scope of "
"any of this user's roles.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def revoke_permission(user, permission_name):
""" Revoke a specified permission from a user. Permissions are only revoked if they are in the scope any of the user's roles. If the permission is out of scope, a RolePermissionScopeException is raised. """ |
roles = get_user_roles(user)
for role in roles:
if permission_name in role.permission_names_list():
permission = get_permission(permission_name)
user.user_permissions.remove(permission)
return
raise RolePermissionScopeException(
"This permission isn't in the scope of "
"any of this user's roles.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def model_saved(sender, instance, created, raw, using, **kwargs):
""" Automatically triggers "created" and "updated" actions. """ |
opts = get_opts(instance)
model = '.'.join([opts.app_label, opts.object_name])
action = 'created' if created else 'updated'
distill_model_event(instance, model, action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def model_deleted(sender, instance, using, **kwargs):
""" Automatically triggers "deleted" actions. """ |
opts = get_opts(instance)
model = '.'.join([opts.app_label, opts.object_name])
distill_model_event(instance, model, 'deleted') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def raw_custom_event(sender, event_name, payload, user, send_hook_meta=True, instance=None, **kwargs):
""" Give a full payload """ |
HookModel = get_hook_model()
hooks = HookModel.objects.filter(user=user, event=event_name)
for hook in hooks:
new_payload = payload
if send_hook_meta:
new_payload = {
'hook': hook.dict(),
'data': payload
}
hook.deliver_hook(instance, payload_override=new_payload) |
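The only branching in `raw_custom_event` is the meta-wrapping step: when `send_hook_meta` is set, the payload is nested under `data` alongside the hook's own serialized form. A minimal sketch of just that wrapping, with the hook represented by a plain dict:

```python
def wrap_payload(hook_dict, payload, send_hook_meta=True):
    # With meta enabled, deliver {'hook': ..., 'data': ...}; otherwise the raw payload.
    if send_hook_meta:
        return {"hook": hook_dict, "data": payload}
    return payload
```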
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean(self):
""" Validation for events. """ |
if self.event not in HOOK_EVENTS.keys():
raise ValidationError(
"Invalid hook event {evt}.".format(evt=self.event)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_module(path):
""" A modified duplicate from Django's built in backend retriever. slugify = get_module('django.template.defaultfilters.slugify') """ |
try:
from importlib import import_module
except ImportError:
from django.utils.importlib import import_module
try:
mod_name, func_name = path.rsplit('.', 1)
mod = import_module(mod_name)
except ImportError as e:
raise ImportError(
'Error importing alert function {0}: "{1}"'.format(mod_name, e))
try:
func = getattr(mod, func_name)
except AttributeError:
raise ImportError(
('Module "{0}" does not define a "{1}" function'
).format(mod_name, func_name))
return func |
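The same rsplit-then-getattr pattern can be exercised against the standard library; this sketch mirrors `get_module` without the Django fallback or custom error messages:

```python
from importlib import import_module

def load_attr(path):
    # Split 'pkg.mod.attr' into a module path and an attribute name,
    # import the module, then pull the attribute off it.
    mod_name, attr_name = path.rsplit('.', 1)
    mod = import_module(mod_name)
    return getattr(mod, attr_name)
```

For example, `load_attr('math.sqrt')` returns the `math.sqrt` function itself.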
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_hook_model():
""" Returns the Custom Hook model if defined in settings, otherwise the default Hook model. """ |
from rest_hooks.models import Hook
HookModel = Hook
if getattr(settings, 'HOOK_CUSTOM_MODEL', None):
HookModel = get_module(settings.HOOK_CUSTOM_MODEL)
return HookModel |
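The override pattern in `get_hook_model` is just `getattr` with a default against the settings object; a framework-free sketch (the `Settings` stand-in and `resolve` callback are hypothetical, replacing `django.conf.settings` and `get_module`):

```python
class Settings:
    pass  # stand-in for django.conf.settings

def get_model(settings, default_model, resolve=lambda path: path):
    # Use the custom model path if configured, else the default class.
    custom = getattr(settings, 'HOOK_CUSTOM_MODEL', None)
    if custom:
        return resolve(custom)
    return default_model
```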
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_and_fire_hook(event_name, instance, user_override=None):
""" Look up Hooks that apply """ |
try:
from django.contrib.auth import get_user_model
User = get_user_model()
except ImportError:
from django.contrib.auth.models import User
from rest_hooks.models import HOOK_EVENTS
if event_name not in HOOK_EVENTS.keys():
raise Exception(
'"{}" does not exist in `settings.HOOK_EVENTS`.'.format(event_name)
)
filters = {'event': event_name}
# Ignore the user if the user_override is False
if user_override is not False:
if user_override:
filters['user'] = user_override
elif hasattr(instance, 'user'):
filters['user'] = instance.user
elif isinstance(instance, User):
filters['user'] = instance
else:
raise Exception(
'{} has no `user` property. REST Hooks needs this.'.format(repr(instance))
)
# NOTE: This is probably up for discussion, but I think, in this
# case, instead of raising an error, we should fire the hook for
# all users/accounts it is subscribed to. That would be a genuine
# usecase rather than erroring because no user is associated with
# this event.
HookModel = get_hook_model()
hooks = HookModel.objects.filter(**filters)
for hook in hooks:
hook.deliver_hook(instance) |
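Stripped of the ORM, the lookup above reduces to validating the event name, building a filter, and delivering to every matching hook. A pure-Python sketch with hooks as dicts (names hypothetical; returning the matches stands in for calling `deliver_hook`):

```python
def fire_hooks(hooks, event_name, known_events, user=None):
    # Refuse unknown events up front, mirroring the HOOK_EVENTS check.
    if event_name not in known_events:
        raise ValueError(
            '"{}" does not exist in HOOK_EVENTS.'.format(event_name))
    fired = []
    for hook in hooks:
        if hook['event'] != event_name:
            continue
        # user=None plays the role of user_override=False: no user filter.
        if user is not None and hook['user'] != user:
            continue
        fired.append(hook)  # stands in for hook.deliver_hook(instance)
    return fired
```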
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_plot_data(data, ndim=None):
"""Get plot data out of an input object Parameters data : array-like, `phate.PHATE` or `scanpy.AnnData` ndim : int, optional (default: None) Minimum number of dimensions """ |
out = data
if isinstance(data, PHATE):
out = data.transform()
else:
try:
if isinstance(data, anndata.AnnData):
try:
out = data.obsm['X_phate']
except KeyError:
raise RuntimeError(
"data.obsm['X_phate'] not found. "
"Please run `sc.tl.phate(adata)` before plotting.")
except NameError:
# anndata not installed
pass
if ndim is not None and out[0].shape[0] < ndim:
if isinstance(data, PHATE):
data.set_params(n_components=ndim)
out = data.transform()
else:
raise ValueError(
"Expected at least {}-dimensional data, got {}".format(
ndim, out[0].shape[0]))
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scatter(x, y, z=None, c=None, cmap=None, s=None, discrete=None, ax=None, legend=None, figsize=None, xticks=False, yticks=False, zticks=False, xticklabels=True, yticklabels=True, zticklabels=True, label_prefix="PHATE", xlabel=None, ylabel=None, zlabel=None, title=None, legend_title="", legend_loc='best', filename=None, dpi=None, **plot_kwargs):
"""Create a scatter plot Builds upon `matplotlib.pyplot.scatter` with nice defaults and handles categorical colors / legends better. For easy access, use `scatter2d` or `scatter3d`. Parameters x : list-like data for x axis y : list-like data for y axis z : list-like, optional (default: None) data for z axis c : list-like or None, optional (default: None) Color vector. Can be a single color value (RGB, RGBA, or named matplotlib colors), an array of these of length n_samples, or a list of discrete or continuous values of any data type. If `c` is not a single or list of matplotlib colors, the values in `c` will be used to populate the legend / colorbar with colors from `cmap` cmap : `matplotlib` colormap, str, dict or None, optional (default: None) matplotlib colormap. If None, uses `tab20` for discrete data and `inferno` for continuous data. If a dictionary, expects one key for every unique value in `c`, where values are valid matplotlib colors (hsv, rbg, rgba, or named colors) s : float, optional (default: 1) Point size. discrete : bool or None, optional (default: None) If True, the legend is categorical. If False, the legend is a colorbar. If None, discreteness is detected automatically. Data containing non-numeric `c` is always discrete, and numeric data with 20 or less unique values is discrete. ax : `matplotlib.Axes` or None, optional (default: None) axis on which to plot. If None, an axis is created legend : bool, optional (default: True) States whether or not to create a legend. If data is continuous, the legend is a colorbar. figsize : tuple, optional (default: None) Tuple of floats for creation of new `matplotlib` figure. Only used if `ax` is None. xticks : True, False, or list-like (default: False) If True, keeps default x ticks. If False, removes x ticks. If a list, sets custom x ticks yticks : True, False, or list-like (default: False) If True, keeps default y ticks. If False, removes y ticks. 
If a list, sets custom y ticks zticks : True, False, or list-like (default: False) If True, keeps default z ticks. If False, removes z ticks. If a list, sets custom z ticks. Only used for 3D plots. xticklabels : True, False, or list-like (default: True) If True, keeps default x tick labels. If False, removes x tick labels. If a list, sets custom x tick labels yticklabels : True, False, or list-like (default: True) If True, keeps default y tick labels. If False, removes y tick labels. If a list, sets custom y tick labels zticklabels : True, False, or list-like (default: True) If True, keeps default z tick labels. If False, removes z tick labels. If a list, sets custom z tick labels. Only used for 3D plots. label_prefix : str or None (default: "PHATE") Prefix for all axis labels. Axes will be labelled `label_prefix`1, `label_prefix`2, etc. Can be overriden by setting `xlabel`, `ylabel`, and `zlabel`. xlabel : str or None (default : None) Label for the x axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. ylabel : str or None (default : None) Label for the y axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. zlabel : str or None (default : None) Label for the z axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. Only used for 3D plots. title : str or None (default: None) axis title. If None, no title is set. legend_title : str (default: "") title for the colorbar of legend. Only used for discrete data. legend_loc : int or string or pair of floats, default: 'best' Matplotlib legend location. Only used for discrete data. See <https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html> for details. filename : str or None (default: None) file to which the output is saved dpi : int or None, optional (default: None) The resolution in dots per inch. 
If None it will default to the value savefig.dpi in the matplotlibrc file. If 'figure' it will set the dpi to be the value of the figure. Only used if filename is not None. **plot_kwargs : keyword arguments Extra arguments passed to `matplotlib.pyplot.scatter`. Returns ------- ax : `matplotlib.Axes` axis on which plot was drawn Examples -------- (2000, 100) (2000, 2) """ |
warnings.warn("`phate.plot.scatter` is deprecated. "
"Use `scprep.plot.scatter` instead.",
FutureWarning)
return scprep.plot.scatter(x=x, y=y, z=z,
c=c, cmap=cmap, s=s, discrete=discrete,
ax=ax, legend=legend, figsize=figsize,
xticks=xticks,
yticks=yticks,
zticks=zticks,
xticklabels=xticklabels,
yticklabels=yticklabels,
zticklabels=zticklabels,
label_prefix=label_prefix,
xlabel=xlabel,
ylabel=ylabel,
zlabel=zlabel,
title=title,
legend_title=legend_title,
legend_loc=legend_loc,
filename=filename,
dpi=dpi,
**plot_kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scatter2d(data, **kwargs):
"""Create a 2D scatter plot Builds upon `matplotlib.pyplot.scatter` with nice defaults and handles categorical colors / legends better. Parameters data : array-like, shape=[n_samples, n_features] Input data. Only the first two components will be used. c : list-like or None, optional (default: None) Color vector. Can be a single color value (RGB, RGBA, or named matplotlib colors), an array of these of length n_samples, or a list of discrete or continuous values of any data type. If `c` is not a single or list of matplotlib colors, the values in `c` will be used to populate the legend / colorbar with colors from `cmap` cmap : `matplotlib` colormap, str, dict or None, optional (default: None) matplotlib colormap. If None, uses `tab20` for discrete data and `inferno` for continuous data. If a dictionary, expects one key for every unique value in `c`, where values are valid matplotlib colors (hsv, rbg, rgba, or named colors) s : float, optional (default: 1) Point size. discrete : bool or None, optional (default: None) If True, the legend is categorical. If False, the legend is a colorbar. If None, discreteness is detected automatically. Data containing non-numeric `c` is always discrete, and numeric data with 20 or less unique values is discrete. ax : `matplotlib.Axes` or None, optional (default: None) axis on which to plot. If None, an axis is created legend : bool, optional (default: True) States whether or not to create a legend. If data is continuous, the legend is a colorbar. figsize : tuple, optional (default: None) Tuple of floats for creation of new `matplotlib` figure. Only used if `ax` is None. xticks : True, False, or list-like (default: False) If True, keeps default x ticks. If False, removes x ticks. If a list, sets custom x ticks yticks : True, False, or list-like (default: False) If True, keeps default y ticks. If False, removes y ticks. If a list, sets custom y ticks zticks : True, False, or list-like (default: False) If True, keeps default z ticks. 
If False, removes z ticks. If a list, sets custom z ticks. Only used for 3D plots. xticklabels : True, False, or list-like (default: True) If True, keeps default x tick labels. If False, removes x tick labels. If a list, sets custom x tick labels yticklabels : True, False, or list-like (default: True) If True, keeps default y tick labels. If False, removes y tick labels. If a list, sets custom y tick labels zticklabels : True, False, or list-like (default: True) If True, keeps default z tick labels. If False, removes z tick labels. If a list, sets custom z tick labels. Only used for 3D plots. label_prefix : str or None (default: "PHATE") Prefix for all axis labels. Axes will be labelled `label_prefix`1, `label_prefix`2, etc. Can be overriden by setting `xlabel`, `ylabel`, and `zlabel`. xlabel : str or None (default : None) Label for the x axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. ylabel : str or None (default : None) Label for the y axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. zlabel : str or None (default : None) Label for the z axis. Overrides the automatic label given by label_prefix. If None and label_prefix is None, no label is set. Only used for 3D plots. title : str or None (default: None) axis title. If None, no title is set. legend_title : str (default: "") title for the colorbar of legend legend_loc : int or string or pair of floats, default: 'best' Matplotlib legend location. Only used for discrete data. See <https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html> for details. filename : str or None (default: None) file to which the output is saved dpi : int or None, optional (default: None) The resolution in dots per inch. If None it will default to the value savefig.dpi in the matplotlibrc file. If 'figure' it will set the dpi to be the value of the figure. Only used if filename is not None. 
**plot_kwargs : keyword arguments Extra arguments passed to `matplotlib.pyplot.scatter`. Returns ------- ax : `matplotlib.Axes` axis on which plot was drawn Examples -------- (2000, 100) (2000, 2) """ |
warnings.warn("`phate.plot.scatter2d` is deprecated. "
"Use `scprep.plot.scatter2d` instead.",
FutureWarning)
data = _get_plot_data(data, ndim=2)
return scprep.plot.scatter2d(data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rotate_scatter3d(data, filename=None, elev=30, rotation_speed=30, fps=10, ax=None, figsize=None, dpi=None, ipython_html="jshtml", **kwargs):
"""Create a rotating 3D scatter plot Builds upon `matplotlib.pyplot.scatter` with nice defaults and handles categorical colors / legends better. Parameters data : array-like, `phate.PHATE` or `scanpy.AnnData` Input data. Only the first three dimensions are used. filename : str, optional (default: None) If not None, saves a .gif or .mp4 with the output elev : float, optional (default: 30) Elevation of viewpoint from horizontal, in degrees rotation_speed : float, optional (default: 30) Speed of axis rotation, in degrees per second fps : int, optional (default: 10) Frames per second. Increase this for a smoother animation ax : `matplotlib.Axes` or None, optional (default: None) axis on which to plot. If None, an axis is created figsize : tuple, optional (default: None) Tuple of floats for creation of new `matplotlib` figure. Only used if `ax` is None. dpi : number, optional (default: None) Controls the dots per inch for the movie frames. This combined with the figure's size in inches controls the size of the movie. If None, defaults to rcParams["savefig.dpi"] ipython_html : {'html5', 'jshtml'} which html writer to use if using a Jupyter Notebook **kwargs : keyword arguments See :~func:`phate.plot.scatter3d`. Returns ------- ani : `matplotlib.animation.FuncAnimation` animation object Examples -------- (2000, 100) (2000, 2) """ |
warnings.warn("`phate.plot.rotate_scatter3d` is deprecated. "
"Use `scprep.plot.rotate_scatter3d` instead.",
FutureWarning)
return scprep.plot.rotate_scatter3d(data,
filename=filename,
elev=elev,
rotation_speed=rotation_speed,
fps=fps,
ax=ax,
figsize=figsize,
dpi=dpi,
ipython_html=ipython_html,
**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def kmeans(phate_op, k=8, random_state=None):
"""KMeans on the PHATE potential Clustering on the PHATE operator as introduced in Moon et al. This is similar to spectral clustering. Parameters phate_op : phate.PHATE Fitted PHATE operator k : int, optional (default: 8) Number of clusters random_state : int or None, optional (default: None) Random seed for k-means Returns ------- clusters : np.ndarray Integer array of cluster assignments """ |
if phate_op.graph is not None:
diff_potential = phate_op.calculate_potential()
if isinstance(phate_op.graph, graphtools.graphs.LandmarkGraph):
diff_potential = phate_op.graph.interpolate(diff_potential)
return cluster.KMeans(k, random_state=random_state).fit_predict(diff_potential)
else:
raise exceptions.NotFittedError(
"This PHATE instance is not fitted yet. Call "
"'fit' with appropriate arguments before "
"using this method.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cmdscale_fast(D, ndim):
"""Fast CMDS using random SVD Parameters D : array-like, input data [n_samples, n_dimensions] ndim : int, number of dimensions in which to embed `D` Returns ------- Y : array-like, embedded data [n_sample, ndim] """ |
tasklogger.log_debug("Performing classic MDS on {} of shape {}...".format(
type(D).__name__, D.shape))
D = D**2
D = D - D.mean(axis=0)[None, :]
D = D - D.mean(axis=1)[:, None]
pca = PCA(n_components=ndim, svd_solver='randomized')
Y = pca.fit_transform(D)
return Y |
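The two mean subtractions in `cmdscale_fast` implement double centering of the squared distances: after them, every column mean and then every row mean is zero, which is what lets randomized PCA recover the classic MDS embedding. A dependency-free sketch of just the centering step on a small square matrix:

```python
def double_center(D):
    # Square the distances, then subtract column means, then row means,
    # mirroring D**2, D - D.mean(axis=0), D - D.mean(axis=1) above.
    n = len(D)
    S = [[d * d for d in row] for row in D]
    col_means = [sum(S[i][j] for i in range(n)) / n for j in range(n)]
    S = [[S[i][j] - col_means[j] for j in range(n)] for i in range(n)]
    row_means = [sum(row) / n for row in S]
    return [[S[i][j] - row_means[i] for j in range(n)] for i in range(n)]
```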
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def embed_MDS(X, ndim=2, how='metric', distance_metric='euclidean', n_jobs=1, seed=None, verbose=0):
"""Performs classic, metric, and non-metric MDS Metric MDS is initialized using classic MDS, non-metric MDS is initialized using metric MDS. Parameters X: ndarray [n_samples, n_samples] 2 dimensional input data array with n_samples embed_MDS does not check for matrix squareness, but this is necessary for PHATE n_dim : int, optional, default: 2 number of dimensions in which the data will be embedded how : string, optional, default: 'classic' choose from ['classic', 'metric', 'nonmetric'] which MDS algorithm is used for dimensionality reduction distance_metric : string, optional, default: 'euclidean' choose from ['cosine', 'euclidean'] distance metric for MDS n_jobs : integer, optional, default: 1 The number of jobs to use for the computation. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one are used seed: integer or numpy.RandomState, optional The generator used to initialize SMACOF (metric, nonmetric) MDS If an integer is given, it fixes the seed Defaults to the global numpy random number generator Returns ------- Y : ndarray [n_samples, n_dim] low dimensional embedding of X using MDS """ |
if how not in ['classic', 'metric', 'nonmetric']:
raise ValueError("Allowable 'how' values for MDS: 'classic', "
"'metric', or 'nonmetric'. "
"'{}' was passed.".format(how))
# MDS embeddings, each gives a different output.
X_dist = squareform(pdist(X, distance_metric))
# initialize all by CMDS
Y = cmdscale_fast(X_dist, ndim)
if how in ['metric', 'nonmetric']:
tasklogger.log_debug("Performing metric MDS on "
"{} of shape {}...".format(type(X_dist),
X_dist.shape))
# Metric MDS from sklearn
Y, _ = smacof(X_dist, n_components=ndim, metric=True, max_iter=3000,
eps=1e-6, random_state=seed, n_jobs=n_jobs,
n_init=1, init=Y, verbose=verbose)
if how == 'nonmetric':
tasklogger.log_debug(
"Performing non-metric MDS on "
"{} of shape {}...".format(type(X_dist),
X_dist.shape))
# Nonmetric MDS from sklearn using metric MDS as an initialization
        Y, _ = smacof(X_dist, n_components=ndim, metric=False, max_iter=3000,
eps=1e-6, random_state=seed, n_jobs=n_jobs,
n_init=1, init=Y, verbose=verbose)
return Y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute_von_neumann_entropy(data, t_max=100):
""" Determines the Von Neumann entropy of data at varying matrix powers. The user should select a value of t around the "knee" of the entropy curve. Parameters t_max : int, default: 100 Maximum value of t to test Returns ------- entropy : array, shape=[t_max] The entropy of the diffusion affinities for each value of t Examples -------- 23 """ |
_, eigenvalues, _ = svd(data)
entropy = []
eigenvalues_t = np.copy(eigenvalues)
for _ in range(t_max):
prob = eigenvalues_t / np.sum(eigenvalues_t)
prob = prob + np.finfo(float).eps
entropy.append(-np.sum(prob * np.log(prob)))
eigenvalues_t = eigenvalues_t * eigenvalues
return np.array(entropy)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_positive(**params):
"""Check that parameters are positive as expected Raises ------ ValueError : unacceptable choice of parameters """ |
for p in params:
if not isinstance(params[p], numbers.Number) or params[p] <= 0:
raise ValueError(
"Expected {} > 0, got {}".format(p, params[p])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_int(**params):
"""Check that parameters are integers as expected Raises ------ ValueError : unacceptable choice of parameters """ |
for p in params:
if not isinstance(params[p], numbers.Integral):
raise ValueError(
"Expected {} integer, got {}".format(p, params[p])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_if_not(x, *checks, **params):
"""Run checks only if parameters are not equal to a specified value Parameters x : excepted value Checks not run if parameters equal x checks : function Unnamed arguments, check functions to be run params : object Named arguments, parameters to be checked Raises ------ ValueError : unacceptable choice of parameters """ |
for p in params:
if params[p] is not x and params[p] != x:
[check(**{p: params[p]}) for check in checks] |
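The sentinel skip is the subtle part of `check_if_not`: checks run only when a parameter differs from `x`, so values like `None` or `'auto'` flow through untouched. A self-contained sketch (the function body is restated, with a toy check in place of the real validators):

```python
def check_if_not(x, *checks, **params):
    # Run every check, but only for parameters not equal to the sentinel x.
    for p in params:
        if params[p] is not x and params[p] != x:
            for check in checks:
                check(**{p: params[p]})

def must_be_positive(**params):
    # Toy stand-in for utils.check_positive.
    for p, v in params.items():
        if v <= 0:
            raise ValueError("Expected {} > 0, got {}".format(p, v))
```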
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_in(choices, **params):
"""Checks parameters are in a list of allowed parameters Parameters choices : array-like, accepted values params : object Named arguments, parameters to be checked Raises ------ ValueError : unacceptable choice of parameters """ |
for p in params:
if params[p] not in choices:
raise ValueError(
"{} value {} not recognized. Choose from {}".format(
p, params[p], choices)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_between(v_min, v_max, **params):
"""Checks parameters are in a specified range Parameters v_min : float, minimum allowed value (inclusive) v_max : float, maximum allowed value (inclusive) params : object Named arguments, parameters to be checked Raises ------ ValueError : unacceptable choice of parameters """ |
for p in params:
if params[p] < v_min or params[p] > v_max:
raise ValueError("Expected {} between {} and {}, "
"got {}".format(p, v_min, v_max, params[p])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def matrix_is_equivalent(X, Y):
""" Checks matrix equivalence with numpy, scipy and pandas """ |
return X is Y or (isinstance(X, Y.__class__) and X.shape == Y.shape and
np.sum((X != Y).sum()) == 0) |
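The identity shortcut (`X is Y`) lets `transform` skip an elementwise comparison when the same object is passed back in. A list-based sketch of the same idea, without the numpy-specific `.shape` and summed boolean mask:

```python
def is_equivalent(X, Y):
    # Same object, or same type and length with equal entries.
    return X is Y or (isinstance(X, Y.__class__) and
                      len(X) == len(Y) and X == Y)
```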
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def diff_op(self):
"""The diffusion operator calculated from the data """ |
if self.graph is not None:
if isinstance(self.graph, graphtools.graphs.LandmarkGraph):
diff_op = self.graph.landmark_op
else:
diff_op = self.graph.diff_op
if sparse.issparse(diff_op):
diff_op = diff_op.toarray()
return diff_op
else:
raise NotFittedError("This PHATE instance is not fitted yet. Call "
"'fit' with appropriate arguments before "
"using this method.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_params(self):
"""Check PHATE parameters This allows us to fail early - otherwise certain unacceptable parameter choices, such as mds='mmds', would only fail after minutes of runtime. Raises ------ ValueError : unacceptable choice of parameters """ |
utils.check_positive(n_components=self.n_components,
k=self.knn)
utils.check_int(n_components=self.n_components,
k=self.knn,
n_jobs=self.n_jobs)
utils.check_between(0, 1, gamma=self.gamma)
utils.check_if_not(None, utils.check_positive, a=self.decay)
utils.check_if_not(None, utils.check_positive, utils.check_int,
n_landmark=self.n_landmark,
n_pca=self.n_pca)
utils.check_if_not('auto', utils.check_positive, utils.check_int,
t=self.t)
if not callable(self.knn_dist):
utils.check_in(['euclidean', 'precomputed', 'cosine',
'correlation', 'cityblock', 'l1', 'l2',
'manhattan', 'braycurtis', 'canberra',
'chebyshev', 'dice', 'hamming', 'jaccard',
'kulsinski', 'mahalanobis', 'matching',
'minkowski', 'rogerstanimoto', 'russellrao',
'seuclidean', 'sokalmichener', 'sokalsneath',
'sqeuclidean', 'yule',
'precomputed_affinity', 'precomputed_distance'],
knn_dist=self.knn_dist)
if not callable(self.mds_dist):
utils.check_in(['euclidean', 'cosine', 'correlation', 'braycurtis',
'canberra', 'chebyshev', 'cityblock', 'dice',
'hamming', 'jaccard', 'kulsinski', 'mahalanobis',
'matching', 'minkowski', 'rogerstanimoto',
'russellrao', 'seuclidean', 'sokalmichener',
'sokalsneath', 'sqeuclidean', 'yule'],
mds_dist=self.mds_dist)
utils.check_in(['classic', 'metric', 'nonmetric'],
mds=self.mds) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit(self, X):
"""Computes the diffusion operator Parameters X : array, shape=[n_samples, n_features] input data with `n_samples` samples and `n_dimensions` dimensions. Accepted data types: `numpy.ndarray`, `scipy.sparse.spmatrix`, `pd.DataFrame`, `anndata.AnnData`. If `knn_dist` is 'precomputed', `data` should be a n_samples x n_samples distance or affinity matrix Returns ------- phate_operator : PHATE The estimator object """ |
X, n_pca, precomputed, update_graph = self._parse_input(X)
if precomputed is None:
tasklogger.log_info(
"Running PHATE on {} cells and {} genes.".format(
X.shape[0], X.shape[1]))
else:
tasklogger.log_info(
"Running PHATE on precomputed {} matrix with {} cells.".format(
precomputed, X.shape[0]))
if self.n_landmark is None or X.shape[0] <= self.n_landmark:
n_landmark = None
else:
n_landmark = self.n_landmark
if self.graph is not None and update_graph:
self._update_graph(X, precomputed, n_pca, n_landmark)
self.X = X
if self.graph is None:
tasklogger.log_start("graph and diffusion operator")
self.graph = graphtools.Graph(
X,
n_pca=n_pca,
n_landmark=n_landmark,
distance=self.knn_dist,
precomputed=precomputed,
knn=self.knn,
decay=self.decay,
thresh=1e-4,
n_jobs=self.n_jobs,
verbose=self.verbose,
random_state=self.random_state,
**(self.kwargs))
tasklogger.log_complete("graph and diffusion operator")
# landmark op doesn't build unless forced
self.diff_op
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform(self, X=None, t_max=100, plot_optimal_t=False, ax=None):
"""Computes the position of the cells in the embedding space Parameters X : array, optional, shape=[n_samples, n_features] input data with `n_samples` samples and `n_dimensions` dimensions. Not required, since PHATE does not currently embed cells not given in the input matrix to `PHATE.fit()`. Accepted data types: `numpy.ndarray`, `scipy.sparse.spmatrix`, `pd.DataFrame`, `anndata.AnnData`. If `knn_dist` is 'precomputed', `data` should be a n_samples x n_samples distance or affinity matrix t_max : int, optional, default: 100 maximum t to test if `t` is set to 'auto' plot_optimal_t : boolean, optional, default: False If true and `t` is set to 'auto', plot the Von Neumann entropy used to select t ax : matplotlib.axes.Axes, optional If given and `plot_optimal_t` is true, plot will be drawn on the given axis. Returns ------- embedding : array, shape=[n_samples, n_dimensions] The cells embedded in a lower dimensional space using PHATE """ |
if self.graph is None:
raise NotFittedError("This PHATE instance is not fitted yet. Call "
"'fit' with appropriate arguments before "
"using this method.")
elif X is not None and not utils.matrix_is_equivalent(X, self.X):
# fit to external data
warnings.warn("Pre-fit PHATE cannot be used to transform a "
"new data matrix. Please fit PHATE to the new"
" data by running 'fit' with the new data.",
RuntimeWarning)
if isinstance(self.graph, graphtools.graphs.TraditionalGraph) and \
self.graph.precomputed is not None:
raise ValueError("Cannot transform additional data using a "
"precomputed distance matrix.")
else:
transitions = self.graph.extend_to_data(X)
return self.graph.interpolate(self.embedding,
transitions)
else:
diff_potential = self.calculate_potential(
t_max=t_max, plot_optimal_t=plot_optimal_t, ax=ax)
if self.embedding is None:
tasklogger.log_start("{} MDS".format(self.mds))
self.embedding = mds.embed_MDS(
diff_potential, ndim=self.n_components, how=self.mds,
distance_metric=self.mds_dist, n_jobs=self.n_jobs,
seed=self.random_state, verbose=max(self.verbose - 1, 0))
tasklogger.log_complete("{} MDS".format(self.mds))
if isinstance(self.graph, graphtools.graphs.LandmarkGraph):
tasklogger.log_debug("Extending to original data...")
return self.graph.interpolate(self.embedding)
else:
return self.embedding |