<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def z(self):
"Day of the year, i.e. '1' to '366'"
doy = self.year_days[self.data.month] + self.data.day
if self.L() and self.data.month > 2:
doy += 1
return doy |
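A self-contained sketch of the same day-of-year logic. The class's `year_days` table and `L()` leap-year check are not shown, so `YEAR_DAYS` and `is_leap` below are assumptions that mirror what they presumably do:

```python
import datetime

# Cumulative day counts before each month (index 1..12), non-leap year.
YEAR_DAYS = [None, 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334]

def is_leap(year):
    """Gregorian leap-year rule (stand-in for self.L())."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def day_of_year(year, month, day):
    """Day of the year, mirroring the z() method above."""
    doy = YEAR_DAYS[month] + day
    if is_leap(year) and month > 2:
        doy += 1
    return doy

# Cross-check against the standard library
assert day_of_year(2020, 3, 1) == datetime.date(2020, 3, 1).timetuple().tm_yday
```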
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def print_metric(name, count, elapsed):
"""A metric function that prints to standard output :arg str name: name of the metric :arg int count: number of items :arg float elapsed: time in seconds """ |
_do_print(name, count, elapsed, file=sys.stdout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stderr_metric(name, count, elapsed):
"""A metric function that prints to standard error :arg str name: name of the metric :arg int count: number of items :arg float elapsed: time in seconds """ |
_do_print(name, count, elapsed, file=sys.stderr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_multi_metric(*metrics):
"""Make a new metric function that calls the supplied metrics :arg functions metrics: metric functions :rtype: function """ |
def multi_metric(name, count, elapsed):
"""Calls multiple metrics (closure)"""
for m in metrics:
m(name, count, elapsed)
return multi_metric |
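A usage sketch of `make_multi_metric`. The `_do_print` helper is not shown in this excerpt, so the two metric functions below are simplified stand-ins that print directly:

```python
import sys

def print_metric(name, count, elapsed):
    """Simplified stand-in for the _do_print-based stdout metric."""
    print("%s: %d items in %.3fs" % (name, count, elapsed), file=sys.stdout)

def stderr_metric(name, count, elapsed):
    """Simplified stand-in for the _do_print-based stderr metric."""
    print("%s: %d items in %.3fs" % (name, count, elapsed), file=sys.stderr)

def make_multi_metric(*metrics):
    """Fan a single metric call out to several metric functions."""
    def multi_metric(name, count, elapsed):
        for m in metrics:
            m(name, count, elapsed)
    return multi_metric

both = make_multi_metric(print_metric, stderr_metric)
both("parse", 100, 0.25)  # reported on stdout and stderr
```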
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_orphan(scc, graph):
""" Return False iff the given scc is reachable from elsewhere. """ |
return all(p in scc for v in scc for p in graph.parents(v)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def key_cycles():
""" Collect cyclic garbage, and return the strongly connected components that were keeping the garbage alive. """ |
graph = garbage()
sccs = graph.strongly_connected_components()
return [scc for scc in sccs if _is_orphan(scc, graph)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _run_command(self, command, **kwargs):
"""Wrapper to pass command to plowshare. :param command: The command to pass to plowshare. :type command: str :param **kwargs: Additional keywords passed into :type **kwargs: dict :returns: Object containing either output of plowshare command or an error message. :rtype: dict :raises: Exception """ |
try:
return {'output': subprocess.check_output(command, **kwargs)}
except Exception as e:
return {'error': str(e)} |
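The wrapper's behavior can be seen with a standalone version, run against the Python interpreter itself so the example stays portable (plowshare is not required):

```python
import subprocess
import sys

def run_command(command, **kwargs):
    """Standalone sketch of _run_command: wrap check_output, return a dict instead of raising."""
    try:
        return {'output': subprocess.check_output(command, **kwargs)}
    except Exception as e:
        return {'error': str(e)}

ok = run_command([sys.executable, "-c", "print('hi')"])
assert ok['output'].strip() == b'hi'

bad = run_command([sys.executable, "-c", "import sys; sys.exit(2)"])
assert 'error' in bad  # CalledProcessError captured as a message
```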
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _filter_sources(self, sources):
"""Remove sources with errors and return ordered by host success. :param sources: List of potential sources to connect to. :type sources: list :returns: Sorted list of potential sources without errors. :rtype: list """ |
filtered, hosts = [], []
for source in sources:
if 'error' in source:
continue
filtered.append(source)
hosts.append(source['host_name'])
return sorted(filtered, key=lambda s:
self._hosts_by_success(hosts).index(s['host_name'])) |
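The `_hosts_by_success` method is not shown in this excerpt; a plausible sketch (ordering hosts by ascending error count) makes the filtering and ordering testable in isolation:

```python
def hosts_by_success(host_errors, hosts):
    """Stand-in for _hosts_by_success: order hosts by ascending error count."""
    return sorted(hosts, key=lambda h: host_errors.get(h, 0))

def filter_sources(host_errors, sources):
    """Standalone sketch of _filter_sources."""
    filtered = [s for s in sources if 'error' not in s]
    hosts = [s['host_name'] for s in filtered]
    order = hosts_by_success(host_errors, hosts)
    return sorted(filtered, key=lambda s: order.index(s['host_name']))

sources = [
    {'host_name': 'a', 'url': 'u1'},
    {'host_name': 'b', 'error': 'timeout'},
    {'host_name': 'c', 'url': 'u2'},
]
# 'b' is dropped; 'c' (fewer past errors) is preferred over 'a'
assert [s['host_name'] for s in filter_sources({'a': 3, 'c': 0}, sources)] == ['c', 'a']
```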
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload(self, filename, number_of_hosts):
"""Upload the given file to the specified number of hosts. :param filename: The filename of the file to upload. :type filename: str :param number_of_hosts: The number of hosts to connect to. :type number_of_hosts: int :returns: A list of dicts with 'host_name' and 'url' keys for all successful uploads or an empty list if all uploads failed. :rtype: list """ |
return self.multiupload(filename, self.random_hosts(number_of_hosts)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self, sources, output_directory, filename):
"""Download a file from one of the provided sources The sources will be ordered by least amount of errors, so most successful hosts will be tried first. In case of failure, the next source will be attempted, until the first successful download is completed or all sources have been depleted. :param sources: A list of dicts with 'host_name' and 'url' keys. :type sources: list :param output_directory: Directory to save the downloaded file in. :type output_directory: str :param filename: Filename assigned to the downloaded file. :type filename: str :returns: A dict with 'host_name' and 'filename' keys if the download is successful, or an empty dict otherwise. :rtype: dict """ |
valid_sources = self._filter_sources(sources)
if not valid_sources:
return {'error': 'no valid sources'}
manager = Manager()
successful_downloads = manager.list([])
def f(source):
if not successful_downloads:
result = self.download_from_host(
source, output_directory, filename)
if 'error' in result:
self._host_errors[source['host_name']] += 1
else:
successful_downloads.append(result)
multiprocessing.dummy.Pool(len(valid_sources)).map(f, valid_sources)
return successful_downloads[0] if successful_downloads else {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download_from_host(self, source, output_directory, filename):
"""Download a file from a given host. This method renames the file to the given string. :param source: Dictionary containing information about host. :type source: dict :param output_directory: Directory to place output in. :type output_directory: str :param filename: The filename to rename to. :type filename: str :returns: Dictionary with information about downloaded file. :rtype: dict """ |
result = self._run_command(
["plowdown", source["url"], "-o",
output_directory, "--temp-rename"],
stderr=open("/dev/null", "w")
)
result['host_name'] = source['host_name']
if 'error' in result:
return result
temporary_filename = self.parse_output(
result['host_name'], result['output'])
result['filename'] = os.path.join(output_directory, filename)
result.pop('output')
os.rename(temporary_filename, result['filename'])
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multiupload(self, filename, hosts):
"""Upload file to multiple hosts simultaneously The upload will be attempted for each host until the optimal file redundancy is achieved (a percentage of successful uploads) or the host list is depleted. :param filename: The filename of the file to upload. :type filename: str :param hosts: A list of hosts as defined in the master host list. :type hosts: list :returns: A list of dicts with 'host_name' and 'url' keys for all successful uploads or an empty list if all uploads failed. :rtype: list """ |
manager = Manager()
successful_uploads = manager.list([])
def f(host):
if len(successful_uploads) / float(len(hosts)) < \
settings.MIN_FILE_REDUNDANCY:
# Optimal redundancy not achieved, keep going
result = self.upload_to_host(filename, host)
if 'error' in result:
self._host_errors[host] += 1
else:
successful_uploads.append(result)
multiprocessing.dummy.Pool(len(hosts)).map(
f, self._hosts_by_success(hosts))
return list(successful_uploads) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_to_host(self, filename, hostname):
"""Upload a file to the given host. This method relies on 'plowup' being installed on the system. If it succeeds, this method returns a dictionary with the host name, and the final URL. Otherwise, it returns a dictionary with the host name and an error flag. :param filename: The filename of the file to upload. :type filename: str :param hostname: The host you are uploading the file to. :type hostname: str :returns: Dictionary containing information about upload to host. :rtype: dict """ |
result = self._run_command(
["plowup", hostname, filename],
stderr=open("/dev/null", "w")
)
result['host_name'] = hostname
if 'error' not in result:
result['url'] = self.parse_output(hostname, result.pop('output'))
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_output(self, hostname, output):
"""Parse plowup's output. For now, we just return the last line. :param hostname: Name of host you are working with. :type hostname: str :param output: Dictionary containing information about a plowshare action. :type output: dict :returns: Parsed and decoded output list. :rtype: list """ |
if isinstance(output, bytes):
output = output.decode('utf-8')
return output.split()[-1] |
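A standalone version of `parse_output` shows the "last token wins" behavior on typical plowup output (the sample output string below is illustrative, not real plowshare output):

```python
def parse_output(output):
    """Return the last whitespace-separated token, decoding bytes first."""
    if isinstance(output, bytes):
        output = output.decode('utf-8')
    return output.split()[-1]

assert parse_output(b"uploading...\nhttp://host/abc123\n") == "http://host/abc123"
```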
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _generate_queues(queues, exchange, platform_queue):
""" Queues known by this worker """ |
return set([
Queue('celery', exchange, routing_key='celery'),
Queue(platform_queue, exchange, routing_key='#'),
] + [
Queue(q_name, exchange, routing_key=q_name)
for q_name in queues
]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _erf(x):
""" Port of cephes ``ndtr.c`` ``erf`` function. See https://github.com/jeremybarnes/cephes/blob/master/cprob/ndtr.c """ |
T = [
9.60497373987051638749E0,
9.00260197203842689217E1,
2.23200534594684319226E3,
7.00332514112805075473E3,
5.55923013010394962768E4,
]
U = [
3.35617141647503099647E1,
5.21357949780152679795E2,
4.59432382970980127987E3,
2.26290000613890934246E4,
4.92673942608635921086E4,
]
# Shortcut special cases
if x == 0:
return 0
if x >= MAXVAL:
return 1
if x <= -MAXVAL:
return -1
if abs(x) > 1:
return 1 - erfc(x)
z = x * x
return x * _polevl(z, T, 4) / _p1evl(z, U, 5) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _erfc(a):
""" Port of cephes ``ndtr.c`` ``erfc`` function. See https://github.com/jeremybarnes/cephes/blob/master/cprob/ndtr.c """ |
# approximation for abs(a) < 8 and abs(a) >= 1
P = [
2.46196981473530512524E-10,
5.64189564831068821977E-1,
7.46321056442269912687E0,
4.86371970985681366614E1,
1.96520832956077098242E2,
5.26445194995477358631E2,
9.34528527171957607540E2,
1.02755188689515710272E3,
5.57535335369399327526E2,
]
Q = [
1.32281951154744992508E1,
8.67072140885989742329E1,
3.54937778887819891062E2,
9.75708501743205489753E2,
1.82390916687909736289E3,
2.24633760818710981792E3,
1.65666309194161350182E3,
5.57535340817727675546E2,
]
# approximation for abs(a) >= 8
R = [
5.64189583547755073984E-1,
1.27536670759978104416E0,
5.01905042251180477414E0,
6.16021097993053585195E0,
7.40974269950448939160E0,
2.97886665372100240670E0,
]
S = [
2.26052863220117276590E0,
9.39603524938001434673E0,
1.20489539808096656605E1,
1.70814450747565897222E1,
9.60896809063285878198E0,
3.36907645100081516050E0,
]
# Shortcut special cases
if a == 0:
return 1
if a >= MAXVAL:
return 0
if a <= -MAXVAL:
return 2
x = a
if a < 0:
x = -a
# computationally cheaper to calculate erf for small values, I guess.
if x < 1:
return 1 - erf(a)
z = -a * a
z = math.exp(z)
if x < 8:
p = _polevl(x, P, 8)
q = _p1evl(x, Q, 8)
else:
p = _polevl(x, R, 5)
q = _p1evl(x, S, 6)
y = (z * p) / q
if a < 0:
y = 2 - y
return y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def erfinv(z):
""" Calculate the inverse error function at point ``z``. This is a direct port of the SciPy ``erfinv`` function, originally written in C.

    Parameters
    ----------
    z : numeric

    Returns
    -------
    float

    References
    ----------
    + https://en.wikipedia.org/wiki/Error_function#Inverse_functions
    + http://functions.wolfram.com/GammaBetaErf/InverseErf/

    Examples
    --------
    0.088855990494
    0.476936276204
    -0.476936276204
    1.38590382435
    0.3
    0.5
    0
    inf
    -inf
    """ |
if abs(z) > 1:
raise ValueError("`z` must be between -1 and 1 inclusive")
# Shortcut special cases
if z == 0:
return 0
if z == 1:
return inf
if z == -1:
return -inf
# otherwise calculate things.
return _ndtri((z + 1) / 2.0) / math.sqrt(2) |
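The `_ndtri` helper (inverse of the standard normal CDF) is not shown in this excerpt. A hedged sketch substitutes `statistics.NormalDist.inv_cdf` (Python 3.8+) in its place, which makes the `erfinv(z) = ndtri((z+1)/2) / sqrt(2)` relationship directly runnable:

```python
import math
from statistics import NormalDist

def erfinv(z):
    """Sketch of erfinv via the standard normal quantile (ndtri equivalent)."""
    if abs(z) > 1:
        raise ValueError("`z` must be between -1 and 1 inclusive")
    if z == 0:
        return 0.0
    if z == 1:
        return math.inf
    if z == -1:
        return -math.inf
    # NormalDist().inv_cdf plays the role of _ndtri here
    return NormalDist().inv_cdf((z + 1) / 2.0) / math.sqrt(2)

# Round trip: erf(erfinv(z)) should recover z
assert abs(math.erf(erfinv(0.3)) - 0.3) < 1e-9
```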
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cmap(name, lut=None):
""" Returns the specified colormap.

    Parameters
    ----------
    name: str or :class:`matplotlib.colors.Colormap`
        If a colormap, it is returned unchanged. %(cmap_note)s
    lut: int
        An integer giving the number of entries desired in the lookup table

    Returns
    -------
    matplotlib.colors.Colormap
        The colormap specified by `name`

    See Also
    --------
    show_colormaps: A function to display all available colormaps

    Notes
    -----
    Different from the :func:`matplotlib.pyplot.get_cmap` function, this
    function changes the number of colors if `name` is a
    :class:`matplotlib.colors.Colormap` instance to match the given `lut`.""" |
if name in rcParams['colors.cmaps']:
colors = rcParams['colors.cmaps'][name]
lut = lut or len(colors)
return FixedColorMap.from_list(name=name, colors=colors, N=lut)
elif name in _cmapnames:
colors = _cmapnames[name]
lut = lut or len(colors)
return FixedColorMap.from_list(name=name, colors=colors, N=lut)
else:
cmap = mpl_get_cmap(name)
# Note: we could include the `lut` in the call of mpl_get_cmap, but
# this raises a ValueError for colormaps like 'viridis' in mpl version
# 1.5. Besides the mpl_get_cmap function does not modify the lut if
# it does not match
if lut is not None and cmap.N != lut:
cmap = FixedColorMap.from_list(
name=cmap.name, colors=cmap(np.linspace(0, 1, lut)), N=lut)
return cmap |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_cmaps(names):
"""Filter the given `names` for colormaps""" |
import matplotlib.pyplot as plt
available_cmaps = list(
chain(plt.cm.cmap_d, _cmapnames, rcParams['colors.cmaps']))
names = list(names)
wrongs = []
for arg in (arg for arg in names if (not isinstance(arg, Colormap) and
arg not in available_cmaps)):
if isinstance(arg, str):
similarkeys = get_close_matches(arg, available_cmaps)
if similarkeys != []:
warn("Colormap %s not found in standard colormaps.\n"
"Similar colormaps are %s." % (arg, ', '.join(similarkeys)))
else:
warn("Colormap %s not found in standard colormaps.\n"
"Run function without arguments to see all colormaps" % arg)
names.remove(arg)
wrongs.append(arg)
if not names and not wrongs:
names = sorted(m for m in available_cmaps if not m.endswith("_r"))
return names |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def show_colormaps(names=[], N=10, show=True, use_qt=None):
"""Function to show standard colormaps from pyplot

    Parameters
    ----------
    names: str or :class:`matplotlib.colors.Colormap`
        If a colormap, it is returned unchanged. %(cmap_note)s
    N: int, optional
        Default: 10. The number of increments in the colormap.
    show: bool, optional
        Default: True. If True, show the created figure at the end with
        pyplot.show(block=False)
    use_qt: bool
        If True, use the
        :class:`psy_simple.widgets.color.ColormapDialog.show_colormaps`, if
        False use a matplotlib implementation based on [1]_. If None, use
        the Qt implementation if it is running in the psyplot GUI.

    Returns
    -------
    psy_simple.widgets.color.ColormapDialog or matplotlib.figure.Figure
        Depending on `use_qt`, either an instance of the
        :class:`psy_simple.widgets.color.ColormapDialog` or the
        :class:`matplotlib.figure.Figure`

    References
    ----------
    .. [1] http://matplotlib.org/1.2.1/examples/pylab_examples/show_colormaps.html
    """ |
names = safe_list(names)
if use_qt or (use_qt is None and psyplot.with_gui):
from psy_simple.widgets.colors import ColormapDialog
from psyplot_gui.main import mainwindow
return ColormapDialog.show_colormap(names, N, show, parent=mainwindow)
import matplotlib.pyplot as plt
# This example comes from the Cookbook on www.scipy.org. According to the
# history, Andrew Straw did the conversion from an old page, but it is
# unclear who the original author is.
a = np.vstack((np.linspace(0, 1, 256).reshape(1, -1)))
# Get a list of the colormaps in matplotlib. Ignore the ones that end with
# '_r' because these are simply reversed versions of ones that don't end
# with '_r'
cmaps = _get_cmaps(names)
nargs = len(cmaps) + 1
fig = plt.figure(figsize=(5, 10))
fig.subplots_adjust(top=0.99, bottom=0.01, left=0.2, right=0.99)
for i, m in enumerate(cmaps):
ax = plt.subplot(nargs, 1, i+1)
plt.axis("off")
plt.pcolormesh(a, cmap=get_cmap(m, N + 1))
pos = list(ax.get_position().bounds)
fig.text(pos[0] - 0.01, pos[1], m, fontsize=10,
horizontalalignment='right')
fig.canvas.set_window_title("Figure %i: Predefined colormaps" % fig.number)
if show:
plt.show(block=False)
return fig |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_stdout_logger(logging_level):
""" create a logger to stdout. This creates logger for a series of module we would like to log information on. """ |
out_hdlr = logging.StreamHandler(sys.stdout)
out_hdlr.setFormatter(logging.Formatter(
'[%(asctime)s] %(message)s', "%H:%M:%S"
))
out_hdlr.setLevel(logging_level)
for name in LOGGING_NAMES:
log = logging.getLogger(name)
log.addHandler(out_hdlr)
log.setLevel(logging_level) |
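A runnable sketch of the same handler wiring. The `LOGGING_NAMES` constant is not shown in this excerpt, so the names below are hypothetical placeholders:

```python
import logging
import sys

LOGGING_NAMES = ["myapp", "myapp.worker"]  # hypothetical logger names

def create_stdout_logger(logging_level):
    """Standalone version of _create_stdout_logger."""
    out_hdlr = logging.StreamHandler(sys.stdout)
    out_hdlr.setFormatter(logging.Formatter(
        '[%(asctime)s] %(message)s', "%H:%M:%S"
    ))
    out_hdlr.setLevel(logging_level)
    for name in LOGGING_NAMES:
        log = logging.getLogger(name)
        log.addHandler(out_hdlr)
        log.setLevel(logging_level)

create_stdout_logger(logging.INFO)
logging.getLogger("myapp").info("hello")  # printed to stdout with a timestamp
```

Note that the same handler instance is shared by all named loggers, so they produce identically formatted output.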
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
"""Generate a PDF using the async method.""" |
docraptor = DocRaptor()
print("Create PDF")
resp = docraptor.create(
{
"document_content": "<h1>python-docraptor</h1><p>Async Test</p>",
"test": True,
"async": True,
}
)
print("Status ID: {status_id}".format(status_id=resp["status_id"]))
status_id = resp["status_id"]
resp = docraptor.status(status_id)
print(" {status}".format(status=resp["status"]))
while resp["status"] != "completed":
time.sleep(3)
resp = docraptor.status(status_id)
print(" {status}".format(status=resp["status"]))
print("Download to test_async.pdf")
with open("test_async.pdf", "wb") as pdf_file:
pdf_file.write(docraptor.download(resp["download_key"]).content)
print("[DONE]") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_alternate_types_resolving_forwardref_union_and_typevar(typ, _memo: List[Any] = None):
    """ Returns a tuple of all alternate types allowed by the `typ` type annotation.

    If typ is a TypeVar,
    * if the typevar is bound, return
      get_alternate_types_resolving_forwardref_union_and_typevar(bound)
    * if the typevar has constraints, return a tuple containing all the types listed in the constraints (with
      appropriate recursive call to get_alternate_types_resolving_forwardref_union_and_typevar for each of them)
    * otherwise return (object, )

    If typ is a Union, return a tuple containing all the types listed in the union (with appropriate recursive
    call to get_alternate_types_resolving_forwardref_union_and_typevar for each of them)

    If typ is a forward reference, it is evaluated and this method is applied to the results.

    Otherwise (typ, ) is returned.

    Note that this function automatically prevents infinite recursion through forward references such as in
    `A = Union[str, 'A']`, by keeping a _memo of already met symbols.

    :param typ:
    :return:
    """ |
# avoid infinite recursion by using a _memo
_memo = _memo or []
if typ in _memo:
return tuple()
# remember that this was already explored
_memo.append(typ)
if is_typevar(typ):
if hasattr(typ, '__bound__') and typ.__bound__ is not None:
# TypeVar is 'bound' to a class
if hasattr(typ, '__contravariant__') and typ.__contravariant__:
# Contravariant means that only super classes of this type are supported!
raise Exception('Contravariant TypeVars are not supported')
else:
# only subclasses of this are allowed (even if not covariant, because as of today we can't do otherwise)
return get_alternate_types_resolving_forwardref_union_and_typevar(typ.__bound__, _memo=_memo)
elif hasattr(typ, '__constraints__') and typ.__constraints__ is not None:
if hasattr(typ, '__contravariant__') and typ.__contravariant__:
# Contravariant means that only super classes of this type are supported!
raise Exception('Contravariant TypeVars are not supported')
else:
# TypeVar is 'constrained' to several alternate classes, meaning that subclasses of any of them are
# allowed (even if not covariant, because as of today we can't do otherwise)
return tuple(typpp for c in typ.__constraints__
for typpp in get_alternate_types_resolving_forwardref_union_and_typevar(c, _memo=_memo))
else:
# A non-parametrized TypeVar means 'any'
return object,
elif is_union_type(typ):
# do not use typ.__args__, it may be wrong
# the solution below works even in typevar+config cases such as u = Union[T, str][Optional[int]]
return tuple(t for typpp in get_args(typ, evaluate=True)
for t in get_alternate_types_resolving_forwardref_union_and_typevar(typpp, _memo=_memo))
elif is_forward_ref(typ):
return get_alternate_types_resolving_forwardref_union_and_typevar(resolve_forward_ref(typ), _memo=_memo)
else:
return typ, |
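A heavily reduced sketch of the same expansion using the standard `typing` introspection helpers (Python 3.8+). It covers Unions and TypeVars but omits forward references, contravariance checks, and the library-specific `get_args`/`is_union_type` helpers:

```python
import typing

T = typing.TypeVar('T', int, str)  # constrained TypeVar

def alternate_types(typ, _memo=None):
    """Expand Unions and TypeVars into a tuple of alternate types."""
    _memo = _memo or []
    if typ in _memo:
        return ()          # already explored: avoid infinite recursion
    _memo.append(typ)
    if isinstance(typ, typing.TypeVar):
        if typ.__bound__ is not None:
            return alternate_types(typ.__bound__, _memo)
        if typ.__constraints__:
            return tuple(t for c in typ.__constraints__
                         for t in alternate_types(c, _memo))
        return (object,)   # unparametrized TypeVar means 'any'
    if typing.get_origin(typ) is typing.Union:
        return tuple(t for a in typing.get_args(typ)
                     for t in alternate_types(a, _memo))
    return (typ,)

assert alternate_types(typing.Union[int, str]) == (int, str)
assert alternate_types(T) == (int, str)
```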
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def robust_isinstance(inst, typ) -> bool: """ Similar to isinstance, but if 'typ' is a parametrized generic Type, it is first transformed into its base generic class so that the instance check works. It is also robust to Union and Any. :param inst: :param typ: :return: """ |
if typ is Any:
return True
if is_typevar(typ):
if hasattr(typ, '__constraints__') and typ.__constraints__ is not None:
typs = get_args(typ, evaluate=True)
return any(robust_isinstance(inst, t) for t in typs)
elif hasattr(typ, '__bound__') and typ.__bound__ is not None:
return robust_isinstance(inst, typ.__bound__)
else:
# a raw TypeVar means 'anything'
return True
else:
if is_union_type(typ):
typs = get_args(typ, evaluate=True)
return any(robust_isinstance(inst, t) for t in typs)
else:
return isinstance(inst, get_base_generic_type(typ)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eval_forward_ref(typ: _ForwardRef):
""" Climbs the current stack until the given Forward reference has been resolved, or raises an InvalidForwardRefError :param typ: the forward reference to resolve :return: """ |
for frame in stack():
m = getmodule(frame[0])
m_name = m.__name__ if m is not None else '<unknown>'
if m_name.startswith('parsyfiles.tests') or not m_name.startswith('parsyfiles'):
try:
# print("File {}:{}".format(frame.filename, frame.lineno))
return typ._eval_type(frame[0].f_globals, frame[0].f_locals)
except NameError:
pass
raise InvalidForwardRefError(typ) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_valid_pep484_type_hint(typ_hint, allow_forward_refs: bool = False):
""" Returns True if the provided type is a valid PEP484 type hint, False otherwise. Note: string type hints (forward references) are not supported by default, since callers of this function in parsyfiles lib actually require them to be resolved already. :param typ_hint: :param allow_forward_refs: :return: """ |
# most common case first, to be faster
try:
if isinstance(typ_hint, type):
return True
except:
pass
# optionally, check forward reference
try:
if allow_forward_refs and is_forward_ref(typ_hint):
return True
except:
pass
# finally check unions and typevars
try:
return is_union_type(typ_hint) or is_typevar(typ_hint)
except:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_pep484_nonable(typ):
""" Checks if a given type is nonable, meaning that it explicitly or implicitly declares a Union with NoneType. Nested TypeVars and Unions are supported. :param typ: :return: """ |
# TODO rely on typing_inspect if there is an answer to https://github.com/ilevkivskyi/typing_inspect/issues/14
if typ is type(None):
return True
elif is_typevar(typ) or is_union_type(typ):
return any(is_pep484_nonable(tt) for tt in get_alternate_types_resolving_forwardref_union_and_typevar(typ))
else:
return False |
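A minimal standalone check of the same "nonable" idea, using only the standard `typing` module (Python 3.8+) instead of the library's `is_typevar`/`is_union_type` helpers:

```python
import typing

def is_nonable(typ):
    """True if typ is NoneType itself, or a Union that includes it."""
    if typ is type(None):
        return True
    if typing.get_origin(typ) is typing.Union:
        return any(is_nonable(a) for a in typing.get_args(typ))
    return False

assert is_nonable(typing.Optional[int])         # Optional[int] is Union[int, None]
assert not is_nonable(typing.Union[int, str])
```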
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_for_collection_items(item_type, hint):
""" Helper method for collection items :param item_type: :return: """ |
# this leads to infinite loops
# try:
# prt_type = get_pretty_type_str(item_type)
# except:
# prt_type = str(item_type)
return TypeInformationRequiredError("Cannot parse object of type {t} as a collection: this type has no valid "
"PEP484 type hint about its contents: found {h}. Please use a standard "
"PEP484 declaration such as Dict[str, Foo] or List[Foo]"
"".format(t=str(item_type), h=hint)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_for_object_attributes(item_type, faulty_attribute_name: str, hint):
""" Helper method for constructor attributes :param item_type: :return: """ |
# this leads to infinite loops
# try:
# prt_type = get_pretty_type_str(item_type)
# except:
# prt_type = str(item_type)
return TypeInformationRequiredError("Cannot create instances of type {t}: constructor attribute '{a}' has an"
" invalid PEP484 type hint: {h}.".format(t=str(item_type),
a=faulty_attribute_name, h=hint)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exception_class(self, exception):
"""Return a name representing the class of an exception.""" |
cls = type(exception)
if cls.__module__ == 'exceptions': # Built-in exception.
return cls.__name__
return "%s.%s" % (cls.__module__, cls.__name__) |
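The `'exceptions'` module check above only matches Python 2, where built-ins lived in that module; on Python 3 they live in `builtins`. A sketch covering both:

```python
def exception_class(exception):
    """Qualified name of an exception's class; bare name for built-ins."""
    cls = type(exception)
    if cls.__module__ in ('exceptions', 'builtins'):  # built-in (py2 / py3)
        return cls.__name__
    return "%s.%s" % (cls.__module__, cls.__name__)

assert exception_class(ValueError("boom")) == "ValueError"
```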
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request_info(self, request):
""" Return a dictionary of information for a given request. This will be run once for every request. """ |
# We have to re-resolve the request path here, because the information
# is not stored on the request.
view, args, kwargs = resolve(request.path)
for i, arg in enumerate(args):
kwargs[i] = arg
parameters = {}
parameters.update(kwargs)
parameters.update(request.POST.items())
environ = request.META
return {
"session": dict(request.session),
'cookies': dict(request.COOKIES),
'headers': dict(get_headers(environ)),
'env': dict(get_environ(environ)),
"remote_ip": request.META["REMOTE_ADDR"],
"parameters": parameters,
"action": view.__name__,
"application": view.__module__,
"method": request.method,
"url": request.build_absolute_uri()
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _save(self, hdf5, model, positives, negatives):
"""Saves the given intermediate state of the bootstrapping to file.""" |
# write the model and the training set indices to the given HDF5 file
hdf5.set("PositiveIndices", sorted(list(positives)))
hdf5.set("NegativeIndices", sorted(list(negatives)))
hdf5.create_group("Model")
hdf5.cd("Model")
model.save(hdf5)
del hdf5 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load(self, hdf5):
"""Loads the intermediate state of the bootstrapping from file.""" |
positives = set(hdf5.get("PositiveIndices"))
negatives = set(hdf5.get("NegativeIndices"))
hdf5.cd("Model")
model = bob.learn.boosting.BoostedMachine(hdf5)
return model, positives, negatives |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def undelay(self):
'''resolves all delayed arguments'''
i = 0
while i < len(self):
op = self[i]
i += 1
if hasattr(op, 'arg1'):
if isinstance(op.arg1,DelayedArg):
op.arg1 = op.arg1.resolve()
if isinstance(op.arg1,CodeBlock):
op.arg1.undelay() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_directorship_heads(self, val):
"""Get the head of a directorship Arguments: val -- the cn of the directorship """ |
__ldap_group_ou__ = "cn=groups,cn=accounts,dc=csh,dc=rit,dc=edu"
res = self.__con__.search_s(
__ldap_group_ou__,
ldap.SCOPE_SUBTREE,
"(cn=eboard-%s)" % val,
['member'])
ret = []
for member in res[0][1]['member']:
try:
ret.append(member.decode('utf-8'))
except UnicodeDecodeError:
ret.append(member)
except KeyError:
continue
return [CSHMember(self,
dn.split('=')[1].split(',')[0],
True)
for dn in ret] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def enqueue_mod(self, dn, mod):
"""Enqueue a LDAP modification. Arguments: dn -- the distinguished name of the object to modify mod -- an ldap modfication entry to enqueue """ |
# mark for update
if dn not in self.__pending_mod_dn__:
self.__pending_mod_dn__.append(dn)
self.__mod_queue__[dn] = []
self.__mod_queue__[dn].append(mod) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flush_mod(self):
"""Flush all pending LDAP modifications.""" |
for dn in self.__pending_mod_dn__:
try:
if self.__ro__:
for mod in self.__mod_queue__[dn]:
if mod[0] == ldap.MOD_DELETE:
mod_str = "DELETE"
elif mod[0] == ldap.MOD_ADD:
mod_str = "ADD"
else:
mod_str = "REPLACE"
print("{} VALUE {} = {} FOR {}".format(mod_str,
mod[1],
mod[2],
dn))
else:
self.__con__.modify_s(dn, self.__mod_queue__[dn])
except ldap.TYPE_OR_VALUE_EXISTS:
print("Error! Conflicting Batch Modification: %s"
% str(self.__mod_queue__[dn]))
continue
except ldap.NO_SUCH_ATTRIBUTE:
print("Error! Conflicting Batch Modification: %s"
% str(self.__mod_queue__[dn]))
continue
self.__mod_queue__[dn] = None
self.__pending_mod_dn__ = [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detect_encoding(value):
"""Returns the character encoding for a JSON string.""" |
# https://tools.ietf.org/html/rfc4627#section-3
if six.PY2:
null_pattern = tuple(bool(ord(char)) for char in value[:4])
else:
null_pattern = tuple(bool(char) for char in value[:4])
encodings = {
# Zero is a null-byte, 1 is anything else.
(0, 0, 0, 1): 'utf-32-be',
(0, 1, 0, 1): 'utf-16-be',
(1, 0, 0, 0): 'utf-32-le',
(1, 0, 1, 0): 'utf-16-le',
}
return encodings.get(null_pattern, 'utf-8') |
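A Python 3-only condensation of `detect_encoding`, exercised against RFC 4627's null-byte patterns (a sketch assuming the input starts with ASCII characters, as any JSON text does):

```python
def detect_encoding(value):
    """Guess a JSON byte string's encoding from its first four bytes (RFC 4627)."""
    # Zero bytes mark the padding positions of the wide encodings.
    null_pattern = tuple(bool(byte) for byte in value[:4])
    encodings = {
        (0, 0, 0, 1): 'utf-32-be',
        (0, 1, 0, 1): 'utf-16-be',
        (1, 0, 0, 0): 'utf-32-le',
        (1, 0, 1, 0): 'utf-16-le',
    }
    return encodings.get(null_pattern, 'utf-8')


for enc in ('utf-8', 'utf-16-le', 'utf-16-be', 'utf-32-le', 'utf-32-be'):
    assert detect_encoding('"json"'.encode(enc)) == enc
```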
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _merge_params(url, params):
"""Merge and encode query parameters with an URL.""" |
if isinstance(params, dict):
params = list(params.items())
scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url)
url_params = urllib.parse.parse_qsl(query, keep_blank_values=True)
url_params.extend(params)
query = _encode_data(url_params)
return urllib.parse.urlunsplit((scheme, netloc, path, query, fragment)) |
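`_encode_data` is internal to the source and not shown; a sketch of the same merge with the standard library's `urlencode` in its place:

```python
import urllib.parse


def merge_params(url, params):
    """Merge extra query parameters into a URL, keeping existing ones."""
    if isinstance(params, dict):
        params = list(params.items())
    scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url)
    url_params = urllib.parse.parse_qsl(query, keep_blank_values=True)
    url_params.extend(params)
    query = urllib.parse.urlencode(url_params)
    return urllib.parse.urlunsplit((scheme, netloc, path, query, fragment))


assert merge_params('http://x.test/p?a=1', {'b': '2'}) == 'http://x.test/p?a=1&b=2'
```

`keep_blank_values=True` matters: without it, an existing `?a=` parameter would silently vanish from the merged URL.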
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def json(self, **kwargs):
"""Decodes response as JSON.""" |
encoding = detect_encoding(self.content[:4])
value = self.content.decode(encoding)
return simplejson.loads(value, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def raise_for_status(self):
"""Raises HTTPError if the request got an error.""" |
if 400 <= self.status_code < 600:
message = 'Error %s for %s' % (self.status_code, self.url)
raise HTTPError(message) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metric(cls, name, count, elapsed):
"""A metric function that buffers through numpy :arg str name: name of the metric :arg int count: number of items :arg float elapsed: time in seconds """ |
if name is None:
warnings.warn("Ignoring unnamed metric", stacklevel=3)
return
with cls.lock:
# register with atexit on first call
if cls.dump_atexit and not cls.instances:
atexit.register(cls.dump)
try:
self = cls.instances[name]
except KeyError:
self = cls.instances[name] = cls(name)
self.temp.write(self.struct.pack(count, elapsed)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump(self):
"""dump data for an individual metric. For internal use only.""" |
try:
self.temp.seek(0) # seek to beginning
arr = np.fromfile(self.temp, self.dtype)
self.count_arr = arr['count']
self.elapsed_arr = arr['elapsed']
if self.calc_stats:
# calculate mean & standard deviation
self.count_mean = np.mean(self.count_arr)
self.count_std = np.std(self.count_arr)
self.elapsed_mean = np.mean(self.elapsed_arr)
self.elapsed_std = np.std(self.elapsed_arr)
self._output()
finally:
self.temp.close()
self._cleanup() |
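The `metric`/`_dump` pair buffers fixed-size `(count, elapsed)` records in a temp file and reads them back in bulk. The same round trip with only the standard library, using `struct` in place of numpy's `fromfile` (the `<Id` layout is an assumption; the source's actual struct format is not shown):

```python
import struct
import tempfile

fmt = struct.Struct('<Id')  # uint32 count, float64 elapsed -- assumed layout
buf = tempfile.TemporaryFile()
for count, elapsed in [(3, 0.25), (5, 0.5)]:
    buf.write(fmt.pack(count, elapsed))

buf.seek(0)  # seek to beginning, as _dump does
records = [fmt.unpack(chunk) for chunk in iter(lambda: buf.read(fmt.size), b'')]
buf.close()
assert records == [(3, 0.25), (5, 0.5)]
```

Spilling to a temp file keeps per-call overhead to a single `pack`/`write`, deferring all aggregation to dump time, which is the point of the buffering design above.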
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list(self, host_rec=None, service_rec=None, hostfilter=None):
""" Returns a list of vulnerabilities based on t_hosts.id or t_services.id. If neither are set then statistical results are added :param host_rec: db.t_hosts.id :param service_rec: db.t_services.id :param hostfilter: Valid hostfilter or None """ |
return self.send.vuln_list(host_rec, service_rec, hostfilter) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ip_info(self, vuln_name=None, vuln_id=None, ip_list_only=True, hostfilter=None):
""" List of all IP Addresses with a vulnerability :param vuln_name: t_vulndata.f_vulnid :param vuln_id: t_vulndata.id :param ip_list_only: IP List only (default) or rest of t_hosts fields :param hostfilter: Valid hostfilter or none """ |
return self.send.vuln_ip_info(vuln_name, vuln_id, ip_list_only, hostfilter) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def service_list(self, vuln_name=None, vuln_id=None, hostfilter=None):
""" Returns a dictionary of vulns with services and IP Addresses :param vuln_name: t_vulndata.f_vulnid :param vuln_id: t_vulndata.id :param hostfilter: Valid hostfilter or none """ |
return self.send.vuln_service_list(vuln_name, vuln_id, hostfilter) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_name(mod_name):
"""Import a module by module name. @param mod_name: module name. """ |
try:
mod_obj_old = sys.modules[mod_name]
except KeyError:
mod_obj_old = None
if mod_obj_old is not None:
return mod_obj_old
__import__(mod_name)
mod_obj = sys.modules[mod_name]
return mod_obj |
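`import_name` is essentially a cached wrapper around `__import__`. A condensed sketch plus a usage check:

```python
import sys


def import_name(mod_name):
    """Return the module for mod_name, importing it only on first use."""
    cached = sys.modules.get(mod_name)
    if cached is not None:
        return cached
    __import__(mod_name)          # registers the module in sys.modules
    return sys.modules[mod_name]  # works for dotted names too


import json
assert import_name('json') is json
assert import_name('os.path') is sys.modules['os.path']
```

Looking the result up in `sys.modules` rather than using `__import__`'s return value is what makes dotted names work: `__import__('os.path')` returns the top-level `os` package, not the leaf.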
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_path(mod_path, mod_name):
"""Import a module by module file path. @param mod_path: module file path. @param mod_name: module name. """ |
    with open(mod_path) as mod_file:
        mod_code = mod_file.read()
mod_obj = import_code(
mod_code=mod_code,
mod_name=mod_name,
)
if not hasattr(mod_obj, '__file__'):
mod_obj.__file__ = mod_path
return mod_obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_obj( uri, mod_name=None, mod_attr_sep='::', attr_chain_sep='.', retn_mod=False, ):
"""Load an object from a module. @param uri: an uri specifying which object to load. An `uri` consists of two parts: module URI and attribute chain, e.g. `a/b/c.py::x.y.z` or `a.b.c::x.y.z` # Module URI E.g. `a/b/c.py` or `a.b.c`. Can be either a module name or a file path. Whether it is a file path is determined by whether it ends with `.py`. # Attribute chain E.g. `x.y.z`. @param mod_name: module name. Must be given when `uri` specifies a module file path, not a module name. @param mod_attr_sep: the separator between module name and attribute name. @param attr_chain_sep: the separator between parts of attribute name. @retn_mod: whether return module object. """ |
if mod_attr_sep is None:
mod_attr_sep = '::'
uri_parts = split_uri(uri=uri, mod_attr_sep=mod_attr_sep)
protocol, mod_uri, attr_chain = uri_parts
if protocol == 'py':
mod_obj = import_name(mod_uri)
else:
if not mod_name:
msg = (
'Argument `mod_name` must be given when loading by file path.'
)
raise ValueError(msg)
mod_obj = import_path(mod_uri, mod_name=mod_name)
if not attr_chain:
if retn_mod:
return mod_obj, None
else:
return mod_obj
if attr_chain_sep is None:
attr_chain_sep = '.'
attr_obj = get_attr_chain(
obj=mod_obj,
attr_chain=attr_chain,
sep=attr_chain_sep,
)
if retn_mod:
return mod_obj, attr_obj
else:
return attr_obj |
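`split_uri` and `get_attr_chain` are referenced but not shown. The attribute-chain half reduces cleanly to `functools.reduce` over `getattr`; a plausible reconstruction (an assumption, not the source's code):

```python
import functools
import os


def get_attr_chain(obj, attr_chain, sep='.'):
    """Resolve a chain like 'path.join' against obj by repeated getattr."""
    return functools.reduce(getattr, attr_chain.split(sep), obj)


assert get_attr_chain(os, 'path.join') is os.path.join
```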
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_to_sys_modules(mod_name, mod_obj=None):
"""Add a module object to `sys.modules`. @param mod_name: module name, used as key to `sys.modules`. If `mod_name` is `a.b.c` while modules `a` and `a.b` are not existing, empty modules will be created for `a` and `a.b` as well. @param mod_obj: a module object. If None, an empty module object will be created. """ |
mod_snames = mod_name.split('.')
parent_mod_name = ''
parent_mod_obj = None
for mod_sname in mod_snames:
if parent_mod_name == '':
current_mod_name = mod_sname
else:
current_mod_name = parent_mod_name + '.' + mod_sname
if current_mod_name == mod_name:
current_mod_obj = mod_obj
else:
current_mod_obj = sys.modules.get(current_mod_name, None)
if current_mod_obj is None:
current_mod_obj = imp.new_module(current_mod_name)
sys.modules[current_mod_name] = current_mod_obj
if parent_mod_obj is not None:
setattr(parent_mod_obj, mod_sname, current_mod_obj)
parent_mod_name = current_mod_name
parent_mod_obj = current_mod_obj |
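Note that the body above registers a module in `sys.modules` only when it creates one, so an explicitly passed `mod_obj` is apparently never registered itself. The sketch below (using `types.ModuleType`, since `imp` is deprecated; `fake_pkg` is a hypothetical name) registers every level unconditionally:

```python
import sys
import types


def add_to_sys_modules(mod_name, mod_obj=None):
    """Register mod_obj (or fresh empty modules) for every level of mod_name."""
    parent_name, parent_obj = '', None
    for part in mod_name.split('.'):
        current_name = part if not parent_name else parent_name + '.' + part
        current_obj = mod_obj if current_name == mod_name else sys.modules.get(current_name)
        if current_obj is None:
            current_obj = types.ModuleType(current_name)
        sys.modules[current_name] = current_obj
        if parent_obj is not None:
            setattr(parent_obj, part, current_obj)  # make `parent.part` resolvable
        parent_name, parent_obj = current_name, current_obj


add_to_sys_modules('fake_pkg.sub.mod')
assert sys.modules['fake_pkg'].sub is sys.modules['fake_pkg.sub']
assert sys.modules['fake_pkg.sub'].mod is sys.modules['fake_pkg.sub.mod']
```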
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_host(environ):
"""Return the real host for the given WSGI environment. This takes care of the `X-Forwarded-Host` header. :param environ: the WSGI environment to get the host of. """ |
scheme = environ.get('wsgi.url_scheme')
if 'HTTP_X_FORWARDED_HOST' in environ:
result = environ['HTTP_X_FORWARDED_HOST']
elif 'HTTP_HOST' in environ:
result = environ['HTTP_HOST']
else:
result = environ['SERVER_NAME']
if (scheme, str(environ['SERVER_PORT'])) not \
in (('https', '443'), ('http', '80')):
result += ':' + environ['SERVER_PORT']
if result.endswith(':80') and scheme == 'http':
result = result[:-3]
elif result.endswith(':443') and scheme == 'https':
result = result[:-4]
return result |
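In the flattened listing above, the indentation of the port-appending branch is ambiguous. The sketch below follows the conventional behaviour (append the port only when falling back to `SERVER_NAME`, then strip redundant default ports); it is an interpretation, not a verbatim copy:

```python
def get_host(environ):
    """Return the host for a WSGI environ, honouring X-Forwarded-Host."""
    scheme = environ.get('wsgi.url_scheme')
    if 'HTTP_X_FORWARDED_HOST' in environ:
        result = environ['HTTP_X_FORWARDED_HOST']
    elif 'HTTP_HOST' in environ:
        result = environ['HTTP_HOST']
    else:
        result = environ['SERVER_NAME']
        if (scheme, str(environ['SERVER_PORT'])) not \
                in (('https', '443'), ('http', '80')):
            result += ':' + environ['SERVER_PORT']
    # Strip default ports that arrived via the Host headers.
    if result.endswith(':80') and scheme == 'http':
        result = result[:-3]
    elif result.endswith(':443') and scheme == 'https':
        result = result[:-4]
    return result


env = {'wsgi.url_scheme': 'http', 'SERVER_NAME': 'app.test', 'SERVER_PORT': '8080'}
assert get_host(env) == 'app.test:8080'
assert get_host(dict(env, HTTP_HOST='app.test:80')) == 'app.test'
```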
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _raw(cls, vertices, edges, out_edges, in_edges, head, tail):
""" Private constructor for direct construction of an ObjectGraph from its attributes. vertices is the collection of vertices out_edges and in_edges map vertices to lists of edges head and tail map edges to objects. """ |
self = object.__new__(cls)
self._out_edges = out_edges
self._in_edges = in_edges
self._head = head
self._tail = tail
self._vertices = vertices
self._edges = edges
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def annotated(self):
""" Annotate this graph, returning an AnnotatedGraph object with the same structure. """ |
# Build up dictionary of edge annotations.
edge_annotations = {}
for edge in self.edges:
if edge not in edge_annotations:
# We annotate all edges from a given object at once.
referrer = self._tail[edge]
known_refs = annotated_references(referrer)
for out_edge in self._out_edges[referrer]:
referent = self._head[out_edge]
if known_refs[referent]:
annotation = known_refs[referent].pop()
else:
annotation = None
edge_annotations[out_edge] = annotation
annotated_vertices = [
AnnotatedVertex(
id=id(vertex),
annotation=object_annotation(vertex),
)
for vertex in self.vertices
]
annotated_edges = [
AnnotatedEdge(
id=edge,
annotation=edge_annotations[edge],
head=id(self._head[edge]),
tail=id(self._tail[edge]),
)
for edge in self.edges
]
return AnnotatedGraph(
vertices=annotated_vertices,
edges=annotated_edges,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def owned_objects(self):
""" List of gc-tracked objects owned by this ObjectGraph instance. """ |
return (
[
self,
self.__dict__,
self._head,
self._tail,
self._out_edges,
self._out_edges._keys,
self._out_edges._values,
self._in_edges,
self._in_edges._keys,
self._in_edges._values,
self._vertices,
self._vertices._elements,
self._edges,
] +
list(six.itervalues(self._out_edges)) +
list(six.itervalues(self._in_edges))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_by_typename(self, typename):
""" List of all objects whose type has the given name. """ |
return self.find_by(lambda obj: type(obj).__name__ == typename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_unset_inputs(self):
""" Return a set of unset inputs """ |
return set([k for k, v in self._inputs.items() if v.is_empty(False)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prompt_unset_inputs(self, force=False):
""" Prompt for unset input values """ |
for k, v in self._inputs.items():
if force or v.is_empty(False):
self.get_input(k, force=force) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def values(self, with_defaults=True):
""" Return the values dictionary, defaulting to default values """ |
return dict(((k, str(v)) for k, v in self._inputs.items() if not v.is_empty(with_defaults))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_values(self):
""" Return the dictionary with which to write values """ |
return dict(((k, v.value) for k, v in self._inputs.items() if not v.is_secret and not v.is_empty(False))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_param_line(self, line):
""" Parse a single param line. """ |
value = line.strip('\n \t')
if len(value) > 0:
i = Input()
if value.find('#') != -1:
            value, extra_attributes = value.split('#', 1)
try:
extra_attributes = eval(extra_attributes)
except SyntaxError:
raise InputException("Incorrectly formatted input for {0}!".format(value))
if not isinstance(extra_attributes, dict):
raise InputException("Incorrectly formatted input for {0}!".format(value))
if 'prompt' in extra_attributes:
i.prompt = extra_attributes['prompt']
if 'help' in extra_attributes:
i.help = extra_attributes['help']
if 'type' in extra_attributes:
i.in_type = extra_attributes['type']
if i.in_type.find('/') != -1:
i.in_type, i.out_type = i.in_type.split('/')
if 'cast' in extra_attributes:
i.out_type = extra_attributes['cast']
if value.find('==') != -1:
            value, default = value.split('==', 1)
i.default = default
if value.endswith('?'):
value = value[:-1]
i.is_secret = True
return (value, i)
return None |
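The grammar `_parse_param_line` accepts is roughly `name[?][==default][#<attribute dict>]`, split in that order: attribute dict first, then default, then the secret flag. A condensed stand-alone parser illustrating the same precedence (simplified relative to the source; the attribute handling is collapsed into one dict):

```python
def parse_param_line(line):
    """Parse 'name?==default#<dict>' into (name, info) or None for blank lines."""
    value = line.strip('\n \t')
    if not value:
        return None
    attrs = {}
    if '#' in value:
        value, raw_attrs = value.split('#', 1)
        attrs = eval(raw_attrs)  # trusted configuration input only, as in the source
    default = None
    if '==' in value:
        value, default = value.split('==', 1)
    is_secret = value.endswith('?')
    if is_secret:
        value = value[:-1]
    return value, {'default': default, 'is_secret': is_secret, **attrs}


name, info = parse_param_line("password?==hunter2#{'prompt': 'Password'}")
assert name == 'password'
assert info['is_secret'] and info['default'] == 'hunter2'
assert info['prompt'] == 'Password'
```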
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self, overwrite=True):
""" Download the zipcodes CSV file. If ``overwrite`` is set to False, the file won't be downloaded if it already exists. """ |
if overwrite or not os.path.exists(self.file_path):
_, f = tempfile.mkstemp()
try:
urlretrieve(self.DOWNLOAD_URL, f)
extract_csv(f, self.file_path)
finally:
os.remove(f) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_zipcodes_for_canton(self, canton):
""" Return the list of zipcodes for the given canton code. """ |
zipcodes = [
zipcode for zipcode, location in self.get_locations().items()
if location.canton == canton
]
return zipcodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cantons(self):
""" Return the list of unique cantons, sorted by name. """ |
return sorted(list(set([
location.canton for location in self.get_locations().values()
]))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_municipalities(self):
""" Return the list of unique municipalities, sorted by name. """ |
return sorted(list(set([
location.municipality for location in self.get_locations().values()
]))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_formula_class(self, formula):
""" get a formula class object if it exists, else create one, add it to the dict, and pass return it. """ |
# recursive import otherwise
from sprinter.formula.base import FormulaBase
if formula in LEGACY_MAPPINGS:
formula = LEGACY_MAPPINGS[formula]
formula_class, formula_url = formula, None
if ':' in formula:
formula_class, formula_url = formula.split(":", 1)
if formula_class not in self._formula_dict:
try:
self._formula_dict[formula_class] = lib.get_subclass_from_module(formula_class, FormulaBase)
except (SprinterException, ImportError):
logger.info("Downloading %s..." % formula_class)
try:
self._pip.install_egg(formula_url or formula_class)
try:
self._formula_dict[formula_class] = lib.get_subclass_from_module(formula_class, FormulaBase)
except ImportError:
logger.debug("FeatureDict import Error", exc_info=sys.exc_info())
raise SprinterException("Error: Unable to retrieve formula %s!" % formula_class)
except PipException:
logger.error("ERROR: Unable to download %s!" % formula_class)
return self._formula_dict[formula_class] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_backup_class(cls):
"""Return true if given class supports back up. Currently this means a gludb.data.Storable-derived class that has a mapping as defined in gludb.config""" |
return True if (
isclass(cls) and
issubclass(cls, Storable) and
get_mapping(cls, no_mapping_ok=True)
) else False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process( hw_num: int, problems_to_do: Optional[Iterable[int]] = None, prefix: Optional[Path] = None, by_hand: Optional[Iterable[int]] = None, ) -> None: """Process the homework problems in ``prefix`` folder. Arguments --------- hw_num The number of this homework problems_to_do, optional A list of the problems to be processed prefix, optional A `~pathlib.Path` to this homework assignment folder by_hand, optional A list of the problems that should be labeled to be completed by hand and have an image with the solution included. """ |
if prefix is None:
prefix = Path(".")
problems: Iterable[Path]
if problems_to_do is None:
        # The glob syntax here means that the filename must start with
        # homework-, be followed by the homework number, followed by a
# dash, then a digit representing the problem number for this
# homework number, then any number of characters (in practice
# either nothing or, rarely, another digit), then the ipynb
# extension. Examples:
# homework-1-1.ipynb, homework-10-1.ipynb, homework-3-10.ipynb
problems = list(prefix.glob(f"homework-{hw_num}-[0-9]*.ipynb"))
else:
problems = [prefix / f"homework-{hw_num}-{i}.ipynb" for i in problems_to_do]
problems = sorted(problems, key=lambda k: k.stem[-1])
output_directory: Path = (prefix / "output").resolve()
fw = FilesWriter(build_directory=str(output_directory))
assignment_zip_name = output_directory / f"homework-{hw_num}.zip"
solution_zip_name = output_directory / f"homework-{hw_num}-soln.zip"
assignment_pdfs: List[BytesIO] = []
solution_pdfs: List[BytesIO] = []
assignment_pdf: bytes
solution_pdf: bytes
assignment_nb: str
solution_nb: str
res: Dict[str, Union[str, bool]] = {
"delete_pymarkdown": True,
"global_content_filter": {"include_raw": False},
}
for problem in problems:
print("Working on:", problem)
res["unique_key"] = problem.stem
problem_number = int(problem.stem.split("-")[-1])
if by_hand is not None and problem_number in by_hand:
res["by_hand"] = True
else:
res["by_hand"] = False
problem_fname = str(problem.resolve())
# Process assignments
res["remove_solution"] = True
assignment_pdf, _ = pdf_exp.from_filename(problem_fname, resources=res)
assignment_pdfs.append(BytesIO(assignment_pdf))
assignment_nb, _ = nb_exp.from_filename(problem_fname, resources=res)
with ZipFile(assignment_zip_name, mode="a") as zip_file:
zip_file.writestr(problem.name, assignment_nb)
# Process solutions
res["remove_solution"] = False
solution_pdf, _ = pdf_exp.from_filename(problem_fname, resources=res)
solution_pdfs.append(BytesIO(solution_pdf))
solution_nb, _ = nb_exp.from_filename(problem_fname, resources=res)
with ZipFile(solution_zip_name, mode="a") as zip_file:
zip_file.writestr(problem.stem + "-soln" + problem.suffix, solution_nb)
resources: Dict[str, Any] = {
"metadata": {
"name": f"homework-{hw_num}",
"path": str(prefix),
"modified_date": date.today().strftime("%B %d, %Y"),
},
"output_extension": ".pdf",
}
fw.write(combine_pdf_as_bytes(assignment_pdfs), resources, f"homework-{hw_num}")
resources["metadata"]["name"] = f"homework-{hw_num}-soln"
fw.write(combine_pdf_as_bytes(solution_pdfs), resources, f"homework-{hw_num}-soln") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main(argv: Optional[Sequence[str]] = None) -> None: """Parse arguments and process the homework assignment.""" |
parser = ArgumentParser(description="Convert Jupyter Notebook assignments to PDFs")
parser.add_argument(
"--hw",
type=int,
required=True,
help="Homework number to convert",
dest="hw_num",
)
parser.add_argument(
"-p",
"--problems",
type=int,
help="Problem numbers to convert",
dest="problems",
nargs="*",
)
parser.add_argument(
"--by-hand",
type=int,
help="Problem numbers to be completed by hand",
dest="by_hand",
nargs="*",
)
args = parser.parse_args(argv)
prefix = Path(f"homework/homework-{args.hw_num}")
process(args.hw_num, args.problems, prefix=prefix, by_hand=args.by_hand) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_vm_by_name(content, name, regex=False):
'''
Get a VM by its name
'''
return get_object_by_name(content, vim.VirtualMachine, name, regex) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_datacenter(content, obj):
'''
Get the datacenter to whom an object belongs
'''
datacenters = content.rootFolder.childEntity
for d in datacenters:
dch = get_all(content, d, type(obj))
if dch is not None and obj in dch:
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_all_vswitches(content):
'''
Get all the virtual switches
'''
vswitches = []
hosts = get_all_hosts(content)
for h in hosts:
for s in h.config.network.vswitch:
vswitches.append(s)
return vswitches |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def print_vm_info(vm):
'''
Print information for a particular virtual machine
'''
summary = vm.summary
print('Name : ', summary.config.name)
print('Path : ', summary.config.vmPathName)
print('Guest : ', summary.config.guestFullName)
annotation = summary.config.annotation
if annotation is not None and annotation != '':
print('Annotation : ', annotation)
print('State : ', summary.runtime.powerState)
if summary.guest is not None:
ip = summary.guest.ipAddress
if ip is not None and ip != '':
print('IP : ', ip)
if summary.runtime.question is not None:
print('Question : ', summary.runtime.question.text)
print('') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def module_import(module_path):
"""Imports the module indicated in name Args: module_path: string representing a module path such as 'app.config' or 'app.extras.my_module' Returns: the module matching name of the last component, ie: for 'app.extras.my_module' it returns a reference to my_module Raises: BadModulePathError if the module is not found """ |
try:
# Import whole module path.
module = __import__(module_path)
# Split into components: ['contour',
# 'extras','appengine','ndb_persistence'].
components = module_path.split('.')
        # Starting at the second component, set module to
        # a reference to that component. At the end,
        # module will be the last component. In this case:
# ndb_persistence
for component in components[1:]:
module = getattr(module, component)
return module
except ImportError:
raise BadModulePathError(
'Unable to find module "%s".' % (module_path,)) |
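`module_import`'s getattr walk is needed because `__import__('a.b.c')` returns the top-level package `a`, not the leaf. A minimal demonstration:

```python
def module_import(module_path):
    """Return the leaf module of a dotted path."""
    module = __import__(module_path)          # returns the top-level package
    for component in module_path.split('.')[1:]:
        module = getattr(module, component)   # walk down to the leaf
    return module


import os.path
assert module_import('os.path') is os.path
```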
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_contour_yaml(config_file=__file__, names=None):
""" Traverse directory trees to find a contour.yaml file Begins with the location of this file then checks the working directory if not found Args: config_file: location of this file, override for testing Returns: the path of contour.yaml or None if not found """ |
checked = set()
contour_yaml = _find_countour_yaml(os.path.dirname(config_file), checked,
names=names)
if not contour_yaml:
contour_yaml = _find_countour_yaml(os.getcwd(), checked, names=names)
return contour_yaml |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_countour_yaml(start, checked, names=None):
"""Traverse the directory tree identified by start until a directory already in checked is encountered or the path of countour.yaml is found. Checked is present both to make the loop termination easy to reason about and so the same directories do not get rechecked Args: start: the path to start looking in and work upward from checked: the set of already checked directories Returns: the path of the countour.yaml file or None if it is not found """ |
extensions = []
if names:
for name in names:
if not os.path.splitext(name)[1]:
extensions.append(name + ".yaml")
extensions.append(name + ".yml")
yaml_names = (names or []) + CONTOUR_YAML_NAMES + extensions
directory = start
while directory not in checked:
checked.add(directory)
for fs_yaml_name in yaml_names:
yaml_path = os.path.join(directory, fs_yaml_name)
if os.path.exists(yaml_path):
return yaml_path
directory = os.path.dirname(directory)
return |
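The upward traversal can be exercised against a throwaway directory tree. A sketch with a single fixed filename (the multi-name/extension handling above is elided):

```python
import os
import tempfile


def find_upward(start, filename):
    """Walk from start toward the filesystem root looking for filename."""
    checked = set()
    directory = start
    while directory not in checked:   # dirname('/') == '/', so this terminates
        checked.add(directory)
        candidate = os.path.join(directory, filename)
        if os.path.exists(candidate):
            return candidate
        directory = os.path.dirname(directory)
    return None


root = tempfile.mkdtemp()
nested = os.path.join(root, 'a', 'b')
os.makedirs(nested)
with open(os.path.join(root, 'contour.yaml'), 'w'):
    pass

assert find_upward(nested, 'contour.yaml') == os.path.join(root, 'contour.yaml')
assert find_upward(nested, 'missing.yaml') is None
```

The `checked` set doubles as the loop guard: once `os.path.dirname` stops changing the path at the filesystem root, the next iteration hits an already-checked directory and the search gives up.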
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _guess_type_from_validator(validator):
""" Utility method to return the declared type of an attribute or None. It handles _OptionalValidator and _AndValidator in order to unpack the validators. :param validator: :return: the type of attribute declared in an inner 'instance_of' validator (if any is found, the first one is used) or None if no inner 'instance_of' validator is found """ |
if isinstance(validator, _OptionalValidator):
# Optional : look inside
return _guess_type_from_validator(validator.validator)
elif isinstance(validator, _AndValidator):
# Sequence : try each of them
for v in validator.validators:
typ = _guess_type_from_validator(v)
if typ is not None:
return typ
return None
elif isinstance(validator, _InstanceOfValidator):
# InstanceOf validator : found it !
return validator.type
else:
# we could not find the type
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_optional(attr):
""" Helper method to find if an attribute is mandatory :param attr: :return: """ |
return isinstance(attr.validator, _OptionalValidator) or (attr.default is not None and attr.default is not NOTHING) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def preprocess( self, nb: "NotebookNode", resources: dict ) -> Tuple["NotebookNode", dict]: """Remove any raw cells from the Notebook. By default, exclude raw cells from the output. Change this by including global_content_filter->include_raw = True in the resources dictionary. This preprocessor is necessary because the NotebookExporter doesn't include the exclude_raw config.""" |
if not resources.get("global_content_filter", {}).get("include_raw", False):
keep_cells = []
for cell in nb.cells:
if cell.cell_type != "raw":
keep_cells.append(cell)
nb.cells = keep_cells
return nb, resources |
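Detached from nbconvert, the raw-cell filter is a one-line comprehension over `cell_type`; a dict-based stand-in (the `NotebookNode` attribute access in the source becomes key access here):

```python
def drop_raw_cells(cells, include_raw=False):
    """Drop 'raw' cells unless the content filter explicitly includes them."""
    if include_raw:
        return cells
    return [cell for cell in cells if cell['cell_type'] != 'raw']


cells = [{'cell_type': 'code'}, {'cell_type': 'raw'}, {'cell_type': 'markdown'}]
assert [c['cell_type'] for c in drop_raw_cells(cells)] == ['code', 'markdown']
assert drop_raw_cells(cells, include_raw=True) == cells
```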
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def preprocess( self, nb: "NotebookNode", resources: dict ) -> Tuple["NotebookNode", dict]: """Preprocess the entire notebook.""" |
if "remove_solution" not in resources:
raise KeyError("The resources dictionary must have a remove_solution key.")
if resources["remove_solution"]:
keep_cells_idx = []
for index, cell in enumerate(nb.cells):
if "## solution" in cell.source.lower():
keep_cells_idx.append(index)
# The space at the end of the test string here is important
elif len(keep_cells_idx) > 0 and cell.source.startswith("### "):
keep_cells_idx.append(index)
keep_cells = nb.cells[: keep_cells_idx[0] + 1]
for i in keep_cells_idx[1:]:
keep_cells.append(nb.cells[i])
if resources["by_hand"]:
keep_cells.append(by_hand_cell)
else:
if "sketch" in nb.cells[i].source.lower():
keep_cells.append(sketch_cell)
else:
keep_cells.append(md_expl_cell)
keep_cells.append(code_ans_cell)
keep_cells.append(md_ans_cell)
nb.cells = keep_cells
return nb, resources |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_from_dict(json_dict):
""" Given a Unified Uploader message, parse the contents and return a MarketOrderList. :param dict json_dict: A Unified Uploader message as a JSON dict. :rtype: MarketOrderList :returns: An instance of MarketOrderList, containing the orders within. """ |
order_columns = json_dict['columns']
order_list = MarketOrderList(
upload_keys=json_dict['uploadKeys'],
order_generator=json_dict['generator'],
)
for rowset in json_dict['rowsets']:
generated_at = parse_datetime(rowset['generatedAt'])
region_id = rowset['regionID']
type_id = rowset['typeID']
order_list.set_empty_region(region_id, type_id, generated_at)
for row in rowset['rows']:
order_kwargs = _columns_to_kwargs(
SPEC_TO_KWARG_CONVERSION, order_columns, row)
order_kwargs.update({
'region_id': region_id,
'type_id': type_id,
'generated_at': generated_at,
})
order_kwargs['order_issue_date'] = parse_datetime(order_kwargs['order_issue_date'])
order_list.add_order(MarketOrder(**order_kwargs))
return order_list |
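`_columns_to_kwargs` is not shown; pairing each spec column with its row value and renaming through a conversion map is the likely shape. A reconstruction with hypothetical column names (an assumption, not the source's code):

```python
def columns_to_kwargs(conversion, columns, row):
    """Zip row values to column names, renaming via the conversion map."""
    return {conversion[column]: value for column, value in zip(columns, row)}


conversion = {'price': 'price', 'volRemaining': 'volume_remaining'}
kwargs = columns_to_kwargs(conversion, ['price', 'volRemaining'], [9.5, 3])
assert kwargs == {'price': 9.5, 'volume_remaining': 3}
```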
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def encode_to_json(order_list):
""" Encodes this list of MarketOrder instances to a JSON string. :param MarketOrderList order_list: The order list to serialize. :rtype: str """ |
rowsets = []
for items_in_region_list in order_list._orders.values():
region_id = items_in_region_list.region_id
type_id = items_in_region_list.type_id
generated_at = gen_iso_datetime_str(items_in_region_list.generated_at)
rows = []
for order in items_in_region_list.orders:
issue_date = gen_iso_datetime_str(order.order_issue_date)
# The order in which these values are added is crucial. It must
# match STANDARD_ENCODED_COLUMNS.
rows.append([
order.price,
order.volume_remaining,
order.order_range,
order.order_id,
order.volume_entered,
order.minimum_volume,
order.is_bid,
issue_date,
order.order_duration,
order.station_id,
order.solar_system_id,
])
rowsets.append(dict(
generatedAt = generated_at,
regionID = region_id,
typeID = type_id,
rows = rows,
))
json_dict = {
'resultType': 'orders',
'version': '0.1',
'uploadKeys': order_list.upload_keys,
'generator': order_list.order_generator,
'currentTime': gen_iso_datetime_str(now_dtime_in_utc()),
# This must match the order of the values in the row assembling portion
# above this.
'columns': STANDARD_ENCODED_COLUMNS,
'rowsets': rowsets,
}
return json.dumps(json_dict) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode(f):
"""Extract a `MqttFixedHeader` from ``f``. Parameters f: file Object with read method. Raises ------- DecodeError When bytes decoded have values incompatible with a `MqttFixedHeader` object. UnderflowDecodeError When end-of-stream is encountered before the end of the fixed header. Returns ------- int Number of bytes consumed from ``f``. MqttFixedHeader Header object extracted from ``f``. """ |
decoder = mqtt_io.FileDecoder(f)
(byte_0,) = decoder.unpack(mqtt_io.FIELD_U8)
packet_type_u4 = (byte_0 >> 4)
flags = byte_0 & 0x0f
try:
packet_type = MqttControlPacketType(packet_type_u4)
except ValueError:
raise DecodeError('Unknown packet type 0x{:02x}.'.format(packet_type_u4))
if not are_flags_valid(packet_type, flags):
raise DecodeError('Invalid flags for packet type.')
num_bytes, num_remaining_bytes = decoder.unpack_varint(4)
return decoder.num_bytes_consumed, MqttFixedHeader(packet_type, flags, num_remaining_bytes) |
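The `unpack_varint(4)` call above reads MQTT's variable-length "remaining length" field. A standalone sketch of that encoding (the standard MQTT 3.1.1 scheme from section 2.2.3, written here without the `mqtt_io` helpers, so the function name is illustrative):

```python
import io

def decode_remaining_length(f, max_bytes=4):
    """Decode an MQTT variable-length integer ("remaining length")
    from a file-like object. Each byte carries 7 payload bits; the
    high bit is a continuation flag."""
    value = 0
    for i in range(max_bytes):
        buf = f.read(1)
        if not buf:
            raise ValueError('Underflow while decoding varint.')
        byte = buf[0]
        value |= (byte & 0x7f) << (7 * i)
        if not byte & 0x80:  # continuation bit clear: last byte
            return i + 1, value
    raise ValueError('Varint longer than {} bytes.'.format(max_bytes))

# 321 encodes as 0xc1 0x02: 0x41 + (2 << 7) = 65 + 256 = 321
num_bytes, value = decode_remaining_length(io.BytesIO(b'\xc1\x02'))
```

The 4-byte cap matches the argument passed to `unpack_varint(4)` above; MQTT permits at most four bytes, bounding the field at 268 435 455.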
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttSubscribe` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `subscribe`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttSubscribe Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.subscribe
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
packet_id, = decoder.unpack(mqtt_io.FIELD_PACKET_ID)
topics = []
while header.remaining_len > decoder.num_bytes_consumed:
num_str_bytes, name = decoder.unpack_utf8()
max_qos, = decoder.unpack(mqtt_io.FIELD_U8)
try:
sub_topic = MqttTopic(name, max_qos)
except ValueError:
raise DecodeError('Invalid QOS {}'.format(max_qos))
topics.append(sub_topic)
assert header.remaining_len == decoder.num_bytes_consumed
return decoder.num_bytes_consumed, MqttSubscribe(packet_id, topics) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttSuback` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `suback`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttSuback Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.suback
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
packet_id, = decoder.unpack(mqtt_io.FIELD_PACKET_ID)
results = []
while header.remaining_len > decoder.num_bytes_consumed:
result, = decoder.unpack(mqtt_io.FIELD_U8)
try:
results.append(SubscribeResult(result))
except ValueError:
raise DecodeError('Unsupported result {:02x}.'.format(result))
assert header.remaining_len == decoder.num_bytes_consumed
return decoder.num_bytes_consumed, MqttSuback(packet_id, results) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttPublish` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `publish`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttPublish Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.publish
dupe = bool(header.flags & 0x08)
retain = bool(header.flags & 0x01)
qos = ((header.flags & 0x06) >> 1)
if qos == 0 and dupe:
# The DUP flag MUST be set to 0 for all QoS 0 messages
# [MQTT-3.3.1-2]
raise DecodeError("Unexpected dupe=True for qos==0 message [MQTT-3.3.1-2].")
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
num_bytes_consumed, topic_name = decoder.unpack_utf8()
if qos != 0:
# See MQTT 3.1.1 section 3.3.2.2
# See https://github.com/kcallin/mqtt-codec/issues/5
packet_id, = decoder.unpack(mqtt_io.FIELD_PACKET_ID)
else:
packet_id = 0
payload_len = header.remaining_len - decoder.num_bytes_consumed
payload = decoder.read(payload_len)
return decoder.num_bytes_consumed, MqttPublish(packet_id, topic_name, payload, dupe, qos, retain) |
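The flag-nibble masks in this decoder can be isolated into a small helper. This sketch mirrors the bit positions used above (bit 3 DUP, bits 2-1 QoS, bit 0 RETAIN) and is purely illustrative:

```python
def publish_flags(flags):
    """Split the PUBLISH fixed-header flag nibble into
    (dupe, qos, retain), using the same masks as decode_body."""
    dupe = bool(flags & 0x08)    # bit 3
    qos = (flags & 0x06) >> 1    # bits 2-1
    retain = bool(flags & 0x01)  # bit 0
    if qos == 0 and dupe:
        # Same check as the decoder: DUP must be 0 for QoS 0
        # messages [MQTT-3.3.1-2].
        raise ValueError('DUP must be 0 when QoS is 0 [MQTT-3.3.1-2].')
    return dupe, qos, retain
```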
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttPubrel` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `pubrel`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttPubrel Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.pubrel
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
packet_id, = decoder.unpack(mqtt_io.FIELD_U16)
if header.remaining_len != decoder.num_bytes_consumed:
raise DecodeError('Extra bytes at end of packet.')
return decoder.num_bytes_consumed, MqttPubrel(packet_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttUnsubscribe` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `unsubscribe`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttUnsubscribe Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.unsubscribe
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
packet_id, = decoder.unpack(mqtt_io.FIELD_PACKET_ID)
topics = []
while header.remaining_len > decoder.num_bytes_consumed:
num_str_bytes, topic = decoder.unpack_utf8()
topics.append(topic)
assert header.remaining_len - decoder.num_bytes_consumed == 0
return decoder.num_bytes_consumed, MqttUnsubscribe(packet_id, topics) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttUnsuback` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `unsuback`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttUnsuback Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.unsuback
decoder = mqtt_io.FileDecoder(mqtt_io.LimitReader(f, header.remaining_len))
packet_id, = decoder.unpack(mqtt_io.FIELD_PACKET_ID)
if header.remaining_len != decoder.num_bytes_consumed:
raise DecodeError('Extra bytes at end of packet.')
return decoder.num_bytes_consumed, MqttUnsuback(packet_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttPingreq` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `pingreq`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttPingreq Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.pingreq
if header.remaining_len != 0:
raise DecodeError('Extra bytes at end of packet.')
return 0, MqttPingreq() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_body(cls, header, f):
"""Generates a `MqttPingresp` packet given a `MqttFixedHeader`. This method asserts that header.packet_type is `pingresp`. Parameters header: MqttFixedHeader f: file Object with a read method. Raises ------ DecodeError When there are extra bytes at the end of the packet. Returns ------- int Number of bytes consumed from ``f``. MqttPingresp Object extracted from ``f``. """ |
assert header.packet_type == MqttControlPacketType.pingresp
if header.remaining_len != 0:
raise DecodeError('Extra bytes at end of packet.')
return 0, MqttPingresp() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect(self):
""" Sets up your Phabricator session, it's not necessary to call this directly """ |
if self.token:
self.phab_session = {'token': self.token}
return
req = self.req_session.post('%s/api/conduit.connect' % self.host, data={
'params': json.dumps(self.connect_params),
'output': 'json',
'__conduit__': True,
})
        # Parse out the response (error handling omitted)
result = req.json()['result']
self.phab_session = {
'sessionKey': result['sessionKey'],
'connectionID': result['connectionID'],
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def install(force=False):
"""Install git hooks.""" |
ret, git_dir, _ = run("git rev-parse --show-toplevel")
if ret != 0:
click.echo(
"ERROR: Please run from within a GIT repository.",
file=sys.stderr)
raise click.Abort
git_dir = git_dir[0]
hooks_dir = os.path.join(git_dir, HOOK_PATH)
for hook in HOOKS:
hook_path = os.path.join(hooks_dir, hook)
if os.path.exists(hook_path):
if not force:
click.echo(
"Hook already exists. Skipping {0}".format(hook_path),
file=sys.stderr)
continue
else:
os.unlink(hook_path)
source = os.path.join(sys.prefix, "bin", "kwalitee-" + hook)
os.symlink(os.path.normpath(source), hook_path)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def uninstall():
"""Uninstall git hooks.""" |
ret, git_dir, _ = run("git rev-parse --show-toplevel")
if ret != 0:
click.echo(
"ERROR: Please run from within a GIT repository.",
file=sys.stderr)
raise click.Abort
git_dir = git_dir[0]
hooks_dir = os.path.join(git_dir, HOOK_PATH)
for hook in HOOKS:
hook_path = os.path.join(hooks_dir, hook)
if os.path.exists(hook_path):
os.remove(hook_path)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_logger():
""" setup basic logger """ |
logger = logging.getLogger('dockerstache')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
handler.setLevel(logging.INFO)
logger.addHandler(handler)
return logger |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def named_any(name):
""" Retrieve a Python object by its fully qualified name from the global Python module namespace. The first part of the name, that describes a module, will be discovered and imported. Each subsequent part of the name is treated as the name of an attribute of the object specified by all of the name which came before it. @param name: The name of the object to return. @return: the Python object identified by 'name'. """ |
assert name, 'Empty module name'
names = name.split('.')
topLevelPackage = None
moduleNames = names[:]
while not topLevelPackage:
if moduleNames:
trialname = '.'.join(moduleNames)
try:
topLevelPackage = __import__(trialname)
            except Exception:
moduleNames.pop()
else:
if len(names) == 1:
raise Exception("No module named %r" % (name,))
else:
raise Exception('%r does not name an object' % (name,))
obj = topLevelPackage
for n in names[1:]:
obj = getattr(obj, n)
return obj |
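For a Python 3 take on the same idea, a minimal sketch using `importlib` instead of the legacy `__import__` loop (the helper name and structure here are illustrative, not part of the original API):

```python
import importlib

def named_any(name):
    """Resolve a dotted name: import the longest importable module
    prefix, then walk the remaining parts as attributes."""
    parts = name.split('.')
    module = None
    for i in range(len(parts), 0, -1):
        try:
            module = importlib.import_module('.'.join(parts[:i]))
            break
        except ImportError:
            continue
    if module is None:
        raise ImportError('No module named %r' % (name,))
    obj = module
    for attr in parts[i:]:
        obj = getattr(obj, attr)
    return obj

join = named_any('os.path.join')
```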
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def for_name(modpath, classname):
'''
Returns a class of "classname" from module "modname".
'''
module = __import__(modpath, fromlist=[classname])
classobj = getattr(module, classname)
return classobj() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert(self, val):
""" Convert the type if necessary and return if a conversion happened. """ |
if isinstance(val, dict) and not isinstance(val, DotDict):
return DotDict(val), True
elif isinstance(val, list) and not isinstance(val, DotList):
return DotList(val), True
return val, False |
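A minimal standalone sketch of the kind of class `_convert` supports (a dict subclass with attribute access; purely illustrative, not the original implementation):

```python
class DotDict(dict):
    """Dict with attribute access that converts nested plain dicts
    lazily, in the spirit of the _convert method above."""
    def __getattr__(self, key):
        try:
            val = self[key]
        except KeyError:
            raise AttributeError(key)
        if isinstance(val, dict) and not isinstance(val, DotDict):
            val = DotDict(val)
            self[key] = val  # cache the converted value
        return val

cfg = DotDict({'db': {'host': 'localhost', 'port': 5432}})
```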
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Convert to a JSON string. """ |
obj = {
"vertices": [
{
"id": vertex.id,
"annotation": vertex.annotation,
}
for vertex in self.vertices
],
"edges": [
{
"id": edge.id,
"annotation": edge.annotation,
"head": edge.head,
"tail": edge.tail,
}
for edge in self._edges
],
}
# Ensure that we always return unicode output on Python 2.
return six.text_type(json.dumps(obj, ensure_ascii=False)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_json(cls, json_graph):
""" Reconstruct the graph from a graph exported to JSON. """ |
obj = json.loads(json_graph)
vertices = [
AnnotatedVertex(
id=vertex["id"],
annotation=vertex["annotation"],
)
for vertex in obj["vertices"]
]
edges = [
AnnotatedEdge(
id=edge["id"],
annotation=edge["annotation"],
head=edge["head"],
tail=edge["tail"],
)
for edge in obj["edges"]
]
return cls(vertices=vertices, edges=edges) |
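The two methods round-trip through the same JSON shape. A compact standalone sketch of that round trip, using `namedtuple` stand-ins for the real vertex and edge classes (these names and the free functions are assumptions for illustration):

```python
import json
from collections import namedtuple

AnnotatedVertex = namedtuple('AnnotatedVertex', 'id annotation')
AnnotatedEdge = namedtuple('AnnotatedEdge', 'id annotation head tail')

def graph_to_json(vertices, edges):
    # Same shape as to_json above: lists of plain dicts.
    return json.dumps({
        'vertices': [v._asdict() for v in vertices],
        'edges': [e._asdict() for e in edges],
    })

def graph_from_json(s):
    obj = json.loads(s)
    return ([AnnotatedVertex(**v) for v in obj['vertices']],
            [AnnotatedEdge(**e) for e in obj['edges']])

vs = [AnnotatedVertex(1, 'start'), AnnotatedVertex(2, 'end')]
es = [AnnotatedEdge(3, 'link', 1, 2)]
rt = graph_from_json(graph_to_json(vs, es))
```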