<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_dataframe_from_xls(desired_type: Type[T], file_path: str, encoding: str, logger: Logger, **kwargs) -> pd.DataFrame: """ We register this method rather than the other because pandas guesses the encoding by itself. Also, it is easier to put a breakpoint and debug by trying various options to find the good one (in streaming mode you just have one try and then the stream is consumed) :param desired_type: :param file_path: :param encoding: :param logger: :param kwargs: :return: """ |
return pd.read_excel(file_path, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_df_or_series_from_csv(desired_type: Type[pd.DataFrame], file_path: str, encoding: str, logger: Logger, **kwargs) -> pd.DataFrame: """ Helper method to read a dataframe from a csv file. By default this is well suited for a dataframe with headers in the first row, for example a parameter dataframe. :param desired_type: :param file_path: :param encoding: :param logger: :param kwargs: :return: """ |
if desired_type is pd.Series:
# as recommended in http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.from_csv.html
# and from http://stackoverflow.com/questions/15760856/how-to-read-a-pandas-series-from-a-csv-file
# TODO there should be a way to decide between row-oriented (squeeze=True) and col-oriented (index_col=0)
# note: squeeze=True only works for row-oriented data, so we don't use it. Instead we expect that a row-oriented
# dataframe can be converted to a series using the df-to-series converter below
if 'index_col' not in kwargs.keys():
one_col_df = pd.read_csv(file_path, encoding=encoding, index_col=0, **kwargs)
else:
one_col_df = pd.read_csv(file_path, encoding=encoding, **kwargs)
if one_col_df.shape[1] == 1:
return one_col_df[one_col_df.columns[0]]
else:
raise Exception('Cannot build a series from this csv: it has more than two columns (one index + one value).'
' The parsing chain $read_df_or_series_from_csv => single_row_or_col_df_to_series$'
' will probably work, though.')
else:
return pd.read_csv(file_path, encoding=encoding, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dict_to_df(desired_type: Type[T], dict_obj: Dict, logger: Logger, orient: str = None, **kwargs) -> pd.DataFrame: """ Helper method to convert a dictionary into a dataframe. It supports both simple key-value dicts as well as true table dicts. For this it uses pd.DataFrame constructor or pd.DataFrame.from_dict intelligently depending on the case. The orientation of the resulting dataframe can be configured, or left to default behaviour. Default orientation is different depending on the contents: * 'index' for 2-level dictionaries, in order to align as much as possible with the natural way to express rows in JSON * 'columns' for 1-level (simple key-value) dictionaries, so as to preserve the data types of the scalar values in the resulting dataframe columns if they are different :param desired_type: :param dict_obj: :param logger: :param orient: the orientation of the resulting dataframe. :param kwargs: :return: """ |
if len(dict_obj) > 0:
first_val = dict_obj[next(iter(dict_obj))]
if isinstance(first_val, dict) or isinstance(first_val, list):
# --'full' table
# default is index orientation
orient = orient or 'index'
# if orient is 'columns':
# return pd.DataFrame(dict_obj)
# else:
return pd.DataFrame.from_dict(dict_obj, orient=orient)
else:
# --scalar > single-row or single-col
# default is columns orientation
orient = orient or 'columns'
if orient == 'columns':
return pd.DataFrame(dict_obj, index=[0])
else:
res = pd.DataFrame.from_dict(dict_obj, orient=orient)
res.index.name = 'key'
return res.rename(columns={0: 'value'})
else:
# for empty dictionaries, orientation does not matter
# but maybe we should still create a column 'value' in this empty dataframe ?
return pd.DataFrame.from_dict(dict_obj) |
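The orientation behaviour described in the docstring can be checked directly with pandas; this sketch uses only the pandas calls the function itself relies on:

```python
import pandas as pd

# A simple key-value dict: 'columns' orientation keeps one row,
# preserving each scalar's dtype in its own column.
simple = {'a': 1, 'b': 'x'}
row_df = pd.DataFrame(simple, index=[0])

# A two-level ("table") dict: 'index' orientation maps each outer
# key to a row, matching the natural JSON way to express rows.
table = {'r1': {'x': 1, 'y': 2}, 'r2': {'x': 3, 'y': 4}}
table_df = pd.DataFrame.from_dict(table, orient='index')
```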
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def single_row_or_col_df_to_series(desired_type: Type[T], single_rowcol_df: pd.DataFrame, logger: Logger, **kwargs)\ -> pd.Series: """ Helper method to convert a dataframe with one row or one or two columns into a Series :param desired_type: :param single_col_df: :param logger: :param kwargs: :return: """ |
if single_rowcol_df.shape[0] == 1:
# one row
return single_rowcol_df.transpose()[0]
elif single_rowcol_df.shape[1] == 2 and isinstance(single_rowcol_df.index, pd.RangeIndex):
# two columns but the index contains nothing but the row number : we can use the first column
d = single_rowcol_df.set_index(single_rowcol_df.columns[0])
return d[d.columns[0]]
elif single_rowcol_df.shape[1] == 1:
# one column and one index
d = single_rowcol_df
return d[d.columns[0]]
else:
raise ValueError('Unable to convert provided dataframe to a series : '
'expected exactly 1 row or 1 column, found : ' + str(single_rowcol_df.shape) + '') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def single_row_or_col_df_to_dict(desired_type: Type[T], single_rowcol_df: pd.DataFrame, logger: Logger, **kwargs)\ -> Dict[str, str]: """ Helper method to convert a dataframe with one row or one or two columns into a dictionary :param desired_type: :param single_rowcol_df: :param logger: :param kwargs: :return: """ |
if single_rowcol_df.shape[0] == 1:
return single_rowcol_df.transpose()[0].to_dict()
# return {col_name: single_rowcol_df[col_name][single_rowcol_df.index.values[0]] for col_name in single_rowcol_df.columns}
elif single_rowcol_df.shape[1] == 2 and isinstance(single_rowcol_df.index, pd.RangeIndex):
# two columns but the index contains nothing but the row number : we can use the first column
d = single_rowcol_df.set_index(single_rowcol_df.columns[0])
return d[d.columns[0]].to_dict()
elif single_rowcol_df.shape[1] == 1:
# one column and one index
d = single_rowcol_df
return d[d.columns[0]].to_dict()
else:
raise ValueError('Unable to convert provided dataframe to a parameters dictionary : '
'expected exactly 1 row or 1 column, found : ' + str(single_rowcol_df.shape) + '') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def full_subgraph(self, vertices):
""" Return the subgraph of this graph whose vertices are the given ones and whose edges are all the edges of the original graph between those vertices. """ |
subgraph_vertices = set(vertices)
subgraph_edges = {edge
for v in subgraph_vertices
for edge in self._out_edges[v]
if self._heads[edge] in subgraph_vertices}
subgraph_heads = {edge: self._heads[edge]
for edge in subgraph_edges}
subgraph_tails = {edge: self._tails[edge]
for edge in subgraph_edges}
return DirectedGraph._raw(
vertices=subgraph_vertices,
edges=subgraph_edges,
heads=subgraph_heads,
tails=subgraph_tails,
) |
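The edge-filtering idea is independent of the DirectedGraph class; a minimal sketch with plain dicts (the names here are illustrative, not the module's API):

```python
# Graph as explicit edge tables: edge id -> tail / head vertex.
tails = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}
heads = {0: 'b', 1: 'c', 2: 'c', 3: 'a'}
out_edges = {}
for e, t in tails.items():
    out_edges.setdefault(t, set()).add(e)

def full_subgraph_edges(vertices):
    """Keep exactly the edges whose tail AND head are both retained."""
    keep = set(vertices)
    return {e
            for v in keep
            for e in out_edges.get(v, set())
            if heads[e] in keep}

# Restricting to {'a', 'b'} keeps only the a->b edge.
sub = full_subgraph_edges({'a', 'b'})
```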
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _raw(cls, vertices, edges, heads, tails):
""" Private constructor for direct construction of a DirectedGraph from its consituents. """ |
self = object.__new__(cls)
self._vertices = vertices
self._edges = edges
self._heads = heads
self._tails = tails
# For future use, map each vertex to its outward and inward edges.
# These could be computed on demand instead of precomputed.
self._out_edges = collections.defaultdict(set)
self._in_edges = collections.defaultdict(set)
for edge in self._edges:
self._out_edges[self._tails[edge]].add(edge)
self._in_edges[self._heads[edge]].add(edge)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_out_edges(cls, vertices, edge_mapper):
""" Create a DirectedGraph from a collection of vertices and a mapping giving the vertices that each vertex is connected to. """ |
vertices = set(vertices)
edges = set()
heads = {}
tails = {}
# Number the edges arbitrarily.
edge_identifier = itertools.count()
for tail in vertices:
for head in edge_mapper[tail]:
edge = next(edge_identifier)
edges.add(edge)
heads[edge] = head
tails[edge] = tail
return cls._raw(
vertices=vertices,
edges=edges,
heads=heads,
tails=tails,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_edge_pairs(cls, vertices, edge_pairs):
""" Create a DirectedGraph from a collection of vertices and a collection of pairs giving links between the vertices. """ |
vertices = set(vertices)
edges = set()
heads = {}
tails = {}
# Number the edges arbitrarily.
edge_identifier = itertools.count()
for tail, head in edge_pairs:
edge = next(edge_identifier)
edges.add(edge)
heads[edge] = head
tails[edge] = tail
return cls._raw(
vertices=vertices,
edges=edges,
heads=heads,
tails=tails,
) |
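Both constructors number edges with `itertools.count()`; the pattern is simply a fresh integer id per (tail, head) pair. A standalone sketch of that numbering scheme:

```python
import itertools

edge_pairs = [('a', 'b'), ('b', 'c'), ('a', 'c')]
edge_identifier = itertools.count()  # arbitrary but unique ids: 0, 1, 2, ...

edges, heads, tails = set(), {}, {}
for tail, head in edge_pairs:
    edge = next(edge_identifier)
    edges.add(edge)
    heads[edge] = head
    tails[edge] = tail
```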
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def annotated(self):
""" Return an AnnotatedGraph with the same structure as this graph. """ |
annotated_vertices = {
vertex: AnnotatedVertex(
id=vertex_id,
annotation=six.text_type(vertex),
)
for vertex_id, vertex in zip(itertools.count(), self.vertices)
}
annotated_edges = [
AnnotatedEdge(
id=edge_id,
annotation=six.text_type(edge),
head=annotated_vertices[self.head(edge)].id,
tail=annotated_vertices[self.tail(edge)].id,
)
for edge_id, edge in zip(itertools.count(), self.edges)
]
return AnnotatedGraph(
vertices=annotated_vertices.values(),
edges=annotated_edges,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(self):
""" read dotfile and populate self opts will override the dotfile settings, make sure everything is synced in both opts and this object """ |
if self.exists():
with open(self.dot_file, 'r') as handle:
self.update(json.load(handle))
if self.options['context'] is not None:
self['context'] = self.options['context']
else:
self.options['context'] = self['context']
if self.options['defaults'] is not None:
self['defaults'] = self.options['defaults']
else:
self.options['defaults'] = self['defaults']
if self.options['output'] is not None:
self['output'] = self.options['output']
if self.options.get('inclusive', False):
self['inclusive'] = True
if self.options.get('exclude', []):
self['exclude'].extend(self.options['exclude'])
if self['output'] is None:
self['output'] = os.path.join(os.getcwd(), 'dockerstache-output')
self['output_path'] = self.abs_output_dir()
self['input_path'] = self.abs_input_dir()
if self['context'] is not None:
self['context_path'] = absolute_path(self['context'])
if self['defaults'] is not None:
self['defaults_path'] = absolute_path(self['defaults']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def env_dictionary(self):
""" convert the options to this script into an env var dictionary for pre and post scripts """ |
none_to_str = lambda x: str(x) if x else ""
return {"DOCKERSTACHE_{}".format(k.upper()): none_to_str(v) for k, v in six.iteritems(self)} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pre_script(self):
""" execute the pre script if it is defined """ |
if self['pre_script'] is None:
return
LOGGER.info("Executing pre script: {}".format(self['pre_script']))
cmd = self['pre_script']
execute_command(self.abs_input_dir(), cmd, self.env_dictionary())
LOGGER.info("Pre Script completed") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def say_tmp_filepath( text = None, preference_program = "festival" ):
""" Say specified text to a temporary file and return the filepath. """ |
filepath = shijian.tmp_filepath() + ".wav"
say(
text = text,
preference_program = preference_program,
filepath = filepath
)
return filepath |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clacks_overhead(fn):
""" A Django view decorator that will add the `X-Clacks-Overhead` header. Usage: @clacks_overhead def my_view(request):
return my_response """ |
@wraps(fn)
def _wrapped(*args, **kw):
response = fn(*args, **kw)
response['X-Clacks-Overhead'] = 'GNU Terry Pratchett'
return response
return _wrapped |
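The same `functools.wraps` pattern works on any dict-like response object; a framework-free sketch in which a plain dict stands in for Django's HttpResponse:

```python
from functools import wraps

def clacks_overhead(fn):
    @wraps(fn)  # preserve the wrapped view's name and docstring
    def _wrapped(*args, **kw):
        response = fn(*args, **kw)
        response['X-Clacks-Overhead'] = 'GNU Terry Pratchett'
        return response
    return _wrapped

@clacks_overhead
def my_view(request):
    return {'body': 'ok'}  # stand-in for an HttpResponse

resp = my_view(None)
```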
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render(self, request, template, context):
""" Returns a response. By default, this will contain the rendered PDF, but if both ``allow_force_html`` is ``True`` and the querystring ``html=true`` was set it will return a plain HTML. """ |
if self.allow_force_html and self.request.GET.get('html', False):
html = get_template(template).render(context)
return HttpResponse(html)
else:
response = HttpResponse(content_type='application/pdf')
if self.prompt_download:
response['Content-Disposition'] = 'attachment; filename="{}"' \
.format(self.get_download_name())
helpers.render_pdf(
template=template,
file_=response,
url_fetcher=self.url_fetcher,
context=context,
)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def replace(self, s, data, attrs=None):
""" Replace the attributes of the plotter data in a string %(replace_note)s Parameters s: str String where the replacements shall be made data: InteractiveBase Data object from which to use the coordinates and insert the coordinate and attribute informations attrs: dict Meta attributes that shall be used for replacements. If None, it will be gained from `data.attrs` Returns ------- str `s` with inserted informations""" |
# insert labels
s = s.format(**self.rc['labels'])
# replace attributes
attrs = attrs or data.attrs
if hasattr(getattr(data, 'psy', None), 'arr_name'):
attrs = attrs.copy()
attrs['arr_name'] = data.psy.arr_name
s = safe_modulo(s, attrs)
# replace datetime.datetime like time informations
if isinstance(data, InteractiveList):
data = data[0]
tname = self.any_decoder.get_tname(
next(self.plotter.iter_base_variables), data.coords)
if tname is not None and tname in data.coords:
time = data.coords[tname]
if not time.values.ndim:
try: # assume a valid datetime.datetime instance
s = pd.to_datetime(str(time.values[()])).strftime(s)
except ValueError:
pass
if six.PY2:
return s.decode('utf-8')
return s |
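The final step feeds the whole (already substituted) string through `strftime` when a scalar time coordinate is present; the core trick in isolation, using a plain stdlib datetime rather than an xarray coordinate:

```python
from datetime import datetime

# Literal text passes through strftime untouched; only the
# %-directives are replaced by the time coordinate's components.
s = 'run of %Y-%m-%d'
formatted = datetime(2001, 2, 3, 4, 5, 6).strftime(s)
```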
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fig_data_attrs(self, delimiter=None):
"""Join the data attributes with other plotters in the project This method joins the attributes of the :class:`~psyplot.InteractiveBase` instances in the project that draw on the same figure as this instance does. Parameters delimiter: str Specifies the delimiter with what the attributes are joined. If None, the :attr:`delimiter` attribute of this instance or (if the latter is also None), the rcParams['texts.delimiter'] item is used. Returns ------- dict A dictionary with all the meta attributes joined by the specified `delimiter`""" |
if self.project is not None:
delimiter = next(filter(lambda d: d is not None, [
delimiter, self.delimiter, self.rc['delimiter']]))
figs = self.project.figs
fig = self.ax.get_figure()
if self.plotter._initialized and fig in figs:
ret = figs[fig].joined_attrs(delimiter=delimiter,
plot_data=True)
else:
ret = self.get_enhanced_attrs(self.plotter.plot_data)
self.logger.debug(
'Can not get the figure attributes because plot has not '
'yet been initialized!')
return ret
else:
return self.get_enhanced_attrs(self.plotter.plot_data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fmt_widget(self, parent, project):
"""Create a combobox with the attributes""" |
from psy_simple.widgets.texts import LabelWidget
return LabelWidget(parent, self, project) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_other_texts(self, remove=False):
"""Make sure that no other text is a the same position as this one This method clears all text instances in the figure that are at the same position as the :attr:`_text` attribute Parameters remove: bool If True, the Text instances are permanently deleted from the figure, otherwise there text is simply set to ''""" |
fig = self.ax.get_figure()
# don't do anything if our figtitle is the only Text instance
if len(fig.texts) == 1:
return
for i, text in enumerate(fig.texts):
if text == self._text:
continue
if text.get_position() == self._text.get_position():
if not remove:
text.set_text('')
else:
# Figure objects don't support item deletion; remove the Text artist directly
text.remove() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform(self):
"""Dictionary containing the relevant transformations""" |
ax = self.ax
return {'axes': ax.transAxes,
'fig': ax.get_figure().transFigure,
'data': ax.transData} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _remove_texttuple(self, pos):
"""Remove a texttuple from the value in the plotter Parameters pos: tuple (x, y, cs) x and y are the x- and y-positions and cs the coordinate system""" |
for i, (old_x, old_y, s, old_cs, d) in enumerate(self.value):
if (old_x, old_y, old_cs) == pos:
self.value.pop(i)
return
raise ValueError("{0} not found!".format(pos)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_texttuple(self, x, y, s, cs, d):
"""Update the text tuple at `x` and `y` with the given `s` and `d`""" |
pos = (x, y, cs)
for i, (old_x, old_y, old_s, old_cs, old_d) in enumerate(self.value):
if (old_x, old_y, old_cs) == pos:
self.value[i] = (old_x, old_y, s, old_cs, d)
return
raise ValueError("No text tuple found at {0}!".format(pos)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def share(self, fmto, **kwargs):
"""Share the settings of this formatoption with other data objects Parameters fmto: Formatoption The :class:`Formatoption` instance to share the attributes with ``**kwargs`` Any other keyword argument that shall be passed to the update method of `fmto` Notes ----- The Text formatoption sets the 'texts_to_remove' keyword to the :attr:`_texts_to_remove` attribute of this instance (if not already specified in ``**kwargs``""" |
kwargs.setdefault('texts_to_remove', self._texts_to_remove)
super(Text, self).share(fmto, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def preprocess_cell( self, cell: "NotebookNode", resources: dict, index: int ) -> Tuple["NotebookNode", dict]: """Preprocess cell. Parameters cell : NotebookNode cell Notebook cell being processed resources : dictionary Additional resources used in the conversion process. Allows preprocessors to pass variables into the Jinja engine. cell_index : int Index of the cell being processed (see base.py) """ |
if cell.cell_type == "markdown":
variables = cell["metadata"].get("variables", {})
if len(variables) > 0:
cell.source = self.replace_variables(cell.source, variables)
if resources.get("delete_pymarkdown", False):
del cell.metadata["variables"]
return cell, resources |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def index_dir(self, folder):
""" Creates a nested dictionary that represents the folder structure of folder. Also extracts meta data from all markdown posts and adds to the dictionary. """ |
folder_path = folder
print('Indexing folder: ' + folder_path)
nested_dir = {}
folder = folder_path.rstrip(os.sep)
start = folder.rfind(os.sep) + 1
for root, dirs, files in os.walk(folder):
folders = root[start:].split(os.sep)
# subdir = dict.fromkeys(files)
subdir = {}
for f in files:
# Create an entry for every markdown file
if os.path.splitext(f)[1] == '.md':
with open(os.path.abspath(os.path.join(root, f)), encoding='utf-8') as fp:
try:
_, meta = self.mrk.extract_meta(fp.read())
except Exception:
print("Skipping indexing " + f + "; could not parse metadata")
meta = {'title': f}
# Value of the entry (the key) is it's metadata
subdir[f] = meta
parent = nested_dir
for fold in folders[:-1]:
parent = parent.get(fold)
# Attach the config of all children nodes onto the parent
parent[folders[-1]] = subdir
return nested_dir |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cycles_created_by(callable):
""" Return graph of cyclic garbage created by the given callable. Return an :class:`~refcycle.object_graph.ObjectGraph` representing those objects generated by the given callable that can't be collected by Python's usual reference-count based garbage collection. This includes objects that will eventually be collected by the cyclic garbage collector, as well as genuinely unreachable objects that will never be collected. `callable` should be a callable that takes no arguments; its return value (if any) will be ignored. """ |
with restore_gc_state():
gc.disable()
gc.collect()
gc.set_debug(gc.DEBUG_SAVEALL)
callable()
new_object_count = gc.collect()
if new_object_count:
objects = gc.garbage[-new_object_count:]
del gc.garbage[-new_object_count:]
else:
objects = []
return ObjectGraph(objects) |
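The `DEBUG_SAVEALL` trick can be seen on a tiny self-cycle; this sketch keeps only the stdlib part (`restore_gc_state` and `ObjectGraph` belong to refcycle and are not assumed here):

```python
import gc

gc.disable()
gc.collect()
gc.set_debug(gc.DEBUG_SAVEALL)  # collected objects are kept in gc.garbage

def make_cycle():
    a = []
    a.append(a)  # self-referential list: unreachable after return

make_cycle()
new_object_count = gc.collect()
cyclic = gc.garbage[-new_object_count:] if new_object_count else []

gc.set_debug(0)
gc.enable()
```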
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def snapshot():
"""Return the graph of all currently gc-tracked objects. Excludes the returned :class:`~refcycle.object_graph.ObjectGraph` and objects owned by it. Note that a subsequent call to :func:`~refcycle.creators.snapshot` will capture all of the objects owned by this snapshot. The :meth:`~refcycle.object_graph.ObjectGraph.owned_objects` method may be helpful when excluding these objects from consideration. """ |
all_objects = gc.get_objects()
this_frame = inspect.currentframe()
selected_objects = []
for obj in all_objects:
if obj is not this_frame:
selected_objects.append(obj)
graph = ObjectGraph(selected_objects)
del this_frame, all_objects, selected_objects, obj
return graph |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extendMarkdown(self, md, md_globals):
""" Every extension requires a extendMarkdown method to tell the markdown renderer how use the extension. """ |
md.registerExtension(self)
for processor in (self.preprocessors or []):
md.preprocessors.add(processor.__name__.lower(), processor(md), '_end')
for pattern in (self.inlinepatterns or []):
md.inlinePatterns.add(pattern.__name__.lower(), pattern(md), '_end')
for processor in (self.postprocessors or []):
md.postprocessors.add(processor.__name__.lower(), processor(md), '_end') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run( paths, output=_I_STILL_HATE_EVERYTHING, recurse=core.flat, sort_by=None, ls=core.ls, stdout=stdout, ):
""" Project-oriented directory and file information lister. """ |
if output is _I_STILL_HATE_EVERYTHING:
output = core.columnized if stdout.isatty() else core.one_per_line
if sort_by is None:
if output == core.as_tree:
def sort_by(thing):
return (
thing.parent(),
thing.basename().lstrip(string.punctuation).lower(),
)
else:
def sort_by(thing):
return thing
def _sort_by(thing):
return not getattr(thing, "_always_sorts_first", False), sort_by(thing)
contents = [
path_and_children
for path in paths or (project.from_path(FilePath(".")),)
for path_and_children in recurse(path=path, ls=ls)
]
for line in output(contents, sort_by=_sort_by):
stdout.write(line)
stdout.write("\n") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getCustomLogger(name, logLevel, logFormat='%(asctime)s %(levelname)-9s:%(name)s:%(module)s:%(funcName)s: %(message)s'):
'''
Set up logging
:param str name: Name of the logger
:param str logLevel: What log level to use
:param str logFormat: Format string for logging
:rtype: logger
'''
assert isinstance(logFormat, basestring), ("logFormat must be a string but is %r" % logFormat)
assert isinstance(logLevel, basestring), ("logLevel must be a string but is %r" % logLevel)
assert isinstance(name, basestring), ("name must be a string but is %r" % name)
validLogLevels = ['CRITICAL', 'DEBUG', 'ERROR', 'INFO', 'WARNING']
if not logLevel:
logLevel = 'DEBUG'
# If they don't specify a valid log level, err on the side of verbosity
if logLevel.upper() not in validLogLevels:
logLevel = 'DEBUG'
numericLevel = getattr(logging, logLevel.upper(), None)
if not isinstance(numericLevel, int):
raise ValueError("Invalid log level: %s" % logLevel)
logging.basicConfig(level=numericLevel, format=logFormat)
logger = logging.getLogger(name)
return logger |
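The level-resolution logic (fall back to DEBUG for missing or invalid levels, then look the name up on the logging module) can be isolated as follows; `resolve_level` is an illustrative name, not part of the original code:

```python
import logging

def resolve_level(logLevel):
    valid = ['CRITICAL', 'DEBUG', 'ERROR', 'INFO', 'WARNING']
    # Err on the side of verbosity when the level is missing or invalid
    if not logLevel or logLevel.upper() not in valid:
        logLevel = 'DEBUG'
    return getattr(logging, logLevel.upper())

lvl = resolve_level('warning')
bad = resolve_level('chatty')
```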
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def mkdir_p(path):
'''
Mimic `mkdir -p` since os module doesn't provide one.
:param str path: directory to create
'''
assert isinstance(path, basestring), ("path must be a string but is %r" % path)
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise |
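The EEXIST guard is the Python 2 idiom; the behaviour is easy to confirm (on Python 3 the same effect is available directly via `os.makedirs(path, exist_ok=True)`):

```python
import errno
import os
import tempfile

def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exception:
        # Only swallow "already exists"; re-raise real failures
        if exception.errno != errno.EEXIST:
            raise

base = tempfile.mkdtemp()
target = os.path.join(base, 'a', 'b')
mkdir_p(target)
mkdir_p(target)  # second call is a no-op, like `mkdir -p`
```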
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_exchanges(app):
""" Setup result exchange to route all tasks to platform queue. """ |
with app.producer_or_acquire() as P:
# Ensure all queues are noticed and configured with their
# appropriate exchange.
for q in app.amqp.queues.values():
P.maybe_declare(q) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_app(app, throw=True):
""" Ensure application is set up to expected configuration. This function is typically triggered by the worker_init signal, however it must be called manually by codebases that are run only as task producers or from within a Python shell. """ |
success = True
try:
for func in SETUP_FUNCS:
try:
func(app)
except Exception:
success = False
if throw:
raise
else:
msg = "Failed to run setup function %r(app)"
logger.exception(msg, func.__name__)
finally:
setattr(app, 'is_set_up', success) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _poplast(self):
"""For avoiding lock during inserting to keep maxlen""" |
try:
tup = self.data.pop()
except IndexError as ex:
ex.args = ('DEPQ is already empty',)
raise
self_items = self.items
try:
self_items[tup[0]] -= 1
if self_items[tup[0]] == 0:
del self_items[tup[0]]
except TypeError:
r = repr(tup[0])
self_items[r] -= 1
if self_items[r] == 0:
del self_items[r]
return tup |
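The occurrence-count bookkeeping, with its repr() fallback for unhashable items, can be sketched standalone (plain list and dict stand in for the DEPQ internals):

```python
# data: list of (item, priority) tuples; items: occurrence counts.
# Unhashable items are tracked under their repr(), as in DEPQ.
data = [(['x'], 2), ('a', 1)]
items = {repr(['x']): 1, 'a': 1}

def poplast():
    tup = data.pop()
    try:
        items[tup[0]] -= 1
        if items[tup[0]] == 0:
            del items[tup[0]]
    except TypeError:  # unhashable item, e.g. a list
        r = repr(tup[0])
        items[r] -= 1
        if items[r] == 0:
            del items[r]
    return tup

last = poplast()   # hashable item: counted directly
last2 = poplast()  # unhashable item: counted under its repr()
```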
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def DatabaseEnabled(cls):
"""Given persistence methods to classes with this annotation. All this really does is add some functions that forward to the mapped database class. """ |
if not issubclass(cls, Storable):
raise ValueError(
"%s is not a subclass of gludb.datab.Storage" % repr(cls)
)
cls.ensure_table = classmethod(_ensure_table)
cls.find_one = classmethod(_find_one)
cls.find_all = classmethod(_find_all)
cls.find_by_index = classmethod(_find_by_index)
cls.save = _save
cls.delete = _delete
return cls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_playlist(self):
""" Internal method to populate the object given the ``id`` or ``reference_id`` that has been set in the constructor. """ |
data = None
if self.id:
data = self.connection.get_item(
'find_playlist_by_id', playlist_id=self.id)
elif self.reference_id:
data = self.connection.get_item(
'find_playlist_by_reference_id',
reference_id=self.reference_id)
if data:
self._load(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _to_dict(self):
""" Internal method that serializes object into a dictionary. """ |
data = {
'name': self.name,
'referenceId': self.reference_id,
'shortDescription': self.short_description,
'playlistType': self.type,
'id': self.id}
if self.videos:
for video in self.videos:
if video.id not in self.video_ids:
self.video_ids.append(video.id)
if self.video_ids:
data['videoIds'] = self.video_ids
data = {key: value for key, value in data.items() if value is not None}
return data |
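Dropping None-valued keys before serializing is safer done with a fresh dict comprehension than by popping while iterating, which raises RuntimeError on Python 3:

```python
data = {'name': 'best-of', 'referenceId': None, 'id': 42,
        'shortDescription': None}
# Build a new dict rather than mutating `data` mid-iteration
data = {key: value for key, value in data.items() if value is not None}
```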
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load(self, data):
""" Internal method that deserializes a ``pybrightcove.playlist.Playlist`` object. """ |
self.raw_data = data
self.id = data['id']
self.reference_id = data['referenceId']
self.name = data['name']
self.short_description = data['shortDescription']
self.thumbnail_url = data['thumbnailURL']
self.videos = []
self.video_ids = data['videoIds']
self.type = data['playlistType']
for video in data.get('videos', []):
self.videos.append(pybrightcove.video.Video(
data=video, connection=self.connection)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self):
""" Create or update a playlist. """ |
d = self._to_dict()
if len(d.get('videoIds', [])) > 0:
if not self.id:
self.id = self.connection.post('create_playlist', playlist=d)
else:
data = self.connection.post('update_playlist', playlist=d)
if data:
self._load(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, cascade=False):
""" Deletes this playlist. """ |
if self.id:
self.connection.post('delete_playlist', playlist_id=self.id,
cascade=cascade)
self.id = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_all(connection=None, page_size=100, page_number=0, sort_by=DEFAULT_SORT_BY, sort_order=DEFAULT_SORT_ORDER):
""" List all playlists. """ |
return pybrightcove.connection.ItemResultSet("find_all_playlists",
Playlist, connection, page_size, page_number, sort_by, sort_order) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_by_ids(ids, connection=None, page_size=100, page_number=0, sort_by=DEFAULT_SORT_BY, sort_order=DEFAULT_SORT_ORDER):
""" List playlists by specific IDs. """ |
ids = ','.join([str(i) for i in ids])
return pybrightcove.connection.ItemResultSet('find_playlists_by_ids',
Playlist, connection, page_size, page_number, sort_by, sort_order,
playlist_ids=ids) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_by_reference_ids(reference_ids, connection=None, page_size=100, page_number=0, sort_by=DEFAULT_SORT_BY, sort_order=DEFAULT_SORT_ORDER):
""" List playlists by specific reference_ids. """ |
reference_ids = ','.join([str(i) for i in reference_ids])
return pybrightcove.connection.ItemResultSet(
"find_playlists_by_reference_ids", Playlist, connection, page_size,
page_number, sort_by, sort_order, reference_ids=reference_ids) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_for_player_id(player_id, connection=None, page_size=100, page_number=0, sort_by=DEFAULT_SORT_BY, sort_order=DEFAULT_SORT_ORDER):
""" List playlists for a given player id. """ |
return pybrightcove.connection.ItemResultSet(
"find_playlists_for_player_id", Playlist, connection, page_size,
page_number, sort_by, sort_order, player_id=player_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_options_for_id(options: Dict[str, Dict[str, Any]], identifier: str):
""" Helper method, from the full options dict of dicts, to return either the options related to this parser or an empty dictionary. It also performs all the var type checks :param options: :param identifier: :return: """ |
check_var(options, var_types=dict, var_name='options')
res = options[identifier] if identifier in options.keys() else dict()
check_var(res, var_types=dict, var_name='options[' + identifier + ']')
return res |
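A minimal sketch of the same lookup, with the `check_var` validation helper (assumed, not shown in the source) replaced by plain `isinstance` checks:

```python
def get_options_for_id(options, identifier):
    """Return the sub-dict of options for one parser id, or an empty dict."""
    if not isinstance(options, dict):
        raise TypeError('options must be a dict')
    res = options.get(identifier, {})
    if not isinstance(res, dict):
        raise TypeError('options[%s] must be a dict' % identifier)
    return res

opts = {'csv_parser': {'sep': ';'}}
known = get_options_for_id(opts, 'csv_parser')
unknown = get_options_for_id(opts, 'unknown')
```

Unknown identifiers deliberately yield an empty dict rather than an error, so every parser can be queried uniformly.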
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert(self, desired_type: Type[T], source_obj: S, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: """ Implementing classes should implement this method to perform the conversion itself :param desired_type: the destination type of the conversion :param source_obj: the source object that should be converter :param logger: a logger to use if any is available, or None :param options: additional options map. Implementing classes may use 'self.get_applicable_options()' to get the options that are of interest for this converter. :return: """ |
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert(self, desired_type: Type[T], source_obj: S, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: """ Delegates to the user-provided method. Passes the appropriate part of the options according to the function name. :param desired_type: :param source_obj: :param logger: :param options: :return: """ |
try:
if self.unpack_options:
opts = self.get_applicable_options(options)
if self.function_args is not None:
return self.conversion_method(desired_type, source_obj, logger, **self.function_args, **opts)
else:
return self.conversion_method(desired_type, source_obj, logger, **opts)
else:
if self.function_args is not None:
return self.conversion_method(desired_type, source_obj, logger, options, **self.function_args)
else:
return self.conversion_method(desired_type, source_obj, logger, options)
except TypeError as e:
raise CaughtTypeError.create(self.conversion_method, e) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_first(self, inplace: bool = False):
""" Utility method to remove the first converter of this chain. If inplace is True, this object is modified and None is returned. Otherwise, a copy is returned :param inplace: boolean indicating whether to modify this object (True) or return a copy (False) :return: None or a copy with the first converter removed """ |
if len(self._converters_list) > 1:
if inplace:
self._converters_list = self._converters_list[1:]
# update the current source type
self.from_type = self._converters_list[0].from_type
return
else:
new = copy(self)
new._converters_list = new._converters_list[1:]
# update the current source type
new.from_type = new._converters_list[0].from_type
return new
else:
raise ValueError('cannot remove the first converter: the chain would become empty!')
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_conversion_steps(self, converters: List[Converter], inplace: bool = False):
""" Utility method to add converters to this chain. If inplace is True, this object is modified and None is returned. Otherwise, a copy is returned :param converters: the list of converters to add :param inplace: boolean indicating whether to modify this object (True) or return a copy (False) :return: None or a copy with the converters added """ |
check_var(converters, var_types=list, min_len=1)
if inplace:
for converter in converters:
self.add_conversion_step(converter, inplace=True)
else:
new = copy(self)
new.add_conversion_steps(converters, inplace=True)
return new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_conversion_step(self, converter: Converter[S, T], inplace: bool = False):
""" Utility method to add a converter to this chain. If inplace is True, this object is modified and None is returned. Otherwise, a copy is returned :param converter: the converter to add :param inplace: boolean indicating whether to modify this object (True) or return a copy (False) :return: None or a copy with the converter added """ |
# if both the current chain and the appended converter are generic, raise an error
if self.is_generic() and converter.is_generic():
raise ValueError('Cannot chain this generic converter chain to the provided converter : it is generic too!')
# if the current chain is able to transform its input into a valid input for the new converter
elif converter.can_be_appended_to(self, self.strict):
if inplace:
self._converters_list.append(converter)
# update the current destination type
self.to_type = converter.to_type
return
else:
new = copy(self)
new._converters_list.append(converter)
# update the current destination type
new.to_type = converter.to_type
return new
else:
raise TypeError('Cannot register a converter on this conversion chain : source type \''
+ get_pretty_type_str(converter.from_type)
+ '\' is not compliant with current destination type of the chain : \''
+ get_pretty_type_str(self.to_type) + '\' (this chain performs '
+ ('' if self.strict else 'non-') + 'strict mode matching)')
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert_conversion_steps_at_beginning(self, converters: List[Converter], inplace: bool = False):
""" Utility method to insert converters at the beginning of this chain. If inplace is True, this object is modified and None is returned. Otherwise, a copy is returned :param converters: the list of converters to insert :param inplace: boolean indicating whether to modify this object (True) or return a copy (False) :return: None or a copy with the converters added """ |
if inplace:
for converter in reversed(converters):
self.insert_conversion_step_at_beginning(converter, inplace=True)
return
else:
new = copy(self)
for converter in reversed(converters):
# do inplace since it is a copy
new.insert_conversion_step_at_beginning(converter, inplace=True)
return new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert(self, desired_type: Type[T], obj: S, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: """ Apply the converters of the chain in order to produce the desired result. Only the last converter will see the 'desired type', the others will be asked to produce their declared to_type. :param desired_type: :param obj: :param logger: :param options: :return: """ |
for converter in self._converters_list[:-1]:
# convert into each converters destination type
obj = converter.convert(converter.to_type, obj, logger, options)
# the last converter in the chain should convert to desired type
return self._converters_list[-1].convert(desired_type, obj, logger, options) |
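The chaining logic above can be reduced to its essence independently of the converter framework: each step consumes the previous step's output, and only the final step produces the target result. The class below is a hypothetical sketch, not the library's API:

```python
class MiniChain:
    """Hypothetical sketch: apply conversion steps in order, feeding each output forward."""
    def __init__(self, steps):
        self.steps = steps

    def convert(self, obj):
        # every intermediate step transforms the running object
        for step in self.steps:
            obj = step(obj)
        return obj

chain = MiniChain([str.strip, str.upper, lambda s: s.split(',')])
result = chain.convert('  a,b,c  ')
```

In the real implementation each intermediate converter is asked for its declared `to_type`, and only the last one sees the caller's `desired_type`.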
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def listens_to(name, sender=None, weak=True):
"""Listens to a named signal """ |
def decorator(f):
if sender:
return signal(name).connect(f, sender=sender, weak=weak)
return signal(name).connect(f, weak=weak)
return decorator |
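`listens_to` relies on blinker's named signals. A stdlib-only sketch of the same decorator pattern is below; the module-level registry and the `emit` helper are illustrative inventions, not blinker's API:

```python
_signals = {}

def listens_to(name):
    """Register the decorated function as a handler for the named signal."""
    def decorator(f):
        _signals.setdefault(name, []).append(f)
        return f
    return decorator

def emit(name, *args):
    """Call every handler registered for *name* and collect their results."""
    return [handler(*args) for handler in _signals.get(name, [])]

@listens_to('saved')
def on_saved(item):
    return 'saved:%s' % item

results = emit('saved', 'video-1')
```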
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def LoadInstallations(counter):
"""Load installed packages and export the version map. This function may be called multiple times, but the counters will be increased each time. Since Prometheus counters are never decreased, the aggregated results will not make sense. """ |
process = subprocess.Popen(["pip", "list", "--format=json"],
stdout=subprocess.PIPE)
output, _ = process.communicate()
installations = json.loads(output)
for i in installations:
counter.labels(i["name"], i["version"]).inc() |
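`LoadInstallations` shells out to pip and feeds the parsed JSON into a Prometheus counter. The parsing-and-counting step can be exercised without pip or `prometheus_client` by substituting a plain `collections.Counter` (an assumption made here for illustration):

```python
import json
from collections import Counter

def count_installations(pip_json_output, counter):
    """Count (name, version) pairs from `pip list --format=json` output."""
    for pkg in json.loads(pip_json_output):
        counter[(pkg['name'], pkg['version'])] += 1

counter = Counter()
count_installations('[{"name": "requests", "version": "2.31.0"}]', counter)
```

As the docstring warns, calling this repeatedly keeps incrementing the counts, so aggregated results only make sense for a single invocation.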
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def RESTrequest(*args, **kwargs):
"""return and save the blob of data that is returned from kegg without caring to the format""" |
verbose = kwargs.get('verbose', False)
force_download = kwargs.get('force', False)
save = kwargs.get('save', True)
# so you can copy paste from kegg
args = list(chain.from_iterable(a.split('/') for a in args))
args = [a for a in args if a]
request = 'http://rest.kegg.jp/' + "/".join(args)
print_verbose(verbose, "requesting the page: " + request)
filename = "KEGG_" + "_".join(args)
try:
if force_download:
raise IOError()
print_verbose(verbose, "loading the cached file " + filename)
with open(filename, 'r') as f:
data = pickle.load(f)
except IOError:
print_verbose(verbose, "downloading the library, it may take some time")
import urllib2
try:
req = urllib2.urlopen(request)
data = req.read()
if save:
with open(filename, 'w') as f:
print_verbose(verbose, "saving the file to " + filename)
pickle.dump(data, f)
# clean the error stacktrace
except urllib2.HTTPError as e:
raise e
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def command_help_long(self):
""" Return command help for use in global parser usage string @TODO update to support self.current_indent from formatter """ |
indent = " " * 2 # replace with current_indent
help = "Command must be one of:\n"
for action_name in self.parser.valid_commands:
help += "%s%-10s %-70s\n" % (indent, action_name, self.parser.commands[action_name].desc_short.capitalize())
help += '\nSee \'%s help COMMAND\' for help and information on a command' % self.parser.prog
return help |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self):
""" Run the multiopt parser """ |
self.parser = MultioptOptionParser(
usage="%prog <command> [options] [args]",
prog=self.clsname,
version=self.version,
option_list=self.global_options,
description=self.desc_short,
commands=self.command_set,
epilog=self.footer
)
try:
self.options, self.args = self.parser.parse_args(self.argv)
except Exception, e:
print str(e)
pass
if len(self.args) < 1:
self.parser.print_lax_help()
return 2
self.command = self.args.pop(0)
showHelp = False
if self.command == 'help':
if len(self.args) < 1:
self.parser.print_lax_help()
return 2
else:
self.command = self.args.pop()
showHelp = True
if self.command not in self.valid_commands:
self.parser.print_cmd_error(self.command)
return 2
self.command_set[self.command].set_cmdname(self.command)
subcmd_parser = self.command_set[self.command].get_parser(self.clsname, self.version, self.global_options)
subcmd_options, subcmd_args = subcmd_parser.parse_args(self.args)
if showHelp:
subcmd_parser.print_help_long()
return 1
try:
self.command_set[self.command].func(subcmd_options, *subcmd_args)
except (CommandError, TypeError), e:
# self.parser.print_exec_error(self.command, str(e))
subcmd_parser.print_exec_error(self.command, str(e))
print
# @TODO show command help
# self.parser.print_lax_help()
return 2
return 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, host=None, f_community=None, f_access=None, f_version=None):
""" Add an SNMP community string to a host :param host: t_hosts.id or t_hosts.f_ipaddr :param f_community: Community string to add :param f_access: READ or WRITE :param f_version: v1, v2c or v3 :return: (True/False, t_snmp.id/Error string) """ |
return self.send.snmp_add(host, f_community, f_access, f_version) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_collection(db_name, collection_name, host='localhost', port=27017):
"""Almost exclusively for testing.""" |
client = MongoClient("mongodb://%s:%d" % (host, port))
client[db_name].drop_collection(collection_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_1st_line(line, **kwargs):
"""First line check. Check that the first line has a known component name followed by a colon and then a short description of the commit. :param line: first line :type line: str :param components: list of known component names :type components: list :param max_first_line: maximum length of the first line :type max_first_line: int :return: errors as in (code, line number, *args) :rtype: list """ |
components = kwargs.get("components", ())
max_first_line = kwargs.get("max_first_line", 50)
errors = []
lineno = 1
if len(line) > max_first_line:
errors.append(("M190", lineno, max_first_line, len(line)))
if line.endswith("."):
errors.append(("M191", lineno))
if ':' not in line:
errors.append(("M110", lineno))
else:
component, msg = line.split(':', 1)
if component not in components:
errors.append(("M111", lineno, component))
return errors |
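The rules of the first-line check condense into a self-contained sketch (error codes copied from the source; the standalone function name is illustrative):

```python
def check_first_line(line, components=(), max_first_line=50):
    """Return the error codes triggered by a commit message's first line."""
    errors = []
    if len(line) > max_first_line:
        errors.append('M190')      # line too long
    if line.endswith('.'):
        errors.append('M191')      # must not end with a period
    if ':' not in line:
        errors.append('M110')      # missing "component:" prefix
    else:
        component, _ = line.split(':', 1)
        if component not in components:
            errors.append('M111')  # unknown component name
    return errors

ok = check_first_line('auth: add login endpoint', components=('auth',))
bad = check_first_line('fixed stuff.')
```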
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_bullets(lines, **kwargs):
"""Check that the bullet point list is well formatted. Each bullet point shall have one space before and after it. The bullet character is the "*" with no space before it but one after it, meaning continuation lines start with two blank spaces to respect the indentation. :param lines: all the lines of the message :type lines: list :param max_length: maximum length of any line. (Default 72) :return: errors as in (code, line number, *args) :rtype: list """ |
max_length = kwargs.get("max_length", 72)
labels = {l for l, _ in kwargs.get("commit_msg_labels", tuple())}
def _strip_ticket_directives(line):
return re.sub(r'( \([^)]*\)){1,}$', '', line)
errors = []
missed_lines = []
skipped = []
for (i, line) in enumerate(lines[1:]):
if line.startswith('*'):
dot_found = False
if len(missed_lines) > 0:
errors.append(("M130", i + 2))
if lines[i].strip() != '':
errors.append(("M120", i + 2))
if _strip_ticket_directives(line).endswith('.'):
dot_found = True
label = _re_bullet_label.search(line)
if label and label.group('label') not in labels:
errors.append(("M122", i + 2, label.group('label')))
for (j, indented) in enumerate(lines[i + 2:]):
if indented.strip() == '':
break
if not re.search(r"^ {2}\S", indented):
errors.append(("M121", i + j + 3))
else:
skipped.append(i + j + 1)
stripped_line = _strip_ticket_directives(indented)
if stripped_line.endswith('.'):
dot_found = True
elif stripped_line.strip():
dot_found = False
if not dot_found:
errors.append(("M123", i + 2))
elif i not in skipped and line.strip():
missed_lines.append((i + 2, line))
if len(line) > max_length:
errors.append(("M190", i + 2, max_length, len(line)))
return errors, missed_lines |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_signatures(lines, **kwargs):
"""Check that the signatures are valid. There should be at least three signatures; if not, one of them should come from a trusted developer/reviewer. The supported format is: [signature] full name <email@address> :param lines: lines (lineno, content) to verify. :type lines: list :param signatures: list of supported signatures :type signatures: list :param alt_signatures: list of alternative signatures, not counted :type alt_signatures: list :param trusted: list of trusted reviewers, the e-mail address. :type trusted: list :param min_reviewers: minimal number of reviewers needed. (Default 3) :type min_reviewers: int :return: errors as in (code, line number, *args) :rtype: list """ |
trusted = kwargs.get("trusted", ())
signatures = tuple(kwargs.get("signatures", ()))
alt_signatures = tuple(kwargs.get("alt_signatures", ()))
min_reviewers = kwargs.get("min_reviewers", 3)
matching = []
errors = []
signatures += alt_signatures
test_signatures = re.compile("^({0})".format("|".join(signatures)))
test_alt_signatures = re.compile("^({0})".format("|".join(alt_signatures)))
for i, line in lines:
if signatures and test_signatures.search(line):
if line.endswith("."):
errors.append(("M191", i))
if not alt_signatures or not test_alt_signatures.search(line):
matching.append(line)
else:
errors.append(("M102", i))
if not matching:
errors.append(("M101", 1))
errors.append(("M100", 1))
elif len(matching) < min_reviewers:
pattern = re.compile('|'.join(map(lambda x: '<' + re.escape(x) + '>',
trusted)))
trusted_matching = list(filter(None, map(pattern.search, matching)))
if len(trusted_matching) == 0:
errors.append(("M100", 1))
return errors |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_message(message, **kwargs):
"""Check the message format. Rules: - the first line must start with a component name - and a short description (50 chars by default), - then bullet points are expected - and finally signatures. :param components: components, e.g. ``('auth', 'utils', 'misc')`` :type components: `list` :param signatures: signatures, e.g. ``('Signed-off-by', 'Reviewed-by')`` :type signatures: `list` :param alt_signatures: alternative signatures, e.g. ``('Tested-by',)`` :type alt_signatures: `list` :param trusted: optional list of reviewers, e.g. ``('john.doe@foo.org',)`` :type trusted: `list` :param max_length: optional maximum line length (by default: 72) :type max_length: int :param max_first_line: optional maximum first line length (by default: 50) :type max_first_line: int :param allow_empty: optional way to allow empty message (by default: False) :type allow_empty: bool :return: errors sorted by line number :rtype: `list` """ |
if kwargs.pop("allow_empty", False):
if not message or message.isspace():
return []
lines = re.split(r"\r\n|\r|\n", message)
errors = _check_1st_line(lines[0], **kwargs)
err, signature_lines = _check_bullets(lines, **kwargs)
errors += err
errors += _check_signatures(signature_lines, **kwargs)
def _format(code, lineno, args):
return "{0}: {1} {2}".format(lineno,
code,
_messages_codes[code].format(*args))
return list(map(lambda x: _format(x[0], x[1], x[2:]),
sorted(errors, key=lambda x: x[0]))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _register_pyflakes_check():
"""Register the pyFlakes checker into PEP8 set of checks.""" |
from flake8_isort import Flake8Isort
from flake8_blind_except import check_blind_except
# Resolving conflicts between pep8 and pyflakes.
codes = {
"UnusedImport": "F401",
"ImportShadowedByLoopVar": "F402",
"ImportStarUsed": "F403",
"LateFutureImport": "F404",
"Redefined": "F801",
"RedefinedInListComp": "F812",
"UndefinedName": "F821",
"UndefinedExport": "F822",
"UndefinedLocal": "F823",
"DuplicateArgument": "F831",
"UnusedVariable": "F841",
}
for name, obj in vars(pyflakes.messages).items():
if name[0].isupper() and obj.message:
obj.tpl = "{0} {1}".format(codes.get(name, "F999"), obj.message)
pep8.register_check(_PyFlakesChecker, codes=['F'])
# FIXME parser hack
parser = pep8.get_parser('', '')
Flake8Isort.add_options(parser)
options, args = parser.parse_args([])
# end of hack
pep8.register_check(Flake8Isort, codes=['I'])
pep8.register_check(check_blind_except, codes=['B90']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_pydocstyle(filename, **kwargs):
"""Perform static analysis on the given file docstrings. :param filename: path of file to check. :type filename: str :param ignore: codes to ignore, e.g. ('D400',) :type ignore: `list` :param match: regex the filename has to match to be checked :type match: str :param match_dir: regex every dir in the path has to match to be checked :type match_dir: str :return: errors :rtype: `list` .. seealso:: `PyCQA/pydocstyle <https://github.com/GreenSteam/pydocstyle/>`_ """ |
ignore = kwargs.get("ignore")
match = kwargs.get("match", None)
match_dir = kwargs.get("match_dir", None)
errors = []
if match and not re.match(match, os.path.basename(filename)):
return errors
if match_dir:
# FIXME: the full path is checked here; if match_dir fails to match a
# directory (often a temporary one) above the actual application path,
# the checks may be skipped when they should run.
path = os.path.split(os.path.abspath(filename))[0]
while path != "/":
path, dirname = os.path.split(path)
if not re.match(match_dir, dirname):
return errors
checker = pydocstyle.PEP257Checker()
with open(filename) as fp:
try:
for error in checker.check_source(fp.read(), filename):
if ignore is None or error.code not in ignore:
# Removing the colon ':' after the error code
message = re.sub("(D[0-9]{3}): ?(.*)",
r"\1 \2",
error.message)
errors.append("{0}: {1}".format(error.line, message))
except tokenize.TokenError as e:
errors.append("{1}:{2} {0}".format(e.args[0], *e.args[1]))
except pydocstyle.AllError as e:
errors.append(str(e))
return errors |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_license(filename, **kwargs):
"""Perform a license check on the given file. The license format should be commented using # and live at the top of the file. Also, the year should be the current one. :param filename: path of file to check. :type filename: str :param year: default current year :type year: int :param ignore: codes to ignore, e.g. ``('L100', 'L101')`` :type ignore: `list` :param python_style: False for JavaScript or CSS files :type python_style: bool :return: errors :rtype: `list` """ |
year = kwargs.pop("year", datetime.now().year)
python_style = kwargs.pop("python_style", True)
ignores = kwargs.get("ignore")
template = "{0}: {1} {2}"
if python_style:
re_comment = re.compile(r"^#.*|\{#.*|[\r\n]+$")
starter = "# "
else:
re_comment = re.compile(r"^/\*.*| \*.*|[\r\n]+$")
starter = " *"
errors = []
lines = []
file_is_empty = False
license = ""
lineno = 0
try:
with codecs.open(filename, "r", "utf-8") as fp:
line = fp.readline()
blocks = []
while re_comment.match(line):
if line.startswith(starter):
line = line[len(starter):].lstrip()
blocks.append(line)
lines.append((lineno, line.strip()))
lineno, line = lineno + 1, fp.readline()
file_is_empty = line == ""
license = "".join(blocks)
except UnicodeDecodeError:
errors.append((lineno + 1, "L190", "utf-8"))
license = ""
if file_is_empty and not license.strip():
return errors
match_year = _re_copyright_year.search(license)
if match_year is None:
errors.append((lineno + 1, "L101"))
elif int(match_year.group("year")) != year:
theline = match_year.group(0)
lno = lineno
for no, l in lines:
if theline.strip() == l:
lno = no
break
errors.append((lno + 1, "L102", year, match_year.group("year")))
else:
program_match = _re_program.search(license)
program_2_match = _re_program_2.search(license)
program_3_match = _re_program_3.search(license)
if program_match is None:
errors.append((lineno, "L100"))
elif (program_2_match is None or
program_3_match is None or
(program_match.group("program").upper() !=
program_2_match.group("program").upper() !=
program_3_match.group("program").upper())):
errors.append((lineno, "L103"))
def _format_error(lineno, code, *args):
return template.format(lineno, code,
_licenses_codes[code].format(*args))
def _filter_codes(error):
if not ignores or error[1] not in ignores:
return error
return list(map(lambda x: _format_error(*x),
filter(_filter_codes, errors))) |
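The copyright-year comparison is the heart of `check_license`. A reduced sketch is below; the year regex is a hypothetical stand-in, since the source's `_re_copyright_year` pattern is defined elsewhere and not shown:

```python
import re

# hypothetical stand-in for the module-level _re_copyright_year pattern
_re_year = re.compile(r'Copyright \(C\) (?P<year>\d{4})')

def check_year(license_text, expected_year):
    """Return L101 if no year is found, L102 if it is stale, else no errors."""
    match = _re_year.search(license_text)
    if match is None:
        return ['L101']
    if int(match.group('year')) != expected_year:
        return ['L102']
    return []

current = check_year('# Copyright (C) 2023 Acme Corp.', 2023)
stale = check_year('# Copyright (C) 2020 Acme Corp.', 2023)
```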
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_options(config=None):
"""Build the options from the config object.""" |
if config is None:
from . import config
config.get = lambda key, default=None: getattr(config, key, default)
base = {
"components": config.get("COMPONENTS"),
"signatures": config.get("SIGNATURES"),
"commit_msg_template": config.get("COMMIT_MSG_TEMPLATE"),
"commit_msg_labels": config.get("COMMIT_MSG_LABELS"),
"alt_signatures": config.get("ALT_SIGNATURES"),
"trusted": config.get("TRUSTED_DEVELOPERS"),
"pep8": config.get("CHECK_PEP8", True),
"pydocstyle": config.get("CHECK_PYDOCSTYLE", True),
"license": config.get("CHECK_LICENSE", True),
"pyflakes": config.get("CHECK_PYFLAKES", True),
"ignore": config.get("IGNORE"),
"select": config.get("SELECT"),
"match": config.get("PYDOCSTYLE_MATCH"),
"match_dir": config.get("PYDOCSTYLE_MATCH_DIR"),
"min_reviewers": config.get("MIN_REVIEWERS"),
"colors": config.get("COLORS", True),
"excludes": config.get("EXCLUDES", []),
"authors": config.get("AUTHORS"),
"exclude_author_names": config.get("EXCLUDE_AUTHOR_NAMES"),
}
options = {}
for k, v in base.items():
if v is not None:
options[k] = v
return options |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self):
"""Yield the error messages.""" |
for msg in self.messages:
col = getattr(msg, 'col', 0)
yield msg.lineno, col, (msg.tpl % msg.message_args), msg.__class__ |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def error(self, line_number, offset, text, check):
"""Run the checks and collect the errors.""" |
code = super(_Report, self).error(line_number, offset, text, check)
if code:
self.errors.append((line_number, offset + 1, code, text, check)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prompt(prompt_string, default=None, secret=False, boolean=False, bool_type=None):
""" Prompt user for a string, with a default value * secret converts to password prompt * boolean converts return value to boolean, checking for starting with a Y """ |
if boolean or bool_type in BOOLEAN_DEFAULTS:
if bool_type is None:
bool_type = 'y_n'
default_msg = BOOLEAN_DEFAULTS[bool_type][is_affirmative(default)]
else:
default_msg = " (default {val}): "
prompt_string += (default_msg.format(val=default) if default else ": ")
if secret:
val = getpass(prompt_string)
else:
val = input(prompt_string)
val = (val if val else default)
if boolean:
val = val.lower().startswith('y')
return val |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jflatten(j):
"""
Flatten 3-D Jacobian into 2-D.
""" |
nobs, nf, nargs = j.shape
nrows, ncols = nf * nobs, nargs * nobs
jflat = np.zeros((nrows, ncols))
for n in xrange(nobs):
r, c = n * nf, n * nargs
jflat[r:(r + nf), c:(c + nargs)] = j[n]
return jflat |
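The block-diagonal layout built above is easy to check without NumPy: observation *n*'s `nf x nargs` block lands at row offset `n * nf` and column offset `n * nargs`, with zeros everywhere else. A pure-Python sketch using nested lists instead of arrays:

```python
def flatten_jacobian(j):
    """Flatten a 3-D Jacobian (nobs x nf x nargs) into a block-diagonal 2-D one."""
    nobs, nf, nargs = len(j), len(j[0]), len(j[0][0])
    flat = [[0.0] * (nargs * nobs) for _ in range(nf * nobs)]
    for n in range(nobs):
        for r in range(nf):
            for c in range(nargs):
                flat[n * nf + r][n * nargs + c] = j[n][r][c]
    return flat

j = [[[1.0, 2.0]], [[3.0, 4.0]]]  # nobs=2, nf=1, nargs=2
flat = flatten_jacobian(j)
```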
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jtosparse(j):
"""
Generate sparse matrix coordinates from 3-D Jacobian.
""" |
data = j.flatten().tolist()
nobs, nf, nargs = j.shape
indices = zip(*[(r, c) for n in xrange(nobs)
for r in xrange(n * nf, (n + 1) * nf)
for c in xrange(n * nargs, (n + 1) * nargs)])
return csr_matrix((data, indices), shape=(nobs * nf, nobs * nargs)) |
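The coordinate generation inside `jtosparse` is independent of SciPy and can be checked on its own; the helper name below is illustrative:

```python
def block_coords(nobs, nf, nargs):
    """List the (row, col) positions of the nonzero block-diagonal entries."""
    return [(r, c)
            for n in range(nobs)
            for r in range(n * nf, (n + 1) * nf)
            for c in range(n * nargs, (n + 1) * nargs)]

coords = block_coords(2, 1, 2)
```

Pairing these coordinates with the flattened Jacobian values yields exactly the CSR matrix the function builds.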
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_file(self, service_rec=None, host_service=None, filename=None, pw_data=None, f_type=None, add_to_evidence=True):
""" Upload a password file :param service_rec: db.t_services.id :param host_service: db.t_hosts.id :param filename: Filename :param pw_data: Content of file :param f_type: Type of file :param add_to_evidence: True/False to add to t_evidence :return: (True/False, Response Message) """ |
return self.send.accounts_upload_file(service_rec, host_service, filename, pw_data, f_type, add_to_evidence) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_datetime(time_str):
""" Wraps dateutil's parser function to set an explicit UTC timezone, and to make sure microseconds are 0. Neither the Unified Uploader format nor the EMK format uses microseconds at all. :param str time_str: The date/time str to parse. :rtype: datetime.datetime :returns: A parsed, UTC datetime. """ |
try:
return dateutil.parser.parse(
time_str
).replace(microsecond=0).astimezone(UTC_TZINFO)
except ValueError:
# This was some kind of unrecognizable time string.
raise ParseError("Invalid time string: %s" % time_str) |
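`parse_datetime` depends on dateutil; the same normalization (coerce to UTC, zero out microseconds) can be sketched with only the stdlib for ISO-8601 inputs. This is an assumption for illustration, as the original accepts a much broader range of time strings:

```python
from datetime import datetime, timezone

def parse_utc(time_str):
    """Parse an ISO-8601 string, coerce it to UTC, and drop microseconds."""
    dt = datetime.fromisoformat(time_str)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # assume naive times are UTC
    return dt.astimezone(timezone.utc).replace(microsecond=0)

dt = parse_utc('2023-05-01T12:30:45.123456+00:00')
```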
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append(self, content, encoding='utf8'):
""" add a line to file """ |
if not self.parent.exists:
self.parent.create()
with open(self._filename, "ab") as output_file:
if not is_text(content):
Log.error(u"expecting to write unicode only")
output_file.write(content.encode(encoding))
output_file.write(b"\n") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def url_param2value(param):
""" CONVERT URL QUERY PARAMETERS INTO DICT """ |
if param == None:
return Null
def _decode(v):
output = []
i = 0
while i < len(v):
c = v[i]
if c == "%":
d = hex2chr(v[i + 1:i + 3])
output.append(d)
i += 3
else:
output.append(c)
i += 1
output = text_type("".join(output))
try:
return json2value(output)
except Exception:
pass
return output
query = Data()
for p in param.split('&'):
if not p:
continue
if p.find("=") == -1:
k = p
v = True
else:
k, v = p.split("=")
v = _decode(v)
u = query.get(k)
if u is None:
query[k] = v
elif is_list(u):
u += [v]
else:
query[k] = [u, v]
return query |
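The hand-rolled percent-decoding and key grouping above closely mirrors what the standard library already provides; the difference is that `urllib.parse.parse_qs` always wraps values in lists, whereas `url_param2value` only builds a list for repeated keys:

```python
from urllib.parse import parse_qs

# repeated key 'a' is grouped; every value comes back as a list
params = parse_qs("a=1&b=2&a=3")
# {'a': ['1', '3'], 'b': ['2']}
```

The custom decoder also attempts a JSON parse of each value (`json2value`), which `parse_qs` does not do.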
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def configfile_from_path(path, strict=True):
"""Get a ConfigFile object based on a file path. This method will inspect the file extension and return the appropriate ConfigFile subclass initialized with the given path. Args: path (str):
The file path which represents the configuration file. strict (bool):
Whether or not to parse the file in strict mode. Returns: confpy.loaders.base.ConfigurationFile: The subclass which is specialized for the given file path. Raises: UnrecognizedFileExtension: If there is no loader for the path. """ |
extension = path.split('.')[-1]
conf_type = FILE_TYPES.get(extension)
if not conf_type:
raise exc.UnrecognizedFileExtension(
"Cannot parse file of type {0}. Choices are {1}.".format(
extension,
FILE_TYPES.keys(),
)
)
return conf_type(path=path, strict=strict) |
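The extension-to-class dispatch used here is a common pattern; a minimal self-contained sketch, with hypothetical loader classes standing in for confpy's real ones:

```python
class IniFile:
    """Hypothetical loader; confpy's real loaders live in confpy.loaders."""
    def __init__(self, path, strict=True):
        self.path, self.strict = path, strict

class JsonFile(IniFile):
    pass

FILE_TYPES = {'ini': IniFile, 'cfg': IniFile, 'json': JsonFile}

def loader_for(path, strict=True):
    # dispatch on the final extension segment, as configfile_from_path does
    ext = path.split('.')[-1]
    try:
        return FILE_TYPES[ext](path=path, strict=strict)
    except KeyError:
        raise ValueError('Cannot parse file of type {0}.'.format(ext))
```

Mapping extensions to classes keeps the dispatch table data-driven, so new formats only need a new `FILE_TYPES` entry.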
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def configuration_from_paths(paths, strict=True):
"""Get a Configuration object based on multiple file paths. Args: paths (iter of str):
An iterable of file paths which identify config files on the system. strict (bool):
Whether or not to parse the files in strict mode. Returns: confpy.core.config.Configuration: The loaded configuration object. Raises: NamespaceNotRegistered: If a file contains a namespace which is not defined. OptionNotRegistered: If a file contains an option which is not defined but resides under a valid namespace. UnrecognizedFileExtension: If there is no loader for a path. """ |
for path in paths:
cfg = configfile_from_path(path, strict=strict).config
return cfg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_environment_var_options(config, env=None, prefix='CONFPY'):
"""Set any configuration options which have an environment var set. Args: config (confpy.core.config.Configuration):
A configuration object which has been initialized with options. env (dict):
Optional dictionary which contains environment variables. The default is os.environ if no value is given. prefix (str):
The string prefix prepended to all environment variables. This value will be set to upper case. The default is CONFPY. Returns: confpy.core.config.Configuration: A configuration object with environment variables set. The pattern to follow when setting environment variables is: <PREFIX>_<SECTION>_<OPTION> Each value should be upper case and separated by underscores. """ |
env = env or os.environ
for section_name, section in config:
for option_name, _ in section:
var_name = '{0}_{1}_{2}'.format(
prefix.upper(),
section_name.upper(),
option_name.upper(),
)
env_var = env.get(var_name)
if env_var:
setattr(section, option_name, env_var)
return config |
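The `<PREFIX>_<SECTION>_<OPTION>` naming scheme can be illustrated in isolation; `env_var_name` is a hypothetical helper introduced here, and the lookup uses an explicit dict to mirror the optional `env` argument above:

```python
def env_var_name(prefix, section, option):
    """Build the <PREFIX>_<SECTION>_<OPTION> name used when scanning
    the environment, upper-cased and underscore-separated."""
    return '{0}_{1}_{2}'.format(prefix.upper(), section.upper(), option.upper())

# look the variable up in an explicit dict rather than os.environ
env = {'CONFPY_DATABASE_HOST': 'db.example.com'}
value = env.get(env_var_name('confpy', 'database', 'host'))
```

Accepting `env` as a parameter (defaulting to `os.environ`) also makes the scanning function easy to unit-test, as shown here.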
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_cli_options(config, arguments=None):
"""Set any configuration options which have a CLI value set. Args: config (confpy.core.config.Configuration):
A configuration object which has been initialized with options. arguments (iter of str):
An iterable of strings which contains the CLI arguments passed. If nothing is give then sys.argv is used. Returns: confpy.core.config.Configuration: A configuration object with CLI values set. The pattern to follow when setting CLI values is: <section>_<option> Each value should be lower case and separated by underscores. """ |
arguments = arguments or sys.argv[1:]
parser = argparse.ArgumentParser()
for section_name, section in config:
for option_name, _ in section:
var_name = '{0}_{1}'.format(
section_name.lower(),
option_name.lower(),
)
parser.add_argument('--{0}'.format(var_name))
args, _ = parser.parse_known_args(arguments)
args = vars(args)
for section_name, section in config:
for option_name, _ in section:
var_name = '{0}_{1}'.format(
section_name.lower(),
option_name.lower(),
)
value = args.get(var_name)
if value:
setattr(section, option_name, value)
return config |
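The key detail above is `parse_known_args`, which leaves unrecognized flags untouched instead of erroring out, so the generated `--<section>_<option>` flags can coexist with an application's own CLI parser. A small standalone demonstration (flag names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--database_host')

# '--verbose' is not registered on this parser; parse_known_args
# returns it in the leftover list rather than raising an error
args, leftover = parser.parse_known_args(
    ['--database_host', 'db.example.com', '--verbose'])
```

`vars(args)` then yields a plain dict, which is how the function above looks values up by the `<section>_<option>` key.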
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_for_missing_options(config):
"""Iter over a config and raise if a required option is still not set. Args: config (confpy.core.config.Configuration):
The configuration object to validate. Raises: MissingRequiredOption: If any required options are not set in the configuration object. Required options with default values are considered set and will not cause this function to raise. """ |
for section_name, section in config:
for option_name, option in section:
if option.required and option.value is None:
raise exc.MissingRequiredOption(
"Option {0} in namespace {1} is required.".format(
option_name,
section_name,
)
)
return config |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_options(files, env_prefix='CONFPY', strict=True):
"""Parse configuration options and return a configuration object. Args: files (iter of str):
File paths which identify configuration files. These files are processed in order with values in later files overwriting values in earlier files. env_prefix (str):
The static prefix prepended to all options when set as environment variables. The default is CONFPY. strict (bool):
Whether or not to parse the files in strict mode. Returns: confpy.core.config.Configuration: The loaded configuration object. Raises: MissingRequiredOption: If a required option is not defined in any file. NamespaceNotRegistered: If a file contains a namespace which is not defined. OptionNotRegistered: If a file contains an option which is not defined but resides under a valid namespace. UnrecognizedFileExtension: If there is no loader for a path. """ |
return check_for_missing_options(
config=set_cli_options(
config=set_environment_var_options(
config=configuration_from_paths(
paths=files,
strict=strict,
),
prefix=env_prefix,
),
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render(self, sphinx_app: Sphinx, context):
""" Given a Sphinx builder and context with sphinx_app in it, generate HTML """ |
# Called from kaybee.plugins.widgets.handlers.render_widgets
builder: StandaloneHTMLBuilder = sphinx_app.builder
resource = sphinx_app.env.resources[self.docname]
context['sphinx_app'] = sphinx_app
context['widget'] = self
context['resource'] = resource
# make_context is optionally implemented on the concrete class
# for each widget
self.make_context(context, sphinx_app)
# NOTE: Can use builder.templates.render_string
template = self.template + '.html'
html = builder.templates.render(template, context)
return html |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def desc(t=None, reg=True):
""" Describe Class Dependency :param reg: should we register this class as well :param t: custom type as well :return: """ |
def decorated_fn(cls):
if not inspect.isclass(cls):
raise NotImplementedError('For now we can only describe classes')
name = t or camel_case_to_underscore(cls.__name__)[0]
if reg:
di.injector.register(name, cls)
else:
di.injector.describe(name, cls)
return cls
return decorated_fn |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def label(self, value):
""" Returns a pretty text version of the key for the inputted value. :param value | <variant> :return <str> """ |
return self._labels.get(value) or text.pretty(self(value)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setLabel(self, value, label):
""" Sets the label text for the inputted value. This will override the default pretty text label that is used for the key. :param value | <variant> label | <str> """ |
if label:
self._labels[value] = label
else:
self._labels.pop(value, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def valueByLabel(self, label):
""" Determine a given value based on the inputted label. :param label <str> :return <int> """ |
keys = self.keys()
labels = [text.pretty(key) for key in keys]
if label in labels:
return self[keys[labels.index(label)]]
return 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_config_file(self):
"""Parse configuration file and get config values.""" |
config_parser = SafeConfigParser()
config_parser.read(self.CONFIG_FILE)
if config_parser.has_section('handlers'):
self._config['handlers_package'] = config_parser.get('handlers', 'package')
if config_parser.has_section('auth'):
self._config['consumer_key'] = config_parser.get('auth', 'consumer_key')
self._config['consumer_secret'] = config_parser.get('auth', 'consumer_secret')
self._config['token_key'] = config_parser.get('auth', 'token_key')
self._config['token_secret'] = config_parser.get('auth', 'token_secret')
if config_parser.has_section('stream'):
self._config['user_stream'] = config_parser.get('stream', 'user_stream').lower() == 'true'
else:
self._config['user_stream'] = False
if config_parser.has_option('general', 'min_seconds_between_errors'):
self._config['min_seconds_between_errors'] = config_parser.get('general', 'min_seconds_between_errors')
if config_parser.has_option('general', 'sleep_seconds_on_consecutive_errors'):
self._config['sleep_seconds_on_consecutive_errors'] = config_parser.get(
'general', 'sleep_seconds_on_consecutive_errors') |
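The sectioned lookup pattern above (`has_section` guard, then `get`) can be sketched against an in-memory config with the standard library; `read_string` stands in for reading `self.CONFIG_FILE`:

```python
from configparser import ConfigParser

config_parser = ConfigParser()
config_parser.read_string("""
[stream]
user_stream = True
""")

# guard with has_section, then compare the lower-cased string,
# exactly as load_config_file does for the 'stream' section
user_stream = (config_parser.has_section('stream')
               and config_parser.get('stream', 'user_stream').lower() == 'true')
```

`ConfigParser.getboolean('stream', 'user_stream')` would perform the same string-to-bool conversion without the manual `.lower() == 'true'` comparison.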
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_config_from_cli_arguments(self, *args, **kwargs):
""" Get config values of passed in CLI options. :param dict kwargs: CLI options """ |
self._load_config_from_cli_argument(key='handlers_package', **kwargs)
self._load_config_from_cli_argument(key='auth', **kwargs)
self._load_config_from_cli_argument(key='user_stream', **kwargs)
self._load_config_from_cli_argument(key='min_seconds_between_errors', **kwargs)
self._load_config_from_cli_argument(key='sleep_seconds_on_consecutive_errors', **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, id):
""" Gets the dict data and builds the item object. """ |
data = self.db.get_data(self.get_path, id=id)
return self._build_item(**data['Data'][self.name]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, entity):
"""Maps entity to dict and returns future""" |
assert isinstance(entity, Entity), " entity must have an instance of Entity"
return self.__collection.save(entity.as_dict()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_one(self, **kwargs):
"""Returns future. Executes collection's find_one method based on keyword args maps result ( dict to instance ) and return future Example:: manager = EntityManager(Product) product_saved = yield manager.find_one(_id=object_id) """ |
future = TracebackFuture()
def handle_response(result, error):
if error:
future.set_exception(error)
else:
instance = self.__entity()
instance.map_dict(result)
future.set_result(instance)
self.__collection.find_one(kwargs, callback=handle_response)
return future |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, entity):
""" Executes collection's update method based on keyword args. Example:: manager = EntityManager(Product) p = Product() p.name = 'new name' p.description = 'new description' p.price = 300.0 yield manager.update(p) """ |
assert isinstance(entity, Entity), "Error: entity must have an instance of Entity"
return self.__collection.update({'_id': entity._id}, {'$set': entity.as_dict()}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(self, results=False):
""" Open the strawpoll in a browser. Can specify to open the main or results page. :param results: True/False """ |
webbrowser.open(self.results_url if results else self.url) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
""" Testing function for DFA brzozowski algebraic method Operation """ |
argv = sys.argv
if len(argv) < 2:
targetfile = 'target.y'
else:
targetfile = argv[1]
print('Parsing ruleset: ' + targetfile, end=' ')
flex_a = Flexparser()
mma = flex_a.yyparse(targetfile)
print('OK')
print('Perform minimization on initial automaton:', end=' ')
mma.minimize()
print('OK')
print('Perform Brzozowski on minimal automaton:', end=' ')
brzozowski_a = Brzozowski(mma)
mma_regex = brzozowski_a.get_regex()
print(mma_regex) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_mmd():
"""Loads libMultiMarkdown for usage""" |
global _MMD_LIB
global _LIB_LOCATION
try:
lib_file = 'libMultiMarkdown' + SHLIB_EXT[platform.system()]
_LIB_LOCATION = os.path.abspath(os.path.join(DEFAULT_LIBRARY_DIR, lib_file))
if not os.path.isfile(_LIB_LOCATION):
_LIB_LOCATION = ctypes.util.find_library('MultiMarkdown')
_MMD_LIB = ctypes.cdll.LoadLibrary(_LIB_LOCATION)
except Exception:  # library missing, unsupported platform, or load failure
_MMD_LIB = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _expand_source(source, dname, fmt):
"""Expands source text to include headers, footers, and expands Multimarkdown transclusion directives. Keyword arguments: source -- string containing the Multimarkdown text to expand dname -- directory name to use as the base directory for transclusion references fmt -- format flag indicating which format to use to convert transclusion statements """ |
_MMD_LIB.g_string_new.restype = ctypes.POINTER(GString)
_MMD_LIB.g_string_new.argtypes = [ctypes.c_char_p]
src = source.encode('utf-8')
gstr = _MMD_LIB.g_string_new(src)
_MMD_LIB.prepend_mmd_header(gstr)
_MMD_LIB.append_mmd_footer(gstr)
manif = _MMD_LIB.g_string_new(b"")
_MMD_LIB.transclude_source.argtypes = [ctypes.POINTER(GString), ctypes.c_char_p,
ctypes.c_char_p, ctypes.c_int, ctypes.POINTER(GString)]
_MMD_LIB.transclude_source(gstr, dname.encode('utf-8'), None, fmt, manif)
manifest_txt = manif.contents.str
full_txt = gstr.contents.str
_MMD_LIB.g_string_free(manif, True)
_MMD_LIB.g_string_free(gstr, True)
manifest_txt = [ii for ii in manifest_txt.decode('utf-8').split('\n') if ii]
return full_txt.decode('utf-8'), manifest_txt |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_metadata(source, ext):
"""Returns a flag indicating if a given block of MultiMarkdown text contains metadata.""" |
_MMD_LIB.has_metadata.argtypes = [ctypes.c_char_p, ctypes.c_int]
_MMD_LIB.has_metadata.restype = ctypes.c_bool
return _MMD_LIB.has_metadata(source.encode('utf-8'), ext) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert(source, ext=COMPLETE, fmt=HTML, dname=None):
"""Converts a string of MultiMarkdown text to the requested format. Transclusion is performed if the COMPATIBILITY extension is not set, and dname is set to a valid directory Keyword arguments: source -- string containing MultiMarkdown text ext -- extension bitfield to pass to conversion process fmt -- flag indicating output format to use dname -- Path to use for transclusion - if None, transclusion functionality is bypassed """ |
if dname and not ext & COMPATIBILITY:
if os.path.isfile(dname):
dname = os.path.abspath(os.path.dirname(dname))
source, _ = _expand_source(source, dname, fmt)
_MMD_LIB.markdown_to_string.argtypes = [ctypes.c_char_p, ctypes.c_ulong, ctypes.c_int]
_MMD_LIB.markdown_to_string.restype = ctypes.c_char_p
src = source.encode('utf-8')
return _MMD_LIB.markdown_to_string(src, ext, fmt).decode('utf-8') |
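Declaring `argtypes` and `restype` before each foreign call, as done throughout these wrappers, is the standard ctypes safety pattern: without the declarations ctypes guesses argument conversions and assumes an `int` return. The same idea with the C library's `strlen` (a POSIX system with a locatable libc is assumed):

```python
import ctypes
import ctypes.util

# locate and load the C runtime, analogous to loading libMultiMarkdown
libc = ctypes.CDLL(ctypes.util.find_library('c'))

# declare the C signature: size_t strlen(const char *s)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b'MultiMarkdown')  # 13
```

With `restype` left at its default, a pointer-sized return like `g_string_new`'s would be truncated on 64-bit platforms, which is why the wrappers above set it explicitly.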