| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_sint32(self, value):
"""Appends a 32-bit integer to our buffer, zigzag-encoded and then varint-encoded. """ |
zigzag_value = wire_format.zig_zag_encode(value)
self._stream.append_var_uint32(zigzag_value) |
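The `wire_format.zig_zag_encode` helper used above is not shown; under the standard protobuf zigzag scheme (an assumption here), values of small magnitude, positive or negative, map to small unsigned values so the subsequent varint stays short. A minimal sketch with hypothetical `zig_zag_encode32`/`zig_zag_decode32` helpers:

```python
def zig_zag_encode32(value):
    """Map a signed 32-bit int to unsigned: 0->0, -1->1, 1->2, -2->3, ..."""
    # Arithmetic right shift replicates the sign bit; XOR folds
    # negative values onto the odd unsigned integers.
    return (value << 1) ^ (value >> 31)

def zig_zag_decode32(value):
    """Inverse mapping back to the signed value."""
    return (value >> 1) ^ -(value & 1)
```

The 64-bit variant in the next record is identical apart from shifting by 63 instead of 31.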
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_sint64(self, value):
"""Appends a 64-bit integer to our buffer, zigzag-encoded and then varint-encoded. """ |
zigzag_value = wire_format.zig_zag_encode(value)
self._stream.append_var_uint64(zigzag_value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_sfixed32(self, value):
"""Appends a signed 32-bit integer to our buffer, in little-endian byte-order. """ |
sign = (value & 0x80000000) and -1 or 0
if value >> 32 != sign:
raise errors.EncodeError('SFixed32 out of range: %d' % value)
self._stream.append_little_endian32(value & 0xffffffff) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_sfixed64(self, value):
"""Appends a signed 64-bit integer to our buffer, in little-endian byte-order. """ |
sign = (value & 0x8000000000000000) and -1 or 0
if value >> 64 != sign:
raise errors.EncodeError('SFixed64 out of range: %d' % value)
self._stream.append_little_endian64(value & 0xffffffffffffffff) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_string(self, value):
"""Appends a length-prefixed string to our buffer, with the length varint-encoded. """ |
self._stream.append_var_uint32(len(value))
self._stream.append_raw_bytes(value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_SHA1_bin(word):
""" Return SHA1 hash of any string :param word: :return: """ |
from hashlib import sha1
if PY3 and isinstance(word, str):
word = word.encode('utf-8')
hash_s = sha1()
hash_s.update(word)
return bin(int(hash_s.hexdigest(), 16))[2:].zfill(160) |
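The helper above can be exercised standalone; this sketch assumes Python 3 and always encodes `str` input, so the `PY3` guard is dropped:

```python
from hashlib import sha1

def get_sha1_bin(word):
    """Return the SHA1 digest of a string as a 160-character binary string."""
    if isinstance(word, str):
        word = word.encode('utf-8')
    h = sha1()
    h.update(word)
    # zfill(160) restores leading zero bits that int() would otherwise drop.
    return bin(int(h.hexdigest(), 16))[2:].zfill(160)
```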
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_index(binstr, end_index=160):
""" Return the position of the first 1 bit from the left in the word until end_index :param binstr: :param end_index: :return: """ |
res = -1
try:
res = binstr.index('1') + 1
except ValueError:
res = end_index
return res |
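A condensed sketch of the rank helper above (the `res = -1` initializer in the original is always overwritten, so it can be dropped):

```python
def get_index(binstr, end_index=160):
    """1-based position of the first '1' from the left; end_index if all zeros."""
    try:
        return binstr.index('1') + 1
    except ValueError:
        # No set bit at all: treat as the maximum possible rank.
        return end_index
```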
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _estimate(self, buffer):
""" Return the estimate of the cardinality :return: estimate of the cardinality """ |
m = self._bucket_number
raw_e = self._alpha * pow(m, 2) / sum([pow(2, -x) for x in buffer])
if raw_e <= 5 / 2.0 * m:
v = buffer.count(0)
if v != 0:
return m * log(m / float(v), 2)
else:
return raw_e
elif raw_e <= 1 / 30.0 * 2 ** 160:
return raw_e
else:
return -2 ** 160 * log(1 - raw_e / 2.0 ** 160, 2) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def merge(self, buffer, other_hyper_log_log):
""" Merge another HyperLogLog into this buffer by element-wise maximum :param buffer: local register buffer :param other_hyper_log_log: registers of the other sketch :return: """ |
for i in range(len(buffer)):
buffer[i] = max(buffer[i], other_hyper_log_log[i]) |
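The union of two HyperLogLog sketches is just the element-wise maximum of their registers, which is why the `merge` above is lossless; a standalone sketch of the same operation:

```python
def merge_registers(buf, other):
    """Element-wise max of two equal-length register lists, in place on buf."""
    for i in range(len(buf)):
        buf[i] = max(buf[i], other[i])
    return buf
```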
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_string(self):
"""Reads and returns a length-delimited string.""" |
length = self._stream.read_var_uint32()
return self._stream.read_string(length) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _chk_truncate(self):
'''
Checks whether the frame should be truncated. If so, slices
the frame up.
'''
# Column of which first element is used to determine width of a dot col
self.tr_size_col = -1
# Cut the data to the information actually printed
max_cols = self.max_cols
max_rows = self.max_rows
if max_cols == 0 or max_rows == 0: # assume we are in the terminal (why else = 0)
(w, h) = get_terminal_size()
self.w = w
self.h = h
if self.max_rows == 0:
dot_row = 1
prompt_row = 1
if self.show_dimensions:
show_dimension_rows = 3
n_add_rows = self.header + dot_row + show_dimension_rows + prompt_row
max_rows_adj = self.h - n_add_rows # rows available to fill with actual data
self.max_rows_adj = max_rows_adj
# Format only rows and columns that could potentially fit the screen
if max_cols == 0 and len(self.frame.columns) > w:
max_cols = w
if max_rows == 0 and len(self.frame) > h:
max_rows = h
if not hasattr(self, 'max_rows_adj'):
self.max_rows_adj = max_rows
if not hasattr(self, 'max_cols_adj'):
self.max_cols_adj = max_cols
max_cols_adj = self.max_cols_adj
max_rows_adj = self.max_rows_adj
truncate_h = max_cols_adj and (len(self.columns) > max_cols_adj)
truncate_v = max_rows_adj and (len(self.frame) > max_rows_adj)
frame = self.frame
if truncate_h:
if max_cols_adj == 0:
col_num = len(frame.columns)
elif max_cols_adj == 1:
frame = frame[:, :max_cols]
col_num = max_cols
else:
col_num = (max_cols_adj // 2)
frame = frame[:, :col_num].concat(frame[:, -col_num:], axis=1)
self.tr_col_num = col_num
if truncate_v:
if max_rows_adj == 0:
row_num = len(frame)
elif max_rows_adj == 1:
row_num = max_rows
frame = frame[:max_rows, :]
else:
row_num = max_rows_adj // 2
frame = frame[:row_num, :].concat(frame[-row_num:, :])
self.tr_row_num = row_num
self.tr_frame = frame
self.truncate_h = truncate_h
self.truncate_v = truncate_v
self.is_truncated = self.truncate_h or self.truncate_v |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_input_table(cls, name='inputTableName', input_name='input'):
""" Build an input table parameter :param name: parameter name :type name: str :param input_name: bind input port name :type input_name: str :return: input description :rtype: ParamDef """ |
obj = cls(name)
obj.exporter = 'get_input_table_name'
obj.input_name = input_name
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_input_partitions(cls, name='inputTablePartitions', input_name='input'):
""" Build an input table partition parameter :param name: parameter name :type name: str :param input_name: bind input port name :type input_name: str :return: input description :rtype: ParamDef """ |
obj = cls(name)
obj.exporter = 'get_input_partitions'
obj.input_name = input_name
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_output_table(cls, name='inputTableName', output_name='output'):
""" Build an output table parameter :param name: parameter name :type name: str :param output_name: bind output port name :type output_name: str :return: output description :rtype: ParamDef """ |
obj = cls(name)
obj.exporter = 'get_output_table_name'
obj.output_name = output_name
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_output_partitions(cls, name='inputTablePartitions', output_name='output'):
""" Build an output table partition parameter :param name: parameter name :type name: str :param output_name: bind output port name :type output_name: str :return: output description :rtype: ParamDef """ |
obj = cls(name)
obj.exporter = 'get_output_table_partition'
obj.output_name = output_name
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_model_name(cls, name='modelName', output_name='output'):
""" Build an output model name parameter. :param name: model name :type name: str :param output_name: bind output port name :type output_name: str :return: output description :rtype: ParamDef """ |
obj = cls(name)
obj.exporter = 'generate_model_name'
obj.output_name = output_name
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_data_input(cls, name='input'):
""" Build a data input port. :param name: port name :type name: str :return: port object :rtype: PortDef """ |
return cls(name, PortDirection.INPUT, type=PortType.DATA) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_data_output(cls, name='output', copy_input=None, schema=None):
""" Build a data output port. :param name: port name :type name: str :param copy_input: input name where the schema is copied from. :type copy_input: str :param schema: k1:v1,k2:v2 string describing the schema to be appended :type schema: str :return: port object :rtype: PortDef """ |
return cls(name, PortDirection.OUTPUT, type=PortType.DATA, copy_input=copy_input, schema=schema) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_model_input(cls, name='input'):
""" Build a model input port. :param name: port name :type name: str :return: port object :rtype: PortDef """ |
return cls(name, PortDirection.INPUT, type=PortType.MODEL) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_model_output(cls, name='output'):
""" Build a model output port. :param name: port name :type name: str :return: port object :rtype: PortDef """ |
return cls(name, PortDirection.OUTPUT, type=PortType.MODEL) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_port(self, port):
""" Add a port object to the definition :param port: port definition :type port: PortDef """ |
self.ports.append(port)
if port.io_type not in self.port_seqs:
self.port_seqs[port.io_type] = 0
self.port_seqs[port.io_type] += 1
port.sequence = self.port_seqs[port.io_type]
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_meta(self, name, value):
""" Add a pair of meta data to the definition :param name: name of the meta :type name: str :param value: value of the meta :type value: str """ |
for mt in self.metas:
if mt.name == name:
mt.value = value
return self
self.metas.append(MetaDef(name, value))
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def serialize(self):
""" Serialize the algorithm definition """ |
# fill sequences
for keys, groups in groupby(self.ports, lambda x: x.io_type):
for seq, port in enumerate(groups):
port.sequence = seq
return super(AlgorithmDef, self).serialize() |
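One subtlety in the `serialize` above: `itertools.groupby` only groups *adjacent* items, so the per-type numbering is correct only because `add_port` appends ports already ordered by `io_type`. A standalone sketch of the renumbering (the `assign_sequences` name is hypothetical):

```python
from itertools import groupby

def assign_sequences(ports, key):
    """Number items 0..n-1 within each consecutive group.

    Caveat: itertools.groupby merges only *adjacent* equal keys, so the
    input must already be ordered by `key` for per-group numbering.
    """
    numbered = []
    for _, group in groupby(ports, key):
        for seq, port in enumerate(group):
            numbered.append((port, seq))
    return numbered
```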
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _count(expr, pat, flags=0):
""" Count occurrences of pattern in each string of the sequence or scalar :param expr: sequence or scalar :param pat: valid regular expression :param flags: re module flags, e.g. re.IGNORECASE :return: """ |
return _string_op(expr, Count, output_type=types.int64,
_pat=pat, _flags=flags) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _extract(expr, pat, flags=0, group=0):
""" Find group in each string in the Series using passed regular expression. :param expr: :param pat: Pattern or regular expression :param flags: re module, e.g. re.IGNORECASE :param group: if None as group 0 :return: sequence or scalar """ |
return _string_op(expr, Extract, _pat=pat, _flags=flags, _group=group) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _pad(expr, width, side='left', fillchar=' '):
""" Pad strings in the sequence or scalar with an additional character to specified side. :param expr: :param width: Minimum width of resulting string; additional characters will be filled with spaces :param side: {‘left’, ‘right’, ‘both’}, default ‘left’ :param fillchar: Additional character for filling, default is whitespace :return: sequence or scalar """ |
if not isinstance(fillchar, six.string_types):
msg = 'fillchar must be a character, not {0}'
raise TypeError(msg.format(type(fillchar).__name__))
if len(fillchar) != 1:
raise TypeError('fillchar must be a character, not str')
if side not in ('left', 'right', 'both'):
raise ValueError('Invalid side')
return _string_op(expr, Pad, _width=width, _side=side, _fillchar=fillchar) |
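Assuming the `Pad` node follows pandas-style semantics ('left' pads on the left, i.e. right-justifies), the per-string operation reduces to the built-in justify methods; a sketch with a hypothetical `pad` helper:

```python
def pad(value, width, side='left', fillchar=' '):
    """Pad one string to `width` the way the _pad wrapper above describes."""
    if len(fillchar) != 1:
        raise TypeError('fillchar must be a character')
    if side == 'left':
        return value.rjust(width, fillchar)   # fill on the left
    if side == 'right':
        return value.ljust(width, fillchar)   # fill on the right
    if side == 'both':
        return value.center(width, fillchar)
    raise ValueError('Invalid side')
```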
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _slice(expr, start=None, stop=None, step=None):
""" Slice substrings from each element in the sequence or scalar :param expr: :param start: int or None :param stop: int or None :param step: int or None :return: sliced """ |
return _string_op(expr, Slice, _start=start, _end=stop, _step=step) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _strptime(expr, date_format):
""" Return datetimes specified by date_format, which supports the same string format as the python standard library. Details of the string format can be found in python string format doc :param expr: :param date_format: date format string (e.g. “%Y-%m-%d”) :type date_format: str :return: """ |
return _string_op(expr, Strptime, _date_format=date_format,
output_type=types.datetime) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _list_tables_model(self, prefix='', project=None):
""" List all TablesModel in the given project. :param prefix: model prefix :param str project: project name, if you want to look up in another project :rtype: list[str] """ |
tset = set()
if prefix.startswith(TEMP_TABLE_PREFIX):
prefix = TEMP_TABLE_MODEL_PREFIX + prefix[len(TEMP_TABLE_PREFIX):]
it = self.list_tables(project=project, prefix=prefix)
else:
it = self.list_tables(project=project, prefix=TABLE_MODEL_PREFIX + prefix)
if TEMP_TABLE_PREFIX.startswith(prefix):
new_iter = self.list_tables(project=project, prefix=TEMP_TABLE_MODEL_PREFIX)
it = itertools.chain(it, new_iter)
for table in it:
if TABLE_MODEL_SEPARATOR not in table.name:
continue
if not table.name.startswith(TEMP_TABLE_MODEL_PREFIX) and not table.name.startswith(TABLE_MODEL_PREFIX):
continue
model_name, _ = table.name.rsplit(TABLE_MODEL_SEPARATOR, 1)
if model_name.startswith(TEMP_TABLE_MODEL_PREFIX):
model_name = TEMP_TABLE_PREFIX + model_name[len(TEMP_TABLE_MODEL_PREFIX):]
else:
model_name = model_name[len(TABLE_MODEL_PREFIX):]
if model_name not in tset:
tset.add(model_name)
yield TablesModelObject(_odps=self, name=model_name, project=project) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open_reader(self, file_name, reopen=False, endpoint=None, start=None, length=None, **kwargs):
""" Open a volume file for read. A file-like object will be returned which can be used to read contents from volume files. :param str file_name: name of the file :param bool reopen: whether we need to open an existing read session :param str endpoint: tunnel service URL :param start: start position :param length: length limit :param compress_option: the compression algorithm, level and strategy :type compress_option: :class:`odps.tunnel.CompressOption` :Example: """ |
tunnel = self._create_volume_tunnel(endpoint=endpoint)
download_id = self._download_id if not reopen else None
download_session = tunnel.create_download_session(volume=self.volume.name, partition_spec=self.name,
file_name=file_name, download_id=download_id, **kwargs)
self._download_id = download_session.id
open_args = {}
if start is not None:
open_args['start'] = start
if length is not None:
open_args['length'] = length
return download_session.open(**open_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open_writer(self, reopen=False, endpoint=None, **kwargs):
""" Open a volume partition to write to. You can use `open` method to open a file inside the volume and write to it, or use `write` method to write to specific files. :param bool reopen: whether we need to open an existing write session :param str endpoint: tunnel service URL :param compress_option: the compression algorithm, level and strategy :type compress_option: :class:`odps.tunnel.CompressOption` :Example: """ |
tunnel = self._create_volume_tunnel(endpoint=endpoint)
upload_id = self._upload_id if not reopen else None
upload_session = tunnel.create_upload_session(volume=self.volume.name, partition_spec=self.name,
upload_id=upload_id, **kwargs)
file_dict = dict()
class FilesWriter(object):
@property
def status(self):
return upload_session.status
@staticmethod
def open(file_name, **kwargs):
if file_name in file_dict:
return file_dict[file_name]
writer = upload_session.open(file_name, **kwargs)
file_dict[file_name] = writer
return writer
@staticmethod
def write(file_name, buf, **kwargs):
writer = FilesWriter.open(file_name, **kwargs)
writer.write(buf)
@staticmethod
def close():
for w in six.itervalues(file_dict):
w.close()
upload_session.commit(list(six.iterkeys(file_dict)))
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
return FilesWriter() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def batch_persist(dfs, tables, *args, **kwargs):
""" Persist multiple DataFrames into ODPS. :param dfs: DataFrames to persist. :param tables: Table names to persist to. Use (table, partition) tuple to store to a table partition. :param args: args for Expr.persist :param kwargs: kwargs for Expr.persist :Examples: """ |
from .delay import Delay
if 'async' in kwargs:
kwargs['async_'] = kwargs.pop('async')
execute_keys = ('ui', 'async_', 'n_parallel', 'timeout', 'close_and_notify')
execute_kw = dict((k, v) for k, v in six.iteritems(kwargs) if k in execute_keys)
persist_kw = dict((k, v) for k, v in six.iteritems(kwargs) if k not in execute_keys)
delay = Delay()
persist_kw['delay'] = delay
for df, table in izip(dfs, tables):
if isinstance(table, tuple):
table, partition = table
else:
partition = None
df.persist(table, partition=partition, *args, **persist_kw)
return delay.execute(**execute_kw) |
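The execute/persist keyword split above is a small reusable pattern; a sketch of partitioning a kwargs dict against a whitelist (the `split_kwargs` name is hypothetical):

```python
def split_kwargs(kwargs, execute_keys):
    """Partition kwargs into (matching, remaining) by membership in execute_keys."""
    execute_kw = {k: v for k, v in kwargs.items() if k in execute_keys}
    other_kw = {k: v for k, v in kwargs.items() if k not in execute_keys}
    return execute_kw, other_kw
```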
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _repr_fits_horizontal_(self, ignore_width=False):
""" Check if full repr fits in horizontal boundaries imposed by the display options width and max_columns. In case of a non-interactive session, no boundaries apply. ignore_width is here so ipynb+HTML output can behave the way users expect. display.max_columns remains in effect. GH3541, GH3573 """ |
width, height = get_console_size()
max_columns = options.display.max_columns
nb_columns = len(self.columns)
# exceed max columns
if ((max_columns and nb_columns > max_columns) or
((not ignore_width) and width and nb_columns > (width // 2))):
return False
if (ignore_width # used by repr_html under IPython notebook
# scripts ignore terminal dims
or not in_interactive_session()):
return True
if (options.display.width is not None or
in_ipython_frontend()):
# check at least the column row for excessive width
max_rows = 1
else:
max_rows = options.display.max_rows
# when auto-detecting, so width=None and not in ipython front end
# check whether repr fits horizontal by actually checking
# the width of the rendered repr
buf = six.StringIO()
# only care about the stuff we'll actually print out
# and to_string on entire frame may be expensive
d = self
if not (max_rows is None): # unlimited rows
# min of two, where one may be None
d = d[:min(max_rows, len(d))]
else:
return True
d.to_string(buf=buf)
value = buf.getvalue()
repr_width = max([len(l) for l in value.split('\n')])
return repr_width < width |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _repr_html_(self):
""" Return a html representation for a particular DataFrame. Mainly for IPython notebook. """ |
# qtconsole doesn't report its line width, and also
# behaves badly when outputting an HTML table
# that doesn't fit the window, so disable it.
# XXX: In IPython 3.x and above, the Qt console will not attempt to
# display HTML, so this check can be removed when support for IPython 2.x
# is no longer needed.
if self._pandas and options.display.notebook_repr_widget:
from .. import DataFrame
from ..ui import show_df_widget
show_df_widget(DataFrame(self._values, schema=self.schema))
if self._pandas:
return self._values._repr_html_()
if in_qtconsole():
# 'HTML output is disabled in QtConsole'
return None
if options.display.notebook_repr_html:
max_rows = options.display.max_rows
max_cols = options.display.max_columns
show_dimensions = options.display.show_dimensions
return self.to_html(max_rows=max_rows, max_cols=max_cols,
show_dimensions=show_dimensions,
notebook=True)
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_html(self, buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, bold_rows=True, classes=None, escape=True, max_rows=None, max_cols=None, show_dimensions=False, notebook=False):
""" Render a DataFrame as an HTML table. `to_html`-specific options: bold_rows : boolean, default True Make the row labels bold in the output classes : str or list or tuple, default None CSS class(es) to apply to the resulting html table escape : boolean, default True Convert the characters <, >, and & to HTML-safe sequences. max_rows : int, optional Maximum number of rows to show before truncating. If None, show all. max_cols : int, optional Maximum number of columns to show before truncating. If None, show all. """ |
formatter = fmt.ResultFrameFormatter(self, buf=buf, columns=columns,
col_space=col_space, na_rep=na_rep,
formatters=formatters,
float_format=float_format,
sparsify=sparsify,
justify=justify,
index_names=index_names,
header=header, index=index,
bold_rows=bold_rows,
escape=escape,
max_rows=max_rows,
max_cols=max_cols,
show_dimensions=show_dimensions)
formatter.to_html(classes=classes, notebook=notebook)
if buf is None:
return formatter.buf.getvalue() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_localzone():
"""Returns the zoneinfo-based tzinfo object that matches the Windows-configured timezone.""" |
global _cache_tz
if _cache_tz is None:
_cache_tz = pytz.timezone(get_localzone_name())
utils.assert_tz_offset(_cache_tz)
return _cache_tz |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reload_localzone():
"""Reload the cached localzone. You need to call this if the timezone has changed.""" |
global _cache_tz
_cache_tz = pytz.timezone(get_localzone_name())
utils.assert_tz_offset(_cache_tz)
return _cache_tz |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, async_=False, **kw):
""" Update online model parameters to server. """ |
async_ = kw.pop('async', async_)
headers = {'Content-Type': 'application/xml'}
new_kw = dict()
if self.offline_model_name:
upload_keys = ('_parent', 'name', 'offline_model_name', 'offline_model_project', 'qos', 'instance_num')
else:
upload_keys = ('_parent', 'name', 'qos', '_model_resource', 'instance_num', 'predictor', 'runtime')
for k in upload_keys:
new_kw[k] = getattr(self, k)
new_kw.update(kw)
obj = type(self)(version='0', **new_kw)
data = obj.serialize()
self._client.put(self.resource(), data, headers=headers)
self.reload()
if not async_:
self.wait_for_service() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_service(self, interval=1):
""" Wait for the online model to be ready for service. :param interval: check interval """ |
while self.status in (OnlineModel.Status.DEPLOYING, OnlineModel.Status.UPDATING):
time.sleep(interval)
if self.status == OnlineModel.Status.DEPLOY_FAILED:
raise OnlineModelError(self.last_fail_msg, self)
elif self.status != OnlineModel.Status.SERVING:
raise OnlineModelError('Unexpected status occurs: %s' % self.status.value, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_deletion(self, interval=1):
""" Wait for the online model to be deleted. :param interval: check interval """ |
deleted = False
while True:
try:
if self.status != OnlineModel.Status.DELETING:
break
except errors.NoSuchObject:
deleted = True
break
time.sleep(interval)
if not deleted:
if self.status == OnlineModel.Status.DELETE_FAILED:
raise OnlineModelError(self.last_fail_msg, self)
else:
raise OnlineModelError('Unexpected status occurs: %s' % self.status.value, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def predict(self, data, schema=None, endpoint=None):
""" Predict data labels with current online model. :param data: data to be predicted :param schema: schema of input data :param endpoint: endpoint of predict service :return: prediction result """ |
from .. import Projects
if endpoint is not None:
self._endpoint = endpoint
if self._predict_rest is None:
# do not add project option
self._predict_rest = RestClient(self._client.account, self._endpoint, proxy=options.data_proxy)
json_data = json.dumps(self._build_predict_request(data, schema))
headers = {'Content-Type': 'application/json'}
predict_model = Projects(client=self._predict_rest)[self.project.name].online_models[self.name]
resp = self._predict_rest.post(predict_model.resource(), json_data, headers=headers)
if not self._client.is_ok(resp):
e = errors.ODPSError.parse(resp)
raise e
return ModelPredictResults.parse(resp).outputs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move(self, new_path, replication=None):
""" Move current path to a new location. :param new_path: target location of current file / directory :param replication: number of replication """ |
if not new_path.startswith('/'):
new_path = self._normpath(self.dirname + '/' + new_path)
else:
new_path = self._normpath(new_path)
if new_path == self.path:
raise ValueError('New path should be different from the original one.')
update_def = self.UpdateRequestXML(path=new_path)
if replication:
update_def.replication = replication
headers = {
'Content-Type': 'application/xml',
'x-odps-volume-fs-path': self.path,
}
self._client.put(self.parent.resource(), params={'meta': ''}, headers=headers, data=update_def.serialize())
self._del_cache(self.path)
self.path = new_path
self.reload() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_default_runner(udf_class, input_col_delim=',', null_indicator='NULL', stdin=None):
"""Create a default runner with specified udf class. """ |
proto = udf.get_annotation(udf_class)
in_types, out_types = parse_proto(proto)
stdin = stdin or sys.stdin
arg_parser = ArgParser(in_types, stdin, input_col_delim, null_indicator)
stdin_feed = make_feed(arg_parser)
collector = StdoutCollector(out_types)
ctor = _get_runner_class(udf_class)
return ctor(udf_class, stdin_feed, collector) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_system_offset():
"""Get system's timezone offset using built-in library time. For the Timezone constants (altzone, daylight, timezone, and tzname), the value is determined by the timezone rules in effect at module load time or the last time tzset() is called and may be incorrect for times in the past. To keep compatibility with Windows, we're always importing time module here. """ |
import time
if time.daylight and time.localtime().tm_isdst > 0:
return -time.altzone
else:
return -time.timezone |
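A self-contained version of the helper above; note that `time.timezone` and `time.altzone` are expressed in seconds *west* of UTC, hence the negation:

```python
import time

def get_system_offset():
    """Current UTC offset of the system timezone in seconds (east positive)."""
    # altzone applies when DST is in effect, timezone otherwise.
    if time.daylight and time.localtime().tm_isdst > 0:
        return -time.altzone
    return -time.timezone
```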
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assert_tz_offset(tz):
"""Assert that the system's timezone offset equals the timezone offset found. If they don't match, we probably have a misconfiguration, for example, an incorrect timezone set in the /etc/timezone file on systemd distributions.""" |
tz_offset = get_tz_offset(tz)
system_offset = get_system_offset()
if tz_offset != system_offset:
msg = ('Timezone offset does not match system offset: {0} != {1}. '
'Please, check your config files.').format(
tz_offset, system_offset
)
raise ValueError(msg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resources(self):
""" Return all the resources which this function refers to. :return: resources :rtype: list .. seealso:: :class:`odps.models.Resource` """ |
if self._resources_objects is not None:
return self._resources_objects
resources = self.parent.parent.resources
resources = [resources[name] for name in self._resources]
self._resources_objects = resources
return resources |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self):
""" Update this function. :return: None """ |
if self._owner_changed:
self.update_owner(self.owner)
self._resources = [res.name for res in self.resources]
return self.parent.update(self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def table_creator(func):
""" Decorator for table creating method """ |
def method(self, table_name, **kwargs):
if self.odps.exist_table(table_name):
return
if kwargs.get('project', self.odps.project) != self.odps.project:
tunnel = TableTunnel(self.odps, project=kwargs['project'])
else:
tunnel = self.tunnel
func(self.odps, table_name, tunnel=tunnel, **kwargs)
self.after_create_test_data(table_name)
method.__name__ = func.__name__
setattr(TestDataMixIn, func.__name__, method)
return func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def confusion_matrix(df, col_true=None, col_pred=None):
""" Compute confusion matrix of a predicted DataFrame. Note that this method will trigger the defined flow to execute. :param df: predicted data frame :type df: DataFrame :param col_true: column name of true label :type col_true: str :param col_true: column name of predicted label, 'prediction_result' by default. :type col_pred: str :return: Confusion matrix and mapping list for classes :Example: """ |
if not col_pred:
col_pred = get_field_name_by_role(df, FieldRole.PREDICTED_CLASS)
return _run_cm_node(df, col_true, col_pred)[0] |
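`_run_cm_node` executes on the ODPS service, so it cannot be run locally. As a rough illustration of the matrix layout it returns (rows indexed by true class, columns by predicted class), here is a hypothetical local helper, not part of the PyODPS API, applied to the toy table used in the `fbeta_score` docstring below:

```python
import numpy as np

def small_confusion_matrix(y_true, y_pred):
    # Rows = true classes, columns = predicted classes,
    # with classes sorted to form the mapping list.
    classes = sorted(set(y_true) | set(y_pred))
    index = {label: i for i, label in enumerate(classes)}
    mat = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(y_true, y_pred):
        mat[index[t], index[p]] += 1
    return mat, classes

# label / prediction_result pairs from the docstring example table
mat, classes = small_confusion_matrix([0, 1, 2, 1, 1, 2], [1, 2, 1, 1, 0, 2])
```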
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def accuracy_score(df, col_true=None, col_pred=None, normalize=True):
""" Compute accuracy of a predicted DataFrame. Note that this method will trigger the defined flow to execute. :param df: predicted data frame :type df: DataFrame :param col_true: column name of true label :type col_true: str :param col_true: column name of predicted label, 'prediction_result' by default. :type col_pred: str :param normalize: denoting if the output is normalized between [0, 1] :type normalize: bool :return: Accuracy value :rtype: float """ |
if not col_pred:
col_pred = get_field_name_by_role(df, FieldRole.PREDICTED_CLASS)
mat, _ = _run_cm_node(df, col_true, col_pred)
if np is not None:
acc_count = np.sum(np.diag(mat))
if not normalize:
return acc_count
else:
return acc_count * 1.0 / np.sum(mat)
else:
diag_sum = mat_sum = 0
mat_size = len(mat)
for i in compat.irange(mat_size):
for j in compat.irange(mat_size):
if i == j:
diag_sum += mat[i][j]
mat_sum += mat[i][j]
if not normalize:
return diag_sum
else:
return diag_sum * 1.0 / mat_sum |
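The numpy-free branch above can be exercised standalone. A minimal sketch (hypothetical helper name) using the confusion matrix of the toy table from the `fbeta_score` docstring:

```python
def accuracy_from_matrix(mat, normalize=True):
    # Pure-Python equivalent of the fallback branch above:
    # correct predictions sit on the diagonal.
    diag_sum = sum(mat[i][i] for i in range(len(mat)))
    mat_sum = sum(sum(row) for row in mat)
    return diag_sum if not normalize else diag_sum * 1.0 / mat_sum

mat = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 1]]  # 6 samples, 2 correct
assert accuracy_from_matrix(mat, normalize=False) == 2
```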
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fbeta_score(df, col_true=None, col_pred='prediction_result', beta=1.0, pos_label=1, average=None):
r""" Compute f-beta score of a predicted DataFrame. f-beta is defined as .. math:: \frac{1 + \beta^2 \cdot precision \cdot recall}{\beta^2 \cdot precision + recall} F-beta score is a generalization of f-1 score. :Parameters: - **df** - predicted data frame - **col_true** - column name of true label - **col_pred** - column name of predicted label, 'prediction_result' by default. - **pos_label** - denote the desired class label when ``average`` == `binary` - **average** - denote the method to compute average. :Returns: Recall score :Return type: float | numpy.array[float] The parameter ``average`` controls the behavior of the function. * When ``average`` == None (by default), f-beta of every class is given as a list. * When ``average`` == 'binary', f-beta of class specified in ``pos_label`` is given. * When ``average`` == 'micro', f-beta of overall precision and recall is given, where overall precision and recall are computed in micro-average mode. * When ``average`` == 'macro', average f-beta of all the class is given. * When ``average`` == `weighted`, average f-beta of all the class weighted by support of every true classes is given. :Example: Assume we have a table named 'predicted' as follows: ======== =================== label prediction_result ======== =================== 0 1 1 2 2 1 1 1 1 0 2 2 ======== =================== Different options of ``average`` parameter outputs different values: .. code-block:: python array([ 0. , 0.33333333, 0.5 ]) 0.27 0.33 0.33 """ |
if not col_pred:
col_pred = get_field_name_by_role(df, FieldRole.PREDICTED_CLASS)
mat, label_list = _run_cm_node(df, col_true, col_pred)
class_dict = dict((label, idx) for idx, label in enumerate(label_list))
tps = np.diag(mat)
pred_count = np.sum(mat, axis=0)
supp_count = np.sum(mat, axis=1)
beta2 = beta ** 2
precision = tps * 1.0 / pred_count
recall = tps * 1.0 / supp_count
ppr = precision * beta2 + recall
ppr[ppr == 0] = 1e-6
fbeta = (1 + beta2) * precision * recall / ppr
if average is None:
return fbeta
elif average == 'binary':
class_idx = class_dict[pos_label]
return fbeta[class_idx]
elif average == 'micro':
g_precision = np.sum(tps) * 1.0 / np.sum(supp_count)
g_recall = np.sum(tps) * 1.0 / np.sum(pred_count)
return (1 + beta2) * g_precision * g_recall / (beta2 * g_precision + g_recall)
elif average == 'macro':
return np.mean(fbeta)
elif average == 'weighted':
return sum(fbeta * supp_count) / sum(supp_count) |
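The arithmetic after `_run_cm_node` can be replayed locally. This sketch reproduces the per-class f-1 values from the docstring example (beta = 1), using the confusion matrix of that toy table:

```python
import numpy as np

# Confusion matrix for the docstring's 'predicted' table
# (rows = true label 0/1/2, columns = predicted label 0/1/2).
mat = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
tps = np.diag(mat)                 # true positives per class
pred_count = mat.sum(axis=0)       # predicted count per class
supp_count = mat.sum(axis=1)       # support (true count) per class
precision = tps / pred_count
recall = tps / supp_count
ppr = precision + recall           # beta2 * precision + recall with beta = 1
ppr[ppr == 0] = 1e-6               # avoid division by zero, as above
f1 = 2 * precision * recall / ppr  # (1 + beta2) * p * r / ppr
```

`f1` matches the docstring's `array([ 0. , 0.33333333, 0.5 ])`, and `np.mean(f1)` gives the macro average (~0.28, printed as 0.27 in the rounded docstring output).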
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def f1_score(df, col_true=None, col_pred='prediction_result', pos_label=1, average=None):
r""" Compute f-1 score of a predicted DataFrame. f-1 is defined as .. math:: \frac{2 \cdot precision \cdot recall}{precision + recall} :Parameters: - **df** - predicted data frame - **col_true** - column name of true label - **col_pred** - column name of predicted label, 'prediction_result' by default. - **pos_label** - denote the desired class label when ``average`` == `binary` - **average** - denote the method to compute average. :Returns: Recall score :Return type: float | numpy.array[float] The parameter ``average`` controls the behavior of the function. * When ``average`` == None (by default), f-1 of every class is given as a list. * When ``average`` == 'binary', f-1 of class specified in ``pos_label`` is given. * When ``average`` == 'micro', f-1 of overall precision and recall is given, where overall precision and recall are computed in micro-average mode. * When ``average`` == 'macro', average f-1 of all the class is given. * When ``average`` == `weighted`, average f-1 of all the class weighted by support of every true classes is given. :Example: Assume we have a table named 'predicted' as follows: ======== =================== label prediction_result ======== =================== 0 1 1 2 2 1 1 1 1 0 2 2 ======== =================== Different options of ``average`` parameter outputs different values: .. code-block:: python array([ 0. , 0.33333333, 0.5 ]) 0.27 0.33 0.33 """ |
if not col_pred:
col_pred = get_field_name_by_role(df, FieldRole.PREDICTED_CLASS)
return fbeta_score(df, col_true, col_pred, pos_label=pos_label, average=average) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def average_precision_score(df, col_true=None, col_pred=None, col_scores=None, pos_label=1):
""" Compute average precision score, i.e., the area under precision-recall curve. Note that this method will trigger the defined flow to execute. :param df: predicted data frame :type df: DataFrame :param pos_label: positive label :type pos_label: str :param col_true: true column :type col_true: str :param col_pred: predicted column, 'prediction_result' if absent. :type col_pred: str :param col_scores: score column, 'prediction_score' if absent. :type col_scores: str :return: Average precision score :rtype: float """ |
if not col_pred:
col_pred = get_field_name_by_role(df, FieldRole.PREDICTED_CLASS)
if not col_scores:
col_scores = get_field_name_by_role(df, FieldRole.PREDICTED_SCORE)
thresh, tp, fn, tn, fp = _run_roc_node(df, pos_label, col_true, col_pred, col_scores)
precisions = np.squeeze(np.asarray(tp * 1.0 / (tp + fp)))
recalls = np.squeeze(np.asarray(tp * 1.0 / (tp + fn)))
return np.trapz(precisions, recalls) |
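The final trapezoid integration can be illustrated with made-up threshold counts (the `tp`/`fp`/`fn` arrays below are hypothetical, not real ROC output). The manual sum is equivalent to the `np.trapz` call above:

```python
import numpy as np

# Hypothetical per-threshold counts from a ROC-style sweep.
tp = np.array([2.0, 4.0, 5.0])
fp = np.array([0.0, 1.0, 3.0])
fn = np.array([3.0, 1.0, 0.0])
precisions = tp / (tp + fp)   # [1.0, 0.8, 0.625]
recalls = tp / (tp + fn)      # [0.4, 0.8, 1.0]
# Trapezoid rule over the precision-recall curve.
ap = float(np.sum((recalls[1:] - recalls[:-1]) *
                  (precisions[1:] + precisions[:-1]) / 2.0))
```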
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _var_uint64_byte_size_no_tag(uint64):
"""Returns the bytes required to serialize a single varint. uint64 must be unsigned. """ |
if uint64 > UINT64_MAX:
raise errors.EncodeError('Value out of range: %d' % uint64)
bytes = 1
while uint64 > 0x7f:
bytes += 1
uint64 >>= 7
return bytes |
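The size formula is one byte per 7 payload bits, so a full 64-bit value needs 10 bytes. A self-contained restatement (with a local `UINT64_MAX`, since `errors` is module-internal):

```python
UINT64_MAX = (1 << 64) - 1

def var_uint64_byte_size(uint64):
    # Same loop as above: each varint byte carries 7 bits of payload.
    if uint64 > UINT64_MAX:
        raise ValueError('Value out of range: %d' % uint64)
    size = 1
    while uint64 > 0x7f:
        size += 1
        uint64 >>= 7
    return size

assert var_uint64_byte_size(0) == 1
assert var_uint64_byte_size(127) == 1      # largest 1-byte varint
assert var_uint64_byte_size(128) == 2      # needs a continuation byte
assert var_uint64_byte_size(UINT64_MAX) == 10
```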
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def train(self, *args, **kwargs):
""" Perform training on a DataFrame. The label field is specified by the ``label_field`` method. :param train_data: DataFrame to be trained. Label field must be specified. :type train_data: DataFrame :return: Trained model :rtype: MLModel """ |
objs = self._do_transform(*args, **kwargs)
obj_list = [objs, ] if not isinstance(objs, Iterable) else objs
for obj in obj_list:
if not isinstance(obj, ODPSModelExpr):
continue
for meta in ['predictor', 'recommender']:
if meta not in self._metas:
continue
mod = __import__(self.__class__.__module__, fromlist=[''])\
if not hasattr(self, '_env') else self._env
action_cls_name = underline_to_capitalized(self._metas[meta])
if not hasattr(mod, action_cls_name):
action_cls_name = '_' + action_cls_name
setattr(obj, '_' + meta, getattr(mod, action_cls_name))
return objs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait(fs, timeout=None, return_when=ALL_COMPLETED):
"""Wait for the futures in the given sequence to complete. Args: fs: The sequence of Futures (possibly created by different Executors) to wait upon. timeout: The maximum number of seconds to wait. If None, then there is no limit on the wait time. return_when: Indicates when this function should return. The options are: FIRST_COMPLETED - Return when any future finishes or is cancelled. FIRST_EXCEPTION - Return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED. ALL_COMPLETED - Return when all futures finish or are cancelled. Returns: A named 2-tuple of sets. The first set, named 'done', contains the futures that completed (is finished or cancelled) before the wait completed. The second set, named 'not_done', contains uncompleted futures. """ |
with _AcquireFutures(fs):
done = set(f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
not_done = set(fs) - done
if (return_when == FIRST_COMPLETED) and done:
return DoneAndNotDoneFutures(done, not_done)
elif (return_when == FIRST_EXCEPTION) and done:
if any(f for f in done
if not f.cancelled() and f.exception() is not None):
return DoneAndNotDoneFutures(done, not_done)
if len(done) == len(fs):
return DoneAndNotDoneFutures(done, not_done)
waiter = _create_and_install_waiters(fs, return_when)
waiter.event.wait(timeout)
for f in fs:
with f._condition:
f._waiters.remove(waiter)
done.update(waiter.finished_futures)
return DoneAndNotDoneFutures(done, set(fs) - done) |
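This mirrors the `wait` function shipped in the standard library's `concurrent.futures`; a small usage sketch with the stdlib module (the default `return_when=ALL_COMPLETED` blocks until every future finishes):

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

with ThreadPoolExecutor(max_workers=2) as pool:
    fast = pool.submit(lambda: 'fast')
    slow = pool.submit(lambda: time.sleep(0.1) or 'slow')
    # Blocks until both futures finish, then returns the (done, not_done) pair.
    done, not_done = wait([fast, slow])
```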
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cancel(self):
"""Cancel the future if possible. Returns True if the future was cancelled, False otherwise. A future cannot be cancelled if it is running or has already completed. """ |
with self._condition:
if self._state in [RUNNING, FINISHED]:
return False
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
return True
self._state = CANCELLED
self._condition.notify_all()
self._invoke_callbacks()
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_done_callback(self, fn):
"""Attaches a callable that will be called when the future finishes. Args: fn: A callable that will be called with this future as its only argument when the future completes or is cancelled. The callable will always be called by a thread in the same process in which it was added. If the future has already completed or been cancelled then the callable will be called immediately. These callables are called in the order that they were added. """ |
with self._condition:
if self._state not in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]:
self._done_callbacks.append(fn)
return
fn(self) |
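Both paths, queueing the callback and the immediate call for an already-finished future, can be seen with the stdlib `Future`:

```python
from concurrent.futures import Future

calls = []
f = Future()
f.add_done_callback(lambda fut: calls.append(fut.result()))
f.set_result(42)  # completing the future fires the queued callback
f.add_done_callback(lambda fut: calls.append('late'))  # already done: runs immediately
```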
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_exception_info(self, exception, traceback):
"""Sets the result of the future as being the given exception and traceback. Should only be used by Executor implementations and unit tests. """ |
with self._condition:
self._exception = exception
self._traceback = traceback
self._state = FINISHED
for waiter in self._waiters:
waiter.add_exception(self)
self._condition.notify_all()
self._invoke_callbacks() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def persist(self, name, project=None, drop_model=False, **kwargs):
""" Persist the execution into a new model. :param name: model name :param project: name of the project :param drop_model: drop model before creation """ |
return super(ODPSModelExpr, self).persist(name, project=project, drop_model=drop_model, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quantile(expr, prob=None, **kw):
""" Percentile value. :param expr: :param prob: probability or list of probabilities, in [0, 1] :return: """ |
prob = kw.get('_prob', prob)
output_type = _stats_type(expr)
if isinstance(prob, (list, set)) and not isinstance(expr, GroupBy):
output_type = types.List(output_type)
return _reduction(expr, Quantile, output_type, _prob=prob) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nunique(expr):
""" The distinct count. :param expr: :return: """ |
output_type = types.int64
if isinstance(expr, SequenceExpr):
return NUnique(_value_type=output_type, _inputs=[expr])
elif isinstance(expr, SequenceGroupBy):
return GroupedNUnique(_data_type=output_type, _inputs=[expr.to_column()], _grouped=expr.input)
elif isinstance(expr, CollectionExpr):
unique_input = _extract_unique_input(expr)
if unique_input:
return nunique(unique_input)
else:
return NUnique(_value_type=types.int64, _inputs=expr._project_fields)
elif isinstance(expr, GroupBy):
if expr._to_agg:
inputs = expr.input[expr._to_agg.names]._project_fields
else:
inputs = expr.input._project_fields
return GroupedNUnique(_data_type=types.int64, _inputs=inputs,
_grouped=expr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cat(expr, others=None, sep=None, na_rep=None):
""" Concatenate strings in sequence with given separator :param expr: :param others: other sequences :param sep: string or None, default None :param na_rep: string or None default None, if None, NA in the sequence are ignored :return: """ |
if others is not None:
from .strings import _cat as cat_str
return cat_str(expr, others, sep=sep, na_rep=na_rep)
return _cat(expr, sep=sep, na_rep=na_rep) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moment(expr, order, central=False):
""" Calculate the n-th order moment of the sequence :param expr: :param order: moment order, must be an integer :param central: if central moments are to be computed. :return: """ |
if not isinstance(order, six.integer_types):
raise ValueError('Only integer-ordered moments are supported.')
if order < 0:
raise ValueError('Only non-negative orders are supported.')
output_type = _stats_type(expr)
return _reduction(expr, Moment, output_type, _order=order, _center=central) |
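`_reduction` builds a deferred DataFrame node, so nothing runs locally here. As a plain-Python sketch of what the n-th (optionally central) moment computes, assuming population (not sample) normalization:

```python
def list_moment(seq, order, central=False):
    # n-th order moment of a plain sequence; central moments subtract the mean first.
    n = float(len(seq))
    mean = sum(seq) / n if central else 0.0
    return sum((x - mean) ** order for x in seq) / n

data = [1, 2, 3, 4]
assert list_moment(data, 1) == 2.5                 # first raw moment = mean
assert list_moment(data, 2, central=True) == 1.25  # population variance
```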
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
""" Stop this instance. :return: None """ |
instance_status = Instance.InstanceStatus(status='Terminated')
xml_content = instance_status.serialize()
headers = {'Content-Type': 'application/xml'}
self._client.put(self.resource(), xml_content, headers=headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_results(self):
""" Get all the task results. :return: a dict which key is task name, and value is the task result as string :rtype: dict """ |
results = self.get_task_results_without_format()
if options.tunnel.string_as_binary:
return compat.OrderedDict([(k, bytes(result)) for k, result in six.iteritems(results)])
else:
return compat.OrderedDict([(k, str(result)) for k, result in six.iteritems(results)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_summary(self, task_name):
""" Get a task's summary, mostly used for MapReduce. :param task_name: task name :return: summary as a dict parsed from JSON :rtype: dict """ |
params = {'instancesummary': '', 'taskname': task_name}
resp = self._client.get(self.resource(), params=params)
map_reduce = resp.json().get('Instance')
if map_reduce:
json_summary = map_reduce.get('JsonSummary')
if json_summary:
summary = Instance.TaskSummary(json.loads(json_summary))
summary.summary_text = map_reduce.get('Summary')
summary.json_summary = json_summary
return summary |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_statuses(self):
""" Get all tasks' statuses :return: a dict which key is the task name and value is the :class:`odps.models.Instance.Task` object :rtype: dict """ |
params = {'taskstatus': ''}
resp = self._client.get(self.resource(), params=params)
self.parse(self._client, resp, obj=self)
return dict([(task.name, task) for task in self._tasks]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_cost(self, task_name):
""" Get task cost :param task_name: name of the task :return: task cost :rtype: Instance.TaskCost :Example: 200 4096 0 """ |
summary = self.get_task_summary(task_name)
if summary is None:
return None
if 'Cost' in summary:
task_cost = summary['Cost']
cpu_cost = task_cost.get('CPU')
memory = task_cost.get('Memory')
input_size = task_cost.get('Input')
return Instance.TaskCost(cpu_cost, memory, input_size) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_info(self, task_name, key):
""" Get task related information. :param task_name: name of the task :param key: key of the information item :return: a string of the task information """ |
params = OrderedDict([('info', ''), ('taskname', task_name), ('key', key)])
resp = self._client.get(self.resource(), params=params)
return resp.text |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def put_task_info(self, task_name, key, value):
""" Put information into a task. :param task_name: name of the task :param key: key of the information item :param value: value of the information item """ |
params = OrderedDict([('info', ''), ('taskname', task_name)])
headers = {'Content-Type': 'application/xml'}
body = self.TaskInfo(key=key, value=value).serialize()
self._client.put(self.resource(), params=params, headers=headers, data=body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_quota(self, task_name):
""" Get queueing info of the task. Note that time between two calls should larger than 30 seconds, otherwise empty dict is returned. :param task_name: name of the task :return: quota info in dict format """ |
params = OrderedDict([('instancequota', ''), ('taskname', task_name)])
resp = self._client.get(self.resource(), params=params)
return json.loads(resp.text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sql_task_cost(self):
""" Get cost information of the sql task. Including input data size, number of UDF, Complexity of the sql task :return: cost info in dict format """ |
resp = self.get_task_result(self.get_task_names()[0])
cost = json.loads(resp)
sql_cost = cost['Cost']['SQL']
udf_num = sql_cost.get('UDF')
complexity = sql_cost.get('Complexity')
input_size = sql_cost.get('Input')
return Instance.SQLCost(udf_num, complexity, input_size) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_terminated(self, retry=False):
""" If this instance has finished or not. :return: True if finished else False :rtype: bool """ |
retry_num = options.retry_times
while retry_num > 0:
try:
return self.status == Instance.Status.TERMINATED
except (errors.InternalServerError, errors.RequestTimeTooSkewed):
retry_num -= 1
if not retry or retry_num <= 0:
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_successful(self, retry=False):
""" If the instance runs successfully. :return: True if successful else False :rtype: bool """ |
if not self.is_terminated(retry=retry):
return False
retry_num = options.retry_times
while retry_num > 0:
try:
statuses = self.get_task_statuses()
return all(task.status == Instance.Task.TaskStatus.SUCCESS
for task in statuses.values())
except (errors.InternalServerError, errors.RequestTimeTooSkewed):
retry_num -= 1
if not retry or retry_num <= 0:
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_completion(self, interval=1):
""" Wait for the instance to complete, and neglect the consequence. :param interval: time interval to check :return: None """ |
while not self.is_terminated(retry=True):
try:
time.sleep(interval)
except KeyboardInterrupt:
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_success(self, interval=1):
""" Wait for instance to complete, and check if the instance is successful. :param interval: time interval to check :return: None :raise: :class:`odps.errors.ODPSError` if the instance failed """ |
self.wait_for_completion(interval=interval)
if not self.is_successful(retry=True):
for task_name, task in six.iteritems(self.get_task_statuses()):
exc = None
if task.status == Instance.Task.TaskStatus.FAILED:
exc = errors.parse_instance_error(self.get_task_result(task_name))
elif task.status != Instance.Task.TaskStatus.SUCCESS:
exc = errors.ODPSError('%s, status=%s' % (task_name, task.status.value))
if exc:
exc.instance_id = self.id
raise exc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_progress(self, task_name):
""" Get task's current progress :param task_name: task_name :return: the task's progress :rtype: :class:`odps.models.Instance.Task.TaskProgress` """ |
params = {'instanceprogress': task_name, 'taskname': task_name}
resp = self._client.get(self.resource(), params=params)
return Instance.Task.TaskProgress.parse(self._client, resp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_detail(self, task_name):
""" Get task's detail :param task_name: task name :return: the task's detail :rtype: list or dict according to the JSON """ |
def _get_detail():
from ..compat import json # fix object_pairs_hook parameter for Py2.6
params = {'instancedetail': '',
'taskname': task_name}
resp = self._client.get(self.resource(), params=params)
return json.loads(resp.text if six.PY3 else resp.content,
object_pairs_hook=OrderedDict)
result = _get_detail()
if not result:
# todo: this is a workaround for the bug that get_task_detail returns nothing.
self.get_task_detail2(task_name)
return _get_detail()
else:
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_task_detail2(self, task_name):
""" Get task's detail v2 :param task_name: task name :return: the task's detail :rtype: list or dict according to the JSON """ |
from ..compat import json # fix object_pairs_hook parameter for Py2.6
params = {'detail': '',
'taskname': task_name}
resp = self._client.get(self.resource(), params=params)
res = resp.text if six.PY3 else resp.content
try:
return json.loads(res, object_pairs_hook=OrderedDict)
except ValueError:
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_logview_address(self, hours=None):
""" Get logview address of the instance object by hours. :param hours: :return: logview address :rtype: str """ |
hours = hours or options.log_view_hours
project = self.project
url = '%s/authorization' % project.resource()
policy = {
'expires_in_hours': hours,
'policy': {
'Statement': [{
'Action': ['odps:Read'],
'Effect': 'Allow',
'Resource': 'acs:odps:*:projects/%s/instances/%s' % \
(project.name, self.id)
}],
'Version': '1',
}
}
headers = {'Content-Type': 'application/json'}
params = {'sign_bearer_token': ''}
data = json.dumps(policy)
res = self._client.post(url, data, headers=headers, params=params)
content = res.text if six.PY3 else res.content
root = ElementTree.fromstring(content)
token = root.find('Result').text
link = options.log_view_host + "/logview/?h=" + self._client.endpoint + "&p=" \
+ project.name + "&i=" + self.id + "&token=" + token
return link |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open_reader(self, *args, **kwargs):
""" Open the reader to read records from the result of the instance. If `tunnel` is `True`, instance tunnel will be used. Otherwise conventional routine will be used. If instance tunnel is not available and `tunnel` is not specified,, the method will fall back to the conventional routine. Note that the number of records returned is limited unless `options.limited_instance_tunnel` is set to `True` or `limit=True` is configured under instance tunnel mode. Otherwise the number of records returned is always limited. :param tunnel: if true, use instance tunnel to read from the instance. if false, use conventional routine. if absent, `options.tunnel.use_instance_tunnel` will be used and automatic fallback is enabled. :param reopen: the reader will reuse last one, reopen is true means open a new reader. :type reopen: bool :param endpoint: the tunnel service URL :param compress_option: compression algorithm, level and strategy :type compress_option: :class:`odps.tunnel.CompressOption` :param compress_algo: compression algorithm, work when ``compress_option`` is not provided, can be ``zlib``, ``snappy`` :param compress_level: used for ``zlib``, work when ``compress_option`` is not provided :param compress_strategy: used for ``zlib``, work when ``compress_option`` is not provided :return: reader, ``count`` means the full size, ``status`` means the tunnel status :Example: """ |
use_tunnel = kwargs.get('use_tunnel', kwargs.get('tunnel'))
auto_fallback_result = use_tunnel is None
if use_tunnel is None:
use_tunnel = options.tunnel.use_instance_tunnel
result_fallback_errors = (errors.InvalidProjectTable, errors.InvalidArgument)
if use_tunnel:
# for compatibility
if 'limit_enabled' in kwargs:
kwargs['limit'] = kwargs['limit_enabled']
del kwargs['limit_enabled']
if 'limit' not in kwargs:
kwargs['limit'] = options.tunnel.limit_instance_tunnel
auto_fallback_protection = False
if kwargs['limit'] is None:
kwargs['limit'] = False
auto_fallback_protection = True
try:
return self._open_tunnel_reader(**kwargs)
except result_fallback_errors:
# service version too low to support instance tunnel.
if not auto_fallback_result:
raise
if not kwargs.get('limit'):
warnings.warn('Instance tunnel not supported, will fallback to '
'conventional ways. 10000 records will be limited.')
except requests.Timeout:
# tunnel creation timed out, which might be caused by too many files
# on the service.
if not auto_fallback_result:
raise
if not kwargs.get('limit'):
warnings.warn('Instance tunnel timed out, will fallback to '
'conventional ways. 10000 records will be limited.')
except (Instance.DownloadSessionCreationError, errors.InstanceTypeNotSupported):
# this is for DDL sql instances such as `show partitions` which raises
# InternalServerError when creating download sessions.
if not auto_fallback_result:
raise
except errors.NoPermission:
# project is protected
if not auto_fallback_protection:
raise
if not kwargs.get('limit'):
warnings.warn('Project under protection, 10000 records will be limited.')
kwargs['limit'] = True
return self._open_tunnel_reader(**kwargs)
return self._open_result_reader(*args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def in_qtconsole():
""" check if we're inside an IPython qtconsole DEPRECATED: This is no longer needed, or working, in IPython 3 and above. """ |
try:
ip = get_ipython()
front_end = (
ip.config.get('KernelApp', {}).get('parent_appname', "") or
ip.config.get('IPKernelApp', {}).get('parent_appname', "")
)
if 'qtconsole' in front_end.lower():
return True
except Exception:
return False
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isatty(file):
""" Returns `True` if `file` is a tty. Most built-in Python file-like objects have an `isatty` member, but some user-defined types may not, so this assumes those are not ttys. """ |
if (multiprocessing.current_process().name != 'MainProcess' or
threading.current_thread().getName() != 'MainThread'):
return False
if hasattr(file, 'isatty'):
return file.isatty()
elif (OutStream is not None and
isinstance(file, (OutStream, IPythonIOStream)) and
((hasattr(file, 'name') and file.name == 'stdout') or
(hasattr(file, 'stream') and
isinstance(file.stream, PyreadlineConsole)))):
# File is an IPython OutStream or IOStream and
# File name is 'stdout' or
# File wraps a Console
return True
return False |
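The `hasattr` fallback, the part that matters for arbitrary file-like objects, can be shown in isolation (a simplified sketch that skips the thread and IPython checks above):

```python
import io

def safe_isatty(file):
    # Simplified form of the check above: objects without an
    # `isatty` member are assumed not to be ttys.
    if hasattr(file, 'isatty'):
        return file.isatty()
    return False

assert safe_isatty(io.StringIO()) is False  # in-memory buffer, never a tty

class Plain(object):
    pass

assert safe_isatty(Plain()) is False        # no isatty member at all
```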
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def color_print(*args, **kwargs):
""" Prints colors and styles to the terminal using ANSI escape sequences. :: color_print('This is the color ', 'default', 'GREEN', 'green') Parameters positional args : str The positional arguments come in pairs (*msg*, *color*), where *msg* is the string to display and *color* is the color to display it in. *color* is an ANSI terminal color name. Must be one of: black, red, green, brown, blue, magenta, cyan, lightgrey, default, darkgrey, lightred, lightgreen, yellow, lightblue, lightmagenta, lightcyan, white, or '' (the empty string). file : writeable file-like object, optional Where to write to. Defaults to `sys.stdout`. If file is not a tty (as determined by calling its `isatty` member, if one exists), no coloring will be included. end : str, optional The ending of the message. Defaults to ``\\n``. The end will be printed after resetting any color or font state. """ |
    file = kwargs.get('file', _get_stdout())
    end = kwargs.get('end', '\n')

    write = file.write
    if isatty(file) and options.console.use_color:
        for i in range(0, len(args), 2):
            msg = args[i]
            if i + 1 == len(args):
                color = ''
            else:
                color = args[i + 1]

            if color:
                msg = _color_text(msg, color)

            # Some file objects support writing unicode sensibly on some Python
            # versions; if this fails try creating a writer using the locale's
            # preferred encoding. If that fails too give up.
            if not six.PY3 and isinstance(msg, bytes):
                msg = _decode_preferred_encoding(msg)

            write = _write_with_fallback(msg, write, file)

        write(end)
    else:
        for i in range(0, len(args), 2):
            msg = args[i]
            if not six.PY3 and isinstance(msg, bytes):
                # Support decoding bytes to unicode on Python 2; use the
                # preferred encoding for the locale (which is *sometimes*
                # sensible)
                msg = _decode_preferred_encoding(msg)
            write(msg)
        write(end) |
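The `_color_text` helper used above is not shown in this snippet. A minimal sketch of how such a helper typically works with ANSI SGR escape sequences — the color table below is an assumption based on standard ANSI codes, not the library's actual mapping:

```python
# Hypothetical stand-in for the `_color_text` helper referenced above.
# The code table is an assumption based on standard ANSI SGR sequences.
_ANSI_COLORS = {
    'black': '0;30', 'red': '0;31', 'green': '0;32', 'brown': '0;33',
    'blue': '0;34', 'magenta': '0;35', 'cyan': '0;36', 'lightgrey': '0;37',
    'default': '0;39',
}

def color_text(msg, color):
    """Wrap *msg* in an ANSI escape sequence for *color*, resetting after."""
    code = _ANSI_COLORS.get(color)
    if not color or code is None:
        return msg            # unknown or empty color: leave text unstyled
    return '\033[{0}m{1}\033[0m'.format(code, msg)
```

Note that the reset sequence `\033[0m` is always appended, which matches the docstring's promise that color state is reset before `end` is printed.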
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def human_time(seconds):
""" Returns a human-friendly time string that is always exactly 6 characters long. Depending on the number of seconds given, can be one of:: 1w 3d 2d 4h 1h 5m 1m 4s 15s Will be in color if console coloring is turned on. Parameters seconds : int The number of seconds to represent Returns ------- time : str A human-friendly representation of the given number of seconds that is always exactly 6 characters. """ |
    units = [
        ('y', 60 * 60 * 24 * 7 * 52),
        ('w', 60 * 60 * 24 * 7),
        ('d', 60 * 60 * 24),
        ('h', 60 * 60),
        ('m', 60),
        ('s', 1),
    ]

    seconds = int(seconds)

    if seconds < 60:
        return '   {0:2d}s'.format(seconds)
    for i in range(len(units) - 1):
        unit1, limit1 = units[i]
        unit2, limit2 = units[i + 1]
        if seconds >= limit1:
            return '{0:2d}{1}{2:2d}{3}'.format(
                seconds // limit1, unit1,
                (seconds % limit1) // limit2, unit2)
    return '  ~inf' |
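The unit cascade can be exercised standalone. This is a self-contained copy of the table and loop above (with the literal spacing restored so the result is exactly six characters, as the docstring promises):

```python
# Self-contained copy of the unit cascade above, for illustration.
def human_time(seconds):
    units = [
        ('y', 60 * 60 * 24 * 7 * 52),
        ('w', 60 * 60 * 24 * 7),
        ('d', 60 * 60 * 24),
        ('h', 60 * 60),
        ('m', 60),
        ('s', 1),
    ]
    seconds = int(seconds)
    if seconds < 60:
        return '   {0:2d}s'.format(seconds)
    # Walk unit pairs from largest to smallest; emit "<n><big><m><small>".
    for i in range(len(units) - 1):
        unit1, limit1 = units[i]
        unit2, limit2 = units[i + 1]
        if seconds >= limit1:
            return '{0:2d}{1}{2:2d}{3}'.format(
                seconds // limit1, unit1,
                (seconds % limit1) // limit2, unit2)
    return '  ~inf'

print(human_time(125))                   # ' 2m 5s'
print(human_time(3 * 86400 + 4 * 3600))  # ' 3d 4h'
```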
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def human_file_size(size):
""" Returns a human-friendly string representing a file size that is 2-4 characters long. For example, depending on the number of bytes given, can be one of:: 256b 64k 1.1G Parameters size : int The size of the file (in bytes) Returns ------- size : str A human-friendly representation of the size of the file """ |
    suffixes = ' kMGTPEZY'
    if size == 0:
        num_scale = 0
    else:
        num_scale = int(math.floor(math.log(size) / math.log(1000)))
    num_scale = max(num_scale, 0)
    if num_scale >= len(suffixes):
        suffix = '?'
    else:
        suffix = suffixes[num_scale]
    num_scale = int(math.pow(1000, num_scale))
    value = float(size) / num_scale
    str_value = str(value)
    if suffix == ' ':
        if '.' in str_value:
            str_value = str_value[:str_value.index('.')]
    elif str_value[2] == '.':
        str_value = str_value[:2]
    else:
        str_value = str_value[:3]
    return "{0:>3s}{1}".format(str_value, suffix) |
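A self-contained copy of the scaling logic above shows the truncation behavior. One assumption: the `elif`/`else` branches attach to the outer `suffix == ' '` check (which is what makes the docstring's examples come out right):

```python
import math

# Self-contained copy of the file-size formatter above, for illustration.
def human_file_size(size):
    suffixes = ' kMGTPEZY'
    # Pick the power of 1000 that keeps the mantissa in [1, 1000).
    num_scale = 0 if size == 0 else int(math.floor(math.log(size) / math.log(1000)))
    num_scale = max(num_scale, 0)
    suffix = '?' if num_scale >= len(suffixes) else suffixes[num_scale]
    value = float(size) / int(math.pow(1000, num_scale))
    str_value = str(value)
    if suffix == ' ':
        # Plain bytes: drop the fractional part entirely.
        if '.' in str_value:
            str_value = str_value[:str_value.index('.')]
    elif str_value[2] == '.':
        str_value = str_value[:2]   # e.g. '10.0' -> '10'
    else:
        str_value = str_value[:3]   # e.g. '1.5' or '123'
    return "{0:>3s}{1}".format(str_value, suffix)

print(human_file_size(1500))      # '1.5k'
print(human_file_size(10000))     # ' 10k'
```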
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, value=None):
""" Update progress bar via the console or notebook accordingly. """ |
    # Update self.value
    if value is None:
        value = self._current_value + 1
    self._current_value = value

    # Choose the appropriate environment
    if self._ipython_widget:
        try:
            self._update_ipython_widget(value)
        except RuntimeError:
            pass
    else:
        self._update_console(value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map(cls, function, items, multiprocess=False, file=None):
""" Does a `map` operation while displaying a progress bar with percentage complete. :: def work(i): print(i) ProgressBar.map(work, range(50)) Parameters function : function Function to call for each step items : sequence Sequence where each element is a tuple of arguments to pass to *function*. multiprocess : bool, optional If `True`, use the `multiprocessing` module to distribute each task to a different processor core. file : writeable file-like object, optional The file to write the progress bar to. Defaults to `sys.stdout`. If `file` is not a tty (as determined by calling its `isatty` member, if any), the progress bar will be completely silent. """ |
    results = []

    if file is None:
        file = _get_stdout()

    with cls(len(items), file=file) as bar:
        step_size = max(200, bar._bar_length)
        steps = max(int(float(len(items)) / step_size), 1)
        if not multiprocess:
            for i, item in enumerate(items):
                results.append(function(item))
                if (i % steps) == 0:
                    bar.update(i)
        else:
            p = multiprocessing.Pool()
            for i, result in enumerate(
                    p.imap_unordered(function, items, steps)):
                bar.update(i)
                results.append(result)
            p.close()
            p.join()

    return results |
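The `steps` computation above throttles how often the bar redraws, so long sequences do not repaint the terminal on every item. A standalone sketch of the same throttling idea (the `updates` list stands in for `bar.update()` calls and is for illustration only):

```python
def map_with_progress(function, items, bar_length=50):
    """Map *function* over *items*, recording a progress 'update' only
    every `steps` items, mirroring the throttling logic above."""
    updates = []                               # stands in for bar.update(i)
    step_size = max(200, bar_length)
    steps = max(int(float(len(items)) / step_size), 1)
    results = []
    for i, item in enumerate(items):
        results.append(function(item))
        if (i % steps) == 0:
            updates.append(i)
    return results, updates

# 1000 items with step_size 200 -> steps == 5 -> an update every 5th item.
results, updates = map_with_progress(lambda x: x * 2, range(1000))
```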
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hll_count(expr, error_rate=0.01, splitter=None):
""" Calculate HyperLogLog count :param expr: :param error_rate: error rate :type error_rate: float :param splitter: the splitter to split the column value :return: sequence or scalar :Example: 63270 63250 """ |
    # to make the class pickled right by the cloudpickle
    with open(os.path.join(path, 'lib', 'hll.py')) as hll_file:
        local = {}
        six.exec_(hll_file.read(), local)
        HyperLogLog = local['HyperLogLog']

    return expr.agg(HyperLogLog, rtype=types.int64, args=(error_rate, splitter)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bloomfilter(collection, on, column, capacity=3000, error_rate=0.01):
""" Filter collection on the `on` sequence by BloomFilter built by `column` :param collection: :param on: sequence or column name :param column: instance of Column :param capacity: numbers of capacity :type capacity: int :param error_rate: error rate :type error_rate: float :return: collection :Example: a b 0 name1 1 1 name1 4 """ |
    if not isinstance(column, Column):
        raise TypeError('bloomfilter can only filter on the column of a collection')

    # to make the class pickled right by the cloudpickle
    with open(os.path.join(path, 'lib', 'bloomfilter.py')) as bloomfilter_file:
        local = {}
        six.exec_(bloomfilter_file.read(), local)
        BloomFilter = local['BloomFilter']

    col_name = column.source_name or column.name
    on_name = on.name if isinstance(on, SequenceExpr) else on
    rand_name = '%s_%s' % (on_name, str(uuid.uuid4()).replace('-', '_'))
    on_col = collection._get_field(on).rename(rand_name)
    src_collection = collection
    collection = collection[collection, on_col]

    @output(src_collection.schema.names, src_collection.schema.types)
    class Filter(object):
        def __init__(self, resources):
            table = resources[0]
            bloom = BloomFilter(capacity, error_rate)
            for row in table:
                bloom.add(str(getattr(row, col_name)))
            self.bloom = bloom

        def __call__(self, row):
            if str(getattr(row, rand_name)) not in self.bloom:
                return
            return row[:-1]

    return collection.apply(Filter, axis=1, resources=[column.input, ]) |
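`BloomFilter` itself is loaded from `lib/bloomfilter.py` and is not shown here. A minimal pure-Python sketch of the underlying idea — the sizing formulas and hashing scheme below are textbook assumptions, not the library's actual implementation:

```python
import hashlib
import math

class TinyBloomFilter(object):
    """Toy Bloom filter: k hash probes into a bit array.
    Guarantees no false negatives; false-positive rate is tunable."""

    def __init__(self, capacity, error_rate=0.01):
        # Standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = (m/n)*ln(2) hashes.
        self.num_bits = int(-capacity * math.log(error_rate) / (math.log(2) ** 2)) + 1
        self.num_hashes = int(self.num_bits * math.log(2) / capacity) + 1
        self.bits = bytearray((self.num_bits + 7) // 8)

    def _positions(self, value):
        # Derive k probe positions by salting one hash with a seed.
        for seed in range(self.num_hashes):
            digest = hashlib.md5(('%d:%s' % (seed, value)).encode('utf-8')).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, value):
        for pos in self._positions(value):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, value):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(value))
```

The "no false negatives" property is what makes the filter above safe: every row whose key was added to the filter passes through, while most non-matching rows are rejected.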
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cumsum(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative summation of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    if expr._data_type == types.boolean:
        output_type = types.int64
    else:
        output_type = expr._data_type
    return _cumulative_op(expr, CumSum, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following, data_type=output_type) |
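`preceding` and `following` define a row window around the current row, as in SQL window functions. The semantics can be sketched in plain Python (a simplification of the distributed computation the expression compiles to):

```python
def windowed_cumsum(values, preceding=None, following=0):
    """For each row, sum rows from `preceding` rows before it through
    `following` rows after it; None means unbounded, as in SQL."""
    out = []
    for i in range(len(values)):
        start = 0 if preceding is None else max(0, i - preceding)
        stop = min(len(values), i + following + 1)
        out.append(sum(values[start:stop]))
    return out

print(windowed_cumsum([1, 2, 3, 4]))               # running sum: [1, 3, 6, 10]
print(windowed_cumsum([1, 2, 3, 4], preceding=1))  # 2-row window: [1, 3, 5, 7]
```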
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cummax(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative maximum of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    return _cumulative_op(expr, CumMax, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cummin(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative minimum of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    return _cumulative_op(expr, CumMin, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cummean(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative mean of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    data_type = _stats_type(expr)
    return _cumulative_op(expr, CumMean, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following, data_type=data_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cummedian(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative median of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    data_type = _stats_type(expr)
    return _cumulative_op(expr, CumMedian, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following, data_type=data_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cumcount(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative count of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    data_type = types.int64
    return _cumulative_op(expr, CumCount, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following, data_type=data_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cumstd(expr, sort=None, ascending=True, unique=False, preceding=None, following=None):
""" Calculate cumulative standard deviation of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :param unique: whether to eliminate duplicate entries :param preceding: the start point of a window :param following: the end point of a window :return: calculated column """ |
    data_type = _stats_type(expr)
    return _cumulative_op(expr, CumStd, sort=sort, ascending=ascending,
                          unique=unique, preceding=preceding,
                          following=following, data_type=data_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nth_value(expr, nth, skip_nulls=False, sort=None, ascending=True):
""" Get nth value of a grouped and sorted expression. :param expr: expression for calculation :param nth: integer position :param skip_nulls: whether to skip null values, False by default :param sort: name of the sort column :param ascending: whether to sort in ascending order :return: calculated column """ |
    return _cumulative_op(expr, NthValue, data_type=expr._data_type, sort=sort,
                          ascending=ascending, _nth=nth, _skip_nulls=skip_nulls) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rank(expr, sort=None, ascending=True):
""" Calculate rank of a sequence expression. :param expr: expression for calculation :param sort: name of the sort column :param ascending: whether to sort in ascending order :return: calculated column """ |
    return _rank_op(expr, Rank, types.int64, sort=sort, ascending=ascending) |
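`Rank` follows SQL `RANK()` semantics: tied values share a rank, and the next distinct value skips ahead by the tie count. A plain-Python sketch of that behavior:

```python
def sql_rank(values, ascending=True):
    """Rank values the way SQL RANK() does: equal values share a rank,
    and the rank after a tie jumps by the number of tied rows."""
    order = sorted(values, reverse=not ascending)
    ranks = {}
    for idx, val in enumerate(order):
        ranks.setdefault(val, idx + 1)   # first occurrence fixes the rank
    return [ranks[v] for v in values]

print(sql_rank([10, 20, 20, 30]))  # [1, 2, 2, 4] -- rank 3 is skipped
```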