| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def post(self, document):
"""Send a document or a list of documents to the API. :param document: a document or a list of documents. :type document: dict or list :return: Message with location of job :rtype: dict :raises ValidationError: if API returns status 400 :raises Unauthorized: if API returns status 401 :raises Forbidden: if API returns status 403 :raises NotFound: if API returns status 404 :raises ApiError: if API returns other status """ |
if type(document) is dict:
document = [document]
return self.make_request(method='POST', uri='updates/', data=document) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key):
"""Return the status of a job. :param key: id of the job :type key: str :return: message with location of job :rtype: dict :raises Unauthorized: if API returns status 401 :raises Forbidden: if API returns status 403 :raises NotFound: if API returns status 404 :raises ApiError: if API returns other status """ |
uri = 'updates/job/{}'.format(key)
return self.make_request(method='GET', uri=uri) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def detect_range(line = None):
'''
A helper function that checks a given host line to see if it contains
a range pattern described in the docstring above.
Returns True if the given line contains a range pattern, else False.
'''
if (not line.startswith("[") and
line.find("[") != -1 and
line.find(":") != -1 and
line.find("]") != -1 and
line.index("[") < line.index(":") < line.index("]")):
return True
else:
return False |
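For reference, the predicate above can be reproduced as a self-contained sketch. Collapsing the explicit `if/else` into a single chained boolean expression behaves identically (the host strings used below are hypothetical examples):

```python
def detect_range(line=None):
    # True when line has the form head[beg:end]tail and does not start with '['
    return (not line.startswith("[")
            and "[" in line and ":" in line and "]" in line
            and line.index("[") < line.index(":") < line.index("]"))

print(detect_range("db[1:6]-node"))  # True
print(detect_range("plainhost"))     # False
print(detect_range("[webservers]"))  # False  (group header, not a range)
```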
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def expand_hostname_range(line = None):
'''
A helper function that expands a given line that contains a pattern
specified in top docstring, and returns a list that consists of the
expanded version.
The '[' and ']' characters are used to maintain the pseudo-code
appearance. They are replaced in this function with '|' to ease
string splitting.
References: http://ansible.github.com/patterns.html#hosts-and-groups
'''
all_hosts = []
if line:
# A hostname such as db[1:6]-node is considered to consist of
# three parts:
# head: 'db'
# nrange: [1:6]; range() is a built-in. Can't use the name
# tail: '-node'
(head, nrange, tail) = line.replace('[','|').replace(']','|').split('|')
bounds = nrange.split(":")
if len(bounds) != 2:
raise errors.AnsibleError("host range incorrectly specified")
beg = bounds[0]
end = bounds[1]
if not beg:
beg = "0"
if not end:
raise errors.AnsibleError("host range end value missing")
if beg[0] == '0' and len(beg) > 1:
rlen = len(beg) # range length formatting hint
if rlen != len(end):
raise errors.AnsibleError("host range format incorrectly specified!")
fill = lambda _: str(_).zfill(rlen) # range sequence
else:
fill = str
try:
i_beg = string.ascii_letters.index(beg)
i_end = string.ascii_letters.index(end)
if i_beg > i_end:
raise errors.AnsibleError("host range format incorrectly specified!")
seq = string.ascii_letters[i_beg:i_end+1]
except ValueError: # not an alpha range
seq = range(int(beg), int(end)+1)
for rseq in seq:
hname = ''.join((head, fill(rseq), tail))
all_hosts.append(hname)
return all_hosts |
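A runnable standalone sketch of the same expansion logic, raising `ValueError` where the original raises `errors.AnsibleError` (the sample host patterns are assumptions for illustration):

```python
import string

def expand_hostname_range(line):
    # Split head[beg:end]tail into its three parts via '|' markers
    head, nrange, tail = line.replace('[', '|').replace(']', '|').split('|')
    bounds = nrange.split(":")
    if len(bounds) != 2:
        raise ValueError("host range incorrectly specified")
    beg = bounds[0] or "0"
    end = bounds[1]
    if not end:
        raise ValueError("host range end value missing")
    if beg[0] == '0' and len(beg) > 1:  # zero-padded numeric range
        if len(beg) != len(end):
            raise ValueError("host range format incorrectly specified!")
        fill = lambda x: str(x).zfill(len(beg))
    else:
        fill = str
    try:  # alphabetic range, e.g. [a:c]
        seq = string.ascii_letters[string.ascii_letters.index(beg):
                                   string.ascii_letters.index(end) + 1]
    except ValueError:  # not an alpha range -> numeric
        seq = range(int(beg), int(end) + 1)
    return [head + fill(s) + tail for s in seq]

print(expand_hostname_range("db[1:3]-node"))  # ['db1-node', 'db2-node', 'db3-node']
print(expand_hostname_range("web[09:11]"))    # ['web09', 'web10', 'web11']
```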
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, image=None):
"""Create content and return url. In case of images add the image.""" |
container = self.context
new = api.content.create(
container=container,
type=self.portal_type,
title=self.title,
safe_id=True,
)
if image:
namedblobimage = NamedBlobImage(
data=image.read(),
filename=safe_unicode(image.filename))
new.image = namedblobimage
if new:
new.description = safe_unicode(self.description)
return new.absolute_url() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def redirect(self, url):
"""Has its own method to allow overriding""" |
url = '{}/view'.format(url)
return self.request.response.redirect(url) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_users(self, entries):
"""Update user properties on the roster """ |
ws = IWorkspace(self.context)
members = ws.members
# check user permissions against join policy
join_policy = self.context.join_policy
if (join_policy == "admin"
and not checkPermission(
"collective.workspace: Manage roster",
self.context)):
raise Unauthorized("You are not allowed to add users here")
for entry in entries:
id = entry.get('id')
is_member = bool(entry.get('member'))
is_admin = bool(entry.get('admin'))
# Existing members
if id in members:
member = members[id]
if not is_member:
if checkPermission(
"ploneintranet.workspace: Manage workspace",
self.context):
ws.membership_factory(ws, member).remove_from_team()
else:
raise Unauthorized(
"Only team managers can remove members")
elif not is_admin:
ws.membership_factory(ws, member).groups -= {'Admins'}
else:
ws.membership_factory(ws, member).groups |= {'Admins'}
# New members
elif id not in members and (is_member or is_admin):
groups = set()
if is_admin:
groups.add('Admins')
ws.add_to_team(user=id, groups=groups) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_remote(self, cached=True):
'''
Helper function to determine remote
:param cached:
Use cached values or query remotes
'''
return self.m(
'getting current remote',
cmdd=dict(
cmd='git remote show %s' % ('-n' if cached else ''),
cwd=self.local
),
verbose=False
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _log(self, num=None, format=None):
'''
Helper function to receive git log
:param num:
Number of entries
:param format:
Use formatted output with specified format string
'''
num = '-n %s' % (num) if num else ''
format = '--format="%s"' % (format) if format else ''
return self.m(
'getting git log',
cmdd=dict(cmd='git log %s %s' % (num, format), cwd=self.local),
verbose=False
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_branch(self, remotes=False):
'''
Helper function to determine current branch
:param remotes:
List the remote-tracking branches
'''
return self.m(
'getting git branch information',
cmdd=dict(
cmd='git branch %s' % ('-r' if remotes else ''),
cwd=self.local
),
verbose=False
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _checkout(self, treeish):
'''
Helper function to checkout something
:param treeish:
String for '`tag`', '`branch`', or remote tracking '-B `branch`'
'''
return self.m(
'checking out "%s"' % (treeish),
cmdd=dict(cmd='git checkout %s' % (treeish), cwd=self.local),
verbose=False
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _pull(self):
'''
Helper function to pull from remote
'''
pull = self.m(
'pulling remote changes',
cmdd=dict(cmd='git pull --tags', cwd=self.local),
critical=False
)
if 'CONFLICT' in pull.get('out'):
self.m(
'Congratulations! You have merge conflicts in the repository!',
state=True,
more=pull
)
return pull |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _temp_filename(contents):
""" Make a temporary file with `contents`. The file will be cleaned up on exit. """ |
fp = tempfile.NamedTemporaryFile(
prefix='codequalitytmp', delete=False)
name = fp.name
fp.write(contents)
fp.close()
_files_to_cleanup.append(name)
return name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def set_options(pool_or_cursor,row_instance):
"for connection-level options that need to be set on Row instances"
# todo: move around an Options object instead
for option in ('JSON_READ',): setattr(row_instance,option,getattr(pool_or_cursor,option,None))
return row_instance |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def transform_specialfield(jsonify,f,v):
"helper for serialize_row"
raw = f.ser(v) if is_serdes(f) else v
return ujson.dumps(raw) if not isinstance(f,basestring) and jsonify else raw |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def dirty(field,ttl=None):
"decorator to cache the result of a function until a field changes"
if ttl is not None: raise NotImplementedError('pg.dirty ttl feature')
def decorator(f):
@functools.wraps(f)
def wrapper(self,*args,**kwargs):
# warning: not reentrant
d=self.dirty_cache[field] if field in self.dirty_cache else self.dirty_cache.setdefault(field,{})
return d[f.__name__] if f.__name__ in d else d.setdefault(f.__name__,f(self,*args,**kwargs))
return wrapper
return decorator |
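A minimal self-contained sketch of how the `dirty` decorator caches per-field, assuming a host class that provides a `dirty_cache` dict (the `Model` class and its `expensive` method below are hypothetical):

```python
import functools

def dirty(field):
    # Memoize a method's result under `field` until that cache slot is cleared
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            d = self.dirty_cache.setdefault(field, {})
            if f.__name__ not in d:
                d[f.__name__] = f(self, *args, **kwargs)
            return d[f.__name__]
        return wrapper
    return decorator

class Model:
    def __init__(self):
        self.dirty_cache = {}
        self.calls = 0

    @dirty('name')
    def expensive(self):
        self.calls += 1
        return 42

m = Model()
m.expensive(); m.expensive()
print(m.calls)            # 1 -- second call served from cache
m.dirty_cache.pop('name') # field "changed": invalidate the slot
m.expensive()
print(m.calls)            # 2
```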
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_table(clas,pool_or_cursor):
"uses FIELDS, PKEY, INDEXES and TABLE members to create a sql table for the model"
def mkfield((name,tp)): return name,(tp if isinstance(tp,basestring) else 'jsonb')
fields = ','.join(map(' '.join,map(mkfield,clas.FIELDS)))
base = 'create table if not exists %s (%s'%(clas.TABLE,fields)
if clas.PKEY: base += ',primary key (%s)'%clas.PKEY
base += ')'
commit_or_execute(pool_or_cursor,base)
clas.create_indexes(pool_or_cursor) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def pkey_get(clas,pool_or_cursor,*vals):
"lookup by primary keys in order"
pkey = clas.PKEY.split(',')
if len(vals)!=len(pkey): raise ValueError("%i args != %i-len primary key for %s"%(len(vals),len(pkey),clas.TABLE))
rows = list(clas.select(pool_or_cursor,**dict(zip(pkey,vals))))
if not rows: raise Missing
return set_options(pool_or_cursor,clas(*rows[0])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def select_models(clas,pool_or_cursor,**kwargs):
"returns generator yielding instances of the class"
if 'columns' in kwargs: raise ValueError("don't pass 'columns' to select_models")
return (set_options(pool_or_cursor,clas(*row)) for row in clas.select(pool_or_cursor,**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def kwinsert(clas,pool_or_cursor,**kwargs):
"kwargs version of insert"
returning = kwargs.pop('returning',None)
fields,vals = zip(*kwargs.items())
# note: don't do SpecialField resolution here; clas.insert takes care of it
return clas.insert(pool_or_cursor,fields,vals,returning=returning) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def kwinsert_mk(clas,pool_or_cursor,**kwargs):
"wrapper for kwinsert that returns a constructed class. use this over kwinsert in most cases"
if 'returning' in kwargs: raise ValueError("don't call kwinsert_mk with 'returning'")
return set_options(
pool_or_cursor,
clas(*clas.kwinsert(pool_or_cursor,returning='*',**kwargs))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def updatewhere(clas,pool_or_cursor,where_keys,**update_keys):
"this doesn't allow raw_keys for now"
# if clas.JSONFIELDS: raise NotImplementedError # todo(awinter): do I need to make the same change for SpecialField?
if not where_keys or not update_keys: raise ValueError
setclause=','.join(k+'=%s' for k in update_keys)
whereclause=' and '.join(eqexpr(k,v) for k,v in where_keys.items())
q='update %s set %s where %s'%(clas.TABLE,setclause,whereclause)
vals = tuple(update_keys.values()+where_keys.values())
commit_or_execute(pool_or_cursor,q,vals) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean_to_decimal(x, prec=28):
"""Convert a string, int or float to a Decimal object Parameters ---------- x : str, int, float, list, tuple, numpy.ndarray, pandas.DataFrame A string, int or float number, or a list, array or dataframe of these. prec : int (default 28) Set the getcontext precision Returns ------- y : decimal.Decimal, list, numpy.ndarray, pandas.DataFrame Decimal object or array of Decimal objects Example ------- clean_to_decimal('12.345') Decimal('12.345') clean_to_decimal('12.345', prec=2) Decimal('12') clean_to_decimal(12.345) Decimal('12.34500000000000063948846218') clean_to_decimal(12.345, prec=5) Decimal('12.345') """ |
import numpy as np
import pandas as pd
import decimal
def proc_elem(e):
try:
return decimal.Decimal(e) + decimal.Decimal('0.0')
except Exception as exc:
print(exc)
return None
def proc_list(x):
return [proc_elem(e) for e in x]
def proc_ndarray(x):
tmp = proc_list(list(x.reshape((x.size,))))
return np.array(tmp).reshape(x.shape)
# set precision
if prec:
decimal.getcontext().prec = prec
# transform string, list/tuple, numpy array, pandas dataframe
if isinstance(x, (str, int, float)):
return proc_elem(x)
elif isinstance(x, (list, tuple)):
return proc_list(x)
elif isinstance(x, np.ndarray):
return proc_ndarray(x)
elif isinstance(x, pd.DataFrame):
return pd.DataFrame(proc_ndarray(x.values),
columns=x.columns,
index=x.index)
else:
return None |
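The key trick above is `Decimal(e) + Decimal('0.0')`: the addition forces the result to be rounded to the current context precision. A scalar-only sketch (no numpy/pandas) showing that behavior:

```python
import decimal

def to_decimal(x, prec=28):
    # Adding Decimal('0.0') rounds the value to the context precision
    decimal.getcontext().prec = prec
    return decimal.Decimal(x) + decimal.Decimal('0.0')

print(to_decimal('12.345'))          # 12.345
print(to_decimal('12.345', prec=2))  # 12  (rounded to 2 significant digits)
print(to_decimal(12.345, prec=5))    # 12.345  (float noise rounded away)
```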
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rating_for(context, obj):
""" Provides a generic context variable name for the object that ratings are being rendered for, and the rating form. """ |
context["rating_object"] = context["rating_obj"] = obj
context["rating_form"] = RatingForm(context["request"], obj)
ratings = context["request"].COOKIES.get("yacms-rating", "")
rating_string = "%s.%s" % (obj._meta, obj.pk)
context["rated"] = (rating_string in ratings)
rating_name = obj.get_ratingfield_name()
for f in ("average", "count", "sum"):
context["rating_" + f] = getattr(obj, "%s_%s" % (rating_name, f))
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_attr_info(binary_view):
'''Gets basic information from a binary stream to allow correct processing of
the attribute header.
This function allows the interpretation of the Attribute type, attribute length
and if the attribute is non resident.
Args:
binary_view (memoryview of bytearray) - A binary stream with the
information of the attribute
Returns:
A tuple with the attribute type, the attribute length, in bytes, and
if the attribute is resident or not.
'''
global _ATTR_BASIC
attr_type, attr_len, non_resident = _ATTR_BASIC.unpack(binary_view[:9])
return (AttrTypes(attr_type), attr_len, bool(non_resident)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _create_attrcontent_class(name, fields, inheritance=(object,), data_structure=None, extra_functions=None, docstring=""):
'''Helper function that creates a class for attribute contents.
This function is a boilerplate that creates all the expected methods of
an attribute. The basic methods work in the same way for all classes.
Once it executes it defines a dynamic class with the methods "__init__",
"__repr__" and "__eq__" based on the fields passed in the ``fields`` parameter.
If the ``data_structure`` parameter is present, the classmethod ``get_representation_size``
and the class variable ``_REPR`` will also be present.
It is also possible to define the inheritance using this method by passing
a list of classes in the ``inheritance`` parameter.
If the ``extra_functions`` argument is present, they will be added to the
class.
Note:
If ``extra_functions`` defines any of the dynamically created methods,
they will *replace* the ones created.
Args:
name (str): Name of the class that will be created.
fields (tuple(str)): The attributes that will be added to the class.
inheritance (tuple(object)): List of objects that will be inherited by
the new class
extra_functions (dict(str : function)): A dictionary where the key
will be the name of the function in the class and the content
of the key is a function that will be bound to the class
docstring (str): Class' docstring
Returns:
A new class with ``name`` as its name.
'''
def create_func_from_str(f_name, args, content, docstring=""):
'''Helper function to create functions from strings.
To improve performance, the standard functions are created at runtime
based on the string derived from the content. This way the function, from
the interpreter point of view, looks like statically defined.
Note:
This function should be used only for methods that will receive
``self`` (instance methods). The ``self`` argument is added automatically.
Args:
f_name (str): Function name
args (list(str)): List of extra arguments that the function will receive
content (str): Content of the function
docstring (str): Function's docstring
Returns:
A new function object that can be inserted in the class.
'''
exec_namespace = {"__name__" : f"{f_name}"}
new_args = ", ".join(["self"] + args)
func_str = f"def {f_name}({new_args}): {content}"
exec(func_str, exec_namespace)
func = exec_namespace[f_name]
func.__doc__ = docstring
return func
#creates the functions necessary for the new class
slots = fields
init_content = ", ".join([f"self.{field}" for field in fields]) + " = content"
__init__ = create_func_from_str("__init__", [f"content=(None,)*{len(fields)}"], init_content)
temp = ", ".join([f"{field}={{self.{field}}}" for field in fields])
repr = "return " + f"f\'{{self.__class__.__name__}}({temp})\'"
__repr__ = create_func_from_str("__repr__", [], repr)
temp = " and ".join([f"self.{field} == other.{field}" for field in fields])
eq = f"return {temp} if isinstance(other, {name}) else False"
__eq__ = create_func_from_str("__eq__", ["other"], eq)
@classmethod
def get_representation_size(cls):
return cls._REPR.size
#adapted from namedtuple code
# Modify function metadata to help with introspection and debugging
for method in (__init__, get_representation_size.__func__, __eq__,
__repr__):
method.__qualname__ = f'{name}.{method.__name__}'
#map class namespace for the class creation
namespace = {"__slots__" : slots,
"__init__" : __init__,
"__repr__" : __repr__,
"__eq__" : __eq__
}
if data_structure is not None:
namespace["_REPR"] = struct.Struct(data_structure)
namespace["get_representation_size"] = get_representation_size
if docstring:
namespace["__doc__"] = docstring
#some new mappings can be set or overload the ones defined
if extra_functions is not None:
for method in extra_functions.values():
try:
method.__qualname__ = f'{name}.{method.__name__}'
except AttributeError:
try:
method.__func__.__qualname__ = f'{name}.{method.__func__.__name__}'
except AttributeError:
#if we got here, it is not a method or classmethod, must be an attribute
#TODO feels like a hack, change it
#TODO design a test for this
pass
namespace = {**namespace, **extra_functions}
#TODO check if docstring was provided, issue a warning
new_class = type(name, inheritance, namespace)
# adapted from namedtuple code
# For pickling to work, the __module__ variable needs to be set to the frame
# where the named tuple is created. Bypass this step in environments where
# sys._getframe is not defined (Jython for example) or sys._getframe is not
# defined for arguments greater than 0 (IronPython), or where the user has
# specified a particular module.
try:
new_class.__module__ = _sys._getframe(1).f_globals.get('__name__', '__main__')
except (AttributeError, ValueError):
pass
return new_class |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _astimezone_ts(self, timezone):
"""Changes the time zones of all timestamps. Receives a new timezone and applies it to all timestamps, if necessary. Args: timezone (:obj:`tzinfo`):
Time zone to be applied Returns: A new ``Timestamps`` object if the time zone changes, otherwise returns ``self``. """ |
if self.created.tzinfo is timezone:
return self
else:
nw_obj = Timestamps((None,)*4)
nw_obj.created = self.created.astimezone(timezone)
nw_obj.changed = self.changed.astimezone(timezone)
nw_obj.mft_changed = self.mft_changed.astimezone(timezone)
nw_obj.accessed = self.accessed.astimezone(timezone)
return nw_obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _len_objid(self):
'''Get the actual size of the content, as some attributes have variable sizes'''
try:
return self._size
except AttributeError:
temp = (self.object_id, self.birth_vol_id, self.birth_object_id, self.birth_domain_id)
self._size = sum([ObjectID._UUID_SIZE for data in temp if data is not None])
return self._size |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _allocated_entries_bitmap(self):
'''Creates a generator that returns all allocated entries in the
bitmap.
Yields:
int: The bit index of the allocated entries.
'''
for entry_number in range(len(self._bitmap) * 8):
if self.entry_allocated(entry_number):
yield entry_number |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _entry_allocated_bitmap(self, entry_number):
"""Checks if a particular index is allocated. Args: entry_number (int):
Index to verify Returns: bool: True if it is allocated, False otherwise. """ |
index, offset = divmod(entry_number, 8)
return bool(self._bitmap[index] & (1 << offset)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_next_empty_bitmap(self):
"""Returns the next empty entry. Returns: int: The value of the empty entry """ |
#TODO probably not the best way, redo
for i, byte in enumerate(self._bitmap):
if byte != 255:
for offset in range(8):
if not byte & (1 << offset):
return (i * 8) + offset |
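The two bitmap helpers above share the same bit layout: entry `n` lives in bit `n % 8` of byte `n // 8`, least-significant bit first. A standalone sketch of both, with an explicit `None` return when the bitmap is full (the sample bitmap bytes are hypothetical):

```python
def entry_allocated(bitmap, entry_number):
    # Bit (entry_number % 8) of byte (entry_number // 8), LSB first
    index, offset = divmod(entry_number, 8)
    return bool(bitmap[index] & (1 << offset))

def next_empty(bitmap):
    # First zero bit, scanning bytes then bits; None if everything is set
    for i, byte in enumerate(bitmap):
        if byte != 255:
            for offset in range(8):
                if not byte & (1 << offset):
                    return (i * 8) + offset
    return None

bm = bytes([0b00000111, 0b11111111])  # entries 0-2 and 8-15 allocated
print(entry_allocated(bm, 2))  # True
print(entry_allocated(bm, 3))  # False
print(next_empty(bm))          # 3
```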
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _len_ea_entry(self):
'''Returns the size of the entry'''
return EaEntry._REPR.size + len(self.name.encode("ascii")) + self.value_len |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _str_sid(self):
'Return a nicely formatted representation string'
sub_auths = "-".join([str(sub) for sub in self.sub_authorities])
return f'S-{self.revision_number}-{self.authority}-{sub_auths}' |
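The SID formatting above can be checked with a free function sketch (the sample revision/authority/sub-authority values below are illustrative, not taken from the source):

```python
def format_sid(revision, authority, sub_authorities):
    # Windows SID string form: S-<revision>-<authority>-<sub1>-<sub2>-...
    subs = "-".join(str(s) for s in sub_authorities)
    return f"S-{revision}-{authority}-{subs}"

print(format_sid(1, 5, [32, 544]))  # S-1-5-32-544
```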
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _len_sec_desc(self):
'''Returns the logical size of the file'''
return len(self.header) + len(self.owner_sid) + len(self.group_sid) + len(self.sacl) + len(self.dacl) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_from_binary(cls, binary_view):
'''Creates a new object DataRuns from a binary stream. The binary
stream can be represented by a byte string, bytearray or a memoryview of the
bytearray.
Args:
binary_view (memoryview of bytearray) - A binary stream with the
information of the attribute
Returns:
DataRuns: New object using the binary stream as source
'''
nw_obj = cls()
offset = 0
previous_dr_offset = 0
header_size = cls._INFO.size #"header" of a data run is always a byte
while binary_view[offset] != 0: #the runlist ends with an 0 as the "header"
header = cls._INFO.unpack(binary_view[offset:offset+header_size])[0]
length_len = header & 0x0F
length_offset = (header & 0xF0) >> 4
temp_len = offset+header_size+length_len #helper variable just to make things simpler
dr_length = int.from_bytes(binary_view[offset+header_size:temp_len], "little", signed=False)
if length_offset: #the offset is relative to the previous data run
dr_offset = int.from_bytes(binary_view[temp_len:temp_len+length_offset], "little", signed=True) + previous_dr_offset
previous_dr_offset = dr_offset
else: #if it is sparse, requires a different approach
dr_offset = None
offset += header_size + length_len + length_offset
nw_obj.data_runs.append((dr_length, dr_offset))
#nw_obj.data_runs.append(DataRun(dr_length, dr_offset))
_MOD_LOGGER.debug("DataRuns object created successfully")
return nw_obj |
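The parsing loop above follows the NTFS runlist encoding: each run starts with a header byte whose low nibble is the size of the length field and whose high nibble is the size of the offset field (0 meaning a sparse run); offsets are signed and relative to the previous run. A self-contained sketch with a hand-crafted (hypothetical) runlist:

```python
def parse_data_runs(data):
    runs, pos, prev_offset = [], 0, 0
    while data[pos] != 0:  # the runlist ends with a zero header byte
        header = data[pos]
        len_size = header & 0x0F
        off_size = (header & 0xF0) >> 4
        length = int.from_bytes(data[pos+1:pos+1+len_size], "little")
        if off_size:  # offset is signed and relative to the previous run
            offset = int.from_bytes(
                data[pos+1+len_size:pos+1+len_size+off_size],
                "little", signed=True) + prev_offset
            prev_offset = offset
        else:         # sparse run: no offset stored
            offset = None
        runs.append((length, offset))
        pos += 1 + len_size + off_size
    return runs

# 0x38 clusters at LCN 0x2735, then 0x114 clusters at relative
# offset -0x1400 (absolute LCN 0x1335), then the 0x00 terminator
raw = bytes([0x21, 0x38, 0x35, 0x27, 0x22, 0x14, 0x01, 0x00, 0xEC, 0x00])
print(parse_data_runs(raw))  # [(56, 10037), (276, 4917)]
```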
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_from_binary(cls, binary_view):
'''Creates a new object AttributeHeader from a binary stream. The binary
stream can be represented by a byte string, bytearray or a memoryview of the
bytearray.
Args:
binary_view (memoryview of bytearray) - A binary stream with the
information of the attribute
Returns:
AttributeHeader: New object using the binary stream as source
'''
attr_type, attr_len, non_resident, name_len, name_offset, flags, attr_id, \
content_len, content_offset, indexed_flag = cls._REPR.unpack(binary_view[:cls._REPR.size])
if name_len:
name = binary_view[name_offset:name_offset+(2*name_len)].tobytes().decode("utf_16_le")
else:
name = None
nw_obj = cls((AttrTypes(attr_type), attr_len, bool(non_resident), AttrFlags(flags), attr_id, name),
(content_len, content_offset, indexed_flag))
return nw_obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_from_binary(cls, load_dataruns, binary_view):
'''Creates a new object NonResidentAttrHeader from a binary stream. The binary
stream can be represented by a byte string, bytearray or a memoryview of the
bytearray.
Args:
load_dataruns (bool) - Indicates if the dataruns are to be loaded
binary_view (memoryview of bytearray) - A binary stream with the
information of the attribute
Returns:
NonResidentAttrHeader: New object using the binary stream as source
'''
attr_type, attr_len, non_resident, name_len, name_offset, flags, attr_id, \
start_vcn, end_vcn, rl_offset, compress_usize, alloc_sstream, curr_sstream, \
init_sstream = cls._REPR.unpack(binary_view[:cls._REPR.size])
if name_len:
name = binary_view[name_offset:name_offset+(2*name_len)].tobytes().decode("utf_16_le")
else:
name = None
#content = cls._REPR.unpack(binary_view[non_resident_offset:non_resident_offset+cls._REPR.size])
nw_obj = cls((AttrTypes(attr_type), attr_len, bool(non_resident), AttrFlags(flags), attr_id, name),
(start_vcn, end_vcn, rl_offset, compress_usize, alloc_sstream, curr_sstream, init_sstream))
if load_dataruns:
nw_obj.data_runs = DataRuns.create_from_binary(binary_view[nw_obj.rl_offset:])
_MOD_LOGGER.debug("NonResidentAttrHeader object created successfully")
return nw_obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_slide_list(self, logname, slides):
""" Write list of slides to logfile """ |
# Write slides.txt with list of slides
with open('%s/%s' % (self.cache, logname), 'w') as logfile:
for slide in slides:
heading = slide['heading']['text']
filename = self.get_image_name(heading)
print('%s,%d' % (filename, slide.get('time', 0)),
file=logfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rotate(self, img):
""" Rotate image if exif says it needs it """ |
try:
exif = image2exif.get_exif(img)
except AttributeError:
# image format doesn't support exif
return img
orientation = exif.get('Orientation', 1)
landscape = img.height < img.width
if orientation == 6 and landscape:
print("ROTATING")
return img.rotate(-90)
return img |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def draw_image(self, image, item, source):
""" Add an image to the image """ |
top, left = item['top'], item['left']
width, height = item['width'], item['height']
image_file = item['image']
img = Image.open(source)
img = self.rotate(img)
iwidth, iheight = img.size
wratio = width / iwidth
hratio = height / iheight
ratio = min(wratio, hratio)
img = img.resize((int(iwidth * ratio),
int(iheight * ratio)),
Image.ANTIALIAS)
# get updated image size
iwidth, iheight = img.size
# Adjust top, left for actual size of image so centre
# is in the same place as it would have been
top += (height - iheight) // 2
left += (width - iwidth) // 2
# now paste the image
image.paste(img, (left, top)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def slugify(self, name):
""" Turn name into a slug suitable for an image file name """ |
slug = ''
last = ''
for char in name.replace('#', '').lower().strip():
if not char.isalnum():
char = '_'
if last == '_' and char == '_':
continue
slug += char
last = char
return slug |
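A standalone copy of the slug helper above makes its edge behavior easy to verify: runs of non-alphanumeric characters collapse to a single underscore, but a trailing separator is kept as `_` (the sample inputs are hypothetical):

```python
def slugify(name):
    slug, last = '', ''
    for char in name.replace('#', '').lower().strip():
        if not char.isalnum():
            char = '_'          # any non-alphanumeric becomes '_'
        if last == '_' and char == '_':
            continue            # collapse runs of underscores
        slug += char
        last = char
    return slug

print(slugify("Hello, World!"))  # hello_world_
print(slugify("# My  Heading"))  # my_heading
```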
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_keys(self):
""" Returns list of the available keys :return: List of the keys available in the storage :rtype list """ |
return [k for k, el in self._keystore.items() if not el.is_expired] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set(self, key, value, expire_in=None):
""" Function to set or change particular property in the storage :param key: key name :param value: value to set :param expire_in: seconds to expire key :type key str :type expire_in int """ |
if key not in self._keystore:
self._keystore[key] = InMemoryItemValue(expire_in=expire_in)
k = self._keystore[key]
""":type k InMemoryItemValue"""
k.update_expire_time(expire_in)
k.value = value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key):
""" Retrieves previously stored key from the storage :return value, stored in the storage """ |
if key not in self._keystore:
return None
rec = self._keystore[key]
""":type rec InMemoryItemValue"""
if rec.is_expired:
self.delete(key)
return None
return rec.value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exists(self, key):
""" Check if the particular key exists in the storage :param key: name of the key which existence need to be checked :return: :type key str :rtype bool """ |
        if key not in self._keystore:
            return False
        if self._keystore[key].is_expired:
            self.delete(key)
            return False
        return True |
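`InMemoryItemValue` itself is not shown in this chunk; a minimal sketch that satisfies the `value` / `is_expired` / `update_expire_time` interface used by `set`, `get` and `exists` might look like:

```python
import time

class InMemoryItemValue(object):
    """Hypothetical value wrapper: a value plus an optional expiry
    expressed in seconds from now (None means 'never expires')."""

    def __init__(self, value=None, expire_in=None):
        self.value = value
        self._expires_at = None
        self.update_expire_time(expire_in)

    def update_expire_time(self, expire_in):
        # Re-arm (or clear) the expiry relative to the current time
        self._expires_at = None if expire_in is None else time.time() + expire_in

    @property
    def is_expired(self):
        return self._expires_at is not None and time.time() >= self._expires_at
```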
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_2d_single(x, y, pdffilename, **kwargs):
""" Do make_2d_single_plot and pass all arguments args: x: array_like xdata y: array_like ydata filepath: string filepath of pdf to save **kwargs: figure_options: passed to matplotlib.pyplot.figure xlabel_options: dict kwargs passed in set_xlabel ylabel_options: dict kwargs passed in set_ylabel suptitle_options: dict kwargs passed in figure.suptitle title_options: dict kwargs passed in set_title scilimits: tuple if number outside this limits, will use scientific notation errors: dictionary, array_like, scalar dictionary: {"xerr": xerr, "yerr": yerr} array_like, scalar: yerr fmt: string, default="k." line format bestfitfmt: string, default="k-" bestfit line format bestfit: BestFit child class eg. bestfit.polyfit.PolyFit, bestfit.logfit.LogFit bestfitlim: tuple, default=None xlim for bestfit line suptitle: string, default=xlim suptitle of pdf plot, formatted with outputdict suptitle_fontsize: int, default=15 font size of suptitle title: string, default=None title of the pdf plot title_fontsize: int, default=12 font size of title, formatted with outputdict xlabel: string, default=None label of string xlabel, formatted with outputdict ylabel: string, default=None label of string ylabel, formatted with outputdict xlim: tuple, default=None xlim ylim: tuple, default=None ylim outputdict: dictionary, default=None pass keys and arguments for formatting and to output """ |
pdffilepath = DataSets.get_pdffilepath(pdffilename)
plotsingle2d = PlotSingle2D(x, y, pdffilepath, **kwargs)
return plotsingle2d.plot() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_pdffilepath(pdffilename):
""" Returns the path for the pdf file args: pdffilename: string returns path for the plots folder / pdffilename.pdf """ |
return FILEPATHSTR.format(
root_dir=ROOT_DIR, os_sep=os.sep, os_extsep=os.extsep,
name=pdffilename,
folder=PURPOSE.get("plots").get("folder", "plots"),
ext=PURPOSE.get("plots").get("extension", "pdf")
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_tex_table(inputlist, outputfilename, fmt=None, **kwargs):
""" Do make_tex_table and pass all arguments args: inputlist: list outputfilename: string fmt: dictionary key: integer column index starting with 0 values: string format string. eg "{:g}" **kwargs: nonestring: string string when objecttype is None """ |
outputfilepath = FILEPATHSTR.format(
root_dir=ROOT_DIR, os_sep=os.sep, os_extsep=os.extsep,
name=outputfilename,
folder=PURPOSE.get("tables").get("folder", "tables"),
ext=PURPOSE.get("tables").get("extension", "tex")
)
    close = kwargs.pop("close", True)  # pop so 'close' is not passed twice
    table.make_tex_table(inputlist, open(outputfilepath, 'wb'), fmt=fmt,
                         close=close, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_compute_file(self):
""" Make the compute file from the self.vardict and self.vardictformat """ |
string = ""
try:
vardict_items = self.vardict.iteritems()
except AttributeError:
vardict_items = self.vardict.items()
for key, val in vardict_items:
# get default
default_format = get_default_format(val)
string_format = "\\newcommand{{\\{}}}{{" + self.vardictformat.get(
key, default_format) + "}}\n"
string += string_format.format(key, val).replace("+", "")
# get settings
compute_file = open(
"{root_dir}{os_sep}{name}{os_extsep}{ext}".format(
root_dir=ROOT_DIR, os_sep=os.sep, os_extsep=os.extsep,
name=SETTINGS["COMPUTE"]["name"],
ext=SETTINGS["COMPUTE"]["extension"]
), "wb")
compute_file.write(string)
compute_file.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_es_mappings(self):
"""
        Returns the mapping definitions present in Elasticsearch
""" |
es_mappings = json.loads(requests.get(self.mapping_url).text)
es_mappings = {"_".join(key.split("_")[:-1]): value['mappings'] \
for key, value in es_mappings.items()}
return es_mappings |
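The dict comprehension strips a trailing `_<suffix>` (for example an index version) from each key; in isolation, assuming the same key shape:

```python
def strip_suffix_keys(es_mappings):
    # "books_v2" -> "books": drop the part after the last underscore
    return {"_".join(key.split("_")[:-1]): value["mappings"]
            for key, value in es_mappings.items()}
```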
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def readable_time_delta(seconds):
""" Convert a number of seconds into readable days, hours, and minutes """ |
days = seconds // 86400
seconds -= days * 86400
hours = seconds // 3600
seconds -= hours * 3600
minutes = seconds // 60
m_suffix = 's' if minutes != 1 else ''
h_suffix = 's' if hours != 1 else ''
d_suffix = 's' if days != 1 else ''
retval = u'{0} minute{1}'.format(minutes, m_suffix)
if hours != 0:
retval = u'{0} hour{1} and {2}'.format(hours, h_suffix, retval)
if days != 0:
retval = u'{0} day{1}, {2}'.format(days, d_suffix, retval)
return retval |
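A standalone copy of the function (same behaviour, with `divmod` used for brevity) shows the pluralisation and joining from the smallest unit outwards:

```python
def readable_time_delta(seconds):
    # Split seconds into days / hours / minutes, then build the phrase
    days, seconds = divmod(seconds, 86400)
    hours, seconds = divmod(seconds, 3600)
    minutes = seconds // 60
    retval = u'{0} minute{1}'.format(minutes, 's' if minutes != 1 else '')
    if hours:
        retval = u'{0} hour{1} and {2}'.format(hours, 's' if hours != 1 else '', retval)
    if days:
        retval = u'{0} day{1}, {2}'.format(days, 's' if days != 1 else '', retval)
    return retval
```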
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def next_occurrence(reminder):
""" Calculate the next occurrence of a repeatable reminder """ |
now = datetime.datetime.utcnow().replace(tzinfo=pytz.UTC)
now_dow = now.weekday()
# Start/end dow starting from tomorrow
start_dow = now_dow + 1
end_dow = start_dow + 7
# Modded range from tomorrow until 1 week from now. Normalizes
# wraparound values that span into next week
dow_iter = imap(lambda x: x % 7, xrange(start_dow, end_dow))
# Filter out any days that aren't in the schedule
dow_iter = ifilter(lambda x: x in reminder['repeat'], dow_iter)
# Get the first one. That's the next day of week
try:
next_dow = next(dow_iter)
except StopIteration: # How?
logger.exception("Somehow, we didn't get a next day of week?")
_scheduled.discard(reminder['_id'])
return
# Get the real day delta. Take the next day of the week. if that day of
# the week is before the current day of the week, add a week. Normalize
# this value by subtracting the starting point. Example:
# Now = 3, Next = 1, Delta = (1 + 7) - 3 = 5
day_delta = next_dow
if next_dow <= now_dow:
day_delta += 7
day_delta -= now_dow
# Update the record
return reminder['when'] + datetime.timedelta(days=day_delta), day_delta |
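The day-of-week search can be isolated from the reminder record; a sketch with plain ints, where 0 is Monday as in `datetime.weekday()`:

```python
def next_day_delta(now_dow, repeat_days):
    """Days until the next scheduled weekday, always looking from
    tomorrow onwards (1..7 days ahead), wrapping across the week."""
    for dow in (d % 7 for d in range(now_dow + 1, now_dow + 8)):
        if dow in repeat_days:
            delta = dow - now_dow
            # same-or-earlier weekday means it falls into next week
            return delta if delta > 0 else delta + 7
    return None  # empty schedule
```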
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def at_reminder(client, channel, nick, args):
""" Schedule a reminder to occur at a specific time. The given time can optionally be specified to occur at a specific timezone, but will default to the value of settings.TIMEZONE if none is specified. Times should be on a 24-hour clock. These types of reminders are repeatable, should the last two words of the message be of the form "repeat <days_of_week>" where days_of_week is a single string consisting of any of the following days: M, Tu, W, Th, F, Sa, Su. For example, 'repeat MWF' will repeat a reminder at the same time every Monday, Wednesday, and Friday. A full example of how one would use this: <sduncan> helga at 13:00 EST standup time repeat MTuWThF This will create a reminder "standup time" to occur at 1:00PM Eastern every weekday. Optionally, a specific channel can be specified to receive the reminder message. This is useful if creating several reminders via a private message. To use this, specify "on <channel>" between the time amount and the message: <sduncan> helga at 13:00 EST on #bots standup time repeat MTuWThF <sduncan> helga at 13:00 EST on bots standup time repeat MTuWThF Note that the '#' char for specifying the channel is entirely optional. """ |
global _scheduled
now = datetime.datetime.utcnow().replace(tzinfo=pytz.UTC)
# Parse the time it should go off, and the minute offset of the day
hh, mm = map(int, args[0].split(':'))
# Strip time from args
args = args[1:]
# Default timezone
timezone = pytz.timezone(getattr(settings, 'TIMEZONE', 'US/Eastern'))
try:
# If there was a timezone passed in
timezone = pytz.timezone(args[0])
except pytz.UnknownTimeZoneError:
pass
else:
# If so, remove it from args
args = args[1:]
local_now = now.astimezone(timezone)
local_next = local_now.replace(hour=hh, minute=mm)
if local_next <= local_now:
local_next += datetime.timedelta(days=1)
reminder = {
'when': local_next.astimezone(pytz.UTC),
'channel': channel,
'message': ' '.join(args),
'creator': nick,
}
# Check for 'repeat' arg
try:
repeat = args[-2] == 'repeat'
except IndexError:
repeat = False
if repeat:
# If repeating, strip off the last two for the message
sched = args[-1]
reminder['message'] = ' '.join(args[:-2])
repeat_days = sorted([v for k, v in days_of_week.iteritems() if k in sched])
if not repeat_days:
return u"I didn't understand '{0}'. You must use any of M,Tu,W,Th,F,Sa,Su. Ex: MWF".format(sched)
reminder['repeat'] = repeat_days
for attempt in xrange(7):
if reminder['when'].weekday() in repeat_days:
break
reminder['when'] += datetime.timedelta(days=1)
# Handle ability to specify the channel
if reminder['message'].startswith('on'):
parts = reminder['message'].split(' ')
chan = parts[1]
reminder['message'] = ' '.join(parts[2:])
# Make sure channel is formatted correctly
if not chan.startswith('#'):
chan = '#{0}'.format(chan)
reminder['channel'] = chan
id = db.reminders.insert(reminder)
diff = reminder['when'] - now
delay = (diff.days * 24 * 3600) + diff.seconds
_scheduled.add(id)
reactor.callLater(delay, _do_reminder, id, client)
return u'Reminder set for {0} from now'.format(readable_time_delta(delay)) |
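Setting timezones aside, the "today or tomorrow" decision reduces to the following sketch (`next_at` is a hypothetical helper using naive datetimes and, unlike the original, zeroing seconds):

```python
import datetime

def next_at(now, hhmm):
    """Next occurrence of the 'HH:MM' wall-clock time strictly after now."""
    hh, mm = map(int, hhmm.split(':'))
    nxt = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
    if nxt <= now:
        # that time already passed today, so schedule for tomorrow
        nxt += datetime.timedelta(days=1)
    return nxt
```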
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_template(self, template, subdir):
""" Use yacms's project template by default. The method of picking the default directory is copied from Django's TemplateCommand. """ |
if template is None:
return six.text_type(os.path.join(yacms.__path__[0], subdir))
return super(Command, self).handle_template(template, subdir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, *args, **kwargs):
"""Automatically set image""" |
if not self.image:
# Fetch image
url = "http://img.youtube.com/vi/%s/0.jpg" % self.youtube_id
response = None
try:
response = requests.get(url)
except requests.exceptions.RequestException:
# Nothing we can really do in this case
pass
if response is not None:
# Jump through filesystem hoop to please photologue
filename = self.youtube_id + '.jpg'
filepath = os.path.join(mkdtemp(), filename)
fp = open(filepath, 'wb')
try:
fp.write(response.content)
finally:
fp.close()
# Check for a valid image
image = None
try:
image = Image.open(filepath)
except IOError:
os.remove(filepath)
if image is not None:
try:
# Overlay a play button if possible
video_play_image = \
preferences.GalleryPreferences.video_play_image
if video_play_image:
overlay = Image.open(video_play_image)
# Downsize image_overlay if it is larger than image
w1, h1 = image.size
w2, h2 = overlay.size
if w2 > w1 or h2 > h1:
ratio1 = w1 / float(h1)
ratio2 = w2 / float(h2)
if ratio1 > ratio2:
resize_fract = h1 / float(h2)
else:
resize_fract = w1 / float(w2)
                                overlay = overlay.resize(
                                    (int(w2 * resize_fract),
                                     int(h2 * resize_fract)),
                                    Image.ANTIALIAS
                                )
image.paste(
overlay,
(int((w1 - w2) / 2.0), int((h1 - h2) / 2.0)),
mask=overlay
)
image.save(filepath)
# Finally set image
image = File(open(filepath, 'rb'))
image.name = filename
self.image = image
finally:
os.remove(filepath)
super(VideoEmbed, self).save(*args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def name(self):
""" Compute a name according to sub meta results names 'operation:[plus, moins]' """ |
return "%s:[%s]" % (self._name, ", ".join(meta.name for meta in self._metas)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def errors(self):
""" get all the errors [ValueError('invalid data',), RuntimeError('server not anwsering',)] """ |
errors = []
for meta in self:
errors.extend(meta.errors)
return errors |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def defaults(self):
""" component default component .. Note:: default components is just an indication for user and the views, except if the Block is required. If required then default is selected if nothing explisitely selected. """ |
default = self._defaults
# if require and no default, the first component as default
if not len(default) and self.required and len(self._components):
default = [six.next(six.itervalues(self._components)).name]
return default |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def selected(self):
""" returns the list of selected component names. if no component selected return the one marked as default. If the block is required and no component where indicated as default, then the first component is selected. """ |
selected = self._selected
if len(self._selected) == 0 and self.required:
# nothing has been selected yet BUT the component is required
selected = self.defaults
return selected |
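The fallback chain (explicit selection, then declared defaults, then the first component when the block is required) can be sketched with a stripped-down stand-in class:

```python
from collections import OrderedDict

class TinyBlock(object):
    """Sketch of the selection fallback chain:
    explicit selection > declared defaults > first component (if required)."""

    def __init__(self, required=True):
        self.required = required
        self._components = OrderedDict()  # name -> component
        self._defaults = []
        self._selected = []

    @property
    def defaults(self):
        # required block with no declared default: first component wins
        if not self._defaults and self.required and self._components:
            return [next(iter(self._components))]
        return self._defaults

    def selected(self):
        if not self._selected and self.required:
            return self.defaults
        return self._selected
```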
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_dict(self):
""" returns a dictionary representation of the block and of all component options """ |
#TODO/FIXME: add selected information
if self.hidden:
rdict = {}
else:
def_selected = self.selected()
comps = [
{
'name': comp.name,
'default': comp.name in self.defaults,
'options': comp.get_ordered_options() if isinstance(comp, Optionable) else None
}
for comp in self
]
rdict = {
'name': self.name,
'required': self.required,
'multiple': self.multiple,
'args': self.in_name,
'returns': self.out_name,
'components': comps
}
return rdict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reset(self):
""" Removes all the components of the block """ |
self._components = OrderedDict()
self.clear_selections()
self._logger.info("<block: %s> reset component list" % (self.name)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, in_name=None, out_name=None, required=None, hidden=None, multiple=None, defaults=None):
""" Set the options of the block. Only the not None given options are set .. note:: a block may have multiple inputs but have only one output :param in_name: name(s) of the block input data :type in_name: str or list of str :param out_name: name of the block output data :type out_name: str :param required: whether the block will be required or not :type required: bool :param hidden: whether the block will be hidden to the user or not :type hidden: bool :param multiple: if True more than one component may be selected/ run) :type multiple: bool :param defaults: names of the selected components :type defaults: list of str, or str """ |
if in_name is not None:
self.in_name = in_name if isinstance(in_name, list) else [in_name]
if out_name is not None:
self.out_name = out_name
if required is not None:
self.required = required
if hidden is not None:
self.hidden = hidden
if multiple is not None:
self.multiple = multiple
if defaults is not None:
#if default is just a 'str' it is managed in setter
self.defaults = defaults |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate(self):
""" check that the block can be run """ |
if self.required and len(self.selected()) == 0:
raise ReliureError("No component selected for block '%s'" % self.name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requires(self, *names):
""" Declare what block will be used in this engine. It should be call before adding or setting any component. Blocks order will be preserved for runnning task. """ |
if len(names) == 0:
raise ValueError("You should give at least one block name")
if self._blocks is not None and len(self._blocks) > 0:
raise ReliureError("Method 'requires' should be called only once before adding any composant")
for name in names:
if name in self._blocks:
raise ValueError("Duplicate block name %s" % name)
self._blocks[name] = Block(name)
self._logger.info(" ** requires ** %s", names) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def needed_inputs(self):
""" List all the needed inputs of a configured engine ['in'] But now if we unactivate the first component: ['middle'] More complex example: True Note that by default the needed input is 'input': ['input'] """ |
needed = set()
available = set() # set of available data
for bnum, block in enumerate(self):
if not block.selected(): # if the block will not be used
continue
if block.in_name is not None:
for in_name in block.in_name:
if not in_name in available:
needed.add(in_name)
elif bnum == 0:
# if the first block
needed.add(Engine.DEFAULT_IN_NAME)
# register the output
available.add(block.out_name)
return needed |
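The dataflow walk can be expressed as a pure function over `(in_names, out_name)` pairs in run order; this sketch omits the selected-component check of the original:

```python
def needed_inputs(blocks, default_in='input'):
    """blocks: list of (in_names or None, out_name) in run order.
    Anything consumed before an earlier block produced it is an input."""
    needed, available = set(), set()
    for bnum, (in_names, out_name) in enumerate(blocks):
        if in_names is not None:
            needed.update(n for n in in_names if n not in available)
        elif bnum == 0:
            needed.add(default_in)  # first block falls back to the default
        available.add(out_name)     # register this block's output
    return needed
```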
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_dict(self):
""" dict repr of the components """ |
drepr = {
'blocks': [
                block.as_dict() for block in self if not block.hidden
],
'args': list(self.needed_inputs())
}
return drepr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self):
""" Returns a list of cached instances. """ |
class_list = list(self.get_class_list())
if not class_list:
self.cache = []
return []
if self.cache is not None:
return self.cache
results = []
for cls_path in class_list:
module_name, class_name = cls_path.rsplit('.', 1)
try:
module = __import__(module_name, {}, {}, class_name)
cls = getattr(module, class_name)
if self.instances:
results.append(cls())
else:
results.append(cls)
except Exception:
logger.exception('Unable to import {cls}'.format(cls=cls_path))
continue
self.cache = results
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _download_initial_config(self):
"""Loads the initial config.""" |
_initial_config = self._download_running_config() # this is a bit slow!
self._last_working_config = _initial_config
self._config_history.append(_initial_config)
self._config_history.append(_initial_config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _upload_config_content(self, configuration, rollbacked=False):
"""Will try to upload a specific configuration on the device.""" |
try:
for configuration_line in configuration.splitlines():
self._device.cli(configuration_line)
self._config_changed = True # configuration was changed
self._committed = False # and not committed yet
except (pyPluribus.exceptions.CommandExecutionError,
pyPluribus.exceptions.TimeoutError) as clierr:
if not rollbacked:
                # rollback errors will just throw
                # to avoid loops
self.discard()
raise pyPluribus.exceptions.ConfigLoadError("Unable to upload config on the device: {err}.\
Configuration will be discarded.".format(err=clierr.message))
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_candidate(self, filename=None, config=None):
""" Loads a candidate configuration on the device. In case the load fails at any point, will automatically rollback to last working configuration. :param filename: Specifies the name of the file with the configuration content. :param config: New configuration to be uploaded on the device. :raise pyPluribus.exceptions.ConfigLoadError: When the configuration could not be uploaded to the device. """ |
configuration = ''
if filename is None:
configuration = config
else:
with open(filename) as config_file:
configuration = config_file.read()
return self._upload_config_content(configuration) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def discard(self):
        # pylint: disable=no-self-use """ Clears uncommitted changes. :raise pyPluribus.exceptions.ConfigurationDiscardError: If the configuration applied cannot be discarded. """ |
try:
self.rollback(0)
except pyPluribus.exceptions.RollbackError as rbackerr:
raise pyPluribus.exceptions.ConfigurationDiscardError("Cannot discard configuration: {err}.\
".format(err=rbackerr)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def commit(self):
# pylint: disable=no-self-use """Will commit the changes on the device""" |
if self._config_changed:
self._last_working_config = self._download_running_config()
self._config_history.append(self._last_working_config)
            self._committed = True  # configuration was committed
self._config_changed = False # no changes since last commit :)
return True # this will be always true
# since the changes are automatically applied
self._committed = False # make sure the _committed attribute is not True by any chance
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare(self):
# pylint: disable=no-self-use """ Computes the difference between the candidate config and the running config. """ |
        # because we emulate the configuration history
# the difference is between the last committed config and the running-config
running_config = self._download_running_config()
running_config_lines = running_config.splitlines()
last_committed_config = self._last_working_config
last_committed_config_lines = last_committed_config.splitlines()
difference = difflib.unified_diff(running_config_lines, last_committed_config_lines, n=0)
return '\n'.join(difference) |
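`difflib.unified_diff` with `n=0` emits only the changed lines plus hunk headers, which is what `compare` joins; for instance, with two hypothetical one-line config changes:

```python
import difflib

# stand-ins for the running and last-committed configurations
running = "hostname sw1\nport 1 enable\n".splitlines()
committed = "hostname sw1\nport 1 disable\n".splitlines()
# n=0 drops all unchanged context lines
diff = '\n'.join(difflib.unified_diff(running, committed, n=0))
```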
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rollback(self, number=0):
""" Will rollback the configuration to a previous state. Can be called also when :param number: How many steps back in the configuration history must look back. :raise pyPluribus.exceptions.RollbackError: In case the configuration cannot be rolled back. """ |
if number < 0:
raise pyPluribus.exceptions.RollbackError("Please provide a positive number to rollback to!")
available_configs = len(self._config_history)
max_rollbacks = available_configs - 2
if max_rollbacks < 0:
raise pyPluribus.exceptions.RollbackError("Cannot rollback: \
            not enough configuration history available!")
if max_rollbacks > 0 and number > max_rollbacks:
raise pyPluribus.exceptions.RollbackError("Cannot rollback more than {cfgs} configurations!\
".format(cfgs=max_rollbacks))
        config_location = 1  # will load the initial config worst case (user never committed, but wants to discard)
        if max_rollbacks > 0:  # in case of previous commit(s) will be able to load a specific configuration
            config_location = available_configs - number - 1  # stored at location len() - rollback_nb - 1
# covers also the case of discard uncommitted changes (rollback 0)
desired_config = self._config_history[config_location]
try:
self._upload_config_content(desired_config, rollbacked=True)
except pyPluribus.exceptions.ConfigLoadError as loaderr:
raise pyPluribus.exceptions.RollbackError("Cannot rollback: {err}".format(err=loaderr))
del self._config_history[(config_location+1):] # delete all newer configurations than the config rolled back
self._last_working_config = desired_config
self._committed = True
self._config_changed = False
return True |
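The index arithmetic on the history list is the subtle part; isolated as a hypothetical helper over a plain list (the history stores the initial config twice, then one entry per commit):

```python
def rollback_target(config_history, number):
    """Return the index of the config to restore; rollback(0) means
    'discard uncommitted changes', i.e. the last committed config."""
    available = len(config_history)
    max_rollbacks = available - 2
    if number < 0 or (max_rollbacks > 0 and number > max_rollbacks):
        raise ValueError('cannot rollback %d configurations' % number)
    if max_rollbacks > 0:
        # stored at location len() - number - 1
        return available - number - 1
    return 1  # never committed: fall back to the initial config
```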
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def draw(self, label, expire):
""" Return a Serial number for this resource queue, after bootstrapping. """ |
# get next number
with self.client.pipeline() as pipe:
pipe.msetnx({self.keys.dispenser: 0, self.keys.indicator: 1})
pipe.incr(self.keys.dispenser)
number = pipe.execute()[-1]
# publish for humans
self.message('{} assigned to "{}"'.format(number, label))
# launch keeper
kwargs = {'client': self.client, 'key': self.keys.key(number)}
keeper = Keeper(label=label, expire=expire, **kwargs)
try:
yield number
except:
self.message('{} crashed!'.format(number))
raise
finally:
keeper.close()
self.message('{} completed by "{}"'.format(number, label))
number += 1
self.client.set(self.keys.indicator, number)
self.announce(number) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait(self, number, patience):
""" Waits and resets if necessary. """ |
# inspect indicator for our number
waiting = int(self.client.get(self.keys.indicator)) != number
# wait until someone announces our number
while waiting:
message = self.subscription.listen(patience)
if message is None:
# timeout beyond patience, bump and try again
self.message('{} bumps'.format(number))
self.bump()
continue
if message['type'] != 'message':
continue # a subscribe message
waiting = self.keys.number(message['data']) != number
# our turn now
self.message('{} started'.format(number)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def message(self, text):
""" Public message. """ |
self.client.publish(self.keys.external,
'{}: {}'.format(self.resource, text)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def announce(self, number):
""" Announce an indicator change on both channels. """ |
self.client.publish(self.keys.internal, self.keys.key(number))
self.message('{} granted'.format(number)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bump(self):
""" Fix indicator in case of unnanounced departments. """ |
# read client
values = self.client.mget(self.keys.indicator, self.keys.dispenser)
indicator, dispenser = map(int, values)
# determine active users
numbers = range(indicator, dispenser + 1)
keys = [self.keys.key(n) for n in numbers]
pairs = zip(keys, self.client.mget(*keys))
try:
# determine number of first active user
number = next(self.keys.number(key)
for key, value in pairs if value is not None)
        except StopIteration:
# set number to next result of incr on dispenser
number = dispenser + 1
# set indicator to it if necessary
if number != indicator:
self.client.set(self.keys.indicator, number)
# announce and return it anyway
self.announce(number)
return number |
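The scan for the first still-active ticket can be sketched with a plain dict standing in for the redis key lookups:

```python
def first_active(indicator, dispenser, store):
    """Return the lowest ticket number in [indicator, dispenser] whose
    key is still present in store, else dispenser + 1."""
    for number in range(indicator, dispenser + 1):
        if store.get(number) is not None:
            return number
    # no active user: point past the last dispensed number
    return dispenser + 1
```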
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lock(self, resource, label='', expire=60, patience=60):
""" Lock a resource. :param resource: String corresponding to resource type :param label: String label to attach :param expire: int seconds :param patience: int seconds """ |
queue = Queue(client=self.client, resource=resource)
with queue.draw(label=label, expire=expire) as number:
queue.wait(number=number, patience=patience)
yield
queue.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_refs(profile, ref_type=None):
"""List all refs. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. ref_type The type of ref you want. For heads, it's ``heads``. For tags, it's ``tags``. That sort of thing. If you don't specify a type, all refs are returned. Returns: A list of dicts with data about each ref. """ |
resource = "/refs"
if ref_type:
resource += "/" + ref_type
data = api.get_request(profile, resource)
result = [prepare(x) for x in data]
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_ref(profile, ref):
"""Fetch a ref. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. ref The ref to fetch, e.g., ``heads/my-feature-branch``. Returns A dict with data about the ref. """ |
resource = "/refs/" + ref
data = api.get_request(profile, resource)
return prepare(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_ref(profile, ref, sha):
"""Create a ref. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. ref The ref to create, e.g., ``heads/my-feature-branch``. sha The SHA of the commit to point the ref to. Returns A dict with data about the ref. """ |
resource = "/refs"
payload = {"ref": "refs/" + ref, "sha": sha}
data = api.post_request(profile, resource, payload)
return prepare(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_ref(profile, ref, sha):
"""Point a ref to a new SHA. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. ref The ref to update, e.g., ``heads/my-feature-branch``. sha The SHA of the commit to point the ref to. Returns A dict with data about the ref. """ |
resource = "/refs/" + ref
payload = {"sha": sha}
data = api.patch_request(profile, resource, payload)
return prepare(data) |
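The create and update payloads differ in one detail worth noting: creation sends the fully qualified ref name, while an update identifies the ref in the URL and only sends the new SHA. A sketch with hypothetical helper names:

```python
def create_ref_payload(ref, sha):
    # POST /refs needs the ref qualified with the "refs/" prefix
    return {"ref": "refs/" + ref, "sha": sha}

def update_ref_payload(sha):
    # PATCH /refs/<ref> names the ref in the URL, so only the SHA is sent
    return {"sha": sha}

created = create_ref_payload("heads/my-feature-branch", "abc123")
updated = update_ref_payload("def456")
```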
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def latency(self):
""" Checks the connection latency. """ |
with self.lock:
self.send('PING %s' % self.server)
ctime = self._m_time.time()
msg = self._recv(expected_replies=('PONG',))
if msg[0] == 'PONG':
latency = self._m_time.time() - ctime
return latency |
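The timing pattern above can be reproduced without an IRC connection; this sketch swaps in stub transport callables and uses `time.monotonic`, which is unaffected by wall-clock adjustments (the real method uses its injected time module instead):

```python
import time

def timed_round_trip(send, recv):
    """Time one request/reply exchange through the given callables."""
    start = time.monotonic()
    send('PING example.org')          # request
    reply = recv()                    # blocking wait for the reply
    elapsed = time.monotonic() - start
    return reply, elapsed

# Stub transport: replies instantly, so elapsed is tiny but non-negative
reply, elapsed = timed_round_trip(lambda msg: None, lambda: 'PONG')
```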
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_section_relations(Section):
"""Find every relationship between section and the item model.""" |
all_rels = (Section._meta.get_all_related_objects() +
Section._meta.get_all_related_many_to_many_objects())
return filter_item_rels(all_rels) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def authorize_url(self):
""" Build the authorization url and save the state. Return the authorization url """ |
url, self.state = self.oauth.authorization_url(
'%sauthorize' % OAUTH_URL)
return url |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_token(self, code, state):
""" Fetch the token, using the verification code. Also, make sure the state received in the response matches the one in the request. Returns the access_token. """ |
if self.state != state:
raise MismatchingStateError()
self.token = self.oauth.fetch_token(
'%saccess_token/' % OAUTH_URL, code=code,
client_secret=self.client_secret)
return self.token['access_token'] |
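The state comparison in `fetch_token` is a CSRF guard. A sketch of the same check using a constant-time comparison (the method itself uses plain equality; `check_state` is an illustrative name):

```python
import hmac

def check_state(saved_state, returned_state):
    """Return True only when the state echoed back by the provider
    matches the one generated for the authorization URL."""
    return hmac.compare_digest(saved_state, returned_state)

ok = check_state('abc123', 'abc123')
tampered = check_state('abc123', 'evil')
```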
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refresh_token(self, refresh_token):
""" Get a new token, using the provided refresh token. Returns the new access_token. """ |
response = requests.post('%saccess_token' % OAUTH_URL, {
'refresh_token': refresh_token,
'grant_type': 'refresh_token',
'client_id': self.client_id,
'client_secret': self.client_secret
})
resp = json.loads(response.content)
if 'access_token' in resp:
self.token = resp['access_token']
return resp |
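The refresh response is plain JSON; this sketch isolates the parsing step with an illustrative sample body (field names follow the generic OAuth2 token response, not a captured Exist reply):

```python
import json

def parse_token_response(body):
    """Pull the access token out of a token-endpoint response body,
    returning None if the grant failed and no token was issued."""
    resp = json.loads(body)
    return resp.get('access_token')

token = parse_token_response('{"access_token": "tok-1", "token_type": "Bearer"}')
missing = parse_token_response('{"error": "invalid_grant"}')
```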
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def browser_authorize(self):
""" Open a browser to the authorization url and spool up a CherryPy server to accept the response """ |
url = self.authorize_url()
# Open the web browser in a new thread for command-line browser support
threading.Timer(1, webbrowser.open, args=(url,)).start()
server_config = {
'server.socket_host': '0.0.0.0',
'server.socket_port': 443,
'server.ssl_module': 'pyopenssl',
'server.ssl_certificate': 'tests/files/certificate.cert',
'server.ssl_private_key': 'tests/files/key.key',
}
cherrypy.config.update(server_config)
cherrypy.quickstart(self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def index(self, state, code=None, error=None):
""" Receive a Exist response containing a verification code. Use the code to fetch the access_token. """ |
        if code:
            # A returned code takes precedence over any provider error
            error = None
            try:
                auth_token = self.fetch_token(code, state)
            except MissingTokenError:
                error = self._fmt_failure(
                    'Missing access token parameter.<br/>Please check that '
                    'you are using the correct client_secret')
            except MismatchingStateError:
                error = self._fmt_failure('CSRF Warning! Mismatching state')
        else:
            # Keep any error message the provider sent back instead of
            # discarding it by reassigning the parameter to None
            error = self._fmt_failure(error or
                                      'Unknown error while authenticating')
# Use a thread to shutdown cherrypy so we can return HTML first
self._shutdown_cherrypy()
return error if error else self.success_html % (auth_token) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _shutdown_cherrypy(self):
""" Shutdown cherrypy in one second, if it's running """ |
if cherrypy.engine.state == cherrypy.engine.states.STARTED:
threading.Timer(1, cherrypy.engine.exit).start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def roundup(x, order):
    '''Round a number up to the passed order
Args
----
x: float
Number to be rounded
order: int
Order to which `x` should be rounded
Returns
-------
x_round: float
The passed value rounded to the passed order
'''
return x if x % 10**order == 0 else x + 10**order - x % 10**order |
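Restating the function so the example is self-contained: values already on a multiple of `10**order` pass through unchanged, everything else is pushed up to the next multiple.

```python
def roundup(x, order):
    return x if x % 10**order == 0 else x + 10**order - x % 10**order

# 123 % 100 == 23, so 123 becomes 123 + 100 - 23 == 200
rounded = roundup(123, 2)
exact = roundup(200, 2)   # already a multiple of 100, unchanged
```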
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def hourminsec(n_seconds):
    '''Compute hours, minutes, and seconds from a total number of seconds
Args
----
n_seconds: int
Total number of seconds to calculate hours, minutes, and seconds from
Returns
-------
hours: int
Number of hours in `n_seconds`
minutes: int
Remaining minutes in `n_seconds` after number of hours
seconds: int
Remaining seconds in `n_seconds` after number of minutes
'''
hours, remainder = divmod(n_seconds, 3600)
minutes, seconds = divmod(remainder, 60)
return abs(hours), abs(minutes), abs(seconds) |
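A self-contained check of the divmod chain: 3600 s per hour first, then 60 s per minute on the remainder.

```python
def hourminsec(n_seconds):
    hours, remainder = divmod(n_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return abs(hours), abs(minutes), abs(seconds)

parts = hourminsec(3661)   # one hour, one minute, one second
```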
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def add_alpha_labels(axes, xpos=0.03, ypos=0.95, suffix='', color=None,
fontsize=14, fontweight='normal', boxstyle='square', facecolor='white',
edgecolor='white', alpha=1.0):
    '''Add sequential alphabet labels to subplot axes
Args
----
axes: list of pyplot.ax
        A list of matplotlib axes to add the alphabet labels to
xpos: float or array_like
X position(s) of labels in figure coordinates
ypos: float or array_like
Y position(s) of labels in figure coordinates
suffix: str
        String to append to labels (e.g. '.' or ')')
color: matplotlib color
Color of labels
fontsize: int
        Alpha fontsize
fontweight: matplotlib fontweight
Alpha fontweight
boxstyle: matplotlib boxstyle
Alpha boxstyle
facecolor: matplotlib facecolor
Color of box containing label
edgecolor: matplotlib edgecolor
        Color of the border of the box containing label
alpha: float
Transparency of label
Returns
-------
axes: list of pyplot.ax
A list of matplotlib axes objects with alpha labels added
'''
import seaborn
import string
import numpy
    if not numpy.iterable(xpos):
        xpos = [xpos,]*len(axes)
    if not numpy.iterable(ypos):
        ypos = [ypos,]*len(axes)
    # Validate lengths of iterables passed by the caller; note that
    # `e.args` is a tuple, so messages must be appended as tuples
    try:
        assert (len(axes) == len(xpos))
    except AssertionError as e:
        e.args += ('xpos iterable must be same length as axes',)
        raise
    try:
        assert (len(axes) == len(ypos))
    except AssertionError as e:
        e.args += ('ypos iterable must be same length as axes',)
        raise
colors = seaborn.color_palette()
abc = string.ascii_uppercase
for i, (label, ax) in enumerate(zip(abc[:len(axes)], axes)):
if color is None:
color = colors[i]
kwargs = dict(color=color,
fontweight=fontweight,)
bbox = dict(boxstyle=boxstyle,
facecolor=facecolor,
edgecolor=edgecolor,
alpha=alpha)
ax.text(xpos[i], ypos[i], '{}{}'.format(label, suffix),
transform=ax.transAxes, fontsize=fontsize,
verticalalignment='top', bbox=bbox, **kwargs)
return axes |
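The label sequence itself needs no matplotlib; this sketch reproduces just the letter generation (`alpha_labels` is an illustrative name):

```python
import string

def alpha_labels(n, suffix=''):
    """Build the 'A', 'B', 'C', ... labels drawn on each subplot,
    with the same optional suffix add_alpha_labels appends."""
    return ['{}{}'.format(c, suffix) for c in string.ascii_uppercase[:n]]

labels = alpha_labels(3, suffix='.')
```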
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def merge_limits(axes, xlim=True, ylim=True):
'''Set maximum and minimum limits from list of axis objects to each axis
Args
----
axes: iterable
list of `matplotlib.pyplot` axis objects whose limits should be modified
xlim: bool
Flag to set modification of x axis limits
ylim: bool
Flag to set modification of y axis limits
'''
# Compile lists of all x/y limits
xlims = list()
ylims = list()
for ax in axes:
[xlims.append(lim) for lim in ax.get_xlim()]
[ylims.append(lim) for lim in ax.get_ylim()]
# Iterate over axes objects and set limits
for ax in axes:
if xlim:
ax.set_xlim(min(xlims), max(xlims))
if ylim:
ax.set_ylim(min(ylims), max(ylims))
return None |
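The limit merging reduces to a min/max over every endpoint; this sketch computes the shared pair from a list of `(lo, hi)` tuples without touching matplotlib (`merged_limits` is my name for it):

```python
def merged_limits(limit_pairs):
    """Collapse per-axis (lo, hi) limits into one pair covering all."""
    endpoints = [v for pair in limit_pairs for v in pair]
    return min(endpoints), max(endpoints)

shared = merged_limits([(0, 5), (-2, 3), (1, 9)])
```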
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def plot_noncontiguous(ax, data, ind, color='black', label='', offset=0,
linewidth=0.5, linestyle='-'):
'''Plot non-contiguous slice of data
Args
----
data: ndarray
The data with non continguous regions to plot
ind: ndarray
indices of data to be plotted
color: matplotlib color
Color of plotted line
label: str
Name to be shown in legend
offset: int
The number of index positions to reset start of data to zero
linewidth: float
The width of the plotted line
linstyle: str
The char representation of the plotting style for the line
Returns
-------
ax: pyplot.ax
Axes object with line glyph added for non-contiguous regions
'''
def slice_with_nans(ind, data, offset):
'''Insert nans in indices and data where indices non-contiguous'''
import copy
import numpy
ind_nan = numpy.zeros(len(data))
ind_nan[:] = numpy.nan
# prevent ind from overwrite with deepcopy
ind_nan[ind-offset] = copy.deepcopy(ind)
#ind_nan = ind_nan[ind[0]-offset:ind[-1]-offset]
# prevent data from overwrite with deepcopy
data_nan = copy.deepcopy(data)
data_nan[numpy.isnan(ind_nan)] = numpy.nan
return ind_nan, data_nan
x, y = slice_with_nans(ind, data, offset)
ax.plot(x, y, color=color, linewidth=linewidth, linestyle=linestyle,
label=label)
return ax |
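The NaN-gap trick generalizes beyond numpy; a pure-Python sketch of `slice_with_nans` (illustrative name `slice_with_gaps`), where positions missing from `ind` become NaN so a line plot breaks there:

```python
import math

def slice_with_gaps(ind, data, offset=0):
    """Return x/y lists the same length as data, with NaN wherever
    the index is not listed in `ind`."""
    keep = {i - offset for i in ind}
    x = [pos + offset if pos in keep else math.nan
         for pos in range(len(data))]
    y = [val if pos in keep else math.nan
         for pos, val in enumerate(data)]
    return x, y

# Index 2 is missing, so both series get a NaN gap there
x, y = slice_with_gaps([0, 1, 3], [10, 20, 30, 40])
```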
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def plot_shade_mask(ax, ind, mask, facecolor='gray', alpha=0.5):
'''Shade across x values where boolean mask is `True`
Args
----
ax: pyplot.ax
Axes object to plot with a shaded region
ind: ndarray
The indices to use for the x-axis values of the data
mask: ndarray
Boolean mask array to determine which regions should be shaded
    facecolor: matplotlib color
        Color of the shaded area
    alpha: float
        Transparency of the shaded region
Returns
-------
ax: pyplot.ax
Axes object with the shaded region added
'''
ymin, ymax = ax.get_ylim()
ax.fill_between(ind, ymin, ymax, where=mask,
facecolor=facecolor, alpha=alpha)
return ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fields(self, new_fieldnames):
""" Overwrite all field names with new field names. Mass renaming. """ |
if len(new_fieldnames) != len(self.fields):
raise Exception("Cannot replace fieldnames (len: %s) with list of "
"incorrect length (len: %s)" % (len(new_fieldnames),
len(self.fields)))
for old_name, new_name in izip(self.fields, new_fieldnames):
# use pop instead of `del` in case old_name == new_name
self.__data[new_name] = self.__data.pop(old_name) |
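The pop-and-reinsert idiom is the heart of the setter; a standalone sketch over a plain dict (the `rename_fields` helper is mine):

```python
def rename_fields(data, old_names, new_names):
    """Rename every key in place; pop() makes old == new a no-op
    rather than a deletion."""
    if len(old_names) != len(new_names):
        raise ValueError('replacement list has the wrong length')
    for old, new in zip(old_names, new_names):
        data[new] = data.pop(old)
    return data

table = rename_fields({'a': [1], 'b': [2]}, ['a', 'b'], ['x', 'b'])
```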
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromcsvstring(cls, csvstring, delimiter=",", quotechar="\""):
""" Takes one string that represents the entire contents of the CSV file, or similar delimited file. If you have a list of lists, where the first list is the headers, then use the main constructor. If you see an excess of whitespace in the first column of your data, this is probably because you tried to format a triple-quoted string literal nicely. Don't add any padding to the left. NOTE: Please prefix your triple-quoted string literal with `u` or `ur` as necessary. For copy-pasting directly from Excel, use `ur`. For copy-pasting from something Python (or similar) printed, use `ur`. For something just dumped from Python via __repr__ or some other text source that displays escape characters used, use `u`. --- Implementation notes: This solution was inspired by UnicodeRW. cStringIO.StringIO turns the passed string into a file-like (readble) object. The string must be encoded so that StringIO presents encoded text. In UnicodeRW, codecs.getreader('utf-8') reads an encoded file object to product a decoded file object on the fly. We don't need this. We read the StringIO object line by line into csv.reader, which is consumes encoded text and parses the CSV format out of it. Then we decode each cell one by one as we pass it into the data table csv.QUOTE_NONE (as well as the r-prefix on r'''string''') are vital since we're copy-pasting directly from Excel. The string should be treated as "literally" ("raw") as possible. """ |
if not isinstance(csvstring, basestring):
raise Exception("If trying to construct a DataTable with "
"a list of lists, just use the main "
"constructor. Make sure to include a header row")
stringio = StringIO(csvstring.encode('utf-8'))
csv_data = csv.reader((line for line in stringio),
delimiter=delimiter,
dialect=csv.excel,
quotechar=quotechar,
quoting=csv.QUOTE_NONE)
new_datatable = cls((s.decode('utf-8') for s in row)
for row in csv_data)
for field in new_datatable.fields:
new_datatable[field] = parse_column(new_datatable[field])
return new_datatable |
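On Python 3 the encode/decode round-trip disappears, since `io.StringIO` already yields decoded text; a hedged Python 3 sketch of the same parse (not the library's actual constructor):

```python
import csv
import io

def table_from_csv_string(text, delimiter=','):
    """Parse a delimited string into a list of header-keyed dicts."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter,
                        quoting=csv.QUOTE_NONE)
    rows = list(reader)
    header, body = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in body]

records = table_from_csv_string('name,age\nada,36\n')
```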