| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def now(utc=False):
"""Returns the current time. :param utc: If ``True``, returns a timezone-aware ``datetime`` object in UTC. When ``False`` (the default), returns a naive ``datetime`` object in local time. :return: A ``datetime`` object representing the current time at the time of the call. """ |
if utc:
return datetime.datetime.utcnow().replace(tzinfo=dateutil.tz.tzutc())
else:
return datetime.datetime.now() |
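The same behavior can be sketched with only the standard library, assuming stdlib ``datetime.timezone.utc`` is an acceptable stand-in for ``dateutil.tz.tzutc()``:

```python
import datetime

def now(utc=False):
    # Aware UTC datetime when utc=True, naive local datetime otherwise.
    if utc:
        return datetime.datetime.now(datetime.timezone.utc)
    return datetime.datetime.now()

aware = now(utc=True)
naive = now()
```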
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def merge_envs(*args):
"""Union of one or more dictionaries. In case of duplicate keys, the values in the right-most arguments will squash (overwrite) the value provided by any dict preceding it. :param args: Sequence of ``dict`` objects that should be merged. :return: A ``dict`` containing the union of keys in all input dicts. """ |
env = {}
for arg in args:
if not arg:
continue
env.update(arg)
return env |
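A quick usage sketch showing the squash order, with hypothetical environment values: later arguments win and falsy arguments are skipped.

```python
def merge_envs(*args):
    # Union of dicts; right-most values overwrite earlier ones,
    # and falsy arguments (None, {}) are skipped entirely.
    env = {}
    for arg in args:
        if not arg:
            continue
        env.update(arg)
    return env

merged = merge_envs({'PATH': '/usr/bin', 'LANG': 'C'}, None, {'LANG': 'en_US.UTF-8'})
```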
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_once(name, cmd, env, shutdown, loop=None, utc=False):
"""Starts a child process and waits for its completion. .. note:: This function is a coroutine. Standard output and error streams are captured and forwarded to the parent process' standard output. Each line is prefixed with the current time (as measured by the parent process) and the child process ``name``. :param name: Label for the child process. Will be used as a prefix to all lines captured by this child process. :param cmd: Command-line that will be used to invoke the child process. Can be a string or sequence of strings. When a string is passed, ``shlex.split()`` will be used to break it into a sequence of strings with smart quoting analysis. If this does not give the intended results, break it down as you see fit and pass a sequence of strings. :param env: Environment variables that should be injected in the child process. If ``None``, the parent's environment will be inherited as it. If a ``dict`` is provided, this will overwrite the entire environment; it is the caller's responsibility to merge this with the parent's environment if they see fit. :param shutdown: Future that the caller will fulfill to indicate that the process should be killed early. When this is set, the process is sent SIGINT and then is let complete naturally. :param loop: Event loop to use. When ``None``, the default event loop is used. :param utc: When ``True``, the timestamps are logged using the current time in UTC. :return: A future that will be completed when the process has completed. Upon completion, the future's result will contain the process' exit status. """ |
# Get the default event loop if necessary.
loop = loop or asyncio.get_event_loop()
# Launch the command into a child process.
if isinstance(cmd, str):
cmd = shlex.split(cmd)
process = yield from asyncio.create_subprocess_exec(
*cmd,
env=env,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.STDOUT,
)
print('%s [strawboss] %s(%d) spawned.' % (
now(utc).isoformat(), name, process.pid
))
# Exhaust the child's standard output stream.
#
# TODO: close stdin for new process.
# TODO: terminate the process after the grace period.
ready = asyncio.ensure_future(process.wait())
pending = {
shutdown,
ready,
asyncio.ensure_future(process.stdout.readline()),
}
while not ready.done():
done, pending = yield from asyncio.wait(
pending,
return_when=asyncio.FIRST_COMPLETED,
)
for future in done:
# React to a request to shutdown the process.
#
# NOTE: shutdown is asynchronous unless the process completion
# notification is "in flight". We forward the request to
# shutdown and then wait until the child process completes.
if future is shutdown:
try:
process.kill()
except ProcessLookupError:
pass
else:
print('%s [strawboss] %s(%d) killed.' % (
now(utc).isoformat(), name, process.pid
))
continue
# React to process death (natural, killed or terminated).
if future is ready:
exit_code = yield from future
print('%s [strawboss] %s(%d) completed with exit status %d.' % (
now(utc).isoformat(), name, process.pid, exit_code
))
continue
# React to stdout having a full line of text.
data = yield from future
if not data:
print('%s [strawboss] EOF from %s(%d).' % (
now(utc).isoformat(), name, process.pid,
))
continue
data = data.decode('utf-8').strip()
print('%s [%s] %s' % (
now(utc).isoformat(), name, data
))
pending.add(asyncio.ensure_future(process.stdout.readline()))
# Cancel any remaining tasks (e.g. readline).
for future in pending:
if future is shutdown:
continue
future.cancel()
# Pass the exit code back to the caller.
return exit_code |
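The heart of ``run_once`` is racing several futures with ``asyncio.wait(..., return_when=FIRST_COMPLETED)`` and cancelling the leftovers. A minimal standalone sketch of that pattern, using modern ``async``/``await`` syntax rather than ``yield from``, with two sleeps standing in for the real futures:

```python
import asyncio

async def race():
    # Two futures stand in for "stdout line ready" and "shutdown requested".
    line = asyncio.ensure_future(asyncio.sleep(0.01, result='line'))
    shutdown = asyncio.ensure_future(asyncio.sleep(10, result='shutdown'))
    done, pending = await asyncio.wait(
        {line, shutdown}, return_when=asyncio.FIRST_COMPLETED)
    for fut in pending:
        fut.cancel()  # mirrors the cleanup at the end of run_once
    return sorted(f.result() for f in done)

results = asyncio.run(race())
```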
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_and_respawn(shutdown, loop=None, **kwds):
"""Starts a child process and re-spawns it every time it completes. .. note:: This function is a coroutine. :param shutdown: Future that the caller will fulfill to indicate that the process should not be re-spawned. It is also passed to ``run_once()`` to indicate that the currently running process should be killed early. :param loop: Event loop to use. Defaults to the ``asyncio.get_event_loop()``. :param kwds: Arguments to forward to :py:func:`run_once`. :return: A future that will be completed when the process has stopped re-spawning and has completed. The future has no result. """ |
# Get the default event loop if necessary.
loop = loop or asyncio.get_event_loop()
while not shutdown.done():
t = loop.create_task(run_once(shutdown=shutdown, loop=loop, **kwds))
yield from t |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def add_date(log):
'''Useful for making the name of a log unique by appending a timestamp'''
return '{base} - {time}.log'.format(
base=os.path.splitext(log)[0],
time=strftime("%a, %d %b %Y %H-%M-%S", gmtime())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def delete_log(self, log):
'''check we don't delete anything unintended'''
if os.path.splitext(log)[-1] != '.log':
raise Exception('File without .log extension was passed in for deletion')
with suppress(Exception):
os.remove(log) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_logs(self):
'''returns logs from disk, requires .log extenstion'''
folder = os.path.dirname(self.pcfg['log_file'])
for path, dirs, files in os.walk(folder):
for file in files:
if os.path.splitext(file)[-1] == '.log':
yield os.path.join(path, file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _uniquename(self, log):
'''
renames the log to ensure we get no clashes on the server
subclass this to change the path etc'''
return '{hostname} - {time}.log'.format(
hostname=os.getenv('USERNAME'),
time=strftime("%a, %d %b %Y %H-%M-%S", gmtime())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def arg_strings(parsed_args, name=None):
"""A list of all strings for the named arg""" |
name = name or 'arg_strings'
value = getattr(parsed_args, name, [])
if isinstance(value, str):
return [value]
try:
return [v for v in value if isinstance(v, str)]
except TypeError:
return [] |
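A usage sketch with an ``argparse.Namespace`` holding hypothetical attributes: non-string items are filtered out, a bare string is wrapped, and non-iterable or missing attributes yield an empty list.

```python
import argparse

def arg_strings(parsed_args, name=None):
    # Collect only the string values for the named attribute.
    name = name or 'arg_strings'
    value = getattr(parsed_args, name, [])
    if isinstance(value, str):
        return [value]
    try:
        return [v for v in value if isinstance(v, str)]
    except TypeError:
        return []

ns = argparse.Namespace(files=['a.txt', 7, 'b.txt'], single='only', count=3)
```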
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_module_file(txt, directory):
"""Create a file in the given directory with a valid module name populated with the given txt. Returns: A path to the file""" |
name = nonpresent_module_filename()
path = os.path.join(directory, name)
with open(path, 'w') as fh:
fh.write(txt)
return path |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nonpresent_module_filename():
"""Return module name that doesn't already exist""" |
while True:
module_name = get_random_name()
loader = pkgutil.find_loader(module_name)
if loader is not None:
continue
importlib.invalidate_caches()
return "{}.py".format(module_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_random_name():
"""Return random lowercase name""" |
char_seq = []
name_source = random.randint(1, 2**8-1)
current_value = name_source
while current_value > 0:
char_offset = current_value % 26
current_value = current_value - random.randint(1, 26)
char_seq.append(chr(char_offset + ord('a')))
name = ''.join(char_seq)
assert re.match(VALID_PACKAGE_RE, name)
return name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_header_dict(response, header):
""" returns a dictionary of the cache control headers the same as is used by django.utils.cache.patch_cache_control if there are no Cache-Control headers returns and empty dict """ |
def dictitem(s):
t = s.split('=', 1)
if len(t) > 1:
return (t[0].lower(), t[1])
else:
return (t[0].lower(), True)
if response.has_header(header):
hd = dict([dictitem(el) for el in cc_delim_re.split(response[header])])
else:
hd = {}
return hd |
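The parsing logic can be exercised without a Django response object; ``cc_delim_re`` below is an assumption standing in for Django's comma-delimiter regex:

```python
import re

cc_delim_re = re.compile(r'\s*,\s*')  # assumed equivalent of Django's delimiter

def parse_header_value(value):
    # Same dictitem logic as get_header_dict, applied to a raw string:
    # bare tokens map to True, key=value pairs keep their value.
    def dictitem(s):
        t = s.split('=', 1)
        if len(t) > 1:
            return (t[0].lower(), t[1])
        return (t[0].lower(), True)
    return dict(dictitem(el) for el in cc_delim_re.split(value))

cc = parse_header_value('max-age=600, no-transform, Private')
```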
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_header_dict(response, header, header_dict):
"""Formats and sets a header dict in a response, inververs of get_header_dict.""" |
def dictvalue(t):
if t[1] is True:
return t[0]
return t[0] + '=' + smart_str(t[1])
response[header] = ', '.join([dictvalue(el) for el in header_dict.items()]) |
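A standalone sketch of the formatting half, showing how True-valued keys round back to bare tokens (``str`` stands in for Django's ``smart_str`` here):

```python
def format_header_dict(header_dict):
    # True-valued keys become bare tokens; everything else is key=value.
    def dictvalue(t):
        if t[1] is True:
            return t[0]
        return t[0] + '=' + str(t[1])
    return ', '.join(dictvalue(el) for el in header_dict.items())

value = format_header_dict({'max-age': 600, 'no-cache': True})
```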
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def smart_import(mpath):
"""Given a path smart_import will import the module and return the attr reffered to.""" |
try:
rest = __import__(mpath)
except ImportError:
split = mpath.split('.')
rest = smart_import('.'.join(split[:-1]))
rest = getattr(rest, split[-1])
return rest |
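A sketch of the same idea using ``importlib.import_module``, which returns the leaf module; note (as an observation, not part of the source) that plain ``__import__('a.b.c')`` returns the top-level package ``a``, so the original resolves only two-part paths reliably:

```python
import importlib

def smart_import(mpath):
    # Import the deepest module on the path, then walk remaining
    # segments as attributes.
    try:
        return importlib.import_module(mpath)
    except ImportError:
        module_path, attr = mpath.rsplit('.', 1)
        return getattr(smart_import(module_path), attr)

escape = smart_import('xml.sax.saxutils.escape')
```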
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def strip_wsgi(request):
"""Strip WSGI data out of the request META data.""" |
meta = copy(request.META)
for key in meta:
if key[:4] == 'wsgi':
meta[key] = None
return meta |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def patch_headers(self, response):
"""Set the headers we want for caching.""" |
# Remove Vary:Cookie if we want to cache non-anonymous
if not getattr(settings, 'BETTERCACHE_ANONYMOUS_ONLY', False):
vdict = get_header_dict(response, 'Vary')
try:
vdict.pop('cookie')
except KeyError:
pass
else:
set_header_dict(response, 'Vary', vdict)
# Set max-age, post-check and pre-check
cc_headers = get_header_dict(response, 'Cache-Control')
try:
timeout = cc_headers['max-age']
except KeyError:
timeout = settings.BETTERCACHE_CACHE_MAXAGE
cc_headers['max-age'] = timeout
# This should never happen but let's be safe
if timeout == 0:
return response
if 'pre-check' not in cc_headers:
cc_headers['pre-check'] = timeout
if 'post-check' not in cc_headers:
cc_headers['post-check'] = int(timeout * settings.BETTERCACHE_EDGE_POSTCHECK_RATIO)
set_header_dict(response, 'Cache-Control', cc_headers)
# this should be the main/first place we're setting edge control so we can just set what we want
ec_dict = {'cache-maxage' : settings.BETTERCACHE_EDGE_MAXAGE}
set_header_dict(response, 'Edge-Control', ec_dict)
response['Last-Modified'] = http_date()
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def should_cache(self, request, response):
""" Given the request and response should it be cached """ |
if not getattr(request, '_cache_update_cache', False):
return False
if not response.status_code in getattr(settings, 'BETTERCACHE_CACHEABLE_STATUS', CACHEABLE_STATUS):
return False
if getattr(settings, 'BETTERCACHE_ANONYMOUS_ONLY', False) and self.session_accessed and request.user.is_authenticated:
return False
if self.has_uncacheable_headers(response):
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def should_regenerate(self, response):
""" Check if this page was originally generated less than LOCAL_POSTCHECK seconds ago """ |
if not response.has_header('Last-Modified'):
return False
last_modified = parse_http_date(response['Last-Modified'])
next_regen = last_modified + settings.BETTERCACHE_LOCAL_POSTCHECK
return time.time() > next_regen |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_uncacheable_headers(self, response):
""" Should this response be cached based on it's headers broken out from should_cache for flexibility """ |
cc_dict = get_header_dict(response, 'Cache-Control')
if cc_dict:
if 'max-age' in cc_dict and cc_dict['max-age'] == '0':
return True
if 'no-cache' in cc_dict:
return True
if 'private' in cc_dict:
return True
if response.has_header('Expires'):
if parse_http_date(response['Expires']) < time.time():
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_cache(self, request, response):
""" caches the response supresses and logs exceptions""" |
cache_key = self.cache_key(request)
try:
# presumably this is to deal with requests with attr functions that won't pickle
if hasattr(response, 'render') and callable(response.render):
response.add_post_render_callback(lambda r: cache.set(cache_key, (r, time.time(),), settings.BETTERCACHE_LOCAL_MAXAGE))
else:
cache.set(cache_key, (response, time.time(),), settings.BETTERCACHE_LOCAL_MAXAGE)
except Exception:
logger.error("failed to cache to %s" % cache_key) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cache_key(self, request, method=None):
""" the cache key is the absolute uri and the request method """ |
if method is None:
method = request.method
return "bettercache_page:%s:%s" %(request.build_absolute_uri(), method) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_task(self, request, response):
"""send off a celery task for the current page and recache""" |
# TODO is this too messy?
from bettercache.tasks import GeneratePage
try:
GeneratePage.apply_async((strip_wsgi(request),))
except Exception:
logger.error("failed to send celery task")
self.set_cache(request, response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def entity_to_unicode(string):
""" Quick convert unicode HTML entities to unicode characters using a regular expression replacement """ |
# Selected character replacements that have been seen
replacements = []
replacements.append((r"α", u"\u03b1"))
replacements.append((r"β", u"\u03b2"))
replacements.append((r"γ", u"\u03b3"))
replacements.append((r"δ", u"\u03b4"))
replacements.append((r"ε", u"\u03b5"))
replacements.append((r"º", u"\u00ba"))
replacements.append((r"ï", u"\u00cf"))
replacements.append((r"“", '"'))
replacements.append((r"”", '"'))
# First, replace numeric entities with unicode
string = re.sub(r"&#x(....);", repl, string)
# Second, replace some specific entities specified in the list
for entity, replacement in replacements:
string = re.sub(entity, replacement, string)
return string |
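A self-contained sketch of the same conversion; the ``repl`` helper is an assumption about the module-level function the original relies on for numeric hex entities, and the replacement list is abbreviated:

```python
import re

def entity_to_unicode(string):
    # repl: convert a numeric hex entity like &#x00e9; to its character.
    def repl(match):
        return chr(int(match.group(1), 16))
    replacements = [
        (r"&alpha;", u"\u03b1"),
        (r"&ldquo;", '"'),
        (r"&rdquo;", '"'),
    ]
    # First, replace numeric entities with unicode.
    string = re.sub(r"&#x(....);", repl, string)
    # Second, replace the named entities in the list.
    for entity, replacement in replacements:
        string = re.sub(entity, replacement, string)
    return string

text = entity_to_unicode('&#x00e9;clair, &alpha;, &ldquo;quoted&rdquo;')
```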
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_tag(tag_name, string):
""" Remove open and close tags - the tags themselves only - using a non-greedy angle bracket pattern match """ |
if not string:
return string
pattern = re.compile('</?' + tag_name + '.*?>')
string = pattern.sub('', string)
return string |
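A usage sketch with a hypothetical XML fragment: the non-greedy pattern strips the named open and close tags but leaves their content and other tags untouched.

```python
import re

def remove_tag(tag_name, string):
    # Strip <tag ...> and </tag> only; content between them survives.
    if not string:
        return string
    return re.compile('</?' + tag_name + '.*?>').sub('', string)

cleaned = remove_tag('italic', '<p>an <italic>emphasised</italic> word</p>')
```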
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def version_from_xml_filename(filename):
"extract the numeric version from the xml filename"
try:
filename_parts = filename.split(os.sep)[-1].split('-')
except AttributeError:
return None
if len(filename_parts) == 3:
try:
return int(filename_parts[-1].lstrip('v').rstrip('.xml'))
except ValueError:
return None
else:
return None |
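A usage sketch with a hypothetical filename in the expected ``name-id-vN.xml`` shape; anything else, including non-string input, yields ``None``.

```python
import os

def version_from_xml_filename(filename):
    # Expects exactly three dash-separated parts; the last is "vN.xml".
    try:
        filename_parts = filename.split(os.sep)[-1].split('-')
    except AttributeError:
        return None
    if len(filename_parts) == 3:
        try:
            return int(filename_parts[-1].lstrip('v').rstrip('.xml'))
        except ValueError:
            return None
    return None

version = version_from_xml_filename('elife-02935-v2.xml')
```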
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_last_commit_to_master(repo_path="."):
""" returns the last commit on the master branch. It would be more ideal to get the commit from the branch we are currently on, but as this is a check mostly to help with production issues, returning the commit from master will be sufficient. """ |
last_commit = None
repo = None
try:
repo = Repo(repo_path)
except (InvalidGitRepositoryError, NoSuchPathError):
repo = None
if repo:
try:
last_commit = repo.commits()[0]
except AttributeError:
# Optimised for version 0.3.2.RC1
last_commit = repo.head.commit
return str(last_commit) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_journal_volume(pub_date, year):
""" volume value is based on the pub date year pub_date is a python time object """ |
try:
volume = str(pub_date.tm_year - year + 1)
except TypeError:
volume = None
except AttributeError:
volume = None
return volume |
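A usage sketch with a hypothetical launch year of 2012: volume 1 is the launch year and each later year increments it, while invalid input yields ``None``.

```python
import time

def calculate_journal_volume(pub_date, year):
    # pub_date is a time.struct_time; year is the journal's launch year.
    try:
        return str(pub_date.tm_year - year + 1)
    except (TypeError, AttributeError):
        return None

volume = calculate_journal_volume(time.strptime('2015-06-01', '%Y-%m-%d'), 2012)
```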
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def author_name_from_json(author_json):
"concatenate an author name from json data"
author_name = None
if author_json.get('type'):
if author_json.get('type') == 'group' and author_json.get('name'):
author_name = author_json.get('name')
elif author_json.get('type') == 'person' and author_json.get('name'):
if author_json.get('name').get('preferred'):
author_name = author_json.get('name').get('preferred')
return author_name |
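A condensed sketch of the same branching with hypothetical author records ("The Example Consortium" and "Jane Doe" are made-up values illustrating the two JSON shapes):

```python
def author_name_from_json(author_json):
    # Group authors use a plain name string; person authors nest the
    # preferred form under name['preferred'].
    author_name = None
    if author_json.get('type') == 'group' and author_json.get('name'):
        author_name = author_json.get('name')
    elif author_json.get('type') == 'person' and author_json.get('name'):
        author_name = author_json.get('name').get('preferred')
    return author_name

group = {'type': 'group', 'name': 'The Example Consortium'}
person = {'type': 'person', 'name': {'preferred': 'Jane Doe', 'index': 'Doe, Jane'}}
```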
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def text_from_affiliation_elements(department, institution, city, country):
"format an author affiliation from details"
return ', '.join(element for element in [department, institution, city, country] if element) |
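A usage sketch with hypothetical affiliation details, showing that falsy elements (``None``, empty string) are simply dropped before joining:

```python
def text_from_affiliation_elements(department, institution, city, country):
    # Join only the elements that are present.
    return ', '.join(el for el in [department, institution, city, country] if el)

affiliation = text_from_affiliation_elements('Dept of Biology', 'Some University', None, 'UK')
```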
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_variants(fns, remove=['DBSNP'], keep_only=True, min_tumor_f=0.1, min_tumor_cov=14, min_normal_cov=8):
"""Read muTect results from the list of files fns Parameters fns : list List of MuTect output files. Returns ------- variants : pandas.DataFrame Pandas DataFrame summarizing variant calling results. remove : list List of site types for column "dbsnp_site" to remove. keep_only : boolean If True, only keep variants with 'KEEP' in "judgement" column. Otherwise, keep all variants. min_tumor_f : float between 0 and 1 Minimum tumor allelic fraction. min_tumor_cov : int > 0 Minimum coverage of the variant in the tumor. min_normal_cov : int > 0 Minimum coverage of the variant in the normal. """ |
variants = []
for i, f in enumerate(fns):
# If keep_only, use awk to only grab those lines for big speedup.
if keep_only:
import subprocess
res = subprocess.check_output(
'awk \'$35 == "KEEP"\' {}'.format(f), shell=True).decode()
# Define columns up front so the empty-result branch can use them too.
columns = [u'contig', u'position', u'context', u'ref_allele',
u'alt_allele', u'tumor_name', u'normal_name',
u'score', u'dbsnp_site', u'covered', u'power',
u'tumor_power', u'normal_power', u'total_pairs',
u'improper_pairs', u'map_Q0_reads', u't_lod_fstar',
u'tumor_f', u'contaminant_fraction',
u'contaminant_lod', u't_ref_count', u't_alt_count',
u't_ref_sum', u't_alt_sum', u't_ref_max_mapq',
u't_alt_max_mapq', u't_ins_count', u't_del_count',
u'normal_best_gt', u'init_n_lod', u'n_ref_count',
u'n_alt_count', u'n_ref_sum', u'n_alt_sum',
u'judgement']
if res.strip() != '':
tdf = pd.DataFrame(
[x.split('\t') for x in res.strip().split('\n')],
columns=columns)
tdf = tdf.apply(pd.to_numeric, errors='ignore')
else:
tdf = pd.DataFrame(columns=columns)
tdf['contig'] = tdf.contig.astype(object)
else:
tdf = pd.read_table(f, index_col=None, header=0, skiprows=1,
low_memory=False,
dtype={'contig':object})
for t in remove:
tdf = tdf[tdf.dbsnp_site != t]
tdf = tdf[tdf.tumor_f > min_tumor_f]
tdf = tdf[tdf.t_ref_count + tdf.t_alt_count > min_tumor_cov]
tdf = tdf[tdf.n_ref_count + tdf.n_alt_count > min_normal_cov]
variants.append(tdf)
variants = pd.concat(variants)
variants.index = range(variants.shape[0])
return variants |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def traverse(self, traverser, **kwargs):
""" Implementation of mandatory interface for traversing the whole rule tree. This method will call the ``traverse`` method of child rule tree and then perform arbitrary conversion of the result before returning it back. The optional ``kwargs`` are passed down to traverser callback as additional arguments and can be used to provide additional data or context. :param pynspect.rules.RuleTreeTraverser traverser: Traverser object providing appropriate interface. :param dict kwargs: Additional optional keyword arguments to be passed down to traverser callback. """ |
result = self.rule.traverse(traverser, **kwargs)
return self.conversion(result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_variable_compilation(self, path, compilation_cbk, listclass):
""" Register given compilation method for variable on given path. :param str path: JPath for given variable. :param callable compilation_cbk: Compilation callback to be called. :param class listclass: List class to use for lists. """ |
self.compilations_variable[path] = {
'callback': compilation_cbk,
'listclass': listclass
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_function_compilation(self, func, compilation_cbk, listclass):
""" Register given compilation method for given function. :param str path: Function name. :param callable compilation_cbk: Compilation callback to be called. :param class listclass: List class to use for lists. """ |
self.compilations_function[func] = {
'callback': compilation_cbk,
'listclass': listclass
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cor_compile(rule, var, val, result_class, key, compilation_list):
""" Actual compilation worker method. """ |
compilation = compilation_list.get(key, None)
if compilation:
if isinstance(val, ListRule):
result = []
for itemv in val.value:
result.append(compilation['callback'](itemv))
val = compilation['listclass'](result)
else:
val = compilation['callback'](val)
return result_class(rule.operation, var, val) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compile_operation_rule(self, rule, left, right, result_class):
""" Compile given operation rule, when possible for given compination of operation operands. """ |
# Make sure variables always have constant with correct datatype on the
# opposite side of operation.
if isinstance(left, VariableRule) and isinstance(right, (ConstantRule, ListRule)):
return self._cor_compile(
rule,
left,
right,
result_class,
clean_variable(left.value),
self.compilations_variable
)
if isinstance(right, VariableRule) and isinstance(left, (ConstantRule, ListRule)):
return self._cor_compile(
rule,
right,
left,
result_class,
clean_variable(right.value),
self.compilations_variable
)
# Make sure functions always have constant with correct datatype on the
# opposite side of operation.
if isinstance(left, FunctionRule) and isinstance(right, (ConstantRule, ListRule)):
return self._cor_compile(
rule,
left,
right,
result_class,
left.function,
self.compilations_function
)
if isinstance(right, FunctionRule) and isinstance(left, (ConstantRule, ListRule)):
return self._cor_compile(
rule,
right,
left,
result_class,
right.function,
self.compilations_function
)
# In all other cases just keep things the way they are.
return result_class(rule.operation, left, right) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _calculate_operation_math(self, rule, left, right):
""" Perform compilation of given math operation by actually calculating given math expression. """ |
# Attempt to keep integer data type for the result, when possible.
if isinstance(left, IntegerRule) and isinstance(right, IntegerRule):
result = self.evaluate_binop_math(rule.operation, left.value, right.value)
if isinstance(result, list):
return ListRule([IntegerRule(r) for r in result])
return IntegerRule(result)
# Otherwise the result is float.
if isinstance(left, NumberRule) and isinstance(right, NumberRule):
result = self.evaluate_binop_math(rule.operation, left.value, right.value)
if isinstance(result, list):
return ListRule([FloatRule(r) for r in result])
return FloatRule(result)
# This point should never be reached.
raise Exception('math operation expects numeric rule operands')
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_db(directory, engine=None):
"""Get a database :param directory: The root data directory :param engine: a pre-created SQLAlchemy engine (default: in-memory SQLite) """ |
if engine is None:
engine = create_engine('sqlite://')
tables.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
db = Session()
if directory is not None:
load_from_directory(db, directory)
return db |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind_unix_socket(file_, mode=0o600, backlog=_DEFAULT_BACKLOG):
"""Creates a listening unix socket. If a socket with the given name already exists, it will be deleted. If any other file with that name exists, an exception will be raised. Returns a socket object (not a list of socket objects like `bind_sockets`) """ |
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setblocking(0)
try:
st = os.stat(file_)
except OSError as err:
if err.errno != errno.ENOENT:
raise
else:
if stat.S_ISSOCK(st.st_mode):
os.remove(file_)
else:
raise ValueError('File %s exists and is not a socket' % file_)
sock.bind(file_)
os.chmod(file_, mode)
sock.listen(backlog)
return sock |
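A self-contained, Unix-only sketch of the same routine; the ``128`` backlog is an assumption standing in for ``_DEFAULT_BACKLOG``, and the bind path is a throwaway temp directory:

```python
import errno
import os
import socket
import stat
import tempfile

def bind_unix_socket(file_, mode=0o600, backlog=128):
    # Stale socket files are unlinked; any other existing file is an error.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setblocking(False)
    try:
        st = os.stat(file_)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise
    else:
        if stat.S_ISSOCK(st.st_mode):
            os.remove(file_)
        else:
            raise ValueError('File %s exists and is not a socket' % file_)
    sock.bind(file_)
    os.chmod(file_, mode)
    sock.listen(backlog)
    return sock

path = os.path.join(tempfile.mkdtemp(), 'app.sock')
sock = bind_unix_socket(path)
```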
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def firsthash(frame, removedupes=False):
'''
Hashes the first time step. Only will work as long as
the hash can fit in a uint64.
Parameters:
-----------
frame : first frame.
Keywords:
---------
removedupes : remove duplicates for the given frame.
Returns the hashes for the frame along with a dictionary
of everything needed to generate hashes from the genhash function.
'''
#hashes must have i8 available
#otherwise, we'll have overflow
def avgdiff(d):
d=np.sort(d);
d = d[1:] - d[:-1]
ret = np.average(d[np.nonzero(d)]);
if np.isnan(ret):
return 1.0;
return ret;
def hasextent(l,eps=1e-10):
#will I one day make pic sims on the pm scale??
dim = frame['data'][l];
return np.abs(dim.max()-dim.min()) > eps;
fields = list(frame['data'].dtype.names);
dims = [ i for i in ['xi','yi','zi']
if i in fields and hasextent(i) ];
ip = np.array([ frame['data'][l]
for l in dims ]).T;
avgdiffs = np.array([avgdiff(a) for a in ip.T]);
mins = ip.min(axis=0);
ips = (((ip - mins)/avgdiffs).round().astype('uint64'))
pws = np.floor(np.log10(ips.max(axis=0))).astype('uint64')+1
pws = list(pws);
pw = [0]+[ ipw+jpw for ipw,jpw in
zip([0]+pws[:-1],pws[:-1]) ];
pw = 10**np.array(pw);#.astype('int64');
#the dictionary used for hashing
d=dict(dims=dims, mins=mins, avgdiffs=avgdiffs, pw=pw);
hashes = genhash(frame,removedupes=False,**d);
if removedupes:
#consider if the negation of this is faster for genhash
uni,counts = np.unique(hashes,return_counts=True);
d['dupes']=uni[counts>1]
dupei = np.in1d(hashes, d['dupes']);
hashes[dupei] = -1;
d['removedupes']=True;
return hashes,d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def genhash(frame,**kw):
'''
Generate the hashes for the given frame for a specification
given in the dictionary d returned from firsthash.
Parameters:
-----------
frame : frame to hash.
Keywords:
---------
d : hash specification generated from firsthash.
new : use new hashing, which isn't really hashing.
removedupes : put -1 in duplicates,
dims : specify dims. Supercedes the setting in `d'.
dupes : array of hashes known to be dupes.
ftype : type of floats. defaults to 'f'.
-- old keywords from old hashing --
mins : minima of each axis
avgdifs : average differences
pw : powers of each axis
Returns an array of the shape of the frames with hashes.
'''
getkw = mk_getkw(kw,genhash_defaults,prefer_passed=True);
dims = getkw('dims');
dupes= getkw('dupes');
if not getkw('new'):
ip = np.array([frame['data'][l] for l in dims]).T;
scaled = ((ip - getkw('mins'))/getkw('avgdiffs')).round().astype('int64');
hashes = (scaled*getkw('pw')).sum(axis=1).astype('int64');
else:
hashes = np.array([
struct.pack('{}{}'.format(len(dims),getkw('ftype')), *[p[l] for l in dims])
for p in frame['data']]);
if getkw('removedupes'):
#marking duplicated particles
if not getkw('dupes'):
hashes = np.unique(hashes);
else:
dupei = np.in1d(hashes, getkw('dupes'));
hashes[dupei] = -1
return hashes; |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def addhash(frame,**kw):
'''
helper function to add hashes to the given frame
given in the dictionary d returned from firsthash.
Parameters:
-----------
frame : frame to hash.
Keywords:
---------
same as genhash
Returns frame with added hashes, although it will be added in
place.
'''
hashes = genhash(frame,**kw);
frame['data'] = rfn.rec_append_fields(
frame['data'],'hash',hashes);
return frame; |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def sortframe(frame):
'''
sorts particles for a frame
'''
d = frame['data'];
sortedargs = np.lexsort([d['xi'],d['yi'],d['zi']])
d = d[sortedargs];
frame['data']=d;
return frame; |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def read_and_hash(fname, **kw):
'''
    Read and addhash each frame.
'''
return [addhash(frame, **kw) for frame in read(fname, **kw)]; |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def filter_hashes_from_file(fname, f, **kw):
'''
Obtain good hashes from a .p4 file with the dict hashd and a
function that returns good hashes. Any keywords will be
sent to read_and_hash.
Parameters:
-----------
fname -- filename of file.
f -- function that returns a list of good hashes.
'''
return np.concatenate([
frame['data']['hash'][f(frame)]
for frame in read_and_hash(fname, **kw)
]); |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_match(self, step) -> list: """Like matchers.CFParseMatcher.check_match but also adds the implicit parameters from the context """ |
args = []
match = super().check_match(step)
if match is None:
return None
for arg in match:
args.append(model.Argument.from_argument(arg))
for arg in self.context_params:
args.append(model.Argument(0, 0, "", None, name=arg, implicit=True))
return args |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert(self, pattern: str) -> str: """Convert the goat step string to CFParse String""" |
parameters = OrderedDict()
for parameter in self.signature.parameters.values():
annotation = self.convert_type_to_parse_type(parameter)
parameters[parameter.name] = "{%s:%s}" % (parameter.name, annotation)
formatter = GoatFormatter()
# We have to use vformat here to ensure that kwargs will be OrderedDict
values = parameters.values()
parameter_list = list(values)
converted_pattern = formatter.vformat(pattern, parameter_list, parameters)
self.context_params = formatter.unused_args
return converted_pattern |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def xml_to_json(root, tag_prefix=None, on_tag={}):
'''
Parses a XML element to JSON format.
This is a relatively generic function parsing a XML element
to JSON format. It does not guarantee any specific formal
behaviour but is empirically known to "work well" with respect
to the author's needs. External verification of the returned
results by the user is therefore instrumental.
For bigger XML elements the whole procedure may take a while,
so the philosophy should be to save the laboriously mapped
JSON data structure to a file once you have it. This of course
also means that this functions is probably of little value
when you have to constantly JSONify big XMLs. In summary,
this function is mostly useful for one-time parsing of XML to
JSON for subsequent use of the resulting JSON data instead of
the XML-formated data.
Args:
root: A XML element
tag_prefix: A tag prefix which will be cut from the keys
on_tag: User-defined parsing for elements identified by tag
Returns:
A Python data structure corresponding to the JSON mapping
of the supplied XML element
'''
def get_key(tag):
if tag_prefix is not None:
return tag.split(tag_prefix)[1]
return tag
def parse_element(elmt):
key = get_key(elmt.tag)
if key in on_tag:
return on_tag[key](elmt)
items = dict(elmt.items())
if len(elmt) == 0:
if items:
return { **items, **{key : elmt.text} }
else:
return elmt.text
else:
tags = {child.tag for child in elmt}
max_children = max({len(child) for child in elmt})
if len(tags) == 1:
value_list = [parse_element(child) for child in elmt]
if items:
return { **items, **{key : value_list} }
else:
return value_list
elif len(tags) > 1:
tag2children = {tag: [] for tag in tags}
for child in elmt:
tag2children[child.tag].append(child)
if max_children == 0:
value_dict = {get_key(tag) : [child.text for child in children] if len(children) > 1
else children[0].text
for tag, children in tag2children.items()}
else:
value_dict = {get_key(tag) : [parse_element(child) for child in children] if len(children) > 1
else parse_element(children[0])
for tag, children in tag2children.items()}
if items:
return { **items, **value_dict }
else:
return value_dict
# ---
return parse_element(root) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _textwrap_slices(text, width, strip_leading_indent=False):
""" Nearly identical to textwrap.wrap except this routine is a tad bit safer in its algo that textwrap. I ran into some issues with textwrap output that make it unusable to this usecase as a baseline text wrapper. Further this utility returns slices instead of strings. So the slices can be used to extract your lines manually. """ |
if not isinstance(text, str):
raise TypeError("Expected `str` type")
chunks = (x for x in _textwrap_word_break.split(text) if x)
remaining = width
buf = []
lines = [buf]
whitespace = []
whitespace_len = 0
pos = 0
try:
chunk = next(chunks)
except StopIteration:
chunk = ''
if not strip_leading_indent and is_whitespace(chunk):
# Add leading indent for first line, but only up to one lines worth.
chunk_len = len(chunk)
if chunk_len >= width:
_add_slice(buf, slice(0, width))
buf = []
lines.append(buf)
else:
_add_slice(buf, slice(0, chunk_len))
remaining -= chunk_len
pos = chunk_len
try:
chunk = next(chunks)
except StopIteration:
chunk = ''
while True:
avail_len = remaining - whitespace_len
chunk_len = len(chunk)
if chunk == '\n':
buf = []
lines.append(buf)
whitespace = []
whitespace_len = 0
remaining = width
elif is_whitespace(chunk):
if buf:
_add_slice(whitespace, slice(pos, pos + chunk_len))
whitespace_len += chunk_len
elif len(chunk) > avail_len:
if not buf:
# Must hard split the chunk.
for x in whitespace:
_add_slice(buf, x)
_add_slice(buf, slice(pos, pos + avail_len))
chunk = chunk[avail_len:]
pos += avail_len
# Bump to next line without fetching the next chunk.
buf = []
lines.append(buf)
whitespace = []
whitespace_len = 0
remaining = width
continue
else:
if buf:
remaining -= whitespace_len
for x in whitespace:
_add_slice(buf, x)
whitespace = []
whitespace_len = 0
_add_slice(buf, slice(pos, pos + chunk_len))
remaining -= chunk_len
pos += chunk_len
try:
chunk = next(chunks)
except StopIteration:
break
return lines |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def vtmlrender(vtmarkup, plain=None, strict=False, vtmlparser=VTMLParser()):
""" Look for vt100 markup and render vt opcodes into a VTMLBuffer. """ |
if isinstance(vtmarkup, VTMLBuffer):
return vtmarkup.plain() if plain else vtmarkup
try:
vtmlparser.feed(vtmarkup)
vtmlparser.close()
except:
if strict:
raise
buf = VTMLBuffer()
buf.append_str(str(vtmarkup))
return buf
else:
buf = vtmlparser.getvalue()
return buf.plain() if plain else buf
finally:
vtmlparser.reset() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean_text(value, topic=False):
""" Replaces "profane" words with more suitable ones. Uses bleach to strip all but whitelisted html. Converts bbcode to Markdown """ |
for x in PROFANITY_REPLACEMENTS:
value = value.replace(x[0], x[1])
for bbset in BBCODE_REPLACEMENTS:
p = re.compile(bbset[0], re.DOTALL)
value = p.sub(bbset[1], value)
bleached = bleach.clean(value, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRIBUTES, strip=True)
# We want to retain markdown quotes and we'll be running bleach again in format_post.
    bleached = bleached.replace('&gt;', '>').replace('&amp;', '&')
return bleached |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_email_simple(value):
"""Return True if value looks like an email address.""" |
# An @ must be in the middle of the value.
if '@' not in value or value.startswith('@') or value.endswith('@'):
return False
try:
p1, p2 = value.split('@')
except ValueError:
# value contains more than one @.
return False
# Dot must be in p2 (e.g. example.com)
if '.' not in p2 or p2.startswith('.'):
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_links(text, trim_url_limit=None, nofollow=False, autoescape=False):
""" Finds URLs in text and attempts to handle correctly. Heavily based on django.utils.html.urlize With the additions of attempting to embed media links, particularly images. Works on http://, https://, www. links, and also on links ending in one of the original seven gTLDs (.com, .edu, .gov, .int, .mil, .net, and .org). Links can have trailing punctuation (periods, commas, close-parens) and leading punctuation (opening parens) and it'll still do the right thing. TO-DO: refactor to better leverage existing django.utils.html """ |
safe_input = isinstance(text, SafeData)
words = word_split_re.split(force_text(text))
for i, word in enumerate(words):
if '.' in word or ':' in word:
# Deal with punctuation.
lead, middle, trail = '', word, ''
stripped = middle.rstrip(TRAILING_PUNCTUATION_CHARS)
if middle != stripped:
trail = middle[len(stripped):] + trail
middle = stripped
for opening, closing in WRAPPING_PUNCTUATION:
if middle.startswith(opening):
middle = middle[len(opening):]
lead = lead + opening
# Keep parentheses at the end only if they're balanced.
if (middle.endswith(closing)
and middle.count(closing) == middle.count(opening) + 1):
middle = middle[:-len(closing)]
trail = closing + trail
# Make URL we want to point to.
url = None
if simple_url_re.match(middle):
url = smart_urlquote(middle)
elif simple_url_2_re.match(middle):
url = smart_urlquote('http://%s' % middle)
elif ':' not in middle and is_email_simple(middle):
local, domain = middle.rsplit('@', 1)
try:
domain = domain.encode('idna').decode('ascii')
except UnicodeError:
continue
if url:
u = url.lower()
if autoescape and not safe_input:
lead, trail = escape(lead), escape(trail)
url = escape(url)
# Photos
if u.endswith('.jpg') or u.endswith('.gif') or u.endswith('.png'):
middle = '<img src="%s">' % url
# Youtube
#'https://www.youtube.com/watch?v=gkqXgaUuxZg'
elif 'youtube.com/watch' in url:
parsed = urlparse.urlsplit(url)
query = urlparse.parse_qs(parsed.query)
token = query.get('v')
if token and len(token) > 0:
middle = '<iframe src="http://www.youtube.com/embed/%s" height="320" width="100%%"></iframe>' % token[0]
else:
middle = url
elif 'youtu.be/' in url:
try:
token = url.rsplit('/', 1)[1]
middle = '<iframe src="http://www.youtube.com/embed/%s" height="320" width="100%%"></iframe>' % token
except IndexError:
middle = six.u(url)
words[i] = mark_safe('%s%s%s' % (lead, middle, trail))
            else:
                if safe_input:
                    words[i] = mark_safe(word)
                elif autoescape:
                    words[i] = escape(word)
        elif safe_input:
            words[i] = mark_safe(word)
        elif autoescape:
            words[i] = escape(word)
return ''.join(words) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pool_process(func, iterable, cpus=cpu_count(), return_vals=False, cpu_reduction=0, progress_bar=False):
""" Multiprocessing helper function for performing looped operation using multiple processors. :param func: Function to call :param iterable: Iterable object to perform each function on :param cpus: Number of cpu cores, defaults to system's cpu count :param return_vals: Bool, returns output values when True :param cpu_reduction: Number of cpu core's to not use :param progress_bar: Display text based progress bar :return: """ |
with Pool(cpus - abs(cpu_reduction)) as pool:
# Return values returned by 'func'
if return_vals:
# Show progress bar
if progress_bar:
vals = [v for v in tqdm(pool.imap_unordered(func, iterable), total=len(iterable))]
# No progress bar
else:
vals = pool.map(func, iterable)
# Close pool and return values
pool.close()
# pool.join()
return vals
# Don't capture values returned by 'func'
else:
pool.map(func, iterable)
pool.close()
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map(self):
"""Perform a function on every item in an iterable.""" |
with Pool(self.cpu_count) as pool:
pool.map(self._func, self._iterable)
pool.close()
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map_return(self):
"""Perform a function on every item and return a list of yield values.""" |
with Pool(self.cpu_count) as pool:
vals = pool.map(self._func, self._iterable)
pool.close()
return vals |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map_tqdm(self):
""" Perform a function on every item while displaying a progress bar. :return: A list of yielded values """ |
with Pool(self.cpu_count) as pool:
vals = [v for v in tqdm(pool.imap_unordered(self._func, self._iterable), total=len(self._iterable))]
pool.close()
return vals |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split_by_line(content):
"""Split the given content into a list of items by newline. Both \r\n and \n are supported. This is done since it seems that TTY devices on POSIX systems use \r\n for newlines in some instances. If the given content is an empty string or a string of only whitespace, an empty list will be returned. If the given content does not contain any newlines, it will be returned as the only element in a single item list. Leading and trailing whitespace is remove from all elements returned. :param str content: Content to split by newlines :return: List of items that were separated by newlines. :rtype: list """ |
# Make sure we don't end up splitting a string with
# just a single trailing \n or \r\n into multiple parts.
stripped = content.strip()
if not stripped:
return []
if '\r\n' in stripped:
return _strip_all(stripped.split('\r\n'))
if '\n' in stripped:
return _strip_all(stripped.split('\n'))
return _strip_all([stripped]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_release_id(version=None):
"""Get a unique, time-based identifier for a deployment that optionally, also includes some sort of version number or release. If a version is supplied, the release ID will be of the form '$timestamp-$version'. For example: '20140214231159-1.4.1' If the version is not supplied the release ID will be of the form '$timestamp'. For example: '20140214231159' The timestamp component of this release ID will be generated using the current time in UTC. :param str version: Version to include in the release ID :return: Unique name for this particular deployment :rtype: str """ |
# pylint: disable=invalid-name
ts = datetime.utcnow().strftime(RELEASE_DATE_FMT)
if version is None:
return ts
return '{0}-{1}'.format(ts, version) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def try_repeatedly(method, max_retries=None, delay=None):
"""Execute the given Fabric call, retrying up to a certain number of times. The method is expected to be wrapper around a Fabric :func:`run` or :func:`sudo` call that returns the results of that call. The call will be executed at least once, and up to :code:`max_retries` additional times until the call executes with out failing. Optionally, a delay in seconds can be specified in between successive calls. :param callable method: Wrapped Fabric method to execute :param int max_retries: Max number of times to retry execution after a failed call :param float delay: Number of seconds between successive calls of :code:`method` :return: The results of running :code:`method` """ |
max_retries = max_retries if max_retries is not None else 1
delay = delay if delay is not None else 0
tries = 0
with warn_only():
while tries < max_retries:
res = method()
if not res.failed:
return res
tries += 1
time.sleep(delay)
# final try outside the warn_only block so that if it
# fails it'll just blow up or do whatever it was going to
# do anyway.
return method() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_current_release(self):
"""Get the release ID of the "current" deployment, None if there is no current deployment. This method performs one network operation. :return: Get the current release ID :rtype: str """ |
current = self._runner.run("readlink '{0}'".format(self._current))
if current.failed:
return None
return os.path.basename(current.strip()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_previous_release(self):
"""Get the release ID of the deployment immediately before the "current" deployment, ``None`` if no previous release could be determined. This method performs two network operations. :return: The release ID of the release previous to the "current" release. :rtype: str """ |
releases = self.get_releases()
if not releases:
return None
current = self.get_current_release()
if not current:
return None
try:
current_idx = releases.index(current)
except ValueError:
return None
try:
return releases[current_idx + 1]
except IndexError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cleanup(self, keep=5):
"""Remove all but the ``keep`` most recent releases. If any of the candidates for deletion are pointed to by the 'current' symlink, they will not be deleted. This method performs N + 2 network operations where N is the number of old releases that are cleaned up. :param int keep: Number of old releases to keep around """ |
releases = self.get_releases()
current_version = self.get_current_release()
to_delete = [version for version in releases[keep:] if version != current_version]
for release in to_delete:
self._runner.run("rm -rf '{0}'".format(os.path.join(self._releases, release))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_directories(self, use_sudo=True):
"""Create the minimal required directories for deploying multiple releases of a project. By default, creation of directories is done with the Fabric ``sudo`` function but can optionally use the ``run`` function. This method performs one network operation. :param bool use_sudo: If ``True``, use ``sudo()`` to create required directories. If ``False`` try to create directories using the ``run()`` command. """ |
runner = self._runner.sudo if use_sudo else self._runner.run
runner("mkdir -p '{0}'".format(self._releases)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_permissions( self, owner, file_perms=PERMS_FILE_DEFAULT, dir_perms=PERMS_DIR_DEFAULT, use_sudo=True):
"""Set the owner and permissions of the code deploy. The owner will be set recursively for the entire code deploy. The directory permissions will be set on only the base of the code deploy and the releases directory. The file permissions will be set recursively for the entire code deploy. If not specified default values will be used for file or directory permissions. By default the Fabric ``sudo`` function will be used for changing the owner and permissions of the code deploy. Optionally, you can pass the ``use_sudo=False`` argument to skip trying to change the owner of the code deploy and to use the ``run`` function to change permissions. This method performs between three and four network operations depending on if ``use_sudo`` is false or true, respectively. :param str owner: User and group in the form 'owner:group' to set for the code deploy. :param str file_perms: Permissions to set for all files in the code deploy in the form 'u+perms,g+perms,o+perms'. Default is ``u+rw,g+rw,o+r``. :param str dir_perms: Permissions to set for the base and releases directories in the form 'u+perms,g+perms,o+perms'. Default is ``u+rwx,g+rws,o+rx``. :param bool use_sudo: If ``True``, use ``sudo()`` to change ownership and permissions of the code deploy. If ``False`` try to change permissions using the ``run()`` command, do not change ownership. .. versionchanged:: 0.2.0 ``use_sudo=False`` will no longer attempt to change ownership of the code deploy since this will just be a no-op or fail. """ |
runner = self._runner.sudo if use_sudo else self._runner.run
if use_sudo:
runner("chown -R '{0}' '{1}'".format(owner, self._base))
for path in (self._base, self._releases):
runner("chmod '{0}' '{1}'".format(dir_perms, path))
runner("chmod -R '{0}' '{1}'".format(file_perms, self._base)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_pending_after_task(self):
""" Creates pending task results in a dict on self.after_result with task string as key. It will also create a list on self.tasks that is used to make sure the serialization of the results creates a correctly ordered list. """ |
for task in self.settings.tasks[self.after_tasks_key]:
self.after_tasks.append(task)
self.after_results[task] = Result(task) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def load_config():
'''try loading config file from a default directory'''
cfg_path = '/usr/local/etc/freelan'
cfg_file = 'freelan.cfg'
if not os.path.isdir(cfg_path):
print("Can not find default freelan config directory.")
return
cfg_file_path = os.path.join(cfg_path,cfg_file)
if not os.path.isfile( cfg_file_path ):
print("Can not find default freelan config file.")
return
return _load_config(cfg_file_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def write_config(cfg):
'''try writing config file to a default directory'''
cfg_path = '/usr/local/etc/freelan'
cfg_file = 'freelan_TEST.cfg'
cfg_lines = []
if not isinstance(cfg, FreelanCFG):
if not isinstance(cfg, (list, tuple)):
print("Freelan write input can not be processed.")
return
cfg_lines = cfg
else:
cfg_lines = cfg.build()
if not os.path.isdir(cfg_path):
print("Can not find default freelan config directory.")
return
cfg_file_path = os.path.join(cfg_path,cfg_file)
if os.path.isfile( cfg_file_path ):
print("freelan config file already exists - moving to not replace content.")
ts = time.time()
backup_file = cfg_file_path+'.ORG-'+datetime.datetime.fromtimestamp(ts).strftime('%y-%m-%d-%H-%M-%S')
shutil.move(cfg_file_path, backup_file)
cfg_lines = [cfg_line+'\n' for cfg_line in cfg_lines]
with open(cfg_file_path, 'w') as cfg_f:
cfg_f.writelines(cfg_lines) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cfgdump(path, config):
"""Create output directory path and output there the config.yaml file.""" |
dump = yaml_dump(config)
if not os.path.exists(path):
os.makedirs(path)
with open(os.path.join(path, 'config.yaml'), 'w') as outf:
outf.write(dump)
print(dump) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def videometadata(ctx, city, date, outpath):
"""Generate metadata for video records. city: The meetup series. \b date: The date. May be: - YYYY-MM-DD or YY-MM-DD (e.g. 2015-08-27) - YYYY-MM or YY-MM (e.g. 2015-08) - MM (e.g. 08):
the given month in the current year - pN (e.g. p1):
show the N-th last meetup """ |
db = ctx.obj['db']
today = ctx.obj['now'].date()
event = cliutil.get_event(db, city, date, today)
data = event.as_dict()
cliutil.handle_raw_output(ctx, data)
evdir = "{}-{}".format(event.city.name, event.slug)
config = OrderedDict()
config['speaker'] = ''
config['title'] = ''
config['lightning'] = True
config['speaker_only'] = False
config['widescreen'] = False
config['speaker_vid'] = "*.MTS"
config['screen_vid'] = "*.ts"
config['event'] = event.name
if event.number:
config['event'] += " #{}".format(event.number)
config['date'] = event.date.strftime("%Y-%m-%d")
config['url'] = "https://pyvo.cz/{}/{}/".format(event.series_slug,
event.slug)
print(evdir)
cfgdump(os.path.join(outpath, evdir), config)
if event.talks:
for talknum, talk in enumerate(event.talks, start=1):
config['speaker'] = ', '.join(s.name for s in talk.speakers)
config['title'] = talk.title
config['lightning'] = talk.is_lightning
talkdir = "{:02d}-{}".format(talknum, slugify(talk.title))
print(talkdir)
cfgdump(os.path.join(outpath, evdir, talkdir), config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def split_data(data, subset, splits):
'''Returns the data for a given protocol
'''
return dict([(k, data[k][splits[subset]]) for k in data]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get(protocol, subset, classes=CLASSES, variables=VARIABLES):
'''Returns the data subset given a particular protocol
Parameters
protocol (string): one of the valid protocols supported by this interface
subset (string): one of 'train' or 'test'
classes (list of string): a list of strings containing the names of the
classes from which you want to have the data from
variables (list of strings): a list of strings containg the names of the
variables (features) you want to have data from
Returns:
data (numpy.ndarray): The data for all the classes and variables nicely
packed into one numpy 3D array. One depth represents the data for one
class, one row is one example, one column a given feature.
'''
retval = split_data(bob.db.iris.data(), subset, PROTOCOLS[protocol])
# filter variables (features)
varindex = [VARIABLES.index(k) for k in variables]
# filter class names and variable indexes at the same time
retval = dict([(k, retval[k][:,varindex]) for k in classes])
# squash the data
return numpy.array([retval[k] for k in classes]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def silent_parse_args(self, command, args):
""" Silently attempt to parse args. If there is a failure then we ignore the effects. Using an in-place namespace object ensures we capture as many of the valid arguments as possible when the argparse system would otherwise throw away the results. """ |
        args_ns = argparse.Namespace()
        stderr_save = argparse._sys.stderr
        stdout_save = argparse._sys.stdout
        # os.devnull is just a path string; open it so argparse can
        # actually write (and discard) its error/usage output.
        devnull = open(os.devnull, 'w')
        argparse._sys.stderr = devnull
        argparse._sys.stdout = devnull
        try:
            command.argparser.parse_known_args(args, args_ns)
        except BaseException:
            pass
        finally:
            argparse._sys.stderr = stderr_save
            argparse._sys.stdout = stdout_save
            devnull.close()
        return args_ns
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_nargs(self, nargs):
""" Nargs is essentially a multi-type encoding. We have to parse it to understand how many values this action may consume. """ |
self.max_args = self.min_args = 0
if nargs is None:
self.max_args = self.min_args = 1
elif nargs == argparse.OPTIONAL:
self.max_args = 1
elif nargs == argparse.ZERO_OR_MORE:
self.max_args = None
elif nargs in (argparse.ONE_OR_MORE, argparse.REMAINDER):
self.min_args = 1
self.max_args = None
elif nargs != argparse.PARSER:
self.max_args = self.min_args = nargs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def consume(self, args):
""" Consume the arguments we support. The args are modified inline. The return value is the number of args eaten. """ |
consumable = args[:self.max_args]
self.consumed = len(consumable)
del args[:self.consumed]
return self.consumed |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def about_action(self):
""" Simple string describing the action. """ |
name = self.action.metavar or self.action.dest
type_name = self.action.type.__name__ if self.action.type else ''
if self.action.help or type_name:
extra = ' (%s)' % (self.action.help or 'type: %s' % type_name)
else:
extra = ''
return name + extra |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def file_complete(self, prefix, args):
""" Look in the local filesystem for valid file choices. """ |
path = os.path.expanduser(prefix)
dirname, name = os.path.split(path)
if not dirname:
dirname = '.'
try:
dirs = os.listdir(dirname)
except FileNotFoundError:
return frozenset()
choices = []
session = self.calling_command.session
for f in dirs:
try:
if (not name or f.startswith(name)) and \
not f.startswith('.'):
choices.append(f)
except PermissionError:
pass
        prevent_pad = session.pad_completion and len(choices) == 1 and \
                      os.path.isdir(os.path.join(dirname, choices[0]))
names = [os.path.join(dirname, x) for x in choices]
if prevent_pad:
names.append(names[0] + '/')
return frozenset(names) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exec_command(command, **kwargs):
""" Executes the given command and send the output to the console :param str|list command: :kwargs: * `shell` (``bool`` = False) -- * `stdin` (``*`` = None) -- * `stdout` (``*`` = None) -- * `stderr` (``*`` = None) -- :return: CommandReturnValue """ |
shell = kwargs.get('shell', False)
stdin = kwargs.get('stdin', None)
stdout = kwargs.get('stdout', None)
stderr = kwargs.get('stderr', None)
kwargs.update(shell=shell)
kwargs.update(stdin=stdin)
kwargs.update(stdout=stdout)
kwargs.update(stderr=stderr)
if not isinstance(command, list):
command = shlex.split(command)
return_value = subprocess.call(command, **kwargs)
return CommandReturnValue(return_value=return_value,
stdin=stdin,
stdout=stdout,
stderr=stderr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def observe_command(command, **kwargs):
""" Executes the given command and captures the output without any output to the console :param str|list command: :kwargs: * `shell` (``bool`` = False) -- * `timeout` (``int`` = 15) -- Timeout in seconds * `stdin` (``*`` = None) -- * `stdout` (``*`` = None) -- * `stderr` (``*`` = None) -- * `cwd` (``string`` = None) -- :return: CommandReturnValue """ |
    kwargs.setdefault('shell', False)
    # 'timeout' belongs to communicate(), not Popen(): pop it so it is not
    # forwarded as an unexpected keyword argument
    timeout = kwargs.pop('timeout', 15)
    kwargs.setdefault('stdin', subprocess.PIPE)
    kwargs.setdefault('stdout', subprocess.PIPE)
    kwargs.setdefault('stderr', subprocess.PIPE)
    kwargs.setdefault('cwd', None)
if not isinstance(command, list):
command = shlex.split(command)
# TODO: implement and process stdin - 1
proc = subprocess.Popen(command, **kwargs)
try:
        # only Python versions from 3.3 have the 'timeout' argument
        if sys.version_info >= (3, 3):
proc_stdout, proc_stderr = proc.communicate(timeout=timeout)
else:
proc_stdout, proc_stderr = proc.communicate()
except subprocess.TimeoutExpired:
proc.kill()
proc_stdout, proc_stderr = proc.communicate()
# TODO: implement and process stdin - 2
# process stdin
# try:
# _stdin = proc.stdin.read()
# except IOError:
# _stdin = None
#
# if not _stdin:
# _stdin = None
# process stdout
try:
_stdout = proc_stdout.decode('utf-8')
except IOError:
_stdout = None
if not _stdout:
_stdout = None
# process stderr
try:
_stderr = proc_stderr.decode('utf-8')
except IOError:
_stderr = None
if not _stderr:
_stderr = None
return CommandReturnValue(return_value=proc.returncode,
stdout=_stdout,
stderr=_stderr) |
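On Python 3.5+ the same capture-with-timeout behaviour can be sketched more compactly with `subprocess.run`, which handles the `communicate()`/`TimeoutExpired` dance internally. This is an illustrative simplification, not a drop-in replacement for `observe_command`:

```python
import subprocess
import sys

def observe(command, timeout=15):
    """Run a command, capturing stdout/stderr, with a timeout.
    Returns (returncode, stdout_text_or_None, stderr_text_or_None)."""
    try:
        proc = subprocess.run(command, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, timeout=timeout)
    except subprocess.TimeoutExpired:
        return None, None, None
    out = proc.stdout.decode('utf-8') or None
    err = proc.stderr.decode('utf-8') or None
    return proc.returncode, out, err

rc, out, err = observe([sys.executable, '-c', 'print("hello")'])
print(rc, out)
```

As in the original, empty streams are normalised to `None` rather than an empty string.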
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_hash(obj):
""" Computes fingerprint for an object, this code is duplicated from representatives.models.HashableModel because we don't have access to model methods in a migration scenario. """ |
hashable_fields = {
'Chamber': ['name', 'country', 'abbreviation'],
'Constituency': ['name'],
'Group': ['name', 'abbreviation', 'kind', 'chamber'],
'Mandate': ['group', 'constituency', 'role', 'begin_date', 'end_date',
'representative']
}
fingerprint = hashlib.sha1()
for field_name in hashable_fields[obj.__class__.__name__]:
field = obj._meta.get_field(field_name)
if field.is_relation:
related = getattr(obj, field_name)
if related is None:
fingerprint.update(smart_str(related))
else:
fingerprint.update(related.fingerprint)
else:
fingerprint.update(smart_str(getattr(obj, field_name)))
obj.fingerprint = fingerprint.hexdigest()
return obj.fingerprint |
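The fingerprinting idea above is independent of Django: feed each field's string form into a single SHA-1 and keep the hex digest. A minimal sketch using a plain dict instead of a model instance (field names are illustrative):

```python
import hashlib

def fingerprint(obj, fields):
    """Hash the listed fields of obj into one SHA-1 hex digest."""
    h = hashlib.sha1()
    for name in fields:
        h.update(str(obj[name]).encode('utf-8'))
    return h.hexdigest()

group = {'name': 'Greens', 'abbreviation': 'GRN', 'kind': 'party',
         'chamber': 'EP'}
fp = fingerprint(group, ['name', 'abbreviation', 'kind', 'chamber'])
print(fp)
```

Because the digest covers fields in a fixed order, any change to any listed field changes the fingerprint, which is what makes it usable as a change-detection key.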
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_or_create(cls, **kwargs):
""" Implements get_or_create logic for models that inherit from representatives.models.HashableModel because we don't have access to model methods in a migration scenario. """ |
try:
obj = cls.objects.get(**kwargs)
created = False
except cls.DoesNotExist:
obj = cls(**kwargs)
created = True
calculate_hash(obj)
obj.save()
return (obj, created) |
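The get/except/create shape above can be illustrated without a database. The toy store below stands in for a model manager (names are illustrative, not part of the original code):

```python
class Store:
    """Toy stand-in for a model manager, illustrating get_or_create."""
    def __init__(self):
        self.rows = []

    def get_or_create(self, **kwargs):
        # 'get': return the first row matching all lookup kwargs
        for row in self.rows:
            if all(row.get(k) == v for k, v in kwargs.items()):
                return row, False          # found: not created
        # 'create': build and persist a new row
        row = dict(kwargs)
        self.rows.append(row)
        return row, True                   # missing: created

store = Store()
obj1, created1 = store.get_or_create(name='Greens')
obj2, created2 = store.get_or_create(name='Greens')
print(created1, created2)
```

The second call finds the row the first call created, mirroring the `DoesNotExist` branch in the migration helper.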
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decorator_with_args(func, return_original=False, target_pos=0):
"""Enable a function to work with a decorator with arguments Args: func (callable):
The input function. return_original (bool):
Whether the resultant decorator returns the decorating target unchanged. If True, will return the target unchanged. Otherwise, return the returned value from *func*. Default to False. This is useful for converting a non-decorator function to a decorator. See examples below. Return: callable: a decorator with arguments. Examples: Registering plugin1 with arg1=10 Registering plugin1 with arg1=10 Registering plugin2 with arg1=100 """ |
if sys.version_info[0] >= 3:
target_name = inspect.getfullargspec(func).args[target_pos]
else:
target_name = inspect.getargspec(func).args[target_pos]
@functools.wraps(func)
def wrapper(*args, **kwargs):
if len(args) > target_pos:
res = func(*args, **kwargs)
return args[target_pos] if return_original else res
elif len(args) <= 0 and target_name in kwargs:
res = func(*args, **kwargs)
return kwargs[target_name] if return_original else res
else:
return wrap_with_args(*args, **kwargs)
def wrap_with_args(*args, **kwargs):
def wrapped_with_args(target):
kwargs2 = dict()
kwargs2[target_name] = target
kwargs2.update(kwargs)
res = func(*args, **kwargs2)
return target if return_original else res
return wrapped_with_args
return wrapper |
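The behaviour that `decorator_with_args` automates — one decorator usable both bare and with arguments — can be written by hand for comparison. A minimal sketch (the registry and names are illustrative only):

```python
import functools

REGISTRY = {}

def register(func=None, arg1=None):
    """Usable as @register or @register(arg1=10)."""
    if func is None:
        # called as @register(arg1=...): return the real decorator
        return functools.partial(register, arg1=arg1)
    REGISTRY[func.__name__] = arg1
    return func

@register
def plugin1():
    return 'p1'

@register(arg1=10)
def plugin2():
    return 'p2'

print(REGISTRY)
```

The `func is None` test plays the same role as the positional/keyword dispatch in `wrapper` above: it distinguishes "called on the target" from "called with configuration arguments".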
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def elements_equal(first, *others):
""" Check elements for equality """ |
f = first
lf = list(f)
for e in others:
le = list(e)
if (len(lf) != len(le)
or f.tag != e.tag
or f.text != e.text
or f.tail != e.tail
or f.attrib != e.attrib
or (not all(map(elements_equal, lf, le)))
):
return False
return True |
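A quick check of the recursive comparison, repeated here in standalone form so it can be run directly against elements parsed from strings:

```python
import xml.etree.ElementTree as ET

def elements_equal(first, *others):
    """Recursively compare elements on tag, text, tail, attrib, children."""
    lf = list(first)
    for e in others:
        le = list(e)
        if (len(lf) != len(le) or first.tag != e.tag
                or first.text != e.text or first.tail != e.tail
                or first.attrib != e.attrib
                or not all(map(elements_equal, lf, le))):
            return False
    return True

a = ET.fromstring('<root a="1"><child>x</child></root>')
b = ET.fromstring('<root a="1"><child>x</child></root>')
c = ET.fromstring('<root a="1"><child>y</child></root>')
print(elements_equal(a, b), elements_equal(a, c))
```

`a` and `c` differ only in the child's text, which the recursive `all(map(...))` step catches.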
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_element(text_or_tree_or_element):
""" Get back an ET.Element for several possible input formats """ |
if isinstance(text_or_tree_or_element, ET.Element):
return text_or_tree_or_element
elif isinstance(text_or_tree_or_element, ET.ElementTree):
return text_or_tree_or_element.getroot()
elif isinstance(text_or_tree_or_element, (unicode, bytes)):
return ET.fromstring(text_or_tree_or_element)
else:
return ET.parse(text_or_tree_or_element).getroot() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def task_failure_handler(task_id=None, exception=None, traceback=None, args=None, **kwargs):
"""Task failure handler""" |
# TODO: find a better way to acces workdir/archive/image
task_report = {'task_id': task_id,
'exception': exception,
'traceback': traceback,
'archive': args[1]['archive_path'],
'image': args[1]['image']}
notifier.send_task_failure_report(task_report)
workdir = args[1]['workdir']
remove_file(workdir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_config(self,directory,filename):
"""Manages FLICKR config files""" |
basefilename=os.path.splitext(filename)[0]
ext=os.path.splitext(filename)[1].lower()
if filename==LOCATION_FILE:
print("%s - Updating geotag information"%(LOCATION_FILE))
return self._update_config_location(directory)
elif filename==TAG_FILE:
print("%s - Updating tags"%(TAG_FILE))
return self._update_config_tags(directory)
elif filename==SET_FILE:
print("%s - Updating sets"%(SET_FILE))
return self._update_config_sets(directory)
elif filename==MEGAPIXEL_FILE:
print("%s - Updating photo size"%(MEGAPIXEL_FILE))
return self._upload_media(directory,resize_request=True)
elif ext in self.FLICKR_META_EXTENSIONS:
return self._update_meta(directory,basefilename)
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_meta(self,directory,filename):
"""Opens up filename.title and filename.description, updates on flickr""" |
if not self._connectToFlickr():
print("%s - Couldn't connect to flickr"%(directory))
return False
db = self._loadDB(directory)
# Look up photo id for this photo
pid=db[filename]['photoid']
# =========== LOAD TITLE ========
fullfile=os.path.join(directory,filename+'.title')
try:
logger.debug('trying to open [%s]'%(fullfile))
_title=(open(fullfile).readline().strip())
logger.debug("_updatemeta: %s - title is %s",filename,_title)
    except IOError:
        _title=''
# =========== LOAD DESCRIPTION ========
fullfile=os.path.join(directory,filename+'.description')
try:
_description=(open(fullfile).readline().strip())
logger.debug("_updatemeta: %s - description is %s",filename,_description)
    except IOError:
        _description=''
logger.info('%s - updating metadata (title=%s) (description=%s)'\
%(filename,_title,_description))
resp=self.flickr.photos_setMeta(photo_id=pid,title=_title,\
description=_description)
if resp.attrib['stat']!='ok':
        logger.error("%s - flickr: photos_setMeta failed with status: %s",\
                filename,resp.attrib['stat'])
return False
else:
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _createphotoset(self,myset,primary_photoid):
"""Creates a photo set on Flickr""" |
if not self._connectToFlickr():
        print("%s - Couldn't connect to flickr"%(myset))
return False
logger.debug('Creating photo set %s with prim photo %s'\
%(myset,primary_photoid))
resp=self.flickr.photosets_create(title=myset,\
primary_photo_id=primary_photoid)
if resp.attrib['stat']!='ok':
        logger.error("%s - flickr: photosets_create failed with status: %s",\
                myset,resp.attrib['stat'])
return False
else:
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_config_sets(self,directory,files=None):
""" Loads set information from file and updates on flickr, only reads first line. Format is comma separated eg. travel, 2010, South Africa, Pretoria If files is None, will update all files in DB, otherwise will only update files that are in the flickr DB and files list """ |
if not self._connectToFlickr():
print("%s - Couldn't connect to flickr"%(directory))
return False
# Load sets from SET_FILE
_sets=self._load_sets(directory)
# Connect to flickr and get dicionary of photosets
psets=self._getphotosets()
db = self._loadDB(directory)
# To create a set, one needs to pass it the primary
# photo to use, let's open the DB and load the first
# photo
    primary_pid=db[list(db.keys())[0]]['photoid']
# Loop through all sets, create if it doesn't exist
for myset in _sets:
if myset not in psets:
logger.info('set [%s] not in flickr sets, will create set'%(myset))
self._createphotoset(myset,primary_pid)
    # Now reload photosets from flickr
psets=self._getphotosets()
# --- Load DB of photos, and update them all with new tags
for fn in db:
# --- If file list provided, skip files not in the list
if files and fn not in files:
continue
pid=db[fn]['photoid']
# Get all the photosets this photo belongs to
psets_for_photo=self._getphotosets_forphoto(pid)
for myset in _sets:
if myset in psets_for_photo:
logger.debug("%s - Already in photoset [%s] - skipping"%(fn,myset))
continue
logger.info("%s [flickr] Adding to set [%s]" %(fn,myset))
psid=psets[myset]['id']
logger.debug("%s - Adding to photoset %s"%(fn,psid))
resp=self.flickr.photosets_addPhoto(photoset_id=psid,photo_id=pid)
if resp.attrib['stat']!='ok':
                logger.error("%s - flickr: photosets_addPhoto failed with status: %s",\
                        fn,resp.attrib['stat'])
return False
# Go through all sets flickr says this photo belongs to and
# remove from those sets if they don't appear in SET_FILE
for pset in psets_for_photo:
if pset not in _sets:
psid=psets[pset]['id']
logger.info("%s [flickr] Removing from set [%s]" %(fn,pset))
logger.debug("%s - Removing from photoset %s"%(fn,psid))
resp=self.flickr.photosets_removePhoto(photoset_id=psid,photo_id=pid)
if resp.attrib['stat']!='ok':
                    logger.error("%s - flickr: photosets_removePhoto failed with status: %s",\
                            fn,resp.attrib['stat'])
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getphotosets_forphoto(self,pid):
"""Asks flickr which photosets photo with given pid belongs to, returns list of photoset names""" |
resp=self.flickr.photos_getAllContexts(photo_id=pid)
if resp.attrib['stat']!='ok':
        logger.error("%s - flickr: photos_getAllContexts failed with status: %s",\
                pid,resp.attrib['stat'])
return None
lphotosets=[]
for element in resp.findall('set'):
lphotosets.append(element.attrib['title'])
logger.debug('%s - belongs to these photosets %s',pid,lphotosets)
return lphotosets |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getphoto_originalsize(self,pid):
"""Asks flickr for photo original size returns tuple with width,height """ |
logger.debug('%s - Getting original size from flickr'%(pid))
width=None
height=None
resp=self.flickr.photos_getSizes(photo_id=pid)
if resp.attrib['stat']!='ok':
        logger.error("%s - flickr: photos_getSizes failed with status: %s",\
                pid,resp.attrib['stat'])
return (None,None)
for size in resp.find('sizes').findall('size'):
if size.attrib['label']=="Original":
width=int(size.attrib['width'])
height=int(size.attrib['height'])
logger.debug('Found pid %s original size of %s,%s'\
%(pid,width,height))
return (width,height) |
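The XML walk in `_getphoto_originalsize` can be tested offline against a synthetic response. The structure below mimics a `photos.getSizes` reply (real responses carry more attributes; this is a minimal stand-in):

```python
import xml.etree.ElementTree as ET

SIZES_XML = """<rsp stat="ok">
  <sizes>
    <size label="Thumbnail" width="100" height="75"/>
    <size label="Original" width="4000" height="3000"/>
  </sizes>
</rsp>"""

def original_size(resp_xml):
    """Walk <sizes>/<size> and pick the entry labelled 'Original'."""
    resp = ET.fromstring(resp_xml)
    if resp.attrib.get('stat') != 'ok':
        return (None, None)
    for size in resp.find('sizes').findall('size'):
        if size.attrib['label'] == 'Original':
            return (int(size.attrib['width']), int(size.attrib['height']))
    return (None, None)

print(original_size(SIZES_XML))
```

Entries with other labels are skipped, and a non-ok status short-circuits to `(None, None)` as in the original.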
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getphoto_information(self,pid):
"""Asks flickr for photo information returns dictionary with attributes {'dateuploaded': '1383410793', 'farm': '3', 'id': '10628709834', 'isfavorite': '0', 'license': '0', 'media': 'photo', 'originalformat': 'jpg', 'originalsecret': 'b60f4f675f', 'rotation': '0', 'safety_level': '0', 'secret': 'a4c96e996b', 'server': '2823', 'views': '1', 'title': 'Image title' } """ |
    if not self._connectToFlickr():
        print("%s - Couldn't connect to flickr"%(pid))
        return None
d={}
logger.debug('%s - Getting photo information from flickr'%(pid))
resp=self.flickr.photos_getInfo(photo_id=pid)
if resp.attrib['stat']!='ok':
        logger.error("%s - flickr: photos_getInfo failed with status: %s",\
                pid,resp.attrib['stat'])
return None
p=resp.find('photo')
p.attrib['title']=p.find('title').text
return p.attrib |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_config_tags(self,directory,files=None):
""" Loads tags information from file and updates on flickr, only reads first line. Format is comma separated eg. travel, 2010, South Africa, Pretoria If files is None, will update all files in DB, otherwise will only update files that are in the flickr DB and files list """ |
if not self._connectToFlickr():
print("%s - Couldn't connect to flickr"%(directory))
return False
logger.debug("Updating tags in %s"%(directory))
_tags=self._load_tags(directory)
# --- Load DB of photos, and update them all with new tags
db = self._loadDB(directory)
for fn in db:
# --- If file list provided, skip files not in the list
if files and fn not in files:
logger.debug('%s [flickr] Skipping, tag update',fn)
continue
logger.info("%s [flickr] Updating tags [%s]" %(fn,_tags))
pid=db[fn]['photoid']
resp=self.flickr.photos_setTags(photo_id=pid,tags=_tags)
        if resp.attrib['stat']!='ok':
            logger.error("%s - flickr: photos_setTags failed with status: %s",\
                    fn,resp.attrib['stat'])
            return False
    return True
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _remove_media(self,directory,files=None):
"""Removes specified files from flickr""" |
# Connect if we aren't already
if not self._connectToFlickr():
        logger.error("%s - Couldn't connect to flickr",directory)
return False
db=self._loadDB(directory)
# If no files given, use files from DB in dir
if not files:
files=db.keys()
#If only one file given, make it a list
if isinstance(files,basestring):
files=[files]
for fn in files:
print("%s - Deleting from flickr [local copy intact]"%(fn))
try:
pid=db[fn]['photoid']
        except KeyError:
logger.debug("%s - Was never in flickr DB"%(fn))
continue
resp=self.flickr.photos_delete(photo_id=pid,format='etree')
if resp.attrib['stat']!='ok':
            print("%s - flickr: delete failed with status: %s"\
                    %(fn,resp.attrib['stat']))
return False
else:
logger.debug('Removing %s from flickr DB'%(fn))
del db[fn]
self._saveDB(directory,db)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _upload_media(self,directory,files=None,resize_request=None):
"""Uploads media file to FLICKR, returns True if uploaded successfully, Will replace if already uploaded, If megapixels > 0, will scale photos before upload If no filename given, will go through all files in DB""" |
# Connect if we aren't already
if not self._connectToFlickr():
        logger.error("%s - Couldn't connect to flickr",directory)
return False
_tags=self._load_tags(directory)
_megapixels=self._load_megapixels(directory)
# If no files given, use files from DB in dir
if not files:
db=self._loadDB(directory)
files=db.keys()
#If only one file given, make it a list
if isinstance(files,basestring):
files=[files]
files.sort()
for filename in files:
#FIXME: If this fails, should send a list
# to Upload() about which files DID make it,
# so we don't have to upload it again!
status,replaced=self._upload_or_replace_flickr(directory,filename, \
_tags, _megapixels,resize_request)
if not status:
return False
# If uploaded OK, update photo properties, tags
# already taken care of - only update if
# this is a new photo (eg, if it was replaced
# then we don't need to do this
if not replaced:
self._update_config_location(directory,filename)
self._update_config_sets(directory,filename)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_project(self, project_path):
    """ Create Trionyx project in given path :param str project_path: path to create project in. :raises FileExistsError: """ |
shutil.copytree(self.project_path, project_path)
self.update_file(project_path, 'requirements.txt', {
'trionyx_version': trionyx.__version__
})
self.update_file(project_path, 'config/local_settings.py', {
'secret_key': utils.random_string(32)
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_app(self, apps_path, name):
    """ Create Trionyx app in given path :param str apps_path: path to create app in. :param str name: name of app :raises FileExistsError: """ |
app_path = os.path.join(apps_path, name.lower())
shutil.copytree(self.app_path, app_path)
self.update_file(app_path, '__init__.py', {
'name': name.lower()
})
self.update_file(app_path, 'apps.py', {
'name': name.lower(),
'verbose_name': name.capitalize()
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_one(self, filter=None, fields=None, skip=0, sort=None):
""" Similar to find. This method will only retrieve one row. If no row matches, returns None """ |
result = self.find(filter=filter, fields=fields, skip=skip, limit=1, sort=sort)
if len(result) > 0:
return result[0]
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_existing_keys(self, events):
"""Returns the list of keys from the given event source that are already in the DB""" |
data = [e[self.key] for e in events]
ss = ','.join(['%s' for _ in data])
query = 'SELECT %s FROM %s WHERE %s IN (%s)' % (self.key, self.table, self.key, ss)
cursor = self.conn.conn.cursor()
cursor.execute(query, data)
LOG.info("%s (data: %s)", query, data)
existing = [r[0] for r in cursor.fetchall()]
LOG.info("Existing IDs: %s" % existing)
return set(existing) |
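The query construction above separates identifiers (interpolated with `%`, since they cannot be bound) from values (bound via `%s` placeholders). That part is testable without a database connection; a sketch with illustrative table/column names:

```python
def build_in_query(table, key, values):
    """Build a parameterised SELECT ... WHERE key IN (...) query.
    Returns (query, params) for cursor.execute()."""
    placeholders = ','.join(['%s'] * len(values))
    query = 'SELECT %s FROM %s WHERE %s IN (%s)' % (key, table, key, placeholders)
    return query, list(values)

query, params = build_in_query('events', 'event_id', ['a1', 'b2', 'c3'])
print(query)
```

Only the values travel through the driver's parameter binding, which is what protects against injection in the data while the identifiers stay under the application's control.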
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, events):
"""Constructs and executes a MySQL insert for the given events.""" |
if not len(events):
return
keys = sorted(events[0].keys())
ss = ','.join(['%s' for _ in keys])
query = 'INSERT INTO %s (%s) VALUES ' % (self.table, ','.join(keys))
data = []
for event in events:
query += '(%s),' % ss
data += [event[k] for k in keys]
query = query[:-1] + ';'
LOG.info("%s (data: %s)", query, data)
conn = self.conn.conn
cursor = conn.cursor()
cursor.execute(query, data)
conn.commit() |
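The multi-row INSERT built in `insert` appends one `(%s,...)` tuple per event and flattens all values into a single parameter list in sorted-key order. The string-building step can be sketched and verified on its own (table and field names are illustrative):

```python
def build_insert(table, events):
    """Build a multi-row INSERT query plus its flattened parameter list."""
    keys = sorted(events[0].keys())
    row = '(%s)' % ','.join(['%s'] * len(keys))
    query = 'INSERT INTO %s (%s) VALUES %s;' % (
        table, ','.join(keys), ','.join([row] * len(events)))
    data = [event[k] for event in events for k in keys]
    return query, data

events = [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
query, data = build_insert('events', events)
print(query)
print(data)
```

Sorting the keys guarantees every row's values line up with the column list even when the event dicts were built in different orders.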