<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def issueCommand(self, command, *args):
""" Issue the given Assuan command and return a Deferred that will fire with the response. """ |
result = Deferred()
self._dq.append(result)
self.sendLine(b" ".join([command] + list(args)))
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _currentResponse(self, debugInfo):
""" Pull the current response off the queue. """ |
bd = b''.join(self._bufferedData)
self._bufferedData = []
return AssuanResponse(bd, debugInfo) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lineReceived(self, line):
""" A line was received. """ |
if line.startswith(b"#"): # ignore it
return
if line.startswith(b"OK"):
# if no command issued, then just 'ready'
if self._ready:
self._dq.pop(0).callback(self._currentResponse(line))
else:
self._ready = True
if line.startswith(b"D "):
self._bufferedData.append(line[2:].replace(b"%0A", b"\n")
.replace(b"%0D", b"\r")
.replace(b"%25", b"%"))
if line.startswith(b"ERR"):
self._dq.pop(0).errback(AssuanError(line)) |
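The `D ` branch above percent-decodes Assuan data lines: `%0A` is a line feed, `%0D` a carriage return, and `%25` a literal percent sign, which must be decoded last so escaped percent signs round-trip. A hypothetical standalone helper mirroring that branch:

```python
def decode_assuan_data(line):
    """Decode an Assuan 'D <data>' line (hypothetical helper, sketch only)."""
    return (line[2:]
            .replace(b"%0A", b"\n")   # percent-escaped LF
            .replace(b"%0D", b"\r")   # percent-escaped CR
            .replace(b"%25", b"%"))   # literal percent, decoded last

print(decode_assuan_data(b"D foo%0Abar%25baz"))  # -> b'foo\nbar%baz'
```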
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_chunks(stream, block_size=2**10):
""" Given a byte stream with reader, yield chunks of block_size until the stream is consusmed. """ |
while True:
chunk = stream.read(block_size)
if not chunk:
break
yield chunk |
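A quick check of the chunking generator above, using an in-memory stream (sizes chosen arbitrarily):

```python
import io

def read_chunks(stream, block_size=2**10):
    """Yield blocks from a readable byte stream until it is exhausted."""
    while True:
        chunk = stream.read(block_size)
        if not chunk:
            break
        yield chunk

sizes = [len(c) for c in read_chunks(io.BytesIO(b"x" * 2500), block_size=1024)]
print(sizes)  # -> [1024, 1024, 452]
```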
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_stream_py3(dc, chunks):
""" Given a decompression stream and chunks, yield chunks of decompressed data until the compression window ends. """ |
while not dc.eof:
res = dc.decompress(dc.unconsumed_tail + next(chunks))
yield res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_streams(chunks):
""" Given a gzipped stream of data, yield streams of decompressed data. """ |
chunks = peekable(chunks)
while chunks:
if six.PY3:
dc = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
else:
dc = zlib.decompressobj(zlib.MAX_WBITS | 16)
yield load_stream(dc, chunks)
if dc.unused_data:
chunks = peekable(itertools.chain((dc.unused_data,), chunks)) |
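The `wbits=zlib.MAX_WBITS | 16` flag tells zlib to expect gzip framing, which is what makes the chunked decompression above work on gzipped input. A minimal self-contained sketch, assuming Python 3:

```python
import gzip
import zlib

payload = b"hello world" * 100
blob = gzip.compress(payload)

def chunks_of(data, n=64):
    """Yield fixed-size slices, simulating a chunked byte stream."""
    for i in range(0, len(data), n):
        yield data[i:i + n]

# MAX_WBITS | 16 selects gzip framing, as in load_streams above
dc = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
out = b"".join(dc.decompress(dc.unconsumed_tail + c) for c in chunks_of(blob))
out += dc.flush()
assert out == payload
```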
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lines_from_stream(chunks):
""" Given data in chunks, yield lines of text """ |
buf = buffer.DecodingLineBuffer()
for chunk in chunks:
buf.feed(chunk)
# when Python 3, yield from buf
for _ in buf:
yield _ |
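`buffer.DecodingLineBuffer` comes from an external module; a minimal sketch of such a chunk-fed line buffer (hypothetical stand-in, simplified to UTF-8 and `\n` endings) might look like:

```python
class DecodingLineBuffer:
    """Simplified stand-in for the external DecodingLineBuffer (assumed API)."""
    def __init__(self, encoding='utf-8'):
        self._rest = b''
        self._lines = []
        self._encoding = encoding

    def feed(self, chunk):
        # split off complete lines, keep the trailing partial line buffered
        *complete, self._rest = (self._rest + chunk).split(b'\n')
        self._lines.extend(l.decode(self._encoding) for l in complete)

    def __iter__(self):
        while self._lines:
            yield self._lines.pop(0)

buf = DecodingLineBuffer()
buf.feed(b'alpha\nbe')
buf.feed(b'ta\ngamma\n')
print(list(buf))  # -> ['alpha', 'beta', 'gamma']
```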
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def session_registration(uri, session):
"""Requests-mock registration with a specific Session. :param uri: base URI to match against :param session: Python requests' Session object :returns: n/a """ |
# log the URI that is used to access the Stack-In-A-Box services
logger.debug('Registering Stack-In-A-Box at {0} under Python Requests-Mock'
.format(uri))
logger.debug('Session has id {0}'.format(id(session)))
# tell Stack-In-A-Box what URI to match with
StackInABox.update_uri(uri)
# Create a Python Requests Adapter object for handling the session
StackInABox.hold_onto('adapter', requests_mock.Adapter())
# Add the Request handler object for the URI
StackInABox.hold_out('adapter').add_matcher(RequestMockCallable(uri))
# Tell the session about the adapter and the URI
session.mount('http://{0}'.format(uri), StackInABox.hold_out('adapter'))
session.mount('https://{0}'.format(uri), StackInABox.hold_out('adapter')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requests_request(method, url, **kwargs):
"""Requests-mock requests.request wrapper.""" |
session = local_sessions.session
response = session.request(method=method, url=url, **kwargs)
session.close()
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requests_post(url, data=None, json=None, **kwargs):
"""Requests-mock requests.post wrapper.""" |
return requests_request('post', url, data=data, json=json, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_reason_for_status(status_code):
"""Lookup the HTTP reason text for a given status code. :param status_code: int - HTTP status code :returns: string - HTTP reason text """ |
if status_code in requests.status_codes.codes:
return requests.status_codes._codes[status_code][0].replace('_', ' ')
else:
return 'Unknown status code - {0}'.format(status_code) |
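`requests.status_codes._codes` is a private attribute of requests; the standard library offers the same lookup without reaching into library internals. A stdlib-only variant (sketch):

```python
from http.client import responses

def reason_for(status_code):
    """Stdlib-based equivalent of the reason lookup above (sketch)."""
    return responses.get(status_code,
                         'Unknown status code - {0}'.format(status_code))

print(reason_for(404))  # -> Not Found
print(reason_for(799))  # -> Unknown status code - 799
```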
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split_status(status):
"""Split a HTTP Status and Reason code string into a tuple. :param status string containing the status and reason text or the integer of the status code :returns: tuple - (int, string) containing the integer status code and reason text string """ |
# If the status is an integer, then lookup the reason text
if isinstance(status, int):
return (status, RequestMockCallable.get_reason_for_status(
status))
# otherwise, ensure it is a string and try to split it based on the
# standard HTTP status and reason text format
elif isinstance(status, (str, bytes)):
if isinstance(status, bytes):
status = status.decode('iso-8859-1')
code, reason = status.split(' ', 1)
return (int(code), reason)
# otherwise, return with a default reason code
else:
return (status, 'Unknown') |
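The string path above splits on the first space only, so multi-word reason texts stay intact. For example:

```python
status = '404 Not Found'
code, reason = status.split(' ', 1)
print((int(code), reason))  # -> (404, 'Not Found')
```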
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle(self, request, uri):
"""Request handler interface. :param request: Python requests Request object :param uri: URI of the request """ |
# Convert the call over to Stack-In-A-Box
method = request.method
headers = CaseInsensitiveDict()
request_headers = CaseInsensitiveDict()
request_headers.update(request.headers)
request.headers = request_headers
stackinabox_result = StackInABox.call_into(method,
request,
uri,
headers)
# reformat the result for easier use
status_code, output_headers, body = stackinabox_result
json_data = None
text_data = None
content_data = None
body_data = None
# if the body is a string-type...
if isinstance(body, six.string_types):
# Try to parse it as JSON
text_data = body
try:
json_data = json.loads(text_data)
text_data = None
except ValueError:
json_data = None
text_data = body
# if the body is binary, then it's the content
elif isinstance(body, six.binary_type):
content_data = body
# by default, it's just body data
else:
# default to body data
body_data = body
# build the Python requests' Response object
return requests_mock.response.create_response(
request,
headers=output_headers,
status_code=status_code,
body=body_data,
json=json_data,
text=text_data,
content=content_data
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_tip(self, sender, receiver, message, context_uid, meta):
""" Send a request to the ChangeTip API, to be delivered immediately. """ |
assert self.channel is not None, "channel must be defined"
# Add extra data to meta
meta["mention_bot"] = self.mention_bot()
data = json.dumps({
"channel": self.channel,
"sender": sender,
"receiver": receiver,
"message": message,
"context_uid": context_uid,
"meta": meta,
})
response = requests.post(self.get_api_url("/tips/"), data=data, headers={'content-type': 'application/json'})
if response.headers.get("Content-Type", None) == "application/json":
out = response.json()
out["state"] = response.reason.lower()
return out
else:
return {"state": response.reason.lower(), "error": "%s error submitting tip" % response.status_code} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def discoverEndpoint(domain, endpoint, content=None, look_in={'name': 'link'}, test_urls=True, validateCerts=True):
"""Find the given endpoint for the given domain. Only scan html element matching all criteria in look_in. optionally the content to be scanned can be given as an argument. :param domain: the URL of the domain to handle :param endpoint: list of endpoints to look for :param content: the content to be scanned for the endpoint :param look_in: dictionary with name, id and class_. only element matching all of these will be scanned :param test_urls: optional flag to test URLs for validation :param validateCerts: optional flag to enforce HTTPS certificates if present :rtype: list of endpoints """ |
if test_urls:
ronkyuu.URLValidator(message='invalid domain URL')(domain)
if content:
result = {'status': requests.codes.ok,
'headers': None,
'content': content
}
else:
r = requests.get(domain, verify=validateCerts)
result = {'status': r.status_code,
'headers': r.headers
}
# check for character encodings and use 'correct' data
if 'charset' in r.headers.get('content-type', ''):
result['content'] = r.text
else:
result['content'] = r.content
for key in endpoint:
result.update({key: set()})
result.update({'domain': domain})
if result['status'] == requests.codes.ok:
if result['headers'] and 'link' in result['headers']:
all_links = result['headers']['link'].split(',')
for link in all_links:
if ';' in link:
href, rel = link.split(';')
url = urlparse(href.strip()[1:-1])
if url.scheme in ('http', 'https') and rel in endpoint:
result[rel].add(url)
all_links = BeautifulSoup(result['content'], _html_parser, parse_only=SoupStrainer(**look_in)).find_all('link')
for link in all_links:
rel = (link.get('rel') or [None])[0]
if rel in endpoint:
href = link.get('href', None)
if href:
url = urlparse(href)
if url.scheme == '' or url.netloc == '':
url = urlparse(urljoin(domain, href))
if url.scheme in ('http', 'https'):
result[rel].add(url)
return result |
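HTTP `Link` headers can carry several comma-separated values, each with `<href>; rel="…"` parts, which is what the header branch above iterates over. A minimal parser sketch (real-world values with quoted commas or extra parameters need a proper parser):

```python
def parse_link_header(value):
    """Minimal Link-header parser (sketch; not RFC 8288 complete)."""
    links = {}
    for part in value.split(','):
        if ';' not in part:
            continue
        href, rel = part.split(';', 1)
        href = href.strip().lstrip('<').rstrip('>')
        rel = rel.split('=', 1)[1].strip().strip('"')
        links[rel] = href
    return links

hdr = ('<https://example.com/webmention>; rel="webmention", '
       '<https://example.com/micropub>; rel="micropub"')
print(parse_link_header(hdr))
```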
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def discoverMicropubEndpoints(domain, content=None, look_in={'name': 'link'}, test_urls=True, validateCerts=True):
"""Find the micropub for the given domain. Only scan html element matching all criteria in look_in. optionally the content to be scanned can be given as an argument. :param domain: the URL of the domain to handle :param content: the content to be scanned for the endpoint :param look_in: dictionary with name, id and class_. only element matching all of these will be scanned :param test_urls: optional flag to test URLs for validation :param validateCerts: optional flag to enforce HTTPS certificates if present :rtype: list of endpoints """ |
return discoverEndpoint(domain, ('micropub',), content, look_in, test_urls, validateCerts) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def discoverTokenEndpoints(domain, content=None, look_in={'name': 'link'}, test_urls=True, validateCerts=True):
"""Find the token for the given domain. Only scan html element matching all criteria in look_in. optionally the content to be scanned can be given as an argument. :param domain: the URL of the domain to handle :param content: the content to be scanned for the endpoint :param look_in: dictionary with name, id and class_. only element matching all of these will be scanned :param test_urls: optional flag to test URLs for validation :param validateCerts: optional flag to enforce HTTPS certificates if present :rtype: list of endpoints """ |
return discoverEndpoint(domain, ('token_endpoint',), content, look_in, test_urls, validateCerts) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def form_node(cls):
"""A class decorator to finalize fully derived FormNode subclasses.""" |
assert issubclass(cls, FormNode)
res = attrs(init=False, slots=True)(cls)
res._args = []
res._required_args = 0
res._rest_arg = None
state = _FormArgMode.REQUIRED
for field in fields(res):
if 'arg_mode' in field.metadata:
if state is _FormArgMode.REST:
raise RuntimeError('rest argument must be last')
if field.metadata['arg_mode'] is _FormArgMode.REQUIRED:
if state is _FormArgMode.OPTIONAL:
raise RuntimeError('required arg after optional arg')
res._args.append(field)
res._required_args += 1
elif field.metadata['arg_mode'] is _FormArgMode.OPTIONAL:
state = _FormArgMode.OPTIONAL
res._args.append(field)
elif field.metadata['arg_mode'] is _FormArgMode.REST:
state = _FormArgMode.REST
res._rest_arg = field
else:
assert 0
return res |
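The ordering rule enforced above (required args first, then optional args, then a single trailing rest arg) can be checked in isolation. A toy version of that state machine, with illustrative names:

```python
# arg modes in increasing order, mirroring the _FormArgMode states above
REQUIRED, OPTIONAL, REST = range(3)

def check_arg_order(modes):
    """Reject sequences that break the required -> optional -> rest ordering."""
    state = REQUIRED
    for mode in modes:
        if state == REST:
            raise ValueError('rest argument must be last')
        if mode == REQUIRED and state == OPTIONAL:
            raise ValueError('required arg after optional arg')
        state = max(state, mode)
    return True

print(check_arg_order([REQUIRED, OPTIONAL, REST]))  # -> True
```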
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intercept(aspects):
"""Decorate class to intercept its matching methods and apply advices on them. Advices are the cross-cutting concerns that need to be separated out from the business logic. This decorator applies such advices to the decorated class. :arg aspects: mapping of joint-points to dictionary of advices. joint-points are regex patterns to be matched against methods of class. If the pattern matches to name of a method, the advices available for the joint-point are applied to the method. Advices from all matching joint-points are applied to the method. In case of conflicting advices for a joint-point, joint-point exactly matching the name of the method is given preference. Following are the identified advices: before: Runs before around before around_before: Runs before the method after_exc: Runs when method encounters exception around_after: Runs after method is successful after_success: Runs after method is successful after_finally: Runs after method is run successfully or unsuccessfully. """ |
if not isinstance(aspects, dict):
raise TypeError("Aspects must be a dictionary of joint-points and advices")
def get_matching_advices(name):
"""Get all advices matching method name"""
all_advices = dict()
for joint_point, advices in aspects.items():
if re.match(joint_point, name):
for advice, impl in advices.items():
# Whole word matching regex might have \b around.
if advice in all_advices and joint_point.strip(r'\b') != name:
# Give priority to exactly matching method joint-points over wild-card
# joint points.
continue
all_advices[advice] = impl
return all_advices
def apply_advices(advices):
"""Decorating method"""
def decorate(method): # pylint: disable=C0111
@wraps(method)
def trivial(self, *arg, **kw): # pylint: disable=C0111
def run_advices(advice, extra_arg=None):
"""Run all the advices for the joint-point"""
if advice not in advices:
return
advice_impl = advices[advice]
if not isinstance(advice_impl, (list, tuple, set)):
advice_impl = [advice_impl]
for impl in advice_impl:
impl(self, method, extra_arg, *arg, **kw)
run_advices('before')
run_advices('around_before')
try:
if method.__self__ is None:
ret = method(self, *arg, **kw)
else: # classmethods
ret = method(*arg, **kw)
except Exception as e: # pylint: disable=W0703
run_advices('after_exc', e)
ret = None
raise e
else:
run_advices('around_after', ret)
run_advices('after_success', ret)
finally:
run_advices('after_finally', ret)
return ret
return trivial
return decorate
def decorate_class(cls):
"""Decorating class"""
# TODO: handle staticmethods
for name, method in inspect.getmembers(cls, inspect.ismethod):
if method.__self__ is not None:
# TODO: handle classmethods
continue
if name not in ('__init__',) and name.startswith('__'):
continue
matching_advices = get_matching_advices(name)
if not matching_advices:
continue
setattr(cls, name, apply_advices(matching_advices)(method))
return cls
return decorate_class |
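Stripped of the regex matching and advice lists, the core wrapping pattern above reduces to running hooks around a method call. A toy version with illustrative names:

```python
from functools import wraps

def with_advices(before=None, after_success=None):
    """Toy interceptor: run optional hooks around a function call (sketch)."""
    def decorate(method):
        @wraps(method)
        def wrapper(*args, **kw):
            if before:
                before()
            ret = method(*args, **kw)
            if after_success:
                after_success(ret)
            return ret
        return wrapper
    return decorate

calls = []

@with_advices(before=lambda: calls.append('before'),
              after_success=lambda r: calls.append(('after_success', r)))
def add(a, b):
    return a + b

print(add(2, 3), calls)  # -> 5 ['before', ('after_success', 5)]
```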
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_diff(environ, label, pop=False):
"""Get previously frozen key-value pairs. :param str label: The name for the frozen environment. :param bool pop: Destroy the freeze after use; only allow application once. :returns: ``dict`` of frozen values. """ |
if pop:
blob = environ.pop(_variable_name(label), None)
else:
blob = environ.get(_variable_name(label))
return _loads(blob) if blob else {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _apply_diff(environ, diff):
"""Apply a frozen environment. :param dict diff: key-value pairs to apply to the environment. :returns: A dict of the key-value pairs that are being changed. """ |
original = {}
if diff:
for k, v in diff.items():
if v is None:
log.log(5, 'unset %s', k)
else:
log.log(5, '%s="%s"', k, v)
original[k] = environ.get(k)
if original[k] is None:
log.log(1, '%s was not set', k)
else:
log.log(1, '%s was "%s"', k, original[k])
if v is None:
environ.pop(k, None)
else:
environ[k] = v
else:
log.log(5, 'nothing to apply')
return original |
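The apply-and-remember dance above works on any mapping, not just `os.environ`. A simplified sketch without the logging:

```python
def apply_diff(environ, diff):
    """Apply key/value overrides to a mapping; None means unset.
    Returns the original values so the change can be undone (sketch)."""
    original = {}
    for k, v in diff.items():
        original[k] = environ.get(k)
        if v is None:
            environ.pop(k, None)
        else:
            environ[k] = v
    return original

env = {'PATH': '/usr/bin', 'DEBUG': '1'}
orig = apply_diff(env, {'DEBUG': None, 'LANG': 'C'})
print(env)   # -> {'PATH': '/usr/bin', 'LANG': 'C'}
print(orig)  # -> {'DEBUG': '1', 'LANG': None}
```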
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_current():
"""return current Xresources color theme""" |
global current
if exists( SETTINGSFILE ):
f = open( SETTINGSFILE ).read()
current = re.findall(r'config[^\s]+.+', f)[1].split('/')[-1]
return current
else:
return "** Not Set **" |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_colors():
"""return list of available Xresources color themes""" |
if exists( THEMEDIR ):
contents = os.listdir( THEMEDIR )
themes = [theme for theme in contents if '.' not in theme]
if len(themes) > 0:
themes.sort()
return themes
else:
print "** No themes in themedir **"
print " run:"
print " dotcolors (-s | --sync) <limit>"
sys.exit(0)
else:
print "** Theme directory not found **"
print " run: "
print " dotcolors --setup"
sys.exit(0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getch_selection(colors, per_page=15):
"""prompt for selection, validate input, return selection""" |
global transparency, prefix, current
get_transparency()
page = 1
length = len(colors)
last_page = length // per_page
if (last_page * per_page) < length:
last_page += 1
getch = _Getch()
valid = False
while valid == False:
menu_pages(colors, page, True, per_page)
sys.stdout.write(">")
char = getch()
try:
int(char)
entry = raw_input_with_default(' Selection: ', char)
entry = int(entry)
if colors[entry - 1]:
valid = True
except ValueError:
pass
if( char == 'j' ):
page += 1
if page > last_page:
page = last_page
menu_pages(colors, page, True, per_page)
if( char == 'k' ):
if(page > 1):
page -= 1
else:
page = 1
menu_pages(colors, page, True, per_page)
if( char.lower() == 'q' ):
c = os.system('clear')
sys.exit(0)
if( char == 'J' ):
if transparency > 0:
transparency -= 1
menu_pages(colors, page, True, per_page)
if( char == 'K' ):
if transparency < 100:
transparency += 1
menu_pages(colors, page, True, per_page)
if( char.lower() == 'p' ):
prefix = raw_input_with_default(' prefix: ', 'urxvt')
if( char == '\r' ):
return current
return colors[entry - 1] |
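The page count computed at the top of the loop above is a ceiling division; the same math in isolation:

```python
def page_count(length, per_page):
    """Ceiling division, as used for last_page above."""
    last_page = length // per_page
    if last_page * per_page < length:
        last_page += 1
    return last_page

print(page_count(33, 15))  # -> 3
print(page_count(30, 15))  # -> 2
```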
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_theme(selection):
"""removes any non-color related lines from theme file""" |
global themefile
text = open(THEMEDIR + '/' + selection).read()
if '!dotcolors' in text[:10]:
themefile = text
return
lines = ['!dotcolors auto formatted\n']
for line in text.split('\n'):
lline = line.lower()
background = 'background' in lline
foreground = 'foreground' in lline
color = 'color' in lline
if background:
if 'rgb' in line:
# rgba: 0000/0000/0000/dddd
rgb = line.split(':')[2].replace(' ', '')
rgb = rgb_to_hex(rgb)
lines.append('*background:\t%s' % rgb)
else:
lines.append('\t#'.join(line \
.replace(' ', '') \
.replace('\t', '') \
.split('#')))
if foreground:
if 'rgb' in line:
# rgba: 0000/0000/0000/dddd
rgb = line.split(':')[2].replace(' ', '')
rgb = rgb_to_hex(rgb)
lines.append('*foreground:\t%s' % rgb)
else:
lines.append('\t#'.join(line \
.replace(' ', '') \
.replace('\t', '') \
.split('#')))
if color:
if lline[0] != '!':
lines.append('\t#'.join(line \
.replace(' ', '') \
.replace('\t', '') \
.split('#')))
themefile = '\n'.join(lines) + '\n'
fd, tmpfile = tempfile.mkstemp()
if exists( THEMEDIR + '/' + selection ):
old = open( THEMEDIR + '/' + selection )
new = os.fdopen(fd, 'w')
os.write(fd, themefile)
old.close()
new.close()
move( tmpfile, THEMEDIR + '/' + selection ) |
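`rgb_to_hex` is defined elsewhere in the package; for the 16-bit-per-channel `rrrr/gggg/bbbb` X resource values split off above, it might look like this (hypothetical sketch, taking the high byte of each component):

```python
def rgb_to_hex(rgb):
    """Hypothetical sketch: 16-bit 'rrrr/gggg/bbbb' components to '#rrggbb'."""
    parts = rgb.split('/')
    # keep only the high byte of each 16-bit component
    return '#' + ''.join('%02x' % (int(p, 16) >> 8) for p in parts[:3])

print(rgb_to_hex('0000/ff00/8000'))  # -> #00ff80
```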
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pretty_size(value):
"""Convert a number of bytes into a human-readable string. """ |
exp = int(math.log(value, 1024)) if value > 0 else 0
unit = 'bkMGTPEZY'[exp]
if exp == 0:
return '%d%s' % (value, unit) # value < 1024, result is always without fractions
unit_value = value / (1024.0 ** exp) # value in the relevant units
places = int(math.log(unit_value, 10)) # number of digits before decimal point
return '%.*f%s' % (2 - places, unit_value, unit) |
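A few sample values show how the digit-counting in `pretty_size` keeps roughly three significant figures:

```python
import math

def pretty_size(value):
    """Convert a number of bytes into a human-readable string (as above)."""
    exp = int(math.log(value, 1024)) if value > 0 else 0
    unit = 'bkMGTPEZY'[exp]
    if exp == 0:
        return '%d%s' % (value, unit)  # value < 1024, no fractions
    unit_value = value / (1024.0 ** exp)     # value in the relevant units
    places = int(math.log(unit_value, 10))   # digits before the decimal point
    return '%.*f%s' % (2 - places, unit_value, unit)

print(pretty_size(0))          # -> 0b
print(pretty_size(1536))       # -> 1.50k
print(pretty_size(123456789))  # -> 118M
```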
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _on_open(self, _):
"""Joins the hack.chat channel and starts pinging.""" |
nick = self._format_nick(self._nick, self._pwd)
data = {"cmd": "join", "channel": self._channel, "nick": nick}
self._send_packet(data)
self._thread = True
threading.Thread(target=self._ping).start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def join(self, new_channel, nick, pwd=None):
"""Joins a new channel. Keyword arguments: new_channel: <str>; the channel to connect to nick: <str>; the nickname to use pwd: <str>; the (optional) password to use """ |
self._send_packet({"cmd": "join", "channel": new_channel,
"nick": self._format_nick(nick, pwd)}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload(client, source_dir):
""" Upload images to play store. The function will iterate through source_dir and upload all matching image_types found in folder herachy. """ |
print('')
print('upload images')
print('-------------')
base_image_folders = [
os.path.join(source_dir, 'images', x) for x in image_types]
for type_folder in base_image_folders:
if os.path.exists(type_folder):
image_type = os.path.basename(type_folder)
langfolders = filter(os.path.isdir, list_dir_abspath(type_folder))
for language_dir in langfolders:
language = os.path.basename(language_dir)
delete_and_upload_images(
client, image_type, language, type_folder) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_and_upload_images(client, image_type, language, base_dir):
""" Delete and upload images with given image_type and language. Function will stage delete and stage upload all found images in matching folders. """ |
print('{0} {1}'.format(image_type, language))
files_in_dir = os.listdir(os.path.join(base_dir, language))
delete_result = client.deleteall(
'images', imageType=image_type, language=language)
deleted = delete_result.get('deleted', list())
for deleted_files in deleted:
print(' delete image: {0}'.format(deleted_files['id']))
for image_file in files_in_dir[:8]:
image_file_path = os.path.join(base_dir, language, image_file)
image_response = client.upload(
'images',
imageType=image_type,
language=language,
media_body=image_file_path)
print(" upload image {0} new id {1}".format(image_file, image_response['image']['id'])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(client, target_dir):
"""Download images from play store into folder herachy.""" |
print('download image previews')
print(
"Warning! Downloaded images are only previews! "
"They may be too small for upload.")
tree = {}
listings = client.list('listings')
languages = [listing['language'] for listing in listings]
parameters = [{'imageType': image_type, 'language': language}
for image_type in image_types for language in languages]
tree = {image_type: {language: list()
for language in languages}
for image_type in image_types}
for params in parameters:
result = client.list('images', **params)
image_type = params['imageType']
language = params['language']
tree[image_type][language] = [r['url'] for r in result]
for image_type, language_map in tree.items():
for language, files in language_map.items():
if len(files) > 0:
mkdir_p(
os.path.join(target_dir, 'images', image_type, language))
if image_type in single_image_types:
if len(files) > 0:
image_url = files[0]
path = os.path.join(
target_dir,
'images',
image_type,
language,
image_type)
load_and_save_image(image_url, path)
else:
for idx, image_url in enumerate(files):
path = os.path.join(
target_dir,
'images',
image_type,
language,
image_type + '_' + str(idx))
load_and_save_image(image_url, path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_and_save_image(url, destination):
"""Download image from given url and saves it to destination.""" |
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
# create the url and the request
req = Request(url)
# Open the url
try:
f = urlopen(req)
print("downloading " + url)
# Open our local file for writing
local_file = open(destination, "wb")
# Write to our local file
local_file.write(f.read())
local_file.close()
file_type = imghdr.what(destination)
local_file = open(destination, "rb")
data = local_file.read()
local_file.close()
final_file = open(destination + '.' + file_type, "wb")
final_file.write(data)
final_file.close()
print('save image preview {0}'.format(destination + '.' + file_type))
os.remove(destination)
# handle errors
except HTTPError as e:
print("HTTP Error:", e.code, url)
except URLError as e:
print("URL Error:", e.reason, url) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_qapp():
"""Return an instance of QApplication. Creates one if neccessary. :returns: a QApplication instance :rtype: QApplication :raises: None """ |
global app
app = QtGui.QApplication.instance()
if app is None:
app = QtGui.QApplication([], QtGui.QApplication.GuiClient)
return app |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_all_resources():
"""Load all resources inside this package When compiling qt resources, the compiled python file will register the resource on import. .. Warning:: This will simply import all modules inside this package """ |
pkgname = resources.__name__
for importer, mod_name, _ in pkgutil.iter_modules(resources.__path__):
full_mod_name = '%s.%s' % (pkgname, mod_name)
if full_mod_name not in sys.modules:
module = importer.find_module(mod_name
).load_module(full_mod_name)
log.debug("Loaded resource from: %s", module) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_main_style(widget):
"""Load the main.qss and apply it to the application :param widget: The widget to apply the stylesheet to. Can also be a QApplication. ``setStylesheet`` is called on the widget. :type widget: :class:`QtGui.QWidget` :returns: None :rtype: None :raises: None """ |
load_all_resources()
with open(MAIN_STYLESHEET, 'r') as qss:
sheet = qss.read()
widget.setStyleSheet(sheet) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wrap(ptr, base=None):
"""Wrap the given pointer with shiboken and return the appropriate QObject :returns: if ptr is not None returns a QObject that is cast to the appropriate class :rtype: QObject | None :raises: None """ |
if ptr is None:
return None
ptr = long(ptr) # Ensure type
if base is None:
qObj = shiboken.wrapInstance(long(ptr), QtCore.QObject)
metaObj = qObj.metaObject()
cls = metaObj.className()
superCls = metaObj.superClass().className()
if hasattr(QtGui, cls):
base = getattr(QtGui, cls)
elif hasattr(QtGui, superCls):
base = getattr(QtGui, superCls)
else:
base = QtGui.QWidget
return shiboken.wrapInstance(long(ptr), base) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dt_to_qdatetime(dt):
"""Convert a python datetime.datetime object to QDateTime :param dt: the datetime object :type dt: :class:`datetime.datetime` :returns: the QDateTime conversion :rtype: :class:`QtCore.QDateTime` :raises: None """ |
return QtCore.QDateTime(QtCore.QDate(dt.year, dt.month, dt.day),
QtCore.QTime(dt.hour, dt.minute, dt.second)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_icon(name, aspix=False, asicon=False):
"""Return the real file path to the given icon name If aspix is True return as QtGui.QPixmap, if asicon is True return as QtGui.QIcon. :param name: the name of the icon :type name: str :param aspix: If True, return a QtGui.QPixmap. :type aspix: bool :param asicon: If True, return a QtGui.QIcon. :type asicon: bool :returns: The real file path to the given icon name. If aspix is True return as QtGui.QPixmap, if asicon is True return as QtGui.QIcon. If both are True, a QtGui.QIcon is returned. :rtype: string :raises: None """ |
datapath = os.path.join(ICON_PATH, name)
icon = pkg_resources.resource_filename('jukeboxcore', datapath)
if aspix or asicon:
icon = QtGui.QPixmap(icon)
if asicon:
icon = QtGui.QIcon(icon)
return icon |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allinstances(cls):
"""Return all instances that inherit from JB_Gui :returns: all instances that inherit from JB_Gui :rtype: list :raises: None """ |
JB_Gui._allinstances = weakref.WeakSet([i for i in cls._allinstances if shiboken.isValid(i)])
return list(cls._allinstances) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def classinstances(cls):
"""Return all instances of the current class JB_Gui will not return the instances of subclasses A subclass will only return the instances that have the same type as the subclass. So it won\'t return instances of further subclasses. :returns: all instnaces of the current class :rtype: list :raises: None """ |
l = [i for i in cls.allinstances() if type(i) == cls]
return l |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def instances(cls):
"""Return all instances of this class and subclasses :returns: all instances of the current class and subclasses :rtype: list :raises: None """ |
l = [i for i in cls.allinstances() if isinstance(i, cls)]
return l |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def error(self, error_msg):
""" Outputs error message on own logger. Also raises exceptions if need be. Args: error_msg: message to output """ |
if self.logger is not None:
self.logger.error(error_msg)
if self.exc is not None:
raise self.exc(error_msg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_data(self):
"""Clear both ontology and annotation data. Parameters Returns ------- None """ |
self.clear_annotation_data()
self.terms = {}
self._alt_id = {}
self._syn2id = {}
self._name2id = {}
self._flattened = False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_annotation_data(self):
"""Clear annotation data. Parameters Returns ------- None """ |
self.genes = set()
self.annotations = []
self.term_annotations = {}
self.gene_annotations = {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _flatten_ancestors(self, include_part_of=True):
"""Determines and stores all ancestors of each GO term. Parameters include_part_of: bool, optional Whether to include ``part_of`` relations in determining ancestors. Returns ------- None """ |
def get_all_ancestors(term):
ancestors = set()
for id_ in term.is_a:
ancestors.add(id_)
ancestors.update(get_all_ancestors(self.terms[id_]))
if include_part_of:
for id_ in term.part_of:
ancestors.add(id_)
ancestors.update(get_all_ancestors(self.terms[id_]))
return ancestors
for term in self.terms.values():
term.ancestors = get_all_ancestors(term) |
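
The recursive flattening above can be sketched independently of the GO term class, using a plain dict as the `is_a` graph. All names below are illustrative stand-ins, not part of the original module:

```python
def flatten_ancestors(is_a):
    """Compute all transitive ancestors for every node in an is_a DAG.

    is_a maps each term ID to the list of its direct parents.
    """
    def get_all_ancestors(term_id):
        ancestors = set()
        for parent in is_a[term_id]:
            ancestors.add(parent)
            # recurse into the parent to collect its ancestors too
            ancestors.update(get_all_ancestors(parent))
        return ancestors

    return {term_id: get_all_ancestors(term_id) for term_id in is_a}

# toy ontology: C is_a B, B is_a A
graph = {'A': [], 'B': ['A'], 'C': ['B']}
ancestors = flatten_ancestors(graph)
```

Like the original, this assumes the graph is acyclic; a cycle would recurse forever.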
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_gene_goterms(self, gene, ancestors=False):
"""Return all GO terms a particular gene is annotated with. Parameters gene: str The gene symbol of the gene. ancestors: bool, optional If set to True, also return all ancestor GO terms. Returns ------- set of GOTerm objects The set of GO terms the gene is annotated with. Notes ----- If a gene is annotated with a particular GO term, it can also be considered annotated with all ancestors of that GO term. """ |
annotations = self.gene_annotations[gene]
terms = set(ann.term for ann in annotations)
if ancestors:
assert self._flattened
ancestor_terms = set()
for t in terms:
ancestor_terms.update(self.terms[id_] for id_ in t.ancestors)
terms |= ancestor_terms
return frozenset(terms) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_goterm_genes(self, id_, descendants=True):
"""Return all genes that are annotated with a particular GO term. Parameters id_: str GO term ID of the GO term. descendants: bool, optional If set to False, only return genes that are directly annotated with the specified GO term. By default, also genes annotated with any descendant term are returned. Returns ------- Notes """ |
# determine which terms to include
main_term = self.terms[id_]
check_terms = {main_term, }
if descendants:
assert self._flattened
check_terms.update([self.terms[id_]
for id_ in main_term.descendants])
# get annotations of all included terms
genes = set()
for term in check_terms:
genes.update(ann.gene for ann in self.term_annotations[term.id])
return frozenset(genes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_gene_sets(self, min_genes=None, max_genes=None):
"""Return the set of annotated genes for each GO term. Parameters min_genes: int, optional Exclude GO terms with fewer than this number of genes. max_genes: int, optional Exclude GO terms with more than this number of genes. Returns ------- GeneSetCollection A gene set "database" with one gene set for each GO term. """ |
if not self.terms:
raise ValueError('You need to first parse an OBO file!')
if not self.annotations:
raise ValueError('You need to first parse a gene association '
'file!')
all_term_ids = sorted(self.terms.keys())
# go over all GO terms and get associated genes
logger.info('Obtaining GO term associations...')
# n = len(all_term_ids)
# term_gene_counts = []
# term_ids = []
term_genes = OrderedDict()
geneset_terms = {}
gene_sets = []
for j, id_ in enumerate(all_term_ids):
tg = self.get_goterm_genes(id_)
assert isinstance(tg, frozenset)
c = len(tg)
if c == 0:
continue
if (min_genes is not None and c < min_genes) or \
(max_genes is not None and c > max_genes):
# term doesn't meet min/max number of genes criteria
continue
# for finding redundant terms (use set of genes as key)
try:
geneset_terms[tg].append(id_)
except KeyError:
geneset_terms[tg] = [id_]
term_genes[id_] = tg
selected = len(term_genes)
affected = 0
excl = 0
for id_, tg in term_genes.items():
# check if there are redundant terms
term = self.terms[id_]
if len(geneset_terms[tg]) > 1:
gt = geneset_terms[tg]
affected += 1
# check if this term is an ancestor of any of them
# if so, exclude it
excluded = False
for other_id in gt:
if (other_id != id_) and (other_id in term.descendants):
excluded = True
break
if excluded:
excl += 1
continue
# if the term is not redundant with any other term,
# or if it isn't the ancestor of any redundant term,
# add its gene set to the list
name = term.name
source = 'GO'
coll = term.domain_short
desc = term.definition
gs = GeneSet(id_, name, tg, source=source,
collection=coll, description=desc)
gene_sets.append(gs)
D = GeneSetCollection(gene_sets)
logger.info('# terms selected initially: %d', selected)
logger.info('# terms with redundant gene sets: %d', affected)
logger.info('# terms excluded due to redundancy: %d', excl)
logger.info('# terms retained: %d', D.n)
return D |
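
The redundancy detection in the function above hinges on using a frozenset of genes as a dict key, so that terms annotated with exactly the same genes end up grouped together. A minimal self-contained sketch (the term IDs and gene symbols are made up):

```python
def group_redundant(term_genes):
    """Group GO term IDs whose annotated gene sets are identical.

    term_genes maps a term ID to a frozenset of gene symbols; using
    the frozenset itself as a dict key groups terms with equal genes.
    """
    geneset_terms = {}
    for term_id, genes in term_genes.items():
        geneset_terms.setdefault(genes, []).append(term_id)
    return geneset_terms

groups = group_redundant({
    'GO:1': frozenset({'TP53', 'BRCA1'}),
    'GO:2': frozenset({'TP53', 'BRCA1'}),   # redundant with GO:1
    'GO:3': frozenset({'EGFR'}),
})
```

Any group longer than one marks a set of mutually redundant terms, which is exactly the condition the loop above tests with `len(geneset_terms[tg]) > 1`.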
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_meta(self, f):
"""Read the headers of a file in file format and place them in the self.meta dictionary. """ |
if not isinstance(f, BacktrackableFile):
f = BacktrackableFile(f)
try:
(name, value) = self.read_meta_line(f)
while name:
if name == 'nominal_offset':
    name = 'timestamp_rounding'
elif name == 'actual_offset':
    name = 'timestamp_offset'
method_name = 'get_{}'.format(name)
method = getattr(self, method_name, None)
if method:
method(name, value)
name, value = self.read_meta_line(f)
if not name and not value:
break
except ParsingError as e:
e.args = e.args + (f.line_number,)
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_excluded(self, prop, info_dict):
""" Check if the given prop should be excluded from the export """ |
if prop.key in BLACKLISTED_KEYS:
return True
if info_dict.get('exclude', False):
return True
if prop.key in self.excludes:
return True
if self.includes and prop.key not in self.includes:
return True
return False |
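
The exclusion rules above can be exercised outside SQLAlchemy with plain keys and dicts; `BLACKLISTED_KEYS` below is a hypothetical stand-in for the module-level constant, and `includes`/`excludes` are passed explicitly instead of read from the exporter instance:

```python
BLACKLISTED_KEYS = ('_sa_instance_state',)  # hypothetical blacklist

def is_excluded(key, info, includes=(), excludes=()):
    """Decide whether a column should be skipped in an export."""
    if key in BLACKLISTED_KEYS:
        return True
    if info.get('exclude', False):       # per-column opt-out
        return True
    if key in excludes:                  # exporter-level exclude list
        return True
    if includes and key not in includes:  # whitelist, when present
        return True
    return False
```

Note the whitelist only applies when non-empty, matching the `self.includes and ...` guard in the original.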
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_title(self, prop, main_infos, info_dict):
""" Return the title configured as in colanderalchemy """ |
result = main_infos.get('label')
if result is None:
result = info_dict.get('colanderalchemy', {}).get('title')
if result is None:
result = prop.key
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_prop_infos(self, prop):
""" Return the infos configured for this specific prop, merging the different configuration level """ |
info_dict = self.get_info_field(prop)
main_infos = info_dict.get('export', {}).copy()
infos = main_infos.get(self.config_key, {})
main_infos['label'] = self._get_title(prop, main_infos, info_dict)
main_infos['name'] = prop.key
main_infos['key'] = prop.key
main_infos.update(infos)
main_infos['__col__'] = prop
return main_infos |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _collect_headers(self):
""" Collect headers from the models attribute info col """ |
res = []
for prop in self.get_sorted_columns():
main_infos = self._get_prop_infos(prop)
if self._is_excluded(prop, main_infos):
continue
if isinstance(prop, RelationshipProperty):
main_infos = self._collect_relationship(main_infos, prop, res)
if not main_infos:
# Still no success: skip this relationship column
print("Some information about a relationship may be missing")
continue
else:
main_infos = self._merge_many_to_one_field_from_fkey(
main_infos, prop, res
)
if not main_infos:
continue
if isinstance(main_infos, (list, tuple)):
# In case _collect_relationship returned a list
res.extend(main_infos)
else:
res.append(main_infos)
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _merge_many_to_one_field_from_fkey(self, main_infos, prop, result):
""" Find the relationship associated with this fkey and set the title :param dict main_infos: The already collected datas about this column :param obj prop: The property mapper of the relationship :param list result: The actual collected headers :returns: a main_infos dict or None """ |
if prop.columns[0].foreign_keys and prop.key.endswith('_id'):
# We have a foreign key, we'll try to merge it with the
# associated foreign key
rel_name = prop.key[0:-3]
for val in result:
if val["name"] == rel_name:
val["label"] = main_infos['label']
main_infos = None # We can forget this field in export
break
return main_infos |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_row(self, obj):
""" fill a new row with the given obj obj instance of the exporter's model """ |
row = {}
for column in self.headers:
value = ''
if '__col__' in column:
if isinstance(column['__col__'], ColumnProperty):
value = self._get_column_cell_val(obj, column)
elif isinstance(column['__col__'], RelationshipProperty):
value = self._get_relationship_cell_val(obj, column)
row[column['name']] = value
self._datas.append(self.format_row(row)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_formatted_val(self, obj, name, column):
""" Format the value of the attribute 'name' from the given object """ |
attr_path = name.split('.')
val = None
tmp_val = obj
for attr in attr_path:
tmp_val = getattr(tmp_val, attr, None)
if tmp_val is None:
break
if tmp_val is not None:
val = tmp_val
return format_value(column, val, self.config_key) |
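
The dotted-path traversal in `_get_formatted_val` can be sketched as a standalone helper; the classes below are toy stand-ins and the `format_value` call is replaced by a plain default:

```python
def get_dotted_attr(obj, path, default=None):
    """Follow a dotted attribute path ('a.b.c'), stopping at the first None."""
    val = obj
    for attr in path.split('.'):
        val = getattr(val, attr, None)
        if val is None:
            return default
    return val

class Company:
    name = 'Acme'

class User:
    company = Company()

value = get_dotted_attr(User(), 'company.name')
```

This lets an export column named `company.name` reach through a relationship without special-casing each level.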
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_relationship_cell_val(self, obj, column):
""" Return the value to insert in a relationship cell """ |
val = ""
key = column['key']
related_key = column.get('related_key', None)
related_obj = getattr(obj, key, None)
if related_obj is None:
return ""
if column['__col__'].uselist: # OneToMany
# We know how to retrieve a value from the related objects
if related_key is not None:
# Only the related object of the given index
if column.get('index') is not None:
if len(related_obj) > column['index']:
rel_obj = related_obj[column['index']]
val = self._get_formatted_val(
rel_obj,
related_key,
column,
)
# We join all the related objects val
else:
_vals = []
for rel_obj in related_obj:
_vals.append(
self._get_formatted_val(
rel_obj,
related_key,
column,
)
)
val = '\n'.join(_vals)
else: # Many to One
if related_key is not None:
val = self._get_formatted_val(related_obj, related_key, column)
return val |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_column_cell_val(self, obj, column):
""" Return a value of a "column" cell """ |
name = column['name']
return self._get_formatted_val(obj, name, column) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def join(input_files, output_file):
'''
Join geojsons into one. The spatial reference system of the output file is the same
as the one of the last file in the list.
Args:
input_files (list): List of file name strings.
output_file (str): Output file name.
'''
# get feature collections
final_features = []
for file in input_files:
with open(file) as f:
feat_collection = geojson.load(f)
final_features += feat_collection['features']
feat_collection['features'] = final_features
# write to output file
with open(output_file, 'w') as f:
geojson.dump(feat_collection, f) |
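
Since a GeoJSON FeatureCollection is just a JSON object with a `features` list, the merge logic above can be exercised with plain dicts, file I/O omitted (the feature contents here are invented):

```python
def join_collections(collections):
    """Concatenate the features of several FeatureCollection dicts.

    Metadata (e.g. the CRS) is taken from the last collection,
    mirroring the file-based join above.
    """
    final_features = []
    for coll in collections:
        final_features += coll['features']
    merged = dict(collections[-1])  # keep the last collection's metadata
    merged['features'] = final_features
    return merged

a = {'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'id': 1}]}
b = {'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'id': 2}]}
merged = join_collections([a, b])
```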
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def split(input_file, file_1, file_2, no_in_first_file):
'''
Split a geojson in two separate files.
Args:
input_file (str): Input filename.
file_1 (str): Output file name 1.
file_2 (str): Output file name 2.
no_in_first_file (int): Number of features in input_file to go to file_1.
'''
# get feature collection
with open(input_file) as f:
feat_collection = geojson.load(f)
features = feat_collection['features']
feat_collection_1 = geojson.FeatureCollection(features[0:no_in_first_file])
feat_collection_2 = geojson.FeatureCollection(features[no_in_first_file:])
with open(file_1, 'w') as f:
geojson.dump(feat_collection_1, f)
with open(file_2, 'w') as f:
geojson.dump(feat_collection_2, f) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_from(input_file, property_names):
'''
Reads a geojson and returns a list of value tuples, each value corresponding to a
property in property_names.
Args:
input_file (str): File name.
property_names: List of strings; each string is a property name.
Returns:
List of value tuples.
'''
# get feature collections
with open(input_file) as f:
feature_collection = geojson.load(f)
features = feature_collection['features']
values = [tuple([feat['properties'].get(x)
for x in property_names]) for feat in features]
return values |
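
The property extraction above, restated on plain feature dicts with no geojson dependency; note that `.get` yields None for missing properties rather than raising:

```python
def get_property_tuples(features, property_names):
    """Extract one value tuple per feature, in property_names order."""
    return [tuple(feat['properties'].get(name) for name in property_names)
            for feat in features]

features = [
    {'properties': {'class_name': 'water', 'area': 10}},
    {'properties': {'class_name': 'forest'}},  # 'area' missing -> None
]
values = get_property_tuples(features, ['class_name', 'area'])
```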
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def write_properties_to(data, property_names, input_file, output_file, filter=None):
'''
Writes property data to polygon_file for all geometries indicated in the filter, and
creates output file. The length of data must be equal to the number of geometries
in the filter. Existing property values are overwritten.
Args
data (list): List of tuples. Each entry is a tuple of dimension equal to
property_names.
property_names (list): Property names.
input_file (str): Input file name.
output_file (str): Output file name.
filter (dict): Filter format is {'property_name':[value1,value2,...]}.What this
achieves is to write the first entry of data to the properties of the feature
with 'property_name'=value1, and so on. This makes sense only if these values
are unique. If Filter=None, then data is written to all geometries in the
input file.
'''
with open(input_file) as f:
feature_collection = geojson.load(f)
features = feature_collection['features']
if filter is None:
for i, feature in enumerate(features):
for j, property_value in enumerate(data[i]):
feature['properties'][property_names[j]] = property_value
else:
filter_name = list(filter.keys())[0]
filter_values = np.array(list(filter.values())[0])
for feature in features:
compare_value = feature['properties'][filter_name]
ind = np.where(filter_values == compare_value)[0]
if len(ind) > 0:
    for j, property_value in enumerate(data[ind[0]]):
        feature['properties'][property_names[j]] = property_value
feature_collection['features'] = features
with open(output_file, 'w') as f:
geojson.dump(feature_collection, f) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def find_unique_values(input_file, property_name):
'''
Find unique values of a given property in a geojson file.
Args
input_file (str): File name.
property_name (str): Property name.
Returns
List of distinct values of property. If property does not exist, it returns None.
'''
with open(input_file) as f:
feature_collection = geojson.load(f)
features = feature_collection['features']
values = np.array([feat['properties'].get(property_name)
for feat in features])
return np.unique(values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_balanced_geojson(input_file, classes, output_file='balanced.geojson',
samples_per_class=None):
'''
Create a geojson comprised of balanced classes from the class_name property in
input_file. Randomly selects polygons from all classes.
Args:
input_file (str): File name
classes (list[str]): Classes in input_file to include in the balanced output file.
Must exactly match the 'class_name' property in the features of input_file.
output_file (str): Name under which to save the balanced output file. Defaults to
balanced.geojson.
samples_per_class (int or None): Number of features to select per class in
input_file. If None will use the smallest class size. Defaults to None.
'''
if not output_file.endswith('.geojson'):
output_file += '.geojson'
with open(input_file) as f:
data = geojson.load(f)
# Sort classes in separate lists
sorted_classes = {clss : [] for clss in classes}
for feat in data['features']:
try:
sorted_classes[feat['properties']['class_name']].append(feat)
except (KeyError):
continue
# Determine sample size per class
if not samples_per_class:
smallest_class = min(sorted_classes, key=lambda clss: len(sorted_classes[clss]))
samples_per_class = len(sorted_classes[smallest_class])
# Randomly select features from each class
try:
samps = [random.sample(feats, samples_per_class) for feats in sorted_classes.values()]
final = [feat for sample in samps for feat in sample]
except (ValueError):
raise Exception('Insufficient features in at least one class. Set ' \
'samples_per_class to None to use maximum amount of '\
'features.')
# Shuffle and save balanced data
np.random.shuffle(final)
data['features'] = final
with open(output_file, 'w') as f:
geojson.dump(data, f) |
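
A self-contained sketch of the balancing logic, working on feature dicts in memory and using a seeded `random.Random` for reproducibility; the file handling and geojson dependency are omitted:

```python
import random

def balance_classes(features, classes, samples_per_class=None, seed=0):
    """Downsample features so every listed class contributes equally."""
    rng = random.Random(seed)
    sorted_classes = {c: [] for c in classes}
    for feat in features:
        cls = feat.get('properties', {}).get('class_name')
        if cls in sorted_classes:
            sorted_classes[cls].append(feat)
    if samples_per_class is None:
        # default to the smallest class size
        samples_per_class = min(len(v) for v in sorted_classes.values())
    balanced = []
    for feats in sorted_classes.values():
        balanced.extend(rng.sample(feats, samples_per_class))
    rng.shuffle(balanced)
    return balanced

feats = ([{'properties': {'class_name': 'a'}}] * 3 +
         [{'properties': {'class_name': 'b'}}] * 5)
out = balance_classes(feats, ['a', 'b'])
```

As in the original, `random.sample` raises ValueError when a class has fewer features than `samples_per_class`.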
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_manage_parser(self, parser):
"""Setup the given parser for manage command :param parser: the argument parser to setup :type parser: :class:`argparse.ArgumentParser` :returns: None :rtype: None :raises: None """ |
parser.set_defaults(func=self.manage)
parser.add_argument("args", nargs=argparse.REMAINDER,
help="arguments for django manage command") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def manage(self, namespace, unknown):
"""Execute the manage command for django :param namespace: namespace containing args with django manage.py arguments :type namespace: Namespace :param unknown: list of unknown arguments that get passed to the manage.py command :type unknown: list :returns: None :rtype: None :raises: None """ |
# first argument is usually manage.py. This will also adapt the help messages
args = ['jukebox manage']
args.extend(namespace.args)
args.extend(unknown)
from django.core.management import execute_from_command_line
execute_from_command_line(args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_compile_ui_parser(self, parser):
"""Setup the given parser for the compile_ui command :param parser: the argument parser to setup :type parser: :class:`argparse.ArgumentParser` :returns: None :rtype: None :raises: None """ |
parser.set_defaults(func=self.compile_ui)
parser.add_argument('uifile',
nargs="+",
help='the uifile that will be compiled.\
The compiled file will be in the same directory but ends with _ui.py.\
Optional a list of files.',
type=argparse.FileType('r')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compile_ui(self, namespace, unknown):
"""Compile qt designer files :param namespace: namespace containing arguments from the launch parser :type namespace: Namespace :param unknown: list of unknown arguments :type unknown: list :returns: None :rtype: None :raises: None """ |
uifiles = namespace.uifile
for f in uifiles:
qtcompile.compile_ui(f.name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_compile_rcc_parser(self, parser):
"""Setup the given parser for the compile_rcc command :param parser: the argument parser to setup :type parser: :class:`argparse.ArgumentParser` :returns: None :rtype: None :raises: None """ |
parser.set_defaults(func=self.compile_rcc)
parser.add_argument('rccfile',
help='the resource file to compile.\
The compiled file will be in the jukeboxcore.gui.resources package and ends with _rc.py',
type=argparse.FileType('r')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compile_rcc(self, namespace, unknown):
"""Compile qt resource files :param namespace: namespace containing arguments from the launch parser :type namespace: Namespace :param unknown: list of unknown arguments :type unknown: list :returns: None :rtype: None :raises: None """ |
rccfile = namespace.rccfile.name
qtcompile.compile_rcc(rccfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_nullable_array(value):
""" Converts value into array object. Single values are converted into arrays with a single element. :param value: the value to convert. :return: array object or None when value is None. """ |
# Shortcuts
if value is None:
return None
if type(value) == list:
return value
if type(value) in [tuple, set]:
return list(value)
return [value] |
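
The same conversion rules can be restated with `is None` / `isinstance` checks, the idiomatic Python spellings of the comparisons used here (note that `isinstance` also accepts subclasses, a minor behavioral difference from `type(...) ==` tests):

```python
def to_nullable_array(value):
    """Wrap a scalar in a list, pass lists through, convert tuples/sets."""
    if value is None:
        return None
    if isinstance(value, list):
        return value
    if isinstance(value, (tuple, set)):
        return list(value)
    return [value]
```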
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_array_with_default(value, default_value):
""" Converts value into array object with specified default. Single values are converted into arrays with single element. :param value: the value to convert. :param default_value: default array object. :return: array object or default array when value is None. """ |
result = ArrayConverter.to_nullable_array(value)
return result if result is not None else default_value
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_to_array(value):
""" Converts value into array object with empty array as default. Strings with comma-delimited values are split into array of strings. :param value: the list to convert. :return: array object or empty array when value is None """ |
if value is None:
return []
elif type(value) in [list, tuple, set]:
return list(value)
elif isinstance(value, str):
return value.split(',')
else:
return [value] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_server_sock():
"Get a server socket"
s = _socket.socket()
s.setsockopt(_socket.SOL_SOCKET, _socket.SO_REUSEADDR, True)
s.setblocking(False)
s.bind(('0.0.0.0', _config.server_listen_port))
s.listen(5)
return s |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_client_sock(addr):
"Get a client socket"
s = _socket.create_connection(addr)
s.setsockopt(_socket.SOL_SOCKET, _socket.SO_REUSEADDR, True)
s.setblocking(False)
return s |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_beacon():
"Get a beacon socket"
s = _socket.socket(_socket.AF_INET, _socket.SOCK_DGRAM)
s.setsockopt(_socket.SOL_SOCKET, _socket.SO_REUSEADDR, True)
s.setsockopt(_socket.SOL_SOCKET, _socket.SO_BROADCAST, True)
return s |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def message(self):
''' Override this to provide failure message'''
name = self.__class__.__name__
return "{0} {1}".format(humanize(name),
pp(*self.expectedArgs, **self.expectedKwArgs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_results_info(self):
""" Returns the search results info for this command invocation or None. The search results info object is created from the search results info file associated with the command invocation. Splunk does not pass the location of this file by default. You must request it by specifying these configuration settings in commands.conf: .. code-block:: python enableheader=true requires_srinfo=true The :code:`enableheader` setting is :code:`true` by default. Hence, you need not set it. The :code:`requires_srinfo` setting is false by default. Hence, you must set it. :return: :class:`SearchResultsInfo`, if :code:`enableheader` and :code:`requires_srinfo` are both :code:`true`. Otherwise, if either :code:`enableheader` or :code:`requires_srinfo` are :code:`false`, a value of :code:`None` is returned. """ |
if self._search_results_info is not None:
return self._search_results_info
try:
info_path = self.input_header['infoPath']
except KeyError:
return None
def convert_field(field):
return (field[1:] if field[0] == '_' else field).replace('.', '_')
def convert_value(field, value):
if field == 'countMap':
split = value.split(';')
value = dict((key, int(value))
for key, value in zip(split[0::2], split[1::2]))
elif field == 'vix_families':
value = ElementTree.fromstring(value)
elif value == '':
value = None
else:
try:
value = float(value)
if value.is_integer():
value = int(value)
except ValueError:
pass
return value
with open(info_path, 'rb') as f:
from collections import namedtuple
import csv
reader = csv.reader(f, dialect='splunklib.searchcommands')
fields = [convert_field(x) for x in next(reader)]
values = [convert_value(f, v) for f, v in zip(fields, next(reader))]
search_results_info_type = namedtuple('SearchResultsInfo', fields)
self._search_results_info = search_results_info_type._make(values)
return self._search_results_info |
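
The countMap parsing inside `convert_value` relies on slicing alternating keys and counts out of a semicolon-delimited string; a standalone sketch (the sample keys below are invented):

```python
def parse_count_map(value):
    """Parse Splunk's semicolon-delimited countMap into a dict.

    The string alternates keys and counts: 'k1;3;k2;7'.
    """
    split = value.split(';')
    # even positions are keys, odd positions are their counts
    return {key: int(count) for key, count in zip(split[0::2], split[1::2])}

counts = parse_count_map('duration.command.search;3;invocations;7')
```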
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self, args=argv, input_file=stdin, output_file=stdout):
""" Processes search results as specified by command arguments. :param args: Sequence of command arguments :param input_file: Pipeline input file :param output_file: Pipeline output file """ |
self.logger.debug(u'%s arguments: %s', type(self).__name__, args)
self._configuration = None
self._output_file = output_file
try:
if len(args) >= 2 and args[1] == '__GETINFO__':
ConfigurationSettings, operation, args, reader = self._prepare(args, input_file=None)
self.parser.parse(args, self)
self._configuration = ConfigurationSettings(self)
writer = splunk_csv.DictWriter(output_file, self, self.configuration.keys(), mv_delimiter=',')
writer.writerow(self.configuration.items())
elif len(args) >= 2 and args[1] == '__EXECUTE__':
self.input_header.read(input_file)
ConfigurationSettings, operation, args, reader = self._prepare(args, input_file)
self.parser.parse(args, self)
self._configuration = ConfigurationSettings(self)
if self.show_configuration:
self.messages.append(
'info_message', '%s command configuration settings: %s'
% (self.name, self._configuration))
writer = splunk_csv.DictWriter(output_file, self)
self._execute(operation, reader, writer)
else:
file_name = path.basename(args[0])
message = (
u'Command {0} appears to be statically configured and static '
u'configuration is unsupported by splunklib.searchcommands. '
u'Please ensure that default/commands.conf contains this '
u'stanza:\n'
u'[{0}]\n'
u'filename = {1}\n'
u'supports_getinfo = true\n'
u'supports_rawargs = true\n'
u'outputheader = true'.format(type(self).name, file_name))
raise NotImplementedError(message)
except SystemExit:
raise
except:
import traceback
import sys
error_type, error_message, error_traceback = sys.exc_info()
self.logger.error(traceback.format_exc(error_traceback))
origin = error_traceback
while origin.tb_next is not None:
origin = origin.tb_next
filename = origin.tb_frame.f_code.co_filename
lineno = origin.tb_lineno
self.write_error('%s at "%s", line %d : %s', error_type.__name__, filename, lineno, error_message)
exit(1)
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_contract_allowed(func):
"""Check if Contract is allowed by token """ |
@wraps(func)
def decorator(*args, **kwargs):
contract = kwargs.get('contract')
if (contract and current_user.is_authenticated()
and not current_user.allowed(contract)):
return current_app.login_manager.unauthorized()
return func(*args, **kwargs)
return decorator |
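
A framework-free sketch of the same decorator pattern: `functools.wraps` preserves the wrapped view's metadata while the wrapper enforces a permission check on the `contract` keyword argument. The `is_allowed` and `on_denied` callables below are stand-ins for the Flask-Login calls:

```python
from functools import wraps

def check_allowed(is_allowed, on_denied):
    """Build a decorator rejecting calls whose 'contract' kwarg fails is_allowed."""
    def decorating(func):
        @wraps(func)
        def decorator(*args, **kwargs):
            contract = kwargs.get('contract')
            if contract is not None and not is_allowed(contract):
                return on_denied()
            return func(*args, **kwargs)
        return decorator
    return decorating

@check_allowed(is_allowed=lambda c: c == 'C-1', on_denied=lambda: 'unauthorized')
def view(contract=None):
    return 'ok: %s' % contract

allowed = view(contract='C-1')
denied = view(contract='C-2')
```

Without `@wraps`, `view.__name__` would report `decorator`, which breaks URL routing frameworks that key on function names.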
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_cups_allowed(func):
"""Check if CUPS is allowd by token """ |
@wraps(func)
def decorator(*args, **kwargs):
cups = kwargs.get('cups')
if (cups and current_user.is_authenticated()
and not current_user.allowed(cups, 'cups')):
return current_app.login_manager.unauthorized()
return func(*args, **kwargs)
return decorator |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_py3o_val(value):
""" format a value to fit py3o's context * Handle linebreaks """ |
value = force_unicode(value)
value = escape(value)
value = value.replace(u'\n', u'<text:line-break/>')
return Markup(value) |
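A stdlib-only sketch of the same transformation: `html.escape` stands in for the original `escape`, and a plain `str` is returned instead of a `Markup` object, so the ODF line-break substitution can be seen in isolation.

```python
import html

def format_py3o_val_sketch(value):
    """Escape HTML-significant characters, then turn newlines into ODF line breaks."""
    value = str(value)
    value = html.escape(value)  # neutralise <, >, & (and quotes)
    return value.replace("\n", "<text:line-break/>")
```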
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_compilation_context(instance):
""" Return the compilation context for py3o templating Build a deep dict representation of the given instance and add config values :param obj instance: a SQLAlchemy model instance :return: a multi level dict with context datas :rtype: dict """ |
context_builder = SqlaContext(instance.__class__)
py3o_context = context_builder.compile_obj(instance)
return py3o_context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compile_template(instance, template, additionnal_context=None):
""" Fill the given template with the instance's datas and return the odt file For every instance class, common values are also inserted in the context dict (and so can be used) : * config values :param obj instance: the instance of a model (like Userdatas, Company) :param template: the template object to use :param dict additionnal_context: A dict containing datas we'd like to add to the py3o compilation template :return: a stringIO object filled with the resulting odt's informations """ |
py3o_context = get_compilation_context(instance)
if additionnal_context is not None:
py3o_context.update(additionnal_context)
output_doc = StringIO()
odt_builder = Template(template, output_doc)
odt_builder.render(py3o_context)
return output_doc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collect_columns(self):
""" Collect columns information from a given model. a column info contains the py3 informations exclude Should the column be excluded from the current context ? name the name of the key in the resulting py3o context of the column __col__ The original column object __prop__ In case of a relationship, the SqlaContext wrapping the given object """ |
res = []
for prop in self.get_sorted_columns():
info_dict = self.get_info_field(prop)
export_infos = info_dict.get('export', {}).copy()
main_infos = export_infos.get(self.config_key, {}).copy()
if export_infos.get('exclude'):
if main_infos.get('exclude', True):
continue
infos = export_infos
infos.update(main_infos)
# If the 'name' key is not defined, default it to the column name
infos.setdefault('name', prop.key)
infos['__col__'] = prop
if isinstance(prop, RelationshipProperty):
join = str(prop.primaryjoin)
if join in self.rels:
continue
else:
self.rels.append(str(join))
infos['__prop__'] = SqlaContext(
prop.mapper,
rels=self.rels[:]
)
res.append(infos)
return res |
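The two-level override used above (per-config-key settings winning over the generic `export` settings) can be isolated as a small helper; the dict layout below is an example, not a real model's `info` dict.

```python
def merge_export_infos(info_dict, config_key):
    """Merge generic 'export' settings with the per-key overrides.

    Per-key settings (e.g. for 'py3o') take precedence over the
    generic ones, mirroring the collect_columns logic.
    """
    export_infos = info_dict.get('export', {}).copy()
    main_infos = export_infos.get(config_key, {}).copy()
    infos = export_infos
    infos.update(main_infos)  # per-key values override generic ones
    return infos
```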
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gen_xml_doc(self):
""" Generate the text tags that should be inserted in the content.xml of a full model """ |
res = self.make_doc()
var_tag = """
<text:user-field-decl office:value-type="string"
office:string-value="%s" text:name="py3o.%s"/>"""
text_tag = """<text:p text:style-name="P1">
<text:user-field-get text:name="py3o.%s">%s</text:user-field-get>
</text:p>
"""
keys = sorted(res.keys())
texts = ""
vars = ""
for key in keys:
value = res[key]
vars += var_tag % (value, key)
texts += text_tag % (key, value)
return CONTENT_TMPL % (vars, texts) |
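The XML assembly above relies on Python 2's in-place `list.sort()` on `dict.keys()`; a Python 3 sketch of the same pairing of declaration and usage tags (the tag templates are abbreviated single-line versions of the originals):

```python
VAR_TAG = ('<text:user-field-decl office:value-type="string" '
           'office:string-value="%s" text:name="py3o.%s"/>')
TEXT_TAG = ('<text:p text:style-name="P1">'
            '<text:user-field-get text:name="py3o.%s">%s'
            '</text:user-field-get></text:p>')

def gen_tags(res):
    """Build the declaration and usage tag strings in sorted key order."""
    vars_, texts = "", ""
    for key in sorted(res):
        vars_ += VAR_TAG % (res[key], key)
        texts += TEXT_TAG % (key, res[key])
    return vars_, texts
```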
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_formatted_val(self, obj, attribute, column):
""" Return the formatted value of the attribute "attribute" of the obj "obj" regarding the column's description :param obj obj: The instance we manage :param str attribute: The string defining the path to access the end attribute we want to manage :param dict column: The column description dictionnary :returns: The associated value """ |
attr_path = attribute.split('.')
val = None
tmp_val = obj
for attr in attr_path:
tmp_val = getattr(tmp_val, attr, None)
if tmp_val is None:
break
if tmp_val is not None:
val = tmp_val
value = format_value(column, val, self.config_key)
return format_py3o_val(value) |
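The dotted-path lookup at the heart of `_get_formatted_val` stands on its own: walk `"a.b.c"` with `getattr`, stopping at the first `None`. The `User`/`Company` classes below are toy fixtures for illustration.

```python
def resolve_attr(obj, attribute):
    """Follow a dotted attribute path, returning None on any break."""
    val = obj
    for attr in attribute.split('.'):
        val = getattr(val, attr, None)
        if val is None:
            break
    return val

class Company:
    name = "Acme"

class User:
    company = Company()
```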
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_column_value(self, obj, column):
""" Return a single cell's value :param obj obj: The instance we manage :param dict column: The column description dictionnary :returns: The associated value """ |
return self._get_formatted_val(obj, column['__col__'].key, column) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_to_many_relationship_value(self, obj, column):
""" Get the resulting datas for a One To many or a many to many relationship :param obj obj: The instance we manage :param dict column: The column description dictionnary :returns: The associated value """ |
related_key = column.get('related_key', None)
related = getattr(obj, column['__col__'].key)
value = {}
if related:
total = len(related)
for index, rel_obj in enumerate(related):
if related_key:
compiled_res = self._get_formatted_val(
rel_obj, related_key, column
)
else:
compiled_res = column['__prop__'].compile_obj(
rel_obj
)
value['item_%d' % index] = compiled_res
value[str(index)] = compiled_res
value["_" + str(index)] = compiled_res
if index == 0:
value['first'] = compiled_res
if index == total - 1:
value['last'] = compiled_res
return value |
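The indexing scheme used above makes each related item reachable under several keys (`item_N`, `N`, `_N`, plus `first`/`last` aliases); here it is in isolation with plain values instead of compiled sub-contexts:

```python
def index_related(items):
    """Expose each item of a sequence under item_N, N, _N, first and last keys."""
    value = {}
    total = len(items)
    for index, item in enumerate(items):
        value['item_%d' % index] = item
        value[str(index)] = item
        value['_' + str(index)] = item
        if index == 0:
            value['first'] = item
        if index == total - 1:
            value['last'] = item
    return value
```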
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_to_one_relationship_value(self, obj, column):
""" Compute datas produced for a many to one relationship :param obj obj: The instance we manage :param dict column: The column description dictionnary :returns: The associated value """ |
related_key = column.get('related_key', None)
related = getattr(obj, column['__col__'].key)
if related:
if related_key is not None:
value = self._get_formatted_val(
related, related_key, column
)
else:
value = column['__prop__'].compile_obj(related)
else:
value = ""
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_relationship_value(self, obj, column):
""" Compute datas produced for a given relationship """ |
if column['__col__'].uselist:
value = self._get_to_many_relationship_value(obj, column)
else:
value = self._get_to_one_relationship_value(obj, column)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compile_obj(self, obj):
""" generate a context based on the given obj :param obj: an instance of the model """ |
res = {}
for column in self.columns:
if isinstance(column['__col__'], ColumnProperty):
value = self._get_column_value(obj, column)
elif isinstance(column['__col__'], RelationshipProperty):
value = self._get_relationship_value(obj, column)
res[column['name']] = value
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(_filename, _long, enter=True):
"""Write the call info to file""" |
def method(*arg, **kw): # pylint: disable=W0613
"""Reference to the advice in order to facilitate argument support."""
def get_short(_fname):
"""Get basename of the file. If file is __init__.py, get its directory too"""
dir_path, short_fname = os.path.split(_fname)
short_fname = short_fname.replace(".py", "")
if short_fname == "__init__":
short_fname = "%s.%s" % (os.path.basename(dir_path), short_fname)
return short_fname
def get_long(_fname):
"""Get full reference to the file"""
try:
return re.findall(r'(ansible.*)\.py', _fname)[-1].replace(os.sep, ".")
except IndexError:
# If ansible is extending some library, ansible won't be present in the path.
return get_short(_fname)
meth_code = arg[1].__func__.__code__
fname, lineno, _name = meth_code.co_filename, meth_code.co_firstlineno, meth_code.co_name
marker = ENTER_MARKER
if not _long:
_fname, _rjust = get_short(fname), RJUST_SMALL
else:
_fname, _rjust = get_long(fname), RJUST_LONG
if not enter:
try:
meth_line_count = len(inspect.getsourcelines(meth_code)[0])
lineno += meth_line_count - 1
except Exception: # pylint: disable=W0703
# TODO: Find other way to get ending line number for the method
# Line number same as start of method.
pass
marker = EXIT_MARKER
with open(_filename, "a") as fptr:
call_info = "%s: %s:%s %s%s\n" % (
_fname.rjust(_rjust), # filename
str(lineno).rjust(4), # line number
(" %s" % DEPTH_MARKER) * COUNT, # Depth
marker, # Method enter, exit marker
_name # Method name
)
fptr.write(call_info)
return method |
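The advice above is written against Python 2 code objects (`im_func`, `func_code`) and an aspect framework; a Python 3 sketch of the same enter/exit call logging as a plain decorator, recording to an in-memory list instead of a file:

```python
import os

TRACE = []  # stand-in for the trace file

def trace_calls(func):
    """Log file, first line number and name on entry and exit of func."""
    code = func.__code__
    fname = os.path.basename(code.co_filename).replace(".py", "")
    def wrapper(*args, **kwargs):
        TRACE.append("%s:%d => %s" % (fname, code.co_firstlineno, func.__name__))
        try:
            return func(*args, **kwargs)
        finally:
            TRACE.append("%s:%d <= %s" % (fname, code.co_firstlineno, func.__name__))
    return wrapper

@trace_calls
def greet():
    return "hi"
```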
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _addPub(self, stem, source):
"""Enters stem as value for source. """ |
key = re.sub("[^A-Za-z0-9&]+", " ", source).strip().upper()
self.sourceDict[key] = stem
self.bibstemWords.setdefault(stem, set()).update(
key.lower().split()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _loadOneSource(self, sourceFName):
"""handles one authority file including format auto-detection. """ |
sourceLines = open(sourceFName).readlines()
del sourceLines[0]
if len(sourceLines[0].split("\t")) == 2:
self._loadTwoPartSource(sourceFName, sourceLines)
elif len(sourceLines[0].split("\t")) == 3:
self._loadThreePartSource(sourceFName, sourceLines)
else:
raise Error("%s does not appear to be a source authority file" % sourceFName)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _loadSources(self):
"""creates a trigdict and populates it with data from self.autorityFiles """ |
self.confstems = {}
self.sourceDict = newtrigdict.Trigdict()
for fName in self.authorityFiles:
self._loadOneSource(fName)
# We want to allow naked bibstems in references, too
for stem in self.sourceDict.values():
cleanStem = stem.replace(".", "").upper()
self._addPub(stem, cleanStem) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def long_description():
""" Build the long description from a README file located in the same directory as this module. """ |
base_path = os.path.dirname(os.path.realpath(__file__))
with io.open(os.path.join(base_path, 'README.md'), encoding='utf-8') as f:
return f.read() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_build_info(api_instance, build_id=None, keys=DEFAULT_BUILD_KEYS, wait=False):
""" print build info about a job """ |
build = (api_instance.get_build(build_id) if build_id
else api_instance.get_last_build())
output = ""
if wait:
build.block_until_complete()
if 'timestamp' in keys:
output += str(build.get_timestamp()) + '\n'
if 'console' in keys:
output += build.get_console() + '\n'
if 'scm' in keys:
# https://github.com/salimfadhley/jenkinsapi/pull/250
# try/except while this is still occurring
try:
output += build.get_revision() + '\n'
except IndexError:
pass
return output |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _filter_names(names):
""" Given a list of file names, return those names that should be copied. """ |
names = [n for n in names
if n not in EXCLUDE_NAMES]
# This is needed when building a distro from a working
# copy (likely a checkout) rather than a pristine export:
for pattern in EXCLUDE_PATTERNS:
names = [n for n in names
if (not fnmatch.fnmatch(n, pattern))
and (not n.endswith('.py'))]
return names |
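The two-stage filter above can be run with inline data; the exclusion names and patterns below are examples, not Twisted's real `EXCLUDE_NAMES`/`EXCLUDE_PATTERNS`, and the extra `.py` exclusion is left out to keep the pattern stage visible.

```python
import fnmatch

EXCLUDE_NAMES = ["CVS", ".svn"]       # example exact-name exclusions
EXCLUDE_PATTERNS = ["*.pyc", "*~"]    # example glob exclusions

def filter_names(names):
    """Drop excluded names first, then names matching any exclusion pattern."""
    names = [n for n in names if n not in EXCLUDE_NAMES]
    for pattern in EXCLUDE_PATTERNS:
        names = [n for n in names if not fnmatch.fnmatch(n, pattern)]
    return names
```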
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def relative_to(base, relativee):
""" Gets 'relativee' relative to 'basepath'. i.e., 'radix' 'Projects/Twisted' The 'relativee' must be a child of 'basepath'. """ |
basepath = os.path.abspath(base)
relativee = os.path.abspath(relativee)
if relativee.startswith(basepath):
relative = relativee[len(basepath):]
if relative.startswith(os.sep):
relative = relative[1:]
return os.path.join(base, relative)
raise ValueError("%s is not a subpath of %s" % (relativee, basepath)) |