text_prompt stringlengths 157 13.1k | code_prompt stringlengths 7 19.8k ⌀ |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_request(query):
""" Creates a GET request to Yarr! server :param query: Free-text search query :returns: Requests object """ |
yarr_url = app.config.get('YARR_URL', False)
if not yarr_url:
        raise ValueError('No URL to Yarr! server specified in config.')
api_token = app.config.get('YARR_API_TOKEN', False)
headers = {'X-API-KEY': api_token} if api_token else {}
payload = {'q': query}
url = '%s/search' % yarr_url
return requests.get(url, params=payload, headers=headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requires_authentication(func):
""" Function decorator that throws an exception if the user is not authenticated, and executes the function normally if the user is authenticated. """ |
def _auth(self, *args, **kwargs):
if not self._authenticated:
            raise NotAuthenticatedException(
                'Function {} requires authentication'.format(func.__name__))
else:
return func(self, *args, **kwargs)
return _auth |
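A self-contained sketch of how this decorator pattern behaves; `NotAuthenticatedException` and the `Client` class here are stand-ins, not the library's own types:

```python
class NotAuthenticatedException(Exception):
    """Stand-in for the library's exception type."""

def requires_authentication(func):
    def _auth(self, *args, **kwargs):
        if not self._authenticated:
            raise NotAuthenticatedException(
                'Function {} requires authentication'.format(func.__name__))
        return func(self, *args, **kwargs)
    return _auth

class Client:
    """Hypothetical client carrying the _authenticated flag the decorator checks."""
    def __init__(self, authenticated):
        self._authenticated = authenticated

    @requires_authentication
    def whoami(self):
        return 'user'
```

Calling a decorated method on an unauthenticated instance raises before the wrapped function runs; an authenticated instance behaves as if undecorated.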
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_user_info(self):
""" Returns a TSquareUser object representing the currently logged in user. Throws a NotAuthenticatedException if the user is not authenticated. """ |
response = self._session.get(BASE_URL_TSQUARE + '/user/current.json')
response.raise_for_status() # raises an exception if not 200: OK
user_data = response.json()
del user_data['password'] # tsquare doesn't store passwords
return TSquareUser(**user_data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_site_by_id(self, id):
""" Looks up a site by ID and returns a TSquareSite representing that object, or throws an exception if no such site is found. @param id - The entityID of the site to look up @returns A TSquareSite object """ |
response = self._session.get(BASE_URL_TSQUARE + '/site/{}.json'.format(id))
response.raise_for_status()
site_data = response.json()
return TSquareSite(**site_data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sites(self, filter_func=lambda x: True):
""" Returns a list of TSquareSite objects that represent the sites available to a user. @param filter_func - A function taking in a Site object as a parameter that returns a True or False, depending on whether or not that site should be returned by this function. Filter_func should be used to create filters on the list of sites (i.e. user's preferences on what sites to display by default). If not specified, no filter is applied. @returns - A list of TSquareSite objects encapsulating t-square's JSON response. """ |
response = self._session.get(BASE_URL_TSQUARE + 'site.json')
response.raise_for_status() # raise an exception if not 200: OK
site_list = response.json()['site_collection']
if not site_list:
# this means that this t-square session expired. It's up
# to the user to re-authenticate.
self._authenticated = False
raise SessionExpiredException('The session has expired')
result_list = []
for site in site_list:
t_site = TSquareSite(**site)
if not hasattr(t_site, "props"):
t_site.props = {}
if not 'banner-crn' in t_site.props:
t_site.props['banner-crn'] = None
if not 'term' in t_site.props:
t_site.props['term'] = None
if not 'term_eid' in t_site.props:
t_site.props['term_eid'] = None
if filter_func(t_site):
result_list.append(t_site)
return result_list |
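The `filter_func` predicate pattern above can be exercised in isolation; a minimal sketch using `SimpleNamespace` objects as stand-ins for `TSquareSite` (the attribute names are illustrative, not the real site schema):

```python
from types import SimpleNamespace

sites = [SimpleNamespace(title='CS 101', term='Fall'),
         SimpleNamespace(title='MATH 200', term='Spring')]

def filter_sites(site_list, filter_func=lambda s: True):
    # Same contract as get_sites: keep only the sites the predicate accepts;
    # the default predicate accepts everything.
    return [s for s in site_list if filter_func(s)]

fall_only = filter_sites(sites, lambda s: s.term == 'Fall')
```

This is how a caller would express "only show my Fall courses" without the library needing to know about terms at all.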
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_announcements(self, site=None, num=10, age=20):
""" Gets announcements from a site if site is not None, or from every site otherwise. Returns a list of TSquareAnnouncement objects. @param site_obj (TSquareSite) If non-None, gets only the announcements from that site. If none, get anouncements from all sites. @param num - The number of announcements to fetch. Default is 10. @param age - 'How far back' to go to retreive announcements. Default is 20, which means that only announcements that are less than 20 days old will be returned, even if there less than 'num' of them. @returns - A list of TSquareAnnouncement objects. The length will be at most num, and it may be less than num depending on the number of announcements whose age is less than age. """ |
url = BASE_URL_TSQUARE + 'announcement/'
if site:
url += 'site/{}.json?n={}&d={}'.format(site.id, num, age)
else:
url += 'user.json?n={}&d={}'.format(num, age)
request = self._session.get(url)
request.raise_for_status()
announcement_list = request.json()['announcement_collection']
return map(lambda x: TSquareAnnouncement(**x), announcement_list) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_tools(self, site):
""" Gets all tools associated with a site. @param site (TSquareSite) - The site to search for tools @returns A list of dictionaries representing Tsquare tools. """ |
# hack - gotta bypass the tsquare REST api because it kinda sucks with tools
url = site.entityURL.replace('direct', 'portal')
response = self._session.get(url)
response.raise_for_status()
# scrape the resulting html
tools_dict_list = self._html_iface.get_tools(response.text)
return [TSquareTool(**x) for x in tools_dict_list] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_grades(self, site):
""" Gets a list of grades associated with a site. The return type is a dictionary whose keys are assignment categories, similar to how the page is laid out in TSquare. """ |
tools = self.get_tools(site)
grade_tool_filter = [x.href for x in tools if x.name == 'gradebook-tool']
if not grade_tool_filter:
return []
response = self._session.get(grade_tool_filter[0])
response.raise_for_status()
iframes = self._html_iface.get_iframes(response.text)
iframe_url = ''
for frame in iframes:
if frame['title'] == 'Gradebook ':
iframe_url = frame['src']
        if iframe_url == '':
            print("WARNING: NO GRADEBOOK IFRAME FOUND")
            return []
response = self._session.get(iframe_url)
response.raise_for_status()
grade_dict_list = self._html_iface.get_grades(response.text)
return grade_dict_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_syllabus(self, site):
""" Gets the syllabus for a course. The syllabus may or may not contain HTML, depending on the site. TSquare does not enforce whether or not pages are allowed to have HTML, so it is impossible to tell. """ |
tools = self.get_tools(site)
syllabus_filter = [x.href for x in tools if x.name == 'syllabus']
if not syllabus_filter:
return ''
response = self._session.get(syllabus_filter[0])
response.raise_for_status()
iframes = self._html_iface.get_iframes(response.text)
iframe_url = ''
for frame in iframes:
if frame['title'] == 'Syllabus ':
iframe_url = frame['src']
        if iframe_url == '':
            print("WARNING: NO SYLLABUS IFRAME FOUND")
            return ''
response = self._session.get(iframe_url)
response.raise_for_status()
syllabus_html = self._html_iface.get_syllabus(response.text)
return syllabus_html |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_app_scope(name, scope):
"""activate plugins accordingly to config""" |
# load plugins
plugins = []
for plugin_name, active in get('settings').get('rw.plugins', {}).items():
plugin = __import__(plugin_name)
plugin_path = plugin_name.split('.')[1:] + ['plugin']
for sub in plugin_path:
plugin = getattr(plugin, sub)
plugins.append(scope.activate(plugin))
yield plugins
raise rw.gen.Return(scope['settings']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_section(self, section_name):
"""Add an empty section. """ |
if section_name == "DEFAULT":
raise Exception("'DEFAULT' is reserved section name.")
if section_name in self._sections:
raise Exception(
"Error! %s is already one of the sections" % section_name)
else:
self._sections[section_name] = Section(section_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_section(self, section_name):
"""Remove a section, it cannot be the DEFAULT section. """ |
if section_name == "DEFAULT":
raise Exception("'DEFAULT' is reserved section name.")
if section_name in self._sections:
del self._sections[section_name]
else:
        raise Exception("Error! cannot find section '%s'." % section_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_section(self, section):
"""Set a section. If section already exists, overwrite the old one. """ |
    if not isinstance(section, Section):
        raise TypeError("section must be a Section instance.")
    try:
        self.remove_section(section.name)
    except Exception:
        pass
self._sections[section.name] = copy.deepcopy(section) |
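The three section operations above fit together as a small registry; a self-contained sketch with simplified `Section`/`Config` stand-ins showing the intended semantics, including the reserved-name check and the deep copy on overwrite:

```python
import copy

class Section:
    """Minimal stand-in for the real Section class."""
    def __init__(self, name):
        self.name = name

class Config:
    def __init__(self):
        self._sections = {}

    def add_section(self, section_name):
        if section_name == "DEFAULT":
            raise ValueError("'DEFAULT' is a reserved section name.")
        if section_name in self._sections:
            raise KeyError("%s is already one of the sections" % section_name)
        self._sections[section_name] = Section(section_name)

    def remove_section(self, section_name):
        if section_name == "DEFAULT":
            raise ValueError("'DEFAULT' is a reserved section name.")
        if section_name not in self._sections:
            raise KeyError("cannot find section '%s'." % section_name)
        del self._sections[section_name]

    def set_section(self, section):
        if not isinstance(section, Section):
            raise TypeError("section must be a Section instance.")
        # Overwrite silently; deepcopy isolates the stored copy from the caller's.
        self._sections[section.name] = copy.deepcopy(section)
```

`set_section` is the "upsert" of the trio: it never raises on a duplicate, unlike `add_section`.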
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def setup_logger(log_file, level=logging.DEBUG):
'''One function call to set up logging with some nice logs about the machine'''
cfg = AppBuilder.get_pcfg()
logger = cfg['log_module']
# todo make sure structlog is compliant and that logbook is also the correct name???
assert logger in ("logging", "logbook", "structlog"), 'bad logger specified'
    import importlib
    logging = importlib.import_module(logger)
AppBuilder.logger = logging
logging.basicConfig(
filename=log_file,
filemode='w',
level=level,
format='%(asctime)s:%(levelname)s: %(message)s') # one run
logging.debug('System is: %s' % platform.platform())
    logging.debug('Python architecture is: %s' % platform.architecture()[0])
    logging.debug('Machine architecture is: %s' % platform.machine())
set_windows_permissions(log_file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def set_windows_permissions(filename):
'''
At least on windows 7 if a file is created on an Admin account,
Other users will not be given execute or full control.
However if a user creates the file himself it will work...
So just always change permissions after creating a file on windows
Change the permissions for Allusers of the application
The Everyone Group
Full access
http://timgolden.me.uk/python/win32_how_do_i/add-security-to-a-file.html
'''
#Todo rename this to allow_all, also make international not just for english..
if os.name == 'nt':
try:
everyone, domain, type = win32security.LookupAccountName(
"", "Everyone")
except Exception:
            # TODO: fails on non-English-language systems.
            # Just allow permission for the current user instead.
            everyone, domain, type = win32security.LookupAccountName("", win32api.GetUserName())
# ~ user, domain, type = win32security.LookupAccountName ("", win32api.GetUserName())
#~ userx, domain, type = win32security.LookupAccountName ("", "User")
#~ usery, domain, type = win32security.LookupAccountName ("", "User Y")
sd = win32security.GetFileSecurity(
filename,
win32security.DACL_SECURITY_INFORMATION)
# instead of dacl = win32security.ACL()
dacl = sd.GetSecurityDescriptorDacl()
#~ dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_GENERIC_READ | con.FILE_GENERIC_WRITE, everyone)
#~ dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_ALL_ACCESS, user)
dacl.AddAccessAllowedAce(
win32security.ACL_REVISION,
con.FILE_ALL_ACCESS,
everyone)
sd.SetSecurityDescriptorDacl(1, dacl, 0) # may not be necessary
win32security.SetFileSecurity(
filename,
win32security.DACL_SECURITY_INFORMATION,
sd) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def setup_raven():
'''we setup sentry to get all stuff from our logs'''
pcfg = AppBuilder.get_pcfg()
from raven.handlers.logging import SentryHandler
from raven import Client
from raven.conf import setup_logging
client = Client(pcfg['raven_dsn'])
handler = SentryHandler(client)
# TODO VERIFY THIS -> This is the way to do it if you have a paid account, each log call is an event so this isn't going to work for free accounts...
handler.setLevel(pcfg["raven_loglevel"])
setup_logging(handler)
return client |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def save(self):
    '''saves our config object to file'''
if self.app.cfg_mode == 'json':
with open(self.app.cfg_file, 'w') as opened_file:
json.dump(self.app.cfg, opened_file)
else:
        with open(self.app.cfg_file, 'w') as opened_file:
yaml.dump(self.app.cfg, opened_file) |
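The json branch of the save/load pair round-trips through the standard library; a minimal sketch (the yaml branch is analogous, omitted here to stay dependency-free):

```python
import json
import os
import tempfile

cfg = {'first_run': True, 'theme': 'dark'}
path = os.path.join(tempfile.mkdtemp(), 'cfg.json')

# Same shape as the json branch of save() ...
with open(path, 'w') as f:
    json.dump(cfg, f)

# ... and of load_cfg().
with open(path) as f:
    loaded = json.load(f)
```

A round trip through `json.dump`/`json.load` preserves dicts of strings, numbers, and booleans, which is all a config file of this shape needs.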
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_cfg(self, cfg_file, defaults=None, mode='json'):
'''
set mode to json or yaml? probably remove this option..Todo
Creates the config file for your app with default values
    The file will only be created if it doesn't exist
also sets up the first_run attribute.
also sets correct windows permissions
you can add custom stuff to the config by doing
app.cfg['fkdsfa'] = 'fdsaf'
# todo auto save on change
remember to call cfg.save()
'''
assert mode in ('json', 'yaml')
self.cfg_mode = mode
self.cfg_file = cfg_file
try:
self.cfg = CfgDict(app=self, cfg=self.load_cfg())
logging.info('cfg file found : %s' % self.cfg_file)
except FileNotFoundError:
self.cfg = CfgDict(app=self, cfg={'first_run': True})
with suppress(TypeError):
self.cfg.update(defaults)
self.cfg.save()
set_windows_permissions(self.cfg_file)
logging.info(
'Created cfg file for first time!: %s' %
self.cfg_file)
        self.first_run = self._check_first_run() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def load_cfg(self):
'''loads our config object accessible via self.cfg'''
if self.cfg_mode == 'json':
with open(self.cfg_file) as opened_file:
return json.load(opened_file)
else:
with open(self.cfg_file) as ymlfile:
return yaml.safe_load(ymlfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def check_if_open(self, path=None, appdata=False, verbose=False):
'''
Allows only one version of the app to be open at a time.
If you are calling create_cfg() before calling this,
you don't need to give a path. Otherwise a file path must be
given so we can save our file there.
Set appdata to True to run uac_bypass on the path, otherwise
leave it as False
'''
#~ To know if the system crashed, or if the prgram was exited smoothly
#~ turn verbose to True and the function will return a named tuple # TBD
#~ if os.name == 'nt':
#~ hwnd = int(self.root.wm_frame(),0)
#~ #saving a hwnd reference so we can check if we still open later on
#~ with open (self.check_file,'a') as f:
#~ f.write(str(hwnd))
#~ logging.info('adding hwnd to running info :'+str(hwnd))
#~
logging.info('Checking if our app is already Open')
if not path and self.cfg:
self._check_if_open_using_config()
elif path:
if appdata:
                filename = path.split(os.sep)[-1]
                self.check_file = self.uac_bypass(file=filename)
else:
self.check_file = path
self._check_if_open_using_path()
self.shutdown_cleanup['release_singleton'] = self.release_singleton |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def release_singleton(self):
    '''Deletes the data that lets our program know if it is
    running as a singleton when calling check_if_open,
    i.e. check_if_open will return False after calling this.
    '''
with suppress(KeyError):
del self.cfg['is_programming_running_info']
with suppress(FileNotFoundError, AttributeError):
os.remove(self.check_file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unpack_int_base128(varint, offset):
"""Implement Perl unpack's 'w' option, aka base 128 decoding.""" |
res = ord(varint[offset])
if ord(varint[offset]) >= 0x80:
offset += 1
res = ((res - 0x80) << 7) + ord(varint[offset])
if ord(varint[offset]) >= 0x80:
offset += 1
res = ((res - 0x80) << 7) + ord(varint[offset])
if ord(varint[offset]) >= 0x80:
offset += 1
res = ((res - 0x80) << 7) + ord(varint[offset])
if ord(varint[offset]) >= 0x80:
offset += 1
res = ((res - 0x80) << 7) + ord(varint[offset])
return res, offset + 1 |
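The unrolled decoder above can be written as a loop; a Python 3 sketch operating on `bytes` (the original indexes a Python 2 `str`), implementing the same big-endian base-128 scheme in which the high bit marks a continuation byte:

```python
def unpack_int_base128(data, offset=0):
    """Decode one base-128 varint (Perl pack 'w' style): big-endian
    7-bit groups, high bit set on every byte except the last.
    Returns (value, offset_past_last_consumed_byte)."""
    res = 0
    while True:
        byte = data[offset]          # indexing bytes yields an int in py3
        offset += 1
        res = (res << 7) | (byte & 0x7F)
        if byte < 0x80:
            return res, offset
```

For example, 300 encodes as the two bytes `0x82 0x2C`: the first carries the high group 2 with the continuation bit set, the second the low group 44.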
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unpack_body(self, buff):
""" Parse the response body. After body unpacking its data available as python list of tuples For each request type the response body has the same format: <insert_response_body> ::= <count> | <count><fq_tuple> <update_response_body> ::= <count> | <count><fq_tuple> <delete_response_body> ::= <count> | <count><fq_tuple> <select_response_body> ::= <count><fq_tuple>* <call_response_body> ::= <count><fq_tuple> :param buff: buffer containing request body :type byff: ctypes buffer """ |
# Unpack <return_code> and <count> (how many records affected or selected)
self._return_code = struct_L.unpack_from(buff, offset=0)[0]
# Separate return_code and completion_code
self._completion_status = self._return_code & 0x00ff
self._return_code >>= 8
# In case of an error unpack the body as an error message
if self._return_code != 0:
self._return_message = unicode(buff[4:-1], self.charset, self.errors)
if self._completion_status == 2:
raise TarantoolError(self._return_code, self._return_message)
# Unpack <count> (how many records affected or selected)
self._rowcount = struct_L.unpack_from(buff, offset=4)[0]
# If the response doesn't contain any tuple - there is nothing to unpack
if self._body_length == 8:
return
# Parse response tuples (<fq_tuple>)
if self._rowcount > 0:
offset = 8 # The first 4 bytes in the response body is the <count> we have already read
while offset < self._body_length:
                # In the response, tuples have the form <size><tuple> (<fq_tuple> ::= <size><tuple>).
                # Attribute <size> accounts only for the size of the tuple's <field> payload,
                # and does not include the 4-byte <cardinality> field.
                # Therefore the actual size of the <tuple> is greater by 4 bytes.
tuple_size = struct.unpack_from("<L", buff, offset)[0] + 4
tuple_data = struct.unpack_from("<%ds" % (tuple_size), buff, offset+4)[0]
tuple_value = self._unpack_tuple(tuple_data)
if self.field_types:
self.append(self._cast_tuple(tuple_value))
else:
self.append(tuple_value)
offset = offset + tuple_size + 4 |
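The `<size><tuple>` framing walked by the loop above can be demonstrated in isolation; a simplified sketch that length-prefixes raw payloads with `struct` (it deliberately omits the 4-byte `<cardinality>` adjustment the real protocol folds into `tuple_size`):

```python
import struct

def pack_fq_tuple(payload):
    # <fq_tuple> ::= <size><payload>; <size> counts only the payload bytes.
    return struct.pack('<L', len(payload)) + payload

def unpack_fq_tuples(buff):
    """Walk a buffer of back-to-back <size><payload> records."""
    offset, out = 0, []
    while offset < len(buff):
        size = struct.unpack_from('<L', buff, offset)[0]
        out.append(buff[offset + 4:offset + 4 + size])
        offset += 4 + size
    return out
```

`struct.unpack_from` with an explicit offset is what lets the parser walk the buffer without copying it, which is exactly how `_unpack_body` advances through the response.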
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cast_field(self, cast_to, value):
""" Convert field type from raw bytes to native python type :param cast_to: native python type to cast to :type cast_to: a type object (one of bytes, int, unicode (str for py3k)) :param value: raw value from the database :type value: bytes :return: converted value :rtype: value of native python type (one of bytes, int, unicode (str for py3k)) """ |
if cast_to in (int, long, str):
return cast_to(value)
elif cast_to == unicode:
try:
value = value.decode(self.charset, self.errors)
            except UnicodeDecodeError, e:
                raise InvalidData("Error decoding unicode value '%s': %s" % (repr(value), e))
return value
elif cast_to in (any, bytes):
return value
else:
raise TypeError("Invalid field type %s" % (cast_to)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cast_tuple(self, values):
""" Convert values of the tuple from raw bytes to native python types :param values: tuple of the raw database values :type value: tuple of bytes :return: converted tuple value :rtype: value of native python types (bytes, int, unicode (or str for py3k)) """ |
result = []
for i, value in enumerate(values):
if i < len(self.field_types):
result.append(self._cast_field(self.field_types[i], value))
else:
result.append(self._cast_field(self.field_types[-1], value))
return tuple(result) |
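The per-field casting with its "last declared type repeats" rule can be sketched in Python 3 (a simplified stand-in for `_cast_field`/`_cast_tuple`, without the py2 `unicode`/`long` cases):

```python
def cast_tuple(values, field_types):
    """Cast each raw bytes value; if the tuple is longer than field_types,
    the last declared type applies to all remaining fields."""
    result = []
    for i, value in enumerate(values):
        cast_to = field_types[i] if i < len(field_types) else field_types[-1]
        if cast_to is bytes:
            result.append(value)          # pass raw bytes through
        elif cast_to is int:
            result.append(int(value))     # int() accepts ASCII-digit bytes
        elif cast_to is str:
            result.append(value.decode('utf-8'))
        else:
            raise TypeError('Invalid field type %s' % cast_to)
    return tuple(result)
```

Declaring `[int, str]` for a three-field tuple therefore casts fields two and three both as `str`, which is convenient for variadic tails.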
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ping(self):
""" send ping packet to tarantool server and receive response with empty body """ |
d = self.replyQueue.get_ping()
packet = RequestPing(self.charset, self.errors)
self.transport.write(bytes(packet))
return d.addCallback(self.handle_reply, self.charset, self.errors, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, space_no, *args):
""" insert tuple, if primary key exists server will return error """ |
d = self.replyQueue.get()
packet = RequestInsert(self.charset, self.errors, d._ipro_request_id, space_no, Request.TNT_FLAG_ADD, *args)
self.transport.write(bytes(packet))
return d.addCallback(self.handle_reply, self.charset, self.errors, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, space_no, *args):
""" delete tuple by primary key """ |
d = self.replyQueue.get()
packet = RequestDelete(self.charset, self.errors, d._ipro_request_id, space_no, 0, *args)
self.transport.write(bytes(packet))
return d.addCallback(self.handle_reply, self.charset, self.errors, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def call(self, proc_name, field_types, *args):
""" call server procedure """ |
d = self.replyQueue.get()
packet = RequestCall(self.charset, self.errors, d._ipro_request_id, proc_name, 0, *args)
self.transport.write(bytes(packet))
return d.addCallback(self.handle_reply, self.charset, self.errors, field_types) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fibs(n, m):
""" Yields Fibonacci numbers starting from ``n`` and ending at ``m``. """ |
a = b = 1
for x in range(3, m + 1):
a, b = b, a + b
if x >= n:
yield b |
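As written, the generator treats `n` and `m` as 1-based indices but can never yield the first two terms, since the loop starts at `x = 3`; a self-contained usage sketch:

```python
def fibs(n, m):
    """Yield the n-th through m-th Fibonacci numbers (1-indexed).
    Note: as in the original, values before index 3 are never yielded."""
    a = b = 1
    for x in range(3, m + 1):
        a, b = b, a + b
        if x >= n:
            yield b

# fib(5)..fib(7) = 5, 8, 13
window = list(fibs(5, 7))
```

Callers needing `fib(1)` or `fib(2)` would have to special-case them, since the generator's first possible yield is `fib(3) = 2`.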
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unload_fixture(apps, schema_editor):
""" Brutally deleting all 'Country' model entries for reversing operation """ |
appmodel = apps.get_model(APP_LABEL, COUNTRY_MODELNAME)
appmodel.objects.all().delete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def precondition(precond):
""" Runs the callable responsible for making some assertions about the data structure expected for the transformation. If the precondition is not achieved, a UnmetPrecondition exception must be raised, and then the transformation pipe is bypassed. """ |
def decorator(f):
"""`f` can be a reference to a method or function. In
both cases the `data` is expected to be passed as the
first positional argument (obviously respecting the
`self` argument when it is a method).
"""
def decorated(*args):
if len(args) > 2:
raise TypeError('%s takes only 1 argument (or 2 for instance methods)' % f.__name__)
try:
instance, data = args
if not isinstance(instance, Pipe):
raise TypeError('%s is not a valid pipe instance' % instance)
except ValueError: # tuple unpacking error
data = args[0]
try:
precond(data)
except UnmetPrecondition:
# bypass the pipe
return data
else:
return f(*args)
return decorated
return decorator |
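A stripped-down, plain-function variant of the bypass behavior (no instance-method handling), showing that an unmet precondition passes the data through the pipe untouched:

```python
class UnmetPrecondition(Exception):
    pass

def precondition(precond):
    def decorator(f):
        def decorated(data):
            try:
                precond(data)
            except UnmetPrecondition:
                return data        # bypass the pipe, data flows through unchanged
            return f(data)
        return decorated
    return decorator

def must_have_title(data):
    if 'title' not in data:
        raise UnmetPrecondition('missing title')

@precondition(must_have_title)
def upper_title(data):
    data['title'] = data['title'].upper()
    return data
```

The key design choice is that a failed precondition is not an error: the transformation is simply skipped, so heterogeneous documents can flow through a shared pipeline.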
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self, data, rewrap=False, prefetch=0):
""" Wires the pipeline and returns a lazy object of the transformed data. :param data: must be an iterable, where a full document must be returned for each loop :param rewrap: (optional) is a bool that indicates the need to rewrap data in cases where iterating over it produces undesired results, for instance ``dict`` instances. :param prefetch: (optional) is an int defining the number of items to be prefetched once the pipeline starts yielding data. The default prefetching mechanism is based on threads, so be careful with CPU-bound processing pipelines. """ |
if rewrap:
data = [data]
for _filter in self._filters:
_filter.feed(data)
data = _filter
else:
iterable = self._prefetch_callable(data, prefetch) if prefetch else data
for out_data in iterable:
yield out_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def camel_case_to_snake_case(name):
""" HelloWorld -> hello_world """ |
s1 = _FIRST_CAP_RE.sub(r'\1_\2', name)
return _ALL_CAP_RE.sub(r'\1_\2', s1).lower() |
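The function relies on two module-level patterns not shown in this snippet; the definitions below are the common two-pass idiom for this conversion and are an assumption, not necessarily the author's exact regexes:

```python
import re

# Assumed definitions for the module-level patterns referenced above.
_FIRST_CAP_RE = re.compile(r'(.)([A-Z][a-z]+)')   # split before Capitalized words
_ALL_CAP_RE = re.compile(r'([a-z0-9])([A-Z])')    # split lower/digit -> Upper edges

def camel_case_to_snake_case(name):
    s1 = _FIRST_CAP_RE.sub(r'\1_\2', name)
    return _ALL_CAP_RE.sub(r'\1_\2', s1).lower()
```

The two passes together also handle acronym runs: the first pass splits `HTTPResponse` into `HTTP_Response`, and the second catches the remaining lower-to-upper edges before lowercasing.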
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_builtin_config(name, module_name=__name__, specs_path=specs.__path__):
""" Uses package info magic to find the resource file located in the specs submodule. """ |
config_path = Path(next(iter(specs_path)))
config_path = config_path / PurePath(resource_filename(module_name, name + '.yaml'))
return load_config(config_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_config(path):
""" Loads a yaml configuration. :param path: a pathlib Path object pointing to the configuration """ |
with path.open('rb') as fi:
file_bytes = fi.read()
        config = yaml.safe_load(file_bytes.decode('utf-8'))
return config |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def drag(*args, function = move):
""" Drags the mouse along a specified path :param args: list of arguments passed to function :param function: path to traverse :return: None """ |
x, y = win32api.GetCursorPos()
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
function(*args)
x, y = win32api.GetCursorPos()
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, action):
"""Add an action to the execution queue.""" |
self._state_machine.transition_to_add()
self._actions.append(action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def execute(self):
"""Execute all actions, throwing an ExecutionException on failure. Catch the ExecutionException and call rollback() to rollback. """ |
self._state_machine.transition_to_execute()
for action in self._actions:
self._executed_actions.append(action)
self.execute_with_retries(action, lambda a: a.execute())
self._state_machine.transition_to_execute_complete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rollback(self):
"""Call rollback on executed actions.""" |
self._state_machine.transition_to_rollback()
for action in reversed(self._executed_actions):
try:
self.execute_with_retries(action, lambda a: a.rollback())
except: # pylint: disable=bare-except
pass # on exception, carry on with rollback of other steps
self._state_machine.transition_to_rollback_complete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def execute_with_retries(self, action, f):
"""Execute function f with single argument action. Retry if ActionRetryException is raised. """ |
# Run action until either it succeeds or throws an exception
# that's not an ActionRetryException
retry = True
while retry:
retry = False
try:
f(action)
except ActionRetryException as ex: # other exceptions should bubble out
retry = True
time.sleep(ex.ms_backoff / 1000.0) |
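The retry contract can be exercised with a toy action; `ActionRetryException` and `Flaky` below are stand-ins, and the loop is an equivalent `while True` phrasing of the flag-based loop above:

```python
import time

class ActionRetryException(Exception):
    """Stand-in: carries the backoff the runner should sleep before retrying."""
    def __init__(self, ms_backoff=0):
        self.ms_backoff = ms_backoff

def execute_with_retries(action, f):
    # Retry on ActionRetryException; any other exception bubbles out.
    while True:
        try:
            return f(action)
        except ActionRetryException as ex:
            time.sleep(ex.ms_backoff / 1000.0)

class Flaky:
    """Hypothetical action that fails a fixed number of times, then succeeds."""
    def __init__(self, fail_times):
        self.remaining = fail_times

    def execute(self):
        if self.remaining > 0:
            self.remaining -= 1
            raise ActionRetryException(ms_backoff=0)
        return 'ok'
```

Because only `ActionRetryException` is caught, a genuinely fatal error still aborts the queue and triggers the rollback path shown earlier.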
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def input_option(message, options="yn", error_message=None):
""" Reads an option from the screen, with a specified prompt. Keeps asking until a valid option is sent by the user. """ |
def _valid(character):
if character not in options:
print(error_message % character)
return input("%s [%s]" % (message, options), _valid, True, lambda a: a.lower()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_unicode(obj, *args):
""" return the unicode representation of obj """ |
try:
return unicode(obj, *args) # noqa for undefined-variable
except UnicodeDecodeError:
# obj is byte string
ascii_text = str(obj).encode('string_escape')
try:
return unicode(ascii_text) # noqa for undefined-variable
except NameError:
# This is Python 3, just return the obj as it's already unicode
return obj
except NameError:
# This is Python 3, just return the obj as it's already unicode
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_str(obj):
""" return the byte string representation of obj """ |
try:
return str(obj)
except UnicodeEncodeError:
# obj is unicode
try:
return unicode(obj).encode('unicode_escape') # noqa for undefined-variable
except NameError:
# This is Python 3, just return the obj as it's already unicode
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def crosstalk_correction(pathway_definitions, random_seed=2015, gene_set=set(), all_genes=True, max_iters=1000):
"""A wrapper function around the maximum impact estimation algorithm. Parameters pathway_definitions : dict(str -> set(str)) The original pathway definitions. A pathway (key) is defined by a set of genes (value). random_seed : int (default=2015) Sets the numpy random seed gene_set : set(str) (default=set()) Donato et al. (2013) uses this algorithm to remove crosstalk from definitions in case-control studies. Here, `gene_set` is equivalent to the DE (differentially expressed) genes they refer to in their paper. Because crosstalk removal is a preprocessing step applicable to pathway analyses in general, we keep the variable name nonspecific. all_genes : bool (default=True) This value is checked if `gene_set` is not empty. If False, crosstalk correction is only applied to annotations in the `gene_set`. max_iters : int (default=1000) The maximum number of expectation-maximization steps to take in the maximum impact estimation algorithm. Returns dict(str -> tup(set(str), set(str))), where the (str) keys are the pathway names. tup[0] : crosstalk-correction applied to genes in the pathway definition that are also in `gene_set` tup[1] : - `all_genes` is True. Correction is applied to genes outside of `gene_set`. - `all_genes` is False. The second element in the tuple is all genes remaining in the original definition (definition - `gene_set`). """ |
np.random.seed(seed=random_seed)
genes_in_pathway_definitions = set.union(*pathway_definitions.values())
pathway_column_names = index_element_map(pathway_definitions.keys())
corrected_pathway_defns = {}
if gene_set:
gene_set = gene_set & genes_in_pathway_definitions
    if not gene_set and not all_genes:
        print("`gene_set` parameter was {0}, returning original "
              "pathway definitions".format(gene_set))
        for pathway, definition in pathway_definitions.items():
            corrected_pathway_defns[pathway] = (set(), definition)
        return corrected_pathway_defns
corrected_pathway_defns = _apply_correction_on_genes(
gene_set, pathway_column_names, pathway_definitions)
# crosstalk correction is _only_ applied to `gene_set`
if not all_genes:
for pathway, definition in pathway_definitions.items():
if pathway not in corrected_pathway_defns:
corrected_pathway_defns[pathway] = set()
gene_set_defn = corrected_pathway_defns[pathway]
remaining_defn = definition - gene_set
corrected_pathway_defns[pathway] = (
gene_set_defn, remaining_defn)
return corrected_pathway_defns
remaining_genes = genes_in_pathway_definitions - gene_set
if not remaining_genes:
for pathway, definition in corrected_pathway_defns.items():
corrected_pathway_defns[pathway] = (definition, set())
return corrected_pathway_defns
pathway_remaining_defns = _apply_correction_on_genes(
remaining_genes, pathway_column_names, pathway_definitions)
for pathway, definitions in pathway_definitions.items():
if pathway not in corrected_pathway_defns:
corrected_pathway_defns[pathway] = set()
if pathway not in pathway_remaining_defns:
pathway_remaining_defns[pathway] = set()
corrected_pathway_defns[pathway] = (
corrected_pathway_defns[pathway],
pathway_remaining_defns[pathway])
return corrected_pathway_defns |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def maximum_impact_estimation(membership_matrix, max_iters=1000):
"""An expectation maximization technique that produces pathway definitions devoid of crosstalk. That is, each gene is mapped to the pathway in which it has the greatest predicted impact; this removes any overlap between pathway definitions. Parameters membership_matrix : numpy.array(float), shape = [n, k] The observed gene-to-pathway membership matrix, where n is the number of genes and k is the number of pathways we are interested in. max_iters : int (default=1000) The maximum number of expectation-maximization steps to take. Returns dict(int -> set(int)), a dictionary mapping a pathway to a set of genes. These are the pathway definitions after the maximum impact estimation procedure has been applied to remove crosstalk. - The keys are ints corresponding to the pathway column indices in the membership matrix. - The values are sets of ints corresponding to gene row indices in the membership matrix. """ |
# Initialize the probability vector as the sum of each column in the
# membership matrix normalized by the sum of the entire membership matrix.
# The probability at some index j in the vector represents the likelihood
# that a pathway (column) j is defined by the current set of genes (rows)
# in the membership matrix.
pr_0 = np.sum(membership_matrix, axis=0) / np.sum(membership_matrix)
pr_1 = _update_probabilities(pr_0, membership_matrix)
epsilon = np.linalg.norm(pr_1 - pr_0)/100.
pr_old = pr_1
check_for_convergence = epsilon
count = 0
while epsilon > NEAR_ZERO and check_for_convergence >= epsilon:
count += 1
if count > max_iters:
print("Reached the maximum number of iterations {0}".format(
max_iters))
break
pr_new = _update_probabilities(pr_old, membership_matrix)
check_for_convergence = np.linalg.norm(pr_new - pr_old)
pr_old = pr_new
pr_final = pr_old # renaming for readability
corrected_pathway_definitions = {}
n, k = membership_matrix.shape
for gene_index in range(n):
gene_membership = membership_matrix[gene_index]
denominator = np.dot(gene_membership, pr_final)
# Approximation is used to prevent divide by zero warning.
# Since we are only looking for the _most_ probable pathway in which a
# gene contributes its maximum impact, precision is not as important
# as maintaining the relative differences between each
# pathway's probability.
if denominator < NEAR_ZERO:
denominator = NEAR_ZERO
# This is equivalent to one row in what Donato et al. (2013) refer
# to as the underlying (latent) Z matrix.
conditional_pathway_pr = (np.multiply(gene_membership, pr_final) /
denominator)
all_pathways_at_max = np.where(
conditional_pathway_pr == conditional_pathway_pr.max())[0]
gene_in_pathways = np.where(gene_membership == 1)[0]
all_pathways_at_max = np.intersect1d(
all_pathways_at_max, gene_in_pathways)
pathway_index = np.random.choice(all_pathways_at_max)
if pathway_index not in corrected_pathway_definitions:
corrected_pathway_definitions[pathway_index] = set()
corrected_pathway_definitions[pathway_index].add(gene_index)
return corrected_pathway_definitions |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def initialize_membership_matrix(gene_row_names, pathway_definitions):
"""Create the binary gene-to-pathway membership matrix that will be considered in the maximum impact estimation procedure. Parameters gene_row_names : set(str) The genes for which we want to assess pathway membership pathway_definitions : dict(str -> set(str)) Pathway definitions, pre-crosstalk-removal. A pathway (key) is defined by a set of genes (value). Returns numpy.array, shape = [n, k], the membership matrix """ |
membership = []
for pathway, full_definition in pathway_definitions.items():
pathway_genes = list(full_definition & gene_row_names)
membership.append(np.in1d(list(gene_row_names), pathway_genes))
membership = np.array(membership).astype("float").T
return membership |
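As a sanity check of the shape this produces, here is a small self-contained sketch (hypothetical gene and pathway names, numpy assumed available) that builds the same kind of binary matrix. It fixes a sorted gene order so the rows are deterministic, whereas the original relies on the iteration order of the input set:

```python
import numpy as np

def membership_sketch(genes, pathway_definitions):
    # rows follow a fixed sorted gene order so the matrix is deterministic
    gene_order = sorted(genes)
    columns = []
    for pathway, definition in pathway_definitions.items():
        columns.append([1.0 if g in definition else 0.0 for g in gene_order])
    # columns is k lists of n entries; transpose to genes-by-pathways
    return np.array(columns).T

pathways = {"A": {"g1", "g2"}, "B": {"g2", "g3"}}
m = membership_sketch({"g1", "g2", "g3"}, pathways)
# 3 genes by 2 pathways; g2 belongs to both pathways, so its row sums to 2
```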
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def index_element_map(arr):
"""Map the indices of the array to the respective elements. Parameters arr : list(a) The array to process, of generic type a Returns dict(int -> a), a dictionary corresponding the index to the element """ |
index_to_element = {}
for index, element in enumerate(arr):
index_to_element[index] = element
return index_to_element |
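The loop above is equivalent to wrapping `enumerate` in a `dict`, which can serve as a drop-in one-liner:

```python
def index_element_map(arr):
    # identical result to the explicit loop: index -> element
    return dict(enumerate(arr))

mapping = index_element_map(["pathway_a", "pathway_b"])
```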
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _apply_correction_on_genes(genes, pathway_column_names, pathway_definitions):
"""Helper function to create the gene-to-pathway membership matrix and apply crosstalk correction on that matrix. Returns the crosstalk-corrected pathway definitions for the input `genes.` """ |
gene_row_names = index_element_map(genes)
membership_matrix = initialize_membership_matrix(
genes, pathway_definitions)
crosstalk_corrected_index_map = maximum_impact_estimation(
membership_matrix)
updated_pathway_definitions = _update_pathway_definitions(
crosstalk_corrected_index_map,
gene_row_names, pathway_column_names)
return updated_pathway_definitions |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_probabilities(pr, membership_matrix):
"""Updates the probability vector for each iteration of the expectation maximum algorithm in maximum impact estimation. Parameters pr : numpy.array(float), shape = [k] The current vector of probabilities. An element at index j, where j is between 0 and k - 1, corresponds to the probability that, given a gene g_i, g_i has the greatest impact in pathway j. membership_matrix : numpy.array(float), shape = [n, k] The observed gene-to-pathway membership matrix, where n is the number of genes and k is the number of pathways we are interested in. Returns numpy.array(float), shape = [k], a vector of updated probabilities """ |
n, k = membership_matrix.shape
pathway_col_sums = np.sum(membership_matrix, axis=0)
weighted_pathway_col_sums = np.multiply(pathway_col_sums, pr)
sum_of_col_sums = np.sum(weighted_pathway_col_sums)
try:
new_pr = weighted_pathway_col_sums / sum_of_col_sums
except FloatingPointError:
# In the event that we encounter underflow or overflow issues,
# apply this approximation.
cutoff = 1e-150 / k
log_cutoff = np.log(cutoff)
weighted_pathway_col_sums = _replace_zeros(
weighted_pathway_col_sums, cutoff)
log_weighted_col_sums = np.log(weighted_pathway_col_sums)
log_weighted_col_sums -= np.max(log_weighted_col_sums)
below_cutoff = log_weighted_col_sums < log_cutoff
geq_cutoff = log_weighted_col_sums >= log_cutoff
print("{1} adjustments made to a vector of length {0}"
" containing the raw weight values"
" in a call to 'update_probabilities'".format(
k, len(log_weighted_col_sums[below_cutoff])))
new_pr = np.zeros(k)
new_pr[below_cutoff] = cutoff
col_sums_geq_cutoff = log_weighted_col_sums[geq_cutoff]
new_pr[geq_cutoff] = np.exp(
col_sums_geq_cutoff) / np.sum(np.exp(sorted(col_sums_geq_cutoff)))
difference = np.abs(1. - np.sum(new_pr))
assert difference < 1e-12, "Probabilities sum to {0}.".format(
np.sum(new_pr))
return new_pr |
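In the common case where no underflow occurs, the update reduces to weighting each pathway's column sum by its current probability and renormalizing. A tiny sketch with a hypothetical 3-gene, 2-pathway matrix; note the symmetric case shown here is a fixed point, so the vector comes back unchanged:

```python
import numpy as np

def update_sketch(pr, membership_matrix):
    # weight each pathway's gene count by its current probability,
    # then renormalize so the result is again a probability vector
    weighted = np.sum(membership_matrix, axis=0) * pr
    return weighted / np.sum(weighted)

m = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
pr0 = np.sum(m, axis=0) / np.sum(m)  # initial estimate: [0.5, 0.5]
pr1 = update_sketch(pr0, m)
```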
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _replace_zeros(arr, default_min_value):
"""Substitute 0s in the list with a near-zero value. Parameters arr : numpy.array(float) default_min_value : float If the smallest non-zero element in `arr` is greater than the default, use the default instead. Returns numpy.array(float) """ |
min_nonzero_value = min(default_min_value, np.min(arr[arr > 0]))
closest_to_zero = np.nextafter(min_nonzero_value, min_nonzero_value - 1)
arr[arr == 0] = closest_to_zero
return arr |
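A short worked example of the substitution, using an assumed cutoff of `1e-3`: zeros are replaced by a value just below the smallest candidate, so downstream logarithms stay finite while relative order is preserved:

```python
import numpy as np

arr = np.array([0.0, 0.5, 1.0])
default_min_value = 1e-3
# the substitute is one float step below the smaller of the default
# and the smallest non-zero element (here, the default wins)
min_nonzero = min(default_min_value, np.min(arr[arr > 0]))
arr[arr == 0] = np.nextafter(min_nonzero, min_nonzero - 1)
```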
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def hdfoutput(outname, frames, dozip=False):
'''Outputs the frames to an hdf file.'''
with h5.File(outname,'a') as f:
for frame in frames:
group=str(frame['step']);
h5w(f, frame, group=group,
compression='lzf' if dozip else None); |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def beds_to_boolean(beds, ref=None, beds_sorted=False, ref_sorted=False, **kwargs):
""" Compare a list of bed files or BedTool objects to a reference bed file and create a boolean matrix where each row is an interval and each column is a 1 if that file has an interval that overlaps the row interval and a 0 otherwise. If no reference bed is provided, the provided bed files will be merged into a single bed and compared to that. Parameters beds : list List of paths to bed files or BedTool objects. ref : str or BedTool Reference bed file to compare against. If no reference bed is provided, the provided bed files will be merged into a single bed and compared to that. beds_sorted : boolean Whether the bed files in beds are already sorted. If False, all bed files in beds will be sorted. ref_sorted : boolean Whether the reference bed file is sorted. If False, ref will be sorted. names : list of strings Names to use for columns of output files. Overrides define_sample_name if provided. define_sample_name : function that takes string as input Function mapping filename to sample name (or basename). For instance, you may have the basename in the path and use a regex to extract it. The basenames will be used as the column names. If this is not provided, the columns will be named as the input files. Returns ------- out : pandas.DataFrame Boolean data frame indicating whether each bed file has an interval that overlaps each interval in the reference bed file. """ |
beds = copy.deepcopy(beds)
fns = []
for i,v in enumerate(beds):
if type(v) == str:
fns.append(v)
beds[i] = pbt.BedTool(v)
else:
fns.append(v.fn)
if not beds_sorted:
beds[i] = beds[i].sort()
names = _sample_names(fns, kwargs)
if ref:
if type(ref) == str:
ref = pbt.BedTool(ref)
if not ref_sorted:
ref = ref.sort()
else:
ref = combine(beds)
ind = []
for r in ref:
ind.append('{}:{}-{}'.format(r.chrom, r.start, r.stop))
bdf = pd.DataFrame(0, index=ind, columns=names)
for i,bed in enumerate(beds):
res = ref.intersect(bed, sorted=True, wa=True)
ind = []
for r in res:
ind.append('{}:{}-{}'.format(r.chrom,
r.start,
r.stop))
        bdf.loc[ind, names[i]] = 1
return bdf |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def combine(beds, beds_sorted=False, postmerge=True):
""" Combine a list of bed files or BedTool objects into a single BedTool object. Parameters beds : list List of paths to bed files or BedTool objects. beds_sorted : boolean Whether the bed files in beds are already sorted. If False, all bed files in beds will be sorted. postmerge : boolean Whether to merge intervals after combining beds together. Returns ------- out : pybedtools.BedTool New sorted BedTool with intervals from all input beds. """ |
beds = copy.deepcopy(beds)
for i,v in enumerate(beds):
if type(v) == str:
beds[i] = pbt.BedTool(v)
if not beds_sorted:
beds[i] = beds[i].sort()
# For some reason, doing the merging in the reduce statement doesn't work. I
# think this might be a pybedtools bug. In any fashion, I can merge
# afterward although I think it makes a performance hit because the combined
# bed file grows larger than it needs to.
out = reduce(lambda x,y : x.cat(y, postmerge=False), beds)
out = out.sort()
if postmerge:
out = out.merge()
return out |
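The cat-then-sort-then-merge pattern here does not depend on pybedtools; the sketch below illustrates it with plain `(start, stop)` tuples standing in for `BedTool` objects (the interval values are made up):

```python
from functools import reduce

# concatenate interval lists pairwise (the cat step), then sort
# and merge overlapping intervals (the postmerge step)
beds = [[(1, 5)], [(4, 8)], [(10, 12)]]
combined = sorted(reduce(lambda x, y: x + y, beds))

merged = []
for start, stop in combined:
    if merged and start <= merged[-1][1]:
        # overlaps the previous interval: extend it
        merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
    else:
        merged.append((start, stop))
```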
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def strip_chr(bt):
"""Strip 'chr' from chromosomes for BedTool object Parameters bt : pybedtools.BedTool BedTool to strip 'chr' from. Returns ------- out : pybedtools.BedTool New BedTool with 'chr' stripped from chromosome names. """ |
try:
df = pd.read_table(bt.fn, header=None, dtype=str)
# If the try fails, I assume that's because the file has a trackline. Note
# that I don't preserve the trackline (I'm not sure how pybedtools keeps
# track of it anyway).
    except pd.errors.ParserError:
df = pd.read_table(bt.fn, header=None, skiprows=1, dtype=str)
df[0] = df[0].apply(lambda x: x[3:])
s = '\n'.join(df.astype(str).apply(lambda x: '\t'.join(x), axis=1)) + '\n'
out = pbt.BedTool(s, from_string=True)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_saved_bts(self):
"""If the AnnotatedInteractions object was saved to a pickle and reloaded, this method remakes the BedTool objects.""" |
if self._bt1_path:
self.bt1 = pbt.BedTool(self._bt1_path)
if self._bt2_path:
self.bt2 = pbt.BedTool(self._bt2_path)
if self._bt_loop_path:
self.bt_loop = pbt.BedTool(self._bt_loop_path)
if self._bt_loop_inner_path:
self.bt_loop_inner = pbt.BedTool(self._bt_loop_inner_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Delete the file.""" |
self.close()
if self.does_file_exist():
os.remove(self.path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(self, mode='read'):
"""Open the file.""" |
if self.file:
self.close()
            raise IOError('Close file before opening.')
if mode == 'write':
self.file = open(self.path, 'w')
elif mode == 'overwrite':
# Delete file if exist.
self.file = open(self.path, 'w+')
else:
# Open for reading.
self.file = open(self.path, 'r')
self._csv = csv.DictWriter(self.file,
fieldnames=self.fields,
delimiter=',', quotechar='|',
quoting=csv.QUOTE_MINIMAL,
extrasaction='ignore')
if self.file.tell() == 0:
self._csv.writeheader() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_hit(self, hit):
"""Add a hit to the file.""" |
if not self._csv:
            raise IOError('Open before write')
self._csv.writerow(hit)
self.number_of_hits += 1
# Todo: check performance for timestamp check
# assert self._path == self.get_filename_by_timestamp(timestamp)
timestamp = hit['timestamp']
if self.latest_timestamp <= timestamp:
self.latest_timestamp = timestamp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_records(self):
""" Get all stored records. """ |
self.close()
        with open(self.path, 'r') as filep:
            first_line = filep.readline().rstrip('\r\n').split(',')
            if first_line[0] != self.fields[0]:
                yield first_line
            for line in filep:
                yield line.rstrip('\r\n').split(',') |
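The writer configuration used by `open` and the comma-splitting used by `get_records` round-trip as shown below; this sketch uses an in-memory buffer and hypothetical field names rather than the class itself:

```python
import csv
import io

fields = ["timestamp", "uid"]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields, delimiter=',',
                        quotechar='|', quoting=csv.QUOTE_MINIMAL,
                        extrasaction='ignore')
writer.writeheader()
# extrasaction='ignore' silently drops keys that are not in fieldnames
writer.writerow({"timestamp": "1", "uid": "alice", "extra": "dropped"})

buf.seek(0)
# DictWriter terminates lines with \r\n by default, so strip both
rows = [line.rstrip('\r\n').split(',') for line in buf]
```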
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_user(self, uid, nodes, weights):
"""Add a user.""" |
for i, node in enumerate(nodes):
self.file.write("{},{},{}\n".format(uid, node, weights[i])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key, default=None):
"""Get a key.""" |
key = "{0}{1}".format(self.prefix, key)
data = self.redis.get(key)
# Redis returns None not an exception
if data is None:
data = default
else:
data = json.loads(data)
return data |
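`get` and `set` form a JSON round-trip through a prefixed key space. The sketch below mirrors that contract with a hypothetical in-memory backend in place of a live Redis connection (the class and key names are made up):

```python
import json

class DictBackend:
    # in-memory stand-in mirroring the redis get/set interface
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)  # None when missing, like redis
    def set(self, key, value):
        self._data[key] = value

class PrefixedStore:
    def __init__(self, backend, prefix):
        self.backend = backend
        self.prefix = prefix
    def set(self, key, value):
        self.backend.set("{0}{1}".format(self.prefix, key),
                         json.dumps(value))
    def get(self, key, default=None):
        data = self.backend.get("{0}{1}".format(self.prefix, key))
        return default if data is None else json.loads(data)

store = PrefixedStore(DictBackend(), "rec:")
store.set("user1", [1, 2, 3])
```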
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set(self, key, value):
"""Set a key, value pair.""" |
key = "{0}{1}".format(self.prefix, key)
value = json.dumps(value, cls=NumpyEncoder)
self.redis.set(key, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_by_timestamp(self, prefix, timestamp):
"""Get the cache file to a given timestamp.""" |
year, week = get_year_week(timestamp)
return self.get(prefix, year, week) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, prefix, year, week):
"""Get the cache file.""" |
filename = self._format_filename(prefix, year, week)
return RawEvents(filename, prefix, year, week) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_user_profiles(self, prefix):
"""Get the user profil from the cache to the given prefix.""" |
filepath = "{}{}".format(self.base_path, prefix)
return UserProfiles(filepath, prefix) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _format_filename(self, prefix, year, week):
"""Construct the file name based on the path and options.""" |
return "{}{}_{}-{}.csv".format(self.base_path, prefix, year, week) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_recommendation_store(self):
"""Get the configured recommendation store.""" |
return RedisStore(self.config['host'],
self.config['port'],
self.config['db'],
self.config['prefix']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _deleteFile(self,directory,fn,dentry,db,service):
"""Deletets file and changes status to '?' if no more services manages the file """ |
# FIXME : can switch back to only managing once service
# at a time
logger.debug("%s - Deleting"%(fn))
if fn not in db:
print("%s - rm: Not in DB, can't remove !"%(fn))
return False
# Build up list of names
servicenames=db[fn]['services'].keys()
# If service is none, build list of all services
# to perform this action on
if service is None:
servicelist=servicenames
else:
servicelist=[service]
for service in servicelist:
            if service not in db[fn]['services']:
                print("%s - Can't delete, service [%s] unknown"%(fn,service))
continue
if db[fn]['services'][service]['status']!=self.ST_DELETED:
print("%s - rm: Can't remove file with non 'D' status (%s)!"\
%(fn,service))
continue
# Only change status if correctly deleted
if self.sman.GetServiceObj(service).Remove(directory,fn):
# Delete our service entry
del db[fn]['services'][service]
logger.debug('%s - deleted by service: %s'%(fn,service))
else:
logger.error('%s - Failed to delete by service: %s'%(fn,service))
continue
# Delete whole entry if no services manage it any more
if len(db[fn]['services'].keys())==0:
del db[fn]
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _uploadFile(self,directory,fn,dentry,db,service):
"""Uploads file and changes status to 'S'. Looks up service name with service string """ |
# Create a hash of the file
if fn not in db:
print("%s - Not in DB, must run 'add' first!"%(fn))
else:
# If already added, see if it's modified, only then
# do another upload
if db[fn]['services'][service]['status']==self.ST_UPTODATE:
if not dentry['status']==self.ST_MODIFIED:
logger.info("%s - Up to date, skipping (%s)!"\
%(fn,service))
return False
sobj=self.sman.GetServiceObj(service)
# If nobody manages this file, just skip it
if (not sobj):
print("%s - Upload: No service of name [%s] found"%(fn,service))
return
# Only change status if correctly uploaded
if sobj.Upload(directory,fn):
# Write newest mtime/hash to indicate all is well
db[fn]['mtime']=dentry['mtime']
db[fn]['hash']=self._hashfile(os.path.join(directory,fn))
db[fn]['services'][service]['status']=self.ST_UPTODATE
logger.debug('%s - uploaded by service: %s'%(fn,sobj.GetName()))
return True
else:
logger.error('%s - Failed to upload by service: %s'%(fn,sobj.GetName()))
return False
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _updateToDeleted(self,directory,fn,dentry,db,service):
"""Changes to status to 'D' as long as a handler exists, directory - DIR where stuff is happening fn - File name to be added dentry - dictionary entry as returned by GetStatus for this file db - pusher DB for this directory service - service to delete, None means all """ |
# Create a hash of the file
if fn not in db:
print("%s - rm: not in DB, skipping!"%(fn))
return
services=self.sman.GetServices(fn)
# If nobody manages this file, just skip it
if (not services):
print("%s - no manger of this file type found"%(fn))
return
if service:
            if service in db[fn]['services']:
db[fn]['services'][service]['status']=self.ST_DELETED
else:
print("%s - Service %s doesn't exist, can't delete"%(fn,service))
return
# If we get here it means all services should delete
for service in db[fn]['services']:
db[fn]['services'][service]['status']=self.ST_DELETED
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _updateToAdded(self,directory,fn,dentry,db,service):
"""Changes to status to 'A' as long as a handler exists, also generates a hash directory - DIR where stuff is happening fn - File name to be added dentry - dictionary entry as returned by GetStatus for this file db - pusher DB for this directory service - None means all services, otherwise looks for service """ |
services=self.sman.GetServices(fn)
# If nobody manages this file, just skip it
if services is None:
print("%s - No services handle this file" %(fn))
return
# Build up list of names
servicenames=[]
for s in services:
servicenames.append(s.GetName())
if service is not None and service not in servicenames:
print("%s - Requested service (%s) not available for this file"\
%(fn,service))
return
# If service is none, build list of all services
# to perform this action on
if service is None:
servicelist=servicenames
else:
servicelist=[service]
        if fn not in db:
# Since this is a new entry, populate with stuff
# we got from GetSatus for this file (usually mtime)
db[fn]=dentry
del db[fn]['status'] # Delete this key we're not using
db[fn]['services']={} # Empty dictionary of services
# that manages this file + status
# Now add the hash
db[fn]['hash']=self._hashfile(os.path.join(directory,fn))
# Now run through services and see if we should
# perform actions
for service in servicelist:
            if service not in db[fn]['services']:
db[fn]['services'][service]={}
db[fn]['services'][service]['status']=self.ST_ADDED
else:
print("%s - Already managed by service %s, maybe do a 'push'?"\
%(fn,service))
logger.info('%s - managers: %s'%(fn,db[fn]['services'].keys()))
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _hashfile(self,filename,blocksize=65536):
"""Hashes the file and returns hash""" |
logger.debug("Hashing file %s"%(filename))
hasher=hashlib.sha256()
afile=open(filename,'rb')
buf=afile.read(blocksize)
while len(buf) > 0:
hasher.update(buf)
buf = afile.read(blocksize)
return hasher.hexdigest() |
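The block-at-a-time loop means the file never has to fit in memory. A self-contained sketch of the same pattern over an in-memory stream; the chunked digest matches hashing the whole payload at once:

```python
import hashlib
import io

def hash_stream(stream, blocksize=65536):
    # feed the digest block by block, exactly as the file-based
    # version does, so arbitrarily large inputs stay cheap
    hasher = hashlib.sha256()
    buf = stream.read(blocksize)
    while buf:
        hasher.update(buf)
        buf = stream.read(blocksize)
    return hasher.hexdigest()

payload = b"x" * 200000  # spans multiple 65536-byte blocks
digest = hash_stream(io.BytesIO(payload))
```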
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def storage(self):
""" Instantiates and returns a storage instance """ |
if self.backend == 'redis':
return RedisBackend(self.prefix, self.secondary_indexes)
if self.backend == 'dynamodb':
return DynamoDBBackend(self.prefix, self.key, self.sort_key,
self.secondary_indexes)
return DictBackend(self.prefix, self.secondary_indexes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_property(self, prop):
"""Access nested value using dot separated keys Args: prop (:obj:`str`):
Property in the form of dot separated keys Returns: Property value if exists, else `None` """ |
prop = prop.split('.')
root = self
for p in prop:
if p in root:
root = root[p]
else:
return None
return root |
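A standalone version of the same dot-separated traversal, written here as a plain function over a hypothetical config dict, with an explicit `dict` check so a lookup that descends into a leaf value returns `None` instead of raising:

```python
def get_property(data, prop):
    # walk nested dicts one dot-separated key at a time
    root = data
    for p in prop.split('.'):
        if isinstance(root, dict) and p in root:
            root = root[p]
        else:
            return None
    return root

config = {"db": {"host": "localhost", "port": 5432}}
```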
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_dict(cls, val):
"""Creates dict2 object from dict object Args: val (:obj:`dict`):
Value to create from Returns: Equivalent dict2 object. """ |
if isinstance(val, dict2):
return val
elif isinstance(val, dict):
res = cls()
for k, v in val.items():
res[k] = cls.from_dict(v)
return res
elif isinstance(val, list):
res = []
for item in val:
res.append(cls.from_dict(item))
return res
else:
return val |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_dict(self, val=UNSET):
"""Creates dict object from dict2 object Args: val (:obj:`dict2`):
Value to create from Returns: Equivalent dict object. """ |
if val is UNSET:
val = self
if isinstance(val, dict2) or isinstance(val, dict):
res = dict()
for k, v in val.items():
res[k] = self.to_dict(v)
return res
elif isinstance(val, list):
res = []
for item in val:
res.append(self.to_dict(item))
return res
else:
return val |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def main():
"Process CLI arguments and call appropriate functions."
try:
args = docopt.docopt(__doc__, version=__about__.__version__)
except docopt.DocoptExit:
if len(sys.argv) > 1:
print(f"{Fore.RED}Invalid command syntax, "
f"check help:{Fore.RESET}\n")
print(__doc__)
sys.exit(1)
print_all = False
if not (args["--int-width"] or args["--int-height"] or args["--decimal"]):
print_all = True
width = float(args["WIDTH"])
height = float(args["HEIGHT"])
as_int_ = as_int(width, height)
as_float_ = as_float(width, height)
if args["--ndigits"]:
as_float_ = round(as_float_, int(args["--ndigits"]))
to_print = []
if args["--int-width"] or print_all:
to_print.append(f"{Fore.BLUE}{as_int_[0]!s}")
if args["--int-height"] or print_all:
to_print.append(f"{Fore.BLUE}{as_int_[1]!s}")
if args["--decimal"] or print_all:
to_print.append(f"{Fore.MAGENTA}{as_float_!s}")
print(" ".join(to_print)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""The event's start time, as a timezone-aware datetime object""" |
if self.start_time is None:
time = datetime.time(hour=19, tzinfo=CET)
else:
time = self.start_time.replace(tzinfo=CET)
return datetime.datetime.combine(self.date, time) |
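`datetime.combine` carries the `tzinfo` of the time argument over to the result, which is what makes the returned value timezone-aware. A sketch with an assumed fixed UTC+1 offset standing in for the `CET` constant used above:

```python
import datetime

CET = datetime.timezone(datetime.timedelta(hours=1))  # assumed stand-in

date = datetime.date(2024, 3, 1)
start_time = datetime.time(hour=19, tzinfo=CET)
# combine() keeps the time's tzinfo on the resulting datetime
start = datetime.datetime.combine(date, start_time)
```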
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def next_occurrences(self, n=None, since=None):
"""Yield the next planned occurrences after the date "since" The `since` argument can be either a date or datetime onject. If not given, it defaults to the date of the last event that's already planned. If `n` is given, the result is limited to that many dates; otherwise, infinite results may be generated. Note that less than `n` results may be yielded. """ |
scheme = self.recurrence_scheme
if scheme is None:
return ()
db = Session.object_session(self)
query = db.query(Event)
query = query.filter(Event.series_slug == self.slug)
query = query.order_by(desc(Event.date))
query = query.limit(1)
        last_planned_event = query.one_or_none()
        if since is None:
            if last_planned_event is None:
                return ()
            since = last_planned_event.date
        elif last_planned_event is not None and since < last_planned_event.date:
            since = last_planned_event.date
start = getattr(since, 'date', since)
start += relativedelta.relativedelta(days=+1)
if (scheme == 'monthly'
and last_planned_event
and last_planned_event.date.year == start.year
and last_planned_event.date.month == start.month):
# Monthly events try to have one event per month, so exclude
# the current month if there was already a meetup
start += relativedelta.relativedelta(months=+1)
start = start.replace(day=1)
start = datetime.datetime.combine(start, datetime.time(tzinfo=CET))
result = rrule.rrulestr(self.recurrence_rule, dtstart=start)
if n is not None:
result = itertools.islice(result, n)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def do_upgrade():
"""Carry out the upgrade.""" |
op.alter_column(
table_name='knwKBRVAL',
column_name='id_knwKB',
type_=db.MediumInteger(8, unsigned=True),
existing_nullable=False,
existing_server_default='0'
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def spark_string(ints):
"""Returns a spark string from given iterable of ints.""" |
ticks = u'▁▂▃▅▆▇'
ints = [i for i in ints if type(i) == int]
if len(ints) == 0:
return ""
step = (max(ints) / float(len(ticks) - 1)) or 1
return u''.join(
ticks[int(round(i / step))] if type(i) == int else u'.' for i in ints) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_facts_by_name(api_url=None, fact_name=None, verify=False, cert=list()):
""" Returns facts by name :param api_url: Base PuppetDB API url :param fact_name: Name of fact """ |
return utils._make_api_request(api_url, '/facts/{0}'.format(fact_name), verify, cert) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_facts_by_name_and_value(api_url=None, fact_name=None, fact_value=None, verify=False, cert=list()):
""" Returns facts by name and value :param api_url: Base PuppetDB API url :param fact_name: Name of fact :param fact_value: Value of fact """ |
return utils._make_api_request(api_url, '/facts/{0}/{1}'.format(fact_name, fact_value), verify, cert) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def sanitize_config_loglevel(level):
'''
Kinda sorta backport of loglevel sanitization for Python 2.6.
'''
if sys.version_info[:2] != (2, 6) or isinstance(level, (int, long)):
return level
lvl = None
if isinstance(level, basestring):
lvl = logging._levelNames.get(level)
if lvl is None:
raise ValueError('Invalid log level, %s' % level)
return lvl |
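On Python 3 the same sanitization can be sketched with `logging.getLevelName`, which also performs the reverse name-to-number lookup; this is a hypothetical modern equivalent, not part of the original module:

```python
import logging

def to_level(level):
    """Map a log level name to its numeric value (Python 3 sketch)."""
    if isinstance(level, int):
        return level
    resolved = logging.getLevelName(level)
    # getLevelName returns the string 'Level <x>' for unknown names
    if not isinstance(resolved, int):
        raise ValueError('Invalid log level, %s' % level)
    return resolved

print(to_level('INFO'))  # → 20
```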
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count_by_tag(stack, descriptor):
""" Returns the count of currently running or pending instances that match the given stack and deployer combo """ |
ec2_conn = boto.ec2.connection.EC2Connection()
resses = ec2_conn.get_all_instances(
filters={
'tag:stack': stack,
'tag:descriptor': descriptor
})
# Flatten instances across all reservations, then keep only those
# passing state_filter (running/pending)
instance_list_raw = [x for res in resses for x in res.instances]
instance_list = [x for x in instance_list_raw if state_filter(x)]
return len(instance_list) |
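boto returns reservations, each holding a list of instances, so the count needs a flatten-then-filter step. A sketch with plain dicts standing in for boto objects (the instance data and `state_filter` here are hypothetical):

```python
# Hypothetical stand-ins for boto reservation/instance objects
reservations = [
    {'instances': [{'id': 'i-1', 'state': 'running'},
                   {'id': 'i-2', 'state': 'terminated'}]},
    {'instances': [{'id': 'i-3', 'state': 'pending'}]},
]

def state_filter(instance):
    # Assumed behavior of the real state_filter used above
    return instance['state'] in ('running', 'pending')

# Flatten the nested reservations, then count the live instances
flat = [i for r in reservations for i in r['instances']]
count = len([i for i in flat if state_filter(i)])
print(count)  # → 2
```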
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_if_unique(self, name):
""" Returns ``True`` on success. Returns ``False`` if the name already exists in the namespace. """ |
with self.lock:
if name not in self.names:
self.names.append(name)
return True
return False |
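A minimal self-contained version of this check-and-insert-under-a-lock pattern (the class name is made up for illustration):

```python
import threading

class Namespace(object):
    """Sketch of a thread-safe unique-name registry."""
    def __init__(self):
        self.lock = threading.Lock()
        self.names = []

    def add_if_unique(self, name):
        # The lock makes the membership test and append atomic together
        with self.lock:
            if name not in self.names:
                self.names.append(name)
                return True
            return False

ns = Namespace()
print(ns.add_if_unique('worker-1'))  # → True
print(ns.add_if_unique('worker-1'))  # → False
```

If insertion order doesn't matter, a `set` would make the membership test O(1) instead of O(n).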
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_to_spec(self, name):
"""The spec of the mirrored mock object is updated whenever the mirror gains new attributes""" |
self._spec.add(name)
self._mock.mock_add_spec(list(self._spec), True) |
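The mirror technique relies on `Mock.mock_add_spec` replacing the allowed attribute set wholesale, so the spec must be regrown and reapplied each time. A sketch with `unittest.mock` directly:

```python
from unittest import mock

# Start with a mock that only allows 'foo'
m = mock.MagicMock()
m.mock_add_spec(['foo'], True)  # second arg is spec_set
m.foo  # fine: in spec

blocked = False
try:
    m.bar
except AttributeError:
    blocked = True  # 'bar' is not in the spec yet

# Mirror the method above: grow the spec set, then reapply it
spec = {'foo'}
spec.add('bar')
m.mock_add_spec(list(spec), True)
m.bar  # now allowed
print(blocked)  # → True
```

Note that reapplying the spec resets the mock's child attributes, which is usually acceptable for a mirror that is still being built up.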
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fp_meta(fp):
"""Processes a CMIP3 style file path. The standard CMIP3 directory structure: <experiment>/<variable_name>/<model>/<ensemble_member>/<CMOR filename>.nc Filename is of pattern: <model>-<experiment>-<variable_name>-<ensemble_member>.nc Arguments: fp (str):
A file path conforming to CMIP3 spec. Returns: dict: Metadata as extracted from the file path. """ |
# Copy the metadata attribute list; get_dir_meta consumes it starting
# from the end of the path
directory_meta = list(DIR_ATTS)
# Prefer meta extracted from filename
meta = get_dir_meta(fp, directory_meta)
meta.update(get_fname_meta(fp))
return meta |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fname_meta(fp):
"""Processes a CMIP3 style file name. Filename is of pattern: <model>-<experiment>-<variable_name>-<ensemble_member>.nc Arguments: fp (str):
A file path/name conforming to DRS spec. Returns: dict: Metadata as extracted from the filename. .. _Data Reference Syntax: http://cmip-pcmdi.llnl.gov/cmip5/docs/cmip5_data_reference_syntax.pdf """ |
# Strip directory, extension, then split
if '/' in fp:
fp = os.path.split(fp)[1]
fname = os.path.splitext(fp)[0]
meta = fname.split('-')
res = {}
try:
for key in FNAME_ATTS:
res[key] = meta.pop(0)
except IndexError:
raise PathError(fname)
return res |
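A self-contained sketch of the filename split, with `FNAME_ATTS` assumed from the documented `<model>-<experiment>-<variable_name>-<ensemble_member>.nc` pattern (the attribute order and the example names are assumptions):

```python
import os

# Assumed attribute order, inferred from the documented filename pattern
FNAME_ATTS = ['model', 'experiment', 'variable_name', 'ensemble_member']

def fname_meta(fp):
    """Split a CMIP3-style filename into its metadata fields."""
    if '/' in fp:
        fp = os.path.split(fp)[1]
    fname = os.path.splitext(fp)[0]
    parts = fname.split('-')
    if len(parts) < len(FNAME_ATTS):
        raise ValueError('Path does not match CMIP3 pattern: %s' % fname)
    return dict(zip(FNAME_ATTS, parts))

meta = fname_meta('/data/cgcm3.1-sresa1b-tas-run1.nc')
print(meta['model'], meta['variable_name'])  # → cgcm3.1 tas
```

Because the split is on `-`, model or experiment names that themselves contain hyphens would be parsed incorrectly; the scheme assumes hyphen-free components.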
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gen_user_agent(version):
""" generating the user agent witch will be used for most requests monkey patching system and release functions from platform module to prevent disclosure of the OS and it's version """ |
def monkey_patch():
"""
small monkey patch
"""
raise IOError
# saving original functions
orig_system = platform.system
orig_release = platform.release
# applying patch
platform.system = monkey_patch
platform.release = monkey_patch
user_agent = requests_toolbelt.user_agent('picuplib', version)
# reverting patch
platform.system = orig_system
platform.release = orig_release
return user_agent |
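The patch-and-restore idea can be sketched without `requests_toolbelt`; this variant wraps the restore in `try`/`finally`, which is slightly safer than the unguarded restore above if the patched call raises unexpectedly:

```python
import platform

def _unavailable():
    # Stand-in that makes platform lookups fail, as in the patch above
    raise IOError

orig_system, orig_release = platform.system, platform.release
platform.system = _unavailable
platform.release = _unavailable
try:
    try:
        detected = platform.system()
    except IOError:
        # Callers that probe the platform get nothing useful
        detected = 'hidden'
finally:
    # Always restore the originals, even if something above raises
    platform.system, platform.release = orig_system, orig_release

print(detected)  # → hidden
```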
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_url_request(self):
""" Consults the authenticator and grant for HTTP request parameters and headers to send with the access token request, builds the request using the stored endpoint and returns it. """ |
params = {}
headers = {}
self._authenticator(params, headers)
self._grant(params)
return Request(self._endpoint, urlencode(params), headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send(self, response_decoder=None):
""" Creates and sends a request to the OAuth server, decodes the response and returns the resulting token object. response_decoder - A custom callable can be supplied to override the default method of extracting AccessToken parameters from the response. This is necessary for server implementations which do not conform to the more recent OAuth2 specification (ex: Facebook). By default, this will assume the response is encoded using JSON. The callable should return a dictionary with keys and values as follows: access_token - The access token token_type - The token type expires_in - The number of seconds in which the token expires refresh_token - The refresh token scope - The permission scope (as a space delimited string) """ |
decoder = loads
if response_decoder is not None and callable(response_decoder):
decoder = response_decoder
request = self.build_url_request()
try:
f = urlopen(request)
except HTTPError as e:
error_resp = None
try:
    error_resp = e.read()
    error_data = loads(error_resp)
except Exception:
    raise AccessTokenResponseError('Access request returned an error, but the response could not be read: %s' % error_resp)
if error_data.get('error') is None:
raise AccessTokenResponseError('Access request returned an error, but did not include an error code')
raise AccessTokenRequestError(error_data['error'], error_data.get('error_description'), error_data.get('error_uri'))
token_data = decoder(f.read())
return self._create_access_token(token_data) |
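As the docstring notes, `send()` defaults to JSON decoding, but some providers (historically Facebook) returned form-encoded token responses. A sketch of a custom `response_decoder` for that case (the response body here is made up):

```python
# Stdlib form-decoding works on both Python 2 and 3
try:
    from urllib.parse import parse_qsl  # Python 3
except ImportError:
    from urlparse import parse_qsl      # Python 2

def form_encoded_decoder(body):
    """Decode 'access_token=...&expires_in=...' style token responses."""
    return dict(parse_qsl(body))

token = form_encoded_decoder('access_token=abc123&token_type=bearer&expires_in=3600')
print(token['access_token'])  # → abc123
```

Passing `form_encoded_decoder` as `response_decoder` would then let `_create_access_token` receive the same dictionary shape the JSON path produces (note the values stay strings, so `expires_in` may need an explicit `int()`).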
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, model, **spec):
"""get a single model instance by handle :param model: model :param handle: instance handle :return: """ |
handles = self.__find_handles(model, **spec)
if len(handles) > 1:
raise MultipleObjectsReturned()
if not handles:
raise ObjectDoesNotExist()
return self.get_instance(model, handles[0]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sha_blob(self):
""" if the current file exists returns the sha blob else returns None """ |
r = requests.get(self.api_url, auth=self.get_auth_details())
try:
return r.json()['sha']
except KeyError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def publish_post(self):
""" If it's a new file, add it. Else, update it. """ |
payload = {'content': self.content_base64.decode('utf-8')}
sha_blob = self.get_sha_blob()
if sha_blob:
commit_msg = 'ghPublish UPDATE: {}'.format(self.title)
payload.update(sha=sha_blob)
payload.update(message=commit_msg)
else:
commit_msg = 'ghPublish ADD: {}'.format(self.title)
payload.update(message=commit_msg)
r = requests.put(self.api_url,
auth=self.get_auth_details(),
data=json.dumps(payload))
try:
url = r.json()['content']['html_url']
return r.status_code, url
except KeyError:
return r.status_code, None |
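The payload built above follows the GitHub Contents API shape: a commit `message` plus base64-encoded `content` (and a `sha` when updating). A sketch of the add case, with the title and content made up:

```python
import base64
import json

# Hypothetical post data; the real class derives these from its fields
title = 'My Post'
content = '# Hello\nFirst post.'

payload = {
    'message': 'ghPublish ADD: {}'.format(title),
    'content': base64.b64encode(content.encode('utf-8')).decode('utf-8'),
}
body = json.dumps(payload)

# Round-trip check: the API stores the base64 text, which decodes back
decoded = base64.b64decode(json.loads(body)['content']).decode('utf-8')
print(decoded == content)  # → True
```

For an update, the payload would additionally carry `'sha': sha_blob` so GitHub knows which blob is being replaced, exactly as the branch above does.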
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _gpio_callback(self, gpio):
""" Gets triggered whenever the the gpio state changes :param gpio: Number of gpio that changed :type gpio: int :rtype: None """ |
self.debug(u"Triggered #{}".format(gpio))
try:
index = self.gpios.index(gpio)
except ValueError:
self.error(u"{} not present in GPIO list".format(gpio))
return
with self._people_lock:
person = self.people[index]
read_val = GPIO.input(gpio)
if read_val == person.sitting:
    # Reading still matches the stored state - wait out contact bounce
    time.sleep(self.gpio_bouncetime_sleep)
    # ... and sample again to be sure
    read_val = GPIO.input(gpio)
if person.sitting != read_val:
    person.sitting = read_val
    self.debug(u"Person is now {}sitting".format(
        "" if person.sitting else "not ")
    )
    try:
        self.changer.on_person_update(self.people)
    except Exception:
        self.exception(
            u"Failed to update people (Person: {})".format(person)
        )
else:
    self.warning(u"Nothing changed on {}".format(gpio)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_report(self, with_line_nums=True):
""" Returns a report which includes each distinct error only once, together with a list of the input lines where the error occurs. The latter will be omitted if flag is set to False. Helper for the get_report method. """ |
templ = '{} ← {}' if with_line_nums else '{}'
return '\n'.join([
templ.format(error.string, ','.join(map(str, sorted(set(lines)))))
for error, lines in self.errors.items()]) |
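The report format assumes `self.errors` maps each distinct error to the input lines it occurred on. A sketch of building and rendering that mapping (the error strings are made up):

```python
from collections import defaultdict

# (line number, message) pairs as a checker might emit them
occurrences = [(3, 'unexpected token'), (7, 'unexpected token'), (9, 'missing colon')]

# Group repeated messages, collecting every line they occur on
errors = defaultdict(list)
for lineno, message in occurrences:
    errors[message].append(lineno)

report = '\n'.join(
    '{} ← {}'.format(message, ','.join(map(str, sorted(set(lines)))))
    for message, lines in errors.items())
print(report)
```

`sorted(set(lines))` deduplicates and orders the line numbers, just as the method above does.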
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send(self, to, from_, body):
""" Send BODY to TO from FROM as an SMS! """ |
try:
    msg = self.client.sms.messages.create(
        body=body,
        to=to,
        from_=from_
    )
    print msg.sid
except twilio.TwilioRestException:
    # Re-raise so the caller can decide how to handle API failures
    raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tar_gzip_dir(directory, destination, base=None):
"""Creates a tar.gz from a directory.""" |
dest_file = tarfile.open(destination, 'w:gz')
abs_dir_path = os.path.abspath(directory)
base_name = abs_dir_path + "/"
base = base or os.path.basename(directory)
for path, dirnames, filenames in os.walk(abs_dir_path):
rel_path = path[len(base_name):]
dir_norm_path = os.path.join(base, rel_path)
dir_tar_info = dest_file.gettarinfo(name=path)
dir_tar_info.name = dir_norm_path
dest_file.addfile(dir_tar_info)
for filename in filenames:
actual_path = os.path.join(path, filename)
norm_path = os.path.join(base, rel_path, filename)
tar_info = dest_file.gettarinfo(name=actual_path)
tar_info.name = norm_path
        with open(actual_path, 'rb') as new_file:
            dest_file.addfile(tar_info, new_file)
dest_file.close() |
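For the common case where every entry should be renamed under a single base directory, `tarfile.add` with `arcname` does the same walk recursively; the manual `os.walk` above is mainly useful when each member's name or metadata needs per-file control. A quick sketch:

```python
import os
import tarfile
import tempfile

# Build a throwaway directory tree to archive
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'data')
os.makedirs(src)
with open(os.path.join(src, 'a.txt'), 'w') as fh:
    fh.write('hello')

# tarfile.add is recursive by default; arcname renames the root member
dest = os.path.join(tmp, 'out.tar.gz')
with tarfile.open(dest, 'w:gz') as tf:
    tf.add(src, arcname='data')

# Inspect the archive we just wrote
with tarfile.open(dest, 'r:gz') as tf:
    names = tf.getnames()
print(sorted(names))
```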