| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def RemoveEmptyDirectoryTree(path, silent = False, recursion = 0):
""" Delete tree of empty directories. Parameters path : string Path to root of directory tree. silent : boolean [optional: default = False] Turn off log output. recursion : int [optional: default = 0] Indicates level of recursion. """ |
if not silent and recursion == 0:
goodlogging.Log.Info("UTIL", "Starting removal of empty directory tree at: {0}".format(path))
try:
os.rmdir(path)
except OSError:
if not silent:
goodlogging.Log.Info("UTIL", "Removal of empty directory tree terminated at: {0}".format(path))
return
else:
if not silent:
goodlogging.Log.Info("UTIL", "Directory deleted: {0}".format(path))
RemoveEmptyDirectoryTree(os.path.dirname(path), silent, recursion + 1) |
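The walk-up-and-delete behaviour above also exists in the standard library as `os.removedirs`, which removes the leaf directory and then each empty parent until one fails to delete. A minimal sketch of the same idea without the `goodlogging` output (the helper name `remove_empty_tree` is made up for illustration):

```python
import os
import tempfile

def remove_empty_tree(path):
    """Delete `path`, then each empty parent, stopping at the first
    directory that is not empty (mirrors the recursive version above,
    minus the logging)."""
    try:
        os.removedirs(path)  # raises OSError at the first non-empty directory
    except OSError:
        pass

root = tempfile.mkdtemp()
nested = os.path.join(root, "a", "b", "c")
os.makedirs(nested)
remove_empty_tree(nested)
print(os.path.exists(os.path.join(root, "a")))  # → False
```

`os.removedirs` stops silently at the first directory it cannot remove, which is exactly the termination condition of the recursive version.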
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ValidUserResponse(response, validList):
""" Check if user response is in a list of valid entires. If an invalid response is given re-prompt user to enter one of the valid options. Do not proceed until a valid entry is given. Parameters response : string Response string to check. validList : list A list of valid responses. Returns string A valid response string. """ |
if response in validList:
return response
else:
prompt = "Unknown response given - please reenter one of [{0}]: ".format('/'.join(validList))
response = goodlogging.Log.Input("DM", prompt)
return ValidUserResponse(response, validList) |
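The recursion in `ValidUserResponse` can equally be written as a loop. A sketch with an injectable `ask` callable (a made-up parameter standing in for `goodlogging.Log.Input`) so it can run non-interactively:

```python
def valid_user_response(response, valid_list, ask):
    """Keep re-prompting via `ask` until a response from valid_list is given."""
    while response not in valid_list:
        prompt = "Unknown response given - please re-enter one of [{0}]: ".format(
            '/'.join(valid_list))
        response = ask(prompt)
    return response

# Canned answers simulate a user typing an invalid then a valid response.
answers = iter(['maybe', 'n'])
result = valid_user_response('bogus', ['y', 'n'], lambda prompt: next(answers))
print(result)  # → n
```

Injecting the prompt function also makes the loop straightforward to unit-test.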
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def UserAcceptance( matchList, recursiveLookup = True, promptComment = None, promptOnly = False, xStrOverride = "to skip this selection" ):
""" Prompt user to select a entry from a given match list or to enter a new string to look up. If the match list is empty user must enter a new string or exit. Parameters matchList : list A list of entries which the user can select a valid match from. recursiveLookup : boolean [optional: default = True] Allow user to enter a new string to look up. promptComment : string [optional: default = None] Add an additional comment on the end of the prompt message. promptOnly : boolean [optional: default = False] Set to true if match list is expected to be empty. In which case the presence of an empty match list will not be mentioned and user will be expected to enter a new response to look up. xStrOverride : string [optional: default = "to skip this selection"] Override the string for 'x' response. This can be used if the behaviour of the 'x' response is changed. Returns string or None Either a entry from matchList, another valid response or a new string to look up. If match list is empty and recursive lookup is disabled or if the user response is 'x' this will return None. """ |
matchString = ', '.join(matchList)
if len(matchList) == 1:
goodlogging.Log.Info("UTIL", "Match found: {0}".format(matchString))
prompt = "Enter 'y' to accept this match or e"
elif len(matchList) > 1:
goodlogging.Log.Info("UTIL", "Multiple possible matches found: {0}".format(matchString))
prompt = "Enter correct match from list or e"
else:
if promptOnly is False:
goodlogging.Log.Info("UTIL", "No match found")
prompt = "E"
if not recursiveLookup:
return None
if recursiveLookup:
prompt = prompt + "nter a different string to look up or e"
prompt = prompt + "nter 'x' {0} or enter 'exit' to quit this program".format(xStrOverride)
if promptComment is None:
prompt = prompt + ": "
else:
prompt = prompt + " ({0}): ".format(promptComment)
while(1):
response = goodlogging.Log.Input('UTIL', prompt)
if response.lower() == 'exit':
goodlogging.Log.Fatal("UTIL", "Program terminated by user 'exit'")
if response.lower() == 'x':
return None
elif response.lower() == 'y' and len(matchList) == 1:
return matchList[0]
elif len(matchList) > 1:
for match in matchList:
if response.lower() == match.lower():
return match
if recursiveLookup:
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetBestMatch(target, matchList):
""" Finds the elements of matchList which best match the target string. Note that this searches substrings so "abc" will have a 100% match in both "this is the abc", "abcde" and "abc". The return from this function is a list of potention matches which shared the same highest match score. If any exact match is found (1.0 score and equal size string) this will be given alone. Parameters target : string Target string to match. matchList : list List of strings to match target against. Returns list A list of potention matches which share the same highest match score. If any exact match is found (1.0 score and equal size string) this will be given alone. """ |
bestMatchList = []
if len(matchList) > 0:
ratioMatch = []
for item in matchList:
ratioMatch.append(GetBestStringMatchValue(target, item))
maxRatio = max(ratioMatch)
if maxRatio > 0.8:
matchIndexList = [i for i, j in enumerate(ratioMatch) if j == maxRatio]
for index in matchIndexList:
if maxRatio == 1 and len(matchList[index]) == len(target):
return [matchList[index], ]
else:
bestMatchList.append(matchList[index])
return bestMatchList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetBestStringMatchValue(string1, string2):
""" Return the value of the highest matching substrings between two strings. Parameters string1 : string First string. string2 : string Second string. Returns int Integer value representing the best match found between string1 and string2. """ |
# Ignore case
string1 = string1.lower()
string2 = string2.lower()
# Ignore non-alphanumeric characters
string1 = ''.join(i for i in string1 if i.isalnum())
string2 = ''.join(i for i in string2 if i.isalnum())
# Finding best match value between string1 and string2
if len(string1) == 0 or len(string2) == 0:
bestRatio = 0
elif len(string1) == len(string2):
match = difflib.SequenceMatcher(None, string1, string2)
bestRatio = match.ratio()
else:
if len(string1) > len(string2):
shortString = string2
longString = string1
else:
shortString = string1
longString = string2
match = difflib.SequenceMatcher(None, shortString, longString)
bestRatio = match.ratio()
for block in match.get_matching_blocks():
subString = longString[block[1]:block[1]+block[2]]
subMatch = difflib.SequenceMatcher(None, shortString, subString)
if(subMatch.ratio() > bestRatio):
bestRatio = subMatch.ratio()
return(bestRatio) |
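To see why the extra pass over `get_matching_blocks` matters in `GetBestStringMatchValue`: a plain `SequenceMatcher.ratio()` penalises the length difference between a short and a long string, while re-scoring against the matched substring recovers a full score for exact substring hits. A small demo of that effect:

```python
import difflib

short, long_ = "abc", "this is the abc"

# Plain ratio is dragged down by the length mismatch.
plain = difflib.SequenceMatcher(None, short, long_).ratio()

# Re-score the short string against each matched substring of the long one.
best = plain
m = difflib.SequenceMatcher(None, short, long_)
for i, j, size in m.get_matching_blocks():
    sub = long_[j:j + size]
    r = difflib.SequenceMatcher(None, short, sub).ratio()
    best = max(best, r)

print(plain < 1.0, best == 1.0)  # → True True
```

This is the same trick the function above uses: `block[1]` indexes into the longer string, and the substring is re-matched to get an unpenalised score.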
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def WebLookup(url, urlQuery=None, utf8=True):
""" Look up webpage at given url with optional query string Parameters url : string Web url. urlQuery : dictionary [optional: default = None] Parameter to be passed to GET method of requests module utf8 : boolean [optional: default = True] Set response encoding Returns string GET response text """ |
goodlogging.Log.Info("UTIL", "Looking up info from URL:{0} with QUERY:{1})".format(url, urlQuery), verbosity=goodlogging.Verbosity.MINIMAL)
response = requests.get(url, params=urlQuery)
goodlogging.Log.Info("UTIL", "Full url: {0}".format(response.url), verbosity=goodlogging.Verbosity.MINIMAL)
if utf8 is True:
response.encoding = 'utf-8'
if(response.status_code == requests.codes.ok):
return(response.text)
else:
response.raise_for_status() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ArchiveProcessedFile(filePath, archiveDir):
""" Move file from given file path to archive directory. Note the archive directory is relative to the file path directory. Parameters filePath : string File path archiveDir : string Name of archive directory """ |
targetDir = os.path.join(os.path.dirname(filePath), archiveDir)
goodlogging.Log.Info("UTIL", "Moving file to archive directory:")
goodlogging.Log.IncreaseIndent()
goodlogging.Log.Info("UTIL", "FROM: {0}".format(filePath))
goodlogging.Log.Info("UTIL", "TO: {0}".format(os.path.join(targetDir, os.path.basename(filePath))))
goodlogging.Log.DecreaseIndent()
os.makedirs(targetDir, exist_ok=True)
try:
shutil.move(filePath, targetDir)
except shutil.Error as ex4:
err = ex4.args[0]
goodlogging.Log.Info("UTIL", "Move to archive directory failed - Shutil Error: {0}".format(err)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_wait(self,text):
"""Send a string to the PiLite, sleep until the message has been displayed (based on an estimate of the speed of the display. Due to the font not being monotype, this will wait too long in most cases""" |
self.send(text)
time.sleep(len(text)*PiLite.COLS_PER_CHAR*self.speed/1000.0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_speed(self,speed):
"""Set the display speed. The parameters is the number of milliseconds between each column scrolling off the display""" |
self.speed=speed
self.send_cmd("SPEED"+str(speed)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_fb_random(self):
"""Set the "frame buffer" to a random pattern""" |
pattern=''.join([random.choice(['0','1']) for i in range(14*9)])
self.set_fb(pattern) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_pixel(self,x,y,state):
"""Set pixel at "x,y" to "state" where state can be one of "ON", "OFF" or "TOGGLE" """ |
self.send_cmd("P"+str(x+1)+","+str(y+1)+","+state) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display_char(self,x,y,char):
"""Display character "char" with its top left at "x,y" """ |
self.send_cmd("T"+str(x+1)+","+str(y+1)+","+char) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def the_magic_mapping_function(peptides, fastaPath, importAttributes=None, ignoreUnmapped=True):
"""Returns a dictionary mapping peptides to protein group leading proteins. :param peptides: a set of peptide sequences :param fastaPath: FASTA file path :param importAttributes: dict, can be used to override default parameters passed to the function maspy.proteindb.importProteinDatabase(). Default attribtues are: {'cleavageRule': '[KR]', 'removeNtermM': True, 'ignoreIsoleucine': True, forceId': True, 'headerParser': PROTEINDB.fastaParserSpectraClusterPy} :param ignoreUnmapped: bool, if True ignores peptides that cannot be mapped to any protein present in the FASTA file where groupLeaders is a string joining all leading proteins of a group with a ";", for example {'peptide': {"protein1;proetin2;protein3"}} """ |
missedCleavage = max([p.count('K') + p.count('R') for p in peptides]) - 1
minLength = min([len(p) for p in peptides])
maxLength = max([len(p) for p in peptides])
defaultAttributes = {
'cleavageRule': '[KR]', 'minLength': minLength, 'maxLength': maxLength,
'removeNtermM': True, 'ignoreIsoleucine': True,
'missedCleavage': missedCleavage, 'forceId': True,
'headerParser': PROTEINDB.fastaParserSpectraClusterPy,
}
if importAttributes is not None:
defaultAttributes.update(importAttributes)
proteindb = PROTEINDB.importProteinDatabase(fastaPath, **defaultAttributes)
#This could be automated by adding a function to the inference module
proteinToPeptides = ddict(set)
for peptide in peptides:
#ignore any peptide that's not mapped if "ignoreUnmapped" is True
try:
peptideDbEntry = proteindb.peptides[peptide]
except KeyError as exception:
if ignoreUnmapped:
continue
else:
exceptionText = 'No protein mappings for peptide "'+peptide+'"'
raise KeyError(exceptionText)
for protein in peptideDbEntry.proteins:
proteinToPeptides[protein].add(peptide)
#Generate the ProteinInference instance
inference = INFERENCE.mappingBasedGrouping(proteinToPeptides)
peptideGroupMapping = dict()
for peptide in peptides:
groupLeaders = set()
for proteinId in inference.pepToProts[peptide]:
for proteinGroup in inference.getGroups(proteinId):
groupLeaders.add(';'.join(sorted(proteinGroup.leading)))
peptideGroupMapping[peptide] = groupLeaders
return peptideGroupMapping |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
""" Returns a truncated version of the inputted text. :param text | <str> length | <int> ellipsis | <str> :return <str> """ |
text = nativestring(text)
return text[:length] + (text[length:] and ellipsis) |
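The `text[length:] and ellipsis` idiom appends the ellipsis only when something was actually cut off (an empty slice is falsy). A standalone sketch, with `str` standing in for `nativestring`:

```python
def truncate(text, length=10, ellipsis='...'):
    """Return text cut to `length` chars, with `ellipsis` appended only
    if the text was actually longer than `length`."""
    text = str(text)
    # text[length:] is '' (falsy) when nothing was removed, so the
    # `and` expression contributes nothing in that case.
    return text[:length] + (text[length:] and ellipsis)

print(truncate('hello world', 5))  # → hello...
print(truncate('hi', 5))           # → hi
```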
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(data):
"""Return data as a JSON string.""" |
return json.dumps(data, default=lambda x: x.__dict__, sort_keys=True, indent=4) |
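The `default=lambda x: x.__dict__` hook is what lets `json.dumps` serialise plain objects it would otherwise reject. A quick sketch with a made-up class:

```python
import json

class Episode:
    def __init__(self, title, number):
        self.title = title
        self.number = number

def to_json(data):
    """Serialise data, falling back to each object's __dict__."""
    return json.dumps(data, default=lambda x: x.__dict__, sort_keys=True, indent=4)

text = to_json(Episode("Pilot", 1))
print('"title": "Pilot"' in text)  # → True
```

Note the fallback only works for objects that actually have a `__dict__` (e.g. not for `datetime` instances).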
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_string(string, chars=None):
"""Remove certain characters from a string.""" |
if chars is None:
chars = [',', '.', '-', '/', ':', ' ']
for ch in chars:
if ch in string:
string = string.replace(ch, ' ')
return string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_time(time):
"""Convert a time string into 24-hour time.""" |
split_time = time.split()
try:
# Get rid of period in a.m./p.m.
am_pm = split_time[1].replace('.', '')
time_str = '{0} {1}'.format(split_time[0], am_pm)
except IndexError:
return time
try:
time_obj = datetime.strptime(time_str, '%I:%M %p')
except ValueError:
time_obj = datetime.strptime(time_str, '%I %p')
return time_obj.strftime('%H:%M') |
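A condensed sketch of the same 12-hour-to-24-hour conversion, trying the two formats in turn (the function name is illustrative; this variant emits plain `HH:MM`):

```python
from datetime import datetime

def to_24h(time_str):
    """Convert '9:30 p.m.' / '9 pm' style strings to 'HH:MM'."""
    parts = time_str.split()
    if len(parts) < 2:
        return time_str  # no am/pm marker, nothing to convert
    cleaned = '{0} {1}'.format(parts[0], parts[1].replace('.', ''))
    for fmt in ('%I:%M %p', '%I %p'):  # with and without minutes
        try:
            return datetime.strptime(cleaned, fmt).strftime('%H:%M')
        except ValueError:
            continue
    return time_str

print(to_24h('9:30 p.m.'))  # → 21:30
print(to_24h('9 am'))       # → 09:00
```

`strptime`'s `%p` accepts lowercase `am`/`pm`, which is why stripping the periods is enough.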
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_month(date, shorten=True, cable=True):
"""Replace month by shortening or lengthening it. :param shorten: Set to True to shorten month name. :param cable: Set to True if category is Cable. """ |
month = date.split()[0].lower()
if 'sept' in month:
shorten = not cable
try:
if shorten:
month = SHORT_MONTHS[MONTHS.index(month)]
else:
month = MONTHS[SHORT_MONTHS.index(month)]
except ValueError:
month = month.title()
return '{0} {1}'.format(month, ' '.join(date.split()[1:])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_date(date):
"""Convert string to datetime object.""" |
date = convert_month(date, shorten=False)
clean_string = convert_string(date)
return datetime.strptime(clean_string, DATE_FMT.replace('-','')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def date_in_range(date1, date2, range):
"""Check if two date objects are within a specific range""" |
date_obj1 = convert_date(date1)
date_obj2 = convert_date(date2)
return (date_obj2 - date_obj1).days <= range |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inc_date(date_obj, num, date_fmt):
"""Increment the date by a certain number and return date object. as the specific string format. """ |
return (date_obj + timedelta(days=num)).strftime(date_fmt) |
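A quick check of the increment-and-format behaviour (assuming a `%Y-%m-%d` format string):

```python
from datetime import date, timedelta

def inc_date(date_obj, num, date_fmt):
    """Shift date_obj by num days and return it formatted."""
    return (date_obj + timedelta(days=num)).strftime(date_fmt)

# timedelta handles the month rollover for us.
print(inc_date(date(2020, 1, 30), 3, '%Y-%m-%d'))  # → 2020-02-02
```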
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_soup(url):
"""Request the page and return the soup.""" |
html = requests.get(url, stream=True, headers=HEADERS)
if html.status_code != 404:
return BeautifulSoup(html.content, 'html.parser')
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def match_list(query_list, string):
"""Return True if all words in a word list are in the string. :param query_list: list of words to match :param string: the word or words to be matched against """ |
# Filter out stop words to ease string matching
match = False
index = 0
string = ' '.join(filter_stopwords(string))
if not isinstance(query_list, list):
query_list = [query_list]
while index < len(query_list):
query = query_list[index]
words_query = filter_stopwords(query)
match = all(word in string for word in words_query)
if match:
break
index += 1
return match |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter_stopwords(phrase):
"""Filter out stop words and return as a list of words""" |
if not isinstance(phrase, list):
phrase = phrase.split()
stopwords = ['the', 'a', 'in', 'to']
return [word.lower() for word in phrase if word.lower() not in stopwords] |
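Taken together, the two helpers reduce matching to "every significant word of some query appears in the string". A self-contained sketch (it keeps the substring semantics of `word in string` used above):

```python
def filter_stopwords(phrase, stopwords=('the', 'a', 'in', 'to')):
    """Lowercase and drop stop words; accepts a string or a word list."""
    words = phrase if isinstance(phrase, list) else phrase.split()
    return [w.lower() for w in words if w.lower() not in stopwords]

def match_list(query_list, string):
    """True if every significant word of any query is in `string`."""
    haystack = ' '.join(filter_stopwords(string))
    if not isinstance(query_list, list):
        query_list = [query_list]
    return any(
        all(word in haystack for word in filter_stopwords(query))
        for query in query_list
    )

print(match_list(['The Walking Dead'], 'walking dead in the city'))  # → True
print(match_list('Breaking Bad', 'walking dead'))                    # → False
```

The `any(all(...))` form expresses the same break-on-first-match loop as the original while staying short.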
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_unicode(string):
"""If Python 2, replace non-ascii characters and return encoded string.""" |
if not PY3:
uni = string.replace(u'\u2019', "'")
return uni.encode('utf-8')
return string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_strings(soup, tag):
"""Get all the string children from an html tag.""" |
tags = soup.find_all(tag)
strings = [s.string for s in tags if s.string]
return strings |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cli(ctx, given_name, demo):
"""Initializes a bubble.""" |
path = ctx.home
bubble_file_name = path + '/.bubble'
config_file = path + '/config/config.yaml'
if os.path.exists(bubble_file_name) and os.path.isfile(bubble_file_name):
ctx.say_yellow(
'There is already a bubble present, will not initialize bubble in:' + path)
return
else:
given_name = '(((' + given_name + ')))'
with open(bubble_file_name, 'w') as dot_bubble:
dot_bubble.write('bubble=' + metadata.version + '\n')
dot_bubble.write('name=' + given_name + '\n')
dot_bubble.write('home=' + ctx.home + '\n')
dot_bubble.write(
'local_init_timestamp=' + str(arrow.utcnow()) + '\n')
# aka date_of_birth
dot_bubble.write(
'local_creator_user=' + str(os.getenv('USER')) + '\n')
dot_bubble.write(
'local_created_in_env=' + str(os.environ) + '\n')
ctx.say_green('Initialised a new bubble in [%s]' %
click.format_filename(bubble_file_name))
create_dir(ctx, path + '/config/')
create_dir(ctx, path + '/logs/')
create_dir(ctx, path + '/export/')
create_dir(ctx, path + '/import/')
create_dir(ctx, path + '/remember/')
create_dir(ctx, path + '/remember/archive')
with open(config_file, 'w') as cfg_file:
cfg_file.write(get_example_configuration())
ctx.say_green('Created an example configuration in %s' %
click.format_filename(config_file))
rules_file = path + '/config/rules.bubble'
with open(rules_file, 'w') as rules:
rules.write(get_example_rules_bubble())
ctx.say_green('Created an example rules in [%s]' %
click.format_filename(rules_file))
rule_functions_file = path + '/custom_rule_functions.py'
with open(rule_functions_file, 'w') as rule_functions:
rule_functions.write(get_example_rule_functions())
ctx.say_green('Created an example rule_functions in [%s]' %
click.format_filename(rule_functions_file))
src_client_file = path + '/mysrcclient.py'
with open(src_client_file, 'w') as src_client:
src_client.write(get_example_client_pull())
ctx.say_green('Created source example client with pull method [%s]' %
click.format_filename(src_client_file))
tgt_client_file = path + '/mytgtclient.py'
with open(tgt_client_file, 'w') as tgt_client:
tgt_client.write(get_example_client_push())
ctx.say_green('Created a target example client with push method [%s]' %
click.format_filename(tgt_client_file))
ctx.say_green(
'Bubble initialized, please adjust your configuration file') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bld_op(self, op, num, **kwargs):
"""implements pandas an operator""" |
kwargs['other'] = num
setattr(self, op, {'mtype': pab, 'kwargs': kwargs}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bld_pab_generic(self, funcname, **kwargs):
"""
implements a generic version of an attribute based pandas function
""" |
margs = {'mtype': pab, 'kwargs': kwargs}
setattr(self, funcname, margs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bld_pnab_generic(self, funcname, **kwargs):
"""
implements a generic version of a non-attribute based pandas function
""" |
margs = {'mtype': pnab, 'kwargs': kwargs}
setattr(self, funcname, margs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, request, *args, **kwargs):
""" List all products in the shopping cart """ |
cart = ShoppingCartProxy(request)
return JsonResponse(cart.get_products(onlypublic=request.GET.get('onlypublic', True))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def post(self, request, *args, **kwargs):
""" Adds new product to the current shopping cart """ |
POST = json.loads(request.body.decode('utf-8'))
if 'product_pk' in POST and 'quantity' in POST:
cart = ShoppingCartProxy(request)
cart.add(
product_pk=int(POST['product_pk']),
quantity=int(POST['quantity'])
)
return JsonResponse(cart.products)
return HttpResponseBadRequest() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_signal(alias: str, signal: pyqtSignal):
""" Used to register signal at the dispatcher. Note that you can not use alias that already exists. :param alias: Alias of the signal. String. :param signal: Signal itself. Usually pyqtSignal instance. :return: """ |
if SignalDispatcher.signal_alias_exists(alias):
raise SignalDispatcherError('Alias "' + alias + '" for signal already exists!')
SignalDispatcher.signals[alias] = signal |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_handler(alias: str, handler: callable):
""" Used to register handler at the dispatcher. :param alias: Signal alias to match handler to. :param handler: Handler. Some callable. :return: """ |
if SignalDispatcher.handlers.get(alias) is None:
SignalDispatcher.handlers[alias] = [handler]
else:
SignalDispatcher.handlers.get(alias).append(handler) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dispatch():
""" This methods runs the wheel. It is used to connect signal with their handlers, based on the aliases. :return: """ |
aliases = SignalDispatcher.signals.keys()
for alias in aliases:
handlers = SignalDispatcher.handlers.get(alias)
signal = SignalDispatcher.signals.get(alias)
if signal is None or not handlers:
continue
for handler in handlers:
signal.connect(handler) |
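The alias-keyed registry pattern is not Qt-specific. A sketch with plain callables standing in for `pyqtSignal` (class and method names are made up; `emit` stands in for Qt's signal emission):

```python
class Dispatcher:
    signals = {}   # alias -> list of connected handlers
    handlers = {}  # alias -> handlers registered, pending connection

    @classmethod
    def register_signal(cls, alias):
        if alias in cls.signals:
            raise ValueError('Alias "%s" for signal already exists!' % alias)
        cls.signals[alias] = []

    @classmethod
    def register_handler(cls, alias, handler):
        cls.handlers.setdefault(alias, []).append(handler)

    @classmethod
    def dispatch(cls):
        # Connect every registered handler to its signal by alias.
        for alias, connected in cls.signals.items():
            connected.extend(cls.handlers.get(alias, []))

    @classmethod
    def emit(cls, alias, *args):
        for handler in cls.signals.get(alias, []):
            handler(*args)

seen = []
Dispatcher.register_signal('file-saved')
Dispatcher.register_handler('file-saved', seen.append)
Dispatcher.dispatch()
Dispatcher.emit('file-saved', 'notes.txt')
print(seen)  # → ['notes.txt']
```

As in the original, registration and connection are separate phases: handlers accumulate first, then `dispatch()` wires everything up at once.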
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_rev(self, fpath):
""" Get an SCM version number. Try svn and git. """ |
rev = None
try:
cmd = ["git", "log", "-n1", "--pretty=format:\"%h\"", fpath]
rev = Popen(cmd, stdout=PIPE, stderr=PIPE).communicate()[0]
except:
pass
if not rev:
try:
cmd = ["svn", "info", fpath]
svninfo = Popen(cmd,
stdout=PIPE,
stderr=PIPE).stdout.readlines()
for info in svninfo:
tokens = info.split(":")
if tokens[0].strip() == "Last Changed Rev":
rev = tokens[1].strip()
except:
pass
return rev |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def execute_migrations(self, show_traceback=True):
""" Executes all pending migrations across all capable databases """ |
all_migrations = get_pending_migrations(self.path, self.databases)
if not len(all_migrations):
sys.stdout.write("There are no migrations to apply.\n")
for db, migrations in all_migrations.items():
connection = connections[db]
# init connection
cursor = connection.cursor()
cursor.close()
for migration in migrations:
migration_path = self._get_migration_path(db, migration)
with Transactional():
sys.stdout.write(
"Executing migration %r on %r...." %
(migration, db)
)
created_models = self._execute_migration(
db,
migration_path,
show_traceback=show_traceback
)
emit_post_sync_signal(
created_models=created_models,
verbosity=self.verbosity,
interactive=self.interactive,
db=db,
)
if self.load_initial_data:
sys.stdout.write(
"Running loaddata for initial_data fixtures on %r.\n" % db
)
call_command(
"loaddata",
"initial_data",
verbosity=self.verbosity,
database=db,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle(self, *args, **options):
""" Upgrades the database. Executes SQL scripts that haven't already been applied to the database. """ |
self.do_list = options.get("do_list")
self.do_execute = options.get("do_execute")
self.do_create = options.get("do_create")
self.do_create_all = options.get("do_create_all")
self.do_seed = options.get("do_seed")
self.load_initial_data = options.get("load_initial_data", True)
self.args = args
if options.get("path"):
self.path = options.get("path")
else:
default_path = self._get_default_migration_path()
self.path = getattr(
settings, "NASHVEGAS_MIGRATIONS_DIRECTORY", default_path
)
self.verbosity = int(options.get("verbosity", 1))
self.interactive = options.get("interactive")
self.databases = options.get("databases")
# We only use the default alias in creation scenarios (upgrades
# default to all databases)
if self.do_create and not self.databases:
self.databases = [DEFAULT_DB_ALIAS]
if self.do_create and self.do_create_all:
raise CommandError("You cannot combine --create and --create-all")
self.init_nashvegas()
if self.do_create_all:
self.create_all_migrations()
elif self.do_create:
assert len(self.databases) == 1
self.create_migrations(self.databases[0])
if self.do_execute:
self.execute_migrations()
if self.do_list:
self.list_migrations()
if self.do_seed:
self.seed_migrations() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plantuml(desc):
"""Generate plantuml class diagram :param desc: result of sadisplay.describe function Return plantuml class diagram string """ |
classes, relations, inherits = desc
result = [
'@startuml',
'skinparam defaultFontName Courier',
]
for cls in classes:
# issue #11 - tabular output of class members (attrs)
# http://stackoverflow.com/a/8356620/258194
# build table
class_desc = []
# table columns
class_desc += [(i[1], i[0]) for i in cls['cols']]
# class properties
class_desc += [('+', i) for i in cls['props']]
# methods
class_desc += [('%s()' % i, '') for i in cls['methods']]
result.append(
'Class %(name)s {\n%(desc)s\n}' % {
'name': cls['name'],
'desc': '\n'.join(tabular_output(class_desc)),
}
)
for item in inherits:
result.append("%(parent)s <|-- %(child)s" % item)
for item in relations:
result.append("%(from)s <--o %(to)s: %(by)s" % item)
result += [
'right footer generated by sadisplay v%s' % __version__,
'@enduml',
]
return '\n\n'.join(result) |
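The output is just `@startuml`/`@enduml` wrapped around `Class name { ... }` blocks joined by blank lines. A toy generator for a single class block (names and column layout are illustrative, not sadisplay's actual `tabular_output`):

```python
def plantuml_class(name, cols):
    """Render one plantuml Class block from (type, name) column pairs."""
    body = '\n'.join('  %s %s' % (ctype, cname) for ctype, cname in cols)
    return 'Class %s {\n%s\n}' % (name, body)

diagram = '\n\n'.join([
    '@startuml',
    plantuml_class('User', [('INTEGER', 'id'), ('VARCHAR', 'name')]),
    '@enduml',
])
print('@startuml' in diagram, 'Class User {' in diagram)  # → True True
```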
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_reference_target(resource, rtype, label):
""" Return true if the resource has this rtype with this label """ |
prop = resource.props.references.get(rtype, False)
if prop:
return label in prop |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sources(self, resources):
""" Filter resources based on which have this reference """ |
rtype = self.rtype # E.g. category
label = self.props.label # E.g. category1
result = [
resource
for resource in resources.values()
if is_reference_target(resource, rtype, label)
]
return result |
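A runnable sketch of the same filter with plain dicts standing in for the resource objects (attribute access via `props.references` replaced by dict lookups):

```python
def is_reference_target(resource, rtype, label):
    """True if the resource lists `label` under references[rtype]."""
    refs = resource.get('references', {})
    return label in refs.get(rtype, [])

def get_sources(resources, rtype, label):
    """Keep only resources that reference (rtype, label)."""
    return [resource for resource in resources.values()
            if is_reference_target(resource, rtype, label)]

resources = {
    'post1': {'references': {'category': ['category1']}},
    'post2': {'references': {'category': ['category2']}},
    'post3': {'references': {}},
}
matches = get_sources(resources, 'category', 'category1')
print(len(matches))  # → 1
```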
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(app: Sphinx):
""" Initialize Kaybee as a Sphinx extension """ |
# Scan for directives, first in the system, second in the docs project
importscan.scan(plugins)
dectate.commit(kb)
app.add_config_value('kaybee_settings', KaybeeSettings(), 'html')
bridge = 'kaybee.plugins.postrenderer.config.KaybeeBridge'
app.config.template_bridge = bridge
app.connect('env-updated', flush_everything)
app.connect(SphinxEvent.BI.value,
# pragma nocover
lambda sphinx_app: EventAction.call_builder_init(
kb, sphinx_app)
)
app.connect(SphinxEvent.EPD.value,
# pragma nocover
lambda sphinx_app, sphinx_env,
docname: EventAction.call_purge_doc(
kb, sphinx_app, sphinx_env, docname)
)
app.connect(SphinxEvent.EBRD.value,
# pragma nocover
lambda sphinx_app, sphinx_env,
docnames: EventAction.call_env_before_read_docs(
kb, sphinx_app, sphinx_env, docnames)
)
app.connect(SphinxEvent.DREAD.value,
# pragma nocover
lambda sphinx_app,
doctree: EventAction.call_env_doctree_read(
kb, sphinx_app, doctree)
)
app.connect(SphinxEvent.DRES.value,
# pragma nocover
lambda sphinx_app, doctree,
fromdocname: EventAction.call_doctree_resolved(
kb, sphinx_app, doctree, fromdocname)
)
app.connect(SphinxEvent.EU.value,
# pragma nocover
lambda sphinx_app, sphinx_env: EventAction.call_env_updated(
kb, sphinx_app, sphinx_env)
)
app.connect(SphinxEvent.HCP.value,
# pragma nocover
lambda sphinx_app: EventAction.call_html_collect_pages(
kb, sphinx_app)
)
app.connect(SphinxEvent.ECC.value,
# pragma nocover
lambda sphinx_builder,
sphinx_env: EventAction.call_env_check_consistency(
kb, sphinx_builder, sphinx_env)
)
app.connect(SphinxEvent.MR.value,
# pragma nocover
lambda sphinx_app, sphinx_env, node,
contnode: EventAction.call_missing_reference(
kb, sphinx_app, sphinx_env, node, contnode)
)
app.connect(SphinxEvent.HPC.value,
# pragma nocover
lambda sphinx_app, pagename, templatename, context,
doctree: EventAction.call_html_page_context(
kb, sphinx_app, pagename, templatename, context, doctree)
)
return dict(
version=__version__,
parallel_read_safe=False
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def loadInstance(self):
""" Loads the plugin from the proxy information that was created from the registry file. """ |
if self._loaded:
return
self._loaded = True
module_path = self.modulePath()
package = projex.packageFromPath(module_path)
path = os.path.normpath(projex.packageRootPath(module_path))
if path in sys.path:
sys.path.remove(path)
sys.path.insert(0, path)
try:
__import__(package)
    except Exception as e:
err = Plugin(self.name(), self.version())
err.setError(e)
err.setFilepath(module_path)
self._instance = err
self.setError(e)
msg = "%s.plugin('%s') errored loading instance from %s"
opts = (self.proxyClass().__name__, self.name(), module_path)
logger.warning(msg % opts)
logger.error(e) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean_resource_json(resource_json):
""" The catalog wants to be smaller, let's drop some stuff """ |
for a in ('parent_docname', 'parent', 'template', 'repr', 'series'):
if a in resource_json:
del resource_json[a]
props = resource_json['props']
for prop in (
'acquireds', 'style', 'in_nav', 'nav_title', 'weight',
'auto_excerpt'):
if prop in props:
del props[prop]
return resource_json |
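A runnable sketch of the pruning behaviour, repeating the function with a hypothetical resource entry (the `docname`/`title` values below are made up for illustration):

```python
def clean_resource_json(resource_json):
    # Drop bulky top-level keys the catalog does not need
    for a in ('parent_docname', 'parent', 'template', 'repr', 'series'):
        if a in resource_json:
            del resource_json[a]
    # Drop presentation-only props
    props = resource_json['props']
    for prop in ('acquireds', 'style', 'in_nav', 'nav_title', 'weight',
                 'auto_excerpt'):
        if prop in props:
            del props[prop]
    return resource_json

# Hypothetical resource entry; trimmed in place and also returned
doc = {'docname': 'about', 'parent': 'index', 'template': 'page.html',
       'props': {'title': 'About', 'weight': 10, 'in_nav': True}}
cleaned = clean_resource_json(doc)
```

Note the function mutates its argument in place; the return value is the same dict object.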
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, url, params=None, cache_cb=None, **kwargs):
""" Make http get request. :param url: :param params: :param cache_cb: (optional) a function that taking requests.Response as input, and returns a bool flag, indicate whether should update the cache. :param cache_expire: (optional). :param kwargs: optional arguments. """ |
if self.use_random_user_agent:
headers = kwargs.get("headers", dict())
headers.update({Headers.UserAgent.KEY: Headers.UserAgent.random()})
kwargs["headers"] = headers
url = add_params(url, params)
cache_consumed, value = self.try_read_cache(url)
if cache_consumed:
response = requests.Response()
response.url = url
response._content = value
else:
response = self.ses.get(url, **kwargs)
if self.should_we_update_cache(response, cache_cb, cache_consumed):
self.cache.set(
url, response.content,
expire=kwargs.get("cache_expire", self.cache_expire),
)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self, url, dst, params=None, cache_cb=None, overwrite=False, stream=False, minimal_size=-1, maximum_size=1024 ** 6, **kwargs):
""" Download binary content to destination. :param url: binary content url :param dst: path to the 'save_as' file :param cache_cb: (optional) a function that taking requests.Response as input, and returns a bool flag, indicate whether should update the cache. :param overwrite: bool, :param stream: bool, whether we load everything into memory at once, or read the data chunk by chunk :param minimal_size: default -1, if response content smaller than minimal_size, then delete what just download. :param maximum_size: default 1GB, if response content greater than maximum_size, then delete what just download. """ |
response = self.get(
url,
params=params,
cache_cb=cache_cb,
stream=stream,
**kwargs
)
if not overwrite: # pragma: no cover
if os.path.exists(dst):
raise OSError("'%s' exists!" % dst)
if stream:
chunk_size = 1024 * 1024
downloaded_size = 0
with atomic_write(dst, mode="wb") as f:
for chunk in response.iter_content(chunk_size):
if not chunk: # pragma: no cover
break
f.write(chunk)
                downloaded_size += len(chunk)
if (downloaded_size < minimal_size) or (downloaded_size > maximum_size):
self.raise_download_oversize_error(
url, downloaded_size, minimal_size, maximum_size)
else:
content = response.content
        downloaded_size = len(content)
if (downloaded_size < minimal_size) or (downloaded_size > maximum_size):
self.raise_download_oversize_error(
url, downloaded_size, minimal_size, maximum_size)
else:
with atomic_write(dst, mode="wb") as f:
f.write(content) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def option(*args, **kwargs):
"""Decorator to add an option to the optparser argument of a Cmdln subcommand To add a toplevel option, apply the decorator on the class itself. (see p4.py for an example) Example: @cmdln.option("-E", dest="environment_path") class MyShell(cmdln.Cmdln):
@cmdln.option("-f", "--force", help="force removal") def do_remove(self, subcmd, opts, *args):
""" |
def decorate_sub_command(method):
"""create and add sub-command options"""
if not hasattr(method, "optparser"):
method.optparser = SubCmdOptionParser()
method.optparser.add_option(*args, **kwargs)
return method
def decorate_class(klass):
"""store toplevel options"""
assert _forgiving_issubclass(klass, Cmdln)
_inherit_attr(klass, "toplevel_optparser_options", [], cp=lambda l: l[:])
klass.toplevel_optparser_options.append( (args, kwargs) )
return klass
#XXX Is there a possible optimization for many options to not have a
# large stack depth here?
def decorate(obj):
if _forgiving_issubclass(obj, Cmdln):
return decorate_class(obj)
else:
return decorate_sub_command(obj)
return decorate |
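A minimal, self-contained sketch of the sub-command path, substituting the stdlib `OptionParser` for `SubCmdOptionParser` (an assumption for illustration; the real class adds Cmdln-specific error handling):

```python
from optparse import OptionParser

def option(*args, **kwargs):
    """Attach an option to the decorated sub-command handler."""
    def decorate(method):
        # Create the parser lazily so stacked decorators share one parser
        if not hasattr(method, 'optparser'):
            method.optparser = OptionParser()
        method.optparser.add_option(*args, **kwargs)
        return method
    return decorate

@option('-f', '--force', action='store_true', help='force removal')
def do_remove(subcmd, opts, *args):
    return opts.force

# The parser lives on the function object itself
opts, extra = do_remove.optparser.parse_args(['-f'])
```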
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _inherit_attr(klass, attr, default, cp):
"""Inherit the attribute from the base class Copy `attr` from base class (otherwise use `default`). Copying is done using the passed `cp` function. The motivation behind writing this function is to allow inheritance among Cmdln classes where base classes set 'common' options using the `@cmdln.option` decorator. To ensure this, we must not write to the base class's options when handling the derived class. """ |
if attr not in klass.__dict__:
if hasattr(klass, attr):
value = cp(getattr(klass, attr))
else:
value = default
setattr(klass, attr, value) |
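A runnable illustration of the copy-on-inherit behaviour the docstring motivates: the derived class gets its own copy, so appending to it never mutates the base class (the class names here are hypothetical):

```python
def _inherit_attr(klass, attr, default, cp):
    # Give the subclass its own copy so appends never touch the base class
    if attr not in klass.__dict__:
        if hasattr(klass, attr):
            value = cp(getattr(klass, attr))
        else:
            value = default
        setattr(klass, attr, value)

class BaseShell:
    toplevel_optparser_options = [('-E', {})]

class DerivedShell(BaseShell):
    pass

_inherit_attr(DerivedShell, 'toplevel_optparser_options', [], cp=lambda l: l[:])
DerivedShell.toplevel_optparser_options.append(('-f', {}))
```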
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _forgiving_issubclass(derived_class, base_class):
"""Forgiving version of ``issubclass`` Does not throw any exception when arguments are not of class type """ |
return (type(derived_class) is ClassType and \
type(base_class) is ClassType and \
issubclass(derived_class, base_class)) |
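The `ClassType` check above is Python 2 only. Under Python 3 every class is an instance of `type`, so an equivalent sketch (an assumption, not the original code) is:

```python
def forgiving_issubclass_py3(derived_class, base_class):
    # In Python 3, checking isinstance(x, type) filters out non-classes
    # before issubclass() can raise TypeError.
    return (isinstance(derived_class, type)
            and isinstance(base_class, type)
            and issubclass(derived_class, base_class))

class Animal: pass
class Dog(Animal): pass
```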
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def timecalMs1DataMedian(msrunContainer, specfile, calibrationData, minDataPoints=50, deviationKey='relDev'):
"""Generates a calibration value for each MS1 scan by calculating the median deviation :param msrunContainer: intance of :class:`maspy.core.MsrunContainer` :param specfile: filename of an ms-run file, used to generate an calibration value for each MS1 spectrum item. :param calibrationData: a dictionary of ``numpy.arrays`` containing calibration data, as returned by :func:`aquireMs1CalibrationData()` :param minDataPoints: The minimal number of data points necessary to calculate the calibration value, default value is "50". The calibration value for each scan is calulated as the median of all calibration data points present for this scan. However, if the number of data points is less then specified by ``minDataPoints` the data points of the preceeding and subsequent scans are added until the minimal number of data points is reached. :param deviationKey: the ``calibrationData`` key which contains the calibration data that should be used. :returns: a dictionary containing the calibration values for each MS1 ``Si``. ``{si.id: {'calibValue': float, 'n': int, 'data': list}`` """ |
corrData = dict()
_posDict = dict()
pos = 0
for si in msrunContainer.getItems(specfiles=specfile, sort='rt',
selector=lambda si: si.msLevel==1
):
corrData[si.id] = {'calibValue': float(), 'n': int(), 'data': list()}
_posDict[pos] = si.id
pos += 1
for siId, deviation in zip(calibrationData['siId'],
calibrationData[deviationKey]):
corrData[siId]['data'].append(deviation)
corrData[siId]['n'] += 1
for pos in range(len(corrData)):
entry = corrData[_posDict[pos]]
_data = [entry['data']]
_n = entry['n']
expansion = 0
while _n < minDataPoints:
expansion += 1
try:
expData = corrData[_posDict[pos+expansion]]['data']
_data.append(expData)
_n += corrData[_posDict[pos+expansion]]['n']
except KeyError:
pass
try:
expData = corrData[_posDict[pos-expansion]]['data']
_data.append(expData)
_n += corrData[_posDict[pos-expansion]]['n']
except KeyError:
pass
if len(entry['data']) > 0:
median = numpy.median(entry['data'])
factor = 1
else:
median = float()
factor = 0
for expData in _data[1:]:
if len(expData) > 0:
median += numpy.median(expData) * 0.5
factor += 0.5
median = median / factor
entry['calibValue'] = median
return corrData |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_genericpage(cls, kb_app):
""" Return the one class if configured, otherwise default """ |
# Presumes the registry has been committed
q = dectate.Query('genericpage')
klasses = sorted(q(kb_app), key=lambda args: args[0].order)
if not klasses:
# The site doesn't configure a genericpage,
return Genericpage
else:
return klasses[0][1] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cli(ctx):
"""Shows the man page packed inside the bubble tool this is mainly too overcome limitations on installing manual pages in a distribution agnostic and simple way and the way bubble has been developed, in virtual python environments, installing a man page into a system location makes no sense, the system manpage will not reflect the development version. and if your is system is really bare like : docker.io/python, you will not even have man installed """ |
manfile = bubble_lib_dir+os.sep+'extras'+os.sep+'Bubble.1.gz'
mancmd = ["/usr/bin/man", manfile]
try:
return subprocess.call(mancmd)
except Exception as e:
print('cannot run man with bubble man page')
print('you can always have a look at: '+manfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fetch_dimensions(self, dataset):
""" We override this method just to set the correct datatype and dialect for regions. """ |
for dimension in super(SCB, self)._fetch_dimensions(dataset):
if dimension.id == "Region":
yield Dimension(dimension.id,
datatype="region",
dialect="skatteverket",
label=dimension.label)
else:
yield dimension |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def call(self, func, key, timeout=None):
'''Wraps a function call with cache.
Args:
func (function): the function to call.
key (str): the cache key for this call.
timeout (int): the cache timeout for the key (the
unit of this parameter depends on
the cache class you use, for example,
if you use the classes from werkzeug,
then timeout is in seconds.)
Returns:
The return value of calling func
'''
result = self.get(key)
if result == NONE_RESULT:
return None
if result is None:
result = func()
self.set(
key,
result if result is not None else NONE_RESULT,
timeout
)
return result |
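The `NONE_RESULT` sentinel is what lets a legitimate `None` return value be cached rather than re-computed on every call. A self-contained sketch with a dict-backed cache (the `DictCache` class and sentinel value are hypothetical stand-ins):

```python
NONE_RESULT = '__none_result__'  # hypothetical sentinel for a cached None

class DictCache:
    """Toy dict-backed cache implementing the same call() logic."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, timeout=None):
        self._store[key] = value
    def call(self, func, key, timeout=None):
        result = self.get(key)
        if result == NONE_RESULT:
            return None           # cached "the answer was None"
        if result is None:
            result = func()       # true cache miss: call through
            self.set(key, result if result is not None else NONE_RESULT,
                     timeout)
        return result

calls = []
def lookup():
    calls.append(1)
    return None  # the backend legitimately returns None

cache = DictCache()
first = cache.call(lookup, 'user:1')
second = cache.call(lookup, 'user:1')
```

The second call is served from the sentinel, so `lookup` runs only once.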
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def map(self, key_pattern, func, all_args, timeout=None):
'''Cache return value of multiple calls.
Args:
key_pattern (str): the key pattern to use for generating
keys for caches of the decorated function.
func (function): the function to call.
all_args (list): a list of args to be used to make calls to
the function.
timeout (int): the cache timeout
Returns:
A list of the return values of the calls.
Example::
def add(a, b):
return a + b
cache.map(key_pat, add, [(1, 2), (3, 4)]) == [3, 7]
'''
results = []
keys = [
make_key(key_pattern, func, args, {})
for args in all_args
]
cached = dict(zip(keys, self.get_many(keys)))
cache_to_add = {}
for key, args in zip(keys, all_args):
val = cached[key]
if val is None:
val = func(*args)
cache_to_add[key] = val if val is not None else NONE_RESULT
if val == NONE_RESULT:
val = None
results.append(val)
if cache_to_add:
self.set_many(cache_to_add, timeout)
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def _window_open(self, stream_id: int):
"""Wait until the identified stream's flow control window is open. """ |
stream = self._get_stream(stream_id)
return await stream.window_open.wait() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def send_data( self, stream_id: int, data: bytes, end_stream: bool = False, ):
"""Send data, respecting the receiver's flow control instructions. If the provided data is larger than the connection's maximum outbound frame size, it will be broken into several frames as appropriate. """ |
if self.closed:
raise ConnectionClosedError
stream = self._get_stream(stream_id)
if stream.closed:
raise StreamClosedError(stream_id)
remaining = data
while len(remaining) > 0:
await asyncio.gather(
self._writable.wait(),
self._window_open(stream.id),
)
remaining_size = len(remaining)
window_size = self._h2.local_flow_control_window(stream.id)
max_frame_size = self._h2.max_outbound_frame_size
send_size = min(remaining_size, window_size, max_frame_size)
if send_size == 0:
continue
logger.debug(
f'[{stream.id}] Sending {send_size} of {remaining_size} '
f'bytes (window {window_size}, frame max {max_frame_size})'
)
to_send = remaining[:send_size]
remaining = remaining[send_size:]
end = (end_stream is True and len(remaining) == 0)
self._h2.send_data(stream.id, to_send, end_stream=end)
self._flush()
if self._h2.local_flow_control_window(stream.id) == 0:
stream.window_open.clear() |
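The frame-sizing rule in the loop above can be isolated as pure arithmetic: each frame carries `min(remaining, window, max frame size)` bytes. A sketch assuming a fixed window (an assumption; the real loop re-reads the window and may block on each iteration):

```python
def split_for_send(data: bytes, window_size: int, max_frame_size: int):
    # Mirror of the send loop's sizing rule, without flow-control waits
    frames = []
    remaining = data
    while remaining:
        send_size = min(len(remaining), window_size, max_frame_size)
        frames.append(remaining[:send_size])
        remaining = remaining[send_size:]
    return frames

frames = split_for_send(b'x' * 100, window_size=64, max_frame_size=30)
```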
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def read_data(self, stream_id: int) -> bytes: """Read data from the specified stream until it is closed by the remote peer. If the stream is never ended, this never returns. """ |
frames = [f async for f in self.stream_frames(stream_id)]
return b''.join(frames) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def read_frame(self, stream_id: int) -> bytes: """Read a single frame of data from the specified stream, waiting until frames are available if none are present in the local buffer. If the stream is closed and all buffered frames have been consumed, raises a StreamConsumedError. """ |
stream = self._get_stream(stream_id)
frame = await stream.read_frame()
if frame.flow_controlled_length > 0:
self._acknowledge_data(frame.flow_controlled_length, stream_id)
return frame.data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def get_pushed_stream_ids(self, parent_stream_id: int) -> List[int]: """Return a list of all streams pushed by the remote peer that are children of the specified stream. If no streams have been pushed when this method is called, waits until at least one stream has been pushed. """ |
if parent_stream_id not in self._streams:
logger.error(
f'Parent stream {parent_stream_id} unknown to this connection'
)
raise NoSuchStreamError(parent_stream_id)
parent = self._get_stream(parent_stream_id)
await parent.pushed_streams_available.wait()
pushed_streams_ids = self._pushed_stream_ids[parent.id]
stream_ids: List[int] = []
if len(pushed_streams_ids) > 0:
stream_ids.extend(pushed_streams_ids)
pushed_streams_ids.clear()
parent.pushed_streams_available.clear()
return stream_ids |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convertMzml(mzmlPath, outputDirectory=None):
"""Imports an mzml file and converts it to a MsrunContainer file :param mzmlPath: path of the mzml file :param outputDirectory: directory where the MsrunContainer file should be written if it is not specified, the output directory is set to the mzml files directory. """ |
outputDirectory = outputDirectory if outputDirectory is not None else os.path.dirname(mzmlPath)
msrunContainer = importMzml(mzmlPath)
msrunContainer.setPath(outputDirectory)
msrunContainer.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prepareSiiImport(siiContainer, specfile, path, qcAttr, qcLargerBetter, qcCutoff, rankAttr, rankLargerBetter):
"""Prepares the ``siiContainer`` for the import of peptide spectrum matching results. Adds entries to ``siiContainer.container`` and to ``siiContainer.info``. :param siiContainer: instance of :class:`maspy.core.SiiContainer` :param specfile: unambiguous identifier of a ms-run file. Is also used as a reference to other MasPy file containers. :param path: folder location used by the ``SiiContainer`` to save and load data to the hard disk. :param qcAttr: name of the parameter to define a ``Sii`` quality cut off. Typically this is some sort of a global false positive estimator, for example a 'false discovery rate' (FDR). :param qcLargerBetter: bool, True if a large value for the ``.qcAttr`` means a higher confidence. :param qcCutOff: float, the quality threshold for the specifed ``.qcAttr`` :param rankAttr: name of the parameter used for ranking ``Sii`` according to how well they match to a fragment ion spectrum, in the case when their are multiple ``Sii`` present for the same spectrum. :param rankLargerBetter: bool, True if a large value for the ``.rankAttr`` means a better match to the fragment ion spectrum. For details on ``Sii`` ranking see :func:`applySiiRanking()` For details on ``Sii`` quality validation see :func:`applySiiQcValidation()` """ |
if specfile not in siiContainer.info:
siiContainer.addSpecfile(specfile, path)
else:
raise Exception('...')
siiContainer.info[specfile]['qcAttr'] = qcAttr
siiContainer.info[specfile]['qcLargerBetter'] = qcLargerBetter
siiContainer.info[specfile]['qcCutoff'] = qcCutoff
siiContainer.info[specfile]['rankAttr'] = rankAttr
siiContainer.info[specfile]['rankLargerBetter'] = rankLargerBetter |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def importPeptideFeatures(fiContainer, filelocation, specfile):
""" Import peptide features from a featureXml file, as generated for example by the OpenMS node featureFinderCentroided, or a features.tsv file by the Dinosaur command line tool. :param fiContainer: imported features are added to this instance of :class:`FeatureContainer <maspy.core.FeatureContainer>`. :param filelocation: Actual file path :param specfile: Keyword (filename) to represent file in the :class:`FeatureContainer`. Each filename can only occure once, therefore importing the same filename again is prevented. """ |
if not os.path.isfile(filelocation):
warnings.warn('The specified file does not exist %s' %(filelocation, ))
return None
elif (not filelocation.lower().endswith('.featurexml') and
not filelocation.lower().endswith('.features.tsv')
):
        #TODO: this is deprecated as importPeptideFeatures
        #is no longer used solely for featureXml
print('Wrong file extension, %s' %(filelocation, ))
elif specfile in fiContainer.info:
print('%s is already present in the SiContainer, import interrupted.'
%(specfile, )
)
return None
#Prepare the file container for the import
fiContainer.addSpecfile(specfile, os.path.dirname(filelocation))
#import featurexml file
if filelocation.lower().endswith('.featurexml'):
featureDict = _importFeatureXml(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
rtArea = set()
for convexHullEntry in featureEntryDict['convexHullDict']['0']:
rtArea.update([convexHullEntry[0]])
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rt']
fi.rtArea = max(rtArea) - min(rtArea)
fi.rtLow = min(rtArea)
fi.rtHigh = max(rtArea)
fi.charge = featureEntryDict['charge']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensity']
fi.quality = featureEntryDict['overallquality']
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi
#import dinosaur tsv file
elif filelocation.lower().endswith('.features.tsv'):
featureDict = _importDinosaurTsv(filelocation)
for featureId, featureEntryDict in viewitems(featureDict):
fi = maspy.core.Fi(featureId, specfile)
fi.rt = featureEntryDict['rtApex']
fi.rtArea = featureEntryDict['rtEnd'] - featureEntryDict['rtStart']
fi.rtFwhm = featureEntryDict['fwhm']
fi.rtLow = featureEntryDict['rtStart']
fi.rtHigh = featureEntryDict['rtEnd']
fi.charge = featureEntryDict['charge']
fi.numScans = featureEntryDict['nScans']
fi.mz = featureEntryDict['mz']
fi.mh = maspy.peptidemethods.calcMhFromMz(featureEntryDict['mz'],
featureEntryDict['charge'])
fi.intensity = featureEntryDict['intensitySum']
fi.intensityApex = featureEntryDict['intensityApex']
#Note: not used keys:
#mostAbundantMz nIsotopes nScans averagineCorr mass massCalib
fi.isMatched = False
fi.isAnnotated = False
fi.isValid = True
fiContainer.container[specfile][featureId] = fi |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _importDinosaurTsv(filelocation):
"""Reads a Dinosaur tsv file. See also :func:`importPeptideFeatures` """ |
with io.open(filelocation, 'r', encoding='utf-8') as openFile:
#NOTE: this is pretty similar to importing percolator results, maybe unify in a common function
lines = openFile.readlines()
headerDict = dict([[y,x] for (x,y) in enumerate(lines[0].strip().split('\t'))])
featureDict = dict()
for linePos, line in enumerate(lines[1:]):
featureId = str(linePos)
fields = line.strip().split('\t')
entryDict = dict()
for headerName, headerPos in viewitems(headerDict):
entryDict[headerName] = float(fields[headerPos])
if headerName in ['rtApex', 'rtEnd', 'rtStart', 'fwhm']:
                #Convert to seconds
entryDict[headerName] *= 60
elif headerName in ['charge', 'intensitySum', 'nIsotopes', 'nScans', 'intensityApex']:
entryDict[headerName] = int(entryDict[headerName])
featureDict[featureId] = entryDict
return featureDict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rst_to_html(input_string: str) -> str: """ Given a string of RST, use docutils to generate html """ |
overrides = dict(input_encoding='unicode', doctitle_xform=True,
initial_header_level=1)
parts = publish_parts(
writer_name='html',
source=input_string,
settings_overrides=overrides
)
return parts['html_body'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_rst_title(rst_doc: Node) -> Optional[Any]: """ Given some RST, extract what docutils thinks is the title """ |
for title in rst_doc.traverse(nodes.title):
return title.astext()
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_rst_excerpt(rst_doc: document, paragraphs: int = 1) -> str: """ Given rst, parse and return a portion """ |
texts = []
for count, p in enumerate(rst_doc.traverse(paragraph)):
texts.append(p.astext())
if count + 1 == paragraphs:
break
return ' '.join(texts) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requires_password_auth(fn):
"""Decorator for HAPI methods that requires the instance to be authenticated with a password""" |
def wrapper(self, *args, **kwargs):
self.auth_context = HAPI.auth_context_password
return fn(self, *args, **kwargs)
return wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requires_api_auth(fn):
"""Decorator for HAPI methods that requires the instance to be authenticated with a HAPI token""" |
def wrapper(self, *args, **kwargs):
self.auth_context = HAPI.auth_context_hapi
return fn(self, *args, **kwargs)
return wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse(response):
"""Parse a postdata-style response format from the API into usable data""" |
"""Split a a=1b=2c=3 string into a dictionary of pairs"""
tokens = {r[0]: r[1] for r in [r.split('=') for r in response.split("&")]}
# The odd dummy parameter is of no use to us
if 'dummy' in tokens:
del tokens['dummy']
"""
If we have key names that end in digits, these indicate the result set contains multiple sets
For example, planet0=Hoth&x=1&y=-10&planet1=Naboo&x=9&y=13 is actually data for two planets
Elements that end in digits (like tag0, tag1 for planets) are formatted like (tag0_1, tag1_1), so we rstrip
underscores afterwards.
"""
    if re.search(r'\D\d+$', next(iter(tokens))):
        # Produce one dictionary per numbered set, keyed by the shared prefix
        set_tokens = {}
        for key, value in tokens.items():
            match = re.match(r'^(.+\D)(\d+)$', key)
            # If the key isn't in the format (i.e. a failsafe), skip it
            if match is not None:
                prefix = match.group(1).rstrip('_')
                if prefix not in set_tokens:
                    set_tokens[prefix] = {}
                set_tokens[prefix][match.group(2)] = value
        tokens = set_tokens
    return tokens
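A minimal sketch of the first stage described in the docstring, splitting the `&`-joined pairs and dropping the dummy parameter (the helper name is illustrative):

```python
def parse_pairs(response):
    # "a=1&b=2&dummy=x" -> {'a': '1', 'b': '2'}
    tokens = {pair.split('=')[0]: pair.split('=')[1]
              for pair in response.split('&')}
    tokens.pop('dummy', None)  # the dummy parameter is of no use to us
    return tokens

tokens = parse_pairs('planet=Hoth&x=1&y=-10&dummy=1')
```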
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_chain(self):
"""Autodetect the devices attached to the Controller, and initialize a JTAGDevice for each. This is a required call before device specific Primitives can be used. """ |
if not self._hasinit:
self._hasinit = True
self._devices = []
self.jtag_enable()
while True:
# pylint: disable=no-member
idcode = self.rw_dr(bitcount=32, read=True,
lastbit=False)()
if idcode in NULL_ID_CODES: break
dev = self.initialize_device_from_id(self, idcode)
if self._debug:
print(dev)
self._devices.append(dev)
if len(self._devices) >= 128:
raise JTAGTooManyDevicesError("This is an arbitrary "
"limit to deal with breaking infinite loops. If "
"you have more devices, please open a bug")
self.jtag_disable()
#The chain comes out last first. Reverse it to get order.
self._devices.reverse() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _UserUpdateConfigValue(self, configKey, strDescriptor, isDir = True, dbConfigValue = None):
""" Allow user to set or update config values in the database table. This is always called if no valid entry exists in the table already. Parameters configKey : string Name of config field. strDescriptor : string Description of config field. isDir : boolean [optional : default = True] Set to True if config value is expected to be a directory path. dbConfigValue : string [optional : default = None] The value of an existing entry for the given config field. Returns string New value for given config field in database. """ |
newConfigValue = None
if dbConfigValue is None:
prompt = "Enter new {0} or 'x' to exit: ".format(strDescriptor)
else:
prompt = "Enter 'y' to use existing {0}, enter a new {0} or 'x' to exit: ".format(strDescriptor)
while newConfigValue is None:
response = goodlogging.Log.Input("CLEAR", prompt)
if response.lower() == 'x':
sys.exit(0)
elif dbConfigValue is not None and response.lower() == 'y':
newConfigValue = dbConfigValue
elif not isDir:
newConfigValue = response
self._db.SetConfigValue(configKey, newConfigValue)
else:
if os.path.isdir(response):
newConfigValue = os.path.abspath(response)
self._db.SetConfigValue(configKey, newConfigValue)
else:
goodlogging.Log.Info("CLEAR", "{0} is not recognised as a directory".format(response))
return newConfigValue |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetConfigValue(self, configKey, strDescriptor, isDir = True):
""" Get configuration value from database table. If no value found user will be prompted to enter one. Parameters configKey : string Name of config field. strDescriptor : string Description of config field. isDir : boolean [optional : default = True] Set to True if config value is expected to be a directory path. Returns string Value for given config field in database. """ |
goodlogging.Log.Info("CLEAR", "Loading {0} from database:".format(strDescriptor))
goodlogging.Log.IncreaseIndent()
configValue = self._db.GetConfigValue(configKey)
if configValue is None:
goodlogging.Log.Info("CLEAR", "No {0} exists in database".format(strDescriptor))
configValue = self._UserUpdateConfigValue(configKey, strDescriptor, isDir)
else:
goodlogging.Log.Info("CLEAR", "Got {0} {1} from database".format(strDescriptor, configValue))
if not isDir or os.path.isdir(configValue):
goodlogging.Log.Info("CLEAR", "Using {0} {1}".format(strDescriptor, configValue))
goodlogging.Log.DecreaseIndent()
return configValue
else:
goodlogging.Log.Info("CLEAR", "Exiting... {0} is not recognised as a directory".format(configValue))
sys.exit(0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _UserUpdateSupportedFormats(self, origFormatList = []):
""" Add supported formats to database table. Always called if the database table is empty. User can build a list of entries to add to the database table (one entry at a time). Once finished they select the finish option and all entries will be added to the table. They can reset the list at any time before finishing. Parameters origFormatList : list [optional : default = []] List of original formats from database table. Returns string List of updated formats from database table. """ |
formatList = list(origFormatList)
inputDone = None
while inputDone is None:
prompt = "Enter new format (e.g. .mp4, .avi), " \
"'r' to reset format list, " \
"'f' to finish or " \
"'x' to exit: "
response = goodlogging.Log.Input("CLEAR", prompt)
if response.lower() == 'x':
sys.exit(0)
elif response.lower() == 'f':
inputDone = 1
elif response.lower() == 'r':
formatList = []
else:
if response is not None:
                if response[0] != '.':
response = '.' + response
formatList.append(response)
formatList = set(formatList)
origFormatList = set(origFormatList)
if formatList != origFormatList:
self._db.PurgeSupportedFormats()
for fileFormat in formatList:
self._db.AddSupportedFormat(fileFormat)
return list(formatList)
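The normalization step above (prefixing a missing `.` and de-duplicating via a set) can be sketched in isolation; `NormalizeFormats` is a hypothetical helper name, not part of the original class:

```python
def NormalizeFormats(responses):
    """Prefix each entry with '.' if missing, then de-duplicate.

    Mirrors the input loop above: 'mp4' and '.mp4' normalize to one entry.
    Returns a sorted list for deterministic output.
    """
    formats = []
    for response in responses:
        if response:
            if response[0] != '.':
                response = '.' + response
            formats.append(response)
    return sorted(set(formats))

# Mixed input, with and without leading dots
print(NormalizeFormats(['mp4', '.avi', '.mp4']))  # ['.avi', '.mp4']
```

Sorting the de-duplicated set is a small addition over the original, which iterates the raw set when writing back to the database.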
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetSupportedFormats(self):
""" Get supported format values from database table. If no values found user will be prompted to enter values for this table. Returns string List of supported formats from database table. """ |
goodlogging.Log.Info("CLEAR", "Loading supported formats from database:")
goodlogging.Log.IncreaseIndent()
formatList = self._db.GetSupportedFormats()
if formatList is None:
goodlogging.Log.Info("CLEAR", "No supported formats exist in database")
formatList = self._UserUpdateSupportedFormats()
else:
goodlogging.Log.Info("CLEAR", "Got supported formats from database: {0}".format(formatList))
goodlogging.Log.Info("CLEAR", "Using supported formats: {0}".format(formatList))
goodlogging.Log.DecreaseIndent()
return formatList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _UserUpdateIgnoredDirs(self, origIgnoredDirs = []):
""" Add ignored directories to database table. Always called if the database table is empty. User can build a list of entries to add to the database table (one entry at a time). Once finished they select the finish option and all entries will be added to the table. They can reset the list at any time before finishing. Parameters origIgnoredDirs : list [optional : default = []] List of original ignored directories from database table. Returns string List of updated ignored directories from database table. """ |
ignoredDirs = list(origIgnoredDirs)
inputDone = None
while inputDone is None:
prompt = "Enter new directory to ignore (e.g. DONE), " \
"'r' to reset directory list, " \
"'f' to finish or " \
"'x' to exit: "
response = goodlogging.Log.Input("CLEAR", prompt)
if response.lower() == 'x':
sys.exit(0)
elif response.lower() == 'f':
inputDone = 1
elif response.lower() == 'r':
ignoredDirs = []
else:
if response is not None:
ignoredDirs.append(response)
ignoredDirs = set(ignoredDirs)
origIgnoredDirs = set(origIgnoredDirs)
if ignoredDirs != origIgnoredDirs:
self._db.PurgeIgnoredDirs()
for ignoredDir in ignoredDirs:
self._db.AddIgnoredDir(ignoredDir)
return list(ignoredDirs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetIgnoredDirs(self):
""" Get ignored directories values from database table. If no values found user will be prompted to enter values for this table. Returns string List of ignored directories from database table. """ |
goodlogging.Log.Info("CLEAR", "Loading ignored directories from database:")
goodlogging.Log.IncreaseIndent()
ignoredDirs = self._db.GetIgnoredDirs()
if ignoredDirs is None:
goodlogging.Log.Info("CLEAR", "No ignored directories exist in database")
ignoredDirs = self._UserUpdateIgnoredDirs()
else:
goodlogging.Log.Info("CLEAR", "Got ignored directories from database: {0}".format(ignoredDirs))
if self._archiveDir not in ignoredDirs:
ignoredDirs.append(self._archiveDir)
goodlogging.Log.Info("CLEAR", "Using ignored directories: {0}".format(ignoredDirs))
goodlogging.Log.DecreaseIndent()
return ignoredDirs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetDatabaseConfig(self):
""" Get all configuration from database. This includes values from the Config table as well as populating lists for supported formats and ignored directories from their respective database tables. """ |
goodlogging.Log.Seperator()
goodlogging.Log.Info("CLEAR", "Getting configuration variables...")
goodlogging.Log.IncreaseIndent()
# SOURCE DIRECTORY
if self._sourceDir is None:
self._sourceDir = self._GetConfigValue('SourceDir', 'source directory')
# TV DIRECTORY
if self._inPlaceRename is False and self._tvDir is None:
self._tvDir = self._GetConfigValue('TVDir', 'tv directory')
# ARCHIVE DIRECTORY
self._archiveDir = self._GetConfigValue('ArchiveDir', 'archive directory', isDir = False)
# SUPPORTED FILE FORMATS
self._supportedFormatsList = self._GetSupportedFormats()
# IGNORED DIRECTORIES
self._ignoredDirsList = self._GetIgnoredDirs()
goodlogging.Log.NewLine()
goodlogging.Log.Info("CLEAR", "Configuation is:")
goodlogging.Log.IncreaseIndent()
goodlogging.Log.Info("CLEAR", "Source directory = {0}".format(self._sourceDir))
goodlogging.Log.Info("CLEAR", "TV directory = {0}".format(self._tvDir))
goodlogging.Log.Info("CLEAR", "Supported formats = {0}".format(self._supportedFormatsList))
goodlogging.Log.Info("CLEAR", "Ignored directory list = {0}".format(self._ignoredDirsList))
goodlogging.Log.ResetIndent() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetSupportedFilesInDir(self, fileDir, fileList, supportedFormatList, ignoreDirList):
""" Recursively get all supported files given a root search directory. Supported file extensions are given as a list, as are any directories which should be ignored. The result will be appended to the given file list argument. Parameters fileDir : string Path to root of directory tree to search. fileList : string List to add any found files to. supportedFormatList : list List of supported file extensions. ignoreDirList : list List of directories to ignore. """ |
goodlogging.Log.Info("CLEAR", "Parsing file directory: {0}".format(fileDir))
if os.path.isdir(fileDir) is True:
for globPath in glob.glob(os.path.join(fileDir, '*')):
if util.FileExtensionMatch(globPath, supportedFormatList):
newFile = tvfile.TVFile(globPath)
if newFile.GetShowDetails():
fileList.append(newFile)
elif os.path.isdir(globPath):
if os.path.basename(globPath) in ignoreDirList:
goodlogging.Log.Info("CLEAR", "Skipping ignored directory: {0}".format(globPath))
else:
self._GetSupportedFilesInDir(globPath, fileList, supportedFormatList, ignoreDirList)
else:
goodlogging.Log.Info("CLEAR", "Ignoring unsupported file or folder: {0}".format(globPath))
else:
goodlogging.Log.Info("CLEAR", "Invalid non-directory path given to parse") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Run(self):
""" Main entry point for ClearManager class. Does the following steps: - Parse script arguments. - Optionally print or update database tables. - Get all configuration settings from database. - Optionally parse directory for file extraction. - Recursively parse source directory for files matching supported format list. - Call renamer.TVRenamer with file list. """ |
self._GetArgs()
goodlogging.Log.Info("CLEAR", "Using database: {0}".format(self._databasePath))
self._db = database.RenamerDB(self._databasePath)
if self._dbPrint or self._dbUpdate:
goodlogging.Log.Seperator()
self._db.PrintAllTables()
if self._dbUpdate:
goodlogging.Log.Seperator()
self._db.ManualUpdateTables()
self._GetDatabaseConfig()
if self._enableExtract:
goodlogging.Log.Seperator()
extractFileList = []
goodlogging.Log.Info("CLEAR", "Parsing source directory for compressed files")
goodlogging.Log.IncreaseIndent()
extract.GetCompressedFilesInDir(self._sourceDir, extractFileList, self._ignoredDirsList)
goodlogging.Log.DecreaseIndent()
goodlogging.Log.Seperator()
extract.Extract(extractFileList, self._supportedFormatsList, self._archiveDir, self._skipUserInputExtract)
goodlogging.Log.Seperator()
tvFileList = []
goodlogging.Log.Info("CLEAR", "Parsing source directory for compatible files")
goodlogging.Log.IncreaseIndent()
self._GetSupportedFilesInDir(self._sourceDir, tvFileList, self._supportedFormatsList, self._ignoredDirsList)
goodlogging.Log.DecreaseIndent()
tvRenamer = renamer.TVRenamer(self._db,
tvFileList,
self._archiveDir,
guideName = 'EPGUIDES',
tvDir = self._tvDir,
inPlaceRename = self._inPlaceRename,
forceCopy = self._crossSystemCopyEnabled,
skipUserInput = self._skipUserInputRename)
tvRenamer.Run() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flush(self):
"""Force the queue of Primitives to compile, execute on the Controller, and fulfill promises with the data returned.""" |
self.stages = []
self.stagenames = []
if not self.queue:
return
if self.print_statistics:#pragma: no cover
print("LEN OF QUENE", len(self))
t = time()
if self._chain._collect_compiler_artifacts:
self._compile(debug=True, stages=self.stages,
stagenames=self.stagenames)
else:
self._compile()
if self.debug:
print("ABOUT TO EXEC", self.queue)#pragma: no cover
if self.print_statistics:#pragma: no cover
print("COMPILE TIME", time()-t)
print("TOTAL BITS OF ALL PRIMS", sum(
(p.count for p in self.queue if hasattr(p, 'count'))))
t = time()
self._chain._controller._execute_primitives(self.queue)
if self.print_statistics:
print("EXECUTE TIME", time()-t)#pragma: no cover
self.queue = []
self._chain._sm.state = self._fsm.state |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def step_impl(context):
"""Compares text as written to the log output""" |
expected_lines = context.text.split('\n')
assert len(expected_lines) == len(context.output)
for expected, actual in zip(expected_lines, context.output):
print('--\n\texpected: {}\n\tactual: {}'.format(expected, actual))
assert expected == actual |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _ParseShowList(self, checkOnly=False):
""" Read self._allShowList as csv file and make list of titles and IDs. Parameters checkOnly : boolean [optional : default = False] If checkOnly is True this will only check to ensure the column headers can be extracted correctly. """ |
showTitleList = []
showIDList = []
csvReader = csv.reader(self._allShowList.splitlines())
for rowCnt, row in enumerate(csvReader):
if rowCnt == 0:
# Get header column index
for colCnt, column in enumerate(row):
if column == 'title':
titleIndex = colCnt
if column == self.ID_LOOKUP_TAG:
lookupIndex = colCnt
else:
try:
showTitleList.append(row[titleIndex])
showIDList.append(row[lookupIndex])
except UnboundLocalError:
goodlogging.Log.Fatal("EPGUIDE", "Error detected in EPGUIDES allshows csv content")
else:
if checkOnly and rowCnt > 1:
return True
self._showTitleList = showTitleList
self._showIDList = showIDList
return True |
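The header-index pattern above (find column positions from row 0, then read those positions from every later row) can be sketched on its own. `ParseTitleIdCsv` is an illustrative name, and the `idTag` default of `'tvrage'` is an assumed value for `ID_LOOKUP_TAG`; `list.index` replaces the original's column loop and raises `ValueError` where the original hits `UnboundLocalError`:

```python
import csv

def ParseTitleIdCsv(csvText, idTag='tvrage'):
    """Locate the 'title' and id columns from the header row, then
    collect those two values from each subsequent row."""
    titles, ids = [], []
    for rowCnt, row in enumerate(csv.reader(csvText.splitlines())):
        if rowCnt == 0:
            # Header row: resolve column indices once
            titleIndex = row.index('title')
            lookupIndex = row.index(idTag)
        else:
            titles.append(row[titleIndex])
            ids.append(row[lookupIndex])
    return titles, ids

sample = 'title,tvrage\nFriends,3616\nFrasier,3579'
print(ParseTitleIdCsv(sample))  # (['Friends', 'Frasier'], ['3616', '3579'])
```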
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetAllShowList(self):
""" Populates self._allShowList with the epguides all show info. On the first lookup for a day the information will be loaded from the epguides url. This will be saved to local file _epguides_YYYYMMDD.csv and any old files will be removed. Subsequent accesses for the same day will read this file. """ |
today = datetime.date.today().strftime("%Y%m%d")
saveFile = '_epguides_' + today + '.csv'
saveFilePath = os.path.join(self._saveDir, saveFile)
if os.path.exists(saveFilePath):
# Load data previously saved to file
with open(saveFilePath, 'r') as allShowsFile:
self._allShowList = allShowsFile.read()
else:
# Download new list from EPGUIDES and strip any leading or trailing whitespace
self._allShowList = util.WebLookup(self.ALLSHOW_IDLIST_URL).strip()
if self._ParseShowList(checkOnly=True):
# Save to file to avoid multiple url requests in same day
with open(saveFilePath, 'w') as allShowsFile:
goodlogging.Log.Info("EPGUIDE", "Adding new EPGUIDES file: {0}".format(saveFilePath), verbosity=self.logVerbosity)
allShowsFile.write(self._allShowList)
# Delete old copies of this file
globPattern = '_epguides_????????.csv'
globFilePath = os.path.join(self._saveDir, globPattern)
for filePath in glob.glob(globFilePath):
if filePath != saveFilePath:
goodlogging.Log.Info("EPGUIDE", "Removing old EPGUIDES file: {0}".format(filePath), verbosity=self.logVerbosity)
os.remove(filePath) |
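The dated-cache housekeeping above splits naturally into two helpers; `DailyCachePath` and `PurgeOldCaches` are hypothetical names mirroring the `_epguides_YYYYMMDD.csv` scheme:

```python
import datetime
import glob
import os

def DailyCachePath(saveDir):
    """Build today's cache filename, e.g. _epguides_20240101.csv."""
    today = datetime.date.today().strftime("%Y%m%d")
    return os.path.join(saveDir, '_epguides_' + today + '.csv')

def PurgeOldCaches(saveDir, keepPath):
    """Delete every dated cache file in saveDir except keepPath."""
    globPattern = '_epguides_????????.csv'
    for filePath in glob.glob(os.path.join(saveDir, globPattern)):
        if filePath != keepPath:
            os.remove(filePath)
```

The `????????` glob matches exactly eight characters, so only files following the dated naming convention are ever deleted.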
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetShowID(self, showName):
""" Get epguides show id for a given show name. Attempts to match the given show name against a show title in self._showTitleList and, if found, returns the corresponding index in self._showIDList. Parameters showName : string Show name to get show ID for. Returns int or None If a show id is found this will be returned, otherwise None is returned. """ |
self._GetTitleList()
self._GetIDList()
for index, showTitle in enumerate(self._showTitleList):
if showName == showTitle:
return self._showIDList[index]
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetEpisodeName(self, showID, season, episode):
""" Get episode name from epguides show info. Parameters showID : string Identifier matching show in epguides. season : int Season number. epiosde : int Epiosde number. Returns int or None If an episode name is found this is returned, otherwise the return value is None. """ |
# Load data for showID from dictionary
showInfo = csv.reader(self._showInfoDict[showID].splitlines())
for rowCnt, row in enumerate(showInfo):
if rowCnt == 0:
# Get header column index
for colCnt, column in enumerate(row):
if column == 'season':
seasonIndex = colCnt
if column == 'episode':
episodeIndex = colCnt
if column == 'title':
titleIndex = colCnt
else:
# Iterate rows until matching season and episode found
try:
int(row[seasonIndex])
int(row[episodeIndex])
except ValueError:
# Skip rows which don't provide integer season or episode numbers
pass
else:
if int(row[seasonIndex]) == int(season) and int(row[episodeIndex]) == int(episode):
goodlogging.Log.Info("EPGUIDE", "Episode name is {0}".format(row[titleIndex]), verbosity=self.logVerbosity)
return row[titleIndex]
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ShowNameLookUp(self, string):
""" Attempts to find the best match for the given string in the list of epguides show titles. If this list has not previous been generated it will be generated first. Parameters string : string String to find show name match against. Returns string Show name which best matches input string. """ |
goodlogging.Log.Info("EPGUIDES", "Looking up show name match for string '{0}' in guide".format(string), verbosity=self.logVerbosity)
self._GetTitleList()
showName = util.GetBestMatch(string, self._showTitleList)
return showName
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def EpisodeNameLookUp(self, showName, season, episode):
""" Get the episode name correspondng to the given show name, season number and episode number. Parameters showName : string Name of TV show. This must match an entry in the epguides title list (this can be achieved by calling ShowNameLookUp first). season : int Season number. epiosde : int Epiosde number. Returns string or None If an episode name can be found it is returned, otherwise the return value is None. """ |
goodlogging.Log.Info("EPGUIDE", "Looking up episode name for {0} S{1}E{2}".format(showName, season, episode), verbosity=self.logVerbosity)
goodlogging.Log.IncreaseIndent()
showID = self._GetShowID(showName)
if showID is not None:
try:
self._showInfoDict[showID]
except KeyError:
goodlogging.Log.Info("EPGUIDE", "Looking up info for new show: {0}(ID:{1})".format(showName, showID), verbosity=self.logVerbosity)
urlData = util.WebLookup(self.EPISODE_LOOKUP_URL, {self.EP_LOOKUP_TAG: showID})
self._showInfoDict[showID] = self._ExtractDataFromShowHtml(urlData)
else:
goodlogging.Log.Info("EPGUIDE", "Reusing show info previous obtained for: {0}({1})".format(showName, showID), verbosity=self.logVerbosity)
finally:
episodeName = self._GetEpisodeName(showID, season, episode)
goodlogging.Log.DecreaseIndent()
return episodeName
goodlogging.Log.DecreaseIndent() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def private_path(self):
"""Get the path to a directory which can be used to store arbitrary data This directory should not conflict with any of the repository internals. The directory should be created if it does not already exist. """ |
path = os.path.join(self.path, '.hg', '.private')
try:
os.mkdir(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
return path |
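The `errno`-checked `mkdir` above is the Python 2-compatible idiom for "create if missing"; a standalone sketch (with the hypothetical name `ensure_dir`) looks like this:

```python
import errno
import os

def ensure_dir(path):
    """Create path if missing; tolerate it already existing.

    On Python 3.2+ the one-liner os.makedirs(path, exist_ok=True)
    achieves the same thing.
    """
    try:
        os.mkdir(path)
    except OSError as e:
        # Re-raise anything other than "already exists"
        if e.errno != errno.EEXIST:
            raise
    return path
```

The try/except form (rather than an `os.path.isdir` pre-check) avoids the race where another process creates the directory between the check and the `mkdir` call.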
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bookmarks(self):
"""Get list of bookmarks""" |
cmd = [HG, 'bookmarks']
output = self._command(cmd).decode(self.encoding, 'replace')
if output.startswith('no bookmarks set'):
return []
results = []
for line in output.splitlines():
m = bookmarks_rx.match(line)
assert m, 'unexpected output: ' + line
results.append(m.group('name'))
return results |
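The `bookmarks_rx` pattern used above is defined elsewhere in the module; a self-contained sketch with an assumed pattern for `hg bookmarks`-style lines (active bookmarks are marked `*`) could look like this:

```python
import re

# Hypothetical pattern for lines like " * feature  5:fedcba987654"
bookmarks_rx = re.compile(r'\s+(?P<active>\*\s+)?(?P<name>\S+)\s+\d+:[0-9a-f]+')

def parse_bookmarks(output):
    """Return bookmark names parsed from `hg bookmarks` style output."""
    if output.startswith('no bookmarks set'):
        return []
    names = []
    for line in output.splitlines():
        m = bookmarks_rx.match(line)
        if m:
            names.append(m.group('name'))
    return names

sample = '   main     3:0a1b2c3d4e5f\n * feature  5:fedcba987654'
print(parse_bookmarks(sample))  # ['main', 'feature']
```

Unlike the original, this sketch skips non-matching lines instead of asserting on them.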
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def content(self):
"""Get the file contents. This property is cached. The file is only read once. """ |
if not self._content:
self._content = self._read()
return self._content |
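The caching pattern above can be demonstrated with a minimal stand-in class (`LazyFile` is an illustrative name) that counts how often the underlying read runs:

```python
class LazyFile(object):
    """Minimal sketch of the cached-property pattern: _read runs once."""

    def __init__(self, text):
        self._text = text
        self._content = None
        self.read_count = 0

    def _read(self):
        self.read_count += 1
        return self._text

    @property
    def content(self):
        # Same guard as the original: populate the cache on first access
        if not self._content:
            self._content = self._read()
        return self._content

f = LazyFile('hello')
f.content
f.content
print(f.read_count)  # 1
```

One caveat of the `if not self._content` guard: a falsy cached value (an empty file) defeats the cache and triggers a re-read on every access; testing `self._content is None` would be stricter.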
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def config(self):
"""Get a Configuration object from the file contents.""" |
conf = config.Configuration()
for namespace in self.namespaces:
if not hasattr(conf, namespace):
if not self._strict:
continue
raise exc.NamespaceNotRegistered(
"The namespace {0} is not registered.".format(namespace)
)
name = getattr(conf, namespace)
for item, value in compat.iteritems(self.items(namespace)):
if not hasattr(name, item):
if not self._strict:
continue
raise exc.OptionNotRegistered(
"The option {0} is not registered.".format(item)
)
setattr(name, item, value)
return conf |
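The strict/lenient branching above (skip unknown keys, or raise when strict) can be sketched with a simplified flat namespace instead of the two-level namespace/option structure; `Namespace`, `apply_items`, and the plain `AttributeError` are illustrative stand-ins for the original's registered `Configuration` and `exc` types:

```python
class Namespace(object):
    """Bare attribute container standing in for a registered namespace."""
    pass

def apply_items(conf, items, strict=True):
    """Set only pre-registered options on conf.

    In strict mode an unknown option raises; otherwise it is silently
    skipped, mirroring the two branches of FileHandler.config above.
    """
    for item, value in items.items():
        if not hasattr(conf, item):
            if not strict:
                continue
            raise AttributeError(
                "The option {0} is not registered.".format(item))
        setattr(conf, item, value)
    return conf

ns = Namespace()
ns.debug = False          # "register" the option with a default
apply_items(ns, {'debug': True})
print(ns.debug)           # True
apply_items(ns, {'unknown': 1}, strict=False)  # silently skipped
```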
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read(self):
"""Open the file and return its contents.""" |
with open(self.path, 'r') as file_handle:
content = file_handle.read()
# Py27 INI config parser chokes if the content provided is not unicode.
# All other versions seems to work appropriately. Forcing the value to
# unicode here in order to resolve this issue.
return compat.unicode(content) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def ask(self, body, quick_replies=None, options=None, user=None):
""" simple ask with predefined quick replies :param body: :param quick_replies: (optional) in form of {'title': <message>, 'payload': <any json>} :param options: :param user: :return: """ |
await self.send_text_message_to_all_interfaces(
recipient=user,
text=body,
quick_replies=quick_replies,
options=options,
)
return any.Any() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def say(self, body, user, options):
""" say something to user :param body: :param user: :return: """ |
return await self.send_text_message_to_all_interfaces(
recipient=user, text=body, options=options) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect(self, protocolFactory):
"""Starts a process and connect a protocol to it. """ |
deferred = self._startProcess()
deferred.addCallback(self._connectRelay, protocolFactory)
deferred.addCallback(self._startRelay)
return deferred |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _startProcess(self):
"""Use the inductor to start the process we want to relay data from. """ |
connectedDeferred = defer.Deferred()
processProtocol = RelayProcessProtocol(connectedDeferred)
self.inductor.execute(processProtocol, *self.inductorArgs)
return connectedDeferred |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _connectRelay(self, process, protocolFactory):
"""Set up and connect the protocol we want to relay to the process. This method is automatically called when the process is started, and we are ready to relay through it. """ |
try:
wf = _WrappingFactory(protocolFactory)
connector = RelayConnector(process, wf, self.timeout,
self.inductor.reactor)
connector.connect()
except:
return defer.fail()
# Return a deferred that is called back when the protocol is connected.
return wf._onConnection |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _startRelay(self, client):
"""Start relaying data between the process and the protocol. This method is called when the protocol is connected. """ |
process = client.transport.connector.process
# Relay any buffered data that was received from the process before
# we got connected and started relaying.
for _, data in process.data:
client.dataReceived(data)
process.protocol = client
@process._endedDeferred.addBoth
def stopRelay(reason):
"""Stop relaying data. Called when the process has ended.
"""
relay = client.transport
relay.loseConnection(reason)
connector = relay.connector
connector.connectionLost(reason)
# Pass through the client protocol.
return client |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connectRelay(self):
"""Builds the target protocol and connects it to the relay transport. """ |
self.protocol = self.connector.buildProtocol(None)
self.connected = True
self.protocol.makeConnection(self) |