text_prompt: string (lengths 157 to 13.1k)
code_prompt: string (lengths 7 to 19.8k)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def doc_string(cls): """Get the doc string of this class. If this class does not have a doc string or the doc string is empty, try its base classes until the root base class, _ShellBase, is reached. CAVEAT: This method assumes that this class and all its super classes are derived from _ShellBase or object. """
clz = cls
while not clz.__doc__:
    clz = clz.__bases__[0]
return clz.__doc__
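A minimal standalone sketch of the base-class walk above (the `Base` and `Child` class names are illustrative, not from the library):

```python
def doc_string(cls):
    # Walk up the first base class until a non-empty __doc__ is found.
    clz = cls
    while not clz.__doc__:
        clz = clz.__bases__[0]
    return clz.__doc__

class Base:
    """Base shell help text."""

class Child(Base):
    pass  # no doc string of its own, so doc_string() falls back to Base

print(doc_string(Child))
```

The loop terminates for any new-style class because `object.__doc__` is non-empty.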
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def launch_subshell(self, shell_cls, cmd, args, *, prompt = None, context = {}): """Launch a subshell. The doc string of the cmdloop() method explains how shell histories and history files are saved and restored. The design of the _ShellBase class encourages launching of subshells through the subshell() decorator function. Nonetheless, the user has the option of directly launching subshells via this method. Arguments: shell_cls: The _ShellBase class object to instantiate and launch. args: Arguments used to launch this subshell. prompt: The name of the subshell. The default, None, means to use the shell_cls.__name__. context: A dictionary to pass to the subshell as its context. Returns: 'root': Inform the parent shell to keep exiting until the root shell is reached. 'all': Exit to the command line. False, None, or anything that evaluates to False: Inform the parent shell to stay in that parent shell. An integer indicating the depth of shell to exit to. 0 = root shell. """
# Save history of the current shell.
readline.write_history_file(self.history_fname)

prompt = prompt if prompt else shell_cls.__name__
mode = _ShellBase._Mode(
        shell = self,
        cmd = cmd,
        args = args,
        prompt = prompt,
        context = context,
)
shell = shell_cls(
        batch_mode = self.batch_mode,
        debug = self.debug,
        mode_stack = self._mode_stack + [ mode ],
        pipe_end = self._pipe_end,
        root_prompt = self.root_prompt,
        stdout = self.stdout,
        stderr = self.stderr,
        temp_dir = self._temp_dir,
)
# The subshell creates its own history context.
self.print_debug("Leave parent shell '{}'".format(self.prompt))
exit_directive = shell.cmdloop()
self.print_debug("Enter parent shell '{}': {}".format(self.prompt, exit_directive))

# Restore history. The subshell could have deleted the history file of
# this shell via 'history clearall'.
readline.clear_history()
if os.path.isfile(self.history_fname):
    readline.read_history_file(self.history_fname)

if exit_directive is not True:
    return exit_directive
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def batch_string(self, content): """Process a string in batch mode. Arguments: content: A unicode string representing the content to be processed. """
pipe_send, pipe_recv = multiprocessing.Pipe()
self._pipe_end = pipe_recv
proc = multiprocessing.Process(target = self.cmdloop)
for line in content.split('\n'):
    pipe_send.send(line)
pipe_send.close()
proc.start()
proc.join()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def cmdloop(self): """Start the main loop of the interactive shell. The preloop() and postloop() methods are always run before and after the main loop, respectively. Returns: 'root': Inform the parent shell to keep exiting until the root shell is reached. 'all': Exit all the way back to the command line shell. False, None, or anything that evaluates to False: Exit this shell, enter the parent shell. An integer: The depth of the shell to exit to. 0 = root shell. History: _ShellBase histories are persistently saved to files, whose name matches the prompt string. For example, if the prompt of a subshell is '(Foo-Bar-Kar)$ ', the name of its history file is s-Foo-Bar-Kar. The history_fname property encodes this algorithm. All history files are saved to the directory whose path is self._temp_dir. Subshells use the same temp_dir as their parent shells, thus their root shell. The history of the parent shell is saved and restored by the parent shell, as in launch_subshell(). The history of the subshell is saved and restored by the subshell, as in cmdloop(). When a subshell is started, i.e., when the cmdloop() method of the subshell is called, the subshell will try to load its own history file, whose file name is determined by the naming convention introduced earlier. Completer Delimiters: Certain characters such as '-' could be part of a command. But by default they are considered the delimiters by the readline library, which causes completion candidates with those characters to malfunction. The old completer delimiters are saved before the loop and restored after the loop ends. This is to keep the environment clean. """
self.print_debug("Enter subshell '{}'".format(self.prompt))

# Save the completer function, the history buffer, and the
# completer_delims.
old_completer = readline.get_completer()
old_delims = readline.get_completer_delims()
new_delims = ''.join(list(set(old_delims) - set(_ShellBase._non_delims)))
readline.set_completer_delims(new_delims)

# Load the new completer function and start a new history buffer.
readline.set_completer(self.__driver_stub)
readline.clear_history()
if os.path.isfile(self.history_fname):
    readline.read_history_file(self.history_fname)

# main loop
try:
    # The exit_directive:
    #     True        Leave this shell, enter the parent shell.
    #     False       Continue with the loop.
    #     'root'      Exit to the root shell.
    #     'all'       Exit to the command line.
    #     an integer  The depth of the shell to exit to. 0 = root
    #                 shell. Negative number is taken as error.
    self.preloop()
    while True:
        exit_directive = False
        try:
            if self.batch_mode:
                line = self._pipe_end.recv()
            else:
                line = input(self.prompt).strip()
        except EOFError:
            line = _ShellBase.EOF

        try:
            exit_directive = self.__exec_line__(line)
        except:
            self.stderr.write(traceback.format_exc())

        if type(exit_directive) is int:
            if len(self._mode_stack) > exit_directive:
                break
            if len(self._mode_stack) == exit_directive:
                continue
        if self._mode_stack and exit_directive == 'root':
            break
        if exit_directive in { 'all', True, }:
            break
finally:
    self.postloop()
    # Restore the completer function, save the history, and restore old
    # delims.
    readline.set_completer(old_completer)
    readline.write_history_file(self.history_fname)
    readline.set_completer_delims(old_delims)

self.print_debug("Leave subshell '{}': {}".format(self.prompt, exit_directive))

return exit_directive
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse_line(self, line): """Parse a line of input. The input line is tokenized using the same rules as the way bash shell tokenizes inputs. All quoting and escaping rules from the bash shell apply here too. The following cases are handled by __exec_line__(): 1. Empty line. 2. The input line is completely made of whitespace characters. 3. The input line is the EOF character. 4. The first token, as tokenized by shlex.split(), is '!'. 5. Internal commands, i.e., commands registered with internal = True Arguments: The line to parse. Returns: A tuple (cmd, args). The first element cmd must be a python3 string. The second element is, by default, a list of strings representing the arguments, as tokenized by shlex.split(). How to overload parse_line(): 1. The signature of the method must be the same. 2. The return value must be a tuple (cmd, args), where the cmd is a string representing the first token, and args is a list of strings. """
toks = shlex.split(line)
# Safe to index the 0-th element because this line would have been
# parsed by __exec_line__ if toks is an empty list.
return ( toks[0], [] if len(toks) == 1 else toks[1:] )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __driver_stub(self, text, state): """Display help messages or invoke the proper completer. The interface of helper methods and completer methods are documented in the helper() decorator method and the completer() decorator method, respectively. Arguments: text: A string, that is the current completion scope. state: An integer. Returns: A string used to replace the given text, if any. None if no completion candidates are found. Raises: This method is called via the readline callback. If this method raises an error, it is silently ignored by the readline library. This behavior makes debugging very difficult. For this reason, non-driver methods are run within try-except blocks. When an error occurs, the stack trace is printed to self.stderr. """
origline = readline.get_line_buffer()
line = origline.lstrip()
if line and line[-1] == '?':
    self.__driver_helper(line)
else:
    toks = shlex.split(line)
    return self.__driver_completer(toks, text, state)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __driver_completer(self, toks, text, state): """Driver level completer. Arguments: toks: A list of tokens, tokenized from the original input line. text: A string, the text to be replaced if a completion candidate is chosen. state: An integer, the index of the candidate out of the list of candidates. Returns: A string, the candidate. """
if state != 0:
    return self.__completion_candidates[state]

# Update the cache when this method is first called, i.e., state == 0.

# If the line is empty or the user is still inputting the first token,
# complete with available commands.
if not toks or (len(toks) == 1 and text == toks[0]):
    try:
        self.__completion_candidates = self.__complete_cmds(text)
    except:
        self.stderr.write('\n')
        self.stderr.write(traceback.format_exc())
        self.__completion_candidates = []
    return self.__completion_candidates[state]

# Otherwise, try to complete with the registered completer method.
cmd = toks[0]
args = toks[1:] if len(toks) > 1 else None
if text and args:
    del args[-1]

if cmd in self._completer_map.keys():
    completer_name = self._completer_map[cmd]
    completer_method = getattr(self, completer_name)
    try:
        self.__completion_candidates = completer_method(cmd, args, text)
    except:
        self.stderr.write('\n')
        self.stderr.write(traceback.format_exc())
        self.__completion_candidates = []
else:
    self.__completion_candidates = []

return self.__completion_candidates[state]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __complete_cmds(self, text): """Get the list of commands whose names start with a given text."""
return [ name for name in self._cmd_map_visible.keys() if name.startswith(text) ]
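The prefix match above can be exercised standalone (the command names here are illustrative, not from the library):

```python
def complete_cmds(cmd_names, text):
    # Return the commands whose names start with the typed text.
    return [name for name in cmd_names if name.startswith(text)]

candidates = complete_cmds(['help', 'history', 'exit'], 'h')
```

An empty `text` matches every visible command, which is how a bare Tab press lists all of them.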
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __driver_helper(self, line): """Driver level helper method. 1. Display help message for the given input. Internally calls self.__get_help_message() to obtain the help message. 2. Re-display the prompt and the input line. Arguments: line: The input line. Raises: Errors from helper methods print stack trace without terminating this shell. Other exceptions will terminate this shell. """
if line.strip() == '?':
    self.stdout.write('\n')
    self.stdout.write(self.doc_string())
else:
    toks = shlex.split(line[:-1])
    try:
        msg = self.__get_help_message(toks)
    except Exception:
        self.stderr.write('\n')
        self.stderr.write(traceback.format_exc())
        self.stderr.flush()
    else:
        self.stdout.write('\n')
        self.stdout.write(msg)

# Restore the prompt and the original input.
self.stdout.write('\n')
self.stdout.write(self.prompt)
self.stdout.write(line)
self.stdout.flush()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __build_cmd_maps(cls): """Build the mapping from command names to method names. One command name maps to at most one method. Multiple command names can map to the same method. Only used by __init__() to initialize self._cmd_map. MUST NOT be used elsewhere. Returns: A tuple (cmd_map, hidden_cmd_map, internal_cmd_map). """
cmd_map_all = {}
cmd_map_visible = {}
cmd_map_internal = {}
for name in dir(cls):
    obj = getattr(cls, name)
    if iscommand(obj):
        for cmd in getcommands(obj):
            if cmd in cmd_map_all.keys():
                raise PyShellError("The command '{}' already has cmd"
                                   " method '{}', cannot register a"
                                   " second method '{}'.".format(
                                       cmd, cmd_map_all[cmd], obj.__name__))
            cmd_map_all[cmd] = obj.__name__
            if isvisiblecommand(obj):
                cmd_map_visible[cmd] = obj.__name__
            if isinternalcommand(obj):
                cmd_map_internal[cmd] = obj.__name__
return cmd_map_all, cmd_map_visible, cmd_map_internal
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __build_helper_map(cls): """Build a mapping from command names to helper names. One command name maps to at most one helper method. Multiple command names can map to the same helper method. Only used by __init__() to initialize self._cmd_map. MUST NOT be used elsewhere. Raises: PyShellError: A command maps to multiple helper methods. """
ret = {}
for name in dir(cls):
    obj = getattr(cls, name)
    if ishelper(obj):
        for cmd in obj.__help_targets__:
            if cmd in ret.keys():
                raise PyShellError("The command '{}' already has helper"
                                   " method '{}', cannot register a"
                                   " second method '{}'.".format(
                                       cmd, ret[cmd], obj.__name__))
            ret[cmd] = obj.__name__
return ret
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __build_completer_map(cls): """Build a mapping from command names to completer names. One command name maps to at most one completer method. Multiple command names can map to the same completer method. Only used by __init__() to initialize self._cmd_map. MUST NOT be used elsewhere. Raises: PyShellError: A command maps to multiple completer methods. """
ret = {}
for name in dir(cls):
    obj = getattr(cls, name)
    if iscompleter(obj):
        for cmd in obj.__complete_targets__:
            if cmd in ret.keys():
                raise PyShellError("The command '{}' already has"
                                   " completer method '{}', cannot"
                                   " register a second method"
                                   " '{}'.".format(
                                       cmd, ret[cmd], obj.__name__))
            ret[cmd] = obj.__name__
return ret
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def review_score(self, reviewer, product): """Find a review score from a given reviewer to a product. Args: reviewer: Reviewer i.e. an instance of :class:`ria.bipartite.Reviewer`. product: Product i.e. an instance of :class:`ria.bipartite.Product`. Returns: A review object representing the review from the reviewer to the product. """
return self._g.retrieve_review(reviewer, product).score
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_dict(cls: typing.Type[T], dikt) -> T: """Returns the dict as a model"""
return util.deserialize_model(dikt, cls)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_defs(self, cache=True): """ Gets the definitions args: cache: True will read from the file cache, False queries the triplestore """
log.debug(" *** Started")
cache = self.__use_cache__(cache)
if cache:
    log.info(" loading json cache")
    try:
        with open(self.cache_filepath) as file_obj:
            self.results = json.loads(file_obj.read())
    except FileNotFoundError:
        self.results = []
if not cache or len(self.results) == 0:
    log.info(" NO CACHE, querying the triplestore")
    sparql = render_without_request(self.def_sparql,
                                    graph=self.conn.graph,
                                    prefix=self.nsm.prefix())
    start = datetime.datetime.now()
    log.info(" Starting query")
    self.results = self.conn.query(sparql)
    log.info("query complete in: %s | %s triples retrieved.",
             (datetime.datetime.now() - start),
             len(self.results))
    with open(self.cache_filepath, "w") as file_obj:
        file_obj.write(json.dumps(self.results, indent=4))
    with open(self.loaded_filepath, "w") as file_obj:
        file_obj.write(json.dumps(self.conn.mgr.loaded))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def conv_defs(self): """ Reads through the JSON object and converts them to Dataset """
log.setLevel(self.log_level)
start = datetime.datetime.now()
log.debug(" Converting to a Dataset: %s Triples", len(self.results))
self.defs = RdfDataset(self.results, def_load=True, bnode_only=True)
# self.cfg.__setattr__('rdf_prop_defs', self.defs, True)
log.debug(" conv complete in: %s" % (datetime.datetime.now() - start))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_class_dict(self): """ Reads through the dataset and assigns self.class_dict the key value pairs for the classes in the dataset """
self.class_dict = {}
for name, cls_defs in self.defs.items():
    def_type = set(cls_defs.get(self.rdf_type, []))
    if name.type == 'bnode':
        continue
    # A class can be determined by checking to see if it is of an
    # rdf_type listed in the classes_key or has a property that is
    # listed in the inferred_key.
    if def_type.intersection(self.classes_key) or \
            any(cls_defs.get(item) for item in self.inferred_key):
        self.class_dict[name] = cls_defs
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def tie_properties(self, class_list): """ Runs through the classes and ties the properties to the class args: class_list: a list of class names to run """
log.setLevel(self.log_level)
start = datetime.datetime.now()
log.info(" Tying properties to the class")
for cls_name in class_list:
    cls_obj = getattr(MODULE.rdfclass, cls_name)
    # Copy the dict so setattr below cannot disturb the iteration.
    prop_dict = dict(cls_obj.properties)
    for prop_name, prop_obj in prop_dict.items():
        setattr(cls_obj, prop_name, link_property(prop_obj, cls_obj))
log.info(" Finished tying properties in: %s",
         (datetime.datetime.now() - start))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def elemgetter(path: str) -> t.Callable[[Element], Element]: """shortcut making an XML element getter"""
return compose(
    partial(_raise_if_none, exc=LookupError(path)),
    methodcaller('find', path)
)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def textgetter(path: str, *, default: T=NO_DEFAULT, strip: bool=False) -> t.Callable[[Element], t.Union[str, T]]: """shortcut for making an XML element text getter"""
find = compose(
    str.strip if strip else identity,
    partial(_raise_if_none, exc=LookupError(path)),
    methodcaller('findtext', path)
)
return (find if default is NO_DEFAULT
        else lookup_defaults(find, default))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _parse_alt_url(html_chunk): """ Parse URL from alternative location if not found where it should be. Args: html_chunk (obj): HTMLElement containing slice of the page with details. Returns: str: Book's URL. """
url_list = html_chunk.find("a", fn=has_param("href"))
# list comprehensions instead of map()/filter(): in python3 those return
# lazy objects, which are always truthy, so the emptiness check below
# would never fire.
url_list = [a.params["href"] for a in url_list]
url_list = [url for url in url_list if not url.startswith("autori/")]

if not url_list:
    return None

return normalize_url(BASE_URL, url_list[0])
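The filter-then-fallback step can be sketched without the HTML parser (`pick_alt_url` and the simple join stand in for the real `normalize_url` helper, which is not shown here):

```python
def pick_alt_url(hrefs, base_url):
    # Drop author links and keep the first remaining href, if any.
    urls = [h for h in hrefs if not h.startswith("autori/")]
    if not urls:
        return None
    return base_url.rstrip('/') + '/' + urls[0]

result = pick_alt_url(["autori/novak", "kniha-123"], "http://example.com")
```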
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _parse_from_table(html_chunk, what): """ Go thru table data in `html_chunk` and try to locate content of the neighbor cell of the cell containing `what`. Returns: str: Table data or None. """
ean_tag = html_chunk.find("tr", fn=must_contain("th", what, "td"))

if not ean_tag:
    return None

return get_first_content(ean_tag[0].find("td"))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_publications(): """ Get list of publications offered by cpress.cz. Returns: list: List of :class:`.Publication` objects. """
data = DOWNER.download(URL)
dom = dhtmlparser.parseString(
    handle_encodnig(data)
)

book_list = dom.find("div", {"class": "polozka"})

books = []
for book in book_list:
    books.append(
        _process_book(book)
    )

return books
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def resolve( self, configurable=None, scope=None, safe=None, besteffort=None ): """Resolve all parameters. :param Configurable configurable: configurable to use for foreign parameter resolution. :param dict scope: variables to use for parameter expression evaluation. :param bool safe: safe execution (remove builtins functions). :raises: Parameter.Error for any raised exception. """
if scope is None:
    scope = self.scope
if safe is None:
    safe = self.safe
if besteffort is None:
    besteffort = self.besteffort

for category in self.values():
    for param in category.values():
        param.resolve(
            configurable=configurable, conf=self,
            scope=scope, safe=safe, besteffort=besteffort
        )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def param(self, pname, cname=None, history=0): """Get parameter from a category and history. :param str pname: parameter name. :param str cname: category name. Default is the last registered. :param int history: historical param value from specific category or final parameter value if cname is not given. For example, if history equals 1 and cname is None, result is the value defined just before the last parameter value if exist. If cname is given, the result is the parameter value defined before the category cname. :rtype: Parameter :raises: NameError if pname or cname do not exist."""
result = None
category = None
categories = []  # list of categories containing input parameter name

for cat in self.values():
    if pname in cat:
        categories.append(cat)
    if cname == cat.name:
        break

if cname is not None and (
        not categories or categories[-1].name != cname
):
    raise NameError('Category {0} does not exist.'.format(cname))

categories = categories[:max(1, len(categories) - history)]

for category in categories:
    if pname in category:
        if result is None:
            result = category[pname].copy()
        else:
            result.update(category[pname])

return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def apache_md5crypt(password, salt, magic='$apr1$'): """ Calculates the Apache-style MD5 hash of a password """
password = password.encode('utf-8')
salt = salt.encode('utf-8')
magic = magic.encode('utf-8')

m = md5()
m.update(password + magic + salt)

mixin = md5(password + salt + password).digest()
# Slice one byte at a time: in python3, indexing bytes yields an int,
# which md5.update() rejects.
for i in range(len(password)):
    m.update(mixin[i % 16:i % 16 + 1])

i = len(password)
while i:
    if i & 1:
        m.update(b'\x00')
    else:
        m.update(password[0:1])
    i >>= 1

final = m.digest()

for i in range(1000):
    m2 = md5()
    if i & 1:
        m2.update(password)
    else:
        m2.update(final)
    if i % 3:
        m2.update(salt)
    if i % 7:
        m2.update(password)
    if i & 1:
        m2.update(final)
    else:
        m2.update(password)
    final = m2.digest()

itoa64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'

rearranged = ''
seq = ((0, 6, 12), (1, 7, 13), (2, 8, 14), (3, 9, 15), (4, 10, 5))
for a, b, c in seq:
    v = final[a] << 16 | final[b] << 8 | final[c]
    for i in range(4):
        rearranged += itoa64[v & 0x3f]
        v >>= 6

v = final[11]
for i in range(2):
    rearranged += itoa64[v & 0x3f]
    v >>= 6

return magic.decode() + salt.decode() + '$' + rearranged
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_juttle_data_url(deployment_name, token_manager=None, app_url=defaults.APP_URL): """ return the juttle data url """
return get_data_url(deployment_name, endpoint_type='juttle', app_url=app_url, token_manager=token_manager)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_import_data_url(deployment_name, token_manager=None, app_url=defaults.APP_URL): """ return the import data url """
return get_data_url(deployment_name, endpoint_type='http-import', app_url=app_url, token_manager=token_manager)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def __wss_connect(data_url, token_manager, job_id=None): """ Establish the websocket connection to the data engine. When job_id is provided we're basically establishing a websocket to an existing program that was already started using the jobs API job_id: job id of a running program """
url = '%s/api/v1/juttle/channel' % data_url.replace('https://', 'wss://')

token_obj = {
    "accessToken": token_manager.get_access_token()
}

if job_id is not None:
    token_obj['job_id'] = job_id

if is_debug_enabled():
    debug("connecting to %s", url)

websocket = create_connection(url)
websocket.settimeout(10)

if is_debug_enabled():
    debug("sent %s", json.dumps(token_obj))

websocket.send(json.dumps(token_obj))
return websocket
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def connect_job(job_id, deployment_name, token_manager=None, app_url=defaults.APP_URL, persist=False, websocket=None, data_url=None): """ connect to a running Juttle program by job_id """
if data_url is None:
    data_url = get_data_url_for_job(job_id,
                                    deployment_name,
                                    token_manager=token_manager,
                                    app_url=app_url)

if websocket is None:
    websocket = __wss_connect(data_url, token_manager, job_id=job_id)

pong = json.dumps({
    'pong': True
})

if not persist:
    job_finished = False

    while not job_finished:
        try:
            data = websocket.recv()

            if data:
                payload = json.loads(data)

                if is_debug_enabled():
                    printable_payload = dict(payload)
                    if 'points' in payload:
                        # don't want to print out all the outputs when in
                        # debug mode
                        del printable_payload['points']
                        printable_payload['points'] = 'NOT SHOWN'
                    debug('received %s' % json.dumps(printable_payload))

                if 'ping' in payload.keys():
                    # ping/pong (ie heartbeat) mechanism
                    websocket.send(pong)
                    if is_debug_enabled():
                        debug('sent %s' % json.dumps(pong))

                if 'job_end' in payload.keys() and payload['job_end'] == True:
                    job_finished = True

                if token_manager.is_access_token_expired():
                    debug('refreshing access token')
                    token_obj = {
                        "accessToken": token_manager.get_access_token()
                    }
                    # refresh authentication token
                    websocket.send(json.dumps(token_obj))

                if 'error' in payload:
                    if payload['error'] == 'NONEXISTENT-JOB':
                        raise JutException('Job "%s" no longer running' % job_id)

                # return all channel messages
                yield payload
            else:
                debug('payload was "%s", forcing websocket reconnect' % data)
                raise IOError()

        except IOError:
            if is_debug_enabled():
                traceback.print_exc()

            #
            # We'll retry for just under 30s since internally we stop
            # running non persistent programs after 30s of not heartbeating
            # with the client
            #
            retry = 1
            while retry <= 5:
                try:
                    debug('network error reconnecting to job %s, '
                          'try %s of 5' % (job_id, retry))
                    websocket = __wss_connect(data_url,
                                              token_manager,
                                              job_id=job_id)
                    break
                except socket.error:
                    if is_debug_enabled():
                        traceback.print_exc()
                    retry += 1
                    time.sleep(5)
            else:
                # all retries exhausted; one final attempt, letting any
                # exception propagate
                debug('network error reconnecting to job %s, '
                      'try %s of 5' % (job_id, retry))
                websocket = __wss_connect(data_url,
                                          token_manager,
                                          job_id=job_id)

websocket.close()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_jobs(deployment_name, token_manager=None, app_url=defaults.APP_URL): """ return list of currently running jobs """
headers = token_manager.get_access_token_headers()
data_urls = get_data_urls(deployment_name,
                          app_url=app_url,
                          token_manager=token_manager)

jobs = []
for data_url in data_urls:
    url = '%s/api/v1/jobs' % data_url
    response = requests.get(url, headers=headers)

    if response.status_code == 200:
        # saving the data_url for the specific job so you know where to
        # connect if you want to interact with that job
        these_jobs = response.json()['jobs']
        for job in these_jobs:
            job['data_url'] = data_url
        jobs += these_jobs
    else:
        raise JutException('Error %s: %s' % (response.status_code, response.text))

return jobs
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_job_details(job_id, deployment_name, token_manager=None, app_url=defaults.APP_URL): """ return job details for a specific job id """
jobs = get_jobs(deployment_name,
                token_manager=token_manager,
                app_url=app_url)

for job in jobs:
    if job['id'] == job_id:
        return job

raise JutException('Unable to find job with id "%s"' % job_id)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def delete_job(job_id, deployment_name, token_manager=None, app_url=defaults.APP_URL): """ delete a job with a specific job id """
headers = token_manager.get_access_token_headers()
data_url = get_data_url_for_job(job_id,
                                deployment_name,
                                token_manager=token_manager,
                                app_url=app_url)

url = '%s/api/v1/jobs/%s' % (data_url, job_id)
response = requests.delete(url, headers=headers)

if response.status_code != 200:
    raise JutException('Error %s: %s' % (response.status_code, response.text))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def lines(input): """Remove comments and empty lines"""
for raw_line in input:
    line = raw_line.strip()
    if line and not line.startswith('#'):
        yield strip_comments(line)
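A self-contained sketch of the same filtering, with a stand-in for the `strip_comments` helper (which is assumed, not shown in the source):

```python
def strip_comments(line):
    # Stand-in helper: cut the line at the first '#' and trim trailing space.
    return line.split('#', 1)[0].rstrip()

def lines(source):
    # Skip blank lines and whole-line comments; strip trailing comments.
    for raw_line in source:
        line = raw_line.strip()
        if line and not line.startswith('#'):
            yield strip_comments(line)

cleaned = list(lines(['# header', '', 'key = value  # trailing', 'plain']))
```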
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description:
def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh'):
    ''' run a command on the local host '''

    if not self.runner.sudo or not sudoable:
        if executable:
            local_cmd = [executable, '-c', cmd]
        else:
            local_cmd = cmd
    else:
        local_cmd, prompt = utils.make_sudo_cmd(sudo_user, executable, cmd)

    vvv("EXEC %s" % (local_cmd), host=self.host)
    p = subprocess.Popen(local_cmd,
                         shell=isinstance(local_cmd, basestring),
                         cwd=self.runner.basedir,
                         executable=executable or None,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)

    if self.runner.sudo and sudoable and self.runner.sudo_pass:
        fcntl.fcntl(p.stdout, fcntl.F_SETFL,
                    fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
        fcntl.fcntl(p.stderr, fcntl.F_SETFL,
                    fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
        sudo_output = ''
        while not sudo_output.endswith(prompt):
            rfd, wfd, efd = select.select([p.stdout, p.stderr], [],
                                          [p.stdout, p.stderr],
                                          self.runner.timeout)
            if p.stdout in rfd:
                chunk = p.stdout.read()
            elif p.stderr in rfd:
                chunk = p.stderr.read()
            else:
                stdout, stderr = p.communicate()
                raise errors.AnsibleError('timeout waiting for sudo password prompt:\n' + sudo_output)
            if not chunk:
                stdout, stderr = p.communicate()
                raise errors.AnsibleError('sudo output closed while waiting for password prompt:\n' + sudo_output)
            sudo_output += chunk
        p.stdin.write(self.runner.sudo_pass + '\n')
        fcntl.fcntl(p.stdout, fcntl.F_SETFL,
                    fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
        fcntl.fcntl(p.stderr, fcntl.F_SETFL,
                    fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)

    stdout, stderr = p.communicate()
    return (p.returncode, '', stdout, stderr)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description:
def put_file(self, in_path, out_path):
    ''' transfer a file from local to local '''
    vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
    if not os.path.exists(in_path):
        raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
    try:
        shutil.copyfile(in_path, out_path)
    except shutil.Error:
        traceback.print_exc()
        raise errors.AnsibleError("failed to copy: %s and %s are the same" % (in_path, out_path))
    except IOError:
        traceback.print_exc()
        raise errors.AnsibleError("failed to transfer file to %s" % out_path)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getPythonVarName(name): """Get the python variable name """
return SUB_REGEX.sub('', name.replace('+', '_').replace('-', '_').replace('.', '_').replace(' ', '').replace('/', '_')).upper()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse(self, text): """Parse the text content """
root = ET.fromstring(text)
for elm in root.findall('{http://www.iana.org/assignments}registry'):
    for record in elm.findall('{http://www.iana.org/assignments}record'):
        for fileElm in record.findall('{http://www.iana.org/assignments}file'):
            if fileElm.get('type') == 'template':
                mimeType = fileElm.text.strip()
                yield mimeType
                break
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parsefile(self, filename): """Parse from the file """
with open(filename, 'rb') as fd: return self.parse(fd.read())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def check(self): """ Check if we have an active login session set @rtype: bool """
self.log.debug('Testing for a valid login session')

# If our cookie jar is empty, we obviously don't have a valid login session
if not len(self.cookiejar):
    return False

# Test our login session and make sure it's still active
return requests.get(self.TEST_URL, cookies=self.cookiejar).status_code == 200
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def process(self, username, password, remember=True): """ Process a login request @type username: str @type password: str @param remember: Save the login session to disk @type remember: bool @raise BadLoginException: Login request failed @return: Session cookies @rtype: cookielib.LWPCookieJar """
self.log.debug('Processing login request')
self.browser.open(self.LOGIN_URL)
self.log.info('Login page loaded: %s', self.browser.title())
self.browser.select_form(nr=0)

# Set the fields
self.log.debug('Username: %s', username)
self.log.debug('Password: %s', (password[0] + '*' * (len(password) - 2) + password[-1]))
self.log.debug('Remember: %s', remember)
self.browser.form[self.USERNAME_FIELD] = username
self.browser.form[self.PASSWORD_FIELD] = password
self.browser.find_control(self.REMEMBER_FIELD).items[0].selected = remember

# Submit the request
self.browser.submit()
self.log.debug('Response code: %s', self.browser.response().code)

self.log.debug('== Cookies ==')
for cookie in self.cookiejar:
    self.log.debug(cookie)
    self.cookies[cookie.name] = cookie.value
self.log.debug('== End Cookies ==')

# Make sure we successfully logged in
if self.LOGIN_COOKIE not in self.cookies:
    raise BadLoginException('No login cookie returned, this probably means an invalid login was provided')

# Should we save our login session?
if remember:
    self.log.info('Saving login session to disk')
    self.cookiejar.save()

self.log.info('Login request successful')
return self.cookiejar
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_libmarquise_header(): """Read the libmarquise header to extract definitions."""
# Header file is packaged in the same place as the rest of the
# module.
header_path = os.path.join(os.path.dirname(__file__), "marquise.h")
with open(header_path) as header:
    libmarquise_header_lines = header.readlines()

libmarquise_header_lines = [
    line for line in libmarquise_header_lines
    if not line.startswith('#include ') and not line.startswith('#define ')
]
libmarquise_header_lines = [
    line for line in libmarquise_header_lines
    if not line.startswith('#include ')
]

# We can't #include glib so FFI doesn't know what a GTree is. Leave it for
# later and let the C compiler resolve it when we call FFI.verify()
libmarquise_header_lines = [
    line.replace("GTree *sd_hashes;", "...;")
    for line in libmarquise_header_lines
]

return ''.join(libmarquise_header_lines)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def head(self, uuid): """ Get one thread."""
url = "%(base)s/%(uuid)s" % {
    'base': self.local_base_url,
    'uuid': uuid
}
return self.core.head(url)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def default(self, obj): # pylint: disable=method-hidden """Use the default behavior unless the object to be encoded has a `strftime` attribute."""
if hasattr(obj, 'strftime'):
    return obj.strftime("%Y-%m-%dT%H:%M:%SZ")
elif hasattr(obj, 'get_public_dict'):
    return obj.get_public_dict()
else:
    return json.JSONEncoder.default(self, obj)
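To see this dispatch in action, here is a self-contained sketch of such an encoder (the class name `APIEncoder` is illustrative, not from the source) serializing a `datetime`, which has a `strftime` attribute:

```python
import json
import datetime

class APIEncoder(json.JSONEncoder):
    """Minimal sketch of the default() override above."""
    def default(self, obj):  # pylint: disable=method-hidden
        if hasattr(obj, 'strftime'):
            return obj.strftime("%Y-%m-%dT%H:%M:%SZ")
        elif hasattr(obj, 'get_public_dict'):
            return obj.get_public_dict()
        # Fall back to the base class, which raises TypeError.
        return json.JSONEncoder.default(self, obj)

payload = {'when': datetime.datetime(2020, 1, 2, 3, 4, 5)}
print(json.dumps(payload, cls=APIEncoder))
# → {"when": "2020-01-02T03:04:05Z"}
```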
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def taskotron_task(config, message, task=None): """ Particular taskotron task With this rule, you can limit messages to only those of particular `taskotron <https://taskotron.fedoraproject.org/>`_ task. You can specify several tasks by separating them with a comma ',', i.e.: ``dist.depcheck,dist.rpmlint``. """
# We only operate on taskotron messages, first off.
if not taskotron_result_new(config, message):
    return False

if not task:
    return False

tasks = [item.strip().lower() for item in task.split(',')]
return message['msg']['task'].get('name').lower() in tasks
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def taskotron_changed_outcome(config, message): """ Taskotron task outcome changed With this rule, you can limit messages to only those task results with changed outcomes. This is useful when an object (a build, an update, etc) gets retested and either the object itself or the environment changes and the task outcome is now different (e.g. FAILED -> PASSED). """
# We only operate on taskotron messages, first off.
if not taskotron_result_new(config, message):
    return False

outcome = message['msg']['result'].get('outcome')
prev_outcome = message['msg']['result'].get('prev_outcome')

return prev_outcome is not None and outcome != prev_outcome
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def taskotron_task_outcome(config, message, outcome=None): """ Particular taskotron task outcome With this rule, you can limit messages to only those of particular `taskotron <https://taskotron.fedoraproject.org/>`_ task outcome. You can specify several outcomes by separating them with a comma ',', i.e.: ``PASSED,FAILED``. The full list of supported outcomes can be found in the libtaskotron `documentation <https://docs.qadevel.cloud.fedoraproject.org/ libtaskotron/latest/resultyaml.html#minimal-version>`_. """
# We only operate on taskotron messages, first off.
if not taskotron_result_new(config, message):
    return False

if not outcome:
    return False

outcomes = [item.strip().lower() for item in outcome.split(',')]
return message['msg']['result'].get('outcome').lower() in outcomes
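The comma-separated matching used by these rules can be exercised in isolation; this sketch (the `matches` helper is an assumption, distilled from the rule body, not part of the source) shows the normalization of a config string like ``PASSED,FAILED``:

```python
def matches(config_value, actual):
    # Same normalization as the rule: split on ',', strip whitespace, lowercase.
    wanted = [item.strip().lower() for item in config_value.split(',')]
    return actual.lower() in wanted

print(matches('PASSED, FAILED', 'failed'))  # → True
print(matches('PASSED', 'INFO'))            # → False
```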
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def taskotron_release_critical_task(config, message): """ Release-critical taskotron tasks With this rule, you can limit messages to only those of release-critical `taskotron <https://taskotron.fedoraproject.org/>`_ task. These are the tasks which are deemed extremely important by the distribution, and their failure should be carefully inspected. Currently these tasks are ``dist.depcheck`` and ``dist.upgradepath``. """
# We only operate on taskotron messages, first off.
if not taskotron_result_new(config, message):
    return False

task = message['msg']['task'].get('name')
return task in ['dist.depcheck', 'dist.upgradepath']
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def execute(action, io_loop=None): """Execute the given action and return a Future with the result. The ``forwards`` and/or ``backwards`` methods for the action may be synchronous or asynchronous. If asynchronous, that method must return a Future that will resolve to its result. See :py:func:`reversible.execute` for more details on the behavior of ``execute``. :param action: The action to execute. :param io_loop: IOLoop through which asynchronous operations will be executed. If omitted, the current IOLoop is used. :returns: A future containing the result of executing the action. """
if not io_loop:
    io_loop = IOLoop.current()

output = Future()

def call():
    try:
        result = _execute(_TornadoAction(action, io_loop))
    except Exception:
        output.set_exc_info(sys.exc_info())
    else:
        output.set_result(result)

io_loop.add_callback(greenlet.greenlet(call).switch)
return output
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_triple(self, sub, pred=None, obj=None, **kwargs): """ Adds a triple to the dataset args: sub: The subject of the triple or dictionary containing a triple pred: Optional if supplied in sub, predicate of the triple obj: Optional if supplied in sub, object of the triple kwargs: map: Optional, a dictionary mapping for a supplied dictionary strip_orphans: Optional, remove triples that have an orphan blanknode as the object obj_method: if "list" then the object will be returned in the form of a list """
self.__set_map__(**kwargs)
strip_orphans = kwargs.get("strip_orphans", False)
obj_method = kwargs.get("obj_method")
if isinstance(sub, DictClass) or isinstance(sub, dict):
    pred = sub[self.pmap]
    obj = sub[self.omap]
    sub = sub[self.smap]

pred = pyrdf(pred)
obj = pyrdf(obj)
sub = pyrdf(sub)

# reference existing attr for bnodes and uris
if obj.type in self.relate_obj_types:
    if strip_orphans and not self.get(obj):
        return
    obj = self.get(obj, obj)

try:
    self[sub].add_property(pred, obj)
except KeyError:
    self[sub] = RdfClassBase(sub, self, **kwargs)
    self[sub].add_property(pred, obj)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_data(self, data, **kwargs): """ Bulk adds rdf data to the class args: data: the data to be loaded kwargs: strip_orphans: True or False - remove triples that have an orphan blanknode as the object obj_method: "list", or None: if "list" the object of a method will be in the form of a list. """
self.__set_map__(**kwargs)
start = datetime.datetime.now()
log.debug("Dataload started")
if isinstance(data, list):
    data = self._convert_results(data, **kwargs)
class_types = self.__group_data__(data, **kwargs)

# generate classes and add attributes to the data
self._generate_classes(class_types, self.non_defined, **kwargs)

# add triples to the dataset
for triple in data:
    self.add_triple(sub=triple, **kwargs)
log.debug("Dataload completed in '%s'",
          (datetime.datetime.now() - start))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_rmap_item(self, subj, pred, obj): """ adds a triple to the inverted dataset index """
def add_item(self, subj, pred, obj):
    try:
        self.rmap[obj][pred].append(subj)
    except KeyError:
        try:
            self.rmap[obj][pred] = [subj]
        except KeyError:
            self.rmap[obj] = {pred: [subj]}

if isinstance(obj, list):
    for item in obj:
        add_item(self, subj, pred, item)
else:
    add_item(self, subj, pred, obj)
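The try/except cascade above builds a two-level inverted index (object → predicate → subjects). A self-contained sketch of the same pattern against a plain dict (the standalone `add_rmap_item` signature here is an illustration, not the class method):

```python
def add_rmap_item(rmap, subj, pred, obj):
    # Same try/except cascade: append if the leaf list exists,
    # otherwise create the missing level(s) on the way down.
    try:
        rmap[obj][pred].append(subj)
    except KeyError:
        try:
            rmap[obj][pred] = [subj]
        except KeyError:
            rmap[obj] = {pred: [subj]}

rmap = {}
add_rmap_item(rmap, 's1', 'p', 'o')
add_rmap_item(rmap, 's2', 'p', 'o')
print(rmap)  # → {'o': {'p': ['s1', 's2']}}
```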
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _generate_classes(self, class_types, non_defined, **kwargs): """ creates the class for each class in the data set args: class_types: list of class_types in the dataset non_defined: list of subjects that have no defined class """
# kwargs['dataset'] = self
for class_type in class_types:
    self[class_type[self.smap]] = self._get_rdfclass(
            class_type, **kwargs)(class_type, self, **kwargs)
    self.add_rmap_item(self[class_type[self.smap]],
                       class_type[self.pmap],
                       class_type[self.omap])
for class_type in non_defined:
    self[class_type] = RdfClassBase(class_type, self, **kwargs)
    self.add_rmap_item(self[class_type], __a__, None)
self.__set_classes__
try:
    self.base_class = self[self.base_uri]
except KeyError:
    self.base_class = None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_rdfclass(self, class_type, **kwargs): """ returns the instantiated class from the class list args: class_type: dictionary with rdf_types """
def select_class(class_name):
    """ finds the class in the rdfclass Module """
    try:
        return getattr(MODULE.rdfclass, class_name.pyuri)
    except AttributeError:
        return RdfClassBase

if kwargs.get("def_load"):
    return RdfClassBase

if isinstance(class_type[self.omap], list):
    bases = [select_class(class_name)
             for class_name in class_type[self.omap]]
    bases = [base for base in bases if base != RdfClassBase]
    if len(bases) == 0:
        return RdfClassBase
    elif len(bases) == 1:
        return bases[0]
    else:
        bases = remove_parents(bases)
        if len(bases) == 1:
            return bases[0]
        else:
            name = "_".join(sorted(class_type[self.omap]))
            # if the class has already been created return it
            if hasattr(MODULE.rdfclass, name):
                return getattr(MODULE.rdfclass, name)
            new_class = type(name, tuple(bases), {})
            new_class.hierarchy = list_hierarchy(class_type[self.omap][0], bases)
            new_class.class_names = sorted([base.__name__
                                            for base in bases
                                            if base not in [RdfClassBase, dict]])
            setattr(MODULE.rdfclass, name, new_class)
            return new_class
else:
    return select_class(class_type[self.omap])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_timedelta(cls, datetime_obj, duration): """Create a new TimeInterval object from a start point and a duration. If duration is positive, datetime_obj is the start of the interval; if duration is negative, datetime_obj is the end of the interval.
    Parameters
    ----------
    datetime_obj : datetime.datetime
    duration : datetime.timedelta

    Returns
    -------
    neutils.time.TimeInterval
    """
if duration.total_seconds() > 0:
    return TimeInterval(datetime_obj, datetime_obj + duration)
else:
    return TimeInterval(datetime_obj + duration, datetime_obj)
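The sign-based ordering can be checked without the `TimeInterval` class itself; this sketch (the `interval_endpoints` helper is an assumption standing in for the constructor, returning a plain `(start, end)` tuple) shows that the start never comes after the end:

```python
import datetime

def interval_endpoints(dt, duration):
    # Same ordering rule as from_timedelta, as a (start, end) tuple.
    if duration.total_seconds() > 0:
        return (dt, dt + duration)
    return (dt + duration, dt)

noon = datetime.datetime(2021, 6, 1, 12, 0)
hour = datetime.timedelta(hours=1)
print(interval_endpoints(noon, hour))   # start at noon, end an hour later
print(interval_endpoints(noon, -hour))  # start an hour earlier, end at noon
```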
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_startstop(sheet, startcell=None, stopcell=None): """ Return two StartStop objects, based on the sheet and startcell and stopcell. sheet: xlrd.sheet.Sheet instance Ready for use. startcell: str or None If given, a spread sheet style notation of the cell where data start, ("F9"). stopcell: str or None A spread sheet style notation of the cell where data end, ("F9"). startcell and stopcell can be used in any combination. """
start = StartStop(0, 0)             # row, col
stop = StartStop(sheet.nrows, sheet.ncols)

if startcell:
    m = re.match(XLNOT_RX, startcell)
    start.row = int(m.group(2)) - 1
    start.col = letter2num(m.group(1), zbase=True)

if stopcell:
    m = re.match(XLNOT_RX, stopcell)
    stop.row = int(m.group(2))      # Stop number is exclusive
    stop.col = letter2num(m.group(1), zbase=False)

return [start, stop]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def prepread(sheet, header=True, startcell=None, stopcell=None): """Return four StartStop objects, defining the outer bounds of header row and data range, respectively. If header is False, the first two items will be None. --> [headstart, headstop, datstart, datstop] sheet: xlrd.sheet.Sheet instance Ready for use. header: bool or str True if the defined data range includes a header with field names. Else False - the whole range is data. If a string, it is spread sheet style notation of the startcell for the header ("F9"). The "width" of this record is the same as for the data. startcell: str or None If given, a spread sheet style notation of the cell where reading start, ("F9"). stopcell: str or None A spread sheet style notation of the cell where data end, ("F9"). startcell and stopcell can both be None, either one specified or both specified. Note to self: consider making possible to specify headers in a column. """
datstart, datstop = _get_startstop(sheet, startcell, stopcell)
headstart, headstop = StartStop(0, 0), StartStop(0, 0)  # Holders

def typicalprep():
    headstart.row, headstart.col = datstart.row, datstart.col
    headstop.row, headstop.col = datstart.row + 1, datstop.col
    # Tick the data start row by 1:
    datstart.row += 1

def offsetheaderprep():
    headstart.row, headstart.col = headrow, headcol
    headstop.row = headrow + 1
    headstop.col = headcol + (datstop.col - datstart.col)  # stop > start

if header is True:      # Simply the toprow of the table.
    typicalprep()
    return [headstart, headstop, datstart, datstop]
elif header:            # Then it is a string if not False. ("F9")
    m = re.match(XLNOT_RX, header)
    headrow = int(m.group(2)) - 1
    headcol = letter2num(m.group(1), zbase=True)
    if headrow == datstart.row and headcol == datstart.col:
        typicalprep()
        return [headstart, headstop, datstart, datstop]
    elif headrow == datstart.row:
        typicalprep()
        offsetheaderprep()
        return [headstart, headstop, datstart, datstop]
    else:
        offsetheaderprep()
        return [headstart, headstop, datstart, datstop]
else:                   # header is False
    return [None, None, datstart, datstop]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sheetheader(sheet, startstops, usecols=None): """Return the channel names in a list suitable as an argument to ChannelPack's `set_channel_names` method. Return None if first two StartStops are None. This function is slightly confusing, because it shall be called with the same parameters as sheet_asdict. But knowing that, it should be convenient. sheet: xlrd.sheet.Sheet instance Ready for use. startstops: list Four StartStop objects defining the data to read. See :func:`~channelpack.pullxl.prepread`, returning such a list. usecols: str or sequence of ints or None The columns to use, 0-based. 0 is the spread sheet column "A". Can be given as a string also - 'C:E, H' for columns C, D, E and H. """
headstart, headstop, dstart, dstop = startstops
if headstart is None:
    return None
assert headstop.row - headstart.row == 1, ('Field names must be in '
                                           'same row so far. Or '
                                           'this is a bug')
header = []
# One needs to make the same offsets within start and stop as in usecols:
usecols = _sanitize_usecols(usecols)
cols = usecols or range(dstart.col, dstop.col)
headcols = [c + (headstart.col - dstart.col) for c in cols]
for col in headcols:
    fieldname = sheet.cell(headstart.row, col).value
    header.append(unicode(fieldname))
return header
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _sanitize_usecols(usecols): """Make a tuple of sorted integers and return it. Return None if usecols is None"""
if usecols is None:
    return None

try:
    pats = usecols.split(',')
    pats = [p.strip() for p in pats if p]
except AttributeError:
    usecols = [int(c) for c in usecols]  # Make error if mix.
    usecols.sort()
    return tuple(usecols)                # Assume sane sequence of integers.

cols = []
for pat in pats:
    if ':' in pat:
        c1, c2 = pat.split(':')
        n1 = letter2num(c1, zbase=True)
        n2 = letter2num(c2, zbase=False)
        cols += range(n1, n2)
    else:
        cols += [letter2num(pat, zbase=True)]

# Remove duplicates:
cols = list(set(cols))
cols.sort()
return tuple(cols)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def letter2num(letters, zbase=False): """A = 1, C = 3 and so on. Convert spreadsheet style column enumeration to a number. Answers: A = 1, Z = 26, AA = 27, AZ = 52, ZZ = 702, AMJ = 1024 """
letters = letters.upper()
res = 0
weight = len(letters) - 1
assert weight >= 0, letters
for i, c in enumerate(letters):
    assert 65 <= ord(c) <= 90, c  # A-Z
    res += (ord(c) - 64) * 26 ** (weight - i)
if not zbase:
    return res
return res - 1
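This is base-26 positional arithmetic with digits A=1..Z=26. A self-contained check (the function is restated here so the example runs on its own) against the answers listed in the docstring:

```python
def letter2num(letters, zbase=False):
    """A = 1, C = 3 and so on. Restated from above for a standalone run."""
    letters = letters.upper()
    res = 0
    weight = len(letters) - 1
    for i, c in enumerate(letters):
        # Each letter contributes (value 1..26) * 26^position.
        res += (ord(c) - 64) * 26 ** (weight - i)
    return res - 1 if zbase else res

print(letter2num('A'), letter2num('Z'), letter2num('AA'), letter2num('AMJ'))
# → 1 26 27 1024
print(letter2num('A', zbase=True))  # → 0 (zero-based column index)
```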
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def fromxldate(xldate, datemode=1): """Return a python datetime object xldate: float The xl number. datemode: int 0: 1900-based, 1: 1904-based. See xlrd documentation. """
t = xlrd.xldate_as_tuple(xldate, datemode)
return datetime.datetime(*t)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def language(fname, is_ext=False): """Return an instance of the language class that fname is suited for. Searches through the module langs for the class that matches up with fname. If is_ext is True then fname will be taken to be the extension for a language. """
global _langmapping
# Normalize the fname so that it looks like an extension.
if is_ext:
    fname = '.' + fname
_, ext = os.path.splitext(fname)
return _langmapping[ext]()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_text( value, min_length=None, max_length=None, nonprintable=True, required=True, ): """ Certifier for human readable string values. :param unicode value: The string to be certified. :param int min_length: The minimum length of the string. :param int max_length: The maximum acceptable length for the string. By default, the length is not checked. :param nonprintable: Whether the string can contain non-printable characters. Non-printable characters are allowed by default. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid :raises CertifierValueError: The value is invalid """
certify_params(
    (_certify_int_param, 'max_length', max_length, dict(negative=False, required=False)),
    (_certify_int_param, 'min_length', min_length, dict(negative=False, required=False)),
    (certify_bool, 'nonprintable', nonprintable),
)

if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, six.text_type):
    raise CertifierTypeError(
        message="expected unicode string, but value is of type {cls!r}".format(
            cls=value.__class__.__name__),
        value=value,
        required=required,
    )

if min_length is not None and len(value) < min_length:
    raise CertifierValueError(
        message="{length} is shorter than minimum acceptable {min}".format(
            length=len(value), min=min_length),
        value=value,
        required=required,
    )

if max_length is not None and len(value) > max_length:
    raise CertifierValueError(
        message="{length} is longer than maximum acceptable {max}".format(
            length=len(value), max=max_length),
        value=value,
        required=required,
    )

_certify_printable(
    value=value,
    nonprintable=nonprintable,
    required=required,
)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_int(value, min_value=None, max_value=None, required=True): """ Certifier for integer values. :param six.integer_types value: The number to be certified. :param int min_value: The minimum acceptable value for the number. :param int max_value: The maximum acceptable value for the number. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid :raises CertifierValueError: The value is invalid """
certify_params(
    (_certify_int_param, 'max_value', max_value, dict(negative=True, required=False)),
    (_certify_int_param, 'min_value', min_value, dict(negative=True, required=False)),
)

if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, six.integer_types):
    raise CertifierTypeError(
        message="expected integer, but value is of type {cls!r}".format(
            cls=value.__class__.__name__),
        value=value,
        required=required,
    )

if min_value is not None and value < min_value:
    raise CertifierValueError(
        message="{value} is less than minimum acceptable {min}".format(
            value=value, min=min_value),
        value=value,
        required=required,
    )

if max_value is not None and value > max_value:
    raise CertifierValueError(
        message="{value} is more than the maximum acceptable {max}".format(
            value=value, max=max_value),
        value=value,
        required=required,
    )
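The bounds logic can be exercised standalone; this sketch (the `check_range` helper and its plain `ValueError` are assumptions distilled from the certifier above, not the library's API) shows the pass/fail behavior:

```python
def check_range(value, min_value=None, max_value=None):
    # Distilled version of the bounds checks in certify_int.
    if min_value is not None and value < min_value:
        raise ValueError("%r is less than minimum acceptable %r" % (value, min_value))
    if max_value is not None and value > max_value:
        raise ValueError("%r is more than the maximum acceptable %r" % (value, max_value))

check_range(5, min_value=0, max_value=10)  # in range: no exception
try:
    check_range(11, max_value=10)
except ValueError as exc:
    print(exc)  # → 11 is more than the maximum acceptable 10
```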
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_bool(value, required=True): """ Certifier for boolean values. :param value: The value to be certified. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid """
if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, bool):
    raise CertifierTypeError(
        message="expected bool, but value is of type {cls!r}".format(
            cls=value.__class__.__name__),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_bytes(value, min_length=None, max_length=None, required=True): """ Certifier for bytestring values. Should not be used for certifying human readable strings, Please use `certify_string` instead. :param bytes|str value: The string to be certified. :param int min_length: The minimum length of the string. :param int max_length: The maximum acceptable length for the string. By default, the length is not checked. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid :raises CertifierValueError: The value is invalid """
certify_params(
    (_certify_int_param, 'min_length', min_length, dict(negative=False, required=False)),
    (_certify_int_param, 'max_length', max_length, dict(negative=False, required=False)),
)

if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, six.binary_type):
    raise CertifierTypeError(
        message="expected byte string, but value is of type {cls!r}".format(
            cls=value.__class__.__name__),
        value=value,
        required=required,
    )

if min_length is not None and len(value) < min_length:
    raise CertifierValueError(
        message="{length} is shorter than minimum acceptable {min}".format(
            length=len(value), min=min_length),
        value=value,
        required=required,
    )

if max_length is not None and len(value) > max_length:
    raise CertifierValueError(
        message="{length} is longer than maximum acceptable {max}".format(
            length=len(value), max=max_length),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_enum(value, kind=None, required=True): """ Certifier for enum. :param value: The value to be certified. :param kind: The enum type that value should be an instance of. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid """
if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, kind):
    raise CertifierTypeError(
        message="expected {expected!r}, but value is of type {actual!r}".format(
            expected=kind.__name__, actual=value.__class__.__name__),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_enum_value(value, kind=None, required=True): """ Certifier for enum values. :param value: The value to be certified. :param kind: The enum type that value should be an instance of. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierValueError: The type is invalid """
if certify_required(
    value=value,
    required=required,
):
    return

try:
    kind(value)
except:  # noqa
    raise CertifierValueError(
        message="value {value!r} is not a valid member of {enum!r}".format(
            value=value, enum=kind.__name__),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_object(value, kind=None, required=True): """ Certifier for class object. :param object value: The object to certify. :param object kind: The type of the model that the value is expected to evaluate to. :param bool required: Whether the value can be `None`. Defaults to True. :raises CertifierTypeError: The type is invalid :raises CertifierValueError: The value is invalid """
if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, kind):
    try:
        name = value.__class__.__name__
    except:  # noqa # pragma: no cover
        name = type(value).__name__
    try:
        expected = kind.__class__.__name__
    except:  # noqa # pragma: no cover
        expected = type(kind).__name__
    raise CertifierValueError(
        message="Expected object {expected!r}, but got {actual!r}".format(
            expected=expected, actual=name),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def certify_time(value, required=True): """ Certifier for datetime.time values. :param value: The value to be certified. :param bool required: Whether the value can be `None` Defaults to True. :raises CertifierTypeError: The type is invalid """
if certify_required(
    value=value,
    required=required,
):
    return

if not isinstance(value, time):
    raise CertifierTypeError(
        message="expected timestamp (time), but value is of type {cls!r}".format(
            cls=value.__class__.__name__),
        value=value,
        required=required,
    )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def AsDict(self, dt=True): """ A dict representation of this User instance. The return value uses the same key names as the JSON representation. Args: dt (bool): If True, return dates as python datetime objects. If False, return dates as ISO strings. Return: A dict representing this User instance """
data = {} if self.name: data['name'] = self.name data['mlkshk_url'] = self.mlkshk_url if self.profile_image_url: data['profile_image_url'] = self.profile_image_url if self.id: data['id'] = self.id if self.about: data['about'] = self.about if self.website: data['website'] = self.website if self.shakes: data['shakes'] = [shk.AsDict(dt=dt) for shk in self.shakes] data['shake_count'] = self.shake_count return data
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def AsJsonString(self): """A JSON string representation of this User instance. Returns: A JSON string representation of this User instance """
return json.dumps(self.AsDict(dt=False), sort_keys=True)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def NewFromJSON(data): """ Create a new User instance from a JSON dict. Args: data (dict): JSON dictionary representing a user. Returns: A User instance. """
if data.get('shakes', None): shakes = [Shake.NewFromJSON(shk) for shk in data.get('shakes')] else: shakes = None return User( id=data.get('id', None), name=data.get('name', None), profile_image_url=data.get('profile_image_url', None), about=data.get('about', None), website=data.get('website', None), shakes=shakes)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def AsDict(self, dt=True): """ A dict representation of this Comment instance. The return value uses the same key names as the JSON representation. Args: dt (bool): If True, return dates as python datetime objects. If False, return dates as ISO strings. Return: A dict representing this Comment instance """
data = {} if self.body: data['body'] = self.body if self.posted_at: data['posted_at'] = self.posted_at if self.user: data['user'] = self.user.AsDict() return data
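The only-set-fields pattern these `AsDict` methods follow can be restated generically (`Stub` and the field list are illustrative, not part of the API):

```python
def as_dict(obj, fields):
    # Keep only the attributes that are set (truthy), mirroring how
    # AsDict skips empty body/posted_at/user fields above.
    return {f: getattr(obj, f) for f in fields if getattr(obj, f, None)}

class Stub:
    body = "hello"
    posted_at = None   # unset field: omitted from the result

assert as_dict(Stub(), ["body", "posted_at"]) == {"body": "hello"}
```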
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def NewFromJSON(data): """ Create a new Comment instance from a JSON dict. Args: data (dict): JSON dictionary representing a Comment. Returns: A Comment instance. """
return Comment( body=data.get('body', None), posted_at=data.get('posted_at', None), user=User.NewFromJSON(data.get('user', None)) )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def NewFromJSON(data): """ Create a new Shake instance from a JSON dict. Args: data (dict): JSON dictionary representing a Shake. Returns: A Shake instance. """
s = Shake( id=data.get('id', None), name=data.get('name', None), url=data.get('url', None), thumbnail_url=data.get('thumbnail_url', None), description=data.get('description', None), type=data.get('type', None), created_at=data.get('created_at', None), updated_at=data.get('updated_at', None) ) if data.get('owner', None): s.owner = User.NewFromJSON(data.get('owner', None)) return s
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def NewFromJSON(data): """ Create a new SharedFile instance from a JSON dict. Args: data (dict): JSON dictionary representing a SharedFile. Returns: A SharedFile instance. """
return SharedFile( sharekey=data.get('sharekey', None), name=data.get('name', None), user=User.NewFromJSON(data.get('user', None)), title=data.get('title', None), description=data.get('description', None), posted_at=data.get('posted_at', None), permalink=data.get('permalink', None), width=data.get('width', None), height=data.get('height', None), views=data.get('views', 0), likes=data.get('likes', 0), saves=data.get('saves', 0), comments=data.get('comments', None), nsfw=data.get('nsfw', False), image_url=data.get('image_url', None), source_url=data.get('source_url', None), saved=data.get('saved', False), liked=data.get('liked', False), )
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _start_tracer(self, origin): """ Start a new Tracer object, and store it in self.tracers. """
tracer = self._tracer_class(log=self.log) tracer.data = self.data fn = tracer.start(origin) self.tracers.append(tracer) return fn
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def start(self): """ Start collecting trace information. """
origin = inspect.stack()[1][0] self.reset() # Install the tracer on this thread. self._start_tracer(origin)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def gauge(self, name, producer): """Creates or gets an existing gauge. :param name: The name of the gauge :param producer: The producer for the gauge :return: The created or existing gauge for the given name """
return self._get_or_add_stat(name, functools.partial(Gauge, producer))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_stats(self): """Retrieves the current values of the metrics associated with this registry, formatted as a dict. The metrics form a hierarchy, their names are split on '.'. The returned dict is an `addict`, so you can use it as either a regular dict or via attributes. :return: The values of the metrics associated with this registry """
def _get_value(stats): try: return Dict((k, _get_value(v)) for k, v in stats.items()) except AttributeError: return Dict(stats.get_values()) return _get_value(self.stats)
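The `AttributeError` dispatch can be exercised with plain dicts and a tiny stand-in leaf metric (`Counter` here is hypothetical, not part of the registry API, and plain dicts replace `addict`):

```python
def get_values(stats):
    # Dict nodes recurse on .items(); leaf metrics (which lack .items)
    # raise AttributeError and are asked for get_values() instead --
    # the same dispatch get_stats uses above.
    try:
        return {k: get_values(v) for k, v in stats.items()}
    except AttributeError:
        return stats.get_values()

class Counter:
    """Hypothetical leaf metric standing in for a registry stat."""
    def __init__(self, n):
        self.n = n
    def get_values(self):
        return {"count": self.n}

stats = {"requests": {"ok": Counter(3), "err": Counter(1)}}
assert get_values(stats) == {"requests": {"ok": {"count": 3}, "err": {"count": 1}}}
```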
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _populate_ips_versions(self): """ Populate IPS version data for mapping @return: """
# Get a map of version ID's from our most recent IPS version ips = IpsManager(self.ctx) ips = ips.dev_version or ips.latest with ZipFile(ips.filepath) as zip: namelist = zip.namelist() ips_versions_path = os.path.join(namelist[0], 'applications/core/data/versions.json') if ips_versions_path not in namelist: raise BadZipfile('Missing versions.json file') self.ips_versions = json.loads(zip.read(ips_versions_path), object_pairs_hook=OrderedDict) self.log.debug("%d version ID's loaded from latest IPS release", len(self.ips_versions))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load(data_path): """ Extract data from provided file and return it as a string. """
with open(data_path, "r") as data_file: raw_data = data_file.read() return raw_data
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def parse(self, data): """ Split and iterate through the datafile to extract genres, tags and points. """
categories = data.split("\n\n") reference = {} reference_points = {} genre_index = [] tag_index = [] for category in categories: entries = category.strip().split("\n") entry_category, entry_points = self._parse_entry(entries[0].lower()) if entry_category.startswith("#"): continue for entry in entries: entry = entry.lower() if not entry: continue # Comment, ignore if entry.startswith("#"): continue # Handle genre if not entry.startswith("-"): genre, points = self._parse_entry(entry) reference[genre] = entry_category reference_points[genre] = points genre_index.append(genre) # Handle tag else: tag = entry[1:] tag, points = self._parse_entry(tag, limit=9.5) reference[tag] = entry_category reference_points[tag] = points tag_index.append(tag) self.reference = reference self.genres = genre_index self.tags = tag_index self.points = reference_points
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _parse_entry(entry, limit=10): """ Finds both label and if provided, the points for ranking. """
entry = entry.split(",") label = entry[0] points = limit if len(entry) > 1: proc = float(entry[1].strip()) points = limit * proc return label, int(points)
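Worked through by hand, the `"label, fraction"` split behaves like this (a standalone restatement of `_parse_entry`):

```python
def parse_entry(entry, limit=10):
    # "label, 0.5" -> ("label", limit * 0.5); a bare label keeps the full limit.
    parts = entry.split(",")
    label = parts[0]
    points = limit
    if len(parts) > 1:
        points = limit * float(parts[1].strip())
    return label, int(points)

assert parse_entry("rock, 0.5") == ("rock", 5)
assert parse_entry("jazz") == ("jazz", 10)
assert parse_entry("ambient, 0.3", limit=9.5) == ("ambient", 2)
```

Note the truncating `int()` at the end: with the tag limit of 9.5, a 0.3 fraction yields 2.85 and lands on 2.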
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def has_site_permission(user): """ Checks if a staff user has staff-level access for the current site. The actual permission lookup occurs in ``SitePermissionMiddleware``, which marks the request with the ``has_site_permission`` flag so that we only query the db once per request; this function serves as the entry point for everything else to check access. We also fall back to an ``is_staff`` check if the middleware is not installed, to ease migration. """
mw = "yacms.core.middleware.SitePermissionMiddleware" if mw not in get_middleware_setting(): from warnings import warn warn(mw + " missing from settings.MIDDLEWARE - per site " "permissions not applied") return user.is_staff and user.is_active return getattr(user, "has_site_permission", False)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def host_theme_path(): """ Returns the directory of the theme associated with the given host. """
# Set domain to None, which we'll then query for in the first # iteration of HOST_THEMES. We use the current site_id rather # than a request object here, as it may differ for admin users. domain = None for (host, theme) in settings.HOST_THEMES: if domain is None: domain = Site.objects.get(id=current_site_id()).domain if host.lower() == domain.lower(): try: __import__(theme) module = sys.modules[theme] except ImportError: pass else: return os.path.dirname(os.path.abspath(module.__file__)) return ""
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read(url, **args): """Loads an object from a data URI."""
info, data = url.path.split(',') info = data_re.search(info).groupdict() mediatype = info.get('mediatype') or 'text/plain;charset=US-ASCII' if ';' in mediatype: mimetype, params = mediatype.split(';', 1) params = [p.split('=') for p in params.split(';')] params = dict((k.strip(), v.strip()) for k, v in params) else: mimetype, params = mediatype, dict() data = base64.b64decode(data) if info['base64'] else urllib.unquote(data) return content_types.get(mimetype).parse(data, **params)
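The parsing steps map onto the `data:[<mediatype>][;base64],<data>` shape of RFC 2397. A self-contained Python 3 sketch (the body above targets Python 2's `urllib.unquote`):

```python
import base64
from urllib.parse import unquote

def read_data_uri(uri):
    # "data:[<mediatype>][;base64],<data>" -> (mediatype, decoded payload)
    header, payload = uri.split(",", 1)
    header = header[len("data:"):]
    is_base64 = header.endswith(";base64")
    if is_base64:
        header = header[:-len(";base64")]
    # An absent mediatype defaults to text/plain with US-ASCII charset.
    mediatype = header or "text/plain;charset=US-ASCII"
    data = base64.b64decode(payload) if is_base64 else unquote(payload)
    return mediatype, data

assert read_data_uri("data:,Hello%2C%20World") == (
    "text/plain;charset=US-ASCII", "Hello, World")
```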
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write(url, object_, **args): """Writes an object to a data URI."""
default_content_type = ('text/plain', {'charset': 'US-ASCII'}) content_encoding = args.get('content_encoding', 'base64') content_type, params = args.get('content_type', default_content_type) data = content_types.get(content_type).format(object_, **params) args['data'].write('data:{}'.format(content_type)) for param, value in params.items(): args['data'].write(';{}={}'.format(param, value)) if content_encoding == 'base64': args['data'].write(';base64,{}'.format(base64.b64encode(data))) else: args['data'].write(',{}'.format(urllib.quote(data))) args['data'].seek(0)
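Going the other way, writing requires `b64encode` — decoding belongs to the read path. A minimal Python 3 sketch that returns the URI as a string rather than writing to a stream:

```python
import base64
from urllib.parse import quote

def write_data_uri(data, mediatype="text/plain", use_base64=True):
    # Encode the payload and prepend the data: header.
    if use_base64:
        payload = base64.b64encode(data.encode()).decode("ascii")
        return "data:{};base64,{}".format(mediatype, payload)
    return "data:{},{}".format(mediatype, quote(data))

assert write_data_uri("hi") == "data:text/plain;base64,aGk="
assert write_data_uri("a b", use_base64=False) == "data:text/plain,a%20b"
```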
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def deprecated(new_fct_name, logger=None): """ Decorator to notify that a fct is deprecated """
if logger is None: logger = logging.getLogger("kodex") def aux_deprecated(func): """This is a decorator which can be used to mark functions as deprecated. It will result in a warning being emitted when the function is used.""" def newFunc(*args, **kwargs): msg = "DeprecationWarning: use '%s' instead of '%s'." % (new_fct_name, func.__name__) logger.warning(msg) warnings.warn(msg, category=DeprecationWarning) return func(*args, **kwargs) newFunc.__name__ = func.__name__ newFunc.__doc__ = func.__doc__ newFunc.__dict__.update(func.__dict__) return newFunc return aux_deprecated
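The decorator can be demonstrated in isolation — a minimal sketch that keeps the `warnings.warn` call but drops the logger plumbing:

```python
import warnings

def deprecated(new_name):
    """Minimal sketch: emit a DeprecationWarning, then delegate."""
    def wrap(func):
        def inner(*args, **kwargs):
            warnings.warn(
                "use '%s' instead of '%s'." % (new_name, func.__name__),
                category=DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        # Preserve the wrapped function's identity, as the original does.
        inner.__name__ = func.__name__
        inner.__doc__ = func.__doc__
        return inner
    return wrap

@deprecated("new_add")
def old_add(a, b):
    return a + b

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_add(1, 2)

assert result == 3
assert caught and caught[0].category is DeprecationWarning
```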
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description:
def blockgen(bytes, block_size=16): ''' a block generator for pprp ''' for i in range(0, len(bytes), block_size): block = bytes[i:i + block_size] block_len = len(block) if block_len > 0: yield block if block_len < block_size: break
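A quick check of the chunking behaviour — full blocks followed by at most one short final block. This simplified restatement relies on slicing stopping at the end of the input, so no explicit break is needed:

```python
def blockgen(data, block_size=16):
    # Yield consecutive fixed-size slices; only the last may be shorter.
    for i in range(0, len(data), block_size):
        yield data[i:i + block_size]

assert list(blockgen(b"abcdefghij", block_size=4)) == [b"abcd", b"efgh", b"ij"]
assert list(blockgen(b"", block_size=4)) == []
```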
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_basic_logger(level=logging.WARN, scope='reliure'): """ return a basic logger that print on stdout msg from reliure lib """
logger = logging.getLogger(scope) logger.setLevel(level) # create console handler with a higher log level ch = logging.StreamHandler() ch.setLevel(level) # create formatter and add it to the handlers formatter = ColorFormatter('%(asctime)s:%(levelname)s:%(name)s:%(message)s') ch.setFormatter(formatter) # add the handlers to the logger logger.addHandler(ch) return logger
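The handler/formatter wiring can be sketched with only the standard library — a plain `logging.Formatter` stands in for the project's `ColorFormatter`, and the scope name here is arbitrary:

```python
import logging

def basic_logger(level=logging.WARN, scope="demo"):
    """Stand-in for the reliure helper: console handler plus formatter."""
    logger = logging.getLogger(scope)
    logger.setLevel(level)
    handler = logging.StreamHandler()
    handler.setLevel(level)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s:%(levelname)s:%(name)s:%(message)s"))
    logger.addHandler(handler)
    return logger

log = basic_logger(logging.INFO, scope="demo")
assert log.level == logging.INFO
assert log.handlers
```

Note that `logging.getLogger` caches by name, so calling this twice with the same scope attaches a second handler — the original has the same property.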
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def save(self, *args, **kwargs): """ Save the created_by and last_modified_by fields based on the current admin user. """
if not self.instance.id: self.instance.created_by = self.user self.instance.last_modified_by = self.user return super(ChangeableContentForm, self).save(*args, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_urls(self): """ Add our preview view to our urls. """
urls = super(PageAdmin, self).get_urls() my_urls = patterns('', (r'^add/preview$', self.admin_site.admin_view(PagePreviewView.as_view())), (r'^(?P<id>\d+)/preview$', self.admin_site.admin_view(PagePreviewView.as_view())), (r'^(?P<id>\d+)/history/(\d+)/preview$', self.admin_site.admin_view(PagePreviewView.as_view())), ) return my_urls + urls
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_template_names(self): """ Return the page's specified template name, or a fallback if one hasn't been chosen. """
posted_name = self.request.POST.get('template_name') if posted_name: return [posted_name,] else: return super(PagePreviewView, self).get_template_names()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def post(self, request, *args, **kwargs): """ Accepts POST requests, and substitute the data in for the page's attributes. """
self.object = self.get_object() self.object.content = request.POST['content'] self.object.title = request.POST['title'] self.object = self._mark_html_fields_as_safe(self.object) context = self.get_context_data(object=self.object) return self.render_to_response(context, content_type=self.get_mimetype())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def redirect_stdout(self): """Redirect stdout to file so that it can be tailed and aggregated with the other logs."""
self.hijacked_stdout = sys.stdout self.hijacked_stderr = sys.stderr # 0 must be set as the buffer, otherwise lines won't get logged in time. sys.stdout = open(self.hitch_dir.driverout(), "ab", 0) sys.stderr = open(self.hitch_dir.drivererr(), "ab", 0)
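The swap-and-restore dance can be demonstrated in isolation (text mode here for simplicity, whereas the class uses unbuffered binary handles):

```python
import os
import sys
import tempfile

# Hijack sys.stdout so print() lands in a file, then restore it.
saved_stdout = sys.stdout
out_path = os.path.join(tempfile.mkdtemp(), "driver.out")
sys.stdout = open(out_path, "w")
print("captured line")
sys.stdout.close()
sys.stdout = saved_stdout

with open(out_path) as f:
    assert f.read() == "captured line\n"
```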
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def unredirect_stdout(self): """Redirect stdout and stderr back to screen."""
if hasattr(self, 'hijacked_stdout') and hasattr(self, 'hijacked_stderr'): sys.stdout = self.hijacked_stdout sys.stderr = self.hijacked_stderr
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def time_travel(self, datetime=None, timedelta=None, seconds=0, minutes=0, hours=0, days=0): """Mock moving forward or backward in time by shifting the system clock fed to the services tested. Note that all of these arguments can be used together, individually or not at all. The time traveled to will be the sum of all specified time deltas from datetime. If no datetime is specified, the deltas will be added to the current time. Args: datetime (Optional[datetime]): Time travel to specific datetime. timedelta (Optional[timedelta]): Time travel to 'timedelta' from now. seconds (Optional[number]): Time travel 'seconds' seconds from now. minutes (Optional[number]): Time travel 'minutes' minutes from now. hours (Optional[number]): Time travel 'hours' hours from now. days (Optional[number]): Time travel 'days' days from now. """
if datetime is not None: self.timedelta = datetime - python_datetime.now() if timedelta is not None: self.timedelta = self.timedelta + timedelta self.timedelta = self.timedelta + python_timedelta(seconds=seconds) self.timedelta = self.timedelta + python_timedelta(minutes=minutes) self.timedelta = self.timedelta + python_timedelta(hours=hours) self.timedelta = self.timedelta + python_timedelta(days=days) log("Time traveling to {}\n".format(humanize.naturaltime(self.now()))) faketime.change_time(self.hitch_dir.faketime(), self.now())
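The delta accumulation — every specified argument contributes, summed relative to `datetime` or the current time — can be checked standalone (`total_shift` is an illustrative helper, not part of the API):

```python
from datetime import datetime, timedelta

def total_shift(dt=None, td=None, seconds=0, minutes=0, hours=0, days=0, now=None):
    # Sum every specified delta, mirroring time_travel's accumulation.
    now = now or datetime.now()
    shift = timedelta()
    if dt is not None:
        shift += dt - now
    if td is not None:
        shift += td
    shift += timedelta(seconds=seconds, minutes=minutes, hours=hours, days=days)
    return shift

base = datetime(2020, 1, 1)
assert total_shift(dt=datetime(2020, 1, 2), hours=6, now=base) == timedelta(days=1, hours=6)
```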
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def wait_for_ipykernel(self, service_name, timeout=10): """Wait for an IPython kernel-nnnn.json filename message to appear in log."""
kernel_line = self._services[service_name].logs.tail.until( lambda line: "--existing" in line[1], timeout=timeout, lines_back=5 ) return kernel_line.replace("--existing", "").strip()