def getMetricDetails(self, metricLabel):
  """
  Gets detailed info about a given metric, in addition to its value. This
  may include any statistics or auxiliary data that are computed for a
  given metric.

  :param metricLabel: (string) label of the given metric (see
         :class:`~nupic.frameworks.opf.metrics.MetricSpec`)

  :returns: (dict) of metric information, as returned by
            :meth:`nupic.frameworks.opf.metrics.MetricsIface.getMetric`.
  """
  try:
    metricIndex = self.__metricLabels.index(metricLabel)
  except ValueError:
    # list.index raises ValueError (not IndexError) for a missing label
    return None

  return self.__metrics[metricIndex].getMetric()
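One detail worth noting: `list.index` raises `ValueError`, not `IndexError`, when the element is missing, so an `except IndexError` clause around it never fires and the lookup would crash instead of returning `None`. A quick self-contained check:

```python
# list.index raises ValueError for a missing element, so a lookup that
# should fall back to None must catch ValueError, not IndexError.
labels = ["aae", "rmse"]

def find_label(label):
    try:
        return labels.index(label)
    except ValueError:
        return None

assert find_label("rmse") == 1
assert find_label("nrmse") is None
```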
def delete_repository_tag(self, project_id, tag_name):
    """
    Deletes a tag of a repository with the given name.

    :param project_id: The ID of a project
    :param tag_name: The name of a tag
    :return: Dictionary containing the deleted tag
    :raise: HttpError: If an invalid response is returned
    """
    return self.delete('/projects/{project_id}/repository/tags/{tag_name}'.format(
        project_id=project_id, tag_name=tag_name))
async def get_game_high_scores(self, user_id: base.Integer,
                               chat_id: typing.Union[base.Integer, None] = None,
                               message_id: typing.Union[base.Integer, None] = None,
                               inline_message_id: typing.Union[base.String, None] = None
                               ) -> typing.List[types.GameHighScore]:
    """
    Use this method to get data for high score tables.

    This method will currently return scores for the target user, plus two of
    his closest neighbors on each side. Will also return the top three users
    if the user and his neighbors are not among them.
    Please note that this behavior is subject to change.

    Source: https://core.telegram.org/bots/api#getgamehighscores

    :param user_id: Target user id
    :type user_id: :obj:`base.Integer`
    :param chat_id: Required if inline_message_id is not specified.
        Unique identifier for the target chat
    :type chat_id: :obj:`typing.Union[base.Integer, None]`
    :param message_id: Required if inline_message_id is not specified.
        Identifier of the sent message
    :type message_id: :obj:`typing.Union[base.Integer, None]`
    :param inline_message_id: Required if chat_id and message_id are not specified.
        Identifier of the inline message
    :type inline_message_id: :obj:`typing.Union[base.String, None]`
    :return: On success, returns an Array of GameHighScore objects: the score
        of the specified user and several of his neighbors in a game
    :rtype: :obj:`typing.List[types.GameHighScore]`
    """
    payload = generate_payload(**locals())
    result = await self.request(api.Methods.GET_GAME_HIGH_SCORES, payload)

    return [types.GameHighScore(**gamehighscore) for gamehighscore in result]
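The `generate_payload(**locals())` call is the interesting move here: it snapshots the method's arguments and turns them into the request body. A hypothetical reimplementation of that helper (invented for this sketch; the real aiogram version also handles an exclude list) might look like:

```python
def generate_payload(**kwargs):
    # Hypothetical sketch of the helper: drop `self` and any argument the
    # caller left as None, and use the rest as the request payload.
    return {key: value for key, value in kwargs.items()
            if key not in ("self", "cls") and value is not None}

def get_game_high_scores(user_id, chat_id=None, message_id=None,
                         inline_message_id=None):
    # locals() at this point is exactly the argument dict.
    return generate_payload(**locals())

payload = get_game_high_scores(42, chat_id=7)
# → {"user_id": 42, "chat_id": 7}
```

The payoff of the pattern is that adding a new optional parameter to the method signature automatically adds it to the payload, with no per-method serialization code.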
def defUtilityFuncs(self):
    '''
    Defines CRRA utility function for this period (and its derivatives,
    and their inverses), saving them as attributes of self for other methods
    to use. Extends version from ConsIndShock models by also defining inverse
    marginal utility function over medical care.

    Parameters
    ----------
    none

    Returns
    -------
    none
    '''
    ConsGenIncProcessSolver.defUtilityFuncs(self)  # Do basic version
    self.uMedPinv = lambda Med: utilityP_inv(Med, gam=self.CRRAmed)
    self.uMed = lambda Med: utility(Med, gam=self.CRRAmed)
    self.uMedPP = lambda Med: utilityPP(Med, gam=self.CRRAmed)
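The lambdas above delegate to CRRA utility helpers defined elsewhere in the library. A self-contained sketch of what those helpers compute, assuming the standard CRRA form u(c) = c^(1-γ)/(1-γ) (the function names mirror the calls above, but these bodies are this sketch's assumption, not the library's code):

```python
def utility(c, gam):
    # CRRA utility u(c) = c**(1 - gam) / (1 - gam), for gam != 1
    return c ** (1.0 - gam) / (1.0 - gam)

def utilityP(c, gam):
    # marginal utility u'(c) = c**(-gam)
    return c ** -gam

def utilityPP(c, gam):
    # second derivative u''(c) = -gam * c**(-gam - 1)
    return -gam * c ** (-gam - 1.0)

def utilityP_inv(uP, gam):
    # invert u'(c) = uP  =>  c = uP**(-1 / gam)
    return uP ** (-1.0 / gam)

# Inverting marginal utility round-trips the consumption level.
c = utilityP_inv(utilityP(2.0, gam=3.0), gam=3.0)
```

The inverse marginal utility function is the piece the docstring singles out: solvers use it to map a marginal value back to a consumption (here, medical care) level.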
def login(self, username, password, disableautosave=True, print_response=True):
    """
    :param username:
    :param password:
    :param disableautosave: boolean
    :param print_response: print log if required
    :return: status code, response data
    """
    if type(username) != str:
        return False, "Username must be string"
    if type(password) != str:
        return False, "Password must be string"
    if type(disableautosave) != bool:
        return False, "Disableautosave must be boolean"

    data = {"username": username,
            "password": password,
            "disableautosave": disableautosave}

    status_response, response = self.call_api("r/user/login/", data,
                                              print_response=print_response)

    # Store httpcookie if possible
    if status_response and "deployr" in response:
        if "response" in response["deployr"]:
            if "httpcookie" in response["deployr"]["response"]:
                self.JSESSIONID = response["deployr"]["response"]["httpcookie"]

    return status_response, response
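The nested `if ... in` chain that digs out the cookie is a guarded walk into the JSON reply; chained `dict.get` calls with empty-dict defaults express the same walk in one expression. A small sketch with a made-up response shape:

```python
# Hypothetical DeployR-style reply, shaped the way the chain above expects.
response = {"deployr": {"response": {"httpcookie": "JSESSIONID=abc123"}}}

# Each .get falls back to {} so a missing level yields None, not a KeyError.
cookie = response.get("deployr", {}).get("response", {}).get("httpcookie")

missing = {}.get("deployr", {}).get("response", {}).get("httpcookie")
```

Either style works; the `.get` chain just avoids three levels of indentation when the only action on success is a single assignment.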
def gets(self, key, default=None, cas_default=None):
    """
    The memcached "gets" command for one key, as a convenience.

    Args:
      key: str, see class docs for details.
      default: value that will be returned if the key was not found.
      cas_default: same behaviour as default argument.

    Returns:
      A tuple of (value, cas) or (default, cas_default) if the key was
      not found.
    """
    defaults = (default, cas_default)
    return self._fetch_cmd(b'gets', [key], True).get(key, defaults)
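The convenience here is pairing the value with its CAS token and covering the miss case in one expression: `dict.get` accepts any object as its default, including a tuple of the two fallbacks. A sketch with a stand-in for what the fetch might return:

```python
# Stand-in for the fetch result: key -> (value, cas_token).
fetched = {b"k1": (b"v1", b"42")}

default, cas_default = None, None
defaults = (default, cas_default)

value, cas = fetched.get(b"k1", defaults)            # hit: stored pair
miss_value, miss_cas = fetched.get(b"k2", defaults)  # miss: paired defaults
```

Because the default is itself a 2-tuple, callers can always unpack the result into `(value, cas)` without first checking for the miss case.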
def get_file_url(path, config):
    """ Update this function to help build the link to your file """
    file_url_regex = re.compile(config['file_url_regex'])
    new_path = re.sub(file_url_regex, config['file_url_base'], path)
    return new_path
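A runnable sketch of the substitution, with made-up `config` values standing in for whatever your deployment defines (the two keys mirror the ones the function reads; the paths and URL are invented for this example):

```python
import re

# Hypothetical config: map a local checkout prefix to a web URL prefix.
config = {
    "file_url_regex": r"^/home/user/repo",
    "file_url_base": "https://example.com/repo",
}

def get_file_url(path, config):
    file_url_regex = re.compile(config["file_url_regex"])
    return re.sub(file_url_regex, config["file_url_base"], path)

url = get_file_url("/home/user/repo/docs/index.html", config)
# → "https://example.com/repo/docs/index.html"
```

Anchoring the pattern with `^` matters: without it, a path that repeats the prefix deeper in the tree would be rewritten in the wrong place.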
def delete(self, *args, **kwargs):
    """
    Deletes the actual file from storage after the object is deleted.
    Calls super to actually delete the object.
    """
    file_obj = self.file
    super(AssetBase, self).delete(*args, **kwargs)
    self.delete_real_file(file_obj)
def delete(self, roomId):
    """Delete a room.

    Args:
        roomId(basestring): The ID of the room to be deleted.

    Raises:
        TypeError: If the parameter types are incorrect.
        ApiError: If the Webex Teams cloud returns an error.
    """
    check_type(roomId, basestring, may_be_none=False)

    # API request
    self._session.delete(API_ENDPOINT + '/' + roomId)
def delays(self, delays=None):
    """ Gets / Sets the delays. """
    if delays:
        return self._session.put(
            self.__v1() + "/delays", data=json.dumps(delays)).json()
    else:
        return self._session.get(self.__v1() + "/delays").json()
def execute(command, working_directory=config.BASE_DIR, stderr=sp.STDOUT):
    """
    Executes shell command in a given working_directory.
    Command is a string to pass to the shell.
    Output is ignored.
    """
    LOG.info("Executing in %s ...", working_directory)
    LOG.info(command)
    sp.check_call(command, cwd=working_directory, stderr=stderr, shell=True)
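A self-contained version of the same wrapper, with the `config.BASE_DIR` default replaced by the current directory so the sketch runs on its own (POSIX `true`/`false` commands assumed for the demonstration):

```python
import logging
import subprocess as sp

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

def execute(command, working_directory=".", stderr=sp.STDOUT):
    """Run a shell command; check_call raises CalledProcessError on non-zero exit."""
    LOG.info("Executing in %s ...", working_directory)
    LOG.info(command)
    sp.check_call(command, cwd=working_directory, stderr=stderr, shell=True)

execute("true")  # succeeds silently

try:
    execute("false")
    raised = False
except sp.CalledProcessError:
    raised = True
```

`check_call` (rather than `call`) is what makes failures loud: a non-zero exit status becomes an exception instead of a silently ignored return code.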
def readComplexCode(self, hskip, alphabet):
    """Read complex code"""
    stream = self.stream
    # read the lengths for the length code
    lengths = [1,2,3,4,0,5,17,6,16,7,8,9,10,11,12,13,14,15][hskip:]
    codeLengths = {}
    total = 0
    lol = LengthOfLengthAlphabet('##'+alphabet.name)
    # lengthCode will be used for coding the lengths of the new code
    # we use it for display until now; definition comes below
    lengthCode = LengthAlphabet('#'+alphabet.name)
    lengthIter = iter(lengths)
    lengthsLeft = len(lengths)
    while total<32 and lengthsLeft>0:
        lengthsLeft -= 1
        newSymbol = next(lengthIter)
        lol.description = str(lengthCode[newSymbol])
        length = self.verboseRead(lol)
        if length:
            codeLengths[newSymbol] = length
            total += 32>>length
    if total>32:
        raise ValueError("Stream format")
    if len(codeLengths)==1:
        codeLengths[list(codeLengths.keys())[0]] = 0
    # Now set the encoding of the lengthCode
    lengthCode.setLength(codeLengths)
    print("***** Lengths for {} will be coded as:".format(alphabet.name))
    lengthCode.showCode()
    # Now determine the symbol lengths with the lengthCode
    symbolLengths = {}
    total = 0
    lastLength = 8
    alphabetIter = iter(alphabet)
    while total<32768:
        # look ahead to see what is going to happen
        length = lengthCode.decodePeek(
            self.stream.peek(lengthCode.maxLength))[1].index
        # in every branch, set lengthCode.description to explanatory text
        # lengthCode calls format(symbol, extra) with this string
        if length==0:
            symbol = next(alphabetIter)
            lengthCode.description = 'symbol {} unused'.format(symbol)
            self.verboseRead(lengthCode)  # unused symbol
            continue
        if length==16:
            lengthCode.description = \
                '{1}+3 symbols of length '+str(lastLength)
            extra = self.verboseRead(lengthCode)
            # scan series of 16s (repeat counts)
            # start with repeat count 2
            repeat = 2
            startSymbol = next(alphabetIter)
            endSymbol = next(alphabetIter)
            symbolLengths[startSymbol.index] = \
                symbolLengths[endSymbol.index] = lastLength
            # count the two just defined symbols
            total += 2*32768>>lastLength
            # note: loop may end because we're there
            # even if a 16 _appears_ to follow
            while True:
                # determine last symbol
                oldRepeat = repeat
                repeat = (repeat-2<<2)+extra+3
                # read as many symbols as repeat increased
                for i in range(oldRepeat, repeat):
                    endSymbol = next(alphabetIter)
                    symbolLengths[endSymbol.index] = lastLength
                # compute new total; it may be end of loop
                total += (repeat-oldRepeat)*32768>>lastLength
                if total>=32768:
                    break
                # see if there is more to do
                length = lengthCode.decodePeek(
                    self.stream.peek(lengthCode.maxLength))[1].index
                if length!=16:
                    break
                lengthCode.description = 'total {}+{{1}} symbols'.format(
                    (repeat-2<<2)+3)
                extra = self.verboseRead(lengthCode)
        elif length==17:
            # read, and show explanation
            lengthCode.description = '{1}+3 unused'
            extra = self.verboseRead(lengthCode)
            # scan series of 17s (groups of zero counts)
            # start with repeat count 2
            repeat = 2
            startSymbol = next(alphabetIter)
            endSymbol = next(alphabetIter)
            # note: loop will not end with total==32768,
            # since total doesn't change here
            while True:
                # determine last symbol
                oldRepeat = repeat
                repeat = (repeat-2<<3)+extra+3
                # read as many symbols as repeat increases
                for i in range(repeat-oldRepeat):
                    endSymbol = next(alphabetIter)
                # see if there is more to do
                length = lengthCode.decodePeek(
                    self.stream.peek(lengthCode.maxLength))[1].index
                if length!=17:
                    break
                lengthCode.description = 'total {}+{{1}} unused'.format(
                    (repeat-2<<3)+3)
                extra = self.verboseRead(lengthCode)
        else:
            symbol = next(alphabetIter)
            # double braces for format
            char = str(symbol)
            if char in '{}':
                char *= 2
            lengthCode.description = \
                'Length for {} is {{0.index}} bits'.format(char)
            # output is not needed (will be 0)
            self.verboseRead(lengthCode)
            symbolLengths[symbol.index] = length
            total += 32768>>length
            lastLength = length
    assert total==32768
    alphabet.setLength(symbolLengths)
    print('End of table. Prefix code '+alphabet.name+':')
    alphabet.showCode()
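The trickiest line in both inner loops is the repeat update `(repeat - 2 << 2) + extra + 3` (shift by 3 for the 17-code). Python's precedence parses this as `((repeat - 2) << 2) + extra + 3`, which is how consecutive 16-codes chain their 2-bit extras into one long run rather than starting a fresh count. Tracing two chained 16-codes:

```python
# Chain two 16-codes with extras 1 and 0, starting from repeat = 2,
# exactly as the inner while-loop does.
repeat = 2
for extra in (1, 0):
    # parsed as ((repeat - 2) << 2) + extra + 3
    repeat = (repeat - 2 << 2) + extra + 3
# first pass:  (0 << 2) + 1 + 3 = 4
# second pass: (2 << 2) + 0 + 3 = 11
```

So two 16-codes with extras (1, 0) describe a run of 11 symbols, not 4 + 3: the second code rescales the running count before adding its own contribution.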
def update_path_labels(self):
    """Update labels showing config paths
    """
    self.view['core_label'].set_text(
        "Core Config Path: " + str(self.core_config_model.config.config_file_path))
    self.view['gui_label'].set_text(
        "GUI Config Path: " + str(self.gui_config_model.config.config_file_path))
def get_apphook_configs(obj):
    """
    Get apphook configs for an object obj

    :param obj: any model instance
    :return: list of apphook configs for given obj
    """
    keys = get_apphook_field_names(obj)
    return [getattr(obj, key) for key in keys] if keys else []
def common_directory(paths):
    """Find the deepest common directory of a list of paths.

    :return: if no paths are provided, None is returned;
        if there is no common directory, '' is returned;
        otherwise the common directory with a trailing / is returned.
    """
    import posixpath

    def get_dir_with_slash(path):
        if path == b'' or path.endswith(b'/'):
            return path
        else:
            dirname, basename = posixpath.split(path)
            if dirname == b'':
                return dirname
            else:
                return dirname + b'/'

    if not paths:
        return None
    elif len(paths) == 1:
        return get_dir_with_slash(paths[0])
    else:
        common = common_path(paths[0], paths[1])
        for path in paths[2:]:
            common = common_path(common, path)
        return get_dir_with_slash(common)
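`common_path` is defined elsewhere in the module. With an assumed stand-in that returns the longest shared prefix cut back to a `/` boundary, the whole function can be exercised end to end (the stand-in's body is this sketch's assumption, not the real helper):

```python
import posixpath

def common_path(a, b):
    # Assumed stand-in for the real helper: longest common prefix of two
    # byte paths, truncated back to the last '/' so it names a directory.
    i = 0
    for x, y in zip(a, b):
        if x != y:
            break
        i += 1
    prefix = a[:i]
    if b'/' not in prefix:
        return b''
    return prefix.rpartition(b'/')[0] + b'/'

def get_dir_with_slash(path):
    if path == b'' or path.endswith(b'/'):
        return path
    dirname, _basename = posixpath.split(path)
    return dirname + b'/' if dirname else dirname

def common_directory(paths):
    if not paths:
        return None
    if len(paths) == 1:
        return get_dir_with_slash(paths[0])
    common = common_path(paths[0], paths[1])
    for path in paths[2:]:
        common = common_path(common, path)
    return get_dir_with_slash(common)

d = common_directory([b'src/a.py', b'src/pkg/b.py'])  # → b'src/'
```

Cutting the shared prefix back to a `/` boundary is the part that keeps `src/app.py` and `src/apple.py` from producing the bogus directory `src/app`.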
def summarizeReads(file_handle, file_type):
    """
    Open a fasta or fastq file and summarize its reads: number of reads,
    average read length, total number of bases, longest, shortest and
    median read, and counts of each individual base (A, T, G, C, N).
    """
    base_counts = defaultdict(int)
    read_number = 0
    total_length = 0
    length_list = []
    records = SeqIO.parse(file_handle, file_type)
    for record in records:
        total_length += len(record)
        read_number += 1
        length_list.append(len(record))
        for base in record:
            base_counts[base] += 1
    result = {
        "read_number": read_number,
        "total_length": total_length,
        "average_length": total_length / read_number if read_number > 0 else 0,
        "max_length": max(length_list) if length_list else 0,
        "min_length": min(length_list) if length_list else 0,
        "median_length": median(length_list) if length_list else 0,
        "base_counts": base_counts
    }
    return result
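The same bookkeeping can be run over plain strings, so this sketch executes without Biopython (the real function iterates `SeqIO` records from a file handle instead of an in-memory list):

```python
from collections import defaultdict
from statistics import median

def summarize_seqs(seqs):
    # Mirrors summarizeReads' tallies, but over an in-memory list of strings.
    base_counts = defaultdict(int)
    lengths = []
    total_length = 0
    for seq in seqs:
        total_length += len(seq)
        lengths.append(len(seq))
        for base in seq:
            base_counts[base] += 1
    n = len(lengths)
    return {
        "read_number": n,
        "total_length": total_length,
        "average_length": total_length / n if n > 0 else 0,
        "max_length": max(lengths) if lengths else 0,
        "min_length": min(lengths) if lengths else 0,
        "median_length": median(lengths) if lengths else 0,
        "base_counts": dict(base_counts),
    }

stats = summarize_seqs(["ACGT", "AAAA", "GG"])
# read_number 3, total_length 10, median_length 4,
# base_counts {"A": 5, "C": 1, "G": 3, "T": 1}
```

The `if n > 0` / `if lengths` guards matter: an empty input file should yield zeros, not a `ZeroDivisionError` or a `max()` on an empty sequence.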
def extend(self, base, key, value=None):
    """
    Adds a new definition to this enumerated type, extending the given
    base type.  This will create a new key for the type and register it
    as a new viable option from the system, however, it will also
    register its base information so you can use enum.base to retrieve
    the root type.

    :param      base  | <variant> | value for this enumeration
                key   | <str>     | new key for the value
                value | <variant> | if None is supplied, it will be auto-assigned

    :usage      |>>> from projex.enum import enum
                |>>> Types = enum('Integer', 'Boolean')
                |>>> Types.Integer
                |1
                |>>> Types.Boolean
                |2
                |>>> Types.extend(Types.Integer, 'BigInteger')
                |>>> Types.BigInteger
                |4
                |>>> Types.base(Types.BigInteger)
                |1
    """
    new_val = self.add(key, value)
    self._bases[new_val] = base
Adds a new definition to this enumerated type, extending the given base type. This will create a new key for the type and register it as a new viable option from the system, however, it will also register its base information so you can use enum.base to retrieve the root type. :param base | <variant> | value for this enumeration key | <str> | new key for the value value | <variant> | if None is supplied, it will be auto-assigned :usage |>>> from projex.enum import enum |>>> Types = enum('Integer', 'Boolean') |>>> Types.Integer |1 |>>> Types.Boolean |2 |>>> Types.extend(Types.Integer, 'BigInteger') |>>> Types.BigInteger |4 |>>> Types.base(Types.BigInteger) |1
def create_requests_session(
    retries=None,
    backoff_factor=None,
    status_forcelist=None,
    pools_size=4,
    maxsize=4,
    ssl_verify=None,
    ssl_cert=None,
    proxy=None,
    session=None,
):
    """Create a requests session that retries some errors."""
    # pylint: disable=too-many-branches
    config = Configuration()

    if retries is None:
        if config.error_retry_max is None:
            retries = 5
        else:
            retries = config.error_retry_max

    if backoff_factor is None:
        if config.error_retry_backoff is None:
            backoff_factor = 0.23
        else:
            backoff_factor = config.error_retry_backoff

    if status_forcelist is None:
        if config.error_retry_codes is None:
            status_forcelist = [500, 502, 503, 504]
        else:
            status_forcelist = config.error_retry_codes

    if ssl_verify is None:
        ssl_verify = config.verify_ssl

    if ssl_cert is None:
        if config.cert_file and config.key_file:
            ssl_cert = (config.cert_file, config.key_file)
        elif config.cert_file:
            ssl_cert = config.cert_file

    if proxy is None:
        proxy = Configuration().proxy

    session = session or requests.Session()
    session.verify = ssl_verify
    session.cert = ssl_cert

    if proxy:
        session.proxies = {"http": proxy, "https": proxy}

    retry = Retry(
        backoff_factor=backoff_factor,
        connect=retries,
        method_whitelist=False,
        read=retries,
        status_forcelist=tuple(status_forcelist),
        total=retries,
    )

    adapter = HTTPAdapter(
        max_retries=retry,
        pool_connections=pools_size,
        pool_maxsize=maxsize,
        pool_block=True,
    )
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
Create a requests session that retries some errors.
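The core of create_requests_session is the Retry/HTTPAdapter mounting pattern from requests and urllib3; a minimal standalone sketch, without the Configuration lookups (the helper name and default values are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_retrying_session(retries=5, backoff_factor=0.23,
                          status_forcelist=(500, 502, 503, 504)):
    """Build a requests.Session that retries transient server errors."""
    session = requests.Session()
    retry = Retry(
        total=retries,
        connect=retries,
        read=retries,
        backoff_factor=backoff_factor,   # sleeps grow as 0.23, 0.46, 0.92, ...
        status_forcelist=tuple(status_forcelist),
    )
    adapter = HTTPAdapter(max_retries=retry)
    # Mount the adapter for both schemes so every request goes through it.
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
```

Because the adapter is mounted on the scheme prefixes, every request made through the session transparently picks up the retry policy.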
def parse(self, **kwargs):
    """Parse the contents of the output files retrieved in the `FolderData`."""
    try:
        output_folder = self.retrieved
    except exceptions.NotExistent:
        return self.exit_codes.ERROR_NO_RETRIEVED_FOLDER

    filename_stdout = self.node.get_attribute('output_filename')
    filename_stderr = self.node.get_attribute('error_filename')

    try:
        with output_folder.open(filename_stderr, 'r') as handle:
            exit_code = self.parse_stderr(handle)
    except (OSError, IOError):
        self.logger.exception('Failed to read the stderr file\n%s', traceback.format_exc())
        return self.exit_codes.ERROR_READING_ERROR_FILE

    if exit_code:
        return exit_code

    try:
        with output_folder.open(filename_stdout, 'r') as handle:
            handle.seek(0)
            exit_code = self.parse_stdout(handle)
    except (OSError, IOError):
        self.logger.exception('Failed to read the stdout file\n%s', traceback.format_exc())
        return self.exit_codes.ERROR_READING_OUTPUT_FILE

    if exit_code:
        return exit_code
Parse the contents of the output files retrieved in the `FolderData`.
def save_message_from_pat_portal(self, patient_id, p_vendor_name,
                                 p_message_id, p_practice_id, message,
                                 sent_date, transaction_type):
    """
    invokes TouchWorksMagicConstants.ACTION_GET_ENCOUNTER_LIST_FOR_PATIENT action
    :param
    :param message
    :param sent_date
    :param transaction_type - type
        To register a patient with the portal, this should be
        'Register Patient Request.' Valid types are stored in
        iHealth_TransCode_DE.
            Approve Online Consultation
            Custom Form Submitted
            Decline Online Consultation
            Deny Patient Registration
            Form Requested
            Health Remiders
            Register Patient
            Register Patient Request
            RenewRx
            Seek Appointment
            Seek Online Consultation
            Send Clinical Document
            Send General Message
            Send Notification Message
            Unregister Patient
    :return: JSON response
    """
    portal_info_xml = '<msg>' + \
                      '<ppvendor value="@@VENDOR@@" />' + \
                      '<ppmsgid value="@@MESSAGEID@@" />' + \
                      '<pppractice value="@@PRACTICE@@" />' + \
                      '</msg>'
    portal_info_xml = portal_info_xml.replace(
        '@@VENDOR@@', p_vendor_name).replace(
        '@@MESSAGEID@@', p_message_id).replace(
        '@@PRACTICE@@', p_practice_id)
    magic = self._magic_json(
        action=TouchWorksMagicConstants.ACTION_SAVE_MSG_FROM_PAT_PORTAL,
        patient_id=patient_id,
        parameter1=portal_info_xml,
        parameter2=self._ehr_username,
        parameter3=message,
        parameter4=sent_date,
        parameter5=transaction_type)
    response = self._http_request(TouchWorksEndPoints.MAGIC_JSON, data=magic)
    result = self._get_results_or_raise_if_magic_invalid(
        magic,
        response,
        TouchWorksMagicConstants.RESULT_SAVE_MSG_FROM_PAT_PORTAL)
    return result
invokes TouchWorksMagicConstants.ACTION_GET_ENCOUNTER_LIST_FOR_PATIENT action :param :param message :param sent_date :param transaction_type - type To register a patient with the portal, this should be 'Register Patient Request.' Valid types are stored in iHealth_TransCode_DE. Approve Online Consultation Custom Form Submitted Decline Online Consultation Deny Patient Registration Form Requested Health Remiders Register Patient Register Patient Request RenewRx Seek Appointment Seek Online Consultation Send Clinical Document Send General Message Send Notification Message Unregister Patient :return: JSON response
def get_valid_build_systems(working_dir, package=None):
    """Returns the build system classes that could build the source in given dir.

    Args:
        working_dir (str): Dir containing the package definition and
            potentially build files.
        package (`Package`): Package to be built. This may or may not be
            needed to determine the build system. For eg, cmake just has to
            look for a CMakeLists.txt file, whereas the 'build_command'
            package field must be present for the 'custom' build system type.

    Returns:
        List of class: Valid build system class types.
    """
    from rez.plugin_managers import plugin_manager
    from rez.exceptions import PackageMetadataError

    try:
        package = package or get_developer_package(working_dir)
    except PackageMetadataError:
        # no package, or bad package
        pass

    if package:
        if getattr(package, "build_command", None) is not None:
            buildsys_name = "custom"
        else:
            buildsys_name = getattr(package, "build_system", None)

        # package explicitly specifies build system
        if buildsys_name:
            cls = plugin_manager.get_plugin_class('build_system', buildsys_name)
            return [cls]

    # detect valid build systems
    clss = []
    for buildsys_name in get_buildsys_types():
        cls = plugin_manager.get_plugin_class('build_system', buildsys_name)
        if cls.is_valid_root(working_dir, package=package):
            clss.append(cls)

    # Sometimes files for multiple build systems can be present, because one
    # build system uses another (a 'child' build system) - eg, cmake uses
    # make. Detect this case and ignore files from the child build system.
    #
    child_clss = set(x.child_build_system() for x in clss)
    clss = list(set(clss) - child_clss)

    return clss
Returns the build system classes that could build the source in given dir. Args: working_dir (str): Dir containing the package definition and potentially build files. package (`Package`): Package to be built. This may or may not be needed to determine the build system. For eg, cmake just has to look for a CMakeLists.txt file, whereas the 'build_command' package field must be present for the 'custom' build system type. Returns: List of class: Valid build system class types.
def __discover_node(self, node, depth):
    '''
    Given a node, recursively enumerate its adjacencies
    until we reach the specified depth (>0).

    Args:
        node:   natlas_node object to enumerate.
        depth:  The depth left that we can go further away from the root.
    '''
    if (node == None):
        return
    if (depth >= self.max_depth):
        return
    if (node.discovered > 0):
        return
    node.discovered = 1

    # vmware ESX can report IP as 0.0.0.0
    # If we are allowing 0.0.0.0/32 in the config,
    # then we added it as a leaf, but don't discover it
    if (node.ip[0] == '0.0.0.0'):
        return

    # may be a leaf we couldn't connect to previously
    if (node.snmpobj.success == 0):
        return

    # print some info to stdout
    dcodes = DCODE_STEP_INTO
    if (depth == 0):
        dcodes |= DCODE_ROOT
    self.__print_step(node.ip[0], node.name, depth, dcodes)

    # get the cached snmp credentials
    snmpobj = node.snmpobj

    # list of valid neighbors to discover next
    valid_neighbors = []

    # get list of neighbors
    cdp_neighbors = node.get_cdp_neighbors()
    lldp_neighbors = node.get_lldp_neighbors()
    neighbors = cdp_neighbors + lldp_neighbors
    if (len(neighbors) == 0):
        return

    for n in neighbors:
        # some neighbors may not advertise IP addresses - default them to 0.0.0.0
        if (n.remote_ip == None):
            n.remote_ip = '0.0.0.0'

        # check the ACL
        acl_action = self.__match_node_acl(n.remote_ip, n.remote_name)
        if (acl_action == 'deny'):
            # deny inclusion of this node
            continue

        dcodes = DCODE_DISCOVERED
        child = None
        if (acl_action == 'include'):
            # include this node but do not discover it
            child = natlas_node()
            child.ip = [n.remote_ip]
            dcodes |= DCODE_INCLUDE
        else:
            # discover this node
            child, query_result = self.__query_node(n.remote_ip, n.remote_name)

            # if we couldn't pull info from SNMP fill in what we know
            if (child.snmpobj.success == 0):
                child.name = util.shorten_host_name(n.remote_name, self.config.host_domains)
                dcodes |= DCODE_ERR_SNMP

            # need to check the ACL again for extended ops (we have more info)
            acl_action = self.__match_node_acl(n.remote_ip, n.remote_name,
                                               n.remote_plat, n.remote_ios,
                                               child.serial)
            if (acl_action == 'deny'):
                continue

            if (query_result == NODE_NEW):
                self.nodes.append(child)
                if (acl_action == 'leaf'):
                    dcodes |= DCODE_LEAF
                if (n.discovered_proto == 'cdp'):
                    dcodes |= DCODE_CDP
                if (n.discovered_proto == 'lldp'):
                    dcodes |= DCODE_LLDP
                self.__print_step(n.remote_ip, n.remote_name, depth+1, dcodes)

            # CDP/LLDP advertises the platform
            child.plat = n.remote_plat
            child.ios = n.remote_ios

        # add the discovered node to the link object and link to the parent
        n.node = child
        self.__add_link(node, n)

        # if we need to discover this node then add it to the list
        if ((query_result == NODE_NEW) & (acl_action != 'leaf') & (acl_action != 'include')):
            valid_neighbors.append(child)

    # discover the valid neighbors
    for n in valid_neighbors:
        self.__discover_node(n, depth+1)
Given a node, recursively enumerate its adjacencies until we reach the specified depth (>0). Args: node: natlas_node object to enumerate. depth: The depth left that we can go further away from the root.
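Stripped of the SNMP, ACL, and link-tracking details, __discover_node is a depth-limited recursive traversal with a visited set; a minimal sketch over a plain adjacency dict (a hypothetical helper, not part of natlas):

```python
def discover(graph, root, max_depth):
    """Depth-limited recursive traversal mirroring __discover_node's control flow."""
    discovered = set()

    def visit(node, depth):
        # mirror the guard clauses at the top of __discover_node
        if node is None or depth >= max_depth or node in discovered:
            return
        discovered.add(node)
        # neighbors that pass the ACL would be collected here,
        # then recursed into one level deeper
        for neighbor in graph.get(node, []):
            visit(neighbor, depth + 1)

    visit(root, 0)
    return discovered
```

Marking a node discovered before recursing is what keeps the traversal from looping on cyclic topologies, which is why the real code sets `node.discovered = 1` up front.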
def resync(
    ctx,
    opts,
    owner_repo_package,
    skip_errors,
    wait_interval,
    no_wait_for_sync,
    sync_attempts,
):
    """
    Resynchronise a package in a repository.

    This requires appropriate permissions for package.

    - OWNER/REPO/PACKAGE: Specify the OWNER namespace (i.e. user or org), the
    REPO name where the package is stored, and the PACKAGE name (slug) of the
    package itself. All separated by a slash.

    Example: 'your-org/awesome-repo/better-pkg'.

    Full CLI example:

      $ cloudsmith resync your-org/awesome-repo/better-pkg
    """
    owner, source, slug = owner_repo_package

    resync_package(
        ctx=ctx, opts=opts, owner=owner, repo=source, slug=slug, skip_errors=skip_errors
    )

    if no_wait_for_sync:
        return

    wait_for_package_sync(
        ctx=ctx,
        opts=opts,
        owner=owner,
        repo=source,
        slug=slug,
        wait_interval=wait_interval,
        skip_errors=skip_errors,
        attempts=sync_attempts,
    )
Resynchronise a package in a repository. This requires appropriate permissions for package. - OWNER/REPO/PACKAGE: Specify the OWNER namespace (i.e. user or org), the REPO name where the package is stored, and the PACKAGE name (slug) of the package itself. All separated by a slash. Example: 'your-org/awesome-repo/better-pkg'. Full CLI example: $ cloudsmith resync your-org/awesome-repo/better-pkg
def add_batch_parser(subparsers, parent_parser):
    """Adds arguments parsers for the batch list, batch show and batch status
    commands

    Args:
        subparsers: Add parsers to this subparser object
        parent_parser: The parent argparse.ArgumentParser object
    """
    parser = subparsers.add_parser(
        'batch',
        help='Displays information about batches and submit new batches',
        description='Provides subcommands to display Batch information and '
                    'submit Batches to the validator via the REST API.')

    grand_parsers = parser.add_subparsers(title='subcommands',
                                          dest='subcommand')
    grand_parsers.required = True

    add_batch_list_parser(grand_parsers, parent_parser)
    add_batch_show_parser(grand_parsers, parent_parser)
    add_batch_status_parser(grand_parsers, parent_parser)
    add_batch_submit_parser(grand_parsers, parent_parser)
Adds arguments parsers for the batch list, batch show and batch status commands Args: subparsers: Add parsers to this subparser object parent_parser: The parent argparse.ArgumentParser object
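The nested-subparser layout used by add_batch_parser is plain argparse; a self-contained sketch of the same shape (the command and flag names here are illustrative, not Sawtooth's actual CLI surface):

```python
import argparse

def build_parser():
    """Top-level parser with a 'batch' group that has its own subcommands."""
    parser = argparse.ArgumentParser(prog="cli")
    subparsers = parser.add_subparsers(title="subcommands", dest="subcommand")
    subparsers.required = True

    # 'batch' gets its own nested subcommand group, like add_batch_parser
    batch = subparsers.add_parser("batch", help="batch operations")
    batch_sub = batch.add_subparsers(title="subcommands", dest="batch_command")
    batch_sub.required = True

    list_cmd = batch_sub.add_parser("list", help="list batches")
    list_cmd.add_argument("--format", default="default")
    return parser

args = build_parser().parse_args(["batch", "list", "--format", "json"])
```

Setting `required = True` on each subparser group (with a `dest` so argparse can name it in errors) is what makes `cli` and `cli batch` without a subcommand fail with a usage message instead of silently doing nothing.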
def encrypt(**kwargs):
    """Encrypts and serializes provided plaintext.

    .. note::
        When using this function, the entire ciphertext message is encrypted
        into memory before returning any data.  If streaming is desired, see
        :class:`aws_encryption_sdk.stream`.

    .. code:: python

        >>> import aws_encryption_sdk
        >>> kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
        ...     'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222',
        ...     'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333'
        ... ])
        >>> my_ciphertext, encryptor_header = aws_encryption_sdk.encrypt(
        ...     source=my_plaintext,
        ...     key_provider=kms_key_provider
        ... )

    :param config: Client configuration object (config or individual parameters required)
    :type config: aws_encryption_sdk.streaming_client.EncryptorConfig
    :param source: Source data to encrypt or decrypt
    :type source: str, bytes, io.IOBase, or file
    :param materials_manager: `CryptoMaterialsManager` from which to obtain cryptographic materials
        (either `materials_manager` or `key_provider` required)
    :type materials_manager: aws_encryption_sdk.materials_managers.base.CryptoMaterialsManager
    :param key_provider: `MasterKeyProvider` from which to obtain data keys for encryption
        (either `materials_manager` or `key_provider` required)
    :type key_provider: aws_encryption_sdk.key_providers.base.MasterKeyProvider
    :param int source_length: Length of source data (optional)

        .. note::
            If source_length is not provided and unframed message is being written or read() is
            called, will attempt to seek() to the end of the stream and tell() to find the length
            of source data.

        .. note::
            .. versionadded:: 1.3.0

            If `source_length` and `materials_manager` are both provided, the total plaintext bytes
            encrypted will not be allowed to exceed `source_length`.  To maintain backwards
            compatibility, this is not enforced if a `key_provider` is provided.

    :param dict encryption_context: Dictionary defining encryption context
    :param algorithm: Algorithm to use for encryption
    :type algorithm: aws_encryption_sdk.identifiers.Algorithm
    :param int frame_length: Frame length in bytes
    :returns: Tuple containing the encrypted ciphertext and the message header object
    :rtype: tuple of bytes and :class:`aws_encryption_sdk.structures.MessageHeader`
    """
    with StreamEncryptor(**kwargs) as encryptor:
        ciphertext = encryptor.read()

    return ciphertext, encryptor.header
Encrypts and serializes provided plaintext. .. note:: When using this function, the entire ciphertext message is encrypted into memory before returning any data. If streaming is desired, see :class:`aws_encryption_sdk.stream`. .. code:: python >>> import aws_encryption_sdk >>> kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[ ... 'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222', ... 'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333' ... ]) >>> my_ciphertext, encryptor_header = aws_encryption_sdk.encrypt( ... source=my_plaintext, ... key_provider=kms_key_provider ... ) :param config: Client configuration object (config or individual parameters required) :type config: aws_encryption_sdk.streaming_client.EncryptorConfig :param source: Source data to encrypt or decrypt :type source: str, bytes, io.IOBase, or file :param materials_manager: `CryptoMaterialsManager` from which to obtain cryptographic materials (either `materials_manager` or `key_provider` required) :type materials_manager: aws_encryption_sdk.materials_managers.base.CryptoMaterialsManager :param key_provider: `MasterKeyProvider` from which to obtain data keys for encryption (either `materials_manager` or `key_provider` required) :type key_provider: aws_encryption_sdk.key_providers.base.MasterKeyProvider :param int source_length: Length of source data (optional) .. note:: If source_length is not provided and unframed message is being written or read() is called, will attempt to seek() to the end of the stream and tell() to find the length of source data. .. note:: .. versionadded:: 1.3.0 If `source_length` and `materials_manager` are both provided, the total plaintext bytes encrypted will not be allowed to exceed `source_length`. To maintain backwards compatibility, this is not enforced if a `key_provider` is provided. :param dict encryption_context: Dictionary defining encryption context :param algorithm: Algorithm to use for encryption :type algorithm: aws_encryption_sdk.identifiers.Algorithm :param int frame_length: Frame length in bytes :returns: Tuple containing the encrypted ciphertext and the message header object :rtype: tuple of bytes and :class:`aws_encryption_sdk.structures.MessageHeader`
Below is the instruction that describes the task: ### Input: Encrypts and serializes provided plaintext. .. note:: When using this function, the entire ciphertext message is encrypted into memory before returning any data. If streaming is desired, see :class:`aws_encryption_sdk.stream`. .. code:: python >>> import aws_encryption_sdk >>> kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[ ... 'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222', ... 'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333' ... ]) >>> my_ciphertext, encryptor_header = aws_encryption_sdk.encrypt( ... source=my_plaintext, ... key_provider=kms_key_provider ... ) :param config: Client configuration object (config or individual parameters required) :type config: aws_encryption_sdk.streaming_client.EncryptorConfig :param source: Source data to encrypt or decrypt :type source: str, bytes, io.IOBase, or file :param materials_manager: `CryptoMaterialsManager` from which to obtain cryptographic materials (either `materials_manager` or `key_provider` required) :type materials_manager: aws_encryption_sdk.materials_managers.base.CryptoMaterialsManager :param key_provider: `MasterKeyProvider` from which to obtain data keys for encryption (either `materials_manager` or `key_provider` required) :type key_provider: aws_encryption_sdk.key_providers.base.MasterKeyProvider :param int source_length: Length of source data (optional) .. note:: If source_length is not provided and unframed message is being written or read() is called, will attempt to seek() to the end of the stream and tell() to find the length of source data. .. note:: .. versionadded:: 1.3.0 If `source_length` and `materials_manager` are both provided, the total plaintext bytes encrypted will not be allowed to exceed `source_length`. To maintain backwards compatibility, this is not enforced if a `key_provider` is provided. 
:param dict encryption_context: Dictionary defining encryption context :param algorithm: Algorithm to use for encryption :type algorithm: aws_encryption_sdk.identifiers.Algorithm :param int frame_length: Frame length in bytes :returns: Tuple containing the encrypted ciphertext and the message header object :rtype: tuple of bytes and :class:`aws_encryption_sdk.structures.MessageHeader` ### Response: def encrypt(**kwargs): """Encrypts and serializes provided plaintext. .. note:: When using this function, the entire ciphertext message is encrypted into memory before returning any data. If streaming is desired, see :class:`aws_encryption_sdk.stream`. .. code:: python >>> import aws_encryption_sdk >>> kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[ ... 'arn:aws:kms:us-east-1:2222222222222:key/22222222-2222-2222-2222-222222222222', ... 'arn:aws:kms:us-east-1:3333333333333:key/33333333-3333-3333-3333-333333333333' ... ]) >>> my_ciphertext, encryptor_header = aws_encryption_sdk.encrypt( ... source=my_plaintext, ... key_provider=kms_key_provider ... ) :param config: Client configuration object (config or individual parameters required) :type config: aws_encryption_sdk.streaming_client.EncryptorConfig :param source: Source data to encrypt or decrypt :type source: str, bytes, io.IOBase, or file :param materials_manager: `CryptoMaterialsManager` from which to obtain cryptographic materials (either `materials_manager` or `key_provider` required) :type materials_manager: aws_encryption_sdk.materials_managers.base.CryptoMaterialsManager :param key_provider: `MasterKeyProvider` from which to obtain data keys for encryption (either `materials_manager` or `key_provider` required) :type key_provider: aws_encryption_sdk.key_providers.base.MasterKeyProvider :param int source_length: Length of source data (optional) .. 
note:: If source_length is not provided and unframed message is being written or read() is called, will attempt to seek() to the end of the stream and tell() to find the length of source data. .. note:: .. versionadded:: 1.3.0 If `source_length` and `materials_manager` are both provided, the total plaintext bytes encrypted will not be allowed to exceed `source_length`. To maintain backwards compatibility, this is not enforced if a `key_provider` is provided. :param dict encryption_context: Dictionary defining encryption context :param algorithm: Algorithm to use for encryption :type algorithm: aws_encryption_sdk.identifiers.Algorithm :param int frame_length: Frame length in bytes :returns: Tuple containing the encrypted ciphertext and the message header object :rtype: tuple of bytes and :class:`aws_encryption_sdk.structures.MessageHeader` """ with StreamEncryptor(**kwargs) as encryptor: ciphertext = encryptor.read() return ciphertext, encryptor.header
def get_action_descriptions(self, action_name=None): """ Get the thing's actions as an array. action_name -- Optional action name to get descriptions for Returns the action descriptions. """ descriptions = [] if action_name is None: for name in self.actions: for action in self.actions[name]: descriptions.append(action.as_action_description()) elif action_name in self.actions: for action in self.actions[action_name]: descriptions.append(action.as_action_description()) return descriptions
Get the thing's actions as an array. action_name -- Optional action name to get descriptions for Returns the action descriptions.
Below is the instruction that describes the task: ### Input: Get the thing's actions as an array. action_name -- Optional action name to get descriptions for Returns the action descriptions. ### Response: def get_action_descriptions(self, action_name=None): """ Get the thing's actions as an array. action_name -- Optional action name to get descriptions for Returns the action descriptions. """ descriptions = [] if action_name is None: for name in self.actions: for action in self.actions[name]: descriptions.append(action.as_action_description()) elif action_name in self.actions: for action in self.actions[action_name]: descriptions.append(action.as_action_description()) return descriptions
def s3_cache_readonly(self): """ Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module. """ return coerce_boolean(self.get(property_name='s3_cache_readonly', environment_variable='PIP_ACCEL_S3_READONLY', configuration_option='s3-readonly', default=False))
Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module.
Below is the instruction that describes the task: ### Input: Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module. ### Response: def s3_cache_readonly(self): """ Whether the Amazon S3 bucket is considered read only. If this is :data:`True` then the Amazon S3 bucket will only be used for :class:`~pip_accel.caches.s3.S3CacheBackend.get()` operations (all :class:`~pip_accel.caches.s3.S3CacheBackend.put()` operations will be disabled). - Environment variable: ``$PIP_ACCEL_S3_READONLY`` (refer to :func:`~humanfriendly.coerce_boolean()` for details on how the value of the environment variable is interpreted) - Configuration option: ``s3-readonly`` (also parsed using :func:`~humanfriendly.coerce_boolean()`) - Default: :data:`False` For details please refer to the :mod:`pip_accel.caches.s3` module. """ return coerce_boolean(self.get(property_name='s3_cache_readonly', environment_variable='PIP_ACCEL_S3_READONLY', configuration_option='s3-readonly', default=False))
def open_unknown(self, fullurl, data=None): """Overridable interface to open unknown URL type.""" type, url = splittype(fullurl) raise IOError('url error', 'unknown url type', type)
Overridable interface to open unknown URL type.
Below is the instruction that describes the task: ### Input: Overridable interface to open unknown URL type. ### Response: def open_unknown(self, fullurl, data=None): """Overridable interface to open unknown URL type.""" type, url = splittype(fullurl) raise IOError('url error', 'unknown url type', type)
def update_course_runs(self, course_runs, enterprise_customer, enterprise_context): """ Update marketing URLs in course metadata and return the updated course runs. Arguments: course_runs (list): List of course runs. enterprise_customer (EnterpriseCustomer): enterprise customer instance. enterprise_context (dict): The context to inject into URLs. Returns: (list): List of course runs with updated metadata. """ updated_course_runs = [] for course_run in course_runs: track_selection_url = utils.get_course_track_selection_url( course_run=course_run, query_parameters=dict(enterprise_context, **utils.get_enterprise_utm_context(enterprise_customer)), ) enrollment_url = enterprise_customer.get_course_run_enrollment_url(course_run.get('key')) course_run.update({ 'enrollment_url': enrollment_url, 'track_selection_url': track_selection_url, }) # Update marketing urls in course metadata to include enterprise related info. marketing_url = course_run.get('marketing_url') if marketing_url: query_parameters = dict(enterprise_context, **utils.get_enterprise_utm_context(enterprise_customer)) course_run.update({'marketing_url': utils.update_query_parameters(marketing_url, query_parameters)}) # Add updated course run to the list. updated_course_runs.append(course_run) return updated_course_runs
Update marketing URLs in course metadata and return the updated course runs. Arguments: course_runs (list): List of course runs. enterprise_customer (EnterpriseCustomer): enterprise customer instance. enterprise_context (dict): The context to inject into URLs. Returns: (list): List of course runs with updated metadata.
Below is the instruction that describes the task: ### Input: Update marketing URLs in course metadata and return the updated course runs. Arguments: course_runs (list): List of course runs. enterprise_customer (EnterpriseCustomer): enterprise customer instance. enterprise_context (dict): The context to inject into URLs. Returns: (list): List of course runs with updated metadata. ### Response: def update_course_runs(self, course_runs, enterprise_customer, enterprise_context): """ Update marketing URLs in course metadata and return the updated course runs. Arguments: course_runs (list): List of course runs. enterprise_customer (EnterpriseCustomer): enterprise customer instance. enterprise_context (dict): The context to inject into URLs. Returns: (list): List of course runs with updated metadata. """ updated_course_runs = [] for course_run in course_runs: track_selection_url = utils.get_course_track_selection_url( course_run=course_run, query_parameters=dict(enterprise_context, **utils.get_enterprise_utm_context(enterprise_customer)), ) enrollment_url = enterprise_customer.get_course_run_enrollment_url(course_run.get('key')) course_run.update({ 'enrollment_url': enrollment_url, 'track_selection_url': track_selection_url, }) # Update marketing urls in course metadata to include enterprise related info. marketing_url = course_run.get('marketing_url') if marketing_url: query_parameters = dict(enterprise_context, **utils.get_enterprise_utm_context(enterprise_customer)) course_run.update({'marketing_url': utils.update_query_parameters(marketing_url, query_parameters)}) # Add updated course run to the list. updated_course_runs.append(course_run) return updated_course_runs
def set_input(self, variable_name, period, value): """ Set a variable's value for a given period :param variable: the variable to be set :param value: the input value for the variable :param period: the period for which the value is set Example: >>> from openfisca_country_template import CountryTaxBenefitSystem >>> simulation = Simulation(CountryTaxBenefitSystem()) >>> simulation.set_input('age', '2018-04', [12, 14]) >>> simulation.get_array('age', '2018-04') array([12, 14], dtype=int32) If a ``set_input`` property has been set for the variable, this method may accept inputs for periods not matching the ``definition_period`` of the variable. To read more about this, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#automatically-process-variable-inputs-defined-for-periods-not-matching-the-definitionperiod>`_. """ variable = self.tax_benefit_system.get_variable(variable_name, check_existence = True) period = periods.period(period) if ((variable.end is not None) and (period.start.date > variable.end)): return self.get_holder(variable_name).set_input(period, value)
Set a variable's value for a given period :param variable: the variable to be set :param value: the input value for the variable :param period: the period for which the value is set Example: >>> from openfisca_country_template import CountryTaxBenefitSystem >>> simulation = Simulation(CountryTaxBenefitSystem()) >>> simulation.set_input('age', '2018-04', [12, 14]) >>> simulation.get_array('age', '2018-04') array([12, 14], dtype=int32) If a ``set_input`` property has been set for the variable, this method may accept inputs for periods not matching the ``definition_period`` of the variable. To read more about this, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#automatically-process-variable-inputs-defined-for-periods-not-matching-the-definitionperiod>`_.
Below is the instruction that describes the task: ### Input: Set a variable's value for a given period :param variable: the variable to be set :param value: the input value for the variable :param period: the period for which the value is set Example: >>> from openfisca_country_template import CountryTaxBenefitSystem >>> simulation = Simulation(CountryTaxBenefitSystem()) >>> simulation.set_input('age', '2018-04', [12, 14]) >>> simulation.get_array('age', '2018-04') array([12, 14], dtype=int32) If a ``set_input`` property has been set for the variable, this method may accept inputs for periods not matching the ``definition_period`` of the variable. To read more about this, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#automatically-process-variable-inputs-defined-for-periods-not-matching-the-definitionperiod>`_. ### Response: def set_input(self, variable_name, period, value): """ Set a variable's value for a given period :param variable: the variable to be set :param value: the input value for the variable :param period: the period for which the value is set Example: >>> from openfisca_country_template import CountryTaxBenefitSystem >>> simulation = Simulation(CountryTaxBenefitSystem()) >>> simulation.set_input('age', '2018-04', [12, 14]) >>> simulation.get_array('age', '2018-04') array([12, 14], dtype=int32) If a ``set_input`` property has been set for the variable, this method may accept inputs for periods not matching the ``definition_period`` of the variable. To read more about this, check the `documentation <https://openfisca.org/doc/coding-the-legislation/35_periods.html#automatically-process-variable-inputs-defined-for-periods-not-matching-the-definitionperiod>`_. 
""" variable = self.tax_benefit_system.get_variable(variable_name, check_existence = True) period = periods.period(period) if ((variable.end is not None) and (period.start.date > variable.end)): return self.get_holder(variable_name).set_input(period, value)
def recursive_dirname(f): """Given a relative path like 'a/b/c/d', yield all ascending path components like: 'a/b/c/d' 'a/b/c' 'a/b' 'a' '' """ prev = None while f != prev: yield f prev = f f = os.path.dirname(f) yield ''
Given a relative path like 'a/b/c/d', yield all ascending path components like: 'a/b/c/d' 'a/b/c' 'a/b' 'a' ''
Below is the instruction that describes the task: ### Input: Given a relative path like 'a/b/c/d', yield all ascending path components like: 'a/b/c/d' 'a/b/c' 'a/b' 'a' '' ### Response: def recursive_dirname(f): """Given a relative path like 'a/b/c/d', yield all ascending path components like: 'a/b/c/d' 'a/b/c' 'a/b' 'a' '' """ prev = None while f != prev: yield f prev = f f = os.path.dirname(f) yield ''
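A quick usage sketch for the recursive_dirname entry above (not part of the original dataset record). The function body is copied verbatim; the demonstration only assumes POSIX-style paths, where os.path.dirname('a') and os.path.dirname('') are both '':

```python
import os

def recursive_dirname(f):
    # Copied from the entry above: walk the path upward one component at a time.
    prev = None
    while f != prev:
        yield f
        prev = f
        f = os.path.dirname(f)
    yield ''

# The loop itself already reaches '' (os.path.dirname('a') == ''), so the
# trailing `yield ''` emits the empty string a second time.
print(list(recursive_dirname('a/b/c/d')))
# -> ['a/b/c/d', 'a/b/c', 'a/b', 'a', '', '']
```

Note the duplicated final '' in the output: it comes once from the loop and once from the trailing yield, which slightly differs from the docstring's example listing.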
def run(create_application, settings=None, log_config=None): """ Run a Tornado create_application. :param create_application: function to call to create a new application instance :param dict|None settings: optional configuration dictionary that will be passed through to ``create_application`` as kwargs. :param dict|None log_config: optional logging configuration dictionary to use. By default, a reasonable logging configuration is generated based on settings. If you need to override the configuration, then use this parameter. It is passed as-is to :func:`logging.config.dictConfig`. .. rubric:: settings['debug'] If the `settings` parameter includes a value for the ``debug`` key, then the application will be run in Tornado debug mode. If the `settings` parameter does not include a ``debug`` key, then debug mode will be enabled based on the :envvar:`DEBUG` environment variable. .. rubric:: settings['port'] If the `settings` parameter includes a value for the ``port`` key, then the application will be configured to listen on the specified port. If this key is not present, then the :envvar:`PORT` environment variable determines which port to bind to. The default port is 8000 if nothing overrides it. .. rubric:: settings['number_of_procs'] If the `settings` parameter includes a value for the ``number_of_procs`` key, then the application will be configured to run this many processes unless in *debug* mode. This is passed to ``HTTPServer.start``. .. rubric:: settings['xheaders'] If the `settings` parameter includes a value for the ``xheaders`` key, then the application will be configured to use headers, like X-Real-IP, to get the user's IP address instead of attributing all traffic to the load balancer's IP address. When running behind a load balancer like nginx, it is recommended to pass xheaders=True. The default value is False if nothing overrides it. """ from . 
import runner app_settings = {} if settings is None else settings.copy() debug_mode = bool(app_settings.get('debug', int(os.environ.get('DEBUG', 0)) != 0)) app_settings['debug'] = debug_mode logging.config.dictConfig(_get_logging_config(debug_mode) if log_config is None else log_config) port_number = int(app_settings.pop('port', os.environ.get('PORT', 8000))) num_procs = int(app_settings.pop('number_of_procs', '0')) server = runner.Runner(create_application(**app_settings)) server.run(port_number, num_procs)
Run a Tornado create_application. :param create_application: function to call to create a new application instance :param dict|None settings: optional configuration dictionary that will be passed through to ``create_application`` as kwargs. :param dict|None log_config: optional logging configuration dictionary to use. By default, a reasonable logging configuration is generated based on settings. If you need to override the configuration, then use this parameter. It is passed as-is to :func:`logging.config.dictConfig`. .. rubric:: settings['debug'] If the `settings` parameter includes a value for the ``debug`` key, then the application will be run in Tornado debug mode. If the `settings` parameter does not include a ``debug`` key, then debug mode will be enabled based on the :envvar:`DEBUG` environment variable. .. rubric:: settings['port'] If the `settings` parameter includes a value for the ``port`` key, then the application will be configured to listen on the specified port. If this key is not present, then the :envvar:`PORT` environment variable determines which port to bind to. The default port is 8000 if nothing overrides it. .. rubric:: settings['number_of_procs'] If the `settings` parameter includes a value for the ``number_of_procs`` key, then the application will be configured to run this many processes unless in *debug* mode. This is passed to ``HTTPServer.start``. .. rubric:: settings['xheaders'] If the `settings` parameter includes a value for the ``xheaders`` key, then the application will be configured to use headers, like X-Real-IP, to get the user's IP address instead of attributing all traffic to the load balancer's IP address. When running behind a load balancer like nginx, it is recommended to pass xheaders=True. The default value is False if nothing overrides it.
Below is the instruction that describes the task: ### Input: Run a Tornado create_application. :param create_application: function to call to create a new application instance :param dict|None settings: optional configuration dictionary that will be passed through to ``create_application`` as kwargs. :param dict|None log_config: optional logging configuration dictionary to use. By default, a reasonable logging configuration is generated based on settings. If you need to override the configuration, then use this parameter. It is passed as-is to :func:`logging.config.dictConfig`. .. rubric:: settings['debug'] If the `settings` parameter includes a value for the ``debug`` key, then the application will be run in Tornado debug mode. If the `settings` parameter does not include a ``debug`` key, then debug mode will be enabled based on the :envvar:`DEBUG` environment variable. .. rubric:: settings['port'] If the `settings` parameter includes a value for the ``port`` key, then the application will be configured to listen on the specified port. If this key is not present, then the :envvar:`PORT` environment variable determines which port to bind to. The default port is 8000 if nothing overrides it. .. rubric:: settings['number_of_procs'] If the `settings` parameter includes a value for the ``number_of_procs`` key, then the application will be configured to run this many processes unless in *debug* mode. This is passed to ``HTTPServer.start``. .. rubric:: settings['xheaders'] If the `settings` parameter includes a value for the ``xheaders`` key, then the application will be configured to use headers, like X-Real-IP, to get the user's IP address instead of attributing all traffic to the load balancer's IP address. When running behind a load balancer like nginx, it is recommended to pass xheaders=True. The default value is False if nothing overrides it. ### Response: def run(create_application, settings=None, log_config=None): """ Run a Tornado create_application. 
:param create_application: function to call to create a new application instance :param dict|None settings: optional configuration dictionary that will be passed through to ``create_application`` as kwargs. :param dict|None log_config: optional logging configuration dictionary to use. By default, a reasonable logging configuration is generated based on settings. If you need to override the configuration, then use this parameter. It is passed as-is to :func:`logging.config.dictConfig`. .. rubric:: settings['debug'] If the `settings` parameter includes a value for the ``debug`` key, then the application will be run in Tornado debug mode. If the `settings` parameter does not include a ``debug`` key, then debug mode will be enabled based on the :envvar:`DEBUG` environment variable. .. rubric:: settings['port'] If the `settings` parameter includes a value for the ``port`` key, then the application will be configured to listen on the specified port. If this key is not present, then the :envvar:`PORT` environment variable determines which port to bind to. The default port is 8000 if nothing overrides it. .. rubric:: settings['number_of_procs'] If the `settings` parameter includes a value for the ``number_of_procs`` key, then the application will be configured to run this many processes unless in *debug* mode. This is passed to ``HTTPServer.start``. .. rubric:: settings['xheaders'] If the `settings` parameter includes a value for the ``xheaders`` key, then the application will be configured to use headers, like X-Real-IP, to get the user's IP address instead of attributing all traffic to the load balancer's IP address. When running behind a load balancer like nginx, it is recommended to pass xheaders=True. The default value is False if nothing overrides it. """ from . 
import runner app_settings = {} if settings is None else settings.copy() debug_mode = bool(app_settings.get('debug', int(os.environ.get('DEBUG', 0)) != 0)) app_settings['debug'] = debug_mode logging.config.dictConfig(_get_logging_config(debug_mode) if log_config is None else log_config) port_number = int(app_settings.pop('port', os.environ.get('PORT', 8000))) num_procs = int(app_settings.pop('number_of_procs', '0')) server = runner.Runner(create_application(**app_settings)) server.run(port_number, num_procs)
def validate_email(value: str) -> Tuple[str, str]: """ Brutally simple email address validation. Note unlike most email address validation * raw ip address (literal) domain parts are not allowed. * "John Doe <local_part@domain.com>" style "pretty" email addresses are processed * the local part check is extremely basic. This raises the possibility of unicode spoofing, but no better solution is really possible. * spaces are stripped from the beginning and end of addresses but no error is raised See RFC 5322 but treat it with suspicion, there seems to exist no universally acknowledged test for a valid email! """ if email_validator is None: raise ImportError('email-validator is not installed, run `pip install pydantic[email]`') m = PRETTY_REGEX.fullmatch(value) name: Optional[str] = None if m: name, value = m.groups() email = value.strip() try: email_validator.validate_email(email, check_deliverability=False) except email_validator.EmailNotValidError as e: raise errors.EmailError() from e return name or email[: email.index('@')], email.lower()
Brutally simple email address validation. Note unlike most email address validation * raw ip address (literal) domain parts are not allowed. * "John Doe <local_part@domain.com>" style "pretty" email addresses are processed * the local part check is extremely basic. This raises the possibility of unicode spoofing, but no better solution is really possible. * spaces are stripped from the beginning and end of addresses but no error is raised See RFC 5322 but treat it with suspicion, there seems to exist no universally acknowledged test for a valid email!
Below is the instruction that describes the task: ### Input: Brutally simple email address validation. Note unlike most email address validation * raw ip address (literal) domain parts are not allowed. * "John Doe <local_part@domain.com>" style "pretty" email addresses are processed * the local part check is extremely basic. This raises the possibility of unicode spoofing, but no better solution is really possible. * spaces are stripped from the beginning and end of addresses but no error is raised See RFC 5322 but treat it with suspicion, there seems to exist no universally acknowledged test for a valid email! ### Response: def validate_email(value: str) -> Tuple[str, str]: """ Brutally simple email address validation. Note unlike most email address validation * raw ip address (literal) domain parts are not allowed. * "John Doe <local_part@domain.com>" style "pretty" email addresses are processed * the local part check is extremely basic. This raises the possibility of unicode spoofing, but no better solution is really possible. * spaces are stripped from the beginning and end of addresses but no error is raised See RFC 5322 but treat it with suspicion, there seems to exist no universally acknowledged test for a valid email! """ if email_validator is None: raise ImportError('email-validator is not installed, run `pip install pydantic[email]`') m = PRETTY_REGEX.fullmatch(value) name: Optional[str] = None if m: name, value = m.groups() email = value.strip() try: email_validator.validate_email(email, check_deliverability=False) except email_validator.EmailNotValidError as e: raise errors.EmailError() from e return name or email[: email.index('@')], email.lower()
def one_point_crossover(parents): """Perform one point crossover on two parent chromosomes. Select a random position in the chromosome. Take genes to the left from one parent and the rest from the other parent. Ex. p1 = xxxxx, p2 = yyyyy, position = 2 (starting at 0), child = xxyyy """ # The point that the chromosomes will be crossed at (see Ex. above) crossover_point = random.randint(1, len(parents[0]) - 1) return (_one_parent_crossover(parents[0], parents[1], crossover_point), _one_parent_crossover(parents[1], parents[0], crossover_point))
Perform one point crossover on two parent chromosomes. Select a random position in the chromosome. Take genes to the left from one parent and the rest from the other parent. Ex. p1 = xxxxx, p2 = yyyyy, position = 2 (starting at 0), child = xxyyy
Below is the instruction that describes the task: ### Input: Perform one point crossover on two parent chromosomes. Select a random position in the chromosome. Take genes to the left from one parent and the rest from the other parent. Ex. p1 = xxxxx, p2 = yyyyy, position = 2 (starting at 0), child = xxyyy ### Response: def one_point_crossover(parents): """Perform one point crossover on two parent chromosomes. Select a random position in the chromosome. Take genes to the left from one parent and the rest from the other parent. Ex. p1 = xxxxx, p2 = yyyyy, position = 2 (starting at 0), child = xxyyy """ # The point that the chromosomes will be crossed at (see Ex. above) crossover_point = random.randint(1, len(parents[0]) - 1) return (_one_parent_crossover(parents[0], parents[1], crossover_point), _one_parent_crossover(parents[1], parents[0], crossover_point))
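The helper `_one_parent_crossover` is referenced but not shown in this entry. A minimal self-contained sketch of the same one-point crossover idea follows; the helper's body here is an assumption, chosen to match the `xxyyy` example in the docstring:

```python
import random

def _one_parent_crossover(head_parent, tail_parent, point):
    # Assumed helper: genes to the left of `point` come from one parent,
    # the rest from the other (matching the Ex. in the docstring).
    return head_parent[:point] + tail_parent[point:]

def one_point_crossover(parents):
    # Pick a cut position strictly inside the chromosome, then build the
    # two complementary children.
    point = random.randint(1, len(parents[0]) - 1)
    return (_one_parent_crossover(parents[0], parents[1], point),
            _one_parent_crossover(parents[1], parents[0], point))

# Deterministic check of the docstring's example: position 2 gives 'xxyyy'.
print(_one_parent_crossover('xxxxx', 'yyyyy', 2))  # -> xxyyy
```

Because the cut point is always between 1 and len-1 inclusive, each child keeps at least one gene from each parent.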
def check_smart_storage_config_ids(self): """Check that SmartStorageConfig controllers are present in the hardware. :raises: IloError, on an error from iLO. """ if self.smart_storage_config_identities is None: msg = ('The Redfish controller failed to get the ' 'SmartStorageConfig controller configurations.') LOG.debug(msg) raise exception.IloError(msg)
Check that SmartStorageConfig controllers are present in the hardware. :raises: IloError, on an error from iLO.
Below is the instruction that describes the task: ### Input: Check that SmartStorageConfig controllers are present in the hardware. :raises: IloError, on an error from iLO. ### Response: def check_smart_storage_config_ids(self): """Check that SmartStorageConfig controllers are present in the hardware. :raises: IloError, on an error from iLO. """ if self.smart_storage_config_identities is None: msg = ('The Redfish controller failed to get the ' 'SmartStorageConfig controller configurations.') LOG.debug(msg) raise exception.IloError(msg)
def discovery_mdns(self): """ Installs the mDNS discovery bundles and instantiates components """ # Remove Zeroconf debug output logging.getLogger("zeroconf").setLevel(logging.WARNING) # Install the bundle self.context.install_bundle("pelix.remote.discovery.mdns").start() with use_waiting_list(self.context) as ipopo: # Instantiate the discovery ipopo.add(rs.FACTORY_DISCOVERY_ZEROCONF, "pelix-discovery-zeroconf")
Installs the mDNS discovery bundles and instantiates components
Below is the the instruction that describes the task: ### Input: Installs the mDNS discovery bundles and instantiates components ### Response: def discovery_mdns(self): """ Installs the mDNS discovery bundles and instantiates components """ # Remove Zeroconf debug output logging.getLogger("zeroconf").setLevel(logging.WARNING) # Install the bundle self.context.install_bundle("pelix.remote.discovery.mdns").start() with use_waiting_list(self.context) as ipopo: # Instantiate the discovery ipopo.add(rs.FACTORY_DISCOVERY_ZEROCONF, "pelix-discovery-zeroconf")
def slotsToJSON(obj, slots=None): """Converts the given Python object to one suitable for Javascript Object Notation (JSON) serialization via :func:`json.dump` or :func:`json.dumps`. This function delegates to :func:`toJSON`. Specifically, only attributes in the list of *slots* are converted. If *slots* is not provided, it defaults to the object's ``__slots__`` and any inherited ``__slots__``. To omit certain slots from serialization, the object may define a :meth:`__jsonOmit__(key, val)` method. When the method returns True for any particular slot name (i.e. key) and value combination, the slot will not be serialized. """ if slots is None: slots = list(obj.__slots__) if hasattr(obj, '__slots__') else [ ] for base in obj.__class__.__bases__: if hasattr(base, '__slots__'): slots.extend(base.__slots__) testOmit = hasattr(obj, '__jsonOmit__') and callable(obj.__jsonOmit__) result = { } for slot in slots: key = slot[1:] if slot.startswith('_') else slot val = getattr(obj, slot, None) if testOmit is False or obj.__jsonOmit__(key, val) is False: result[key] = toJSON(val) return result
Converts the given Python object to one suitable for Javascript Object Notation (JSON) serialization via :func:`json.dump` or :func:`json.dumps`. This function delegates to :func:`toJSON`. Specifically, only attributes in the list of *slots* are converted. If *slots* is not provided, it defaults to the object's ``__slots__`` and any inherited ``__slots__``. To omit certain slots from serialization, the object may define a :meth:`__jsonOmit__(key, val)` method. When the method returns True for any particular slot name (i.e. key) and value combination, the slot will not be serialized.
Below is the the instruction that describes the task: ### Input: Converts the given Python object to one suitable for Javascript Object Notation (JSON) serialization via :func:`json.dump` or :func:`json.dumps`. This function delegates to :func:`toJSON`. Specifically, only attributes in the list of *slots* are converted. If *slots* is not provided, it defaults to the object's ``__slots__`` and any inherited ``__slots__``. To omit certain slots from serialization, the object may define a :meth:`__jsonOmit__(key, val)` method. When the method returns True for any particular slot name (i.e. key) and value combination, the slot will not be serialized. ### Response: def slotsToJSON(obj, slots=None): """Converts the given Python object to one suitable for Javascript Object Notation (JSON) serialization via :func:`json.dump` or :func:`json.dumps`. This function delegates to :func:`toJSON`. Specifically, only attributes in the list of *slots* are converted. If *slots* is not provided, it defaults to the object's ``__slots__`` and any inherited ``__slots__``. To omit certain slots from serialization, the object may define a :meth:`__jsonOmit__(key, val)` method. When the method returns True for any particular slot name (i.e. key) and value combination, the slot will not be serialized. """ if slots is None: slots = list(obj.__slots__) if hasattr(obj, '__slots__') else [ ] for base in obj.__class__.__bases__: if hasattr(base, '__slots__'): slots.extend(base.__slots__) testOmit = hasattr(obj, '__jsonOmit__') and callable(obj.__jsonOmit__) result = { } for slot in slots: key = slot[1:] if slot.startswith('_') else slot val = getattr(obj, slot, None) if testOmit is False or obj.__jsonOmit__(key, val) is False: result[key] = toJSON(val) return result
def calculate_diagram_ranges(data): """ Given a numpy array calculate what the ranges of the H-R diagram should be. """ data = round_arr_teff_luminosity(data) temps = data['temp'] x_range = [1.05 * np.amax(temps), .95 * np.amin(temps)] lums = data['lum'] y_range = [.50 * np.amin(lums), 2 * np.amax(lums)] return (x_range, y_range)
Given a numpy array calculate what the ranges of the H-R diagram should be.
Below is the the instruction that describes the task: ### Input: Given a numpy array calculate what the ranges of the H-R diagram should be. ### Response: def calculate_diagram_ranges(data): """ Given a numpy array calculate what the ranges of the H-R diagram should be. """ data = round_arr_teff_luminosity(data) temps = data['temp'] x_range = [1.05 * np.amax(temps), .95 * np.amin(temps)] lums = data['lum'] y_range = [.50 * np.amin(lums), 2 * np.amax(lums)] return (x_range, y_range)
def inset_sizes(cls, original_width, original_height, target_width, target_height): """ Calculate new image sizes for inset mode :param original_width: int :param original_height: int :param target_width: int :param target_height: int :return: tuple(int, int) """ if target_width >= original_width and target_height >= original_height: target_width = float(original_width) target_height = original_height elif target_width <= original_width and target_height >= original_height: k = original_width / float(target_width) target_height = int(original_height / k) elif target_width >= original_width and target_height <= original_height: k = original_height / float(target_height) target_width = int(original_width / k) elif target_width < original_width and target_height < original_height: k = original_width / float(original_height) k_w = original_width / float(target_width) k_h = original_height / float(target_height) if k_w >= k_h: target_height = int(target_width / k) else: target_width = int(target_height * k) return target_width, target_height
Calculate new image sizes for inset mode :param original_width: int :param original_height: int :param target_width: int :param target_height: int :return: tuple(int, int)
Below is the the instruction that describes the task: ### Input: Calculate new image sizes for inset mode :param original_width: int :param original_height: int :param target_width: int :param target_height: int :return: tuple(int, int) ### Response: def inset_sizes(cls, original_width, original_height, target_width, target_height): """ Calculate new image sizes for inset mode :param original_width: int :param original_height: int :param target_width: int :param target_height: int :return: tuple(int, int) """ if target_width >= original_width and target_height >= original_height: target_width = float(original_width) target_height = original_height elif target_width <= original_width and target_height >= original_height: k = original_width / float(target_width) target_height = int(original_height / k) elif target_width >= original_width and target_height <= original_height: k = original_height / float(target_height) target_width = int(original_width / k) elif target_width < original_width and target_height < original_height: k = original_width / float(original_height) k_w = original_width / float(target_width) k_h = original_height / float(target_height) if k_w >= k_h: target_height = int(target_width / k) else: target_width = int(target_height * k) return target_width, target_height
def temporary_attr(obj, name, value): """ Context manager that removes an attribute from an object on closing. The object will hold the attribute for the duration of the context. Parameters ---------- obj : object Object onto which to add a temporary attribute. name : str Name of attribute to add to ``obj``. value : object Value of the attribute. """ setattr(obj, name, value) try: yield obj finally: delattr(obj, name)
Context manager that removes an attribute from an object on closing. The object will hold the attribute for the duration of the context. Parameters ---------- obj : object Object onto which to add a temporary attribute. name : str Name of attribute to add to ``obj``. value : object Value of the attribute.
Below is the the instruction that describes the task: ### Input: Context manager that removes an attribute from an object on closing. The object will hold the attribute for the duration of the context. Parameters ---------- obj : object Object onto which to add a temporary attribute. name : str Name of attribute to add to ``obj``. value : object Value of the attribute. ### Response: def temporary_attr(obj, name, value): """ Context manager that removes an attribute from an object on closing. The object will hold the attribute for the duration of the context. Parameters ---------- obj : object Object onto which to add a temporary attribute. name : str Name of attribute to add to ``obj``. value : object Value of the attribute. """ setattr(obj, name, value) try: yield obj finally: delattr(obj, name)
def change_object_link_card(obj, perms): """ If the user has permission to change `obj`, show a link to its Admin page. obj -- An object like Movie, Play, ClassicalWork, Publication, etc. perms -- The `perms` object that's in the template. """ # eg: 'movie' or 'classicalwork': name = obj.__class__.__name__.lower() permission = 'spectator.can_edit_{}'.format(name) # eg: 'admin:events_classicalwork_change': change_url_name = 'admin:{}_{}_change'.format(obj._meta.app_label, name) return { 'display_link': (permission in perms), 'change_url': reverse(change_url_name, args=[obj.id]) }
If the user has permission to change `obj`, show a link to its Admin page. obj -- An object like Movie, Play, ClassicalWork, Publication, etc. perms -- The `perms` object that's in the template.
Below is the the instruction that describes the task: ### Input: If the user has permission to change `obj`, show a link to its Admin page. obj -- An object like Movie, Play, ClassicalWork, Publication, etc. perms -- The `perms` object that's in the template. ### Response: def change_object_link_card(obj, perms): """ If the user has permission to change `obj`, show a link to its Admin page. obj -- An object like Movie, Play, ClassicalWork, Publication, etc. perms -- The `perms` object that's in the template. """ # eg: 'movie' or 'classicalwork': name = obj.__class__.__name__.lower() permission = 'spectator.can_edit_{}'.format(name) # eg: 'admin:events_classicalwork_change': change_url_name = 'admin:{}_{}_change'.format(obj._meta.app_label, name) return { 'display_link': (permission in perms), 'change_url': reverse(change_url_name, args=[obj.id]) }
def get_url(self, *paths, **params): """ Returns the URL for this request. :param paths: Additional URL path parts to add to the request :param params: Additional query parameters to add to the request """ path_stack = self._attribute_stack[:] if paths: path_stack.extend(paths) u = self._stack_collapser(path_stack) url = self._url_template % { "domain": self._api_url, "generated_url" : u, } if self._params or params: internal_params = self._params.copy() internal_params.update(params) url += self._generate_params(internal_params) return url
Returns the URL for this request. :param paths: Additional URL path parts to add to the request :param params: Additional query parameters to add to the request
Below is the the instruction that describes the task: ### Input: Returns the URL for this request. :param paths: Additional URL path parts to add to the request :param params: Additional query parameters to add to the request ### Response: def get_url(self, *paths, **params): """ Returns the URL for this request. :param paths: Additional URL path parts to add to the request :param params: Additional query parameters to add to the request """ path_stack = self._attribute_stack[:] if paths: path_stack.extend(paths) u = self._stack_collapser(path_stack) url = self._url_template % { "domain": self._api_url, "generated_url" : u, } if self._params or params: internal_params = self._params.copy() internal_params.update(params) url += self._generate_params(internal_params) return url
def addFreetextAnnot(self, rect, text, fontsize=12, fontname=None, color=None, rotate=0): """Add a 'FreeText' annotation in rectangle 'rect'.""" CheckParent(self) val = _fitz.Page_addFreetextAnnot(self, rect, text, fontsize, fontname, color, rotate) if not val: return val.thisown = True val.parent = weakref.proxy(self) self._annot_refs[id(val)] = val return val
Add a 'FreeText' annotation in rectangle 'rect'.
Below is the the instruction that describes the task: ### Input: Add a 'FreeText' annotation in rectangle 'rect'. ### Response: def addFreetextAnnot(self, rect, text, fontsize=12, fontname=None, color=None, rotate=0): """Add a 'FreeText' annotation in rectangle 'rect'.""" CheckParent(self) val = _fitz.Page_addFreetextAnnot(self, rect, text, fontsize, fontname, color, rotate) if not val: return val.thisown = True val.parent = weakref.proxy(self) self._annot_refs[id(val)] = val return val
def root_chip(self): """The coordinates (x, y) of the chip used to boot the machine.""" # If not known, query the machine if self._root_chip is None: self._root_chip = self.get_software_version(255, 255, 0).position return self._root_chip
The coordinates (x, y) of the chip used to boot the machine.
Below is the the instruction that describes the task: ### Input: The coordinates (x, y) of the chip used to boot the machine. ### Response: def root_chip(self): """The coordinates (x, y) of the chip used to boot the machine.""" # If not known, query the machine if self._root_chip is None: self._root_chip = self.get_software_version(255, 255, 0).position return self._root_chip
def add_event(self, event): """Adds an event to the event file. Args: event: An `Event` protocol buffer. """ if not isinstance(event, event_pb2.Event): raise TypeError("Expected an event_pb2.Event proto, " " but got %s" % type(event)) self._async_writer.write(event.SerializeToString())
Adds an event to the event file. Args: event: An `Event` protocol buffer.
Below is the the instruction that describes the task: ### Input: Adds an event to the event file. Args: event: An `Event` protocol buffer. ### Response: def add_event(self, event): """Adds an event to the event file. Args: event: An `Event` protocol buffer. """ if not isinstance(event, event_pb2.Event): raise TypeError("Expected an event_pb2.Event proto, " " but got %s" % type(event)) self._async_writer.write(event.SerializeToString())
def init_passbands(refresh=False): """ This function should be called only once, at import time. It traverses the passbands directory and builds a lookup table of passband names qualified as 'pbset:pbname' and corresponding files and atmosphere content within. """ global _initialized if not _initialized or refresh: # load information from online passbands first so that any that are # available locally will override online_passbands = list_online_passbands(full_dict=True, refresh=refresh) for pb, info in online_passbands.items(): _pbtable[pb] = {'fname': None, 'atms': info['atms'], 'pb': None} # load global passbands (in install directory) next and then local # (in .phoebe directory) second so that local passbands override # global passbands whenever there is a name conflict for path in [_pbdir_global, _pbdir_local]: for f in os.listdir(path): if f=='README': continue init_passband(path+f) #Check if _pbdir_env has been set and load those passbands too if not _pbdir_env == None: for path in [_pbdir_env]: for f in os.listdir(path): if f=='README': continue init_passband(path+f) _initialized = True
This function should be called only once, at import time. It traverses the passbands directory and builds a lookup table of passband names qualified as 'pbset:pbname' and corresponding files and atmosphere content within.
Below is the the instruction that describes the task: ### Input: This function should be called only once, at import time. It traverses the passbands directory and builds a lookup table of passband names qualified as 'pbset:pbname' and corresponding files and atmosphere content within. ### Response: def init_passbands(refresh=False): """ This function should be called only once, at import time. It traverses the passbands directory and builds a lookup table of passband names qualified as 'pbset:pbname' and corresponding files and atmosphere content within. """ global _initialized if not _initialized or refresh: # load information from online passbands first so that any that are # available locally will override online_passbands = list_online_passbands(full_dict=True, refresh=refresh) for pb, info in online_passbands.items(): _pbtable[pb] = {'fname': None, 'atms': info['atms'], 'pb': None} # load global passbands (in install directory) next and then local # (in .phoebe directory) second so that local passbands override # global passbands whenever there is a name conflict for path in [_pbdir_global, _pbdir_local]: for f in os.listdir(path): if f=='README': continue init_passband(path+f) #Check if _pbdir_env has been set and load those passbands too if not _pbdir_env == None: for path in [_pbdir_env]: for f in os.listdir(path): if f=='README': continue init_passband(path+f) _initialized = True
def define(cls, start, step, num, dtype=None): """Define a new `Index`. The output is basically:: start + numpy.arange(num) * step Parameters ---------- start : `Number` The starting value of the index. step : `Number` The step size of the index. num : `int` The size of the index (number of samples). dtype : `numpy.dtype`, `None`, optional The desired dtype of the index, if not given, defaults to the higher-precision dtype from ``start`` and ``step``. Returns ------- index : `Index` A new `Index` created from the given parameters. """ if dtype is None: dtype = max( numpy.array(start, subok=True, copy=False).dtype, numpy.array(step, subok=True, copy=False).dtype, ) start = start.astype(dtype, copy=False) step = step.astype(dtype, copy=False) return cls(start + numpy.arange(num, dtype=dtype) * step, copy=False)
Define a new `Index`. The output is basically:: start + numpy.arange(num) * step Parameters ---------- start : `Number` The starting value of the index. step : `Number` The step size of the index. num : `int` The size of the index (number of samples). dtype : `numpy.dtype`, `None`, optional The desired dtype of the index, if not given, defaults to the higher-precision dtype from ``start`` and ``step``. Returns ------- index : `Index` A new `Index` created from the given parameters.
Below is the the instruction that describes the task: ### Input: Define a new `Index`. The output is basically:: start + numpy.arange(num) * step Parameters ---------- start : `Number` The starting value of the index. step : `Number` The step size of the index. num : `int` The size of the index (number of samples). dtype : `numpy.dtype`, `None`, optional The desired dtype of the index, if not given, defaults to the higher-precision dtype from ``start`` and ``step``. Returns ------- index : `Index` A new `Index` created from the given parameters. ### Response: def define(cls, start, step, num, dtype=None): """Define a new `Index`. The output is basically:: start + numpy.arange(num) * step Parameters ---------- start : `Number` The starting value of the index. step : `Number` The step size of the index. num : `int` The size of the index (number of samples). dtype : `numpy.dtype`, `None`, optional The desired dtype of the index, if not given, defaults to the higher-precision dtype from ``start`` and ``step``. Returns ------- index : `Index` A new `Index` created from the given parameters. """ if dtype is None: dtype = max( numpy.array(start, subok=True, copy=False).dtype, numpy.array(step, subok=True, copy=False).dtype, ) start = start.astype(dtype, copy=False) step = step.astype(dtype, copy=False) return cls(start + numpy.arange(num, dtype=dtype) * step, copy=False)
def context_path(cls, project, session, context): """Return a fully-qualified context string.""" return google.api_core.path_template.expand( 'projects/{project}/agent/sessions/{session}/contexts/{context}', project=project, session=session, context=context, )
Return a fully-qualified context string.
Below is the the instruction that describes the task: ### Input: Return a fully-qualified context string. ### Response: def context_path(cls, project, session, context): """Return a fully-qualified context string.""" return google.api_core.path_template.expand( 'projects/{project}/agent/sessions/{session}/contexts/{context}', project=project, session=session, context=context, )
def qindex2index(index): """ from a QIndex (row/column coordinate system), get the buffer index of the byte """ r = index.row() c = index.column() if c > 0x10: return (0x10 * r) + c - 0x11 else: return (0x10 * r) + c
from a QIndex (row/column coordinate system), get the buffer index of the byte
Below is the the instruction that describes the task: ### Input: from a QIndex (row/column coordinate system), get the buffer index of the byte ### Response: def qindex2index(index): """ from a QIndex (row/column coordinate system), get the buffer index of the byte """ r = index.row() c = index.column() if c > 0x10: return (0x10 * r) + c - 0x11 else: return (0x10 * r) + c
def publish_alias(self, func_data, alias): """Create or update an alias for the given function. """ if not alias: return func_data['FunctionArn'] func_name = func_data['FunctionName'] func_version = func_data['Version'] exists = resource_exists( self.client.get_alias, FunctionName=func_name, Name=alias) if not exists: log.debug("Publishing custodian lambda alias %s", alias) alias_result = self.client.create_alias( FunctionName=func_name, Name=alias, FunctionVersion=func_version) else: if (exists['FunctionVersion'] == func_version and exists['Name'] == alias): return exists['AliasArn'] log.debug('Updating custodian lambda alias %s', alias) alias_result = self.client.update_alias( FunctionName=func_name, Name=alias, FunctionVersion=func_version) return alias_result['AliasArn']
Create or update an alias for the given function.
Below is the the instruction that describes the task: ### Input: Create or update an alias for the given function. ### Response: def publish_alias(self, func_data, alias): """Create or update an alias for the given function. """ if not alias: return func_data['FunctionArn'] func_name = func_data['FunctionName'] func_version = func_data['Version'] exists = resource_exists( self.client.get_alias, FunctionName=func_name, Name=alias) if not exists: log.debug("Publishing custodian lambda alias %s", alias) alias_result = self.client.create_alias( FunctionName=func_name, Name=alias, FunctionVersion=func_version) else: if (exists['FunctionVersion'] == func_version and exists['Name'] == alias): return exists['AliasArn'] log.debug('Updating custodian lambda alias %s', alias) alias_result = self.client.update_alias( FunctionName=func_name, Name=alias, FunctionVersion=func_version) return alias_result['AliasArn']
def get_real_related(self, id_equip): """ Find reals related with equipment :param id_equip: Identifier of equipment :return: Following dictionary: :: {'vips': [{'port_real': < port_real >, 'server_pool_member_id': < server_pool_member_id >, 'ip': < ip >, 'port_vip': < port_vip >, 'host_name': < host_name >, 'id_vip': < id_vip >, ...], 'equip_name': < equip_name > }} :raise EquipamentoNaoExisteError: Equipment not registered. :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. """ url = 'equipamento/get_real_related/' + str(id_equip) + '/' code, xml = self.submit(None, 'GET', url) data = self.response(code, xml) return data
Find reals related with equipment :param id_equip: Identifier of equipment :return: Following dictionary: :: {'vips': [{'port_real': < port_real >, 'server_pool_member_id': < server_pool_member_id >, 'ip': < ip >, 'port_vip': < port_vip >, 'host_name': < host_name >, 'id_vip': < id_vip >, ...], 'equip_name': < equip_name > }} :raise EquipamentoNaoExisteError: Equipment not registered. :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response.
Below is the the instruction that describes the task: ### Input: Find reals related with equipment :param id_equip: Identifier of equipment :return: Following dictionary: :: {'vips': [{'port_real': < port_real >, 'server_pool_member_id': < server_pool_member_id >, 'ip': < ip >, 'port_vip': < port_vip >, 'host_name': < host_name >, 'id_vip': < id_vip >, ...], 'equip_name': < equip_name > }} :raise EquipamentoNaoExisteError: Equipment not registered. :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. ### Response: def get_real_related(self, id_equip): """ Find reals related with equipment :param id_equip: Identifier of equipment :return: Following dictionary: :: {'vips': [{'port_real': < port_real >, 'server_pool_member_id': < server_pool_member_id >, 'ip': < ip >, 'port_vip': < port_vip >, 'host_name': < host_name >, 'id_vip': < id_vip >, ...], 'equip_name': < equip_name > }} :raise EquipamentoNaoExisteError: Equipment not registered. :raise InvalidParameterError: Some parameter was invalid. :raise DataBaseError: Networkapi failed to access the database. :raise XMLError: Networkapi failed to generate the XML response. """ url = 'equipamento/get_real_related/' + str(id_equip) + '/' code, xml = self.submit(None, 'GET', url) data = self.response(code, xml) return data
def process_credentials_elements(cred_tree): """ Receive an XML object with the credentials to run a scan against a given target. @param: <credentials> <credential type="up" service="ssh" port="22"> <username>scanuser</username> <password>mypass</password> </credential> <credential type="up" service="smb"> <username>smbuser</username> <password>mypass</password> </credential> </credentials> @return: Dictionary containing the credentials for a given target. Example form: {'ssh': {'type': type, 'port': port, 'username': username, 'password': pass, }, 'smb': {'type': type, 'username': username, 'password': pass, }, } """ credentials = {} for credential in cred_tree: service = credential.attrib.get('service') credentials[service] = {} credentials[service]['type'] = credential.attrib.get('type') if service == 'ssh': credentials[service]['port'] = credential.attrib.get('port') for param in credential: credentials[service][param.tag] = param.text return credentials
Receive an XML object with the credentials to run a scan against a given target. @param: <credentials> <credential type="up" service="ssh" port="22"> <username>scanuser</username> <password>mypass</password> </credential> <credential type="up" service="smb"> <username>smbuser</username> <password>mypass</password> </credential> </credentials> @return: Dictionary containing the credentials for a given target. Example form: {'ssh': {'type': type, 'port': port, 'username': username, 'password': pass, }, 'smb': {'type': type, 'username': username, 'password': pass, }, }
Below is the the instruction that describes the task: ### Input: Receive an XML object with the credentials to run a scan against a given target. @param: <credentials> <credential type="up" service="ssh" port="22"> <username>scanuser</username> <password>mypass</password> </credential> <credential type="up" service="smb"> <username>smbuser</username> <password>mypass</password> </credential> </credentials> @return: Dictionary containing the credentials for a given target. Example form: {'ssh': {'type': type, 'port': port, 'username': username, 'password': pass, }, 'smb': {'type': type, 'username': username, 'password': pass, }, } ### Response: def process_credentials_elements(cred_tree): """ Receive an XML object with the credentials to run a scan against a given target. @param: <credentials> <credential type="up" service="ssh" port="22"> <username>scanuser</username> <password>mypass</password> </credential> <credential type="up" service="smb"> <username>smbuser</username> <password>mypass</password> </credential> </credentials> @return: Dictionary containing the credentials for a given target. Example form: {'ssh': {'type': type, 'port': port, 'username': username, 'password': pass, }, 'smb': {'type': type, 'username': username, 'password': pass, }, } """ credentials = {} for credential in cred_tree: service = credential.attrib.get('service') credentials[service] = {} credentials[service]['type'] = credential.attrib.get('type') if service == 'ssh': credentials[service]['port'] = credential.attrib.get('port') for param in credential: credentials[service][param.tag] = param.text return credentials
def generate(basename, xml): '''generate complete MAVLink CSharp implementation''' structsfilename = basename + '.generated.cs' msgs = [] enums = [] filelist = [] for x in xml: msgs.extend(x.message) enums.extend(x.enum) filelist.append(os.path.basename(x.filename)) for m in msgs: m.order_map = [ 0 ] * len(m.fieldnames) for i in range(0, len(m.fieldnames)): m.order_map[i] = m.ordered_fieldnames.index(m.fieldnames[i]) m.fields_in_order = [] for i in range(0, len(m.fieldnames)): m.order_map[i] = m.ordered_fieldnames.index(m.fieldnames[i]) print("Generating messages file: %s" % structsfilename) dir = os.path.dirname(structsfilename) if not os.path.exists(dir): os.makedirs(dir) outf = open(structsfilename, "w") generate_preamble(outf, msgs, filelist, xml[0]) outf.write(""" using System.Reflection; [assembly: AssemblyTitle("Mavlink Classes")] [assembly: AssemblyDescription("Generated Message Classes for Mavlink. See http://qgroundcontrol.org/mavlink/start")] [assembly: AssemblyProduct("Mavlink")] [assembly: AssemblyVersion("1.0.0.0")] [assembly: AssemblyFileVersion("1.0.0.0")] """) generate_enums(outf, enums) generate_classes(outf, msgs) outf.close() print("Generating the (De)Serializer classes") serfilename = basename + '_codec.generated.cs' outf = open(serfilename, "w") generate_CodecIndex(outf, msgs, xml) generate_Deserialization(outf, msgs) generate_Serialization(outf, msgs) outf.write("\t}\n\n") outf.write("}\n\n") outf.close() # Some build commands depend on the platform - eg MS .NET Windows Vs Mono on Linux if platform.system() == "Windows": winpath=os.environ['WinDir'] cscCommand = winpath + "\\Microsoft.NET\\Framework\\v4.0.30319\\csc.exe" if (os.path.exists(cscCommand)==False): print("\nError: CS compiler not found. \n.Net Assembly generation skipped") return else: print("Error:.Net Assembly generation not yet supported on non Windows platforms") return cscCommand = "csc" print("Compiling Assembly for .Net Framework 4.0") generatedCsFiles = [ serfilename, structsfilename] includedCsFiles = [ 'CS/common/ByteArrayUtil.cs', 'CS/common/FrameworkBitConverter.cs', 'CS/common/Mavlink.cs' ] outputLibraryPath = os.path.normpath(dir + "/mavlink.dll") compileCommand = "%s %s" % (cscCommand, "/target:library /debug /out:" + outputLibraryPath) compileCommand = compileCommand + " /doc:" + os.path.normpath(dir + "/mavlink.xml") for csFile in generatedCsFiles + includedCsFiles: compileCommand = compileCommand + " " + os.path.normpath(csFile) #print("Cmd:" + compileCommand) res = os.system (compileCommand) if res == '0': print("Generated %s OK" % filename) else: print("Error")
generate complete MAVLink CSharp implementation
Below is the instruction that describes the task: ### Input: generate complete MAVLink CSharp implementation ### Response: def generate(basename, xml): '''generate complete MAVLink CSharp implementation''' structsfilename = basename + '.generated.cs' msgs = [] enums = [] filelist = [] for x in xml: msgs.extend(x.message) enums.extend(x.enum) filelist.append(os.path.basename(x.filename)) for m in msgs: m.order_map = [ 0 ] * len(m.fieldnames) for i in range(0, len(m.fieldnames)): m.order_map[i] = m.ordered_fieldnames.index(m.fieldnames[i]) m.fields_in_order = [] for i in range(0, len(m.fieldnames)): m.order_map[i] = m.ordered_fieldnames.index(m.fieldnames[i]) print("Generating messages file: %s" % structsfilename) dir = os.path.dirname(structsfilename) if not os.path.exists(dir): os.makedirs(dir) outf = open(structsfilename, "w") generate_preamble(outf, msgs, filelist, xml[0]) outf.write(""" using System.Reflection; [assembly: AssemblyTitle("Mavlink Classes")] [assembly: AssemblyDescription("Generated Message Classes for Mavlink. See http://qgroundcontrol.org/mavlink/start")] [assembly: AssemblyProduct("Mavlink")] [assembly: AssemblyVersion("1.0.0.0")] [assembly: AssemblyFileVersion("1.0.0.0")] """) generate_enums(outf, enums) generate_classes(outf, msgs) outf.close() print("Generating the (De)Serializer classes") serfilename = basename + '_codec.generated.cs' outf = open(serfilename, "w") generate_CodecIndex(outf, msgs, xml) generate_Deserialization(outf, msgs) generate_Serialization(outf, msgs) outf.write("\t}\n\n") outf.write("}\n\n") outf.close() # Some build commands depend on the platform - eg MS .NET Windows Vs Mono on Linux if platform.system() == "Windows": winpath=os.environ['WinDir'] cscCommand = winpath + "\\Microsoft.NET\\Framework\\v4.0.30319\\csc.exe" if (os.path.exists(cscCommand)==False): print("\nError: CS compiler not found. .Net Assembly generation skipped") return else: print("Error: .Net Assembly generation not yet supported on non Windows platforms") return cscCommand = "csc" print("Compiling Assembly for .Net Framework 4.0") generatedCsFiles = [ serfilename, structsfilename] includedCsFiles = [ 'CS/common/ByteArrayUtil.cs', 'CS/common/FrameworkBitConverter.cs', 'CS/common/Mavlink.cs' ] outputLibraryPath = os.path.normpath(dir + "/mavlink.dll") compileCommand = "%s %s" % (cscCommand, "/target:library /debug /out:" + outputLibraryPath) compileCommand = compileCommand + " /doc:" + os.path.normpath(dir + "/mavlink.xml") for csFile in generatedCsFiles + includedCsFiles: compileCommand = compileCommand + " " + os.path.normpath(csFile) #print("Cmd:" + compileCommand) res = os.system (compileCommand) if res == 0: print("Generated %s OK" % outputLibraryPath) else: print("Error")
def select_ignore_interrupts(iwtd, owtd, ewtd, timeout=None): '''This is a wrapper around select.select() that ignores signals. If select.select raises a select.error exception and errno is an EINTR error then it is ignored. Mainly this is used to ignore sigwinch (terminal resize). ''' # if select() is interrupted by a signal (errno==EINTR) then # we loop back and enter the select() again. if timeout is not None: end_time = time.time() + timeout while True: try: return select.select(iwtd, owtd, ewtd, timeout) except InterruptedError: err = sys.exc_info()[1] if err.args[0] == errno.EINTR: # if we loop back we have to subtract the # amount of time we already waited. if timeout is not None: timeout = end_time - time.time() if timeout < 0: return([], [], []) else: # something else caused the select.error, so # this actually is an exception. raise
This is a wrapper around select.select() that ignores signals. If select.select raises a select.error exception and errno is an EINTR error then it is ignored. Mainly this is used to ignore sigwinch (terminal resize).
Below is the instruction that describes the task: ### Input: This is a wrapper around select.select() that ignores signals. If select.select raises a select.error exception and errno is an EINTR error then it is ignored. Mainly this is used to ignore sigwinch (terminal resize). ### Response: def select_ignore_interrupts(iwtd, owtd, ewtd, timeout=None): '''This is a wrapper around select.select() that ignores signals. If select.select raises a select.error exception and errno is an EINTR error then it is ignored. Mainly this is used to ignore sigwinch (terminal resize). ''' # if select() is interrupted by a signal (errno==EINTR) then # we loop back and enter the select() again. if timeout is not None: end_time = time.time() + timeout while True: try: return select.select(iwtd, owtd, ewtd, timeout) except InterruptedError: err = sys.exc_info()[1] if err.args[0] == errno.EINTR: # if we loop back we have to subtract the # amount of time we already waited. if timeout is not None: timeout = end_time - time.time() if timeout < 0: return([], [], []) else: # something else caused the select.error, so # this actually is an exception. raise
def get_atoms(self, ligands=True, inc_alt_states=False): """Flat list of all the Atoms in the Polymer. Parameters ---------- inc_alt_states : bool If true atoms from alternate conformations are included rather than only the "active" states. Returns ------- atoms : itertools.chain Returns an iterator of all the atoms. Convert to list if you require indexing. """ if ligands and self.ligands: monomers = self._monomers + self.ligands._monomers else: monomers = self._monomers atoms = itertools.chain( *(list(m.get_atoms(inc_alt_states=inc_alt_states)) for m in monomers)) return atoms
Flat list of all the Atoms in the Polymer. Parameters ---------- inc_alt_states : bool If true atoms from alternate conformations are included rather than only the "active" states. Returns ------- atoms : itertools.chain Returns an iterator of all the atoms. Convert to list if you require indexing.
Below is the instruction that describes the task: ### Input: Flat list of all the Atoms in the Polymer. Parameters ---------- inc_alt_states : bool If true atoms from alternate conformations are included rather than only the "active" states. Returns ------- atoms : itertools.chain Returns an iterator of all the atoms. Convert to list if you require indexing. ### Response: def get_atoms(self, ligands=True, inc_alt_states=False): """Flat list of all the Atoms in the Polymer. Parameters ---------- inc_alt_states : bool If true atoms from alternate conformations are included rather than only the "active" states. Returns ------- atoms : itertools.chain Returns an iterator of all the atoms. Convert to list if you require indexing. """ if ligands and self.ligands: monomers = self._monomers + self.ligands._monomers else: monomers = self._monomers atoms = itertools.chain( *(list(m.get_atoms(inc_alt_states=inc_alt_states)) for m in monomers)) return atoms
def swaggerize_response(response, op): """ Delegate handling the Swagger concerns of the response to bravado-core. :type response: :class:`pyramid.response.Response` :type op: :class:`bravado_core.operation.Operation` """ response_spec = get_response_spec(response.status_int, op) bravado_core.response.validate_response( response_spec, op, PyramidSwaggerResponse(response))
Delegate handling the Swagger concerns of the response to bravado-core. :type response: :class:`pyramid.response.Response` :type op: :class:`bravado_core.operation.Operation`
Below is the instruction that describes the task: ### Input: Delegate handling the Swagger concerns of the response to bravado-core. :type response: :class:`pyramid.response.Response` :type op: :class:`bravado_core.operation.Operation` ### Response: def swaggerize_response(response, op): """ Delegate handling the Swagger concerns of the response to bravado-core. :type response: :class:`pyramid.response.Response` :type op: :class:`bravado_core.operation.Operation` """ response_spec = get_response_spec(response.status_int, op) bravado_core.response.validate_response( response_spec, op, PyramidSwaggerResponse(response))
def fmt(self, fills): """Format block (CSS) args: fills (dict): Fill elements returns: str (CSS) """ f = "%(identifier)s%(ws)s{%(nl)s%(proplist)s}%(eb)s" out = [] name = self.name.fmt(fills) if self.parsed and any( p for p in self.parsed if str(type(p)) != "<class 'lesscpy.plib.variable.Variable'>"): fills.update({ 'identifier': name, 'proplist': ''.join([p.fmt(fills) for p in self.parsed if p]), }) out.append(f % fills) if hasattr(self, 'inner'): if self.name.subparse and len(self.inner) > 0: # @media inner = ''.join([p.fmt(fills) for p in self.inner]) inner = inner.replace(fills['nl'], fills['nl'] + fills['tab']).rstrip( fills['tab']) if not fills['nl']: inner = inner.strip() fills.update({ 'identifier': name, 'proplist': fills['tab'] + inner }) out.append(f % fills) else: out.append(''.join([p.fmt(fills) for p in self.inner])) return ''.join(out)
Format block (CSS) args: fills (dict): Fill elements returns: str (CSS)
Below is the instruction that describes the task: ### Input: Format block (CSS) args: fills (dict): Fill elements returns: str (CSS) ### Response: def fmt(self, fills): """Format block (CSS) args: fills (dict): Fill elements returns: str (CSS) """ f = "%(identifier)s%(ws)s{%(nl)s%(proplist)s}%(eb)s" out = [] name = self.name.fmt(fills) if self.parsed and any( p for p in self.parsed if str(type(p)) != "<class 'lesscpy.plib.variable.Variable'>"): fills.update({ 'identifier': name, 'proplist': ''.join([p.fmt(fills) for p in self.parsed if p]), }) out.append(f % fills) if hasattr(self, 'inner'): if self.name.subparse and len(self.inner) > 0: # @media inner = ''.join([p.fmt(fills) for p in self.inner]) inner = inner.replace(fills['nl'], fills['nl'] + fills['tab']).rstrip( fills['tab']) if not fills['nl']: inner = inner.strip() fills.update({ 'identifier': name, 'proplist': fills['tab'] + inner }) out.append(f % fills) else: out.append(''.join([p.fmt(fills) for p in self.inner])) return ''.join(out)
def parse_copy_object(bucket_name, object_name, data): """ Parser for copy object response. :param data: Response data for copy object. :return: :class:`CopyObjectResult <CopyObjectResult>` """ root = S3Element.fromstring('CopyObjectResult', data) return CopyObjectResult( bucket_name, object_name, root.get_etag_elem(), root.get_localized_time_elem('LastModified') )
Parser for copy object response. :param data: Response data for copy object. :return: :class:`CopyObjectResult <CopyObjectResult>`
Below is the instruction that describes the task: ### Input: Parser for copy object response. :param data: Response data for copy object. :return: :class:`CopyObjectResult <CopyObjectResult>` ### Response: def parse_copy_object(bucket_name, object_name, data): """ Parser for copy object response. :param data: Response data for copy object. :return: :class:`CopyObjectResult <CopyObjectResult>` """ root = S3Element.fromstring('CopyObjectResult', data) return CopyObjectResult( bucket_name, object_name, root.get_etag_elem(), root.get_localized_time_elem('LastModified') )
def _load_assembly_mapping_data(filename): """ Load assembly mapping data. Parameters ---------- filename : str path to compressed archive with assembly mapping data Returns ------- assembly_mapping_data : dict dict of assembly maps if loading was successful, else None Notes ----- Keys of returned dict are chromosomes and values are the corresponding assembly map. """ try: assembly_mapping_data = {} with tarfile.open(filename, "r") as tar: # http://stackoverflow.com/a/2018576 for member in tar.getmembers(): if ".json" in member.name: with tar.extractfile(member) as tar_file: tar_bytes = tar_file.read() # https://stackoverflow.com/a/42683509/4727627 assembly_mapping_data[member.name.split(".")[0]] = json.loads( tar_bytes.decode("utf-8") ) return assembly_mapping_data except Exception as err: print(err) return None
Load assembly mapping data. Parameters ---------- filename : str path to compressed archive with assembly mapping data Returns ------- assembly_mapping_data : dict dict of assembly maps if loading was successful, else None Notes ----- Keys of returned dict are chromosomes and values are the corresponding assembly map.
Below is the instruction that describes the task: ### Input: Load assembly mapping data. Parameters ---------- filename : str path to compressed archive with assembly mapping data Returns ------- assembly_mapping_data : dict dict of assembly maps if loading was successful, else None Notes ----- Keys of returned dict are chromosomes and values are the corresponding assembly map. ### Response: def _load_assembly_mapping_data(filename): """ Load assembly mapping data. Parameters ---------- filename : str path to compressed archive with assembly mapping data Returns ------- assembly_mapping_data : dict dict of assembly maps if loading was successful, else None Notes ----- Keys of returned dict are chromosomes and values are the corresponding assembly map. """ try: assembly_mapping_data = {} with tarfile.open(filename, "r") as tar: # http://stackoverflow.com/a/2018576 for member in tar.getmembers(): if ".json" in member.name: with tar.extractfile(member) as tar_file: tar_bytes = tar_file.read() # https://stackoverflow.com/a/42683509/4727627 assembly_mapping_data[member.name.split(".")[0]] = json.loads( tar_bytes.decode("utf-8") ) return assembly_mapping_data except Exception as err: print(err) return None
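The `_load_assembly_mapping_data` record above illustrates a reusable pattern: pull every `.json` member out of a tar archive and key the result by the first dot-separated component of the member name. A minimal standalone sketch of that pattern follows; the function names and the archive contents here are invented for illustration, and the archive is built in memory so nothing touches disk.

```python
import io
import json
import tarfile

def load_json_members(tar_bytes):
    """Map each .json member's base name to its parsed JSON content."""
    data = {}
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r") as tar:
        for member in tar.getmembers():
            if ".json" in member.name:
                # extractfile() returns a readable file object for the member
                with tar.extractfile(member) as member_file:
                    data[member.name.split(".")[0]] = json.loads(
                        member_file.read().decode("utf-8"))
    return data

def make_demo_archive():
    """Build an in-memory tar with one JSON member (test fixture only)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        payload = json.dumps({"start": 1, "end": 100}).encode("utf-8")
        info = tarfile.TarInfo(name="1.json")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()
```

As in the record, the try/except around the whole load (returning `None` on any failure) can be layered on top if that error contract is wanted.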
def p_chars(self, p): """chars : | chars char""" if len(p) == 1: p[0] = unicode() else: p[0] = p[1] + p[2]
chars : | chars char
Below is the instruction that describes the task: ### Input: chars : | chars char ### Response: def p_chars(self, p): """chars : | chars char""" if len(p) == 1: p[0] = unicode() else: p[0] = p[1] + p[2]
def vmeasure(reference_intervals, reference_labels, estimated_intervals, estimated_labels, frame_size=0.1, beta=1.0): """Frame-clustering segmentation: v-measure Computes cross-entropy of cluster assignment, normalized by the marginal-entropy. This is equivalent to `nce(..., marginal=True)`. Examples -------- >>> (ref_intervals, ... ref_labels) = mir_eval.io.load_labeled_intervals('ref.lab') >>> (est_intervals, ... est_labels) = mir_eval.io.load_labeled_intervals('est.lab') >>> # Trim or pad the estimate to match reference timing >>> (ref_intervals, ... ref_labels) = mir_eval.util.adjust_intervals(ref_intervals, ... ref_labels, ... t_min=0) >>> (est_intervals, ... est_labels) = mir_eval.util.adjust_intervals( ... est_intervals, est_labels, t_min=0, t_max=ref_intervals.max()) >>> V_precision, V_recall, V_F = mir_eval.structure.vmeasure(ref_intervals, ... ref_labels, ... est_intervals, ... est_labels) Parameters ---------- reference_intervals : np.ndarray, shape=(n, 2) reference segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. reference_labels : list, shape=(n,) reference segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_intervals : np.ndarray, shape=(m, 2) estimated segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_labels : list, shape=(m,) estimated segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. frame_size : float > 0 length (in seconds) of frames for clustering (Default value = 0.1) beta : float > 0 beta for F-measure (Default value = 1.0) Returns ------- V_precision Over-clustering score: ``1 - H(y_est | y_ref) / H(y_est)`` If `|y_est|==1`, then `V_precision` will be 0. V_recall Under-clustering score: ``1 - H(y_ref | y_est) / H(y_ref)`` If `|y_ref|==1`, then `V_recall` will be 0. V_F F-measure for (V_precision, V_recall) """ return nce(reference_intervals, reference_labels, estimated_intervals, estimated_labels, frame_size=frame_size, beta=beta, marginal=True)
Frame-clustering segmentation: v-measure Computes cross-entropy of cluster assignment, normalized by the marginal-entropy. This is equivalent to `nce(..., marginal=True)`. Examples -------- >>> (ref_intervals, ... ref_labels) = mir_eval.io.load_labeled_intervals('ref.lab') >>> (est_intervals, ... est_labels) = mir_eval.io.load_labeled_intervals('est.lab') >>> # Trim or pad the estimate to match reference timing >>> (ref_intervals, ... ref_labels) = mir_eval.util.adjust_intervals(ref_intervals, ... ref_labels, ... t_min=0) >>> (est_intervals, ... est_labels) = mir_eval.util.adjust_intervals( ... est_intervals, est_labels, t_min=0, t_max=ref_intervals.max()) >>> V_precision, V_recall, V_F = mir_eval.structure.vmeasure(ref_intervals, ... ref_labels, ... est_intervals, ... est_labels) Parameters ---------- reference_intervals : np.ndarray, shape=(n, 2) reference segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. reference_labels : list, shape=(n,) reference segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_intervals : np.ndarray, shape=(m, 2) estimated segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_labels : list, shape=(m,) estimated segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. frame_size : float > 0 length (in seconds) of frames for clustering (Default value = 0.1) beta : float > 0 beta for F-measure (Default value = 1.0) Returns ------- V_precision Over-clustering score: ``1 - H(y_est | y_ref) / H(y_est)`` If `|y_est|==1`, then `V_precision` will be 0. V_recall Under-clustering score: ``1 - H(y_ref | y_est) / H(y_ref)`` If `|y_ref|==1`, then `V_recall` will be 0. V_F F-measure for (V_precision, V_recall)
Below is the instruction that describes the task: ### Input: Frame-clustering segmentation: v-measure Computes cross-entropy of cluster assignment, normalized by the marginal-entropy. This is equivalent to `nce(..., marginal=True)`. Examples -------- >>> (ref_intervals, ... ref_labels) = mir_eval.io.load_labeled_intervals('ref.lab') >>> (est_intervals, ... est_labels) = mir_eval.io.load_labeled_intervals('est.lab') >>> # Trim or pad the estimate to match reference timing >>> (ref_intervals, ... ref_labels) = mir_eval.util.adjust_intervals(ref_intervals, ... ref_labels, ... t_min=0) >>> (est_intervals, ... est_labels) = mir_eval.util.adjust_intervals( ... est_intervals, est_labels, t_min=0, t_max=ref_intervals.max()) >>> V_precision, V_recall, V_F = mir_eval.structure.vmeasure(ref_intervals, ... ref_labels, ... est_intervals, ... est_labels) Parameters ---------- reference_intervals : np.ndarray, shape=(n, 2) reference segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. reference_labels : list, shape=(n,) reference segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_intervals : np.ndarray, shape=(m, 2) estimated segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_labels : list, shape=(m,) estimated segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. frame_size : float > 0 length (in seconds) of frames for clustering (Default value = 0.1) beta : float > 0 beta for F-measure (Default value = 1.0) Returns ------- V_precision Over-clustering score: ``1 - H(y_est | y_ref) / H(y_est)`` If `|y_est|==1`, then `V_precision` will be 0. V_recall Under-clustering score: ``1 - H(y_ref | y_est) / H(y_ref)`` If `|y_ref|==1`, then `V_recall` will be 0. V_F F-measure for (V_precision, V_recall) ### Response: def vmeasure(reference_intervals, reference_labels, estimated_intervals, estimated_labels, frame_size=0.1, beta=1.0): """Frame-clustering segmentation: v-measure Computes cross-entropy of cluster assignment, normalized by the marginal-entropy. This is equivalent to `nce(..., marginal=True)`. Examples -------- >>> (ref_intervals, ... ref_labels) = mir_eval.io.load_labeled_intervals('ref.lab') >>> (est_intervals, ... est_labels) = mir_eval.io.load_labeled_intervals('est.lab') >>> # Trim or pad the estimate to match reference timing >>> (ref_intervals, ... ref_labels) = mir_eval.util.adjust_intervals(ref_intervals, ... ref_labels, ... t_min=0) >>> (est_intervals, ... est_labels) = mir_eval.util.adjust_intervals( ... est_intervals, est_labels, t_min=0, t_max=ref_intervals.max()) >>> V_precision, V_recall, V_F = mir_eval.structure.vmeasure(ref_intervals, ... ref_labels, ... est_intervals, ... est_labels) Parameters ---------- reference_intervals : np.ndarray, shape=(n, 2) reference segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. reference_labels : list, shape=(n,) reference segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_intervals : np.ndarray, shape=(m, 2) estimated segment intervals, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. estimated_labels : list, shape=(m,) estimated segment labels, in the format returned by :func:`mir_eval.io.load_labeled_intervals`. frame_size : float > 0 length (in seconds) of frames for clustering (Default value = 0.1) beta : float > 0 beta for F-measure (Default value = 1.0) Returns ------- V_precision Over-clustering score: ``1 - H(y_est | y_ref) / H(y_est)`` If `|y_est|==1`, then `V_precision` will be 0. V_recall Under-clustering score: ``1 - H(y_ref | y_est) / H(y_ref)`` If `|y_ref|==1`, then `V_recall` will be 0. V_F F-measure for (V_precision, V_recall) """ return nce(reference_intervals, reference_labels, estimated_intervals, estimated_labels, frame_size=frame_size, beta=beta, marginal=True)
def zlim(name, min, max): """ This function will set the z axis range displayed for a specific tplot variable. This is only used for spec plots, where the z axis represents the magnitude of the values in each bin. Parameters: name : str The name of the tplot variable that you wish to set z limits for. min : flt The start of the z axis. max : flt The end of the z axis. Returns: None Examples: >>> # Change the z range of Variable1 >>> import pytplot >>> x_data = [1,2,3] >>> y_data = [ [1,2,3] , [4,5,6], [7,8,9] ] >>> v_data = [1,2,3] >>> pytplot.store_data("Variable3", data={'x':x_data, 'y':y_data, 'v':v_data}) >>> pytplot.zlim('Variable1', 2, 3) """ if name not in data_quants.keys(): print("That name is currently not in pytplot.") return temp_data_quant = data_quants[name] temp_data_quant.zaxis_opt['z_range'] = [min, max] return
This function will set the z axis range displayed for a specific tplot variable. This is only used for spec plots, where the z axis represents the magnitude of the values in each bin. Parameters: name : str The name of the tplot variable that you wish to set z limits for. min : flt The start of the z axis. max : flt The end of the z axis. Returns: None Examples: >>> # Change the z range of Variable1 >>> import pytplot >>> x_data = [1,2,3] >>> y_data = [ [1,2,3] , [4,5,6], [7,8,9] ] >>> v_data = [1,2,3] >>> pytplot.store_data("Variable3", data={'x':x_data, 'y':y_data, 'v':v_data}) >>> pytplot.zlim('Variable1', 2, 3)
Below is the instruction that describes the task: ### Input: This function will set the z axis range displayed for a specific tplot variable. This is only used for spec plots, where the z axis represents the magnitude of the values in each bin. Parameters: name : str The name of the tplot variable that you wish to set z limits for. min : flt The start of the z axis. max : flt The end of the z axis. Returns: None Examples: >>> # Change the z range of Variable1 >>> import pytplot >>> x_data = [1,2,3] >>> y_data = [ [1,2,3] , [4,5,6], [7,8,9] ] >>> v_data = [1,2,3] >>> pytplot.store_data("Variable3", data={'x':x_data, 'y':y_data, 'v':v_data}) >>> pytplot.zlim('Variable1', 2, 3) ### Response: def zlim(name, min, max): """ This function will set the z axis range displayed for a specific tplot variable. This is only used for spec plots, where the z axis represents the magnitude of the values in each bin. Parameters: name : str The name of the tplot variable that you wish to set z limits for. min : flt The start of the z axis. max : flt The end of the z axis. Returns: None Examples: >>> # Change the z range of Variable1 >>> import pytplot >>> x_data = [1,2,3] >>> y_data = [ [1,2,3] , [4,5,6], [7,8,9] ] >>> v_data = [1,2,3] >>> pytplot.store_data("Variable3", data={'x':x_data, 'y':y_data, 'v':v_data}) >>> pytplot.zlim('Variable1', 2, 3) """ if name not in data_quants.keys(): print("That name is currently not in pytplot.") return temp_data_quant = data_quants[name] temp_data_quant.zaxis_opt['z_range'] = [min, max] return
def pack_new_sequence(self, sequence): """Packs a new sequence onto the polymer using Scwrl4. Parameters ---------- sequence : str String containing the amino acid sequence. This must be the same length as the Polymer Raises ------ ValueError Raised if the sequence length does not match the number of monomers in the Polymer. """ # This import is here to prevent a circular import. from ampal.pdb_parser import convert_pdb_to_ampal polymer_bb = self.backbone if len(sequence) != len(polymer_bb): raise ValueError( 'Sequence length ({}) does not match Polymer length ({}).'.format( len(sequence), len(polymer_bb))) scwrl_out = pack_sidechains(self.backbone.pdb, sequence) if scwrl_out is None: return else: packed_structure, scwrl_score = scwrl_out new_assembly = convert_pdb_to_ampal(packed_structure, path=False) self._monomers = new_assembly[0]._monomers[:] self.tags['scwrl_score'] = scwrl_score self.assign_force_field(global_settings['buff']['force_field']) return
Packs a new sequence onto the polymer using Scwrl4. Parameters ---------- sequence : str String containing the amino acid sequence. This must be the same length as the Polymer Raises ------ ValueError Raised if the sequence length does not match the number of monomers in the Polymer.
Below is the instruction that describes the task: ### Input: Packs a new sequence onto the polymer using Scwrl4. Parameters ---------- sequence : str String containing the amino acid sequence. This must be the same length as the Polymer Raises ------ ValueError Raised if the sequence length does not match the number of monomers in the Polymer. ### Response: def pack_new_sequence(self, sequence): """Packs a new sequence onto the polymer using Scwrl4. Parameters ---------- sequence : str String containing the amino acid sequence. This must be the same length as the Polymer Raises ------ ValueError Raised if the sequence length does not match the number of monomers in the Polymer. """ # This import is here to prevent a circular import. from ampal.pdb_parser import convert_pdb_to_ampal polymer_bb = self.backbone if len(sequence) != len(polymer_bb): raise ValueError( 'Sequence length ({}) does not match Polymer length ({}).'.format( len(sequence), len(polymer_bb))) scwrl_out = pack_sidechains(self.backbone.pdb, sequence) if scwrl_out is None: return else: packed_structure, scwrl_score = scwrl_out new_assembly = convert_pdb_to_ampal(packed_structure, path=False) self._monomers = new_assembly[0]._monomers[:] self.tags['scwrl_score'] = scwrl_score self.assign_force_field(global_settings['buff']['force_field']) return
def predict_proba(self, X): """Returns the predicted probabilities for ``X``. Arguments: X (array-like or sparse matrix of shape (n_samples, n_features)): The input samples. Sparse matrices are accepted only if they are supported by the weak model. Returns: array of shape (n_samples, n_classes) containing the predicted probabilities. """ return collections.deque(self.iter_predict_proba(X), maxlen=1).pop()
Returns the predicted probabilities for ``X``. Arguments: X (array-like or sparse matrix of shape (n_samples, n_features)): The input samples. Sparse matrices are accepted only if they are supported by the weak model. Returns: array of shape (n_samples, n_classes) containing the predicted probabilities.
Below is the instruction that describes the task: ### Input: Returns the predicted probabilities for ``X``. Arguments: X (array-like or sparse matrix of shape (n_samples, n_features)): The input samples. Sparse matrices are accepted only if they are supported by the weak model. Returns: array of shape (n_samples, n_classes) containing the predicted probabilities. ### Response: def predict_proba(self, X): """Returns the predicted probabilities for ``X``. Arguments: X (array-like or sparse matrix of shape (n_samples, n_features)): The input samples. Sparse matrices are accepted only if they are supported by the weak model. Returns: array of shape (n_samples, n_classes) containing the predicted probabilities. """ return collections.deque(self.iter_predict_proba(X), maxlen=1).pop()
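The one-liner in the `predict_proba` record above uses `collections.deque(iterable, maxlen=1).pop()` to drain an iterator and keep only its final element, which is how the staged `iter_predict_proba` results collapse to the last stage's probabilities. A self-contained illustration of the idiom follows; the `staged_estimates` generator is invented for the demo and only stands in for a staged predictor.

```python
import collections

def last(iterable):
    """Consume an iterable at C speed and return its final item.

    Raises IndexError if the iterable is empty, matching deque.pop().
    """
    return collections.deque(iterable, maxlen=1).pop()

def staged_estimates(n_stages):
    """Stand-in for iter_predict_proba: each stage refines the estimate."""
    estimate = 0.0
    for stage in range(1, n_stages + 1):
        estimate += 1.0 / stage  # pretend each stage adds a correction
        yield estimate
```

Because the deque is bounded to one slot, earlier items are discarded as they arrive, so memory stays constant no matter how many stages the iterator yields.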
def pivot(self, md5, tag=''): """Pivot on an md5 (md5 can be a single sample or a sample_set) Args: md5: The md5 can be a single sample or a sample_set tags (optional): a tag for the sample (for the prompt) Returns: Nothing but it sets the active sample/sample_set """ # Is the md5 a tag? ss = self.workbench.generate_sample_set(md5) if ss: tag = md5 if not tag else tag md5 = ss # Is the md5 a sample_set? if self.workbench.is_sample_set(md5): # Is the sample_set one sample? ss = self.workbench.get_sample_set(md5) if len(ss) == 1: md5 = ss[0] deco = '(%s:%d)' % (tag, len(ss)) self.ipshell.push({'prompt_deco': deco}) else: deco = '(%s:1)' % tag self.ipshell.push({'prompt_deco': deco}) # Set the new md5 self.session.md5 = md5 self.session.short_md5 = md5[:6] self.ipshell.push({'md5': self.session.md5}) self.ipshell.push({'short_md5': self.session.short_md5})
Pivot on an md5 (md5 can be a single sample or a sample_set) Args: md5: The md5 can be a single sample or a sample_set tags (optional): a tag for the sample (for the prompt) Returns: Nothing but it sets the active sample/sample_set
Below is the instruction that describes the task: ### Input: Pivot on an md5 (md5 can be a single sample or a sample_set) Args: md5: The md5 can be a single sample or a sample_set tags (optional): a tag for the sample (for the prompt) Returns: Nothing but it sets the active sample/sample_set ### Response: def pivot(self, md5, tag=''): """Pivot on an md5 (md5 can be a single sample or a sample_set) Args: md5: The md5 can be a single sample or a sample_set tags (optional): a tag for the sample (for the prompt) Returns: Nothing but it sets the active sample/sample_set """ # Is the md5 a tag? ss = self.workbench.generate_sample_set(md5) if ss: tag = md5 if not tag else tag md5 = ss # Is the md5 a sample_set? if self.workbench.is_sample_set(md5): # Is the sample_set one sample? ss = self.workbench.get_sample_set(md5) if len(ss) == 1: md5 = ss[0] deco = '(%s:%d)' % (tag, len(ss)) self.ipshell.push({'prompt_deco': deco}) else: deco = '(%s:1)' % tag self.ipshell.push({'prompt_deco': deco}) # Set the new md5 self.session.md5 = md5 self.session.short_md5 = md5[:6] self.ipshell.push({'md5': self.session.md5}) self.ipshell.push({'short_md5': self.session.short_md5})
def disabledBrush( self ): """ Return the main brush for this node. :return <QBrush> """ # create the background brush grad = QLinearGradient() rect = self.rect() grad.setStart(QPointF(0, rect.y())) grad.setFinalStop(QPointF(0, rect.bottom())) grad.setColorAt(0, self.disabledColor()) grad.setColorAt(1, self.disabledAlternateColor()) return QBrush(grad)
Return the main brush for this node. :return <QBrush>
Below is the instruction that describes the task: ### Input: Return the main brush for this node. :return <QBrush> ### Response: def disabledBrush( self ): """ Return the main brush for this node. :return <QBrush> """ # create the background brush grad = QLinearGradient() rect = self.rect() grad.setStart(QPointF(0, rect.y())) grad.setFinalStop(QPointF(0, rect.bottom())) grad.setColorAt(0, self.disabledColor()) grad.setColorAt(1, self.disabledAlternateColor()) return QBrush(grad)
def list_nodes(conn=None, call=None): ''' Return a list of the VMs that are on the provider ''' if call == 'action': raise SaltCloudSystemExit( 'The list_nodes function must be called with -f or --function.' ) if not conn: conn = get_conn() # pylint: disable=E0602 nodes = conn.list_nodes() ret = {} for node in nodes: ret[node.name] = { 'id': node.id, 'image': node.image, 'name': node.name, 'private_ips': node.private_ips, 'public_ips': node.public_ips, 'size': node.size, 'state': node_state(node.state) } return ret
Return a list of the VMs that are on the provider
Below is the instruction that describes the task: ### Input: Return a list of the VMs that are on the provider ### Response: def list_nodes(conn=None, call=None): ''' Return a list of the VMs that are on the provider ''' if call == 'action': raise SaltCloudSystemExit( 'The list_nodes function must be called with -f or --function.' ) if not conn: conn = get_conn() # pylint: disable=E0602 nodes = conn.list_nodes() ret = {} for node in nodes: ret[node.name] = { 'id': node.id, 'image': node.image, 'name': node.name, 'private_ips': node.private_ips, 'public_ips': node.public_ips, 'size': node.size, 'state': node_state(node.state) } return ret
def _filter_markdown(source, filters): """Only keep some Markdown headers from a Markdown string.""" lines = source.splitlines() # Filters is a list of 'hN' strings where 1 <= N <= 6. headers = [_replace_header_filter(filter) for filter in filters] lines = [line for line in lines if line.startswith(tuple(headers))] return '\n'.join(lines)
Only keep some Markdown headers from a Markdown string.
Below is the instruction that describes the task: ### Input: Only keep some Markdown headers from a Markdown string. ### Response: def _filter_markdown(source, filters): """Only keep some Markdown headers from a Markdown string.""" lines = source.splitlines() # Filters is a list of 'hN' strings where 1 <= N <= 6. headers = [_replace_header_filter(filter) for filter in filters] lines = [line for line in lines if line.startswith(tuple(headers))] return '\n'.join(lines)
def evict(self, urls): """Remove items from cache matching URLs. Return the number of items removed. """ if isinstance(urls, six.text_type): urls = [urls] urls = set(normalize_url(url) for url in urls) retval = 0 for key in list(self.cache): if key[0] in urls: retval += 1 del self.cache[key] del self.timeouts[key] return retval
Remove items from cache matching URLs. Return the number of items removed.
Below is the instruction that describes the task: ### Input: Remove items from cache matching URLs. Return the number of items removed. ### Response: def evict(self, urls): """Remove items from cache matching URLs. Return the number of items removed. """ if isinstance(urls, six.text_type): urls = [urls] urls = set(normalize_url(url) for url in urls) retval = 0 for key in list(self.cache): if key[0] in urls: retval += 1 del self.cache[key] del self.timeouts[key] return retval
def _inv_key(list_keys, valid_keys): """ ----- Brief ----- A sub-function of _filter_keywords function. ----------- Description ----------- Function used for identification when a list of keywords contains invalid keywords not present in the valid list. ---------- Parameters ---------- list_keys : list List of keywords that must be verified, i.e., all the inputs needs to be inside valid_keys in order to a True boolean be returned. valid_keys : list List of valid keywords. Returns ------- out : boolean, list Boolean indicating if all the inserted keywords are valid. If true a list with invalid keywords will be returned. """ inv_keys = [] bool_out = True for i in list_keys: if i not in valid_keys: bool_out = False inv_keys.append(i) return bool_out, inv_keys
----- Brief ----- A sub-function of _filter_keywords function. ----------- Description ----------- Function used for identification when a list of keywords contains invalid keywords not present in the valid list. ---------- Parameters ---------- list_keys : list List of keywords that must be verified, i.e., all the inputs needs to be inside valid_keys in order to a True boolean be returned. valid_keys : list List of valid keywords. Returns ------- out : boolean, list Boolean indicating if all the inserted keywords are valid. If true a list with invalid keywords will be returned.
Below is the instruction that describes the task: ### Input: ----- Brief ----- A sub-function of _filter_keywords function. ----------- Description ----------- Function used for identification when a list of keywords contains invalid keywords not present in the valid list. ---------- Parameters ---------- list_keys : list List of keywords that must be verified, i.e., all the inputs needs to be inside valid_keys in order to a True boolean be returned. valid_keys : list List of valid keywords. Returns ------- out : boolean, list Boolean indicating if all the inserted keywords are valid. If true a list with invalid keywords will be returned. ### Response: def _inv_key(list_keys, valid_keys): """ ----- Brief ----- A sub-function of _filter_keywords function. ----------- Description ----------- Function used for identification when a list of keywords contains invalid keywords not present in the valid list. ---------- Parameters ---------- list_keys : list List of keywords that must be verified, i.e., all the inputs needs to be inside valid_keys in order to a True boolean be returned. valid_keys : list List of valid keywords. Returns ------- out : boolean, list Boolean indicating if all the inserted keywords are valid. If true a list with invalid keywords will be returned. """ inv_keys = [] bool_out = True for i in list_keys: if i not in valid_keys: bool_out = False inv_keys.append(i) return bool_out, inv_keys
def scheme_host_port_prefix(self, scheme='http', host='host', port=None, prefix=None): """Return URI composed of scheme, server, port, and prefix.""" uri = scheme + '://' + host if (port and not ((scheme == 'http' and port == 80) or (scheme == 'https' and port == 443))): uri += ':' + str(port) if (prefix): uri += '/' + prefix return uri
Return URI composed of scheme, server, port, and prefix.
Below is the instruction that describes the task: ### Input: Return URI composed of scheme, server, port, and prefix. ### Response: def scheme_host_port_prefix(self, scheme='http', host='host', port=None, prefix=None): """Return URI composed of scheme, server, port, and prefix.""" uri = scheme + '://' + host if (port and not ((scheme == 'http' and port == 80) or (scheme == 'https' and port == 443))): uri += ':' + str(port) if (prefix): uri += '/' + prefix return uri
def nacm_rule_list_rule_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") nacm = ET.SubElement(config, "nacm", xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm") rule_list = ET.SubElement(nacm, "rule-list") name_key = ET.SubElement(rule_list, "name") name_key.text = kwargs.pop('name') rule = ET.SubElement(rule_list, "rule") name = ET.SubElement(rule, "name") name.text = kwargs.pop('name') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def nacm_rule_list_rule_name(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") nacm = ET.SubElement(config, "nacm", xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm") rule_list = ET.SubElement(nacm, "rule-list") name_key = ET.SubElement(rule_list, "name") name_key.text = kwargs.pop('name') rule = ET.SubElement(rule_list, "rule") name = ET.SubElement(rule, "name") name.text = kwargs.pop('name') callback = kwargs.pop('callback', self._callback) return callback(config)
def reset_params(self): """Reset all parameters to their default values.""" self.__params = dict([p, None] for p in self.param_names) self.set_params(self.param_defaults)
Reset all parameters to their default values.
Below is the instruction that describes the task: ### Input: Reset all parameters to their default values. ### Response: def reset_params(self): """Reset all parameters to their default values.""" self.__params = dict([p, None] for p in self.param_names) self.set_params(self.param_defaults)
def set_shellwidget(self, shellwidget): """Bind the shellwidget instance to the figure browser""" self.shellwidget = shellwidget shellwidget.set_figurebrowser(self) shellwidget.sig_new_inline_figure.connect(self._handle_new_figure)
Bind the shellwidget instance to the figure browser
Below is the instruction that describes the task: ### Input: Bind the shellwidget instance to the figure browser ### Response: def set_shellwidget(self, shellwidget): """Bind the shellwidget instance to the figure browser""" self.shellwidget = shellwidget shellwidget.set_figurebrowser(self) shellwidget.sig_new_inline_figure.connect(self._handle_new_figure)
def apply_mesh_programs(self, mesh_programs=None): """Applies mesh programs to meshes""" if not mesh_programs: mesh_programs = [ColorProgram(), TextureProgram(), FallbackProgram()] for mesh in self.meshes: for mp in mesh_programs: instance = mp.apply(mesh) if instance is not None: if isinstance(instance, MeshProgram): mesh.mesh_program = mp break else: raise ValueError("apply() must return a MeshProgram instance, not {}".format(type(instance))) if not mesh.mesh_program: print("WARING: No mesh program applied to '{}'".format(mesh.name))
Applies mesh programs to meshes
Below is the instruction that describes the task: ### Input: Applies mesh programs to meshes ### Response: def apply_mesh_programs(self, mesh_programs=None): """Applies mesh programs to meshes""" if not mesh_programs: mesh_programs = [ColorProgram(), TextureProgram(), FallbackProgram()] for mesh in self.meshes: for mp in mesh_programs: instance = mp.apply(mesh) if instance is not None: if isinstance(instance, MeshProgram): mesh.mesh_program = mp break else: raise ValueError("apply() must return a MeshProgram instance, not {}".format(type(instance))) if not mesh.mesh_program: print("WARING: No mesh program applied to '{}'".format(mesh.name))
def create_new_state_from_state_with_type(source_state, target_state_class): """The function duplicates/transforms a state to a new state type. If the source state type and the new state type both are ContainerStates the new state will have not transitions to force the user to explicitly re-order the logical flow according the paradigm of the new state type. :param source_state: previous/original state that is to transform into a new state type (target_state_class) :param target_state_class: the final state class type :return: """ current_state_is_container = isinstance(source_state, ContainerState) new_state_is_container = issubclass(target_state_class, ContainerState) if current_state_is_container and new_state_is_container: # TRANSFORM from CONTAINER- TO CONTAINER-STATE # by default all transitions are left out if the new and original state are container states # -> because switch from Barrier, Preemptive or Hierarchy has always different rules state_transitions = {} state_start_state_id = None logger.info("Type change from %s to %s" % (type(source_state).__name__, target_state_class.__name__)) # decider state is removed because it is unique for BarrierConcurrencyState if isinstance(source_state, BarrierConcurrencyState): source_state.remove_state(UNIQUE_DECIDER_STATE_ID, force=True) assert UNIQUE_DECIDER_STATE_ID not in source_state.states # separate state-elements from source state data_flows = dict(source_state.data_flows) source_state.data_flows = {} input_data_ports = dict(source_state.input_data_ports) output_data_ports = dict(source_state.output_data_ports) scoped_variables = dict(source_state.scoped_variables) income = source_state.income outcomes = dict(source_state.outcomes) source_state.input_data_ports = {} source_state.output_data_ports = {} source_state.scoped_variables = {} source_state.transitions = {} # before remove of outcomes related transitions should be gone source_state.income = Income() source_state.outcomes = {} states = 
dict(source_state.states) # TODO check why next line can not be performed # source_state.states = {} new_state = target_state_class(name=source_state.name, state_id=source_state.state_id, input_data_ports=input_data_ports, output_data_ports=output_data_ports, scoped_variables=scoped_variables, income=income, outcomes=outcomes, transitions=state_transitions, data_flows=data_flows, states=states, start_state_id=state_start_state_id) else: # TRANSFORM from EXECUTION- TO CONTAINER-STATE or FROM CONTAINER- TO EXECUTION-STATE # in case the new state is an execution state remove of child states (for observable notifications) if current_state_is_container and issubclass(target_state_class, ExecutionState): if isinstance(source_state, BarrierConcurrencyState): source_state.remove_state(UNIQUE_DECIDER_STATE_ID, force=True) assert UNIQUE_DECIDER_STATE_ID not in source_state.states for state_id in list(source_state.states.keys()): source_state.remove_state(state_id) # separate state-elements from source state input_data_ports = dict(source_state.input_data_ports) output_data_ports = dict(source_state.output_data_ports) income = source_state.income outcomes = dict(source_state.outcomes) source_state.input_data_ports = {} source_state.output_data_ports = {} source_state.income = Income() source_state.outcomes = {} new_state = target_state_class(name=source_state.name, state_id=source_state.state_id, input_data_ports=input_data_ports, output_data_ports=output_data_ports, income=income, outcomes=outcomes) if source_state.description is not None and len(source_state.description) > 0: new_state.description = source_state.description new_state.semantic_data = Vividict(source_state.semantic_data) return new_state
The function duplicates/transforms a state to a new state type. If the source state type and the new state type both are ContainerStates the new state will have not transitions to force the user to explicitly re-order the logical flow according the paradigm of the new state type. :param source_state: previous/original state that is to transform into a new state type (target_state_class) :param target_state_class: the final state class type :return:
Below is the instruction that describes the task: ### Input: The function duplicates/transforms a state to a new state type. If the source state type and the new state type both are ContainerStates the new state will have not transitions to force the user to explicitly re-order the logical flow according the paradigm of the new state type. :param source_state: previous/original state that is to transform into a new state type (target_state_class) :param target_state_class: the final state class type :return: ### Response: def create_new_state_from_state_with_type(source_state, target_state_class): """The function duplicates/transforms a state to a new state type. If the source state type and the new state type both are ContainerStates the new state will have not transitions to force the user to explicitly re-order the logical flow according the paradigm of the new state type. :param source_state: previous/original state that is to transform into a new state type (target_state_class) :param target_state_class: the final state class type :return: """ current_state_is_container = isinstance(source_state, ContainerState) new_state_is_container = issubclass(target_state_class, ContainerState) if current_state_is_container and new_state_is_container: # TRANSFORM from CONTAINER- TO CONTAINER-STATE # by default all transitions are left out if the new and original state are container states # -> because switch from Barrier, Preemptive or Hierarchy has always different rules state_transitions = {} state_start_state_id = None logger.info("Type change from %s to %s" % (type(source_state).__name__, target_state_class.__name__)) # decider state is removed because it is unique for BarrierConcurrencyState if isinstance(source_state, BarrierConcurrencyState): source_state.remove_state(UNIQUE_DECIDER_STATE_ID, force=True) assert UNIQUE_DECIDER_STATE_ID not in source_state.states # separate state-elements from source state data_flows = dict(source_state.data_flows) 
source_state.data_flows = {} input_data_ports = dict(source_state.input_data_ports) output_data_ports = dict(source_state.output_data_ports) scoped_variables = dict(source_state.scoped_variables) income = source_state.income outcomes = dict(source_state.outcomes) source_state.input_data_ports = {} source_state.output_data_ports = {} source_state.scoped_variables = {} source_state.transitions = {} # before remove of outcomes related transitions should be gone source_state.income = Income() source_state.outcomes = {} states = dict(source_state.states) # TODO check why next line can not be performed # source_state.states = {} new_state = target_state_class(name=source_state.name, state_id=source_state.state_id, input_data_ports=input_data_ports, output_data_ports=output_data_ports, scoped_variables=scoped_variables, income=income, outcomes=outcomes, transitions=state_transitions, data_flows=data_flows, states=states, start_state_id=state_start_state_id) else: # TRANSFORM from EXECUTION- TO CONTAINER-STATE or FROM CONTAINER- TO EXECUTION-STATE # in case the new state is an execution state remove of child states (for observable notifications) if current_state_is_container and issubclass(target_state_class, ExecutionState): if isinstance(source_state, BarrierConcurrencyState): source_state.remove_state(UNIQUE_DECIDER_STATE_ID, force=True) assert UNIQUE_DECIDER_STATE_ID not in source_state.states for state_id in list(source_state.states.keys()): source_state.remove_state(state_id) # separate state-elements from source state input_data_ports = dict(source_state.input_data_ports) output_data_ports = dict(source_state.output_data_ports) income = source_state.income outcomes = dict(source_state.outcomes) source_state.input_data_ports = {} source_state.output_data_ports = {} source_state.income = Income() source_state.outcomes = {} new_state = target_state_class(name=source_state.name, state_id=source_state.state_id, input_data_ports=input_data_ports, 
output_data_ports=output_data_ports, income=income, outcomes=outcomes) if source_state.description is not None and len(source_state.description) > 0: new_state.description = source_state.description new_state.semantic_data = Vividict(source_state.semantic_data) return new_state
def from_array(array): """ Deserialize a new PassportData from a given dictionary. :return: new PassportData instance. :rtype: PassportData """ if array is None or not array: return None # end if assert_type_or_raise(array, dict, parameter_name="array") data = {} data['data'] = EncryptedPassportElement.from_array_list(array.get('data'), list_level=1) data['credentials'] = EncryptedCredentials.from_array(array.get('credentials')) data['_raw'] = array return PassportData(**data)
Deserialize a new PassportData from a given dictionary. :return: new PassportData instance. :rtype: PassportData
Below is the instruction that describes the task: ### Input: Deserialize a new PassportData from a given dictionary. :return: new PassportData instance. :rtype: PassportData ### Response: def from_array(array): """ Deserialize a new PassportData from a given dictionary. :return: new PassportData instance. :rtype: PassportData """ if array is None or not array: return None # end if assert_type_or_raise(array, dict, parameter_name="array") data = {} data['data'] = EncryptedPassportElement.from_array_list(array.get('data'), list_level=1) data['credentials'] = EncryptedCredentials.from_array(array.get('credentials')) data['_raw'] = array return PassportData(**data)
def add_meta(self, name, value): """ Add a pair of meta data to the definition :param name: name of the meta :type name: str :param value: value of the meta :type value: str """ for mt in self.metas: if mt.name == name: mt.value = value return self self.metas.append(MetaDef(name, value)) return self
Add a pair of meta data to the definition :param name: name of the meta :type name: str :param value: value of the meta :type value: str
Below is the instruction that describes the task: ### Input: Add a pair of meta data to the definition :param name: name of the meta :type name: str :param value: value of the meta :type value: str ### Response: def add_meta(self, name, value): """ Add a pair of meta data to the definition :param name: name of the meta :type name: str :param value: value of the meta :type value: str """ for mt in self.metas: if mt.name == name: mt.value = value return self self.metas.append(MetaDef(name, value)) return self
def genesis_signing_lockset(genesis, privkey): """ in order to avoid a complicated bootstrapping, we define the genesis_signing_lockset as a lockset with one vote by any validator. """ v = VoteBlock(0, 0, genesis.hash) v.sign(privkey) ls = LockSet(num_eligible_votes=1) ls.add(v) assert ls.has_quorum return ls
in order to avoid a complicated bootstrapping, we define the genesis_signing_lockset as a lockset with one vote by any validator.
Below is the instruction that describes the task: ### Input: in order to avoid a complicated bootstrapping, we define the genesis_signing_lockset as a lockset with one vote by any validator. ### Response: def genesis_signing_lockset(genesis, privkey): """ in order to avoid a complicated bootstrapping, we define the genesis_signing_lockset as a lockset with one vote by any validator. """ v = VoteBlock(0, 0, genesis.hash) v.sign(privkey) ls = LockSet(num_eligible_votes=1) ls.add(v) assert ls.has_quorum return ls
def _execute_sbatch(self): """Schedule the sbatch file using the sbatch command :returns the slurm job id """ commands = self.campaign.process.get('commands', {}) sbatch = find_executable(commands.get('sbatch', 'sbatch')) sbatch_command = [sbatch, '--parsable', self.sbatch_filename] try: self.logger.debug( 'Executing command: %s', ' '.join(map(six.moves.shlex_quote, sbatch_command)), ) sbatch_out = subprocess.check_output( sbatch_command, universal_newlines=True ) except subprocess.CalledProcessError as cpe: self.logger.error( "SBATCH return non-zero exit" "status %d for tag %s", cpe.returncode, self.tag, ) sbatch_out = cpe.output jobidre = re.compile(r'^([\d]+)(?:;\S*)?$') jobid = None for line in sbatch_out.splitlines(): res = jobidre.match(line) if res is not None: jobid = res.group(1) self.logger.info("Submitted SBATCH job %s for tag %s", jobid, self.tag) elif line: self.logger.warning("SBATCH: %s", line) if jobid is None: self.logger.error("SBATCH submission failed for tag %s", self.tag) return -1 else: return int(jobid)
Schedule the sbatch file using the sbatch command :returns the slurm job id
Below is the instruction that describes the task: ### Input: Schedule the sbatch file using the sbatch command :returns the slurm job id ### Response: def _execute_sbatch(self): """Schedule the sbatch file using the sbatch command :returns the slurm job id """ commands = self.campaign.process.get('commands', {}) sbatch = find_executable(commands.get('sbatch', 'sbatch')) sbatch_command = [sbatch, '--parsable', self.sbatch_filename] try: self.logger.debug( 'Executing command: %s', ' '.join(map(six.moves.shlex_quote, sbatch_command)), ) sbatch_out = subprocess.check_output( sbatch_command, universal_newlines=True ) except subprocess.CalledProcessError as cpe: self.logger.error( "SBATCH return non-zero exit" "status %d for tag %s", cpe.returncode, self.tag, ) sbatch_out = cpe.output jobidre = re.compile(r'^([\d]+)(?:;\S*)?$') jobid = None for line in sbatch_out.splitlines(): res = jobidre.match(line) if res is not None: jobid = res.group(1) self.logger.info("Submitted SBATCH job %s for tag %s", jobid, self.tag) elif line: self.logger.warning("SBATCH: %s", line) if jobid is None: self.logger.error("SBATCH submission failed for tag %s", self.tag) return -1 else: return int(jobid)
def to_struct_file(self, f): """ write the Vario2d to a PEST-style structure file Parameters ---------- f : (str or file handle) item to write to """ if isinstance(f, str): f = open(f,'w') f.write("VARIOGRAM {0}\n".format(self.name)) f.write(" VARTYPE {0}\n".format(self.vartype)) f.write(" A {0}\n".format(self.a)) f.write(" ANISOTROPY {0}\n".format(self.anisotropy)) f.write(" BEARING {0}\n".format(self.bearing)) f.write("END VARIOGRAM\n\n")
write the Vario2d to a PEST-style structure file Parameters ---------- f : (str or file handle) item to write to
Below is the instruction that describes the task: ### Input: write the Vario2d to a PEST-style structure file Parameters ---------- f : (str or file handle) item to write to ### Response: def to_struct_file(self, f): """ write the Vario2d to a PEST-style structure file Parameters ---------- f : (str or file handle) item to write to """ if isinstance(f, str): f = open(f,'w') f.write("VARIOGRAM {0}\n".format(self.name)) f.write(" VARTYPE {0}\n".format(self.vartype)) f.write(" A {0}\n".format(self.a)) f.write(" ANISOTROPY {0}\n".format(self.anisotropy)) f.write(" BEARING {0}\n".format(self.bearing)) f.write("END VARIOGRAM\n\n")
def num_choice(choices, default='1', valid_keys='', depth=1, icons='', sn_info=None, indent=4, fg_color='green', separator='', with_img=6, img_list=None, img_cache_dir='/tmp', use_cache=False, extra_hints='', clear_previous=False, quit_app=True, ): """ 传入数组, 生成待选择列表, 如果启用图片支持, 需要额外传入与数组排序一致的图片列表, - 图片在 iterms 中显示速度较慢, 不推荐使用 .. note: 图片在 iterms 中显示速度较慢, 如果数组长度大于10, 不推荐使用 .. code:: python sn_info = { 'align': '-', # 左右对齐 'length': 2, # 显示长度 } :param use_cache: :type use_cache: :param default: :type default: :param indent: ``左侧空白`` :type indent: :param fg_color: ``前景色`` :type fg_color: :param choices: 备选选项 :type choices: list :param depth: ``如果是嵌套数组, 显示当前层级`` :type depth: int :param icons: ``默认展示的icons: '❶❷❸❹❺❻❼❽❾❿'`` :type icons: any :param sn_info: ``需要展示的序号的信息长度对齐方式, 默认2个字符/右对齐`` :type sn_info: dict :param valid_keys: ``可以输入的有效 key, 使用 ',' 分隔`` :type valid_keys: str :param separator: 分隔符 header/footer, 默认无, 如果不为空, 则显示 :type separator: :param img_cache_dir: ``图片缓存目录`` :type img_cache_dir: str :param with_img: ``是否使用图片, 如果值大于0, 则以实际值大小来作为终端显示行数`` :type with_img: int :param img_list: ``图片原始 url `` :type img_list: list :param extra_hints: ``n-next,p-prev,s-skip`` :type extra_hints: any :param clear_previous: ``clear previous output`` :type clear_previous: :return: :rtype: """ icons = ICONS if not icons else icons if not choices: return None # warn: 这里需要使用 None, 不能 not default 来判断!!!, 会可能传入 0 if default is not None: default = '{}'.format(default) sn_info = sn_info or {} _header, _footer = gen_separator(separator=separator) with textui.indent(indent, quote=' {}'.format(icons[depth - 1])): if _header: textui.puts(getattr(textui.colored, fg_color)(_header)) for i, choice in enumerate(choices, start=1): if with_img > 0 and img_list: cat_net_img(img_list[i - 1], indent=indent, img_height=with_img, img_cache_dir=img_cache_dir, use_cache=use_cache) _align = '{}{}'.format(sn_info.get('align', ''), sn_info.get('length', 2)) # _hint = '%{}s. 
%s'.format(_align) % (i, choice) _hint_num = '%{}s.'.format(_align) % i _hint = '[{}]'.format(_hint_num) _hint = textui.colored.magenta(_hint) _hint += getattr(textui.colored, fg_color)(' %s' % choice) textui.puts(_hint) if _footer: textui.puts(getattr(textui.colored, fg_color)(_footer)) _valid = [str(x + 1) for x in range(0, len(choices))] default_prompt = 'Your Choice' valid_choices = ['q-quit', 'b-back'] if extra_hints: if isinstance(extra_hints, str): extra_hints = extra_hints.split(',') valid_choices += extra_hints default_prompt = '{}({})?'.format(default_prompt, '/'.join(valid_choices)) c = click.prompt( # click.style('[Depth: ({})]Your Choice(q-quit/b-back)?', fg='cyan').format(depth), click.style(default_prompt, fg='cyan'), type=str, default=default ) if str(c) in 'qQ': if quit_app: os._exit(0) else: if clear_previous: click.clear() return str(c) if valid_keys == 'all': return c elif str(c) in 'bB': if clear_previous: click.clear() return str(c) elif valid_keys and str(c) in valid_keys.split(','): return str(c) elif c not in _valid: textui.puts(textui.colored.red(' 😭 ✘ Invalid input[{}]'.format(c))) return num_choice( choices, default, valid_keys, depth, icons, sn_info, indent, fg_color, separator, with_img, img_list, img_cache_dir, use_cache, extra_hints, clear_previous, quit_app, ) else: return int(c) - 1
传入数组, 生成待选择列表, 如果启用图片支持, 需要额外传入与数组排序一致的图片列表, - 图片在 iterms 中显示速度较慢, 不推荐使用 .. note: 图片在 iterms 中显示速度较慢, 如果数组长度大于10, 不推荐使用 .. code:: python sn_info = { 'align': '-', # 左右对齐 'length': 2, # 显示长度 } :param use_cache: :type use_cache: :param default: :type default: :param indent: ``左侧空白`` :type indent: :param fg_color: ``前景色`` :type fg_color: :param choices: 备选选项 :type choices: list :param depth: ``如果是嵌套数组, 显示当前层级`` :type depth: int :param icons: ``默认展示的icons: '❶❷❸❹❺❻❼❽❾❿'`` :type icons: any :param sn_info: ``需要展示的序号的信息长度对齐方式, 默认2个字符/右对齐`` :type sn_info: dict :param valid_keys: ``可以输入的有效 key, 使用 ',' 分隔`` :type valid_keys: str :param separator: 分隔符 header/footer, 默认无, 如果不为空, 则显示 :type separator: :param img_cache_dir: ``图片缓存目录`` :type img_cache_dir: str :param with_img: ``是否使用图片, 如果值大于0, 则以实际值大小来作为终端显示行数`` :type with_img: int :param img_list: ``图片原始 url `` :type img_list: list :param extra_hints: ``n-next,p-prev,s-skip`` :type extra_hints: any :param clear_previous: ``clear previous output`` :type clear_previous: :return: :rtype:
Below is the instruction that describes the task: ### Input: 传入数组, 生成待选择列表, 如果启用图片支持, 需要额外传入与数组排序一致的图片列表, - 图片在 iterms 中显示速度较慢, 不推荐使用 .. note: 图片在 iterms 中显示速度较慢, 如果数组长度大于10, 不推荐使用 .. code:: python sn_info = { 'align': '-', # 左右对齐 'length': 2, # 显示长度 } :param use_cache: :type use_cache: :param default: :type default: :param indent: ``左侧空白`` :type indent: :param fg_color: ``前景色`` :type fg_color: :param choices: 备选选项 :type choices: list :param depth: ``如果是嵌套数组, 显示当前层级`` :type depth: int :param icons: ``默认展示的icons: '❶❷❸❹❺❻❼❽❾❿'`` :type icons: any :param sn_info: ``需要展示的序号的信息长度对齐方式, 默认2个字符/右对齐`` :type sn_info: dict :param valid_keys: ``可以输入的有效 key, 使用 ',' 分隔`` :type valid_keys: str :param separator: 分隔符 header/footer, 默认无, 如果不为空, 则显示 :type separator: :param img_cache_dir: ``图片缓存目录`` :type img_cache_dir: str :param with_img: ``是否使用图片, 如果值大于0, 则以实际值大小来作为终端显示行数`` :type with_img: int :param img_list: ``图片原始 url `` :type img_list: list :param extra_hints: ``n-next,p-prev,s-skip`` :type extra_hints: any :param clear_previous: ``clear previous output`` :type clear_previous: :return: :rtype: ### Response: def num_choice(choices, default='1', valid_keys='', depth=1, icons='', sn_info=None, indent=4, fg_color='green', separator='', with_img=6, img_list=None, img_cache_dir='/tmp', use_cache=False, extra_hints='', clear_previous=False, quit_app=True, ): """ 传入数组, 生成待选择列表, 如果启用图片支持, 需要额外传入与数组排序一致的图片列表, - 图片在 iterms 中显示速度较慢, 不推荐使用 .. note: 图片在 iterms 中显示速度较慢, 如果数组长度大于10, 不推荐使用 .. 
code:: python sn_info = { 'align': '-', # 左右对齐 'length': 2, # 显示长度 } :param use_cache: :type use_cache: :param default: :type default: :param indent: ``左侧空白`` :type indent: :param fg_color: ``前景色`` :type fg_color: :param choices: 备选选项 :type choices: list :param depth: ``如果是嵌套数组, 显示当前层级`` :type depth: int :param icons: ``默认展示的icons: '❶❷❸❹❺❻❼❽❾❿'`` :type icons: any :param sn_info: ``需要展示的序号的信息长度对齐方式, 默认2个字符/右对齐`` :type sn_info: dict :param valid_keys: ``可以输入的有效 key, 使用 ',' 分隔`` :type valid_keys: str :param separator: 分隔符 header/footer, 默认无, 如果不为空, 则显示 :type separator: :param img_cache_dir: ``图片缓存目录`` :type img_cache_dir: str :param with_img: ``是否使用图片, 如果值大于0, 则以实际值大小来作为终端显示行数`` :type with_img: int :param img_list: ``图片原始 url `` :type img_list: list :param extra_hints: ``n-next,p-prev,s-skip`` :type extra_hints: any :param clear_previous: ``clear previous output`` :type clear_previous: :return: :rtype: """ icons = ICONS if not icons else icons if not choices: return None # warn: 这里需要使用 None, 不能 not default 来判断!!!, 会可能传入 0 if default is not None: default = '{}'.format(default) sn_info = sn_info or {} _header, _footer = gen_separator(separator=separator) with textui.indent(indent, quote=' {}'.format(icons[depth - 1])): if _header: textui.puts(getattr(textui.colored, fg_color)(_header)) for i, choice in enumerate(choices, start=1): if with_img > 0 and img_list: cat_net_img(img_list[i - 1], indent=indent, img_height=with_img, img_cache_dir=img_cache_dir, use_cache=use_cache) _align = '{}{}'.format(sn_info.get('align', ''), sn_info.get('length', 2)) # _hint = '%{}s. 
%s'.format(_align) % (i, choice) _hint_num = '%{}s.'.format(_align) % i _hint = '[{}]'.format(_hint_num) _hint = textui.colored.magenta(_hint) _hint += getattr(textui.colored, fg_color)(' %s' % choice) textui.puts(_hint) if _footer: textui.puts(getattr(textui.colored, fg_color)(_footer)) _valid = [str(x + 1) for x in range(0, len(choices))] default_prompt = 'Your Choice' valid_choices = ['q-quit', 'b-back'] if extra_hints: if isinstance(extra_hints, str): extra_hints = extra_hints.split(',') valid_choices += extra_hints default_prompt = '{}({})?'.format(default_prompt, '/'.join(valid_choices)) c = click.prompt( # click.style('[Depth: ({})]Your Choice(q-quit/b-back)?', fg='cyan').format(depth), click.style(default_prompt, fg='cyan'), type=str, default=default ) if str(c) in 'qQ': if quit_app: os._exit(0) else: if clear_previous: click.clear() return str(c) if valid_keys == 'all': return c elif str(c) in 'bB': if clear_previous: click.clear() return str(c) elif valid_keys and str(c) in valid_keys.split(','): return str(c) elif c not in _valid: textui.puts(textui.colored.red(' 😭 ✘ Invalid input[{}]'.format(c))) return num_choice( choices, default, valid_keys, depth, icons, sn_info, indent, fg_color, separator, with_img, img_list, img_cache_dir, use_cache, extra_hints, clear_previous, quit_app, ) else: return int(c) - 1
def designPrimers(seq_args, global_args=None, misprime_lib=None, mishyb_lib=None, debug=False): ''' Run the Primer3 design process. If the global args have been previously set (either by a previous `designPrimers` call or by a `setGlobals` call), `designPrimers` may be called with seqArgs alone (as a means of optimization). Args: seq_args (dict) : Primer3 sequence/design args as per Primer3 docs global_args (dict, optional) : Primer3 global args as per Primer3 docs misprime_lib (dict, optional) : `Sequence name: sequence` dictionary for mispriming checks. mishyb_lib (dict, optional) : `Sequence name: sequence` dictionary for mishybridization checks. Returns: A dictionary of Primer3 results (should be identical to the expected BoulderIO output from primer3_main) ''' if global_args: primerdesign.setGlobals(global_args, misprime_lib, mishyb_lib) primerdesign.setSeqArgs(seq_args) return primerdesign.runDesign(debug)
Run the Primer3 design process. If the global args have been previously set (either by a previous `designPrimers` call or by a `setGlobals` call), `designPrimers` may be called with seqArgs alone (as a means of optimization). Args: seq_args (dict) : Primer3 sequence/design args as per Primer3 docs global_args (dict, optional) : Primer3 global args as per Primer3 docs misprime_lib (dict, optional) : `Sequence name: sequence` dictionary for mispriming checks. mishyb_lib (dict, optional) : `Sequence name: sequence` dictionary for mishybridization checks. Returns: A dictionary of Primer3 results (should be identical to the expected BoulderIO output from primer3_main)
Below is the instruction that describes the task: ### Input: Run the Primer3 design process. If the global args have been previously set (either by a previous `designPrimers` call or by a `setGlobals` call), `designPrimers` may be called with seqArgs alone (as a means of optimization). Args: seq_args (dict) : Primer3 sequence/design args as per Primer3 docs global_args (dict, optional) : Primer3 global args as per Primer3 docs misprime_lib (dict, optional) : `Sequence name: sequence` dictionary for mispriming checks. mishyb_lib (dict, optional) : `Sequence name: sequence` dictionary for mishybridization checks. Returns: A dictionary of Primer3 results (should be identical to the expected BoulderIO output from primer3_main) ### Response: def designPrimers(seq_args, global_args=None, misprime_lib=None, mishyb_lib=None, debug=False): ''' Run the Primer3 design process. If the global args have been previously set (either by a previous `designPrimers` call or by a `setGlobals` call), `designPrimers` may be called with seqArgs alone (as a means of optimization). Args: seq_args (dict) : Primer3 sequence/design args as per Primer3 docs global_args (dict, optional) : Primer3 global args as per Primer3 docs misprime_lib (dict, optional) : `Sequence name: sequence` dictionary for mispriming checks. mishyb_lib (dict, optional) : `Sequence name: sequence` dictionary for mishybridization checks. Returns: A dictionary of Primer3 results (should be identical to the expected BoulderIO output from primer3_main) ''' if global_args: primerdesign.setGlobals(global_args, misprime_lib, mishyb_lib) primerdesign.setSeqArgs(seq_args) return primerdesign.runDesign(debug)
def enbase64(byte_str): """ Encode bytes/strings to base64. Args: - ``byte_str``: The string or bytes to base64 encode. Returns: - byte_str encoded as base64. """ # Python 3: base64.b64encode() expects type byte if isinstance(byte_str, str) and not PYTHON2: byte_str = bytes(byte_str, 'utf-8') return base64.b64encode(byte_str)
Encode bytes/strings to base64. Args: - ``byte_str``: The string or bytes to base64 encode. Returns: - byte_str encoded as base64.
Below is the instruction that describes the task: ### Input: Encode bytes/strings to base64. Args: - ``byte_str``: The string or bytes to base64 encode. Returns: - byte_str encoded as base64. ### Response: def enbase64(byte_str): """ Encode bytes/strings to base64. Args: - ``byte_str``: The string or bytes to base64 encode. Returns: - byte_str encoded as base64. """ # Python 3: base64.b64encode() expects type byte if isinstance(byte_str, str) and not PYTHON2: byte_str = bytes(byte_str, 'utf-8') return base64.b64encode(byte_str)
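The `enbase64` record above hinges on one detail: in Python 3, `base64.b64encode` accepts only bytes, so a `str` argument must be encoded first. A minimal Python-3-only sketch (the module-specific `PYTHON2` guard dropped):

```python
import base64

def enbase64(byte_str):
    # base64.b64encode() accepts only bytes in Python 3, so coerce str first
    if isinstance(byte_str, str):
        byte_str = bytes(byte_str, 'utf-8')
    return base64.b64encode(byte_str)

# str and bytes inputs produce the same base64 output
encoded_from_str = enbase64('hello')
encoded_from_bytes = enbase64(b'hello')
```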
def backward_word(self, e): # (M-b) u"""Move back to the start of the current or previous word. Words are composed of letters and digits.""" self.l_buffer.backward_word(self.argument_reset) self.finalize()
u"""Move back to the start of the current or previous word. Words are composed of letters and digits.
Below is the instruction that describes the task: ### Input: u"""Move back to the start of the current or previous word. Words are composed of letters and digits. ### Response: def backward_word(self, e): # (M-b) u"""Move back to the start of the current or previous word. Words are composed of letters and digits.""" self.l_buffer.backward_word(self.argument_reset) self.finalize()
def messages(self, channel='sse'): """ A generator of :class:`~flask_sse.Message` objects from the given channel. """ pubsub = self.redis.pubsub() pubsub.subscribe(channel) for pubsub_message in pubsub.listen(): if pubsub_message['type'] == 'message': msg_dict = json.loads(pubsub_message['data']) yield Message(**msg_dict)
A generator of :class:`~flask_sse.Message` objects from the given channel.
Below is the instruction that describes the task: ### Input: A generator of :class:`~flask_sse.Message` objects from the given channel. ### Response: def messages(self, channel='sse'): """ A generator of :class:`~flask_sse.Message` objects from the given channel. """ pubsub = self.redis.pubsub() pubsub.subscribe(channel) for pubsub_message in pubsub.listen(): if pubsub_message['type'] == 'message': msg_dict = json.loads(pubsub_message['data']) yield Message(**msg_dict)
def parse(html): """ Parses the given HTML message and returns its stripped representation plus a list of the MessageEntity's that were found. :param html: the message with HTML to be parsed. :return: a tuple consisting of (clean message, [message entities]). """ if not html: return html, [] parser = HTMLToTelegramParser() parser.feed(_add_surrogate(html)) text = helpers.strip_text(parser.text, parser.entities) return _del_surrogate(text), parser.entities
Parses the given HTML message and returns its stripped representation plus a list of the MessageEntity's that were found. :param html: the message with HTML to be parsed. :return: a tuple consisting of (clean message, [message entities]).
Below is the instruction that describes the task: ### Input: Parses the given HTML message and returns its stripped representation plus a list of the MessageEntity's that were found. :param html: the message with HTML to be parsed. :return: a tuple consisting of (clean message, [message entities]). ### Response: def parse(html): """ Parses the given HTML message and returns its stripped representation plus a list of the MessageEntity's that were found. :param html: the message with HTML to be parsed. :return: a tuple consisting of (clean message, [message entities]). """ if not html: return html, [] parser = HTMLToTelegramParser() parser.feed(_add_surrogate(html)) text = helpers.strip_text(parser.text, parser.entities) return _del_surrogate(text), parser.entities
def mask(self, image, nan_to_num=True, layers=None, in_global_mask=False): """ Vectorize an image and mask out all invalid voxels. Args: image: The image to vectorize and mask. Input can be any object handled by get_image(). layers: Which mask layers to use (specified as int, string, or list of ints and strings). When None, applies the conjunction of all layers. nan_to_num: boolean indicating whether to convert NaNs to 0. in_global_mask: Whether to return the resulting masked vector in the globally masked space (i.e., n_voxels = len(self.global_mask)). If False (default), returns in the full image space (i.e., n_voxels = len(self.volume)). Returns: A 1D NumPy array of in-mask voxels. """ self.set_mask(layers) image = self.get_image(image, output='vector') if in_global_mask: masked_data = image[self.global_mask] masked_data[~self.get_mask(in_global_mask=True)] = 0 else: masked_data = image[self.current_mask] if nan_to_num: masked_data = np.nan_to_num(masked_data) return masked_data
Vectorize an image and mask out all invalid voxels. Args: image: The image to vectorize and mask. Input can be any object handled by get_image(). layers: Which mask layers to use (specified as int, string, or list of ints and strings). When None, applies the conjunction of all layers. nan_to_num: boolean indicating whether to convert NaNs to 0. in_global_mask: Whether to return the resulting masked vector in the globally masked space (i.e., n_voxels = len(self.global_mask)). If False (default), returns in the full image space (i.e., n_voxels = len(self.volume)). Returns: A 1D NumPy array of in-mask voxels.
Below is the instruction that describes the task: ### Input: Vectorize an image and mask out all invalid voxels. Args: image: The image to vectorize and mask. Input can be any object handled by get_image(). layers: Which mask layers to use (specified as int, string, or list of ints and strings). When None, applies the conjunction of all layers. nan_to_num: boolean indicating whether to convert NaNs to 0. in_global_mask: Whether to return the resulting masked vector in the globally masked space (i.e., n_voxels = len(self.global_mask)). If False (default), returns in the full image space (i.e., n_voxels = len(self.volume)). Returns: A 1D NumPy array of in-mask voxels. ### Response: def mask(self, image, nan_to_num=True, layers=None, in_global_mask=False): """ Vectorize an image and mask out all invalid voxels. Args: image: The image to vectorize and mask. Input can be any object handled by get_image(). layers: Which mask layers to use (specified as int, string, or list of ints and strings). When None, applies the conjunction of all layers. nan_to_num: boolean indicating whether to convert NaNs to 0. in_global_mask: Whether to return the resulting masked vector in the globally masked space (i.e., n_voxels = len(self.global_mask)). If False (default), returns in the full image space (i.e., n_voxels = len(self.volume)). Returns: A 1D NumPy array of in-mask voxels. """ self.set_mask(layers) image = self.get_image(image, output='vector') if in_global_mask: masked_data = image[self.global_mask] masked_data[~self.get_mask(in_global_mask=True)] = 0 else: masked_data = image[self.current_mask] if nan_to_num: masked_data = np.nan_to_num(masked_data) return masked_data
def wiggleFileHandleToProtocol(self, fileHandle): """ Return a continuous protocol object satisfying the given query parameters from the given wiggle file handle. """ for line in fileHandle: self.readWiggleLine(line) return self._data
Return a continuous protocol object satisfying the given query parameters from the given wiggle file handle.
Below is the instruction that describes the task: ### Input: Return a continuous protocol object satisfying the given query parameters from the given wiggle file handle. ### Response: def wiggleFileHandleToProtocol(self, fileHandle): """ Return a continuous protocol object satisfying the given query parameters from the given wiggle file handle. """ for line in fileHandle: self.readWiggleLine(line) return self._data
def removePixmap(self, pixmap): """ Removes the pixmap from this widget's list of pixmaps. :param pixmap | <QPixmap> """ scene = self.scene() for item in self.items(): if item.basePixmap() == pixmap: scene.removeItem(item) break
Removes the pixmap from this widget's list of pixmaps. :param pixmap | <QPixmap>
Below is the instruction that describes the task: ### Input: Removes the pixmap from this widget's list of pixmaps. :param pixmap | <QPixmap> ### Response: def removePixmap(self, pixmap): """ Removes the pixmap from this widget's list of pixmaps. :param pixmap | <QPixmap> """ scene = self.scene() for item in self.items(): if item.basePixmap() == pixmap: scene.removeItem(item) break
def predict_on_stream(config: Union[str, Path, dict], batch_size: int = 1, file_path: Optional[str] = None) -> None: """Make a prediction with the component described in the corresponding configuration file.""" if file_path is None or file_path == '-': if sys.stdin.isatty(): raise RuntimeError('To process data from terminal please use interact mode') f = sys.stdin else: f = open(file_path, encoding='utf8') model: Chainer = build_model(config) args_count = len(model.in_x) while True: batch = list((l.strip() for l in islice(f, batch_size * args_count))) if not batch: break args = [] for i in range(args_count): args.append(batch[i::args_count]) res = model(*args) if len(model.out_params) == 1: res = [res] for res in zip(*res): res = json.dumps(res, ensure_ascii=False) print(res, flush=True) if f is not sys.stdin: f.close()
Make a prediction with the component described in the corresponding configuration file.
Below is the instruction that describes the task: ### Input: Make a prediction with the component described in the corresponding configuration file. ### Response: def predict_on_stream(config: Union[str, Path, dict], batch_size: int = 1, file_path: Optional[str] = None) -> None: """Make a prediction with the component described in the corresponding configuration file.""" if file_path is None or file_path == '-': if sys.stdin.isatty(): raise RuntimeError('To process data from terminal please use interact mode') f = sys.stdin else: f = open(file_path, encoding='utf8') model: Chainer = build_model(config) args_count = len(model.in_x) while True: batch = list((l.strip() for l in islice(f, batch_size * args_count))) if not batch: break args = [] for i in range(args_count): args.append(batch[i::args_count]) res = model(*args) if len(model.out_params) == 1: res = [res] for res in zip(*res): res = json.dumps(res, ensure_ascii=False) print(res, flush=True) if f is not sys.stdin: f.close()
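The batching in `predict_on_stream` interleaves the model's inputs on the stream: each example occupies `args_count` consecutive lines, and the slice `batch[i::args_count]` de-interleaves them. A self-contained sketch of just that loop (function and variable names here are hypothetical):

```python
from itertools import islice

def read_batches(lines, batch_size, args_count):
    # each example spans args_count consecutive lines; islice pulls one batch
    it = iter(lines)
    while True:
        batch = list(islice(it, batch_size * args_count))
        if not batch:
            break
        # batch[i::args_count] collects the i-th argument of every example
        yield [batch[i::args_count] for i in range(args_count)]

lines = ['q1', 'ctx1', 'q2', 'ctx2', 'q3', 'ctx3']
batches = list(read_batches(lines, batch_size=2, args_count=2))
```

The last batch may be short, exactly as in the original loop, which simply stops when `islice` returns nothing.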
def lines(self, encoding=None, errors='strict', retain=True): r""" Open this file, read all lines, return them in a list. Optional arguments: `encoding` - The Unicode encoding (or character set) of the file. The default is ``None``, meaning the content of the file is read as 8-bit characters and returned as a list of (non-Unicode) str objects. `errors` - How to handle Unicode errors; see help(str.decode) for the options. Default is ``'strict'``. `retain` - If ``True``, retain newline characters; but all newline character combinations (``'\r'``, ``'\n'``, ``'\r\n'``) are translated to ``'\n'``. If ``False``, newline characters are stripped off. Default is ``True``. .. seealso:: :meth:`text` """ return self.text(encoding, errors).splitlines(retain)
r""" Open this file, read all lines, return them in a list. Optional arguments: `encoding` - The Unicode encoding (or character set) of the file. The default is ``None``, meaning the content of the file is read as 8-bit characters and returned as a list of (non-Unicode) str objects. `errors` - How to handle Unicode errors; see help(str.decode) for the options. Default is ``'strict'``. `retain` - If ``True``, retain newline characters; but all newline character combinations (``'\r'``, ``'\n'``, ``'\r\n'``) are translated to ``'\n'``. If ``False``, newline characters are stripped off. Default is ``True``. .. seealso:: :meth:`text`
Below is the instruction that describes the task: ### Input: r""" Open this file, read all lines, return them in a list. Optional arguments: `encoding` - The Unicode encoding (or character set) of the file. The default is ``None``, meaning the content of the file is read as 8-bit characters and returned as a list of (non-Unicode) str objects. `errors` - How to handle Unicode errors; see help(str.decode) for the options. Default is ``'strict'``. `retain` - If ``True``, retain newline characters; but all newline character combinations (``'\r'``, ``'\n'``, ``'\r\n'``) are translated to ``'\n'``. If ``False``, newline characters are stripped off. Default is ``True``. .. seealso:: :meth:`text` ### Response: def lines(self, encoding=None, errors='strict', retain=True): r""" Open this file, read all lines, return them in a list. Optional arguments: `encoding` - The Unicode encoding (or character set) of the file. The default is ``None``, meaning the content of the file is read as 8-bit characters and returned as a list of (non-Unicode) str objects. `errors` - How to handle Unicode errors; see help(str.decode) for the options. Default is ``'strict'``. `retain` - If ``True``, retain newline characters; but all newline character combinations (``'\r'``, ``'\n'``, ``'\r\n'``) are translated to ``'\n'``. If ``False``, newline characters are stripped off. Default is ``True``. .. seealso:: :meth:`text` """ return self.text(encoding, errors).splitlines(retain)
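The `retain` flag in `lines` maps onto `str.splitlines(keepends)`. Note that `splitlines` itself keeps each terminator as written; the translation of `'\r'` and `'\r\n'` to `'\n'` described in the docstring comes from how `text` reads the file, not from the split. The split behaviour alone:

```python
text = 'one\r\ntwo\rthree\n'

# keepends=True retains each line's own terminator
with_ends = text.splitlines(True)
# keepends=False strips all newline characters
without_ends = text.splitlines(False)
```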
def _setup(self): """ Load the settings module pointed to by the environment variable. This is used the first time we need any settings at all, if the user has not previously configured the settings manually. """ settings_module = os.environ.get(ENVIRONMENT_SETTINGS_VARIABLE, 'settings') if not settings_module: raise ImproperlyConfigured( 'Requested settings module points to an empty variable. ' 'You must either define the environment variable {0} ' 'or call settings.configure() before accessing the settings.' .format(ENVIRONMENT_SETTINGS_VARIABLE)) self._wrapped = Settings(settings_module, default_settings=global_settings)
Load the settings module pointed to by the environment variable. This is used the first time we need any settings at all, if the user has not previously configured the settings manually.
Below is the instruction that describes the task: ### Input: Load the settings module pointed to by the environment variable. This is used the first time we need any settings at all, if the user has not previously configured the settings manually. ### Response: def _setup(self): """ Load the settings module pointed to by the environment variable. This is used the first time we need any settings at all, if the user has not previously configured the settings manually. """ settings_module = os.environ.get(ENVIRONMENT_SETTINGS_VARIABLE, 'settings') if not settings_module: raise ImproperlyConfigured( 'Requested settings module points to an empty variable. ' 'You must either define the environment variable {0} ' 'or call settings.configure() before accessing the settings.' .format(ENVIRONMENT_SETTINGS_VARIABLE)) self._wrapped = Settings(settings_module, default_settings=global_settings)
def _merge_metadata_dictionaries(cls, dictionaries): """ Helper function for combining variant collections: given multiple dictionaries mapping: source name -> (variant -> (attribute -> value)) Returns dictionary with union of all variants and sources. """ # three levels of nested dictionaries! # {source name: {variant: {attribute: value}}} combined_dictionary = {} for source_to_metadata_dict in dictionaries: for source_name, variant_to_metadata_dict in source_to_metadata_dict.items(): combined_dictionary.setdefault(source_name, {}) combined_source_dict = combined_dictionary[source_name] for variant, metadata_dict in variant_to_metadata_dict.items(): combined_source_dict.setdefault(variant, {}) combined_source_dict[variant].update(metadata_dict) return combined_dictionary
Helper function for combining variant collections: given multiple dictionaries mapping: source name -> (variant -> (attribute -> value)) Returns dictionary with union of all variants and sources.
Below is the instruction that describes the task: ### Input: Helper function for combining variant collections: given multiple dictionaries mapping: source name -> (variant -> (attribute -> value)) Returns dictionary with union of all variants and sources. ### Response: def _merge_metadata_dictionaries(cls, dictionaries): """ Helper function for combining variant collections: given multiple dictionaries mapping: source name -> (variant -> (attribute -> value)) Returns dictionary with union of all variants and sources. """ # three levels of nested dictionaries! # {source name: {variant: {attribute: value}}} combined_dictionary = {} for source_to_metadata_dict in dictionaries: for source_name, variant_to_metadata_dict in source_to_metadata_dict.items(): combined_dictionary.setdefault(source_name, {}) combined_source_dict = combined_dictionary[source_name] for variant, metadata_dict in variant_to_metadata_dict.items(): combined_source_dict.setdefault(variant, {}) combined_source_dict[variant].update(metadata_dict) return combined_dictionary
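The three-level merge in `_merge_metadata_dictionaries` can be exercised standalone; the version below is the same `setdefault`/`update` pattern, with hypothetical variant keys and attributes as sample data:

```python
def merge_metadata_dictionaries(dictionaries):
    # {source name: {variant: {attribute: value}}}
    combined = {}
    for source_to_metadata in dictionaries:
        for source_name, variant_to_metadata in source_to_metadata.items():
            combined_source = combined.setdefault(source_name, {})
            for variant, metadata in variant_to_metadata.items():
                # later dictionaries win on conflicting attribute keys
                combined_source.setdefault(variant, {}).update(metadata)
    return combined

d1 = {'dbsnp': {'chr1:100 A>T': {'rsid': 'rs1'}}}
d2 = {'dbsnp': {'chr1:100 A>T': {'maf': 0.01}},
      'clinvar': {'chr2:200 G>C': {'significance': 'benign'}}}
merged = merge_metadata_dictionaries([d1, d2])
```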
def _output_validators(self): """Output common validator types based on usage.""" if self._walk_for_type('Boolean'): print("from .validators import boolean") if self._walk_for_type('Integer'): print("from .validators import integer") vlist = self.override.get_validator_list() for override in vlist: if override.startswith('common/'): override = override[len('common/'):] filename = "validators" else: filename = "%s_validators" % self.filename print("from .%s import %s" % (filename, override))
Output common validator types based on usage.
Below is the instruction that describes the task: ### Input: Output common validator types based on usage. ### Response: def _output_validators(self): """Output common validator types based on usage.""" if self._walk_for_type('Boolean'): print("from .validators import boolean") if self._walk_for_type('Integer'): print("from .validators import integer") vlist = self.override.get_validator_list() for override in vlist: if override.startswith('common/'): override = override[len('common/'):] filename = "validators" else: filename = "%s_validators" % self.filename print("from .%s import %s" % (filename, override))
def _get_node_parent(self, age, pos): """Get the parent node of node, which is located in tree's node list. Returns: object: The parent node. """ return self.nodes[age][int(pos / self.comp)]
Get the parent node of node, which is located in tree's node list. Returns: object: The parent node.
Below is the instruction that describes the task: ### Input: Get the parent node of node, which is located in tree's node list. Returns: object: The parent node. ### Response: def _get_node_parent(self, age, pos): """Get the parent node of node, which is located in tree's node list. Returns: object: The parent node. """ return self.nodes[age][int(pos / self.comp)]
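`_get_node_parent` relies on a fixed branching factor: the child at position `pos` on one level maps to index `pos // comp` on the parent level. A toy tree (structure hypothetical) makes the arithmetic concrete:

```python
comp = 2  # branching factor: each parent has `comp` children
nodes = {
    0: ['root'],
    1: ['a', 'b'],                 # children of 'root'
    2: ['aa', 'ab', 'ba', 'bb'],   # children of 'a' and 'b'
}

def get_node_parent(age, pos):
    # `age` indexes the parent generation; `pos` is the child's position
    return nodes[age][pos // comp]

parent_of_last = get_node_parent(1, 3)   # child 'bb' at pos 3 -> 'b'
parent_of_first = get_node_parent(1, 0)  # child 'aa' at pos 0 -> 'a'
```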
def build_null_stop_time_series( feed: "Feed", date_label: str = "20010101", freq: str = "5Min", *, split_directions: bool = False, ) -> DataFrame: """ Return a stop time series with the same index and hierarchical columns as output by :func:`compute_stop_time_series_base`, but fill it full of null values. """ start = date_label end = pd.to_datetime(date_label + " 23:59:00") rng = pd.date_range(start, end, freq=freq) inds = ["num_trips"] sids = feed.stops.stop_id if split_directions: product = [inds, sids, [0, 1]] names = ["indicator", "stop_id", "direction_id"] else: product = [inds, sids] names = ["indicator", "stop_id"] cols = pd.MultiIndex.from_product(product, names=names) return pd.DataFrame([], index=rng, columns=cols).sort_index( axis=1, sort_remaining=True )
Return a stop time series with the same index and hierarchical columns as output by :func:`compute_stop_time_series_base`, but fill it full of null values.
Below is the instruction that describes the task: ### Input: Return a stop time series with the same index and hierarchical columns as output by :func:`compute_stop_time_series_base`, but fill it full of null values. ### Response: def build_null_stop_time_series( feed: "Feed", date_label: str = "20010101", freq: str = "5Min", *, split_directions: bool = False, ) -> DataFrame: """ Return a stop time series with the same index and hierarchical columns as output by :func:`compute_stop_time_series_base`, but fill it full of null values. """ start = date_label end = pd.to_datetime(date_label + " 23:59:00") rng = pd.date_range(start, end, freq=freq) inds = ["num_trips"] sids = feed.stops.stop_id if split_directions: product = [inds, sids, [0, 1]] names = ["indicator", "stop_id", "direction_id"] else: product = [inds, sids] names = ["indicator", "stop_id"] cols = pd.MultiIndex.from_product(product, names=names) return pd.DataFrame([], index=rng, columns=cols).sort_index( axis=1, sort_remaining=True )
def parse_period(period: str): """ parses period from date range picker. The received values are full ISO dates """ period = period.split(" - ") date_from = Datum() if len(period[0]) == 10: date_from.from_iso_date_string(period[0]) else: date_from.from_iso_long_date(period[0]) date_from.start_of_day() date_to = Datum() if len(period[1]) == 10: date_to.from_iso_date_string(period[1]) else: date_to.from_iso_long_date(period[1]) date_to.end_of_day() return date_from.value, date_to.value
parses period from date range picker. The received values are full ISO dates
Below is the instruction that describes the task: ### Input: parses period from date range picker. The received values are full ISO dates ### Response: def parse_period(period: str): """ parses period from date range picker. The received values are full ISO dates """ period = period.split(" - ") date_from = Datum() if len(period[0]) == 10: date_from.from_iso_date_string(period[0]) else: date_from.from_iso_long_date(period[0]) date_from.start_of_day() date_to = Datum() if len(period[1]) == 10: date_to.from_iso_date_string(period[1]) else: date_to.from_iso_long_date(period[1]) date_to.end_of_day() return date_from.value, date_to.value
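`parse_period` depends on the project's `Datum` helper; the same shape can be sketched with the standard library, assuming only the short form `YYYY-MM-DD - YYYY-MM-DD` (the `Datum`-specific long-ISO branch is omitted, and the function name here is a hypothetical stand-in):

```python
from datetime import datetime, time

def parse_period(period):
    # split 'YYYY-MM-DD - YYYY-MM-DD' and expand to start/end of day
    start_s, end_s = period.split(' - ')
    date_from = datetime.combine(
        datetime.strptime(start_s, '%Y-%m-%d').date(), time.min)
    date_to = datetime.combine(
        datetime.strptime(end_s, '%Y-%m-%d').date(), time.max)
    return date_from, date_to

start, end = parse_period('2021-01-01 - 2021-01-31')
```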
def train(config_path: str, cl_arguments: Iterable[str], output_root: str) -> None: """ Load config and start the training. :param config_path: path to configuration file :param cl_arguments: additional command line arguments which will update the configuration :param output_root: output root in which the training directory will be created """ config = None try: config_path = find_config(config_path) config = load_config(config_file=config_path, additional_args=cl_arguments) validate_config(config) logging.debug('\tLoaded config: %s', config) except Exception as ex: # pylint: disable=broad-except fallback('Loading config failed', ex) run(config=config, output_root=output_root)
Load config and start the training. :param config_path: path to configuration file :param cl_arguments: additional command line arguments which will update the configuration :param output_root: output root in which the training directory will be created
Below is the instruction that describes the task: ### Input: Load config and start the training. :param config_path: path to configuration file :param cl_arguments: additional command line arguments which will update the configuration :param output_root: output root in which the training directory will be created ### Response: def train(config_path: str, cl_arguments: Iterable[str], output_root: str) -> None: """ Load config and start the training. :param config_path: path to configuration file :param cl_arguments: additional command line arguments which will update the configuration :param output_root: output root in which the training directory will be created """ config = None try: config_path = find_config(config_path) config = load_config(config_file=config_path, additional_args=cl_arguments) validate_config(config) logging.debug('\tLoaded config: %s', config) except Exception as ex: # pylint: disable=broad-except fallback('Loading config failed', ex) run(config=config, output_root=output_root)
def command_create_tables(self, meta_name=None, verbose=False): ''' Create tables according to the sqlalchemy data model. Not a complex migration tool like alembic; just creates tables that do not exist:: ./manage.py sqla:create_tables [--verbose] [meta_name] ''' def _create_metadata_tables(metadata): for table in metadata.sorted_tables: if verbose: print(self._schema(table)) else: print(' '+table.name) engine = self.session.get_bind(clause=table) metadata.create_all(bind=engine, tables=[table]) if isinstance(self.metadata, MetaData): print('Creating tables...') _create_metadata_tables(self.metadata) else: for current_meta_name, metadata in self.metadata.items(): if meta_name not in (current_meta_name, None): continue print('Creating tables for {}...'.format(current_meta_name)) _create_metadata_tables(metadata)
Create tables according to the sqlalchemy data model. Not a complex migration tool like alembic; just creates tables that do not exist:: ./manage.py sqla:create_tables [--verbose] [meta_name]
Below is the instruction that describes the task: ### Input: Create tables according to the sqlalchemy data model. Not a complex migration tool like alembic; just creates tables that do not exist:: ./manage.py sqla:create_tables [--verbose] [meta_name] ### Response: def command_create_tables(self, meta_name=None, verbose=False): ''' Create tables according to the sqlalchemy data model. Not a complex migration tool like alembic; just creates tables that do not exist:: ./manage.py sqla:create_tables [--verbose] [meta_name] ''' def _create_metadata_tables(metadata): for table in metadata.sorted_tables: if verbose: print(self._schema(table)) else: print(' '+table.name) engine = self.session.get_bind(clause=table) metadata.create_all(bind=engine, tables=[table]) if isinstance(self.metadata, MetaData): print('Creating tables...') _create_metadata_tables(self.metadata) else: for current_meta_name, metadata in self.metadata.items(): if meta_name not in (current_meta_name, None): continue print('Creating tables for {}...'.format(current_meta_name)) _create_metadata_tables(metadata)