Dataset columns:

- code: string (length 75 to 104k)
- docstring: string (length 1 to 46.9k)
- text: string (length 164 to 112k)
def HandleMessageBundles(self, request_comms, response_comms):
  """Processes a queue of messages as passed from the client.

  We basically dispatch all the GrrMessages in the queue to the task
  scheduler for backend processing. We then retrieve from the TS the
  messages destined for this client.

  Args:
    request_comms: A ClientCommunication rdfvalue with messages sent by the
      client. source should be set to the client CN.
    response_comms: A ClientCommunication rdfvalue of jobs destined to this
      client.

  Returns:
    tuple of (source, message_count) where message_count is the number of
    messages received from the client with common name source.
  """
  messages, source, timestamp = self._communicator.DecodeMessages(
      request_comms)

  now = time.time()
  if messages:
    # Receive messages in line.
    self.ReceiveMessages(source, messages)

  # We send the client a maximum of self.max_queue_size messages
  required_count = max(0, self.max_queue_size - request_comms.queue_size)
  tasks = []

  message_list = rdf_flows.MessageList()
  # Only give the client messages if we are able to receive them in a
  # reasonable time.
  if time.time() - now < 10:
    tasks = self.DrainTaskSchedulerQueueForClient(source, required_count)
    message_list.job = tasks

  # Encode the message_list in the response_comms using the same API version
  # the client used.
  self._communicator.EncodeMessages(
      message_list,
      response_comms,
      destination=source,
      timestamp=timestamp,
      api_version=request_comms.api_version)

  return source, len(messages)
def _gen_addr(entry):
    """Generates a vCard Address object"""
    return Address(street=entry.get('address', ''),
                   extended=entry.get('address2', ''),
                   city=entry.get('city', ''),
                   region=entry.get('state', ''),
                   code=entry.get('zip', ''),
                   country=entry.get('country', ''))
def save(self, savepath, **kwargs):
    """
    Saves the geojson instance to file. To save with a different
    text encoding use the 'encoding' argument.

    Parameters:

    - **savepath**: Filepath to save the file.
    """
    self.update_bbox()
    tempfile = open(savepath, "w")
    json.dump(self._data, tempfile, **kwargs)
    tempfile.close()
def rover_turn_circle(SERVO_OUTPUT_RAW):
    '''return turning circle (diameter) in meters for steering_angle in degrees
    '''
    # this matches Toms slash
    max_wheel_turn = 35
    wheelbase = 0.335
    wheeltrack = 0.296

    steering_angle = max_wheel_turn * (SERVO_OUTPUT_RAW.servo1_raw - 1500) / 400.0
    theta = radians(steering_angle)
    return (wheeltrack / 2) + (wheelbase / sin(theta))
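The sample above reads `servo1_raw` off a MAVLink SERVO_OUTPUT_RAW message; a minimal stand-in object is enough to exercise it. The sketch below is a self-contained re-implementation for illustration (the `Servo` namedtuple is hypothetical, not part of MAVLink), and note that a centered servo (`servo1_raw == 1500`) gives `sin(0)` and would divide by zero:

```python
from math import radians, sin
from collections import namedtuple

def rover_turn_circle(SERVO_OUTPUT_RAW):
    """Turning circle in metres from the raw steering-servo PWM value."""
    max_wheel_turn = 35   # degrees of wheel deflection at full servo throw
    wheelbase = 0.335     # metres
    wheeltrack = 0.296    # metres
    steering_angle = max_wheel_turn * (SERVO_OUTPUT_RAW.servo1_raw - 1500) / 400.0
    theta = radians(steering_angle)
    return (wheeltrack / 2) + (wheelbase / sin(theta))

# Hypothetical stand-in for the MAVLink message; only servo1_raw is used.
Servo = namedtuple('Servo', 'servo1_raw')
circle = rover_turn_circle(Servo(servo1_raw=1900))  # full right deflection
```

At full deflection the steering angle is 35 * 400 / 400 = 35 degrees, so the result is 0.148 + 0.335 / sin(35°), roughly 0.73 metres.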
def get(self, key, timeout=1, is_async=False, only_read=True):
    """
    Test:

    >>> import time
    >>> cache = Cache(expire=2)
    >>> cache.put(key='a', value=0)
    >>> cache.put(key='b', value=1)
    >>> cache.put(key='c', value=2)
    >>> cache.get('a')
    0
    >>> cache.get('b')
    1
    >>> cache.get('c')
    2
    >>> cache.get('e') == None
    True
    >>> time.sleep(2)
    >>> cache.put(key='e', value=4)
    >>> cache.get('a') == None
    True
    >>> cache.get('b') == None
    True
    >>> cache.get('c') == None
    True
    """
    if key not in self.cache_items:
        self.logger.debug('Cache item <%s> missing' % key)
        return None
    item = self.cache_items.pop(key)
    item.update_hit_count()
    if self.read_after_refresh_expire:
        item.refresh_expire(self.expire)
    value = item[key]
    self.cache_items[key] = item
    self.total_access_count += 1
    return value
def _is_dirty(dir_path):
    """Check whether a git repository has uncommitted changes."""
    try:
        subprocess.check_call(["git", "diff", "--quiet"], cwd=dir_path)
        return False
    except subprocess.CalledProcessError:
        return True
def _batch_gather_with_broadcast(params, indices, axis):
  """Like batch_gather, but broadcasts to the left of axis."""
  # batch_gather assumes...
  #   params.shape =  [A1,...,AN, B1,...,BM]
  #   indices.shape = [A1,...,AN, C]
  # which gives output of shape
  #   [A1,...,AN, C, B1,...,BM]
  # Here we broadcast dims of each to the left of `axis` in params, and left of
  # the rightmost dim in indices, e.g. we can have
  #   params.shape =  [A1,...,AN, B1,...,BM]
  #   indices.shape = [a1,...,aN, C],
  # where ai broadcasts with Ai.

  # leading_bcast_shape is the broadcast of [A1,...,AN] and [a1,...,aN].
  leading_bcast_shape = tf.broadcast_dynamic_shape(
      tf.shape(input=params)[:axis], tf.shape(input=indices)[:-1])
  params += tf.zeros(
      tf.concat((leading_bcast_shape, tf.shape(input=params)[axis:]), axis=0),
      dtype=params.dtype)
  indices += tf.zeros(
      tf.concat((leading_bcast_shape, tf.shape(input=indices)[-1:]), axis=0),
      dtype=indices.dtype)
  return tf.compat.v1.batch_gather(params, indices)
def learn(self, initial_state_key, limit=1000, game_n=1):
    '''
    Multi-Agent Learning.

    Override.

    Args:
        initial_state_key:  Initial state.
        limit:              Limit of the number of learning.
        game_n:             The number of games.
    '''
    end_flag_list = [False] * len(self.q_learning_list)
    for game in range(game_n):
        state_key = copy.copy(initial_state_key)
        self.t = 1
        while self.t <= limit:
            for i in range(len(self.q_learning_list)):
                if game + 1 == game_n:
                    self.state_key_list.append((i, copy.copy(state_key)))
                self.q_learning_list[i].t = self.t
                next_action_list = self.q_learning_list[i].extract_possible_actions(state_key)
                if len(next_action_list):
                    action_key = self.q_learning_list[i].select_action(
                        state_key=state_key,
                        next_action_list=next_action_list
                    )
                    reward_value = self.q_learning_list[i].observe_reward_value(state_key, action_key)

                    # Check.
                    if self.q_learning_list[i].check_the_end_flag(state_key) is True:
                        end_flag_list[i] = True

                    # Max-Q-Value in next action time.
                    next_state_key = self.q_learning_list[i].update_state(
                        state_key=state_key,
                        action_key=action_key
                    )
                    next_next_action_list = self.q_learning_list[i].extract_possible_actions(next_state_key)
                    if len(next_next_action_list):
                        next_action_key = self.q_learning_list[i].predict_next_action(
                            next_state_key,
                            next_next_action_list
                        )
                        next_max_q = self.q_learning_list[i].extract_q_df(next_state_key, next_action_key)

                        # Update Q-Value.
                        self.q_learning_list[i].update_q(
                            state_key=state_key,
                            action_key=action_key,
                            reward_value=reward_value,
                            next_max_q=next_max_q
                        )

                    # Update State.
                    state_key = next_state_key

                # Episode.
                self.t += 1
                self.q_learning_list[i].t = self.t

            if False not in end_flag_list:
                break
def _close_connections(self, connection=None, timeout=5):
    """Close ``connection`` if specified, otherwise close all connections.

    Return a list of :class:`.Future` called back once the connection/s
    are closed.
    """
    all = []
    if connection:
        waiter = connection.event('connection_lost').waiter()
        if waiter:
            all.append(waiter)
        connection.close()
    else:
        connections = list(self._concurrent_connections)
        self._concurrent_connections = set()
        for connection in connections:
            waiter = connection.event('connection_lost').waiter()
            if waiter:
                all.append(waiter)
            connection.close()
    if all:
        self.logger.info('%s closing %d connections', self, len(all))
    return asyncio.wait(all, timeout=timeout, loop=self._loop)
def add(self):
    """Add service definition to hierarchy."""
    yield self.client.create(self.path)
    yield self.client.create(self.path + "/type", self.name)
    yield self.client.create(self.path + "/state")
    yield self.client.create(self.path + "/machines", "[]")
    log.debug("registered service '%s' at %s." % (self.name, self.path))
def _init_data_with_tdms(self, tdms_filename):
    """Initializes the current RT-DC dataset with a tdms file.
    """
    tdms_file = TdmsFile(str(tdms_filename))
    # time is always there
    table = "Cell Track"
    # Edit naming.dclab2tdms to add features
    for arg in naming.tdms2dclab:
        try:
            data = tdms_file.object(table, arg).data
        except KeyError:
            pass
        else:
            if data is None or len(data) == 0:
                # Ignore empty features. npTDMS treats empty
                # features in the following way:
                # - in nptdms 0.8.2, `data` is `None`
                # - in nptdms 0.9.0, `data` is an array of length 0
                continue
            self._events[naming.tdms2dclab[arg]] = data
    # Set up configuration
    tdms_config = Configuration(
        files=[self.path.with_name(self._mid + "_para.ini"),
               self.path.with_name(self._mid + "_camera.ini")],
    )
    dclab_config = Configuration()
    for section in naming.configmap:
        for pname in naming.configmap[section]:
            meta = naming.configmap[section][pname]
            typ = dfn.config_funcs[section][pname]
            if isinstance(meta, tuple):
                osec, opar = meta
                if osec in tdms_config and opar in tdms_config[osec]:
                    val = tdms_config[osec].pop(opar)
                    dclab_config[section][pname] = typ(val)
            else:
                dclab_config[section][pname] = typ(meta)
    self.config = dclab_config
    self._complete_config_tdms(tdms_config)
    self._init_filters()
def load_jupyter_server_extension(nbapp):
    """Load the server extension.
    """
    here = PACKAGE_DIR
    nbapp.log.info('nteract extension loaded from %s' % here)

    app_dir = here  # bundle is part of the python package
    web_app = nbapp.web_app
    config = NteractConfig(parent=nbapp)
    # original
    # config.assets_dir = os.path.join(app_dir, 'static')
    config.assets_dir = app_dir
    config.page_url = '/nteract'
    config.dev_mode = False

    # Check for core mode.
    core_mode = ''
    if hasattr(nbapp, 'core_mode'):
        core_mode = nbapp.core_mode

    # Check for an app dir that is local.
    if app_dir == here or app_dir == os.path.join(here, 'build'):
        core_mode = True
        config.settings_dir = ''

    web_app.settings.setdefault('page_config_data', dict())
    web_app.settings['page_config_data']['token'] = nbapp.token
    web_app.settings['page_config_data']['ga_code'] = config.ga_code
    web_app.settings['page_config_data']['asset_url'] = config.asset_url
    web_app.settings['nteract_config'] = config
    add_handlers(web_app, config)
def _resolve_group_location(self, group: str) -> str:
    """
    Resolves the location of a setting file based on the given identifier.
    :param group: the identifier for the group's settings file (~its location)
    :return: the absolute path of the settings location
    """
    if os.path.isabs(group):
        possible_paths = [group]
    else:
        possible_paths = []
        for repository in self.setting_repositories:
            possible_paths.append(os.path.join(repository, group))

    for default_setting_extension in self.default_setting_extensions:
        number_of_paths = len(possible_paths)
        for i in range(number_of_paths):
            path_with_extension = "%s.%s" % (possible_paths[i], default_setting_extension)
            possible_paths.append(path_with_extension)

    for path in possible_paths:
        if os.path.exists(path):
            return path

    raise ValueError("Could not resolve location of settings identified by: \"%s\"" % group)
def random(cls, length=Hash.LEN):
    """
    Generates a random seed using a CSPRNG.

    :param length:
        Length of seed, in trytes.

        For maximum security, this should always be set to 81, but
        you can change it if you're 110% sure you know what you're
        doing.

        See https://iota.stackexchange.com/q/249 for more info.
    """
    return super(Seed, cls).random(length)
def get_cartesian(lat, lon):
    """
    the x-axis goes through long,lat (0,0), so longitude 0 meets the equator;
    the y-axis goes through (0,90); and the z-axis goes through the poles.

    In other words:

    * (0, 0, 0) is the center of earth
    * (0, 0, R) is the north pole
    * (0, 0, -R) is the south pole
    * (R, 0, 0) is somewhere in africa?

    :param lat: latitude in radians
    :param lon: longitude in radians
    """
    R = 6371000.0  # metres
    x = R * math.cos(lat) * math.cos(lon)
    y = R * math.cos(lat) * math.sin(lon)
    z = R * math.sin(lat)
    return [x, y, z]
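The landmark points listed in the docstring above make this sample easy to sanity-check; the standalone sketch below re-implements it for illustration and evaluates it at the north pole and on the equator:

```python
import math

def get_cartesian(lat, lon):
    """Convert latitude/longitude in radians to earth-centred [x, y, z] metres."""
    R = 6371000.0  # mean earth radius in metres
    x = R * math.cos(lat) * math.cos(lon)
    y = R * math.cos(lat) * math.sin(lon)
    z = R * math.sin(lat)
    return [x, y, z]

north = get_cartesian(math.pi / 2, 0.0)   # expect roughly (0, 0, R)
equator = get_cartesian(0.0, 0.0)         # expect exactly (R, 0, 0)
```

The x and y components at the pole are not exactly zero, only on the order of `R * cos(pi/2)`, i.e. a fraction of a nanometre, because `math.cos(math.pi / 2)` is a tiny nonzero float.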
def get_full_xml_representation(entity, private_key):
    """Get full XML representation of an entity.

    This contains the <XML><post>..</post></XML> wrapper.

    Accepts either a Base entity or a Diaspora entity.

    Author `private_key` must be given so that certain entities can be signed.
    """
    from federation.entities.diaspora.mappers import get_outbound_entity
    diaspora_entity = get_outbound_entity(entity, private_key)
    xml = diaspora_entity.to_xml()
    return "<XML><post>%s</post></XML>" % etree.tostring(xml).decode("utf-8")
def find_binary(self, binary):
    """
    Scan and return the first path to a binary that we can find
    """
    if os.path.exists(binary):
        return binary

    # Extract out the filename if we were given a full path
    binary_name = os.path.basename(binary)

    # Gather $PATH
    search_paths = os.environ['PATH'].split(':')

    # Extra paths to scan...
    default_paths = [
        '/usr/bin',
        '/bin',
        '/usr/local/bin',
        '/usr/sbin',
        '/sbin',
        '/usr/local/sbin',
    ]
    for path in default_paths:
        if path not in search_paths:
            search_paths.append(path)

    for path in search_paths:
        if os.path.isdir(path):
            filename = os.path.join(path, binary_name)
            if os.path.exists(filename):
                return filename

    return binary
def queuedb_row_factory(cursor, row):
    """
    Dict row factory
    """
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d
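A row factory like the one above plugs straight into the standard-library sqlite3 module, which calls it with `(cursor, row)` for every fetched row. The sketch below is a self-contained illustration (the table and data are made up):

```python
import sqlite3

def queuedb_row_factory(cursor, row):
    """Map a result row to a dict keyed by column name."""
    return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}

conn = sqlite3.connect(":memory:")
conn.row_factory = queuedb_row_factory
conn.execute("CREATE TABLE queue (id INTEGER, payload TEXT)")
conn.execute("INSERT INTO queue VALUES (1, 'hello')")
row = conn.execute("SELECT id, payload FROM queue").fetchone()
# row == {'id': 1, 'payload': 'hello'}
```

Setting `row_factory` on the connection applies it to every cursor created from that connection.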
def from_dict(cls, connector, ip_dict):
    """Build dict fields as SubObjects if needed.

    Checks if lambda for building object from dict exists.
    _global_field_processing and _custom_field_processing rules are checked.
    """
    mapping = cls._global_field_processing.copy()
    mapping.update(cls._custom_field_processing)
    # Process fields that require building themselves as objects
    for field in mapping:
        if field in ip_dict:
            ip_dict[field] = mapping[field](ip_dict[field])
    return cls(connector, **ip_dict)
Build dict fields as SubObjects if needed. Checks if lambda for building object from dict exists. _global_field_processing and _custom_field_processing rules are checked.
def handle(self, *test_labels, **options):
    """
    Set the default Gherkin test runner.
    """
    if not options.get('testrunner', None):
        options['testrunner'] = test_runner_class
    return super(Command, self).handle(*test_labels, **options)
Set the default Gherkin test runner.
def extract_headers(headers):
    """This function extracts valid headers from interactive input."""
    sorted_headers = {}
    matches = re.findall(r'(.*):\s(.*)', headers)
    for match in matches:
        header = match[0]
        value = match[1]
        try:
            if value[-1] == ',':
                value = value[:-1]
            sorted_headers[header] = value
        except IndexError:
            # Empty value: value[-1] raised, so the header is skipped.
            pass
    return sorted_headers
This function extracts valid headers from interactive input.
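A self-contained restatement of the parsing above (the explicit `if value:` guard stands in for the original's `IndexError` handling of empty values):

```python
import re

def extract_headers(headers):
    # Each "Name: value" line becomes a dict entry; a trailing comma is dropped.
    sorted_headers = {}
    for header, value in re.findall(r'(.*):\s(.*)', headers):
        if value.endswith(','):
            value = value[:-1]
        if value:  # skip headers with empty values, like the original
            sorted_headers[header] = value
    return sorted_headers

raw = "Accept: application/json,\nUser-Agent: demo"
print(extract_headers(raw))  # {'Accept': 'application/json', 'User-Agent': 'demo'}
```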
def get_trips(
    feed: "Feed", date: Optional[str] = None, time: Optional[str] = None
) -> DataFrame:
    """
    Return a subset of ``feed.trips``.

    Parameters
    ----------
    feed : Feed
    date : string
        YYYYMMDD date string
    time : string
        HH:MM:SS time string, possibly with HH > 23

    Returns
    -------
    DataFrame
        The subset of ``feed.trips`` containing trips active
        (starting) on the given date at the given time.

        If no date or time are specified, then return the entire
        ``feed.trips``.
    """
    if feed.trips is None or date is None:
        return feed.trips

    f = feed.trips.copy()
    f["is_active"] = f["trip_id"].map(
        lambda trip_id: feed.is_active_trip(trip_id, date)
    )
    f = f[f["is_active"]].copy()
    del f["is_active"]

    if time is not None:
        # Get trips active during given time
        g = pd.merge(f, feed.stop_times[["trip_id", "departure_time"]])

        def F(group):
            d = {}
            start = group["departure_time"].dropna().min()
            end = group["departure_time"].dropna().max()
            try:
                result = start <= time <= end
            except TypeError:
                result = False
            d["is_active"] = result
            return pd.Series(d)

        h = g.groupby("trip_id").apply(F).reset_index()
        f = pd.merge(f, h[h["is_active"]])
        del f["is_active"]

    return f
Return a subset of ``feed.trips``.

Parameters
----------
feed : Feed
date : string
    YYYYMMDD date string
time : string
    HH:MM:SS time string, possibly with HH > 23

Returns
-------
DataFrame
    The subset of ``feed.trips`` containing trips active
    (starting) on the given date at the given time.

    If no date or time are specified, then return the entire
    ``feed.trips``.
def get_hyperedge_id_mapping(H):
    """Generates mappings between the set of hyperedge IDs and integer
    indices (where every hyperedge ID corresponds to exactly 1 integer
    index).

    :param H: the hypergraph to find the hyperedge ID mapping on.
    :returns: dict -- for each integer index, maps the index to the
              hyperedge ID.
              dict -- for each hyperedge ID, maps the hyperedge ID to the
              integer index.
    :raises: TypeError -- Algorithm only applicable to undirected
             hypergraphs
    """
    if not isinstance(H, UndirectedHypergraph):
        raise TypeError("Algorithm only applicable to undirected hypergraphs")

    indices_to_hyperedge_ids, hyperedge_ids_to_indices = {}, {}
    hyperedge_index = 0
    for hyperedge_id in H.hyperedge_id_iterator():
        hyperedge_ids_to_indices.update({hyperedge_id: hyperedge_index})
        indices_to_hyperedge_ids.update({hyperedge_index: hyperedge_id})
        hyperedge_index += 1

    return indices_to_hyperedge_ids, hyperedge_ids_to_indices
Generates mappings between the set of hyperedge IDs and integer indices
(where every hyperedge ID corresponds to exactly 1 integer index).

:param H: the hypergraph to find the hyperedge ID mapping on.
:returns: dict -- for each integer index, maps the index to the
          hyperedge ID.
          dict -- for each hyperedge ID, maps the hyperedge ID to the
          integer index.
:raises: TypeError -- Algorithm only applicable to undirected hypergraphs
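Stripped of the hypergraph dependency, the mapping idiom above is a bidirectional `enumerate`; a generic sketch (the function name is illustrative):

```python
def build_id_mapping(ids):
    # Every ID gets exactly one integer index, and vice versa,
    # mirroring the two dicts returned by get_hyperedge_id_mapping.
    indices_to_ids, ids_to_indices = {}, {}
    for index, some_id in enumerate(ids):
        ids_to_indices[some_id] = index
        indices_to_ids[index] = some_id
    return indices_to_ids, ids_to_indices

idx2id, id2idx = build_id_mapping(['e1', 'e2', 'e3'])
print(idx2id[0], id2idx['e3'])  # e1 2
```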
def mosaic_info(name, pretty):
    '''Get information for a specific mosaic'''
    cl = clientv1()
    echo_json_response(call_and_wrap(cl.get_mosaic_by_name, name), pretty)
Get information for a specific mosaic
def create_user(self, username, password, roles):
    """
    Create a user.

    @param username: Username
    @param password: Password
    @param roles: List of roles for the user. This should be [] for a
                  regular user, or ['ROLE_ADMIN'] for an admin.
    @return: An ApiUser object
    """
    return users.create_user(self, username, password, roles)
Create a user.

@param username: Username
@param password: Password
@param roles: List of roles for the user. This should be [] for a
              regular user, or ['ROLE_ADMIN'] for an admin.
@return: An ApiUser object
def component_file_refs(filelist):
    '''Get a list of what elements/references exist in component JSON files

    Parameters
    ----------
    filelist : list
        A list of paths to json files

    Returns
    -------
    dict
        Keys are the file path, value is a list of tuples
        (compacted element string, refs tuple)
    '''
    ret = {}
    for fpath in filelist:
        filedata = fileio.read_json_basis(fpath)

        refdict = {}
        for el, eldata in filedata['elements'].items():
            refs = tuple(eldata['references'])
            if refs not in refdict:
                refdict[refs] = [el]
            else:
                refdict[refs].append(el)

        entry = []
        for k, v in refdict.items():
            entry.append((misc.compact_elements(v), k))

        ret[fpath] = entry
    return ret
Get a list of what elements/references exist in component JSON files

Parameters
----------
filelist : list
    A list of paths to json files

Returns
-------
dict
    Keys are the file path, value is a list of tuples
    (compacted element string, refs tuple)
def _get_uploaded_versions_pypicloud(project_name, index_url, requests_verify=True):
    """ Query the pypi index at index_url using pypicloud api to find all versions """
    api_url = index_url
    for suffix in ('/pypi', '/pypi/', '/simple', '/simple/'):
        if api_url.endswith(suffix):
            api_url = api_url[:len(suffix) * -1] + '/api/package'
            break

    url = '/'.join((api_url, project_name))
    response = requests.get(url, verify=requests_verify)
    if response.status_code == 200:
        return [p['version'] for p in response.json()['packages']]
    return None
Query the pypi index at index_url using pypicloud api to find all versions
def getQCAnalyses(self, qctype=None, review_state=None):
    """return the QC analyses performed in the worksheet in which, at
    least, one sample of this AR is present.

    Depending on qctype value, returns the analyses of:
    - 'b': all Blank Reference Samples used in related worksheet/s
    - 'c': all Control Reference Samples used in related worksheet/s
    - 'd': duplicates only for samples contained in this AR

    If qctype==None, returns all type of qc analyses mentioned above
    """
    qcanalyses = []
    suids = []
    ans = self.getAnalyses()
    wf = getToolByName(self, 'portal_workflow')
    for an in ans:
        an = an.getObject()
        if an.getServiceUID() not in suids:
            suids.append(an.getServiceUID())

    def valid_dup(wan):
        if wan.portal_type == 'ReferenceAnalysis':
            return False
        an_state = wf.getInfoFor(wan, 'review_state')
        return \
            wan.portal_type == 'DuplicateAnalysis' \
            and wan.getRequestID() == self.id \
            and (review_state is None or an_state in review_state)

    def valid_ref(wan):
        if wan.portal_type != 'ReferenceAnalysis':
            return False
        an_state = wf.getInfoFor(wan, 'review_state')
        an_reftype = wan.getReferenceType()
        return wan.getServiceUID() in suids \
            and wan not in qcanalyses \
            and (qctype is None or an_reftype == qctype) \
            and (review_state is None or an_state in review_state)

    for an in ans:
        an = an.getObject()
        ws = an.getWorksheet()
        if not ws:
            continue
        was = ws.getAnalyses()
        for wa in was:
            if valid_dup(wa):
                qcanalyses.append(wa)
            elif valid_ref(wa):
                qcanalyses.append(wa)
    return qcanalyses
return the QC analyses performed in the worksheet in which, at least,
one sample of this AR is present.

Depending on qctype value, returns the analyses of:
- 'b': all Blank Reference Samples used in related worksheet/s
- 'c': all Control Reference Samples used in related worksheet/s
- 'd': duplicates only for samples contained in this AR

If qctype==None, returns all type of qc analyses mentioned above
def _download(url):
    """Downloads a URL and returns a file-like object open for reading,
    compatible with zipfile.ZipFile (it has a seek() method).
    """
    fh = StringIO()
    for line in get(url):
        fh.write(line)
    fh.seek(0)
    return fh
Downloads a URL and returns a file-like object open for reading, compatible with zipfile.ZipFile (it has a seek() method).
def _convert_postmark_to_native(cls, message):
    '''Converts Postmark message API field names to their corresponding
    :class:`Message` attribute names.

    :param message: Postmark message data, with API fields using
        Postmark API names.
    :type message: `dict`
    '''
    d = {}
    for dest, src in cls._fields.items():
        if src in message:
            d[dest] = message[src]
    return d
Converts Postmark message API field names to their corresponding
:class:`Message` attribute names.

:param message: Postmark message data, with API fields using Postmark API names.
:type message: `dict`
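A toy sketch of the field-mapping pattern (the `_fields` table below is hypothetical; the real class defines its own mapping of attribute names to Postmark API names):

```python
class Message(object):
    # Hypothetical attribute-name -> API-name table for illustration only.
    _fields = {'to': 'To', 'subject': 'Subject', 'text_body': 'TextBody'}

    @classmethod
    def _convert_postmark_to_native(cls, message):
        # Copy only the fields the table knows about, renaming as we go.
        d = {}
        for dest, src in cls._fields.items():
            if src in message:
                d[dest] = message[src]
        return d

api_payload = {'To': 'a@example.com', 'Subject': 'hi', 'Extra': 'ignored'}
print(Message._convert_postmark_to_native(api_payload))
# {'to': 'a@example.com', 'subject': 'hi'}
```

Unknown keys in the payload (like `'Extra'`) are silently dropped, since the loop iterates over the mapping rather than the message.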
def pasa(args):
    """
    %prog pasa pasa_db fastafile

    Run EVM in TIGR-only mode.
    """
    p = OptionParser(pasa.__doc__)
    opts, args = p.parse_args(args)

    if len(args) != 2:
        sys.exit(not p.print_help())

    pasa_db, fastafile = args

    termexons = "pasa.terminal_exons.gff3"
    if need_update(fastafile, termexons):
        cmd = "$ANNOT_DEVEL/PASA2/scripts/pasa_asmbls_to_training_set.dbi"
        cmd += ' -M "{0}:mysql.tigr.org" -p "access:access"'.format(pasa_db)
        cmd += ' -g {0}'.format(fastafile)
        sh(cmd)

        cmd = "$EVM/PasaUtils/retrieve_terminal_CDS_exons.pl"
        cmd += " trainingSetCandidates.fasta trainingSetCandidates.gff"
        sh(cmd, outfile=termexons)
    return termexons
%prog pasa pasa_db fastafile

Run EVM in TIGR-only mode.
def zone_create_or_update(name, resource_group, **kwargs):
    '''
    .. versionadded:: Fluorine

    Creates or updates a DNS zone. Does not modify DNS records within the zone.

    :param name: The name of the DNS zone to create (without a terminating dot).

    :param resource_group: The name of the resource group.

    CLI Example:

    .. code-block:: bash

        salt-call azurearm_dns.zone_create_or_update myzone testgroup
    '''
    # DNS zones are global objects
    kwargs['location'] = 'global'

    dnsconn = __utils__['azurearm.get_client']('dns', **kwargs)

    # Convert list of ID strings to list of dictionaries with id key.
    if isinstance(kwargs.get('registration_virtual_networks'), list):
        kwargs['registration_virtual_networks'] = [
            {'id': vnet} for vnet in kwargs['registration_virtual_networks']
        ]
    if isinstance(kwargs.get('resolution_virtual_networks'), list):
        kwargs['resolution_virtual_networks'] = [
            {'id': vnet} for vnet in kwargs['resolution_virtual_networks']
        ]

    try:
        zone_model = __utils__['azurearm.create_object_model']('dns', 'Zone', **kwargs)
    except TypeError as exc:
        result = {'error': 'The object model could not be built. ({0})'.format(str(exc))}
        return result

    try:
        zone = dnsconn.zones.create_or_update(
            zone_name=name,
            resource_group_name=resource_group,
            parameters=zone_model,
            if_match=kwargs.get('if_match'),
            if_none_match=kwargs.get('if_none_match')
        )
        result = zone.as_dict()
    except CloudError as exc:
        __utils__['azurearm.log_cloud_error']('dns', str(exc), **kwargs)
        result = {'error': str(exc)}
    except SerializationError as exc:
        result = {'error': 'The object model could not be parsed. ({0})'.format(str(exc))}

    return result
.. versionadded:: Fluorine

Creates or updates a DNS zone. Does not modify DNS records within the zone.

:param name: The name of the DNS zone to create (without a terminating dot).

:param resource_group: The name of the resource group.

CLI Example:

.. code-block:: bash

    salt-call azurearm_dns.zone_create_or_update myzone testgroup
def main(*args):
    '''
    benson14_retinotopy.main(args...) runs the benson14_retinotopy command;
    see benson14_retinotopy.info for more information.
    '''
    # Parse the arguments...
    (args, opts) = _benson14_parser(*args)
    # help?
    if opts['help']:
        print(info, file=sys.stdout)
        return 1
    # verbose?
    if opts['verbose']:
        def note(s):
            print(s, file=sys.stdout)
            return True
    else:
        def note(s):
            return False
    # based on format, how do we export?
    sfmt = opts['surf_format'].lower()
    if sfmt in ['curv', 'auto', 'automatic', 'morph']:
        sfmt = 'freesurfer_morph'
        sext = ''
    elif sfmt == 'nifti':
        sext = '.nii.gz'
    elif sfmt in ['mgh', 'mgz', 'nii', 'nii.gz']:
        sext = '.' + sfmt
    else:
        raise ValueError('Unknown surface format: %s' % opts['surf_format'])
    vfmt = opts['vol_format'].lower()
    if vfmt == 'nifti':
        vext = '.nii.gz'
    elif vfmt in ['mgh', 'mgz', 'nii', 'nii.gz']:
        vext = '.' + vfmt
    else:
        raise ValueError('Unknown volume format: %s' % opts['vol_format'])
    # Add the subjects directory, if there is one
    if 'subjects_dir' in opts and opts['subjects_dir'] is not None:
        add_subject_path(opts['subjects_dir'])
    ow = not opts['no_overwrite']
    nse = opts['no_surf_export']
    nve = opts['no_vol_export']
    tr = {'angle': opts['angle_tag'],
          'eccen': opts['eccen_tag'],
          'varea': opts['label_tag'],
          'sigma': opts['sigma_tag']}
    # okay, now go through the subjects...
    for subnm in args:
        note('Processing subject %s:' % subnm)
        sub = subject(subnm)
        note(' - Interpolating template...')
        (lhdat, rhdat) = predict_retinotopy(sub,
                                            template=opts['template'],
                                            registration=opts['registration'])
        # Export surfaces
        if nse:
            note(' - Skipping surface export.')
        else:
            note(' - Exporting surfaces:')
            for (t, dat) in six.iteritems(lhdat):
                flnm = os.path.join(sub.path, 'surf', 'lh.' + tr[t] + sext)
                if ow or not os.path.exists(flnm):
                    note('   - Exporting LH prediction file: %s' % flnm)
                    nyio.save(flnm, dat, format=sfmt)
                else:
                    note('   - Not overwriting existing file: %s' % flnm)
            for (t, dat) in six.iteritems(rhdat):
                flnm = os.path.join(sub.path, 'surf', 'rh.' + tr[t] + sext)
                if ow or not os.path.exists(flnm):
                    note('   - Exporting RH prediction file: %s' % flnm)
                    nyio.save(flnm, dat, format=sfmt)
                else:
                    note('   - Not overwriting existing file: %s' % flnm)
        # Export volumes
        if nve:
            note(' - Skipping volume export.')
        else:
            note(' - Exporting Volumes:')
            for t in lhdat.keys():
                flnm = os.path.join(sub.path, 'mri', tr[t] + vext)
                if ow or not os.path.exists(flnm):
                    note('   - Preparing volume file: %s' % flnm)
                    dtyp = (np.int32 if t == 'varea' else np.float32)
                    vol = sub.cortex_to_image(
                        (lhdat[t], rhdat[t]),
                        method=('nearest' if t == 'varea' else 'linear'),
                        dtype=dtyp)
                    note('   - Exporting volume file: %s' % flnm)
                    nyio.save(flnm, vol, like=sub)
                else:
                    note('   - Not overwriting existing file: %s' % flnm)
        note(' Subject %s finished!' % sub.name)
    return 0
benson14_retinotopy.main(args...) runs the benson14_retinotopy command; see benson14_retinotopy.info for more information.
def update(self, collection, selector, modifier, callback=None):
    """Update documents in a collection

    Arguments:
    collection - the collection to be modified
    selector - specifies which documents to modify
    modifier - Specifies how to modify the documents

    Keyword Arguments:
    callback - Optional. If present, called with an error object as the
        first argument and, if no error, the number of affected documents
        as the second."""
    self.call("/" + collection + "/update", [selector, modifier],
              callback=callback)
Update documents in a collection

Arguments:
collection - the collection to be modified
selector - specifies which documents to modify
modifier - Specifies how to modify the documents

Keyword Arguments:
callback - Optional. If present, called with an error object as the first
    argument and, if no error, the number of affected documents as the
    second.
def create_network(kwargs=None, call=None):
    '''
    .. versionchanged:: 2017.7.0

    Create a GCE network. Must specify name and cidr.

    CLI Example:

    .. code-block:: bash

        salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24 mode=legacy description=optional
        salt-cloud -f create_network gce name=mynet description=optional
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The create_network function must be called with -f or --function.'
        )

    if not kwargs or 'name' not in kwargs:
        log.error(
            'A name must be specified when creating a network.'
        )
        return False

    mode = kwargs.get('mode', 'legacy')
    cidr = kwargs.get('cidr', None)
    if cidr is None and mode == 'legacy':
        log.error(
            'A network CIDR range must be specified when creating a legacy network.'
        )
        return False

    name = kwargs['name']
    desc = kwargs.get('description', None)

    conn = get_conn()

    __utils__['cloud.fire_event'](
        'event',
        'creating network',
        'salt/cloud/net/creating',
        args={
            'name': name,
            'cidr': cidr,
            'description': desc,
            'mode': mode
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    network = conn.ex_create_network(name, cidr, desc, mode)

    __utils__['cloud.fire_event'](
        'event',
        'created network',
        'salt/cloud/net/created',
        args={
            'name': name,
            'cidr': cidr,
            'description': desc,
            'mode': mode
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )
    return _expand_item(network)
... versionchanged:: 2017.7.0 Create a GCE network. Must specify name and cidr. CLI Example: .. code-block:: bash salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24 mode=legacy description=optional salt-cloud -f create_network gce name=mynet description=optional
def parse(self, file_name): """ Parse entire file and return relevant object. :param file_name: File path :type file_name: str :return: Parsed object """ self.object = self.parsed_class() with open(file_name, encoding='utf-8') as f: self.parse_str(f.read()) return self.object
Parse entire file and return relevant object. :param file_name: File path :type file_name: str :return: Parsed object
def configure_custom(self, config): """Configure an object with a user-supplied factory.""" c = config.pop('()') if not hasattr(c, '__call__') and \ hasattr(types, 'ClassType') and isinstance(c, types.ClassType): c = self.resolve(c) props = config.pop('.', None) # Check for valid identifiers kwargs = dict((k, config[k]) for k in config if valid_ident(k)) result = c(**kwargs) if props: for name, value in props.items(): setattr(result, name, value) return result
Configure an object with a user-supplied factory.
def get_shrunk_data(shrink_info): """Read shrunk file from tinypng.org api.""" out_url = shrink_info['output']['url'] try: return requests.get(out_url).content except HTTPError as err: if err.code != 404: raise exc = ValueError("Unable to read png file \"{0}\"".format(out_url)) exc.__cause__ = err raise exc
Read shrunk file from tinypng.org api.
def map_return_value(self, return_value): """ Returns the mapped return_value of a processed request. If no return_mapping has been defined, the value is returned as is. If return_mapping is a static value, that value is returned, ignoring return_value completely. :param return_value: Value to map. :return: Mapped return value. """ if callable(self.return_mapping): return self.return_mapping(return_value) if self.return_mapping is not None: return self.return_mapping return return_value
Returns the mapped return_value of a processed request. If no return_mapping has been defined, the value is returned as is. If return_mapping is a static value, that value is returned, ignoring return_value completely. :param return_value: Value to map. :return: Mapped return value.
def connect(self): """ Connect to one of the Disque nodes. You can get current connection with connected_node property :returns: nothing """ self.connected_node = None for i, node in self.nodes.items(): host, port = i.split(':') port = int(port) redis_client = redis.Redis(host, port, **self.client_kw_args) try: ret = redis_client.execute_command('HELLO') format_version, node_id = ret[0], ret[1] others = ret[2:] self.nodes[i] = Node(node_id, host, port, redis_client) self.connected_node = self.nodes[i] except redis.exceptions.ConnectionError: pass if not self.connected_node: raise ConnectionError('couldnt connect to any nodes') logger.info("connected to node %s" % self.connected_node)
Connect to one of the Disque nodes. You can get current connection with connected_node property :returns: nothing
def _check_operator(self, operator): """ Check Set-Up This method checks algorithm operator against the expected parent classes Parameters ---------- operator : str Algorithm operator to check """ if not isinstance(operator, type(None)): tree = [obj.__name__ for obj in getmro(operator.__class__)] if not any([parent in tree for parent in self._op_parents]): warn('{0} does not inherit an operator ' 'parent.'.format(str(operator.__class__)))
Check Set-Up This method checks algorithm operator against the expected parent classes Parameters ---------- operator : str Algorithm operator to check
def exceptions_log_path(cls, for_pid=None, in_dir=None): """Get the path to either the shared or pid-specific fatal errors log file.""" if for_pid is None: intermediate_filename_component = '' else: assert(isinstance(for_pid, IntegerForPid)) intermediate_filename_component = '.{}'.format(for_pid) in_dir = in_dir or cls._log_dir return os.path.join( in_dir, 'logs', 'exceptions{}.log'.format(intermediate_filename_component))
Get the path to either the shared or pid-specific fatal errors log file.
def generate_synthetic_magnitudes(aval, bval, mmin, mmax, nyears): ''' Generates a synthetic catalogue for a specified number of years, with magnitudes distributed according to a truncated Gutenberg-Richter distribution :param float aval: a-value :param float bval: b-value :param float mmin: Minimum Magnitude :param float mmax: Maximum Magnitude :param int nyears: Number of years :returns: Synthetic catalogue (dict) with year and magnitude attributes ''' nsamples = int(np.round(nyears * (10. ** (aval - bval * mmin)), 0)) year = np.random.randint(0, nyears, nsamples) # Get magnitudes mags = generate_trunc_gr_magnitudes(bval, mmin, mmax, nsamples) return {'magnitude': mags, 'year': np.sort(year)}
Generates a synthetic catalogue for a specified number of years, with magnitudes distributed according to a truncated Gutenberg-Richter distribution :param float aval: a-value :param float bval: b-value :param float mmin: Minimum Magnitude :param float mmax: Maximum Magnitude :param int nyears: Number of years :returns: Synthetic catalogue (dict) with year and magnitude attributes
def _modelmat(self, X, term=-1): """ Builds a model matrix, B, out of the spline basis for each feature B = [B_0, B_1, ..., B_p] Parameters --------- X : array-like of shape (n_samples, m_features) containing the input dataset term : int, optional term index for which to compute the model matrix if -1, will create the model matrix for all features Returns ------- modelmat : sparse matrix of len n_samples containing model matrix of the spline basis for selected features """ X = check_X(X, n_feats=self.statistics_['m_features'], edge_knots=self.edge_knots_, dtypes=self.dtype, features=self.feature, verbose=self.verbose) return self.terms.build_columns(X, term=term)
Builds a model matrix, B, out of the spline basis for each feature B = [B_0, B_1, ..., B_p] Parameters --------- X : array-like of shape (n_samples, m_features) containing the input dataset term : int, optional term index for which to compute the model matrix if -1, will create the model matrix for all features Returns ------- modelmat : sparse matrix of len n_samples containing model matrix of the spline basis for selected features
def xml_marshal_bucket_constraint(region): """ Marshal's bucket constraint based on *region*. :param region: Region name of a given bucket. :return: Marshalled XML data. """ root = s3_xml.Element('CreateBucketConfiguration', {'xmlns': _S3_NAMESPACE}) location_constraint = s3_xml.SubElement(root, 'LocationConstraint') location_constraint.text = region data = io.BytesIO() s3_xml.ElementTree(root).write(data, encoding=None, xml_declaration=False) return data.getvalue()
Marshal's bucket constraint based on *region*. :param region: Region name of a given bucket. :return: Marshalled XML data.
def street_address(self): """ :example '791 Crist Parks' """ pattern = self.random_element(self.street_address_formats) return self.generator.parse(pattern)
:example '791 Crist Parks'
def get_block_property(value, is_bytes=False): """Get `BLK` property.""" obj = unidata.ascii_blocks if is_bytes else unidata.unicode_blocks if value.startswith('^'): negated = value[1:] value = '^' + unidata.unicode_alias['block'].get(negated, negated) else: value = unidata.unicode_alias['block'].get(value, value) return obj[value]
Get `BLK` property.
def get_records(cls, ids, with_deleted=False): """Retrieve multiple records by id. :param ids: List of record IDs. :param with_deleted: If `True` then it includes deleted records. :returns: A list of :class:`Record` instances. """ with db.session.no_autoflush: query = RecordMetadata.query.filter(RecordMetadata.id.in_(ids)) if not with_deleted: query = query.filter(RecordMetadata.json != None) # noqa return [cls(obj.json, model=obj) for obj in query.all()]
Retrieve multiple records by id. :param ids: List of record IDs. :param with_deleted: If `True` then it includes deleted records. :returns: A list of :class:`Record` instances.
def _advapi32_create_blob(key_info, key_type, algo, signing=True):
    """
    Generates a blob for importing a key to CryptoAPI

    :param key_info:
        An asn1crypto.keys.PublicKeyInfo or asn1crypto.keys.PrivateKeyInfo object

    :param key_type:
        A unicode string of "public" or "private"

    :param algo:
        A unicode string of "rsa" or "dsa"

    :param signing:
        If the key handle is for signing - may only be False for rsa keys

    :return:
        A byte string of a blob to pass to advapi32.CryptImportKey()
    """
    if key_type == 'public':
        blob_type = Advapi32Const.PUBLICKEYBLOB
    else:
        blob_type = Advapi32Const.PRIVATEKEYBLOB
    if algo == 'rsa':
        struct_type = 'RSABLOBHEADER'
        if signing:
            algorithm_id = Advapi32Const.CALG_RSA_SIGN
        else:
            algorithm_id = Advapi32Const.CALG_RSA_KEYX
    else:
        struct_type = 'DSSBLOBHEADER'
        algorithm_id = Advapi32Const.CALG_DSS_SIGN
    blob_header_pointer = struct(advapi32, 'BLOBHEADER')
    blob_header = unwrap(blob_header_pointer)
    blob_header.bType = blob_type
    blob_header.bVersion = Advapi32Const.CUR_BLOB_VERSION
    blob_header.reserved = 0
    blob_header.aiKeyAlg = algorithm_id
    blob_struct_pointer = struct(advapi32, struct_type)
    blob_struct = unwrap(blob_struct_pointer)
    blob_struct.publickeystruc = blob_header
    bit_size = key_info.bit_size
    len1 = bit_size // 8
    len2 = bit_size // 16
    if algo == 'rsa':
        pubkey_pointer = struct(advapi32, 'RSAPUBKEY')
        pubkey = unwrap(pubkey_pointer)
        pubkey.bitlen = bit_size
        if key_type == 'public':
            parsed_key_info = key_info['public_key'].parsed
            pubkey.magic = Advapi32Const.RSA1
            pubkey.pubexp = parsed_key_info['public_exponent'].native
            blob_data = int_to_bytes(parsed_key_info['modulus'].native, signed=False, width=len1)[::-1]
        else:
            parsed_key_info = key_info['private_key'].parsed
            pubkey.magic = Advapi32Const.RSA2
            pubkey.pubexp = parsed_key_info['public_exponent'].native
            blob_data = int_to_bytes(parsed_key_info['modulus'].native, signed=False, width=len1)[::-1]
            blob_data += int_to_bytes(parsed_key_info['prime1'].native, signed=False, width=len2)[::-1]
            blob_data += int_to_bytes(parsed_key_info['prime2'].native, signed=False, width=len2)[::-1]
            blob_data += int_to_bytes(parsed_key_info['exponent1'].native, signed=False, width=len2)[::-1]
            blob_data += int_to_bytes(parsed_key_info['exponent2'].native, signed=False, width=len2)[::-1]
            blob_data += int_to_bytes(parsed_key_info['coefficient'].native, signed=False, width=len2)[::-1]
            blob_data += int_to_bytes(parsed_key_info['private_exponent'].native, signed=False, width=len1)[::-1]
        blob_struct.rsapubkey = pubkey
    else:
        pubkey_pointer = struct(advapi32, 'DSSPUBKEY')
        pubkey = unwrap(pubkey_pointer)
        pubkey.bitlen = bit_size
        if key_type == 'public':
            pubkey.magic = Advapi32Const.DSS1
            params = key_info['algorithm']['parameters'].native
            key_data = int_to_bytes(key_info['public_key'].parsed.native, signed=False, width=len1)[::-1]
        else:
            pubkey.magic = Advapi32Const.DSS2
            params = key_info['private_key_algorithm']['parameters'].native
            key_data = int_to_bytes(key_info['private_key'].parsed.native, signed=False, width=20)[::-1]
        blob_struct.dsspubkey = pubkey
        blob_data = int_to_bytes(params['p'], signed=False, width=len1)[::-1]
        blob_data += int_to_bytes(params['q'], signed=False, width=20)[::-1]
        blob_data += int_to_bytes(params['g'], signed=False, width=len1)[::-1]
        blob_data += key_data
        dssseed_pointer = struct(advapi32, 'DSSSEED')
        dssseed = unwrap(dssseed_pointer)
        # This indicates no counter or seed info is available
        dssseed.counter = 0xffffffff
        blob_data += struct_bytes(dssseed_pointer)
    return struct_bytes(blob_struct_pointer) + blob_data
Generates a blob for importing a key to CryptoAPI :param key_info: An asn1crypto.keys.PublicKeyInfo or asn1crypto.keys.PrivateKeyInfo object :param key_type: A unicode string of "public" or "private" :param algo: A unicode string of "rsa" or "dsa" :param signing: If the key handle is for signing - may only be False for rsa keys :return: A byte string of a blob to pass to advapi32.CryptImportKey()
def revoke_token(self, token, token_type_hint, request, *args, **kwargs): """Revoke an access or refresh token. """ if token_type_hint: tok = self._tokengetter(**{token_type_hint: token}) else: tok = self._tokengetter(access_token=token) if not tok: tok = self._tokengetter(refresh_token=token) if tok: request.client_id = tok.client_id request.user = tok.user tok.delete() return True msg = 'Invalid token supplied.' log.debug(msg) request.error_message = msg return False
Revoke an access or refresh token.
def process_pc_pathsfromto(source_genes, target_genes, neighbor_limit=1, database_filter=None): """Returns a BiopaxProcessor for a PathwayCommons paths-from-to query. The paths-from-to query finds the paths from a set of source genes to a set of target genes. http://www.pathwaycommons.org/pc2/#graph http://www.pathwaycommons.org/pc2/#graph_kind Parameters ---------- source_genes : list A list of HGNC gene symbols that are the sources of paths being searched for. Examples: ['BRAF', 'RAF1', 'ARAF'] target_genes : list A list of HGNC gene symbols that are the targets of paths being searched for. Examples: ['MAP2K1', 'MAP2K2'] neighbor_limit : Optional[int] The number of steps to limit the length of the paths between the source genes and target genes being queried. Default: 1 database_filter : Optional[list] A list of database identifiers to which the query is restricted. Examples: ['reactome'], ['biogrid', 'pid', 'psp'] If not given, all databases are used in the query. For a full list of databases see http://www.pathwaycommons.org/pc2/datasources Returns ------- bp : BiopaxProcessor A BiopaxProcessor containing the obtained BioPAX model in bp.model. """ model = pcc.graph_query('pathsfromto', source_genes, target_genes, neighbor_limit=neighbor_limit, database_filter=database_filter) if model is not None: return process_model(model)
Returns a BiopaxProcessor for a PathwayCommons paths-from-to query. The paths-from-to query finds the paths from a set of source genes to a set of target genes. http://www.pathwaycommons.org/pc2/#graph http://www.pathwaycommons.org/pc2/#graph_kind Parameters ---------- source_genes : list A list of HGNC gene symbols that are the sources of paths being searched for. Examples: ['BRAF', 'RAF1', 'ARAF'] target_genes : list A list of HGNC gene symbols that are the targets of paths being searched for. Examples: ['MAP2K1', 'MAP2K2'] neighbor_limit : Optional[int] The number of steps to limit the length of the paths between the source genes and target genes being queried. Default: 1 database_filter : Optional[list] A list of database identifiers to which the query is restricted. Examples: ['reactome'], ['biogrid', 'pid', 'psp'] If not given, all databases are used in the query. For a full list of databases see http://www.pathwaycommons.org/pc2/datasources Returns ------- bp : BiopaxProcessor A BiopaxProcessor containing the obtained BioPAX model in bp.model.
def publish(self, **kwargs): "Publishes to the channel which notifies all connected handlers." log.debug('Publish to {0}'.format(self)) self.signal.send(sender=self.name, **kwargs)
Publishes to the channel which notifies all connected handlers.
Below is the instruction that describes the task: ### Input: Publishes to the channel which notifies all connected handlers. ### Response: def publish(self, **kwargs): "Publishes to the channel which notifies all connected handlers." log.debug('Publish to {0}'.format(self)) self.signal.send(sender=self.name, **kwargs)
def start(self): """Download files using wget or other downloader. Optional curl, aria2c and httpie """ dwn_count = 1 self._directory_prefix() for dwn in self.url: # get file name from url and fix passing char '+' self.file_name = dwn.split("/")[-1].replace("%2B", "+") if dwn.startswith("file:///"): source_dir = dwn[7:-7].replace(slack_ver(), "") self._make_tarfile(self.file_name, source_dir) self._check_certificate() print("\n[{0}/{1}][ {2}Download{3} ] --> {4}\n".format( dwn_count, len(self.url), self.meta.color["GREEN"], self.meta.color["ENDC"], self.file_name)) if self.downder in ["wget"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.dir_prefix, self.path, dwn), shell=True) if self.downder in ["aria2c"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.dir_prefix, self.path[:-1], dwn), shell=True) elif self.downder in ["curl", "http"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.path, self.file_name, dwn), shell=True) self._check_if_downloaded() dwn_count += 1
Download files using wget or other downloader. Optional curl, aria2c and httpie
Below is the instruction that describes the task: ### Input: Download files using wget or other downloader. Optional curl, aria2c and httpie ### Response: def start(self): """Download files using wget or other downloader. Optional curl, aria2c and httpie """ dwn_count = 1 self._directory_prefix() for dwn in self.url: # get file name from url and fix passing char '+' self.file_name = dwn.split("/")[-1].replace("%2B", "+") if dwn.startswith("file:///"): source_dir = dwn[7:-7].replace(slack_ver(), "") self._make_tarfile(self.file_name, source_dir) self._check_certificate() print("\n[{0}/{1}][ {2}Download{3} ] --> {4}\n".format( dwn_count, len(self.url), self.meta.color["GREEN"], self.meta.color["ENDC"], self.file_name)) if self.downder in ["wget"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.dir_prefix, self.path, dwn), shell=True) if self.downder in ["aria2c"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.dir_prefix, self.path[:-1], dwn), shell=True) elif self.downder in ["curl", "http"]: subprocess.call("{0} {1} {2}{3} {4}".format( self.downder, self.downder_options, self.path, self.file_name, dwn), shell=True) self._check_if_downloaded() dwn_count += 1
def feed(self, pred, label): """ Args: pred (np.ndarray): binary array. label (np.ndarray): binary array of the same size. """ assert pred.shape == label.shape, "{} != {}".format(pred.shape, label.shape) self.nr_pos += (label == 1).sum() self.nr_neg += (label == 0).sum() self.nr_pred_pos += (pred == 1).sum() self.nr_pred_neg += (pred == 0).sum() self.corr_pos += ((pred == 1) & (pred == label)).sum() self.corr_neg += ((pred == 0) & (pred == label)).sum()
Args: pred (np.ndarray): binary array. label (np.ndarray): binary array of the same size.
Below is the instruction that describes the task: ### Input: Args: pred (np.ndarray): binary array. label (np.ndarray): binary array of the same size. ### Response: def feed(self, pred, label): """ Args: pred (np.ndarray): binary array. label (np.ndarray): binary array of the same size. """ assert pred.shape == label.shape, "{} != {}".format(pred.shape, label.shape) self.nr_pos += (label == 1).sum() self.nr_neg += (label == 0).sum() self.nr_pred_pos += (pred == 1).sum() self.nr_pred_neg += (pred == 0).sum() self.corr_pos += ((pred == 1) & (pred == label)).sum() self.corr_neg += ((pred == 0) & (pred == label)).sum()
def iqr(a): """ Calculate the IQR for an array of numbers. """ a = np.asarray(a) q1 = stats.scoreatpercentile(a, 25) q3 = stats.scoreatpercentile(a, 75) return q3 - q1
Calculate the IQR for an array of numbers.
Below is the instruction that describes the task: ### Input: Calculate the IQR for an array of numbers. ### Response: def iqr(a): """ Calculate the IQR for an array of numbers. """ a = np.asarray(a) q1 = stats.scoreatpercentile(a, 25) q3 = stats.scoreatpercentile(a, 75) return q3 - q1
def strongly_connected_components(self): """ Return list of strongly connected components of this graph. Returns a list of subgraphs. Algorithm is based on that described in "Path-based depth-first search for strong and biconnected components" by Harold N. Gabow, Inf.Process.Lett. 74 (2000) 107--114. """ raw_sccs = self._component_graph() sccs = [] for raw_scc in raw_sccs: sccs.append([v for vtype, v in raw_scc if vtype == 'VERTEX']) return [self.full_subgraph(scc) for scc in sccs]
Return list of strongly connected components of this graph. Returns a list of subgraphs. Algorithm is based on that described in "Path-based depth-first search for strong and biconnected components" by Harold N. Gabow, Inf.Process.Lett. 74 (2000) 107--114.
Below is the instruction that describes the task: ### Input: Return list of strongly connected components of this graph. Returns a list of subgraphs. Algorithm is based on that described in "Path-based depth-first search for strong and biconnected components" by Harold N. Gabow, Inf.Process.Lett. 74 (2000) 107--114. ### Response: def strongly_connected_components(self): """ Return list of strongly connected components of this graph. Returns a list of subgraphs. Algorithm is based on that described in "Path-based depth-first search for strong and biconnected components" by Harold N. Gabow, Inf.Process.Lett. 74 (2000) 107--114. """ raw_sccs = self._component_graph() sccs = [] for raw_scc in raw_sccs: sccs.append([v for vtype, v in raw_scc if vtype == 'VERTEX']) return [self.full_subgraph(scc) for scc in sccs]
def get_seqprop_within(self, chain_id, resnum, angstroms, only_protein=True, use_ca=False, custom_coord=None, return_resnums=False): """Get a SeqProp object of the amino acids within X angstroms of the specified chain + residue number. Args: resnum (int): Residue number of the structure chain_id (str): Chain ID of the residue number angstroms (float): Radius of the search sphere only_protein (bool): If only protein atoms (no HETATMS) should be included in the returned sequence use_ca (bool): If the alpha-carbon atom should be used for searching, default is False (last atom of residue used) Returns: SeqProp: Sequence that represents the amino acids in the vicinity of your residue number. """ # XTODO: change "remove" parameter to be clean_seq and to remove all non standard amino acids # TODO: make return_resnums smarter polypep, resnums = self.get_polypeptide_within(chain_id=chain_id, resnum=resnum, angstroms=angstroms, use_ca=use_ca, only_protein=only_protein, custom_coord=custom_coord, return_resnums=True) # final_seq = polypep.get_sequence() # seqprop = SeqProp(id='{}-{}_within_{}_of_{}'.format(self.id, chain_id, angstroms, resnum), # seq=final_seq) chain_subseq = self.chains.get_by_id(chain_id).get_subsequence(resnums) if return_resnums: return chain_subseq, resnums else: return chain_subseq
Get a SeqProp object of the amino acids within X angstroms of the specified chain + residue number. Args: resnum (int): Residue number of the structure chain_id (str): Chain ID of the residue number angstroms (float): Radius of the search sphere only_protein (bool): If only protein atoms (no HETATMS) should be included in the returned sequence use_ca (bool): If the alpha-carbon atom should be used for searching, default is False (last atom of residue used) Returns: SeqProp: Sequence that represents the amino acids in the vicinity of your residue number.
Below is the instruction that describes the task: ### Input: Get a SeqProp object of the amino acids within X angstroms of the specified chain + residue number. Args: resnum (int): Residue number of the structure chain_id (str): Chain ID of the residue number angstroms (float): Radius of the search sphere only_protein (bool): If only protein atoms (no HETATMS) should be included in the returned sequence use_ca (bool): If the alpha-carbon atom should be used for searching, default is False (last atom of residue used) Returns: SeqProp: Sequence that represents the amino acids in the vicinity of your residue number. ### Response: def get_seqprop_within(self, chain_id, resnum, angstroms, only_protein=True, use_ca=False, custom_coord=None, return_resnums=False): """Get a SeqProp object of the amino acids within X angstroms of the specified chain + residue number. Args: resnum (int): Residue number of the structure chain_id (str): Chain ID of the residue number angstroms (float): Radius of the search sphere only_protein (bool): If only protein atoms (no HETATMS) should be included in the returned sequence use_ca (bool): If the alpha-carbon atom should be used for searching, default is False (last atom of residue used) Returns: SeqProp: Sequence that represents the amino acids in the vicinity of your residue number. """ # XTODO: change "remove" parameter to be clean_seq and to remove all non standard amino acids # TODO: make return_resnums smarter polypep, resnums = self.get_polypeptide_within(chain_id=chain_id, resnum=resnum, angstroms=angstroms, use_ca=use_ca, only_protein=only_protein, custom_coord=custom_coord, return_resnums=True) # final_seq = polypep.get_sequence() # seqprop = SeqProp(id='{}-{}_within_{}_of_{}'.format(self.id, chain_id, angstroms, resnum), # seq=final_seq) chain_subseq = self.chains.get_by_id(chain_id).get_subsequence(resnums) if return_resnums: return chain_subseq, resnums else: return chain_subseq
def loadJSON(self, jdata): """ Loads the given JSON information for this column. :param jdata: <dict> """ super(ReferenceColumn, self).loadJSON(jdata) # load additional information self.__reference = jdata.get('reference') or self.__reference self.__removeAction = jdata.get('removeAction') or self.__removeAction
Loads the given JSON information for this column. :param jdata: <dict>
Below is the instruction that describes the task: ### Input: Loads the given JSON information for this column. :param jdata: <dict> ### Response: def loadJSON(self, jdata): """ Loads the given JSON information for this column. :param jdata: <dict> """ super(ReferenceColumn, self).loadJSON(jdata) # load additional information self.__reference = jdata.get('reference') or self.__reference self.__removeAction = jdata.get('removeAction') or self.__removeAction
def write_i2c_block_data(self, address, register, value): """ I2C block transactions do not limit the number of bytes transferred but the SMBus layer places a limit of 32 bytes. I2C Block Write: i2c_smbus_write_i2c_block_data() ================================================== The opposite of the Block Read command, this writes bytes to a device, to a designated register that is specified through the Comm byte. Note that command lengths of 0, 2, or more bytes are supported as they are indistinguishable from data. S Addr Wr [A] Comm [A] Data [A] Data [A] ... [A] Data [A] P Functionality flag: I2C_FUNC_SMBUS_WRITE_I2C_BLOCK """ return self.smbus.write_i2c_block_data(address, register, value)
I2C block transactions do not limit the number of bytes transferred but the SMBus layer places a limit of 32 bytes. I2C Block Write: i2c_smbus_write_i2c_block_data() ================================================== The opposite of the Block Read command, this writes bytes to a device, to a designated register that is specified through the Comm byte. Note that command lengths of 0, 2, or more bytes are supported as they are indistinguishable from data. S Addr Wr [A] Comm [A] Data [A] Data [A] ... [A] Data [A] P Functionality flag: I2C_FUNC_SMBUS_WRITE_I2C_BLOCK
Below is the instruction that describes the task: ### Input: I2C block transactions do not limit the number of bytes transferred but the SMBus layer places a limit of 32 bytes. I2C Block Write: i2c_smbus_write_i2c_block_data() ================================================== The opposite of the Block Read command, this writes bytes to a device, to a designated register that is specified through the Comm byte. Note that command lengths of 0, 2, or more bytes are supported as they are indistinguishable from data. S Addr Wr [A] Comm [A] Data [A] Data [A] ... [A] Data [A] P Functionality flag: I2C_FUNC_SMBUS_WRITE_I2C_BLOCK ### Response: def write_i2c_block_data(self, address, register, value): """ I2C block transactions do not limit the number of bytes transferred but the SMBus layer places a limit of 32 bytes. I2C Block Write: i2c_smbus_write_i2c_block_data() ================================================== The opposite of the Block Read command, this writes bytes to a device, to a designated register that is specified through the Comm byte. Note that command lengths of 0, 2, or more bytes are supported as they are indistinguishable from data. S Addr Wr [A] Comm [A] Data [A] Data [A] ... [A] Data [A] P Functionality flag: I2C_FUNC_SMBUS_WRITE_I2C_BLOCK """ return self.smbus.write_i2c_block_data(address, register, value)
def protein_permutation(graph_score, num_codons_obs, context_counts, context_to_mut, seq_context, gene_seq, gene_graph, num_permutations=10000, stop_criteria=100, pseudo_count=0): """Performs null-simulations for position-based mutation statistics in a single gene. Parameters ---------- graph_score : float clustering score for observed data num_codons_obs : int number of codons with missense mutation in observed data context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null stop_criteria : int stop after stop_criteria iterations are more significant than the observed statistic. 
Returns ------- protein_pval : float p-value for clustering in neighbor graph constructed from protein structures """ # get contexts and somatic base mycontexts = context_counts.index.tolist() somatic_base = [base for one_context in mycontexts for base in context_to_mut[one_context]] # get random positions determined by sequence context tmp_contxt_pos = seq_context.random_pos(context_counts.iteritems(), num_permutations) tmp_mut_pos = np.hstack(pos_array for base, pos_array in tmp_contxt_pos) # calculate position-based statistics as a result of random positions null_graph_entropy_ct = 0 coverage_list = [] num_mut_list = [] graph_entropy_list = [] for i, row in enumerate(tmp_mut_pos): # calculate the expected value of the relative increase in coverage if i == stop_criteria-1: rel_inc = [coverage_list[k] / float(num_mut_list[k]) for k in range(stop_criteria-1) if coverage_list[k]] exp_rel_inc = np.mean(rel_inc) # calculate observed statistic if num_codons_obs: obs_stat = graph_score / np.log2(exp_rel_inc*num_codons_obs) else: obs_stat = 1.0 # calculate statistics for simulated data sim_stat_list = [ent / np.log2(exp_rel_inc*num_mut_list[l]) for l, ent in enumerate(graph_entropy_list)] null_graph_entropy_ct = len([s for s in sim_stat_list if s-utils.epsilon <= obs_stat]) # get info about mutations tmp_mut_info = mc.get_aa_mut_info(row, somatic_base, gene_seq) # calculate position info tmp_tuple = cutils.calc_pos_info(tmp_mut_info['Codon Pos'], tmp_mut_info['Reference AA'], tmp_mut_info['Somatic AA'], pseudo_count=pseudo_count, is_obs=0) _, _, _, tmp_pos_ct = tmp_tuple # record num of mut codons if i < stop_criteria-1: tmp_num_mut_codons = len(tmp_pos_ct) num_mut_list.append(tmp_num_mut_codons) # get entropy on graph-smoothed probability distribution tmp_graph_entropy, tmp_coverage = scores.compute_ng_stat(gene_graph, tmp_pos_ct) # record the "coverage" in the graph if i < stop_criteria-1: coverage_list.append(tmp_coverage) graph_entropy_list.append(tmp_graph_entropy) 
# update empirical null distribution counts if i >= stop_criteria: #if tmp_graph_entropy-utils.epsilon <= graph_score: if tmp_num_mut_codons: sim_stat = tmp_graph_entropy / np.log2(exp_rel_inc*tmp_num_mut_codons) else: sim_stat = 1.0 # add count if sim_stat-utils.epsilon <= obs_stat: null_graph_entropy_ct += 1 # stop iterations if reached sufficient precision if null_graph_entropy_ct >= stop_criteria: break # calculate p-value from empirical null-distribution protein_pval = float(null_graph_entropy_ct) / (i+1) return protein_pval, obs_stat
Performs null-simulations for position-based mutation statistics in a single gene. Parameters ---------- graph_score : float clustering score for observed data num_codons_obs : int number of codons with missense mutation in observed data context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null stop_criteria : int stop after stop_criteria iterations are more significant than the observed statistic. Returns ------- protein_pval : float p-value for clustering in neighbor graph constructed from protein structures
Below is the instruction that describes the task: ### Input: Performs null-simulations for position-based mutation statistics in a single gene. Parameters ---------- graph_score : float clustering score for observed data num_codons_obs : int number of codons with missense mutation in observed data context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null stop_criteria : int stop after stop_criteria iterations are more significant than the observed statistic. Returns ------- protein_pval : float p-value for clustering in neighbor graph constructed from protein structures ### Response: def protein_permutation(graph_score, num_codons_obs, context_counts, context_to_mut, seq_context, gene_seq, gene_graph, num_permutations=10000, stop_criteria=100, pseudo_count=0): """Performs null-simulations for position-based mutation statistics in a single gene. Parameters ---------- graph_score : float clustering score for observed data num_codons_obs : int number of codons with missense mutation in observed data context_counts : pd.Series number of mutations for each context context_to_mut : dict dictionary mapping nucleotide context to a list of observed somatic base changes. seq_context : SequenceContext Sequence context for the entire gene sequence (regardless of where mutations occur). The nucleotide contexts are identified at positions along the gene. 
gene_seq : GeneSequence Sequence of gene of interest num_permutations : int, default: 10000 number of permutations to create for null stop_criteria : int stop after stop_criteria iterations are more significant than the observed statistic. Returns ------- protein_pval : float p-value for clustering in neighbor graph constructed from protein structures """ # get contexts and somatic base mycontexts = context_counts.index.tolist() somatic_base = [base for one_context in mycontexts for base in context_to_mut[one_context]] # get random positions determined by sequence context tmp_contxt_pos = seq_context.random_pos(context_counts.iteritems(), num_permutations) tmp_mut_pos = np.hstack(pos_array for base, pos_array in tmp_contxt_pos) # calculate position-based statistics as a result of random positions null_graph_entropy_ct = 0 coverage_list = [] num_mut_list = [] graph_entropy_list = [] for i, row in enumerate(tmp_mut_pos): # calculate the expected value of the relative increase in coverage if i == stop_criteria-1: rel_inc = [coverage_list[k] / float(num_mut_list[k]) for k in range(stop_criteria-1) if coverage_list[k]] exp_rel_inc = np.mean(rel_inc) # calculate observed statistic if num_codons_obs: obs_stat = graph_score / np.log2(exp_rel_inc*num_codons_obs) else: obs_stat = 1.0 # calculate statistics for simulated data sim_stat_list = [ent / np.log2(exp_rel_inc*num_mut_list[l]) for l, ent in enumerate(graph_entropy_list)] null_graph_entropy_ct = len([s for s in sim_stat_list if s-utils.epsilon <= obs_stat]) # get info about mutations tmp_mut_info = mc.get_aa_mut_info(row, somatic_base, gene_seq) # calculate position info tmp_tuple = cutils.calc_pos_info(tmp_mut_info['Codon Pos'], tmp_mut_info['Reference AA'], tmp_mut_info['Somatic AA'], pseudo_count=pseudo_count, is_obs=0) _, _, _, tmp_pos_ct = tmp_tuple # record num of mut codons if i < stop_criteria-1: tmp_num_mut_codons = len(tmp_pos_ct) num_mut_list.append(tmp_num_mut_codons) # get entropy on graph-smoothed
probability distribution tmp_graph_entropy, tmp_coverage = scores.compute_ng_stat(gene_graph, tmp_pos_ct) # record the "coverage" in the graph if i < stop_criteria-1: coverage_list.append(tmp_coverage) graph_entropy_list.append(tmp_graph_entropy) # update empirical null distribution counts if i >= stop_criteria: #if tmp_graph_entropy-utils.epsilon <= graph_score: if tmp_num_mut_codons: sim_stat = tmp_graph_entropy / np.log2(exp_rel_inc*tmp_num_mut_codons) else: sim_stat = 1.0 # add count if sim_stat-utils.epsilon <= obs_stat: null_graph_entropy_ct += 1 # stop iterations if reached sufficient precision if null_graph_entropy_ct >= stop_criteria: break # calculate p-value from empirical null-distribution protein_pval = float(null_graph_entropy_ct) / (i+1) return protein_pval, obs_stat
def _get_object_class(cls, class_name): """ :type class_name: str :rtype: core.BunqModel """ class_name = class_name.lstrip(cls.__STRING_FORMAT_UNDERSCORE) if class_name in cls._override_field_map: class_name = cls._override_field_map[class_name] try: return getattr(endpoint, class_name) except AttributeError: pass try: return getattr(object_, class_name) except AttributeError: pass raise BunqException(cls._ERROR_MODEL_NOT_FOUND.format(class_name))
:type class_name: str :rtype: core.BunqModel
Below is the instruction that describes the task: ### Input: :type class_name: str :rtype: core.BunqModel ### Response: def _get_object_class(cls, class_name): """ :type class_name: str :rtype: core.BunqModel """ class_name = class_name.lstrip(cls.__STRING_FORMAT_UNDERSCORE) if class_name in cls._override_field_map: class_name = cls._override_field_map[class_name] try: return getattr(endpoint, class_name) except AttributeError: pass try: return getattr(object_, class_name) except AttributeError: pass raise BunqException(cls._ERROR_MODEL_NOT_FOUND.format(class_name))
def query_version(stream: aioxmpp.stream.StanzaStream, target: aioxmpp.JID) -> version_xso.Query: """ Query the software version of an entity. :param stream: A stanza stream to send the query on. :type stream: :class:`aioxmpp.stream.StanzaStream` :param target: The address of the entity to query. :type target: :class:`aioxmpp.JID` :raises OSError: if a connection issue occurred before a reply was received :raises aioxmpp.errors.XMPPError: if an XMPP error was returned instead of a reply. :rtype: :class:`aioxmpp.version.xso.Query` :return: The response from the peer. The response is returned as :class:`~aioxmpp.version.xso.Query` object. The attributes hold the data returned by the peer. Each attribute may be :data:`None` if the peer chose to omit that information. In an extreme case, all attributes are :data:`None`. """ return (yield from stream.send( aioxmpp.IQ( type_=aioxmpp.IQType.GET, to=target, payload=version_xso.Query(), ) ))
Query the software version of an entity. :param stream: A stanza stream to send the query on. :type stream: :class:`aioxmpp.stream.StanzaStream` :param target: The address of the entity to query. :type target: :class:`aioxmpp.JID` :raises OSError: if a connection issue occurred before a reply was received :raises aioxmpp.errors.XMPPError: if an XMPP error was returned instead of a reply. :rtype: :class:`aioxmpp.version.xso.Query` :return: The response from the peer. The response is returned as :class:`~aioxmpp.version.xso.Query` object. The attributes hold the data returned by the peer. Each attribute may be :data:`None` if the peer chose to omit that information. In an extreme case, all attributes are :data:`None`.
Below is the instruction that describes the task: ### Input: Query the software version of an entity. :param stream: A stanza stream to send the query on. :type stream: :class:`aioxmpp.stream.StanzaStream` :param target: The address of the entity to query. :type target: :class:`aioxmpp.JID` :raises OSError: if a connection issue occurred before a reply was received :raises aioxmpp.errors.XMPPError: if an XMPP error was returned instead of a reply. :rtype: :class:`aioxmpp.version.xso.Query` :return: The response from the peer. The response is returned as :class:`~aioxmpp.version.xso.Query` object. The attributes hold the data returned by the peer. Each attribute may be :data:`None` if the peer chose to omit that information. In an extreme case, all attributes are :data:`None`. ### Response: def query_version(stream: aioxmpp.stream.StanzaStream, target: aioxmpp.JID) -> version_xso.Query: """ Query the software version of an entity. :param stream: A stanza stream to send the query on. :type stream: :class:`aioxmpp.stream.StanzaStream` :param target: The address of the entity to query. :type target: :class:`aioxmpp.JID` :raises OSError: if a connection issue occurred before a reply was received :raises aioxmpp.errors.XMPPError: if an XMPP error was returned instead of a reply. :rtype: :class:`aioxmpp.version.xso.Query` :return: The response from the peer. The response is returned as :class:`~aioxmpp.version.xso.Query` object. The attributes hold the data returned by the peer. Each attribute may be :data:`None` if the peer chose to omit that information. In an extreme case, all attributes are :data:`None`. """ return (yield from stream.send( aioxmpp.IQ( type_=aioxmpp.IQType.GET, to=target, payload=version_xso.Query(), ) ))
def _run_gates(self, gates, n_qubits, ctx): """Iterate gates and call backend's action for each gate""" for gate in gates: action = self._get_action(gate) if action is not None: ctx = action(gate, ctx) else: ctx = self._run_gates(gate.fallback(n_qubits), n_qubits, ctx) return ctx
Iterate gates and call backend's action for each gate
Below is the instruction that describes the task: ### Input: Iterate gates and call backend's action for each gate ### Response: def _run_gates(self, gates, n_qubits, ctx): """Iterate gates and call backend's action for each gate""" for gate in gates: action = self._get_action(gate) if action is not None: ctx = action(gate, ctx) else: ctx = self._run_gates(gate.fallback(n_qubits), n_qubits, ctx) return ctx
def rsa_pss_sign(private_key, data, hash_algorithm): """ Generates an RSASSA-PSS signature. For the PSS padding the mask gen algorithm will be mgf1 using the same hash algorithm as the signature. The salt length will be the length of the hash algorithm, and the trailer field will be the standard 0xBC byte. :param private_key: The PrivateKey to generate the signature with :param data: A byte string of the data the signature is for :param hash_algorithm: A unicode string of "md5", "sha1", "sha256", "sha384" or "sha512" :raises: ValueError - when any of the parameters contain an invalid value TypeError - when any of the parameters are of the wrong type OSError - when an error is returned by the OS crypto library :return: A byte string of the signature """ if private_key.algorithm != 'rsa': raise ValueError('The key specified is not an RSA private key') return _sign(private_key, data, hash_algorithm, rsa_pss_padding=True)
Generates an RSASSA-PSS signature. For the PSS padding the mask gen algorithm will be mgf1 using the same hash algorithm as the signature. The salt length will be the length of the hash algorithm, and the trailer field will be the standard 0xBC byte. :param private_key: The PrivateKey to generate the signature with :param data: A byte string of the data the signature is for :param hash_algorithm: A unicode string of "md5", "sha1", "sha256", "sha384" or "sha512" :raises: ValueError - when any of the parameters contain an invalid value TypeError - when any of the parameters are of the wrong type OSError - when an error is returned by the OS crypto library :return: A byte string of the signature
Below is the instruction that describes the task: ### Input: Generates an RSASSA-PSS signature. For the PSS padding the mask gen algorithm will be mgf1 using the same hash algorithm as the signature. The salt length will be the length of the hash algorithm, and the trailer field will be the standard 0xBC byte. :param private_key: The PrivateKey to generate the signature with :param data: A byte string of the data the signature is for :param hash_algorithm: A unicode string of "md5", "sha1", "sha256", "sha384" or "sha512" :raises: ValueError - when any of the parameters contain an invalid value TypeError - when any of the parameters are of the wrong type OSError - when an error is returned by the OS crypto library :return: A byte string of the signature ### Response: def rsa_pss_sign(private_key, data, hash_algorithm): """ Generates an RSASSA-PSS signature. For the PSS padding the mask gen algorithm will be mgf1 using the same hash algorithm as the signature. The salt length will be the length of the hash algorithm, and the trailer field will be the standard 0xBC byte. :param private_key: The PrivateKey to generate the signature with :param data: A byte string of the data the signature is for :param hash_algorithm: A unicode string of "md5", "sha1", "sha256", "sha384" or "sha512" :raises: ValueError - when any of the parameters contain an invalid value TypeError - when any of the parameters are of the wrong type OSError - when an error is returned by the OS crypto library :return: A byte string of the signature """ if private_key.algorithm != 'rsa': raise ValueError('The key specified is not an RSA private key') return _sign(private_key, data, hash_algorithm, rsa_pss_padding=True)
def api_call(self, opts, args=None, body=None, **kwargs): """Set up the request""" if args: path = opts['name'] % args else: path = opts['name'] path = '/api/v1%s' % path return self._request( opts['method'], path=path, payload=body, **kwargs)
Setup the request
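The path handling in api_call can be exercised standalone; the shape of the opts dict here is an assumption inferred from the call site:

```python
def build_path(opts, args=None):
    """Mimic api_call's path handling: interpolate args into the
    %-style template in opts['name'], then prefix the API version."""
    path = opts['name'] % args if args else opts['name']
    return '/api/v1%s' % path

print(build_path({'name': '/users/%s'}, args=('42',)))  # → /api/v1/users/42
print(build_path({'name': '/ping'}))                    # → /api/v1/ping
```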
def adjust_for_flatpak(self): """ Remove plugins that don't work when building Flatpaks """ if self.user_params.flatpak.value: remove_plugins = [ ("prebuild_plugins", "resolve_composes"), # We'll extract the filesystem anyways for a Flatpak instead of exporting # the docker image directly, so squash just slows things down. ("prepublish_plugins", "squash"), # Pulp can't currently handle Flatpaks, which are OCI images ("postbuild_plugins", "pulp_push"), ("postbuild_plugins", "pulp_tag"), ("postbuild_plugins", "pulp_sync"), ("exit_plugins", "pulp_publish"), ("exit_plugins", "pulp_pull"), # delete_from_registry is used for deleting builds from the temporary registry # that pulp_sync mirrors from. ("exit_plugins", "delete_from_registry"), ] for when, which in remove_plugins: self.pt.remove_plugin(when, which, 'not needed for flatpak build')
Remove plugins that don't work when building Flatpaks
def _create_table(self, table_name): ''' create sqlite's table for storing simple dictionaries ''' if self.fieldnames: sql_fields = [] for field in self._fields: if field != '_id': if 'dblite' in self._fields[field]: sql_fields.append(' '.join([field, self._fields[field]['dblite']])) else: sql_fields.append(field) sql_fields = ','.join(sql_fields) SQL = 'CREATE TABLE IF NOT EXISTS %s (%s);' % (table_name, sql_fields) try: self._cursor.execute(SQL) except sqlite3.OperationalError as err: raise RuntimeError('Create table error, %s, SQL: %s' % (err, SQL))
create sqlite's table for storing simple dictionaries
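The SQL assembly above can be sketched as a pure function; the shape of the fields mapping (a per-field metadata dict with an optional 'dblite' type/constraint suffix) is inferred from the method:

```python
def build_create_sql(table_name, fields):
    """Assemble a CREATE TABLE statement the way _create_table does,
    skipping the implicit '_id' column."""
    sql_fields = []
    for field, meta in fields.items():
        if field == '_id':
            continue
        # 'dblite' carries an optional column type/constraint suffix
        if 'dblite' in meta:
            sql_fields.append(' '.join([field, meta['dblite']]))
        else:
            sql_fields.append(field)
    return 'CREATE TABLE IF NOT EXISTS %s (%s);' % (table_name, ','.join(sql_fields))

print(build_create_sql('items', {'_id': {}, 'name': {'dblite': 'TEXT'}, 'qty': {}}))
```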
def send_file(self, restricted=True, trusted=False, **kwargs): """Wrap around FileInstance's send file.""" return self.file.send_file( self.basename, restricted=restricted, mimetype=self.mimetype, trusted=trusted, **kwargs )
Wrap around FileInstance's send file.
def terminate_jobflows(self, jobflow_ids): """ Terminate an Elastic MapReduce job flow :type jobflow_ids: list :param jobflow_ids: A list of job flow IDs """ params = {} self.build_list_params(params, jobflow_ids, 'JobFlowIds.member') return self.get_status('TerminateJobFlows', params, verb='POST')
Terminate an Elastic MapReduce job flow :type jobflow_ids: list :param jobflow_ids: A list of job flow IDs
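build_list_params is not shown here; boto's convention is to expand a list into 1-based numbered query parameters. A minimal sketch under that assumption:

```python
def build_list_params(params, items, label):
    """Expand a list into numbered query parameters following the
    AWS query-API convention (1-based member indices)."""
    for i, item in enumerate(items):
        params['%s.%d' % (label, i + 1)] = item
    return params

print(build_list_params({}, ['j-1ABC', 'j-2DEF'], 'JobFlowIds.member'))
```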
def print_msg(contentlist): # type: (Union[AnyStr, List[AnyStr], Tuple[AnyStr]]) -> AnyStr """concatenate message list as single string with line feed.""" if isinstance(contentlist, list) or isinstance(contentlist, tuple): return '\n'.join(contentlist) else: # strings if len(contentlist) > 1 and contentlist[-1] != '\n': contentlist += '\n' return contentlist
concatenate message list as single string with line feed.
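The join/trailing-newline behavior is easy to check in isolation; this standalone copy mirrors the function above:

```python
def print_msg(contentlist):
    """Join a list/tuple of lines with newlines; ensure a trailing
    newline on a plain multi-character string."""
    if isinstance(contentlist, (list, tuple)):
        return '\n'.join(contentlist)
    if len(contentlist) > 1 and contentlist[-1] != '\n':
        contentlist += '\n'
    return contentlist

print(repr(print_msg(['a', 'b'])))  # → 'a\nb'
print(repr(print_msg('done')))      # → 'done\n'
```

Note that a single-character string is returned unchanged, since the length check only appends the newline for strings longer than one character.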
def set_file_priority(self, infohash, file_id, priority): """ Set file of a torrent to a supplied priority level. :param infohash: INFO HASH of torrent. :param file_id: ID of the file to set priority. :param priority: Priority level of the file. """ if priority not in [0, 1, 2, 7]: raise ValueError("Invalid priority, refer WEB-UI docs for info.") elif not isinstance(file_id, int): raise TypeError("File ID must be an int") data = {'hash': infohash.lower(), 'id': file_id, 'priority': priority} return self._post('command/setFilePrio', data=data)
Set file of a torrent to a supplied priority level. :param infohash: INFO HASH of torrent. :param file_id: ID of the file to set priority. :param priority: Priority level of the file.
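The validation and payload assembly can be exercised without a client; file_prio_payload is a hypothetical helper mirroring the checks above:

```python
VALID_PRIORITIES = {0, 1, 2, 7}  # levels accepted by the endpoint, per the check above

def file_prio_payload(infohash, file_id, priority):
    """Validate arguments the way set_file_priority does and build
    the POST payload (names here are illustrative)."""
    if priority not in VALID_PRIORITIES:
        raise ValueError("Invalid priority, refer WEB-UI docs for info.")
    if not isinstance(file_id, int):
        raise TypeError("File ID must be an int")
    return {'hash': infohash.lower(), 'id': file_id, 'priority': priority}

print(file_prio_payload('ABC123', 0, 7))
```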
def element_screen_center(self, element): """ :returns: The center point of the element. :rtype: class:`dict` with the field "left" set to the X coordinate and the field "top" set to the Y coordinate. """ pos = self.element_screen_position(element) size = element.size pos["top"] += int(size["height"] / 2) pos["left"] += int(size["width"] / 2) return pos
:returns: The center point of the element. :rtype: class:`dict` with the field "left" set to the X coordinate and the field "top" set to the Y coordinate.
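The centering arithmetic above, extracted as a pure function over the same dict shapes the method uses:

```python
def center_of(pos, size):
    """Offset a top-left position by half the element size,
    mirroring element_screen_center (integer division, as above)."""
    return {'top': pos['top'] + int(size['height'] / 2),
            'left': pos['left'] + int(size['width'] / 2)}

print(center_of({'top': 100, 'left': 40}, {'height': 30, 'width': 21}))
```

Odd dimensions round down, so the "center" of a 21-pixel-wide element is 10 pixels in.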
def issue_date(self): """Date when the DOI was issued (:class:`datetime.datetime.Datetime`). """ dates = _pluralize(self._r['dates'], 'date') for date in dates: if date['@dateType'] == 'Issued': return datetime.datetime.strptime(date['#text'], '%Y-%m-%d')
Date when the DOI was issued (:class:`datetime.datetime.Datetime`).
def log_status (self, checked, in_progress, queue, duration, num_urls): """Write status message to file descriptor.""" msg = _n("%2d thread active", "%2d threads active", in_progress) % \ in_progress self.write(u"%s, " % msg) msg = _n("%5d link queued", "%5d links queued", queue) % queue self.write(u"%s, " % msg) msg = _n("%4d link", "%4d links", checked) % checked self.write(u"%s" % msg) msg = _n("%3d URL", "%3d URLs", num_urls) % num_urls self.write(u" in %s checked, " % msg) msg = _("runtime %s") % strformat.strduration_long(duration) self.writeln(msg) self.flush()
Write status message to file descriptor.
def _wrap_client(self, region_name, method, *args, **kwargs): """Proxies all calls to a KMS client's methods and removes misbehaving clients :param str region_name: AWS Region ID (ex: us-east-1) :param callable method: a method on the KMS client to proxy :param tuple args: list of arguments to pass to the provided ``method`` :param dict kwargs: dictionary of keyword arguments to pass to the provided ``method`` """ try: return method(*args, **kwargs) except botocore.exceptions.BotoCoreError: self._regional_clients.pop(region_name) _LOGGER.error( 'Removing regional client "%s" from cache due to BotoCoreError on %s call', region_name, method.__name__ ) raise
Proxies all calls to a KMS client's methods and removes misbehaving clients :param str region_name: AWS Region ID (ex: us-east-1) :param callable method: a method on the KMS client to proxy :param tuple args: list of arguments to pass to the provided ``method`` :param dict kwargs: dictionary of keyword arguments to pass to the provided ``method``
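The cache-eviction pattern above can be sketched without botocore; the exception type and cache shape here are illustrative:

```python
class ClientCache:
    """Proxy calls to cached clients; evict a client from the cache
    when one of its calls raises, as _wrap_client does."""
    def __init__(self):
        self.clients = {}

    def call(self, region, method, *args, **kwargs):
        try:
            return method(*args, **kwargs)
        except RuntimeError:
            # drop the misbehaving client so a fresh one is built next time
            self.clients.pop(region, None)
            raise

cache = ClientCache()
cache.clients['us-east-1'] = object()

def boom():
    raise RuntimeError('client error')

try:
    cache.call('us-east-1', boom)
except RuntimeError:
    pass
print('us-east-1' in cache.clients)  # → False
```

The original re-raises after evicting, so callers still see the failure; only the cached client is discarded.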
def to_argv_schema(data, arg_names=None, arg_abbrevs=None, filters=None, defaults=None): ''' to_argv_schema(instructions) yields a valid tuple of CommandLineParser instructions for the given instructions tuple; by itself, this will only return the instructions as they are, but optional arguments (below) will override the values in the instructions if provided. to_argv_schema(plan) yields a valid tuple of CommandLineParser instructions for the given plan object. The schema returned by this function will parse a command-line list (sys.argv) for parameters either listed in the instructions (see help(CommandLineParser)) or the afferent parameters of the plan. Generally, this should be called by the argv_parse() function and not directly. If a plan is given as the first argument, then the following rules are used to determine how arguments are parsed: * An argument that begins with -- (e.g., --quux-factor=10) is checked for a matching plan parameter; the argument name "quux-factor" will match either a parameter called "quux-factor" or "quux_factor" (dashes in command-line arguments are auto-translated into underscores). * If "quux_factor" is a parameter to the plan and is the only parameter that starts with a 'q', then -q10 or -q 10 are both equivalent to --quux-factor=10. If other parameters also start with a 'q' then neither "quux_factor" nor the other parameter(s) will be matched with the -q flag unless it is specified explicitly via the arg_abbrevs option. * Argument values are parsed using Python's ast.literal_eval(); if this raises an exception then the value is left as a string. * If an argument or flag is provided without an argument (e.g. "--quuxztize" or "-q") then it is interpreted as a boolean flag and is given the value True. * Arguments that come after the flag "--" are never processed. 
The following options may be given: * arg_names (default: None) may be a dictionary that specifies explicit command-line argument names for the plan parameters; plan parameters should be keys and the argument names should be values. Any parameter not listed in this option will be interpreted according to the above rules. If a parameter is mapped to None then it will not be filled from the command-line arguments. * arg_abbrevs (default: None) may be a dictionary that is handled identically to that of arg_names except that its values must be single letters, which are used for the abbreviated flag names. * defaults (default: None) may specify the default values for the plan parameters; this dictionary overrides the default values of the plan itself. ''' if is_plan(data): # First we must convert it to a valid instruction list (plan, data) = (data, {}) # we go through the afferent parameters... for aff in plan.afferents: # these are provided by the parsing mechanism and shouldn't be processed if aff in ['argv', 'argv_parsed', 'stdout', 'stderr', 'stdin']: continue # we ignore defaults for now data[aff] = (None, aff.replace('_', '-'), aff) # and let's try to guess at abbreviation names entries = sorted(data.keys()) n = len(entries) for (ii,entry) in enumerate(entries): if ii > 0 and entry[0] == entries[ii-1][0]: continue if ii < n-1 and entry[0] == entries[ii+1][0]: continue r = data[entry] data[entry] = (entry[0], r[1], entry, r[3]) if len(r) == 4 else (entry[0], r[1], entry) # now go through and fix defaults... 
for (entry,dflt) in six.iteritems(plan.defaults): if entry not in data: continue r = data[entry] data[entry] = (r[0], r[1], r[2], dflt) elif arg_names is None and arg_abbrevs is None and defaults is None: # return the same object if there are no changes to a schema return data else: data = {r[2]:r for r in data} # Now we go through and make updates based on the optional arguments if arg_names is None: arg_names = {} for (entry,arg_name) in six.iteritems(arg_names): if entry not in data: continue r = data[entry] data[entry] = (r[0], arg_name, entry) if len(r) == 3 else (r[0], arg_name, entry, r[3]) if arg_abbrevs is None: arg_abbrevs = {} for (entry,arg_abbrev) in six.iteritems(arg_abbrevs): if entry not in data: continue r = data[entry] data[entry] = (arg_abbrev, r[1], entry) if len(r) == 3 else (arg_abbrev, r[1], entry, r[3]) if defaults is None: defaults = {} for (entry,dflt) in six.iteritems(defaults): if entry not in data: continue r = data[entry] data[entry] = (r[0], r[1], entry, dflt) # return the list-ified version of this return [tuple(row) for row in six.itervalues(data)]
to_argv_schema(instructions) yields a valid tuple of CommandLineParser instructions for the given instructions tuple; by itself, this will only return the instructions as they are, but optional arguments (below) will override the values in the instructions if provided. to_argv_schema(plan) yields a valid tuple of CommandLineParser instructions for the given plan object. The schema returned by this function will parse a command-line list (sys.argv) for parameters either listed in the instructions (see help(CommandLineParser)) or the afferent parameters of the plan. Generally, this should be called by the argv_parse() function and not directly. If a plan is given as the first argument, then the following rules are used to determine how arguments are parsed: * An argument that begins with -- (e.g., --quux-factor=10) is checked for a matching plan parameter; the argument name "quux-factor" will match either a parameter called "quux-factor" or "quux_factor" (dashes in command-line arguments are auto-translated into underscores). * If "quux_factor" is a parameter to the plan and is the only parameter that starts with a 'q', then -q10 or -q 10 are both equivalent to --quux-factor=10. If other parameters also start with a 'q' then neither "quux_factor" nor the other parameter(s) will be matched with the -q flag unless it is specified explicitly via the arg_abbrevs option. * Argument values are parsed using Python's ast.literal_eval(); if this raises an exception then the value is left as a string. * If an argument or flag is provided without an argument (e.g. "--quuxztize" or "-q") then it is interpreted as a boolean flag and is given the value True. * Arguments that come after the flag "--" are never processed. The following options may be given: * arg_names (default: None) may be a dictionary that specifies explicit command-line argument names for the plan parameters; plan parameters should be keys and the argument names should be values. 
Any parameter not listed in this option will be interpreted according to the above rules. If a parameter is mapped to None then it will not be filled from the command-line arguments. * arg_abbrevs (default: None) may be a dictionary that is handled identically to that of arg_names except that its values must be single letters, which are used for the abbreviated flag names. * defaults (default: None) may specify the default values for the plan parameters; this dictionary overrides the default values of the plan itself.
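The abbreviation-guessing pass in to_argv_schema only compares each sorted entry with its immediate neighbors; since sorting groups equal first letters together, this is equivalent to requiring a globally unique first letter. A minimal sketch of just that rule:

```python
def guess_abbrevs(names):
    """Assign a one-letter abbreviation to each parameter whose first
    letter differs from both sorted neighbors', as to_argv_schema does."""
    entries = sorted(names)
    n = len(entries)
    abbrevs = {}
    for i, entry in enumerate(entries):
        if i > 0 and entry[0] == entries[i - 1][0]:
            continue
        if i < n - 1 and entry[0] == entries[i + 1][0]:
            continue
        abbrevs[entry] = entry[0]
    return abbrevs

print(guess_abbrevs(['quux_factor', 'verbose', 'verify']))
```

Here 'verbose' and 'verify' collide on 'v', so only 'quux_factor' gets an abbreviation; a colliding parameter can still be abbreviated via the arg_abbrevs option.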
def get_template_from_request(request, page=None): """ Gets a valid template from different sources or falls back to the default template. """ page_templates = settings.get_page_templates() if len(page_templates) == 0: return settings.PAGE_DEFAULT_TEMPLATE template = request.POST.get('template', request.GET.get('template', None)) if template is not None and \ (template in list(dict(page_templates).keys()) or template == settings.PAGE_DEFAULT_TEMPLATE): return template if page is not None: return page.get_template() return settings.PAGE_DEFAULT_TEMPLATE
Gets a valid template from different sources or falls back to the default template.
def _parseFreePBXconf(self): """Parses FreePBX configuration file /etc/amportal for user and password for Asterisk Manager Interface. @return: True if configuration file is found and parsed successfully. """ amiuser = None amipass = None if os.path.isfile(confFileFreePBX): try: fp = open(confFileFreePBX, 'r') data = fp.read() fp.close() except: raise IOError('Failed reading FreePBX configuration file: %s' % confFileFreePBX) for (key, val) in re.findall(r'^(AMPMGR\w+)\s*=\s*(\S+)\s*$', data, re.MULTILINE): if key == 'AMPMGRUSER': amiuser = val elif key == 'AMPMGRPASS': amipass = val if amiuser and amipass: self._amiuser = amiuser self._amipass = amipass return True return False
Parses FreePBX configuration file /etc/amportal for user and password for Asterisk Manager Interface. @return: True if configuration file is found and parsed successfully.
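The credential scan is a plain MULTILINE regex over the file contents; here the same pattern runs on a small inline sample instead of /etc/amportal:

```python
import re

sample = """AMPDBUSER=asterisk
AMPMGRUSER=admin
AMPMGRPASS=s3cret
"""

# the same scan _parseFreePBXconf runs: ^ anchors at each line start
# under re.MULTILINE, and only AMPMGR* keys are captured
creds = dict(re.findall(r'^(AMPMGR\w+)\s*=\s*(\S+)\s*$', sample, re.MULTILINE))
print(creds)
```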
def bounds(self): """The bounds of the random variable. Set `self.i=0.95` to return the 95% interval if this is used for setting bounds on optimizers/etc. where infinite bounds may not be useful. """ return [scipy.stats.norm.interval(self.i, loc=m, scale=s) for s, m in zip(self.sigma, self.mu)]
The bounds of the random variable. Set `self.i=0.95` to return the 95% interval if this is used for setting bounds on optimizers/etc. where infinite bounds may not be useful.
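scipy.stats.norm.interval(i, loc=m, scale=s) returns the central i-probability interval of a normal distribution; the same quantity can be computed with the standard library's statistics.NormalDist, shown here as a scipy-free sketch:

```python
from statistics import NormalDist

def interval(i, mu, sigma):
    """Central i-probability interval of Normal(mu, sigma), equivalent
    to scipy.stats.norm.interval(i, loc=mu, scale=sigma)."""
    z = NormalDist().inv_cdf((1 + i) / 2)  # e.g. ~1.96 for i = 0.95
    return (mu - z * sigma, mu + z * sigma)

lo, hi = interval(0.95, mu=10.0, sigma=2.0)
print(round(lo, 3), round(hi, 3))  # → 6.08 13.92
```

Setting i below 1 this way yields finite bounds suitable for optimizers, which is exactly why the property suggests `self.i = 0.95`.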
def slugify(value): """ Converts to lowercase, removes non-word characters (alphanumerics and underscores) and converts spaces to hyphens. Also strips leading and trailing whitespace. """ value = re.sub(r"[^\w\s-]", "", value).strip().lower() return re.sub(r"[-\s]+", "-", value)
Converts to lowercase, removes non-word characters (alphanumerics and underscores) and converts spaces to hyphens. Also strips leading and trailing whitespace.
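A runnable copy of slugify (including the lowercasing the docstring promises) for quick checks:

```python
import re

def slugify(value):
    """Lowercase, drop characters outside \w/\s/-, then collapse
    whitespace and hyphen runs to single hyphens."""
    value = re.sub(r"[^\w\s-]", "", value).strip().lower()
    return re.sub(r"[-\s]+", "-", value)

print(slugify("  Hello, World -- again!  "))  # → hello-world-again
```

Underscores survive because `\w` matches them; only punctuation like commas and exclamation marks is stripped.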
def array_ratio_std(values_n, sigmas_n, values_d, sigmas_d): r"""Gives error on the ratio of 2 floats or 2 1-dimensional arrays given their values and uncertainties. This assumes the covariance = 0, and that the input uncertainties are small compared to the corresponding input values. _n and _d denote the numerator and denominator respectively. Parameters ---------- values_n: float or numpy array Numerator values. sigmas_n: float or numpy array :math:`1\sigma` uncertainties on values_n. values_d: float or numpy array Denominator values. sigmas_d: float or numpy array :math:`1\sigma` uncertainties on values_d. Returns ------- std: float or numpy array :math:`1\sigma` uncertainty on values_n / values_d. """ std = np.sqrt((sigmas_n / values_n) ** 2 + (sigmas_d / values_d) ** 2) std *= (values_n / values_d) return std
r"""Gives error on the ratio of 2 floats or 2 1-dimensional arrays given their values and uncertainties. This assumes the covariance = 0, and that the input uncertainties are small compared to the corresponding input values. _n and _d denote the numerator and denominator respectively. Parameters ---------- values_n: float or numpy array Numerator values. sigmas_n: float or numpy array :math:`1\sigma` uncertainties on values_n. values_d: float or numpy array Denominator values. sigmas_d: float or numpy array :math:`1\sigma` uncertainties on values_d. Returns ------- std: float or numpy array :math:`1\sigma` uncertainty on values_n / values_d.
Below is the instruction that describes the task: ### Input: r"""Gives error on the ratio of 2 floats or 2 1-dimensional arrays given their values and uncertainties. This assumes the covariance = 0, and that the input uncertainties are small compared to the corresponding input values. _n and _d denote the numerator and denominator respectively. Parameters ---------- values_n: float or numpy array Numerator values. sigmas_n: float or numpy array :math:`1\sigma` uncertainties on values_n. values_d: float or numpy array Denominator values. sigmas_d: float or numpy array :math:`1\sigma` uncertainties on values_d. Returns ------- std: float or numpy array :math:`1\sigma` uncertainty on values_n / values_d. ### Response: def array_ratio_std(values_n, sigmas_n, values_d, sigmas_d): r"""Gives error on the ratio of 2 floats or 2 1-dimensional arrays given their values and uncertainties. This assumes the covariance = 0, and that the input uncertainties are small compared to the corresponding input values. _n and _d denote the numerator and denominator respectively. Parameters ---------- values_n: float or numpy array Numerator values. sigmas_n: float or numpy array :math:`1\sigma` uncertainties on values_n. values_d: float or numpy array Denominator values. sigmas_d: float or numpy array :math:`1\sigma` uncertainties on values_d. Returns ------- std: float or numpy array :math:`1\sigma` uncertainty on values_n / values_d. """ std = np.sqrt((sigmas_n / values_n) ** 2 + (sigmas_d / values_d) ** 2) std *= (values_n / values_d) return std
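A dependency-free scalar sketch of the same error-propagation formula as `array_ratio_std` (the original uses numpy so it also works element-wise on arrays; this sketch uses `math` to stay runnable without numpy).

```python
import math

def ratio_std(value_n, sigma_n, value_d, sigma_d):
    # Uncorrelated first-order error propagation for r = n / d:
    #   sigma_r = (n/d) * sqrt((sigma_n/n)**2 + (sigma_d/d)**2)
    rel = math.sqrt((sigma_n / value_n) ** 2 + (sigma_d / value_d) ** 2)
    return (value_n / value_d) * rel

# 10 +/- 1 over 5 +/- 0.5: both carry 10% relative error, so the
# ratio 2.0 carries sqrt(2) * 10% ~ 14.1%, i.e. sigma ~ 0.283.
print(ratio_std(10.0, 1.0, 5.0, 0.5))
```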
def parse_args_kwargs(self, *args, **kwargs): '''Parse the arguments with keywords.''' # unpack the arginfo keys, defdict = self.arginfo assigned = keys[:len(args)] not_assigned = keys[len(args):] # validate kwargs for key in kwargs: assert key not in assigned assert key in keys # integrate args and kwargs knowns = dict(defdict, **kwargs) parsed_args = args + tuple([knowns[key] for key in not_assigned]) return parsed_args
Parse the arguments with keywords.
Below is the instruction that describes the task: ### Input: Parse the arguments with keywords. ### Response: def parse_args_kwargs(self, *args, **kwargs): '''Parse the arguments with keywords.''' # unpack the arginfo keys, defdict = self.arginfo assigned = keys[:len(args)] not_assigned = keys[len(args):] # validate kwargs for key in kwargs: assert key not in assigned assert key in keys # integrate args and kwargs knowns = dict(defdict, **kwargs) parsed_args = args + tuple([knowns[key] for key in not_assigned]) return parsed_args
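A standalone sketch of the `parse_args_kwargs` logic above. The original reads `keys, defdict` from `self.arginfo`; here they are passed explicitly (an assumption made so the sketch runs without the surrounding class).

```python
def parse_args_kwargs(keys, defdict, *args, **kwargs):
    # keys: ordered parameter names; defdict: defaults for unfilled names.
    assigned = keys[:len(args)]          # names consumed by positionals
    not_assigned = keys[len(args):]      # names still to be filled
    for key in kwargs:
        assert key not in assigned, "duplicate value for %r" % key
        assert key in keys, "unknown keyword %r" % key
    # kwargs override defaults; positionals stay in front, in order.
    knowns = dict(defdict, **kwargs)
    return args + tuple(knowns[key] for key in not_assigned)

print(parse_args_kwargs(["a", "b", "c"], {"b": 2, "c": 3}, 1, c=30))  # -> (1, 2, 30)
```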
def header(self, method, client='htmlshark'): ''' generates Grooveshark API Json header ''' return {'token': self._request_token(method, client), 'privacy': 0, 'uuid': self.session.user, 'clientRevision': grooveshark.const.CLIENTS[client]['version'], 'session': self.session.session, 'client': client, 'country': self.session.country}
generates Grooveshark API Json header
Below is the instruction that describes the task: ### Input: generates Grooveshark API Json header ### Response: def header(self, method, client='htmlshark'): ''' generates Grooveshark API Json header ''' return {'token': self._request_token(method, client), 'privacy': 0, 'uuid': self.session.user, 'clientRevision': grooveshark.const.CLIENTS[client]['version'], 'session': self.session.session, 'client': client, 'country': self.session.country}
def cli(env, identifier): """Reflash server firmware.""" mgr = SoftLayer.HardwareManager(env.client) hw_id = helpers.resolve_id(mgr.resolve_ids, identifier, 'hardware') if not (env.skip_confirmations or formatting.confirm('This will power off the server with id %s and ' 'reflash device firmware. Continue?' % hw_id)): raise exceptions.CLIAbort('Aborted.') mgr.reflash_firmware(hw_id)
Reflash server firmware.
Below is the instruction that describes the task: ### Input: Reflash server firmware. ### Response: def cli(env, identifier): """Reflash server firmware.""" mgr = SoftLayer.HardwareManager(env.client) hw_id = helpers.resolve_id(mgr.resolve_ids, identifier, 'hardware') if not (env.skip_confirmations or formatting.confirm('This will power off the server with id %s and ' 'reflash device firmware. Continue?' % hw_id)): raise exceptions.CLIAbort('Aborted.') mgr.reflash_firmware(hw_id)
def download(self): """Download the specified file.""" def total_seconds(td): # Keep backward compatibility with Python 2.6 which doesn't have # this method if hasattr(td, 'total_seconds'): return td.total_seconds() else: return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10 ** 6) / 10 ** 6 # Don't re-download the file if os.path.isfile(os.path.abspath(self.filename)): self.logger.info("File has already been downloaded: %s" % (self.filename)) return self.filename directory = os.path.dirname(self.filename) if not os.path.isdir(directory): os.makedirs(directory) self.logger.info('Downloading from: %s' % self.url) self.logger.info('Saving as: %s' % self.filename) tmp_file = self.filename + ".part" def _download(): try: start_time = datetime.now() # Enable streaming mode so we can download content in chunks r = self.session.get(self.url, stream=True) r.raise_for_status() content_length = r.headers.get('Content-length') # ValueError: Value out of range if only total_size given if content_length: total_size = int(content_length.strip()) max_value = ((total_size / CHUNK_SIZE) + 1) * CHUNK_SIZE bytes_downloaded = 0 log_level = self.logger.getEffectiveLevel() if log_level <= logging.INFO and content_length: widgets = [pb.Percentage(), ' ', pb.Bar(), ' ', pb.ETA(), ' ', pb.FileTransferSpeed()] pbar = pb.ProgressBar(widgets=widgets, maxval=max_value).start() with open(tmp_file, 'wb') as f: for chunk in r.iter_content(CHUNK_SIZE): f.write(chunk) bytes_downloaded += CHUNK_SIZE if log_level <= logging.INFO and content_length: pbar.update(bytes_downloaded) t1 = total_seconds(datetime.now() - start_time) if self.timeout_download and \ t1 >= self.timeout_download: raise errors.TimeoutError if log_level <= logging.INFO and content_length: pbar.finish() except Exception: if os.path.isfile(tmp_file): os.remove(tmp_file) raise self._retry(_download, retry_exceptions=(requests.exceptions.RequestException, errors.TimeoutError)) os.rename(tmp_file, self.filename) return 
self.filename
Download the specified file.
Below is the the instruction that describes the task: ### Input: Download the specified file. ### Response: def download(self): """Download the specified file.""" def total_seconds(td): # Keep backward compatibility with Python 2.6 which doesn't have # this method if hasattr(td, 'total_seconds'): return td.total_seconds() else: return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10 ** 6) / 10 ** 6 # Don't re-download the file if os.path.isfile(os.path.abspath(self.filename)): self.logger.info("File has already been downloaded: %s" % (self.filename)) return self.filename directory = os.path.dirname(self.filename) if not os.path.isdir(directory): os.makedirs(directory) self.logger.info('Downloading from: %s' % self.url) self.logger.info('Saving as: %s' % self.filename) tmp_file = self.filename + ".part" def _download(): try: start_time = datetime.now() # Enable streaming mode so we can download content in chunks r = self.session.get(self.url, stream=True) r.raise_for_status() content_length = r.headers.get('Content-length') # ValueError: Value out of range if only total_size given if content_length: total_size = int(content_length.strip()) max_value = ((total_size / CHUNK_SIZE) + 1) * CHUNK_SIZE bytes_downloaded = 0 log_level = self.logger.getEffectiveLevel() if log_level <= logging.INFO and content_length: widgets = [pb.Percentage(), ' ', pb.Bar(), ' ', pb.ETA(), ' ', pb.FileTransferSpeed()] pbar = pb.ProgressBar(widgets=widgets, maxval=max_value).start() with open(tmp_file, 'wb') as f: for chunk in r.iter_content(CHUNK_SIZE): f.write(chunk) bytes_downloaded += CHUNK_SIZE if log_level <= logging.INFO and content_length: pbar.update(bytes_downloaded) t1 = total_seconds(datetime.now() - start_time) if self.timeout_download and \ t1 >= self.timeout_download: raise errors.TimeoutError if log_level <= logging.INFO and content_length: pbar.finish() except Exception: if os.path.isfile(tmp_file): os.remove(tmp_file) raise self._retry(_download, 
retry_exceptions=(requests.exceptions.RequestException, errors.TimeoutError)) os.rename(tmp_file, self.filename) return self.filename
def union_categoricals(to_union, sort_categories=False, ignore_order=False): """ Combine list-like of Categorical-like, unioning categories. All categories must have the same dtype. .. versionadded:: 0.19.0 Parameters ---------- to_union : list-like of Categorical, CategoricalIndex, or Series with dtype='category' sort_categories : boolean, default False If true, resulting categories will be lexsorted, otherwise they will be ordered as they appear in the data. ignore_order : boolean, default False If true, the ordered attribute of the Categoricals will be ignored. Results in an unordered categorical. .. versionadded:: 0.20.0 Returns ------- result : Categorical Raises ------ TypeError - all inputs do not have the same dtype - all inputs do not have the same ordered property - all inputs are ordered and their categories are not identical - sort_categories=True and Categoricals are ordered ValueError Empty list of categoricals passed Notes ----- To learn more about categories, see `link <http://pandas.pydata.org/pandas-docs/stable/categorical.html#unioning>`__ Examples -------- >>> from pandas.api.types import union_categoricals If you want to combine categoricals that do not necessarily have the same categories, `union_categoricals` will combine a list-like of categoricals. The new categories will be the union of the categories being combined. >>> a = pd.Categorical(["b", "c"]) >>> b = pd.Categorical(["a", "b"]) >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] By default, the resulting categories will be ordered as they appear in the `categories` of the data. If you want the categories to be lexsorted, use `sort_categories=True` argument. >>> union_categoricals([a, b], sort_categories=True) [b, c, a, b] Categories (3, object): [a, b, c] `union_categoricals` also works with the case of combining two categoricals of the same categories and order information (e.g. what you could also `append` for). 
>>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "a"], ordered=True) >>> union_categoricals([a, b]) [a, b, a, b, a] Categories (2, object): [a < b] Raises `TypeError` because the categories are ordered and not identical. >>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "c"], ordered=True) >>> union_categoricals([a, b]) TypeError: to union ordered Categoricals, all categories must be the same New in version 0.20.0 Ordered categoricals with different categories or orderings can be combined by using the `ignore_ordered=True` argument. >>> a = pd.Categorical(["a", "b", "c"], ordered=True) >>> b = pd.Categorical(["c", "b", "a"], ordered=True) >>> union_categoricals([a, b], ignore_order=True) [a, b, c, c, b, a] Categories (3, object): [a, b, c] `union_categoricals` also works with a `CategoricalIndex`, or `Series` containing categorical data, but note that the resulting array will always be a plain `Categorical` >>> a = pd.Series(["b", "c"], dtype='category') >>> b = pd.Series(["a", "b"], dtype='category') >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] """ from pandas import Index, Categorical, CategoricalIndex, Series from pandas.core.arrays.categorical import _recode_for_categories if len(to_union) == 0: raise ValueError('No Categoricals to union') def _maybe_unwrap(x): if isinstance(x, (CategoricalIndex, Series)): return x.values elif isinstance(x, Categorical): return x else: raise TypeError("all components to combine must be Categorical") to_union = [_maybe_unwrap(x) for x in to_union] first = to_union[0] if not all(is_dtype_equal(other.categories.dtype, first.categories.dtype) for other in to_union[1:]): raise TypeError("dtype of categories must be the same") ordered = False if all(first.is_dtype_equal(other) for other in to_union[1:]): # identical categories - fastpath categories = first.categories ordered = first.ordered if 
all(first.categories.equals(other.categories) for other in to_union[1:]): new_codes = np.concatenate([c.codes for c in to_union]) else: codes = [first.codes] + [_recode_for_categories(other.codes, other.categories, first.categories) for other in to_union[1:]] new_codes = np.concatenate(codes) if sort_categories and not ignore_order and ordered: raise TypeError("Cannot use sort_categories=True with " "ordered Categoricals") if sort_categories and not categories.is_monotonic_increasing: categories = categories.sort_values() indexer = categories.get_indexer(first.categories) from pandas.core.algorithms import take_1d new_codes = take_1d(indexer, new_codes, fill_value=-1) elif ignore_order or all(not c.ordered for c in to_union): # different categories - union and recode cats = first.categories.append([c.categories for c in to_union[1:]]) categories = Index(cats.unique()) if sort_categories: categories = categories.sort_values() new_codes = [_recode_for_categories(c.codes, c.categories, categories) for c in to_union] new_codes = np.concatenate(new_codes) else: # ordered - to show a proper error message if all(c.ordered for c in to_union): msg = ("to union ordered Categoricals, " "all categories must be the same") raise TypeError(msg) else: raise TypeError('Categorical.ordered must be the same') if ignore_order: ordered = False return Categorical(new_codes, categories=categories, ordered=ordered, fastpath=True)
Combine list-like of Categorical-like, unioning categories. All categories must have the same dtype. .. versionadded:: 0.19.0 Parameters ---------- to_union : list-like of Categorical, CategoricalIndex, or Series with dtype='category' sort_categories : boolean, default False If true, resulting categories will be lexsorted, otherwise they will be ordered as they appear in the data. ignore_order : boolean, default False If true, the ordered attribute of the Categoricals will be ignored. Results in an unordered categorical. .. versionadded:: 0.20.0 Returns ------- result : Categorical Raises ------ TypeError - all inputs do not have the same dtype - all inputs do not have the same ordered property - all inputs are ordered and their categories are not identical - sort_categories=True and Categoricals are ordered ValueError Empty list of categoricals passed Notes ----- To learn more about categories, see `link <http://pandas.pydata.org/pandas-docs/stable/categorical.html#unioning>`__ Examples -------- >>> from pandas.api.types import union_categoricals If you want to combine categoricals that do not necessarily have the same categories, `union_categoricals` will combine a list-like of categoricals. The new categories will be the union of the categories being combined. >>> a = pd.Categorical(["b", "c"]) >>> b = pd.Categorical(["a", "b"]) >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] By default, the resulting categories will be ordered as they appear in the `categories` of the data. If you want the categories to be lexsorted, use `sort_categories=True` argument. >>> union_categoricals([a, b], sort_categories=True) [b, c, a, b] Categories (3, object): [a, b, c] `union_categoricals` also works with the case of combining two categoricals of the same categories and order information (e.g. what you could also `append` for). 
>>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "a"], ordered=True) >>> union_categoricals([a, b]) [a, b, a, b, a] Categories (2, object): [a < b] Raises `TypeError` because the categories are ordered and not identical. >>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "c"], ordered=True) >>> union_categoricals([a, b]) TypeError: to union ordered Categoricals, all categories must be the same New in version 0.20.0 Ordered categoricals with different categories or orderings can be combined by using the `ignore_ordered=True` argument. >>> a = pd.Categorical(["a", "b", "c"], ordered=True) >>> b = pd.Categorical(["c", "b", "a"], ordered=True) >>> union_categoricals([a, b], ignore_order=True) [a, b, c, c, b, a] Categories (3, object): [a, b, c] `union_categoricals` also works with a `CategoricalIndex`, or `Series` containing categorical data, but note that the resulting array will always be a plain `Categorical` >>> a = pd.Series(["b", "c"], dtype='category') >>> b = pd.Series(["a", "b"], dtype='category') >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a]
Below is the the instruction that describes the task: ### Input: Combine list-like of Categorical-like, unioning categories. All categories must have the same dtype. .. versionadded:: 0.19.0 Parameters ---------- to_union : list-like of Categorical, CategoricalIndex, or Series with dtype='category' sort_categories : boolean, default False If true, resulting categories will be lexsorted, otherwise they will be ordered as they appear in the data. ignore_order : boolean, default False If true, the ordered attribute of the Categoricals will be ignored. Results in an unordered categorical. .. versionadded:: 0.20.0 Returns ------- result : Categorical Raises ------ TypeError - all inputs do not have the same dtype - all inputs do not have the same ordered property - all inputs are ordered and their categories are not identical - sort_categories=True and Categoricals are ordered ValueError Empty list of categoricals passed Notes ----- To learn more about categories, see `link <http://pandas.pydata.org/pandas-docs/stable/categorical.html#unioning>`__ Examples -------- >>> from pandas.api.types import union_categoricals If you want to combine categoricals that do not necessarily have the same categories, `union_categoricals` will combine a list-like of categoricals. The new categories will be the union of the categories being combined. >>> a = pd.Categorical(["b", "c"]) >>> b = pd.Categorical(["a", "b"]) >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] By default, the resulting categories will be ordered as they appear in the `categories` of the data. If you want the categories to be lexsorted, use `sort_categories=True` argument. >>> union_categoricals([a, b], sort_categories=True) [b, c, a, b] Categories (3, object): [a, b, c] `union_categoricals` also works with the case of combining two categoricals of the same categories and order information (e.g. what you could also `append` for). 
>>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "a"], ordered=True) >>> union_categoricals([a, b]) [a, b, a, b, a] Categories (2, object): [a < b] Raises `TypeError` because the categories are ordered and not identical. >>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "c"], ordered=True) >>> union_categoricals([a, b]) TypeError: to union ordered Categoricals, all categories must be the same New in version 0.20.0 Ordered categoricals with different categories or orderings can be combined by using the `ignore_ordered=True` argument. >>> a = pd.Categorical(["a", "b", "c"], ordered=True) >>> b = pd.Categorical(["c", "b", "a"], ordered=True) >>> union_categoricals([a, b], ignore_order=True) [a, b, c, c, b, a] Categories (3, object): [a, b, c] `union_categoricals` also works with a `CategoricalIndex`, or `Series` containing categorical data, but note that the resulting array will always be a plain `Categorical` >>> a = pd.Series(["b", "c"], dtype='category') >>> b = pd.Series(["a", "b"], dtype='category') >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] ### Response: def union_categoricals(to_union, sort_categories=False, ignore_order=False): """ Combine list-like of Categorical-like, unioning categories. All categories must have the same dtype. .. versionadded:: 0.19.0 Parameters ---------- to_union : list-like of Categorical, CategoricalIndex, or Series with dtype='category' sort_categories : boolean, default False If true, resulting categories will be lexsorted, otherwise they will be ordered as they appear in the data. ignore_order : boolean, default False If true, the ordered attribute of the Categoricals will be ignored. Results in an unordered categorical. .. 
versionadded:: 0.20.0 Returns ------- result : Categorical Raises ------ TypeError - all inputs do not have the same dtype - all inputs do not have the same ordered property - all inputs are ordered and their categories are not identical - sort_categories=True and Categoricals are ordered ValueError Empty list of categoricals passed Notes ----- To learn more about categories, see `link <http://pandas.pydata.org/pandas-docs/stable/categorical.html#unioning>`__ Examples -------- >>> from pandas.api.types import union_categoricals If you want to combine categoricals that do not necessarily have the same categories, `union_categoricals` will combine a list-like of categoricals. The new categories will be the union of the categories being combined. >>> a = pd.Categorical(["b", "c"]) >>> b = pd.Categorical(["a", "b"]) >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] By default, the resulting categories will be ordered as they appear in the `categories` of the data. If you want the categories to be lexsorted, use `sort_categories=True` argument. >>> union_categoricals([a, b], sort_categories=True) [b, c, a, b] Categories (3, object): [a, b, c] `union_categoricals` also works with the case of combining two categoricals of the same categories and order information (e.g. what you could also `append` for). >>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "a"], ordered=True) >>> union_categoricals([a, b]) [a, b, a, b, a] Categories (2, object): [a < b] Raises `TypeError` because the categories are ordered and not identical. >>> a = pd.Categorical(["a", "b"], ordered=True) >>> b = pd.Categorical(["a", "b", "c"], ordered=True) >>> union_categoricals([a, b]) TypeError: to union ordered Categoricals, all categories must be the same New in version 0.20.0 Ordered categoricals with different categories or orderings can be combined by using the `ignore_ordered=True` argument. 
>>> a = pd.Categorical(["a", "b", "c"], ordered=True) >>> b = pd.Categorical(["c", "b", "a"], ordered=True) >>> union_categoricals([a, b], ignore_order=True) [a, b, c, c, b, a] Categories (3, object): [a, b, c] `union_categoricals` also works with a `CategoricalIndex`, or `Series` containing categorical data, but note that the resulting array will always be a plain `Categorical` >>> a = pd.Series(["b", "c"], dtype='category') >>> b = pd.Series(["a", "b"], dtype='category') >>> union_categoricals([a, b]) [b, c, a, b] Categories (3, object): [b, c, a] """ from pandas import Index, Categorical, CategoricalIndex, Series from pandas.core.arrays.categorical import _recode_for_categories if len(to_union) == 0: raise ValueError('No Categoricals to union') def _maybe_unwrap(x): if isinstance(x, (CategoricalIndex, Series)): return x.values elif isinstance(x, Categorical): return x else: raise TypeError("all components to combine must be Categorical") to_union = [_maybe_unwrap(x) for x in to_union] first = to_union[0] if not all(is_dtype_equal(other.categories.dtype, first.categories.dtype) for other in to_union[1:]): raise TypeError("dtype of categories must be the same") ordered = False if all(first.is_dtype_equal(other) for other in to_union[1:]): # identical categories - fastpath categories = first.categories ordered = first.ordered if all(first.categories.equals(other.categories) for other in to_union[1:]): new_codes = np.concatenate([c.codes for c in to_union]) else: codes = [first.codes] + [_recode_for_categories(other.codes, other.categories, first.categories) for other in to_union[1:]] new_codes = np.concatenate(codes) if sort_categories and not ignore_order and ordered: raise TypeError("Cannot use sort_categories=True with " "ordered Categoricals") if sort_categories and not categories.is_monotonic_increasing: categories = categories.sort_values() indexer = categories.get_indexer(first.categories) from pandas.core.algorithms import take_1d new_codes = 
take_1d(indexer, new_codes, fill_value=-1) elif ignore_order or all(not c.ordered for c in to_union): # different categories - union and recode cats = first.categories.append([c.categories for c in to_union[1:]]) categories = Index(cats.unique()) if sort_categories: categories = categories.sort_values() new_codes = [_recode_for_categories(c.codes, c.categories, categories) for c in to_union] new_codes = np.concatenate(new_codes) else: # ordered - to show a proper error message if all(c.ordered for c in to_union): msg = ("to union ordered Categoricals, " "all categories must be the same") raise TypeError(msg) else: raise TypeError('Categorical.ordered must be the same') if ignore_order: ordered = False return Categorical(new_codes, categories=categories, ordered=ordered, fastpath=True)
def prepare_for_translation(localization_bundle_path): """ Prepares the localization bundle for translation. This means, after creating the strings files using genstrings.sh, this will produce '.pending' files, that contain the files that are yet to be translated. Args: localization_bundle_path (str): The path to the localization bundle. """ logging.info("Preparing for translation..") for strings_file in os.listdir(os.path.join(localization_bundle_path, DEFAULT_LANGUAGE_DIRECTORY_NAME)): if not strings_file.endswith(".strings"): continue strings_path = os.path.join(localization_bundle_path, DEFAULT_LANGUAGE_DIRECTORY_NAME, strings_file) for lang_dir in os.listdir(localization_bundle_path): if lang_dir == DEFAULT_LANGUAGE_DIRECTORY_NAME or lang_dir.startswith("."): continue dest_strings_path = os.path.join(localization_bundle_path, lang_dir, strings_file) pending_path = dest_strings_path + ".pending" excluded_path = dest_strings_path + ".excluded" if not os.path.exists(dest_strings_path): open_strings_file(dest_strings_path, "a").close() logging.info("Preparing diff for %s in %s", lang_dir, pending_path) localization_diff(strings_path, dest_strings_path, excluded_path, pending_path)
Prepares the localization bundle for translation. This means, after creating the strings files using genstrings.sh, this will produce '.pending' files, that contain the files that are yet to be translated. Args: localization_bundle_path (str): The path to the localization bundle.
Below is the instruction that describes the task: ### Input: Prepares the localization bundle for translation. This means, after creating the strings files using genstrings.sh, this will produce '.pending' files, that contain the files that are yet to be translated. Args: localization_bundle_path (str): The path to the localization bundle. ### Response: def prepare_for_translation(localization_bundle_path): """ Prepares the localization bundle for translation. This means, after creating the strings files using genstrings.sh, this will produce '.pending' files, that contain the files that are yet to be translated. Args: localization_bundle_path (str): The path to the localization bundle. """ logging.info("Preparing for translation..") for strings_file in os.listdir(os.path.join(localization_bundle_path, DEFAULT_LANGUAGE_DIRECTORY_NAME)): if not strings_file.endswith(".strings"): continue strings_path = os.path.join(localization_bundle_path, DEFAULT_LANGUAGE_DIRECTORY_NAME, strings_file) for lang_dir in os.listdir(localization_bundle_path): if lang_dir == DEFAULT_LANGUAGE_DIRECTORY_NAME or lang_dir.startswith("."): continue dest_strings_path = os.path.join(localization_bundle_path, lang_dir, strings_file) pending_path = dest_strings_path + ".pending" excluded_path = dest_strings_path + ".excluded" if not os.path.exists(dest_strings_path): open_strings_file(dest_strings_path, "a").close() logging.info("Preparing diff for %s in %s", lang_dir, pending_path) localization_diff(strings_path, dest_strings_path, excluded_path, pending_path)
def strip_empty_lines_forward(self, content, i): """ Skip over empty lines :param content: parsed text :param i: current parsed line :return: number of skipped lines """ while i < len(content): line = content[i].strip(' \r\n\t\f') if line != '': break self.debug_print_strip_msg(i, content[i]) i += 1 # Strip an empty line return i
Skip over empty lines :param content: parsed text :param i: current parsed line :return: number of skipped lines
Below is the instruction that describes the task: ### Input: Skip over empty lines :param content: parsed text :param i: current parsed line :return: number of skipped lines ### Response: def strip_empty_lines_forward(self, content, i): """ Skip over empty lines :param content: parsed text :param i: current parsed line :return: number of skipped lines """ while i < len(content): line = content[i].strip(' \r\n\t\f') if line != '': break self.debug_print_strip_msg(i, content[i]) i += 1 # Strip an empty line return i
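A standalone sketch of `strip_empty_lines_forward`, with the `self.debug_print_strip_msg` logging call dropped so it runs outside the original class. Note the function returns the index of the first non-empty line (not a count of skipped lines, despite the docstring's wording).

```python
def strip_empty_lines_forward(content, i):
    # Advance past lines that are empty after stripping spaces, CR, LF,
    # tab, and form feed; return the index of the first non-empty line
    # (or len(content) if everything from i onward is empty).
    while i < len(content):
        if content[i].strip(' \r\n\t\f') != '':
            break
        i += 1
    return i

print(strip_empty_lines_forward(["a", "", " \t ", "b"], 1))  # -> 3
```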
def profile(ids=None, track_ids=None, buckets=None, limit=False): """get the profiles for multiple songs at once Args: ids (str or list): a song ID or list of song IDs Kwargs: buckets (list): A list of strings specifying which buckets to retrieve limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets Returns: A list of term document dicts Example: >>> song_ids = ['SOBSLVH12A8C131F38', 'SOXMSGY1338A5D5873', 'SOJPHZO1376210AFE5', 'SOBHNKR12AB0186218', 'SOSJAHD13770F4D40C'] >>> songs = song.profile(song_ids, buckets=['audio_summary']) [<song - Say It Ain't So>, <song - Island In The Sun>, <song - My Name Is Jonas>, <song - Buddy Holly>] >>> songs[0].audio_summary {u'analysis_url': u'https://echonest-analysis.s3.amazonaws.com/TR/7VRBNguufpHAQQ4ZjJ0eWsIQWl2S2_lrK-7Bp2azHOvPN4VFV-YnU7uO0dXgYtOKT-MTEa/3/full.json?Signature=hmNghHwfEsA4JKWFXnRi7mVP6T8%3D&Expires=1349809918&AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ', u'audio_md5': u'b6079b2b88f8265be8bdd5fe9702e05c', u'danceability': 0.64540643050283253, u'duration': 255.92117999999999, u'energy': 0.30711665772260549, u'key': 8, u'liveness': 0.088994423525370583, u'loudness': -9.7799999999999994, u'mode': 1, u'speechiness': 0.031970700260699259, u'tempo': 76.049999999999997, u'time_signature': 4} >>> """ kwargs = {} if ids: if not isinstance(ids, list): ids = [ids] kwargs['id'] = ids if track_ids: if not isinstance(track_ids, list): track_ids = [track_ids] kwargs['track_id'] = track_ids buckets = buckets or [] if buckets: kwargs['bucket'] = buckets if limit: kwargs['limit'] = 'true' result = util.callm("%s/%s" % ('song', 'profile'), kwargs) return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]
get the profiles for multiple songs at once Args: ids (str or list): a song ID or list of song IDs Kwargs: buckets (list): A list of strings specifying which buckets to retrieve limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets Returns: A list of term document dicts Example: >>> song_ids = ['SOBSLVH12A8C131F38', 'SOXMSGY1338A5D5873', 'SOJPHZO1376210AFE5', 'SOBHNKR12AB0186218', 'SOSJAHD13770F4D40C'] >>> songs = song.profile(song_ids, buckets=['audio_summary']) [<song - Say It Ain't So>, <song - Island In The Sun>, <song - My Name Is Jonas>, <song - Buddy Holly>] >>> songs[0].audio_summary {u'analysis_url': u'https://echonest-analysis.s3.amazonaws.com/TR/7VRBNguufpHAQQ4ZjJ0eWsIQWl2S2_lrK-7Bp2azHOvPN4VFV-YnU7uO0dXgYtOKT-MTEa/3/full.json?Signature=hmNghHwfEsA4JKWFXnRi7mVP6T8%3D&Expires=1349809918&AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ', u'audio_md5': u'b6079b2b88f8265be8bdd5fe9702e05c', u'danceability': 0.64540643050283253, u'duration': 255.92117999999999, u'energy': 0.30711665772260549, u'key': 8, u'liveness': 0.088994423525370583, u'loudness': -9.7799999999999994, u'mode': 1, u'speechiness': 0.031970700260699259, u'tempo': 76.049999999999997, u'time_signature': 4} >>>
Below is the the instruction that describes the task: ### Input: get the profiles for multiple songs at once Args: ids (str or list): a song ID or list of song IDs Kwargs: buckets (list): A list of strings specifying which buckets to retrieve limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets Returns: A list of term document dicts Example: >>> song_ids = ['SOBSLVH12A8C131F38', 'SOXMSGY1338A5D5873', 'SOJPHZO1376210AFE5', 'SOBHNKR12AB0186218', 'SOSJAHD13770F4D40C'] >>> songs = song.profile(song_ids, buckets=['audio_summary']) [<song - Say It Ain't So>, <song - Island In The Sun>, <song - My Name Is Jonas>, <song - Buddy Holly>] >>> songs[0].audio_summary {u'analysis_url': u'https://echonest-analysis.s3.amazonaws.com/TR/7VRBNguufpHAQQ4ZjJ0eWsIQWl2S2_lrK-7Bp2azHOvPN4VFV-YnU7uO0dXgYtOKT-MTEa/3/full.json?Signature=hmNghHwfEsA4JKWFXnRi7mVP6T8%3D&Expires=1349809918&AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ', u'audio_md5': u'b6079b2b88f8265be8bdd5fe9702e05c', u'danceability': 0.64540643050283253, u'duration': 255.92117999999999, u'energy': 0.30711665772260549, u'key': 8, u'liveness': 0.088994423525370583, u'loudness': -9.7799999999999994, u'mode': 1, u'speechiness': 0.031970700260699259, u'tempo': 76.049999999999997, u'time_signature': 4} >>> ### Response: def profile(ids=None, track_ids=None, buckets=None, limit=False): """get the profiles for multiple songs at once Args: ids (str or list): a song ID or list of song IDs Kwargs: buckets (list): A list of strings specifying which buckets to retrieve limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets Returns: A list of term document dicts Example: >>> song_ids = ['SOBSLVH12A8C131F38', 'SOXMSGY1338A5D5873', 'SOJPHZO1376210AFE5', 'SOBHNKR12AB0186218', 'SOSJAHD13770F4D40C'] >>> songs = song.profile(song_ids, buckets=['audio_summary']) [<song - Say It Ain't So>, <song - Island In The Sun>, <song - My Name Is 
Jonas>, <song - Buddy Holly>] >>> songs[0].audio_summary {u'analysis_url': u'https://echonest-analysis.s3.amazonaws.com/TR/7VRBNguufpHAQQ4ZjJ0eWsIQWl2S2_lrK-7Bp2azHOvPN4VFV-YnU7uO0dXgYtOKT-MTEa/3/full.json?Signature=hmNghHwfEsA4JKWFXnRi7mVP6T8%3D&Expires=1349809918&AWSAccessKeyId=AKIAJRDFEY23UEVW42BQ', u'audio_md5': u'b6079b2b88f8265be8bdd5fe9702e05c', u'danceability': 0.64540643050283253, u'duration': 255.92117999999999, u'energy': 0.30711665772260549, u'key': 8, u'liveness': 0.088994423525370583, u'loudness': -9.7799999999999994, u'mode': 1, u'speechiness': 0.031970700260699259, u'tempo': 76.049999999999997, u'time_signature': 4} >>> """ kwargs = {} if ids: if not isinstance(ids, list): ids = [ids] kwargs['id'] = ids if track_ids: if not isinstance(track_ids, list): track_ids = [track_ids] kwargs['track_id'] = track_ids buckets = buckets or [] if buckets: kwargs['bucket'] = buckets if limit: kwargs['limit'] = 'true' result = util.callm("%s/%s" % ('song', 'profile'), kwargs) return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]
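The `profile` function above spends most of its body normalizing keyword arguments before handing them to `util.callm` (wrapping scalars into lists, mapping `buckets` to a `bucket` key, and stringifying `limit`). That marshalling logic can be sketched standalone; `build_profile_params` is an illustrative helper name, not part of the pyechonest API.

```python
def build_profile_params(ids=None, track_ids=None, buckets=None, limit=False):
    """Mirror the kwargs normalization performed by song.profile."""
    kwargs = {}
    if ids:
        # a bare string is promoted to a one-element list
        kwargs['id'] = ids if isinstance(ids, list) else [ids]
    if track_ids:
        kwargs['track_id'] = track_ids if isinstance(track_ids, list) else [track_ids]
    if buckets:
        kwargs['bucket'] = buckets
    if limit:
        # the API expects the literal string 'true', not a boolean
        kwargs['limit'] = 'true'
    return kwargs
```

The resulting dict is what would be posted to the `song/profile` endpoint.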
def init_app(self, app): """Flask application initialization.""" self.init_config(app) app.register_blueprint(blueprint) app.extensions['invenio-groups'] = self
Flask application initialization.
Below is the instruction that describes the task: ### Input: Flask application initialization. ### Response: def init_app(self, app): """Flask application initialization.""" self.init_config(app) app.register_blueprint(blueprint) app.extensions['invenio-groups'] = self
def _make_sparse_blocks_with_virtual(self, variable, records, data):
    '''
    Handles the data for the variable with sparse records.

    Organizes the physical record numbers into blocks in a list:
      [[start_rec1, end_rec1, data_1], [start_rec2, end_rec2, data_2], ...]
    Places consecutive physical records into a single block.

    Parameters:
        variable: dict
            the variable, returned from varinq('variable', expand=True)
        records: list
            a list of physical records
        data: varies
            bytes array, numpy.ndarray or list of str form with virtual
            data embedded, returned from varget('variable') call
    '''
    # Gather the ranges for which we have physical data
    sparse_blocks = CDF._make_blocks(records)
    sparse_data = []
    if isinstance(data, np.ndarray):
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            starting = sblock[0]
            ending = sblock[1] + 1
            asparse.append(data[starting:ending])
            sparse_data.append(asparse)
        return sparse_data
    elif isinstance(data, bytes):
        # Each record occupies y bytes: the product of the dimension sizes
        # times the byte size of one element of this data type.
        y = 1
        for z in range(0, variable['Num_Dims']):
            y = y * variable['Dim_Sizes'][z]
        y = y * CDF._datatype_size(variable['Data_Type'],
                                   variable['Num_Elements'])
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            starting = sblock[0] * y
            ending = (sblock[1] + 1) * y
            asparse.append(data[starting:ending])
            sparse_data.append(asparse)
        return sparse_data
    elif isinstance(data, list):
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            num_records = sblock[1] - sblock[0] + 1
            datax = []
            ist = sblock[0]
            for z in range(0, num_records):
                datax.append(data[ist + z])
            asparse.append(datax)
            sparse_data.append(asparse)
        return sparse_data
    else:
        print('Cannot handle data... Skip')
        return None
Handles the data for the variable with sparse records.

Organizes the physical record numbers into blocks in a list:
  [[start_rec1, end_rec1, data_1], [start_rec2, end_rec2, data_2], ...]
Places consecutive physical records into a single block.

Parameters:
    variable: dict
        the variable, returned from varinq('variable', expand=True)
    records: list
        a list of physical records
    data: varies
        bytes array, numpy.ndarray or list of str form with virtual
        data embedded, returned from varget('variable') call
Below is the instruction that describes the task:
### Input:
Handles the data for the variable with sparse records.

Organizes the physical record numbers into blocks in a list:
  [[start_rec1, end_rec1, data_1], [start_rec2, end_rec2, data_2], ...]
Places consecutive physical records into a single block.

Parameters:
    variable: dict
        the variable, returned from varinq('variable', expand=True)
    records: list
        a list of physical records
    data: varies
        bytes array, numpy.ndarray or list of str form with virtual
        data embedded, returned from varget('variable') call
### Response:
def _make_sparse_blocks_with_virtual(self, variable, records, data):
    '''
    Handles the data for the variable with sparse records.

    Organizes the physical record numbers into blocks in a list:
      [[start_rec1, end_rec1, data_1], [start_rec2, end_rec2, data_2], ...]
    Places consecutive physical records into a single block.

    Parameters:
        variable: dict
            the variable, returned from varinq('variable', expand=True)
        records: list
            a list of physical records
        data: varies
            bytes array, numpy.ndarray or list of str form with virtual
            data embedded, returned from varget('variable') call
    '''
    # Gather the ranges for which we have physical data
    sparse_blocks = CDF._make_blocks(records)
    sparse_data = []
    if isinstance(data, np.ndarray):
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            starting = sblock[0]
            ending = sblock[1] + 1
            asparse.append(data[starting:ending])
            sparse_data.append(asparse)
        return sparse_data
    elif isinstance(data, bytes):
        # Each record occupies y bytes: the product of the dimension sizes
        # times the byte size of one element of this data type.
        y = 1
        for z in range(0, variable['Num_Dims']):
            y = y * variable['Dim_Sizes'][z]
        y = y * CDF._datatype_size(variable['Data_Type'],
                                   variable['Num_Elements'])
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            starting = sblock[0] * y
            ending = (sblock[1] + 1) * y
            asparse.append(data[starting:ending])
            sparse_data.append(asparse)
        return sparse_data
    elif isinstance(data, list):
        for sblock in sparse_blocks:
            # each block in this list: [starting_rec#, ending_rec#, data]
            asparse = []
            asparse.append(sblock[0])
            asparse.append(sblock[1])
            num_records = sblock[1] - sblock[0] + 1
            datax = []
            ist = sblock[0]
            for z in range(0, num_records):
                datax.append(data[ist + z])
            asparse.append(datax)
            sparse_data.append(asparse)
        return sparse_data
    else:
        print('Cannot handle data... Skip')
        return None
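`CDF._make_blocks` is not shown in this excerpt, but the code above consumes its output as `[start, end]` pairs of consecutive physical record numbers. A plausible standalone sketch of that grouping step (an assumption about its behavior, not the library's actual implementation):

```python
def make_blocks(records):
    """Group record numbers into [start, end] runs of consecutive integers,
    e.g. [0, 1, 2, 5] -> [[0, 2], [5, 5]]."""
    blocks = []
    for rec in sorted(records):
        if blocks and rec == blocks[-1][1] + 1:
            blocks[-1][1] = rec          # extend the current run
        else:
            blocks.append([rec, rec])    # start a new run
    return blocks
```

Each `[start, end]` pair then becomes the first two entries of an `asparse` block, with the corresponding slice of `data` as the third.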
def quantize(arr, min_val, max_val, levels, dtype=np.int64):
    """Quantize an array of (-inf, inf) to [0, levels-1].

    Args:
        arr (ndarray): Input array.
        min_val (scalar): Minimum value to be clipped.
        max_val (scalar): Maximum value to be clipped.
        levels (int): Quantization levels.
        dtype (np.type): The type of the quantized array.

    Returns:
        ndarray: Quantized array.
    """
    if not (isinstance(levels, int) and levels > 1):
        raise ValueError(
            'levels must be a positive integer, but got {}'.format(levels))
    if min_val >= max_val:
        raise ValueError(
            'min_val ({}) must be smaller than max_val ({})'.format(
                min_val, max_val))

    arr = np.clip(arr, min_val, max_val) - min_val
    quantized_arr = np.minimum(
        np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1)

    return quantized_arr
Quantize an array of (-inf, inf) to [0, levels-1].

Args:
    arr (ndarray): Input array.
    min_val (scalar): Minimum value to be clipped.
    max_val (scalar): Maximum value to be clipped.
    levels (int): Quantization levels.
    dtype (np.type): The type of the quantized array.

Returns:
    ndarray: Quantized array.
Below is the instruction that describes the task:
### Input:
Quantize an array of (-inf, inf) to [0, levels-1].

Args:
    arr (ndarray): Input array.
    min_val (scalar): Minimum value to be clipped.
    max_val (scalar): Maximum value to be clipped.
    levels (int): Quantization levels.
    dtype (np.type): The type of the quantized array.

Returns:
    ndarray: Quantized array.
### Response:
def quantize(arr, min_val, max_val, levels, dtype=np.int64):
    """Quantize an array of (-inf, inf) to [0, levels-1].

    Args:
        arr (ndarray): Input array.
        min_val (scalar): Minimum value to be clipped.
        max_val (scalar): Maximum value to be clipped.
        levels (int): Quantization levels.
        dtype (np.type): The type of the quantized array.

    Returns:
        ndarray: Quantized array.
    """
    if not (isinstance(levels, int) and levels > 1):
        raise ValueError(
            'levels must be a positive integer, but got {}'.format(levels))
    if min_val >= max_val:
        raise ValueError(
            'min_val ({}) must be smaller than max_val ({})'.format(
                min_val, max_val))

    arr = np.clip(arr, min_val, max_val) - min_val
    quantized_arr = np.minimum(
        np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1)

    return quantized_arr
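The quantization above clips to `[min_val, max_val]`, rescales to `[0, levels)`, floors, and caps at `levels - 1` so that `max_val` itself lands in the top bin. The same logic for a single value, restated in pure Python for hand-checking (`quantize_scalar` is an illustrative name, not part of any library):

```python
import math

def quantize_scalar(x, min_val, max_val, levels):
    """Scalar, pure-Python restatement of the array quantize() above."""
    if not (isinstance(levels, int) and levels > 1):
        raise ValueError('levels must be a positive integer, but got {}'.format(levels))
    if min_val >= max_val:
        raise ValueError('min_val ({}) must be smaller than max_val ({})'.format(
            min_val, max_val))
    # clip, then shift so the range starts at 0
    x = min(max(x, min_val), max_val) - min_val
    # scale to [0, levels], floor, and cap the top edge at levels - 1
    return min(int(math.floor(levels * x / (max_val - min_val))), levels - 1)
```

For example, with `min_val=0.0`, `max_val=1.0`, `levels=4`, the value `1.0` maps to bin 3 rather than the out-of-range bin 4.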
def passgen(length=12, punctuation=False, digits=True, letters=True, case="both", **kwargs): """Generate random password. Args: length (int): The length of the password. Must be greater than zero. Defaults to 12. punctuation (bool): Whether to use punctuation or not. Defaults to False. limit_punctuation (str): Limits the allowed puncturation to defined characters. digits (bool): Whether to use digits or not. Defaults to True. One of *digits* and *letters* must be True. letters (bool): Whether to use letters or not. Defaults to True. One of *digits* and *letters* must be True. case (str): Letter case to use. Accepts 'upper' for upper case, 'lower' for lower case, and 'both' for both. Defaults to 'both'. Returns: str. The generated password. Raises: ValueError Below are some basic examples. >>> passgen() z7GlutdEEbnk >>> passgen(case='upper') Q81J9DOAMBRN >>> passgen(length=6) EzJMRX """ p_min = punctuation p_max = 0 if punctuation is False else length d_min = digits d_max = 0 if digits is False else length a_min = letters a_max = 0 if letters is False else length if d_min + p_min + a_min > length: raise ValueError("Minimum punctuation and digits number cannot be greater than length") if not digits and not letters: raise ValueError("digits and letters cannot be False at the same time") if length < 1: raise ValueError("length must be greater than zero") if letters: if case == "both": alpha = string.ascii_uppercase + string.ascii_lowercase elif case == "upper": alpha = string.ascii_uppercase elif case == "lower": alpha = string.ascii_lowercase else: raise ValueError("case can only be 'both', 'upper' or 'lower'") else: alpha = string.ascii_uppercase + string.ascii_lowercase if punctuation: limit_punctuation = kwargs.get('limit_punctuation', '') if limit_punctuation == '': punctuation_set = string.punctuation else: # In case limit_punctuation contains non-punctuation characters punctuation_set = ''.join([p for p in limit_punctuation if p in string.punctuation]) else: 
        punctuation_set = string.punctuation

    srandom = random.SystemRandom()
    p_generator = Generator(punctuation_set, srandom, p_min, p_max)
    d_generator = Generator(string.digits, srandom, d_min, d_max)
    a_generator = Generator(alpha, srandom, a_min, a_max)
    main_generator = SuperGenerator(srandom, length, length)
    main_generator.add(p_generator)
    main_generator.add(a_generator)
    main_generator.add(d_generator)

    chars = []
    for i in main_generator:
        chars.append(i)
    # shuffle() takes no second positional argument here; the previous
    # srandom.shuffle(chars, srandom) raised a TypeError that a bare except
    # silently swallowed, falling back to the default (non-crypto) PRNG.
    srandom.shuffle(chars)
    return "".join(chars)
Generate random password.

Args:
    length (int): The length of the password. Must be greater than zero. Defaults to 12.
    punctuation (bool): Whether to use punctuation or not. Defaults to False.
    limit_punctuation (str): Limits the allowed punctuation to defined characters.
    digits (bool): Whether to use digits or not. Defaults to True. One of *digits* and *letters* must be True.
    letters (bool): Whether to use letters or not. Defaults to True. One of *digits* and *letters* must be True.
    case (str): Letter case to use. Accepts 'upper' for upper case, 'lower' for lower case, and 'both' for both. Defaults to 'both'.

Returns:
    str. The generated password.

Raises:
    ValueError

Below are some basic examples.

>>> passgen()
z7GlutdEEbnk
>>> passgen(case='upper')
Q81J9DOAMBRN
>>> passgen(length=6)
EzJMRX
Below is the the instruction that describes the task: ### Input: Generate random password. Args: length (int): The length of the password. Must be greater than zero. Defaults to 12. punctuation (bool): Whether to use punctuation or not. Defaults to False. limit_punctuation (str): Limits the allowed puncturation to defined characters. digits (bool): Whether to use digits or not. Defaults to True. One of *digits* and *letters* must be True. letters (bool): Whether to use letters or not. Defaults to True. One of *digits* and *letters* must be True. case (str): Letter case to use. Accepts 'upper' for upper case, 'lower' for lower case, and 'both' for both. Defaults to 'both'. Returns: str. The generated password. Raises: ValueError Below are some basic examples. >>> passgen() z7GlutdEEbnk >>> passgen(case='upper') Q81J9DOAMBRN >>> passgen(length=6) EzJMRX ### Response: def passgen(length=12, punctuation=False, digits=True, letters=True, case="both", **kwargs): """Generate random password. Args: length (int): The length of the password. Must be greater than zero. Defaults to 12. punctuation (bool): Whether to use punctuation or not. Defaults to False. limit_punctuation (str): Limits the allowed puncturation to defined characters. digits (bool): Whether to use digits or not. Defaults to True. One of *digits* and *letters* must be True. letters (bool): Whether to use letters or not. Defaults to True. One of *digits* and *letters* must be True. case (str): Letter case to use. Accepts 'upper' for upper case, 'lower' for lower case, and 'both' for both. Defaults to 'both'. Returns: str. The generated password. Raises: ValueError Below are some basic examples. 
>>> passgen() z7GlutdEEbnk >>> passgen(case='upper') Q81J9DOAMBRN >>> passgen(length=6) EzJMRX """ p_min = punctuation p_max = 0 if punctuation is False else length d_min = digits d_max = 0 if digits is False else length a_min = letters a_max = 0 if letters is False else length if d_min + p_min + a_min > length: raise ValueError("Minimum punctuation and digits number cannot be greater than length") if not digits and not letters: raise ValueError("digits and letters cannot be False at the same time") if length < 1: raise ValueError("length must be greater than zero") if letters: if case == "both": alpha = string.ascii_uppercase + string.ascii_lowercase elif case == "upper": alpha = string.ascii_uppercase elif case == "lower": alpha = string.ascii_lowercase else: raise ValueError("case can only be 'both', 'upper' or 'lower'") else: alpha = string.ascii_uppercase + string.ascii_lowercase if punctuation: limit_punctuation = kwargs.get('limit_punctuation', '') if limit_punctuation == '': punctuation_set = string.punctuation else: # In case limit_punctuation contains non-punctuation characters punctuation_set = ''.join([p for p in limit_punctuation if p in string.punctuation]) else: punctuation_set = string.punctuation srandom = random.SystemRandom() p_generator = Generator(punctuation_set, srandom, p_min, p_max) d_generator = Generator(string.digits, srandom, d_min, d_max) a_generator = Generator(alpha, srandom, a_min, a_max) main_generator = SuperGenerator(srandom, length, length) main_generator.add(p_generator) main_generator.add(a_generator) main_generator.add(d_generator) chars = [] for i in main_generator: chars.append(i) try: srandom.shuffle(chars, srandom) except: random.shuffle(chars) return "".join(chars)
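`passgen` delegates to `Generator`/`SuperGenerator` classes not shown in this excerpt. A simplified, self-contained sketch of the same idea (guarantee at least one character from each enabled class, fill the rest from the combined pool, then shuffle with a cryptographic source) follows; `simple_passgen` is an illustrative name, not the library's API, and it omits the punctuation and case options.

```python
import random
import string

def simple_passgen(length=12, digits=True, letters=True):
    """Simplified sketch of passgen's approach."""
    if length < 1:
        raise ValueError("length must be greater than zero")
    if not (digits or letters):
        raise ValueError("digits and letters cannot be False at the same time")
    rng = random.SystemRandom()   # cryptographically strong source, as in passgen
    pool = ''
    required = []                 # one guaranteed character per enabled class
    if letters:
        pool += string.ascii_letters
        required.append(rng.choice(string.ascii_letters))
    if digits:
        pool += string.digits
        required.append(rng.choice(string.digits))
    required = required[:length]  # don't overshoot very short passwords
    chars = required + [rng.choice(pool) for _ in range(length - len(required))]
    rng.shuffle(chars)            # hide the positions of the required characters
    return ''.join(chars)
```

The shuffle at the end matters: without it, the guaranteed characters would always sit at predictable positions.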
def _newton_rhaphson( self, df, events, start, stop, weights, show_progress=False, step_size=None, precision=10e-6, max_steps=50, initial_point=None, ): # pylint: disable=too-many-arguments,too-many-locals,too-many-branches,too-many-statements """ Newton Rhaphson algorithm for fitting CPH model. Parameters ---------- df: DataFrame stop_times_events: DataFrame meta information about the subjects history show_progress: boolean, optional (default: True) to show verbose output of convergence step_size: float > 0 to determine a starting step size in NR algorithm. precision: float the convergence halts if the norm of delta between successive positions is less than epsilon. Returns -------- beta: (1,d) numpy array. """ assert precision <= 1.0, "precision must be less than or equal to 1." _, d = df.shape # make sure betas are correct size. if initial_point is not None: beta = initial_point else: beta = np.zeros((d,)) i = 0 converging = True ll, previous_ll = 0, 0 start_time = time.time() step_sizer = StepSizer(step_size) step_size = step_sizer.next() while converging: i += 1 if self.strata is None: h, g, ll = self._get_gradients( df.values, events.values, start.values, stop.values, weights.values, beta ) else: g = np.zeros_like(beta) h = np.zeros((d, d)) ll = 0 for _h, _g, _ll in self._partition_by_strata_and_apply( df, events, start, stop, weights, self._get_gradients, beta ): g += _g h += _h ll += _ll if i == 1 and np.all(beta == 0): # this is a neat optimization, the null partial likelihood # is the same as the full partial but evaluated at zero. # if the user supplied a non-trivial initial point, we need to delay this. 
self._log_likelihood_null = ll if self.penalizer > 0: # add the gradient and hessian of the l2 term g -= self.penalizer * beta h.flat[:: d + 1] -= self.penalizer try: # reusing a piece to make g * inv(h) * g.T faster later inv_h_dot_g_T = spsolve(-h, g, sym_pos=True) except ValueError as e: if "infs or NaNs" in str(e): raise ConvergenceError( """hessian or gradient contains nan or inf value(s). Convergence halted. Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """, e, ) else: # something else? raise e except LinAlgError as e: raise ConvergenceError( """Convergence halted due to matrix inversion problems. Suspicion is high colinearity. Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """, e, ) delta = step_size * inv_h_dot_g_T if np.any(np.isnan(delta)): raise ConvergenceError( """delta contains nan value(s). Convergence halted. Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """ ) # Save these as pending result hessian, gradient = h, g norm_delta = norm(delta) newton_decrement = g.dot(inv_h_dot_g_T) / 2 if show_progress: print( "Iteration %d: norm_delta = %.5f, step_size = %.5f, ll = %.5f, newton_decrement = %.5f, seconds_since_start = %.1f" % (i, norm_delta, step_size, ll, newton_decrement, time.time() - start_time) ) # convergence criteria if norm_delta < precision: converging, completed = False, True elif previous_ll > 0 and abs(ll - previous_ll) / (-previous_ll) < 1e-09: # this is what R uses by default converging, completed = False, True elif newton_decrement < 10e-8: converging, completed = False, True elif i >= max_steps: # 50 iterations steps with N-R is a lot. 
# Expected convergence is less than 10 steps converging, completed = False, False elif step_size <= 0.0001: converging, completed = False, False elif abs(ll) < 0.0001 and norm_delta > 1.0: warnings.warn( "The log-likelihood is getting suspiciously close to 0 and the delta is still large. There may be complete separation in the dataset. This may result in incorrect inference of coefficients. \ See https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression", ConvergenceWarning, ) converging, completed = False, False step_size = step_sizer.update(norm_delta).next() beta += delta self._hessian_ = hessian self._score_ = gradient self._log_likelihood = ll if show_progress and completed: print("Convergence completed after %d iterations." % (i)) elif show_progress and not completed: print("Convergence failed. See any warning messages.") # report to the user problems that we detect. if completed and norm_delta > 0.1: warnings.warn( "Newton-Rhapson convergence completed but norm(delta) is still high, %.3f. This may imply non-unique solutions to the maximum likelihood. Perhaps there is colinearity or complete separation in the dataset?" % norm_delta, ConvergenceWarning, ) elif not completed: warnings.warn("Newton-Rhapson failed to converge sufficiently in %d steps." % max_steps, ConvergenceWarning) return beta
Newton Rhaphson algorithm for fitting CPH model. Parameters ---------- df: DataFrame stop_times_events: DataFrame meta information about the subjects history show_progress: boolean, optional (default: True) to show verbose output of convergence step_size: float > 0 to determine a starting step size in NR algorithm. precision: float the convergence halts if the norm of delta between successive positions is less than epsilon. Returns -------- beta: (1,d) numpy array.
Below is the the instruction that describes the task: ### Input: Newton Rhaphson algorithm for fitting CPH model. Parameters ---------- df: DataFrame stop_times_events: DataFrame meta information about the subjects history show_progress: boolean, optional (default: True) to show verbose output of convergence step_size: float > 0 to determine a starting step size in NR algorithm. precision: float the convergence halts if the norm of delta between successive positions is less than epsilon. Returns -------- beta: (1,d) numpy array. ### Response: def _newton_rhaphson( self, df, events, start, stop, weights, show_progress=False, step_size=None, precision=10e-6, max_steps=50, initial_point=None, ): # pylint: disable=too-many-arguments,too-many-locals,too-many-branches,too-many-statements """ Newton Rhaphson algorithm for fitting CPH model. Parameters ---------- df: DataFrame stop_times_events: DataFrame meta information about the subjects history show_progress: boolean, optional (default: True) to show verbose output of convergence step_size: float > 0 to determine a starting step size in NR algorithm. precision: float the convergence halts if the norm of delta between successive positions is less than epsilon. Returns -------- beta: (1,d) numpy array. """ assert precision <= 1.0, "precision must be less than or equal to 1." _, d = df.shape # make sure betas are correct size. 
if initial_point is not None: beta = initial_point else: beta = np.zeros((d,)) i = 0 converging = True ll, previous_ll = 0, 0 start_time = time.time() step_sizer = StepSizer(step_size) step_size = step_sizer.next() while converging: i += 1 if self.strata is None: h, g, ll = self._get_gradients( df.values, events.values, start.values, stop.values, weights.values, beta ) else: g = np.zeros_like(beta) h = np.zeros((d, d)) ll = 0 for _h, _g, _ll in self._partition_by_strata_and_apply( df, events, start, stop, weights, self._get_gradients, beta ): g += _g h += _h ll += _ll if i == 1 and np.all(beta == 0): # this is a neat optimization, the null partial likelihood # is the same as the full partial but evaluated at zero. # if the user supplied a non-trivial initial point, we need to delay this. self._log_likelihood_null = ll if self.penalizer > 0: # add the gradient and hessian of the l2 term g -= self.penalizer * beta h.flat[:: d + 1] -= self.penalizer try: # reusing a piece to make g * inv(h) * g.T faster later inv_h_dot_g_T = spsolve(-h, g, sym_pos=True) except ValueError as e: if "infs or NaNs" in str(e): raise ConvergenceError( """hessian or gradient contains nan or inf value(s). Convergence halted. Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """, e, ) else: # something else? raise e except LinAlgError as e: raise ConvergenceError( """Convergence halted due to matrix inversion problems. Suspicion is high colinearity. Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """, e, ) delta = step_size * inv_h_dot_g_T if np.any(np.isnan(delta)): raise ConvergenceError( """delta contains nan value(s). Convergence halted. 
Please see the following tips in the lifelines documentation: https://lifelines.readthedocs.io/en/latest/Examples.html#problems-with-convergence-in-the-cox-proportional-hazard-model """ ) # Save these as pending result hessian, gradient = h, g norm_delta = norm(delta) newton_decrement = g.dot(inv_h_dot_g_T) / 2 if show_progress: print( "Iteration %d: norm_delta = %.5f, step_size = %.5f, ll = %.5f, newton_decrement = %.5f, seconds_since_start = %.1f" % (i, norm_delta, step_size, ll, newton_decrement, time.time() - start_time) ) # convergence criteria if norm_delta < precision: converging, completed = False, True elif previous_ll > 0 and abs(ll - previous_ll) / (-previous_ll) < 1e-09: # this is what R uses by default converging, completed = False, True elif newton_decrement < 10e-8: converging, completed = False, True elif i >= max_steps: # 50 iterations steps with N-R is a lot. # Expected convergence is less than 10 steps converging, completed = False, False elif step_size <= 0.0001: converging, completed = False, False elif abs(ll) < 0.0001 and norm_delta > 1.0: warnings.warn( "The log-likelihood is getting suspiciously close to 0 and the delta is still large. There may be complete separation in the dataset. This may result in incorrect inference of coefficients. \ See https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression", ConvergenceWarning, ) converging, completed = False, False step_size = step_sizer.update(norm_delta).next() beta += delta self._hessian_ = hessian self._score_ = gradient self._log_likelihood = ll if show_progress and completed: print("Convergence completed after %d iterations." % (i)) elif show_progress and not completed: print("Convergence failed. See any warning messages.") # report to the user problems that we detect. if completed and norm_delta > 0.1: warnings.warn( "Newton-Rhapson convergence completed but norm(delta) is still high, %.3f. 
This may imply non-unique solutions to the maximum likelihood. Perhaps there is colinearity or complete separation in the dataset?" % norm_delta, ConvergenceWarning, ) elif not completed: warnings.warn("Newton-Rhapson failed to converge sufficiently in %d steps." % max_steps, ConvergenceWarning) return beta
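The fitting routine above is multivariate Newton-Raphson on the partial log-likelihood: each iteration solves `H * delta = -g` and steps by `step_size * delta`, stopping when `norm(delta)` or the Newton decrement is small. The one-dimensional analogue makes the update easy to see; this is an illustrative toy, not the CPH fitting code (fixed `step_size = 1`, no step-size adaptation or strata handling).

```python
def newton_raphson_1d(f, fprime, x0, precision=1e-10, max_steps=50):
    """1-D Newton-Raphson: x <- x - f(x)/f'(x), the scalar analogue of
    delta = inv(H) g above, with the same norm(delta) < precision stop."""
    x = x0
    for _ in range(max_steps):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < precision:
            return x
    raise RuntimeError("failed to converge in %d steps" % max_steps)
```

Solving `x**2 - 2 = 0` from `x0 = 1.0` converges to `sqrt(2)` in a handful of iterations, illustrating why 50 steps is a generous cap.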
def _pval_from_bootci(boot, estimate): """Compute p-value from bootstrap distribution. Similar to the pval function in the R package mediation. Note that this is less accurate than a permutation test because the bootstrap distribution is not conditioned on a true null hypothesis. """ if estimate == 0: out = 1 else: out = 2 * min(sum(boot > 0), sum(boot < 0)) / len(boot) return min(out, 1)
Compute p-value from bootstrap distribution. Similar to the pval function in the R package mediation. Note that this is less accurate than a permutation test because the bootstrap distribution is not conditioned on a true null hypothesis.
Below is the instruction that describes the task:
### Input:
Compute p-value from bootstrap distribution.

Similar to the pval function in the R package mediation.
Note that this is less accurate than a permutation test because the
bootstrap distribution is not conditioned on a true null hypothesis.
### Response:
def _pval_from_bootci(boot, estimate):
    """Compute p-value from bootstrap distribution.

    Similar to the pval function in the R package mediation.
    Note that this is less accurate than a permutation test because the
    bootstrap distribution is not conditioned on a true null hypothesis.
    """
    if estimate == 0:
        out = 1
    else:
        out = 2 * min(sum(boot > 0), sum(boot < 0)) / len(boot)
    return min(out, 1)
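The formula above is a two-sided bootstrap p-value: twice the smaller tail fraction of the bootstrap distribution, capped at 1. A pure-Python restatement that works on plain lists (the original relies on NumPy's elementwise comparisons; `pval_from_boot` is an illustrative name):

```python
def pval_from_boot(boot, estimate):
    """Two-sided bootstrap p-value: 2 * min(tail fractions), capped at 1."""
    if estimate == 0:
        return 1
    n_pos = sum(b > 0 for b in boot)   # bootstrap replicates above zero
    n_neg = sum(b < 0 for b in boot)   # bootstrap replicates below zero
    return min(2 * min(n_pos, n_neg) / len(boot), 1)
```

For a bootstrap sample `[1, 2, 3, -1]` the smaller tail holds 1 of 4 replicates, so the p-value is `2 * 1/4 = 0.5`.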
def translate(self, trans_inputs: List[TranslatorInput], fill_up_batches: bool = True) -> List[TranslatorOutput]:
    """
    Batch-translates a list of TranslatorInputs, returns a list of TranslatorOutputs.

    Empty or bad inputs are skipped.
    Splits inputs longer than Translator.max_input_length into segments of size max_input_length,
    and then groups segments into batches of at most Translator.max_batch_size.
    Too-long segments that were split are reassembled into a single output after translation.
    If fill_up_batches is set to True, underfilled batches are padded to Translator.max_batch_size,
    otherwise dynamic batch sizing is used, which comes at increased memory usage.

    :param trans_inputs: List of TranslatorInputs as returned by make_input().
    :param fill_up_batches: If True, underfilled batches are padded to Translator.max_batch_size.
    :return: List of translation results.
    """
    num_inputs = len(trans_inputs)
    translated_chunks = []  # type: List[IndexedTranslation]

    # split into chunks
    input_chunks = []  # type: List[IndexedTranslatorInput]
    for trans_input_idx, trans_input in enumerate(trans_inputs):
        # bad input
        if isinstance(trans_input, BadTranslatorInput):
            translated_chunks.append(IndexedTranslation(input_idx=trans_input_idx, chunk_idx=0,
                                                        translation=empty_translation(add_nbest=(self.nbest_size > 1))))
        # empty input
        elif len(trans_input.tokens) == 0:
            translated_chunks.append(IndexedTranslation(input_idx=trans_input_idx, chunk_idx=0,
                                                        translation=empty_translation(add_nbest=(self.nbest_size > 1))))
        else:
            # TODO(tdomhan): Remove branch without EOS with next major version bump,
            # as future models will always be trained with source side EOS symbols
            if self.source_with_eos:
                max_input_length_without_eos = self.max_input_length
                # oversized input
                if len(trans_input.tokens) > max_input_length_without_eos:
                    logger.debug(
                        "Input %s has length (%d) that exceeds max input length (%d). "
                        "Splitting into chunks of size %d.",
                        trans_input.sentence_id, len(trans_input.tokens),
                        self.buckets_source[-1], max_input_length_without_eos)
                    chunks = [trans_input_chunk.with_eos()
                              for trans_input_chunk in trans_input.chunks(max_input_length_without_eos)]
                    input_chunks.extend([IndexedTranslatorInput(trans_input_idx, chunk_idx, chunk_input)
                                         for chunk_idx, chunk_input in enumerate(chunks)])
                # regular input
                else:
                    input_chunks.append(IndexedTranslatorInput(trans_input_idx, chunk_idx=0,
                                                               translator_input=trans_input.with_eos()))
            else:
                if len(trans_input.tokens) > self.max_input_length:
                    # oversized input
                    logger.debug(
                        "Input %s has length (%d) that exceeds max input length (%d). "
                        "Splitting into chunks of size %d.",
                        trans_input.sentence_id, len(trans_input.tokens),
                        self.buckets_source[-1], self.max_input_length)
                    chunks = [trans_input_chunk
                              for trans_input_chunk in trans_input.chunks(self.max_input_length)]
                    input_chunks.extend([IndexedTranslatorInput(trans_input_idx, chunk_idx, chunk_input)
                                         for chunk_idx, chunk_input in enumerate(chunks)])
                else:
                    # regular input
                    input_chunks.append(IndexedTranslatorInput(trans_input_idx, chunk_idx=0,
                                                               translator_input=trans_input))

            if trans_input.constraints is not None:
                logger.info("Input %s has %d %s: %s", trans_input.sentence_id,
                            len(trans_input.constraints),
                            "constraint" if len(trans_input.constraints) == 1 else "constraints",
                            ", ".join(" ".join(x) for x in trans_input.constraints))

    num_bad_empty = len(translated_chunks)

    # Sort longest to shortest (to rather fill batches of shorter than longer sequences)
    input_chunks = sorted(input_chunks, key=lambda chunk: len(chunk.translator_input.tokens), reverse=True)

    # translate in batch-sized blocks over input chunks
    batch_size = self.max_batch_size if fill_up_batches else min(len(input_chunks), self.max_batch_size)

    num_batches = 0
    for batch_id, batch in enumerate(utils.grouper(input_chunks, batch_size)):
        logger.debug("Translating batch %d", batch_id)

        rest = batch_size - len(batch)
        if fill_up_batches and rest > 0:
            logger.debug("Padding batch of size %d to full batch size (%d)", len(batch), batch_size)
            batch = batch + [batch[0]] * rest

        translator_inputs = [indexed_translator_input.translator_input for indexed_translator_input in batch]
        batch_translations = self._translate_nd(*self._get_inference_input(translator_inputs))

        # truncate to remove filler translations
        if fill_up_batches and rest > 0:
            batch_translations = batch_translations[:-rest]

        for chunk, translation in zip(batch, batch_translations):
            translated_chunks.append(IndexedTranslation(chunk.input_idx, chunk.chunk_idx, translation))
        num_batches += 1

    # Sort by input idx and then chunk id
    translated_chunks = sorted(translated_chunks)
    num_chunks = len(translated_chunks)

    # Concatenate results
    results = []  # type: List[TranslatorOutput]
    chunks_by_input_idx = itertools.groupby(translated_chunks, key=lambda translation: translation.input_idx)
    for trans_input, (input_idx, translations_for_input_idx) in zip(trans_inputs, chunks_by_input_idx):
        translations_for_input_idx = list(translations_for_input_idx)  # type: ignore
        if len(translations_for_input_idx) == 1:  # type: ignore
            translation = translations_for_input_idx[0].translation  # type: ignore
        else:
            translations_to_concat = [translated_chunk.translation
                                      for translated_chunk in translations_for_input_idx]
            translation = self._concat_translations(translations_to_concat)

        results.append(self._make_result(trans_input, translation))

    num_outputs = len(results)

    logger.debug("Translated %d inputs (%d chunks) in %d batches to %d outputs. %d empty/bad inputs.",
                 num_inputs, num_chunks, num_batches, num_outputs, num_bad_empty)
    return results
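The split, batch, and reassemble flow of `translate` can be sketched standalone. Everything below is illustrative: the helpers and the upper-casing "translation" are stand-ins, not the real `Translator` internals or `utils.grouper`.

```python
import itertools
from typing import List

def chunk(tokens: List[str], max_len: int) -> List[List[str]]:
    # Split an over-long token sequence into segments of at most max_len.
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

def grouper(items, size):
    # Yield successive batches of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def translate_all(inputs: List[List[str]], max_input_length: int,
                  max_batch_size: int, fill_up_batches: bool = True) -> List[List[str]]:
    # Index every chunk by (input_idx, chunk_idx) so outputs can be reassembled.
    indexed = []
    for input_idx, tokens in enumerate(inputs):
        for chunk_idx, seg in enumerate(chunk(tokens, max_input_length)):
            indexed.append((input_idx, chunk_idx, seg))
    # Sort longest-first so batches hold similarly sized chunks.
    indexed.sort(key=lambda t: len(t[2]), reverse=True)
    translated = []
    for batch in grouper(indexed, max_batch_size):
        rest = max_batch_size - len(batch)
        if fill_up_batches and rest > 0:
            batch = batch + [batch[0]] * rest        # pad with a repeated chunk
        # Hypothetical "translation": upper-case each token.
        outputs = [[tok.upper() for tok in seg] for _, _, seg in batch]
        if fill_up_batches and rest > 0:
            outputs = outputs[:-rest]                # drop filler translations
            batch = batch[:-rest]
        translated.extend((i, c, out) for (i, c, _), out in zip(batch, outputs))
    # Reassemble: sort by (input_idx, chunk_idx), concatenate chunks per input.
    translated.sort(key=lambda t: (t[0], t[1]))
    results = []
    for _, group in itertools.groupby(translated, key=lambda t: t[0]):
        results.append([tok for _, _, seg in group for tok in seg])
    return results
```

With `max_input_length=2`, a five-token input is split into three chunks, batched longest-first, and stitched back together in the original order.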
def Chen_Yang(self, T, full=True, quick=True):
    r'''Method to calculate `a_alpha` and its first and second
    derivatives according to Chen and Yang (2017) [1]_. Returns `a_alpha`,
    `da_alpha_dT`, and `d2a_alpha_dT2`. See `GCEOS.a_alpha_and_derivatives`
    for more documentation. Seven coefficients needed.

    .. math::
        \alpha = \exp\left[c_4 \ln^2\left(\left(1 - \sqrt{\frac{T}{T_c}}
        \right)\left(c_5 + c_6\omega + c_7\omega^{2}\right) + 1\right)
        + \left(1 - \frac{T}{T_c}\right)\left(c_1 + c_2\omega
        + c_3\omega^{2}\right)\right]

    References
    ----------
    .. [1] Chen, Zehua, and Daoyong Yang. "Optimization of the Reduced
       Temperature Associated with Peng–Robinson Equation of State and
       Soave-Redlich-Kwong Equation of State To Improve Vapor Pressure
       Prediction for Heavy Hydrocarbon Compounds." Journal of Chemical &
       Engineering Data, August 31, 2017. doi:10.1021/acs.jced.7b00496.
    '''
    c1, c2, c3, c4, c5, c6, c7 = self.alpha_function_coeffs
    # omega is needed below; unpack it alongside the other attributes.
    T, Tc, a, omega = self.T, self.Tc, self.a, self.omega
    a_alpha = a*exp(c4*log((-sqrt(T/Tc) + 1)*(c5 + c6*omega + c7*omega**2) + 1)**2
                    + (-T/Tc + 1)*(c1 + c2*omega + c3*omega**2))
    if not full:
        return a_alpha
    else:
        da_alpha_dT = a*(-(c1 + c2*omega + c3*omega**2)/Tc
                         - c4*sqrt(T/Tc)*(c5 + c6*omega + c7*omega**2)
                         *log((-sqrt(T/Tc) + 1)*(c5 + c6*omega + c7*omega**2) + 1)
                         /(T*((-sqrt(T/Tc) + 1)*(c5 + c6*omega + c7*omega**2) + 1))
                         )*exp(c4*log((-sqrt(T/Tc) + 1)*(c5 + c6*omega + c7*omega**2) + 1)**2
                               + (-T/Tc + 1)*(c1 + c2*omega + c3*omega**2))
        d2a_alpha_dT2 = a*(((c1 + c2*omega + c3*omega**2)/Tc
                            - c4*sqrt(T/Tc)*(c5 + c6*omega + c7*omega**2)
                            *log(-(sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) + 1)
                            /(T*((sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) - 1)))**2
                           - c4*(c5 + c6*omega + c7*omega**2)*(
                               (c5 + c6*omega + c7*omega**2)
                               *log(-(sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) + 1)
                               /(Tc*((sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) - 1))
                               - (c5 + c6*omega + c7*omega**2)
                               /(Tc*((sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) - 1))
                               + sqrt(T/Tc)*log(-(sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) + 1)/T
                               )/(2*T*((sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) - 1))
                           )*exp(c4*log(-(sqrt(T/Tc) - 1)*(c5 + c6*omega + c7*omega**2) + 1)**2
                                 - (T/Tc - 1)*(c1 + c2*omega + c3*omega**2))
        return a_alpha, da_alpha_dT, d2a_alpha_dT2
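As a quick numerical sanity check, the α expression used above can be compared against a central finite difference of its own first derivative. The `c1`-`c7` values below are made up for illustration (not fitted Chen-Yang parameters); `Tc` and `omega` are roughly n-hexane's.

```python
from math import exp, log, sqrt

# Illustrative (not fitted) coefficients; n-hexane-like Tc and omega.
c1, c2, c3, c4, c5, c6, c7 = 0.5, 0.1, 0.05, 0.2, 0.3, 0.1, 0.02
Tc, omega = 507.6, 0.2975

def alpha(T):
    # alpha(T) exactly as in Chen_Yang above, with the constant `a` factored out.
    return exp(c4*log((1 - sqrt(T/Tc))*(c5 + c6*omega + c7*omega**2) + 1)**2
               + (1 - T/Tc)*(c1 + c2*omega + c3*omega**2))

def dalpha_dT(T):
    # Closed-form first derivative, matching da_alpha_dT/a in Chen_Yang.
    p = c5 + c6*omega + c7*omega**2
    q = c1 + c2*omega + c3*omega**2
    return (-q/Tc - c4*sqrt(T/Tc)*p*log((1 - sqrt(T/Tc))*p + 1)
            / (T*((1 - sqrt(T/Tc))*p + 1)))*alpha(T)

T, h = 400.0, 1e-5
numeric = (alpha(T + h) - alpha(T - h))/(2*h)
assert abs(numeric - dalpha_dT(T)) < 1e-7
```

The check also confirms the usual property that α(Tc) = 1, since both terms in the exponent vanish at T = Tc.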
def to_dict(self, remove_nones=False):
    """
    Creates a dictionary representation of the enclave permissions.

    :param remove_nones: Whether ``None`` values should be filtered out of the
        dictionary. Defaults to ``False``.
    :return: A dictionary representation of the EnclavePermissions object.
    """
    d = super().to_dict(remove_nones=remove_nones)
    d.update({
        'read': self.read,
        'create': self.create,
        'update': self.update
    })
    return d
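A minimal runnable sketch of how this `to_dict` composes with its base class. The `Enclave` base here is a stand-in with invented fields, not the SDK's actual class. One quirk worth noting: because the permission keys are added after the base call, `remove_nones` never filters them.

```python
class Enclave:
    # Minimal stand-in for the base class; the real one carries more fields.
    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

    def to_dict(self, remove_nones=False):
        d = {'id': self.id, 'name': self.name}
        if remove_nones:
            d = {k: v for k, v in d.items() if v is not None}
        return d

class EnclavePermissions(Enclave):
    def __init__(self, id=None, name=None, read=None, create=None, update=None):
        super().__init__(id=id, name=name)
        self.read = read
        self.create = create
        self.update = update

    def to_dict(self, remove_nones=False):
        # Same shape as the method above: extend the base dict with the flags.
        # Note: remove_nones only filters the base-class fields, because the
        # permission keys are merged in afterwards.
        d = super().to_dict(remove_nones=remove_nones)
        d.update({'read': self.read, 'create': self.create, 'update': self.update})
        return d

perms = EnclavePermissions(id='e1', read=True, create=False, update=False)
```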
def start(self):
    """Start local agent"""
    logger.info('Starting agent on localhost')
    args = self.python.split() + [
        os.path.join(self.workdir, self.AGENT_FILENAME),
        '--telegraf', self.path['TELEGRAF_LOCAL_PATH'],
        '--host', self.host]
    if self.kill_old:
        args.append(self.kill_old)
    self.session = self.popen(args)
    self.reader_thread = threading.Thread(target=self.read_buffer)
    self.reader_thread.setDaemon(True)
    return self.session
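The popen-plus-daemon-reader-thread pattern in `start` looks like this in isolation. This is a generic sketch, not the agent code: the child command is a stand-in, and unlike the excerpt above it also calls `reader_thread.start()` (which the excerpt presumably does elsewhere). `thread.daemon = True` is the modern spelling of `setDaemon(True)`.

```python
import subprocess
import sys
import threading

class LocalRunner:
    def __init__(self, args):
        self.args = args
        self.lines = []

    def popen(self):
        return subprocess.Popen(self.args, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)

    def read_buffer(self):
        # Drain the child's stdout line by line so its pipe never fills up.
        for line in self.session.stdout:
            self.lines.append(line.decode().rstrip())

    def start(self):
        self.session = self.popen()
        self.reader_thread = threading.Thread(target=self.read_buffer)
        self.reader_thread.daemon = True
        self.reader_thread.start()
        return self.session

runner = LocalRunner([sys.executable, '-c', 'print("agent up")'])
session = runner.start()
session.wait()
runner.reader_thread.join(timeout=5)
```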
def _get_esxdatacenter_proxy_details():
    '''
    Returns the running esxdatacenter's proxy details
    '''
    det = __salt__['esxdatacenter.get_details']()
    return det.get('vcenter'), det.get('username'), det.get('password'), \
        det.get('protocol'), det.get('port'), det.get('mechanism'), \
        det.get('principal'), det.get('domain'), det.get('datacenter')
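The nine `det.get(...)` calls amount to projecting a fixed key order out of a dict, with `None` for any missing key. A generic sketch of the same pattern, without Salt's `__salt__` machinery:

```python
def proxy_details(det):
    # Mirror the return shape above: a fixed-order tuple, None for absent keys.
    keys = ('vcenter', 'username', 'password', 'protocol', 'port',
            'mechanism', 'principal', 'domain', 'datacenter')
    return tuple(det.get(k) for k in keys)

details = proxy_details({'vcenter': 'vc01', 'username': 'admin', 'port': 443})
```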