Dataset columns:
  code — string, lengths 75 to 104k
  docstring — string, lengths 1 to 46.9k
  text — string, lengths 164 to 112k
def _Rforce(self, R, z, phi=0, t=0):
    """
    NAME:
       _Rforce
    PURPOSE:
       evaluate the radial force at (R,z,phi)
    INPUT:
       R - Cylindrical Galactocentric radius
       z - vertical height
       phi - azimuth
       t - time
    OUTPUT:
       radial force at (R,z,phi)
    HISTORY:
       2016-06-06 - Written - Aladdin
    """
    if not self.isNonAxi and phi is None:
        phi = 0.
    r, theta, phi = bovy_coords.cyl_to_spher(R, z, phi)
    # chain rule: partial derivatives of the spherical coordinates w.r.t. R
    dr_dR = nu.divide(R, r)
    dtheta_dR = nu.divide(z, r**2)
    dphi_dR = 0
    return self._computeforceArray(dr_dR, dtheta_dR, dphi_dR, R, z, phi)
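The chain-rule factors in `_Rforce` follow directly from the cylindrical-to-spherical conversion. A runnable sketch, using a plain-math stand-in for `bovy_coords.cyl_to_spher` (assuming the galpy convention of theta measured from the positive z-axis):

```python
import math

def cyl_to_spher(R, z, phi):
    """Stand-in for bovy_coords.cyl_to_spher: cylindrical (R, z, phi)
    to spherical (r, theta, phi), theta from the positive z-axis."""
    r = math.sqrt(R**2 + z**2)
    theta = math.acos(z / r)
    return r, theta, phi

# With r = sqrt(R^2 + z^2) and theta = acos(z/r):
#   dr/dR     = R / r
#   dtheta/dR = z / r^2
# which are exactly the factors _Rforce passes to _computeforceArray.
R, z, phi = 3.0, 4.0, 0.0
r, theta, _ = cyl_to_spher(R, z, phi)
dr_dR = R / r          # 3/5 = 0.6
dtheta_dR = z / r**2   # 4/25 = 0.16
```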
def get_week_URL(date, day=0):
    """
    Returns the week view URL for a given date.

    :param date: A date instance.
    :param day: Day number in a month.
    """
    if day < 1:
        day = 1
    date = datetime(year=date.year, month=date.month, day=day, tzinfo=utc)
    return reverse('calendar_week', kwargs={'year': date.isocalendar()[0],
                                            'week': date.isocalendar()[1]})
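Note that `get_week_URL` builds the URL from `isocalendar()[0]` rather than `date.year`: around New Year the ISO year can differ from the calendar year. A small demonstration:

```python
from datetime import datetime

# 2016-01-01 was a Friday and belongs to ISO week 53 of ISO year 2015,
# so using date.year here would produce a wrong 'calendar_week' URL.
d = datetime(2016, 1, 1)
iso_year, iso_week, iso_weekday = d.isocalendar()
```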
def create_photo(self, blogname, **kwargs):
    """
    Create a photo post or photoset on a blog

    :param blogname: a string, the url of the blog you want to post to.
    :param state: a string, The state of the post.
    :param tags: a list of tags that you want applied to the post
    :param tweet: a string, the customized tweet that you want
    :param date: a string, the GMT date and time of the post
    :param format: a string, sets the format type of the post. html or markdown
    :param slug: a string, a short text summary to the end of the post url
    :param caption: a string, the caption that you want applied to the photo
    :param link: a string, the 'click-through' url you want on the photo
    :param source: a string, the photo source url
    :param data: a string or a list of the path of photo(s)

    :returns: a dict created from the JSON response
    """
    kwargs.update({"type": "photo"})
    return self._send_post(blogname, kwargs)
def get_account_details(self, account):
    """
    This method can be used in a number of scenarios:

    1. When it is necessary to verify account information
    2. When there's a need to filter transactions by an account id
    3. When account details (e.g. name of account) are needed
    """
    _form = mechanize.HTMLForm(self.SEARCH_MEMBERS_URL, method="POST")
    _form.new_control('text', 'username', {'value': account})
    _form.new_control('text', '_', {'value': ''})
    try:
        r = self.post_url(self.SEARCH_MEMBERS_URL, form=_form)
    except AuthRequiredException:
        self._auth()
        r = self.post_url(self.SEARCH_MEMBERS_URL, form=_form)
    if r:
        # single quoted json parameters are not valid so convert
        # them into double quoted parameters
        _decoded = json.loads(r.replace("'", '"'))
        # we have a double array result so retrieve only what's
        # essential
        if _decoded[0]:
            return _decoded[0][0]
    raise InvalidAccountException
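The quote-normalisation step in `get_account_details` is worth isolating: the server returns single-quoted pseudo-JSON, which `json.loads` rejects, so the code swaps quote characters first. A minimal sketch with a hypothetical payload (note the naive `str.replace` only works when the payload values themselves contain no quote characters):

```python
import json

# Hypothetical server response: a doubly nested array, single-quoted.
raw = "[[{'id': 42, 'name': 'jdoe'}]]"

# Single-quoted JSON is invalid, so convert to double quotes before parsing,
# then unwrap the double array exactly as get_account_details does.
decoded = json.loads(raw.replace("'", '"'))
account = decoded[0][0]
```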
def constant_jump_targets(self):
    """
    A set of the static jump targets of the basic block.
    """
    exits = set()
    if self.exit_statements:
        for _, _, stmt_ in self.exit_statements:
            exits.add(stmt_.dst.value)
    default_target = self.default_exit_target
    if default_target is not None:
        exits.add(default_target)
    return exits
def bilinear_interpolation_weights(self, lon, lat):
    """
    Get the four neighbours for each (lon, lat) position and the weight
    associated with each one for bilinear interpolation.

    Parameters
    ----------
    lon, lat : :class:`~astropy.units.Quantity`
        The longitude and latitude values as
        :class:`~astropy.units.Quantity` instances with angle units.

    Returns
    -------
    indices : `~numpy.ndarray`
        2-D array with shape (4, N) giving the four indices to use for
        the interpolation
    weights : `~numpy.ndarray`
        2-D array with shape (4, N) giving the four weights to use for
        the interpolation
    """
    return bilinear_interpolation_weights(lon, lat, self.nside,
                                          order=self.order)
def sibling(self, offs=1):
    '''
    Return sibling node by relative offset from self.
    '''
    indx = self.pindex + offs
    if indx < 0:
        return None
    if indx >= len(self.parent.kids):
        return None
    return self.parent.kids[indx]
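`sibling` assumes each node carries `parent`, `pindex` (its index within `parent.kids`), and that the parent holds a `kids` list. A self-contained sketch with a hypothetical minimal node type showing how the bounds checks behave at both ends of the list:

```python
class Node:
    """Hypothetical minimal stand-in for the tree node sibling() assumes."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.pindex = None
        self.kids = []

    def add(self, kid):
        # record parent linkage and the kid's position in this node's kids
        kid.parent = self
        kid.pindex = len(self.kids)
        self.kids.append(kid)
        return kid

    def sibling(self, offs=1):
        indx = self.pindex + offs
        if indx < 0 or indx >= len(self.parent.kids):
            return None
        return self.parent.kids[indx]

root = Node('root')
a, b, c = (root.add(Node(n)) for n in 'abc')
```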
def update_pypsa_storage(pypsa, storages, storages_lines):
    """
    Adds storages and their lines to pypsa representation of the edisgo
    graph.

    This function affects the following attributes of the pypsa network:
    components ('StorageUnit'), storage_units, storage_units_t (p_set,
    q_set), buses, lines

    Parameters
    ----------
    pypsa : :pypsa:`pypsa.Network<network>`
    storages : :obj:`list`
        List with storages of type :class:`~.grid.components.Storage` to
        add to pypsa network.
    storages_lines : :obj:`list`
        List with lines of type :class:`~.grid.components.Line` that
        connect storages to the grid.

    """
    bus = {'name': [], 'v_nom': [], 'x': [], 'y': []}
    line = {'name': [], 'bus0': [], 'bus1': [], 'type': [], 'x': [],
            'r': [], 's_nom': [], 'length': []}
    storage = {
        'name': [], 'bus': [], 'p_nom': [], 'state_of_charge_initial': [],
        'efficiency_store': [], 'efficiency_dispatch': [],
        'standing_loss': []}

    for s in storages:
        bus_name = '_'.join(['Bus', repr(s)])

        storage['name'].append(repr(s))
        storage['bus'].append(bus_name)
        storage['p_nom'].append(s.nominal_power / 1e3)
        storage['state_of_charge_initial'].append(s.soc_initial)
        storage['efficiency_store'].append(s.efficiency_in)
        storage['efficiency_dispatch'].append(s.efficiency_out)
        storage['standing_loss'].append(s.standing_loss)

        bus['name'].append(bus_name)
        bus['v_nom'].append(s.grid.voltage_nom)
        bus['x'].append(s.geom.x)
        bus['y'].append(s.geom.y)

    omega = 2 * pi * 50
    for l in storages_lines:
        line['name'].append(repr(l))
        adj_nodes = l.grid.graph.nodes_from_line(l)

        if isinstance(l.grid, LVGrid):
            if isinstance(adj_nodes[0], LVStation):
                line['bus0'].append(
                    '_'.join(['Bus', adj_nodes[0].__repr__(side='lv')]))
            else:
                line['bus0'].append('_'.join(['Bus', repr(adj_nodes[0])]))
            if isinstance(adj_nodes[1], LVStation):
                line['bus1'].append(
                    '_'.join(['Bus', adj_nodes[1].__repr__(side='lv')]))
            else:
                line['bus1'].append('_'.join(['Bus', repr(adj_nodes[1])]))
        else:
            if isinstance(adj_nodes[0], LVStation):
                line['bus0'].append(
                    '_'.join(['Bus', adj_nodes[0].__repr__(side='mv')]))
            elif isinstance(adj_nodes[0], MVStation):
                line['bus0'].append(
                    '_'.join(['Bus', adj_nodes[0].__repr__(side='lv')]))
            else:
                line['bus0'].append('_'.join(['Bus', repr(adj_nodes[0])]))
            if isinstance(adj_nodes[1], LVStation):
                line['bus1'].append(
                    '_'.join(['Bus', adj_nodes[1].__repr__(side='mv')]))
            elif isinstance(adj_nodes[1], MVStation):
                line['bus1'].append(
                    '_'.join(['Bus', adj_nodes[1].__repr__(side='lv')]))
            else:
                line['bus1'].append('_'.join(['Bus', repr(adj_nodes[1])]))

        line['type'].append("")
        line['x'].append(l.type['L'] * omega / 1e3 * l.length)
        line['r'].append(l.type['R'] * l.length)
        line['s_nom'].append(
            sqrt(3) * l.type['I_max_th'] * l.type['U_n'] / 1e3)
        line['length'].append(l.length)

    # import new components to pypsa
    pypsa.import_components_from_dataframe(
        pd.DataFrame(bus).set_index('name'), 'Bus')
    pypsa.import_components_from_dataframe(
        pd.DataFrame(storage).set_index('name'), 'StorageUnit')
    pypsa.import_components_from_dataframe(
        pd.DataFrame(line).set_index('name'), 'Line')

    # import time series of storages and buses to pypsa
    timeseries_storage_p = pd.DataFrame()
    timeseries_storage_q = pd.DataFrame()
    for s in storages:
        timeseries_storage_p[repr(s)] = s.pypsa_timeseries('p').loc[
            pypsa.storage_units_t.p_set.index]
        timeseries_storage_q[repr(s)] = s.pypsa_timeseries('q').loc[
            pypsa.storage_units_t.q_set.index]

    import_series_from_dataframe(pypsa, timeseries_storage_p,
                                 'StorageUnit', 'p_set')
    import_series_from_dataframe(pypsa, timeseries_storage_q,
                                 'StorageUnit', 'q_set')
def syslog(logger_to_update=logger,
           facility=SysLogHandler.LOG_USER,
           disableStderrLogger=True):
    """
    Setup logging to syslog and disable other internal loggers

    :param logger_to_update: the logger to enable syslog logging for
    :param facility: syslog facility to log to
    :param disableStderrLogger: should the default stderr logger be disabled? defaults to True
    :return the new SysLogHandler, which can be modified externally (e.g. for custom log level)
    """
    # remove internal loggers
    __remove_internal_loggers(logger_to_update, disableStderrLogger)

    # Setup logzero to only use the syslog handler with the specified facility
    syslog_handler = SysLogHandler(facility=facility)
    setattr(syslog_handler, LOGZERO_INTERNAL_LOGGER_ATTR, True)
    logger_to_update.addHandler(syslog_handler)
    return syslog_handler
def _is_builtin(obj):
    """
    Checks if the type of the given object is a built-in one or not

    :param obj: An object
    :return: True if the object is of a built-in type
    """
    module_ = inspect.getmodule(obj)
    if module_ in (None, builtins):
        return True
    return module_.__name__ in ("", "__main__")
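The behaviour of `_is_builtin` hinges on `inspect.getmodule`: it returns `None` for objects with no traceable module (such as plain ints), the `builtins` module for built-in callables, and the defining module otherwise. A standalone sketch mirroring the logic:

```python
import builtins
import inspect
import json

def is_builtin(obj):
    """Mirror of _is_builtin: True when the object's defining module is
    builtins (or cannot be determined), or is the running __main__."""
    module_ = inspect.getmodule(obj)
    if module_ in (None, builtins):
        return True
    return module_.__name__ in ("", "__main__")

# len lives in builtins; 42 has no module; json.loads is defined in json.
```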
def comic_archive_uncompress(filename, image_format):
    """
    Uncompress comic archives. Return the name of the working
    directory we uncompressed into.
    """
    if not Settings.comics:
        report = ['Skipping archive file: {}'.format(filename)]
        return None, ReportStats(filename, report=report)

    if Settings.verbose:
        truncated_filename = stats.truncate_cwd(filename)
        print("Extracting {}...".format(truncated_filename), end='')

    # create the tmpdir
    tmp_dir = _get_archive_tmp_dir(filename)
    if os.path.isdir(tmp_dir):
        shutil.rmtree(tmp_dir)
    os.mkdir(tmp_dir)

    # extract archive into the tmpdir
    if image_format == _CBZ_FORMAT:
        with zipfile.ZipFile(filename, 'r') as zfile:
            zfile.extractall(tmp_dir)
    elif image_format == _CBR_FORMAT:
        with rarfile.RarFile(filename, 'r') as rfile:
            rfile.extractall(tmp_dir)
    else:
        report = '{} {} is not a good format'.format(filename, image_format)
        return None, ReportStats(filename, report=report)

    if Settings.verbose:
        print('done')

    return tmp_dir, None
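The CBZ branch above is just the standard-library zip path: a `.cbz` file is a plain zip archive, and the extraction step is `ZipFile.extractall` into a fresh directory. A self-contained sketch (the file and directory names are illustrative):

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as work:
    # Build a tiny stand-in .cbz (really a zip) with one fake page.
    cbz_path = os.path.join(work, 'issue1.cbz')
    with zipfile.ZipFile(cbz_path, 'w') as zf:
        zf.writestr('page01.png', b'fake image bytes')

    # Extract into a fresh working directory, as the CBZ branch does.
    tmp_dir = os.path.join(work, 'issue1.extracted')
    os.mkdir(tmp_dir)
    with zipfile.ZipFile(cbz_path, 'r') as zf:
        zf.extractall(tmp_dir)

    extracted = sorted(os.listdir(tmp_dir))
```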
def scope_in(ctx):
    """
    - build new scope on the top of stack
    - and current scope will wait for its result

    :param ctx:
    :return:
    """
    logger.debug('# scope_in')
    logger.debug(ctx)
    ctx = ctx.clone()

    compiled_story = None
    if not ctx.is_empty_stack():
        compiled_story = ctx.get_child_story()
        logger.debug('# child')
        logger.debug(compiled_story)
        # we match child story loop once by message
        # what should prevent multiple matching by the same message
        ctx.matched = True
        ctx.message = modify_stack_in_message(
            ctx.message,
            lambda stack: stack[:-1] + [{
                'data': matchers.serialize(callable.WaitForReturn()),
                'step': stack[-1]['step'],
                'topic': stack[-1]['topic']
            }])

    try:
        if not compiled_story and ctx.is_scope_level_part():
            compiled_story = ctx.get_current_story_part()
    except story_context.MissedStoryPart:
        pass

    if not compiled_story:
        compiled_story = ctx.compiled_story()

    logger.debug('# [>] going deeper')
    ctx.message = modify_stack_in_message(
        ctx.message,
        lambda stack: stack + [
            stack_utils.build_empty_stack_item(compiled_story.topic)])
    logger.debug(ctx)
    return ctx
def timing(self, stats, value):
    """
    Log timing information

    >>> client = StatsdClient()
    >>> client.timing('example.timing', 500)
    >>> client.timing(('example.timing23', 'example.timing29'), 500)
    """
    self.update_stats(stats, value, self.SC_TIMING)
def _parse_triggered_hits(self, file_obj):
    """Parse and store triggered hits."""
    for _ in range(self.n_triggered_hits):
        dom_id, pmt_id = unpack('<ib', file_obj.read(5))
        tdc_time = unpack('>I', file_obj.read(4))[0]
        tot = unpack('<b', file_obj.read(1))[0]
        trigger_mask = unpack('<Q', file_obj.read(8))
        self.triggered_hits.append(
            (dom_id, pmt_id, tdc_time, tot, trigger_mask)
        )
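The hit record mixes byte orders: `'<ib'` reads a little-endian 4-byte int plus a signed byte (5 bytes, no padding under standard sizing), `'>I'` reads a big-endian unsigned int, and the final `'<Q'` is left unindexed, so `trigger_mask` stays a one-element tuple. A round-trip sketch with made-up field values:

```python
from struct import pack, unpack

# Fabricate one 18-byte hit record with hypothetical values.
buf = (pack('<ib', 806451572, 3)   # dom_id, pmt_id (little-endian)
       + pack('>I', 123456)        # tdc_time (big-endian!)
       + pack('<b', 42)            # tot
       + pack('<Q', 1))            # trigger_mask

dom_id, pmt_id = unpack('<ib', buf[:5])
tdc_time = unpack('>I', buf[5:9])[0]
tot = unpack('<b', buf[9:10])[0]
trigger_mask = unpack('<Q', buf[10:18])  # note: a 1-tuple, as in the original
```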
def get_sql_update_by_first_field(table: str,
                                  fieldlist: Sequence[str],
                                  delims: Tuple[str, str] = ("", "")) -> str:
    """Returns SQL for an UPDATE statement, to update all fields except
    the first field (PK) using the PK as the key."""
    return (
        "UPDATE " + delimit(table, delims) +
        " SET " +
        ",".join([delimit(x, delims) + "=?" for x in fieldlist[1:]]) +
        " WHERE " + delimit(fieldlist[0], delims) + "=?"
    )
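A runnable sketch of the statement builder, with a hypothetical `delimit` helper (the real one lives elsewhere in the library; here it simply wraps an identifier in the delimiter pair) and a made-up table:

```python
from typing import Sequence, Tuple

def delimit(name: str, delims: Tuple[str, str]) -> str:
    """Hypothetical stand-in for the delimit() helper assumed above."""
    return delims[0] + name + delims[1]

def get_sql_update_by_first_field(table: str,
                                  fieldlist: Sequence[str],
                                  delims: Tuple[str, str] = ("", "")) -> str:
    # First field is the PK (WHERE clause); the rest become SET clauses.
    return (
        "UPDATE " + delimit(table, delims) +
        " SET " +
        ",".join([delimit(x, delims) + "=?" for x in fieldlist[1:]]) +
        " WHERE " + delimit(fieldlist[0], delims) + "=?"
    )

sql = get_sql_update_by_first_field("patient", ["pk", "name", "dob"],
                                    ('"', '"'))
# → UPDATE "patient" SET "name"=?,"dob"=? WHERE "pk"=?
```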
def fix_lib64(lib_dir, symlink=True):
    """
    Some platforms (particularly Gentoo on x64) put things in
    lib64/pythonX.Y instead of lib/pythonX.Y.  If this is such a platform
    we'll just create a symlink so lib64 points to lib
    """
    # PyPy's library path scheme is not affected by this.
    # Return early or we will die on the following assert.
    if is_pypy:
        logger.debug('PyPy detected, skipping lib64 symlinking')
        return

    # Check we have a lib64 library path
    if not [p for p in distutils.sysconfig.get_config_vars().values()
            if isinstance(p, basestring) and 'lib64' in p]:
        return

    logger.debug('This system uses lib64; symlinking lib64 to lib')

    assert os.path.basename(lib_dir) == 'python%s' % sys.version[:3], (
        "Unexpected python lib dir: %r" % lib_dir)
    lib_parent = os.path.dirname(lib_dir)
    top_level = os.path.dirname(lib_parent)
    lib_dir = os.path.join(top_level, 'lib')
    lib64_link = os.path.join(top_level, 'lib64')
    assert os.path.basename(lib_parent) == 'lib', (
        "Unexpected parent dir: %r" % lib_parent)
    if os.path.lexists(lib64_link):
        return
    if symlink:
        os.symlink('lib', lib64_link)
    else:
        copyfile('lib', lib64_link)
Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y instead of lib/pythonX.Y. If this is such a platform we'll just create a symlink so lib64 points to lib
Below is the instruction that describes the task: ### Input: Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y instead of lib/pythonX.Y. If this is such a platform we'll just create a symlink so lib64 points to lib ### Response: def fix_lib64(lib_dir, symlink=True): """ Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y instead of lib/pythonX.Y. If this is such a platform we'll just create a symlink so lib64 points to lib """ # PyPy's library path scheme is not affected by this. # Return early or we will die on the following assert. if is_pypy: logger.debug('PyPy detected, skipping lib64 symlinking') return # Check we have a lib64 library path if not [p for p in distutils.sysconfig.get_config_vars().values() if isinstance(p, basestring) and 'lib64' in p]: return logger.debug('This system uses lib64; symlinking lib64 to lib') assert os.path.basename(lib_dir) == 'python%s' % sys.version[:3], ( "Unexpected python lib dir: %r" % lib_dir) lib_parent = os.path.dirname(lib_dir) top_level = os.path.dirname(lib_parent) lib_dir = os.path.join(top_level, 'lib') lib64_link = os.path.join(top_level, 'lib64') assert os.path.basename(lib_parent) == 'lib', ( "Unexpected parent dir: %r" % lib_parent) if os.path.lexists(lib64_link): return if symlink: os.symlink('lib', lib64_link) else: copyfile('lib', lib64_link)
def encode(data, scheme=None, size=None): """ Encodes `data` in a DataMatrix image. For now bpp is the libdmtx default which is 24 Args: data: bytes instance scheme: encoding scheme - one of `ENCODING_SCHEME_NAMES`, or `None`. If `None`, defaults to 'Ascii'. size: image dimensions - one of `ENCODING_SIZE_NAMES`, or `None`. If `None`, defaults to 'ShapeAuto'. Returns: Encoded: with properties `(width, height, bpp, pixels)`. You can use that result to build a PIL image: Image.frombytes('RGB', (width, height), pixels) """ size = size if size else 'ShapeAuto' size_name = '{0}{1}'.format(ENCODING_SIZE_PREFIX, size) if not hasattr(DmtxSymbolSize, size_name): raise PyLibDMTXError( 'Invalid size [{0}]: should be one of {1}'.format( size, ENCODING_SIZE_NAMES ) ) size = getattr(DmtxSymbolSize, size_name) scheme = scheme if scheme else 'Ascii' scheme_name = '{0}{1}'.format( ENCODING_SCHEME_PREFIX, scheme.capitalize() ) if not hasattr(DmtxScheme, scheme_name): raise PyLibDMTXError( 'Invalid scheme [{0}]: should be one of {1}'.format( scheme, ENCODING_SCHEME_NAMES ) ) scheme = getattr(DmtxScheme, scheme_name) with _encoder() as encoder: dmtxEncodeSetProp(encoder, DmtxProperty.DmtxPropScheme, scheme) dmtxEncodeSetProp(encoder, DmtxProperty.DmtxPropSizeRequest, size) if dmtxEncodeDataMatrix(encoder, len(data), cast(data, c_ubyte_p)) == 0: raise PyLibDMTXError( 'Could not encode data, possibly because the image is not ' 'large enough to contain the data' ) w, h, bpp = map( partial(dmtxImageGetProp, encoder[0].image), ( DmtxProperty.DmtxPropWidth, DmtxProperty.DmtxPropHeight, DmtxProperty.DmtxPropBitsPerPixel ) ) size = w * h * bpp // 8 pixels = cast( encoder[0].image[0].pxl, ctypes.POINTER(ctypes.c_ubyte * size) ) return Encoded( width=w, height=h, bpp=bpp, pixels=ctypes.string_at(pixels, size) )
Encodes `data` in a DataMatrix image. For now bpp is the libdmtx default which is 24 Args: data: bytes instance scheme: encoding scheme - one of `ENCODING_SCHEME_NAMES`, or `None`. If `None`, defaults to 'Ascii'. size: image dimensions - one of `ENCODING_SIZE_NAMES`, or `None`. If `None`, defaults to 'ShapeAuto'. Returns: Encoded: with properties `(width, height, bpp, pixels)`. You can use that result to build a PIL image: Image.frombytes('RGB', (width, height), pixels)
Below is the instruction that describes the task: ### Input: Encodes `data` in a DataMatrix image. For now bpp is the libdmtx default which is 24 Args: data: bytes instance scheme: encoding scheme - one of `ENCODING_SCHEME_NAMES`, or `None`. If `None`, defaults to 'Ascii'. size: image dimensions - one of `ENCODING_SIZE_NAMES`, or `None`. If `None`, defaults to 'ShapeAuto'. Returns: Encoded: with properties `(width, height, bpp, pixels)`. You can use that result to build a PIL image: Image.frombytes('RGB', (width, height), pixels) ### Response: def encode(data, scheme=None, size=None): """ Encodes `data` in a DataMatrix image. For now bpp is the libdmtx default which is 24 Args: data: bytes instance scheme: encoding scheme - one of `ENCODING_SCHEME_NAMES`, or `None`. If `None`, defaults to 'Ascii'. size: image dimensions - one of `ENCODING_SIZE_NAMES`, or `None`. If `None`, defaults to 'ShapeAuto'. Returns: Encoded: with properties `(width, height, bpp, pixels)`. You can use that result to build a PIL image: Image.frombytes('RGB', (width, height), pixels) """ size = size if size else 'ShapeAuto' size_name = '{0}{1}'.format(ENCODING_SIZE_PREFIX, size) if not hasattr(DmtxSymbolSize, size_name): raise PyLibDMTXError( 'Invalid size [{0}]: should be one of {1}'.format( size, ENCODING_SIZE_NAMES ) ) size = getattr(DmtxSymbolSize, size_name) scheme = scheme if scheme else 'Ascii' scheme_name = '{0}{1}'.format( ENCODING_SCHEME_PREFIX, scheme.capitalize() ) if not hasattr(DmtxScheme, scheme_name): raise PyLibDMTXError( 'Invalid scheme [{0}]: should be one of {1}'.format( scheme, ENCODING_SCHEME_NAMES ) ) scheme = getattr(DmtxScheme, scheme_name) with _encoder() as encoder: dmtxEncodeSetProp(encoder, DmtxProperty.DmtxPropScheme, scheme) dmtxEncodeSetProp(encoder, DmtxProperty.DmtxPropSizeRequest, size) if dmtxEncodeDataMatrix(encoder, len(data), cast(data, c_ubyte_p)) == 0: raise PyLibDMTXError( 'Could not encode data, possibly because the image is not ' 'large enough to contain the data' ) w, h, bpp = map( partial(dmtxImageGetProp, encoder[0].image), ( DmtxProperty.DmtxPropWidth, DmtxProperty.DmtxPropHeight, DmtxProperty.DmtxPropBitsPerPixel ) ) size = w * h * bpp // 8 pixels = cast( encoder[0].image[0].pxl, ctypes.POINTER(ctypes.c_ubyte * size) ) return Encoded( width=w, height=h, bpp=bpp, pixels=ctypes.string_at(pixels, size) )
def alias(col, mapping): """ Returns a collection of dictionaries with the keys renamed according to the mapping >>> libraries = [{"isbn": 1, "ed": 1}, {"isbn": 2, "ed": 2}] >>> alias(libraries, {"ed": "edition"}) [{'edition': 1, 'isbn': 1}, {'edition': 2, 'isbn': 2}] >>> alias({"a": 1}, {"a": "b"}) [{'b': 1}] """ if not is_list(col): col = [col] def _block(dct): return rename(dct, mapping) return map(_block, col)
Returns a collection of dictionaries with the keys renamed according to the mapping >>> libraries = [{"isbn": 1, "ed": 1}, {"isbn": 2, "ed": 2}] >>> alias(libraries, {"ed": "edition"}) [{'edition': 1, 'isbn': 1}, {'edition': 2, 'isbn': 2}] >>> alias({"a": 1}, {"a": "b"}) [{'b': 1}]
Below is the instruction that describes the task: ### Input: Returns a collection of dictionaries with the keys renamed according to the mapping >>> libraries = [{"isbn": 1, "ed": 1}, {"isbn": 2, "ed": 2}] >>> alias(libraries, {"ed": "edition"}) [{'edition': 1, 'isbn': 1}, {'edition': 2, 'isbn': 2}] >>> alias({"a": 1}, {"a": "b"}) [{'b': 1}] ### Response: def alias(col, mapping): """ Returns a collection of dictionaries with the keys renamed according to the mapping >>> libraries = [{"isbn": 1, "ed": 1}, {"isbn": 2, "ed": 2}] >>> alias(libraries, {"ed": "edition"}) [{'edition': 1, 'isbn': 1}, {'edition': 2, 'isbn': 2}] >>> alias({"a": 1}, {"a": "b"}) [{'b': 1}] """ if not is_list(col): col = [col] def _block(dct): return rename(dct, mapping) return map(_block, col)
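`is_list` and `rename` are not defined in the record. The sketch below substitutes plausible one-liners for both and returns a list instead of the lazy `map` object so the doctest output compares directly (an assumption, not necessarily the library's exact behaviour):

```python
def is_list(x):
    # Assumed stand-in for the library's is_list helper.
    return isinstance(x, list)

def rename(dct, mapping):
    # Assumed stand-in: rename keys per `mapping`, keep other keys as-is.
    return {mapping.get(k, k): v for k, v in dct.items()}

def alias(col, mapping):
    # Wrap a single dict so both forms in the doctest work.
    if not is_list(col):
        col = [col]
    return [rename(dct, mapping) for dct in col]

libraries = [{"isbn": 1, "ed": 1}, {"isbn": 2, "ed": 2}]
renamed = alias(libraries, {"ed": "edition"})
```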
def dump(database, output, min_occurences=1, max_occurences=250, returncmd=False): """ Dumps output from kmc database into tab-delimited format. :param database: Database generated by kmc. :param output: Name for output. :param min_occurences: Minimum number of times kmer must be in database to be dumped. :param max_occurences: Maximum number of times a kmer can be seen and still be dumped. :param returncmd: If true, will return the command used to call KMC as well as out and err. :return: Stdout and stderr from kmc. """ cmd = 'kmc_tools dump -ci{} -cx{} {} {}'.format(min_occurences, max_occurences, database, output) out, err = accessoryfunctions.run_subprocess(cmd) if returncmd: return out, err, cmd else: return out, err
Dumps output from kmc database into tab-delimited format. :param database: Database generated by kmc. :param output: Name for output. :param min_occurences: Minimum number of times kmer must be in database to be dumped. :param max_occurences: Maximum number of times a kmer can be seen and still be dumped. :param returncmd: If true, will return the command used to call KMC as well as out and err. :return: Stdout and stderr from kmc.
Below is the instruction that describes the task: ### Input: Dumps output from kmc database into tab-delimited format. :param database: Database generated by kmc. :param output: Name for output. :param min_occurences: Minimum number of times kmer must be in database to be dumped. :param max_occurences: Maximum number of times a kmer can be seen and still be dumped. :param returncmd: If true, will return the command used to call KMC as well as out and err. :return: Stdout and stderr from kmc. ### Response: def dump(database, output, min_occurences=1, max_occurences=250, returncmd=False): """ Dumps output from kmc database into tab-delimited format. :param database: Database generated by kmc. :param output: Name for output. :param min_occurences: Minimum number of times kmer must be in database to be dumped. :param max_occurences: Maximum number of times a kmer can be seen and still be dumped. :param returncmd: If true, will return the command used to call KMC as well as out and err. :return: Stdout and stderr from kmc. """ cmd = 'kmc_tools dump -ci{} -cx{} {} {}'.format(min_occurences, max_occurences, database, output) out, err = accessoryfunctions.run_subprocess(cmd) if returncmd: return out, err, cmd else: return out, err
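`dump` only formats a `kmc_tools` command line and hands it to a subprocess helper, so the string-building half can be exercised without KMC installed. The function name below is illustrative, not part of the library:

```python
def build_dump_cmd(database, output, min_occurences=1, max_occurences=250):
    # Same command string dump() assembles before shelling out;
    # kmc_tools itself is not invoked here.
    return 'kmc_tools dump -ci{} -cx{} {} {}'.format(
        min_occurences, max_occurences, database, output)

cmd = build_dump_cmd('genome_db', 'kmers.txt', min_occurences=2)
```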
def is_valid_sound(sound, ts): """Check the consistency of a given transcription system conversion""" if isinstance(sound, (Marker, UnknownSound)): return False s1 = ts[sound.name] s2 = ts[sound.s] return s1.name == s2.name and s1.s == s2.s
Check the consistency of a given transcription system conversion
Below is the instruction that describes the task: ### Input: Check the consistency of a given transcription system conversion ### Response: def is_valid_sound(sound, ts): """Check the consistency of a given transcription system conversion""" if isinstance(sound, (Marker, UnknownSound)): return False s1 = ts[sound.name] s2 = ts[sound.s] return s1.name == s2.name and s1.s == s2.s
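A self-contained illustration of the consistency check. `Sound`, `Marker`, and `UnknownSound` are minimal stand-ins for the real transcription-system types, and the two-key `ts` dict plays the role of the system's name/grapheme lookup:

```python
from collections import namedtuple

Sound = namedtuple('Sound', ['name', 's'])  # s is the grapheme

class Marker:          # stand-in for the real Marker type
    pass

class UnknownSound:    # stand-in for the real UnknownSound type
    pass

def is_valid_sound(sound, ts):
    # Consistent iff looking the sound up by name and by grapheme
    # yields the same (name, grapheme) pair.
    if isinstance(sound, (Marker, UnknownSound)):
        return False
    s1 = ts[sound.name]
    s2 = ts[sound.s]
    return s1.name == s2.name and s1.s == s2.s

a = Sound(name='open front unrounded vowel', s='a')
ts = {'open front unrounded vowel': a, 'a': a}
```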
def warm_up_cache(self): """Warms up the cache for the slice or table. Note for slices a force refresh occurs. """ slices = None session = db.session() slice_id = request.args.get('slice_id') table_name = request.args.get('table_name') db_name = request.args.get('db_name') if not slice_id and not (table_name and db_name): return json_error_response(__( 'Malformed request. slice_id or table_name and db_name ' 'arguments are expected'), status=400) if slice_id: slices = session.query(models.Slice).filter_by(id=slice_id).all() if not slices: return json_error_response(__( 'Chart %(id)s not found', id=slice_id), status=404) elif table_name and db_name: SqlaTable = ConnectorRegistry.sources['table'] table = ( session.query(SqlaTable) .join(models.Database) .filter( models.Database.database_name == db_name or SqlaTable.table_name == table_name) ).first() if not table: return json_error_response(__( "Table %(t)s wasn't found in the database %(d)s", t=table_name, s=db_name), status=404) slices = session.query(models.Slice).filter_by( datasource_id=table.id, datasource_type=table.type).all() for slc in slices: try: form_data = get_form_data(slc.id, use_slice_data=True)[0] obj = get_viz( datasource_type=slc.datasource.type, datasource_id=slc.datasource.id, form_data=form_data, force=True, ) obj.get_json() except Exception as e: return json_error_response(utils.error_msg_from_exception(e)) return json_success(json.dumps( [{'slice_id': slc.id, 'slice_name': slc.slice_name} for slc in slices]))
Warms up the cache for the slice or table. Note for slices a force refresh occurs.
Below is the instruction that describes the task: ### Input: Warms up the cache for the slice or table. Note for slices a force refresh occurs. ### Response: def warm_up_cache(self): """Warms up the cache for the slice or table. Note for slices a force refresh occurs. """ slices = None session = db.session() slice_id = request.args.get('slice_id') table_name = request.args.get('table_name') db_name = request.args.get('db_name') if not slice_id and not (table_name and db_name): return json_error_response(__( 'Malformed request. slice_id or table_name and db_name ' 'arguments are expected'), status=400) if slice_id: slices = session.query(models.Slice).filter_by(id=slice_id).all() if not slices: return json_error_response(__( 'Chart %(id)s not found', id=slice_id), status=404) elif table_name and db_name: SqlaTable = ConnectorRegistry.sources['table'] table = ( session.query(SqlaTable) .join(models.Database) .filter( models.Database.database_name == db_name or SqlaTable.table_name == table_name) ).first() if not table: return json_error_response(__( "Table %(t)s wasn't found in the database %(d)s", t=table_name, s=db_name), status=404) slices = session.query(models.Slice).filter_by( datasource_id=table.id, datasource_type=table.type).all() for slc in slices: try: form_data = get_form_data(slc.id, use_slice_data=True)[0] obj = get_viz( datasource_type=slc.datasource.type, datasource_id=slc.datasource.id, form_data=form_data, force=True, ) obj.get_json() except Exception as e: return json_error_response(utils.error_msg_from_exception(e)) return json_success(json.dumps( [{'slice_id': slc.id, 'slice_name': slc.slice_name} for slc in slices]))
def write_config(self, cfg, slot=1): """ Write a configuration to the YubiKey. """ cfg_req_ver = cfg.version_required() if cfg_req_ver > self.version_num(): raise yubikey_base.YubiKeyVersionError('Configuration requires YubiKey version %i.%i (this is %s)' % \ (cfg_req_ver[0], cfg_req_ver[1], self.version())) if not self.capabilities.have_configuration_slot(slot): raise YubiKeyUSBHIDError("Can't write configuration to slot %i" % (slot)) return self._device._write_config(cfg, slot)
Write a configuration to the YubiKey.
Below is the instruction that describes the task: ### Input: Write a configuration to the YubiKey. ### Response: def write_config(self, cfg, slot=1): """ Write a configuration to the YubiKey. """ cfg_req_ver = cfg.version_required() if cfg_req_ver > self.version_num(): raise yubikey_base.YubiKeyVersionError('Configuration requires YubiKey version %i.%i (this is %s)' % \ (cfg_req_ver[0], cfg_req_ver[1], self.version())) if not self.capabilities.have_configuration_slot(slot): raise YubiKeyUSBHIDError("Can't write configuration to slot %i" % (slot)) return self._device._write_config(cfg, slot)
def submit_job(job_ini, username, hazard_job_id=None): """ Create a job object from the given job.ini file in the job directory and run it in a new process. Returns the job ID and PID. """ job_id = logs.init('job') oq = engine.job_from_file( job_ini, job_id, username, hazard_calculation_id=hazard_job_id) pik = pickle.dumps(oq, protocol=0) # human readable protocol code = RUNCALC % dict(job_id=job_id, hazard_job_id=hazard_job_id, pik=pik, username=username) tmp_py = gettemp(code, suffix='.py') # print(code, tmp_py) # useful when debugging devnull = subprocess.DEVNULL popen = subprocess.Popen([sys.executable, tmp_py], stdin=devnull, stdout=devnull, stderr=devnull) threading.Thread(target=popen.wait).start() logs.dbcmd('update_job', job_id, {'pid': popen.pid}) return job_id, popen.pid
Create a job object from the given job.ini file in the job directory and run it in a new process. Returns the job ID and PID.
Below is the instruction that describes the task: ### Input: Create a job object from the given job.ini file in the job directory and run it in a new process. Returns the job ID and PID. ### Response: def submit_job(job_ini, username, hazard_job_id=None): """ Create a job object from the given job.ini file in the job directory and run it in a new process. Returns the job ID and PID. """ job_id = logs.init('job') oq = engine.job_from_file( job_ini, job_id, username, hazard_calculation_id=hazard_job_id) pik = pickle.dumps(oq, protocol=0) # human readable protocol code = RUNCALC % dict(job_id=job_id, hazard_job_id=hazard_job_id, pik=pik, username=username) tmp_py = gettemp(code, suffix='.py') # print(code, tmp_py) # useful when debugging devnull = subprocess.DEVNULL popen = subprocess.Popen([sys.executable, tmp_py], stdin=devnull, stdout=devnull, stderr=devnull) threading.Thread(target=popen.wait).start() logs.dbcmd('update_job', job_id, {'pid': popen.pid}) return job_id, popen.pid
def result(self): """ The result from realising the future If the result is not available, block until done. :return: result of the future :raises: any exception encountered during realising the future """ if self._result is None: self.await_result() chunks, exception = self._result if exception is None: return chunks raise exception
The result from realising the future If the result is not available, block until done. :return: result of the future :raises: any exception encountered during realising the future
Below is the instruction that describes the task: ### Input: The result from realising the future If the result is not available, block until done. :return: result of the future :raises: any exception encountered during realising the future ### Response: def result(self): """ The result from realising the future If the result is not available, block until done. :return: result of the future :raises: any exception encountered during realising the future """ if self._result is None: self.await_result() chunks, exception = self._result if exception is None: return chunks raise exception
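The `(chunks, exception)` tuple convention is easiest to see in a toy version. `ChunkFuture` and its `await_result` are stand-ins for the real blocking machinery:

```python
class ChunkFuture:
    # Minimal sketch of the future above; only the result-realisation
    # logic is reproduced.
    def __init__(self):
        self._result = None

    def await_result(self):
        # Stand-in: the real method blocks until a worker stores
        # (chunks, exception) into self._result.
        self._result = (['chunk-1', 'chunk-2'], None)

    def result(self):
        # Realise on demand, then either return the chunks or re-raise
        # whatever the worker recorded.
        if self._result is None:
            self.await_result()
        chunks, exception = self._result
        if exception is None:
            return chunks
        raise exception

chunks = ChunkFuture().result()
```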
def get_calculated_display_values(self, immediate: bool=False) -> DisplayValues: """Return the display values. Return the current (possibly uncalculated) display values unless 'immediate' is specified. If 'immediate', return the existing (calculated) values if they exist. Using the 'immediate' values avoids calculation except in cases where the display values haven't already been calculated. """ if not immediate or not self.__is_master or not self.__last_display_values: if not self.__current_display_values and self.__data_item: self.__current_display_values = DisplayValues(self.__data_item.xdata, self.sequence_index, self.collection_index, self.slice_center, self.slice_width, self.display_limits, self.complex_display_type, self.__color_map_data) def finalize(display_values): self.__last_display_values = display_values self.display_values_changed_event.fire() self.__current_display_values.on_finalize = finalize return self.__current_display_values return self.__last_display_values
Return the display values. Return the current (possibly uncalculated) display values unless 'immediate' is specified. If 'immediate', return the existing (calculated) values if they exist. Using the 'immediate' values avoids calculation except in cases where the display values haven't already been calculated.
Below is the instruction that describes the task: ### Input: Return the display values. Return the current (possibly uncalculated) display values unless 'immediate' is specified. If 'immediate', return the existing (calculated) values if they exist. Using the 'immediate' values avoids calculation except in cases where the display values haven't already been calculated. ### Response: def get_calculated_display_values(self, immediate: bool=False) -> DisplayValues: """Return the display values. Return the current (possibly uncalculated) display values unless 'immediate' is specified. If 'immediate', return the existing (calculated) values if they exist. Using the 'immediate' values avoids calculation except in cases where the display values haven't already been calculated. """ if not immediate or not self.__is_master or not self.__last_display_values: if not self.__current_display_values and self.__data_item: self.__current_display_values = DisplayValues(self.__data_item.xdata, self.sequence_index, self.collection_index, self.slice_center, self.slice_width, self.display_limits, self.complex_display_type, self.__color_map_data) def finalize(display_values): self.__last_display_values = display_values self.display_values_changed_event.fire() self.__current_display_values.on_finalize = finalize return self.__current_display_values return self.__last_display_values
def calculate_imf_steadiness(inst, steady_window=15, min_window_frac=0.75, max_clock_angle_std=90.0/np.pi, max_bmag_cv=0.5): """ Calculate IMF steadiness using clock angle standard deviation and the coefficient of variation of the IMF magnitude in the GSM Y-Z plane Parameters ----------- inst : pysat.Instrument Instrument with OMNI HRO data steady_window : int Window for calculating running statistical moments in min (default=15) min_window_frac : float Minimum fraction of points in a window for steadiness to be calculated (default=0.75) max_clock_angle_std : float Maximum standard deviation of the clock angle in degrees (default=22.5) max_bmag_cv : float Maximum coefficient of variation of the IMF magnitude in the GSM Y-Z plane (default=0.5) """ # We are not going to interpolate through missing values sample_rate = int(inst.tag[0]) max_wnum = np.floor(steady_window / sample_rate) if max_wnum != steady_window / sample_rate: steady_window = max_wnum * sample_rate print("WARNING: sample rate is not a factor of the statistical window") print("new statistical window is {:.1f}".format(steady_window)) min_wnum = int(np.ceil(max_wnum * min_window_frac)) # Calculate the running coefficient of variation of the BYZ magnitude byz_mean = inst['BYZ_GSM'].rolling(min_periods=min_wnum, center=True, window=steady_window).mean() byz_std = inst['BYZ_GSM'].rolling(min_periods=min_wnum, center=True, window=steady_window).std() inst['BYZ_CV'] = pds.Series(byz_std / byz_mean, index=inst.data.index) # Calculate the running circular standard deviation of the clock angle circ_kwargs = {'high':360.0, 'low':0.0} ca = inst['clock_angle'][~np.isnan(inst['clock_angle'])] ca_std = inst['clock_angle'].rolling(min_periods=min_wnum, window=steady_window, \ center=True).apply(pysat.utils.nan_circstd, kwargs=circ_kwargs) inst['clock_angle_std'] = pds.Series(ca_std, index=inst.data.index) # Determine how long the clock angle and IMF magnitude are steady imf_steady = np.zeros(shape=inst.data.index.shape) steady = False for i,cv in enumerate(inst.data['BYZ_CV']): if steady: del_min = int((inst.data.index[i] - inst.data.index[i-1]).total_seconds() / 60.0) if np.isnan(cv) or np.isnan(ca_std[i]) or del_min > sample_rate: # Reset the steadiness flag if fill values are encountered, or # if an entry is missing steady = False if cv <= max_bmag_cv and ca_std[i] <= max_clock_angle_std: # Steadiness conditions have been met if steady: imf_steady[i] = imf_steady[i-1] imf_steady[i] += sample_rate steady = True inst['IMF_Steady'] = pds.Series(imf_steady, index=inst.data.index) return
Calculate IMF steadiness using clock angle standard deviation and the coefficient of variation of the IMF magnitude in the GSM Y-Z plane Parameters ----------- inst : pysat.Instrument Instrument with OMNI HRO data steady_window : int Window for calculating running statistical moments in min (default=15) min_window_frac : float Minimum fraction of points in a window for steadiness to be calculated (default=0.75) max_clock_angle_std : float Maximum standard deviation of the clock angle in degrees (default=22.5) max_bmag_cv : float Maximum coefficient of variation of the IMF magnitude in the GSM Y-Z plane (default=0.5)
Below is the instruction that describes the task: ### Input: Calculate IMF steadiness using clock angle standard deviation and the coefficient of variation of the IMF magnitude in the GSM Y-Z plane Parameters ----------- inst : pysat.Instrument Instrument with OMNI HRO data steady_window : int Window for calculating running statistical moments in min (default=15) min_window_frac : float Minimum fraction of points in a window for steadiness to be calculated (default=0.75) max_clock_angle_std : float Maximum standard deviation of the clock angle in degrees (default=22.5) max_bmag_cv : float Maximum coefficient of variation of the IMF magnitude in the GSM Y-Z plane (default=0.5) ### Response: def calculate_imf_steadiness(inst, steady_window=15, min_window_frac=0.75, max_clock_angle_std=90.0/np.pi, max_bmag_cv=0.5): """ Calculate IMF steadiness using clock angle standard deviation and the coefficient of variation of the IMF magnitude in the GSM Y-Z plane Parameters ----------- inst : pysat.Instrument Instrument with OMNI HRO data steady_window : int Window for calculating running statistical moments in min (default=15) min_window_frac : float Minimum fraction of points in a window for steadiness to be calculated (default=0.75) max_clock_angle_std : float Maximum standard deviation of the clock angle in degrees (default=22.5) max_bmag_cv : float Maximum coefficient of variation of the IMF magnitude in the GSM Y-Z plane (default=0.5) """ # We are not going to interpolate through missing values sample_rate = int(inst.tag[0]) max_wnum = np.floor(steady_window / sample_rate) if max_wnum != steady_window / sample_rate: steady_window = max_wnum * sample_rate print("WARNING: sample rate is not a factor of the statistical window") print("new statistical window is {:.1f}".format(steady_window)) min_wnum = int(np.ceil(max_wnum * min_window_frac)) # Calculate the running coefficient of variation of the BYZ magnitude byz_mean = inst['BYZ_GSM'].rolling(min_periods=min_wnum, center=True, window=steady_window).mean() byz_std = inst['BYZ_GSM'].rolling(min_periods=min_wnum, center=True, window=steady_window).std() inst['BYZ_CV'] = pds.Series(byz_std / byz_mean, index=inst.data.index) # Calculate the running circular standard deviation of the clock angle circ_kwargs = {'high':360.0, 'low':0.0} ca = inst['clock_angle'][~np.isnan(inst['clock_angle'])] ca_std = inst['clock_angle'].rolling(min_periods=min_wnum, window=steady_window, \ center=True).apply(pysat.utils.nan_circstd, kwargs=circ_kwargs) inst['clock_angle_std'] = pds.Series(ca_std, index=inst.data.index) # Determine how long the clock angle and IMF magnitude are steady imf_steady = np.zeros(shape=inst.data.index.shape) steady = False for i,cv in enumerate(inst.data['BYZ_CV']): if steady: del_min = int((inst.data.index[i] - inst.data.index[i-1]).total_seconds() / 60.0) if np.isnan(cv) or np.isnan(ca_std[i]) or del_min > sample_rate: # Reset the steadiness flag if fill values are encountered, or # if an entry is missing steady = False if cv <= max_bmag_cv and ca_std[i] <= max_clock_angle_std: # Steadiness conditions have been met if steady: imf_steady[i] = imf_steady[i-1] imf_steady[i] += sample_rate steady = True inst['IMF_Steady'] = pds.Series(imf_steady, index=inst.data.index) return
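The running coefficient of variation (std / mean) that the routine applies to `BYZ_GSM` can be sketched without pandas. This is a plain trailing-window version (the original uses a centered window), assuming no missing samples:

```python
import statistics

def rolling_cv(values, window):
    # Coefficient of variation over each trailing window, the statistic
    # calculate_imf_steadiness computes for the IMF magnitude.
    out = []
    for i in range(window - 1, len(values)):
        chunk = values[i - window + 1:i + 1]
        out.append(statistics.stdev(chunk) / statistics.mean(chunk))
    return out

cv = rolling_cv([10.0, 10.0, 10.0, 20.0], window=3)
```

The first window is perfectly steady (CV of 0); the second picks up the jump to 20 nT and its CV rises toward the 0.5 cutoff.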
def do(ruby, command, runas=None, cwd=None, env=None): # pylint: disable=C0103 ''' Execute a command in an RVM controlled environment. ruby Which ruby to use command The rvm command to execute runas The user under which to run rvm. If not specified, then rvm will be run as the user under which Salt is running. cwd The directory from which to run the rvm command. Defaults to the user's home directory. CLI Example: .. code-block:: bash salt '*' rvm.do 2.0.0 <command> ''' try: command = salt.utils.args.shlex_split(command) except AttributeError: command = salt.utils.args.shlex_split(six.text_type(command)) return _rvm_do(ruby, command, runas=runas, cwd=cwd, env=env)
Execute a command in an RVM controlled environment. ruby Which ruby to use command The rvm command to execute runas The user under which to run rvm. If not specified, then rvm will be run as the user under which Salt is running. cwd The directory from which to run the rvm command. Defaults to the user's home directory. CLI Example: .. code-block:: bash salt '*' rvm.do 2.0.0 <command>
Below is the instruction that describes the task: ### Input: Execute a command in an RVM controlled environment. ruby Which ruby to use command The rvm command to execute runas The user under which to run rvm. If not specified, then rvm will be run as the user under which Salt is running. cwd The directory from which to run the rvm command. Defaults to the user's home directory. CLI Example: .. code-block:: bash salt '*' rvm.do 2.0.0 <command> ### Response: def do(ruby, command, runas=None, cwd=None, env=None): # pylint: disable=C0103 ''' Execute a command in an RVM controlled environment. ruby Which ruby to use command The rvm command to execute runas The user under which to run rvm. If not specified, then rvm will be run as the user under which Salt is running. cwd The directory from which to run the rvm command. Defaults to the user's home directory. CLI Example: .. code-block:: bash salt '*' rvm.do 2.0.0 <command> ''' try: command = salt.utils.args.shlex_split(command) except AttributeError: command = salt.utils.args.shlex_split(six.text_type(command)) return _rvm_do(ruby, command, runas=runas, cwd=cwd, env=env)
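Before delegating to `_rvm_do`, the command is tokenised with a shlex split; the fallback branch exists for inputs that are not already text. A minimal sketch of just that step, using `str` where the original uses `six.text_type`:

```python
import shlex

def split_command(command):
    # Mirrors the try/except in do(): objects that shlex cannot read
    # directly raise AttributeError and are coerced to text first.
    try:
        return shlex.split(command)
    except AttributeError:
        return shlex.split(str(command))

parts = split_command('gem install "my gem" --no-document')
```

Quoted arguments survive as single tokens, which is the point of using shlex over a bare `str.split`.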
def compute_duration_measures(self): """ Helper function for computing measures derived from timing information. These are only computed if the response is textgrid with timing information. All times are in seconds. """ prefix = "TIMING_" + self.current_similarity_measure + "_" + self.current_collection_type + "_" if self.response_format == 'TextGrid': self.compute_response_vowel_duration("TIMING_") #prefixes don't need collection or measure type self.compute_response_continuant_duration("TIMING_") self.compute_between_collection_interval_duration(prefix) self.compute_within_collection_interval_duration(prefix) #these give different values depending on whether singleton clusters are counted or not self.compute_within_collection_vowel_duration(prefix, no_singletons = True) self.compute_within_collection_continuant_duration(prefix, no_singletons = True) self.compute_within_collection_vowel_duration(prefix, no_singletons = False) self.compute_within_collection_continuant_duration(prefix, no_singletons = False)
Helper function for computing measures derived from timing information. These are only computed if the response is textgrid with timing information. All times are in seconds.
Below is the instruction that describes the task: ### Input: Helper function for computing measures derived from timing information. These are only computed if the response is textgrid with timing information. All times are in seconds. ### Response: def compute_duration_measures(self): """ Helper function for computing measures derived from timing information. These are only computed if the response is textgrid with timing information. All times are in seconds. """ prefix = "TIMING_" + self.current_similarity_measure + "_" + self.current_collection_type + "_" if self.response_format == 'TextGrid': self.compute_response_vowel_duration("TIMING_") #prefixes don't need collection or measure type self.compute_response_continuant_duration("TIMING_") self.compute_between_collection_interval_duration(prefix) self.compute_within_collection_interval_duration(prefix) #these give different values depending on whether singleton clusters are counted or not self.compute_within_collection_vowel_duration(prefix, no_singletons = True) self.compute_within_collection_continuant_duration(prefix, no_singletons = True) self.compute_within_collection_vowel_duration(prefix, no_singletons = False) self.compute_within_collection_continuant_duration(prefix, no_singletons = False)
def normalize_path():
    """ Normalizes sys.path to avoid the use of relative folders """
    # Normalize Python paths
    whole_path = [
        os.path.abspath(path) for path in sys.path if os.path.exists(path)
    ]
    # Keep the "dynamic" current folder indicator and add the "static"
    # current path
    # Use an OrderedDict to have a faster lookup (path not in whole_set)
    whole_set = collections.OrderedDict((("", 1), (os.getcwd(), 1)))
    # Add original path entries
    for path in whole_path:
        if path not in whole_set:
            whole_set[path] = 1
    # Set the new content of sys.path (still ordered thanks to OrderedDict)
    sys.path = list(whole_set)
    # Normalize paths in loaded modules
    for module_ in sys.modules.values():
        try:
            module_.__path__ = [
                os.path.abspath(path)
                for path in module_.__path__
                if _package_exists(path)
            ]
        except AttributeError:
            # builtin modules don't have a __path__
            pass
        except ImportError:
            pass
Normalizes sys.path to avoid the use of relative folders
Below is the instruction that describes the task:
### Input:
Normalizes sys.path to avoid the use of relative folders
### Response:
def normalize_path():
    """ Normalizes sys.path to avoid the use of relative folders """
    # Normalize Python paths
    whole_path = [
        os.path.abspath(path) for path in sys.path if os.path.exists(path)
    ]
    # Keep the "dynamic" current folder indicator and add the "static"
    # current path
    # Use an OrderedDict to have a faster lookup (path not in whole_set)
    whole_set = collections.OrderedDict((("", 1), (os.getcwd(), 1)))
    # Add original path entries
    for path in whole_path:
        if path not in whole_set:
            whole_set[path] = 1
    # Set the new content of sys.path (still ordered thanks to OrderedDict)
    sys.path = list(whole_set)
    # Normalize paths in loaded modules
    for module_ in sys.modules.values():
        try:
            module_.__path__ = [
                os.path.abspath(path)
                for path in module_.__path__
                if _package_exists(path)
            ]
        except AttributeError:
            # builtin modules don't have a __path__
            pass
        except ImportError:
            pass
def Lehrer(m, Dtank, Djacket, H, Dinlet, rho, Cp, k, mu, muw=None, isobaric_expansion=None, dT=None, inlettype='tangential', inletlocation='auto'): r'''Calculates average heat transfer coefficient for a jacket around a vessel according to [1]_ as described in [2]_. .. math:: Nu_{S,L} = \left[\frac{0.03Re_S^{0.75}Pr}{1 + \frac{1.74(Pr-1)} {Re_S^{0.125}}}\right]\left(\frac{\mu}{\mu_w}\right)^{0.14} d_g = \left(\frac{8}{3}\right)^{0.5}\delta v_h = (v_Sv_{inlet})^{0.5} + v_A v_{inlet} = \frac{Q}{\frac{\pi}{4}d_{inlet}^2} v_s = \frac{Q}{\frac{\pi}{4}(D_{jacket}^2 - D_{tank}^2)} For Radial inlets: .. math:: v_A = 0.5(2g H \beta\delta \Delta T)^{0.5} For Tangential inlets: .. math:: v_A = 0 Parameters ---------- m : float Mass flow rate of fluid, [kg/s] Dtank : float Outer diameter of tank or vessel surrounded by jacket, [m] Djacket : float Inner diameter of jacket surrounding a vessel or tank, [m] H : float Height of the vessel or tank, [m] Dinlet : float Inner diameter of inlet into the jacket, [m] rho : float Density of the fluid at Tm [kg/m^3] Cp : float Heat capacity of fluid at Tm [J/kg/K] k : float Thermal conductivity of fluid at Tm [W/m/K] mu : float Viscosity of fluid at Tm [Pa*s] muw : float, optional Viscosity of fluid at Tw [Pa*s] isobaric_expansion : float, optional Constant pressure expansivity of a fluid, [m^3/mol/K] dT : float, optional Temperature difference of fluid in jacket, [K] inlettype : str, optional Either 'tangential' or 'radial' inletlocation : str, optional Either 'top' or 'bottom' or 'auto' Returns ------- h : float Average heat transfer coefficient inside the jacket [W/m^2/K] Notes ----- If the fluid is heated and enters from the bottom, natural convection assists the heat transfer and the Grashof term is added; if it were to enter from the top, it would be subtracted. The situation is reversed if entry is from the top. Examples -------- Example as in [2]_, matches completely. 
>>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6) 2922.128124761829 Examples similar to in [2]_ but covering the other case: >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6, ... inlettype='radial', isobaric_expansion=0.000303) 3269.4389632666557 References ---------- .. [1] Lehrer, Isaac H. "Jacket-Side Nusselt Number." Industrial & Engineering Chemistry Process Design and Development 9, no. 4 (October 1, 1970): 553-58. doi:10.1021/i260036a010. .. [2] Gesellschaft, V. D. I., ed. VDI Heat Atlas. 2nd edition. Berlin; New York:: Springer, 2010. ''' delta = (Djacket-Dtank)/2. Q = m/rho Pr = Cp*mu/k vs = Q/H/delta vo = Q/(pi/4*Dinlet**2) if dT and isobaric_expansion and inlettype == 'radial' and inletlocation: if dT > 0: # Heating jacket fluid if inletlocation == 'auto' or inletlocation == 'bottom': va = 0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = -0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: # cooling fluid if inletlocation == 'auto' or inletlocation == 'top': va = 0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = -0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = 0 vh = (vs*vo)**0.5 + va dg = (8/3.)**0.5*delta Res = vh*dg*rho/mu if muw: NuSL = (0.03*Res**0.75*Pr)/(1 + 1.74*(Pr-1)/Res**0.125)*(mu/muw)**0.14 else: NuSL = (0.03*Res**0.75*Pr)/(1 + 1.74*(Pr-1)/Res**0.125) return NuSL*k/dg
r'''Calculates average heat transfer coefficient for a jacket around a vessel according to [1]_ as described in [2]_. .. math:: Nu_{S,L} = \left[\frac{0.03Re_S^{0.75}Pr}{1 + \frac{1.74(Pr-1)} {Re_S^{0.125}}}\right]\left(\frac{\mu}{\mu_w}\right)^{0.14} d_g = \left(\frac{8}{3}\right)^{0.5}\delta v_h = (v_Sv_{inlet})^{0.5} + v_A v_{inlet} = \frac{Q}{\frac{\pi}{4}d_{inlet}^2} v_s = \frac{Q}{\frac{\pi}{4}(D_{jacket}^2 - D_{tank}^2)} For Radial inlets: .. math:: v_A = 0.5(2g H \beta\delta \Delta T)^{0.5} For Tangential inlets: .. math:: v_A = 0 Parameters ---------- m : float Mass flow rate of fluid, [kg/s] Dtank : float Outer diameter of tank or vessel surrounded by jacket, [m] Djacket : float Inner diameter of jacket surrounding a vessel or tank, [m] H : float Height of the vessel or tank, [m] Dinlet : float Inner diameter of inlet into the jacket, [m] rho : float Density of the fluid at Tm [kg/m^3] Cp : float Heat capacity of fluid at Tm [J/kg/K] k : float Thermal conductivity of fluid at Tm [W/m/K] mu : float Viscosity of fluid at Tm [Pa*s] muw : float, optional Viscosity of fluid at Tw [Pa*s] isobaric_expansion : float, optional Constant pressure expansivity of a fluid, [m^3/mol/K] dT : float, optional Temperature difference of fluid in jacket, [K] inlettype : str, optional Either 'tangential' or 'radial' inletlocation : str, optional Either 'top' or 'bottom' or 'auto' Returns ------- h : float Average heat transfer coefficient inside the jacket [W/m^2/K] Notes ----- If the fluid is heated and enters from the bottom, natural convection assists the heat transfer and the Grashof term is added; if it were to enter from the top, it would be subtracted. The situation is reversed if entry is from the top. Examples -------- Example as in [2]_, matches completely. >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... 
rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6) 2922.128124761829 Examples similar to in [2]_ but covering the other case: >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6, ... inlettype='radial', isobaric_expansion=0.000303) 3269.4389632666557 References ---------- .. [1] Lehrer, Isaac H. "Jacket-Side Nusselt Number." Industrial & Engineering Chemistry Process Design and Development 9, no. 4 (October 1, 1970): 553-58. doi:10.1021/i260036a010. .. [2] Gesellschaft, V. D. I., ed. VDI Heat Atlas. 2nd edition. Berlin; New York:: Springer, 2010.
Below is the the instruction that describes the task: ### Input: r'''Calculates average heat transfer coefficient for a jacket around a vessel according to [1]_ as described in [2]_. .. math:: Nu_{S,L} = \left[\frac{0.03Re_S^{0.75}Pr}{1 + \frac{1.74(Pr-1)} {Re_S^{0.125}}}\right]\left(\frac{\mu}{\mu_w}\right)^{0.14} d_g = \left(\frac{8}{3}\right)^{0.5}\delta v_h = (v_Sv_{inlet})^{0.5} + v_A v_{inlet} = \frac{Q}{\frac{\pi}{4}d_{inlet}^2} v_s = \frac{Q}{\frac{\pi}{4}(D_{jacket}^2 - D_{tank}^2)} For Radial inlets: .. math:: v_A = 0.5(2g H \beta\delta \Delta T)^{0.5} For Tangential inlets: .. math:: v_A = 0 Parameters ---------- m : float Mass flow rate of fluid, [kg/s] Dtank : float Outer diameter of tank or vessel surrounded by jacket, [m] Djacket : float Inner diameter of jacket surrounding a vessel or tank, [m] H : float Height of the vessel or tank, [m] Dinlet : float Inner diameter of inlet into the jacket, [m] rho : float Density of the fluid at Tm [kg/m^3] Cp : float Heat capacity of fluid at Tm [J/kg/K] k : float Thermal conductivity of fluid at Tm [W/m/K] mu : float Viscosity of fluid at Tm [Pa*s] muw : float, optional Viscosity of fluid at Tw [Pa*s] isobaric_expansion : float, optional Constant pressure expansivity of a fluid, [m^3/mol/K] dT : float, optional Temperature difference of fluid in jacket, [K] inlettype : str, optional Either 'tangential' or 'radial' inletlocation : str, optional Either 'top' or 'bottom' or 'auto' Returns ------- h : float Average heat transfer coefficient inside the jacket [W/m^2/K] Notes ----- If the fluid is heated and enters from the bottom, natural convection assists the heat transfer and the Grashof term is added; if it were to enter from the top, it would be subtracted. The situation is reversed if entry is from the top. Examples -------- Example as in [2]_, matches completely. >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... 
rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6) 2922.128124761829 Examples similar to in [2]_ but covering the other case: >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6, ... inlettype='radial', isobaric_expansion=0.000303) 3269.4389632666557 References ---------- .. [1] Lehrer, Isaac H. "Jacket-Side Nusselt Number." Industrial & Engineering Chemistry Process Design and Development 9, no. 4 (October 1, 1970): 553-58. doi:10.1021/i260036a010. .. [2] Gesellschaft, V. D. I., ed. VDI Heat Atlas. 2nd edition. Berlin; New York:: Springer, 2010. ### Response: def Lehrer(m, Dtank, Djacket, H, Dinlet, rho, Cp, k, mu, muw=None, isobaric_expansion=None, dT=None, inlettype='tangential', inletlocation='auto'): r'''Calculates average heat transfer coefficient for a jacket around a vessel according to [1]_ as described in [2]_. .. math:: Nu_{S,L} = \left[\frac{0.03Re_S^{0.75}Pr}{1 + \frac{1.74(Pr-1)} {Re_S^{0.125}}}\right]\left(\frac{\mu}{\mu_w}\right)^{0.14} d_g = \left(\frac{8}{3}\right)^{0.5}\delta v_h = (v_Sv_{inlet})^{0.5} + v_A v_{inlet} = \frac{Q}{\frac{\pi}{4}d_{inlet}^2} v_s = \frac{Q}{\frac{\pi}{4}(D_{jacket}^2 - D_{tank}^2)} For Radial inlets: .. math:: v_A = 0.5(2g H \beta\delta \Delta T)^{0.5} For Tangential inlets: .. 
math:: v_A = 0 Parameters ---------- m : float Mass flow rate of fluid, [kg/s] Dtank : float Outer diameter of tank or vessel surrounded by jacket, [m] Djacket : float Inner diameter of jacket surrounding a vessel or tank, [m] H : float Height of the vessel or tank, [m] Dinlet : float Inner diameter of inlet into the jacket, [m] rho : float Density of the fluid at Tm [kg/m^3] Cp : float Heat capacity of fluid at Tm [J/kg/K] k : float Thermal conductivity of fluid at Tm [W/m/K] mu : float Viscosity of fluid at Tm [Pa*s] muw : float, optional Viscosity of fluid at Tw [Pa*s] isobaric_expansion : float, optional Constant pressure expansivity of a fluid, [m^3/mol/K] dT : float, optional Temperature difference of fluid in jacket, [K] inlettype : str, optional Either 'tangential' or 'radial' inletlocation : str, optional Either 'top' or 'bottom' or 'auto' Returns ------- h : float Average heat transfer coefficient inside the jacket [W/m^2/K] Notes ----- If the fluid is heated and enters from the bottom, natural convection assists the heat transfer and the Grashof term is added; if it were to enter from the top, it would be subtracted. The situation is reversed if entry is from the top. Examples -------- Example as in [2]_, matches completely. >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6) 2922.128124761829 Examples similar to in [2]_ but covering the other case: >>> Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., ... rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6, ... inlettype='radial', isobaric_expansion=0.000303) 3269.4389632666557 References ---------- .. [1] Lehrer, Isaac H. "Jacket-Side Nusselt Number." Industrial & Engineering Chemistry Process Design and Development 9, no. 4 (October 1, 1970): 553-58. doi:10.1021/i260036a010. .. [2] Gesellschaft, V. D. I., ed. VDI Heat Atlas. 2nd edition. Berlin; New York:: Springer, 2010. ''' delta = (Djacket-Dtank)/2. 
Q = m/rho Pr = Cp*mu/k vs = Q/H/delta vo = Q/(pi/4*Dinlet**2) if dT and isobaric_expansion and inlettype == 'radial' and inletlocation: if dT > 0: # Heating jacket fluid if inletlocation == 'auto' or inletlocation == 'bottom': va = 0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = -0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: # cooling fluid if inletlocation == 'auto' or inletlocation == 'top': va = 0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = -0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5 else: va = 0 vh = (vs*vo)**0.5 + va dg = (8/3.)**0.5*delta Res = vh*dg*rho/mu if muw: NuSL = (0.03*Res**0.75*Pr)/(1 + 1.74*(Pr-1)/Res**0.125)*(mu/muw)**0.14 else: NuSL = (0.03*Res**0.75*Pr)/(1 + 1.74*(Pr-1)/Res**0.125) return NuSL*k/dg
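The Lehrer correlation in the entry above is plain arithmetic, so it can be exercised standalone. This sketch re-derives the jacket-side coefficient with the same formulas; `math.pi` and a standard-gravity constant `g = 9.80665` are assumptions here (the original imports its constants from elsewhere), though `g` is never reached by the tangential-inlet example reproduced below.

```python
import math

g = 9.80665  # standard gravity, m/s^2 (assumed; unused for tangential inlets)

def lehrer_h(m, Dtank, Djacket, H, Dinlet, rho, Cp, k, mu, muw=None,
             isobaric_expansion=None, dT=None,
             inlettype='tangential', inletlocation='auto'):
    """Jacket-side heat transfer coefficient, transcribed from the entry above."""
    delta = (Djacket - Dtank)/2.          # jacket gap width
    Q = m/rho                             # volumetric flow rate
    Pr = Cp*mu/k
    vs = Q/H/delta                        # axial velocity through the gap
    vo = Q/(math.pi/4*Dinlet**2)          # inlet velocity
    if dT and isobaric_expansion and inlettype == 'radial' and inletlocation:
        va = 0.5*(2*g*H*isobaric_expansion*abs(dT))**0.5
        if dT > 0:   # heating: buoyancy assists only for bottom entry
            if inletlocation not in ('auto', 'bottom'):
                va = -va
        else:        # cooling: buoyancy assists only for top entry
            if inletlocation not in ('auto', 'top'):
                va = -va
    else:
        va = 0.
    vh = (vs*vo)**0.5 + va                # characteristic velocity
    dg = (8/3.)**0.5*delta                # characteristic length
    Res = vh*dg*rho/mu
    NuSL = (0.03*Res**0.75*Pr)/(1 + 1.74*(Pr - 1)/Res**0.125)
    if muw:
        NuSL *= (mu/muw)**0.14            # wall-viscosity correction
    return NuSL*k/dg

# First docstring example (tangential inlet, so the buoyancy branch is skipped)
h = lehrer_h(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20.,
             rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6)
print(round(h, 2))  # the entry's own doctest reports ~2922.13 W/m^2/K
```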
def define_singleton(carrier, name, cls, cls_args = {}):
    """Creates a property with the given name, but the cls will be created only with the first call

    Args:
        carrier: an instance of the class where we want to reach the cls instance
        name (str): the variable name of the cls instance
        cls (type): the singleton object type
        cls_args (dict): optional dict for creating cls
    """
    instance_name = "__{}".format(name)
    setattr(carrier, instance_name, None)

    def getter(self):
        instance = getattr(carrier, instance_name)
        if instance is None:
            instance = cls(**cls_args)
            setattr(carrier, instance_name, instance)
        return instance

    setattr(type(carrier), name, property(getter))
Creates a property with the given name, but the cls will be created only with the first call

Args:
    carrier: an instance of the class where we want to reach the cls instance
    name (str): the variable name of the cls instance
    cls (type): the singleton object type
    cls_args (dict): optional dict for creating cls
Below is the instruction that describes the task:
### Input:
Creates a property with the given name, but the cls will be created only with the first call

Args:
    carrier: an instance of the class where we want to reach the cls instance
    name (str): the variable name of the cls instance
    cls (type): the singleton object type
    cls_args (dict): optional dict for creating cls
### Response:
def define_singleton(carrier, name, cls, cls_args = {}):
    """Creates a property with the given name, but the cls will be created only with the first call

    Args:
        carrier: an instance of the class where we want to reach the cls instance
        name (str): the variable name of the cls instance
        cls (type): the singleton object type
        cls_args (dict): optional dict for creating cls
    """
    instance_name = "__{}".format(name)
    setattr(carrier, instance_name, None)

    def getter(self):
        instance = getattr(carrier, instance_name)
        if instance is None:
            instance = cls(**cls_args)
            setattr(carrier, instance_name, instance)
        return instance

    setattr(type(carrier), name, property(getter))
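`define_singleton` above is self-contained pure Python, so its lazy-creation behaviour can be demonstrated directly. The `Carrier` and `Service` classes below are invented purely for illustration:

```python
def define_singleton(carrier, name, cls, cls_args={}):
    """Lazily-created singleton property (copied from the entry above)."""
    instance_name = "__{}".format(name)
    setattr(carrier, instance_name, None)

    def getter(self):
        instance = getattr(carrier, instance_name)
        if instance is None:
            instance = cls(**cls_args)       # built on first access only
            setattr(carrier, instance_name, instance)
        return instance

    setattr(type(carrier), name, property(getter))

created = []  # records every Service construction

class Service:
    def __init__(self, tag):
        created.append(tag)
        self.tag = tag

class Carrier:
    pass

c = Carrier()
define_singleton(c, "service", Service, {"tag": "db"})
assert created == []            # nothing constructed yet
assert c.service.tag == "db"    # first access builds the instance
assert c.service is c.service   # later accesses reuse the same object
assert created == ["db"]        # constructor ran exactly once
```

Note the property is installed on `type(carrier)` while the cached instance lives on the carrier object itself, which is why repeated access is cheap.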
def _spawn_fork_workers(self):
    """
    Start multiple workers via threads
    """
    thread = Thread(target=self._fork_workers, args=())
    thread.daemon = True
    thread.start()
Start multiple workers via threads
Below is the instruction that describes the task:
### Input:
Start multiple workers via threads
### Response:
def _spawn_fork_workers(self):
    """
    Start multiple workers via threads
    """
    thread = Thread(target=self._fork_workers, args=())
    thread.daemon = True
    thread.start()
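`_spawn_fork_workers` above is the standard daemon-thread launch pattern. A minimal self-contained sketch follows; the worker body and the `started` event are invented for illustration:

```python
from threading import Thread, Event

started = Event()

def _fork_workers():
    # placeholder for the real worker-forking loop
    started.set()

thread = Thread(target=_fork_workers)
thread.daemon = True   # daemon: doesn't block interpreter shutdown
thread.start()

started.wait(timeout=2)  # give the background thread a moment to run
assert started.is_set()
```

Marking the thread as a daemon before `start()` mirrors the original: the process can exit even if the worker loop never returns.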
def ip_access_control_list_mappings(self):
    """
    Access the ip_access_control_list_mappings

    :returns: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
    :rtype: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
    """
    if self._ip_access_control_list_mappings is None:
        self._ip_access_control_list_mappings = IpAccessControlListMappingList(
            self._version,
            account_sid=self._solution['account_sid'],
            domain_sid=self._solution['sid'],
        )
    return self._ip_access_control_list_mappings
Access the ip_access_control_list_mappings

:returns: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
:rtype: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
Below is the instruction that describes the task:
### Input:
Access the ip_access_control_list_mappings

:returns: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
:rtype: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
### Response:
def ip_access_control_list_mappings(self):
    """
    Access the ip_access_control_list_mappings

    :returns: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
    :rtype: twilio.rest.api.v2010.account.sip.domain.ip_access_control_list_mapping.IpAccessControlListMappingList
    """
    if self._ip_access_control_list_mappings is None:
        self._ip_access_control_list_mappings = IpAccessControlListMappingList(
            self._version,
            account_sid=self._solution['account_sid'],
            domain_sid=self._solution['sid'],
        )
    return self._ip_access_control_list_mappings
def create():
    """
    Factory method instantiating and returning the right wrapper.
    """
    # First, try to use ctypes.
    if ctypes:
        inotify = _CtypesLibcINotifyWrapper()
        if inotify.init():
            return inotify
    # Second, see if C extension is compiled.
    if inotify_syscalls:
        inotify = _INotifySyscallsWrapper()
        if inotify.init():
            return inotify
Factory method instantiating and returning the right wrapper.
Below is the instruction that describes the task:
### Input:
Factory method instantiating and returning the right wrapper.
### Response:
def create():
    """
    Factory method instantiating and returning the right wrapper.
    """
    # First, try to use ctypes.
    if ctypes:
        inotify = _CtypesLibcINotifyWrapper()
        if inotify.init():
            return inotify
    # Second, see if C extension is compiled.
    if inotify_syscalls:
        inotify = _INotifySyscallsWrapper()
        if inotify.init():
            return inotify
def score( self, data, metric="accuracy", break_ties="random", verbose=True, print_confusion_matrix=True, **kwargs, ): """Scores the predictive performance of the Classifier on all tasks Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,k} metric: A metric (string) with which to score performance or a list of such metrics break_ties: A tie-breaking policy (see Classifier._break_ties()) verbose: The verbosity for just this score method; it will not update the class config. print_confusion_matrix: Print confusion matrix (overwritten to False if verbose=False) Returns: scores: A (float) score or a list of such scores if kwarg metric is a list """ Y_p, Y, Y_s = self._get_predictions( data, break_ties=break_ties, return_probs=True, **kwargs ) # Evaluate on the specified metrics return_list = isinstance(metric, list) metric_list = metric if isinstance(metric, list) else [metric] scores = [] for metric in metric_list: score = metric_score(Y, Y_p, metric, probs=Y_s, ignore_in_gold=[0]) scores.append(score) if verbose: print(f"{metric.capitalize()}: {score:.3f}") # Optionally print confusion matrix if print_confusion_matrix and verbose: confusion_matrix(Y, Y_p, pretty_print=True) # If a single metric was given as a string (not list), return a float if len(scores) == 1 and not return_list: return scores[0] else: return scores
Scores the predictive performance of the Classifier on all tasks Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,k} metric: A metric (string) with which to score performance or a list of such metrics break_ties: A tie-breaking policy (see Classifier._break_ties()) verbose: The verbosity for just this score method; it will not update the class config. print_confusion_matrix: Print confusion matrix (overwritten to False if verbose=False) Returns: scores: A (float) score or a list of such scores if kwarg metric is a list
Below is the the instruction that describes the task: ### Input: Scores the predictive performance of the Classifier on all tasks Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,k} metric: A metric (string) with which to score performance or a list of such metrics break_ties: A tie-breaking policy (see Classifier._break_ties()) verbose: The verbosity for just this score method; it will not update the class config. print_confusion_matrix: Print confusion matrix (overwritten to False if verbose=False) Returns: scores: A (float) score or a list of such scores if kwarg metric is a list ### Response: def score( self, data, metric="accuracy", break_ties="random", verbose=True, print_confusion_matrix=True, **kwargs, ): """Scores the predictive performance of the Classifier on all tasks Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,k} metric: A metric (string) with which to score performance or a list of such metrics break_ties: A tie-breaking policy (see Classifier._break_ties()) verbose: The verbosity for just this score method; it will not update the class config. 
print_confusion_matrix: Print confusion matrix (overwritten to False if verbose=False) Returns: scores: A (float) score or a list of such scores if kwarg metric is a list """ Y_p, Y, Y_s = self._get_predictions( data, break_ties=break_ties, return_probs=True, **kwargs ) # Evaluate on the specified metrics return_list = isinstance(metric, list) metric_list = metric if isinstance(metric, list) else [metric] scores = [] for metric in metric_list: score = metric_score(Y, Y_p, metric, probs=Y_s, ignore_in_gold=[0]) scores.append(score) if verbose: print(f"{metric.capitalize()}: {score:.3f}") # Optionally print confusion matrix if print_confusion_matrix and verbose: confusion_matrix(Y, Y_p, pretty_print=True) # If a single metric was given as a string (not list), return a float if len(scores) == 1 and not return_list: return scores[0] else: return scores
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL and not an Engine,
    though an Engine is acceptable here as well. By skipping the Engine
    creation we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    project_path = get_project_path()
    project_config = get_project_config(project_path)
    backend = get_backend(project_path, project_config, initialize_db = False)
    url = str(backend.engine.url)
    with backend.transaction():
        context.configure(
            connection=backend.connection,
            url=url,
            target_metadata=backend.metadata,
            literal_binds=True)
        with context.begin_transaction():
            context.run_migrations()
Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output.
Below is the instruction that describes the task:
### Input:
Run migrations in 'offline' mode.

This configures the context with just a URL and not an Engine,
though an Engine is acceptable here as well. By skipping the Engine
creation we don't even need a DBAPI to be available.

Calls to context.execute() here emit the given string to the
script output.
### Response:
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL and not an Engine,
    though an Engine is acceptable here as well. By skipping the Engine
    creation we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    project_path = get_project_path()
    project_config = get_project_config(project_path)
    backend = get_backend(project_path, project_config, initialize_db = False)
    url = str(backend.engine.url)
    with backend.transaction():
        context.configure(
            connection=backend.connection,
            url=url,
            target_metadata=backend.metadata,
            literal_binds=True)
        with context.begin_transaction():
            context.run_migrations()
def collides_axisaligned_rect(self, other):
    """Returns collision with axis aligned other rect"""
    # Shift both rects so that self is centered at origin
    self_shifted = RotoOriginRect(self.width, self.height, -self.angle)
    s_a = self.sin_a()
    c_a = self.cos_a()
    center_x = self.x + self.width / 2.0 * c_a - self.height / 2.0 * s_a
    center_y = self.y - self.height / 2.0 * c_a - self.width / 2.0 * s_a
    other_shifted = Rect(other.x - center_x, other.y - center_y, other.width, other.height)
    # Calculate collision
    return self_shifted.collides(other_shifted)
Returns collision with axis aligned other rect
Below is the instruction that describes the task:
### Input:
Returns collision with axis aligned other rect
### Response:
def collides_axisaligned_rect(self, other):
    """Returns collision with axis aligned other rect"""
    # Shift both rects so that self is centered at origin
    self_shifted = RotoOriginRect(self.width, self.height, -self.angle)
    s_a = self.sin_a()
    c_a = self.cos_a()
    center_x = self.x + self.width / 2.0 * c_a - self.height / 2.0 * s_a
    center_y = self.y - self.height / 2.0 * c_a - self.width / 2.0 * s_a
    other_shifted = Rect(other.x - center_x, other.y - center_y, other.width, other.height)
    # Calculate collision
    return self_shifted.collides(other_shifted)
def stop_service(name):
    """
    Stop the service given by name.

    @warn: This method requires UAC elevation in Windows Vista and above.

    @see: L{get_services}, L{get_active_services}, L{start_service},
        L{pause_service}, L{resume_service}
    """
    with win32.OpenSCManager(
        dwDesiredAccess = win32.SC_MANAGER_CONNECT
    ) as hSCManager:
        with win32.OpenService(hSCManager, name,
            dwDesiredAccess = win32.SERVICE_STOP
        ) as hService:
            win32.ControlService(hService, win32.SERVICE_CONTROL_STOP)
Stop the service given by name.

@warn: This method requires UAC elevation in Windows Vista and above.

@see: L{get_services}, L{get_active_services}, L{start_service},
    L{pause_service}, L{resume_service}
Below is the instruction that describes the task:
### Input:
Stop the service given by name.

@warn: This method requires UAC elevation in Windows Vista and above.

@see: L{get_services}, L{get_active_services}, L{start_service},
    L{pause_service}, L{resume_service}
### Response:
def stop_service(name):
    """
    Stop the service given by name.

    @warn: This method requires UAC elevation in Windows Vista and above.

    @see: L{get_services}, L{get_active_services}, L{start_service},
        L{pause_service}, L{resume_service}
    """
    with win32.OpenSCManager(
        dwDesiredAccess = win32.SC_MANAGER_CONNECT
    ) as hSCManager:
        with win32.OpenService(hSCManager, name,
            dwDesiredAccess = win32.SERVICE_STOP
        ) as hService:
            win32.ControlService(hService, win32.SERVICE_CONTROL_STOP)
def ossos_release_with_metadata():
    """
    Wrap the objects from the Version Releases together with the objects
    instantiated from fitting their mpc lines
    """
    # discoveries = ossos_release_parser()
    discoveries = []
    observations = ossos_discoveries()
    for obj in observations:
        discov = [n for n in obj[0].mpc_observations if n.discovery.is_discovery][0]
        tno = parameters.tno()
        tno.dist = obj[1].distance
        tno.ra_discov = discov.coordinate.ra.degrees
        tno.mag = discov.mag
        tno.name = discov.provisional_name
        discoveries.append(tno)

    # for obj in discoveries:
    #     observation = [n for n in observations if n.observations[-1].provisional_name == obj.name][0]
    #     for obs in observation.observations:
    #         if obs.discovery.is_discovery:
    #             if obj.mag is not None:
    #                 H = obj.mag + 2.5 * math.log10(1. / ((obj.dist ** 2) * ((obj.dist - 1.) ** 2)))
    #             else:
    #                 H = None
    #             obj.H = H
    #             obj.T = observation.T
    #             obj.discovery_date = obs.date
    #             obj.observations = observation

    return discoveries
Wrap the objects from the Version Releases together with the objects instantiated from fitting their mpc lines
Below is the instruction that describes the task:
### Input:
Wrap the objects from the Version Releases together with the objects
instantiated from fitting their mpc lines
### Response:
def ossos_release_with_metadata():
    """
    Wrap the objects from the Version Releases together with the objects
    instantiated from fitting their mpc lines
    """
    # discoveries = ossos_release_parser()
    discoveries = []
    observations = ossos_discoveries()
    for obj in observations:
        discov = [n for n in obj[0].mpc_observations if n.discovery.is_discovery][0]
        tno = parameters.tno()
        tno.dist = obj[1].distance
        tno.ra_discov = discov.coordinate.ra.degrees
        tno.mag = discov.mag
        tno.name = discov.provisional_name
        discoveries.append(tno)

    # for obj in discoveries:
    #     observation = [n for n in observations if n.observations[-1].provisional_name == obj.name][0]
    #     for obs in observation.observations:
    #         if obs.discovery.is_discovery:
    #             if obj.mag is not None:
    #                 H = obj.mag + 2.5 * math.log10(1. / ((obj.dist ** 2) * ((obj.dist - 1.) ** 2)))
    #             else:
    #                 H = None
    #             obj.H = H
    #             obj.T = observation.T
    #             obj.discovery_date = obs.date
    #             obj.observations = observation

    return discoveries
def stop(self, bIgnoreExceptions = True):
    """
    Stops debugging all processes.

    If the kill on exit mode is on, debugged processes are killed when the
    debugger is stopped. Otherwise when the debugger stops it detaches from
    all debugged processes and leaves them running (default). For more
    details see: L{__init__}

    @note: This method is better than L{detach_from_all} because it can
        gracefully handle the last debugging event before detaching.

    @type  bIgnoreExceptions: bool
    @param bIgnoreExceptions: C{True} to ignore any exceptions that may be
        raised when detaching.
    """

    # Determine if we have a last debug event that we need to continue.
    try:
        event = self.lastEvent
        has_event = bool(event)
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)
        has_event = False

    # If we do...
    if has_event:

        # Disable all breakpoints in the process before resuming execution.
        try:
            pid = event.get_pid()
            self.disable_process_breakpoints(pid)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

        # Disable all breakpoints in the thread before resuming execution.
        try:
            tid = event.get_tid()
            self.disable_thread_breakpoints(tid)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

        # Resume execution.
        try:
            event.continueDebugEvent = win32.DBG_CONTINUE
            self.cont(event)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

    # Detach from or kill all debuggees.
    try:
        if self.__bKillOnExit:
            self.kill_all(bIgnoreExceptions)
        else:
            self.detach_from_all(bIgnoreExceptions)
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)

    # Cleanup the process snapshots.
    try:
        self.system.clear()
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)

    # Close all Win32 handles the Python garbage collector failed to close.
    self.force_garbage_collection(bIgnoreExceptions)
Stops debugging all processes. If the kill on exit mode is on, debugged processes are killed when the debugger is stopped. Otherwise when the debugger stops it detaches from all debugged processes and leaves them running (default). For more details see: L{__init__} @note: This method is better than L{detach_from_all} because it can gracefully handle the last debugging event before detaching. @type bIgnoreExceptions: bool @param bIgnoreExceptions: C{True} to ignore any exceptions that may be raised when detaching.
Below is the instruction that describes the task:
### Input:
Stops debugging all processes.

If the kill on exit mode is on, debugged processes are killed when the
debugger is stopped. Otherwise when the debugger stops it detaches from
all debugged processes and leaves them running (default). For more
details see: L{__init__}

@note: This method is better than L{detach_from_all} because it can
    gracefully handle the last debugging event before detaching.

@type  bIgnoreExceptions: bool
@param bIgnoreExceptions: C{True} to ignore any exceptions that may be
    raised when detaching.
### Response:
def stop(self, bIgnoreExceptions = True):
    """
    Stops debugging all processes.

    If the kill on exit mode is on, debugged processes are killed when the
    debugger is stopped. Otherwise when the debugger stops it detaches from
    all debugged processes and leaves them running (default). For more
    details see: L{__init__}

    @note: This method is better than L{detach_from_all} because it can
        gracefully handle the last debugging event before detaching.

    @type  bIgnoreExceptions: bool
    @param bIgnoreExceptions: C{True} to ignore any exceptions that may be
        raised when detaching.
    """

    # Determine if we have a last debug event that we need to continue.
    try:
        event = self.lastEvent
        has_event = bool(event)
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)
        has_event = False

    # If we do...
    if has_event:

        # Disable all breakpoints in the process before resuming execution.
        try:
            pid = event.get_pid()
            self.disable_process_breakpoints(pid)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

        # Disable all breakpoints in the thread before resuming execution.
        try:
            tid = event.get_tid()
            self.disable_thread_breakpoints(tid)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

        # Resume execution.
        try:
            event.continueDebugEvent = win32.DBG_CONTINUE
            self.cont(event)
        except Exception:
            if not bIgnoreExceptions:
                raise
            e = sys.exc_info()[1]
            warnings.warn(str(e), RuntimeWarning)

    # Detach from or kill all debuggees.
    try:
        if self.__bKillOnExit:
            self.kill_all(bIgnoreExceptions)
        else:
            self.detach_from_all(bIgnoreExceptions)
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)

    # Cleanup the process snapshots.
    try:
        self.system.clear()
    except Exception:
        if not bIgnoreExceptions:
            raise
        e = sys.exc_info()[1]
        warnings.warn(str(e), RuntimeWarning)

    # Close all Win32 handles the Python garbage collector failed to close.
    self.force_garbage_collection(bIgnoreExceptions)
def _parseBoundImportDirectory(self, rva, size, magic = consts.PE32):
    """
    Parses the bound import directory.

    @type rva: int
    @param rva: The RVA where the bound import directory starts.

    @type size: int
    @param size: The size of the bound import directory.

    @type magic: int
    @param magic: (Optional) The type of PE. This value could be L{consts.PE32} or L{consts.PE64}.

    @rtype: L{ImageBoundImportDescriptor}
    @return: A new L{ImageBoundImportDescriptor} object.
    """
    data = self.getDataAtRva(rva, size)
    rd = utils.ReadData(data)
    boundImportDirectory = directories.ImageBoundImportDescriptor.parse(rd)

    # parse the name of every bounded import.
    for i in range(len(boundImportDirectory) - 1):
        if hasattr(boundImportDirectory[i], "forwarderRefsList"):
            if boundImportDirectory[i].forwarderRefsList:
                for forwarderRefEntry in boundImportDirectory[i].forwarderRefsList:
                    offset = forwarderRefEntry.offsetModuleName.value
                    forwarderRefEntry.moduleName = self.readStringAtRva(offset + rva)

        offset = boundImportDirectory[i].offsetModuleName.value
        boundImportDirectory[i].moduleName = self.readStringAtRva(offset + rva)

    return boundImportDirectory
Parses the bound import directory. @type rva: int @param rva: The RVA where the bound import directory starts. @type size: int @param size: The size of the bound import directory. @type magic: int @param magic: (Optional) The type of PE. This value could be L{consts.PE32} or L{consts.PE64}. @rtype: L{ImageBoundImportDescriptor} @return: A new L{ImageBoundImportDescriptor} object.
Below is the instruction that describes the task:
### Input:
Parses the bound import directory.

@type rva: int
@param rva: The RVA where the bound import directory starts.

@type size: int
@param size: The size of the bound import directory.

@type magic: int
@param magic: (Optional) The type of PE. This value could be L{consts.PE32} or L{consts.PE64}.

@rtype: L{ImageBoundImportDescriptor}
@return: A new L{ImageBoundImportDescriptor} object.
### Response:
def _parseBoundImportDirectory(self, rva, size, magic = consts.PE32):
    """
    Parses the bound import directory.

    @type rva: int
    @param rva: The RVA where the bound import directory starts.

    @type size: int
    @param size: The size of the bound import directory.

    @type magic: int
    @param magic: (Optional) The type of PE. This value could be L{consts.PE32} or L{consts.PE64}.

    @rtype: L{ImageBoundImportDescriptor}
    @return: A new L{ImageBoundImportDescriptor} object.
    """
    data = self.getDataAtRva(rva, size)
    rd = utils.ReadData(data)
    boundImportDirectory = directories.ImageBoundImportDescriptor.parse(rd)

    # parse the name of every bounded import.
    for i in range(len(boundImportDirectory) - 1):
        if hasattr(boundImportDirectory[i], "forwarderRefsList"):
            if boundImportDirectory[i].forwarderRefsList:
                for forwarderRefEntry in boundImportDirectory[i].forwarderRefsList:
                    offset = forwarderRefEntry.offsetModuleName.value
                    forwarderRefEntry.moduleName = self.readStringAtRva(offset + rva)

        offset = boundImportDirectory[i].offsetModuleName.value
        boundImportDirectory[i].moduleName = self.readStringAtRva(offset + rva)

    return boundImportDirectory
def reflect_ghost(self, p0):
    """This method creates the ghost point p0', namely p0 reflected along the
    edge p1--p2, and the point q at the perpendicular intersection of the
    reflection.

            p0
          _/|\\__
        _/  |   \\__
       /    |      \\
      p1----|q-------p2
       \\_  |      __/
         \\_|   __/
           \\| /
            p0'
    """
    # Instead of self.p1, one could take any point on the line p1--p2.
    dist = self.p1 - p0
    alpha = numpy.einsum("ij, ij->i", dist, self.mirror_edge)
    q = dist - (alpha / self.beta)[:, None] * self.mirror_edge
    return p0 + 2 * q
This method creates the ghost point p0', namely p0 reflected along the edge p1--p2, and the point q at the perpendicular intersection of the reflection. p0 _/| \\__ _/ | \\__ / | \\ p1----|q-------p2 \\_ | __/ \\_ | __/ \\| / p0'
Below is the instruction that describes the task:
### Input:
This method creates the ghost point p0', namely p0 reflected along the
edge p1--p2, and the point q at the perpendicular intersection of the
reflection.

        p0
      _/|\\__
    _/  |   \\__
   /    |      \\
  p1----|q-------p2
   \\_  |      __/
     \\_|   __/
       \\| /
        p0'
### Response:
def reflect_ghost(self, p0):
    """This method creates the ghost point p0', namely p0 reflected along the
    edge p1--p2, and the point q at the perpendicular intersection of the
    reflection.

            p0
          _/|\\__
        _/  |   \\__
       /    |      \\
      p1----|q-------p2
       \\_  |      __/
         \\_|   __/
           \\| /
            p0'
    """
    # Instead of self.p1, one could take any point on the line p1--p2.
    dist = self.p1 - p0
    alpha = numpy.einsum("ij, ij->i", dist, self.mirror_edge)
    q = dist - (alpha / self.beta)[:, None] * self.mirror_edge
    return p0 + 2 * q
def get_secret(
        end_state: NettingChannelEndState,
        secrethash: SecretHash,
) -> Optional[Secret]:
    """Returns `secret` if the `secrethash` is for a lock with a known secret."""
    partial_unlock_proof = end_state.secrethashes_to_unlockedlocks.get(secrethash)

    if partial_unlock_proof is None:
        partial_unlock_proof = end_state.secrethashes_to_onchain_unlockedlocks.get(secrethash)

    if partial_unlock_proof is not None:
        return partial_unlock_proof.secret

    return None
Returns `secret` if the `secrethash` is for a lock with a known secret.
Below is the instruction that describes the task:
### Input:
Returns `secret` if the `secrethash` is for a lock with a known secret.
### Response:
def get_secret(
        end_state: NettingChannelEndState,
        secrethash: SecretHash,
) -> Optional[Secret]:
    """Returns `secret` if the `secrethash` is for a lock with a known secret."""
    partial_unlock_proof = end_state.secrethashes_to_unlockedlocks.get(secrethash)

    if partial_unlock_proof is None:
        partial_unlock_proof = end_state.secrethashes_to_onchain_unlockedlocks.get(secrethash)

    if partial_unlock_proof is not None:
        return partial_unlock_proof.secret

    return None
def precompute_optimzation_S(laplacian_matrix,n_samples,relaxation_kwds):
    """compute Rk, A, ATAinv, neighbors and pairs for projected mode"""
    relaxation_kwds.setdefault('presave',False)
    relaxation_kwds.setdefault('presave_name','pre_comp_current.npy')
    relaxation_kwds.setdefault('verbose',False)

    if relaxation_kwds['verbose']:
        print ('Pre-computing quantities Y to S conversions')
        print ('Making A and Pairs')
    A, pairs = makeA(laplacian_matrix)

    if relaxation_kwds['verbose']:
        print ('Making Rk and nbhds')
    Rk_tensor, nbk = compute_Rk(laplacian_matrix,A,n_samples)

    # TODO: not quite sure what is ATAinv? why we need this?
    ATAinv = np.linalg.pinv(A.T.dot(A).todense())
    if relaxation_kwds['verbose']:
        print ('Finish calculating pseudo inverse')

    if relaxation_kwds['presave']:
        raise NotImplementedError('Not yet implemented presave')

    return { 'RK': Rk_tensor, 'nbk': nbk, 'ATAinv': ATAinv, 'pairs': pairs, 'A': A }
compute Rk, A, ATAinv, neighbors and pairs for projected mode
Below is the instruction that describes the task:
### Input:
compute Rk, A, ATAinv, neighbors and pairs for projected mode
### Response:
def precompute_optimzation_S(laplacian_matrix,n_samples,relaxation_kwds):
    """compute Rk, A, ATAinv, neighbors and pairs for projected mode"""
    relaxation_kwds.setdefault('presave',False)
    relaxation_kwds.setdefault('presave_name','pre_comp_current.npy')
    relaxation_kwds.setdefault('verbose',False)

    if relaxation_kwds['verbose']:
        print ('Pre-computing quantities Y to S conversions')
        print ('Making A and Pairs')
    A, pairs = makeA(laplacian_matrix)

    if relaxation_kwds['verbose']:
        print ('Making Rk and nbhds')
    Rk_tensor, nbk = compute_Rk(laplacian_matrix,A,n_samples)

    # TODO: not quite sure what is ATAinv? why we need this?
    ATAinv = np.linalg.pinv(A.T.dot(A).todense())
    if relaxation_kwds['verbose']:
        print ('Finish calculating pseudo inverse')

    if relaxation_kwds['presave']:
        raise NotImplementedError('Not yet implemented presave')

    return { 'RK': Rk_tensor, 'nbk': nbk, 'ATAinv': ATAinv, 'pairs': pairs, 'A': A }
def zremrangebyrank(self, name, rank_start, rank_end):
    """
    Remove the elements of the zset which have rank in the range
    [rank_start, rank_end].

    .. note:: The range is [``rank_start``, ``rank_end``]

    :param string name: the zset name
    :param int rank_start: zero or positive, the start position
    :param int rank_end: zero or positive, the end position
    :return: the number of deleted elements
    :rtype: int

    >>> ssdb.zremrangebyrank('zset_1', 0, 2)
    3
    >>> ssdb.zremrangebyrank('zset_1', 1, 4)
    5
    >>> ssdb.zremrangebyrank('zset_1', 0, 0)
    1
    """
    rank_start = get_nonnegative_integer('rank_start', rank_start)
    rank_end = get_nonnegative_integer('rank_end', rank_end)
    return self.execute_command('zremrangebyrank', name, rank_start, rank_end)
Remove the elements of the zset which have rank in the range
[rank_start, rank_end].

.. note:: The range is [``rank_start``, ``rank_end``]

:param string name: the zset name
:param int rank_start: zero or positive, the start position
:param int rank_end: zero or positive, the end position
:return: the number of deleted elements
:rtype: int

>>> ssdb.zremrangebyrank('zset_1', 0, 2)
3
>>> ssdb.zremrangebyrank('zset_1', 1, 4)
5
>>> ssdb.zremrangebyrank('zset_1', 0, 0)
1
Below is the instruction that describes the task:
### Input:
Remove the elements of the zset which have rank in the range
[rank_start, rank_end].

.. note:: The range is [``rank_start``, ``rank_end``]

:param string name: the zset name
:param int rank_start: zero or positive, the start position
:param int rank_end: zero or positive, the end position
:return: the number of deleted elements
:rtype: int

>>> ssdb.zremrangebyrank('zset_1', 0, 2)
3
>>> ssdb.zremrangebyrank('zset_1', 1, 4)
5
>>> ssdb.zremrangebyrank('zset_1', 0, 0)
1
### Response:
def zremrangebyrank(self, name, rank_start, rank_end):
    """
    Remove the elements of the zset which have rank in the range
    [rank_start, rank_end].

    .. note:: The range is [``rank_start``, ``rank_end``]

    :param string name: the zset name
    :param int rank_start: zero or positive, the start position
    :param int rank_end: zero or positive, the end position
    :return: the number of deleted elements
    :rtype: int

    >>> ssdb.zremrangebyrank('zset_1', 0, 2)
    3
    >>> ssdb.zremrangebyrank('zset_1', 1, 4)
    5
    >>> ssdb.zremrangebyrank('zset_1', 0, 0)
    1
    """
    rank_start = get_nonnegative_integer('rank_start', rank_start)
    rank_end = get_nonnegative_integer('rank_end', rank_end)
    return self.execute_command('zremrangebyrank', name, rank_start, rank_end)
def count_above_mean(x):
    """
    Returns the number of values in x that are higher than the mean of x

    :param x: the time series to calculate the feature of
    :type x: numpy.ndarray
    :return: the value of this feature
    :return type: float
    """
    m = np.mean(x)
    return np.where(x > m)[0].size
Returns the number of values in x that are higher than the mean of x :param x: the time series to calculate the feature of :type x: numpy.ndarray :return: the value of this feature :return type: float
Below is the instruction that describes the task:
### Input:
Returns the number of values in x that are higher than the mean of x

:param x: the time series to calculate the feature of
:type x: numpy.ndarray
:return: the value of this feature
:return type: float
### Response:
def count_above_mean(x):
    """
    Returns the number of values in x that are higher than the mean of x

    :param x: the time series to calculate the feature of
    :type x: numpy.ndarray
    :return: the value of this feature
    :return type: float
    """
    m = np.mean(x)
    return np.where(x > m)[0].size
def _recurse(self, inputs, output, depth, max_depth):
    '''We work out all combinations using this internal recursion method'''
    if depth < max_depth:
        for index, option in enumerate(inputs):
            my_output = list(output)
            my_output.append(option)
            self._recurse(inputs[index + 1:], my_output, depth + 1, max_depth)
    else:
        self._options.append(output)
We work out all combinations using this internal recursion method
Below is the instruction that describes the task:
### Input:
We work out all combinations using this internal recursion method
### Response:
def _recurse(self, inputs, output, depth, max_depth):
    '''We work out all combinations using this internal recursion method'''
    if depth < max_depth:
        for index, option in enumerate(inputs):
            my_output = list(output)
            my_output.append(option)
            self._recurse(inputs[index + 1:], my_output, depth + 1, max_depth)
    else:
        self._options.append(output)
def should_remove(self, point, node):
    """ checks if self's point (and maybe identity) matches """
    if not self.data == point:
        return False

    return (node is None) or (node is self)
checks if self's point (and maybe identity) matches
Below is the instruction that describes the task:
### Input:
checks if self's point (and maybe identity) matches
### Response:
def should_remove(self, point, node):
    """ checks if self's point (and maybe identity) matches """
    if not self.data == point:
        return False

    return (node is None) or (node is self)
def infer_dtype_from_scalar(val, pandas_dtype=False):
    """
    interpret the dtype from a scalar

    Parameters
    ----------
    pandas_dtype : bool, default False
        whether to infer dtype including pandas extension types.
        If False, scalar belongs to pandas extension types is inferred as
        object
    """

    dtype = np.object_

    # a 1-element ndarray
    if isinstance(val, np.ndarray):
        msg = "invalid ndarray passed to infer_dtype_from_scalar"
        if val.ndim != 0:
            raise ValueError(msg)

        dtype = val.dtype
        val = val.item()

    elif isinstance(val, str):

        # If we create an empty array using a string to infer
        # the dtype, NumPy will only allocate one character per entry
        # so this is kind of bad. Alternately we could use np.repeat
        # instead of np.empty (but then you still don't want things
        # coming out as np.str_!

        dtype = np.object_

    elif isinstance(val, (np.datetime64, datetime)):
        val = tslibs.Timestamp(val)
        if val is tslibs.NaT or val.tz is None:
            dtype = np.dtype('M8[ns]')
        else:
            if pandas_dtype:
                dtype = DatetimeTZDtype(unit='ns', tz=val.tz)
            else:
                # return datetimetz as object
                return np.object_, val
        val = val.value

    elif isinstance(val, (np.timedelta64, timedelta)):
        val = tslibs.Timedelta(val).value
        dtype = np.dtype('m8[ns]')

    elif is_bool(val):
        dtype = np.bool_

    elif is_integer(val):
        if isinstance(val, np.integer):
            dtype = type(val)
        else:
            dtype = np.int64

    elif is_float(val):
        if isinstance(val, np.floating):
            dtype = type(val)
        else:
            dtype = np.float64

    elif is_complex(val):
        dtype = np.complex_

    elif pandas_dtype:
        if lib.is_period(val):
            dtype = PeriodDtype(freq=val.freq)
            val = val.ordinal

    return dtype, val
interpret the dtype from a scalar Parameters ---------- pandas_dtype : bool, default False whether to infer dtype including pandas extension types. If False, scalar belongs to pandas extension types is inferred as object
Below is the the instruction that describes the task: ### Input: interpret the dtype from a scalar Parameters ---------- pandas_dtype : bool, default False whether to infer dtype including pandas extension types. If False, scalar belongs to pandas extension types is inferred as object ### Response: def infer_dtype_from_scalar(val, pandas_dtype=False): """ interpret the dtype from a scalar Parameters ---------- pandas_dtype : bool, default False whether to infer dtype including pandas extension types. If False, scalar belongs to pandas extension types is inferred as object """ dtype = np.object_ # a 1-element ndarray if isinstance(val, np.ndarray): msg = "invalid ndarray passed to infer_dtype_from_scalar" if val.ndim != 0: raise ValueError(msg) dtype = val.dtype val = val.item() elif isinstance(val, str): # If we create an empty array using a string to infer # the dtype, NumPy will only allocate one character per entry # so this is kind of bad. Alternately we could use np.repeat # instead of np.empty (but then you still don't want things # coming out as np.str_! dtype = np.object_ elif isinstance(val, (np.datetime64, datetime)): val = tslibs.Timestamp(val) if val is tslibs.NaT or val.tz is None: dtype = np.dtype('M8[ns]') else: if pandas_dtype: dtype = DatetimeTZDtype(unit='ns', tz=val.tz) else: # return datetimetz as object return np.object_, val val = val.value elif isinstance(val, (np.timedelta64, timedelta)): val = tslibs.Timedelta(val).value dtype = np.dtype('m8[ns]') elif is_bool(val): dtype = np.bool_ elif is_integer(val): if isinstance(val, np.integer): dtype = type(val) else: dtype = np.int64 elif is_float(val): if isinstance(val, np.floating): dtype = type(val) else: dtype = np.float64 elif is_complex(val): dtype = np.complex_ elif pandas_dtype: if lib.is_period(val): dtype = PeriodDtype(freq=val.freq) val = val.ordinal return dtype, val
def play(self):
    """
    Starts game and returns one of 3 results. Iterates between methods
    ``white_move()`` and ``black_move()`` until game ends. Each method
    calls the respective player's ``generate_move()`` method.

    :rtype: int
    """
    colors = [lambda: self.white_move(), lambda: self.black_move()]
    colors = itertools.cycle(colors)
    while True:
        color_fn = next(colors)
        if game_state.no_moves(self.position):
            if self.position.get_king(color.white).in_check(self.position):
                return 1
            elif self.position.get_king(color.black).in_check(self.position):
                return 0
            else:
                return 0.5
        color_fn()
Starts game and returns one of 3 results. Iterates between methods
``white_move()`` and ``black_move()`` until game ends. Each method
calls the respective player's ``generate_move()`` method.

:rtype: int
Below is the instruction that describes the task:
### Input:
Starts game and returns one of 3 results. Iterates between methods
``white_move()`` and ``black_move()`` until game ends. Each method
calls the respective player's ``generate_move()`` method.

:rtype: int
### Response:
def play(self):
    """
    Starts game and returns one of 3 results. Iterates between methods
    ``white_move()`` and ``black_move()`` until game ends. Each method
    calls the respective player's ``generate_move()`` method.

    :rtype: int
    """
    colors = [lambda: self.white_move(), lambda: self.black_move()]
    colors = itertools.cycle(colors)
    while True:
        color_fn = next(colors)
        if game_state.no_moves(self.position):
            if self.position.get_king(color.white).in_check(self.position):
                return 1
            elif self.position.get_king(color.black).in_check(self.position):
                return 0
            else:
                return 0.5
        color_fn()
def set_user_profile_photo(
    self,
    photo: str
) -> bool:
    """Use this method to set a new profile photo.

    This method only works for Users.
    Bots profile photos must be set using BotFather.

    Args:
        photo (``str``):
            Profile photo to set.
            Pass a file path as string to upload a new photo that exists on your local machine.

    Returns:
        True on success.

    Raises:
        :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
    """
    return bool(
        self.send(
            functions.photos.UploadProfilePhoto(
                file=self.save_file(photo)
            )
        )
    )
Use this method to set a new profile photo. This method only works for Users. Bots profile photos must be set using BotFather. Args: photo (``str``): Profile photo to set. Pass a file path as string to upload a new photo that exists on your local machine. Returns: True on success. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
Below is the instruction that describes the task:
### Input:
Use this method to set a new profile photo.

This method only works for Users.
Bots profile photos must be set using BotFather.

Args:
    photo (``str``):
        Profile photo to set.
        Pass a file path as string to upload a new photo that exists on your local machine.

Returns:
    True on success.

Raises:
    :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
### Response:
def set_user_profile_photo(
    self,
    photo: str
) -> bool:
    """Use this method to set a new profile photo.

    This method only works for Users.
    Bots profile photos must be set using BotFather.

    Args:
        photo (``str``):
            Profile photo to set.
            Pass a file path as string to upload a new photo that exists on your local machine.

    Returns:
        True on success.

    Raises:
        :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error.
    """
    return bool(
        self.send(
            functions.photos.UploadProfilePhoto(
                file=self.save_file(photo)
            )
        )
    )
async def connect(self, conn_id, connection_string):
    """Connect to a device.

    See :meth:`AbstractDeviceAdapter.connect`.
    """

    self._logger.info("Inside connect, conn_id=%d, conn_string=%s", conn_id, connection_string)

    try:
        self._setup_connection(conn_id, connection_string)
        resp = await self._execute(self._adapter.connect_sync, conn_id, connection_string)
        _raise_error(conn_id, 'connect', resp)
    except:
        self._teardown_connection(conn_id, force=True)
        raise
Connect to a device. See :meth:`AbstractDeviceAdapter.connect`.
Below is the instruction that describes the task:
### Input:
Connect to a device.

See :meth:`AbstractDeviceAdapter.connect`.
### Response:
async def connect(self, conn_id, connection_string):
    """Connect to a device.

    See :meth:`AbstractDeviceAdapter.connect`.
    """

    self._logger.info("Inside connect, conn_id=%d, conn_string=%s", conn_id, connection_string)

    try:
        self._setup_connection(conn_id, connection_string)
        resp = await self._execute(self._adapter.connect_sync, conn_id, connection_string)
        _raise_error(conn_id, 'connect', resp)
    except:
        self._teardown_connection(conn_id, force=True)
        raise
def get_token(self, grant_type, client_id, client_secret, redirect_uri,
              code, **params):
    """Generate access token HTTP response.

    :param grant_type: Desired grant type. Must be "authorization_code".
    :type grant_type: str
    :param client_id: Client ID.
    :type client_id: str
    :param client_secret: Client secret.
    :type client_secret: str
    :param redirect_uri: Client redirect URI.
    :type redirect_uri: str
    :param code: Authorization code.
    :type code: str
    :rtype: requests.Response
    """

    # Ensure proper grant_type
    if grant_type != 'authorization_code':
        return self._make_json_error_response('unsupported_grant_type')

    # Check conditions
    is_valid_client_id = self.validate_client_id(client_id)
    is_valid_client_secret = self.validate_client_secret(client_id, client_secret)
    is_valid_redirect_uri = self.validate_redirect_uri(client_id, redirect_uri)

    scope = params.get('scope', '')
    is_valid_scope = self.validate_scope(client_id, scope)

    data = self.from_authorization_code(client_id, code, scope)
    is_valid_grant = data is not None

    # Return proper error responses on invalid conditions
    if not (is_valid_client_id and is_valid_client_secret):
        return self._make_json_error_response('invalid_client')

    if not is_valid_grant or not is_valid_redirect_uri:
        return self._make_json_error_response('invalid_grant')

    if not is_valid_scope:
        return self._make_json_error_response('invalid_scope')

    # Discard original authorization code
    self.discard_authorization_code(client_id, code)

    # Generate access tokens once all conditions have been met
    access_token = self.generate_access_token()
    token_type = self.token_type
    expires_in = self.token_expires_in
    refresh_token = self.generate_refresh_token()

    # Save information to be used to validate later requests
    self.persist_token_information(client_id=client_id,
                                   scope=scope,
                                   access_token=access_token,
                                   token_type=token_type,
                                   expires_in=expires_in,
                                   refresh_token=refresh_token,
                                   data=data)

    # Return json response
    return self._make_json_response({
        'access_token': access_token,
        'token_type': token_type,
        'expires_in': expires_in,
        'refresh_token': refresh_token
    })
Generate access token HTTP response. :param grant_type: Desired grant type. Must be "authorization_code". :type grant_type: str :param client_id: Client ID. :type client_id: str :param client_secret: Client secret. :type client_secret: str :param redirect_uri: Client redirect URI. :type redirect_uri: str :param code: Authorization code. :type code: str :rtype: requests.Response
Below is the instruction that describes the task:
### Input:
Generate access token HTTP response.

:param grant_type: Desired grant type. Must be "authorization_code".
:type grant_type: str
:param client_id: Client ID.
:type client_id: str
:param client_secret: Client secret.
:type client_secret: str
:param redirect_uri: Client redirect URI.
:type redirect_uri: str
:param code: Authorization code.
:type code: str
:rtype: requests.Response
### Response:
def get_token(self, grant_type, client_id, client_secret, redirect_uri,
              code, **params):
    """Generate access token HTTP response.

    :param grant_type: Desired grant type. Must be "authorization_code".
    :type grant_type: str
    :param client_id: Client ID.
    :type client_id: str
    :param client_secret: Client secret.
    :type client_secret: str
    :param redirect_uri: Client redirect URI.
    :type redirect_uri: str
    :param code: Authorization code.
    :type code: str
    :rtype: requests.Response
    """

    # Ensure proper grant_type
    if grant_type != 'authorization_code':
        return self._make_json_error_response('unsupported_grant_type')

    # Check conditions
    is_valid_client_id = self.validate_client_id(client_id)
    is_valid_client_secret = self.validate_client_secret(client_id, client_secret)
    is_valid_redirect_uri = self.validate_redirect_uri(client_id, redirect_uri)

    scope = params.get('scope', '')
    is_valid_scope = self.validate_scope(client_id, scope)

    data = self.from_authorization_code(client_id, code, scope)
    is_valid_grant = data is not None

    # Return proper error responses on invalid conditions
    if not (is_valid_client_id and is_valid_client_secret):
        return self._make_json_error_response('invalid_client')

    if not is_valid_grant or not is_valid_redirect_uri:
        return self._make_json_error_response('invalid_grant')

    if not is_valid_scope:
        return self._make_json_error_response('invalid_scope')

    # Discard original authorization code
    self.discard_authorization_code(client_id, code)

    # Generate access tokens once all conditions have been met
    access_token = self.generate_access_token()
    token_type = self.token_type
    expires_in = self.token_expires_in
    refresh_token = self.generate_refresh_token()

    # Save information to be used to validate later requests
    self.persist_token_information(client_id=client_id,
                                   scope=scope,
                                   access_token=access_token,
                                   token_type=token_type,
                                   expires_in=expires_in,
                                   refresh_token=refresh_token,
                                   data=data)

    # Return json response
    return self._make_json_response({
        'access_token': access_token,
        'token_type': token_type,
        'expires_in': expires_in,
        'refresh_token': refresh_token
    })
def make_script(): """ Return the full testing script, with all the tests. """ # tests easy = [ 'misra1a', 'chwirut2', 'chwirut1', 'lanczos3', 'gauss1', 'gauss2', 'danwood', 'misra1b', ] medium = [ 'kirby2', 'hahn1', 'nelson', 'mgh17', 'lanczos1', 'lanczos2', 'gauss3', 'misra1c', 'misra1d', 'roszman1', 'enso', ] hard = [ 'mgh09', 'thurber', 'boxbod', 'rat42', 'mgh10', 'eckerle4', 'rat43', 'bennett5', ] # beginning material text = \ """ from __future__ import print_function import gvar as gv import numpy as np import lsqfit log = np.log exp = np.exp arctan = np.arctan cos = np.cos sin = np.sin pi = np.pi """ if USE_2ND_STARTING_VALUE: text += '# 2nd starting values\n\n' else: text += '# 1st starting values\n\n' # main() program text += 'def main():\n' text += ' # easy\n' for n in easy: text += ' ' + n + '()\n' text += '\n # medium\n' for n in medium: text += ' ' + n + '()\n' text += '\n # hard\n' for n in hard: text += ' ' + n + '()\n' # add test-fit functions for n in easy: text += '\n' text += make_fcn(n + '.txt') for n in medium: text += '\n' text += make_fcn(n + '.txt') for n in hard: text += '\n' text += make_fcn(n + '.txt') # ending material text += \ """ if __name__ == '__main__': main() """ return text
Return the full testing script, with all the tests.
Below is the the instruction that describes the task: ### Input: Return the full testing script, with all the tests. ### Response: def make_script(): """ Return the full testing script, with all the tests. """ # tests easy = [ 'misra1a', 'chwirut2', 'chwirut1', 'lanczos3', 'gauss1', 'gauss2', 'danwood', 'misra1b', ] medium = [ 'kirby2', 'hahn1', 'nelson', 'mgh17', 'lanczos1', 'lanczos2', 'gauss3', 'misra1c', 'misra1d', 'roszman1', 'enso', ] hard = [ 'mgh09', 'thurber', 'boxbod', 'rat42', 'mgh10', 'eckerle4', 'rat43', 'bennett5', ] # beginning material text = \ """ from __future__ import print_function import gvar as gv import numpy as np import lsqfit log = np.log exp = np.exp arctan = np.arctan cos = np.cos sin = np.sin pi = np.pi """ if USE_2ND_STARTING_VALUE: text += '# 2nd starting values\n\n' else: text += '# 1st starting values\n\n' # main() program text += 'def main():\n' text += ' # easy\n' for n in easy: text += ' ' + n + '()\n' text += '\n # medium\n' for n in medium: text += ' ' + n + '()\n' text += '\n # hard\n' for n in hard: text += ' ' + n + '()\n' # add test-fit functions for n in easy: text += '\n' text += make_fcn(n + '.txt') for n in medium: text += '\n' text += make_fcn(n + '.txt') for n in hard: text += '\n' text += make_fcn(n + '.txt') # ending material text += \ """ if __name__ == '__main__': main() """ return text
def status(self, status, headers=None):
    '''
    Respond with given status and no content

    :type status: int
    :param status: status code to return
    :type headers: dict
    :param headers: dictionary of headers to add to response
    :returns: itself
    :rtype: Rule
    '''
    self.response = _Response(status, headers)
    return self
Respond with given status and no content

:type status: int
:param status: status code to return
:type headers: dict
:param headers: dictionary of headers to add to response
:returns: itself
:rtype: Rule
Below is the instruction that describes the task:
### Input:
Respond with given status and no content

:type status: int
:param status: status code to return
:type headers: dict
:param headers: dictionary of headers to add to response
:returns: itself
:rtype: Rule
### Response:
def status(self, status, headers=None):
    '''
    Respond with given status and no content

    :type status: int
    :param status: status code to return
    :type headers: dict
    :param headers: dictionary of headers to add to response
    :returns: itself
    :rtype: Rule
    '''
    self.response = _Response(status, headers)
    return self
def get_proxy_version(self):
    """ Returns version of the Cloud SQL Proxy. """
    self._download_sql_proxy_if_needed()
    command_to_run = [self.sql_proxy_path]
    command_to_run.extend(['--version'])
    command_to_run.extend(self._get_credential_parameters())
    result = subprocess.check_output(command_to_run).decode('utf-8')
    pattern = re.compile("^.*[V|v]ersion ([^;]*);.*$")
    m = pattern.match(result)
    if m:
        return m.group(1)
    else:
        return None
Returns version of the Cloud SQL Proxy.
Below is the instruction that describes the task:
### Input:
Returns version of the Cloud SQL Proxy.
### Response:
def get_proxy_version(self):
    """ Returns version of the Cloud SQL Proxy. """
    self._download_sql_proxy_if_needed()
    command_to_run = [self.sql_proxy_path]
    command_to_run.extend(['--version'])
    command_to_run.extend(self._get_credential_parameters())
    result = subprocess.check_output(command_to_run).decode('utf-8')
    pattern = re.compile("^.*[V|v]ersion ([^;]*);.*$")
    m = pattern.match(result)
    if m:
        return m.group(1)
    else:
        return None
def _create_cache_key(source_file):
    """
    return the cache key for a header file.

    :param source_file: Header file name
    :type source_file: str
    :rtype: str
    """
    path, name = os.path.split(source_file)
    return name + str(hash(path))
return the cache key for a header file.

:param source_file: Header file name
:type source_file: str
:rtype: str
Below is the instruction that describes the task:
### Input:
return the cache key for a header file.

:param source_file: Header file name
:type source_file: str
:rtype: str
### Response:
def _create_cache_key(source_file):
    """
    return the cache key for a header file.

    :param source_file: Header file name
    :type source_file: str
    :rtype: str
    """
    path, name = os.path.split(source_file)
    return name + str(hash(path))
def get_connection_params(self):
    """Returns a dict of parameters suitable for get_new_connection."""
    from django.conf import settings
    settings_dict = self.settings_dict
    options = settings_dict.get('OPTIONS', {})
    autocommit = options.get('autocommit', False)
    conn_params = {
        'server': settings_dict['HOST'],
        'database': settings_dict['NAME'],
        'user': settings_dict['USER'],
        'port': settings_dict.get('PORT', '1433'),
        'password': settings_dict['PASSWORD'],
        'timeout': self.command_timeout,
        'autocommit': autocommit,
        'use_mars': options.get('use_mars', False),
        'load_balancer': options.get('load_balancer', None),
        'failover_partner': options.get('failover_partner', None),
        'use_tz': utc if getattr(settings, 'USE_TZ', False) else None,
    }
    for opt in _SUPPORTED_OPTIONS:
        if opt in options:
            conn_params[opt] = options[opt]
    self.tzinfo_factory = utc_tzinfo_factory if settings.USE_TZ else None
    return conn_params
Returns a dict of parameters suitable for get_new_connection.
Below is the instruction that describes the task:
### Input:
Returns a dict of parameters suitable for get_new_connection.
### Response:
def get_connection_params(self):
    """Returns a dict of parameters suitable for get_new_connection."""
    from django.conf import settings
    settings_dict = self.settings_dict
    options = settings_dict.get('OPTIONS', {})
    autocommit = options.get('autocommit', False)
    conn_params = {
        'server': settings_dict['HOST'],
        'database': settings_dict['NAME'],
        'user': settings_dict['USER'],
        'port': settings_dict.get('PORT', '1433'),
        'password': settings_dict['PASSWORD'],
        'timeout': self.command_timeout,
        'autocommit': autocommit,
        'use_mars': options.get('use_mars', False),
        'load_balancer': options.get('load_balancer', None),
        'failover_partner': options.get('failover_partner', None),
        'use_tz': utc if getattr(settings, 'USE_TZ', False) else None,
    }
    for opt in _SUPPORTED_OPTIONS:
        if opt in options:
            conn_params[opt] = options[opt]
    self.tzinfo_factory = utc_tzinfo_factory if settings.USE_TZ else None
    return conn_params
def ReadDictionary(self, file):
    """Parse a dictionary file.

    Reads a RADIUS dictionary file and merges its contents into the
    class instance.

    :param file: Name of dictionary file to parse or a file-like object
    :type file: string or file-like object
    """
    fil = dictfile.DictFile(file)
    state = {}
    state['vendor'] = ''
    self.defer_parse = []
    for line in fil:
        state['file'] = fil.File()
        state['line'] = fil.Line()
        line = line.split('#', 1)[0].strip()
        tokens = line.split()
        if not tokens:
            continue
        key = tokens[0].upper()
        if key == 'ATTRIBUTE':
            self.__ParseAttribute(state, tokens)
        elif key == 'VALUE':
            self.__ParseValue(state, tokens, True)
        elif key == 'VENDOR':
            self.__ParseVendor(state, tokens)
        elif key == 'BEGIN-VENDOR':
            self.__ParseBeginVendor(state, tokens)
        elif key == 'END-VENDOR':
            self.__ParseEndVendor(state, tokens)
    for state, tokens in self.defer_parse:
        key = tokens[0].upper()
        if key == 'VALUE':
            self.__ParseValue(state, tokens, False)
    self.defer_parse = []
Parse a dictionary file.

Reads a RADIUS dictionary file and merges its contents into the
class instance.

:param file: Name of dictionary file to parse or a file-like object
:type file: string or file-like object
Below is the instruction that describes the task:
### Input:
Parse a dictionary file.

Reads a RADIUS dictionary file and merges its contents into the
class instance.

:param file: Name of dictionary file to parse or a file-like object
:type file: string or file-like object
### Response:
def ReadDictionary(self, file):
    """Parse a dictionary file.

    Reads a RADIUS dictionary file and merges its contents into the
    class instance.

    :param file: Name of dictionary file to parse or a file-like object
    :type file: string or file-like object
    """
    fil = dictfile.DictFile(file)
    state = {}
    state['vendor'] = ''
    self.defer_parse = []
    for line in fil:
        state['file'] = fil.File()
        state['line'] = fil.Line()
        line = line.split('#', 1)[0].strip()
        tokens = line.split()
        if not tokens:
            continue
        key = tokens[0].upper()
        if key == 'ATTRIBUTE':
            self.__ParseAttribute(state, tokens)
        elif key == 'VALUE':
            self.__ParseValue(state, tokens, True)
        elif key == 'VENDOR':
            self.__ParseVendor(state, tokens)
        elif key == 'BEGIN-VENDOR':
            self.__ParseBeginVendor(state, tokens)
        elif key == 'END-VENDOR':
            self.__ParseEndVendor(state, tokens)
    for state, tokens in self.defer_parse:
        key = tokens[0].upper()
        if key == 'VALUE':
            self.__ParseValue(state, tokens, False)
    self.defer_parse = []
def get_resource_from_handle(self, resource_handle, verify_repo=True):
    """Get a resource.

    Args:
        resource_handle (`ResourceHandle`): Handle of the resource.

    Returns:
        `PackageRepositoryResource` instance.
    """
    if verify_repo:
        # we could fix the handle at this point, but handles should
        # always be made from repo.make_resource_handle... for now,
        # at least, error to catch any "incorrect" construction of
        # handles...
        if resource_handle.variables.get("repository_type") != self.name():
            raise ResourceError("repository_type mismatch - requested %r, "
                                "repository_type is %r"
                                % (resource_handle.variables["repository_type"],
                                   self.name()))

        if resource_handle.variables.get("location") != self.location:
            raise ResourceError("location mismatch - requested %r, "
                                "repository location is %r "
                                % (resource_handle.variables["location"],
                                   self.location))

    resource = self.pool.get_resource_from_handle(resource_handle)
    resource._repository = self
    return resource
Get a resource.

Args:
    resource_handle (`ResourceHandle`): Handle of the resource.

Returns:
    `PackageRepositoryResource` instance.
Below is the instruction that describes the task:
### Input:
Get a resource.

Args:
    resource_handle (`ResourceHandle`): Handle of the resource.

Returns:
    `PackageRepositoryResource` instance.
### Response:
def get_resource_from_handle(self, resource_handle, verify_repo=True):
    """Get a resource.

    Args:
        resource_handle (`ResourceHandle`): Handle of the resource.

    Returns:
        `PackageRepositoryResource` instance.
    """
    if verify_repo:
        # we could fix the handle at this point, but handles should
        # always be made from repo.make_resource_handle... for now,
        # at least, error to catch any "incorrect" construction of
        # handles...
        if resource_handle.variables.get("repository_type") != self.name():
            raise ResourceError("repository_type mismatch - requested %r, "
                                "repository_type is %r"
                                % (resource_handle.variables["repository_type"],
                                   self.name()))

        if resource_handle.variables.get("location") != self.location:
            raise ResourceError("location mismatch - requested %r, "
                                "repository location is %r "
                                % (resource_handle.variables["location"],
                                   self.location))

    resource = self.pool.get_resource_from_handle(resource_handle)
    resource._repository = self
    return resource
def validate_sdl(
    document_ast: DocumentNode,
    schema_to_extend: GraphQLSchema = None,
    rules: Sequence[RuleType] = None,
) -> List[GraphQLError]:
    """Validate an SDL document."""
    context = SDLValidationContext(document_ast, schema_to_extend)
    if rules is None:
        rules = specified_sdl_rules
    visitors = [rule(context) for rule in rules]
    visit(document_ast, ParallelVisitor(visitors))
    return context.errors
Validate an SDL document.
Below is the instruction that describes the task:
### Input:
Validate an SDL document.
### Response:
def validate_sdl(
    document_ast: DocumentNode,
    schema_to_extend: GraphQLSchema = None,
    rules: Sequence[RuleType] = None,
) -> List[GraphQLError]:
    """Validate an SDL document."""
    context = SDLValidationContext(document_ast, schema_to_extend)
    if rules is None:
        rules = specified_sdl_rules
    visitors = [rule(context) for rule in rules]
    visit(document_ast, ParallelVisitor(visitors))
    return context.errors
def check_response(headers: Headers, key: str) -> None:
    """
    Check a handshake response received from the server.

    ``key`` comes from :func:`build_request`.

    If the handshake is valid, this function returns ``None``.

    Otherwise it raises an :exc:`~websockets.exceptions.InvalidHandshake`
    exception.

    This function doesn't verify that the response is an HTTP/1.1 or higher
    response with a 101 status code. These controls are the responsibility
    of the caller.
    """
    connection = sum(
        [parse_connection(value) for value in headers.get_all("Connection")], []
    )

    if not any(value.lower() == "upgrade" for value in connection):
        raise InvalidUpgrade("Connection", " ".join(connection))

    upgrade = sum([parse_upgrade(value) for value in headers.get_all("Upgrade")], [])

    # For compatibility with non-strict implementations, ignore case when
    # checking the Upgrade header. It's supposed to be 'WebSocket'.
    if not (len(upgrade) == 1 and upgrade[0].lower() == "websocket"):
        raise InvalidUpgrade("Upgrade", ", ".join(upgrade))

    try:
        s_w_accept = headers["Sec-WebSocket-Accept"]
    except KeyError:
        raise InvalidHeader("Sec-WebSocket-Accept")
    except MultipleValuesError:
        raise InvalidHeader(
            "Sec-WebSocket-Accept", "more than one Sec-WebSocket-Accept header found"
        )

    if s_w_accept != accept(key):
        raise InvalidHeaderValue("Sec-WebSocket-Accept", s_w_accept)
Check a handshake response received from the server. ``key`` comes from :func:`build_request`. If the handshake is valid, this function returns ``None``. Otherwise it raises an :exc:`~websockets.exceptions.InvalidHandshake` exception. This function doesn't verify that the response is an HTTP/1.1 or higher response with a 101 status code. These controls are the responsibility of the caller.
Below is the instruction that describes the task:
### Input:
Check a handshake response received from the server.

``key`` comes from :func:`build_request`.

If the handshake is valid, this function returns ``None``.

Otherwise it raises an :exc:`~websockets.exceptions.InvalidHandshake`
exception.

This function doesn't verify that the response is an HTTP/1.1 or higher
response with a 101 status code. These controls are the responsibility
of the caller.
### Response:
def check_response(headers: Headers, key: str) -> None:
    """
    Check a handshake response received from the server.

    ``key`` comes from :func:`build_request`.

    If the handshake is valid, this function returns ``None``.

    Otherwise it raises an :exc:`~websockets.exceptions.InvalidHandshake`
    exception.

    This function doesn't verify that the response is an HTTP/1.1 or higher
    response with a 101 status code. These controls are the responsibility
    of the caller.
    """
    connection = sum(
        [parse_connection(value) for value in headers.get_all("Connection")], []
    )

    if not any(value.lower() == "upgrade" for value in connection):
        raise InvalidUpgrade("Connection", " ".join(connection))

    upgrade = sum([parse_upgrade(value) for value in headers.get_all("Upgrade")], [])

    # For compatibility with non-strict implementations, ignore case when
    # checking the Upgrade header. It's supposed to be 'WebSocket'.
    if not (len(upgrade) == 1 and upgrade[0].lower() == "websocket"):
        raise InvalidUpgrade("Upgrade", ", ".join(upgrade))

    try:
        s_w_accept = headers["Sec-WebSocket-Accept"]
    except KeyError:
        raise InvalidHeader("Sec-WebSocket-Accept")
    except MultipleValuesError:
        raise InvalidHeader(
            "Sec-WebSocket-Accept", "more than one Sec-WebSocket-Accept header found"
        )

    if s_w_accept != accept(key):
        raise InvalidHeaderValue("Sec-WebSocket-Accept", s_w_accept)
def href_name (txt):
    """Return the name part of the first <a href="">name</a> link in txt."""
    name = u""
    endtag = a_end_search(txt)
    if not endtag:
        return name
    name = txt[:endtag.start()]
    if img_re.search(name):
        return image_name(name)
    return _unquote(name)
Return the name part of the first <a href="">name</a> link in txt.
Below is the instruction that describes the task:
### Input:
Return the name part of the first <a href="">name</a> link in txt.
### Response:
def href_name (txt):
    """Return the name part of the first <a href="">name</a> link in txt."""
    name = u""
    endtag = a_end_search(txt)
    if not endtag:
        return name
    name = txt[:endtag.start()]
    if img_re.search(name):
        return image_name(name)
    return _unquote(name)
def reflect_table(conn, table_name, schema='public'):
    """Reflect basic table attributes."""
    column_meta = list(get_column_metadata(conn, table_name, schema=schema))
    primary_key_columns = list(get_primary_keys(conn, table_name, schema=schema))
    columns = [Column(**column_data) for column_data in column_meta]
    primary_key = PrimaryKey(primary_key_columns)
    return Table(table_name, columns, primary_key, schema=schema)
Reflect basic table attributes.
Below is the instruction that describes the task:
### Input:
Reflect basic table attributes.
### Response:
def reflect_table(conn, table_name, schema='public'):
    """Reflect basic table attributes."""
    column_meta = list(get_column_metadata(conn, table_name, schema=schema))
    primary_key_columns = list(get_primary_keys(conn, table_name, schema=schema))
    columns = [Column(**column_data) for column_data in column_meta]
    primary_key = PrimaryKey(primary_key_columns)
    return Table(table_name, columns, primary_key, schema=schema)
def filename_items_for_filetype(filenames, filetype_info):
    """Iterator over the filenames matching *filetype_info*."""
    matched_files = []
    for pattern in filetype_info['file_patterns']:
        for filename in match_filenames(filenames, pattern):
            if filename in matched_files:
                continue
            try:
                filename_info = parse(
                    pattern, get_filebase(filename, pattern))
            except ValueError:
                logger.debug("Can't parse %s with %s.", filename, pattern)
                continue
            matched_files.append(filename)
            yield filename, filename_info
Iterator over the filenames matching *filetype_info*.
Below is the instruction that describes the task:
### Input:
Iterator over the filenames matching *filetype_info*.
### Response:
def filename_items_for_filetype(filenames, filetype_info):
    """Iterator over the filenames matching *filetype_info*."""
    matched_files = []
    for pattern in filetype_info['file_patterns']:
        for filename in match_filenames(filenames, pattern):
            if filename in matched_files:
                continue
            try:
                filename_info = parse(
                    pattern, get_filebase(filename, pattern))
            except ValueError:
                logger.debug("Can't parse %s with %s.", filename, pattern)
                continue
            matched_files.append(filename)
            yield filename, filename_info
def routing(self, debug=False, anim=None):
    """ Performs routing on Load Area centres to build MV grid with ring topology.

    Args
    ----
    debug: bool, defaults to False
        If True, information is printed while routing
    anim: type, defaults to None
        Descr #TODO
    """

    # do the routing
    self._graph = mv_routing.solve(graph=self._graph,
                                   debug=debug,
                                   anim=anim)
    logger.info('==> MV Routing for {} done'.format(repr(self)))

    # connect satellites (step 1, with restrictions like max. string length, max peak load per string)
    self._graph = mv_connect.mv_connect_satellites(mv_grid=self,
                                                   graph=self._graph,
                                                   mode='normal',
                                                   debug=debug)
    logger.info('==> MV Sat1 for {} done'.format(repr(self)))

    # connect satellites to closest line/station on a MV ring that have not been connected in step 1
    self._graph = mv_connect.mv_connect_satellites(mv_grid=self,
                                                   graph=self._graph,
                                                   mode='isolated',
                                                   debug=debug)
    logger.info('==> MV Sat2 for {} done'.format(repr(self)))

    # connect stations
    self._graph = mv_connect.mv_connect_stations(mv_grid_district=self.grid_district,
                                                 graph=self._graph,
                                                 debug=debug)
    logger.info('==> MV Stations for {} done'.format(repr(self)))
Performs routing on Load Area centres to build MV grid with ring topology.

Args
----
debug: bool, defaults to False
    If True, information is printed while routing
anim: type, defaults to None
    Descr #TODO
Below is the instruction that describes the task:
### Input:
Performs routing on Load Area centres to build MV grid with ring topology.

Args
----
debug: bool, defaults to False
    If True, information is printed while routing
anim: type, defaults to None
    Descr #TODO
### Response:
def routing(self, debug=False, anim=None):
    """ Performs routing on Load Area centres to build MV grid with ring topology.

    Args
    ----
    debug: bool, defaults to False
        If True, information is printed while routing
    anim: type, defaults to None
        Descr #TODO
    """

    # do the routing
    self._graph = mv_routing.solve(graph=self._graph,
                                   debug=debug,
                                   anim=anim)
    logger.info('==> MV Routing for {} done'.format(repr(self)))

    # connect satellites (step 1, with restrictions like max. string length, max peak load per string)
    self._graph = mv_connect.mv_connect_satellites(mv_grid=self,
                                                   graph=self._graph,
                                                   mode='normal',
                                                   debug=debug)
    logger.info('==> MV Sat1 for {} done'.format(repr(self)))

    # connect satellites to closest line/station on a MV ring that have not been connected in step 1
    self._graph = mv_connect.mv_connect_satellites(mv_grid=self,
                                                   graph=self._graph,
                                                   mode='isolated',
                                                   debug=debug)
    logger.info('==> MV Sat2 for {} done'.format(repr(self)))

    # connect stations
    self._graph = mv_connect.mv_connect_stations(mv_grid_district=self.grid_district,
                                                 graph=self._graph,
                                                 debug=debug)
    logger.info('==> MV Stations for {} done'.format(repr(self)))
def posterior(self, x):
    """Model is X_1,...,X_n ~ N(0, 1/theta), theta ~ Gamma(a, b)"""
    return Gamma(a=self.a + 0.5 * x.size,
                 b=self.b + 0.5 * np.sum(x**2))
Model is X_1,...,X_n ~ N(0, 1/theta), theta ~ Gamma(a, b)
Below is the instruction that describes the task:
### Input:
Model is X_1,...,X_n ~ N(0, 1/theta), theta ~ Gamma(a, b)
### Response:
def posterior(self, x):
    """Model is X_1,...,X_n ~ N(0, 1/theta), theta ~ Gamma(a, b)"""
    return Gamma(a=self.a + 0.5 * x.size,
                 b=self.b + 0.5 * np.sum(x**2))
def _initilize(self, state: State) -> State:
    """Initialize program state. Called by program.run() and .evolve()"""
    targets = {}
    for pc, instr in enumerate(self):
        if isinstance(instr, Label):
            targets[instr.target] = pc

    state = state.update({PC: 0,
                          TARGETS: targets,
                          NAMEDGATES: STDGATES.copy()})
    return state
Initialize program state. Called by program.run() and .evolve()
Below is the instruction that describes the task:
### Input:
Initialize program state. Called by program.run() and .evolve()
### Response:
def _initilize(self, state: State) -> State:
    """Initialize program state. Called by program.run() and .evolve()"""
    targets = {}
    for pc, instr in enumerate(self):
        if isinstance(instr, Label):
            targets[instr.target] = pc

    state = state.update({PC: 0,
                          TARGETS: targets,
                          NAMEDGATES: STDGATES.copy()})
    return state
def addNode(self, node):
    '''
    Update the shared map with my in-construction node
    '''
    self.mybldgbuids[node.buid] = node
    self.allbldgbuids[node.buid] = (node, self.doneevent)
Update the shared map with my in-construction node
Below is the instruction that describes the task:
### Input:
Update the shared map with my in-construction node
### Response:
def addNode(self, node):
    '''
    Update the shared map with my in-construction node
    '''
    self.mybldgbuids[node.buid] = node
    self.allbldgbuids[node.buid] = (node, self.doneevent)
def _draw_mainlayer(self, gc, view_bounds=None, mode="default"):
    """ Draws the component """
    x_origin = self.x_origin
    y_origin = self.y_origin

    gc.save_state()
    try:
        # self._draw_bounds(gc)
        gc.begin_path()
        gc.translate_ctm(x_origin, y_origin)
        gc.scale_ctm(self.e_width, self.e_height)
        gc.arc(0.0, 0.0, 1.0, 0, 2.0*pi)
        gc.close_path()

        # Draw stroke at same scale as graphics context
        # ctm = gc.get_ctm()
        # if hasattr(ctm, "__len__") and len(ctm) == 6:
        #     scale = sqrt( (ctm[0]+ctm[1]) * (ctm[0]+ctm[1]) / 2.0 + \
        #                   (ctm[2]+ctm[3]) * (ctm[2]+ctm[3]) / 2.0 )
        # elif hasattr(gc, "get_ctm_scale"):
        #     scale = gc.get_ctm_scale()
        # else:
        #     raise RuntimeError("Unable to get scale from GC.")

        gc.set_line_width(self.pen.line_width)
        gc.set_stroke_color(self.pen.color_)
        if self.filled:
            gc.set_fill_color(self.pen.fill_color_)
            gc.draw_path(FILL_STROKE)
        else:
            gc.stroke_path()
    finally:
        gc.restore_state()
Draws the component
Below is the instruction that describes the task:
### Input:
Draws the component
### Response:
def _draw_mainlayer(self, gc, view_bounds=None, mode="default"):
    """ Draws the component """
    x_origin = self.x_origin
    y_origin = self.y_origin

    gc.save_state()
    try:
        # self._draw_bounds(gc)
        gc.begin_path()
        gc.translate_ctm(x_origin, y_origin)
        gc.scale_ctm(self.e_width, self.e_height)
        gc.arc(0.0, 0.0, 1.0, 0, 2.0*pi)
        gc.close_path()

        # Draw stroke at same scale as graphics context
        # ctm = gc.get_ctm()
        # if hasattr(ctm, "__len__") and len(ctm) == 6:
        #     scale = sqrt( (ctm[0]+ctm[1]) * (ctm[0]+ctm[1]) / 2.0 + \
        #                   (ctm[2]+ctm[3]) * (ctm[2]+ctm[3]) / 2.0 )
        # elif hasattr(gc, "get_ctm_scale"):
        #     scale = gc.get_ctm_scale()
        # else:
        #     raise RuntimeError("Unable to get scale from GC.")

        gc.set_line_width(self.pen.line_width)
        gc.set_stroke_color(self.pen.color_)
        if self.filled:
            gc.set_fill_color(self.pen.fill_color_)
            gc.draw_path(FILL_STROKE)
        else:
            gc.stroke_path()
    finally:
        gc.restore_state()
def wait_for_all(self, objects, timeout=120):
    """
    Wait until all of given UI proxies show up before timeout. All UI proxies
    will be polled periodically. See option
    :py:class:`poll_interval <poco.pocofw.Poco>` in ``Poco``'s initialization
    for more details.

    Args:
        objects (Iterable<:py:class:`UIObjectProxy <poco.proxy.UIObjectProxy>`>):
            iterable object of the given UI proxies
        timeout (:obj:`float`): timeout in seconds, default is 120s

    Raises:
        PocoTargetTimeout: when not all of UI proxies appeared before timeout
    """
    start = time.time()
    while True:
        all_exist = True
        for obj in objects:
            if not obj.exists():
                all_exist = False
                break
        if all_exist:
            return
        if time.time() - start > timeout:
            raise PocoTargetTimeout('all to appear', objects)
        self.sleep_for_polling_interval()
Wait until all of given UI proxies show up before timeout. All UI proxies
will be polled periodically. See option
:py:class:`poll_interval <poco.pocofw.Poco>` in ``Poco``'s initialization
for more details.

Args:
    objects (Iterable<:py:class:`UIObjectProxy <poco.proxy.UIObjectProxy>`>):
        iterable object of the given UI proxies
    timeout (:obj:`float`): timeout in seconds, default is 120s

Raises:
    PocoTargetTimeout: when not all of UI proxies appeared before timeout
Below is the instruction that describes the task:
### Input:
Wait until all of given UI proxies show up before timeout. All UI proxies
will be polled periodically. See option
:py:class:`poll_interval <poco.pocofw.Poco>` in ``Poco``'s initialization
for more details.

Args:
    objects (Iterable<:py:class:`UIObjectProxy <poco.proxy.UIObjectProxy>`>):
        iterable object of the given UI proxies
    timeout (:obj:`float`): timeout in seconds, default is 120s

Raises:
    PocoTargetTimeout: when not all of UI proxies appeared before timeout
### Response:
def wait_for_all(self, objects, timeout=120):
    """
    Wait until all of given UI proxies show up before timeout. All UI proxies
    will be polled periodically. See option
    :py:class:`poll_interval <poco.pocofw.Poco>` in ``Poco``'s initialization
    for more details.

    Args:
        objects (Iterable<:py:class:`UIObjectProxy <poco.proxy.UIObjectProxy>`>):
            iterable object of the given UI proxies
        timeout (:obj:`float`): timeout in seconds, default is 120s

    Raises:
        PocoTargetTimeout: when not all of UI proxies appeared before timeout
    """
    start = time.time()
    while True:
        all_exist = True
        for obj in objects:
            if not obj.exists():
                all_exist = False
                break
        if all_exist:
            return
        if time.time() - start > timeout:
            raise PocoTargetTimeout('all to appear', objects)
        self.sleep_for_polling_interval()
def _load_significant_pathways_file(path_to_file):
    """Read in the significant pathways file as a pandas.DataFrame."""
    feature_pathway_df = pd.read_table(
        path_to_file, header=0,
        usecols=["feature", "side", "pathway"])
    feature_pathway_df = feature_pathway_df.sort_values(
        by=["feature", "side"])
    return feature_pathway_df
Read in the significant pathways file as a pandas.DataFrame.
Below is the instruction that describes the task:
### Input:
Read in the significant pathways file as a pandas.DataFrame.
### Response:
def _load_significant_pathways_file(path_to_file):
    """Read in the significant pathways file as a pandas.DataFrame."""
    feature_pathway_df = pd.read_table(
        path_to_file, header=0,
        usecols=["feature", "side", "pathway"])
    feature_pathway_df = feature_pathway_df.sort_values(
        by=["feature", "side"])
    return feature_pathway_df
def save_config(self):
    """Save configuration: tree widget state"""
    for option, value in list(self.explorer.get_options().items()):
        self.set_option(option, value)
    self.set_option('expanded_state',
                    self.explorer.treewidget.get_expanded_state())
    self.set_option('scrollbar_position',
                    self.explorer.treewidget.get_scrollbar_position())
Save configuration: tree widget state
Below is the instruction that describes the task:
### Input:
Save configuration: tree widget state
### Response:
def save_config(self):
    """Save configuration: tree widget state"""
    for option, value in list(self.explorer.get_options().items()):
        self.set_option(option, value)
    self.set_option('expanded_state',
                    self.explorer.treewidget.get_expanded_state())
    self.set_option('scrollbar_position',
                    self.explorer.treewidget.get_scrollbar_position())
def set_cropped_metadata(input_doc, output_doc, metadata_info):
    """Set the metadata for the output document. Mostly just copied over, but
    "Producer" has a string appended to indicate that this program modified the
    file. That allows for the undo operation to make sure that this program
    cropped the file in the first place."""

    # Setting metadata with pyPdf requires low-level pyPdf operations, see
    # http://stackoverflow.com/questions/2574676/change-metadata-of-pdf-file-with-pypdf
    if not metadata_info:
        # In case it's null, just set values to empty strings. This class just holds
        # data temporary in the same format; this is not sent into PyPDF2.
        class MetadataInfo(object):
            author = ""
            creator = ""
            producer = ""
            subject = ""
            title = ""
        metadata_info = MetadataInfo()

    output_info_dict = output_doc._info.getObject()

    # Check Producer metadata attribute to see if this program cropped document before.
    producer_mod = PRODUCER_MODIFIER
    already_cropped_by_this_program = False
    old_producer_string = metadata_info.producer
    if old_producer_string and old_producer_string.endswith(producer_mod):
        if args.verbose:
            print("\nThe document was already cropped at least once by this program.")
        already_cropped_by_this_program = True
        producer_mod = ""  # No need to pile up suffixes each time on Producer.

    # Note that all None metadata attributes are currently set to the empty string
    # when passing along the metadata information.
    def st(item):
        if item is None:
            return ""
        else:
            return item

    output_info_dict.update({
        NameObject("/Author"): createStringObject(st(metadata_info.author)),
        NameObject("/Creator"): createStringObject(st(metadata_info.creator)),
        NameObject("/Producer"): createStringObject(st(metadata_info.producer)
                                                    + producer_mod),
        NameObject("/Subject"): createStringObject(st(metadata_info.subject)),
        NameObject("/Title"): createStringObject(st(metadata_info.title))
    })

    return already_cropped_by_this_program
Set the metadata for the output document. Mostly just copied over, but "Producer" has a string appended to indicate that this program modified the file. That allows for the undo operation to make sure that this program cropped the file in the first place.
Below is the instruction that describes the task:
### Input:
Set the metadata for the output document. Mostly just copied over, but
"Producer" has a string appended to indicate that this program modified the
file. That allows for the undo operation to make sure that this program
cropped the file in the first place.
### Response:
def set_cropped_metadata(input_doc, output_doc, metadata_info):
    """Set the metadata for the output document. Mostly just copied over, but
    "Producer" has a string appended to indicate that this program modified the
    file. That allows for the undo operation to make sure that this program
    cropped the file in the first place."""

    # Setting metadata with pyPdf requires low-level pyPdf operations, see
    # http://stackoverflow.com/questions/2574676/change-metadata-of-pdf-file-with-pypdf
    if not metadata_info:
        # In case it's null, just set values to empty strings. This class just holds
        # data temporary in the same format; this is not sent into PyPDF2.
        class MetadataInfo(object):
            author = ""
            creator = ""
            producer = ""
            subject = ""
            title = ""
        metadata_info = MetadataInfo()

    output_info_dict = output_doc._info.getObject()

    # Check Producer metadata attribute to see if this program cropped document before.
    producer_mod = PRODUCER_MODIFIER
    already_cropped_by_this_program = False
    old_producer_string = metadata_info.producer
    if old_producer_string and old_producer_string.endswith(producer_mod):
        if args.verbose:
            print("\nThe document was already cropped at least once by this program.")
        already_cropped_by_this_program = True
        producer_mod = ""  # No need to pile up suffixes each time on Producer.

    # Note that all None metadata attributes are currently set to the empty string
    # when passing along the metadata information.
    def st(item):
        if item is None:
            return ""
        else:
            return item

    output_info_dict.update({
        NameObject("/Author"): createStringObject(st(metadata_info.author)),
        NameObject("/Creator"): createStringObject(st(metadata_info.creator)),
        NameObject("/Producer"): createStringObject(st(metadata_info.producer)
                                                    + producer_mod),
        NameObject("/Subject"): createStringObject(st(metadata_info.subject)),
        NameObject("/Title"): createStringObject(st(metadata_info.title))
    })

    return already_cropped_by_this_program
def options(self): """ Returns an iterable of sorted option names in order to loop through all the configuration directives specified in the class. """ keys = self.__class__.__dict__.copy() keys.update(self.__dict__) keys = sorted(keys.keys()) for opt in keys: val = self.get(opt) if val is not None: yield opt, val
Returns an iterable of sorted option names in order to loop through all the configuration directives specified in the class.
Below is the instruction that describes the task: ### Input: Returns an iterable of sorted option names in order to loop through all the configuration directives specified in the class. ### Response: def options(self): """ Returns an iterable of sorted option names in order to loop through all the configuration directives specified in the class. """ keys = self.__class__.__dict__.copy() keys.update(self.__dict__) keys = sorted(keys.keys()) for opt in keys: val = self.get(opt) if val is not None: yield opt, val
def fill(self, rgb, x=0, y=0, w=None, h=None, name=""): """Creates a new fill layer. Creates a new layer filled with the given rgb color. For example, fill((255,0,0)) creates a red fill. The layer fills the entire canvas by default. """ if w == None: w = self.w - x if h == None: h = self.h - y img = Image.new("RGBA", (w,h), rgb) self.layer(img, x, y, name)
Creates a new fill layer. Creates a new layer filled with the given rgb color. For example, fill((255,0,0)) creates a red fill. The layer fills the entire canvas by default.
Below is the instruction that describes the task: ### Input: Creates a new fill layer. Creates a new layer filled with the given rgb color. For example, fill((255,0,0)) creates a red fill. The layer fills the entire canvas by default. ### Response: def fill(self, rgb, x=0, y=0, w=None, h=None, name=""): """Creates a new fill layer. Creates a new layer filled with the given rgb color. For example, fill((255,0,0)) creates a red fill. The layer fills the entire canvas by default. """ if w == None: w = self.w - x if h == None: h = self.h - y img = Image.new("RGBA", (w,h), rgb) self.layer(img, x, y, name)
def _get_dim_modifier(self, modifiers, dimstring=None): """Extracts the dimension information from the string of modifiers extracted by the regex. :arg modifiers: the list of modifiers identified by the regex. """ if dimstring is None: suffix = modifiers.split("dimension")[1] start = modifiers.index("dimension") + len("dimension") else: suffix = dimstring start = 0 #We use a stack to monitor how many parenthesis we have traversed in the string. #Once we reach the closing parenthesis, we know that we have the dimension info. stack = [] args = [] for i in range(len(suffix)): if suffix[i] == '(': stack.append(i + start) elif suffix[i] == ')': args.append((stack.pop(), i + start)) if len(stack) == 0: #The last entry in args should be the indices of the entire #dimension expression once the very first '(' has its twin found. return args[-1]
Extracts the dimension information from the string of modifiers extracted by the regex. :arg modifiers: the list of modifiers identified by the regex.
Below is the instruction that describes the task: ### Input: Extracts the dimension information from the string of modifiers extracted by the regex. :arg modifiers: the list of modifiers identified by the regex. ### Response: def _get_dim_modifier(self, modifiers, dimstring=None): """Extracts the dimension information from the string of modifiers extracted by the regex. :arg modifiers: the list of modifiers identified by the regex. """ if dimstring is None: suffix = modifiers.split("dimension")[1] start = modifiers.index("dimension") + len("dimension") else: suffix = dimstring start = 0 #We use a stack to monitor how many parenthesis we have traversed in the string. #Once we reach the closing parenthesis, we know that we have the dimension info. stack = [] args = [] for i in range(len(suffix)): if suffix[i] == '(': stack.append(i + start) elif suffix[i] == ')': args.append((stack.pop(), i + start)) if len(stack) == 0: #The last entry in args should be the indices of the entire #dimension expression once the very first '(' has its twin found. return args[-1]
def request(self, method, url=None, **kwargs): """ Perform a request. :param: method: The HTTP method to use (example is `GET`). :param: url: The URL to use. The default value is the URL this client was created with (`self.url`) (example is `http://localhost:8080`) :param: kwargs: Any other parameters that will be passed to `treq.request`, for example headers. Or any URL parameters to override, for example path, query or fragment. """ url = self._compose_url(url, kwargs) kwargs.setdefault('timeout', self._timeout) d = self._client.request(method, url, reactor=self._reactor, **kwargs) d.addCallback(self._log_request_response, method, url, kwargs) d.addErrback(self._log_request_error, url) return d
Perform a request. :param: method: The HTTP method to use (example is `GET`). :param: url: The URL to use. The default value is the URL this client was created with (`self.url`) (example is `http://localhost:8080`) :param: kwargs: Any other parameters that will be passed to `treq.request`, for example headers. Or any URL parameters to override, for example path, query or fragment.
Below is the instruction that describes the task: ### Input: Perform a request. :param: method: The HTTP method to use (example is `GET`). :param: url: The URL to use. The default value is the URL this client was created with (`self.url`) (example is `http://localhost:8080`) :param: kwargs: Any other parameters that will be passed to `treq.request`, for example headers. Or any URL parameters to override, for example path, query or fragment. ### Response: def request(self, method, url=None, **kwargs): """ Perform a request. :param: method: The HTTP method to use (example is `GET`). :param: url: The URL to use. The default value is the URL this client was created with (`self.url`) (example is `http://localhost:8080`) :param: kwargs: Any other parameters that will be passed to `treq.request`, for example headers. Or any URL parameters to override, for example path, query or fragment. """ url = self._compose_url(url, kwargs) kwargs.setdefault('timeout', self._timeout) d = self._client.request(method, url, reactor=self._reactor, **kwargs) d.addCallback(self._log_request_response, method, url, kwargs) d.addErrback(self._log_request_error, url) return d
def update_redis(project: str, environment: str, feature: str, state: str) \ -> None: """ Update redis state for a feature flag. :param project: LaunchDarkly project key. :param environment: LaunchDarkly environment key. :param feature: LaunchDarkly feature key. :param state: State for a feature flag. """ try: hosts = RedisWrapper.connection_string_parser( os.environ.get('REDIS_HOSTS')) except RuntimeError as ex: LOG.error(ex) sys.exit(1) for host in hosts: LOG.info("connecting to %s:%s", host.host, host.port) try: if valid_state(state): new_state = state.lower() redis = RedisWrapper( host.host, host.port, project, environment ) redis.update_flag_record(new_state, feature) create_file(project, environment, feature, new_state) LOG.info("%s was successfully updated.", feature) else: raise Exception('Invalid state: {0}, -s needs \ to be either on or off.'.format(state)) except KeyError as ex: LOG.error("unable to update %s. Exception: %s", host.host, ex) sys.exit(1)
Update redis state for a feature flag. :param project: LaunchDarkly project key. :param environment: LaunchDarkly environment key. :param feature: LaunchDarkly feature key. :param state: State for a feature flag.
Below is the instruction that describes the task: ### Input: Update redis state for a feature flag. :param project: LaunchDarkly project key. :param environment: LaunchDarkly environment key. :param feature: LaunchDarkly feature key. :param state: State for a feature flag. ### Response: def update_redis(project: str, environment: str, feature: str, state: str) \ -> None: """ Update redis state for a feature flag. :param project: LaunchDarkly project key. :param environment: LaunchDarkly environment key. :param feature: LaunchDarkly feature key. :param state: State for a feature flag. """ try: hosts = RedisWrapper.connection_string_parser( os.environ.get('REDIS_HOSTS')) except RuntimeError as ex: LOG.error(ex) sys.exit(1) for host in hosts: LOG.info("connecting to %s:%s", host.host, host.port) try: if valid_state(state): new_state = state.lower() redis = RedisWrapper( host.host, host.port, project, environment ) redis.update_flag_record(new_state, feature) create_file(project, environment, feature, new_state) LOG.info("%s was successfully updated.", feature) else: raise Exception('Invalid state: {0}, -s needs \ to be either on or off.'.format(state)) except KeyError as ex: LOG.error("unable to update %s. Exception: %s", host.host, ex) sys.exit(1)
def get_datacenters(service_instance, datacenter_names=None, get_all_datacenters=False): ''' Returns all datacenters in a vCenter. service_instance The Service Instance Object from which to obtain cluster. datacenter_names List of datacenter names to filter by. Default value is None. get_all_datacenters Flag specifying whether to retrieve all datacenters. Default value is None. ''' items = [i['object'] for i in get_mors_with_properties(service_instance, vim.Datacenter, property_list=['name']) if get_all_datacenters or (datacenter_names and i['name'] in datacenter_names)] return items
Returns all datacenters in a vCenter. service_instance The Service Instance Object from which to obtain cluster. datacenter_names List of datacenter names to filter by. Default value is None. get_all_datacenters Flag specifying whether to retrieve all datacenters. Default value is None.
Below is the instruction that describes the task: ### Input: Returns all datacenters in a vCenter. service_instance The Service Instance Object from which to obtain cluster. datacenter_names List of datacenter names to filter by. Default value is None. get_all_datacenters Flag specifying whether to retrieve all datacenters. Default value is None. ### Response: def get_datacenters(service_instance, datacenter_names=None, get_all_datacenters=False): ''' Returns all datacenters in a vCenter. service_instance The Service Instance Object from which to obtain cluster. datacenter_names List of datacenter names to filter by. Default value is None. get_all_datacenters Flag specifying whether to retrieve all datacenters. Default value is None. ''' items = [i['object'] for i in get_mors_with_properties(service_instance, vim.Datacenter, property_list=['name']) if get_all_datacenters or (datacenter_names and i['name'] in datacenter_names)] return items
def discretize_bezier(points, count=None, scale=1.0): """ Parameters ---------- points : (order, dimension) float Control points of the bezier curve For a 2D cubic bezier, order=3, dimension=2 count : int, or None Number of segments scale : float Scale of curve Returns ---------- discrete: (n,d) list of points, a polyline representation of the bezier curve which respects constants.RES_LENGTH """ # make sure we have a numpy array points = np.asanyarray(points, dtype=np.float64) if count is None: # how much distance does a small percentage of the curve take # this is so we can figure out how finely we have to sample t norm = np.linalg.norm(np.diff(points, axis=0), axis=1).sum() count = np.ceil(norm / (res.seg_frac * scale)) count = int(np.clip(count, res.min_sections * len(points), res.max_sections * len(points))) count = int(count) # parameterize incrementing 0.0 - 1.0 t = np.linspace(0.0, 1.0, count) # decrementing 1.0-0.0 t_d = 1.0 - t n = len(points) - 1 # binomial coefficients, i, and each point iterable = zip(binomial(n), np.arange(len(points)), points) # run the actual interpolation stacked = [((t**i) * (t_d**(n - i))).reshape((-1, 1)) * p * c for c, i, p in iterable] result = np.sum(stacked, axis=0) # test to make sure end points are correct test = np.sum((result[[0, -1]] - points[[0, -1]])**2, axis=1) assert (test < tol.merge).all() assert len(result) >= 2 return result
Parameters ---------- points : (order, dimension) float Control points of the bezier curve For a 2D cubic bezier, order=3, dimension=2 count : int, or None Number of segments scale : float Scale of curve Returns ---------- discrete: (n,d) list of points, a polyline representation of the bezier curve which respects constants.RES_LENGTH
Below is the instruction that describes the task: ### Input: Parameters ---------- points : (order, dimension) float Control points of the bezier curve For a 2D cubic bezier, order=3, dimension=2 count : int, or None Number of segments scale : float Scale of curve Returns ---------- discrete: (n,d) list of points, a polyline representation of the bezier curve which respects constants.RES_LENGTH ### Response: def discretize_bezier(points, count=None, scale=1.0): """ Parameters ---------- points : (order, dimension) float Control points of the bezier curve For a 2D cubic bezier, order=3, dimension=2 count : int, or None Number of segments scale : float Scale of curve Returns ---------- discrete: (n,d) list of points, a polyline representation of the bezier curve which respects constants.RES_LENGTH """ # make sure we have a numpy array points = np.asanyarray(points, dtype=np.float64) if count is None: # how much distance does a small percentage of the curve take # this is so we can figure out how finely we have to sample t norm = np.linalg.norm(np.diff(points, axis=0), axis=1).sum() count = np.ceil(norm / (res.seg_frac * scale)) count = int(np.clip(count, res.min_sections * len(points), res.max_sections * len(points))) count = int(count) # parameterize incrementing 0.0 - 1.0 t = np.linspace(0.0, 1.0, count) # decrementing 1.0-0.0 t_d = 1.0 - t n = len(points) - 1 # binomial coefficients, i, and each point iterable = zip(binomial(n), np.arange(len(points)), points) # run the actual interpolation stacked = [((t**i) * (t_d**(n - i))).reshape((-1, 1)) * p * c for c, i, p in iterable] result = np.sum(stacked, axis=0) # test to make sure end points are correct test = np.sum((result[[0, -1]] - points[[0, -1]])**2, axis=1) assert (test < tol.merge).all() assert len(result) >= 2 return result
def inventory(self, modules_inventory=False): """ Get chassis inventory. :param modules_inventory: True - read modules inventory, false - don't read. """ self.c_info = self.get_attributes() for m_index, m_portcounts in enumerate(self.c_info['c_portcounts'].split()): if int(m_portcounts): module = XenaModule(parent=self, index=m_index) if modules_inventory: module.inventory()
Get chassis inventory. :param modules_inventory: True - read modules inventory, false - don't read.
Below is the instruction that describes the task: ### Input: Get chassis inventory. :param modules_inventory: True - read modules inventory, false - don't read. ### Response: def inventory(self, modules_inventory=False): """ Get chassis inventory. :param modules_inventory: True - read modules inventory, false - don't read. """ self.c_info = self.get_attributes() for m_index, m_portcounts in enumerate(self.c_info['c_portcounts'].split()): if int(m_portcounts): module = XenaModule(parent=self, index=m_index) if modules_inventory: module.inventory()
def foldOneLine(outbuf, input, lineLength = 75): """ Folding line procedure that ensures multi-byte utf-8 sequences are not broken across lines TO-DO: This all seems odd. Is it still needed, especially in python3? """ if len(input) < lineLength: # Optimize for unfolded line case try: outbuf.write(bytes(input, 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write(str_(input)) else: # Look for valid utf8 range and write that out start = 0 written = 0 counter = 0 # counts line size in bytes decoded = to_unicode(input) length = len(to_basestring(input)) while written < length: s = decoded[start] # take one char size = len(to_basestring(s)) # calculate it's size in bytes if counter + size > lineLength: try: outbuf.write(bytes("\r\n ", 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write("\r\n ") counter = 1 # one for space if str is unicode_type: outbuf.write(to_unicode(s)) else: # fall back on py2 syntax outbuf.write(s.encode('utf-8')) written += size counter += size start += 1 try: outbuf.write(bytes("\r\n", 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write("\r\n")
Folding line procedure that ensures multi-byte utf-8 sequences are not broken across lines TO-DO: This all seems odd. Is it still needed, especially in python3?
Below is the instruction that describes the task: ### Input: Folding line procedure that ensures multi-byte utf-8 sequences are not broken across lines TO-DO: This all seems odd. Is it still needed, especially in python3? ### Response: def foldOneLine(outbuf, input, lineLength = 75): """ Folding line procedure that ensures multi-byte utf-8 sequences are not broken across lines TO-DO: This all seems odd. Is it still needed, especially in python3? """ if len(input) < lineLength: # Optimize for unfolded line case try: outbuf.write(bytes(input, 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write(str_(input)) else: # Look for valid utf8 range and write that out start = 0 written = 0 counter = 0 # counts line size in bytes decoded = to_unicode(input) length = len(to_basestring(input)) while written < length: s = decoded[start] # take one char size = len(to_basestring(s)) # calculate it's size in bytes if counter + size > lineLength: try: outbuf.write(bytes("\r\n ", 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write("\r\n ") counter = 1 # one for space if str is unicode_type: outbuf.write(to_unicode(s)) else: # fall back on py2 syntax outbuf.write(s.encode('utf-8')) written += size counter += size start += 1 try: outbuf.write(bytes("\r\n", 'UTF-8')) except Exception: # fall back on py2 syntax outbuf.write("\r\n")
def get(url, last_modified=None): """Performs a get request to a given url. Returns an empty str on error. """ try: with closing(urllib2.urlopen(url)) as page: if last_modified is not None: last_mod = dateutil.parser.parse(dict(page.info())['last-modified']) if last_mod <= last_modified: return "" return page.read() except urllib2.URLError: return ""
Performs a get request to a given url. Returns an empty str on error.
Below is the instruction that describes the task: ### Input: Performs a get request to a given url. Returns an empty str on error. ### Response: def get(url, last_modified=None): """Performs a get request to a given url. Returns an empty str on error. """ try: with closing(urllib2.urlopen(url)) as page: if last_modified is not None: last_mod = dateutil.parser.parse(dict(page.info())['last-modified']) if last_mod <= last_modified: return "" return page.read() except urllib2.URLError: return ""
def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0, dry_run=0, owner=None, group=None, logger=None): """Create an archive file (eg. zip or tar). 'base_name' is the name of the file to create, minus any format-specific extension; 'format' is the archive format: one of "zip", "tar", "bztar" or "gztar". 'root_dir' is a directory that will be the root directory of the archive; ie. we typically chdir into 'root_dir' before creating the archive. 'base_dir' is the directory where we start archiving from; ie. 'base_dir' will be the common prefix of all files and directories in the archive. 'root_dir' and 'base_dir' both default to the current directory. Returns the name of the archive file. 'owner' and 'group' are used when creating a tar archive. By default, uses the current owner and group. """ save_cwd = os.getcwd() if root_dir is not None: if logger is not None: logger.debug("changing into '%s'", root_dir) base_name = os.path.abspath(base_name) if not dry_run: os.chdir(root_dir) if base_dir is None: base_dir = os.curdir kwargs = {'dry_run': dry_run, 'logger': logger} try: format_info = _ARCHIVE_FORMATS[format] except KeyError: raise ValueError("unknown archive format '%s'" % format) func = format_info[0] for arg, val in format_info[1]: kwargs[arg] = val if format != 'zip': kwargs['owner'] = owner kwargs['group'] = group try: filename = func(base_name, base_dir, **kwargs) finally: if root_dir is not None: if logger is not None: logger.debug("changing back to '%s'", save_cwd) os.chdir(save_cwd) return filename
Create an archive file (eg. zip or tar). 'base_name' is the name of the file to create, minus any format-specific extension; 'format' is the archive format: one of "zip", "tar", "bztar" or "gztar". 'root_dir' is a directory that will be the root directory of the archive; ie. we typically chdir into 'root_dir' before creating the archive. 'base_dir' is the directory where we start archiving from; ie. 'base_dir' will be the common prefix of all files and directories in the archive. 'root_dir' and 'base_dir' both default to the current directory. Returns the name of the archive file. 'owner' and 'group' are used when creating a tar archive. By default, uses the current owner and group.
Below is the instruction that describes the task: ### Input: Create an archive file (eg. zip or tar). 'base_name' is the name of the file to create, minus any format-specific extension; 'format' is the archive format: one of "zip", "tar", "bztar" or "gztar". 'root_dir' is a directory that will be the root directory of the archive; ie. we typically chdir into 'root_dir' before creating the archive. 'base_dir' is the directory where we start archiving from; ie. 'base_dir' will be the common prefix of all files and directories in the archive. 'root_dir' and 'base_dir' both default to the current directory. Returns the name of the archive file. 'owner' and 'group' are used when creating a tar archive. By default, uses the current owner and group. ### Response: def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0, dry_run=0, owner=None, group=None, logger=None): """Create an archive file (eg. zip or tar). 'base_name' is the name of the file to create, minus any format-specific extension; 'format' is the archive format: one of "zip", "tar", "bztar" or "gztar". 'root_dir' is a directory that will be the root directory of the archive; ie. we typically chdir into 'root_dir' before creating the archive. 'base_dir' is the directory where we start archiving from; ie. 'base_dir' will be the common prefix of all files and directories in the archive. 'root_dir' and 'base_dir' both default to the current directory. Returns the name of the archive file. 'owner' and 'group' are used when creating a tar archive. By default, uses the current owner and group. 
""" save_cwd = os.getcwd() if root_dir is not None: if logger is not None: logger.debug("changing into '%s'", root_dir) base_name = os.path.abspath(base_name) if not dry_run: os.chdir(root_dir) if base_dir is None: base_dir = os.curdir kwargs = {'dry_run': dry_run, 'logger': logger} try: format_info = _ARCHIVE_FORMATS[format] except KeyError: raise ValueError("unknown archive format '%s'" % format) func = format_info[0] for arg, val in format_info[1]: kwargs[arg] = val if format != 'zip': kwargs['owner'] = owner kwargs['group'] = group try: filename = func(base_name, base_dir, **kwargs) finally: if root_dir is not None: if logger is not None: logger.debug("changing back to '%s'", save_cwd) os.chdir(save_cwd) return filename
def pformat_program_dump(self, program_dump, program_start=None): """ format a BASIC program dump. Useful for debugging. returns a list of formatted string lines. """ assert isinstance(program_dump, bytearray) if program_start is None: program_start = self.DEFAULT_PROGRAM_START return self.listing.pformat_program_dump(program_dump, program_start)
format a BASIC program dump. Useful for debugging. returns a list of formatted string lines.
Below is the instruction that describes the task: ### Input: format a BASIC program dump. Useful for debugging. returns a list of formatted string lines. ### Response: def pformat_program_dump(self, program_dump, program_start=None): """ format a BASIC program dump. Useful for debugging. returns a list of formatted string lines. """ assert isinstance(program_dump, bytearray) if program_start is None: program_start = self.DEFAULT_PROGRAM_START return self.listing.pformat_program_dump(program_dump, program_start)
def updateCurrentNetworkInNdex(self, body, verbose=None): """ Update current network's record in NDEx :param body: Properties required to update a network record in NDEx. :param verbose: print more :returns: 200: successful operation; 404: Network does not exist """ surl=self.___url sv=surl.split('/')[-1] surl=surl.rstrip(sv+'/') response=api(url=surl+'/cyndex2/'+sv+'/networks/current', method="PUT", body=body, verbose=verbose) return response
Update current network's record in NDEx :param body: Properties required to update a network record in NDEx. :param verbose: print more :returns: 200: successful operation; 404: Network does not exist
Below is the instruction that describes the task: ### Input: Update current network's record in NDEx :param body: Properties required to update a network record in NDEx. :param verbose: print more :returns: 200: successful operation; 404: Network does not exist ### Response: def updateCurrentNetworkInNdex(self, body, verbose=None): """ Update current network's record in NDEx :param body: Properties required to update a network record in NDEx. :param verbose: print more :returns: 200: successful operation; 404: Network does not exist """ surl=self.___url sv=surl.split('/')[-1] surl=surl.rstrip(sv+'/') response=api(url=surl+'/cyndex2/'+sv+'/networks/current', method="PUT", body=body, verbose=verbose) return response
def overview(): """ Provides an overview of the duplicate credentials. """ search = Credential.search() search.aggs.bucket('password_count', 'terms', field='secret', order={'_count': 'desc'}, size=20)\ .metric('username_count', 'cardinality', field='username') \ .metric('host_count', 'cardinality', field='host_ip') \ .metric('top_hits', 'top_hits', docvalue_fields=['username'], size=100) response = search.execute() print_line("{0:65} {1:5} {2:5} {3:5} {4}".format("Secret", "Count", "Hosts", "Users", "Usernames")) print_line("-"*100) for entry in response.aggregations.password_count.buckets: usernames = [] for creds in entry.top_hits: usernames.append(creds.username[0]) usernames = list(set(usernames)) print_line("{0:65} {1:5} {2:5} {3:5} {4}".format(entry.key, entry.doc_count, entry.host_count.value, entry.username_count.value, usernames))
Provides an overview of the duplicate credentials.
Below is the instruction that describes the task: ### Input: Provides an overview of the duplicate credentials. ### Response: def overview(): """ Provides an overview of the duplicate credentials. """ search = Credential.search() search.aggs.bucket('password_count', 'terms', field='secret', order={'_count': 'desc'}, size=20)\ .metric('username_count', 'cardinality', field='username') \ .metric('host_count', 'cardinality', field='host_ip') \ .metric('top_hits', 'top_hits', docvalue_fields=['username'], size=100) response = search.execute() print_line("{0:65} {1:5} {2:5} {3:5} {4}".format("Secret", "Count", "Hosts", "Users", "Usernames")) print_line("-"*100) for entry in response.aggregations.password_count.buckets: usernames = [] for creds in entry.top_hits: usernames.append(creds.username[0]) usernames = list(set(usernames)) print_line("{0:65} {1:5} {2:5} {3:5} {4}".format(entry.key, entry.doc_count, entry.host_count.value, entry.username_count.value, usernames))
def show(key=None, display_toolbar=True): """Shows the current context figure in the output area. Parameters ---------- key : hashable, optional Any variable that can be used as a key for a dictionary. display_toolbar: bool (default: True) If True, a toolbar for different mouse interaction is displayed with the figure. Raises ------ KeyError When no context figure is associated with the provided key. Examples -------- >>> import numpy as np >>> import pyplot as plt >>> n = 100 >>> x = np.arange(n) >>> y = np.cumsum(np.random.randn(n)) >>> plt.plot(x,y) >>> plt.show() """ if key is None: figure = current_figure() else: figure = _context['figure_registry'][key] if display_toolbar: if not hasattr(figure, 'pyplot'): figure.pyplot = Toolbar(figure=figure) display(VBox([figure, figure.pyplot])) else: display(figure)
Shows the current context figure in the output area. Parameters ---------- key : hashable, optional Any variable that can be used as a key for a dictionary. display_toolbar: bool (default: True) If True, a toolbar for different mouse interaction is displayed with the figure. Raises ------ KeyError When no context figure is associated with the provided key. Examples -------- >>> import numpy as np >>> import pyplot as plt >>> n = 100 >>> x = np.arange(n) >>> y = np.cumsum(np.random.randn(n)) >>> plt.plot(x,y) >>> plt.show()
Below is the instruction that describes the task: ### Input: Shows the current context figure in the output area. Parameters ---------- key : hashable, optional Any variable that can be used as a key for a dictionary. display_toolbar: bool (default: True) If True, a toolbar for different mouse interaction is displayed with the figure. Raises ------ KeyError When no context figure is associated with the provided key. Examples -------- >>> import numpy as np >>> import pyplot as plt >>> n = 100 >>> x = np.arange(n) >>> y = np.cumsum(np.random.randn(n)) >>> plt.plot(x,y) >>> plt.show() ### Response: def show(key=None, display_toolbar=True): """Shows the current context figure in the output area. Parameters ---------- key : hashable, optional Any variable that can be used as a key for a dictionary. display_toolbar: bool (default: True) If True, a toolbar for different mouse interaction is displayed with the figure. Raises ------ KeyError When no context figure is associated with the provided key. Examples -------- >>> import numpy as np >>> import pyplot as plt >>> n = 100 >>> x = np.arange(n) >>> y = np.cumsum(np.random.randn(n)) >>> plt.plot(x,y) >>> plt.show() """ if key is None: figure = current_figure() else: figure = _context['figure_registry'][key] if display_toolbar: if not hasattr(figure, 'pyplot'): figure.pyplot = Toolbar(figure=figure) display(VBox([figure, figure.pyplot])) else: display(figure)
def add_query_to_url(url, extra_query): '''Adds an extra query to URL, returning the new URL. Extra query may be a dict or a list as returned by :func:`urllib.parse.parse_qsl()` and :func:`urllib.parse.parse_qs()`. ''' split = urllib.parse.urlsplit(url) merged_query = urllib.parse.parse_qsl(split.query) if isinstance(extra_query, dict): for k, v in extra_query.items(): if not isinstance(v, (tuple, list)): merged_query.append((k, v)) else: for cv in v: merged_query.append((k, cv)) else: merged_query.extend(extra_query) merged_split = urllib.parse.SplitResult( split.scheme, split.netloc, split.path, urllib.parse.urlencode(merged_query), split.fragment, ) return merged_split.geturl()
Adds an extra query to URL, returning the new URL. Extra query may be a dict or a list as returned by :func:`urllib.parse.parse_qsl()` and :func:`urllib.parse.parse_qs()`.
Below is the instruction that describes the task: ### Input: Adds an extra query to URL, returning the new URL. Extra query may be a dict or a list as returned by :func:`urllib.parse.parse_qsl()` and :func:`urllib.parse.parse_qs()`. ### Response: def add_query_to_url(url, extra_query): '''Adds an extra query to URL, returning the new URL. Extra query may be a dict or a list as returned by :func:`urllib.parse.parse_qsl()` and :func:`urllib.parse.parse_qs()`. ''' split = urllib.parse.urlsplit(url) merged_query = urllib.parse.parse_qsl(split.query) if isinstance(extra_query, dict): for k, v in extra_query.items(): if not isinstance(v, (tuple, list)): merged_query.append((k, v)) else: for cv in v: merged_query.append((k, cv)) else: merged_query.extend(extra_query) merged_split = urllib.parse.SplitResult( split.scheme, split.netloc, split.path, urllib.parse.urlencode(merged_query), split.fragment, ) return merged_split.geturl()
def function_name(self): """ Returns name of the function to invoke. If no function identifier is provided, this method will return name of the only function from the template :return string: Name of the function :raises InvokeContextException: If function identifier is not provided """ if self._function_identifier: return self._function_identifier # Function Identifier is *not* provided. If there is only one function in the template, # default to it. all_functions = [f for f in self._function_provider.get_all()] if len(all_functions) == 1: return all_functions[0].name # Get all the available function names to print helpful exception message all_function_names = [f.name for f in all_functions] # There are more functions in the template, and function identifier is not provided, hence raise. raise InvokeContextException("You must provide a function identifier (function's Logical ID in the template). " "Possible options in your template: {}".format(all_function_names))
Returns name of the function to invoke. If no function identifier is provided, this method will return name of the only function from the template :return string: Name of the function :raises InvokeContextException: If function identifier is not provided
Below is the instruction that describes the task: ### Input: Returns name of the function to invoke. If no function identifier is provided, this method will return name of the only function from the template :return string: Name of the function :raises InvokeContextException: If function identifier is not provided ### Response: def function_name(self): """ Returns name of the function to invoke. If no function identifier is provided, this method will return name of the only function from the template :return string: Name of the function :raises InvokeContextException: If function identifier is not provided """ if self._function_identifier: return self._function_identifier # Function Identifier is *not* provided. If there is only one function in the template, # default to it. all_functions = [f for f in self._function_provider.get_all()] if len(all_functions) == 1: return all_functions[0].name # Get all the available function names to print helpful exception message all_function_names = [f.name for f in all_functions] # There are more functions in the template, and function identifier is not provided, hence raise. raise InvokeContextException("You must provide a function identifier (function's Logical ID in the template). " "Possible options in your template: {}".format(all_function_names))
def get_install_id(filename): """ Return install id from library named in `filename` Returns None if no install id, or if this is not an object file. Parameters ---------- filename : str filename of library Returns ------- install_id : str install id of library `filename`, or None if no install id """ lines = _cmd_out_err(['otool', '-D', filename]) if not _line0_says_object(lines[0], filename): return None if len(lines) == 1: return None if len(lines) != 2: raise InstallNameError('Unexpected otool output ' + '\n'.join(lines)) return lines[1].strip()
Return install id from library named in `filename` Returns None if no install id, or if this is not an object file. Parameters ---------- filename : str filename of library Returns ------- install_id : str install id of library `filename`, or None if no install id
Below is the instruction that describes the task: ### Input: Return install id from library named in `filename` Returns None if no install id, or if this is not an object file. Parameters ---------- filename : str filename of library Returns ------- install_id : str install id of library `filename`, or None if no install id ### Response: def get_install_id(filename): """ Return install id from library named in `filename` Returns None if no install id, or if this is not an object file. Parameters ---------- filename : str filename of library Returns ------- install_id : str install id of library `filename`, or None if no install id """ lines = _cmd_out_err(['otool', '-D', filename]) if not _line0_says_object(lines[0], filename): return None if len(lines) == 1: return None if len(lines) != 2: raise InstallNameError('Unexpected otool output ' + '\n'.join(lines)) return lines[1].strip()
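The parsing half of `get_install_id` can be separated from the `otool` subprocess call and tested off macOS. The sketch below replays the same three checks on canned output; the first-line prefix check is a simplified stand-in for `_line0_says_object`, whose exact behavior is not shown in this record:

```python
class InstallNameError(Exception):
    pass

def parse_install_id(lines, filename):
    """Interpret `otool -D` output lines for `filename`."""
    if not lines[0].startswith(filename + ':'):
        return None  # otool did not recognize it as an object file
    if len(lines) == 1:
        return None  # object file, but no install id set
    if len(lines) != 2:
        raise InstallNameError('Unexpected otool output ' + '\n'.join(lines))
    return lines[1].strip()

print(parse_install_id(['liba.dylib:', '@rpath/liba.dylib'], 'liba.dylib'))
# @rpath/liba.dylib
```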
def _get_wait_in(self, flag=True, all_domain=True): """ Set `wait_inputs` flags for data nodes that: - are estimated from functions with a domain function, and - are waiting inputs. :param flag: Value to be set. If None `wait_inputs` are just cleaned. :type flag: bool, None, optional :param all_domain: Set `wait_inputs` flags for data nodes that are estimated from functions with a domain function. :type all_domain: bool, optional """ wait_in = {} for n, a in self.data_nodes.items(): if n is not SINK and a['wait_inputs']: wait_in[n] = flag if all_domain: for a in self.function_nodes.values(): if 'input_domain' in a: wait_in.update(dict.fromkeys(a['outputs'], flag)) for n, a in self.sub_dsp_nodes.items(): if 'function' in a: dsp = a['function'] wait_in[dsp] = w = dsp._get_wait_in(flag=flag) if 'input_domain' not in a: o = a['outputs'] w = [o[k] for k in set(o).intersection(w)] wait_in.update(dict.fromkeys(w, flag)) if 'input_domain' in a: wait_in[n] = flag wait_in.update(dict.fromkeys(a['outputs'].values(), flag)) return wait_in
Set `wait_inputs` flags for data nodes that: - are estimated from functions with a domain function, and - are waiting inputs. :param flag: Value to be set. If None `wait_inputs` are just cleaned. :type flag: bool, None, optional :param all_domain: Set `wait_inputs` flags for data nodes that are estimated from functions with a domain function. :type all_domain: bool, optional
Below is the instruction that describes the task: ### Input: Set `wait_inputs` flags for data nodes that: - are estimated from functions with a domain function, and - are waiting inputs. :param flag: Value to be set. If None `wait_inputs` are just cleaned. :type flag: bool, None, optional :param all_domain: Set `wait_inputs` flags for data nodes that are estimated from functions with a domain function. :type all_domain: bool, optional ### Response: def _get_wait_in(self, flag=True, all_domain=True): """ Set `wait_inputs` flags for data nodes that: - are estimated from functions with a domain function, and - are waiting inputs. :param flag: Value to be set. If None `wait_inputs` are just cleaned. :type flag: bool, None, optional :param all_domain: Set `wait_inputs` flags for data nodes that are estimated from functions with a domain function. :type all_domain: bool, optional """ wait_in = {} for n, a in self.data_nodes.items(): if n is not SINK and a['wait_inputs']: wait_in[n] = flag if all_domain: for a in self.function_nodes.values(): if 'input_domain' in a: wait_in.update(dict.fromkeys(a['outputs'], flag)) for n, a in self.sub_dsp_nodes.items(): if 'function' in a: dsp = a['function'] wait_in[dsp] = w = dsp._get_wait_in(flag=flag) if 'input_domain' not in a: o = a['outputs'] w = [o[k] for k in set(o).intersection(w)] wait_in.update(dict.fromkeys(w, flag)) if 'input_domain' in a: wait_in[n] = flag wait_in.update(dict.fromkeys(a['outputs'].values(), flag)) return wait_in
def get_uint32(self): """Read the next token and interpret it as a 32-bit unsigned integer. @raises dns.exception.SyntaxError: @rtype: int """ token = self.get().unescape() if not token.is_identifier(): raise dns.exception.SyntaxError('expecting an identifier') if not token.value.isdigit(): raise dns.exception.SyntaxError('expecting an integer') value = long(token.value) if value < 0 or value > 4294967295L: raise dns.exception.SyntaxError('%d is not an unsigned 32-bit integer' % value) return value
Read the next token and interpret it as a 32-bit unsigned integer. @raises dns.exception.SyntaxError: @rtype: int
Below is the instruction that describes the task: ### Input: Read the next token and interpret it as a 32-bit unsigned integer. @raises dns.exception.SyntaxError: @rtype: int ### Response: def get_uint32(self): """Read the next token and interpret it as a 32-bit unsigned integer. @raises dns.exception.SyntaxError: @rtype: int """ token = self.get().unescape() if not token.is_identifier(): raise dns.exception.SyntaxError('expecting an identifier') if not token.value.isdigit(): raise dns.exception.SyntaxError('expecting an integer') value = long(token.value) if value < 0 or value > 4294967295L: raise dns.exception.SyntaxError('%d is not an unsigned 32-bit integer' % value) return value
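The bounds check is the substantive part of `get_uint32`. Stripped of the tokenizer (and written for Python 3, so plain `int` replaces the Python 2 `long`), it reduces to the sketch below; note the largest unsigned 32-bit value is 2**32 - 1 = 4294967295:

```python
def parse_uint32(text):
    """Validate `text` as a decimal unsigned 32-bit integer."""
    if not text.isdigit():   # also rejects '-1', so no lower-bound check is needed
        raise ValueError('expecting an integer')
    value = int(text)
    if value > 0xFFFFFFFF:   # 2**32 - 1
        raise ValueError('%d is not an unsigned 32-bit integer' % value)
    return value

print(parse_uint32('4294967295'))
# 4294967295
```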
def parse_obj(o): """ Parses a given dictionary with the key being the OBD PID and the value being the value returned by the OBD interface :param dict o: :return: """ r = {} for k, v in o.items(): if is_unable_to_connect(v): r[k] = None try: r[k] = parse_value(k, v) except (ObdPidParserUnknownError, AttributeError, TypeError): r[k] = None return r
Parses a given dictionary with the key being the OBD PID and the value being the value returned by the OBD interface :param dict o: :return:
Below is the instruction that describes the task: ### Input: Parses a given dictionary with the key being the OBD PID and the value being the value returned by the OBD interface :param dict o: :return: ### Response: def parse_obj(o): """ Parses a given dictionary with the key being the OBD PID and the value being the value returned by the OBD interface :param dict o: :return: """ r = {} for k, v in o.items(): if is_unable_to_connect(v): r[k] = None try: r[k] = parse_value(k, v) except (ObdPidParserUnknownError, AttributeError, TypeError): r[k] = None return r
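The per-key try/except pattern in `parse_obj` generalizes to any table of fallible parsers. The sketch below uses a hypothetical parser table — the `'0C'` engine-RPM formula is illustrative only, not the library's actual OBD decoding:

```python
def safe_parse(values, parsers):
    """Parse each raw value with its key's parser; map any
    failure (or missing parser) to None instead of raising."""
    out = {}
    for key, raw in values.items():
        try:
            out[key] = parsers[key](raw)
        except (KeyError, ValueError, TypeError):
            out[key] = None
    return out

readings = {'0C': '1A F8', '0D': 'NO DATA'}
parsers = {'0C': lambda v: int(v.replace(' ', ''), 16) / 4}  # illustrative RPM decode
print(safe_parse(readings, parsers))
# {'0C': 1726.0, '0D': None}
```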
def _handle_account(self, data, ts): """ Handles Account related data. translation table for channel names: Data Channels os - Orders hos - Historical Orders ps - Positions hts - Trades (snapshot) te - Trade Event tu - Trade Update ws - Wallets bu - Balance Info miu - Margin Info fiu - Funding Info fos - Offers hfos - Historical Offers fcs - Credits hfcs - Historical Credits fls - Loans hfls - Historical Loans htfs - Funding Trades n - Notifications (WIP) :param dtype: :param data: :param ts: :return: """ # channel_short, data chan_id, channel_short_name, *data = data entry = (channel_short_name, data, ts) self.account.put(entry)
Handles Account related data. translation table for channel names: Data Channels os - Orders hos - Historical Orders ps - Positions hts - Trades (snapshot) te - Trade Event tu - Trade Update ws - Wallets bu - Balance Info miu - Margin Info fiu - Funding Info fos - Offers hfos - Historical Offers fcs - Credits hfcs - Historical Credits fls - Loans hfls - Historical Loans htfs - Funding Trades n - Notifications (WIP) :param dtype: :param data: :param ts: :return:
Below is the instruction that describes the task: ### Input: Handles Account related data. translation table for channel names: Data Channels os - Orders hos - Historical Orders ps - Positions hts - Trades (snapshot) te - Trade Event tu - Trade Update ws - Wallets bu - Balance Info miu - Margin Info fiu - Funding Info fos - Offers hfos - Historical Offers fcs - Credits hfcs - Historical Credits fls - Loans hfls - Historical Loans htfs - Funding Trades n - Notifications (WIP) :param dtype: :param data: :param ts: :return: ### Response: def _handle_account(self, data, ts): """ Handles Account related data. translation table for channel names: Data Channels os - Orders hos - Historical Orders ps - Positions hts - Trades (snapshot) te - Trade Event tu - Trade Update ws - Wallets bu - Balance Info miu - Margin Info fiu - Funding Info fos - Offers hfos - Historical Offers fcs - Credits hfcs - Historical Credits fls - Loans hfls - Historical Loans htfs - Funding Trades n - Notifications (WIP) :param dtype: :param data: :param ts: :return: """ # channel_short, data chan_id, channel_short_name, *data = data entry = (channel_short_name, data, ts) self.account.put(entry)
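The only moving part in `_handle_account` is the extended unpacking of the incoming frame into channel id, short channel name, and payload. A minimal standalone version (function and variable names here are illustrative, not the library's API):

```python
from queue import Queue

def route_account_message(message, ts, account_queue):
    """Split [chan_id, short_name, *payload] and enqueue
    (short_name, payload, ts) for the account consumer."""
    chan_id, channel_short_name, *data = message
    account_queue.put((channel_short_name, data, ts))

q = Queue()
route_account_message([0, 'wu', ['exchange', 'BTC', 1.5]], 1234567890, q)
print(q.get())
# ('wu', [['exchange', 'BTC', 1.5]], 1234567890)
```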
def new(name): """Make a new interpolator by name. Make a new interpolator from the libvips class nickname. For example:: inter = pyvips.Interpolator.new('bicubic') You can get a list of all supported interpolators from the command-line with:: $ vips -l interpolate See for example :meth:`.affine`. """ # logger.debug('VipsInterpolate.new: name = %s', name) vi = vips_lib.vips_interpolate_new(_to_bytes(name)) if vi == ffi.NULL: raise Error('no such interpolator {0}'.format(name)) return Interpolate(vi)
Make a new interpolator by name. Make a new interpolator from the libvips class nickname. For example:: inter = pyvips.Interpolator.new('bicubic') You can get a list of all supported interpolators from the command-line with:: $ vips -l interpolate See for example :meth:`.affine`.
Below is the instruction that describes the task: ### Input: Make a new interpolator by name. Make a new interpolator from the libvips class nickname. For example:: inter = pyvips.Interpolator.new('bicubic') You can get a list of all supported interpolators from the command-line with:: $ vips -l interpolate See for example :meth:`.affine`. ### Response: def new(name): """Make a new interpolator by name. Make a new interpolator from the libvips class nickname. For example:: inter = pyvips.Interpolator.new('bicubic') You can get a list of all supported interpolators from the command-line with:: $ vips -l interpolate See for example :meth:`.affine`. """ # logger.debug('VipsInterpolate.new: name = %s', name) vi = vips_lib.vips_interpolate_new(_to_bytes(name)) if vi == ffi.NULL: raise Error('no such interpolator {0}'.format(name)) return Interpolate(vi)
def retinotopy_model(name='benson17', hemi=None, radius=np.pi/2.5, sphere_radius=100.0, search_paths=None, update=False): ''' retinotopy_model() yields a standard retinotopy model of V1, V2, and V3 as well as other areas (depending on the options). The model itself is represented as a RegisteredRetinotopyModel object, which may internally store a set of meshes with values at the vertices that define the polar angle and eccentricity, or as another object (such as with the SchiraModel). The mesh models are loaded from files in the neuropythy lib directory. Because the model's class is RegisteredRetinotopyModel, the details of the model's 2D projection onto the cortical surface are included in the model. The following options may be given: * name (default: 'benson17') indicates the name of the model to load; the Benson17 model is included with the neuropythy library along with various others. If name is a filename, this file is loaded (must be a valid fmm or fmm.gz file). Currently, models that are included with neuropythy are: Benson17, Benson17-uncorrected, Schira10, and Benson14 (which is identical to Schira10, as Schira10 was used by Benson14). * hemi (default: None) specifies that the model should go with a particular hemisphere, either 'lh' or 'rh'. Generally, model files are named lh.<model>.fmm.gz or rh.<model>.fmm.gz, but models intended for the fsaverage_sym don't necessarily get a prefix. Note that you can leave this as none and just specify that the model name is 'lh.model' instead. * radius, sphere_radius (defaults: pi/2.5 and 100.0, respectively) specify the radius of the projection (on the surface of the sphere) and the radius of the sphere (100 is the radius for Freesurfer spheres). See neuropythy.registration.load_fmm_model for more details. * search_paths (default: None) specifies directories in which to look for fmm model files. No matter what is included in these files, the neuropythy library's folders are searched last.
''' origname = name tup = (name,hemi,radius,sphere_radius) if tup in retinotopy_model.cache: return retinotopy_model.cache[tup] if os.path.isfile(name): fname = name name = None elif name.lower() in ['schira', 'schira10', 'schira2010', 'benson14', 'benson2014']: tmp = get_default_schira_model() retinotopy_model.cache[tup] = tmp return tmp else: name = name if hemi is None else ('%s.%s' % (hemi.lower(), name)) if len(name) > 4 and name[-4:] == '.fmm': fname = name name = name[:-4] elif len(name) > 7 and name[-7:] == '.fmm.gz': fname = name name = name[:-7] else: fname = name + '.fmm' # Find it in the search paths... spaths = ([] if search_paths is None else search_paths) + _retinotopy_model_paths fname = next( (os.path.join(path, nm0) for path in spaths for nm0 in os.listdir(path) for nm in [nm0[:-4] if len(nm0) > 4 and nm0[-4:] == '.fmm' else \ nm0[:-7] if len(nm0) > 7 and nm0[-7:] == '.fmm.gz' else \ None] if nm is not None and nm == name), None) if fname is None: raise ValueError('Cannot find an FFM file with the name %s' % origname) # Okay, load the model... mdl = load_fmm_model(fname).persist() retinotopy_model.cache[tup] = mdl return mdl
retinotopy_model() yields a standard retinotopy model of V1, V2, and V3 as well as other areas (depending on the options). The model itself is represented as a RegisteredRetinotopyModel object, which may internally store a set of meshes with values at the vertices that define the polar angle and eccentricity, or as another object (such as with the SchiraModel). The mesh models are loaded from files in the neuropythy lib directory. Because the model's class is RegisteredRetinotopyModel, the details of the model's 2D projection onto the cortical surface are included in the model. The following options may be given: * name (default: 'benson17') indicates the name of the model to load; the Benson17 model is included with the neuropythy library along with various others. If name is a filename, this file is loaded (must be a valid fmm or fmm.gz file). Currently, models that are included with neuropythy are: Benson17, Benson17-uncorrected, Schira10, and Benson14 (which is identical to Schira10, as Schira10 was used by Benson14). * hemi (default: None) specifies that the model should go with a particular hemisphere, either 'lh' or 'rh'. Generally, model files are named lh.<model>.fmm.gz or rh.<model>.fmm.gz, but models intended for the fsaverage_sym don't necessarily get a prefix. Note that you can leave this as none and just specify that the model name is 'lh.model' instead. * radius, sphere_radius (defaults: pi/2.5 and 100.0, respectively) specify the radius of the projection (on the surface of the sphere) and the radius of the sphere (100 is the radius for Freesurfer spheres). See neuropythy.registration.load_fmm_model for more details. * search_paths (default: None) specifies directories in which to look for fmm model files. No matter what is included in these files, the neuropythy library's folders are searched last.
Below is the instruction that describes the task: ### Input: retinotopy_model() yields a standard retinotopy model of V1, V2, and V3 as well as other areas (depending on the options). The model itself is represented as a RegisteredRetinotopyModel object, which may internally store a set of meshes with values at the vertices that define the polar angle and eccentricity, or as another object (such as with the SchiraModel). The mesh models are loaded from files in the neuropythy lib directory. Because the model's class is RegisteredRetinotopyModel, the details of the model's 2D projection onto the cortical surface are included in the model. The following options may be given: * name (default: 'benson17') indicates the name of the model to load; the Benson17 model is included with the neuropythy library along with various others. If name is a filename, this file is loaded (must be a valid fmm or fmm.gz file). Currently, models that are included with neuropythy are: Benson17, Benson17-uncorrected, Schira10, and Benson14 (which is identical to Schira10, as Schira10 was used by Benson14). * hemi (default: None) specifies that the model should go with a particular hemisphere, either 'lh' or 'rh'. Generally, model files are named lh.<model>.fmm.gz or rh.<model>.fmm.gz, but models intended for the fsaverage_sym don't necessarily get a prefix. Note that you can leave this as none and just specify that the model name is 'lh.model' instead. * radius, sphere_radius (defaults: pi/2.5 and 100.0, respectively) specify the radius of the projection (on the surface of the sphere) and the radius of the sphere (100 is the radius for Freesurfer spheres). See neuropythy.registration.load_fmm_model for more details. * search_paths (default: None) specifies directories in which to look for fmm model files. No matter what is included in these files, the neuropythy library's folders are searched last.
### Response: def retinotopy_model(name='benson17', hemi=None, radius=np.pi/2.5, sphere_radius=100.0, search_paths=None, update=False): ''' retinotopy_model() yields a standard retinotopy model of V1, V2, and V3 as well as other areas (depending on the options). The model itself is represented as a RegisteredRetinotopyModel object, which may internally store a set of meshes with values at the vertices that define the polar angle and eccentricity, or as another object (such as with the SchiraModel). The mesh models are loaded from files in the neuropythy lib directory. Because the model's class is RegisteredRetinotopyModel, the details of the model's 2D projection onto the cortical surface are included in the model. The following options may be given: * name (default: 'benson17') indicates the name of the model to load; the Benson17 model is included with the neuropythy library along with various others. If name is a filename, this file is loaded (must be a valid fmm or fmm.gz file). Currently, models that are included with neuropythy are: Benson17, Benson17-uncorrected, Schira10, and Benson14 (which is identical to Schira10, as Schira10 was used by Benson14). * hemi (default: None) specifies that the model should go with a particular hemisphere, either 'lh' or 'rh'. Generally, model files are named lh.<model>.fmm.gz or rh.<model>.fmm.gz, but models intended for the fsaverage_sym don't necessarily get a prefix. Note that you can leave this as none and just specify that the model name is 'lh.model' instead. * radius, sphere_radius (defaults: pi/2.5 and 100.0, respectively) specify the radius of the projection (on the surface of the sphere) and the radius of the sphere (100 is the radius for Freesurfer spheres). See neuropythy.registration.load_fmm_model for more details. * search_paths (default: None) specifies directories in which to look for fmm model files. No matter what is included in these files, the neuropythy library's folders are searched last.
''' origname = name tup = (name,hemi,radius,sphere_radius) if tup in retinotopy_model.cache: return retinotopy_model.cache[tup] if os.path.isfile(name): fname = name name = None elif name.lower() in ['schira', 'schira10', 'schira2010', 'benson14', 'benson2014']: tmp = get_default_schira_model() retinotopy_model.cache[tup] = tmp return tmp else: name = name if hemi is None else ('%s.%s' % (hemi.lower(), name)) if len(name) > 4 and name[-4:] == '.fmm': fname = name name = name[:-4] elif len(name) > 7 and name[-7:] == '.fmm.gz': fname = name name = name[:-7] else: fname = name + '.fmm' # Find it in the search paths... spaths = ([] if search_paths is None else search_paths) + _retinotopy_model_paths fname = next( (os.path.join(path, nm0) for path in spaths for nm0 in os.listdir(path) for nm in [nm0[:-4] if len(nm0) > 4 and nm0[-4:] == '.fmm' else \ nm0[:-7] if len(nm0) > 7 and nm0[-7:] == '.fmm.gz' else \ None] if nm is not None and nm == name), None) if fname is None: raise ValueError('Cannot find an FFM file with the name %s' % origname) # Okay, load the model... mdl = load_fmm_model(fname).persist() retinotopy_model.cache[tup] = mdl return mdl
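The filename resolution inside `retinotopy_model` — finding the first `.fmm` or `.fmm.gz` match across search paths via a generator fed to `next(..., None)` — can be sketched and exercised with temporary directories. This is a simplification: the original strips extensions before comparing names rather than rebuilding candidate filenames as done here:

```python
import os
import tempfile

def find_model_file(name, search_paths):
    """Return the first path/name.fmm or path/name.fmm.gz
    found in search_paths, or None if absent."""
    return next(
        (os.path.join(path, fn)
         for path in search_paths
         for fn in sorted(os.listdir(path))
         if fn in (name + '.fmm', name + '.fmm.gz')),
        None)

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'lh.benson17.fmm.gz'), 'w').close()
    print(find_model_file('lh.benson17', [d]) is not None)
    # True
```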
def autodoc_applicationmodel(module): """Improves the docstrings of application models when called at the bottom of the respective module. |autodoc_applicationmodel| requires, similar to |autodoc_basemodel|, that both the application model and its base model are defined in the conventional way. """ autodoc_tuple2doc(module) name_applicationmodel = module.__name__ name_basemodel = name_applicationmodel.split('_')[0] module_basemodel = importlib.import_module(name_basemodel) substituter = Substituter(module_basemodel.substituter) substituter.add_module(module) substituter.update_masters() module.substituter = substituter
Improves the docstrings of application models when called at the bottom of the respective module. |autodoc_applicationmodel| requires, similar to |autodoc_basemodel|, that both the application model and its base model are defined in the conventional way.
Below is the instruction that describes the task: ### Input: Improves the docstrings of application models when called at the bottom of the respective module. |autodoc_applicationmodel| requires, similar to |autodoc_basemodel|, that both the application model and its base model are defined in the conventional way. ### Response: def autodoc_applicationmodel(module): """Improves the docstrings of application models when called at the bottom of the respective module. |autodoc_applicationmodel| requires, similar to |autodoc_basemodel|, that both the application model and its base model are defined in the conventional way. """ autodoc_tuple2doc(module) name_applicationmodel = module.__name__ name_basemodel = name_applicationmodel.split('_')[0] module_basemodel = importlib.import_module(name_basemodel) substituter = Substituter(module_basemodel.substituter) substituter.add_module(module) substituter.update_masters() module.substituter = substituter
def fetch_host_ip_and_country(host: str) -> Tuple: """ Fetch ip and country by host """ ip = fetch_host_ip(host) if not ip: return '', '' country = fetch_country_by_ip(ip) return ip, country
Fetch ip and country by host
Below is the instruction that describes the task: ### Input: Fetch ip and country by host ### Response: def fetch_host_ip_and_country(host: str) -> Tuple: """ Fetch ip and country by host """ ip = fetch_host_ip(host) if not ip: return '', '' country = fetch_country_by_ip(ip) return ip, country