def get_command_and_defaults(name, exclude_packages=None, exclude_command_class=None):
    """
    Searches "django.core" and the apps in settings.INSTALLED_APPS to find
    the named command class, optionally skipping packages or a particular
    command class.

    Gathers the command's default options and returns the command and options
    dictionary as a two-tuple: (command, options). Returns (None, {}) if the
    command class could not be found.
    """
    command = get_command_class(name, exclude_packages=exclude_packages,
                                exclude_command_class=exclude_command_class)
    defaults = {}
    if command is not None:
        command = command()
        defaults = command.get_option_defaults() \
            if isinstance(command, Command) \
            else get_option_defaults(command)
    return (command, defaults)
Searches "django.core" and the apps in settings.INSTALLED_APPS to find the named command class, optionally skipping packages or a particular command class. Gathers the command's default options and returns the command and options dictionary as a two-tuple: (command, options). Returns (None, {}) if the command class could not be found.
def update_payload(self, fields=None):
    """Wrap payload in ``os_default_template``.

    Relates to `Redmine #21169`_.

    .. _Redmine #21169: http://projects.theforeman.org/issues/21169
    """
    payload = super(OSDefaultTemplate, self).update_payload(fields)
    return {'os_default_template': payload}
Wrap payload in ``os_default_template`` relates to `Redmine #21169`_. .. _Redmine #21169: http://projects.theforeman.org/issues/21169
def _a_star_search_internal(graph, start, goal):
    """Performs an A* search, returning information about whether the goal
    node was reached and path cost information that can be used to
    reconstruct the path.
    """
    frontier = PriorityQueue()
    frontier.put(start, 0)
    came_from = {start: None}
    cost_so_far = {start: 0}
    goal_reached = False
    while not frontier.empty():
        current = frontier.get()
        if current == goal:
            goal_reached = True
            break
        for next_node in graph.neighbors(current):
            new_cost = cost_so_far[current] + graph.edge_cost(current, next_node)
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                priority = new_cost + heuristic(goal, next_node)
                frontier.put(next_node, priority)
                came_from[next_node] = current
    return came_from, cost_so_far, goal_reached
Performs an A* search, returning information about whether the goal node was reached and path cost information that can be used to reconstruct the path.
def coherence_spectrogram(self, other, stride, fftlength=None, overlap=None,
                          window='hann', nproc=1):
    """Calculate the coherence spectrogram between this `TimeSeries`
    and other.

    Parameters
    ----------
    other : `TimeSeries`
        the second `TimeSeries` in this CSD calculation

    stride : `float`
        number of seconds in single PSD (column of spectrogram)

    fftlength : `float`
        number of seconds in single FFT

    overlap : `float`, optional
        number of seconds of overlap between FFTs, defaults to the
        recommended overlap for the given window (if given), or 0

    window : `str`, `numpy.ndarray`, optional
        window function to apply to timeseries prior to FFT, see
        :func:`scipy.signal.get_window` for details on acceptable formats

    nproc : `int`
        number of parallel processes to use when calculating
        individual coherence spectra.

    Returns
    -------
    spectrogram : `~gwpy.spectrogram.Spectrogram`
        time-frequency coherence spectrogram as generated from the
        input time-series.
    """
    from ..spectrogram.coherence import from_timeseries
    return from_timeseries(self, other, stride, fftlength=fftlength,
                           overlap=overlap, window=window, nproc=nproc)
Calculate the coherence spectrogram between this `TimeSeries` and other. Parameters ---------- other : `TimeSeries` the second `TimeSeries` in this CSD calculation stride : `float` number of seconds in single PSD (column of spectrogram) fftlength : `float` number of seconds in single FFT overlap : `float`, optional number of seconds of overlap between FFTs, defaults to the recommended overlap for the given window (if given), or 0 window : `str`, `numpy.ndarray`, optional window function to apply to timeseries prior to FFT, see :func:`scipy.signal.get_window` for details on acceptable formats nproc : `int` number of parallel processes to use when calculating individual coherence spectra. Returns ------- spectrogram : `~gwpy.spectrogram.Spectrogram` time-frequency coherence spectrogram as generated from the input time-series.
def derive_fields(self):
    """
    Derives our fields.
    """
    if self.fields is not None:
        fields = list(self.fields)
    else:
        form = self.form
        fields = []
        for field in form:
            fields.append(field.name)

    # this is slightly confusing but we add in readonly fields here
    # because they will still need to be displayed
    readonly = self.derive_readonly()
    if readonly:
        fields += readonly

    # remove any excluded fields
    for exclude in self.derive_exclude():
        if exclude in fields:
            fields.remove(exclude)

    return fields
Derives our fields.
def _is_allowed(attr, *args):
    """
    Test if a given attribute is allowed according to the current set of
    configured service backends.
    """
    for backend in _get_backends():
        try:
            if getattr(backend, attr)(*args):
                return True
        except AttributeError:
            raise NotImplementedError("%s.%s.%s() not implemented" % (
                backend.__class__.__module__,
                backend.__class__.__name__,
                attr))
    return False
Test if a given attribute is allowed according to the current set of configured service backends.
def projection(self, plain_src_name):
    """Return the projection for the given source namespace."""
    mapped = self.lookup(plain_src_name)
    if not mapped:
        return None
    fields = mapped.include_fields or mapped.exclude_fields
    if fields:
        include = 1 if mapped.include_fields else 0
        return dict((field, include) for field in fields)
    return None
Return the projection for the given source namespace.
def likelihood3(args):
    """
    %prog likelihood3 140_20.json 140_70.json

    Plot the likelihood surface and marginal distributions for two settings.
    """
    from matplotlib import gridspec

    p = OptionParser(likelihood3.__doc__)
    opts, args, iopts = p.set_image_options(args, figsize="10x10",
                                            style="white", cmap="coolwarm")

    if len(args) != 2:
        sys.exit(not p.print_help())

    jsonfile1, jsonfile2 = args
    fig = plt.figure(figsize=(iopts.w, iopts.h))
    gs = gridspec.GridSpec(9, 2)
    ax1 = fig.add_subplot(gs[:4, 0])
    ax2 = fig.add_subplot(gs[:2, 1])
    ax3 = fig.add_subplot(gs[2:4, 1])
    ax4 = fig.add_subplot(gs[5:, 0])
    ax5 = fig.add_subplot(gs[5:7, 1])
    ax6 = fig.add_subplot(gs[7:, 1])
    plt.tight_layout(pad=2)
    plot_panel(jsonfile1, ax1, ax2, ax3, opts.cmap)
    plot_panel(jsonfile2, ax4, ax5, ax6, opts.cmap)

    root = fig.add_axes([0, 0, 1, 1])
    pad = .02
    panel_labels(root, ((pad, 1 - pad, "A"), (pad, 4. / 9, "B")))
    normalize_axes(root)

    image_name = "likelihood3." + iopts.format
    savefig(image_name, dpi=iopts.dpi, iopts=iopts)
%prog likelihood3 140_20.json 140_70.json Plot the likelihood surface and marginal distributions for two settings.
def condense_hex_colors(css):
    """Shorten colors from #AABBCC to #ABC where possible."""
    regex = re.compile(
        r"([^\"'=\s])(\s*)#([0-9a-fA-F])([0-9a-fA-F])([0-9a-fA-F])"
        r"([0-9a-fA-F])([0-9a-fA-F])([0-9a-fA-F])")
    match = regex.search(css)
    while match:
        first = match.group(3) + match.group(5) + match.group(7)
        second = match.group(4) + match.group(6) + match.group(8)
        if first.lower() == second.lower():
            css = css.replace(
                match.group(), match.group(1) + match.group(2) + '#' + first)
            match = regex.search(css, match.end() - 3)
        else:
            match = regex.search(css, match.end())
    return css
Shorten colors from #AABBCC to #ABC where possible.
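The condensation only fires when the three digit pairs match case-insensitively, and the leading `[^\"'=\s]` guard skips hex strings sitting inside attribute values. A self-contained sketch (only the `re` import is added; the function body is as above):

```python
import re


def condense_hex_colors(css):
    """Shorten colors from #AABBCC to #ABC where possible."""
    regex = re.compile(
        r"([^\"'=\s])(\s*)#([0-9a-fA-F])([0-9a-fA-F])([0-9a-fA-F])"
        r"([0-9a-fA-F])([0-9a-fA-F])([0-9a-fA-F])")
    match = regex.search(css)
    while match:
        # odd-position digits vs even-position digits of the 6-digit color
        first = match.group(3) + match.group(5) + match.group(7)
        second = match.group(4) + match.group(6) + match.group(8)
        if first.lower() == second.lower():
            css = css.replace(
                match.group(), match.group(1) + match.group(2) + '#' + first)
            # the string just shrank by 3 chars, so rewind the search position
            match = regex.search(css, match.end() - 3)
        else:
            match = regex.search(css, match.end())
    return css


# pairs match (AA/bb/CC), so the color condenses, keeping original casing
assert condense_hex_colors("p { color: #AAbbCC; }") == "p { color: #AbC; }"
# pairs differ (A1/B2/C3), so the color is left alone
assert condense_hex_colors("p { color: #A1B2C3; }") == "p { color: #A1B2C3; }"
```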
def get(self, session, discount_id=None, ext_fields=None):
    '''taobao.fenxiao.discounts.get: fetch discount information
    (query discount details)'''
    request = TOPRequest('taobao.fenxiao.discounts.get')
    if discount_id is not None:
        request['discount_id'] = discount_id
    if ext_fields is not None:
        request['ext_fields'] = ext_fields
    self.create(self.execute(request, session))
    return self.discounts
taobao.fenxiao.discounts.get: fetch discount information (query discount details)
def check_output(self, cmd):
    """Wrapper for subprocess.check_output."""
    ret, output = self._exec(cmd)
    if ret != 0:
        raise CommandError(self)
    return output
Wrapper for subprocess.check_output.
def patCrossLines(s0):
    '''make line pattern'''
    arr = np.zeros((s0, s0), dtype=np.uint8)
    col = 255
    t = int(s0 / 100.)
    for pos in np.logspace(0.01, 1, 10):
        pos = int(round((pos - 0.5) * s0 / 10.))
        cv2.line(arr, (0, pos), (s0, pos), color=col, thickness=t,
                 lineType=cv2.LINE_AA)
        cv2.line(arr, (pos, 0), (pos, s0), color=col, thickness=t,
                 lineType=cv2.LINE_AA)
    return arr.astype(float)
make line pattern
def _handle_io(self, args, file, result, passphrase=False, binary=False):
    """Handle a call to GPG - pass input data, collect output data."""
    p = self._open_subprocess(args, passphrase)
    if not binary:
        stdin = codecs.getwriter(self._encoding)(p.stdin)
    else:
        stdin = p.stdin
    if passphrase:
        _util._write_passphrase(stdin, passphrase, self._encoding)
    writer = _util._threaded_copy_data(file, stdin)
    self._collect_output(p, result, writer, stdin)
    return result
Handle a call to GPG - pass input data, collect output data.
def reproject(self, dst_crs=None, resolution=None, dimensions=None,
              src_bounds=None, dst_bounds=None, target_aligned_pixels=False,
              resampling=Resampling.cubic, creation_options=None, **kwargs):
    """Return re-projected raster to new raster.

    Parameters
    ----------
    dst_crs : rasterio.crs.CRS, optional
        Target coordinate reference system.
    resolution : tuple (x resolution, y resolution) or float, optional
        Target resolution, in units of target coordinate reference system.
    dimensions : tuple (width, height), optional
        Output size in pixels and lines.
    src_bounds : tuple (xmin, ymin, xmax, ymax), optional
        Georeferenced extent of output (in source georeferenced units).
    dst_bounds : tuple (xmin, ymin, xmax, ymax), optional
        Georeferenced extent of output (in destination georeferenced units).
    target_aligned_pixels : bool, optional
        Align the output bounds based on the resolution.
        Default is `False`.
    resampling : rasterio.enums.Resampling
        Reprojection resampling method. Default is `cubic`.
    creation_options : dict, optional
        Custom creation options.
    kwargs : optional
        Additional arguments passed to transformation function.

    Returns
    -------
    out : GeoRaster2
    """
    if self._image is None and self._filename is not None:
        # image is not loaded yet
        with tempfile.NamedTemporaryFile(suffix='.tif', delete=False) as tf:
            warp(self._filename, tf.name, dst_crs=dst_crs,
                 resolution=resolution, dimensions=dimensions,
                 creation_options=creation_options, src_bounds=src_bounds,
                 dst_bounds=dst_bounds,
                 target_aligned_pixels=target_aligned_pixels,
                 resampling=resampling, **kwargs)
        new_raster = self.__class__(filename=tf.name, temporary=True,
                                    band_names=self.band_names)
    else:
        # image is loaded already
        # SimpleNamespace is handy to hold the properties that
        # calc_transform expects, see
        # https://docs.python.org/3/library/types.html#types.SimpleNamespace
        src = SimpleNamespace(
            width=self.width, height=self.height, transform=self.transform,
            crs=self.crs,
            bounds=BoundingBox(*self.footprint().get_bounds(self.crs)),
            gcps=None)
        dst_crs, dst_transform, dst_width, dst_height = calc_transform(
            src, dst_crs=dst_crs, resolution=resolution,
            dimensions=dimensions,
            target_aligned_pixels=target_aligned_pixels,
            src_bounds=src_bounds, dst_bounds=dst_bounds)
        new_raster = self._reproject(dst_width, dst_height, dst_transform,
                                     dst_crs=dst_crs, resampling=resampling)
    return new_raster
Return re-projected raster to new raster. Parameters ------------ dst_crs: rasterio.crs.CRS, optional Target coordinate reference system. resolution: tuple (x resolution, y resolution) or float, optional Target resolution, in units of target coordinate reference system. dimensions: tuple (width, height), optional Output size in pixels and lines. src_bounds: tuple (xmin, ymin, xmax, ymax), optional Georeferenced extent of output (in source georeferenced units). dst_bounds: tuple (xmin, ymin, xmax, ymax), optional Georeferenced extent of output (in destination georeferenced units). target_aligned_pixels: bool, optional Align the output bounds based on the resolution. Default is `False`. resampling: rasterio.enums.Resampling Reprojection resampling method. Default is `cubic`. creation_options: dict, optional Custom creation options. kwargs: optional Additional arguments passed to transformation function. Returns --------- out: GeoRaster2
def sanitize_version(version):
    """
    Take parse_version() output and standardize output from older
    setuptools' parse_version() to match current setuptools.
    """
    if hasattr(version, 'base_version'):
        if version.base_version:
            parts = version.base_version.split('.')
        else:
            parts = []
    else:
        parts = []
        for part in version:
            if part.startswith('*'):
                break
            parts.append(part)
    parts = [int(p) for p in parts]
    if len(parts) < 3:
        parts += [0] * (3 - len(parts))
    major, minor, micro = parts[:3]
    cleaned_version = '{}.{}.{}'.format(major, minor, micro)
    return cleaned_version
Take parse_version() output and standardize output from older setuptools' parse_version() to match current setuptools.
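The function accepts two input shapes: a modern `Version`-like object exposing `base_version` (stubbed below with `types.SimpleNamespace`, since pulling in setuptools is not needed for the sketch) and the older tuple-of-zero-padded-strings form, where a part starting with `*` marks the end of the release segment. Both normalize to a three-part `major.minor.micro` string:

```python
from types import SimpleNamespace


def sanitize_version(version):
    if hasattr(version, 'base_version'):
        # modern parse_version(): a Version object with base_version
        if version.base_version:
            parts = version.base_version.split('.')
        else:
            parts = []
    else:
        # older parse_version(): an iterable of zero-padded string parts,
        # terminated by markers like '*final'
        parts = []
        for part in version:
            if part.startswith('*'):
                break
            parts.append(part)
    parts = [int(p) for p in parts]
    if len(parts) < 3:
        parts += [0] * (3 - len(parts))
    major, minor, micro = parts[:3]
    return '{}.{}.{}'.format(major, minor, micro)


# modern shape: short versions are padded out to three parts
assert sanitize_version(SimpleNamespace(base_version='1.2')) == '1.2.0'
# older shape: zero-padded strings, '*final' terminates the release parts
assert sanitize_version(('00000001', '00000002', '*final')) == '1.2.0'
```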
def teardown_databases(self, old_config, options):
    """
    Destroys all the non-mirror databases.
    """
    if len(old_config) > 1:
        old_names, mirrors = old_config
    else:
        old_names = old_config
    for connection, old_name, destroy in old_names:
        if destroy:
            connection.creation.destroy_test_db(old_name, options['verbosity'])
Destroys all the non-mirror databases.
def _get_remarks_component(self, string, initial_pos):
    ''' Parse the remarks into the _remarks dict '''
    remarks_code = string[initial_pos:initial_pos + self.ADDR_CODE_LENGTH]
    if remarks_code != 'REM':
        raise ish_reportException(
            "Parsing remarks. Expected REM but got %s." % (remarks_code,))
    expected_length = int(string[0:4]) + self.PREAMBLE_LENGTH
    position = initial_pos + self.ADDR_CODE_LENGTH
    while position < expected_length:
        key = string[position:position + self.ADDR_CODE_LENGTH]
        if key == 'EQD':
            break
        chars_to_read = string[position + self.ADDR_CODE_LENGTH:
                               position + (self.ADDR_CODE_LENGTH * 2)]
        chars_to_read = int(chars_to_read)
        position += (self.ADDR_CODE_LENGTH * 2)
        string_value = string[position:position + chars_to_read]
        self._remarks[key] = string_value
        position += chars_to_read
Parse the remarks into the _remarks dict
def gaussian_cost(X):
    '''Return the average log-likelihood of data under a standard normal
    '''
    d, n = X.shape
    if n < 2:
        return 0
    sigma = np.var(X, axis=1, ddof=1)
    cost = -0.5 * d * n * np.log(2. * np.pi) - 0.5 * (n - 1.) * np.sum(sigma)
    return cost
Return the average log-likelihood of data under a standard normal
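A quick check of the formula on a small deterministic matrix (rows are dimensions `d`, columns are samples `n`), assuming NumPy. For `d = n = 2` with per-row sample variances of 0.5 each, the cost works out to `-0.5*d*n*log(2*pi) - 0.5*(n-1)*sum(var)`:

```python
import numpy as np


def gaussian_cost(X):
    '''Return the average log-likelihood of data under a standard normal'''
    d, n = X.shape
    if n < 2:
        return 0
    # unbiased per-dimension sample variance across the n columns
    sigma = np.var(X, axis=1, ddof=1)
    cost = -0.5 * d * n * np.log(2. * np.pi) - 0.5 * (n - 1.) * np.sum(sigma)
    return cost


X = np.array([[0.0, 1.0],
              [0.0, -1.0]])          # d=2 dimensions, n=2 samples
cost = gaussian_cost(X)
expected = -0.5 * 2 * 2 * np.log(2 * np.pi) - 0.5 * 1 * 1.0
assert abs(cost - expected) < 1e-12

# with fewer than two samples the variance is undefined and 0 is returned
assert gaussian_cost(np.zeros((3, 1))) == 0
```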
def list_remote(remote_uri, verbose=False):
    """
    remote_uri = 'user@xx.xx.xx.xx'
    """
    remote_uri1, remote_dpath = remote_uri.split(':')
    if not remote_dpath:
        remote_dpath = '.'
    import utool as ut
    out = ut.cmd('ssh', remote_uri1, 'ls -l %s' % (remote_dpath,),
                 verbose=verbose)
    import re
    # Find lines that look like ls output
    split_lines = [re.split(r'\s+', t) for t in out[0].split('\n')]
    paths = [' '.join(t2[8:]) for t2 in split_lines if len(t2) > 8]
    return paths
remote_uri = 'user@xx.xx.xx.xx'
def check_is_injectable(func):
    """
    Decorator that will check whether the "inj_name" keyword argument
    to the wrapped function matches a registered Orca injectable.
    """
    @wraps(func)
    def wrapper(**kwargs):
        name = kwargs['inj_name']
        if not orca.is_injectable(name):
            abort(404)
        return func(**kwargs)
    return wrapper
Decorator that will check whether the "inj_name" keyword argument to the wrapped function matches a registered Orca injectable.
def put(self, key, value, attrs=None, format=None, append=False, **kwargs): """ Store object in HDFStore Parameters ---------- key : str value : {Series, DataFrame, Panel, Numpy ndarray} format : 'fixed(f)|table(t)', default is 'fixed' fixed(f) : Fixed format Fast writing/reading. Not-appendable, nor searchable table(t) : Table format Write as a PyTables Table structure which may perform worse but allow more flexible operations like searching/selecting subsets of the data append : boolean, default False This will force Table format, append the input data to the existing. encoding : default None, provide an encoding for strings """ if not isinstance(value, np.ndarray): super(NumpyHDFStore, self).put(key, value, format, append, **kwargs) else: group = self.get_node(key) # remove the node if we are not appending if group is not None and not append: self._handle.removeNode(group, recursive=True) group = None if group is None: paths = key.split('/') # recursively create the groups path = '/' for p in paths: if not len(p): continue new_path = path if not path.endswith('/'): new_path += '/' new_path += p group = self.get_node(new_path) if group is None: group = self._handle.createGroup(path, p) path = new_path ds_name = kwargs.get('ds_name', self._array_dsname) ds = self._handle.createArray(group, ds_name, value) if attrs is not None: for key in attrs: setattr(ds.attrs, key, attrs[key]) self._handle.flush() return ds
Store object in HDFStore Parameters ---------- key : str value : {Series, DataFrame, Panel, Numpy ndarray} format : 'fixed(f)|table(t)', default is 'fixed' fixed(f) : Fixed format Fast writing/reading. Not-appendable, nor searchable table(t) : Table format Write as a PyTables Table structure which may perform worse but allow more flexible operations like searching/selecting subsets of the data append : boolean, default False This will force Table format, append the input data to the existing. encoding : default None, provide an encoding for strings
def levup(acur, knxt, ecur=None): """LEVUP One step forward Levinson recursion :param acur: :param knxt: :return: * anxt the P+1'th order prediction polynomial based on the P'th order prediction polynomial, acur, and the P+1'th order reflection coefficient, Knxt. * enxt the P+1'th order prediction error, based on the P'th order prediction error, ecur. :References: P. Stoica R. Moses, Introduction to Spectral Analysis Prentice Hall, N.J., 1997, Chapter 3. """ if acur[0] != 1: raise ValueError('acur must be a monic prediction polynomial (acur[0] == 1).') acur = acur[1:] # Drop the leading 1, it is not needed # Matrix formulation from Stoica is used to avoid looping anxt = numpy.concatenate((acur, [0])) + knxt * numpy.concatenate((numpy.conj(acur[-1::-1]), [1])) enxt = None if ecur is not None: # matlab version enxt = (1-knxt'.*knxt)*ecur enxt = (1. - numpy.dot(numpy.conj(knxt), knxt)) * ecur anxt = numpy.insert(anxt, 0, 1) return anxt, enxt
LEVUP One step forward Levinson recursion :param acur: :param knxt: :return: * anxt the P+1'th order prediction polynomial based on the P'th order prediction polynomial, acur, and the P+1'th order reflection coefficient, Knxt. * enxt the P+1'th order prediction error, based on the P'th order prediction error, ecur. :References: P. Stoica R. Moses, Introduction to Spectral Analysis Prentice Hall, N.J., 1997, Chapter 3.
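For real coefficients, the update above reduces to a short, self-contained sketch (the numbers are hypothetical; the error-power update and complex conjugation are dropped for brevity):

```python
import numpy as np

def levup_step(acur, knxt):
    # Extend an order-P monic prediction polynomial by one reflection coefficient
    a = np.asarray(acur, dtype=float)[1:]  # drop the leading 1
    anxt = np.concatenate((a, [0.0])) + knxt * np.concatenate((a[::-1], [1.0]))
    return np.insert(anxt, 0, 1.0)

# [1, 0.5] extended with reflection coefficient 0.2 -> [1, 0.6, 0.2]
anxt = levup_step([1.0, 0.5], 0.2)
```

The matrix formulation mirrors the one in the function: append a zero, add the reflected (and, in the complex case, conjugated) polynomial scaled by the reflection coefficient, then restore the leading 1.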
def add(self, process, name=None): """Add a new process to the registry. :param process: A callable (either a plain function or an object implementing __call__). :param name: The name of the executable to match. If not given, it must be provided as the 'name' attribute of the given `process` callable. """ name = name or process.name assert name, "No executable name given." self._registry[name] = process
Add a new process to the registry. :param process: A callable (either a plain function or an object implementing __call__). :param name: The name of the executable to match. If not given, it must be provided as the 'name' attribute of the given `process` callable.
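The registry contract above can be exercised with a minimal stand-alone class (the class name and sample entries are hypothetical, not part of the original source):

```python
class ProcessRegistry:
    """Maps executable names to callables, mirroring the add() contract above."""

    def __init__(self):
        self._registry = {}

    def add(self, process, name=None):
        # Fall back to the callable's own 'name' attribute when none is given
        name = name or getattr(process, "name", None)
        assert name, "No executable name given."
        self._registry[name] = process

registry = ProcessRegistry()
registry.add(lambda: "ran", name="mytool")
```

Passing neither a `name` argument nor a callable with a `name` attribute trips the assertion, which is the failure mode the docstring describes.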
def _ne16(ins): ''' Compares & pops top 2 operands out of the stack, and checks if the 1st operand != 2nd operand (top of the stack). Pushes 0 if False, 1 if True. 16 bit un/signed version ''' output = _16bit_oper(ins.quad[2], ins.quad[3]) output.append('or a') # Resets carry flag output.append('sbc hl, de') output.append('ld a, h') output.append('or l') output.append('push af') return output
Compares & pops top 2 operands out of the stack, and checks if the 1st operand != 2nd operand (top of the stack). Pushes 0 if False, 1 if True. 16 bit un/signed version
def stop_animation(self, sprites): """stop animation without firing on_complete""" if not isinstance(sprites, list): sprites = [sprites] for sprite in sprites: self.tweener.kill_tweens(sprite)
stop animation without firing on_complete
def uniq(args): """ %prog uniq bedfile Remove overlapping features with higher scores. """ from jcvi.formats.sizes import Sizes p = OptionParser(uniq.__doc__) p.add_option("--sizes", help="Use sequence length as score") p.add_option("--mode", default="span", choices=("span", "score"), help="Pile mode") opts, args = p.parse_args(args) if len(args) != 1: sys.exit(not p.print_help()) bedfile, = args uniqbedfile = bedfile.split(".")[0] + ".uniq.bed" bed = Bed(bedfile) if opts.sizes: sizes = Sizes(opts.sizes).mapping ranges = [Range(x.seqid, x.start, x.end, sizes[x.accn], i) \ for i, x in enumerate(bed)] else: if opts.mode == "span": ranges = [Range(x.seqid, x.start, x.end, x.end - x.start + 1, i) \ for i, x in enumerate(bed)] else: ranges = [Range(x.seqid, x.start, x.end, float(x.score), i) \ for i, x in enumerate(bed)] selected, score = range_chain(ranges) selected = [x.id for x in selected] selected_ids = set(selected) selected = [bed[x] for x in selected] notselected = [x for i, x in enumerate(bed) if i not in selected_ids] newbed = Bed() newbed.extend(selected) newbed.print_to_file(uniqbedfile, sorted=True) if notselected: leftoverfile = bedfile.split(".")[0] + ".leftover.bed" leftoverbed = Bed() leftoverbed.extend(notselected) leftoverbed.print_to_file(leftoverfile, sorted=True) logging.debug("Imported: {0}, Exported: {1}".format(len(bed), len(newbed))) return uniqbedfile
%prog uniq bedfile Remove overlapping features with higher scores.
def QA_fetch_get_stock_xdxr(code, ip=None, port=None): '除权除息' ip, port = get_mainmarket_ip(ip, port) api = TdxHq_API() market_code = _select_market_code(code) with api.connect(ip, port): category = { '1': '除权除息', '2': '送配股上市', '3': '非流通股上市', '4': '未知股本变动', '5': '股本变化', '6': '增发新股', '7': '股份回购', '8': '增发新股上市', '9': '转配股上市', '10': '可转债上市', '11': '扩缩股', '12': '非流通股缩股', '13': '送认购权证', '14': '送认沽权证'} data = api.to_df(api.get_xdxr_info(market_code, code)) if len(data) >= 1: data = data \ .assign(date=pd.to_datetime(data[['year', 'month', 'day']])) \ .drop(['year', 'month', 'day'], axis=1) \ .assign(category_meaning=data['category'].apply(lambda x: category[str(x)])) \ .assign(code=str(code)) \ .rename(index=str, columns={'panhouliutong': 'liquidity_after', 'panqianliutong': 'liquidity_before', 'houzongguben': 'shares_after', 'qianzongguben': 'shares_before'}) \ .set_index('date', drop=False, inplace=False) return data.assign(date=data['date'].apply(lambda x: str(x)[0:10])) else: return None
除权除息 (ex-rights and ex-dividend information)
async def digital_write(self, command): """ This method writes a zero or one to a digital pin. :param command: {"method": "digital_write", "params": [PIN, DIGITAL_DATA_VALUE]} :returns: No return message. """ pin = int(command[0]) value = int(command[1]) await self.core.digital_write(pin, value)
This method writes a zero or one to a digital pin. :param command: {"method": "digital_write", "params": [PIN, DIGITAL_DATA_VALUE]} :returns: No return message.
def get_absolute_name(self): """ Returns the full dotted name of this field """ res = [] current = self while current is not None: if current.__matched_index: res.append('$') res.append(current.get_type().db_field) current = current._get_parent() return '.'.join(reversed(res))
Returns the full dotted name of this field
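The parent-walk that builds the dotted name can be shown with a simplified stand-alone field class (hypothetical; it drops the `$` matched-index marker and the `get_type()` indirection of the real implementation):

```python
class Field:
    """Minimal field with a parent link, enough to build a dotted name."""

    def __init__(self, db_field, parent=None):
        self.db_field = db_field
        self._parent = parent

    def get_absolute_name(self):
        # Walk up the parent chain, then reverse to get root-first order
        res, current = [], self
        while current is not None:
            res.append(current.db_field)
            current = current._parent
        return ".".join(reversed(res))

leaf = Field("street", parent=Field("address", parent=Field("user")))
```

Calling `leaf.get_absolute_name()` walks leaf-to-root and reverses, yielding the root-first dotted path.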
def create_bzip2 (archive, compression, cmd, verbosity, interactive, filenames): """Create a BZIP2 archive with the bz2 Python module.""" if len(filenames) > 1: raise util.PatoolError('multi-file compression not supported in Python bz2') try: with bz2.BZ2File(archive, 'wb') as bz2file: filename = filenames[0] with open(filename, 'rb') as srcfile: data = srcfile.read(READ_SIZE_BYTES) while data: bz2file.write(data) data = srcfile.read(READ_SIZE_BYTES) except Exception as err: msg = "error creating %s: %s" % (archive, err) raise util.PatoolError(msg) return None
Create a BZIP2 archive with the bz2 Python module.
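The chunked-write loop used by `create_bzip2` can be demonstrated with the standard library alone (the paths and chunk size here are illustrative, not the module's actual constant):

```python
import bz2
import os
import tempfile

READ_SIZE_BYTES = 64 * 1024  # illustrative chunk size

# Write a sample source file
tmpdir = tempfile.mkdtemp()
src_path = os.path.join(tmpdir, "sample.txt")
with open(src_path, "wb") as f:
    f.write(b"hello world\n" * 1000)

# Compress it chunk by chunk, as create_bzip2 does
archive_path = src_path + ".bz2"
with bz2.BZ2File(archive_path, "wb") as bz2file, open(src_path, "rb") as srcfile:
    data = srcfile.read(READ_SIZE_BYTES)
    while data:
        bz2file.write(data)
        data = srcfile.read(READ_SIZE_BYTES)

# Round-trip to verify the archive
with bz2.BZ2File(archive_path, "rb") as f:
    restored = f.read()
```

Reading in fixed-size chunks keeps memory bounded regardless of the source file's size, which is why the function loops rather than calling `read()` once.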
def get_identifiers_splitted_by_weights(identifiers={}, proportions={}): """ Divide the given identifiers based on the given proportions. But instead of randomly splitting the identifiers, it is based on category weights. Every identifier has a weight for any number of categories. The target is to split the identifiers in a way that the sum of category k within part x is proportional to the sum of category k over all parts, according to the given proportions. This is done by greedily inserting the identifiers step by step into a part which has free space (weight). If there are no fitting parts anymore, the one with the least weight excess is used. Args: identifiers (dict): A dictionary containing the weights for each identifier (key). Per item a dictionary of weights per category is given. proportions (dict): Dict of proportions, with an identifier as key. Returns: dict: Dictionary containing a list of identifiers per part with the same key as the proportions dict. Example:: >>> identifiers = { >>> 'a': {'music': 2, 'speech': 1}, >>> 'b': {'music': 5, 'speech': 2}, >>> 'c': {'music': 2, 'speech': 4}, >>> 'd': {'music': 1, 'speech': 4}, >>> 'e': {'music': 3, 'speech': 4} >>> } >>> proportions = { >>> "train" : 0.6, >>> "dev" : 0.2, >>> "test" : 0.2 >>> } >>> get_identifiers_splitted_by_weights(identifiers, proportions) { 'train': ['a', 'b', 'd'], 'dev': ['c'], 'test': ['e'] } """ # Get total weight per category sum_per_category = collections.defaultdict(int) for identifier, cat_weights in identifiers.items(): for category, weight in cat_weights.items(): sum_per_category[category] += weight target_weights_per_part = collections.defaultdict(dict) # Get target weight for each part and category for category, total_weight in sum_per_category.items(): abs_proportions = absolute_proportions(proportions, total_weight) for idx, proportion in abs_proportions.items(): target_weights_per_part[idx][category] = proportion # Distribute items greedily part_ids = sorted(list(proportions.keys())) current_weights_per_part = {idx: collections.defaultdict(int) for idx in part_ids} result = collections.defaultdict(list) for identifier in sorted(identifiers.keys()): cat_weights = identifiers[identifier] target_part = None current_part = 0 weight_over_target = collections.defaultdict(int) # Search for fitting part while target_part is None and current_part < len(part_ids): free_space = True part_id = part_ids[current_part] part_weights = current_weights_per_part[part_id] for category, weight in cat_weights.items(): target_weight = target_weights_per_part[part_id][category] current_weight = part_weights[category] weight_diff = current_weight + weight - target_weight weight_over_target[part_id] += weight_diff if weight_diff > 0: free_space = False # If weight doesn't exceed target, place identifier in part if free_space: target_part = part_id current_part += 1 # If no fitting part was found, select the part with the least overweight if target_part is None: target_part = sorted(weight_over_target.items(), key=lambda x: x[1])[0][0] result[target_part].append(identifier) for category, weight in cat_weights.items(): current_weights_per_part[target_part][category] += weight return result
Divide the given identifiers based on the given proportions. But instead of randomly splitting the identifiers, it is based on category weights. Every identifier has a weight for any number of categories. The target is to split the identifiers in a way that the sum of category k within part x is proportional to the sum of category k over all parts, according to the given proportions. This is done by greedily inserting the identifiers step by step into a part which has free space (weight). If there are no fitting parts anymore, the one with the least weight excess is used. Args: identifiers (dict): A dictionary containing the weights for each identifier (key). Per item a dictionary of weights per category is given. proportions (dict): Dict of proportions, with an identifier as key. Returns: dict: Dictionary containing a list of identifiers per part with the same key as the proportions dict. Example:: >>> identifiers = { >>> 'a': {'music': 2, 'speech': 1}, >>> 'b': {'music': 5, 'speech': 2}, >>> 'c': {'music': 2, 'speech': 4}, >>> 'd': {'music': 1, 'speech': 4}, >>> 'e': {'music': 3, 'speech': 4} >>> } >>> proportions = { >>> "train" : 0.6, >>> "dev" : 0.2, >>> "test" : 0.2 >>> } >>> get_identifiers_splitted_by_weights(identifiers, proportions) { 'train': ['a', 'b', 'd'], 'dev': ['c'], 'test': ['e'] }
def reorder_indices(self, indices_order): 'reorder all the indices' # allow mixed index syntax like int indices_order, single = convert_index_to_keys(self.indices, indices_order) old_indices = force_list(self.indices.keys()) if indices_order == old_indices: # no changes return if set(old_indices) != set(indices_order): raise KeyError('Keys in the new order do not match existing keys') # if len(old_indices) == 0: # already return since indices_order must equal to old_indices # return # must have more than 1 index to reorder new_idx = [old_indices.index(i) for i in indices_order] # reorder items items = [map(i.__getitem__, new_idx) for i in self.items()] self.clear(True) _MI_init(self, items, indices_order)
reorder all the indices
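The core permutation step — mapping each new key to its old position, then re-projecting every stored row through that permutation — can be shown standalone (the sample keys and rows are hypothetical):

```python
old_indices = ["a", "b", "c"]
indices_order = ["c", "a", "b"]

# Position of each new key in the old ordering, as reorder_indices computes
new_idx = [old_indices.index(i) for i in indices_order]

# Re-project each stored row through the permutation
rows = [("x", "y", "z"), (1, 2, 3)]
reordered = [tuple(map(row.__getitem__, new_idx)) for row in rows]
```

This is also why the function checks that the key sets match first: `old_indices.index(i)` raises `ValueError` for any key missing from the old ordering.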
def sync(client=None, force=False, verbose=True): """ Let's face it... pushing this stuff to S3 is messy. A lot of different things need to be calculated for each file and they have to be in a certain order as some variables rely on others. """ from mediasync import backends from mediasync.conf import msettings from mediasync.signals import pre_sync, post_sync # create client connection if client is None: client = backends.client() client.open() client.serve_remote = True # send pre-sync signal pre_sync.send(sender=client) # # sync joined media # for joinfile, sourcefiles in msettings['JOINED'].iteritems(): filedata = combine_files(joinfile, sourcefiles, client) if filedata is None: # combine_files() is only interested in CSS/JS files. continue filedata, dirname = filedata content_type = mimetypes.guess_type(joinfile)[0] or msettings['DEFAULT_MIMETYPE'] remote_path = joinfile if dirname: remote_path = "%s/%s" % (dirname, remote_path) if client.process_and_put(filedata, content_type, remote_path, force=force): if verbose: print "[%s] %s" % (content_type, remote_path) # # sync static media # for dirname in os.listdir(client.media_root): dirpath = os.path.abspath(os.path.join(client.media_root, dirname)) if os.path.isdir(dirpath): for filename in listdir_recursive(dirpath): # calculate local and remote paths filepath = os.path.join(dirpath, filename) remote_path = "%s/%s" % (dirname, filename) content_type = mimetypes.guess_type(filepath)[0] or msettings['DEFAULT_MIMETYPE'] if not is_syncable_file(os.path.basename(filename)) or not os.path.isfile(filepath): continue # hidden file or directory, do not upload filedata = open(filepath, 'rb').read() if client.process_and_put(filedata, content_type, remote_path, force=force): if verbose: print "[%s] %s" % (content_type, remote_path) # send post-sync signal while client is still open post_sync.send(sender=client) client.close()
Let's face it... pushing this stuff to S3 is messy. A lot of different things need to be calculated for each file and they have to be in a certain order as some variables rely on others.
def file_version(self, name): """Returns the newest available version number of the file. If the remote store is configured, it is queried, otherwise the local version is returned. It is assumed that the remote store always has the newest version of the file. If version is a part of ``name``, it is ignored. """ if self.remote_store: return self.remote_store.file_version(name) else: return self.local_store.file_version(name)
Returns the newest available version number of the file. If the remote store is configured, it is queried, otherwise the local version is returned. It is assumed that the remote store always has the newest version of the file. If version is a part of ``name``, it is ignored.
Below is the instruction that describes the task: ### Input: Returns the newest available version number of the file. If the remote store is configured, it is queried, otherwise the local version is returned. It is assumed that the remote store always has the newest version of the file. If version is a part of ``name``, it is ignored. ### Response: def file_version(self, name): """Returns the newest available version number of the file. If the remote store is configured, it is queried, otherwise the local version is returned. It is assumed that the remote store always has the newest version of the file. If version is a part of ``name``, it is ignored. """ if self.remote_store: return self.remote_store.file_version(name) else: return self.local_store.file_version(name)
def poller_processor_handler(event, context): # pylint: disable=W0613 """ Historical S3 Poller Processor. This will receive events from the Poller Tasker, and will list all objects of a given technology for an account/region pair. This will generate `polling events` which simulate changes. These polling events contain configuration data such as the account/region defining where the collector should attempt to gather data from. """ LOG.debug('[@] Running Poller...') queue_url = get_queue_url(os.environ.get('POLLER_QUEUE_NAME', 'HistoricalS3Poller')) records = deserialize_records(event['Records']) for record in records: # Skip accounts that have role assumption errors: try: # List all buckets in the account: all_buckets = list_buckets(account_number=record['account_id'], assume_role=HISTORICAL_ROLE, session_name="historical-cloudwatch-s3list", region=record['region'])["Buckets"] events = [S3_POLLING_SCHEMA.serialize_me(record['account_id'], bucket) for bucket in all_buckets] produce_events(events, queue_url, randomize_delay=RANDOMIZE_POLLER) except ClientError as exc: LOG.error(f"[X] Unable to generate events for account. Account Id: {record['account_id']} Reason: {exc}") LOG.debug(f"[@] Finished generating polling events for account: {record['account_id']}. Events Created:" f" {len(record['account_id'])}")
Historical S3 Poller Processor. This will receive events from the Poller Tasker, and will list all objects of a given technology for an account/region pair. This will generate `polling events` which simulate changes. These polling events contain configuration data such as the account/region defining where the collector should attempt to gather data from.
Below is the instruction that describes the task: ### Input: Historical S3 Poller Processor. This will receive events from the Poller Tasker, and will list all objects of a given technology for an account/region pair. This will generate `polling events` which simulate changes. These polling events contain configuration data such as the account/region defining where the collector should attempt to gather data from. ### Response: def poller_processor_handler(event, context): # pylint: disable=W0613 """ Historical S3 Poller Processor. This will receive events from the Poller Tasker, and will list all objects of a given technology for an account/region pair. This will generate `polling events` which simulate changes. These polling events contain configuration data such as the account/region defining where the collector should attempt to gather data from. """ LOG.debug('[@] Running Poller...') queue_url = get_queue_url(os.environ.get('POLLER_QUEUE_NAME', 'HistoricalS3Poller')) records = deserialize_records(event['Records']) for record in records: # Skip accounts that have role assumption errors: try: # List all buckets in the account: all_buckets = list_buckets(account_number=record['account_id'], assume_role=HISTORICAL_ROLE, session_name="historical-cloudwatch-s3list", region=record['region'])["Buckets"] events = [S3_POLLING_SCHEMA.serialize_me(record['account_id'], bucket) for bucket in all_buckets] produce_events(events, queue_url, randomize_delay=RANDOMIZE_POLLER) except ClientError as exc: LOG.error(f"[X] Unable to generate events for account. Account Id: {record['account_id']} Reason: {exc}") LOG.debug(f"[@] Finished generating polling events for account: {record['account_id']}. Events Created:" f" {len(record['account_id'])}")
def rmon_alarm_entry_alarm_index(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") rmon = ET.SubElement(config, "rmon", xmlns="urn:brocade.com:mgmt:brocade-rmon") alarm_entry = ET.SubElement(rmon, "alarm-entry") alarm_index = ET.SubElement(alarm_entry, "alarm-index") alarm_index.text = kwargs.pop('alarm_index') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def rmon_alarm_entry_alarm_index(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") rmon = ET.SubElement(config, "rmon", xmlns="urn:brocade.com:mgmt:brocade-rmon") alarm_entry = ET.SubElement(rmon, "alarm-entry") alarm_index = ET.SubElement(alarm_entry, "alarm-index") alarm_index.text = kwargs.pop('alarm_index') callback = kwargs.pop('callback', self._callback) return callback(config)
def list(self, *args, **kwargs): """ List Denylisted Notifications Lists all the denylisted addresses. By default this end-point will try to return up to 1000 addresses in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/notification-address-list.json#`` This method is ``experimental`` """ return self._makeApiCall(self.funcinfo["list"], *args, **kwargs)
List Denylisted Notifications Lists all the denylisted addresses. By default this end-point will try to return up to 1000 addresses in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/notification-address-list.json#`` This method is ``experimental``
Below is the instruction that describes the task: ### Input: List Denylisted Notifications Lists all the denylisted addresses. By default this end-point will try to return up to 1000 addresses in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/notification-address-list.json#`` This method is ``experimental`` ### Response: def list(self, *args, **kwargs): """ List Denylisted Notifications Lists all the denylisted addresses. By default this end-point will try to return up to 1000 addresses in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/notification-address-list.json#`` This method is ``experimental`` """ return self._makeApiCall(self.funcinfo["list"], *args, **kwargs)
def get_column_type(engine: Engine, tablename: str, columnname: str) -> Optional[TypeEngine]: """ For the specified column in the specified table, get its type as an instance of an SQLAlchemy column type class (or ``None`` if such a column can't be found). For more on :class:`TypeEngine`, see :func:`cardinal_pythonlib.orm_inspect.coltype_as_typeengine`. """ for info in gen_columns_info(engine, tablename): if info.name == columnname: return info.type return None
For the specified column in the specified table, get its type as an instance of an SQLAlchemy column type class (or ``None`` if such a column can't be found). For more on :class:`TypeEngine`, see :func:`cardinal_pythonlib.orm_inspect.coltype_as_typeengine`.
Below is the instruction that describes the task: ### Input: For the specified column in the specified table, get its type as an instance of an SQLAlchemy column type class (or ``None`` if such a column can't be found). For more on :class:`TypeEngine`, see :func:`cardinal_pythonlib.orm_inspect.coltype_as_typeengine`. ### Response: def get_column_type(engine: Engine, tablename: str, columnname: str) -> Optional[TypeEngine]: """ For the specified column in the specified table, get its type as an instance of an SQLAlchemy column type class (or ``None`` if such a column can't be found). For more on :class:`TypeEngine`, see :func:`cardinal_pythonlib.orm_inspect.coltype_as_typeengine`. """ for info in gen_columns_info(engine, tablename): if info.name == columnname: return info.type return None
def append_faces(vertices_seq, faces_seq): """ Given a sequence of zero-indexed faces and vertices combine them into a single array of faces and a single array of vertices. Parameters ----------- vertices_seq : (n, ) sequence of (m, d) float Multiple arrays of vertices faces_seq : (n, ) sequence of (p, j) int Zero indexed faces for matching vertices Returns ---------- vertices : (i, d) float Points in space faces : (j, 3) int Reference vertex indices """ # the length of each vertex array vertices_len = np.array([len(i) for i in vertices_seq]) # how much each group of faces needs to be offset face_offset = np.append(0, np.cumsum(vertices_len)[:-1]) new_faces = [] for offset, faces in zip(face_offset, faces_seq): if len(faces) == 0: continue # apply the index offset new_faces.append(faces + offset) # stack to clean (n, 3) float vertices = vstack_empty(vertices_seq) # stack to clean (n, 3) int faces = vstack_empty(new_faces) return vertices, faces
Given a sequence of zero-indexed faces and vertices combine them into a single array of faces and a single array of vertices. Parameters ----------- vertices_seq : (n, ) sequence of (m, d) float Multiple arrays of vertices faces_seq : (n, ) sequence of (p, j) int Zero indexed faces for matching vertices Returns ---------- vertices : (i, d) float Points in space faces : (j, 3) int Reference vertex indices
Below is the instruction that describes the task: ### Input: Given a sequence of zero-indexed faces and vertices combine them into a single array of faces and a single array of vertices. Parameters ----------- vertices_seq : (n, ) sequence of (m, d) float Multiple arrays of vertices faces_seq : (n, ) sequence of (p, j) int Zero indexed faces for matching vertices Returns ---------- vertices : (i, d) float Points in space faces : (j, 3) int Reference vertex indices ### Response: def append_faces(vertices_seq, faces_seq): """ Given a sequence of zero-indexed faces and vertices combine them into a single array of faces and a single array of vertices. Parameters ----------- vertices_seq : (n, ) sequence of (m, d) float Multiple arrays of vertices faces_seq : (n, ) sequence of (p, j) int Zero indexed faces for matching vertices Returns ---------- vertices : (i, d) float Points in space faces : (j, 3) int Reference vertex indices """ # the length of each vertex array vertices_len = np.array([len(i) for i in vertices_seq]) # how much each group of faces needs to be offset face_offset = np.append(0, np.cumsum(vertices_len)[:-1]) new_faces = [] for offset, faces in zip(face_offset, faces_seq): if len(faces) == 0: continue # apply the index offset new_faces.append(faces + offset) # stack to clean (n, 3) float vertices = vstack_empty(vertices_seq) # stack to clean (n, 3) int faces = vstack_empty(new_faces) return vertices, faces
def _arg2opt(arg): ''' Turn a pass argument into the correct option ''' res = [o for o, a in option_toggles.items() if a == arg] res += [o for o, a in option_flags.items() if a == arg] return res[0] if res else None
Turn a pass argument into the correct option
Below is the instruction that describes the task: ### Input: Turn a pass argument into the correct option ### Response: def _arg2opt(arg): ''' Turn a pass argument into the correct option ''' res = [o for o, a in option_toggles.items() if a == arg] res += [o for o, a in option_flags.items() if a == arg] return res[0] if res else None
def _check_out_arg(func): """Check if ``func`` has an (optional) ``out`` argument. Also verify that the signature of ``func`` has no ``*args`` since they make argument propagation a hassle. Parameters ---------- func : callable Object that should be inspected. Returns ------- has_out : bool ``True`` if the signature has an ``out`` argument, ``False`` otherwise. out_is_optional : bool ``True`` if ``out`` is present and optional in the signature, ``False`` otherwise. Raises ------ TypeError If ``func``'s signature has ``*args``. """ if sys.version_info.major > 2: spec = inspect.getfullargspec(func) kw_only = spec.kwonlyargs else: spec = inspect.getargspec(func) kw_only = () if spec.varargs is not None: raise TypeError('*args not allowed in function signature') pos_args = spec.args pos_defaults = () if spec.defaults is None else spec.defaults has_out = 'out' in pos_args or 'out' in kw_only if 'out' in pos_args: has_out = True out_is_optional = ( pos_args.index('out') >= len(pos_args) - len(pos_defaults)) elif 'out' in kw_only: has_out = out_is_optional = True else: has_out = out_is_optional = False return has_out, out_is_optional
Check if ``func`` has an (optional) ``out`` argument. Also verify that the signature of ``func`` has no ``*args`` since they make argument propagation a hassle. Parameters ---------- func : callable Object that should be inspected. Returns ------- has_out : bool ``True`` if the signature has an ``out`` argument, ``False`` otherwise. out_is_optional : bool ``True`` if ``out`` is present and optional in the signature, ``False`` otherwise. Raises ------ TypeError If ``func``'s signature has ``*args``.
Below is the instruction that describes the task: ### Input: Check if ``func`` has an (optional) ``out`` argument. Also verify that the signature of ``func`` has no ``*args`` since they make argument propagation a hassle. Parameters ---------- func : callable Object that should be inspected. Returns ------- has_out : bool ``True`` if the signature has an ``out`` argument, ``False`` otherwise. out_is_optional : bool ``True`` if ``out`` is present and optional in the signature, ``False`` otherwise. Raises ------ TypeError If ``func``'s signature has ``*args``. ### Response: def _check_out_arg(func): """Check if ``func`` has an (optional) ``out`` argument. Also verify that the signature of ``func`` has no ``*args`` since they make argument propagation a hassle. Parameters ---------- func : callable Object that should be inspected. Returns ------- has_out : bool ``True`` if the signature has an ``out`` argument, ``False`` otherwise. out_is_optional : bool ``True`` if ``out`` is present and optional in the signature, ``False`` otherwise. Raises ------ TypeError If ``func``'s signature has ``*args``. """ if sys.version_info.major > 2: spec = inspect.getfullargspec(func) kw_only = spec.kwonlyargs else: spec = inspect.getargspec(func) kw_only = () if spec.varargs is not None: raise TypeError('*args not allowed in function signature') pos_args = spec.args pos_defaults = () if spec.defaults is None else spec.defaults has_out = 'out' in pos_args or 'out' in kw_only if 'out' in pos_args: has_out = True out_is_optional = ( pos_args.index('out') >= len(pos_args) - len(pos_defaults)) elif 'out' in kw_only: has_out = out_is_optional = True else: has_out = out_is_optional = False return has_out, out_is_optional
def write_single_batch_images_to_datastore(self, batch_id): """Writes only images from one batch to the datastore.""" client = self._datastore_client with client.no_transact_batch() as client_batch: self._write_single_batch_images_internal(batch_id, client_batch)
Writes only images from one batch to the datastore.
Below is the instruction that describes the task: ### Input: Writes only images from one batch to the datastore. ### Response: def write_single_batch_images_to_datastore(self, batch_id): """Writes only images from one batch to the datastore.""" client = self._datastore_client with client.no_transact_batch() as client_batch: self._write_single_batch_images_internal(batch_id, client_batch)
async def get_next_match(self): """ Return the first open match found, or if none, the first pending match found |methcoro| Raises: APIException """ if self._final_rank is not None: return None matches = await self.get_matches(MatchState.open_) if len(matches) == 0: matches = await self.get_matches(MatchState.pending) if len(matches) > 0: return matches[0] return None
Return the first open match found, or if none, the first pending match found |methcoro| Raises: APIException
Below is the instruction that describes the task: ### Input: Return the first open match found, or if none, the first pending match found |methcoro| Raises: APIException ### Response: async def get_next_match(self): """ Return the first open match found, or if none, the first pending match found |methcoro| Raises: APIException """ if self._final_rank is not None: return None matches = await self.get_matches(MatchState.open_) if len(matches) == 0: matches = await self.get_matches(MatchState.pending) if len(matches) > 0: return matches[0] return None
def add_mark_char(char, mark): """ Add mark to a single char. """ if char == "": return "" case = char.isupper() ac = accent.get_accent_char(char) char = accent.add_accent_char(char.lower(), Accent.NONE) new_char = char if mark == Mark.HAT: if char in FAMILY_A: new_char = "â" elif char in FAMILY_O: new_char = "ô" elif char in FAMILY_E: new_char = "ê" elif mark == Mark.HORN: if char in FAMILY_O: new_char = "ơ" elif char in FAMILY_U: new_char = "ư" elif mark == Mark.BREVE: if char in FAMILY_A: new_char = "ă" elif mark == Mark.BAR: if char in FAMILY_D: new_char = "đ" elif mark == Mark.NONE: if char in FAMILY_A: new_char = "a" elif char in FAMILY_E: new_char = "e" elif char in FAMILY_O: new_char = "o" elif char in FAMILY_U: new_char = "u" elif char in FAMILY_D: new_char = "d" new_char = accent.add_accent_char(new_char, ac) return utils.change_case(new_char, case)
Add mark to a single char.
Below is the instruction that describes the task: ### Input: Add mark to a single char. ### Response: def add_mark_char(char, mark): """ Add mark to a single char. """ if char == "": return "" case = char.isupper() ac = accent.get_accent_char(char) char = accent.add_accent_char(char.lower(), Accent.NONE) new_char = char if mark == Mark.HAT: if char in FAMILY_A: new_char = "â" elif char in FAMILY_O: new_char = "ô" elif char in FAMILY_E: new_char = "ê" elif mark == Mark.HORN: if char in FAMILY_O: new_char = "ơ" elif char in FAMILY_U: new_char = "ư" elif mark == Mark.BREVE: if char in FAMILY_A: new_char = "ă" elif mark == Mark.BAR: if char in FAMILY_D: new_char = "đ" elif mark == Mark.NONE: if char in FAMILY_A: new_char = "a" elif char in FAMILY_E: new_char = "e" elif char in FAMILY_O: new_char = "o" elif char in FAMILY_U: new_char = "u" elif char in FAMILY_D: new_char = "d" new_char = accent.add_accent_char(new_char, ac) return utils.change_case(new_char, case)
def set_code(self, code): """Update widgets from code""" # Get attributes from code attributes = [] strip = lambda s: s.strip('u').strip("'").strip('"') for attr_dict in parse_dict_strings(unicode(code).strip()[19:-1]): attrs = list(strip(s) for s in parse_dict_strings(attr_dict[1:-1])) attributes.append(dict(zip(attrs[::2], attrs[1::2]))) if not attributes: return # Set widgets from attributes # --------------------------- # Figure attributes figure_attributes = attributes[0] for key, widget in self.figure_attributes_panel: try: obj = figure_attributes[key] kwargs_key = key + "_kwargs" if kwargs_key in figure_attributes: widget.set_kwargs(figure_attributes[kwargs_key]) except KeyError: obj = "" widget.code = charts.object2code(key, obj) # Series attributes self.all_series_panel.update(attributes[1:])
Update widgets from code
Below is the instruction that describes the task: ### Input: Update widgets from code ### Response: def set_code(self, code): """Update widgets from code""" # Get attributes from code attributes = [] strip = lambda s: s.strip('u').strip("'").strip('"') for attr_dict in parse_dict_strings(unicode(code).strip()[19:-1]): attrs = list(strip(s) for s in parse_dict_strings(attr_dict[1:-1])) attributes.append(dict(zip(attrs[::2], attrs[1::2]))) if not attributes: return # Set widgets from attributes # --------------------------- # Figure attributes figure_attributes = attributes[0] for key, widget in self.figure_attributes_panel: try: obj = figure_attributes[key] kwargs_key = key + "_kwargs" if kwargs_key in figure_attributes: widget.set_kwargs(figure_attributes[kwargs_key]) except KeyError: obj = "" widget.code = charts.object2code(key, obj) # Series attributes self.all_series_panel.update(attributes[1:])
def check_time_coordinate(self, ds): ''' Check variables defining time are valid under CF CF §4.4 Variables representing time must always explicitly include the units attribute; there is no default value. The units attribute takes a string value formatted as per the recommendations in the Udunits package. The acceptable units for time are listed in the udunits.dat file. The most commonly used of these strings (and their abbreviations) includes day (d), hour (hr, h), minute (min) and second (sec, s). Plural forms are also acceptable. The reference time string (appearing after the identifier since) may include date alone; date and time; or date, time, and time zone. The reference time is required. A reference time in year 0 has a special meaning (see Section 7.4, "Climatological Statistics"). Recommend that the unit year be used with caution. It is not a calendar year. For similar reasons the unit month should also be used with caution. A time coordinate is identifiable from its units string alone. Optionally, the time coordinate may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value T. :param netCDF4.Dataset ds: An open netCDF dataset :rtype: list :return: List of results ''' ret_val = [] for name in cfutil.get_time_variables(ds): variable = ds.variables[name] # Has units has_units = hasattr(variable, 'units') if not has_units: result = Result(BaseCheck.HIGH, False, self.section_titles['4.4'], ['%s does not have units' % name]) ret_val.append(result) continue # Correct and identifiable units result = Result(BaseCheck.HIGH, True, self.section_titles['4.4']) ret_val.append(result) correct_units = util.units_temporal(variable.units) reasoning = None if not correct_units: reasoning = ['%s does not have correct time units' % name] result = Result(BaseCheck.HIGH, correct_units, self.section_titles['4.4'], reasoning) ret_val.append(result) return ret_val
Check variables defining time are valid under CF CF §4.4 Variables representing time must always explicitly include the units attribute; there is no default value. The units attribute takes a string value formatted as per the recommendations in the Udunits package. The acceptable units for time are listed in the udunits.dat file. The most commonly used of these strings (and their abbreviations) includes day (d), hour (hr, h), minute (min) and second (sec, s). Plural forms are also acceptable. The reference time string (appearing after the identifier since) may include date alone; date and time; or date, time, and time zone. The reference time is required. A reference time in year 0 has a special meaning (see Section 7.4, "Climatological Statistics"). Recommend that the unit year be used with caution. It is not a calendar year. For similar reasons the unit month should also be used with caution. A time coordinate is identifiable from its units string alone. Optionally, the time coordinate may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value T. :param netCDF4.Dataset ds: An open netCDF dataset :rtype: list :return: List of results
Below is the instruction that describes the task: ### Input: Check variables defining time are valid under CF CF §4.4 Variables representing time must always explicitly include the units attribute; there is no default value. The units attribute takes a string value formatted as per the recommendations in the Udunits package. The acceptable units for time are listed in the udunits.dat file. The most commonly used of these strings (and their abbreviations) includes day (d), hour (hr, h), minute (min) and second (sec, s). Plural forms are also acceptable. The reference time string (appearing after the identifier since) may include date alone; date and time; or date, time, and time zone. The reference time is required. A reference time in year 0 has a special meaning (see Section 7.4, "Climatological Statistics"). Recommend that the unit year be used with caution. It is not a calendar year. For similar reasons the unit month should also be used with caution. A time coordinate is identifiable from its units string alone. Optionally, the time coordinate may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value T. :param netCDF4.Dataset ds: An open netCDF dataset :rtype: list :return: List of results ### Response: def check_time_coordinate(self, ds): ''' Check variables defining time are valid under CF CF §4.4 Variables representing time must always explicitly include the units attribute; there is no default value. The units attribute takes a string value formatted as per the recommendations in the Udunits package. The acceptable units for time are listed in the udunits.dat file. The most commonly used of these strings (and their abbreviations) includes day (d), hour (hr, h), minute (min) and second (sec, s). Plural forms are also acceptable. The reference time string (appearing after the identifier since) may include date alone; date and time; or date, time, and time zone. 
The reference time is required. A reference time in year 0 has a special meaning (see Section 7.4, "Climatological Statistics"). Recommend that the unit year be used with caution. It is not a calendar year. For similar reasons the unit month should also be used with caution. A time coordinate is identifiable from its units string alone. Optionally, the time coordinate may be indicated additionally by providing the standard_name attribute with an appropriate value, and/or the axis attribute with the value T. :param netCDF4.Dataset ds: An open netCDF dataset :rtype: list :return: List of results ''' ret_val = [] for name in cfutil.get_time_variables(ds): variable = ds.variables[name] # Has units has_units = hasattr(variable, 'units') if not has_units: result = Result(BaseCheck.HIGH, False, self.section_titles['4.4'], ['%s does not have units' % name]) ret_val.append(result) continue # Correct and identifiable units result = Result(BaseCheck.HIGH, True, self.section_titles['4.4']) ret_val.append(result) correct_units = util.units_temporal(variable.units) reasoning = None if not correct_units: reasoning = ['%s does not have correct time units' % name] result = Result(BaseCheck.HIGH, correct_units, self.section_titles['4.4'], reasoning) ret_val.append(result) return ret_val
def port_str_arrange(ports): """ Gives a str in the format (always tcp listed first). T:<tcp ports/portrange comma separated>U:<udp ports comma separated> """ b_tcp = ports.find("T") b_udp = ports.find("U") if (b_udp != -1 and b_tcp != -1) and b_udp < b_tcp: return ports[b_tcp:] + ports[b_udp:b_tcp] return ports
Gives a str in the format (always tcp listed first). T:<tcp ports/portrange comma separated>U:<udp ports comma separated>
Below is the instruction that describes the task: ### Input: Gives a str in the format (always tcp listed first). T:<tcp ports/portrange comma separated>U:<udp ports comma separated> ### Response: def port_str_arrange(ports): """ Gives a str in the format (always tcp listed first). T:<tcp ports/portrange comma separated>U:<udp ports comma separated> """ b_tcp = ports.find("T") b_udp = ports.find("U") if (b_udp != -1 and b_tcp != -1) and b_udp < b_tcp: return ports[b_tcp:] + ports[b_udp:b_tcp] return ports
def flake(self, message): """ pyflakes found something wrong with the code. @param: A L{pyflakes.messages.Message}. """ self._stdout.write(str(message)) self._stdout.write('\n')
pyflakes found something wrong with the code. @param: A L{pyflakes.messages.Message}.
Below is the instruction that describes the task: ### Input: pyflakes found something wrong with the code. @param: A L{pyflakes.messages.Message}. ### Response: def flake(self, message): """ pyflakes found something wrong with the code. @param: A L{pyflakes.messages.Message}. """ self._stdout.write(str(message)) self._stdout.write('\n')
def set(self, key, val): """ Return a new PMap with key and val inserted. >>> m1 = m(a=1, b=2) >>> m2 = m1.set('a', 3) >>> m3 = m1.set('c', 4) >>> m1 pmap({'a': 1, 'b': 2}) >>> m2 pmap({'a': 3, 'b': 2}) >>> m3 pmap({'a': 1, 'c': 4, 'b': 2}) """ return self.evolver().set(key, val).persistent()
Return a new PMap with key and val inserted. >>> m1 = m(a=1, b=2) >>> m2 = m1.set('a', 3) >>> m3 = m1.set('c', 4) >>> m1 pmap({'a': 1, 'b': 2}) >>> m2 pmap({'a': 3, 'b': 2}) >>> m3 pmap({'a': 1, 'c': 4, 'b': 2})
Below is the instruction that describes the task: ### Input: Return a new PMap with key and val inserted. >>> m1 = m(a=1, b=2) >>> m2 = m1.set('a', 3) >>> m3 = m1.set('c', 4) >>> m1 pmap({'a': 1, 'b': 2}) >>> m2 pmap({'a': 3, 'b': 2}) >>> m3 pmap({'a': 1, 'c': 4, 'b': 2}) ### Response: def set(self, key, val): """ Return a new PMap with key and val inserted. >>> m1 = m(a=1, b=2) >>> m2 = m1.set('a', 3) >>> m3 = m1.set('c', 4) >>> m1 pmap({'a': 1, 'b': 2}) >>> m2 pmap({'a': 3, 'b': 2}) >>> m3 pmap({'a': 1, 'c': 4, 'b': 2}) """ return self.evolver().set(key, val).persistent()
def parse_manage_name_id_request(self, xmlstr, binding=BINDING_SOAP): """ Deal with a LogoutRequest :param xmlstr: The response as a xml string :param binding: What type of binding this message came through. :return: None if the reply doesn't contain a valid SAML LogoutResponse, otherwise the response if the logout was successful and None if it was not. """ return self._parse_request(xmlstr, saml_request.ManageNameIDRequest, "manage_name_id_service", binding)
Deal with a LogoutRequest :param xmlstr: The response as a xml string :param binding: What type of binding this message came through. :return: None if the reply doesn't contain a valid SAML LogoutResponse, otherwise the reponse if the logout was successful and None if it was not.
Below is the the instruction that describes the task: ### Input: Deal with a LogoutRequest :param xmlstr: The response as a xml string :param binding: What type of binding this message came through. :return: None if the reply doesn't contain a valid SAML LogoutResponse, otherwise the reponse if the logout was successful and None if it was not. ### Response: def parse_manage_name_id_request(self, xmlstr, binding=BINDING_SOAP): """ Deal with a LogoutRequest :param xmlstr: The response as a xml string :param binding: What type of binding this message came through. :return: None if the reply doesn't contain a valid SAML LogoutResponse, otherwise the reponse if the logout was successful and None if it was not. """ return self._parse_request(xmlstr, saml_request.ManageNameIDRequest, "manage_name_id_service", binding)
def sigterm_handler(signum, frame): '''Intercept sigterm and terminate all processes. ''' if captureproc and captureproc.poll() is None: captureproc.terminate() terminate(True) sys.exit(0)
Intercept sigterm and terminate all processes.
Below is the instruction that describes the task: ### Input: Intercept sigterm and terminate all processes. ### Response: def sigterm_handler(signum, frame): '''Intercept sigterm and terminate all processes. ''' if captureproc and captureproc.poll() is None: captureproc.terminate() terminate(True) sys.exit(0)
async def disconnect(self, conn_id): """Disconnect from a connected device. See :meth:`AbstractDeviceAdapter.disconnect`. """ resp = await self._execute(self._adapter.disconnect_sync, conn_id) _raise_error(conn_id, 'disconnect', resp) self._teardown_connection(conn_id, force=True)
Disconnect from a connected device. See :meth:`AbstractDeviceAdapter.disconnect`.
Below is the instruction that describes the task: ### Input: Disconnect from a connected device. See :meth:`AbstractDeviceAdapter.disconnect`. ### Response: async def disconnect(self, conn_id): """Disconnect from a connected device. See :meth:`AbstractDeviceAdapter.disconnect`. """ resp = await self._execute(self._adapter.disconnect_sync, conn_id) _raise_error(conn_id, 'disconnect', resp) self._teardown_connection(conn_id, force=True)
def split_coords_3d(seq): """ :param seq: a flat list with lons, lats and depths :returns: a validated list of (lon, lat, depths) triplets >>> split_coords_3d([1.1, 2.1, 0.1, 2.3, 2.4, 0.1]) [(1.1, 2.1, 0.1), (2.3, 2.4, 0.1)] """ lons, lats, depths = [], [], [] for i, el in enumerate(seq): if i % 3 == 0: lons.append(valid.longitude(el)) elif i % 3 == 1: lats.append(valid.latitude(el)) elif i % 3 == 2: depths.append(valid.depth(el)) return list(zip(lons, lats, depths))
:param seq: a flat list with lons, lats and depths :returns: a validated list of (lon, lat, depths) triplets >>> split_coords_3d([1.1, 2.1, 0.1, 2.3, 2.4, 0.1]) [(1.1, 2.1, 0.1), (2.3, 2.4, 0.1)]
Below is the instruction that describes the task: ### Input: :param seq: a flat list with lons, lats and depths :returns: a validated list of (lon, lat, depths) triplets >>> split_coords_3d([1.1, 2.1, 0.1, 2.3, 2.4, 0.1]) [(1.1, 2.1, 0.1), (2.3, 2.4, 0.1)] ### Response: def split_coords_3d(seq): """ :param seq: a flat list with lons, lats and depths :returns: a validated list of (lon, lat, depths) triplets >>> split_coords_3d([1.1, 2.1, 0.1, 2.3, 2.4, 0.1]) [(1.1, 2.1, 0.1), (2.3, 2.4, 0.1)] """ lons, lats, depths = [], [], [] for i, el in enumerate(seq): if i % 3 == 0: lons.append(valid.longitude(el)) elif i % 3 == 1: lats.append(valid.latitude(el)) elif i % 3 == 2: depths.append(valid.depth(el)) return list(zip(lons, lats, depths))
def ignore(wrapped): """ Decorator to ignore a Python function. If a Python callable is decorated with ``@spl.ignore`` then the function is ignored by ``spl-python-extract.py``. Args: wrapped: Function that will be ignored. """ @functools.wraps(wrapped) def _ignore(*args, **kwargs): return wrapped(*args, **kwargs) _ignore._splpy_optype = _OperatorType.Ignore _ignore._splpy_file = inspect.getsourcefile(wrapped) return _ignore
Decorator to ignore a Python function. If a Python callable is decorated with ``@spl.ignore`` then the function is ignored by ``spl-python-extract.py``. Args: wrapped: Function that will be ignored.
Below is the instruction that describes the task: ### Input: Decorator to ignore a Python function. If a Python callable is decorated with ``@spl.ignore`` then the function is ignored by ``spl-python-extract.py``. Args: wrapped: Function that will be ignored. ### Response: def ignore(wrapped): """ Decorator to ignore a Python function. If a Python callable is decorated with ``@spl.ignore`` then the function is ignored by ``spl-python-extract.py``. Args: wrapped: Function that will be ignored. """ @functools.wraps(wrapped) def _ignore(*args, **kwargs): return wrapped(*args, **kwargs) _ignore._splpy_optype = _OperatorType.Ignore _ignore._splpy_file = inspect.getsourcefile(wrapped) return _ignore
def upload(self, project_id, parent_kind, parent_id): """ Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file """ path_data = self.local_file.get_path_data() hash_data = path_data.get_hash() self.upload_id = self.upload_operations.create_upload(project_id, path_data, hash_data, storage_provider_id=self.config.storage_provider_id) ParallelChunkProcessor(self).run() parent_data = ParentData(parent_kind, parent_id) remote_file_data = self.upload_operations.finish_upload(self.upload_id, hash_data, parent_data, self.local_file.remote_id) if self.file_upload_post_processor: self.file_upload_post_processor.run(self.data_service, remote_file_data) return remote_file_data['id']
Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file
Below is the instruction that describes the task: ### Input: Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file ### Response: def upload(self, project_id, parent_kind, parent_id): """ Upload file contents to project within a specified parent. :param project_id: str project uuid :param parent_kind: str type of parent ('dds-project' or 'dds-folder') :param parent_id: str uuid of parent :return: str uuid of the newly uploaded file """ path_data = self.local_file.get_path_data() hash_data = path_data.get_hash() self.upload_id = self.upload_operations.create_upload(project_id, path_data, hash_data, storage_provider_id=self.config.storage_provider_id) ParallelChunkProcessor(self).run() parent_data = ParentData(parent_kind, parent_id) remote_file_data = self.upload_operations.finish_upload(self.upload_id, hash_data, parent_data, self.local_file.remote_id) if self.file_upload_post_processor: self.file_upload_post_processor.run(self.data_service, remote_file_data) return remote_file_data['id']
def _sort_tensor(tensor): """Use `top_k` to sort a `Tensor` along the last dimension.""" sorted_, _ = tf.nn.top_k(tensor, k=tf.shape(input=tensor)[-1]) sorted_.set_shape(tensor.shape) return sorted_
Use `top_k` to sort a `Tensor` along the last dimension.
Below is the instruction that describes the task: ### Input: Use `top_k` to sort a `Tensor` along the last dimension. ### Response: def _sort_tensor(tensor): """Use `top_k` to sort a `Tensor` along the last dimension.""" sorted_, _ = tf.nn.top_k(tensor, k=tf.shape(input=tensor)[-1]) sorted_.set_shape(tensor.shape) return sorted_
async def get(self, public_key): """Retrieves all users input and output offers Accepts: - public key """ # Sign-verifying functional #super().verify() # Get coinid account = await self.account.getaccountdata(public_key=public_key) if "error" in account.keys(): self.set_status(account["error"]) self.write(account) raise tornado.web.Finish offers_collection = [] for coinid in settings.AVAILABLE_COIN_ID: try: self.account.blockchain.setendpoint(settings.bridges[coinid]) except: continue try: offers = await self.account.blockchain.getbuyeroffers( buyer_address=self.account.validator[coinid](public_key)) for offer in offers: offer["type"] = self.account.ident_offer[offer["type"]] storage_offers = await self.account.getoffers(coinid=coinid, public_key=public_key) except: continue offers_collection.extend(offers + storage_offers) self.write(json.dumps(offers_collection))
Retrieves all users input and output offers Accepts: - public key
Below is the instruction that describes the task: ### Input: Retrieves all users input and output offers Accepts: - public key ### Response: async def get(self, public_key): """Retrieves all users input and output offers Accepts: - public key """ # Sign-verifying functional #super().verify() # Get coinid account = await self.account.getaccountdata(public_key=public_key) if "error" in account.keys(): self.set_status(account["error"]) self.write(account) raise tornado.web.Finish offers_collection = [] for coinid in settings.AVAILABLE_COIN_ID: try: self.account.blockchain.setendpoint(settings.bridges[coinid]) except: continue try: offers = await self.account.blockchain.getbuyeroffers( buyer_address=self.account.validator[coinid](public_key)) for offer in offers: offer["type"] = self.account.ident_offer[offer["type"]] storage_offers = await self.account.getoffers(coinid=coinid, public_key=public_key) except: continue offers_collection.extend(offers + storage_offers) self.write(json.dumps(offers_collection))
def kms_encrypt(kms_client, service, env, secret): """ Encrypt string for use by a given service/environment Args: kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients. service (string): name of the service that the secret is being encrypted for. env (string): environment that the secret is being encrypted for. secret (string): value to be encrypted Returns: a populated EFPWContext object Raises: SystemExit(1): If there is an error with the boto3 encryption call (ex. missing kms key) """ # Converting all periods to underscores because they are invalid in KMS alias names key_alias = '{}-{}'.format(env, service.replace('.', '_')) try: response = kms_client.encrypt( KeyId='alias/{}'.format(key_alias), Plaintext=secret.encode() ) except ClientError as error: if error.response['Error']['Code'] == "NotFoundException": fail("Key '{}' not found. You may need to run ef-generate for this environment.".format(key_alias), error) else: fail("boto3 exception occurred while performing kms encrypt operation.", error) encrypted_secret = base64.b64encode(response['CiphertextBlob']) return encrypted_secret
Encrypt string for use by a given service/environment Args: kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients. service (string): name of the service that the secret is being encrypted for. env (string): environment that the secret is being encrypted for. secret (string): value to be encrypted Returns: a populated EFPWContext object Raises: SystemExit(1): If there is an error with the boto3 encryption call (ex. missing kms key)
Below is the instruction that describes the task: ### Input: Encrypt string for use by a given service/environment Args: kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients. service (string): name of the service that the secret is being encrypted for. env (string): environment that the secret is being encrypted for. secret (string): value to be encrypted Returns: a populated EFPWContext object Raises: SystemExit(1): If there is an error with the boto3 encryption call (ex. missing kms key) ### Response: def kms_encrypt(kms_client, service, env, secret): """ Encrypt string for use by a given service/environment Args: kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients. service (string): name of the service that the secret is being encrypted for. env (string): environment that the secret is being encrypted for. secret (string): value to be encrypted Returns: a populated EFPWContext object Raises: SystemExit(1): If there is an error with the boto3 encryption call (ex. missing kms key) """ # Converting all periods to underscores because they are invalid in KMS alias names key_alias = '{}-{}'.format(env, service.replace('.', '_')) try: response = kms_client.encrypt( KeyId='alias/{}'.format(key_alias), Plaintext=secret.encode() ) except ClientError as error: if error.response['Error']['Code'] == "NotFoundException": fail("Key '{}' not found. You may need to run ef-generate for this environment.".format(key_alias), error) else: fail("boto3 exception occurred while performing kms encrypt operation.", error) encrypted_secret = base64.b64encode(response['CiphertextBlob']) return encrypted_secret
def _preprocess(self, data, train): """Zero-mean, unit-variance normalization by default""" if train: inputs, labels = data self.data_mean = inputs.mean(axis=0) self.data_std = inputs.std(axis=0) self.labels_mean = labels.mean(axis=0) self.labels_std = labels.std(axis=0) return ((inputs-self.data_mean)/self.data_std, (labels-self.labels_mean)/self.labels_std) else: return (data-self.data_mean)/self.data_std
Zero-mean, unit-variance normalization by default
Below is the instruction that describes the task: ### Input: Zero-mean, unit-variance normalization by default ### Response: def _preprocess(self, data, train): """Zero-mean, unit-variance normalization by default""" if train: inputs, labels = data self.data_mean = inputs.mean(axis=0) self.data_std = inputs.std(axis=0) self.labels_mean = labels.mean(axis=0) self.labels_std = labels.std(axis=0) return ((inputs-self.data_mean)/self.data_std, (labels-self.labels_mean)/self.labels_std) else: return (data-self.data_mean)/self.data_std
def subpnt(method, target, et, fixref, abcorr, obsrvr): """ Compute the rectangular coordinates of the sub-observer point on a target body at a specified epoch, optionally corrected for light time and stellar aberration. This routine supersedes :func:`subpt`. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/subpnt_c.html :param method: Computation method. :type method: str :param target: Name of target body. :type target: str :param et: Epoch in ephemeris seconds past J2000 TDB. :type et: float :param fixref: Body-fixed, body-centered target body frame. :type fixref: str :param abcorr: Aberration correction. :type abcorr: str :param obsrvr: Name of observing body. :type obsrvr: str :return: Sub-observer point on the target body, Sub-observer point epoch, Vector from observer to sub-observer point. :rtype: tuple """ method = stypes.stringToCharP(method) target = stypes.stringToCharP(target) et = ctypes.c_double(et) fixref = stypes.stringToCharP(fixref) abcorr = stypes.stringToCharP(abcorr) obsrvr = stypes.stringToCharP(obsrvr) spoint = stypes.emptyDoubleVector(3) trgepc = ctypes.c_double(0) srfvec = stypes.emptyDoubleVector(3) libspice.subpnt_c(method, target, et, fixref, abcorr, obsrvr, spoint, ctypes.byref(trgepc), srfvec) return stypes.cVectorToPython(spoint), trgepc.value, stypes.cVectorToPython( srfvec)
Compute the rectangular coordinates of the sub-observer point on a target body at a specified epoch, optionally corrected for light time and stellar aberration. This routine supersedes :func:`subpt`. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/subpnt_c.html :param method: Computation method. :type method: str :param target: Name of target body. :type target: str :param et: Epoch in ephemeris seconds past J2000 TDB. :type et: float :param fixref: Body-fixed, body-centered target body frame. :type fixref: str :param abcorr: Aberration correction. :type abcorr: str :param obsrvr: Name of observing body. :type obsrvr: str :return: Sub-observer point on the target body, Sub-observer point epoch, Vector from observer to sub-observer point. :rtype: tuple
Below is the instruction that describes the task: ### Input: Compute the rectangular coordinates of the sub-observer point on a target body at a specified epoch, optionally corrected for light time and stellar aberration. This routine supersedes :func:`subpt`. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/subpnt_c.html :param method: Computation method. :type method: str :param target: Name of target body. :type target: str :param et: Epoch in ephemeris seconds past J2000 TDB. :type et: float :param fixref: Body-fixed, body-centered target body frame. :type fixref: str :param abcorr: Aberration correction. :type abcorr: str :param obsrvr: Name of observing body. :type obsrvr: str :return: Sub-observer point on the target body, Sub-observer point epoch, Vector from observer to sub-observer point. :rtype: tuple ### Response: def subpnt(method, target, et, fixref, abcorr, obsrvr): """ Compute the rectangular coordinates of the sub-observer point on a target body at a specified epoch, optionally corrected for light time and stellar aberration. This routine supersedes :func:`subpt`. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/subpnt_c.html :param method: Computation method. :type method: str :param target: Name of target body. :type target: str :param et: Epoch in ephemeris seconds past J2000 TDB. :type et: float :param fixref: Body-fixed, body-centered target body frame. :type fixref: str :param abcorr: Aberration correction. :type abcorr: str :param obsrvr: Name of observing body. :type obsrvr: str :return: Sub-observer point on the target body, Sub-observer point epoch, Vector from observer to sub-observer point. :rtype: tuple """ method = stypes.stringToCharP(method) target = stypes.stringToCharP(target) et = ctypes.c_double(et) fixref = stypes.stringToCharP(fixref) abcorr = stypes.stringToCharP(abcorr) obsrvr = stypes.stringToCharP(obsrvr) spoint = stypes.emptyDoubleVector(3) trgepc = ctypes.c_double(0) srfvec = stypes.emptyDoubleVector(3) libspice.subpnt_c(method, target, et, fixref, abcorr, obsrvr, spoint, ctypes.byref(trgepc), srfvec) return stypes.cVectorToPython(spoint), trgepc.value, stypes.cVectorToPython( srfvec)
def compress(self): """Compress files on the server side after transfer complete and make zip available for download. :rtype: ``bool`` """ method, url = get_URL('compress') payload = { 'apikey': self.config.get('apikey'), 'logintoken': self.session.cookies.get('logintoken'), 'transferid': self.transfer_id } res = getattr(self.session, method)(url, params=payload) if res.status_code == 200: return True hellraiser(res)
Compress files on the server side after transfer complete and make zip available for download. :rtype: ``bool``
Below is the instruction that describes the task: ### Input: Compress files on the server side after transfer complete and make zip available for download. :rtype: ``bool`` ### Response: def compress(self): """Compress files on the server side after transfer complete and make zip available for download. :rtype: ``bool`` """ method, url = get_URL('compress') payload = { 'apikey': self.config.get('apikey'), 'logintoken': self.session.cookies.get('logintoken'), 'transferid': self.transfer_id } res = getattr(self.session, method)(url, params=payload) if res.status_code == 200: return True hellraiser(res)
def curve_fit_unscaled(*args, **kwargs): """ Use the reduced chi square to unscale :mod:`scipy`'s scaled :func:`scipy.optimize.curve_fit`. *\*args* and *\*\*kwargs* are passed through to :func:`scipy.optimize.curve_fit`. The tuple *popt, pcov, chisq_red* is returned, where *popt* is the optimal values for the parameters, *pcov* is the estimated covariance of *popt*, and *chisq_red* is the reduced chi square. See http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.curve_fit.html. """ # Extract verbosity verbose = kwargs.pop('verbose', False) # Do initial fit popt, pcov = _spopt.curve_fit(*args, **kwargs) # Expand positional arguments func = args[0] x = args[1] y = args[2] ddof = len(popt) # Try to use sigma to unscale pcov try: sigma = kwargs['sigma'] if sigma is None: sigma = _np.ones(len(y)) # Get reduced chi-square y_expect = func(x, *popt) chisq_red = _chisquare(y, y_expect, sigma, ddof, verbose=verbose) # Correct scaled covariance matrix pcov = pcov / chisq_red return popt, pcov, chisq_red except ValueError: print('hello')
Use the reduced chi square to unscale :mod:`scipy`'s scaled :func:`scipy.optimize.curve_fit`. *\*args* and *\*\*kwargs* are passed through to :func:`scipy.optimize.curve_fit`. The tuple *popt, pcov, chisq_red* is returned, where *popt* is the optimal values for the parameters, *pcov* is the estimated covariance of *popt*, and *chisq_red* is the reduced chi square. See http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.curve_fit.html.
Below is the instruction that describes the task: ### Input: Use the reduced chi square to unscale :mod:`scipy`'s scaled :func:`scipy.optimize.curve_fit`. *\*args* and *\*\*kwargs* are passed through to :func:`scipy.optimize.curve_fit`. The tuple *popt, pcov, chisq_red* is returned, where *popt* is the optimal values for the parameters, *pcov* is the estimated covariance of *popt*, and *chisq_red* is the reduced chi square. See http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.curve_fit.html. ### Response: def curve_fit_unscaled(*args, **kwargs): """ Use the reduced chi square to unscale :mod:`scipy`'s scaled :func:`scipy.optimize.curve_fit`. *\*args* and *\*\*kwargs* are passed through to :func:`scipy.optimize.curve_fit`. The tuple *popt, pcov, chisq_red* is returned, where *popt* is the optimal values for the parameters, *pcov* is the estimated covariance of *popt*, and *chisq_red* is the reduced chi square. See http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.curve_fit.html. """ # Extract verbosity verbose = kwargs.pop('verbose', False) # Do initial fit popt, pcov = _spopt.curve_fit(*args, **kwargs) # Expand positional arguments func = args[0] x = args[1] y = args[2] ddof = len(popt) # Try to use sigma to unscale pcov try: sigma = kwargs['sigma'] if sigma is None: sigma = _np.ones(len(y)) # Get reduced chi-square y_expect = func(x, *popt) chisq_red = _chisquare(y, y_expect, sigma, ddof, verbose=verbose) # Correct scaled covariance matrix pcov = pcov / chisq_red return popt, pcov, chisq_red except ValueError: print('hello')
def _pop(self, block=True, timeout=None, left=False): """Removes and returns an item from this GeventDeque. This is an internal method, called by the public methods pop() and popleft(). """ item = None timer = None deque = self._deque empty = IndexError('pop from an empty deque') if block is False: if len(self._deque) > 0: item = deque.popleft() if left else deque.pop() else: raise empty else: try: if timeout is not None: timer = gevent.Timeout(timeout, empty) timer.start() while True: self.notEmpty.wait() if len(deque) > 0: item = deque.popleft() if left else deque.pop() break finally: if timer is not None: timer.cancel() if len(deque) == 0: self.notEmpty.clear() return item
Removes and returns an item from this GeventDeque. This is an internal method, called by the public methods pop() and popleft().
Below is the instruction that describes the task: ### Input: Removes and returns an item from this GeventDeque. This is an internal method, called by the public methods pop() and popleft(). ### Response: def _pop(self, block=True, timeout=None, left=False): """Removes and returns an item from this GeventDeque. This is an internal method, called by the public methods pop() and popleft(). """ item = None timer = None deque = self._deque empty = IndexError('pop from an empty deque') if block is False: if len(self._deque) > 0: item = deque.popleft() if left else deque.pop() else: raise empty else: try: if timeout is not None: timer = gevent.Timeout(timeout, empty) timer.start() while True: self.notEmpty.wait() if len(deque) > 0: item = deque.popleft() if left else deque.pop() break finally: if timer is not None: timer.cancel() if len(deque) == 0: self.notEmpty.clear() return item
def _get_crawled_urls(self, handle, request): """ Main method where the crawler html content is parsed with beautiful soup and out of the DOM, we get the urls """ try: content = six.text_type(handle.open(request).read(), "utf-8", errors="replace") soup = BeautifulSoup(content, "html.parser") tags = soup('a') for tag in tqdm(tags): href = tag.get("href") if href is not None: url = urllib.parse.urljoin(self.url, escape(href)) if url not in self: self.urls.append(url) except urllib.request.HTTPError as error: if error.code == 404: logger.warning("ERROR: %s -> %s for %s" % (error, error.url, self.url)) else: logger.warning("ERROR: %s for %s" % (error, self.url)) except urllib.request.URLError as error: logger.warning("ERROR: %s for %s" % (error, self.url)) raise urllib.request.URLError("URL entered is Incorrect")
Main method where the crawler html content is parsed with beautiful soup and out of the DOM, we get the urls
Below is the instruction that describes the task: ### Input: Main method where the crawler html content is parsed with beautiful soup and out of the DOM, we get the urls ### Response: def _get_crawled_urls(self, handle, request): """ Main method where the crawler html content is parsed with beautiful soup and out of the DOM, we get the urls """ try: content = six.text_type(handle.open(request).read(), "utf-8", errors="replace") soup = BeautifulSoup(content, "html.parser") tags = soup('a') for tag in tqdm(tags): href = tag.get("href") if href is not None: url = urllib.parse.urljoin(self.url, escape(href)) if url not in self: self.urls.append(url) except urllib.request.HTTPError as error: if error.code == 404: logger.warning("ERROR: %s -> %s for %s" % (error, error.url, self.url)) else: logger.warning("ERROR: %s for %s" % (error, self.url)) except urllib.request.URLError as error: logger.warning("ERROR: %s for %s" % (error, self.url)) raise urllib.request.URLError("URL entered is Incorrect")
def register_converter(operator_name, conversion_function, overwrite=False): ''' :param operator_name: A unique operator ID. It is usually a string but you can use a type as well :param conversion_function: A callable object :param overwrite: By default, we raise an exception if the caller of this function is trying to assign an existing key (i.e., operator_name) a new value (i.e., conversion_function). Set this flag to True to enable overwriting. ''' if not overwrite and operator_name in _converter_pool: raise ValueError('We do not overwrite registered converters by default') _converter_pool[operator_name] = conversion_function
:param operator_name: A unique operator ID. It is usually a string but you can use a type as well :param conversion_function: A callable object :param overwrite: By default, we raise an exception if the caller of this function is trying to assign an existing key (i.e., operator_name) a new value (i.e., conversion_function). Set this flag to True to enable overwriting.
Below is the instruction that describes the task: ### Input: :param operator_name: A unique operator ID. It is usually a string but you can use a type as well :param conversion_function: A callable object :param overwrite: By default, we raise an exception if the caller of this function is trying to assign an existing key (i.e., operator_name) a new value (i.e., conversion_function). Set this flag to True to enable overwriting. ### Response: def register_converter(operator_name, conversion_function, overwrite=False): ''' :param operator_name: A unique operator ID. It is usually a string but you can use a type as well :param conversion_function: A callable object :param overwrite: By default, we raise an exception if the caller of this function is trying to assign an existing key (i.e., operator_name) a new value (i.e., conversion_function). Set this flag to True to enable overwriting. ''' if not overwrite and operator_name in _converter_pool: raise ValueError('We do not overwrite registered converters by default') _converter_pool[operator_name] = conversion_function
def create_api(app_id=None, login=None, password=None, phone_number=None, scope='offline', api_version='5.92', http_params=None, interactive=False, service_token=None, client_secret=None, two_fa_supported=False, two_fa_force_sms=False): """Factory method to explicitly create API with app_id, login, password and phone_number parameters. If the app_id, login, password are not passed, then token-free session will be created automatically :param app_id: int: vk application id, more info: https://vk.com/dev/main :param login: str: vk login :param password: str: vk password :param phone_number: str: phone number with country code (+71234568990) :param scope: str or list of str: vk session scope :param api_version: str: vk api version, check https://vk.com/dev/versions :param interactive: bool: flag which indicates to use InteractiveVKSession :param service_token: str: new way of querying vk api, instead of getting oauth token :param http_params: dict: requests http parameters passed along :param client_secret: str: secure application key for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_supported: bool: enable two-factor authentication for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_force_sms: bool: force SMS two-factor authentication for Direct Authorization if two_fa_supported is True, more info: https://vk.com/dev/auth_direct :return: api instance :rtype : vk_requests.api.API """ session = VKSession(app_id=app_id, user_login=login, user_password=password, phone_number=phone_number, scope=scope, service_token=service_token, api_version=api_version, interactive=interactive, client_secret=client_secret, two_fa_supported = two_fa_supported, two_fa_force_sms=two_fa_force_sms) return API(session=session, http_params=http_params)
Factory method to explicitly create API with app_id, login, password and phone_number parameters. If the app_id, login, password are not passed, then token-free session will be created automatically :param app_id: int: vk application id, more info: https://vk.com/dev/main :param login: str: vk login :param password: str: vk password :param phone_number: str: phone number with country code (+71234568990) :param scope: str or list of str: vk session scope :param api_version: str: vk api version, check https://vk.com/dev/versions :param interactive: bool: flag which indicates to use InteractiveVKSession :param service_token: str: new way of querying vk api, instead of getting oauth token :param http_params: dict: requests http parameters passed along :param client_secret: str: secure application key for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_supported: bool: enable two-factor authentication for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_force_sms: bool: force SMS two-factor authentication for Direct Authorization if two_fa_supported is True, more info: https://vk.com/dev/auth_direct :return: api instance :rtype : vk_requests.api.API
Below is the instruction that describes the task: ### Input: Factory method to explicitly create API with app_id, login, password and phone_number parameters. If the app_id, login, password are not passed, then token-free session will be created automatically :param app_id: int: vk application id, more info: https://vk.com/dev/main :param login: str: vk login :param password: str: vk password :param phone_number: str: phone number with country code (+71234568990) :param scope: str or list of str: vk session scope :param api_version: str: vk api version, check https://vk.com/dev/versions :param interactive: bool: flag which indicates to use InteractiveVKSession :param service_token: str: new way of querying vk api, instead of getting oauth token :param http_params: dict: requests http parameters passed along :param client_secret: str: secure application key for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_supported: bool: enable two-factor authentication for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_force_sms: bool: force SMS two-factor authentication for Direct Authorization if two_fa_supported is True, more info: https://vk.com/dev/auth_direct :return: api instance :rtype : vk_requests.api.API ### Response: def create_api(app_id=None, login=None, password=None, phone_number=None, scope='offline', api_version='5.92', http_params=None, interactive=False, service_token=None, client_secret=None, two_fa_supported=False, two_fa_force_sms=False): """Factory method to explicitly create API with app_id, login, password and phone_number parameters. If the app_id, login, password are not passed, then token-free session will be created automatically :param app_id: int: vk application id, more info: https://vk.com/dev/main :param login: str: vk login :param password: str: vk password :param phone_number: str: phone number with country code (+71234568990) :param scope: str or list of str: vk session scope :param api_version: str: vk api version, check https://vk.com/dev/versions :param interactive: bool: flag which indicates to use InteractiveVKSession :param service_token: str: new way of querying vk api, instead of getting oauth token :param http_params: dict: requests http parameters passed along :param client_secret: str: secure application key for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_supported: bool: enable two-factor authentication for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_force_sms: bool: force SMS two-factor authentication for Direct Authorization if two_fa_supported is True, more info: https://vk.com/dev/auth_direct :return: api instance :rtype : vk_requests.api.API """ session = VKSession(app_id=app_id, user_login=login, user_password=password, phone_number=phone_number, scope=scope, service_token=service_token, api_version=api_version, interactive=interactive, client_secret=client_secret, two_fa_supported = two_fa_supported, two_fa_force_sms=two_fa_force_sms) return API(session=session, http_params=http_params)
def _attributes2pys(self): """Writes attributes to pys file Format: <selection[0]>\t[...]\t<tab>\t<key>\t<value>\t[...]\n """ # Remove doublettes purged_cell_attributes = [] purged_cell_attributes_keys = [] for selection, tab, attr_dict in self.code_array.cell_attributes: if purged_cell_attributes_keys and \ (selection, tab) == purged_cell_attributes_keys[-1]: purged_cell_attributes[-1][2].update(attr_dict) else: purged_cell_attributes_keys.append((selection, tab)) purged_cell_attributes.append([selection, tab, attr_dict]) for selection, tab, attr_dict in purged_cell_attributes: sel_list = [selection.block_tl, selection.block_br, selection.rows, selection.cols, selection.cells] tab_list = [tab] attr_dict_list = [] for key in attr_dict: attr_dict_list.append(key) attr_dict_list.append(attr_dict[key]) if config["font_save_enabled"] and key == 'textfont': self.fonts_used.append(attr_dict[key]) line_list = map(repr, sel_list + tab_list + attr_dict_list) self.pys_file.write(u"\t".join(line_list) + u"\n")
Writes attributes to pys file Format: <selection[0]>\t[...]\t<tab>\t<key>\t<value>\t[...]\n
Below is the instruction that describes the task: ### Input: Writes attributes to pys file Format: <selection[0]>\t[...]\t<tab>\t<key>\t<value>\t[...]\n ### Response: def _attributes2pys(self): """Writes attributes to pys file Format: <selection[0]>\t[...]\t<tab>\t<key>\t<value>\t[...]\n """ # Remove doublettes purged_cell_attributes = [] purged_cell_attributes_keys = [] for selection, tab, attr_dict in self.code_array.cell_attributes: if purged_cell_attributes_keys and \ (selection, tab) == purged_cell_attributes_keys[-1]: purged_cell_attributes[-1][2].update(attr_dict) else: purged_cell_attributes_keys.append((selection, tab)) purged_cell_attributes.append([selection, tab, attr_dict]) for selection, tab, attr_dict in purged_cell_attributes: sel_list = [selection.block_tl, selection.block_br, selection.rows, selection.cols, selection.cells] tab_list = [tab] attr_dict_list = [] for key in attr_dict: attr_dict_list.append(key) attr_dict_list.append(attr_dict[key]) if config["font_save_enabled"] and key == 'textfont': self.fonts_used.append(attr_dict[key]) line_list = map(repr, sel_list + tab_list + attr_dict_list) self.pys_file.write(u"\t".join(line_list) + u"\n")
def bpoints2bezier(bpoints): """Converts a list of length 2, 3, or 4 to a CubicBezier, QuadraticBezier, or Line object, respectively. See also: poly2bez.""" order = len(bpoints) - 1 if order == 3: return CubicBezier(*bpoints) elif order == 2: return QuadraticBezier(*bpoints) elif order == 1: return Line(*bpoints) else: assert len(bpoints) in {2, 3, 4}
Converts a list of length 2, 3, or 4 to a CubicBezier, QuadraticBezier, or Line object, respectively. See also: poly2bez.
Below is the instruction that describes the task: ### Input: Converts a list of length 2, 3, or 4 to a CubicBezier, QuadraticBezier, or Line object, respectively. See also: poly2bez. ### Response: def bpoints2bezier(bpoints): """Converts a list of length 2, 3, or 4 to a CubicBezier, QuadraticBezier, or Line object, respectively. See also: poly2bez.""" order = len(bpoints) - 1 if order == 3: return CubicBezier(*bpoints) elif order == 2: return QuadraticBezier(*bpoints) elif order == 1: return Line(*bpoints) else: assert len(bpoints) in {2, 3, 4}
def photo(self): """ Returns either the :tl:`WebDocument` thumbnail for normal results or the :tl:`Photo` for media results. """ if isinstance(self.result, types.BotInlineResult): return self.result.thumb elif isinstance(self.result, types.BotInlineMediaResult): return self.result.photo
Returns either the :tl:`WebDocument` thumbnail for normal results or the :tl:`Photo` for media results.
Below is the instruction that describes the task: ### Input: Returns either the :tl:`WebDocument` thumbnail for normal results or the :tl:`Photo` for media results. ### Response: def photo(self): """ Returns either the :tl:`WebDocument` thumbnail for normal results or the :tl:`Photo` for media results. """ if isinstance(self.result, types.BotInlineResult): return self.result.thumb elif isinstance(self.result, types.BotInlineMediaResult): return self.result.photo
def get_differing_atom_residue_ids(self, pdb_name, pdb_list = []): '''Returns a list of residues in pdb_name which differ from the pdbs corresponding to the names in pdb_list.''' # partition_by_sequence is a map pdb_name -> Int where two pdb names in the same equivalence class map to the same integer (i.e. it is a partition) # representative_pdbs is a map Int -> pdb_object mapping the equivalence classes (represented by an integer) to a representative PDB file # self.pdb_name_to_structure_mapping : pdb_name -> pdb_object # Sanity checks assert(pdb_name in self.pdb_names) assert(set(pdb_list).intersection(set(self.pdb_names)) == set(pdb_list)) # the names in pdb_list must be in pdb_names # 1. Get the representative structure for pdb_name representative_pdb_id = self.partition_by_sequence[pdb_name] representative_pdb = self.representative_pdbs[representative_pdb_id] # 2. Get the other representative structures as dictated by pdb_list other_representative_pdbs = set() other_representative_pdb_ids = set() if not pdb_list: pdb_list = self.pdb_names for opdb_name in pdb_list: orepresentative_pdb_id = self.partition_by_sequence[opdb_name] other_representative_pdb_ids.add(orepresentative_pdb_id) other_representative_pdbs.add(self.representative_pdbs[orepresentative_pdb_id]) other_representative_pdbs.discard(representative_pdb) other_representative_pdb_ids.discard(representative_pdb_id) # Early out if pdb_list was empty (or all pdbs were in the same equivalence class) if not other_representative_pdbs: return [] # 3. Return all residues of pdb_name's representative which differ from all the other representatives differing_atom_residue_ids = set() for other_representative_pdb_id in other_representative_pdb_ids: differing_atom_residue_ids = differing_atom_residue_ids.union(set(self.differing_atom_residue_ids[(representative_pdb_id, other_representative_pdb_id)])) return sorted(differing_atom_residue_ids)
Returns a list of residues in pdb_name which differ from the pdbs corresponding to the names in pdb_list.
Below is the instruction that describes the task: ### Input: Returns a list of residues in pdb_name which differ from the pdbs corresponding to the names in pdb_list. ### Response: def get_differing_atom_residue_ids(self, pdb_name, pdb_list = []): '''Returns a list of residues in pdb_name which differ from the pdbs corresponding to the names in pdb_list.''' # partition_by_sequence is a map pdb_name -> Int where two pdb names in the same equivalence class map to the same integer (i.e. it is a partition) # representative_pdbs is a map Int -> pdb_object mapping the equivalence classes (represented by an integer) to a representative PDB file # self.pdb_name_to_structure_mapping : pdb_name -> pdb_object # Sanity checks assert(pdb_name in self.pdb_names) assert(set(pdb_list).intersection(set(self.pdb_names)) == set(pdb_list)) # the names in pdb_list must be in pdb_names # 1. Get the representative structure for pdb_name representative_pdb_id = self.partition_by_sequence[pdb_name] representative_pdb = self.representative_pdbs[representative_pdb_id] # 2. Get the other representative structures as dictated by pdb_list other_representative_pdbs = set() other_representative_pdb_ids = set() if not pdb_list: pdb_list = self.pdb_names for opdb_name in pdb_list: orepresentative_pdb_id = self.partition_by_sequence[opdb_name] other_representative_pdb_ids.add(orepresentative_pdb_id) other_representative_pdbs.add(self.representative_pdbs[orepresentative_pdb_id]) other_representative_pdbs.discard(representative_pdb) other_representative_pdb_ids.discard(representative_pdb_id) # Early out if pdb_list was empty (or all pdbs were in the same equivalence class) if not other_representative_pdbs: return [] # 3. Return all residues of pdb_name's representative which differ from all the other representatives differing_atom_residue_ids = set() for other_representative_pdb_id in other_representative_pdb_ids: differing_atom_residue_ids = differing_atom_residue_ids.union(set(self.differing_atom_residue_ids[(representative_pdb_id, other_representative_pdb_id)])) return sorted(differing_atom_residue_ids)
def remove(self, item): """ Remove an item from the set, returning if it was present """ with self.lock: if item in self.set: self.set.remove(item) return True return False
Remove an item from the set, returning if it was present
Below is the instruction that describes the task: ### Input: Remove an item from the set, returning if it was present ### Response: def remove(self, item): """ Remove an item from the set, returning if it was present """ with self.lock: if item in self.set: self.set.remove(item) return True return False
def __validate_enrollment_periods(self, enrollments): """Check for overlapped periods in the enrollments""" for a, b in itertools.combinations(enrollments, 2): max_start = max(a.start, b.start) min_end = min(a.end, b.end) if max_start < min_end: msg = "invalid GrimoireLab enrollment dates. " \ "Organization dates overlap." raise InvalidFormatError(cause=msg) return enrollments
Check for overlapped periods in the enrollments
Below is the instruction that describes the task: ### Input: Check for overlapped periods in the enrollments ### Response: def __validate_enrollment_periods(self, enrollments): """Check for overlapped periods in the enrollments""" for a, b in itertools.combinations(enrollments, 2): max_start = max(a.start, b.start) min_end = min(a.end, b.end) if max_start < min_end: msg = "invalid GrimoireLab enrollment dates. " \ "Organization dates overlap." raise InvalidFormatError(cause=msg) return enrollments
def deserialize(json, cls=None): """Deserialize a JSON string into a Python object. Args: json (str): the JSON string. cls (:py:class:`object`): if the ``json`` is deserialized into a ``dict`` and this argument is set, the ``dict`` keys are passed as keyword arguments to the given ``cls`` initializer. Returns: Python object representation of the given JSON string. """ LOGGER.debug('deserialize(%s)', json) out = simplejson.loads(json) if isinstance(out, dict) and cls is not None: return cls(**out) return out
Deserialize a JSON string into a Python object. Args: json (str): the JSON string. cls (:py:class:`object`): if the ``json`` is deserialized into a ``dict`` and this argument is set, the ``dict`` keys are passed as keyword arguments to the given ``cls`` initializer. Returns: Python object representation of the given JSON string.
Below is the instruction that describes the task: ### Input: Deserialize a JSON string into a Python object. Args: json (str): the JSON string. cls (:py:class:`object`): if the ``json`` is deserialized into a ``dict`` and this argument is set, the ``dict`` keys are passed as keyword arguments to the given ``cls`` initializer. Returns: Python object representation of the given JSON string. ### Response: def deserialize(json, cls=None): """Deserialize a JSON string into a Python object. Args: json (str): the JSON string. cls (:py:class:`object`): if the ``json`` is deserialized into a ``dict`` and this argument is set, the ``dict`` keys are passed as keyword arguments to the given ``cls`` initializer. Returns: Python object representation of the given JSON string. """ LOGGER.debug('deserialize(%s)', json) out = simplejson.loads(json) if isinstance(out, dict) and cls is not None: return cls(**out) return out
def _from_dataframe(dataframe, default_type='STRING'): """ Infer a BigQuery table schema from a Pandas dataframe. Note that if you don't explicitly set the types of the columns in the dataframe, they may be of a type that forces coercion to STRING, so even though the fields in the dataframe themselves may be numeric, the type in the derived schema may not be. Hence it is prudent to make sure the Pandas dataframe is typed correctly. Args: dataframe: The DataFrame. default_type : The default big query type in case the type of the column does not exist in the schema. Defaults to 'STRING'. Returns: A list of dictionaries containing field 'name' and 'type' entries, suitable for use in a BigQuery Tables resource schema. """ type_mapping = { 'i': 'INTEGER', 'b': 'BOOLEAN', 'f': 'FLOAT', 'O': 'STRING', 'S': 'STRING', 'U': 'STRING', 'M': 'TIMESTAMP' } fields = [] for column_name, dtype in dataframe.dtypes.iteritems(): fields.append({'name': column_name, 'type': type_mapping.get(dtype.kind, default_type)}) return fields
Infer a BigQuery table schema from a Pandas dataframe. Note that if you don't explicitly set the types of the columns in the dataframe, they may be of a type that forces coercion to STRING, so even though the fields in the dataframe themselves may be numeric, the type in the derived schema may not be. Hence it is prudent to make sure the Pandas dataframe is typed correctly. Args: dataframe: The DataFrame. default_type : The default big query type in case the type of the column does not exist in the schema. Defaults to 'STRING'. Returns: A list of dictionaries containing field 'name' and 'type' entries, suitable for use in a BigQuery Tables resource schema.
Below is the instruction that describes the task: ### Input: Infer a BigQuery table schema from a Pandas dataframe. Note that if you don't explicitly set the types of the columns in the dataframe, they may be of a type that forces coercion to STRING, so even though the fields in the dataframe themselves may be numeric, the type in the derived schema may not be. Hence it is prudent to make sure the Pandas dataframe is typed correctly. Args: dataframe: The DataFrame. default_type : The default big query type in case the type of the column does not exist in the schema. Defaults to 'STRING'. Returns: A list of dictionaries containing field 'name' and 'type' entries, suitable for use in a BigQuery Tables resource schema. ### Response: def _from_dataframe(dataframe, default_type='STRING'): """ Infer a BigQuery table schema from a Pandas dataframe. Note that if you don't explicitly set the types of the columns in the dataframe, they may be of a type that forces coercion to STRING, so even though the fields in the dataframe themselves may be numeric, the type in the derived schema may not be. Hence it is prudent to make sure the Pandas dataframe is typed correctly. Args: dataframe: The DataFrame. default_type : The default big query type in case the type of the column does not exist in the schema. Defaults to 'STRING'. Returns: A list of dictionaries containing field 'name' and 'type' entries, suitable for use in a BigQuery Tables resource schema. """ type_mapping = { 'i': 'INTEGER', 'b': 'BOOLEAN', 'f': 'FLOAT', 'O': 'STRING', 'S': 'STRING', 'U': 'STRING', 'M': 'TIMESTAMP' } fields = [] for column_name, dtype in dataframe.dtypes.iteritems(): fields.append({'name': column_name, 'type': type_mapping.get(dtype.kind, default_type)}) return fields
def fetch(self): """Submit the request to the ACS Zeropoints Calculator. This method will: * submit the request * parse the response * format the results into a table with the correct units Returns ------- tab : `astropy.table.QTable` or `None` If the request was successful, returns a table; otherwise, `None`. """ LOG.info('Checking inputs...') valid_inputs = self._check_inputs() if valid_inputs: LOG.info('Submitting request to {}'.format(self._url)) self._submit_request() if self._failed: return LOG.info('Parsing the response and formatting the results...') self._parse_and_format() return self.zpt_table LOG.error('Please fix the incorrect input(s)')
Submit the request to the ACS Zeropoints Calculator. This method will: * submit the request * parse the response * format the results into a table with the correct units Returns ------- tab : `astropy.table.QTable` or `None` If the request was successful, returns a table; otherwise, `None`.
Below is the instruction that describes the task: ### Input: Submit the request to the ACS Zeropoints Calculator. This method will: * submit the request * parse the response * format the results into a table with the correct units Returns ------- tab : `astropy.table.QTable` or `None` If the request was successful, returns a table; otherwise, `None`. ### Response: def fetch(self): """Submit the request to the ACS Zeropoints Calculator. This method will: * submit the request * parse the response * format the results into a table with the correct units Returns ------- tab : `astropy.table.QTable` or `None` If the request was successful, returns a table; otherwise, `None`. """ LOG.info('Checking inputs...') valid_inputs = self._check_inputs() if valid_inputs: LOG.info('Submitting request to {}'.format(self._url)) self._submit_request() if self._failed: return LOG.info('Parsing the response and formatting the results...') self._parse_and_format() return self.zpt_table LOG.error('Please fix the incorrect input(s)')
def com_google_fonts_check_mandatory_glyphs(ttFont): """Font contains .notdef as first glyph? The OpenType specification v1.8.2 recommends that the first glyph is the .notdef glyph without a codepoint assigned and with a drawing. https://docs.microsoft.com/en-us/typography/opentype/spec/recom#glyph-0-the-notdef-glyph Pre-v1.8, it was recommended that a font should also contain a .null, CR and space glyph. This might have been relevant for applications on MacOS 9. """ from fontbakery.utils import glyph_has_ink if ( ttFont.getGlyphOrder()[0] == ".notdef" and ".notdef" not in ttFont.getBestCmap().values() and glyph_has_ink(ttFont, ".notdef") ): yield PASS, ( "Font contains the .notdef glyph as the first glyph, it does " "not have a Unicode value assigned and contains a drawing." ) else: yield WARN, ( "Font should contain the .notdef glyph as the first glyph, " "it should not have a Unicode value assigned and should " "contain a drawing." )
Font contains .notdef as first glyph? The OpenType specification v1.8.2 recommends that the first glyph is the .notdef glyph without a codepoint assigned and with a drawing. https://docs.microsoft.com/en-us/typography/opentype/spec/recom#glyph-0-the-notdef-glyph Pre-v1.8, it was recommended that a font should also contain a .null, CR and space glyph. This might have been relevant for applications on MacOS 9.
Below is the instruction that describes the task: ### Input: Font contains .notdef as first glyph? The OpenType specification v1.8.2 recommends that the first glyph is the .notdef glyph without a codepoint assigned and with a drawing. https://docs.microsoft.com/en-us/typography/opentype/spec/recom#glyph-0-the-notdef-glyph Pre-v1.8, it was recommended that a font should also contain a .null, CR and space glyph. This might have been relevant for applications on MacOS 9. ### Response: def com_google_fonts_check_mandatory_glyphs(ttFont): """Font contains .notdef as first glyph? The OpenType specification v1.8.2 recommends that the first glyph is the .notdef glyph without a codepoint assigned and with a drawing. https://docs.microsoft.com/en-us/typography/opentype/spec/recom#glyph-0-the-notdef-glyph Pre-v1.8, it was recommended that a font should also contain a .null, CR and space glyph. This might have been relevant for applications on MacOS 9. """ from fontbakery.utils import glyph_has_ink if ( ttFont.getGlyphOrder()[0] == ".notdef" and ".notdef" not in ttFont.getBestCmap().values() and glyph_has_ink(ttFont, ".notdef") ): yield PASS, ( "Font contains the .notdef glyph as the first glyph, it does " "not have a Unicode value assigned and contains a drawing." ) else: yield WARN, ( "Font should contain the .notdef glyph as the first glyph, " "it should not have a Unicode value assigned and should " "contain a drawing." )
def save_params(self, path, new_thread=False): """ Save parameters to file. """ save_logger.info(path) param_variables = self.all_parameters params = [p.get_value().copy() for p in param_variables] if new_thread: thread = Thread(target=save_network_params, args=(params, path)) thread.start() else: save_network_params(params, path) self.train_logger.save(path)
Save parameters to file.
Below is the instruction that describes the task: ### Input: Save parameters to file. ### Response: def save_params(self, path, new_thread=False): """ Save parameters to file. """ save_logger.info(path) param_variables = self.all_parameters params = [p.get_value().copy() for p in param_variables] if new_thread: thread = Thread(target=save_network_params, args=(params, path)) thread.start() else: save_network_params(params, path) self.train_logger.save(path)
def Cylinder(pos=(0, 0, 0), r=1, height=1, axis=(0, 0, 1), c="teal", alpha=1, res=24): """ Build a cylinder of specified height and radius `r`, centered at `pos`. If `pos` is a list of 2 Points, e.g. `pos=[v1,v2]`, build a cylinder with base centered at `v1` and top at `v2`. |Cylinder| """ if utils.isSequence(pos[0]): # assume user is passing pos=[base, top] base = np.array(pos[0]) top = np.array(pos[1]) pos = (base + top) / 2 height = np.linalg.norm(top - base) axis = top - base axis = utils.versor(axis) else: axis = utils.versor(axis) base = pos - axis * height / 2 top = pos + axis * height / 2 cyl = vtk.vtkCylinderSource() cyl.SetResolution(res) cyl.SetRadius(r) cyl.SetHeight(height) cyl.Update() theta = np.arccos(axis[2]) phi = np.arctan2(axis[1], axis[0]) t = vtk.vtkTransform() t.PostMultiply() t.RotateX(90) # put it along Z t.RotateY(theta * 57.3) t.RotateZ(phi * 57.3) tf = vtk.vtkTransformPolyDataFilter() tf.SetInputData(cyl.GetOutput()) tf.SetTransform(t) tf.Update() pd = tf.GetOutput() actor = Actor(pd, c, alpha) actor.GetProperty().SetInterpolationToPhong() actor.SetPosition(pos) actor.base = base + pos actor.top = top + pos settings.collectable_actors.append(actor) return actor
Build a cylinder of specified height and radius `r`, centered at `pos`. If `pos` is a list of 2 Points, e.g. `pos=[v1,v2]`, build a cylinder with base centered at `v1` and top at `v2`. |Cylinder|
Below is the instruction that describes the task: ### Input: Build a cylinder of specified height and radius `r`, centered at `pos`. If `pos` is a list of 2 Points, e.g. `pos=[v1,v2]`, build a cylinder with base centered at `v1` and top at `v2`. |Cylinder| ### Response: def Cylinder(pos=(0, 0, 0), r=1, height=1, axis=(0, 0, 1), c="teal", alpha=1, res=24): """ Build a cylinder of specified height and radius `r`, centered at `pos`. If `pos` is a list of 2 Points, e.g. `pos=[v1,v2]`, build a cylinder with base centered at `v1` and top at `v2`. |Cylinder| """ if utils.isSequence(pos[0]): # assume user is passing pos=[base, top] base = np.array(pos[0]) top = np.array(pos[1]) pos = (base + top) / 2 height = np.linalg.norm(top - base) axis = top - base axis = utils.versor(axis) else: axis = utils.versor(axis) base = pos - axis * height / 2 top = pos + axis * height / 2 cyl = vtk.vtkCylinderSource() cyl.SetResolution(res) cyl.SetRadius(r) cyl.SetHeight(height) cyl.Update() theta = np.arccos(axis[2]) phi = np.arctan2(axis[1], axis[0]) t = vtk.vtkTransform() t.PostMultiply() t.RotateX(90) # put it along Z t.RotateY(theta * 57.3) t.RotateZ(phi * 57.3) tf = vtk.vtkTransformPolyDataFilter() tf.SetInputData(cyl.GetOutput()) tf.SetTransform(t) tf.Update() pd = tf.GetOutput() actor = Actor(pd, c, alpha) actor.GetProperty().SetInterpolationToPhong() actor.SetPosition(pos) actor.base = base + pos actor.top = top + pos settings.collectable_actors.append(actor) return actor
def get_triples(self): """ Get the triples in three lists. instance_triple: a triple representing an instance. E.g. instance(w, want-01) attribute triple: relation of attributes, e.g. polarity(w, - ) and relation triple, e.g. arg0 (w, b) """ instance_triple = [] relation_triple = [] attribute_triple = [] for i in range(len(self.nodes)): instance_triple.append(("instance", self.nodes[i], self.node_values[i])) # l[0] is relation name # l[1] is the other node this node has relation with for l in self.relations[i]: relation_triple.append((l[0], self.nodes[i], l[1])) # l[0] is the attribute name # l[1] is the attribute value for l in self.attributes[i]: attribute_triple.append((l[0], self.nodes[i], l[1])) return instance_triple, attribute_triple, relation_triple
Get the triples in three lists. instance_triple: a triple representing an instance. E.g. instance(w, want-01) attribute triple: relation of attributes, e.g. polarity(w, - ) and relation triple, e.g. arg0 (w, b)
Below is the instruction that describes the task: ### Input: Get the triples in three lists. instance_triple: a triple representing an instance. E.g. instance(w, want-01) attribute triple: relation of attributes, e.g. polarity(w, - ) and relation triple, e.g. arg0 (w, b) ### Response: def get_triples(self): """ Get the triples in three lists. instance_triple: a triple representing an instance. E.g. instance(w, want-01) attribute triple: relation of attributes, e.g. polarity(w, - ) and relation triple, e.g. arg0 (w, b) """ instance_triple = [] relation_triple = [] attribute_triple = [] for i in range(len(self.nodes)): instance_triple.append(("instance", self.nodes[i], self.node_values[i])) # l[0] is relation name # l[1] is the other node this node has relation with for l in self.relations[i]: relation_triple.append((l[0], self.nodes[i], l[1])) # l[0] is the attribute name # l[1] is the attribute value for l in self.attributes[i]: attribute_triple.append((l[0], self.nodes[i], l[1])) return instance_triple, attribute_triple, relation_triple
def tile_read(source, bounds, tilesize, **kwargs): """ Read data and mask. Attributes ---------- source : str or rasterio.io.DatasetReader input file path or rasterio.io.DatasetReader object bounds : list Mercator tile bounds (left, bottom, right, top) tilesize : int Output image size kwargs: dict, optional These will be passed to the _tile_read function. Returns ------- out : array, int returns pixel value. """ if isinstance(source, DatasetReader): return _tile_read(source, bounds, tilesize, **kwargs) else: with rasterio.open(source) as src_dst: return _tile_read(src_dst, bounds, tilesize, **kwargs)
Read data and mask. Attributes ---------- source : str or rasterio.io.DatasetReader input file path or rasterio.io.DatasetReader object bounds : list Mercator tile bounds (left, bottom, right, top) tilesize : int Output image size kwargs: dict, optional These will be passed to the _tile_read function. Returns ------- out : array, int returns pixel value.
Below is the instruction that describes the task: ### Input: Read data and mask. Attributes ---------- source : str or rasterio.io.DatasetReader input file path or rasterio.io.DatasetReader object bounds : list Mercator tile bounds (left, bottom, right, top) tilesize : int Output image size kwargs: dict, optional These will be passed to the _tile_read function. Returns ------- out : array, int returns pixel value. ### Response: def tile_read(source, bounds, tilesize, **kwargs): """ Read data and mask. Attributes ---------- source : str or rasterio.io.DatasetReader input file path or rasterio.io.DatasetReader object bounds : list Mercator tile bounds (left, bottom, right, top) tilesize : int Output image size kwargs: dict, optional These will be passed to the _tile_read function. Returns ------- out : array, int returns pixel value. """ if isinstance(source, DatasetReader): return _tile_read(source, bounds, tilesize, **kwargs) else: with rasterio.open(source) as src_dst: return _tile_read(src_dst, bounds, tilesize, **kwargs)
def sample(self, data, interval): '''Sample a patch from the data object Parameters ---------- data : dict A data dict as produced by pumpp.Pump.transform interval : slice The time interval to sample Returns ------- data_slice : dict `data` restricted to `interval`. ''' data_slice = dict() for key in data: if '_valid' in key: continue index = [slice(None)] * data[key].ndim # if we have multiple observations for this key, pick one index[0] = self.rng.randint(0, data[key].shape[0]) index[0] = slice(index[0], index[0] + 1) for tdim in self._time[key]: index[tdim] = interval data_slice[key] = data[key][tuple(index)] return data_slice
Sample a patch from the data object Parameters ---------- data : dict A data dict as produced by pumpp.Pump.transform interval : slice The time interval to sample Returns ------- data_slice : dict `data` restricted to `interval`.
Below is the instruction that describes the task: ### Input: Sample a patch from the data object Parameters ---------- data : dict A data dict as produced by pumpp.Pump.transform interval : slice The time interval to sample Returns ------- data_slice : dict `data` restricted to `interval`. ### Response: def sample(self, data, interval): '''Sample a patch from the data object Parameters ---------- data : dict A data dict as produced by pumpp.Pump.transform interval : slice The time interval to sample Returns ------- data_slice : dict `data` restricted to `interval`. ''' data_slice = dict() for key in data: if '_valid' in key: continue index = [slice(None)] * data[key].ndim # if we have multiple observations for this key, pick one index[0] = self.rng.randint(0, data[key].shape[0]) index[0] = slice(index[0], index[0] + 1) for tdim in self._time[key]: index[tdim] = interval data_slice[key] = data[key][tuple(index)] return data_slice
def visible(self): """ Read/write. |True| if axis is visible, |False| otherwise. """ delete = self._element.delete_ if delete is None: return False return False if delete.val else True
Read/write. |True| if axis is visible, |False| otherwise.
Below is the instruction that describes the task: ### Input: Read/write. |True| if axis is visible, |False| otherwise. ### Response: def visible(self): """ Read/write. |True| if axis is visible, |False| otherwise. """ delete = self._element.delete_ if delete is None: return False return False if delete.val else True
def __DepthFirstSearch(node, hashes): """ Internal helper method. Args: node (MerkleTreeNode): hashes (list): each item is a bytearray. """ if node.LeftChild is None: hashes.add(node.Hash) else: MerkleTree.__DepthFirstSearch(node.LeftChild, hashes) MerkleTree.__DepthFirstSearch(node.RightChild, hashes)
Internal helper method. Args: node (MerkleTreeNode): hashes (list): each item is a bytearray.
Below is the instruction that describes the task: ### Input: Internal helper method. Args: node (MerkleTreeNode): hashes (list): each item is a bytearray. ### Response: def __DepthFirstSearch(node, hashes): """ Internal helper method. Args: node (MerkleTreeNode): hashes (list): each item is a bytearray. """ if node.LeftChild is None: hashes.add(node.Hash) else: MerkleTree.__DepthFirstSearch(node.LeftChild, hashes) MerkleTree.__DepthFirstSearch(node.RightChild, hashes)
def is_starred(self): """ :calls: `GET /gists/:id/star <http://developer.github.com/v3/gists>`_ :rtype: bool """ status, headers, data = self._requester.requestJson( "GET", self.url + "/star" ) return status == 204
:calls: `GET /gists/:id/star <http://developer.github.com/v3/gists>`_ :rtype: bool
Below is the instruction that describes the task: ### Input: :calls: `GET /gists/:id/star <http://developer.github.com/v3/gists>`_ :rtype: bool ### Response: def is_starred(self): """ :calls: `GET /gists/:id/star <http://developer.github.com/v3/gists>`_ :rtype: bool """ status, headers, data = self._requester.requestJson( "GET", self.url + "/star" ) return status == 204
def search(self, search_phrase, limit=None): """ Finds datasets by search phrase. Args: search_phrase (str or unicode): limit (int, optional): how many results to return. None means without limit. Returns: list of DatasetSearchResult instances. """ query, query_params = self._make_query_from_terms(search_phrase, limit=limit) self._parsed_query = (str(query), query_params) assert isinstance(query, TextClause) datasets = {} def make_result(vid=None, b_score=0, p_score=0): res = DatasetSearchResult() res.b_score = b_score res.p_score = p_score res.partitions = set() res.vid = vid return res if query_params: results = self.execute(query, **query_params) for result in results: vid, dataset_score = result datasets[vid] = make_result(vid, b_score=dataset_score) logger.debug('Extending datasets with partitions.') for partition in self.backend.partition_index.search(search_phrase): if partition.dataset_vid not in datasets: datasets[partition.dataset_vid] = make_result(partition.dataset_vid) datasets[partition.dataset_vid].p_score += partition.score datasets[partition.dataset_vid].partitions.add(partition) return list(datasets.values())
Finds datasets by search phrase. Args: search_phrase (str or unicode): limit (int, optional): how many results to return. None means without limit. Returns: list of DatasetSearchResult instances.
Below is the instruction that describes the task: ### Input: Finds datasets by search phrase. Args: search_phrase (str or unicode): limit (int, optional): how many results to return. None means without limit. Returns: list of DatasetSearchResult instances. ### Response: def search(self, search_phrase, limit=None): """ Finds datasets by search phrase. Args: search_phrase (str or unicode): limit (int, optional): how many results to return. None means without limit. Returns: list of DatasetSearchResult instances. """ query, query_params = self._make_query_from_terms(search_phrase, limit=limit) self._parsed_query = (str(query), query_params) assert isinstance(query, TextClause) datasets = {} def make_result(vid=None, b_score=0, p_score=0): res = DatasetSearchResult() res.b_score = b_score res.p_score = p_score res.partitions = set() res.vid = vid return res if query_params: results = self.execute(query, **query_params) for result in results: vid, dataset_score = result datasets[vid] = make_result(vid, b_score=dataset_score) logger.debug('Extending datasets with partitions.') for partition in self.backend.partition_index.search(search_phrase): if partition.dataset_vid not in datasets: datasets[partition.dataset_vid] = make_result(partition.dataset_vid) datasets[partition.dataset_vid].p_score += partition.score datasets[partition.dataset_vid].partitions.add(partition) return list(datasets.values())
def _signal_to_frame_nonsilent(y, frame_length=2048, hop_length=512, top_db=60, ref=np.max): '''Frame-wise non-silent indicator for audio input. This is a helper function for `trim` and `split`. Parameters ---------- y : np.ndarray, shape=(n,) or (2,n) Audio signal, mono or stereo frame_length : int > 0 The number of samples per frame hop_length : int > 0 The number of samples between frames top_db : number > 0 The threshold (in decibels) below reference to consider as silence ref : callable or float The reference power Returns ------- non_silent : np.ndarray, shape=(m,), dtype=bool Indicator of non-silent frames ''' # Convert to mono y_mono = core.to_mono(y) # Compute the MSE for the signal mse = feature.rms(y=y_mono, frame_length=frame_length, hop_length=hop_length)**2 return (core.power_to_db(mse.squeeze(), ref=ref, top_db=None) > - top_db)
Frame-wise non-silent indicator for audio input. This is a helper function for `trim` and `split`. Parameters ---------- y : np.ndarray, shape=(n,) or (2,n) Audio signal, mono or stereo frame_length : int > 0 The number of samples per frame hop_length : int > 0 The number of samples between frames top_db : number > 0 The threshold (in decibels) below reference to consider as silence ref : callable or float The reference power Returns ------- non_silent : np.ndarray, shape=(m,), dtype=bool Indicator of non-silent frames
Below is the instruction that describes the task: ### Input: Frame-wise non-silent indicator for audio input. This is a helper function for `trim` and `split`. Parameters ---------- y : np.ndarray, shape=(n,) or (2,n) Audio signal, mono or stereo frame_length : int > 0 The number of samples per frame hop_length : int > 0 The number of samples between frames top_db : number > 0 The threshold (in decibels) below reference to consider as silence ref : callable or float The reference power Returns ------- non_silent : np.ndarray, shape=(m,), dtype=bool Indicator of non-silent frames ### Response: def _signal_to_frame_nonsilent(y, frame_length=2048, hop_length=512, top_db=60, ref=np.max): '''Frame-wise non-silent indicator for audio input. This is a helper function for `trim` and `split`. Parameters ---------- y : np.ndarray, shape=(n,) or (2,n) Audio signal, mono or stereo frame_length : int > 0 The number of samples per frame hop_length : int > 0 The number of samples between frames top_db : number > 0 The threshold (in decibels) below reference to consider as silence ref : callable or float The reference power Returns ------- non_silent : np.ndarray, shape=(m,), dtype=bool Indicator of non-silent frames ''' # Convert to mono y_mono = core.to_mono(y) # Compute the MSE for the signal mse = feature.rms(y=y_mono, frame_length=frame_length, hop_length=hop_length)**2 return (core.power_to_db(mse.squeeze(), ref=ref, top_db=None) > - top_db)
def build_scandata_table(self): """Build an `astropy.table.Table` object from these data. """ shape = self._norm_vals.shape col_norm = Column(name="norm", dtype=float) col_normv = Column(name="norm_scan", dtype=float, shape=shape) col_dll = Column(name="dloglike_scan", dtype=float, shape=shape) tab = Table(data=[col_norm, col_normv, col_dll]) tab.add_row({"norm": 1., "norm_scan": self._norm_vals, "dloglike_scan": -1 * self._nll_vals}) return tab
Build an `astropy.table.Table` object from these data.
Below is the instruction that describes the task: ### Input: Build an `astropy.table.Table` object from these data. ### Response: def build_scandata_table(self): """Build an `astropy.table.Table` object from these data. """ shape = self._norm_vals.shape col_norm = Column(name="norm", dtype=float) col_normv = Column(name="norm_scan", dtype=float, shape=shape) col_dll = Column(name="dloglike_scan", dtype=float, shape=shape) tab = Table(data=[col_norm, col_normv, col_dll]) tab.add_row({"norm": 1., "norm_scan": self._norm_vals, "dloglike_scan": -1 * self._nll_vals}) return tab
def _friends_leaveoneout_radius(points, ftype): """Internal method used to compute the radius (half-side-length) for each ball (cube) used in :class:`RadFriends` (:class:`SupFriends`) using leave-one-out (LOO) cross-validation.""" # Construct KDTree to enable quick nearest-neighbor lookup for # our resampled objects. kdtree = spatial.KDTree(points) if ftype == 'balls': # Compute radius to two nearest neighbors (self + neighbor). dists, ids = kdtree.query(points, k=2, eps=0, p=2) elif ftype == 'cubes': # Compute half-side-length to two nearest neighbors (self + neighbor). dists, ids = kdtree.query(points, k=2, eps=0, p=np.inf) dist = dists[:, 1] # distances to LOO nearest neighbor return dist
Internal method used to compute the radius (half-side-length) for each ball (cube) used in :class:`RadFriends` (:class:`SupFriends`) using leave-one-out (LOO) cross-validation.
Below is the instruction that describes the task: ### Input: Internal method used to compute the radius (half-side-length) for each ball (cube) used in :class:`RadFriends` (:class:`SupFriends`) using leave-one-out (LOO) cross-validation. ### Response: def _friends_leaveoneout_radius(points, ftype): """Internal method used to compute the radius (half-side-length) for each ball (cube) used in :class:`RadFriends` (:class:`SupFriends`) using leave-one-out (LOO) cross-validation.""" # Construct KDTree to enable quick nearest-neighbor lookup for # our resampled objects. kdtree = spatial.KDTree(points) if ftype == 'balls': # Compute radius to two nearest neighbors (self + neighbor). dists, ids = kdtree.query(points, k=2, eps=0, p=2) elif ftype == 'cubes': # Compute half-side-length to two nearest neighbors (self + neighbor). dists, ids = kdtree.query(points, k=2, eps=0, p=np.inf) dist = dists[:, 1] # distances to LOO nearest neighbor return dist
def restore(self, state): """Restore this state from the output of a previous call to dump(). Only those properties in this object and listed in state will be updated. Other properties will not be modified and state may contain keys that do not correspond with properties in this object. Args: state (dict): A serialized representation of this object. """ own_properties = set(self.get_properties()) state_properties = set(state) to_restore = own_properties.intersection(state_properties) for name in to_restore: value = state.get(name) if name in self._complex_properties: value = self._complex_properties[name][1](value) setattr(self, name, value)
Restore this state from the output of a previous call to dump(). Only those properties in this object and listed in state will be updated. Other properties will not be modified and state may contain keys that do not correspond with properties in this object. Args: state (dict): A serialized representation of this object.
Below is the instruction that describes the task: ### Input: Restore this state from the output of a previous call to dump(). Only those properties in this object and listed in state will be updated. Other properties will not be modified and state may contain keys that do not correspond with properties in this object. Args: state (dict): A serialized representation of this object. ### Response: def restore(self, state): """Restore this state from the output of a previous call to dump(). Only those properties in this object and listed in state will be updated. Other properties will not be modified and state may contain keys that do not correspond with properties in this object. Args: state (dict): A serialized representation of this object. """ own_properties = set(self.get_properties()) state_properties = set(state) to_restore = own_properties.intersection(state_properties) for name in to_restore: value = state.get(name) if name in self._complex_properties: value = self._complex_properties[name][1](value) setattr(self, name, value)
def total_flux(self, kwargs_list, norm=False, k=None): """ Computes the total flux of each individual light profile. This allows to estimate the total flux as well as lenstronomy amp to magnitude conversions. Not all models are supported :param kwargs_list: list of keyword arguments corresponding to the light profiles. The 'amp' parameter can be missing. :param norm: bool, if True, computes the flux for amp=1 :param k: int, if set, only evaluates the specific light model :return: list of (total) flux values attributed to each profile """ norm_flux_list = [] for i, model in enumerate(self.profile_type_list): if k is None or k == i: if model in ['SERSIC', 'SERSIC_ELLIPSE', 'INTERPOL', 'GAUSSIAN', 'GAUSSIAN_ELLIPSE', 'MULTI_GAUSSIAN', 'MULTI_GAUSSIAN_ELLIPSE']: kwargs_new = kwargs_list[i].copy() if norm is True: if model in ['MULTI_GAUSSIAN', 'MULTI_GAUSSIAN_ELLIPSE']: new = {'amp': np.array(kwargs_new['amp'])/kwargs_new['amp'][0]} else: new = {'amp': 1} kwargs_new.update(new) norm_flux = self.func_list[i].total_flux(**kwargs_new) norm_flux_list.append(norm_flux) else: raise ValueError("profile %s does not support flux normlization." % model) # TODO implement total flux for e.g. 'HERNQUIST', 'HERNQUIST_ELLIPSE', 'PJAFFE', 'PJAFFE_ELLIPSE', # 'GAUSSIAN', 'GAUSSIAN_ELLIPSE', 'POWER_LAW', 'NIE', 'CHAMELEON', 'DOUBLE_CHAMELEON', 'UNIFORM' return norm_flux_list
Computes the total flux of each individual light profile. This allows to estimate the total flux as well as lenstronomy amp to magnitude conversions. Not all models are supported :param kwargs_list: list of keyword arguments corresponding to the light profiles. The 'amp' parameter can be missing. :param norm: bool, if True, computes the flux for amp=1 :param k: int, if set, only evaluates the specific light model :return: list of (total) flux values attributed to each profile
Below is the instruction that describes the task: ### Input: Computes the total flux of each individual light profile. This allows to estimate the total flux as well as lenstronomy amp to magnitude conversions. Not all models are supported :param kwargs_list: list of keyword arguments corresponding to the light profiles. The 'amp' parameter can be missing. :param norm: bool, if True, computes the flux for amp=1 :param k: int, if set, only evaluates the specific light model :return: list of (total) flux values attributed to each profile ### Response: def total_flux(self, kwargs_list, norm=False, k=None): """ Computes the total flux of each individual light profile. This allows to estimate the total flux as well as lenstronomy amp to magnitude conversions. Not all models are supported :param kwargs_list: list of keyword arguments corresponding to the light profiles. The 'amp' parameter can be missing. :param norm: bool, if True, computes the flux for amp=1 :param k: int, if set, only evaluates the specific light model :return: list of (total) flux values attributed to each profile """ norm_flux_list = [] for i, model in enumerate(self.profile_type_list): if k is None or k == i: if model in ['SERSIC', 'SERSIC_ELLIPSE', 'INTERPOL', 'GAUSSIAN', 'GAUSSIAN_ELLIPSE', 'MULTI_GAUSSIAN', 'MULTI_GAUSSIAN_ELLIPSE']: kwargs_new = kwargs_list[i].copy() if norm is True: if model in ['MULTI_GAUSSIAN', 'MULTI_GAUSSIAN_ELLIPSE']: new = {'amp': np.array(kwargs_new['amp'])/kwargs_new['amp'][0]} else: new = {'amp': 1} kwargs_new.update(new) norm_flux = self.func_list[i].total_flux(**kwargs_new) norm_flux_list.append(norm_flux) else: raise ValueError("profile %s does not support flux normlization." % model) # TODO implement total flux for e.g. 'HERNQUIST', 'HERNQUIST_ELLIPSE', 'PJAFFE', 'PJAFFE_ELLIPSE', # 'GAUSSIAN', 'GAUSSIAN_ELLIPSE', 'POWER_LAW', 'NIE', 'CHAMELEON', 'DOUBLE_CHAMELEON', 'UNIFORM' return norm_flux_list
def _fix_next_url(next_url): """Remove max=null parameter from URL. Patch for Webex Teams Defect: 'next' URL returned in the Link headers of the responses contain an errant 'max=null' parameter, which causes the next request (to this URL) to fail if the URL is requested as-is. This patch parses the next_url to remove the max=null parameter. Args: next_url(basestring): The 'next' URL to be parsed and cleaned. Returns: basestring: The clean URL to be used for the 'next' request. Raises: AssertionError: If the parameter types are incorrect. ValueError: If 'next_url' does not contain a valid API endpoint URL (scheme, netloc and path). """ next_url = str(next_url) parsed_url = urllib.parse.urlparse(next_url) if not parsed_url.scheme or not parsed_url.netloc or not parsed_url.path: raise ValueError( "'next_url' must be a valid API endpoint URL, minimally " "containing a scheme, netloc and path." ) if parsed_url.query: query_list = parsed_url.query.split('&') if 'max=null' in query_list: query_list.remove('max=null') warnings.warn("`max=null` still present in next-URL returned " "from Webex Teams", RuntimeWarning) new_query = '&'.join(query_list) parsed_url = list(parsed_url) parsed_url[4] = new_query return urllib.parse.urlunparse(parsed_url)
Remove max=null parameter from URL. Patch for Webex Teams Defect: 'next' URL returned in the Link headers of the responses contain an errant 'max=null' parameter, which causes the next request (to this URL) to fail if the URL is requested as-is. This patch parses the next_url to remove the max=null parameter. Args: next_url(basestring): The 'next' URL to be parsed and cleaned. Returns: basestring: The clean URL to be used for the 'next' request. Raises: AssertionError: If the parameter types are incorrect. ValueError: If 'next_url' does not contain a valid API endpoint URL (scheme, netloc and path).
Below is the instruction that describes the task: ### Input: Remove max=null parameter from URL. Patch for Webex Teams Defect: 'next' URL returned in the Link headers of the responses contain an errant 'max=null' parameter, which causes the next request (to this URL) to fail if the URL is requested as-is. This patch parses the next_url to remove the max=null parameter. Args: next_url(basestring): The 'next' URL to be parsed and cleaned. Returns: basestring: The clean URL to be used for the 'next' request. Raises: AssertionError: If the parameter types are incorrect. ValueError: If 'next_url' does not contain a valid API endpoint URL (scheme, netloc and path). ### Response: def _fix_next_url(next_url): """Remove max=null parameter from URL. Patch for Webex Teams Defect: 'next' URL returned in the Link headers of the responses contain an errant 'max=null' parameter, which causes the next request (to this URL) to fail if the URL is requested as-is. This patch parses the next_url to remove the max=null parameter. Args: next_url(basestring): The 'next' URL to be parsed and cleaned. Returns: basestring: The clean URL to be used for the 'next' request. Raises: AssertionError: If the parameter types are incorrect. ValueError: If 'next_url' does not contain a valid API endpoint URL (scheme, netloc and path). """ next_url = str(next_url) parsed_url = urllib.parse.urlparse(next_url) if not parsed_url.scheme or not parsed_url.netloc or not parsed_url.path: raise ValueError( "'next_url' must be a valid API endpoint URL, minimally " "containing a scheme, netloc and path." ) if parsed_url.query: query_list = parsed_url.query.split('&') if 'max=null' in query_list: query_list.remove('max=null') warnings.warn("`max=null` still present in next-URL returned " "from Webex Teams", RuntimeWarning) new_query = '&'.join(query_list) parsed_url = list(parsed_url) parsed_url[4] = new_query return urllib.parse.urlunparse(parsed_url)
def write_translated(self, name, value, event=None): """Send a translated write request to the VI. """ data = {'name': name} if value is not None: data['value'] = self._massage_write_value(value) if event is not None: data['event'] = self._massage_write_value(event); message = self.streamer.serialize_for_stream(data) bytes_written = self.write_bytes(message) assert bytes_written == len(message) return bytes_written
Send a translated write request to the VI.
Below is the instruction that describes the task: ### Input: Send a translated write request to the VI. ### Response: def write_translated(self, name, value, event=None): """Send a translated write request to the VI. """ data = {'name': name} if value is not None: data['value'] = self._massage_write_value(value) if event is not None: data['event'] = self._massage_write_value(event); message = self.streamer.serialize_for_stream(data) bytes_written = self.write_bytes(message) assert bytes_written == len(message) return bytes_written
def warning(self, *args): """Log a warning. Used for non-fatal problems.""" if _canShortcutLogging(self.logCategory, WARN): return warningObject(self.logObjectName(), self.logCategory, *self.logFunction(*args))
Log a warning. Used for non-fatal problems.
Below is the instruction that describes the task: ### Input: Log a warning. Used for non-fatal problems. ### Response: def warning(self, *args): """Log a warning. Used for non-fatal problems.""" if _canShortcutLogging(self.logCategory, WARN): return warningObject(self.logObjectName(), self.logCategory, *self.logFunction(*args))
def run_steps(spec, language="en"): """ Can be called by the user from within a step definition to execute other steps. """ # The way this works is a little exotic, but I couldn't think of a better way to work around # the fact that this has to be a global function and therefore cannot know about which step # runner to use (other than making step runner global) # Find the step runner that is currently running and use it to run the given steps fr = inspect.currentframe() while fr: if "self" in fr.f_locals: f_self = fr.f_locals['self'] if isinstance(f_self, StepsRunner): return f_self.run_steps_from_string(spec, language) fr = fr.f_back
Can be called by the user from within a step definition to execute other steps.
Below is the instruction that describes the task: ### Input: Can be called by the user from within a step definition to execute other steps. ### Response: def run_steps(spec, language="en"): """ Can be called by the user from within a step definition to execute other steps. """ # The way this works is a little exotic, but I couldn't think of a better way to work around # the fact that this has to be a global function and therefore cannot know about which step # runner to use (other than making step runner global) # Find the step runner that is currently running and use it to run the given steps fr = inspect.currentframe() while fr: if "self" in fr.f_locals: f_self = fr.f_locals['self'] if isinstance(f_self, StepsRunner): return f_self.run_steps_from_string(spec, language) fr = fr.f_back
def use_propsfs(self, folder=None, front=False): """ Args: folder (str | unicode | None): Optional custom mount folder (defaults to /mnt/props on Linux, and /Volumes/props on OSX) front (bool): If True, add provider to front of list """ if folder is None: folder = "/%s/props" % ("Volumes" if platform.system().lower() == "darwin" else "mnt") self.add(PropsfsProvider(folder), front=front)
Args: folder (str | unicode | None): Optional custom mount folder (defaults to /mnt/props on Linux, and /Volumes/props on OSX) front (bool): If True, add provider to front of list
Below is the instruction that describes the task: ### Input: Args: folder (str | unicode | None): Optional custom mount folder (defaults to /mnt/props on Linux, and /Volumes/props on OSX) front (bool): If True, add provider to front of list ### Response: def use_propsfs(self, folder=None, front=False): """ Args: folder (str | unicode | None): Optional custom mount folder (defaults to /mnt/props on Linux, and /Volumes/props on OSX) front (bool): If True, add provider to front of list """ if folder is None: folder = "/%s/props" % ("Volumes" if platform.system().lower() == "darwin" else "mnt") self.add(PropsfsProvider(folder), front=front)
def clean_alternate_location_indicators(lines): ''' Keeps only the first atom, if alternated location identifiers are being used Removes alternate location ID charactor ''' new_lines = [] previously_seen_alt_atoms = set() for line in lines: if line.startswith('ATOM'): alt_loc_id = line[16] if alt_loc_id != ' ': atom_name = line[12:16].strip() res_name = line[17:20].strip() chain = line[21] resnum = long( line[22:26].strip() ) loc_tup = (atom_name, res_name, chain, resnum) if loc_tup in previously_seen_alt_atoms: # Continue main for loop continue else: previously_seen_alt_atoms.add( loc_tup ) line = line[:16] + ' ' + line[17:] new_lines.append(line) return new_lines
Keeps only the first atom, if alternated location identifiers are being used Removes alternate location ID charactor
Below is the instruction that describes the task: ### Input: Keeps only the first atom, if alternated location identifiers are being used Removes alternate location ID charactor ### Response: def clean_alternate_location_indicators(lines): ''' Keeps only the first atom, if alternated location identifiers are being used Removes alternate location ID charactor ''' new_lines = [] previously_seen_alt_atoms = set() for line in lines: if line.startswith('ATOM'): alt_loc_id = line[16] if alt_loc_id != ' ': atom_name = line[12:16].strip() res_name = line[17:20].strip() chain = line[21] resnum = long( line[22:26].strip() ) loc_tup = (atom_name, res_name, chain, resnum) if loc_tup in previously_seen_alt_atoms: # Continue main for loop continue else: previously_seen_alt_atoms.add( loc_tup ) line = line[:16] + ' ' + line[17:] new_lines.append(line) return new_lines
def _run_macro(self, statement: Statement) -> bool: """ Resolve a macro and run the resulting string :param statement: the parsed statement from the command line :return: a flag indicating whether the interpretation of commands should stop """ from itertools import islice if statement.command not in self.macros.keys(): raise KeyError('{} is not a macro'.format(statement.command)) macro = self.macros[statement.command] # Make sure enough arguments were passed in if len(statement.arg_list) < macro.minimum_arg_count: self.perror("The macro '{}' expects at least {} argument(s)".format(statement.command, macro.minimum_arg_count), traceback_war=False) return False # Resolve the arguments in reverse and read their values from statement.argv since those # are unquoted. Macro args should have been quoted when the macro was created. resolved = macro.value reverse_arg_list = sorted(macro.arg_list, key=lambda ma: ma.start_index, reverse=True) for arg in reverse_arg_list: if arg.is_escaped: to_replace = '{{' + arg.number_str + '}}' replacement = '{' + arg.number_str + '}' else: to_replace = '{' + arg.number_str + '}' replacement = statement.argv[int(arg.number_str)] parts = resolved.rsplit(to_replace, maxsplit=1) resolved = parts[0] + replacement + parts[1] # Append extra arguments and use statement.arg_list since these arguments need their quotes preserved for arg in islice(statement.arg_list, macro.minimum_arg_count, None): resolved += ' ' + arg # Run the resolved command return self.onecmd_plus_hooks(resolved)
Resolve a macro and run the resulting string :param statement: the parsed statement from the command line :return: a flag indicating whether the interpretation of commands should stop
Below is the instruction that describes the task: ### Input: Resolve a macro and run the resulting string :param statement: the parsed statement from the command line :return: a flag indicating whether the interpretation of commands should stop ### Response: def _run_macro(self, statement: Statement) -> bool: """ Resolve a macro and run the resulting string :param statement: the parsed statement from the command line :return: a flag indicating whether the interpretation of commands should stop """ from itertools import islice if statement.command not in self.macros.keys(): raise KeyError('{} is not a macro'.format(statement.command)) macro = self.macros[statement.command] # Make sure enough arguments were passed in if len(statement.arg_list) < macro.minimum_arg_count: self.perror("The macro '{}' expects at least {} argument(s)".format(statement.command, macro.minimum_arg_count), traceback_war=False) return False # Resolve the arguments in reverse and read their values from statement.argv since those # are unquoted. Macro args should have been quoted when the macro was created. resolved = macro.value reverse_arg_list = sorted(macro.arg_list, key=lambda ma: ma.start_index, reverse=True) for arg in reverse_arg_list: if arg.is_escaped: to_replace = '{{' + arg.number_str + '}}' replacement = '{' + arg.number_str + '}' else: to_replace = '{' + arg.number_str + '}' replacement = statement.argv[int(arg.number_str)] parts = resolved.rsplit(to_replace, maxsplit=1) resolved = parts[0] + replacement + parts[1] # Append extra arguments and use statement.arg_list since these arguments need their quotes preserved for arg in islice(statement.arg_list, macro.minimum_arg_count, None): resolved += ' ' + arg # Run the resolved command return self.onecmd_plus_hooks(resolved)
def draw_capitan_stroke_images(self, symbols: List[CapitanSymbol], destination_directory: str, stroke_thicknesses: List[int]) -> None: """ Creates a visual representation of the Capitan strokes by drawing lines that connect the points from each stroke of each symbol. :param symbols: The list of parsed Capitan-symbols :param destination_directory: The directory, in which the symbols should be generated into. One sub-folder per symbol category will be generated automatically :param stroke_thicknesses: The thickness of the pen, used for drawing the lines in pixels. If multiple are specified, multiple images will be generated that have a different suffix, e.g. 1-16-3.png for the 3-px version and 1-16-2.png for the 2-px version of the image 1-16 """ total_number_of_symbols = len(symbols) * len(stroke_thicknesses) output = "Generating {0} images with {1} symbols in {2} different stroke thicknesses ({3})".format( total_number_of_symbols, len(symbols), len(stroke_thicknesses), stroke_thicknesses) print(output) print("In directory {0}".format(os.path.abspath(destination_directory)), flush=True) progress_bar = tqdm(total=total_number_of_symbols, mininterval=0.25, desc="Rendering strokes") capitan_file_name_counter = 0 for symbol in symbols: capitan_file_name_counter += 1 target_directory = os.path.join(destination_directory, symbol.symbol_class) os.makedirs(target_directory, exist_ok=True) raw_file_name_without_extension = "capitan-{0}-{1}-stroke".format(symbol.symbol_class, capitan_file_name_counter) for stroke_thickness in stroke_thicknesses: export_path = ExportPath(destination_directory, symbol.symbol_class, raw_file_name_without_extension, 'png', stroke_thickness) symbol.draw_capitan_stroke_onto_canvas(export_path, stroke_thickness, 0) progress_bar.update(1) progress_bar.close()
Creates a visual representation of the Capitan strokes by drawing lines that connect the points from each stroke of each symbol. :param symbols: The list of parsed Capitan-symbols :param destination_directory: The directory, in which the symbols should be generated into. One sub-folder per symbol category will be generated automatically :param stroke_thicknesses: The thickness of the pen, used for drawing the lines in pixels. If multiple are specified, multiple images will be generated that have a different suffix, e.g. 1-16-3.png for the 3-px version and 1-16-2.png for the 2-px version of the image 1-16
Below is the instruction that describes the task: ### Input: Creates a visual representation of the Capitan strokes by drawing lines that connect the points from each stroke of each symbol. :param symbols: The list of parsed Capitan-symbols :param destination_directory: The directory, in which the symbols should be generated into. One sub-folder per symbol category will be generated automatically :param stroke_thicknesses: The thickness of the pen, used for drawing the lines in pixels. If multiple are specified, multiple images will be generated that have a different suffix, e.g. 1-16-3.png for the 3-px version and 1-16-2.png for the 2-px version of the image 1-16 ### Response: def draw_capitan_stroke_images(self, symbols: List[CapitanSymbol], destination_directory: str, stroke_thicknesses: List[int]) -> None: """ Creates a visual representation of the Capitan strokes by drawing lines that connect the points from each stroke of each symbol. :param symbols: The list of parsed Capitan-symbols :param destination_directory: The directory, in which the symbols should be generated into. One sub-folder per symbol category will be generated automatically :param stroke_thicknesses: The thickness of the pen, used for drawing the lines in pixels. If multiple are specified, multiple images will be generated that have a different suffix, e.g. 1-16-3.png for the 3-px version and 1-16-2.png for the 2-px version of the image 1-16 """ total_number_of_symbols = len(symbols) * len(stroke_thicknesses) output = "Generating {0} images with {1} symbols in {2} different stroke thicknesses ({3})".format( total_number_of_symbols, len(symbols), len(stroke_thicknesses), stroke_thicknesses) print(output) print("In directory {0}".format(os.path.abspath(destination_directory)), flush=True) progress_bar = tqdm(total=total_number_of_symbols, mininterval=0.25, desc="Rendering strokes") capitan_file_name_counter = 0 for symbol in symbols: capitan_file_name_counter += 1 target_directory = os.path.join(destination_directory, symbol.symbol_class) os.makedirs(target_directory, exist_ok=True) raw_file_name_without_extension = "capitan-{0}-{1}-stroke".format(symbol.symbol_class, capitan_file_name_counter) for stroke_thickness in stroke_thicknesses: export_path = ExportPath(destination_directory, symbol.symbol_class, raw_file_name_without_extension, 'png', stroke_thickness) symbol.draw_capitan_stroke_onto_canvas(export_path, stroke_thickness, 0) progress_bar.update(1) progress_bar.close()
def plot_gaussian_projection(mu, lmbda, vecs, **kwargs):
    '''
    Plots a ndim gaussian projected onto 2D vecs, where vecs is a matrix
    whose two columns are the subset of some orthonormal basis (e.g. from
    PCA on samples).
    '''
    return plot_gaussian_2D(project_data(mu, vecs), project_ellipsoid(lmbda, vecs), **kwargs)
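The helpers `project_data` and `project_ellipsoid` are not shown in this row. A plausible reading, assuming they compute the standard linear projection of a mean and covariance onto the subspace spanned by the two columns of `vecs` (i.e. V^T mu and V^T Sigma V), sketched here in pure Python with nested lists:

```python
def project_data(mu, vecs):
    # mu: length-n vector; vecs: n x 2 matrix (rows) -> 2D projection V^T mu
    n = len(mu)
    return [sum(vecs[i][k] * mu[i] for i in range(n)) for k in range(2)]

def project_ellipsoid(lmbda, vecs):
    # lmbda: n x n covariance -> 2 x 2 projected covariance V^T Sigma V
    n = len(lmbda)
    sigma_v = [[sum(lmbda[i][j] * vecs[j][k] for j in range(n)) for k in range(2)]
               for i in range(n)]
    return [[sum(vecs[i][r] * sigma_v[i][c] for i in range(n)) for c in range(2)]
            for r in range(2)]

# Projecting onto the first two standard basis vectors just drops dimensions:
mu2 = project_data([1.0, 2.0, 3.0], [[1, 0], [0, 1], [0, 0]])  # [1.0, 2.0]
```

For orthonormal `vecs`, V^T Sigma V is exactly the covariance of the projected samples, which is why the 2D ellipse drawn by `plot_gaussian_2D` faithfully represents the marginal of the n-dim Gaussian on that plane.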
def write_scrolling(self, text, attr=None):
    u'''write text at current cursor position while watching for scrolling.

    If the window scrolls because you are at the bottom of the screen
    buffer, all positions that you are storing will be shifted by the
    scroll amount. For example, I remember the cursor position of the
    prompt so that I can redraw the line but if the window scrolls, the
    remembered position is off.

    This variant of write tries to keep track of the cursor position so
    that it will know when the screen buffer is scrolled. It returns the
    number of lines that the buffer scrolled.
    '''
    x, y = self.pos()
    w, h = self.size()
    scroll = 0  # the result
    # split the string into ordinary characters and funny characters
    chunks = self.motion_char_re.split(text)
    for chunk in chunks:
        n = self.write_color(chunk, attr)
        if len(chunk) == 1:  # the funny characters will be alone
            if chunk[0] == u'\n':  # newline
                x = 0
                y += 1
            elif chunk[0] == u'\r':  # carriage return
                x = 0
            elif chunk[0] == u'\t':  # tab
                x = 8 * (int(x / 8) + 1)
                if x > w:  # newline
                    x -= w
                    y += 1
            elif chunk[0] == u'\007':  # bell
                pass
            elif chunk[0] == u'\010':
                x -= 1
                if x < 0:
                    y -= 1  # backed up 1 line
            else:  # ordinary character
                x += 1
            if x == w:  # wrap
                x = 0
                y += 1
            if y == h:  # scroll
                scroll += 1
                y = h - 1
        else:  # chunk of ordinary characters
            x += n
            l = int(x / w)  # lines we advanced
            x = x % w  # new x value
            y += l
            if y >= h:  # scroll
                scroll += y - h + 1
                y = h - 1
    return scroll
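The cursor bookkeeping above can be replayed as a pure function, independent of the console class it lives in. This sketch is not part of the original code; it walks a plain string character by character and applies the same tab-stop, wrap, and scroll arithmetic, returning the final position and the number of lines scrolled:

```python
def advance_cursor(x, y, w, h, text, tab_width=8):
    # Replays the per-character cursor arithmetic from write_scrolling:
    # returns (x, y, lines_scrolled). The actual writing is omitted.
    scroll = 0
    for ch in text:
        if ch == '\n':          # newline
            x, y = 0, y + 1
        elif ch == '\r':        # carriage return
            x = 0
        elif ch == '\t':        # advance to the next 8-column tab stop
            x = tab_width * (x // tab_width + 1)
            if x > w:
                x -= w
                y += 1
        elif ch == '\010':      # backspace
            x -= 1
            if x < 0:
                y -= 1
        else:                   # ordinary character
            x += 1
        if x == w:              # wrap at the right edge
            x = 0
            y += 1
        if y == h:              # bottom line pushed the buffer up one
            scroll += 1
            y = h - 1
    return x, y, scroll
```

For example, writing six characters starting on the last row of a 5-column, 2-row buffer wraps once and scrolls once: `advance_cursor(0, 1, 5, 2, "abcdef")` yields `(1, 1, 1)`, the scroll count being exactly what the caller needs to shift any stored positions.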