def _tag_ebs(self, conn, role):
    tags = {'Name': 'spilo_' + self.cluster_name, 'Role': role, 'Instance': self.instance_id}
    volumes = conn.get_all_volumes(filters={'attachment.instance-id': self.instance_id})
    conn.create_tags([v.id for v in volumes], tags)
set tags, carrying the cluster name, instance role and instance id for the EBS storage
def upload_mission(aFileName):
    missionlist = readmission(aFileName)
    print("\nUpload mission from a file: %s" % aFileName)
    print(' Clear mission')
    cmds = vehicle.commands
    cmds.clear()
    for command in missionlist:
        cmds.add(command)
    print(' Upload mission')
    vehicle.commands.upload()
Upload a mission from a file.
def integral(self, bandname):
    intg = {}
    for det in self.rsr[bandname].keys():
        wvl = self.rsr[bandname][det]['wavelength']
        resp = self.rsr[bandname][det]['response']
        intg[det] = np.trapz(resp, wvl)
    return intg
Calculate the integral of the spectral response function for each detector.
def _CheckPythonVersionAndDisableWarnings(self):
    if self._checked_for_old_python_version:
        return
    if sys.version_info[0:3] < (2, 7, 9):
        logger.warning(
            'You are running a version of Python prior to 2.7.9. Your version '
            'of Python has multiple weaknesses in its SSL implementation that '
            'can allow an attacker to read or modify SSL encrypted data. '
            'Please update. Further SSL warnings will be suppressed. See '
            'https://www.python.org/dev/peps/pep-0466/ for more information.')
        urllib3_module = urllib3
        if not urllib3_module:
            if hasattr(requests, 'packages'):
                urllib3_module = getattr(requests.packages, 'urllib3')
        if urllib3_module and hasattr(urllib3_module, 'disable_warnings'):
            urllib3_module.disable_warnings()
    self._checked_for_old_python_version = True
Checks python version, and disables SSL warnings. urllib3 will warn on each HTTPS request made by older versions of Python. Rather than spamming the user, we print one warning message, then disable warnings in urllib3.
def libvlc_media_library_media_list(p_mlib):
    f = _Cfunctions.get('libvlc_media_library_media_list', None) or \
        _Cfunction('libvlc_media_library_media_list', ((1,),), class_result(MediaList),
                   ctypes.c_void_p, MediaLibrary)
    return f(p_mlib)
Get media library subitems. @param p_mlib: media library object. @return: media list subitems.
def _makeTimingAbsolute(relativeDataList, startTime, endTime):
    timingSeq = [row[0] for row in relativeDataList]
    valueSeq = [list(row[1:]) for row in relativeDataList]
    absTimingSeq = makeSequenceAbsolute(timingSeq, startTime, endTime)
    absDataList = [tuple([time, ] + row) for time, row in zip(absTimingSeq, valueSeq)]
    return absDataList
Maps relative times (from 0 to 1) onto the provided start and end times. Input is a list of tuples of the form [(time1, value1), (time2, value2), ...].
def clear(self):
    self._index = defaultdict(list)
    self._reverse_index = defaultdict(list)
    self._undefined_keys = {}
Clear index.
def pong(self, payload):
    if isinstance(payload, six.text_type):
        payload = payload.encode("utf-8")
    self.send(payload, ABNF.OPCODE_PONG)
Send pong data. payload: data payload to send to the server.
def _cleanup_tempdir(tempdir):
    try:
        shutil.rmtree(tempdir)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise
Clean up temp directory ignoring ENOENT errors.
def keys(self):
    keys = []
    for app_name, __ in self.items():
        keys.append(app_name)
    return keys
return a list of all app_names
def getNumDownloads(self, fileInfo):
    downloads = fileInfo[fileInfo.find("FILE INFORMATION"):]
    if -1 != fileInfo.find("not included in ranking"):
        return "0"
    downloads = downloads[:downloads.find(".<BR>")]
    downloads = downloads[downloads.find("</A> with ") + len("</A> with "):]
    return downloads
Function to get the number of times a file has been downloaded
def create_form(self, label_columns=None, inc_columns=None, description_columns=None,
                validators_columns=None, extra_fields=None, filter_rel_fields=None):
    label_columns = label_columns or {}
    inc_columns = inc_columns or []
    description_columns = description_columns or {}
    validators_columns = validators_columns or {}
    extra_fields = extra_fields or {}
    form_props = {}
    for col_name in inc_columns:
        if col_name in extra_fields:
            form_props[col_name] = extra_fields.get(col_name)
        else:
            self._convert_col(col_name,
                              self._get_label(col_name, label_columns),
                              self._get_description(col_name, description_columns),
                              self._get_validators(col_name, validators_columns),
                              filter_rel_fields, form_props)
    return type('DynamicForm', (DynamicForm,), form_props)
Converts a model to a form given :param label_columns: A dictionary with the column's labels. :param inc_columns: A list with the columns to include :param description_columns: A dictionary with a description for cols. :param validators_columns: A dictionary with WTForms validators ex:: validators={'personal_email':EmailValidator} :param extra_fields: A dictionary containing column names and a WTForm Form fields to be added to the form, these fields do not exist on the model itself ex:: extra_fields={'some_col':BooleanField('Some Col', default=False)} :param filter_rel_fields: A filter to be applied on relationships
def attach(self, stdout=True, stderr=True, stream=True, logs=False):
    try:
        data = parse_stream(self.client.attach(self.id, stdout, stderr, stream, logs))
    except KeyboardInterrupt:
        logger.warning(
            "service container: {0} has been interrupted. "
            "The container will be stopped but will not be deleted.".format(self.name))
        data = None
    self.stop()
    return data
Keeping this simple until we need to extend later.
def on_breakpoints_changed(self, removed=False):
    if not self.ready_to_run:
        return
    self.mtime += 1
    if not removed:
        self.set_tracing_for_untraced_contexts()
When breakpoints change, we have to re-evaluate all the assumptions we've made so far.
def _print_config_text(tree, indentation=0):
    config = ''
    for key, value in six.iteritems(tree):
        config += '{indent}{line}\n'.format(indent=' ' * indentation, line=key)
        if value:
            config += _print_config_text(value, indentation=indentation + 1)
    return config
Return the config as text from a config tree.
def validate(request: Union[Dict, List], schema: dict) -> Union[Dict, List]:
    jsonschema_validate(request, schema)
    return request
Wraps jsonschema.validate, returning the same object passed in. Args: request: The deserialized-from-json request. schema: The jsonschema schema to validate against. Raises: jsonschema.ValidationError
def _integrate_fixed_trajectory(self, h, T, step, relax):
    solution = np.hstack((self.t, self.y))
    while self.successful():
        self.integrate(self.t + h, step, relax)
        current_step = np.hstack((self.t, self.y))
        solution = np.vstack((solution, current_step))
        if (h > 0) and (self.t >= T):
            break
        elif (h < 0) and (self.t <= T):
            break
        else:
            continue
    return solution
Generates a solution trajectory of fixed length.
def helper_add(access_token, ck_id, path, body):
    full_path = ''.join([path, "('", ck_id, "')"])
    full_path_encoded = urllib.parse.quote(full_path, safe='')
    endpoint = ''.join([ams_rest_endpoint, full_path_encoded])
    return do_ams_put(endpoint, full_path_encoded, body, access_token, "json_only", "1.0;NetFx")
Helper Function to add strings to a URL path. Args: access_token (str): A valid Azure authentication token. ck_id (str): A CK ID. path (str): A URL Path. body (str): A Body. Returns: HTTP response. JSON body.
def get_singularity_version():
    version = os.environ.get('SPYTHON_SINGULARITY_VERSION', "")
    if version == "":
        try:
            version = run_command(["singularity", '--version'], quiet=True)
        except Exception:
            return version
        if version['return_code'] == 0:
            if len(version['message']) > 0:
                version = version['message'][0].strip('\n')
    return version
get the singularity client version. Useful in the case that functionality has changed, etc. Can be "hacked" if needed by exporting SPYTHON_SINGULARITY_VERSION, which is checked before checking on the command line.
def release(self, message_id, reservation_id, delay=0):
    url = "queues/%s/messages/%s/release" % (self.name, message_id)
    body = {'reservation_id': reservation_id}
    if delay > 0:
        body['delay'] = delay
    body = json.dumps(body)
    response = self.client.post(url, body=body,
                                headers={'Content-Type': 'application/json'})
    return response['body']
Release a reserved message after the specified delay; fails if there is no message with such id on the queue. Arguments: message_id -- The ID of the message. reservation_id -- Reservation Id of the message. delay -- The time after which the message will be released.
def unique_authors(self, limit):
    seen = set()
    if limit == 0:
        limit = None
    seen_add = seen.add
    return [x.author for x in self.sorted_commits[:limit]
            if not (x.author in seen or seen_add(x.author))]
Unique list of authors, but preserving order.
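The seen/seen_add pattern above is a common idiom for order-preserving deduplication; a standalone sketch with plain strings standing in for commit authors (a hypothetical simplification, not the original class method):

```python
def unique_preserving_order(items):
    # Track already-seen items in a set; set.add returns None,
    # so "seen_add(x)" is falsy and each element passes the filter once.
    seen = set()
    seen_add = seen.add
    return [x for x in items if not (x in seen or seen_add(x))]

authors = ["alice", "bob", "alice", "carol", "bob"]
print(unique_preserving_order(authors))  # ['alice', 'bob', 'carol']
```

Binding `seen.add` to a local name avoids an attribute lookup per element, which is why the idiom appears in hot loops.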
def cross_v2(vec1, vec2):
    # z component of the equivalent 3D cross product (x1*y2 - y1*x2)
    return vec1.x * vec2.y - vec1.y * vec2.x
Return the cross product of the two vectors as a scalar. Cross product doesn't really make sense in 2D, but return the Z component of the 3D result.
def serverDirectories(self):
    directs = []
    url = self._url + "/directories"
    params = {"f": "json"}
    res = self._get(url=url, param_dict=params,
                    securityHandler=self._securityHandler,
                    proxy_url=self._proxy_url,
                    proxy_port=self._proxy_port)
    for direct in res['directories']:
        directs.append(
            ServerDirectory(url=url + "/%s" % direct["name"],
                            securityHandler=self._securityHandler,
                            proxy_url=self._proxy_url,
                            proxy_port=self._proxy_port,
                            initialize=True))
    return directs
returns the server directory objects as a list
def _merge_maps(m1, m2):
    return type(m1)(chain(m1.items(), m2.items()))
merge two Mapping objects, keeping the type of the first mapping
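Because later items() pairs win when a mapping constructor sees duplicate keys, values from the second mapping override the first, while the result keeps the first mapping's type. A quick standalone check (a sketch, assuming plain dict and OrderedDict inputs):

```python
from collections import OrderedDict
from itertools import chain

def merge_maps(m1, m2):
    # Construct a new mapping of m1's type; duplicate keys take m2's value.
    return type(m1)(chain(m1.items(), m2.items()))

merged = merge_maps(OrderedDict(a=1, b=2), {"b": 3, "c": 4})
print(dict(merged))  # {'a': 1, 'b': 3, 'c': 4}
print(type(merged).__name__)  # OrderedDict
```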
def get(self, blocking=True):
    if self.closed:
        raise PoolAlreadyClosedError("Connection pool is already closed.")
    if not self.limiter.acquire(blocking=blocking):
        return None
    c = None
    try:
        c = self.idle_conns.pop()
    except IndexError:
        try:
            c = self.connect_func()
        except Exception:
            self.limiter.release()
            raise
    return _ConnectionProxy(self, c)
Gets a connection. Args: blocking: Whether to block when max_size connections are already in use. If false, may return None. Returns: A connection to the database. Raises: PoolAlreadyClosedError: if close() method was already called on this pool.
def timestamp_file():
    config_dir = os.path.join(os.path.expanduser("~"), BaseGlobalConfig.config_local_dir)
    if not os.path.exists(config_dir):
        os.mkdir(config_dir)
    timestamp_file = os.path.join(config_dir, "cumulus_timestamp")
    try:
        with open(timestamp_file, "r+") as f:
            yield f
    except IOError:
        with open(timestamp_file, "w+") as f:
            yield f
Opens a file for tracking the time of the last version check
def crypto_core_ed25519_add(p, q):
    ensure(isinstance(p, bytes) and isinstance(q, bytes)
           and len(p) == crypto_core_ed25519_BYTES
           and len(q) == crypto_core_ed25519_BYTES,
           'Each point must be a {} long bytes sequence'.format('crypto_core_ed25519_BYTES'),
           raising=exc.TypeError)
    r = ffi.new("unsigned char[]", crypto_core_ed25519_BYTES)
    rc = lib.crypto_core_ed25519_add(r, p, q)
    ensure(rc == 0, 'Unexpected library error', raising=exc.RuntimeError)
    return ffi.buffer(r, crypto_core_ed25519_BYTES)[:]
Add two points on the edwards25519 curve. :param p: a :py:data:`.crypto_core_ed25519_BYTES` long bytes sequence representing a point on the edwards25519 curve :type p: bytes :param q: a :py:data:`.crypto_core_ed25519_BYTES` long bytes sequence representing a point on the edwards25519 curve :type q: bytes :return: a point on the edwards25519 curve represented as a :py:data:`.crypto_core_ed25519_BYTES` long bytes sequence :rtype: bytes
def run_all(self):
    logger.debug("Creating batch session")
    session = Session()
    for section_id in self.parser.sections():
        self.run_job(section_id, session=session)
Run all the jobs specified in the configuration file.
def is_mastercard(n):
    n, length = str(n), len(str(n))
    if 16 <= length <= 19:
        if n[:2] in strings_between(51, 56):
            return True
    return False
Checks if credit card number fits the mastercard format.
def choice(anon, obj, field, val):
    return anon.faker.choice(field=field)
Randomly chooses one of the choices set on the field.
def _param_fields(kwargs, fields):
    if fields is None:
        return
    if type(fields) in [list, set, frozenset, tuple]:
        fields = {x: True for x in fields}
    if type(fields) == dict:
        fields.setdefault("_id", False)
    kwargs["projection"] = fields
Normalize the "fields" argument to most find methods
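The normalization above turns a sequence of field names into a pymongo-style projection dict that excludes `_id` by default; a self-contained sketch of the same logic (function name hypothetical):

```python
def normalize_fields(kwargs, fields):
    # Accept either a sequence of field names or an explicit projection dict.
    if fields is None:
        return
    if type(fields) in [list, set, frozenset, tuple]:
        # Sequence form: include each named field.
        fields = {x: True for x in fields}
    if type(fields) == dict:
        # Exclude _id unless the caller asked for it explicitly.
        fields.setdefault("_id", False)
    kwargs["projection"] = fields

kw = {}
normalize_fields(kw, ["name", "email"])
print(kw)  # {'projection': {'name': True, 'email': True, '_id': False}}
```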
def partial_normalize(self, axis: AxisIdentifier = 0, inplace: bool = False):
    axis = self._get_axis(axis)
    if not inplace:
        copy = self.copy()
        copy.partial_normalize(axis, inplace=True)
        return copy
    else:
        self._coerce_dtype(float)
        if axis == 0:
            divisor = self._frequencies.sum(axis=0)
        else:
            divisor = self._frequencies.sum(axis=1)[:, np.newaxis]
        divisor[divisor == 0] = 1
        self._frequencies /= divisor
        self._errors2 /= (divisor * divisor)
        return self
Normalize in rows or columns.

Parameters
----------
axis: int or str
    Along which axis to sum (numpy-sense)
inplace: bool
    Update the object itself

Returns
-------
hist : Histogram2D
def is_element_visible(driver, selector, by=By.CSS_SELECTOR):
    try:
        element = driver.find_element(by=by, value=selector)
        return element.is_displayed()
    except Exception:
        return False
Returns whether the specified element selector is visible on the page. @Params driver - the webdriver object (required) selector - the locator that is used (required) by - the method to search for the locator (Default: By.CSS_SELECTOR) @Returns Boolean (is element visible)
def unpickler(zone, utcoffset=None, dstoffset=None, tzname=None):
    tz = pytz.timezone(zone)
    if utcoffset is None:
        return tz
    utcoffset = memorized_timedelta(utcoffset)
    dstoffset = memorized_timedelta(dstoffset)
    try:
        return tz._tzinfos[(utcoffset, dstoffset, tzname)]
    except KeyError:
        pass
    for localized_tz in tz._tzinfos.values():
        if (localized_tz._utcoffset == utcoffset
                and localized_tz._dst == dstoffset):
            return localized_tz
    inf = (utcoffset, dstoffset, tzname)
    tz._tzinfos[inf] = tz.__class__(inf, tz._tzinfos)
    return tz._tzinfos[inf]
Factory function for unpickling pytz tzinfo instances. This is shared for both StaticTzInfo and DstTzInfo instances, because database changes could cause a zones implementation to switch between these two base classes and we can't break pickles on a pytz version upgrade.
def create(self, url):
    bucket, obj_key = _parse_url(url)
    if not bucket:
        raise InvalidURL(url, "You must specify a bucket and (optional) path")
    if obj_key:
        target = "/".join((bucket, obj_key))
    else:
        target = bucket
    return self.call("CreateBucket", bucket=target)
Create a bucket, directory, or empty file.
def _normalize_properties(self, definition):
    args = definition.get('Properties', {}).copy()
    if 'Condition' in definition:
        args.update({'Condition': definition['Condition']})
    if 'UpdatePolicy' in definition:
        args.update({'UpdatePolicy': self._create_instance(
            UpdatePolicy, definition['UpdatePolicy'])})
    if 'CreationPolicy' in definition:
        args.update({'CreationPolicy': self._create_instance(
            CreationPolicy, definition['CreationPolicy'])})
    if 'DeletionPolicy' in definition:
        args.update({'DeletionPolicy': self._convert_definition(
            definition['DeletionPolicy'])})
    if 'Metadata' in definition:
        args.update({'Metadata': self._convert_definition(
            definition['Metadata'])})
    if 'DependsOn' in definition:
        args.update({'DependsOn': self._convert_definition(
            definition['DependsOn'])})
    return args
Inspects the definition and returns a copy of it that is updated with any special property such as Condition, UpdatePolicy and the like.
def _checksum(self, packet):
    xorsum = 0
    for s in packet:
        xorsum ^= ord(s)
    return xorsum
calculate the XOR checksum of a packet in string format
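XOR checksums fold every byte into one value; the version above assumes a str packet (Python 2 style, hence `ord`). A bytes-friendly sketch of the same idea:

```python
def xor_checksum(packet: bytes) -> int:
    # XOR of all byte values; result always fits in one byte,
    # and any single-byte corruption changes the checksum.
    xorsum = 0
    for b in packet:  # iterating bytes yields ints in Python 3
        xorsum ^= b
    return xorsum

print(xor_checksum(b"\x01\x02\x03"))  # 0  (1 ^ 2 == 3, 3 ^ 3 == 0)
```

Note that XOR is order-independent, so this detects corrupted bytes but not swapped ones.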
def fullversion():
    cmd = 'dnsmasq -v'
    out = __salt__['cmd.run'](cmd).splitlines()
    comps = out[0].split()
    version_num = comps[2]
    comps = out[1].split()
    return {'version': version_num, 'compile options': comps[3:]}
Shows installed version of dnsmasq and compile options. CLI Example: .. code-block:: bash salt '*' dnsmasq.fullversion
def build_sort():
    sorts = request.args.getlist('sort')
    sorts = [sorts] if isinstance(sorts, basestring) else sorts
    sorts = [s.split(' ') for s in sorts]
    return [{SORTS[s]: d} for s, d in sorts if s in SORTS]
Build sort query parameter from request args
def on_idle(self, event):
    self.checkReszie()
    if self.resized:
        self.rescaleX()
        self.calcFontScaling()
        self.calcHorizonPoints()
        self.updateRPYLocations()
        self.updateAARLocations()
        self.adjustPitchmarkers()
        self.adjustHeadingPointer()
        self.adjustNorthPointer()
        self.updateBatteryBar()
        self.updateStateText()
        self.updateWPText()
        self.adjustWPPointer()
        self.updateAltHistory()
        self.canvas.draw()
        self.canvas.Refresh()
        self.resized = False
    time.sleep(0.05)
Adjust text and positions when the window is resized.
def send_exception_to_sentry(self, exc_info):
    if not self.sentry_client:
        LOGGER.debug('No sentry_client, aborting')
        return
    message = dict(self.active_message)
    try:
        duration = math.ceil(time.time() - self.delivery_time) * 1000
    except TypeError:
        duration = 0
    kwargs = {'extra': {'consumer_name': self.consumer_name,
                        'env': dict(os.environ),
                        'message': message},
              'time_spent': duration}
    LOGGER.debug('Sending exception to sentry: %r', kwargs)
    self.sentry_client.captureException(exc_info, **kwargs)
Send an exception to Sentry if enabled. :param tuple exc_info: exception information as returned from :func:`sys.exc_info`
def get_version():
    sys.modules["setup_helpers"] = object()
    sys.modules["setup_helpers_macos"] = object()
    sys.modules["setup_helpers_windows"] = object()
    filename = os.path.join(_ROOT_DIR, "setup.py")
    loader = importlib.machinery.SourceFileLoader("setup", filename)
    setup_mod = loader.load_module()
    return setup_mod.VERSION
Get the current version from ``setup.py``. Assumes that importing ``setup.py`` will have no side-effects (i.e. assumes the behavior is guarded by ``if __name__ == "__main__"``). Returns: str: The current version in ``setup.py``.
def get_roots(self):
    if self.__directionless:
        sys.stderr.write("ERROR: can't get roots of an undirected graph\n")
        sys.exit()
    outputids = self.__nodes.keys()
    rootset = set(outputids) - set(self.__child_to_parent.keys())
    return [self.__nodes[x] for x in rootset]
get the roots of a graph. must be a directed graph :returns: root list of nodes :rtype: Node[]
def _add_q(self, q_object):
    self._criteria = self._criteria._combine(q_object, q_object.connector)
Add a Q-object to the current filter.
def create(cls, name, datacenter, subnet=None, gateway=None, background=False):
    if not background and not cls.intty():
        background = True
    datacenter_id_ = int(Datacenter.usable_id(datacenter))
    vlan_params = {
        'name': name,
        'datacenter_id': datacenter_id_,
    }
    if subnet:
        vlan_params['subnet'] = subnet
    if gateway:
        vlan_params['gateway'] = gateway
    result = cls.call('hosting.vlan.create', vlan_params)
    if not background:
        cls.echo('Creating your vlan.')
        cls.display_progress(result)
        cls.echo('Your vlan %s has been created.' % name)
    return result
Create a new vlan.
def params(self, dict):
    self._configuration.update(dict)
    self._measurements.update()
Set configuration variables for an OnShape part.
def encode(self, obj):
    try:
        result = json.dumps(obj, sort_keys=True, indent=None,
                            separators=(',', ':'), ensure_ascii=False)
        if isinstance(result, six.text_type):
            return result.encode("utf-8")
        else:
            return result
    except (UnicodeEncodeError, TypeError) as error:
        raise exceptions.EncodingError('json', error)
Returns ``obj`` serialized as JSON formatted bytes.

Raises
------
~ipfsapi.exceptions.EncodingError

Parameters
----------
obj : str | list | dict | int
    JSON serializable Python object

Returns
-------
bytes
def streamDefByThreshold(self, stream_raster_grid, threshold,
                         contributing_area_grid, mask_grid=None):
    log("PROCESS: StreamDefByThreshold")
    self.stream_raster_grid = stream_raster_grid
    cmd = [os.path.join(self.taudem_exe_path, 'threshold'),
           '-ssa', contributing_area_grid,
           '-src', self.stream_raster_grid,
           '-thresh', str(threshold)]
    if mask_grid:
        cmd += ['-mask', mask_grid]
    self._run_mpi_cmd(cmd)
    self._add_prj_file(contributing_area_grid, self.stream_raster_grid)
Calculates the stream definition by threshold.
def run(self):
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.setsockopt(zmq.LINGER, 100)
    socket.bind('ipc://' + self.timer_sock)
    count = 0
    log.debug('ConCache-Timer started')
    while not self.stopped.wait(1):
        socket.send(self.serial.dumps(count))
        count += 1
        if count >= 60:
            count = 0
main loop that fires the event every second
def api_post(self, action, data, binary_data_param=None):
    binary_data_param = binary_data_param or []
    if binary_data_param:
        return self.api_post_multipart(action, data, binary_data_param)
    else:
        return self._api_request(action, data, 'POST')
Perform an HTTP POST request, using the shared-secret auth hash. @param action: API action call @param data: dictionary values
def remove_token(self, token_stack, token):
    token_stack.reverse()
    try:
        token_stack.remove(token)
        retval = True
    except ValueError:
        retval = False
    token_stack.reverse()
    return retval
Remove last occurrence of token from stack
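Reversing, removing, and reversing back deletes the *last* occurrence, since `list.remove` only removes the first match; a self-contained sketch (function name hypothetical):

```python
def remove_last(stack, token):
    # list.remove drops the first match, so reverse to target the last one.
    stack.reverse()
    try:
        stack.remove(token)
        removed = True
    except ValueError:
        removed = False
    stack.reverse()
    return removed

s = ["a", "b", "a", "c"]
print(remove_last(s, "a"), s)  # True ['a', 'b', 'c']
```

Both `reverse()` calls are in place, so the surviving elements keep their original order even when the token is absent.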
def select_sample(in_file, sample, out_file, config, filters=None):
    if not utils.file_exists(out_file):
        with file_transaction(config, out_file) as tx_out_file:
            if len(get_samples(in_file)) == 1:
                shutil.copy(in_file, tx_out_file)
            else:
                if in_file.endswith(".gz"):
                    bgzip_and_index(in_file, config)
                bcftools = config_utils.get_program("bcftools", config)
                output_type = "z" if out_file.endswith(".gz") else "v"
                filter_str = "-f %s" % filters if filters is not None else ""
                cmd = "{bcftools} view -O {output_type} {filter_str} {in_file} -s {sample} > {tx_out_file}"
                do.run(cmd.format(**locals()), "Select sample: %s" % sample)
    if out_file.endswith(".gz"):
        bgzip_and_index(out_file, config)
    return out_file
Select a single sample from the supplied multisample VCF file.
def set_pixel(framebuf, x, y, color):
    index = (y >> 3) * framebuf.stride + x
    offset = y & 0x07
    framebuf.buf[index] = (framebuf.buf[index] & ~(0x01 << offset)) | ((color != 0) << offset)
Set a given pixel to a color.
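This bit layout packs 8 vertically stacked pixels per byte: `y >> 3` selects the byte row, `y & 0x07` the bit within it. A standalone sketch with a minimal framebuffer stand-in (the `Framebuf` stub here is hypothetical, not the real framebuf class):

```python
class Framebuf:
    # Minimal stub: 'stride' bytes per 8-pixel-tall row, bytearray storage.
    def __init__(self, width, height):
        self.stride = width
        self.buf = bytearray(width * ((height + 7) // 8))

def set_pixel(fb, x, y, color):
    index = (y >> 3) * fb.stride + x   # which byte holds this pixel
    offset = y & 0x07                  # which bit within that byte
    # Clear the bit, then set it if color is nonzero.
    fb.buf[index] = (fb.buf[index] & ~(0x01 << offset)) | ((color != 0) << offset)

fb = Framebuf(8, 8)
set_pixel(fb, 2, 5, 1)
print(fb.buf[2])  # 32  (bit 5 set: 1 << 5)
```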
def get_client(host, userid, password, port=443, auth_method='basic',
               client_timeout=60, **kwargs):
    return functools.partial(scci_cmd, host, userid, password,
                             port=port, auth_method=auth_method,
                             client_timeout=client_timeout, **kwargs)
get SCCI command partial function This function returns SCCI command partial function :param host: hostname or IP of iRMC :param userid: userid for iRMC with administrator privileges :param password: password for userid :param port: port number of iRMC :param auth_method: authentication method for iRMC, e.g. 'basic' or 'digest' :param client_timeout: timeout for SCCI operations :returns: scci_cmd partial function which takes a SCCI command param
def compile_file(self, filename, encoding="utf-8", bare=False):
    if isinstance(filename, _BaseString):
        filename = [filename]
    scripts = []
    for f in filename:
        with io.open(f, encoding=encoding) as fp:
            scripts.append(fp.read())
    return self.compile('\n\n'.join(scripts), bare=bare)
compile a CoffeeScript script file to a JavaScript code. filename can be a list or tuple of filenames, then contents of files are concatenated with line feeds. if bare is True, then compile the JavaScript without the top-level function safety wrapper (like the coffee command).
def meet(self, featuresets):
    concepts = (f.concept for f in featuresets)
    meet = self.lattice.meet(concepts)
    return self._featuresets[meet.index]
Return the nearest featureset that implies all given ones.
async def handle_request(self, request):
    service_name = request.rel_url.query['servicename']
    received_code = request.rel_url.query['pairingcode'].lower()
    _LOGGER.info('Got pairing request from %s with code %s',
                 service_name, received_code)
    if self._verify_pin(received_code):
        cmpg = tags.uint64_tag('cmpg', int(self._pairing_guid, 16))
        cmnm = tags.string_tag('cmnm', self._name)
        cmty = tags.string_tag('cmty', 'iPhone')
        response = tags.container_tag('cmpa', cmpg + cmnm + cmty)
        self._has_paired = True
        return web.Response(body=response)
    return web.Response(status=500)
Respond to request if PIN is correct.
def from_file(cls, file_path: Path, w3: Web3) -> "Package":
    if isinstance(file_path, Path):
        raw_manifest = file_path.read_text()
        validate_raw_manifest_format(raw_manifest)
        manifest = json.loads(raw_manifest)
    else:
        raise TypeError(
            "The Package.from_file method expects a pathlib.Path instance. "
            f"Got {type(file_path)} instead."
        )
    return cls(manifest, w3, file_path.as_uri())
Returns a ``Package`` instantiated by a manifest located at the provided Path. ``file_path`` arg must be a ``pathlib.Path`` instance. A valid ``Web3`` instance is required to instantiate a ``Package``.
def _parse_path_table(self, ptr_size, extent):
    self._seek_to_extent(extent)
    data = self._cdfp.read(ptr_size)
    offset = 0
    out = []
    extent_to_ptr = {}
    while offset < ptr_size:
        ptr = path_table_record.PathTableRecord()
        len_di_byte = bytearray([data[offset]])[0]
        read_len = path_table_record.PathTableRecord.record_length(len_di_byte)
        ptr.parse(data[offset:offset + read_len])
        out.append(ptr)
        extent_to_ptr[ptr.extent_location] = ptr
        offset += read_len
    return out, extent_to_ptr
An internal method to parse a path table on an ISO. For each path table entry found, a Path Table Record object is created and collected. Parameters: ptr_size - The total size in bytes of the path table. extent - The extent at which this path table record starts. Returns: A tuple consisting of the list of path table record entries and a dictionary of the extent locations to the path table record entries.
def _get_whitelist_licenses(config_path):
    whitelist_licenses = []
    try:
        with open(config_path) as config:
            whitelist_licenses = [line.rstrip() for line in config]
    except IOError:
        print('Warning: No {} file was found.'.format(LICENSE_CHECKER_CONFIG_NAME))
    return whitelist_licenses
Get whitelist license names from config file. :param config_path: str :return: list
def _get_type_hints(func, args=None, res=None, infer_defaults=None):
    if args is None or res is None:
        args2, res2 = _get_types(func, util.is_classmethod(func), util.is_method(func),
                                 unspecified_type=type(NotImplemented),
                                 infer_defaults=infer_defaults)
        if args is None:
            args = args2
        if res is None:
            res = res2
    slf = 1 if util.is_method(func) else 0
    argNames = util.getargnames(util.getargspecs(util._actualfunc(func)))
    result = {}
    if args is not Any:
        prms = get_Tuple_params(args)
        for i in range(slf, len(argNames)):
            if prms[i - slf] is not type(NotImplemented):
                result[argNames[i]] = prms[i - slf]
    result['return'] = res
    return result
Helper for get_type_hints.
def add_fs(self, name, fs, write=False, priority=0):
    if isinstance(fs, text_type):
        fs = open_fs(fs)
    if not isinstance(fs, FS):
        raise TypeError("fs argument should be an FS object or FS URL")
    self._filesystems[name] = _PrioritizedFS(
        priority=(priority, self._sort_index),
        fs=fs
    )
    self._sort_index += 1
    self._resort()
    if write:
        self.write_fs = fs
        self._write_fs_name = name
Add a filesystem to the MultiFS. Arguments: name (str): A unique name to refer to the filesystem being added. fs (FS or str): The filesystem (instance or URL) to add. write (bool): If this value is True, then the ``fs`` will be used as the writeable FS (defaults to False). priority (int): An integer that denotes the priority of the filesystem being added. Filesystems will be searched in descending priority order and then by the reverse order they were added. So by default, the most recently added filesystem will be looked at first.
def add_time(self, extra_time):
    window_start = self.parent.value('window_start') + extra_time
    self.parent.overview.update_position(window_start)
Move the window forward by the predefined amount of time.
def update_workspace_acl(namespace, workspace, acl_updates, invite_users_not_found=False):
    uri = "{0}workspaces/{1}/{2}/acl?inviteUsersNotFound={3}".format(
        fcconfig.root_url, namespace, workspace, str(invite_users_not_found).lower())
    headers = _fiss_agent_header({"Content-type": "application/json"})
    return __SESSION.patch(uri, headers=headers, data=json.dumps(acl_updates))
Update workspace access control list. Args: namespace (str): project to which workspace belongs workspace (str): Workspace name acl_updates (list(dict)): Acl updates as dicts with two keys: "email" - Firecloud user email "accessLevel" - one of "OWNER", "READER", "WRITER", "NO ACCESS" Example: {"email":"user1@mail.com", "accessLevel":"WRITER"} invite_users_not_found (bool): true to invite unregistered users, false to ignore Swagger: https://api.firecloud.org/#!/Workspaces/updateWorkspaceACL
def check_file(self, fs, info):
    if self.exclude is not None and fs.match(self.exclude, info.name):
        return False
    return fs.match(self.filter, info.name)
Check if a filename should be included. Override to exclude files from the walk. Arguments: fs (FS): A filesystem instance. info (Info): A resource info object. Returns: bool: `True` if the file should be included.
def evaluator(evaluate):
    @functools.wraps(evaluate)
    def inspyred_evaluator(candidates, args):
        fitness = []
        for candidate in candidates:
            fitness.append(evaluate(candidate, args))
        return fitness
    inspyred_evaluator.single_evaluation = evaluate
    return inspyred_evaluator
Return an inspyred evaluator function based on the given function. This function generator takes a function that evaluates only one candidate. The generator handles the iteration over each candidate to be evaluated. The given function ``evaluate`` must have the following signature:: fitness = evaluate(candidate, args) This function is most commonly used as a function decorator with the following usage:: @evaluator def evaluate(candidate, args): # Implementation of evaluation pass The generated function also contains an attribute named ``single_evaluation`` which holds the original evaluation function. In this way, the original single-candidate function can be retrieved if necessary.
def _tm(self, theta, phi, psi, dx, dy, dz):
    matrix = self.get_matrix(theta, phi, psi, dx, dy, dz)
    coord = matrix.dot(self.coord2)
    dist = coord - self.coord1
    d_i2 = (dist * dist).sum(axis=0)
    tm = -(1 / (1 + (d_i2 / self.d02)))
    return tm
Compute the minimisation target, not normalised.
def download(cls, url, filename=None):
    return utility.download(url, cls.directory(), filename)
Download a file into the correct cache directory.
def encrypt_data(self, name, plaintext, context="", key_version=0, nonce=None,
                 batch_input=None, type="aes256-gcm96", convergent_encryption="",
                 mount_point=DEFAULT_MOUNT_POINT):
    params = {
        'plaintext': plaintext,
        'context': context,
        'key_version': key_version,
        'nonce': nonce,
        'batch_input': batch_input,
        'type': type,
        'convergent_encryption': convergent_encryption,
    }
    api_path = '/v1/{mount_point}/encrypt/{name}'.format(
        mount_point=mount_point,
        name=name,
    )
    response = self._adapter.post(
        url=api_path,
        json=params,
    )
    return response.json()
Encrypt the provided plaintext using the named key. This path supports the create and update policy capabilities as follows: if the user has the create capability for this endpoint in their policies, and the key does not exist, it will be upserted with default values (whether the key requires derivation depends on whether the context parameter is empty or not). If the user only has update capability and the key does not exist, an error will be returned. Supported methods: POST: /{mount_point}/encrypt/{name}. Produces: 200 application/json :param name: Specifies the name of the encryption key to encrypt against. This is specified as part of the URL. :type name: str | unicode :param plaintext: Specifies base64 encoded plaintext to be encoded. :type plaintext: str | unicode :param context: Specifies the base64 encoded context for key derivation. This is required if key derivation is enabled for this key. :type context: str | unicode :param key_version: Specifies the version of the key to use for encryption. If not set, uses the latest version. Must be greater than or equal to the key's min_encryption_version, if set. :type key_version: int :param nonce: Specifies the base64 encoded nonce value. This must be provided if convergent encryption is enabled for this key and the key was generated with Vault 0.6.1. Not required for keys created in 0.6.2+. The value must be exactly 96 bits (12 bytes) long and the user must ensure that for any given context (and thus, any given encryption key) this nonce value is never reused. :type nonce: str | unicode :param batch_input: Specifies a list of items to be encrypted in a single batch. When this parameter is set, if the parameters 'plaintext', 'context' and 'nonce' are also set, they will be ignored. The format for the input is: [dict(context="b64_context", plaintext="b64_plaintext"), ...] :type batch_input: List[dict] :param type: This parameter is required when encryption key is expected to be created.
When performing an upsert operation, the type of key to create. :type type: str | unicode :param convergent_encryption: This parameter will only be used when a key is expected to be created. Whether to support convergent encryption. This is only supported when using a key with key derivation enabled and will require all requests to carry both a context and 96-bit (12-byte) nonce. The given nonce will be used in place of a randomly generated nonce. As a result, when the same context and nonce are supplied, the same ciphertext is generated. It is very important when using this mode that you ensure that all nonces are unique for a given context. Failing to do so will severely impact the ciphertext's security. :type convergent_encryption: str | unicode :param mount_point: The "path" the method/backend was mounted on. :type mount_point: str | unicode :return: The JSON response of the request. :rtype: requests.Response
def create_response(request, body=None, status=None, headers=None): if body is None: return HttpResponse(None, status or HTTPStatus.NO_CONTENT, headers) else: body = request.response_codec.dumps(body) response = HttpResponse(body, status or HTTPStatus.OK, headers) response.set_content_type(request.response_codec.CONTENT_TYPE) return response
Generate an HttpResponse. :param request: Request object :param body: Body of the response :param status: HTTP status code :param headers: Any headers.
def zremrangebyscore(self, key, min_score, max_score): return self._execute([b'ZREMRANGEBYSCORE', key, min_score, max_score])
Removes all elements in the sorted set stored at key with a score between min and max. Intervals are described in :meth:`~tredis.RedisClient.zrangebyscore`. Returns the number of elements removed. .. note:: **Time complexity**: ``O(log(N)+M)`` with ``N`` being the number of elements in the sorted set and M the number of elements removed by the operation. :param key: The key of the sorted set :type key: :class:`str`, :class:`bytes` :param min_score: Lowest score definition :type min_score: :class:`str`, :class:`bytes` :param max_score: Highest score definition :type max_score: :class:`str`, :class:`bytes` :rtype: int :raises: :exc:`~tredis.exceptions.RedisError`
def add_tag_for_component(user, c_id): v1_utils.verify_existence_and_get(c_id, _TABLE) values = { 'component_id': c_id } component_tagged = tags.add_tag_to_resource(values, models.JOIN_COMPONENTS_TAGS) return flask.Response(json.dumps(component_tagged), 201, content_type='application/json')
Add a tag to a specific component.
def triangle_center(tri, uv=False): if uv: data = [t.uv for t in tri] mid = [0.0, 0.0] else: data = tri.vertices mid = [0.0, 0.0, 0.0] for vert in data: mid = [m + v for m, v in zip(mid, vert)] mid = [float(m) / 3.0 for m in mid] return tuple(mid)
Computes the center of mass of the input triangle. :param tri: triangle object :type tri: elements.Triangle :param uv: if True, then finds parametric position of the center of mass :type uv: bool :return: center of mass of the triangle :rtype: tuple
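A minimal usage sketch of the centroid computation, assuming a hypothetical stand-in class with a `vertices` attribute in place of the real `elements.Triangle`:

```python
# Hypothetical stand-in for elements.Triangle, for illustration only.
class FakeTriangle:
    def __init__(self, vertices):
        self.vertices = vertices

def triangle_center(tri, uv=False):
    # Average the three vertices (or their uv coordinates) component-wise.
    data = [t.uv for t in tri] if uv else tri.vertices
    mid = [0.0] * len(data[0])
    for vert in data:
        mid = [m + v for m, v in zip(mid, vert)]
    return tuple(float(m) / 3.0 for m in mid)

tri = FakeTriangle([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)])
triangle_center(tri)  # (1.0, 1.0, 0.0)
```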
def writeinfo(self, linelist, colour=None): self.checkforpilimage() colour = self.defaultcolour(colour) self.changecolourmode(colour) self.makedraw() self.loadinfofont() for i, line in enumerate(linelist): topspacing = 5 + (12 + 5)*i self.draw.text((10, topspacing), line, fill=colour, font=self.infofont) if self.verbose: print("I've written some info on the image.")
We add a longer chunk of text in the upper left corner of the image. Provide linelist, a list of strings that will be written one below the other.
def __connect(self, wsURL, symbol): self.logger.debug("Starting thread") self.ws = websocket.WebSocketApp(wsURL, on_message=self.__on_message, on_close=self.__on_close, on_open=self.__on_open, on_error=self.__on_error, header=self.__get_auth()) self.wst = threading.Thread(target=lambda: self.ws.run_forever()) self.wst.daemon = True self.wst.start() self.logger.debug("Started thread") conn_timeout = 5 while (not self.ws.sock or not self.ws.sock.connected) and conn_timeout: sleep(1) conn_timeout -= 1 if not conn_timeout: self.logger.error("Couldn't connect to WS! Exiting.") self.exit() sys.exit(1)
Connect to the websocket in a thread.
def pipes(stream, *transformers): for transformer in transformers: stream = stream.pipe(transformer) return stream
Pipe several transformers end to end.
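A self-contained sketch of the piping pattern, with a hypothetical `Stream` whose `.pipe()` returns a new stream with the transformer applied (the real stream type comes from the surrounding library):

```python
# Hypothetical minimal Stream for illustration: .pipe() applies a
# transformer function to the items and wraps the result.
class Stream:
    def __init__(self, items):
        self.items = list(items)

    def pipe(self, transformer):
        return Stream(transformer(self.items))

def pipes(stream, *transformers):
    # Chain transformers end to end, left to right.
    for transformer in transformers:
        stream = stream.pipe(transformer)
    return stream

doubled_plus_one = pipes(
    Stream([1, 2, 3]),
    lambda xs: [x * 2 for x in xs],
    lambda xs: [x + 1 for x in xs],
)
doubled_plus_one.items  # [3, 5, 7]
```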
def mongo_retry(f): log_all_exceptions = 'arctic' in f.__module__ if f.__module__ else False @wraps(f) def f_retry(*args, **kwargs): global _retry_count, _in_retry top_level = not _in_retry _in_retry = True try: while True: try: return f(*args, **kwargs) except (DuplicateKeyError, ServerSelectionTimeoutError) as e: _handle_error(f, e, _retry_count, **_get_host(args)) raise except (OperationFailure, AutoReconnect) as e: _retry_count += 1 _handle_error(f, e, _retry_count, **_get_host(args)) except Exception as e: if log_all_exceptions: _log_exception(f.__name__, e, _retry_count, **_get_host(args)) raise finally: if top_level: _in_retry = False _retry_count = 0 return f_retry
Catch-all decorator that handles AutoReconnect and OperationFailure errors from PyMongo
def cmd_create(args): if args.type == SQLITE: if args.output is not None and path.exists(args.output): remove(args.output) storage = SqliteStorage(db=args.output, settings=args.settings) else: storage = JsonStorage(settings=args.settings) markov = MarkovText.from_storage(storage) read(args.input, markov, args.progress) save(markov, args.output, args)
Create a generator. Parameters ---------- args : `argparse.Namespace` Command arguments.
def create_policy(policyName, policyDocument, region=None, key=None, keyid=None, profile=None): try: conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile) if not isinstance(policyDocument, string_types): policyDocument = salt.utils.json.dumps(policyDocument) policy = conn.create_policy(policyName=policyName, policyDocument=policyDocument) if policy: log.info('The newly created policy version is %s', policy['policyVersionId']) return {'created': True, 'versionId': policy['policyVersionId']} else: log.warning('Policy was not created') return {'created': False} except ClientError as e: return {'created': False, 'error': __utils__['boto3.get_error'](e)}
Given a valid config, create a policy. Returns {created: true} if the policy was created and returns {created: False} if the policy was not created. CLI Example: .. code-block:: bash salt myminion boto_iot.create_policy my_policy \\ '{"Version":"2015-12-12",\\ "Statement":[{"Effect":"Allow",\\ "Action":["iot:Publish"],\\ "Resource":["arn:::::topic/foo/bar"]}]}'
def get_bounds(pts): pts_t = np.asarray(pts).T return np.asarray(([np.min(_pts) for _pts in pts_t], [np.max(_pts) for _pts in pts_t]))
Return the minimum point and maximum point bounding a set of points.
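A pure-Python sketch of the same idea, avoiding the NumPy dependency: transpose the point list and take the per-axis minimum and maximum.

```python
def get_bounds(pts):
    # zip(*pts) transposes the points into per-axis tuples.
    axes = list(zip(*pts))
    return ([min(axis) for axis in axes],
            [max(axis) for axis in axes])

get_bounds([(1, 5), (3, 2), (-1, 4)])  # ([-1, 2], [3, 5])
```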
def http_get(url, filename=None): try: ret = requests.get(url) except requests.exceptions.SSLError as error: _LOGGER.error(error) return False if ret.status_code != 200: return False if filename is None: return ret.content with open(filename, 'wb') as data: data.write(ret.content) return True
Download HTTP data.
def job_started_message(self, job, queue): return '[%s|%s|%s] starting' % (queue._cached_name, job.pk.get(), job._cached_identifier)
Return the message to log just before the execution of the job
def _deserialize(self, value, attr, data): if not self.context.get('convert_dates', True) or not value: return value value = super(ArrowField, self)._deserialize(value, attr, data) timezone = self.get_field_value('timezone') target = arrow.get(value) if timezone and text_type(target.to(timezone)) != text_type(target): raise ValidationError( "The provided datetime is not in the " "{} timezone.".format(timezone) ) return target
Deserializes a string into an Arrow object.
def make_response(message, status_code, details=None): response_body = dict(message=message) if details: response_body['details'] = details response = jsonify(response_body) response.status_code = status_code return response
Make a jsonified response with specified message and status code.
def pack(self): if six.PY3: return {'message': six.text_type(self), 'args': self.args} return dict(message=self.__unicode__(), args=self.args)
Pack this exception into a serializable dictionary that is safe for transport via msgpack
def disable_logger(self, disabled=True): if disabled: sys.stdout = _original_stdout sys.stderr = _original_stderr else: sys.stdout = self.__stdout_stream sys.stderr = self.__stderr_stream self.logger.disabled = disabled
Disable all logging calls.
def bin(x, bins, maxX=None, minX=None): if maxX is None: maxX = x.max() if minX is None: minX = x.min() if not np.iterable(bins): bins = np.linspace(minX, maxX+1e-5, bins+1) return np.digitize(x.ravel(), bins).reshape(x.shape), bins
Bin signal x using 'bins' bins. If minX, maxX are None, they default to the full range of the signal. If they are not None, everything above maxX gets assigned to the top bin and everything below minX gets assigned to bin 0; this is effectively the same as clipping x before passing it to 'bin' input: ----- x: signal to be binned, some sort of iterable bins: int, number of bins iterable, bin edges maxX: clips data above maxX minX: clips data below minX output: ------ binnedX: x after being binned bins: bins used for binning. if input 'bins' is already an iterable it just returns the same iterable example: # make 10 bins of equal length spanning from x.min() to x.max() bin(x, 10) # use predefined bins such that each bin has the same number of points (maximize entropy) binsN = 10 percentiles = list(np.arange(0, 100.1, 100/binsN)) bins = np.percentile(x, percentiles) bin(x, bins)
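A pure-Python sketch of the equal-width case using the stdlib `bisect` module, mirroring `np.digitize` semantics (a value x lands in bin i when edges[i-1] <= x < edges[i]); the function and names here are illustrative, not the library's API:

```python
import bisect

def bin_values(xs, nbins, min_x=None, max_x=None):
    # Default to the full range of the signal, as in the NumPy version.
    if min_x is None:
        min_x = min(xs)
    if max_x is None:
        max_x = max(xs)
    # nbins+1 equally spaced edges; the small epsilon keeps the maximum
    # value inside the top bin, matching the linspace(maxX+1e-5) trick.
    width = (max_x + 1e-5 - min_x) / nbins
    edges = [min_x + i * width for i in range(nbins + 1)]
    # bisect_right matches np.digitize: edges[i-1] <= x < edges[i] -> i.
    return [bisect.bisect_right(edges, x) for x in xs], edges

indices, edges = bin_values([0.0, 5.0, 9.0], nbins=2)
indices  # [1, 2, 2]
```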
def set_execution_mode(self, execution_mode, notify=True): if not isinstance(execution_mode, StateMachineExecutionStatus): raise TypeError("status must be of type StateMachineExecutionStatus") self._status.execution_mode = execution_mode if notify: self._status.execution_condition_variable.acquire() self._status.execution_condition_variable.notify_all() self._status.execution_condition_variable.release()
An observed setter for the execution mode of the state machine status. This is necessary for the monitoring client to update the local state machine in the same way as the root state machine of the server. :param execution_mode: the new execution mode of the state machine :raises exceptions.TypeError: if the execution mode is of the wrong type
def print_dedicated_access(access): table = formatting.Table(['id', 'Name', 'Cpus', 'Memory', 'Disk', 'Created'], 'Dedicated Access') for host in access: host_id = host.get('id') host_fqdn = host.get('name') host_cpu = host.get('cpuCount') host_mem = host.get('memoryCapacity') host_disk = host.get('diskCapacity') host_created = host.get('createDate') table.add_row([host_id, host_fqdn, host_cpu, host_mem, host_disk, host_created]) return table
Prints out the dedicated hosts a user can access
def get_hyperedge_id(self, tail, head): frozen_tail = frozenset(tail) frozen_head = frozenset(head) if not self.has_hyperedge(frozen_tail, frozen_head): raise ValueError("No such hyperedge exists.") return self._successors[frozen_tail][frozen_head]
From a tail and head set of nodes, returns the ID of the hyperedge that these sets comprise. :param tail: iterable container of references to nodes in the tail of the hyperedge to be added :param head: iterable container of references to nodes in the head of the hyperedge to be added :returns: str -- ID of the hyperedge that the specified tail and head sets comprise. :raises: ValueError -- No such hyperedge exists. Examples: :: >>> H = DirectedHypergraph() >>> hyperedge_list = ((["A"], ["B", "C"]), (("A", "B"), ("C"), {'weight': 2}), (set(["B"]), set(["A", "C"]))) >>> hyperedge_ids = H.add_hyperedges(hyperedge_list) >>> x = H.get_hyperedge_id(["A"], ["B", "C"])
def validate_date(date, project_member_id, filename): try: arrow.get(date) except Exception: return False return True
Check if date is in ISO 8601 format. :param date: This field is the date to be checked. :param project_member_id: This field is the project_member_id corresponding to the date provided. :param filename: This field is the filename corresponding to the date provided.
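`arrow.get` is fairly lenient about input formats; a stricter stdlib sketch of the same check, using `datetime.fromisoformat` (Python 3.7+; note it rejects a trailing `Z` before 3.11) — the function name here is an illustrative assumption:

```python
from datetime import datetime

def is_iso8601(date_str):
    # Accepts the subset of ISO 8601 that fromisoformat understands.
    try:
        datetime.fromisoformat(date_str)
    except (TypeError, ValueError):
        return False
    return True

is_iso8601("2021-06-01T12:30:00")  # True
is_iso8601("June 1, 2021")         # False
```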
def unbounded(self): self._check_valid() return (self._problem._p.get_status() == qsoptex.SolutionStatus.UNBOUNDED)
Whether the solution is unbounded
def list_networks(kwargs=None, call=None): if call != 'function': raise SaltCloudSystemExit( 'The list_networks function must be called with ' '-f or --function.' ) return {'Networks': salt.utils.vmware.list_networks(_get_si())}
List all the standard networks for this VMware environment CLI Example: .. code-block:: bash salt-cloud -f list_networks my-vmware-config
def schedule_violations(schedule, events, slots): array = converter.schedule_to_array(schedule, events, slots) return array_violations(array, events, slots)
Take a schedule and return a list of violated constraints Parameters ---------- schedule : list or tuple a schedule in schedule form events : list or tuple of resources.Event instances slots : list or tuple of resources.Slot instances Returns ------- Generator of a list of strings indicating the nature of the violated constraints
def clear(self): self._level = None self._fingerprint = None self._transaction = None self._user = None self._tags = {} self._contexts = {} self._extras = {} self.clear_breadcrumbs() self._should_capture = True self._span = None
Clears the entire scope.
def build_command(self, config, **kwargs): command = ['perl', self.script, CLI_OPTIONS['config']['option'], config] for key, value in kwargs.items(): if value: command.append(CLI_OPTIONS[key]['option']) if value is True: command.append(CLI_OPTIONS[key].get('default', '1')) else: command.append(value) return command
Builds the command to execute MIP.
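A self-contained sketch of the flag-building logic with a hypothetical `CLI_OPTIONS` mapping — the real mapping (and the exact MIP flag names) ships with the surrounding package:

```python
# Hypothetical CLI_OPTIONS mapping for illustration only.
CLI_OPTIONS = {
    'config': {'option': '--config_file'},
    'dry_run': {'option': '--dry_run_all', 'default': '2'},
    'family': {'option': '--family_id'},
}

def build_command(script, config, **kwargs):
    command = ['perl', script, CLI_OPTIONS['config']['option'], config]
    for key, value in kwargs.items():
        if value:
            command.append(CLI_OPTIONS[key]['option'])
            # Boolean flags take the mapping's default value (or '1');
            # anything else is passed through as the flag's argument.
            if value is True:
                command.append(CLI_OPTIONS[key].get('default', '1'))
            else:
                command.append(value)
    return command

build_command('mip.pl', 'config.yaml', family='F0001', dry_run=True)
# ['perl', 'mip.pl', '--config_file', 'config.yaml',
#  '--family_id', 'F0001', '--dry_run_all', '2']
```

Falsy values (None, False, '') are skipped entirely, so callers can pass every known option and let the builder keep only the ones that were set.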
def relabel_squeeze(data): palette, index = np.unique(data, return_inverse=True) data = index.reshape(data.shape) return data
Makes relabeling of data if there are unused values.
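A pure-Python sketch of the same relabeling on a flat list, without the NumPy `unique`/`reshape` machinery: map the sorted set of used values onto consecutive integers starting at 0.

```python
def relabel_squeeze(data):
    # Sorted unique values play the role of np.unique's palette.
    palette = sorted(set(data))
    lookup = {value: index for index, value in enumerate(palette)}
    return [lookup[value] for value in data]

relabel_squeeze([10, 30, 10, 99])  # [0, 1, 0, 2]
```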
def _get_recipients(self, array): for address, name in array: if not name: yield address else: yield "\"%s\" <%s>" % (name, address)
Returns an iterator of strings in the form ['"Name" <address@example.com>', ...] from the array [["address@example.com", "Name"]]
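A standalone usage sketch of the recipient formatting (renamed to a module-level function for illustration):

```python
def get_recipients(array):
    # Yield the bare address, or '"Name" <address>' when a name is given.
    for address, name in array:
        if not name:
            yield address
        else:
            yield '"%s" <%s>' % (name, address)

list(get_recipients([("alice@example.com", "Alice"), ("bob@example.com", "")]))
# ['"Alice" <alice@example.com>', 'bob@example.com']
```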
def inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0): gradnorm = ndi.gaussian_gradient_magnitude(image, sigma, mode='nearest') return 1.0 / np.sqrt(1.0 + alpha * gradnorm)
Inverse of gradient magnitude. Compute the magnitude of the gradients in the image and then inverts the result in the range [0, 1]. Flat areas are assigned values close to 1, while areas close to borders are assigned values close to 0. This function or a similar one defined by the user should be applied over the image as a preprocessing step before calling `morphological_geodesic_active_contour`. Parameters ---------- image : (M, N) or (L, M, N) array Grayscale image or volume. alpha : float, optional Controls the steepness of the inversion. A larger value will make the transition between the flat areas and border areas steeper in the resulting array. sigma : float, optional Standard deviation of the Gaussian filter applied over the image. Returns ------- gimage : (M, N) or (L, M, N) array Preprocessed image (or volume) suitable for `morphological_geodesic_active_contour`.
def from_etree(root): cite_list = [] citations = root.xpath('Citations/EventIVORN') if citations: description = root.xpath('Citations/Description') if description: description_text = description[0].text else: description_text = None for entry in root.Citations.EventIVORN: if entry.text: cite_list.append( Cite(ref_ivorn=entry.text, cite_type=entry.attrib['cite'], description=description_text) ) else: logger.info( 'Ignoring empty citation in {}'.format( root.attrib['ivorn'])) return cite_list
Load up the citations, if present, for initializing with the Voevent.