code: string (lengths 51 to 2.38k) | docstring: string (lengths 4 to 15.2k)
def average(sequence, key):
    return sum(map(key, sequence)) / float(len(sequence))
Averages a sequence based on a key.
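A minimal usage sketch of `average` (the function is reproduced here so the example is self-contained):

```python
def average(sequence, key):
    return sum(map(key, sequence)) / float(len(sequence))

people = [{'age': 20}, {'age': 30}, {'age': 40}]
print(average(people, key=lambda p: p['age']))  # 30.0
```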
def custom_scale_mixture_prior_builder(getter, name, *args, **kwargs):
    del getter
    del name
    del args
    del kwargs
    return CustomScaleMixture(
        FLAGS.prior_pi, FLAGS.prior_sigma1, FLAGS.prior_sigma2)
A builder for the gaussian scale-mixture prior of Fortunato et al.

Please see https://arxiv.org/abs/1704.02798, section 7.1.

Args:
    getter: The `getter` passed to a `custom_getter`. Please see the
        documentation for `tf.get_variable`.
    name: The `name` argument passed to `tf.get_variable`.
    *args: Positional arguments forwarded by `tf.get_variable`.
    **kwargs: Keyword arguments forwarded by `tf.get_variable`.

Returns:
    An instance of `tfp.distributions.Distribution` representing the prior
    distribution over the variable in question.
def upper_camel(string, prefix='', suffix=''):
    return require_valid(append_underscore_if_keyword(''.join(
        upper_case_first_char(word)
        for word in en.words(' '.join([prefix, string, suffix])))))
Generate a camel-case identifier with the first word capitalised. Useful
for class names.

Takes a string, an optional prefix, and an optional suffix. `prefix` can be
set to `''`, though be careful: without a prefix, the function will raise
`InvalidIdentifier` when your string starts with a number.

Example:

>>> upper_camel("I'm a class", prefix='')
'IAmAClass'
def map_hmms(input_model, mapping):
    output_model = copy.copy(input_model)
    o_hmms = []
    for i_hmm in input_model['hmms']:
        i_hmm_name = i_hmm['name']
        o_hmm_names = mapping.get(i_hmm_name, [i_hmm_name])
        for o_hmm_name in o_hmm_names:
            o_hmm = copy.copy(i_hmm)
            o_hmm['name'] = o_hmm_name
            o_hmms.append(o_hmm)
    output_model['hmms'] = o_hmms
    return output_model
Create a new HTK HMM model given a model and a mapping dictionary.

:param input_model: The model to transform, of type dict.
:param mapping: A dictionary from string -> list(string).
:return: The transformed model, of type dict.
def escape(str):
    out = ''
    for char in str:
        if char in '\\ ':
            out += '\\'
        out += char
    return out
Precede all special characters (backslash and space) with a backslash.
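A self-contained usage sketch of `escape` (the parameter is renamed `s` here to avoid shadowing the builtin):

```python
def escape(s):
    # Precede backslashes and spaces with a backslash.
    out = ''
    for char in s:
        if char in '\\ ':
            out += '\\'
        out += char
    return out

print(escape('my file.txt'))  # my\ file.txt
```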
def graph_to_dot(graph, node_renderer=None, edge_renderer=None):
    node_pairs = list(graph.nodes.items())
    edge_pairs = list(graph.edges.items())
    if node_renderer is None:
        node_renderer_wrapper = lambda nid: ''
    else:
        node_renderer_wrapper = lambda nid: ' [%s]' % ','.join(
            ['%s=%s' % tpl for tpl in list(node_renderer(graph, nid).items())])
    graph_string = 'digraph G {\n'
    graph_string += 'overlap=scale;\n'
    for node_id, node in node_pairs:
        graph_string += '%i%s;\n' % (node_id, node_renderer_wrapper(node_id))
    for edge_id, edge in edge_pairs:
        node_a = edge['vertices'][0]
        node_b = edge['vertices'][1]
        graph_string += '%i -> %i;\n' % (node_a, node_b)
    graph_string += '}'
    return graph_string
Produces a DOT specification string from the provided graph.
def ks_significance(fg_pos, bg_pos=None):
    # NOTE: bg_pos is accepted but currently unused.
    p = ks_pvalue(fg_pos, max(fg_pos))
    if p > 0:
        return -np.log10(p)
    else:
        return np.inf
Computes the -log10 of the Kolmogorov-Smirnov p-value of the position
distribution.

Parameters
----------
fg_pos : array_like
    The list of values for the positive set.
bg_pos : array_like, optional
    The list of values for the negative set.

Returns
-------
p : float
    -log10(KS p-value).
def instantiate_client(_unused_client, _unused_to_delete):
    from google.cloud import logging

    # Default credentials and project.
    client = logging.Client()

    # Explicit credentials and project.
    credentials = object()
    client = logging.Client(project="my-project", credentials=credentials)
Instantiate client.
def create_rflink_connection(port=None, host=None, baud=57600,
                             protocol=RflinkProtocol, packet_callback=None,
                             event_callback=None, disconnect_callback=None,
                             ignore=None, loop=None):
    # Resolve the loop once so it is also valid for create_connection below.
    loop = loop if loop else asyncio.get_event_loop()
    protocol = partial(
        protocol,
        loop=loop,
        packet_callback=packet_callback,
        event_callback=event_callback,
        disconnect_callback=disconnect_callback,
        ignore=ignore if ignore else [],
    )
    if host:
        conn = loop.create_connection(protocol, host, port)
    else:
        conn = create_serial_connection(loop, protocol, port, baud)
    return conn
Create Rflink manager class, returns transport coroutine.
def new_type(type_name: str, prefix: Optional[str] = None) -> str:
    if Naming.TYPE_PREFIX in type_name:
        raise TypeError('Cannot create new type: type {} is already prefixed.'.format(type_name))
    prefix = (prefix + Naming.TYPE_PREFIX) if prefix is not None else ''
    return prefix + type_name
Creates a resource type, optionally with a prefix.

Following the rules of JSON-LD, we use prefixes to disambiguate between
different types with the same name: one can Accept a device or a project.
In eReuse.org there are different events with the same names; in
linked-data terms they have different URIs. In eReuse.org, we solve this
with the following:

    "@type": "devices:Accept"   // the URI for these events is 'devices/events/accept'
    "@type": "projects:Accept"  // the URI for these events is 'projects/events/accept'

The prefix is only used in events, when there are ambiguities. These are
equivalent:

    "@type": "devices:Accept"
    "@type": "Accept"

But these are not:

    "@type": "projects:Accept"  // it is an event from a project
    "@type": "Accept"           // it is an event from a device
def right_shift_blockwise(x, query_shape, name=None):
    with tf.variable_scope(
            name, default_name="right_shift_blockwise", values=[x]):
        x_list_shape = x.get_shape().as_list()
        x_shape = common_layers.shape_list(x)
        x = tf.expand_dims(x, axis=1)
        x = pad_to_multiple_2d(x, query_shape)
        padded_x_shape = common_layers.shape_list(x)
        x_indices = gather_indices_2d(x, query_shape, query_shape)
        x_new = get_shifted_center_blocks(x, x_indices)
        output = scatter_blocks_2d(x_new, x_indices, padded_x_shape)
        output = tf.squeeze(output, axis=1)
        output = tf.slice(output, [0, 0, 0, 0],
                          [-1, x_shape[1], x_shape[2], -1])
        output.set_shape(x_list_shape)
        return output
Right shifts once in every block.

Args:
    x: a tensor of shape [batch, height, width, depth]
    query_shape: a 2d tuple of ints
    name: a string

Returns:
    output: a tensor of the same shape as x
def get_price(item):
    the_price = "No Default Pricing"
    for price in item.get('prices', []):
        if not price.get('locationGroupId'):
            the_price = "%0.4f" % float(price['hourlyRecurringFee'])
    return the_price
Finds the price with the default locationGroupId
def _list_records(self, rtype=None, name=None, content=None):
    records = [
        {
            'id': record['id'],
            'type': record['type'],
            'name': self._full_name(record['hostname']),
            'content': record['destination'],
            'priority': record['priority'],
            'ttl': self.zone_ttl,
        }
        for record in self._raw_records(None, rtype, name, content)
    ]
    LOGGER.debug('list_records: %s', records)
    return records
List all records. Return an empty list if no records found. ``rtype``, ``name`` and ``content`` are used to filter records.
def connect(
    uri=None,
    user=None,
    password=None,
    host=None,
    port=9091,
    database=None,
    protocol='binary',
    execution_type=EXECUTION_TYPE_CURSOR,
):
    client = MapDClient(
        uri=uri,
        user=user,
        password=password,
        host=host,
        port=port,
        database=database,
        protocol=protocol,
        execution_type=execution_type,
    )
    if options.default_backend is None:
        options.default_backend = client
    return client
Create a MapDClient for use with Ibis.

:param uri: str
:param user: str
:param password: str
:param host: str
:param port: int
:param database: str
:param protocol: str
:param execution_type: int
:return: MapDClient
def get_grade_system_lookup_session(self, proxy):
    if not self.supports_grade_system_lookup():
        raise errors.Unimplemented()
    return sessions.GradeSystemLookupSession(proxy=proxy, runtime=self._runtime)
Gets the ``OsidSession`` associated with the grade system lookup service.

arg:    proxy (osid.proxy.Proxy): a proxy
return: (osid.grading.GradeSystemLookupSession) - a
        ``GradeSystemLookupSession``
raise:  NullArgument - ``proxy`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - ``supports_grade_system_lookup()`` is ``false``

*compliance: optional -- This method must be implemented if
``supports_grade_system_lookup()`` is ``true``.*
def _parse_result(self, result):
    if result is not True:
        for section, errors in result.iteritems():
            for key, value in errors.iteritems():
                if value is not True:
                    message = (
                        '"{0}" option in [{1}] is invalid value. {2}'
                        ''.format(key, section, value)
                    )
                    print(message)
        err_message = (
            'Some options are invalid!!! Please see the log!!!'
        )
        raise validate.ValidateError(err_message)
    else:
        return True
This method parses validation results.

If result is True, do nothing. If the result includes even one False,
this method parses the result and raises an exception.
def print_lamps():
    print("Printing information about all lamps paired to the Gateway")
    lights = [dev for dev in devices if dev.has_light_control]
    if len(lights) == 0:
        exit(bold("No lamps paired"))
    container = []
    for l in lights:
        container.append(l.raw)
    print(jsonify(container))
Print all lamp devices as JSON
def text_assert(self, anchor, byte=False):
    if not self.text_search(anchor, byte=byte):
        raise DataNotFound(u'Substring not found: %s' % anchor)
If `anchor` is not found then raise `DataNotFound` exception.
def all_lemmas(self):
    for lemma_dict in self._mongo_db.lexunits.find():
        yield Lemma(self, lemma_dict)
A generator over all the lemmas in the GermaNet database.
def by_geopoint(self, lat, long):
    header, content = self._http_request(self.BASE_URL, lat=lat, long=long)
    return json.loads(content)
Perform a Yelp Neighborhood API Search based on a geopoint.

Args:
    lat - geopoint latitude
    long - geopoint longitude
def read_ip_ranges(filename, local_file=True, ip_only=False, conditions=[]):
    targets = []
    data = load_data(filename, local_file=local_file)
    if 'source' in data:
        conditions = data['conditions']
        local_file = data['local_file'] if 'local_file' in data else False
        data = load_data(data['source'], local_file=local_file, key_name='prefixes')
    else:
        data = data['prefixes']
    for d in data:
        condition_passed = True
        for condition in conditions:
            if type(condition) != list or len(condition) < 3:
                continue
            condition_passed = pass_condition(d[condition[0]], condition[1], condition[2])
            if not condition_passed:
                break
        if condition_passed:
            targets.append(d)
    if ip_only:
        ips = []
        for t in targets:
            ips.append(t['ip_prefix'])
        return ips
    else:
        return targets
Returns the list of IP prefixes from an ip-ranges file.

:param filename:
:param local_file:
:param ip_only:
:param conditions:
:return:
def do_filter(qs, qdata, quick_query_fields=[], int_quick_query_fields=[]):
    try:
        qs = qs.filter(
            __gen_quick_query_params(
                qdata.get('q_quick_search_kw'),
                quick_query_fields,
                int_quick_query_fields)
        )
        q, kw_query_params = __gen_query_params(qdata)
        qs = qs.filter(q, **kw_query_params)
    except Exception:
        import traceback
        traceback.print_exc()
    return qs
Auto-filter a queryset by dict.

qs: queryset to filter.
qdata:
quick_query_fields:
int_quick_query_fields:
def parseRangeString(s, convertToZeroBased=False):
    result = set()
    for _range in s.split(','):
        match = _rangeRegex.match(_range)
        if match:
            start, end = match.groups()
            start = int(start)
            if end is None:
                end = start
            else:
                end = int(end)
            if start > end:
                start, end = end, start
            if convertToZeroBased:
                result.update(range(start - 1, end))
            else:
                result.update(range(start, end + 1))
        else:
            raise ValueError(
                'Illegal range %r. Ranges must be single numbers or '
                'number-number.' % _range)
    return result
Parse a range string of the form 1-5,12,100-200.

@param s: A C{str} specifying a set of numbers, given in the form of
    comma-separated numeric ranges or individual indices.
@param convertToZeroBased: If C{True}, all indices will have one
    subtracted from them.
@return: A C{set} of all C{int}s in the specified set.
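A self-contained sketch of `parseRangeString` in use. The module-level `_rangeRegex` is not shown in the source, so a plausible definition is assumed here:

```python
import re

# Assumed definition; the original _rangeRegex is not shown in the source.
_rangeRegex = re.compile(r'^\s*(\d+)(?:\s*-\s*(\d+))?\s*$')

def parseRangeString(s, convertToZeroBased=False):
    result = set()
    for _range in s.split(','):
        match = _rangeRegex.match(_range)
        if not match:
            raise ValueError('Illegal range %r. Ranges must be single numbers '
                             'or number-number.' % _range)
        start, end = match.groups()
        start = int(start)
        end = start if end is None else int(end)
        if start > end:
            start, end = end, start
        if convertToZeroBased:
            result.update(range(start - 1, end))
        else:
            result.update(range(start, end + 1))
    return result

print(sorted(parseRangeString('1-3,5')))  # [1, 2, 3, 5]
print(sorted(parseRangeString('1-3,5', convertToZeroBased=True)))  # [0, 1, 2, 4]
```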
def cos(cls, x: 'TensorFluent') -> 'TensorFluent':
    return cls._unary_op(x, tf.cos, tf.float32)
Returns a TensorFluent for the cos function.

Args:
    x: The input fluent.

Returns:
    A TensorFluent wrapping the cos function.
def _downgrade_v1(op):
    op.drop_index('ix_futures_contracts_root_symbol')
    op.drop_index('ix_futures_contracts_symbol')
    with op.batch_alter_table('futures_contracts') as batch_op:
        batch_op.alter_column(column_name='multiplier',
                              new_column_name='contract_multiplier')
        batch_op.drop_column('tick_size')
    op.create_index('ix_futures_contracts_root_symbol',
                    table_name='futures_contracts',
                    columns=['root_symbol'])
    op.create_index('ix_futures_contracts_symbol',
                    table_name='futures_contracts',
                    columns=['symbol'],
                    unique=True)
Downgrade assets db by removing the 'tick_size' column and renaming the 'multiplier' column.
def contribute_to_class(model_class, name='slots', descriptor=None):
    rel_obj = descriptor or PlaceholderDescriptor()
    rel_obj.contribute_to_class(model_class, name)
    setattr(model_class, name, rel_obj)
    return True
Function that adds a descriptor to a model class.

:param model_class: The model class the descriptor is to be added to.
:param name: The attribute name the descriptor will be assigned to.
:param descriptor: The descriptor instance to be used. If none is
    specified it will default to
    ``icekit.plugins.descriptors.PlaceholderDescriptor``.
:return: True
def measure(*qubits: raw_types.Qid,
            key: Optional[str] = None,
            invert_mask: Tuple[bool, ...] = ()
            ) -> gate_operation.GateOperation:
    for qubit in qubits:
        if isinstance(qubit, np.ndarray):
            raise ValueError(
                'measure() was called with a numpy ndarray. Perhaps you '
                'meant to call measure_state_vector on a numpy array?')
        elif not isinstance(qubit, raw_types.Qid):
            raise ValueError(
                'measure() was called with a type different than Qid.')
    if key is None:
        key = _default_measurement_key(qubits)
    return MeasurementGate(len(qubits), key, invert_mask).on(*qubits)
Returns a single MeasurementGate applied to all the given qubits.

The qubits are measured in the computational basis.

Args:
    *qubits: The qubits that the measurement gate should measure.
    key: The string key of the measurement. If this is None, it defaults
        to a comma-separated list of the target qubits' str values.
    invert_mask: A list of Truthy or Falsey values indicating whether
        the corresponding qubits should be flipped. None indicates no
        inverting should be done.

Returns:
    An operation targeting the given qubits with a measurement.

Raises:
    ValueError: if the qubits are not instances of Qid.
def commit(self, cont=False):
    self.journal.close()
    self.journal = None
    os.remove(self.j_file)
    for itm in os.listdir(self.tmp_dir):
        os.remove(cpjoin(self.tmp_dir, itm))
    if cont is True:
        self.begin()
Finish a transaction
def to_OrderedDict(self, include_null=True):
    if include_null:
        return OrderedDict(self.items())
    else:
        items = list()
        for c in self.__table__._columns:
            try:
                items.append((c.name, self.__dict__[c.name]))
            except KeyError:
                pass
        return OrderedDict(items)
Convert to OrderedDict.
def post(self, *args, **kwargs):
    try:
        resp = self.session.post(*args, **kwargs)
        if resp.status_code in _EXCEPTIONS_BY_CODE:
            raise _EXCEPTIONS_BY_CODE[resp.status_code](resp.reason)
        if resp.status_code != requests.codes['ok']:
            raise exceptions.Etcd3Exception(resp.reason)
    except requests.exceptions.Timeout as ex:
        raise exceptions.ConnectionTimeoutError(six.text_type(ex))
    except requests.exceptions.ConnectionError as ex:
        raise exceptions.ConnectionFailedError(six.text_type(ex))
    return resp.json()
Helper method for HTTP POST.

:param args:
:param kwargs:
:return: json response
def _get_symbol_by_slope(self, slope, default_symbol):
    if slope > math.tan(3 * math.pi / 8):
        draw_symbol = "|"
    elif math.tan(math.pi / 8) < slope < math.tan(3 * math.pi / 8):
        draw_symbol = u"\u27cb"
    elif abs(slope) < math.tan(math.pi / 8):
        draw_symbol = "-"
    elif math.tan(-3 * math.pi / 8) < slope < math.tan(-math.pi / 8):
        draw_symbol = u"\u27CD"
    elif slope < math.tan(-3 * math.pi / 8):
        draw_symbol = "|"
    else:
        draw_symbol = default_symbol
    return draw_symbol
Return a line symbol oriented approximately along the slope value.
def memory_read8(self, addr, num_bytes, zone=None):
    return self.memory_read(addr, num_bytes, zone=zone, nbits=8)
Reads memory from the target system in units of bytes.

Args:
    self (JLink): the ``JLink`` instance
    addr (int): start address to read from
    num_bytes (int): number of bytes to read
    zone (str): memory zone to read from

Returns:
    List of bytes read from the target system.

Raises:
    JLinkException: if memory could not be read.
def get_driver_config(driver):
    response = errorIfUnauthorized(role='admin')
    if response:
        return response
    else:
        response = ApitaxResponse()
    return Response(status=200, body=response.getResponseBody())
Retrieve the config of a loaded driver  # noqa: E501

:param driver: The driver to use for the request. ie. github
:type driver: str

:rtype: Response
def validate(self, value, redis):
    value = super().validate(value, redis)
    if is_hashed(value):
        return value
    return make_password(value)
Hash passwords given via HTTP.
def sync(self):
    xbin = self.xbin.value()
    ybin = self.ybin.value()
    n = 0
    for xsl, xsr, ys, nx, ny in self:
        if xbin > 1:
            xsl = xbin * ((xsl - 1) // xbin) + 1
            self.xsl[n].set(xsl)
            xsr = xbin * ((xsr - 1025) // xbin) + 1025
            self.xsr[n].set(xsr)
        if ybin > 1:
            ys = ybin * ((ys - 1) // ybin) + 1
            self.ys[n].set(ys)
        n += 1
    g = get_root(self).globals
    self.sbutt.config(bg=g.COL['main'])
    self.sbutt.config(state='disable')
Synchronise the settings. This means that the pixel start values are shifted downwards so that they are synchronised with a full-frame binned version. This does nothing if the binning factors == 1.
def download_all(self, urls, dir_path):
    filenames = []
    try:
        for url in urls:
            filenames.append(self.download(url, dir_path))
    except DownloadError as e:
        for filename in filenames:
            os.remove(filename)
        raise e
    return filenames
Download all the resources specified by urls into dir_path. The resulting
file paths are returned. DownloadError is raised if at least one of the
resources cannot be downloaded; in that case, already-downloaded resources
are erased.
def from_array(array):
    if array is None or not array:
        return None
    assert_type_or_raise(array, dict, parameter_name="array")
    data = {}
    data['file_id'] = u(array.get('file_id'))
    data['length'] = int(array.get('length'))
    data['duration'] = int(array.get('duration'))
    data['thumb'] = PhotoSize.from_array(array.get('thumb')) if array.get('thumb') is not None else None
    data['file_size'] = int(array.get('file_size')) if array.get('file_size') is not None else None
    data['_raw'] = array
    return VideoNote(**data)
Deserialize a new VideoNote from a given dictionary. :return: new VideoNote instance. :rtype: VideoNote
def is_type_sub_type_of(
    schema: GraphQLSchema, maybe_subtype: GraphQLType, super_type: GraphQLType
) -> bool:
    if maybe_subtype is super_type:
        return True
    if is_non_null_type(super_type):
        if is_non_null_type(maybe_subtype):
            return is_type_sub_type_of(
                schema,
                cast(GraphQLNonNull, maybe_subtype).of_type,
                cast(GraphQLNonNull, super_type).of_type,
            )
        return False
    elif is_non_null_type(maybe_subtype):
        return is_type_sub_type_of(
            schema, cast(GraphQLNonNull, maybe_subtype).of_type, super_type
        )
    if is_list_type(super_type):
        if is_list_type(maybe_subtype):
            return is_type_sub_type_of(
                schema,
                cast(GraphQLList, maybe_subtype).of_type,
                cast(GraphQLList, super_type).of_type,
            )
        return False
    elif is_list_type(maybe_subtype):
        return False
    if (
        is_abstract_type(super_type)
        and is_object_type(maybe_subtype)
        and schema.is_possible_type(
            cast(GraphQLAbstractType, super_type),
            cast(GraphQLObjectType, maybe_subtype),
        )
    ):
        return True
    return False
Check whether a type is subtype of another type in a given schema. Provided a type and a super type, return true if the first type is either equal or a subset of the second super type (covariant).
def age(self):
    if self.exists:
        return datetime.utcnow() - datetime.utcfromtimestamp(
            os.path.getmtime(self.name))
    return timedelta()
Age of the video
def create_module(module, target):
    module_x = module.split('.')
    cur_path = ''
    for path in module_x:
        cur_path = os.path.join(cur_path, path)
        if not os.path.isdir(os.path.join(target, cur_path)):
            os.mkdir(os.path.join(target, cur_path))
        if not os.path.exists(os.path.join(target, cur_path, '__init__.py')):
            touch(os.path.join(target, cur_path, '__init__.py'))
    return cur_path
Create a module directory structure into the target directory.
def chunks(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))
simple two-line alternative to `ubelt.chunks`
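A usage sketch of `chunks` (reproduced here so the example is self-contained). Note that the final chunk may be shorter than `size`:

```python
def chunks(seq, size):
    # Yield successive size-length slices of seq; the last one may be shorter.
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

print(list(chunks([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
print(list(chunks('abcdef', 3)))         # ['abc', 'def']
```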
def _get_all_relationships(self):
    relationships_all = set()
    for goterm in self.go2obj.values():
        if goterm.relationship:
            relationships_all.update(goterm.relationship)
        if goterm.relationship_rev:
            relationships_all.update(goterm.relationship_rev)
    return relationships_all
Return all relationships seen in GO Dag subset.
def consumer_commit_for_times(consumer, partition_to_offset, atomic=False):
    no_offsets = set()
    for tp, offset in six.iteritems(partition_to_offset):
        if offset is None:
            logging.error(
                "No offsets found for topic-partition {tp}. Either timestamps not supported"
                " for the topic {tp}, or no offsets found after timestamp specified, or there is no"
                " data in the topic-partition.".format(tp=tp),
            )
            no_offsets.add(tp)
    if atomic and len(no_offsets) > 0:
        logging.error(
            "Commit aborted; offsets were not found for timestamps in"
            " topics {}".format(",".join([str(tp) for tp in no_offsets])),
        )
        return
    offsets_metadata = {
        tp: OffsetAndMetadata(partition_to_offset[tp].offset, metadata=None)
        for tp in six.iterkeys(partition_to_offset)
        if tp not in no_offsets
    }
    if len(offsets_metadata) != 0:
        consumer.commit(offsets_metadata)
Commits offsets to Kafka using the given KafkaConsumer and offsets, a
mapping of TopicPartition to Unix Epoch milliseconds timestamps.

Arguments:
    consumer (KafkaConsumer): an initialized kafka-python consumer.
    partition_to_offset (dict TopicPartition: OffsetAndTimestamp): Map of
        TopicPartition to OffsetAndTimestamp. Return value of
        offsets_for_times.
    atomic (bool): Flag to specify whether the commit should fail if
        offsets are not found for some TopicPartition: timestamp pairs.
def post_file(self, url, filename, file_stream, *args, **kwargs):
    res = self._conn.post(url,
                          files={filename: file_stream},
                          headers=self._prepare_headers(**kwargs))
    if res.status_code == 200 or res.status_code == 201:
        return res.text
    else:
        return None
Uploads file to provided url. Returns contents as text.

Args:
    **url**: address where to upload file
    **filename**: name of the uploaded file
    **file_stream**: file-like object to upload

    .. versionadded:: 0.3.2
    **additional_headers**: (optional) additional headers to be used with request

Returns:
    string
def guess_github_repo():
    p = subprocess.run(['git', 'ls-remote', '--get-url', 'origin'],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                       check=False)
    if p.stderr or p.returncode:
        return False
    url = p.stdout.decode('utf-8').strip()
    m = GIT_URL.fullmatch(url)
    if not m:
        return False
    return m.group(1)
Guesses the github repo for the current directory Returns False if no guess can be made.
def create_entry(self, group, **kwargs):
    if group not in self.groups:
        raise ValueError("Group doesn't exist / is not bound to this database.")
    uuid = binascii.hexlify(get_random_bytes(16))
    entry = Entry(uuid=uuid,
                  group_id=group.id,
                  created=util.now(),
                  modified=util.now(),
                  accessed=util.now(),
                  **kwargs)
    self.entries.append(entry)
    group.entries.append(entry)
    return entry
Create a new Entry object. The group which should hold the entry is needed.

image must be an unsigned int > 0, group a Group.

:param group: The associated group.
:keyword title:
:keyword icon:
:keyword url:
:keyword username:
:keyword password:
:keyword notes:
:keyword expires: Expiration date (if None, entry will never expire).
:type expires: datetime
:return: The new entry.
:rtype: :class:`keepassdb.model.Entry`
def intersection(self, *others, **kwargs):
    return self._combine_variant_collections(
        combine_fn=set.intersection,
        variant_collections=(self,) + others,
        kwargs=kwargs)
Returns the intersection of variants in several VariantCollection objects.
def get_services_health(self) -> dict:
    services_health = {}
    services_ids = self._get_services()
    for service_id in services_ids:
        service_name = DC.get_service_name(service_id)
        if DC.get_replicas(service_id) != DC.get_actual_replica(service_id):
            services_health[service_name] = "Unhealthy"
        else:
            services_health[service_name] = "Healthy"
    return services_health
Get the health of all services.

Returns:
    dict, service name to health status
def text(self, selector):
    result = self.__bs4.select(selector)
    if len(result) > 1:
        return [r.get_text() for r in result]
    if len(result) > 0:
        return result[0].get_text()
    return None
Return the text matched by the given CSS selector.

:param selector: `str` css selector
:return: `list`, `str` or `None`
def get_readable_time_string(seconds):
    seconds = int(seconds)
    minutes = seconds // 60
    seconds = seconds % 60
    hours = minutes // 60
    minutes = minutes % 60
    days = hours // 24
    hours = hours % 24

    result = ""
    if days > 0:
        result += "%d %s " % (days, "Day" if (days == 1) else "Days")
    if hours > 0:
        result += "%d %s " % (hours, "Hour" if (hours == 1) else "Hours")
    if minutes > 0:
        result += "%d %s " % (minutes, "Minute" if (minutes == 1) else "Minutes")
    if seconds > 0:
        result += "%d %s " % (seconds, "Second" if (seconds == 1) else "Seconds")
    return result.strip()
Returns human readable string from number of seconds
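A self-contained usage sketch of `get_readable_time_string`, reproduced here so it can be run as-is:

```python
def get_readable_time_string(seconds):
    # Break a second count into days / hours / minutes / seconds.
    seconds = int(seconds)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)

    result = ""
    if days > 0:
        result += "%d %s " % (days, "Day" if days == 1 else "Days")
    if hours > 0:
        result += "%d %s " % (hours, "Hour" if hours == 1 else "Hours")
    if minutes > 0:
        result += "%d %s " % (minutes, "Minute" if minutes == 1 else "Minutes")
    if seconds > 0:
        result += "%d %s " % (seconds, "Second" if seconds == 1 else "Seconds")
    return result.strip()

print(get_readable_time_string(90061))  # 1 Day 1 Hour 1 Minute 1 Second
print(get_readable_time_string(125))    # 2 Minutes 5 Seconds
```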
def walk_nodes(self, node, original):
    try:
        nodelist = self.parser.get_nodelist(node, original=original)
    except TypeError:
        nodelist = self.parser.get_nodelist(node, original=original, context=None)
    for node in nodelist:
        if isinstance(node, SassSrcNode):
            if node.is_sass:
                yield node
        else:
            for node in self.walk_nodes(node, original=original):
                yield node
Iterate over the nodes recursively yielding the templatetag 'sass_src'
def get(self, id, **options):
    if not self._item_path:
        raise AttributeError('get is not available for %s' % self._item_name)
    target = self._item_path % id
    json_data = self._redmine.get(target, **options)
    data = self._redmine.unwrap_json(self._item_type, json_data)
    data['_source_path'] = target
    return self._objectify(data=data)
Get a single item with the given ID
def check(text):
    err = "misc.suddenly"
    msg = u"Suddenly is nondescript, slows the action, and warns your reader."
    regex = "Suddenly,"
    return existence_check(text, [regex], err, msg, max_errors=3,
                           require_padding=False, offset=-1,
                           ignore_case=False)
Advice on sudden vs suddenly.
def get_cloud_from_metadata_endpoint(arm_endpoint, name=None, session=None):
    cloud = Cloud(name or arm_endpoint)
    cloud.endpoints.management = arm_endpoint
    cloud.endpoints.resource_manager = arm_endpoint
    _populate_from_metadata_endpoint(cloud, arm_endpoint, session)
    return cloud
Get a Cloud object from an ARM endpoint.

.. versionadded:: 0.4.11

:Example:

.. code:: python

   get_cloud_from_metadata_endpoint("https://management.azure.com/", "Public Azure")

:param str arm_endpoint: The ARM management endpoint
:param str name: An optional name for the Cloud object. Otherwise it's the
    ARM endpoint.
:param requests.Session session: A requests session object if you need to
    configure proxy, cert, etc.
:rtype: Cloud
:returns: a Cloud object
:raises: MetadataEndpointError if unable to build the Cloud object
def process_part(self, char):
    if char in self.whitespace or char == self.eol_char:
        self.parts.append(''.join(self.part))
        self.part = []
        self.process_char = self.process_delimiter
        if char == self.eol_char:
            self.complete = True
        return
    if char in self.quote_chars:
        self.inquote = char
        self.process_char = self.process_quote
        return
    self.part.append(char)
Process chars while in a part
def reverse(self, lon, lat, types=None, limit=None):
    uri = URITemplate(self.baseuri + '/{dataset}/{lon},{lat}.json').expand(
        dataset=self.name,
        lon=str(round(float(lon), self.precision.get('reverse', 5))),
        lat=str(round(float(lat), self.precision.get('reverse', 5))))
    params = {}
    if types:
        types = list(types)
        params.update(self._validate_place_types(types))
    if limit is not None:
        if not types or len(types) != 1:
            raise InvalidPlaceTypeError(
                'Specify a single type when using limit with reverse geocoding')
        params.update(limit='{0}'.format(limit))
    resp = self.session.get(uri, params=params)
    self.handle_http_error(resp)

    def geojson():
        return resp.json()
    resp.geojson = geojson
    return resp
Returns a Requests response object that contains a GeoJSON collection of places near the given longitude and latitude. `response.geojson()` returns the geocoding result as GeoJSON. `response.status_code` returns the HTTP API status code. See: https://www.mapbox.com/api-documentation/search/#reverse-geocoding.
def _setuintbe(self, uintbe, length=None):
    if length is not None and length % 8 != 0:
        raise CreationError("Big-endian integers must be whole-byte. "
                            "Length = {0} bits.", length)
    self._setuint(uintbe, length)
Set the bitstring to a big-endian unsigned int interpretation.
def output(id, url):
    try:
        experiment = ExperimentClient().get(normalize_job_name(id))
    except FloydException:
        experiment = ExperimentClient().get(id)
    output_dir_url = "%s/%s/files" % (floyd.floyd_web_host, experiment.name)
    if url:
        floyd_logger.info(output_dir_url)
    else:
        floyd_logger.info("Opening output path in your browser ...")
        webbrowser.open(output_dir_url)
View the files from a job.
def getFeatureById(self, featureId):
    sql = "SELECT * FROM FEATURE WHERE id = ?"
    query = self._dbconn.execute(sql, (featureId,))
    ret = query.fetchone()
    if ret is None:
        return None
    return sqlite_backend.sqliteRowToDict(ret)
Fetch a feature by its featureId.

:param featureId: the feature ID as found in GFF3 records
:return: dictionary representing a feature object, or None if no match
    is found.
def op_match_funcdef_handle(self, original, loc, tokens):
    if len(tokens) == 3:
        func, args = get_infix_items(tokens)
        cond = None
    elif len(tokens) == 4:
        func, args = get_infix_items(tokens[:-1])
        cond = tokens[-1]
    else:
        raise CoconutInternalException("invalid infix match function definition tokens", tokens)
    name_tokens = [func, args]
    if cond is not None:
        name_tokens.append(cond)
    return self.name_match_funcdef_handle(original, loc, name_tokens)
Process infix match defs. Result must be passed to insert_docstring_handle.
def _get_timezone(self, root):
    tz_str = root.xpath('//div[@class="smallfont" and @align="center"]')[0].text
    hours = int(self._tz_re.search(tz_str).group(1))
    return tzoffset(tz_str, hours * 60)
Find timezone information on the bottom of the page.
def from_coo(cls, obj, vartype=None):
    import dimod.serialization.coo as coo
    if isinstance(obj, str):
        return coo.loads(obj, cls=cls, vartype=vartype)
    return coo.load(obj, cls=cls, vartype=vartype)
Deserialize a binary quadratic model from a COOrdinate_ format encoding.

.. _COOrdinate: https://en.wikipedia.org/wiki/Sparse_matrix#Coordinate_list_(COO)

Args:
    obj (str/file):
        Either a string or a `.read()`-supporting `file object`_ that
        represents linear and quadratic biases for a binary quadratic
        model. This data is stored as a list of 3-tuples, (i, j, bias),
        where :math:`i=j` for linear biases.

    vartype (:class:`.Vartype`/str/set, optional):
        Variable type for the binary quadratic model. Accepted input
        values:

        * :class:`.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``
        * :class:`.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``

        If not provided, the vartype must be specified with a header in
        the file.

.. _file object: https://docs.python.org/3/glossary.html#term-file-object

.. note:: Variables must use index labels (numeric labels). Binary
    quadratic models created from COOrdinate format encoding have offsets
    set to zero.

Examples:
    An example of a binary quadratic model encoded in COOrdinate format.

    .. code-block:: none

        0 0 0.50000
        0 1 0.50000
        1 1 -1.50000

    The COOrdinate format with a header:

    .. code-block:: none

        # vartype=SPIN
        0 0 0.50000
        0 1 0.50000
        1 1 -1.50000

    This example saves a binary quadratic model to a COOrdinate-format
    file and creates a new model by reading the saved file.

    >>> import dimod
    >>> bqm = dimod.BinaryQuadraticModel({0: -1.0, 1: 1.0}, {(0, 1): -1.0}, 0.0, dimod.BINARY)
    >>> with open('tmp.qubo', 'w') as file:  # doctest: +SKIP
    ...     bqm.to_coo(file)
    >>> with open('tmp.qubo', 'r') as file:  # doctest: +SKIP
    ...     new_bqm = dimod.BinaryQuadraticModel.from_coo(file, dimod.BINARY)
    >>> any(new_bqm)  # doctest: +SKIP
    True
def hide_object(self, key, hide=True): self.object_queue.put(SlipHideObject(key, hide))
Hide an object on the map by key.
def get_task(config): path = os.path.join(config['work_dir'], "task.json") message = "Can't read task from {}!\n%(exc)s".format(path) contents = load_json_or_yaml(path, is_path=True, message=message) return contents
Read the task.json from work_dir. Args: config (dict): the running config, to find work_dir. Returns: dict: the contents of task.json Raises: ScriptWorkerTaskException: on error.
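A minimal stdlib-only sketch of the same pattern; the real scriptworker loader also accepts YAML and raises `ScriptWorkerTaskException` rather than the plain `ValueError` used here.

```python
import json
import os

def read_task(work_dir):
    """Load task.json from work_dir, wrapping read/parse errors with context."""
    path = os.path.join(work_dir, "task.json")
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, ValueError) as exc:
        raise ValueError("Can't read task from {}!\n{}".format(path, exc))
```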
def get_airport_metars_hist(self, iata): url = AIRPORT_BASE.format(iata) + "/weather" return self._fr24.get_airport_metars_hist(url)
Retrieve the metar data for the past 72 hours. The data will not be parsed to a readable format. Given the IATA code of an airport, this method returns the metar information for the last 72 hours. Args: iata (str): The IATA code for an airport, e.g. HYD Returns: The metar data for the airport Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_metars_hist('HYD')
def report_open_file(self, options): filename = options['filename'] logger.debug('Call LSP for %s' % filename) language = options['language'] callback = options['codeeditor'] stat = self.main.lspmanager.start_client(language.lower()) self.main.lspmanager.register_file( language.lower(), filename, callback) if stat: if language.lower() in self.lsp_editor_settings: self.lsp_server_ready( language.lower(), self.lsp_editor_settings[ language.lower()]) else: editor = self.get_current_editor() editor.lsp_ready = False
Request to start an LSP server to handle a language.
def to_string(mnemonic): strings = { ReilMnemonic.ADD: "add", ReilMnemonic.SUB: "sub", ReilMnemonic.MUL: "mul", ReilMnemonic.DIV: "div", ReilMnemonic.MOD: "mod", ReilMnemonic.BSH: "bsh", ReilMnemonic.AND: "and", ReilMnemonic.OR: "or", ReilMnemonic.XOR: "xor", ReilMnemonic.LDM: "ldm", ReilMnemonic.STM: "stm", ReilMnemonic.STR: "str", ReilMnemonic.BISZ: "bisz", ReilMnemonic.JCC: "jcc", ReilMnemonic.UNKN: "unkn", ReilMnemonic.UNDEF: "undef", ReilMnemonic.NOP: "nop", ReilMnemonic.SEXT: "sext", ReilMnemonic.SDIV: "sdiv", ReilMnemonic.SMOD: "smod", ReilMnemonic.SMUL: "smul", } return strings[mnemonic]
Return the string representation of the given mnemonic.
def vcard(self, qs): try: import vobject except ImportError: print(self.style.ERROR("Please install vobject to use the vcard export format.")) sys.exit(1) out = sys.stdout for ent in qs: card = vobject.vCard() card.add('fn').value = full_name(**ent) if not ent['last_name'] and not ent['first_name']: card.add('n').value = vobject.vcard.Name(full_name(**ent)) else: card.add('n').value = vobject.vcard.Name(ent['last_name'], ent['first_name']) emailpart = card.add('email') emailpart.value = ent['email'] emailpart.type_param = 'INTERNET' out.write(card.serialize())
VCARD format.
def build_act(cls: Type[_Block], node: ast.stmt, test_func_node: ast.FunctionDef) -> _Block: add_node_parents(test_func_node) act_block_node = node while act_block_node.parent != test_func_node: act_block_node = act_block_node.parent return cls([act_block_node], LineType.act)
Act block is a single node - either the act node itself, or the node that wraps the act node.
def _parse_row(row): data = [] labels = HtmlTable._get_row_tag(row, "th") if labels: data += labels columns = HtmlTable._get_row_tag(row, "td") if columns: data += columns return data
Parses an HTML row :param row: HTML row :return: list of values in the row
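The row parsing can be sketched with the standard library's `html.parser`; note that this simplified version keeps cells in document order, whereas the original collects all `<th>` cells before the `<td>` cells.

```python
from html.parser import HTMLParser

class RowParser(HTMLParser):
    """Collect the text of <th> and <td> cells in document order."""
    def __init__(self):
        super().__init__()
        self.cells = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag in ("th", "td"):
            self._in_cell = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag in ("th", "td"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self.cells[-1] += data

def parse_row(row_html):
    """Return the list of cell values in an HTML table row."""
    parser = RowParser()
    parser.feed(row_html)
    return parser.cells
```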
def HandlePeerInfoReceived(self, payload): addrs = IOHelper.AsSerializableWithType(payload, 'neo.Network.Payloads.AddrPayload.AddrPayload') if not addrs: return for nawt in addrs.NetworkAddressesWithTime: self.leader.RemoteNodePeerReceived(nawt.Address, nawt.Port, self.prefix)
Process response of `self.RequestPeerInfo`.
def update(self, notification_level): data = values.of({'NotificationLevel': notification_level, }) payload = self._version.update( 'POST', self._uri, data=data, ) return UserChannelInstance( self._version, payload, service_sid=self._solution['service_sid'], user_sid=self._solution['user_sid'], channel_sid=self._solution['channel_sid'], )
Update the UserChannelInstance :param UserChannelInstance.NotificationLevel notification_level: The push notification level to assign to the User Channel :returns: Updated UserChannelInstance :rtype: twilio.rest.chat.v2.service.user.user_channel.UserChannelInstance
def octaves(freq, fmin=20., fmax=2e4): if any(f <= 0 for f in (freq, fmin, fmax)): raise ValueError("Frequencies have to be positive") while freq < fmin: freq *= 2 while freq > fmax: freq /= 2 if freq < fmin: return [] return list(it.takewhile(lambda x: x > fmin, (freq * 2 ** harm for harm in it.count(0, -1)) ))[::-1] \ + list(it.takewhile(lambda x: x < fmax, (freq * 2 ** harm for harm in it.count(1)) ))
Given a frequency and a frequency range, returns all frequencies in that range that are an integer number of octaves away from the given frequency. Parameters ---------- freq : Frequency, in any (linear) unit. fmin, fmax : Frequency range, in the same unit as ``freq``. Defaults to 20.0 and 20,000.0, respectively. Returns ------- A list of frequencies, in the same unit as ``freq`` and in ascending order. Examples -------- >>> from audiolazy import octaves, sHz >>> octaves(440.) [27.5, 55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0, 7040.0, 14080.0] >>> octaves(440., fmin=3000) [3520.0, 7040.0, 14080.0] >>> Hz = sHz(44100)[1] # Conversion unit from sample rate >>> freqs = octaves(440 * Hz, fmin=300 * Hz, fmax = 1000 * Hz) # rad/sample >>> len(freqs) # Number of octaves 2 >>> [round(f, 6) for f in freqs] # Values in rad/sample [0.062689, 0.125379] >>> [round(f / Hz, 6) for f in freqs] # Values in Hz [440.0, 880.0]
def getPhysicalAddress(self, length=512): name = ctypes.create_string_buffer(length) self._ioctl(_HIDIOCGRAWPHYS(length), name, True) return name.value
Returns the device's physical address as a string. See the hidraw documentation for the meaning of the value, as it depends on the device's bus type.
def head(self, *args, **kwargs): self.model = self.get_model(kwargs.get('id')) result = yield self.model.fetch() if not result: self.not_found() return if not self.has_read_permission(): self.permission_denied() return self.add_headers() self.set_status(200) self.finish()
Handle HEAD requests for the item :param args: :param kwargs:
def save_image(image, local_filename): r_name, __, i_name = image.rpartition('/') i_name, __, __ = i_name.partition(':') with temp_dir() as remote_tmp: archive = posixpath.join(remote_tmp, 'image_{0}.tar.gz'.format(i_name)) run('docker save {0} | gzip --stdout > {1}'.format(image, archive), shell=False) get(archive, local_filename)
Saves a Docker image as a compressed tarball. This command line client method is a suitable alternative if the Remote API method is too slow. :param image: Image id or tag. :type image: unicode :param local_filename: Local file name to store the image into. If this is a directory, the image will be stored there as a file named ``image_<Image name>.tar.gz``.
def df2list(df): subjects = df.index.levels[0].values.tolist() lists = df.index.levels[1].values.tolist() idx = pd.IndexSlice df = df.loc[idx[subjects,lists],df.columns] lst = [df.loc[sub,:].values.tolist() for sub in subjects] return lst
Convert a MultiIndex df to list Parameters ---------- df : pandas.DataFrame A MultiIndex DataFrame where the first level is subjects and the second level is lists (e.g. egg.pres) Returns ---------- lst : a list of lists of lists of values The input df reformatted as a list
def pickle_loads(cls, s): strio = StringIO() strio.write(s) strio.seek(0) flow = pmg_pickle_load(strio) return flow
Reconstruct the flow from a string.
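A stdlib round-trip sketch of the same pattern. The original uses `StringIO` and pymatgen's `pmg_pickle_load`; this sketch swaps in `BytesIO` and plain `pickle` (pickle data is bytes, so `BytesIO` is the safer buffer type).

```python
import pickle
from io import BytesIO

def pickle_dumps_obj(obj):
    """Serialize an object to a byte string via an in-memory buffer."""
    buf = BytesIO()
    pickle.dump(obj, buf)
    return buf.getvalue()

def pickle_loads_obj(data):
    """Reconstruct the object from a byte string; the original's
    write-then-seek(0) dance is collapsed into the BytesIO constructor."""
    return pickle.load(BytesIO(data))
```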
def _getInstrumentsVoc(self): cfilter = {'portal_type': 'Instrument', 'is_active': True} if self.getMethod(): cfilter['getMethodUIDs'] = {"query": self.getMethod().UID(), "operator": "or"} bsc = getToolByName(self, 'bika_setup_catalog') items = [('', 'No instrument')] + [ (o.UID, o.Title) for o in bsc(cfilter)] o = self.getInstrument() if o and o.UID() not in [i[0] for i in items]: items.append((o.UID(), o.Title())) items.sort(key=lambda item: item[1]) return DisplayList(list(items))
This function returns the registered instruments in the system as a vocabulary. The instruments are filtered by the selected method.
def on_helpButton(self, event, page=None): path = find_pmag_dir.get_pmag_dir() help_page = os.path.join(path, 'dialogs', 'help_files', page) if not os.path.exists(help_page): help_page = os.path.join(path, 'help_files', page) html_frame = pw.HtmlFrame(self, page=help_page) html_frame.Show()
Shows the HTML help page.
def save_hash(self, location, basedir, ext=None): if isinstance(location, six.text_type): location = location.encode('utf-8') rel_path = hashed_path(location, ext=ext) path = os.path.join(basedir, rel_path) if not os.path.exists(path): path_dir, _ = os.path.split(path) try: os.makedirs(path_dir) except OSError: pass with open(path, 'wb') as out: out.write(self._bytes_body) return rel_path
Save the response body into a file whose path is built from a hash. That reduces the number of files per directory. :param location: URL of the file or something else. It is used to build the SHA1 hash. :param basedir: base directory to save the file. Note that the file will not be saved directly to this directory but to some sub-directory of `basedir` :param ext: extension which should be appended to the file name. The dot is inserted automatically between filename and extension. :returns: path to the saved file relative to `basedir` Example:: >>> url = 'http://yandex.ru/logo.png' >>> g.go(url) >>> g.response.save_hash(url, 'some_dir', ext='png') 'e8/dc/f2918108788296df1facadc975d32b361a6a.png' # the file was saved to $PWD/some_dir/e8/dc/... TODO: replace `basedir` with two options: root and save_to. And return save_to + path
def bar(self, x=None, y=None, **kwds): return self(kind='bar', x=x, y=y, **kwds)
Vertical bar plot. A bar plot is a plot that presents categorical data with rectangular bars with lengths proportional to the values that they represent. A bar plot shows comparisons among discrete categories. One axis of the plot shows the specific categories being compared, and the other axis represents a measured value. Parameters ---------- x : label or position, optional Allows plotting of one column versus another. If not specified, the index of the DataFrame is used. y : label or position, optional Allows plotting of one column versus another. If not specified, all numerical columns are used. **kwds Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.axes.Axes or np.ndarray of them An ndarray is returned with one :class:`matplotlib.axes.Axes` per column when ``subplots=True``. See Also -------- DataFrame.plot.barh : Horizontal bar plot. DataFrame.plot : Make plots of a DataFrame. matplotlib.pyplot.bar : Make a bar plot with matplotlib. Examples -------- Basic plot. .. plot:: :context: close-figs >>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) >>> ax = df.plot.bar(x='lab', y='val', rot=0) Plot a whole dataframe to a bar plot. Each column is assigned a distinct color, and each row is nested in a group along the horizontal axis. .. plot:: :context: close-figs >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.bar(rot=0) Instead of nesting, the figure can be split by column with ``subplots=True``. In this case, a :class:`numpy.ndarray` of :class:`matplotlib.axes.Axes` are returned. .. plot:: :context: close-figs >>> axes = df.plot.bar(rot=0, subplots=True) >>> axes[1].legend(loc=2) # doctest: +SKIP Plot a single column. .. plot:: :context: close-figs >>> ax = df.plot.bar(y='speed', rot=0) Plot only selected categories for the DataFrame. .. plot:: :context: close-figs >>> ax = df.plot.bar(x='lifespan', rot=0)
def wait_for_task_property(service, task, prop, timeout_sec=120): return time_wait(lambda: task_property_present_predicate(service, task, prop), timeout_seconds=timeout_sec)
Waits for a task to have the specified property
def check_ns_run_logls(run, dup_assert=False, dup_warn=False): assert np.array_equal(run['logl'], run['logl'][np.argsort(run['logl'])]) if dup_assert or dup_warn: unique_logls, counts = np.unique(run['logl'], return_counts=True) repeat_logls = run['logl'].shape[0] - unique_logls.shape[0] msg = ('{} duplicate logl values (out of a total of {}). This may be ' 'caused by limited numerical precision in the output files.' '\nrepeated logls = {}\ncounts = {}\npositions in list of {}' ' unique logls = {}').format( repeat_logls, run['logl'].shape[0], unique_logls[counts != 1], counts[counts != 1], unique_logls.shape[0], np.where(counts != 1)[0]) if dup_assert: assert repeat_logls == 0, msg elif dup_warn: if repeat_logls != 0: warnings.warn(msg, UserWarning)
Check run logls are unique and in the correct order. Parameters ---------- run: dict nested sampling run to check. dup_assert: bool, optional Whether to raise an AssertionError if there are duplicate logl values. dup_warn: bool, optional Whether to give a UserWarning if there are duplicate logl values (only used if dup_assert is False). Raises ------ AssertionError if run does not have expected properties.
def new_transaction(self, timeout, durability, transaction_type): connection = self._connect() return Transaction(self._client, connection, timeout, durability, transaction_type)
Creates a Transaction object with the given timeout, durability and transaction type. :param timeout: (long), the timeout in seconds determines the maximum lifespan of a transaction. :param durability: (int), the durability is the number of machines that can take over if a member fails during a transaction commit or rollback :param transaction_type: (Transaction Type), the transaction type which can be :const:`~hazelcast.transaction.TWO_PHASE` or :const:`~hazelcast.transaction.ONE_PHASE` :return: (:class:`~hazelcast.transaction.Transaction`), the newly created Transaction.
def _extract_auth_config(self): service = self._service if not service.authentication: return {} auth_infos = {} for auth_rule in service.authentication.rules: selector = auth_rule.selector provider_ids_to_audiences = {} for requirement in auth_rule.requirements: provider_id = requirement.providerId if provider_id and requirement.audiences: audiences = requirement.audiences.split(u",") provider_ids_to_audiences[provider_id] = audiences auth_infos[selector] = AuthInfo(provider_ids_to_audiences) return auth_infos
Obtains the authentication configurations.
def unsubscribe(self, ssid, max_msgs=0): if self.is_closed: raise ErrConnectionClosed sub = None try: sub = self._subs[ssid] except KeyError: return if max_msgs == 0 or sub.received >= max_msgs: self._subs.pop(ssid, None) self._remove_subscription(sub) if not self.is_reconnecting: yield self.auto_unsubscribe(ssid, max_msgs)
Takes a subscription sequence id and removes the subscription from the client, optionally after receiving more than max_msgs messages, and unsubscribes immediately.
def set_module_version(self, major, minor, patch): if not (self._is_byte(major) and self._is_byte(minor) and self._is_byte(patch)): raise ArgumentError("Invalid module version number with component that does not fit in 1 byte", major=major, minor=minor, patch=patch) self.module_version = (major, minor, patch)
Set the module version for this module. Each module must declare a semantic version number in the form: major.minor.patch where each component is a 1 byte number between 0 and 255.
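The one-byte check behind `_is_byte` can be sketched as below; the names are illustrative, and the real class raises its own `ArgumentError` type rather than `ValueError`.

```python
def is_byte(value):
    """True if value is an integer that fits in one unsigned byte."""
    return isinstance(value, int) and 0 <= value <= 255

def validate_version(major, minor, patch):
    """Validate a major.minor.patch semantic version whose components
    must each fit in 1 byte (0-255), returning the tuple if valid."""
    if not all(is_byte(c) for c in (major, minor, patch)):
        raise ValueError("version components must each fit in 1 byte (0-255)")
    return (major, minor, patch)
```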
def throttle(coro, limit=1, timeframe=1, return_value=None, raise_exception=False): assert_corofunction(coro=coro) limit = max(int(limit), 1) remaning = limit timeframe = timeframe * 1000 last_call = now() result = None def stop(): if raise_exception: raise RuntimeError('paco: coroutine throttle limit exceeded') if return_value: return return_value return result def elapsed(): return now() - last_call @asyncio.coroutine def wrapper(*args, **kw): nonlocal result nonlocal remaning nonlocal last_call if elapsed() > timeframe: remaning = limit last_call = now() elif elapsed() < timeframe and remaning <= 0: return stop() remaning -= 1 result = yield from coro(*args, **kw) return result return wrapper
Creates a throttled coroutine function that invokes ``coro`` at most once per time frame of seconds or milliseconds. Provide options to indicate whether func should be invoked on the leading and/or trailing edge of the wait timeout. Subsequent calls to the throttled coroutine return the result of the last coroutine invocation. This function can be used as a decorator. Arguments: coro (coroutinefunction): coroutine function to wrap with throttle strategy. limit (int): number of coroutine executions allowed in the given time frame. timeframe (int|float): throttle limit time frame in seconds. return_value (mixed): optional value to return if the throttle limit is reached. Returns the latest returned value by default. raise_exception (bool): raise an exception if the throttle limit is reached. Raises: RuntimeError: if the throttle limit is reached (optional). Returns: coroutinefunction Usage:: async def mul_2(num): return num * 2 # Use as simple wrapper throttled = paco.throttle(mul_2, limit=1, timeframe=2) await throttled(2) # => 4 await throttled(3) # ignored! # => 4 await asyncio.sleep(2) await throttled(3) # executed! # => 6 # Use as decorator @paco.throttle(limit=1, timeframe=2) async def mul_2(num): return num * 2 await mul_2(2) # => 4 await mul_2(3) # ignored! # => 4 await asyncio.sleep(2) await mul_2(3) # executed! # => 6
def _add_index(self, index): index_name = index.get_name() index_name = self._normalize_identifier(index_name) replaced_implicit_indexes = [] for name, implicit_index in self._implicit_indexes.items(): if implicit_index.is_fullfilled_by(index) and name in self._indexes: replaced_implicit_indexes.append(name) already_exists = ( index_name in self._indexes and index_name not in replaced_implicit_indexes or self._primary_key_name is not False and index.is_primary() ) if already_exists: raise IndexAlreadyExists(index_name, self._name) for name in replaced_implicit_indexes: del self._indexes[name] del self._implicit_indexes[name] if index.is_primary(): self._primary_key_name = index_name self._indexes[index_name] = index return self
Adds an index to the table. :param index: The index to add :type index: Index :rtype: Table
def sphinx(self): try: assert __IPYTHON__ classdoc = '' except (NameError, AssertionError): scls = self.sphinx_class() classdoc = ' ({})'.format(scls) if scls else '' prop_doc = '**{name}**{cls}: {doc}{info}'.format( name=self.name, cls=classdoc, doc=self.doc, info=', {}'.format(self.info) if self.info else '', ) return prop_doc
Generate Sphinx-formatted documentation for the Property
def os_requires_version(ostack_release, pkg): def wrap(f): @wraps(f) def wrapped_f(*args): if os_release(pkg) < ostack_release: raise Exception("This hook is not supported on releases" " before %s" % ostack_release) f(*args) return wrapped_f return wrap
Decorator for a hook to specify the minimum supported OpenStack release.
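A generic sketch of this version-gating pattern. Unlike the original, which looks the current release up via `os_release(pkg)`, this version takes the current release as an explicit argument so it is self-contained.

```python
from functools import wraps

def requires_version(minimum, current):
    """Refuse to run the wrapped hook when current < minimum.
    OpenStack release names sort lexically ('liberty' < 'mitaka'),
    which is what makes plain string comparison work here."""
    def wrap(f):
        @wraps(f)
        def wrapped(*args):
            if current < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped
    return wrap
```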
def is_ready(self): ready = len(self.get(self.name, [])) > 0 if not ready: hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) return ready
Returns True if all of the `required_keys` are available from any units.
def current_version(): filepath = os.path.abspath( project_root / "directory_components" / "version.py") version_py = get_file_string(filepath) regex = re.compile(Utils.get_version) if regex.search(version_py) is not None: current_version = regex.search(version_py).group(0) print(color( "Current directory-components version: {}".format(current_version), fg='blue', style='bold')) get_update_info() else: print(color( 'Error finding directory-components version.', fg='red', style='bold'))
Get current version of directory-components.
def get_agent_sock_path(env=None, sp=subprocess): args = [util.which('gpgconf'), '--list-dirs'] output = check_output(args=args, env=env, sp=sp) lines = output.strip().split(b'\n') dirs = dict(line.split(b':', 1) for line in lines) log.debug('%s: %s', args, dirs) return dirs[b'agent-socket']
Parse gpgconf output to find out GPG agent UNIX socket path.
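The colon-separated parsing step can be isolated and tested on its own; the sample output in the test below is illustrative, not captured from a real gpgconf run.

```python
def parse_gpgconf_dirs(output):
    """Parse `gpgconf --list-dirs` style output: one key:value pair per
    line, split on the first colon only, keeping everything as bytes."""
    lines = output.strip().split(b"\n")
    return dict(line.split(b":", 1) for line in lines)
```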
def read(filename, loader=None, implicit_tuple=True, allow_errors=False): with open(filename, 'r') as f: return reads(f.read(), filename=filename, loader=loader, implicit_tuple=implicit_tuple, allow_errors=allow_errors)
Load but don't evaluate a GCL expression from a file.
def cmd(send, msg, _): coin = ['heads', 'tails'] if not msg: send('The coin lands on... %s' % choice(coin)) elif not msg.lstrip('-').isdigit(): send("Not A Valid Positive Integer.") else: msg = int(msg) if msg < 0: send("Negative Flipping requires the (optional) quantum coprocessor.") return headflips = randint(0, msg) tailflips = msg - headflips send('The coins land on heads %g times and on tails %g times.' % (headflips, tailflips))
Flips a coin a number of times. Syntax: {command} [number]
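The flipping logic can be sketched as a small helper. Note that `randint(0, n)` makes every heads count equally likely, which is not the true binomial distribution of n independent flips; that quirk is present in the original as well.

```python
from random import randint

def flip_coins(n):
    """Flip n coins at once; returns (heads, tails).
    Heads is drawn uniformly from 0..n, mirroring the original command."""
    heads = randint(0, n)
    return heads, n - heads
```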
def _get_client_and_key(url, user, password, verbose=0): session = {} session['client'] = six.moves.xmlrpc_client.Server(url, verbose=verbose, use_datetime=True) session['key'] = session['client'].auth.login(user, password) return session
Return the client object and session key for the client
def _default_url(self): host = 'localhost' if self.mode == 'remote' else self.host return 'ws://{}:{}/dev'.format(host, self.port)
Websocket URL to connect to and listen for reload requests
def fit(self, X, y=None): if self.metric != 'precomputed': X = check_array(X, accept_sparse='csr') self._raw_data = X elif issparse(X): X = check_array(X, accept_sparse='csr') else: check_precomputed_distance_matrix(X) kwargs = self.get_params() kwargs.pop('prediction_data', None) kwargs.update(self._metric_kwargs) (self.labels_, self.probabilities_, self.cluster_persistence_, self._condensed_tree, self._single_linkage_tree, self._min_spanning_tree) = hdbscan(X, **kwargs) if self.prediction_data: self.generate_prediction_data() return self
Perform HDBSCAN clustering from features or distance matrix. Parameters ---------- X : array or sparse (CSR) matrix of shape (n_samples, n_features), or \ array of shape (n_samples, n_samples) A feature array, or array of distances between samples if ``metric='precomputed'``. Returns ------- self : object Returns self