code (string, lengths 51–2.38k) · docstring (string, lengths 4–15.2k)
def _tracker(self, _observer_, _self_, *args, **kwds):
    self.track_object(_self_, name=_observer_.name,
                      resolution_level=_observer_.detail,
                      keep=_observer_.keep, trace=_observer_.trace)
    _observer_.init(_self_, *args, **kwds)
Injected constructor for tracked classes. Call the actual constructor of the object and track the object. Attach to the object before calling the constructor to track the object with the parameters of the most specialized class.
def rmsd(V, W):
    D = len(V[0])
    N = len(V)
    result = 0.0
    for v, w in zip(V, W):
        result += sum([(v[i] - w[i])**2.0 for i in range(D)])
    return np.sqrt(result/N)
Calculate root-mean-square deviation from two sets of vectors V and W.

Parameters
----------
V : array
    (N, D) matrix, where N is points and D is dimension.
W : array
    (N, D) matrix, where N is points and D is dimension.

Returns
-------
rmsd : float
    Root-mean-square deviation between the two vectors.
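A minimal self-contained check of the RMSD definition above, using a NumPy-vectorized equivalent of the loop; the two point sets are made-up data chosen so each pair is offset by a 3-4-5 triangle:

```python
import numpy as np

def rmsd(V, W):
    # Root-mean-square deviation between two (N, D) point sets.
    V, W = np.asarray(V, dtype=float), np.asarray(W, dtype=float)
    return np.sqrt(((V - W) ** 2).sum() / len(V))

# Each point is offset by (3, 4), so every squared distance is 25.
V = [[0.0, 0.0], [1.0, 1.0]]
W = [[3.0, 4.0], [4.0, 5.0]]
print(rmsd(V, W))  # 5.0
```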
def binarize_signal(signal, treshold="auto", cut="higher"):
    if treshold == "auto":
        treshold = (np.max(np.array(signal)) - np.min(np.array(signal))) / 2
    signal = list(signal)
    binary_signal = []
    for i in range(len(signal)):
        if cut == "higher":
            binary_signal.append(1 if signal[i] > treshold else 0)
        else:
            binary_signal.append(1 if signal[i] < treshold else 0)
    return binary_signal
Binarize a channel based on a continuous channel.

Parameters
----------
signal : array or list
    The signal channel.
treshold : float
    The treshold value by which to select the events. If "auto", takes the
    value between the max and the min.
cut : str
    "higher" or "lower", define the events as above or under the treshold.
    For photosensors, a white screen usually corresponds to higher values.
    Therefore, if your events were signalled by a black colour, event values
    would be the lower ones, and you should set the cut to "lower".

Returns
-------
list
    binary_signal

Example
-------
>>> import neurokit as nk
>>> binary_signal = nk.binarize_signal(signal, treshold=4)

Authors
-------
- `Dominique Makowski <https://dominiquemakowski.github.io/>`_

Dependencies
------------
None
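The "auto" threshold logic above can be demonstrated without the library; this sketch re-implements the function's core as a one-liner on a made-up signal:

```python
signal = [0, 1, 4, 5, 4, 0]
# auto threshold = (max - min) / 2 = 2.5; cut="higher" marks values above it
th = (max(signal) - min(signal)) / 2
binary = [1 if s > th else 0 for s in signal]
print(binary)  # [0, 0, 1, 1, 1, 0]
```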
def notifyReady(self):
    if self.instance:
        return defer.succeed(self.instance)

    def on_cancel(d):
        self.__notify_ready.remove(d)

    df = defer.Deferred(on_cancel)
    self.__notify_ready.append(df)
    return df
Returns a deferred that will fire when the factory has created a protocol that can be used to communicate with a Mongo server. Note that this will not fire until we have connected to a Mongo master, unless slaveOk was specified in the Mongo URI connection options.
def set_time_rate(self, value):
    if not isinstance(value, float):
        raise TypeError("The type of __time_rate must be float.")
    if value <= 0.0:
        raise ValueError("The value of __time_rate must be greater than 0.0.")
    self.__time_rate = value
Setter for the time rate.
def group_toggle(self, addr, use_cache=True):
    data = self.group_read(addr, use_cache)
    if len(data) != 1:
        problem = "Can't toggle a {}-octet group address {}".format(
            len(data), addr)
        logging.error(problem)
        raise KNXException(problem)
    if data[0] == 0:
        self.group_write(addr, [1])
    elif data[0] == 1:
        self.group_write(addr, [0])
    else:
        problem = "Can't toggle group address {} as value is {}".format(
            addr, data[0])
        logging.error(problem)
        raise KNXException(problem)
Toggle the value of a 1-bit group address. If the object has a value != 0, it will be set to 0, otherwise to 1.
def get_translation(self, context_id, translation_id):
    translation = next((x for x in self.get_translations(context_id)
                        if x['id'] == translation_id), None)
    if translation is None:
        raise SoftLayerAPIError('SoftLayer_Exception_ObjectNotFound',
                                'Unable to find object with id of \'{}\''
                                .format(translation_id))
    return translation
Retrieves a translation entry for the given id values.

:param int context_id: The id-value representing the context instance.
:param int translation_id: The id-value representing the translation instance.
:return dict: Mapping of properties for the translation entry.
:raise SoftLayerAPIError: If a translation cannot be found.
def _make_sj_out_dict(fns, jxns=None, define_sample_name=None):
    if define_sample_name is None:
        define_sample_name = lambda x: x
    else:
        assert len(set([define_sample_name(x) for x in fns])) == len(fns)
    sj_outD = dict()
    for fn in fns:
        sample = define_sample_name(fn)
        df = read_sj_out_tab(fn)
        df = df[df.unique_junction_reads > 0]
        index = (df.chrom + ':' + df.start.astype(str) + '-' +
                 df.end.astype(str))
        assert len(index) == len(set(index))
        df.index = index
        if jxns:
            # .ix was removed from modern pandas; .loc with a list of labels
            # selects the same rows
            df = df.loc[list(set(df.index) & jxns)]
        sj_outD[sample] = df
    return sj_outD
Read multiple sj_outs, return dict with keys as sample names and values as sj_out dataframes.

Parameters
----------
fns : list of strs of filenames or file handles
    List of filenames of the SJ.out.tab files to read in.
jxns : set
    If provided, only keep junctions in this set.
define_sample_name : function that takes string as input
    Function mapping filename to sample name. For instance, you may have the
    sample name in the path and use a regex to extract it. The sample names
    will be used as the column names. If this is not provided, the columns
    will be named as the input files.

Returns
-------
sj_outD : dict
    Dict whose keys are sample names and values are sj_out dataframes.
def restartIndyPool(**kwargs):
    print("Restarting...")
    try:
        stopIndyPool()
        startIndyPool()
        print("...restarted")
    except Exception:
        eprint("...failed to restart")
        # bare raise preserves the original traceback
        raise
Restart the indy_pool docker container. Idempotent. Ensures that the indy_pool container is a new running instance.
def _insert_job(self, body_object):
    logger.debug('Submitting job: %s' % body_object)
    job_collection = self.bigquery.jobs()
    return job_collection.insert(
        projectId=self.project_id,
        body=body_object
    ).execute(num_retries=self.num_retries)
Submit a job to BigQuery.

Direct proxy to the insert() method of the official BigQuery python client.
Able to submit load, link, query, copy, or extract jobs. For more details,
see:
https://google-api-client-libraries.appspot.com/documentation/bigquery/v2/python/latest/bigquery_v2.jobs.html#insert

Parameters
----------
body_object : body object passed to bigquery.jobs().insert()

Returns
-------
response of the bigquery.jobs().insert().execute() call

Raises
------
BigQueryTimeoutException on timeout
def put_job(self, data, pri=65536, delay=0, ttr=120):
    with self._sock_ctx() as socket:
        if not isinstance(data, bytes):
            data = data.encode('utf-8')
        # datalen must be the byte length, so encode before measuring
        message = 'put {pri} {delay} {ttr} {datalen}\r\n'.format(
            pri=pri, delay=delay, ttr=ttr, datalen=len(data)
        ).encode('utf-8')
        message += data
        message += b'\r\n'
        self._send_message(message, socket)
        return self._receive_id(socket)
Insert a new job into whatever queue is currently USEd.

:param data: Job body
:type data: Text (either str, which will be encoded as utf-8, or bytes,
    which are already utf-8)
:param pri: Priority for the job
:type pri: int
:param delay: Delay in seconds before the job should be placed on the ready
    queue
:type delay: int
:param ttr: Time to reserve (how long a worker may work on this job before
    we assume the worker is blocked and give the job to another worker)
:type ttr: int

.. seealso::

   :func:`put_job_into()`
       Put a job into a specific tube

   :func:`using()`
       Insert a job using an external guard
def _check_codons(self):
    for stop_codon in self.stop_codons:
        if stop_codon in self.codon_table:
            if self.codon_table[stop_codon] != "*":
                raise ValueError(
                    ("Codon '%s' is in stop_codons, but the codon table "
                     "maps it to '%s' instead of '*'") % (
                        stop_codon, self.codon_table[stop_codon]))
        else:
            self.codon_table[stop_codon] = "*"
    for start_codon in self.start_codons:
        if start_codon not in self.codon_table:
            raise ValueError(
                "Start codon '%s' missing from codon table" % (
                    start_codon,))
    for codon, amino_acid in self.codon_table.items():
        if amino_acid == "*" and codon not in self.stop_codons:
            raise ValueError(
                "Non-stop codon '%s' can't translate to '*'" % (
                    codon,))
    if len(self.codon_table) != 64:
        raise ValueError(
            "Expected 64 codons but found %d in codon table" % (
                len(self.codon_table)))
Validate the codon table against the start and stop codon lists; if the codon table is missing stop codons, add them.
def w(name, parallel, workflow, internal):
    Workflow = collections.namedtuple("Workflow",
                                      "name parallel workflow internal")
    return Workflow(name, parallel, workflow, internal)
A workflow, allowing specification of sub-workflows for nested parallelization. name and parallel are documented under the Step (s) function. workflow -- a list of Step tuples defining the sub-workflow internal -- variables used in the sub-workflow but not exposed to subsequent steps
def get_blocks_overview(block_representation_list, coin_symbol='btc',
                        txn_limit=None, api_key=None):
    for block_representation in block_representation_list:
        assert is_valid_block_representation(
            block_representation=block_representation,
            coin_symbol=coin_symbol)
    assert is_valid_coin_symbol(coin_symbol)
    blocks = ';'.join([str(x) for x in block_representation_list])
    url = make_url(coin_symbol, **dict(blocks=blocks))
    logger.info(url)
    params = {}
    if api_key:
        params['token'] = api_key
    if txn_limit:
        params['limit'] = txn_limit
    r = requests.get(url, params=params, verify=True,
                     timeout=TIMEOUT_IN_SECONDS)
    r = get_valid_json(r)
    return [_clean_tx(response_dict=d) for d in r]
Batch request version of get_blocks_overview
def dispatch(*funcs):
    def _dispatch(*args, **kwargs):
        for f in funcs:
            result = f(*args, **kwargs)
            if result is not None:
                return result
        return None
    return _dispatch
Iterates through the functions, calls them with the given parameters and returns the first result that is not None.

>>> f = dispatch(lambda: None, lambda: 1)
>>> f()
1

:param \*funcs: list of dispatched functions
:returns: dispatch function
def adjust_interleave(self, interleave):
    if interleave is None and self.parent:
        self.interleave = self.parent.interleave
    else:
        self.interleave = interleave
Inherit interleave status from parent if undefined.
def _rpc(http, project, method, base_url, request_pb, response_pb_cls):
    req_data = request_pb.SerializeToString()
    response = _request(http, project, method, req_data, base_url)
    return response_pb_cls.FromString(response)
Make a protobuf RPC request.

:type http: :class:`requests.Session`
:param http: HTTP object to make requests.

:type project: str
:param project: The project to connect to. This is usually your project
    name in the cloud console.

:type method: str
:param method: The name of the method to invoke.

:type base_url: str
:param base_url: The base URL where the API lives.

:type request_pb: :class:`google.protobuf.message.Message` instance
:param request_pb: the protobuf instance representing the request.

:type response_pb_cls: A :class:`google.protobuf.message.Message` subclass.
:param response_pb_cls: The class used to unmarshall the response protobuf.

:rtype: :class:`google.protobuf.message.Message`
:returns: The RPC message parsed from the response.
def delete_scope(self, scope):
    assert isinstance(scope, Scope), \
        'Scope "{}" is not a scope!'.format(scope.name)
    response = self._request('DELETE',
                             self._build_url('scope', scope_id=str(scope.id)))
    if response.status_code != requests.codes.no_content:
        raise APIError("Could not delete scope, {}: {}".format(
            str(response), response.content))
Delete a scope.

This will delete a scope if the client has the right to do so. Sufficient
permissions to delete a scope are a superuser, a user in the
`GG:Configurators` group or a user that is the Scope manager of the scope
to be deleted.

:param scope: Scope object to be deleted
:type scope: :class:`models.Scope`
:return: None
:raises APIError: in case of failure in the deletion of the scope
def _fingerprint_target_specs(self, specs):
    assert self._build_graph is not None, (
        'cannot fingerprint specs `{}` without a `BuildGraph`'.format(specs)
    )
    hasher = sha1()
    for spec in sorted(specs):
        for target in sorted(self._build_graph.resolve(spec)):
            h = target.compute_invalidation_hash()
            if h:
                hasher.update(h.encode('utf-8'))
    return hasher.hexdigest()
Returns a fingerprint of the targets resolved from given target specs.
def buildFileList(input, output=None, ivmlist=None, wcskey=None,
                  updatewcs=True, **workinplace):
    newfilelist, ivmlist, output, oldasndict, filelist = \
        buildFileListOrig(input=input, output=output, ivmlist=ivmlist,
                          wcskey=wcskey, updatewcs=updatewcs, **workinplace)
    return newfilelist, ivmlist, output, oldasndict
Builds a file list which has undergone various instrument-specific checks for input to MultiDrizzle, including splitting STIS associations.
def walkable(self, x, y):
    return self.inside(x, y) and self.nodes[y][x].walkable
Check whether the tile is inside the grid and set as walkable.
def writelines(self, lines):
    self.make_dir()
    with open(self.path, "w") as f:
        return f.writelines(lines)
Write a list of strings to file.
def size(self):
    pos = self._stream.tell()
    self._stream.seek(0, 2)
    size = self._stream.tell()
    self._stream.seek(pos, 0)
    return size
Return the size of the stream, leaving the stream position unchanged.
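The seek-to-end trick above can be sketched as a standalone helper (`stream_size` is a hypothetical name); note the cursor position is saved and restored:

```python
import io

def stream_size(stream):
    # Measure size by seeking to the end, then restore the position.
    pos = stream.tell()
    stream.seek(0, 2)          # 2 == io.SEEK_END
    size = stream.tell()
    stream.seek(pos, 0)
    return size

buf = io.BytesIO(b"hello world")
buf.read(5)                    # move the cursor to show it is restored
assert stream_size(buf) == 11
assert buf.tell() == 5
```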
def _default_objc(self):
    objc = ctypes.cdll.LoadLibrary(find_library('objc'))
    objc.objc_getClass.restype = ctypes.c_void_p
    objc.sel_registerName.restype = ctypes.c_void_p
    objc.objc_msgSend.restype = ctypes.c_void_p
    objc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
    return objc
Load the objc library using ctypes.
def parse_to_gvid(v):
    from geoid.civick import GVid
    from geoid.acs import AcsGeoid
    m1 = ''
    try:
        return GVid.parse(v)
    except ValueError as e:
        m1 = str(e)
    try:
        return AcsGeoid.parse(v).convert(GVid)
    except ValueError as e:
        raise ValueError("Failed to parse to either ACS or GVid: {}; {}"
                         .format(m1, str(e)))
Parse an ACS Geoid or a GVID to a GVID
def post_gist(content, description='', filename='file', auth=False):
    post_data = json.dumps({
        "description": description,
        "public": True,
        "files": {filename: {"content": content}}
    }).encode('utf-8')
    headers = make_auth_header() if auth else {}
    response = requests.post("https://api.github.com/gists",
                             data=post_data, headers=headers)
    response.raise_for_status()
    response_data = json.loads(response.text)
    return response_data['html_url']
Post some text to a Gist, and return the URL.
def get_header(results):
    ret = ['name', ]
    values = next(iter(results.values()))
    for k, v in values.items():
        if isinstance(v, dict):
            for metric in v.keys():
                ret.append('%s:%s' % (k, metric))
        else:
            ret.append(k)
    return ret
Extracts the headers, using the first value in the dict as the template
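A quick self-contained illustration of the header extraction above, on made-up benchmark results; nested dicts flatten to `key:metric` columns:

```python
def get_header(results):
    # Use the first value in the dict as the template for the header row.
    ret = ['name']
    values = next(iter(results.values()))
    for k, v in values.items():
        if isinstance(v, dict):
            for metric in v:
                ret.append('%s:%s' % (k, metric))
        else:
            ret.append(k)
    return ret

results = {'run1': {'time': {'mean': 1.0, 'max': 2.0}, 'status': 'ok'}}
print(get_header(results))  # ['name', 'time:mean', 'time:max', 'status']
```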
def wait(self, timeout=3):
    start = time.time()
    while not self.exists():
        self.poco.sleep_for_polling_interval()
        if time.time() - start > timeout:
            break
    return self
Block and wait for at most the given time before the UI element appears.

Args:
    timeout: maximum waiting time in seconds

Returns:
    :py:class:`UIObjectProxy <poco.proxy.UIObjectProxy>`: self
def AB(self):
    try:
        return self._AB
    except AttributeError:
        pass
    self._AB = [self.A, self.B]
    return self._AB
A list containing Points A and B.
def supported_tifs(self):
    buf = ctypes.c_uint32()
    self._dll.JLINKARM_TIF_GetAvailable(ctypes.byref(buf))
    return buf.value
Returns a bitmask of the supported target interfaces.

Args:
    self (JLink): the ``JLink`` instance

Returns:
    Bitfield specifying which target interfaces are supported.
def debug(msg):
    if DEBUG:
        sys.stderr.write("DEBUG: " + msg + "\n")
        sys.stderr.flush()
Displays debug messages to stderr only if the module-level DEBUG flag is set.
def has_reduction(expr):
    def fn(expr):
        op = expr.op()
        if isinstance(op, ops.TableNode):
            return lin.halt, None
        if isinstance(op, ops.Reduction):
            return lin.halt, True
        return lin.proceed, None

    reduction_status = lin.traverse(fn, expr)
    return any(reduction_status)
Does `expr` contain a reduction?

Parameters
----------
expr : ibis.expr.types.Expr
    An ibis expression

Returns
-------
truth_value : bool
    Whether or not there's at least one reduction in `expr`

Notes
-----
The ``isinstance(op, ops.TableNode)`` check in this function implies that we
only examine every non-table expression that precedes the first table
expression.
def reset_state(self):
    super(AugmentorList, self).reset_state()
    for a in self.augmentors:
        a.reset_state()
Reset the state of each augmentor.
def make_error_response(self, validation_error, expose_errors):
    authenticate_header = ['Bearer realm="{}"'.format(settings.DJOAUTH2_REALM)]
    if not expose_errors:
        response = HttpResponse(status=400)
        response['WWW-Authenticate'] = ', '.join(authenticate_header)
        return response

    status_code = 401
    error_details = get_error_details(validation_error)
    if isinstance(validation_error, InvalidRequest):
        status_code = 400
    elif isinstance(validation_error, InvalidToken):
        status_code = 401
    elif isinstance(validation_error, InsufficientScope):
        error_details['scope'] = ' '.join(self.required_scope_names)
        status_code = 403

    response = HttpResponse(content=json.dumps(error_details),
                            content_type='application/json',
                            status=status_code)
    for key, value in error_details.iteritems():
        authenticate_header.append('{}="{}"'.format(key, value))
    response['WWW-Authenticate'] = ', '.join(authenticate_header)
    return response
Return an appropriate ``HttpResponse`` on authentication failure. In case of an error, the specification only details the inclusion of the ``WWW-Authenticate`` header. Additionally, when allowed by the specification, we respond with error details formatted in JSON in the body of the response. For more information, read the specification: http://tools.ietf.org/html/rfc6750#section-3.1 . :param validation_error: A :py:class:`djoauth2.access_token.AuthenticationError` raised by the :py:meth:`validate` method. :param expose_errors: A boolean describing whether or not to expose error information in the error response, as described by the section of the specification linked to above. :rtype: a Django ``HttpResponse``.
def build_footprint(node: ast.AST, first_line_no: int) -> Set[int]:
    return set(
        range(
            get_first_token(node).start[0] - first_line_no,
            get_last_token(node).end[0] - first_line_no + 1,
        )
    )
Generates a list of lines that the passed node covers, relative to the marked lines list - i.e. start of function is line 0.
def render_engine_or_search_template(template_name, **context):
    from indico_search.plugin import SearchPlugin
    assert current_plugin == SearchPlugin.instance
    templates = ('{}:{}'.format(SearchPlugin.instance.engine_plugin.name,
                                template_name),
                 template_name)
    return render_plugin_template(templates, **context)
Renders a template from the engine plugin or the search plugin If the template is available in the engine plugin, it's taken from there, otherwise the template from this plugin is used. :param template_name: name of the template :param context: the variables that should be available in the context of the template.
def do_start_alerts(self, _):
    if self._alerter_thread.is_alive():
        print("The alert thread is already started")
    else:
        self._stop_thread = False
        self._alerter_thread = threading.Thread(
            name='alerter', target=self._alerter_thread_func)
        self._alerter_thread.start()
Starts the alerter thread
def total_read_throughput(self):
    total = self.read_throughput
    for index in itervalues(self.global_indexes):
        total += index.read_throughput
    return total
Combined read throughput of table and global indexes
def video_pixel_noise_bottom(x, model_hparams, vocab_size):
    input_noise = getattr(model_hparams, "video_modality_input_noise", 0.25)
    inputs = x
    if model_hparams.mode == tf.estimator.ModeKeys.TRAIN:
        background = tfp.stats.percentile(inputs, 50., axis=[0, 1, 2, 3])
        input_shape = common_layers.shape_list(inputs)
        input_size = tf.reduce_prod(input_shape[:-1])
        input_mask = tf.multinomial(
            tf.log([[input_noise, 1. - input_noise]]), input_size)
        input_mask = tf.reshape(tf.cast(input_mask, tf.int32),
                                input_shape[:-1] + [1])
        inputs = inputs * input_mask + background * (1 - input_mask)
    return video_bottom(inputs, model_hparams, vocab_size)
Bottom transformation for video.
def ping(self, message=None):
    return self.write(self.parser.ping(message), encode=False)
Write a ping ``frame``.
def range(cls, dataset, dimension):
    dim = dataset.get_dimension(dimension, strict=True)
    values = dataset.dimension_values(dim.name, False)
    return (np.nanmin(values), np.nanmax(values))
Computes the range along a particular dimension.
def tiles_to_pixels(self, tiles):
    pixel_coords = Vector2()
    pixel_coords.X = tiles[0] * self.spritesheet[0].width
    pixel_coords.Y = tiles[1] * self.spritesheet[0].height
    return pixel_coords
Convert tile coordinates into pixel coordinates
def find_for_player_id(player_id, connection=None, page_size=100,
                       page_number=0, sort_by=DEFAULT_SORT_BY,
                       sort_order=DEFAULT_SORT_ORDER):
    return pybrightcove.connection.ItemResultSet(
        "find_playlists_for_player_id", Playlist, connection, page_size,
        page_number, sort_by, sort_order, player_id=player_id)
List playlists for a given player id.
def hash_file(file_obj, hash_function=hashlib.md5):
    file_position = file_obj.tell()
    hasher = hash_function()
    hasher.update(file_obj.read())
    hashed = hasher.hexdigest()
    file_obj.seek(file_position)
    return hashed
Get the hash of an open file-like object.

Parameters
----------
file_obj : file-like object
hash_function : function to use to hash data

Returns
-------
hashed : str
    Hex version of the result.
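The position-preserving hashing above can be exercised on an in-memory stream; this sketch checks both the digest and that the cursor is restored:

```python
import hashlib
import io

def hash_file(file_obj, hash_function=hashlib.md5):
    # Hash the remaining contents, then restore the original position.
    pos = file_obj.tell()
    digest = hash_function(file_obj.read()).hexdigest()
    file_obj.seek(pos)
    return digest

buf = io.BytesIO(b"hello")
assert hash_file(buf) == hashlib.md5(b"hello").hexdigest()
assert buf.tell() == 0   # cursor restored
```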
def find_name(self, template_name, search_dirs):
    file_name = self.make_file_name(template_name)
    return self._find_path_required(search_dirs, file_name)
Return the path to a template with the given name. Arguments: template_name: the name of the template. search_dirs: the list of directories in which to search.
def load_configuration(configuration):
    if isinstance(configuration, dict):
        return configuration
    else:
        with open(configuration) as configfile:
            return json.load(configfile)
Returns a dictionary, accepts a dictionary or a path to a JSON file.
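A short usage sketch of the dict-or-path pattern above, with a temporary JSON file standing in for a real config:

```python
import json
import os
import tempfile

def load_configuration(configuration):
    # Accept either an already-parsed dict or a path to a JSON file.
    if isinstance(configuration, dict):
        return configuration
    with open(configuration) as configfile:
        return json.load(configfile)

# A dict passes through unchanged; a path is parsed as JSON.
assert load_configuration({"a": 1}) == {"a": 1}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"b": 2}, tmp)
    path = tmp.name
assert load_configuration(path) == {"b": 2}
os.unlink(path)
```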
def _make_attr_element_from_resourceattr(parent, resource_attr_i):
    attr = _make_attr_element(parent, resource_attr_i.attr)
    attr_is_var = etree.SubElement(attr, 'is_var')
    attr_is_var.text = resource_attr_i.attr_is_var
    return attr
Build an attribute element, including its is_var flag, from a resource attribute.
def check(self, line):
    if not isinstance(line, str):
        raise TypeError("Parameter 'line' not a 'string', is {0}"
                        .format(type(line)))
    if line in self.contents:
        return line
    return False
Find first occurrence of 'line' in file. This searches each line as a whole, if you want to see if a substring is in a line, use .grep() or .egrep() If found, return the line; this makes it easier to chain methods. :param line: String; whole line to find. :return: String or False.
def remove(self, participant):
    for topic, participants in list(self._participants_by_topic.items()):
        self.unsubscribe(participant, topic)
        if not participants:
            del self._participants_by_topic[topic]
Unsubscribe this participant from all topics to which it is subscribed.
def get_corpus(self, corpus_id):
    try:
        corpus = self.corpora[corpus_id]
        return corpus
    except KeyError:
        raise InvalidCorpusError
Return a corpus given an ID. If the corpus ID cannot be found, an
InvalidCorpusError is raised.

Parameters
----------
corpus_id : str
    The ID of the corpus to return.

Returns
-------
Corpus
    The corpus with the given ID.
def gradient(x, a, c):
    return jac(x, a).T.dot(g(x, a, c))
J'.G
def irreducible_purviews(cm, direction, mechanism, purviews):
    def reducible(purview):
        _from, to = direction.order(mechanism, purview)
        return connectivity.block_reducible(cm, _from, to)
    return [purview for purview in purviews if not reducible(purview)]
Return all purviews which are irreducible for the mechanism.

Args:
    cm (np.ndarray): An |N x N| connectivity matrix.
    direction (Direction): |CAUSE| or |EFFECT|.
    mechanism (tuple[int]): The mechanism in question.
    purviews (list[tuple[int]]): The purviews to check.

Returns:
    list[tuple[int]]: All purviews in ``purviews`` which are not reducible
    over ``mechanism``.

Raises:
    ValueError: If ``direction`` is invalid.
def GET_namespaces(self, path_info):
    qs_values = path_info['qs_values']
    offset = qs_values.get('offset', None)
    count = qs_values.get('count', None)
    blockstackd_url = get_blockstackd_url()
    namespaces = blockstackd_client.get_all_namespaces(
        offset=offset, count=count, hostport=blockstackd_url)
    if json_is_error(namespaces):
        status_code = namespaces.get('http_status', 502)
        return self._reply_json({'error': namespaces['error']},
                                status_code=status_code)
    self._reply_json(namespaces)
    return
Get the list of all namespaces Reply all existing namespaces Reply 502 if we can't reach the server for whatever reason
def render_mako_template(self, template_path, context=None):
    context = context or {}
    template_str = self.load_unicode(template_path)
    lookup = MakoTemplateLookup(directories=[
        pkg_resources.resource_filename(self.module_name, '')])
    template = MakoTemplate(template_str, lookup=lookup)
    return template.render(**context)
Evaluate a mako template by resource path, applying the provided context
def get_temperature(self, unit=DEGREES_C):
    if self.type in self.TYPES_12BIT_STANDARD:
        value = self.raw_sensor_count
        value /= 16.0
        if value == 85.0:
            raise ResetValueError(self)
        factor = self._get_unit_factor(unit)
        return factor(value)
    factor = self._get_unit_factor(unit)
    return factor(self.raw_sensor_temp * 0.001)
Returns the temperature in the specified unit.

:param int unit: the unit of the temperature requested
:returns: the temperature in the given unit
:rtype: float
:raises UnsupportedUnitError: if the unit is not supported
:raises NoSensorFoundError: if the sensor could not be found
:raises SensorNotReadyError: if the sensor is not ready yet
:raises ResetValueError: if the sensor still has the initial value and no
    measurement has been taken
def earliest_date(dates, full_date=False):
    min_date = min(PartialDate.loads(date) for date in dates)
    if not min_date.month and full_date:
        min_date.month = 1
    if not min_date.day and full_date:
        min_date.day = 1
    return min_date.dumps()
Return the earliest among the schema-compliant dates.

This is a convenience wrapper around :ref:`PartialDate`, which should be
used instead if more features are needed.

Args:
    dates(list): List of dates from which the oldest/earliest one will be
        returned
    full_date(bool): Adds month and/or day as "01" if they are missing

Returns:
    str: Earliest date from the provided list
def form_valid(self, form):
    response = super(FormAjaxMixin, self).form_valid(form)
    if self.request.is_ajax():
        return self.json_to_response()
    return response
If the form is valid, return a JSON response for AJAX requests; otherwise return the default response.
def _transition_callback(self, is_on, transition):
    if transition.state_stages:
        state = transition.state_stages[transition.stage_index]
        if self.is_on and is_on is False:
            state['brightness'] = self.brightness
        self.set(is_on=is_on, cancel_transition=False, **state)
    self._active_transition = None
Callback that is called when a transition has ended. :param is_on: The on-off state to transition to. :param transition: The transition that has ended.
def A_multiple_hole_cylinder(Do, L, holes):
    side_o = pi*Do*L
    cap_circle = pi*Do**2/4*2
    A = cap_circle + side_o
    for Di, n in holes:
        side_i = pi*Di*L
        cap_removed = pi*Di**2/4*2
        A = A + side_i*n - cap_removed*n
    return A
Returns the surface area of a cylinder with multiple holes. Calculation
will naively return a negative value or other impossible result if the
number of cylinders added is physically impossible. Holes may be of
different shapes, but must be perpendicular to the axis of the cylinder.

.. math::
    A = \pi D_o L + 2\cdot \frac{\pi D_o^2}{4} +
        \sum_{i}^n \left( \pi D_i L - 2\cdot \frac{\pi D_i^2}{4}\right)

Parameters
----------
Do : float
    Diameter of the exterior of the cylinder, [m]
L : float
    Length of the cylinder, [m]
holes : list
    List of tuples containing (diameter, count) pairs of descriptions for
    each of the hole sizes.

Returns
-------
A : float
    Surface area [m^2]

Examples
--------
>>> A_multiple_hole_cylinder(0.01, 0.1, [(0.005, 1)])
0.004830198704894308
def _get_repos(url):
    current_page = 1
    there_is_something_left = True
    repos_list = []
    while there_is_something_left:
        api_driver = GithubRawApi(
            url,
            url_params={"page": current_page},
            get_api_content_now=True
        )
        for repo in api_driver.api_content:
            repo_name = repo["name"]
            repo_user = repo["owner"]["login"]
            repos_list.append(GithubUserRepository(repo_user, repo_name))
        there_is_something_left = bool(api_driver.api_content)
        current_page += 1
    return repos_list
Gets repos in url :param url: Url :return: List of repositories in given url
def read_plain_int32(file_obj, count):
    length = 4 * count
    data = file_obj.read(length)
    if len(data) != length:
        raise EOFError("Expected {} bytes but got {} bytes".format(
            length, len(data)))
    # struct accepts a plain str format string; no need to encode it
    res = struct.unpack("<{}i".format(count), data)
    return res
Read `count` 32-bit ints using the plain encoding.
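The plain encoding above is just `count` little-endian 32-bit ints back to back; this sketch round-trips a few values through `struct.pack`:

```python
import io
import struct

def read_plain_int32(file_obj, count):
    # Plain encoding: `count` little-endian int32 values, back to back.
    length = 4 * count
    data = file_obj.read(length)
    if len(data) != length:
        raise EOFError("Expected {} bytes but got {}".format(length, len(data)))
    return struct.unpack("<{}i".format(count), data)

buf = io.BytesIO(struct.pack("<3i", 1, -2, 300))
assert read_plain_int32(buf, 3) == (1, -2, 300)
```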
def _vote_disagreement(self, votes):
    ret = []
    for candidate in votes:
        ret.append(0.0)
        lab_count = {}
        for lab in candidate:
            lab_count[lab] = lab_count.setdefault(lab, 0) + 1
        for lab in lab_count.keys():
            ret[-1] -= lab_count[lab] / self.n_students * \
                math.log(float(lab_count[lab]) / self.n_students)
    return ret
Return the disagreement measurement of the given number of votes. It uses
the vote entropy to measure the disagreement.

Parameters
----------
votes : list of int, shape==(n_samples, n_students)
    The predictions that each student gives to each sample.

Returns
-------
disagreement : list of float, shape=(n_samples)
    The vote entropy of the given votes.
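The vote-entropy measure above is just the Shannon entropy of the label distribution among the students' votes; a self-contained version (the `vote_entropy` name is mine) for one sample:

```python
import math

def vote_entropy(candidate_votes, n_students):
    # Shannon entropy of the label distribution among the students' votes.
    counts = {}
    for lab in candidate_votes:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n_students) * math.log(c / n_students)
                for c in counts.values())

# Unanimous votes carry zero disagreement; an even split is maximal (ln 2).
assert vote_entropy([0, 0, 0, 0], 4) == 0.0
assert abs(vote_entropy([0, 0, 1, 1], 4) - math.log(2)) < 1e-12
```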
def resume(self, instance_id):
    nt_ks = self.compute_conn
    nt_ks.servers.resume(instance_id)
    return True
Resume a server
def load_data_table(table_name, meta_file, meta):
    for table in meta['tables']:
        if table['name'] == table_name:
            prefix = os.path.dirname(meta_file)
            relative_path = os.path.join(prefix, meta['path'], table['path'])
            return pd.read_csv(relative_path), table
Return the contents and metadata of a given table. Args: table_name(str): Name of the table. meta_file(str): Path to the meta.json file. meta(dict): Contents of meta.json. Returns: tuple(pandas.DataFrame, dict)
def insertReadGroupSet(self, readGroupSet):
    programsJson = json.dumps(
        [protocol.toJsonDict(program)
         for program in readGroupSet.getPrograms()])
    statsJson = json.dumps(protocol.toJsonDict(readGroupSet.getStats()))
    try:
        models.Readgroupset.create(
            id=readGroupSet.getId(),
            datasetid=readGroupSet.getParentContainer().getId(),
            referencesetid=readGroupSet.getReferenceSet().getId(),
            name=readGroupSet.getLocalId(),
            programs=programsJson,
            stats=statsJson,
            dataurl=readGroupSet.getDataUrl(),
            indexfile=readGroupSet.getIndexFile(),
            attributes=json.dumps(readGroupSet.getAttributes()))
        for readGroup in readGroupSet.getReadGroups():
            self.insertReadGroup(readGroup)
    except Exception as e:
        raise exceptions.RepoManagerException(e)
Inserts the specified readGroupSet into this repository.
def delete_nve_member(self, nexus_host, nve_int_num, vni):
    starttime = time.time()
    path_snip = snipp.PATH_VNI_UPDATE % (nve_int_num, vni)
    self.client.rest_delete(path_snip, nexus_host)
    self.capture_and_print_timeshot(
        starttime, "delete_nve", switch=nexus_host)
Delete a member configuration on the NVE interface.
def connect(url, prefix=None, **kwargs):
    return connection(url, prefix=get_prefix(prefix), **kwargs)
Connect and return a connection instance from url.

Arguments:
    - url (str): xbahn connection url
def read_lines(in_file):
    with open(in_file, 'r') as inf:
        in_contents = inf.read().split('\n')
    return in_contents
Returns a list of lines from a input markdown file.
def ConfigureHostnames(config, external_hostname=None):
    if not external_hostname:
        try:
            external_hostname = socket.gethostname()
        except (OSError, IOError):
            print("Sorry, we couldn't guess your hostname.\n")
        external_hostname = RetryQuestion(
            "Please enter your hostname e.g. grr.example.com",
            "^[\\.A-Za-z0-9-]+$", external_hostname)

    print()
    frontend_url = RetryQuestion("Frontend URL", "^http://.*/$",
                                 "http://%s:8080/" % external_hostname)
    config.Set("Client.server_urls", [frontend_url])
    frontend_port = urlparse.urlparse(frontend_url).port or \
        grr_config.CONFIG.Get("Frontend.bind_port")
    config.Set("Frontend.bind_port", frontend_port)

    print()
    ui_url = RetryQuestion("AdminUI URL", "^http[s]*://.*$",
                           "http://%s:8000" % external_hostname)
    config.Set("AdminUI.url", ui_url)
    ui_port = urlparse.urlparse(ui_url).port or \
        grr_config.CONFIG.Get("AdminUI.port")
    config.Set("AdminUI.port", ui_port)
This configures the hostnames stored in the config.
def save(self, path: str): with open(path, 'wb') as out: np.save(out, self.lex) logger.info("Saved top-k lexicon to \"%s\"", path)
Save lexicon in Numpy array format. Lexicon will be specific to Sockeye model. :param path: Path to Numpy array output file.
def setup(self): for table_spec in self._table_specs: with self._conn: table_spec.setup(self._conn)
Setup cache tables.
def clean(self, text, **kwargs): if sys.version_info < (3, 0): if not isinstance(text, unicode): raise exceptions.UnicodeRequired clean_chunks = [] filth = Filth() for next_filth in self.iter_filth(text): clean_chunks.append(text[filth.end:next_filth.beg]) clean_chunks.append(next_filth.replace_with(**kwargs)) filth = next_filth clean_chunks.append(text[filth.end:]) return u''.join(clean_chunks)
This is the master method that cleans all of the filth out of the dirty dirty ``text``. All keyword arguments to this function are passed through to the ``Filth.replace_with`` method to fine-tune how the ``Filth`` is cleaned.
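The chunk-stitching idea behind ``clean`` — keep the text between consecutive filth spans and substitute a replacement for each span — can be sketched self-contained with ``re``. The pattern and placeholder below are illustrative, not scrubadub's own:

```python
import re

def redact(text, pattern, replace_with='{{REDACTED}}'):
    # Walk matches left to right: keep the clean slice between the
    # previous match's end and the next match's start, then substitute
    # a placeholder for the matched span itself.
    chunks, last_end = [], 0
    for match in re.finditer(pattern, text):
        chunks.append(text[last_end:match.start()])
        chunks.append(replace_with)
        last_end = match.end()
    chunks.append(text[last_end:])
    return ''.join(chunks)

print(redact('call 555-1234 or 555-9999', r'\d{3}-\d{4}'))
# call {{REDACTED}} or {{REDACTED}}
```

Building a list of chunks and joining once at the end avoids quadratic string concatenation on long inputs.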
def get_instance(self, payload): return RecordingInstance( self._version, payload, account_sid=self._solution['account_sid'], call_sid=self._solution['call_sid'], )
Build an instance of RecordingInstance :param dict payload: Payload response from the API :returns: twilio.rest.api.v2010.account.call.recording.RecordingInstance :rtype: twilio.rest.api.v2010.account.call.recording.RecordingInstance
def det_n(x):
    assert x.ndim == 3
    assert x.shape[1] == x.shape[2]
    if x.shape[1] == 1:
        return x[:, 0, 0]
    result = np.zeros(x.shape[0])
    for permutation in permutations(np.arange(x.shape[1])):
        # Leibniz expansion: each permutation contributes its signed product.
        sign = parity(permutation)
        result += np.prod([x[:, i, permutation[i]]
                           for i in range(x.shape[1])], 0) * sign
    return result
Given a stack of N square matrices with shape (N, D, D), return an array of their N determinants, computed via the Leibniz permutation expansion.
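The same permutation expansion can be sketched self-contained, with a stand-in `parity` helper based on inversion counting (the original's `parity` is defined elsewhere), and checked against `numpy.linalg.det`:

```python
import numpy as np
from itertools import permutations

def parity(perm):
    # Sign of a permutation: (-1) ** (number of inversions).
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_stack(x):
    # Leibniz expansion over all D! permutations, applied to a whole
    # stack of N square matrices of shape (N, D, D) at once.
    n, d, _ = x.shape
    result = np.zeros(n)
    for perm in permutations(range(d)):
        term = np.prod([x[:, i, perm[i]] for i in range(d)], axis=0)
        result += parity(perm) * term
    return result

batch = np.random.default_rng(0).normal(size=(4, 3, 3))
print(np.allclose(det_stack(batch), np.linalg.det(batch)))  # True
```

The expansion is O(D!) and only practical for small D; for large matrices `numpy.linalg.det` (LU-based) is the right tool.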
def rule_function_not_found(self, fun=None): sfun = str(fun) self.cry('rule_function_not_found:' + sfun) def not_found(*a, **k): return(sfun + ':rule_function_not_found', k.keys()) return not_found
Any function that does not exist will be added as a dummy function that gathers its inputs, easing a possible future implementation.
def get_all_regions(db_connection): if not hasattr(get_all_regions, '_results'): sql = 'CALL get_all_regions();' results = execute_sql(sql, db_connection) get_all_regions._results = results return get_all_regions._results
Gets a list of all regions. :return: A list of all regions. Results have regionID and regionName. :rtype: list
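The caching trick above — storing the first result as an attribute on the function object itself — is a zero-setup memoization; a minimal sketch with a stand-in fetcher (all names below are illustrative):

```python
def get_all_regions(fetch):
    # First call runs the (expensive) fetch and stores the result on
    # the function object; every later call returns the cached value.
    if not hasattr(get_all_regions, '_results'):
        get_all_regions._results = fetch()
    return get_all_regions._results

calls = []
def fake_fetch():
    calls.append(1)
    return ['EU', 'US']

print(get_all_regions(fake_fetch))  # ['EU', 'US']
print(get_all_regions(fake_fetch))  # ['EU', 'US']
print(len(calls))  # 1  (fetch ran only once)
```

The cache lives for the process lifetime and is shared across all callers; `functools.lru_cache` is the idiomatic alternative when the result depends on the arguments.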
def template_hook_collect(module, hook_name, *args, **kwargs): try: templatehook = getattr(module, hook_name) except AttributeError: return "" return format_html_join( sep="\n", format_string="{}", args_generator=( (response, ) for response in templatehook(*args, **kwargs) ) )
Helper to include in your own templatetag, for static TemplateHooks

Example::

    import myhooks
    from hooks.templatetags import template_hook_collect

    @register.simple_tag(takes_context=True)
    def hook(context, name, *args, **kwargs):
        return template_hook_collect(myhooks, name, context, *args, **kwargs)

:param module module: Module containing the template hook definitions
:param str hook_name: The hook name to be dispatched
:param \*args: Positional arguments, will be passed to hook callbacks
:param \*\*kwargs: Keyword arguments, will be passed to hook callbacks
:return: A concatenation of all callbacks' responses marked as safe (conditionally)
:rtype: str
def AddPmf(self, other): pmf = Pmf() for v1, p1 in self.Items(): for v2, p2 in other.Items(): pmf.Incr(v1 + v2, p1 * p2) return pmf
Computes the Pmf of the sum of values drawn from self and other. other: another Pmf returns: new Pmf
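The double loop in `AddPmf` is a discrete convolution: every pair of values contributes its sum with probability `p1 * p2`. A minimal dict-based sketch, independent of the Pmf class, for the classic two-dice example:

```python
from collections import defaultdict

def add_pmfs(pmf1, pmf2):
    # Distribution of v1 + v2 where v1 ~ pmf1 and v2 ~ pmf2,
    # each given as a {value: probability} dict.
    out = defaultdict(float)
    for v1, p1 in pmf1.items():
        for v2, p2 in pmf2.items():
            out[v1 + v2] += p1 * p2
    return dict(out)

die = {v: 1 / 6 for v in range(1, 7)}
two_dice = add_pmfs(die, die)
print(round(two_dice[7], 4))  # 0.1667  (6/36, the most likely total)
```

The result stays normalized because the pairwise products `p1 * p2` over all pairs sum to 1.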
def __validate(data, classes, labels):
    "Validator of inputs."
    if not isinstance(data, dict):
        raise TypeError(
            'data must be a dict! keys: sample ID or any unique identifier')
    if not isinstance(labels, dict):
        raise TypeError(
            'labels must be a dict! keys: sample ID or any unique identifier')
    if classes is not None:
        if not isinstance(classes, dict):
            raise TypeError(
                'classes must be a dict! keys: sample ID or any unique identifier')
        if not len(data) == len(labels) == len(classes):
            raise ValueError('Lengths of data, labels and classes do not match!')
        if not set(list(data)) == set(list(labels)) == set(list(classes)):
            raise ValueError(
                'data, classes and labels dictionaries must have the same keys!')
    elif len(data) != len(labels) or set(data) != set(labels):
        raise ValueError('data and labels must have matching lengths and keys!')
    num_features_in_elements = np.unique([sample.size for sample in data.values()])
    if len(num_features_in_elements) > 1:
        raise ValueError(
            'different samples have different number of features - invalid!')
    return True
Validator of inputs.
def _set_item(self, key, value): self._ensure_valid_index(value) value = self._sanitize_column(key, value) NDFrame._set_item(self, key, value) if len(self): self._check_setitem_copy()
Add series to DataFrame in specified column. If series is a numpy-array (not a Series/TimeSeries), it must be the same length as the DataFrames index or an error will be thrown. Series/TimeSeries will be conformed to the DataFrames index to ensure homogeneity.
def marvcli_user_rm(ctx, username): app = create_app() try: app.um.user_rm(username) except ValueError as e: ctx.fail(e.args[0])
Remove a user
def get_module_path(exc_type): module = inspect.getmodule(exc_type) return "{}.{}".format(module.__name__, exc_type.__name__)
Return the dotted module path of `exc_type`, including the class name.

e.g.::

    >>> get_module_path(MethodNotFound)
    'nameko.exceptions.MethodNotFound'
def ancestor_paths(start=None, limit=None):
    import utool as ut
    # Avoid a mutable default argument; treat None as "no limit".
    limit = ut.ensure_iterable(limit if limit is not None else [])
    limit = {expanduser(p) for p in limit}.union(set(limit))
    if start is None:
        start = os.getcwd()
    path = start
    prev = None
    while path != prev and prev not in limit:
        yield path
        prev = path
        path = dirname(path)
Yield `start` and every path above it, walking upward until the filesystem root or a path in `limit` is reached.
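The walk terminates because `dirname` is a fixed point at the root (`dirname('/') == '/'`), so `path == prev` eventually holds; a self-contained sketch using `posixpath` for deterministic behavior:

```python
import posixpath

def ancestor_paths(start, limit=()):
    # Yield `start` and each parent directory, stopping at the
    # filesystem root or once a path in `limit` has been yielded.
    limit = set(limit)
    path, prev = start, None
    while path != prev and prev not in limit:
        yield path
        prev, path = path, posixpath.dirname(path)

print(list(ancestor_paths('/a/b/c')))
# ['/a/b/c', '/a/b', '/a', '/']
print(list(ancestor_paths('/a/b/c', limit={'/a'})))
# ['/a/b/c', '/a/b', '/a']
```

Note the limit path itself is still yielded; the loop stops on the iteration after it becomes `prev`.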
def interpolate_delta_t(delta_t_table, tt): tt_array, delta_t_array = delta_t_table delta_t = _to_array(interp(tt, tt_array, delta_t_array, nan, nan)) missing = isnan(delta_t) if missing.any(): if missing.shape: tt = tt[missing] delta_t[missing] = delta_t_formula_morrison_and_stephenson_2004(tt) else: delta_t = delta_t_formula_morrison_and_stephenson_2004(tt) return delta_t
Return interpolated Delta T values for the times in `tt`. The 2xN table should provide TT values as element 0 and corresponding Delta T values for element 1. For times outside the range of the table, a long-term formula is used instead.
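The interpolate-with-fallback pattern can be sketched self-contained using `numpy.interp` with NaN sentinels for out-of-range points and the Morrison & Stephenson (2004) long-term parabola as the fallback. The table values below are made up for illustration, and the sketch works in calendar years rather than the original's TT Julian dates:

```python
import numpy as np

def delta_t_long_term(year):
    # Morrison & Stephenson (2004) long-term parabola (seconds),
    # with t measured in centuries from 1820.
    t = (np.asarray(year, dtype=float) - 1820.0) / 100.0
    return -20.0 + 32.0 * t ** 2

def interpolate_with_fallback(table_years, table_values, years):
    # Inside the table, interpolate linearly; np.interp marks points
    # outside the table with NaN, which are then filled by the formula.
    out = np.interp(years, table_years, table_values,
                    left=np.nan, right=np.nan)
    missing = np.isnan(out)
    out[missing] = delta_t_long_term(np.asarray(years)[missing])
    return out

years = np.array([1900.0, 2000.0, 2500.0])
table_years = np.array([1950.0, 2050.0])
table_values = np.array([29.0, 93.0])
result = interpolate_with_fallback(table_years, table_values, years)
print(result)
```

Only 2000.0 falls inside the toy table (interpolating to 61.0); 1900.0 and 2500.0 are filled from the parabola.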
def deactivateAaPdpContextRequest(): a = TpPd(pd=0x8) b = MessageType(mesType=0x53) c = AaDeactivationCauseAndSpareHalfOctets() packet = a / b / c return packet
DEACTIVATE AA PDP CONTEXT REQUEST Section 9.5.13
def parse_attributes(fields):
    attributes = {}
    for field in fields:
        # Split only on the first '=' so values may themselves contain '='.
        key, value = field.split('=', 1)
        attributes[key] = value
    return attributes
Parse list of key=value strings into a dict
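A compact variant that splits each field only once, so attribute values may themselves contain `=` (the field names are illustrative):

```python
def parse_attributes(fields):
    # str.split with maxsplit=1 yields exactly [key, value] per field,
    # leaving any further '=' characters inside the value intact.
    return dict(field.split('=', 1) for field in fields)

attrs = parse_attributes(['ID=gene1', 'Note=a=b'])
print(attrs)  # {'ID': 'gene1', 'Note': 'a=b'}
```

Splitting on every `=` would raise or silently drop data for values like `Note=a=b`; `maxsplit=1` sidesteps that.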
def multipart_delete(self, multipart): multipart.delete() db.session.commit() if multipart.file_id: remove_file_data.delay(str(multipart.file_id)) return self.make_response('', 204)
Abort a multipart upload. :param multipart: A :class:`invenio_files_rest.models.MultipartObject` instance. :returns: A Flask response.
def validate_json(self): if not hasattr(self, 'guidance_json'): return False checksum = self.guidance_json.get('checksum') contents = self.guidance_json.get('db') hash_key = ("{}{}".format(json.dumps(contents, sort_keys=True), self.assignment.endpoint).encode()) digest = hashlib.md5(hash_key).hexdigest() if not checksum: log.warning("Checksum on guidance not found. Invalidating file") return False if digest != checksum: log.warning("Checksum %s did not match actual digest %s", checksum, digest) return False return True
Ensure that the checksum matches.
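The checksum scheme can be exercised in isolation: the digest is an MD5 over the canonical (sorted-key) JSON of the contents concatenated with an endpoint string, so any mutation of the contents invalidates it. The payload and endpoint below are hypothetical:

```python
import hashlib
import json

def digest_of(contents, endpoint):
    # Canonical serialization: sort_keys=True makes the JSON (and hence
    # the digest) independent of dict insertion order.
    key = '{}{}'.format(json.dumps(contents, sort_keys=True), endpoint).encode()
    return hashlib.md5(key).hexdigest()

contents = {'questions': [1, 2, 3]}
endpoint = 'cal/cs61a/fa19'  # hypothetical assignment endpoint
checksum = digest_of(contents, endpoint)

print(digest_of(contents, endpoint) == checksum)           # True
print(digest_of({'questions': []}, endpoint) == checksum)  # False
```

Including the endpoint in the hashed key ties the checksum to one assignment, so guidance copied between assignments fails validation.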
def _create_metadata_cache(cache_location): cache_url = os.getenv('GUTENBERG_FUSEKI_URL') if cache_url: return FusekiMetadataCache(cache_location, cache_url) try: return SleepycatMetadataCache(cache_location) except InvalidCacheException: logging.warning('Unable to create cache based on BSD-DB. ' 'Falling back to SQLite backend. ' 'Performance may be degraded significantly.') return SqliteMetadataCache(cache_location)
Creates a new metadata cache instance appropriate for this platform.
def _submit_bundle(cmd_args, app):
    sac = streamsx.rest.StreamingAnalyticsConnection(service_name=cmd_args.service_name)
    sas = sac.get_streaming_analytics()
    sr = sas.submit_job(bundle=app.app, job_config=app.cfg[ctx.ConfigParams.JOB_CONFIG])
    rc = 1
    if 'exception' in sr:
        rc = 1
    elif 'status_code' in sr:
        try:
            # Compare the parsed status code, not an int-cast boolean.
            rc = 0 if int(sr['status_code']) == 200 else 1
        except (TypeError, ValueError):
            rc = 1
    elif 'id' in sr or 'jobId' in sr:
        rc = 0
    sr['return_code'] = rc
    return sr
Submit an existing bundle to the service
def getPeer(self, url): peers = list(models.Peer.select().where(models.Peer.url == url)) if len(peers) == 0: raise exceptions.PeerNotFoundException(url) return peers[0]
Finds a peer by URL and returns the first peer record with that URL.
def get_default_ENV(env): global default_ENV try: return env['ENV'] except KeyError: if not default_ENV: import SCons.Environment default_ENV = SCons.Environment.Environment()['ENV'] return default_ENV
A fiddlin' little function that has an 'import SCons.Environment' which can't be moved to the top level without creating an import loop. Since this import creates a local variable named 'SCons', it blocks access to the global variable, so we move it here to prevent complaints about local variables being used uninitialized.
def ring_to_nested(ring_index, nside): nside = np.asarray(nside, dtype=np.intc) return _core.ring_to_nested(ring_index, nside)
Convert a HEALPix 'ring' index to a HEALPix 'nested' index Parameters ---------- ring_index : int or `~numpy.ndarray` Healpix index using the 'ring' ordering nside : int or `~numpy.ndarray` Number of pixels along the side of each of the 12 top-level HEALPix tiles Returns ------- nested_index : int or `~numpy.ndarray` Healpix index using the 'nested' ordering
def rgb2term(r: int, g: int, b: int) -> str: return hex2term_map[rgb2termhex(r, g, b)]
Convert an rgb value to a terminal code.
def dvcrss(s1, s2):
    assert len(s1) == 6 and len(s2) == 6
    s1 = stypes.toDoubleVector(s1)
    s2 = stypes.toDoubleVector(s2)
    sout = stypes.emptyDoubleVector(6)
    libspice.dvcrss_c(s1, s2, sout)
    return stypes.cVectorToPython(sout)
Compute the cross product of two 3-dimensional vectors and the derivative of this cross product. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/dvcrss_c.html :param s1: Left hand state for cross product and derivative. :type s1: 6-Element Array of floats :param s2: Right hand state for cross product and derivative. :type s2: 6-Element Array of floats :return: State associated with cross product of positions. :rtype: 6-Element Array of floats
def are_equivalent(*args, **kwargs): if len(args) == 1: return True first_item = args[0] for item in args[1:]: if type(item) != type(first_item): return False if isinstance(item, dict): if not are_dicts_equivalent(item, first_item): return False elif hasattr(item, '__iter__') and not isinstance(item, (str, bytes, dict)): if len(item) != len(first_item): return False for value in item: if value not in first_item: return False for value in first_item: if value not in item: return False else: if item != first_item: return False return True
Indicate if arguments passed to this function are equivalent. .. hint:: This checker operates recursively on the members contained within iterables and :class:`dict <python:dict>` objects. .. caution:: If you only pass one argument to this checker - even if it is an iterable - the checker will *always* return ``True``. To evaluate members of an iterable for equivalence, you should instead unpack the iterable into the function like so: .. code-block:: python obj = [1, 1, 1, 2] result = are_equivalent(*obj) # Will return ``False`` by unpacking and evaluating the iterable's members result = are_equivalent(obj) # Will always return True :param args: One or more values, passed as positional arguments. :returns: ``True`` if ``args`` are equivalent, and ``False`` if not. :rtype: :class:`bool <python:bool>` :raises SyntaxError: if ``kwargs`` contains duplicate keyword parameters or duplicates keyword parameters passed to the underlying validator
def _validate_slices_form_uniform_grid(slice_datasets): invariant_properties = [ 'Modality', 'SOPClassUID', 'SeriesInstanceUID', 'Rows', 'Columns', 'PixelSpacing', 'PixelRepresentation', 'BitsAllocated', 'BitsStored', 'HighBit', ] for property_name in invariant_properties: _slice_attribute_equal(slice_datasets, property_name) _validate_image_orientation(slice_datasets[0].ImageOrientationPatient) _slice_ndarray_attribute_almost_equal(slice_datasets, 'ImageOrientationPatient', 1e-5) slice_positions = _slice_positions(slice_datasets) _check_for_missing_slices(slice_positions)
Perform various data checks to ensure that the list of slices forms an evenly-spaced grid of data. Some of these checks are probably not required if the data follows the DICOM specification; however, it seems pertinent to check anyway.
def get_state_machine(self): if self.parent: if self.is_root_state: return self.parent else: return self.parent.get_state_machine() return None
Get a reference to the state machine that the state belongs to

:rtype: rafcon.core.state_machine.StateMachine
:return: respective state machine
def ensure_each_wide_obs_chose_an_available_alternative(obs_id_col, choice_col, availability_vars, wide_data): wide_availability_values = wide_data[list( availability_vars.values())].values unavailable_condition = ((wide_availability_values == 0).sum(axis=1) .astype(bool)) problem_obs = [] for idx, row in wide_data.loc[unavailable_condition].iterrows(): if row.at[availability_vars[row.at[choice_col]]] != 1: problem_obs.append(row.at[obs_id_col]) if problem_obs != []: msg = "The following observations chose unavailable alternatives:\n{}" raise ValueError(msg.format(problem_obs)) return None
Checks whether or not each observation with a restricted choice set chose an alternative that was personally available to him or her. Will raise a helpful ValueError if this is not the case. Parameters ---------- obs_id_col : str. Denotes the column in `wide_data` that contains the observation ID values for each row. choice_col : str. Denotes the column in `wide_data` that contains a one if the alternative pertaining to the given row was the observed outcome for the observation pertaining to the given row and a zero otherwise. availability_vars : dict. There should be one key value pair for each alternative that is observed in the dataset. Each key should be the alternative id for the alternative, and the value should be the column heading in `wide_data` that denotes (using ones and zeros) whether an alternative is available/unavailable, respectively, for a given observation. Alternative id's, i.e. the keys, must be integers. wide_data : pandas dataframe. Contains one row for each observation. Should have the specified `[obs_id_col, choice_col] + availability_vars.values()` columns. Returns ------- None
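The core check reduces to one lookup per row: find the availability column for the alternative each observation chose, and flag rows where that flag is not 1. A plain-Python sketch with made-up field names:

```python
# Toy wide-format rows (hypothetical field names): each observation
# records its chosen alternative and per-alternative availability flags.
rows = [
    {'obs_id': 1, 'choice': 1, 'avail_1': 1, 'avail_2': 1},
    {'obs_id': 2, 'choice': 2, 'avail_1': 1, 'avail_2': 1},
    {'obs_id': 3, 'choice': 1, 'avail_1': 0, 'avail_2': 1},
]
# Maps each alternative id to its availability column, as in the
# `availability_vars` argument above.
availability_vars = {1: 'avail_1', 2: 'avail_2'}

# Flag any observation whose chosen alternative was not available to it.
problem_obs = [r['obs_id'] for r in rows
               if r[availability_vars[r['choice']]] != 1]
print(problem_obs)  # [3]
```

Observation 3 chose alternative 1 while `avail_1` was 0, so it would trigger the ValueError in the function above.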
def header(msg): width = len(msg) + 4 s = [] s.append('-' * width) s.append("| %s |" % msg) s.append('-' * width) return '\n'.join(s)
Wrap `msg` in bars to create a header effect
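The width arithmetic can be verified directly: `len(msg) + 4` accounts for the `'| '` and `' |'` padding, so the bars always line up with the boxed line. A runnable copy of the same logic:

```python
def header(msg):
    # Bars sized to len(msg) + 4: two chars of '| ' plus two of ' |'.
    width = len(msg) + 4
    return '\n'.join(['-' * width, '| %s |' % msg, '-' * width])

print(header('Results'))
# -----------
# | Results |
# -----------
```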