def get_json(self, path, **kwargs):
    url = self._make_url(path)
    headers = kwargs.setdefault('headers', {})
    headers.update({'Accept': 'application/json'})
    response = self._make_request("GET", url, **kwargs)
    return json.loads(response.text)
Perform an HTTP GET request with JSON headers against the specified path on Device Cloud, using this account's credentials and base URL.

This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library's `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_, and all keyword arguments will be passed on to that method. This method will automatically add the ``Accept: application/json`` header and parse the JSON response from Device Cloud.

:param str path: Device Cloud path to GET
:param int retries: The number of times the request should be retried if an
    unsuccessful response is received. Most likely, you should leave this at 0.
:raises DeviceCloudHttpException: if a non-success response to the request is
    received from Device Cloud
:returns: A python data structure containing the results of calling
    ``json.loads`` on the body of the response from Device Cloud.
def predict_variant_effects(variant, raise_on_error=False):
    if len(variant.gene_ids) == 0:
        effects = [Intergenic(variant)]
    else:
        effects = []
        transcripts_grouped_by_gene = groupby_field(variant.transcripts, 'gene_id')
        for gene_id in sorted(variant.gene_ids):
            if gene_id not in transcripts_grouped_by_gene:
                gene = variant.ensembl.gene_by_id(gene_id)
                effects.append(Intragenic(variant, gene))
            else:
                for transcript in transcripts_grouped_by_gene[gene_id]:
                    if raise_on_error:
                        effect = predict_variant_effect_on_transcript(
                            variant=variant,
                            transcript=transcript)
                    else:
                        effect = predict_variant_effect_on_transcript_or_failure(
                            variant=variant,
                            transcript=transcript)
                    effects.append(effect)
    return EffectCollection(effects)
Determine the effects of a variant on any transcripts it overlaps.
Returns an EffectCollection object.

Parameters
----------
variant : Variant

raise_on_error : bool
    Raise an exception if we encounter an error while trying to determine
    the effect of this variant on a transcript, or simply log the error
    and continue.
def _input_file_as_html_links(cls, session: AppSession):
    scrape_result = session.factory['HTMLScraper'].scrape_file(
        session.args.input_file,
        encoding=session.args.local_encoding or 'utf-8'
    )
    for context in scrape_result.link_contexts:
        yield context.link
Read input file as HTML and return the links.
def _after_request(self, response):
    cookie_secure = (current_app.config['OIDC_COOKIE_SECURE'] and
                     current_app.config.get('OIDC_ID_TOKEN_COOKIE_SECURE', True))
    if getattr(g, 'oidc_id_token_dirty', False):
        if g.oidc_id_token:
            signed_id_token = self.cookie_serializer.dumps(g.oidc_id_token)
            response.set_cookie(
                current_app.config['OIDC_ID_TOKEN_COOKIE_NAME'],
                signed_id_token,
                secure=cookie_secure,
                httponly=True,
                max_age=current_app.config['OIDC_ID_TOKEN_COOKIE_TTL'])
        else:
            response.set_cookie(
                current_app.config['OIDC_ID_TOKEN_COOKIE_NAME'],
                '',
                path=current_app.config['OIDC_ID_TOKEN_COOKIE_PATH'],
                secure=cookie_secure,
                httponly=True,
                expires=0)
    return response
Set a new ID token cookie if the ID token has changed.
def find_protein_complexes(model):
    complexes = []
    for rxn in model.reactions:
        if not rxn.gene_reaction_rule:
            continue
        size = find_top_level_complex(rxn.gene_reaction_rule)
        if size >= 2:
            complexes.append(rxn)
    return complexes
Find reactions that are catalyzed by at least a heterodimer.

Parameters
----------
model : cobra.Model
    The metabolic model under investigation.

Returns
-------
list
    Reactions whose gene-protein-reaction association contains at least
    one logical AND combining different gene products (heterodimer).
def power_in_band(events, dat, s_freq, frequency):
    dat = diff(dat)
    pw = empty(events.shape[0])
    pw.fill(nan)
    for i, one_event in enumerate(events):
        x0 = one_event[0]
        x1 = one_event[2]
        if x0 < 0 or x1 >= len(dat):
            pw[i] = nan
        else:
            sf, Pxx = periodogram(dat[x0:x1], s_freq)
            b0 = asarray([abs(x - frequency[0]) for x in sf]).argmin()
            b1 = asarray([abs(x - frequency[1]) for x in sf]).argmin()
            pw[i] = mean(Pxx[b0:b1])
    return pw
Define power of the signal within frequency band.

Parameters
----------
events : ndarray (dtype='int')
    N x 3 matrix with start, peak, end samples
dat : ndarray (dtype='float')
    vector with the original data
s_freq : float
    sampling frequency
frequency : tuple of float
    low and high frequency of spindle band, for window

Returns
-------
ndarray (dtype='float')
    vector with power
def transfer_and_wait(
        self,
        registry_address: PaymentNetworkID,
        token_address: TokenAddress,
        amount: TokenAmount,
        target: Address,
        identifier: PaymentID = None,
        transfer_timeout: int = None,
        secret: Secret = None,
        secret_hash: SecretHash = None,
):
    payment_status = self.transfer_async(
        registry_address=registry_address,
        token_address=token_address,
        amount=amount,
        target=target,
        identifier=identifier,
        secret=secret,
        secret_hash=secret_hash,
    )
    payment_status.payment_done.wait(timeout=transfer_timeout)
    return payment_status
Do a transfer to `target` of the given `amount` of `token_address` and wait for the payment to complete or time out.
def context_lookup(self, vars):
    while isinstance(vars, IscmExpr):
        vars = vars.resolve(self.context)
    for (k, v) in vars.items():
        if isinstance(v, IscmExpr):
            vars[k] = v.resolve(self.context)
    return vars
Look up the variables in the provided dictionary, resolving them against entries in the context.
def choose_best_amplicon(self, amplicon_tuples):
    quality = 0
    amplicon_length = 0
    best_amplicon = None
    for amplicon in amplicon_tuples:
        if int(amplicon[4]) >= quality and int(amplicon[5]) >= amplicon_length:
            quality = int(amplicon[4])
            amplicon_length = int(amplicon[5])
            best_amplicon = amplicon
    return best_amplicon
Iterates over amplicon tuples and returns the one with the highest quality and amplicon length.
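The selection rule above can be exercised as a standalone sketch. The tuple layout (quality at index 4, length at index 5, stored as strings) is inferred from the `int()` casts in the code; the sample amplicon tuples below are made up for illustration.

```python
# Standalone version of the selection logic: pick the amplicon whose quality
# and length are both >= the best seen so far.
def choose_best_amplicon(amplicon_tuples):
    quality = 0
    amplicon_length = 0
    best_amplicon = None
    for amplicon in amplicon_tuples:
        if int(amplicon[4]) >= quality and int(amplicon[5]) >= amplicon_length:
            quality = int(amplicon[4])
            amplicon_length = int(amplicon[5])
            best_amplicon = amplicon
    return best_amplicon

amplicons = [
    ('chr1', 100, 200, 'ampA', '30', '100'),
    ('chr1', 150, 300, 'ampB', '45', '150'),
    ('chr1', 150, 280, 'ampC', '45', '130'),  # same quality, shorter: rejected
]
best = choose_best_amplicon(amplicons)
```

Note that a candidate must beat (or tie) the current best on *both* criteria to be selected, so 'ampC' loses on length despite equal quality.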
def _merge_with_defaults(params):
    marks_params = [
        tz.merge(default, param)
        for default, param in zip(itertools.repeat(_default_params['marks']),
                                  params['marks'])
    ] if 'marks' in params else [_default_params['marks']]
    merged_without_marks = tz.merge_with(
        tz.merge,
        tz.dissoc(_default_params, 'marks'),
        tz.dissoc(params, 'marks')
    )
    return tz.merge(merged_without_marks, {'marks': marks_params})
Performs a 2-level deep merge of params with _default_params, with correct merging of the params for each mark. This is a bit complicated, since params['marks'] is a list and we need to make sure each mark gets the default params.
def client(self, container):
    self._client_chk.check(container)
    return ContainerClient(self._client, int(container))
Return a client instance that is bound to that container.

:param container: container id
:return: Client object bound to the specified container id
def setKey(self, key, value):
    data = self.getDictionary()
    data[key] = value
    self.setDictionary(data)
Sets the value for the specified dictionary key
def add(self, term):
    if isinstance(term, Conjunction):
        for term_ in term.terms:
            self.add(term_)
    elif isinstance(term, Term):
        self._terms.append(term)
    else:
        raise TypeError('Not a Term or Conjunction')
Add a term to the conjunction.

Args:
    term (:class:`Term`, :class:`Conjunction`): term to add; if a
        :class:`Conjunction`, all of its terms are added to the
        current conjunction.
Raises:
    :class:`TypeError`: when *term* is an invalid type
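A minimal sketch can show the flattening behaviour of `add()`: adding a conjunction merges its terms into the current one rather than nesting it. The `Term`/`Conjunction` stand-ins below are simplified stand-ins, not the library's real classes.

```python
# Hypothetical minimal Term/Conjunction showing that add() flattens nested
# conjunctions and rejects anything else with TypeError.
class Term:
    def __init__(self, name):
        self.name = name

class Conjunction:
    def __init__(self, terms=()):
        self._terms = []
        for t in terms:
            self.add(t)

    @property
    def terms(self):
        return list(self._terms)

    def add(self, term):
        if isinstance(term, Conjunction):
            for term_ in term.terms:
                self.add(term_)
        elif isinstance(term, Term):
            self._terms.append(term)
        else:
            raise TypeError('Not a Term or Conjunction')

c = Conjunction([Term('a')])
c.add(Conjunction([Term('b'), Term('c')]))  # flattened, not nested
names = [t.name for t in c.terms]
```

After the second `add`, `c` holds three flat terms rather than a term and a nested conjunction.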
def seebeck_spb(eta, Lambda=0.5):
    from fdint import fdk
    return constants.k / constants.e * (
        (2. + Lambda) * fdk(1. + Lambda, eta) /
        ((1. + Lambda) * fdk(Lambda, eta)) - eta
    ) * 1e+6
Seebeck coefficient analytic formula in the single parabolic band (SPB) model.
def spaced_indexes(len_, n, trunc=False):
    if n is None:
        return np.arange(len_)
    all_indexes = np.arange(len_)
    if trunc:
        n = min(len_, n)
    if n == 0:
        return np.empty(0)
    stride = len_ // n
    try:
        indexes = all_indexes[0:-1:stride]
    except ValueError:
        raise ValueError('cannot slice list of len_=%r into n=%r parts'
                         % (len_, n))
    return indexes
Returns n evenly spaced indexes. Returns as many as possible if trunc is true
def umount(mountpoint, persist=False):
    cmd_args = ['umount', mountpoint]
    try:
        subprocess.check_output(cmd_args)
    except subprocess.CalledProcessError as e:
        log('Error unmounting {}\n{}'.format(mountpoint, e.output))
        return False
    if persist:
        return fstab_remove(mountpoint)
    return True
Unmount a filesystem
def end_script(self):
    if self.remote_bridge.status not in (BRIDGE_STATUS.RECEIVED,
                                         BRIDGE_STATUS.WAITING):
        return [1]
    self.remote_bridge.status = BRIDGE_STATUS.RECEIVED
    return [0]
Indicate that we have finished receiving a script.
async def wait(self):
    if self._ping_task is None:
        raise RuntimeError('Response is not started')
    with contextlib.suppress(asyncio.CancelledError):
        await self._ping_task
EventSourceResponse object is used for streaming data to the client; this method returns a future, so we can wait until the connection is closed or another task explicitly calls the ``stop_streaming`` method.
def parse_cstring(stream, offset):
    stream.seek(offset)
    string = ""
    while True:
        char = struct.unpack('c', stream.read(1))[0]
        if char == b'\x00':
            return string
        else:
            string += char.decode()
parse_cstring will parse a null-terminated string in a bytestream. The string will be decoded with the UTF-8 decoder; of course, since we are doing this byte-by-byte, it won't really work for all Unicode strings. TODO: add proper Unicode support
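Since the parser only needs a seekable file-like object, it can be exercised against an in-memory `io.BytesIO` buffer. The sample bytes below are made up for illustration.

```python
import io
import struct

# Same parser as above: read one byte at a time until the NUL terminator.
def parse_cstring(stream, offset):
    stream.seek(offset)
    string = ""
    while True:
        char = struct.unpack('c', stream.read(1))[0]
        if char == b'\x00':
            return string
        string += char.decode()

# b'skip\x00' occupies offsets 0-4, so offset 5 starts the string 'hello'.
buf = io.BytesIO(b'skip\x00hello\x00world\x00')
s = parse_cstring(buf, 5)
```

Reading past the end of the buffer would make `struct.unpack` fail on an empty read, so callers are expected to pass an offset that precedes a terminator.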
def reqContractDetails(self, contract: Contract) -> List[ContractDetails]:
    return self._run(self.reqContractDetailsAsync(contract))
Get a list of contract details that match the given contract.
If the returned list is empty then the contract is not known;
if the list has multiple values then the contract is ambiguous.

The fully qualified contract is available in the
ContractDetails.contract attribute.

This method is blocking.

https://interactivebrokers.github.io/tws-api/contract_details.html

Args:
    contract: The contract to get details for.
def ssh_authorized_key_exists(public_key, application_name, user=None):
    with open(authorized_keys(application_name, user)) as keys:
        return ('%s' % public_key) in keys.read()
Check if given key is in the authorized_key file.

:param public_key: Public key.
:type public_key: str
:param application_name: Name of application eg nova-compute-something
:type application_name: str
:param user: The user that the ssh asserts are for.
:type user: str
:returns: Whether given key is in the authorized_key file.
:rtype: boolean
def __generate_tree(self, top, src, resources, models, ctrls, views, utils):
    res = self.__mkdir(top)
    for fn in (src, models, ctrls, views, utils):
        res = self.__mkpkg(fn) or res
    res = self.__mkdir(resources) or res
    res = self.__mkdir(os.path.join(resources, "ui", "builder")) or res
    res = self.__mkdir(os.path.join(resources, "ui", "styles")) or res
    res = self.__mkdir(os.path.join(resources, "external")) or res
    return res
Creates directories and packages
def get_pwm(self, led_num):
    self.__check_range('led_number', led_num)
    register_low = self.calc_led_register(led_num)
    return self.__get_led_value(register_low)
Generic getter for an LED's PWM value.
def custom_property_prefix_strict(instance):
    for prop_name in instance.keys():
        if (instance['type'] in enums.PROPERTIES and
                prop_name not in enums.PROPERTIES[instance['type']] and
                prop_name not in enums.RESERVED_PROPERTIES and
                not CUSTOM_PROPERTY_PREFIX_RE.match(prop_name)):
            yield JSONError("Custom property '%s' should have a type that "
                            "starts with 'x_' followed by a source unique "
                            "identifier (like a domain name with dots "
                            "replaced by hyphen), a hyphen and then the name."
                            % prop_name, instance['id'], 'custom-prefix')
Ensure custom properties follow strict naming style conventions. Does not check property names in custom objects.
def process_user_info_response(self, response):
    mapping = (
        ('username', 'preferred_username'),
        ('email', 'email'),
        ('last_name', 'family_name'),
        ('first_name', 'given_name'),
    )
    return {dest: response[source] for dest, source in mapping}
Process the user info response data.

By default, this simply maps the edX user info key-values (example
below) to Django-friendly names. If your provider returns different
fields, you should sub-class this class and override this method.

.. code-block:: python

    {
        "username": "jdoe",
        "email": "jdoe@example.com",
        "first_name": "Jane",
        "last_name": "Doe"
    }

Arguments:
    response (dict): User info data

Returns:
    dict
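The dict-comprehension mapping above can be checked on a sample payload. The OIDC-style source claim names (`preferred_username`, `family_name`, `given_name`) are taken from the tuple in the code; the sample values are made up.

```python
# Apply the (destination, source) mapping used by process_user_info_response.
mapping = (
    ('username', 'preferred_username'),
    ('email', 'email'),
    ('last_name', 'family_name'),
    ('first_name', 'given_name'),
)
response = {
    'preferred_username': 'jdoe',
    'email': 'jdoe@example.com',
    'family_name': 'Doe',
    'given_name': 'Jane',
}
user_info = {dest: response[source] for dest, source in mapping}
```

Note that a missing source claim raises `KeyError`, so subclasses overriding this method may want `response.get(source)` instead.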
def _read_json_db(self):
    try:
        metadata_str = self.db_io.read_metadata_from_uri(
            self.layer_uri, 'json')
    except HashNotFoundError:
        return {}
    try:
        metadata = json.loads(metadata_str)
        return metadata
    except ValueError:
        message = tr('the file DB entry for %s does not appear to be '
                     'valid JSON')
        message %= self.layer_uri
        raise MetadataReadError(message)
Read metadata from a JSON string stored in a DB.

:return: the parsed JSON dict
:rtype: dict
def changelog_file_option_validator(ctx, param, value):
    path = Path(value)
    if not path.exists():
        filename = click.style(path.name, fg="blue", bold=True)
        ctx.fail(
            "\n"
            f"  {x_mark} Unable to find (unknown)\n"
            '  Run "$ brau init" to create one'
        )
    return path
Checks that the given file path exists in the current working directory and returns a :class:`~pathlib.Path` object. If the file does not exist, raises a :class:`~click.UsageError` exception.
def GET_namespace_num_names(self, path_info, namespace_id):
    if not check_namespace(namespace_id):
        return self._reply_json({'error': 'Invalid namespace'}, status_code=400)
    blockstackd_url = get_blockstackd_url()
    name_count = blockstackd_client.get_num_names_in_namespace(
        namespace_id, hostport=blockstackd_url)
    if json_is_error(name_count):
        log.error("Failed to load namespace count for {}: {}".format(
            namespace_id, name_count['error']))
        return self._reply_json(
            {'error': 'Failed to load namespace count: {}'.format(
                name_count['error'])},
            status_code=404)
    self._reply_json({'names_count': name_count})
Get the number of names in a namespace.

Reply the number on success.
Reply 404 if the namespace does not exist.
Reply 502 on failure to talk to the blockstack server.
def logtrick_sgd(sgd):
    @wraps(sgd)
    def new_sgd(fun, x0, data, bounds=None, eval_obj=False, **sgd_kwargs):
        if bounds is None:
            return sgd(fun, x0, data, bounds=bounds, eval_obj=eval_obj,
                       **sgd_kwargs)
        logx, expx, gradx, bounds = _logtrick_gen(bounds)
        if bool(eval_obj):
            def new_fun(x, *fargs, **fkwargs):
                o, g = fun(expx(x), *fargs, **fkwargs)
                return o, gradx(g, x)
        else:
            def new_fun(x, *fargs, **fkwargs):
                return gradx(fun(expx(x), *fargs, **fkwargs), x)
        result = sgd(new_fun, logx(x0), data, bounds=bounds,
                     eval_obj=eval_obj, **sgd_kwargs)
        result['x'] = expx(result['x'])
        return result
    return new_sgd
Log-Trick decorator for stochastic gradients.

This decorator implements the "log trick" for optimizing positive
bounded variables using SGD. It will apply this trick for any variables
that correspond to a Positive() bound.

Examples
--------
>>> from ..optimize import sgd
>>> from ..btypes import Bound, Positive

Here is an example where we may want to enforce a particular parameter
or parameters to be strictly greater than zero,

>>> def cost(w, data, lambda_):
...     N = len(data)
...     y, X = data[:, 0], data[:, 1:]
...     y_est = X.dot(w)
...     ww = w.T.dot(w)
...     obj = (y - y_est).sum() / N + lambda_ * ww
...     gradw = - 2 * X.T.dot(y - y_est) / N + 2 * lambda_ * w
...     return obj, gradw

Now let's enforce that the `w` are positive,

>>> bounds = [Positive(), Positive()]
>>> new_sgd = logtrick_sgd(sgd)

Data

>>> y = np.linspace(1, 10, 100) + np.random.randn(100) + 1
>>> X = np.array([np.ones(100), np.linspace(1, 100, 100)]).T
>>> data = np.hstack((y[:, np.newaxis], X))

Initial values

>>> w_0 = np.array([1., 1.])
>>> lambda_0 = .25

>>> res = new_sgd(cost, w_0, data, args=(lambda_0,), bounds=bounds,
...               batch_size=10, eval_obj=True)
>>> res.x >= 0
array([ True,  True], dtype=bool)

Note
----
This decorator only works on unstructured optimizers. However, it can
be used with structured_minimizer, so long as it is the inner wrapper.
def getReffs(self, level: int=1, subreference: CtsReference=None) -> CtsReferenceSet:
    if not subreference and hasattr(self, "reference"):
        subreference = self.reference
    elif subreference and not isinstance(subreference, CtsReference):
        subreference = CtsReference(subreference)
    return self.getValidReff(level=level, reference=subreference)
CtsReference available at a given level

:param level: Depth required. If not set, should retrieve first encountered level (1 based)
:param subreference: Subreference (optional)
:returns: List of levels
def spp_call_peaks(
        self, treatment_bam, control_bam, treatment_name, control_name,
        output_dir, broad, cpus, qvalue=None):
    broad = "TRUE" if broad else "FALSE"
    cmd = self.tools.Rscript + " `which spp_peak_calling.R` {0} {1} {2} {3} {4} {5} {6}".format(
        treatment_bam, control_bam, treatment_name, control_name, broad,
        cpus, output_dir
    )
    if qvalue is not None:
        cmd += " {}".format(qvalue)
    return cmd
Build command for R script to call peaks with SPP.

:param str treatment_bam: Path to file with data for treatment sample.
:param str control_bam: Path to file with data for control sample.
:param str treatment_name: Name for the treatment sample.
:param str control_name: Name for the control sample.
:param str output_dir: Path to folder for output.
:param str | bool broad: Whether to specify broad peak calling mode.
:param int cpus: Number of cores the script may use.
:param float qvalue: FDR, as decimal value
:return str: Command to run.
def _assert_no_error(error, exception_class=None):
    if error == 0:
        return
    cf_error_string = Security.SecCopyErrorMessageString(error, None)
    output = _cf_string_to_unicode(cf_error_string)
    CoreFoundation.CFRelease(cf_error_string)
    if output is None or output == u'':
        output = u'OSStatus %s' % error
    if exception_class is None:
        exception_class = ssl.SSLError
    raise exception_class(output)
Checks the return code and throws an exception if there is an error to report
def check_string(sql_string, add_semicolon=False):
    prepped_sql = sqlprep.prepare_sql(sql_string, add_semicolon=add_semicolon)
    success, msg = ecpg.check_syntax(prepped_sql)
    return success, msg
Check whether a string is valid PostgreSQL. Returns a boolean indicating validity and a message from ecpg, which will be an empty string if the input was valid, or a description of the problem otherwise.
def detect_infinitive_phrase(sentence):
    if 'to' not in sentence.lower():
        return False
    doc = nlp(sentence)
    prev_word = None
    for w in doc:
        if prev_word == 'to':
            if w.dep_ == 'ROOT' and w.tag_.startswith('VB'):
                return True
            else:
                return False
        prev_word = w.text.lower()
    # No token followed 'to', so this cannot be an infinitive phrase.
    return False
Given a string, return true if it is an infinitive phrase fragment
def get_for_accounts(self, accounts: List[Account]):
    account_ids = [acc.guid for acc in accounts]
    query = (
        self.query
        .filter(Split.account_guid.in_(account_ids))
    )
    splits = query.all()
    return splits
Get all splits for the given accounts
def cb(option, opt_str, value, parser):
    # optparse invokes callbacks as callback(option, opt_str, value, parser).
    arguments = [value]
    for arg in parser.rargs:
        if arg[0] != "-":
            arguments.append(arg)
        else:
            break
    # Consume the collected arguments from rargs (all but the initial
    # `value`, which optparse already removed), whether or not the loop
    # stopped at an option.
    del parser.rargs[:len(arguments) - 1]
    if getattr(parser.values, option.dest):
        arguments.extend(getattr(parser.values, option.dest))
    setattr(parser.values, option.dest, arguments)
Callback function to handle variable number of arguments in optparse
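A simplified variant of the callback can be wired into a real `optparse` parser. The option names (`-f/--files`, `-v`) and file arguments below are made up for illustration; the variant drops the "extend existing value" branch to keep the sketch short.

```python
import optparse

# Variable-args callback: the first value is consumed by optparse itself,
# then we greedily take following non-option tokens from parser.rargs.
def cb(option, opt_str, value, parser):
    arguments = [value]
    for arg in parser.rargs:
        if arg.startswith('-'):
            break
        arguments.append(arg)
    # Remove the extra tokens we consumed (all but the initial `value`).
    del parser.rargs[:len(arguments) - 1]
    setattr(parser.values, option.dest, arguments)

parser = optparse.OptionParser()
parser.add_option('-f', '--files', dest='files', type='string',
                  action='callback', callback=cb)
parser.add_option('-v', action='store_true', dest='verbose')
opts, args = parser.parse_args(['-f', 'a.txt', 'b.txt', 'c.txt', '-v'])
```

Here `-f` collects all three filenames, and `-v` is still parsed normally because the callback leaves it in `rargs`.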
def draw_selection(self, surf):
    select_start = self._select_start
    if select_start:
        mouse_pos = self.get_mouse_pos()
        if (mouse_pos and mouse_pos.surf.surf_type & SurfType.SCREEN and
                mouse_pos.surf.surf_type == select_start.surf.surf_type):
            rect = point.Rect(select_start.world_pos, mouse_pos.world_pos)
            surf.draw_rect(colors.green, rect, 1)
Draw the selection rectangle.
def transformer_text_encoder(inputs, target_space, hparams, name=None):
    with tf.variable_scope(name, default_name="transformer_text_encoder"):
        inputs = common_layers.flatten4d3d(inputs)
        [
            encoder_input,
            encoder_self_attention_bias,
            ed,
        ] = transformer_layers.transformer_prepare_encoder(
            inputs, target_space=target_space, hparams=hparams)
        encoder_input = tf.nn.dropout(encoder_input, 1.0 - hparams.dropout)
        encoder_output = transformer_layers.transformer_encoder(
            encoder_input, encoder_self_attention_bias, hparams)
        return encoder_output, ed
Transformer text encoder over inputs with unmasked full attention.

Args:
    inputs: Tensor of shape [batch, length, 1, hparams.hidden_size].
    target_space: int. Used for encoding inputs under a target space id.
    hparams: HParams.
    name: string, variable scope.

Returns:
    encoder_output: Tensor of shape [batch, length, hparams.hidden_size].
    ed: Tensor of shape [batch, 1, 1, length]. Encoder-decoder attention
        bias for any padded tokens.
def compare_md5(self):
    if self.direction == "put":
        remote_md5 = self.remote_md5()
        return self.source_md5 == remote_md5
    elif self.direction == "get":
        local_md5 = self.file_md5(self.dest_file)
        return self.source_md5 == local_md5
Compare md5 of file on network device to md5 of local file.
def _focus_tab(self, tab_idx):
    for i in range(self.tab_widget.count()):
        self.tab_widget.setTabEnabled(i, False)
    self.tab_widget.setTabEnabled(tab_idx, True)
    self.tab_widget.setCurrentIndex(tab_idx)
Change tab focus
def _remove_unicode_keys(dictobj):
    if sys.version_info[:2] >= (3, 0):
        return dictobj
    assert isinstance(dictobj, dict)
    newdict = {}
    for key, value in dictobj.items():
        if type(key) is unicode:
            key = key.encode('utf-8')
        newdict[key] = value
    return newdict
Convert keys from 'unicode' to 'str' type. Workaround for <http://bugs.python.org/issue2646>.
def decompose_dateint(dateint):
    year = int(dateint / 10000)
    leftover = dateint - year * 10000
    month = int(leftover / 100)
    day = leftover - month * 100
    return year, month, day
Decomposes the given dateint into its year, month and day components.

Arguments
---------
dateint : int
    An integer object depicting a specific calendar day; e.g. 20161225.

Returns
-------
year : int
    The year component of the given dateint.
month : int
    The month component of the given dateint.
day : int
    The day component of the given dateint.
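The integer arithmetic above can be verified with the docstring's own example, 20161225: dividing by 10000 strips off the year, and the remainder splits into month and day by a further division by 100.

```python
# Decompose a YYYYMMDD integer with plain integer arithmetic.
def decompose_dateint(dateint):
    year = int(dateint / 10000)          # 20161225 // 10000 -> 2016
    leftover = dateint - year * 10000    # 1225
    month = int(leftover / 100)          # 1225 // 100 -> 12
    day = leftover - month * 100         # 25
    return year, month, day

year, month, day = decompose_dateint(20161225)
```

Using `//` instead of `int(.../...)` would be the more idiomatic spelling; the behaviour is the same for positive dateints.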
def load_config(self, path):
    if path is None:
        print("Path to config was null; using defaults.")
        return
    if not os.path.exists(path):
        print("[No user config file found at default location; using defaults.]\n")
        return
    user_config = None
    with open(path) as f:
        user_config = f.read()
    # os.path.splitext returns a (root, ext) tuple; the extension keeps
    # its leading dot.
    extension = os.path.splitext(path)[1]
    if extension == '.yaml':
        user_config = yaml.load(user_config)
    else:
        raise Error('Configuration file type "{}" not supported'.format(extension))
    self.merge_config(user_config)
    self.configuration = Configuration(self.data_config, self.model_config,
                                       self.conversation_config)
Load a configuration file; eventually, support dicts, .yaml, .csv, etc.
def validate_argmax_with_skipna(skipna, args, kwargs):
    skipna, args = process_skipna(skipna, args)
    validate_argmax(args, kwargs)
    return skipna
If 'Series.argmax' is called via the 'numpy' library, the third parameter in its signature is 'out', which takes either an ndarray or 'None'. So check whether the 'skipna' parameter is an instance of ndarray or is None, since 'skipna' itself should be a boolean.
def get_stream_or_content_from_request(request):
    if request.stream.tell():
        logger.info('Request stream already consumed. '
                    'Storing file content using in-memory data.')
        return request.data
    else:
        logger.info('Storing file content using request stream.')
        return request.stream
Ensure the proper content is uploaded. The stream might already be consumed by the authentication process, in which case flask.request.stream is not readable and returns an improper value. This method checks whether the stream has already been consumed and, if so, retrieves the data from flask.request.data where it has been stored.
def thread_raise(thread, exctype):
    import ctypes, inspect, threading, logging
    if not inspect.isclass(exctype):
        raise TypeError(
            'cannot raise %s, only exception types can be raised (not '
            'instances)' % exctype)
    gate = thread_exception_gate(thread)
    with gate.lock:
        if gate.ok_to_raise.is_set() and thread.is_alive():
            gate.ok_to_raise.clear()
            logging.info('raising %s in thread %s', exctype, thread)
            res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
                ctypes.c_long(thread.ident), ctypes.py_object(exctype))
            if res == 0:
                raise ValueError(
                    'invalid thread id? thread.ident=%s' % thread.ident)
            elif res != 1:
                ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, 0)
                raise SystemError('PyThreadState_SetAsyncExc failed')
        else:
            logging.info('queueing %s for thread %s', exctype, thread)
            gate.queue_exception(exctype)
Raises or queues the exception `exctype` for the thread `thread`.

See the documentation on the function `thread_exception_gate()` for
more information.

Adapted from http://tomerfiliba.com/recipes/Thread2/ which explains:
"The exception will be raised only when executing python bytecode. If
your thread calls a native/built-in blocking function, the exception
will be raised only when execution returns to the python code."

Raises:
    TypeError if `exctype` is not a class
    ValueError, SystemError in case of unexpected problems
def get_task_runs(self, json_file=None):
    if self.project is None:
        raise ProjectError
    loader = create_task_runs_loader(self.project.id, self.tasks,
                                     json_file, self.all)
    self.task_runs, self.task_runs_file = loader.load()
    self._check_project_has_taskruns()
    self.task_runs_df = dataframer.create_task_run_data_frames(
        self.tasks, self.task_runs)
Load all project Task Runs from Tasks.
def get_state(cls, clz):
    if clz not in cls.__shared_state:
        cls.__shared_state[clz] = (
            clz.init_state() if hasattr(clz, "init_state") else {}
        )
    return cls.__shared_state[clz]
Retrieve the state of a given Class.

:param clz: types.ClassType
:return: Class state.
:rtype: dict
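The lookup above implements a per-class shared-state registry: state is created lazily on first access (via `init_state()` if the class defines it, otherwise an empty dict) and then reused. A self-contained sketch, with a hypothetical `StateRegistry` holder standing in for the original class:

```python
# Hypothetical registry holding one shared state dict per registered class.
class StateRegistry:
    __shared_state = {}

    @classmethod
    def get_state(cls, clz):
        if clz not in cls.__shared_state:
            cls.__shared_state[clz] = (
                clz.init_state() if hasattr(clz, "init_state") else {}
            )
        return cls.__shared_state[clz]

class WithInit:
    @staticmethod
    def init_state():
        return {'count': 0}

class Plain:
    pass

s1 = StateRegistry.get_state(WithInit)
s1['count'] += 1
s2 = StateRegistry.get_state(WithInit)   # same dict instance as s1
s3 = StateRegistry.get_state(Plain)      # empty default state
```

Because the state dict is created only once per class, mutations through one reference are visible through every later lookup.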
def from_json(cls, data, api=None):
    result = cls(api=api)
    for elem_cls in [Node, Way, Relation, Area]:
        for element in data.get("elements", []):
            e_type = element.get("type")
            if hasattr(e_type, "lower") and e_type.lower() == elem_cls._type_value:
                result.append(elem_cls.from_json(element, result=result))
    return result
Create a new instance and load data from json object.

:param data: JSON data returned by the Overpass API
:type data: Dict
:param api:
:type api: overpy.Overpass
:return: New instance of Result object
:rtype: overpy.Result
def merge(self, other):
    other.qualify()
    for n in ('name', 'qname', 'min', 'max', 'default', 'type',
              'nillable', 'form_qualified',):
        if getattr(self, n) is not None:
            continue
        v = getattr(other, n)
        if v is None:
            continue
        setattr(self, n, v)
Merge another object as needed.
def _parse_data_array(self, data_array):
    tokenSeparator = data_array.encoding.tokenSeparator
    blockSeparator = data_array.encoding.blockSeparator
    data_values = data_array.values
    lines = [x for x in data_values.split(blockSeparator) if x != ""]
    ret_val = []
    for row in lines:
        values = row.split(tokenSeparator)
        ret_val.append(
            [
                float(v) if " " not in v.strip() else
                [float(vv) for vv in v.split()]
                for v in values
            ]
        )
    return [list(x) for x in zip(*ret_val)]
Parses a general DataArray.
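The core of the parser is plain string splitting plus a row-to-column transpose via `zip(*rows)`, so it can be sketched without the DataArray wrapper object. The separator defaults below are assumptions for illustration:

```python
# Standalone sketch: split into blocks, split blocks into tokens, convert to
# floats (space-separated tokens become sub-lists), then transpose rows into
# columns.
def parse_data_array(data_values, token_sep=',', block_sep='\n'):
    lines = [x for x in data_values.split(block_sep) if x != ""]
    ret_val = []
    for row in lines:
        values = row.split(token_sep)
        ret_val.append([
            float(v) if " " not in v.strip() else [float(vv) for vv in v.split()]
            for v in values
        ])
    return [list(x) for x in zip(*ret_val)]

cols = parse_data_array("1,2\n3,4\n5,6\n")
```

Each returned list is one column of the original block-structured data, which is why three two-token rows become two three-element columns.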
def update_task_positions_obj(self, positions_obj_id, revision, values):
    return positions_endpoints.update_task_positions_obj(
        self, positions_obj_id, revision, values)
Updates the ordering of tasks in the positions object with the given ID
to the ordering in the given values.

See https://developer.wunderlist.com/documentation/endpoints/positions
for more info.

Return:
    The updated TaskPositionsObj-mapped object defining the order of
    list layout
def _close_prepared_statement(self):
    self.prepared_sql = None
    self.flush_to_query_ready()
    self.connection.write(messages.Close('prepared_statement', self.prepared_name))
    self.connection.write(messages.Flush())
    self._message = self.connection.read_expected_message(messages.CloseComplete)
    self.connection.write(messages.Sync())
Close the prepared statement on the server.
def count(self):
    if self._primary_keys is None:
        return self.queryset.count()
    else:
        return len(self.pks)
Return a count of instances.
def _parse_bid_table(self, table):
    player = table.find_all('td')[0].text
    owner = table.find_all('td')[1].text
    team = table.find('img')['alt']
    price = int(table.find_all('td')[3].text.replace(".", ""))
    bid_date = table.find_all('td')[4].text
    trans_date = table.find_all('td')[5].text
    status = table.find_all('td')[6].text
    return player, owner, team, price, bid_date, trans_date, status
Convert table row values into strings @return: player, owner, team, price, bid_date, trans_date, status
def filter(self, fn, skip_na=True, seed=None):
    assert callable(fn), "Input must be callable"
    if seed is None:
        seed = abs(hash("%0.20f" % time.time())) % (2 ** 31)
    with cython_context():
        return SArray(_proxy=self.__proxy__.filter(fn, skip_na, seed))
Filter this SArray by a function.

Returns a new SArray filtered by this SArray. If `fn` evaluates an
element to true, this element is copied to the new SArray. If not, it
isn't. Throws an exception if the return type of `fn` is not castable
to a boolean value.

Parameters
----------
fn : function
    Function that filters the SArray. Must evaluate to bool or int.

skip_na : bool, optional
    If True, will not apply fn to any undefined values.

seed : int, optional
    Used as the seed if a random number generator is included in fn.

Returns
-------
out : SArray
    The SArray filtered by fn. Each element of the SArray is of
    type int.

Examples
--------
>>> sa = turicreate.SArray([1,2,3])
>>> sa.filter(lambda x: x < 3)
dtype: int
Rows: 2
[1, 2]
def floating_point_to_datetime(day, fp_time):
    result = datetime(year=day.year, month=day.month, day=day.day)
    result += timedelta(minutes=math.ceil(60 * fp_time))
    return result
Convert a floating point time to a datetime.
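The conversion treats `fp_time` as a number of hours from midnight, rounds it up to whole minutes, and adds it to the date portion of `day`. A self-contained check (the sample date is made up):

```python
from datetime import datetime, timedelta
import math

# fp_time is in hours; 60 * fp_time gives minutes, rounded up with ceil.
def floating_point_to_datetime(day, fp_time):
    result = datetime(year=day.year, month=day.month, day=day.day)
    result += timedelta(minutes=math.ceil(60 * fp_time))
    return result

dt = floating_point_to_datetime(datetime(2020, 1, 1), 9.5)  # 9.5 h -> 09:30
```

The `math.ceil` means fractional minutes always round forward, so e.g. 9.51 hours lands on 09:31 rather than 09:30.6.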
def read_user_mapping(self, user_name, mount_point=DEFAULT_MOUNT_POINT):
    api_path = '/v1/auth/{mount_point}/map/users/{user_name}'.format(
        mount_point=mount_point,
        user_name=user_name,
    )
    response = self._adapter.get(url=api_path)
    return response.json()
Read the GitHub user policy mapping.

Supported methods:
    GET: /auth/{mount_point}/map/users/{user_name}. Produces: 200 application/json

:param user_name: GitHub user name
:type user_name: str | unicode
:param mount_point: The "path" the method/backend was mounted on.
:type mount_point: str | unicode
:return: The JSON response of the read_user_mapping request.
:rtype: dict
def _set_shared_instances(self):
    self.inqueue = self.em.get_inqueue()
    self.outqueue = self.em.get_outqueue()
    self.namespace = self.em.get_namespace()
Sets attributes from the shared instances.
def _build_tarball(src_repo) -> str:
    run = partial(subprocess.run, cwd=src_repo, check=True)
    run(['git', 'clean', '-xdff'])
    src_repo = Path(src_repo)
    if os.path.exists(src_repo / 'es' / 'upstream'):
        run(['git', 'submodule', 'update', '--init', '--', 'es/upstream'])
    run(['./gradlew', '--no-daemon', 'clean', 'distTar'])
    distributions = Path(src_repo) / 'app' / 'build' / 'distributions'
    return next(distributions.glob('crate-*.tar.gz'))
Build a tarball from src and return the path to it
def hash_shooter(video_path):
    filesize = os.path.getsize(video_path)
    readsize = 4096
    if os.path.getsize(video_path) < readsize * 2:
        return None
    offsets = (readsize, filesize // 3 * 2, filesize // 3,
               filesize - readsize * 2)
    filehash = []
    with open(video_path, 'rb') as f:
        for offset in offsets:
            f.seek(offset)
            filehash.append(hashlib.md5(f.read(readsize)).hexdigest())
    return ';'.join(filehash)
Compute a hash using Shooter's algorithm.

:param string video_path: path of the video
:return: the hash
:rtype: string
def update_notification_settings(self, api_token, event, service,
                                 should_notify):
    params = {
        'token': api_token,
        'notification_type': event,
        'service': service,
        'dont_notify': should_notify
    }
    return self._post('update_notification_setting', params)
Update a user's notification settings.

:param api_token: The user's login api_token.
:type api_token: str
:param event: Update the notification settings of this event.
:type event: str
:param service: ``email`` or ``push``
:type service: str
:param should_notify: If ``0`` notify, otherwise do not.
:type should_notify: int
:return: The HTTP response to the request.
:rtype: :class:`requests.Response`

>>> from pytodoist.api import TodoistAPI
>>> api = TodoistAPI()
>>> response = api.login('john.doe@gmail.com', 'password')
>>> user_info = response.json()
>>> user_api_token = user_info['api_token']
>>> response = api.update_notification_settings(user_api_token,
...                                             'user_left_project',
...                                             'email', 0)
def get(self, index, doc_type, id, fields=None, model=None, **query_params):
    path = make_path(index, doc_type, id)
    if fields is not None:
        query_params["fields"] = ",".join(fields)
    model = model or self.model
    return model(self, self._send_request('GET', path, params=query_params))
Get a typed JSON document from an index based on its id.
def as_dict(self):
    def conv(v):
        if isinstance(v, SerializableAttributesHolder):
            return v.as_dict()
        elif isinstance(v, list):
            return [conv(x) for x in v]
        elif isinstance(v, dict):
            return {x: conv(y) for (x, y) in v.items()}
        else:
            return v
    return {k.replace('_', '-'): conv(v)
            for (k, v) in self._attributes.items()}
Returns a JSON-serializable object representing this tree.
def _render_asset(self, subpath):
    return send_from_directory(
        self.assets.cache_path,
        self.assets.cache_filename(subpath))
Renders the specified cache file.
def get_bank_hierarchy_session(self):
    if not self.supports_bank_hierarchy():
        raise errors.Unimplemented()
    return sessions.BankHierarchySession(runtime=self._runtime)
Gets the session traversing bank hierarchies.

return: (osid.assessment.BankHierarchySession) - a
        ``BankHierarchySession``
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - ``supports_bank_hierarchy() is false``
*compliance: optional -- This method must be implemented if
``supports_bank_hierarchy()`` is true.*
def _to_backend(self, p):
    if isinstance(p, self._cmp_base):
        return p.path
    elif isinstance(p, self._backend):
        return p
    elif self._backend is unicode and isinstance(p, bytes):
        return p.decode(self._encoding)
    elif self._backend is bytes and isinstance(p, unicode):
        return p.encode(self._encoding,
                        'surrogateescape' if PY3 else 'strict')
    else:
        raise TypeError("Can't construct a %s from %r" % (
            self.__class__.__name__, type(p)))
Converts something to the correct path representation.

If given a Path, this will simply unpack it, if it's the correct type.
If given the correct backend, it will return that.
If given bytes for unicode or unicode for bytes, it will encode/decode
with a reasonable encoding.

Note that these operations can raise UnicodeError!
def get_from_area(self, lat_min, lon_min, lat_max, lon_max,
                  picture_size=None, set_=None, map_filter=None):
    page_size = 100
    page = 0
    result = self._request(lat_min, lon_min, lat_max, lon_max,
                           page * page_size, (page + 1) * page_size,
                           picture_size, set_, map_filter)
    total_photos = result['count']
    if total_photos < page_size:
        return result
    page += 1
    # Integer division, so `pages` stays an int under Python 3 as well
    pages = (total_photos // page_size) + 1
    while page < pages:
        new_result = self._request(lat_min, lon_min, lat_max, lon_max,
                                   page * page_size, (page + 1) * page_size,
                                   picture_size, set_, map_filter)
        result['photos'].extend(new_result['photos'])
        page += 1
    return result
Get all available photos for a specific bounding box

:param lat_min: Minimum latitude of the bounding box
:type lat_min: float
:param lon_min: Minimum longitude of the bounding box
:type lon_min: float
:param lat_max: Maximum latitude of the bounding box
:type lat_max: float
:param lon_max: Maximum longitude of the bounding box
:type lon_max: float
:param picture_size: This can be: original, medium (*default*),
    small, thumbnail, square, mini_square
:type picture_size: basestring
:param set_: This can be: public, popular or user-id; where user-id
    is the specific id of a user (as integer)
:type set_: basestring/int
:param map_filter: Whether to return photos that look better together;
    when True, tries to avoid returning photos of the same location
:type map_filter: bool
:return: Returns the full dataset of all available photos
def guess_mime_mimedb(filename):
    mime, encoding = None, None
    if mimedb is not None:
        mime, encoding = mimedb.guess_type(filename, strict=False)
    if mime not in ArchiveMimetypes and encoding in ArchiveCompressions:
        # Files like .gz are mapped from their compression encoding
        # to the corresponding archive MIME type.
        mime = Encoding2Mime[encoding]
        encoding = None
    return mime, encoding
Guess MIME type from given filename.

@return: tuple (mime, encoding)
def broadcast_tx(cls, tx_hex):
    success = None
    for api_call in cls.BROADCAST_TX_MAIN:
        try:
            success = api_call(tx_hex)
            if not success:
                continue
            return
        except cls.IGNORED_ERRORS:
            pass
    if success is False:
        raise ConnectionError('Transaction broadcast failed, or '
                              'Unspents were already used.')
    raise ConnectionError('All APIs are unreachable.')
Broadcasts a transaction to the blockchain.

:param tx_hex: A signed transaction in hex form.
:type tx_hex: ``str``
:raises ConnectionError: If all API services fail.
def get_content(ident_hash, context=None):
    id, version = get_id_n_version(ident_hash)
    filename = 'index.cnxml.html'
    if context is not None:
        stmt = _get_sql('get-baked-content.sql')
        args = dict(id=id, version=version, context=context)
    else:
        stmt = _get_sql('get-content.sql')
        args = dict(id=id, version=version, filename=filename)
    with db_connect() as db_conn:
        with db_conn.cursor() as cursor:
            cursor.execute(stmt, args)
            try:
                content, _ = cursor.fetchone()
            except TypeError:
                raise ContentNotFound(ident_hash, context, filename)
    return content[:]
Returns the content for the given ``ident_hash``.

``context`` is optionally an ident-hash used to find the content
within the context of a Collection ident_hash.
def verify_order(self, hostname, domain, location, hourly, flavor,
                 router=None):
    create_options = self._generate_create_dict(hostname=hostname,
                                                router=router,
                                                domain=domain,
                                                flavor=flavor,
                                                datacenter=location,
                                                hourly=hourly)
    return self.client['Product_Order'].verifyOrder(create_options)
Verifies an order for a dedicated host. See :func:`place_order` for a list of available options.
def _analyse_overview_field(content):
    if "(" in content:
        # For parenthesised fields, both halves are the part
        # before the parenthesis.
        return content.split("(")[0], content.split("(")[0]
    elif "/" in content:
        return content.split("/")[0], content.split("/")[1]
    return content, ""
Split the field in drbd-overview
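A standalone sketch of the same splitting rules, with sample inputs shaped like typical drbd-overview fields (the example values are illustrative, not taken from real output):

```python
def analyse_overview_field(content):
    # Mirrors the parser above: parenthesised fields yield the part
    # before the "(" twice; "/" fields split into two halves;
    # anything else pairs with an empty string.
    if "(" in content:
        return content.split("(")[0], content.split("(")[0]
    elif "/" in content:
        return content.split("/")[0], content.split("/")[1]
    return content, ""

print(analyse_overview_field("Connected/Connected"))
print(analyse_overview_field("UpToDate(stalled)"))
print(analyse_overview_field("Primary"))
```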
def lang_match_xml(row, accepted_languages):
    if not accepted_languages:
        return True
    column_languages = set()
    for elem in row:
        lang = elem[0].attrib.get(XML_LANG, None)
        if lang:
            column_languages.add(lang)
    return (not column_languages) or (column_languages & accepted_languages)
Find if the XML row contains acceptable language data
def save(self, target=None, shp=None, shx=None, dbf=None):
    if shp:
        self.saveShp(shp)
    if shx:
        self.saveShx(shx)
    if dbf:
        self.saveDbf(dbf)
    elif target:
        self.saveShp(target)
        self.shp.close()
        self.saveShx(target)
        self.shx.close()
        self.saveDbf(target)
        self.dbf.close()
Save the shapefile data to three files or three file-like objects. SHP and DBF files can also be written exclusively using saveShp, saveShx, and saveDbf respectively.
def load(config, opt):
    ctx = Context(opt)
    seed_map = py_resources()
    seed_keys = sorted(set([m[0] for m in seed_map]), key=resource_sort)
    for config_key in seed_keys:
        if config_key not in config:
            continue
        for resource_config in config[config_key]:
            mod = find_model(config_key, resource_config, seed_map)
            if not mod:
                LOG.warning("unable to find mod for %s", resource_config)
                continue
            ctx.add(mod(resource_config, opt))
    for config_key in config.keys():
        if config_key != 'pgp_keys' and \
           config_key not in seed_keys:
            LOG.warning("missing model for %s", config_key)
    return filtered_context(ctx)
Loads and returns a full context object based on the Secretfile
def _get_resource(self, resource, obj, params=None, **kwargs):
    r = self._http_resource('GET', resource, params=params)
    item = self._resource_deserialize(r.content.decode("utf-8"))
    return obj.new_from_dict(item, h=self, **kwargs)
Returns a mapped object from an HTTP resource.
def inspect_workers(self):
    workers = tuple(self.workers.values())
    expired = tuple(w for w in workers if not w.is_alive())
    for worker in expired:
        self.workers.pop(worker.pid)
    return ((w.pid, w.exitcode) for w in expired if w.exitcode != 0)
Updates the workers status. Returns the workers which have unexpectedly ended.
def check_isis_version(major, minor=0, patch=0):
    if ISIS_VERSION and (major, minor, patch) <= ISIS_VERISON_TUPLE:
        return
    msg = 'Version %s.%s.%s of isis required (%s found).'
    raise VersionError(msg % (major, minor, patch, ISIS_VERSION))
Checks that the current isis version is equal to or above the supplied version.
def _free_up_space(self, size, this_rel_path=None):
    space = self.size + size - self.maxsize
    if space <= 0:
        return
    removes = []
    for row in self.database.execute(
            "SELECT path, size, time FROM files ORDER BY time ASC"):
        if space > 0:
            removes.append(row[0])
            space -= row[1]
        else:
            break
    for rel_path in removes:
        if rel_path != this_rel_path:
            global_logger.debug("Deleting {}".format(rel_path))
            self.remove(rel_path)
If there are not `size` bytes of space left, delete files until there are.

Args:
    size: size of the current file
    this_rel_path: rel_path to the current file, so we don't delete it.
def _recursive_merge(dct, merge_dct, raise_on_missing):
    for k, v in merge_dct.items():
        if k in dct:
            if isinstance(dct[k], dict) and isinstance(merge_dct[k], BaseMapping):
                dct[k] = _recursive_merge(dct[k], merge_dct[k],
                                          raise_on_missing)
            else:
                dct[k] = merge_dct[k]
        elif isinstance(dct, Extensible):
            dct[k] = merge_dct[k]
        else:
            message = "Unknown configuration key: '{k}'".format(k=k)
            if raise_on_missing:
                raise KeyError(message)
            else:
                logging.getLogger(__name__).warning(message)
    return dct
Recursive dict merge

This modifies `dct` in place. Use `copy.deepcopy` if this
behavior is not desired.
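The core merge can be sketched with plain dicts, dropping the `Extensible`/`raise_on_missing` policy handling specific to the function above (this simplified version accepts any key):

```python
def recursive_merge(dct, merge_dct):
    # Recursively merge merge_dct into dct, in place: nested dicts
    # are merged key by key, everything else is overwritten.
    for k, v in merge_dct.items():
        if k in dct and isinstance(dct[k], dict) and isinstance(v, dict):
            recursive_merge(dct[k], v)
        else:
            dct[k] = v
    return dct

base = {'db': {'host': 'localhost', 'port': 5432}, 'debug': False}
override = {'db': {'port': 5433}, 'debug': True}
merged = recursive_merge(base, override)
print(merged)  # {'db': {'host': 'localhost', 'port': 5433}, 'debug': True}
```

Note that `base` itself is mutated, which is why the docstring recommends `copy.deepcopy` when the original must be preserved.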
def cycle_focus(self):
    windows = self.windows()
    new_index = (windows.index(self.active_window) + 1) % len(windows)
    self.active_window = windows[new_index]
Cycle through all windows.
def tail_threshold(vals, N=1000):
    vals = numpy.array(vals)
    if len(vals) < N:
        raise RuntimeError('Not enough input values to determine threshold')
    vals.sort()
    return min(vals[-N:])
Determine a threshold above which there are N louder values
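The same idea without the numpy dependency: after an ascending sort, the threshold is simply the smallest of the N largest values (a pure-Python sketch, not the library's implementation):

```python
def tail_threshold(vals, N=5):
    # Sort ascending; the N-th value from the end is the smallest
    # of the N largest, i.e. the threshold above which N values lie.
    if len(vals) < N:
        raise RuntimeError('Not enough input values to determine threshold')
    return sorted(vals)[-N]

print(tail_threshold([1, 9, 3, 7, 5, 8, 2], N=3))  # 7
```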
def reset(self):
    max_dataset_history = self.value('max_dataset_history')
    keep_recent_datasets(max_dataset_history, self.info)
    self.labels.reset()
    self.channels.reset()
    self.info.reset()
    self.notes.reset()
    self.overview.reset()
    self.spectrum.reset()
    self.traces.reset()
Remove all the information from previous dataset before loading a new dataset.
def squared_distance(v1, v2):
    v1, v2 = _convert_to_vector(v1), _convert_to_vector(v2)
    return v1.squared_distance(v2)
Squared distance between two vectors.
a and b can be of type SparseVector, DenseVector, np.ndarray
or array.array.

>>> a = Vectors.sparse(4, [(0, 1), (3, 4)])
>>> b = Vectors.dense([2, 5, 4, 1])
>>> a.squared_distance(b)
51.0
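For dense inputs the computation reduces to the sum of squared component differences, which can be sketched without any vector classes (the sparse vector from the doctest expands to `[1, 0, 0, 4]`):

```python
def squared_distance(a, b):
    # Sum of squared component differences between two
    # equal-length sequences.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Matches the doctest above: sparse(4, [(0, 1), (3, 4)]) vs [2, 5, 4, 1]
print(squared_distance([1, 0, 0, 4], [2, 5, 4, 1]))  # 51
```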
def _can_process_application(self, app):
    return (self.LOCATION_KEY in app.properties and
            isinstance(app.properties[self.LOCATION_KEY], dict) and
            self.APPLICATION_ID_KEY in app.properties[self.LOCATION_KEY] and
            self.SEMANTIC_VERSION_KEY in app.properties[self.LOCATION_KEY])
Determines whether or not the on_before_transform_template event can
process this application

:param dict app: the application and its properties
def set_mode_by_id(self, zone_id, mode):
    if not self._do_auth():
        raise RuntimeError("Unable to login")
    data = {
        "ZoneId": zone_id,
        "mode": mode.value
    }
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        'Authorization': 'Bearer ' + self.login_data['token']['accessToken']
    }
    url = self.api_base_url + "Home/SetZoneMode"
    response = requests.post(url, data=json.dumps(data), headers=headers,
                             timeout=10)
    if response.status_code != 200:
        return False
    mode_data = response.json()
    return mode_data.get("isSuccess", False)
Set the mode by using the zone id

Supported modes are available in the enum ``Mode``
def rpc_export(rpc_method_name, sync=False):
    def dec(f):
        f._nvim_rpc_method_name = rpc_method_name
        f._nvim_rpc_sync = sync
        f._nvim_bind = True
        f._nvim_prefix_plugin_path = False
        return f
    return dec
Export a function or plugin method as a msgpack-rpc request handler.
def GroupSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number) * 2
    assert not is_packed
    if is_repeated:
        def RepeatedFieldSize(value):
            result = tag_size * len(value)
            for element in value:
                result += element.ByteSize()
            return result
        return RepeatedFieldSize
    else:
        def FieldSize(value):
            return tag_size + value.ByteSize()
        return FieldSize
Returns a sizer for a group field.
def get_default_config_help(self):
    config_help = super(PostgresqlCollector, self).get_default_config_help()
    config_help.update({
        'host': 'Hostname',
        'dbname': 'DB to connect to in order to get list of DBs in PgSQL',
        'user': 'Username',
        'password': 'Password',
        'port': 'Port number',
        'password_provider': "Whether to auth with supplied password or"
                             " .pgpass file <password|pgpass>",
        'sslmode': 'Whether to use SSL - <disable|allow|require|...>',
        'underscore': 'Convert _ to .',
        'extended': 'Enable collection of extended database stats.',
        'metrics': 'List of enabled metrics to collect',
        'pg_version': "The version of postgres that you'll be monitoring"
                      " eg. in format 9.2",
        'has_admin': 'Admin privileges are required to execute some'
                     ' queries.',
    })
    return config_help
Return help text for collector
def visit_Assign(self, node):
    if self._in_class(node):
        element_full_name = self._pop_indent_stack(node, "prop")
        code_id = (self._fname, node.lineno)
        self._processed_line = node.lineno
        self._callables_db[element_full_name] = {
            "name": element_full_name,
            "type": "prop",
            "code_id": code_id,
            "last_lineno": None,
        }
        self._reverse_callables_db[code_id] = element_full_name
    # code = self.generic_visit(node)
Implement assignment walker. Parse class properties defined via the property() function
def get_pools(time_span=None, api_code=None):
    resource = 'pools'
    if time_span is not None:
        resource += '?timespan=' + time_span
    if api_code is not None:
        # Start the query string with '?' if timespan was not added,
        # otherwise append with '&'.
        resource += ('&' if time_span is not None else '?')
        resource += 'api_code=' + api_code
    response = util.call_api(resource,
                             base_url='https://api.blockchain.info/')
    json_response = json.loads(response)
    return {k: v for (k, v) in json_response.items()}
Get number of blocks mined by each pool.

:param str time_span: duration of the chart. Default is 4days (optional)
:param str api_code: Blockchain.info API code (optional)
:return: an instance of dict:{str,int}
def flatten(dictionary, separator='.', prefix=''):
    new_dict = {}
    for key, value in dictionary.items():
        new_key = prefix + separator + key if prefix else key
        if isinstance(value, collections.MutableMapping):
            new_dict.update(flatten(value, separator, new_key))
        elif isinstance(value, list):
            new_value = []
            for item in value:
                if isinstance(item, collections.MutableMapping):
                    new_value.append(flatten(item, separator, new_key))
                else:
                    new_value.append(item)
            new_dict[new_key] = new_value
        else:
            new_dict[new_key] = value
    return new_dict
Flatten the dictionary; nested keys are joined by `separator`.

Arguments:
    dictionary {dict} -- The dictionary to be flattened.

Keyword Arguments:
    separator {str} -- The separator to use (default is '.').
        It will crush items with key conflicts.
    prefix {str} -- Used for recursive calls.

Returns:
    dict -- The flattened dictionary.
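Note the code above relies on `collections.MutableMapping`, which was removed from the top-level `collections` namespace in Python 3.10. A simplified Python 3 sketch (omitting the list-handling branch) uses `collections.abc` instead:

```python
import collections.abc

def flatten(dictionary, separator='.', prefix=''):
    # Recursively join nested keys with the separator.
    new_dict = {}
    for key, value in dictionary.items():
        new_key = prefix + separator + key if prefix else key
        if isinstance(value, collections.abc.MutableMapping):
            new_dict.update(flatten(value, separator, new_key))
        else:
            new_dict[new_key] = value
    return new_dict

print(flatten({'a': {'b': 1, 'c': {'d': 2}}, 'e': 3}))
# {'a.b': 1, 'a.c.d': 2, 'e': 3}
```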
def push(self, element, value):
    insert_pos = 0
    for index, el in enumerate(self.tops):
        if not self.find_min and el[1] >= value:
            insert_pos = index + 1
        elif self.find_min and el[1] <= value:
            insert_pos = index + 1
    self.tops.insert(insert_pos, [element, value])
    self.tops = self.tops[:self.n]
Push an ``element`` into the data structure together with its value
and only save it if it currently is one of the top n elements.
Drop elements if necessary.
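The same "keep only the top n" bookkeeping is often done with a bounded min-heap instead of a sorted list, which avoids the linear scan per push. A self-contained sketch (class and method names here are illustrative, not the structure above):

```python
import heapq

class TopN:
    """Keep only the n highest-valued elements seen so far."""
    def __init__(self, n):
        self.n = n
        self._heap = []  # min-heap of (value, element)

    def push(self, element, value):
        if len(self._heap) < self.n:
            heapq.heappush(self._heap, (value, element))
        elif value > self._heap[0][0]:
            # New value beats the current smallest of the top n.
            heapq.heapreplace(self._heap, (value, element))

    def tops(self):
        return sorted(self._heap, reverse=True)

t = TopN(2)
for name, score in [('a', 1), ('b', 5), ('c', 3), ('d', 4)]:
    t.push(name, score)
print(t.tops())  # [(5, 'b'), (4, 'd')]
```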
def get_banks_by_item(self, item_id):
    mgr = self._get_provider_manager('ASSESSMENT', local=True)
    lookup_session = mgr.get_bank_lookup_session(proxy=self._proxy)
    return lookup_session.get_banks_by_ids(
        self.get_bank_ids_by_item(item_id))
Gets the list of ``Banks`` mapped to an ``Item``.

arg:    item_id (osid.id.Id): ``Id`` of an ``Item``
return: (osid.assessment.BankList) - list of banks
raise:  NotFound - ``item_id`` is not found
raise:  NullArgument - ``item_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - assessment failure
*compliance: mandatory -- This method must be implemented.*
def get(self, key):
    if isinstance(key, unicode):
        key = key.encode('utf-8')
    v = self.client.get(key)
    if v is None:
        raise KeyError("Cache key [%s] not found" % key)
    else:
        return v
Because memcached does not provide a way to check whether a key exists,
this is a workaround: if the returned value is None, raise a KeyError.
def interleave(*args):
    result = []
    for array in zip(*args):
        result.append(tuple(flatten(array)))
    return result
Interleaves the elements of the provided arrays.

>>> a = [(0, 0), (1, 0), (2, 0), (3, 0)]
>>> b = [(0, 0), (0, 1), (0, 2), (0, 3)]
>>> interleave(a, b)
[(0, 0, 0, 0), (1, 0, 0, 1), (2, 0, 0, 2), (3, 0, 0, 3)]

This is useful for combining multiple vertex attributes into a single
vertex buffer. The shader attributes can be assigned a slice of the
vertex buffer.
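The helper above depends on an external `flatten`; a self-contained sketch inlines the flattening with a nested comprehension and reproduces the doctest:

```python
def interleave(*args):
    # zip pairs up the i-th element of every input; flattening each
    # group of tuples produces one combined record per position.
    return [tuple(x for tup in group for x in tup) for group in zip(*args)]

a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(0, 0), (0, 1), (0, 2), (0, 3)]
print(interleave(a, b))
# [(0, 0, 0, 0), (1, 0, 0, 1), (2, 0, 0, 2), (3, 0, 0, 3)]
```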
def get_message(message, *args, **kwargs):
    msg = current_app.extensions['simplelogin'].messages.get(message)
    if msg and (args or kwargs):
        return msg.format(*args, **kwargs)
    return msg
Helper to get internal messages outside this instance
def _Scroll(self, lines=None):
    if lines is None:
        lines = self._cli_lines
    if lines < 0:
        self._displayed -= self._cli_lines
        self._displayed += lines
        if self._displayed < 0:
            self._displayed = 0
        self._lines_to_show = self._cli_lines
    else:
        self._lines_to_show = lines
    self._lastscroll = lines
Set attributes to scroll the buffer correctly.

Args:
    lines: An int, number of lines to scroll. If None, scrolls
        by the terminal length.
def disconnect(self, connection):
    self.log.debug("Disconnecting %s" % connection)
    for dest in list(self._topics.keys()):
        if connection in self._topics[dest]:
            self._topics[dest].remove(connection)
            if not self._topics[dest]:
                del self._topics[dest]
Removes a subscriber connection.

@param connection: The client connection to unsubscribe.
@type connection: L{coilmq.server.StompConnection}