def Rx_matrix(theta): return np.array([ [1, 0, 0], [0, np.cos(theta), -np.sin(theta)], [0, np.sin(theta), np.cos(theta)] ])
Rotation matrix around the X axis
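A quick sanity check for the pair above (function reproduced as-is; a 90-degree rotation about X should carry the y unit vector onto the z axis):

```python
import numpy as np

def Rx_matrix(theta):
    """Rotation matrix around the X axis"""
    return np.array([
        [1, 0, 0],
        [0, np.cos(theta), -np.sin(theta)],
        [0, np.sin(theta), np.cos(theta)]
    ])

# Rotating the y unit vector by pi/2 about X maps it onto z.
v = Rx_matrix(np.pi / 2) @ np.array([0.0, 1.0, 0.0])
print(np.round(v, 6))  # [0. 0. 1.]
```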
def getusers(self, userlist):
    userobjs = [User(self, **rawuser)
                for rawuser in self._getusers(names=userlist).get('users', [])]
    ret = []
    for u in userlist:
        for uobj in userobjs[:]:
            if uobj.email == u:
                userobjs.remove(uobj)
                ret.append(uobj)
                break
    ret += userobjs
    return ret
Return a list of Users from . :userlist: List of usernames to lookup :returns: List of User records
def get_url(url):
    sub = "{0}.spotilocal.com".format("".join(choices(ascii_lowercase, k=10)))
    return "http://{0}:{1}{2}".format(sub, DEFAULT_PORT, url)
Randomly generates a URL for use in requests. Generates a hostname with the default port and the provided URL suffix. :param url: A url fragment to use in the creation of the master url
def LoadData(self, data, custom_properties=None):
    self.__data = []
    self.AppendData(data, custom_properties)
Loads new rows to the data table, clearing existing rows. May also set the custom_properties for the added rows. The given custom properties dictionary specifies the dictionary that will be used for *all* given rows. Args: data: The rows that the table will contain. custom_properties: A dictionary of string to string to set as the custom properties for all rows.
def gen_jcc(src, dst): return ReilBuilder.build(ReilMnemonic.JCC, src, ReilEmptyOperand(), dst)
Return a JCC instruction.
def get_offset_with_default(cursor=None, default_offset=0):
    if not is_str(cursor):
        return default_offset
    offset = cursor_to_offset(cursor)
    try:
        return int(offset)
    except (TypeError, ValueError):
        return default_offset
Given an optional cursor and a default offset, returns the offset to use; if the cursor contains a valid offset, that will be used, otherwise it will be the default.
def balance(self):
    self.check()
    if not sum(map(lambda x: x.amount, self.src)) == -self.amount:
        raise XnBalanceError("Sum of source amounts "
                             "not equal to transaction amount")
    if not sum(map(lambda x: x.amount, self.dst)) == self.amount:
        raise XnBalanceError("Sum of destination amounts "
                             "not equal to transaction amount")
    return True
Check this transaction for correctness
def get_keys_to_action(self):
    keyword_to_key = {
        "UP": ord("w"),
        "DOWN": ord("s"),
        "LEFT": ord("a"),
        "RIGHT": ord("d"),
        "FIRE": ord(" "),
    }
    keys_to_action = {}
    for action_id, action_meaning in enumerate(self.action_meanings):
        keys_tuple = tuple(sorted([
            key for keyword, key in keyword_to_key.items()
            if keyword in action_meaning]))
        assert keys_tuple not in keys_to_action
        keys_to_action[keys_tuple] = action_id
    keys_to_action[(ord("r"),)] = self.RETURN_DONE_ACTION
    keys_to_action[(ord("c"),)] = self.TOGGLE_WAIT_ACTION
    keys_to_action[(ord("n"),)] = self.WAIT_MODE_NOOP_ACTION
    return keys_to_action
Get mapping from keyboard keys to actions. Required by gym.utils.play in environment or top level wrapper. Returns: { Unicode code point for keyboard key: action (formatted for step()), ... }
def read(self, n=4096):
    size = self._next_packet_size(n)
    if size <= 0:
        return
    data = six.binary_type()
    while len(data) < size:
        nxt = self.stream.read(size - len(data))
        if not nxt:
            return data
        data = data + nxt
    return data
Read up to `n` bytes of data from the Stream, after demuxing. Less than `n` bytes of data may be returned depending on the available payload, but the number of bytes returned will never exceed `n`. Because demuxing involves scanning 8-byte headers, the actual amount of data read from the underlying stream may be greater than `n`.
def AddLabels(self, labels_names, owner=None):
    if owner is None and not self.token:
        raise ValueError("Can't set label: No owner specified and "
                         "no access token available.")
    if isinstance(labels_names, string_types):
        raise ValueError("Label list can't be string.")
    owner = owner or self.token.username
    current_labels = self.Get(self.Schema.LABELS, self.Schema.LABELS())
    for label_name in labels_names:
        label = rdf_aff4.AFF4ObjectLabel(
            name=label_name, owner=owner, timestamp=rdfvalue.RDFDatetime.Now())
        current_labels.AddLabel(label)
    self.Set(current_labels)
Add labels to the AFF4Object.
def init_db(db_path):
    logger.info("Creating database")
    with closing(connect_database(db_path)) as db:
        with open(SCHEMA, 'r') as f:
            db.cursor().executescript(f.read())
        db.commit()
Build the sqlite database
def send(dest, msg, transactionid=None):
    transheader = ''
    if transactionid:
        transheader = 'transaction: %s\n' % transactionid
    return "SEND\ndestination: %s\n%s\n%s\x00\n" % (dest, transheader, msg)
STOMP send command. dest: This is the channel we wish to subscribe to msg: This is the message body to be sent. transactionid: This is an optional field and is not needed by default.
def ways_callback(self, data):
    for way_id, tags, nodes in data:
        if tags:
            self.ways[way_id] = (tags, nodes)
Callback for all ways
def bm3_p(v, v0, k0, k0p, p_ref=0.0): return cal_p_bm3(v, [v0, k0, k0p], p_ref=p_ref)
calculate pressure from 3rd order Birch-Murnaghan equation :param v: volume at different pressures :param v0: volume at reference conditions :param k0: bulk modulus at reference conditions :param k0p: pressure derivative of bulk modulus at reference conditions :param p_ref: reference pressure (default = 0) :return: pressure
def walk_paths(self, base: Optional[pathlib.PurePath] = pathlib.PurePath()) \ -> Iterator[pathlib.PurePath]: raise NotImplementedError()
Recursively traverse all paths inside this entity, including the entity itself. :param base: The base path to prepend to the entity name. :return: An iterator of paths.
def _recursive_matches(self, nodes, count):
    assert self.content is not None
    if count >= self.min:
        yield 0, {}
    if count < self.max:
        for alt in self.content:
            for c0, r0 in generate_matches(alt, nodes):
                for c1, r1 in self._recursive_matches(nodes[c0:], count + 1):
                    r = {}
                    r.update(r0)
                    r.update(r1)
                    yield c0 + c1, r
Helper to recursively yield the matches.
def recursive_copy(source, dest):
    for root, _, files in salt.utils.path.os_walk(source):
        path_from_source = root.replace(source, '').lstrip(os.sep)
        target_directory = os.path.join(dest, path_from_source)
        if not os.path.exists(target_directory):
            os.makedirs(target_directory)
        for name in files:
            file_path_from_source = os.path.join(source, path_from_source, name)
            target_path = os.path.join(target_directory, name)
            shutil.copyfile(file_path_from_source, target_path)
Recursively copy the source directory to the destination, leaving untouched any files that the source does not explicitly overwrite. (identical to cp -r on a unix machine)
def describe(self):
    lines = []
    lines.append("Symbol = {}".format(self.name))
    if len(self.tags):
        tgs = ", ".join(x.tag for x in self.tags)
        lines.append(" tagged = {}".format(tgs))
    if len(self.aliases):
        als = ", ".join(x.alias for x in self.aliases)
        lines.append(" aliased = {}".format(als))
    if len(self.feeds):
        lines.append(" feeds:")
        for fed in self.feeds:
            lines.append(" {}. {}".format(fed.fnum, fed.ftype))
    return "\n".join(lines)
describes a Symbol, returns a string
def apply_fixes(args, tmpdir):
    invocation = [args.clang_apply_replacements_binary]
    if args.format:
        invocation.append('-format')
    if args.style:
        invocation.append('-style=' + args.style)
    invocation.append(tmpdir)
    subprocess.call(invocation)
Calls clang-apply-fixes on a given directory.
def main():
    from six import StringIO
    import eppy.iddv7 as iddv7
    IDF.setiddname(StringIO(iddv7.iddtxt))
    idf1 = IDF(StringIO(''))
    loopname = "p_loop"
    sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4']
    dloop = ['db0', ['db1', 'db2', 'db3'], 'db4']
    loopname = "c_loop"
    sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4']
    dloop = ['db0', ['db1', 'db2', 'db3'], 'db4']
    loopname = "a_loop"
    sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4']
    dloop = ['zone1', 'zone2', 'zone3']
    makeairloop(idf1, loopname, sloop, dloop)
    idf1.savecopy("hh1.idf")
the main routine
def unravel_staff(staff_data):
    staff_list = []
    for role, staff_members in staff_data['data'].items():
        for member in staff_members:
            member['role'] = role
            staff_list.append(member)
    return staff_list
Unravels the staff role dictionary into a flat list of staff members with ``role`` set as an attribute. Args: staff_data(dict): Data returned from :py:meth:`get_staff` Returns: list: Flat list of staff members with ``role`` set to the role type (i.e. course_admin, instructor, TA, etc.)
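Since the function above is self-contained, its behavior can be checked directly (the function is reproduced from the row above; the payload below is a hypothetical example shaped like the described ``get_staff`` response):

```python
def unravel_staff(staff_data):
    staff_list = []
    for role, staff_members in staff_data['data'].items():
        for member in staff_members:
            member['role'] = role
            staff_list.append(member)
    return staff_list

# Hypothetical staff payload for illustration only.
staff_data = {'data': {
    'instructor': [{'name': 'Ada'}],
    'TA': [{'name': 'Grace'}, {'name': 'Alan'}],
}}
flat = unravel_staff(staff_data)
print(sorted((m['role'], m['name']) for m in flat))
```

Note that the input dictionaries are mutated in place (each gains a ``'role'`` key).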
def etd_ms_dict2xmlfile(filename, metadata_dict):
    try:
        with open(filename, 'w') as f:
            f.write(generate_etd_ms_xml(metadata_dict).encode("utf-8"))
    except Exception:
        raise MetadataGeneratorException(
            'Failed to create an XML file. Filename: %s' % (filename)
        )
Create an ETD MS XML file.
def _get_uniparc_sequences_through_uniprot_ACs(self, mapping_pdb_id, uniprot_ACs, cache_dir):
    m = uniprot_map('ACC', 'UPARC', uniprot_ACs, cache_dir=cache_dir)
    UniParcIDs = []
    for _, v in m.iteritems():
        UniParcIDs.extend(v)
    mapping = {mapping_pdb_id: []}
    for UniParcID in UniParcIDs:
        entry = UniParcEntry(UniParcID, cache_dir=cache_dir)
        mapping[mapping_pdb_id].append(entry)
    return mapping
Get the UniParc sequences associated with the UniProt accession number.
def BLASTcheck(rid, baseURL="http://blast.ncbi.nlm.nih.gov"):
    URL = baseURL + "/Blast.cgi?"
    URL = URL + "FORMAT_OBJECT=SearchInfo&RID=" + rid + "&CMD=Get"
    response = requests.get(url=URL)
    r = response.content.split("\n")
    try:
        status = [s for s in r if "Status=" in s][0].split("=")[-1]
        ThereAreHits = [s for s in r if "ThereAreHits=" in s][0].split("=")[-1]
    except IndexError:
        status = None
        ThereAreHits = None
    print(rid, status, ThereAreHits)
    sys.stdout.flush()
    return status, ThereAreHits
Checks the status of a query. :param rid: BLAST search request identifier. Allowed values: The Request ID (RID) returned when the search was submitted :param baseURL: server url. Default=http://blast.ncbi.nlm.nih.gov :returns status: status for the query. :returns therearehits: yes or no for existing hits on a finished query.
def on_source_directory_chooser_clicked(self):
    title = self.tr('Set the source directory for script and scenario')
    self.choose_directory(self.source_directory, title)
Autoconnect slot activated when tbSourceDir is clicked.
def get_template_image(kwargs=None, call=None):
    if call == 'action':
        raise SaltCloudSystemExit(
            'The get_template_image function must be called with -f or --function.'
        )
    if kwargs is None:
        kwargs = {}
    name = kwargs.get('name', None)
    if name is None:
        raise SaltCloudSystemExit(
            'The get_template_image function requires a \'name\'.'
        )
    try:
        ret = list_templates()[name]['template']['disk']['image']
    except KeyError:
        raise SaltCloudSystemExit(
            'The image for template \'{0}\' could not be found.'.format(name)
        )
    return ret
Returns a template's image from the given template name. .. versionadded:: 2018.3.0 .. code-block:: bash salt-cloud -f get_template_image opennebula name=my-template-name
def calculate_mean(self, pars_for_mean, calculation_type):
    if len(pars_for_mean) == 0:
        return {}
    elif len(pars_for_mean) == 1:
        return {"dec": float(pars_for_mean[0]['dec']),
                "inc": float(pars_for_mean[0]['inc']),
                "calculation_type": calculation_type,
                "n": 1}
    elif calculation_type == 'Fisher':
        mpars = pmag.dolnp(pars_for_mean, 'direction_type')
    elif calculation_type == 'Fisher by polarity':
        mpars = pmag.fisher_by_pol(pars_for_mean)
        for key in list(mpars.keys()):
            mpars[key]['n_planes'] = 0
            mpars[key]['calculation_type'] = 'Fisher'
    mpars['calculation_type'] = calculation_type
    return mpars
Uses pmag.dolnp or pmag.fisher_by_pol to do a fisher mean or fisher mean by polarity on the list of dictionaries in pars for mean Parameters ---------- pars_for_mean : list of dictionaries with all data to average calculation_type : type of mean to take (options: Fisher, Fisher by polarity) Returns ------- mpars : dictionary with information of mean or empty dictionary TODO : put Bingham statistics back in once a method for displaying them is figured out
def arrange(df, *args, **kwargs):
    flat_args = [a for a in flatten(args)]
    series = [df[arg] if isinstance(arg, str)
              else df.iloc[:, arg] if isinstance(arg, int)
              else pd.Series(arg)
              for arg in flat_args]
    sorter = pd.concat(series, axis=1).reset_index(drop=True)
    sorter = sorter.sort_values(sorter.columns.tolist(), **kwargs)
    return df.iloc[sorter.index, :]
Calls `pandas.DataFrame.sort_values` to sort a DataFrame according to criteria. See: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html For a list of specific keyword arguments for sort_values (which will be the same in arrange). Args: *args: Symbolic, string, integer or lists of those types indicating columns to sort the DataFrame by. Kwargs: **kwargs: Any keyword arguments will be passed through to the pandas `DataFrame.sort_values` function.
def max_cation_removal(self):
    oxid_pot = sum(
        [(Element(spec.symbol).max_oxidation_state - spec.oxi_state) * self.comp[spec]
         for spec in self.comp
         if is_redox_active_intercalation(Element(spec.symbol))])
    oxid_limit = oxid_pot / self.cation_charge
    num_cation = self.comp[Specie(self.cation.symbol, self.cation_charge)]
    return min(oxid_limit, num_cation)
Maximum number of cation A that can be removed while maintaining charge-balance. Returns: integer amount of cation. Depends on cell size (this is an 'extrinsic' function!)
def spine_to_terminal_wedge(mol):
    for i, a in mol.atoms_iter():
        if mol.neighbor_count(i) == 1:
            ni, nb = list(mol.neighbors(i).items())[0]
            if nb.order == 1 and nb.type in (1, 2) \
                    and ni > i != nb.is_lower_first:
                nb.is_lower_first = not nb.is_lower_first
                nb.type = {1: 2, 2: 1}[nb.type]
Arrange stereo wedge direction from spine to terminal atom
def copy(self, *, frame=None, form=None):
    new_compl = {}
    for k, v in self.complements.items():
        new_compl[k] = v.copy() if hasattr(v, 'copy') else v
    new_obj = self.__class__(
        self.date,
        self.base.copy(),
        self.form,
        self.frame,
        self.propagator.copy() if self.propagator is not None else None,
        **new_compl
    )
    if frame and frame != self.frame:
        new_obj.frame = frame
    if form and form != self.form:
        new_obj.form = form
    return new_obj
Provide a new instance of the same point in space-time Keyword Args: frame (str or Frame): Frame to convert the new instance into form (str or Form): Form to convert the new instance into Returns: Orbit: a new instance. Overrides :py:meth:`numpy.ndarray.copy()` to include additional fields
def _mappingGetValueSet(mapping, keys):
    setUnion = set()
    for k in keys:
        setUnion = setUnion.union(mapping[k])
    return setUnion
Return a combined set of values from the mapping. :param mapping: dict, for each key contains a set of entries returns a set of combined entries
def validate_token_age(callback_token):
    try:
        token = CallbackToken.objects.get(key=callback_token, is_active=True)
        seconds = (timezone.now() - token.created_at).total_seconds()
        token_expiry_time = api_settings.PASSWORDLESS_TOKEN_EXPIRE_TIME
        if seconds <= token_expiry_time:
            return True
        else:
            token.is_active = False
            token.save()
            return False
    except CallbackToken.DoesNotExist:
        return False
Returns True if a given token is within the age expiration limit.
def connections(self):
    self._check_session()
    status, data = self._rest.get_request('connections')
    return data
Get list of connections.
def PLAY(self):
    message = "PLAY " + self.session.url + " RTSP/1.0\r\n"
    message += self.sequence
    message += self.authentication
    message += self.user_agent
    message += self.session_id
    message += '\r\n'
    return message
RTSP session is ready to send data.
def add_option(self, parser):
    group = parser.add_argument_group(self.name)
    for stat in self.stats:
        stat.add_option(group)
    group.add_argument(
        "--{0}".format(self.option),
        action="store_true",
        help="All above")
Add option group and all children options.
def get_scenarios(network_id, **kwargs):
    user_id = kwargs.get('user_id')
    try:
        net_i = db.DBSession.query(Network).filter(Network.id == network_id).one()
        net_i.check_read_permission(user_id=user_id)
    except NoResultFound:
        raise ResourceNotFoundError("Network %s not found" % (network_id))
    return net_i.scenarios
Get all the scenarios in a given network.
def delete_router_by_name(self, rtr_name, tenant_id):
    try:
        routers = self.neutronclient.list_routers()
        rtr_list = routers.get('routers')
        for rtr in rtr_list:
            if rtr_name == rtr['name']:
                self.neutronclient.delete_router(rtr['id'])
    except Exception as exc:
        LOG.error("Failed to get and delete router by name %(name)s, "
                  "Exc %(exc)s", {'name': rtr_name, 'exc': str(exc)})
        return False
    return True
Delete the openstack router and its interfaces given its name. The interfaces should be already removed prior to calling this function.
def add_list_member(self, list_id, user_id):
    return List(tweepy_list_to_json(
        self._client.add_list_member(list_id=list_id, user_id=user_id)))
Add a user to list :param list_id: list ID number :param user_id: user ID number :return: :class:`~responsebot.models.List` object
def show_pypi_releases(self):
    try:
        hours = int(self.options.show_pypi_releases)
    except ValueError:
        self.logger.error("ERROR: You must supply an integer.")
        return 1
    try:
        latest_releases = self.pypi.updated_releases(hours)
    except XMLRPCFault as err_msg:
        self.logger.error(err_msg)
        self.logger.error("ERROR: Couldn't retrieve latest releases.")
        return 1
    for release in latest_releases:
        print("%s %s" % (release[0], release[1]))
    return 0
Show PyPI releases for the last number of `hours` @returns: 0 = success or 1 if failed to retrieve from XML-RPC server
def throw_invalid_quad_params(quad, QUADS, nparams):
    raise InvalidICError(
        str(quad),
        "Invalid quad code params for '%s' (expected %i, but got %i)" %
        (quad, QUADS[quad][0], nparams)
    )
Exception raised when an invalid number of params in the quad code has been emitted.
def _generate_union_class_variant_creators(self, ns, data_type):
    for field in data_type.fields:
        if not is_void_type(field.data_type):
            field_name = fmt_func(field.name)
            field_name_reserved_check = fmt_func(field.name, check_reserved=True)
            if is_nullable_type(field.data_type):
                field_dt = field.data_type.data_type
            else:
                field_dt = field.data_type
            self.emit('@classmethod')
            self.emit('def {}(cls, val):'.format(field_name_reserved_check))
            with self.indent():
                self.emit(' ')
                self.emit("return cls('{}', val)".format(field_name))
            self.emit()
Each non-symbol, non-any variant has a corresponding class method that can be used to construct a union with that variant selected.
def urlsplit(url):
    proto, rest = url.split(':', 1)
    host = ''
    if rest[:2] == '//':
        host, rest = rest[2:].split('/', 1)
        rest = '/' + rest
    return proto, host, rest
Split an arbitrary url into protocol, host, rest The standard urlsplit does not want to provide 'netloc' for arbitrary protocols, this works around that. :param url: The url to split into component parts
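This splitter is self-contained, so the docstring's claim about arbitrary protocols can be demonstrated directly (function reproduced from the row above; example URLs are made up):

```python
def urlsplit(url):
    proto, rest = url.split(':', 1)
    host = ''
    if rest[:2] == '//':
        host, rest = rest[2:].split('/', 1)
        rest = '/' + rest
    return proto, host, rest

# Works for both host-bearing and host-less schemes.
print(urlsplit('http://example.com/a/b'))  # ('http', 'example.com', '/a/b')
print(urlsplit('memory:/tmp/cache'))       # ('memory', '', '/tmp/cache')
```

Note a limitation worth knowing: a URL with a host but no path (e.g. `'http://example.com'`) raises `ValueError`, because the `split('/', 1)` after the host yields only one part.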
def config_field_type(field, cls):
    return defs.ConfigField(
        lambda _: isinstance(_, cls),
        lambda: CONFIG_FIELD_TYPE_ERROR.format(field, cls.__name__))
Validate a config field against a type. Similar functionality to :func:`validate_field_matches_type` but returns :obj:`honeycomb.defs.ConfigField`
def handleFlaskPostRequest(flaskRequest, endpoint):
    if flaskRequest.method == "POST":
        return handleHttpPost(flaskRequest, endpoint)
    elif flaskRequest.method == "OPTIONS":
        return handleHttpOptions()
    else:
        raise exceptions.MethodNotAllowedException()
Handles the specified flask request for one of the POST URLS Invokes the specified endpoint to generate a response.
def title_prefix(soup):
    "titlePrefix for article JSON is only articles with certain display_channel values"
    prefix = None
    display_channel_match_list = ['feature article', 'insight', 'editorial']
    for d_channel in display_channel(soup):
        if d_channel.lower() in display_channel_match_list:
            if raw_parser.sub_display_channel(soup):
                prefix = node_text(first(raw_parser.sub_display_channel(soup)))
    return prefix
titlePrefix for article JSON is only articles with certain display_channel values
def _delete_horizontal_space(text, pos):
    while pos > 0 and text[pos - 1].isspace():
        pos -= 1
    end_pos = pos
    while end_pos < len(text) and text[end_pos].isspace():
        end_pos += 1
    return text[:pos] + text[end_pos:], pos
Delete all spaces and tabs around pos.
def error_response(self, code, content=''):
    self.send_response(code)
    self.send_header('Content-Type', 'text/xml')
    self.add_compliance_header()
    self.end_headers()
    self.wfile.write(content)
Construct and send error response.
def load_pickle(filename):
    try:
        if pd:
            return pd.read_pickle(filename), None
        else:
            with open(filename, 'rb') as fid:
                data = pickle.load(fid)
            return data, None
    except Exception as err:
        return None, str(err)
Load a pickle file as a dictionary
def _create_table_and_update_context(node, context):
    schema_type_name = sql_context_helpers.get_schema_type_name(node, context)
    table = context.compiler_metadata.get_table(schema_type_name).alias()
    context.query_path_to_selectable[node.query_path] = table
    return table
Create an aliased table for a SqlNode. Updates the relevant Selectable global context. Args: node: SqlNode, the current node. context: CompilationContext, global compilation state and metadata. Returns: Table, the newly aliased SQLAlchemy table.
def _restore_training_state(self, restore_state):
    self.load_state_dict(restore_state["model"])
    self.optimizer.load_state_dict(restore_state["optimizer"])
    self.lr_scheduler.load_state_dict(restore_state["lr_scheduler"])
    start_iteration = restore_state["iteration"] + 1
    if self.config["verbose"]:
        print(f"Restored checkpoint to iteration {start_iteration}.")
    if restore_state["best_model_found"]:
        self.checkpointer.best_model_found = True
        self.checkpointer.best_iteration = restore_state["best_iteration"]
        self.checkpointer.best_score = restore_state["best_score"]
        if self.config["verbose"]:
            print(
                f"Updated checkpointer: "
                f"best_score={self.checkpointer.best_score:.3f}, "
                f"best_iteration={self.checkpointer.best_iteration}"
            )
    return start_iteration
Restores the model and optimizer states This helper function restores the model's state to a given iteration so that a user can resume training at any epoch. Args: restore_state: a state_dict dictionary
def get_and_check_project(valid_vcs_rules, source_url):
    project_path = match_url_regex(valid_vcs_rules, source_url, match_url_path_callback)
    if project_path is None:
        raise ValueError("Unknown repo for source url {}!".format(source_url))
    project = project_path.split('/')[-1]
    return project
Given vcs rules and a source_url, return the project. The project is in the path, but is the repo name. `releases/mozilla-beta` is the path; `mozilla-beta` is the project. Args: valid_vcs_rules (tuple of frozendicts): the valid vcs rules, per ``match_url_regex``. source_url (str): the source url to find the project for. Raises: RuntimeError: on failure to find the project. Returns: str: the project.
def data_storage_shape(self):
    if self.data_shape == -1:
        return -1
    else:
        return tuple(self.data_shape[ax] for ax in np.argsort(self.data_axis_order))
Shape tuple of the data as stored in the file. If no header is available (i.e., before it has been initialized), or any of the header entries ``'nx', 'ny', 'nz'`` is missing, -1 is returned, which makes reshaping a no-op. Otherwise, the returned shape is a permutation of `data_shape`, i.e., ``(nx, ny, nz)``, according to `data_axis_order` in the following way:: data_shape[i] == data_storage_shape[data_axis_order[i]] See Also -------- data_shape data_axis_order
def __get_response(self, uri, params=None, method="get", stream=False):
    if not hasattr(self, "session") or not self.session:
        self.session = requests.Session()
        if self.access_token:
            self.session.headers.update(
                {'Authorization': 'Bearer {}'.format(self.access_token)}
            )
    if params:
        params = {k: v for k, v in params.items() if v is not None}
    kwargs = {
        "url": uri,
        "verify": True,
        "stream": stream
    }
    kwargs["params" if method == "get" else "data"] = params
    return getattr(self.session, method)(**kwargs)
Creates a response object with the given params and options Parameters ---------- uri : string The full URL to request. params : dict A list of parameters to send with the request. This will be sent as data for methods that accept a request body and will otherwise be sent as query parameters. method : str The HTTP method to use. stream : bool Whether to stream the response. Returns a requests.Response object.
def visit_continue(self, node, parent):
    return nodes.Continue(
        getattr(node, "lineno", None),
        getattr(node, "col_offset", None),
        parent
    )
visit a Continue node by returning a fresh instance of it
def _cwl_workflow_template(inputs, top_level=False):
    ready_inputs = []
    for inp in inputs:
        cur_inp = copy.deepcopy(inp)
        for attr in ["source", "valueFrom", "wf_duplicate"]:
            cur_inp.pop(attr, None)
        if top_level:
            cur_inp = workflow._flatten_nested_input(cur_inp)
        cur_inp = _clean_record(cur_inp)
        ready_inputs.append(cur_inp)
    return {"class": "Workflow",
            "cwlVersion": "v1.0",
            "hints": [],
            "requirements": [{"class": "EnvVarRequirement",
                              "envDef": [{"envName": "MPLCONFIGDIR", "envValue": "."}]},
                             {"class": "ScatterFeatureRequirement"},
                             {"class": "SubworkflowFeatureRequirement"}],
            "inputs": ready_inputs,
            "outputs": [],
            "steps": []}
Retrieve CWL inputs shared amongst different workflows.
def delete(cls, name):
    result = cls.call('hosting.rproxy.delete', cls.usable_id(name))
    cls.echo('Deleting your webaccelerator named %s' % name)
    cls.display_progress(result)
    cls.echo('Webaccelerator have been deleted')
    return result
Delete a webaccelerator
def contains(cat, key, container):
    hash(key)
    try:
        loc = cat.categories.get_loc(key)
    except KeyError:
        return False
    if is_scalar(loc):
        return loc in container
    else:
        return any(loc_ in container for loc_ in loc)
Helper for membership check for ``key`` in ``cat``. This is a helper method for :method:`__contains__` and :class:`CategoricalIndex.__contains__`. Returns True if ``key`` is in ``cat.categories`` and the location of ``key`` in ``categories`` is in ``container``. Parameters ---------- cat : :class:`Categorical` or :class:`CategoricalIndex` key : a hashable object The key to check membership for. container : Container (e.g. list-like or mapping) The container to check for membership in. Returns ------- is_in : bool True if ``key`` is in ``self.categories`` and location of ``key`` in ``categories`` is in ``container``, else False. Notes ----- This method does not check for NaN values. Do that separately before calling this method.
def get_client_info(self):
    iq = aioxmpp.IQ(
        to=self.client.local_jid.bare().replace(localpart=None),
        type_=aioxmpp.IQType.GET,
        payload=xso.Query()
    )
    reply = (yield from self.client.send(iq))
    return reply
A query is sent to the server to obtain the client's data stored at the server. :return: :class:`~aioxmpp.ibr.Query`
def assert_pks_uniqueness(self, pks, exclude, value):
    pks = list(set(pks))
    if len(pks) > 1:
        raise UniquenessError(
            "Multiple values indexed for unique field %s.%s: %s" % (
                self.model.__name__, self.field.name, pks
            )
        )
    elif len(pks) == 1 and (not exclude or pks[0] != exclude):
        self.connection.delete(self.field.key)
        raise UniquenessError(
            'Value "%s" already indexed for unique field %s.%s (for instance %s)' % (
                self.normalize_value(value), self.model.__name__,
                self.field.name, pks[0]
            )
        )
Check uniqueness of pks Parameters ----------- pks: iterable The pks to check for uniqueness. If more than one different, it will raise. If only one and different than `exclude`, it will raise too. exclude: str The pk that we accept to be the only one in `pks`. For example the pk of the instance we want to check for uniqueness: we don't want to raise if the value is the one already set for this instance value: any Only to be displayed in the error message. Raises ------ UniquenessError - If at least two different pks - If only one pk that is not the `exclude` one
def timestamps(self, use_current=True):
    if use_current:
        self.timestamp("created_at").use_current()
        self.timestamp("updated_at").use_current()
    else:
        self.timestamp("created_at")
        self.timestamp("updated_at")
Create creation and update timestamps to the table. :rtype: Fluent
def prepare_amazon_algorithm_estimator(estimator, inputs, mini_batch_size=None):
    if isinstance(inputs, list):
        for record in inputs:
            if isinstance(record, amazon_estimator.RecordSet) and record.channel == 'train':
                estimator.feature_dim = record.feature_dim
                break
    elif isinstance(inputs, amazon_estimator.RecordSet):
        estimator.feature_dim = inputs.feature_dim
    else:
        raise TypeError('Training data must be represented in RecordSet or list of RecordSets')
    estimator.mini_batch_size = mini_batch_size
Set up amazon algorithm estimator, adding the required `feature_dim` hyperparameter from training data. Args: estimator (sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase): An estimator for a built-in Amazon algorithm to get information from and update. inputs: The training data. * (sagemaker.amazon.amazon_estimator.RecordSet) - A collection of Amazon :class:~`Record` objects serialized and stored in S3. For use with an estimator for an Amazon algorithm. * (list[sagemaker.amazon.amazon_estimator.RecordSet]) - A list of :class:~`sagemaker.amazon.amazon_estimator.RecordSet` objects, where each instance is a different channel of training data.
def send_request(self, *args, **kwargs):
    try:
        return super(JSHost, self).send_request(*args, **kwargs)
    except RequestsConnectionError as e:
        if (
            self.manager and
            self.has_connected and
            self.logfile and
            'unsafe' not in kwargs
        ):
            raise ProcessError(
                '{} appears to have crashed, you can inspect the log file at {}'.format(
                    self.get_name(),
                    self.logfile,
                )
            )
        raise six.reraise(RequestsConnectionError,
                          RequestsConnectionError(*e.args),
                          sys.exc_info()[2])
Intercept connection errors which suggest that a managed host has crashed and raise an exception indicating the location of the log
def _storage_list_keys(bucket, pattern):
    data = [{'Name': item.metadata.name,
             'Type': item.metadata.content_type,
             'Size': item.metadata.size,
             'Updated': item.metadata.updated_on}
            for item in _storage_get_keys(bucket, pattern)]
    return datalab.utils.commands.render_dictionary(
        data, ['Name', 'Type', 'Size', 'Updated'])
List all storage keys in a specified bucket that match a pattern.
def render(self, *args, **kwargs):
    env = {}
    stdout = []
    for dictarg in args:
        env.update(dictarg)
    env.update(kwargs)
    self.execute(stdout, env)
    return ''.join(stdout)
Render the template using keyword arguments as local variables.
def rouge_2(hypotheses, references):
    rouge_2 = [
        rouge_n([hyp], [ref], 2)
        for hyp, ref in zip(hypotheses, references)
    ]
    rouge_2_f, _, _ = map(np.mean, zip(*rouge_2))
    return rouge_2_f
Calculate ROUGE-2 F1, precision, recall scores
def register_module(self, module, url_prefix):
    module._plugin = self
    module._url_prefix = url_prefix
    for func in module._register_funcs:
        func(self, url_prefix)
Registers a module with a plugin. Requires a url_prefix that will then enable calls to url_for. :param module: Should be an instance `xbmcswift2.Module`. :param url_prefix: A url prefix to use for all module urls, e.g. '/mymodule'
def tabModificationStateChanged(self, tab):
    if tab == self.currentTab:
        changed = tab.editBox.document().isModified()
        if self.autoSaveActive(tab):
            changed = False
        self.actionSave.setEnabled(changed)
        self.setWindowModified(changed)
Perform all UI state changes that need to be done when the modification state of the current tab has changed.
def increment_title(title):
    count = re.search(r'\d+$', title).group(0)
    new_title = title[:-len(count)] + str(int(count) + 1)
    return new_title
Increments a string that ends in a number
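A self-contained check of the pair above (function reproduced from the row above, with `re` imported):

```python
import re

def increment_title(title):
    count = re.search(r'\d+$', title).group(0)
    new_title = title[:-len(count)] + str(int(count) + 1)
    return new_title

print(increment_title("chapter 7"))  # chapter 8
print(increment_title("v09"))        # v10
```

Two caveats: a title without trailing digits raises `AttributeError` (the `re.search` returns `None`), and zero padding is not preserved ("v09" becomes "v10", not "v010").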
def _remove_unused_nodes(self):
    nodes, wf_remove_node = self.nodes, self.workflow.remove_node
    add_visited, succ = self._visited.add, self.workflow.succ
    for n in (set(self._wf_pred) - set(self._visited)):
        node_type = nodes[n]['type']
        if node_type == 'data':
            continue
        if node_type == 'dispatcher' and succ[n]:
            add_visited(n)
            i = self.index + nodes[n]['index']
            self.sub_sol[i]._remove_unused_nodes()
            continue
        wf_remove_node(n)
Removes unused function and sub-dispatcher nodes.
def _create_clock(self):
    trading_o_and_c = self.trading_calendar.schedule.ix[
        self.sim_params.sessions]
    market_closes = trading_o_and_c['market_close']
    minutely_emission = False
    if self.sim_params.data_frequency == 'minute':
        market_opens = trading_o_and_c['market_open']
        minutely_emission = self.sim_params.emission_rate == "minute"
        execution_opens = \
            self.trading_calendar.execution_time_from_open(market_opens)
        execution_closes = \
            self.trading_calendar.execution_time_from_close(market_closes)
    else:
        execution_closes = \
            self.trading_calendar.execution_time_from_close(market_closes)
        execution_opens = execution_closes
    before_trading_start_minutes = days_at_time(
        self.sim_params.sessions,
        time(8, 45),
        "US/Eastern"
    )
    return MinuteSimulationClock(
        self.sim_params.sessions,
        execution_opens,
        execution_closes,
        before_trading_start_minutes,
        minute_emission=minutely_emission,
    )
If the clock property is not set, then create one based on frequency.
def _get_struct_clipactions(self): obj = _make_object("ClipActions") clipeventflags_size = 2 if self._version <= 5 else 4 clipactionend_size = 2 if self._version <= 5 else 4 all_zero = b"\x00" * clipactionend_size assert unpack_ui16(self._src) == 0 obj.AllEventFlags = self._src.read(clipeventflags_size) obj.ClipActionRecords = records = [] while True: next_bytes = self._src.read(clipactionend_size) if next_bytes == all_zero: break record = _make_object("ClipActionRecord") records.append(record) record.EventFlags = next_bytes record.ActionRecordSize = unpack_ui32(self._src) record.TheRestTODO = self._src.read(record.ActionRecordSize) return obj
Get the several CLIPACTIONRECORDs.
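A minimal sketch of the little-endian unpack helpers the parser above leans on (`unpack_ui16`/`unpack_ui32` are assumed to read from a file-like source; the exact names mirror the snippet, not a confirmed API):

```python
import io
import struct

def unpack_ui16(src):
    # SWF integers are little-endian; read 2 bytes as unsigned.
    return struct.unpack("<H", src.read(2))[0]

def unpack_ui32(src):
    return struct.unpack("<I", src.read(4))[0]

src = io.BytesIO(b"\x00\x00" + b"\x05\x00\x00\x00")
reserved = unpack_ui16(src)    # the field the parser asserts to be 0
record_size = unpack_ui32(src)
```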
def transform(transform_func): def decorator(func): @wraps(func) def f(*args, **kwargs): return transform_func( func(*args, **kwargs) ) return f return decorator
Apply a transformation to a functions return value
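A usage sketch of the decorator factory above, re-declared so the example is self-contained; `unique` is a hypothetical function added here for illustration:

```python
from functools import wraps

def transform(transform_func):
    # Decorator factory: run transform_func on the wrapped
    # function's return value.
    def decorator(func):
        @wraps(func)
        def f(*args, **kwargs):
            return transform_func(func(*args, **kwargs))
        return f
    return decorator

@transform(sorted)
def unique(items):
    # Returns a set, which @transform(sorted) turns into a sorted list.
    return set(items)
```

Because of `@wraps`, the decorated function keeps its original `__name__` and docstring.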
def translation_generator( variant_sequences, reference_contexts, min_transcript_prefix_length, max_transcript_mismatches, include_mismatches_after_variant, protein_sequence_length=None): for reference_context in reference_contexts: for variant_sequence in variant_sequences: translation = Translation.from_variant_sequence_and_reference_context( variant_sequence=variant_sequence, reference_context=reference_context, min_transcript_prefix_length=min_transcript_prefix_length, max_transcript_mismatches=max_transcript_mismatches, include_mismatches_after_variant=include_mismatches_after_variant, protein_sequence_length=protein_sequence_length) if translation is not None: yield translation
Given all detected VariantSequence objects for a particular variant and all the ReferenceContext objects for that locus, yield a Translation for each (variant sequence, reference context) pair that passes the filters below. Parameters ---------- variant_sequences : list of VariantSequence objects Variant sequences overlapping a single original variant reference_contexts : list of ReferenceContext objects Reference sequence contexts from the same variant as the variant_sequences min_transcript_prefix_length : int Minimum number of nucleotides before the variant to test whether our variant sequence can use the reading frame from a reference transcript. max_transcript_mismatches : int Maximum number of mismatches between coding sequence before variant and reference transcript we're considering for determining the reading frame. include_mismatches_after_variant : bool If true, mismatches occurring after the variant locus will also count toward max_transcript_mismatches filtering. protein_sequence_length : int, optional Truncate protein to be at most this long. Yields a sequence of Translation objects.
def _second_column(self): if self._A[1, 1] == 0 and self._A[2, 1] != 0: self._swap_rows(1, 2) if self._A[2, 1] != 0: self._zero_second_column()
Reduce the lower-right 2x2 submatrix. Assumes elements in the first row and first column are all zero except for A[0,0].
def get_worker_build_info(workflow, platform): workspace = workflow.plugin_workspace[OrchestrateBuildPlugin.key] return workspace[WORKSPACE_KEY_BUILD_INFO][platform]
Obtain worker build information for a given platform
def entropy(data): if len(data) == 0: return None n = sum(data) _op = lambda f: f * math.log(f) return - sum(_op(float(i) / n) for i in data if i)
Compute the Shannon entropy, a measure of uncertainty.
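A self-contained sketch of the same computation, shown with natural log (nats) as in the snippet; zero counts are skipped since the limit of f·log(f) as f approaches 0 is 0:

```python
import math

def entropy(data):
    # Shannon entropy, in nats, of a list of event counts.
    # Zero counts contribute nothing to the sum.
    if not data:
        return None
    n = sum(data)
    return -sum((i / n) * math.log(i / n) for i in data if i)
```

Two equally likely outcomes give ln(2) nats of uncertainty, and a degenerate distribution gives zero.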
def set_value(self, instance, value, parent=None): self.resolve_base(instance) value = self.deserialize(value, parent) instance.values[self.alias] = value self._trigger_changed(instance, value)
Set the property value on the given instance, deserializing it first and triggering a changed notification. :param instance: the object holding the property values :param value: the raw value to deserialize and store :param parent: optional parent passed to deserialization :return: None
def resolve_identifiers(self, subject_context): session = subject_context.session identifiers = subject_context.resolve_identifiers(session) if (not identifiers): msg = ("No identity (identifier_collection) found in the " "subject_context. Looking for a remembered identity.") logger.debug(msg) identifiers = self.get_remembered_identity(subject_context) if identifiers: msg = ("Found remembered IdentifierCollection. Adding to the " "context to be used for subject construction.") logger.debug(msg) subject_context.identifiers = identifiers subject_context.remembered = True else: msg = ("No remembered identity found. Returning original " "context.") logger.debug(msg) return subject_context
Ensures that a subject_context has identifiers; if it doesn't, attempts to locate them using heuristics (first the session, then a remembered identity).
def _specialize(self, reconfigure=False): for manifest in [self.source, self.target]: context_dict = {} if manifest: for s in manifest.formula_sections(): context_dict["%s:root_dir" % s] = self.directory.install_directory(s) context_dict['config:root_dir'] = self.directory.root_dir context_dict['config:node'] = system.NODE manifest.add_additional_context(context_dict) self._validate_manifest() for feature in self.features.run_order: if not reconfigure: self.run_action(feature, 'resolve') instance = self.features[feature] if instance.target: self.run_action(feature, 'prompt')
Add variables and specialize contexts
def set_var(self, vardef): if not(vardef.default and self.cache['ctx'].get(vardef.name)): self.cache['ctx'][vardef.name] = vardef.expression.value
Set variable to global stylesheet context.
def rule_expand(component, text): global rline_mpstate if component[0] == '<' and component[-1] == '>': return component[1:-1].split('|') if component in rline_mpstate.completion_functions: return rline_mpstate.completion_functions[component](text) return [component]
expand one rule component
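A stateless sketch of the expander above: the `completions` dict replaces the module-global `rline_mpstate` lookup so the example runs standalone (the dict parameter is an assumption of this sketch, not part of the original signature):

```python
def rule_expand(component, completions=None, text=""):
    # '<a|b|c>' expands to its alternatives; a name registered in
    # `completions` calls its completion function with the current
    # text; anything else stands for itself.
    completions = completions or {}
    if component.startswith('<') and component.endswith('>'):
        return component[1:-1].split('|')
    if component in completions:
        return completions[component](text)
    return [component]
```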
def create_environment_vip(self): return EnvironmentVIP( self.networkapi_url, self.user, self.password, self.user_ldap)
Get an instance of environment_vip services facade.
def check_token(self, respond): if respond.status_code == 401: self.credential.obtain_token(config=self.config) return False return True
Check whether the user's token is valid; on a 401 response a new token is obtained and False is returned.
def wrap_get_channel(cls, response): json = response.json() c = cls.wrap_json(json) return c
Wrap the response from getting a channel into an instance and return it :param response: The response from getting a channel :type response: :class:`requests.Response` :returns: the new channel instance :rtype: :class:`channel` :raises: None
def _preprocess_Y(self, Y, k): Y = Y.clone() if Y.dim() == 1 or Y.shape[1] == 1: Y = pred_to_prob(Y.long(), k=k) return Y
Convert Y to prob labels if necessary
def SegmentProd(a, ids): func = lambda idxs: reduce(np.multiply, a[idxs]) return seg_map(func, a, ids),
Segmented prod op.
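A pure-Python sketch of the same segmented reduction without NumPy; like the op above it assumes the segment ids arrive sorted, so `groupby` sees each segment as one contiguous run:

```python
from functools import reduce
from itertools import groupby
from operator import mul

def segment_prod(values, ids):
    # Multiply together the values that share a segment id.
    pairs = groupby(zip(ids, values), key=lambda p: p[0])
    return [reduce(mul, (v for _, v in grp)) for _, grp in pairs]
```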
def get_list_database(self): url = "db" response = self.request( url=url, method='GET', expected_response_code=200 ) return response.json()
Get the list of databases.
def normalize_path(path, filetype=FILE): if not path: raise ValueError('"{0}" is not a valid path.'.format(path)) if not os.path.exists(path): raise ValueError('"{0}" does not exist.'.format(path)) if filetype == FILE and not os.path.isfile(path): raise ValueError('"{0}" is not a file.'.format(path)) elif filetype == DIR and not os.path.isdir(path): raise ValueError('"{0}" is not a dir.'.format(path)) return os.path.abspath(path)
Takes a path and a filetype, verifies existence and type, and returns absolute path.
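A self-contained usage sketch; `FILE` and `DIR` are stand-ins for the module-level constants the helper references (their real values are not shown in the snippet):

```python
import os
import tempfile

FILE, DIR = 'file', 'dir'  # assumed stand-ins for the real constants

def normalize_path(path, filetype=FILE):
    # Validate existence and type before resolving to an absolute path.
    if not path:
        raise ValueError('"{0}" is not a valid path.'.format(path))
    if not os.path.exists(path):
        raise ValueError('"{0}" does not exist.'.format(path))
    if filetype == FILE and not os.path.isfile(path):
        raise ValueError('"{0}" is not a file.'.format(path))
    if filetype == DIR and not os.path.isdir(path):
        raise ValueError('"{0}" is not a dir.'.format(path))
    return os.path.abspath(path)

resolved = normalize_path(tempfile.gettempdir(), filetype=DIR)
```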
def remove_item_languages(self, item, languages): qs = TransLanguage.objects.filter(code__in=languages) remove_langs = [lang for lang in qs] if not remove_langs: return ct_item = ContentType.objects.get_for_model(item) item_lan, created = TransItemLanguage.objects.get_or_create(content_type_id=ct_item.id, object_id=item.id) for lang in remove_langs: item_lan.languages.remove(lang) if item_lan.languages.count() == 0: item_lan.delete()
Delete the selected languages from the TransItemLanguage model for the given item, removing the TransItemLanguage record entirely when no languages remain. :param item: :param languages: :return:
def queueStream(self, rdds, oneAtATime=True, default=None): if default and not isinstance(default, RDD): default = self._sc.parallelize(default) if not rdds and default: rdds = [default] if rdds and not isinstance(rdds[0], RDD): rdds = [self._sc.parallelize(input) for input in rdds] self._check_serializers(rdds) queue = self._jvm.PythonDStream.toRDDQueue([r._jrdd for r in rdds]) if default: default = default._reserialize(rdds[0]._jrdd_deserializer) jdstream = self._jssc.queueStream(queue, oneAtATime, default._jrdd) else: jdstream = self._jssc.queueStream(queue, oneAtATime) return DStream(jdstream, self, rdds[0]._jrdd_deserializer)
Create an input stream from a queue of RDDs or list. In each batch, it will process either one or all of the RDDs returned by the queue. .. note:: Changes to the queue after the stream is created will not be recognized. @param rdds: Queue of RDDs @param oneAtATime: pick one rdd each time or pick all of them once. @param default: The default rdd if no more in rdds
def update(): assert request.method == "POST", "POST request expected received {}".format(request.method) if request.method == 'POST': selected_run = request.form['selected_run'] variable_names = utils.get_variables(selected_run).items() if len(current_index) < 1: for _, v_n in variable_names: current_index[v_n] = 0 logging.info("Current index: {}".format(current_index)) data = utils.get_variable_update_dicts(current_index, variable_names, selected_run) return jsonify(data)
Called periodically via XMLHttpRequest to get new graph data. Usage description: This function queries the database and returns all the newly added values. :return: JSON Object, passed on to the JS script.
def after_sample(analysis_request): analysis_request.setDateSampled(DateTime()) idxs = ['getDateSampled'] for analysis in analysis_request.getAnalyses(full_objects=True): analysis.reindexObject(idxs=idxs)
Method triggered after "sample" transition for the Analysis Request passed in is performed
def _data(self, received_data): if self.listener.on_data(received_data) is False: self.stop() raise ListenerError(self.listener.connection_id, received_data)
Sends data to the listener; if the listener returns False, the socket is closed. :param received_data: Decoded data received from the socket.
def create(self, ogpgs): data = {'ogpgs': ogpgs} return super(ApiObjectGroupPermissionGeneral, self).post('api/v3/object-group-perm-general/', data)
Method to create object group permissions general :param ogpgs: List containing the object group permissions general to be created in the database :return: None
def write(self, f): if isinstance(f, str): f = io.open(f, 'w', encoding='utf-8') if not hasattr(f, 'write'): raise AttributeError("Wrong type of file: {0}".format(type(f))) NS_LOGGER.info('Write to `{0}`'.format(f.name)) for section in self.sections.keys(): f.write('[{0}]\n'.format(section)) for k, v in self[section].items(): f.write('{0:15}= {1}\n'.format(k, v)) f.write('\n') f.close()
Write namespace as INI file. :param f: File object or path to file.
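A small sketch of the formatting loop above against an in-memory buffer; the sample section data is invented for illustration:

```python
import io

sections = {'server': {'host': 'localhost', 'port': '8080'}}

buf = io.StringIO()
for section, items in sections.items():
    buf.write('[{0}]\n'.format(section))
    for k, v in items.items():
        # '{0:15}' left-pads each key into a 15-character column,
        # so the '=' signs line up down the file.
        buf.write('{0:15}= {1}\n'.format(k, v))
    buf.write('\n')
```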
def radio_button(g, l, fn): w = urwid.RadioButton(g, l, False, on_state_change=fn) w = urwid.AttrWrap(w, 'button normal', 'button select') return w
Create an urwid radio button in the given group, wrapped with the normal/select button attributes and wired to the given state-change callback.
def enqueue_command(self, command, data): if command == CommandType.TrialEnd or (command == CommandType.ReportMetricData and data['type'] == 'PERIODICAL'): self.assessor_command_queue.put((command, data)) else: self.default_command_queue.put((command, data)) qsize = self.default_command_queue.qsize() if qsize >= QUEUE_LEN_WARNING_MARK: _logger.warning('default queue length: %d', qsize) qsize = self.assessor_command_queue.qsize() if qsize >= QUEUE_LEN_WARNING_MARK: _logger.warning('assessor queue length: %d', qsize)
Enqueue command into command queues
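A minimal sketch of the two-queue routing above: assessor-bound traffic (trial ends and periodic metric reports) gets its own queue so a slow default-queue consumer cannot delay it. Plain strings stand in for the CommandType enum here:

```python
import queue

class CommandRouter:
    # Route assessor traffic to a dedicated queue; everything else
    # goes to the default queue.
    def __init__(self):
        self.default_q = queue.Queue()
        self.assessor_q = queue.Queue()

    def enqueue(self, command, data):
        if command == 'TrialEnd' or (command == 'ReportMetricData'
                                     and data.get('type') == 'PERIODICAL'):
            self.assessor_q.put((command, data))
        else:
            self.default_q.put((command, data))

router = CommandRouter()
router.enqueue('ReportMetricData', {'type': 'PERIODICAL'})
router.enqueue('NewTrialJob', {})
```

A production version would also watch queue depth and warn past a threshold, as the snippet does.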
def delete_vpc_peering_connection(conn_id=None, conn_name=None, region=None, key=None, keyid=None, profile=None, dry_run=False): if not _exactly_one((conn_id, conn_name)): raise SaltInvocationError('Exactly one of conn_id or ' 'conn_name must be provided.') conn = _get_conn3(region=region, key=key, keyid=keyid, profile=profile) if conn_name: conn_id = _vpc_peering_conn_id_for_name(conn_name, conn) if not conn_id: raise SaltInvocationError("Couldn't resolve VPC peering connection " "{0} to an ID".format(conn_name)) try: log.debug('Trying to delete vpc peering connection') conn.delete_vpc_peering_connection(DryRun=dry_run, VpcPeeringConnectionId=conn_id) return {'msg': 'VPC peering connection deleted.'} except botocore.exceptions.ClientError as err: e = __utils__['boto.get_error'](err) log.error('Failed to delete VPC peering %s: %s', conn_name or conn_id, e) return {'error': e}
Delete a VPC peering connection. .. versionadded:: 2016.11.0 conn_id The connection ID to check. Exclusive with conn_name. conn_name The connection name to check. Exclusive with conn_id. region Region to connect to. key Secret key to be used. keyid Access key to be used. profile A dict with region, key and keyid, or a pillar key (string) that contains a dict with region, key and keyid. dry_run If True, skip application and simply return projected status. CLI Example: .. code-block:: bash # Create a named VPC peering connection salt myminion boto_vpc.delete_vpc_peering_connection conn_name=salt-vpc # Specify a region salt myminion boto_vpc.delete_vpc_peering_connection conn_name=salt-vpc region=us-west-2 # specify an id salt myminion boto_vpc.delete_vpc_peering_connection conn_id=pcx-8a8939e3
def get_context(self, name, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None): if 'get_context' not in self._inner_api_calls: self._inner_api_calls[ 'get_context'] = google.api_core.gapic_v1.method.wrap_method( self.transport.get_context, default_retry=self._method_configs['GetContext'].retry, default_timeout=self._method_configs['GetContext'].timeout, client_info=self._client_info, ) request = context_pb2.GetContextRequest(name=name, ) return self._inner_api_calls['get_context']( request, retry=retry, timeout=timeout, metadata=metadata)
Retrieves the specified context. Example: >>> import dialogflow_v2 >>> >>> client = dialogflow_v2.ContextsClient() >>> >>> name = client.context_path('[PROJECT]', '[SESSION]', '[CONTEXT]') >>> >>> response = client.get_context(name) Args: name (str): Required. The name of the context. Format: ``projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>``. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.dialogflow_v2.types.Context` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid.