def is_parent_of_catalog(self, id_, catalog_id):
    if self._catalog_session is not None:
        return self._catalog_session.is_parent_of_catalog(id_=id_, catalog_id=catalog_id)
    return self._hierarchy_session.is_parent(id_=catalog_id, parent_id=id_)
Tests if an ``Id`` is a direct parent of a catalog.

arg:    id (osid.id.Id): an ``Id``
arg:    catalog_id (osid.id.Id): the ``Id`` of a catalog
return: (boolean) - ``true`` if this ``id`` is a parent of ``catalog_id,`` ``false`` otherwise
raise:  NotFound - ``catalog_id`` is not found
raise:  NullArgument - ``id`` or ``catalog_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure occurred
*compliance: mandatory -- This method must be implemented.*
*implementation notes*: If ``id`` is not found, return ``false``.
def filter_data(data, filter_name, filter_file, bits, filterbank_off=False, swstat_channel_name=None):
    if filterbank_off:
        return numpy.zeros(len(data))
    for i in range(10):
        # Avoid shadowing the builtin `filter`.
        filt = Filter(filter_file[filter_name][i])
        bit = int(bits[-(i + 1)])
        if bit:
            logging.info('filtering with filter module %d', i)
            if len(filt.sections):
                data = filt.apply(data)
            else:
                coeffs = iir2z(filter_file[filter_name][i])
                if len(coeffs) > 1:
                    logging.error('Gain-only filter module returned more than one number')
                    sys.exit()
                gain = coeffs[0]
                data = gain * data
    return data
A naive function to determine if the filter was on at the time and then filter the data.
def list_directories(dir_pathname, recursive=True, topdown=True, followlinks=False):
    for root, dirnames, filenames in walk(dir_pathname, recursive, topdown, followlinks):
        for dirname in dirnames:
            yield absolute_path(os.path.join(root, dirname))
Yields the absolute paths of all directories within the specified directory, optionally recursively.

:param dir_pathname: The directory to traverse.
:param recursive: ``True`` for walking recursively through the directory tree; ``False`` otherwise.
:param topdown: Please see the documentation for :func:`os.walk`.
:param followlinks: Please see the documentation for :func:`os.walk`.
def server_enabled(s_name, **connection_args):
    server = _server_get(s_name, **connection_args)
    return server is not None and server.get_state() == 'ENABLED'
Check if a server is enabled globally

CLI Example:

.. code-block:: bash

    salt '*' netscaler.server_enabled 'serverName'
def EnableEditingOnService(self, url, definition=None):
    adminFS = AdminFeatureService(url=url, securityHandler=self._securityHandler)
    if definition is None:
        definition = collections.OrderedDict()
        definition['hasStaticData'] = False
        definition['allowGeometryUpdates'] = True
        definition['editorTrackingInfo'] = {}
        definition['editorTrackingInfo']['enableEditorTracking'] = False
        definition['editorTrackingInfo']['enableOwnershipAccessControl'] = False
        definition['editorTrackingInfo']['allowOthersToUpdate'] = True
        definition['editorTrackingInfo']['allowOthersToDelete'] = True
        definition['capabilities'] = "Query,Editing,Create,Update,Delete"
    existingDef = {}
    existingDef['capabilities'] = adminFS.capabilities
    existingDef['allowGeometryUpdates'] = adminFS.allowGeometryUpdates
    enableResults = adminFS.updateDefinition(json_dict=definition)
    if 'error' in enableResults:
        return enableResults['error']
    adminFS = None
    del adminFS
    print(enableResults)
    return existingDef
Enables editing capabilities on a feature service.

Args:
    url (str): The URL of the feature service.
    definition (dict): A dictionary containing valid definition values. Defaults to ``None``.

Returns:
    dict: The existing feature service definition capabilities.

When ``definition`` is not provided (``None``), the following values are used by default:

+------------------------------+------------------------------------------+
| Key                          | Value                                    |
+------------------------------+------------------------------------------+
| hasStaticData                | ``False``                                |
+------------------------------+------------------------------------------+
| allowGeometryUpdates         | ``True``                                 |
+------------------------------+------------------------------------------+
| enableEditorTracking         | ``False``                                |
+------------------------------+------------------------------------------+
| enableOwnershipAccessControl | ``False``                                |
+------------------------------+------------------------------------------+
| allowOthersToUpdate          | ``True``                                 |
+------------------------------+------------------------------------------+
| allowOthersToDelete          | ``True``                                 |
+------------------------------+------------------------------------------+
| capabilities                 | ``"Query,Editing,Create,Update,Delete"`` |
+------------------------------+------------------------------------------+
def get_resource(self, resource, **kwargs):
    return resource(request=self.request,
                    response=self.response,
                    path_params=self.path_params,
                    application=self.application,
                    **kwargs)
Returns a new instance of the resource class passed in as ``resource``. This is a helper to make future-compatibility easier when new arguments get added to the constructor.

:param resource: Resource class to instantiate. Gets called with the named arguments as required for the constructor.
:type resource: :class:`Resource`
:param kwargs: Additional named arguments to pass to the constructor function.
:type kwargs: dict
def describe_alias(FunctionName, Name, region=None, key=None, keyid=None, profile=None):
    try:
        alias = _find_alias(FunctionName, Name, region=region, key=key,
                            keyid=keyid, profile=profile)
        if alias:
            keys = ('AliasArn', 'Name', 'FunctionVersion', 'Description')
            return {'alias': dict([(k, alias.get(k)) for k in keys])}
        return {'alias': None}
    except ClientError as e:
        return {'error': __utils__['boto3.get_error'](e)}
Given a function name and alias name describe the properties of the alias.

Returns a dictionary of interesting properties.

CLI Example:

.. code-block:: bash

    salt myminion boto_lambda.describe_alias myalias
def zero_pad(matrix, to_length):
    # The original duplicated this check as an assert; a single explicit
    # check with a message on the ValueError is sufficient.
    if matrix.shape[0] > to_length:
        logger.error("zero_pad cannot be performed on matrix with shape {}"
                     " to length {}".format(matrix.shape[0], to_length))
        raise ValueError("matrix is longer than to_length")
    result = np.zeros((to_length,) + matrix.shape[1:])
    result[:matrix.shape[0]] = matrix
    return result
Zero pads along the 0th dimension to make sure the utterance array x is of length to_length.
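The padding behaviour can be sketched as a self-contained function (the `logger` call is dropped and the message moved into the exception; this is an illustration, not the library's implementation):

```python
import numpy as np

def zero_pad(matrix, to_length):
    """Zero-pad `matrix` along axis 0 up to `to_length` rows."""
    if matrix.shape[0] > to_length:
        raise ValueError("zero_pad cannot shrink a matrix of shape %s to length %d"
                         % (matrix.shape, to_length))
    # Allocate the padded result and copy the original rows into the top.
    result = np.zeros((to_length,) + matrix.shape[1:], dtype=matrix.dtype)
    result[:matrix.shape[0]] = matrix
    return result
```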
def Parse(text):
    precondition.AssertType(text, Text)
    if compatibility.PY2:
        text = text.encode("utf-8")
    return yaml.safe_load(text)
Parses a YAML source into a Python object.

Args:
    text: A YAML source to parse.

Returns:
    A Python data structure corresponding to the YAML source.
def get_interfaces(zone, permanent=True):
    cmd = '--zone={0} --list-interfaces'.format(zone)
    if permanent:
        cmd += ' --permanent'
    return __firewall_cmd(cmd).split()
List interfaces bound to a zone

.. versionadded:: 2016.3.0

CLI Example:

.. code-block:: bash

    salt '*' firewalld.get_interfaces zone
def Delete(self):
    disk_set = [{'diskId': o.id, 'sizeGB': o.size} for o in self.parent.disks if o != self]
    self.parent.disks = [o for o in self.parent.disks if o != self]
    self.parent.server.dirty = True
    return clc.v2.Requests(
        clc.v2.API.Call('PATCH',
                        'servers/%s/%s' % (self.parent.server.alias, self.parent.server.id),
                        json.dumps([{"op": "set", "member": "disks", "value": disk_set}]),
                        session=self.session),
        alias=self.parent.server.alias,
        session=self.session)
Delete disk.

This request will error if the disk is protected and cannot be removed (e.g. a system disk).

>>> clc.v2.Server("WA1BTDIX01").Disks().disks[2].Delete().WaitUntilComplete()
0
def user_link_history(self, created_before=None, created_after=None, limit=100, **kwargs):
    limit = int(limit)
    # Guard against the defaults: int(None) would raise TypeError.
    if created_after is not None:
        created_after = int(created_after)
    if created_before is not None:
        created_before = int(created_before)
    hist = self.api.user_link_history(limit=limit,
                                      created_before=created_before,
                                      created_after=created_after)
    record = "{0} - {1}"
    links = []
    for r in hist:
        link = r.get('keyword_link') or r['link']
        title = r['title'] or '<< NO TITLE >>'
        links.append(record.format(link, title))
    log.debug("First 3 links fetched:")
    log.debug(pretty(hist[0:3], indent=4))
    return links
Bit.ly API - user_link_history wrapper
def update_assume_role_policy(role_name, policy_document, region=None, key=None, keyid=None, profile=None):
    conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
    if isinstance(policy_document, six.string_types):
        policy_document = salt.utils.json.loads(policy_document,
                                                object_pairs_hook=odict.OrderedDict)
    try:
        _policy_document = salt.utils.json.dumps(policy_document)
        conn.update_assume_role_policy(role_name, _policy_document)
        log.info('Successfully updated assume role policy for IAM role %s.', role_name)
        return True
    except boto.exception.BotoServerError as e:
        log.error(e)
        log.error('Failed to update assume role policy for IAM role %s.', role_name)
        return False
Update an assume role policy for a role.

.. versionadded:: 2015.8.0

CLI Example:

.. code-block:: bash

    salt myminion boto_iam.update_assume_role_policy myrole '{"Statement":"..."}'
def system_status(self):
    flag, timestamp, status = self._query(('GETDAT? 1', (Integer, Float, Integer)))
    return {
        'timestamp': datetime.datetime.fromtimestamp(timestamp),
        'temperature': STATUS_TEMPERATURE[status & 0xf],
        'magnet': STATUS_MAGNET[(status >> 4) & 0xf],
        'chamber': STATUS_CHAMBER[(status >> 8) & 0xf],
        'sample_position': STATUS_SAMPLE_POSITION[(status >> 12) & 0xf],
    }
The system status codes.
def is_float_like(value):
    try:
        if isinstance(value, float):
            return True
        return float(value) == value and not str(value).isdigit()
    except (TypeError, ValueError):
        # A bare `except:` would also swallow KeyboardInterrupt and friends.
        return False
Returns whether the value acts like a standard float.

>>> is_float_like(4.0)
True
>>> is_float_like(numpy.float32(4.0))
True
>>> is_float_like(numpy.int32(4.0))
False
>>> is_float_like(4)
False
def _validator(key, val, env):
    if env[key] not in (True, False):
        raise SCons.Errors.UserError(
            'Invalid value for boolean option %s: %s' % (key, env[key]))
Validates that the given value is either ``True`` or ``False``. This is usable as 'validator' for SCons' Variables.
def conv_layer(x, hidden_size, kernel_size, stride, pooling_window, dropout_rate, dilation_rate, name="conv"):
    with tf.variable_scope(name):
        out = x
        out = common_layers.conv1d_block(out,
                                         hidden_size,
                                         [(dilation_rate, kernel_size)],
                                         strides=stride,
                                         first_relu=False,
                                         padding="same")
        out = tf.nn.relu(out)
        if pooling_window:
            out = tf.layers.max_pooling1d(out, pooling_window, pooling_window, padding="same")
        out = tf.layers.dropout(out, dropout_rate)
        return out
Single conv layer with relu, optional pooling, and dropout.
def _is_subspan(self, m, span):
    return (m.sentence.id == span[0]
            and m.char_start >= span[1]
            and m.char_end <= span[2])
Tests whether mention ``m`` is a subspan of ``span``, where ``span`` is defined specific to the mention type.
def print_table(lines, separate_head=True):
    # Compute the maximum width of each column.
    widths = []
    for line in lines:
        for i, size in enumerate([len(x) for x in line]):
            while i >= len(widths):
                widths.append(0)
            if size > widths[i]:
                widths[i] = size
    # Build the row format string, one field per column.
    print_string = ""
    for i, width in enumerate(widths):
        print_string += "{" + str(i) + ":" + str(width) + "} | "
    if len(print_string) == 0:
        return
    print_string = print_string[:-3]
    for i, line in enumerate(lines):
        print(print_string.format(*line))
        if i == 0 and separate_head:
            print("-" * (sum(widths) + 3 * (len(widths) - 1)))
Prints a formatted table given a two-dimensional array.
def _createGsshaPyObjects(self, cell):
    gridCell = GridStreamCell(cellI=cell['i'], cellJ=cell['j'], numNodes=cell['numNodes'])
    gridCell.gridStreamFile = self
    for linkNode in cell['linkNodes']:
        gridNode = GridStreamNode(linkNumber=linkNode['linkNumber'],
                                  nodeNumber=linkNode['nodeNumber'],
                                  nodePercentGrid=linkNode['percent'])
        gridNode.gridStreamCell = gridCell
Create GSSHAPY GridStreamCell and GridStreamNode objects for the given cell.
def fetch(args: List[str],
          env: Dict[str, str] = None,
          encoding: str = sys.getdefaultencoding()) -> str:
    stdout, _ = run(args, env=env, capture_stdout=True, echo_stdout=False, encoding=encoding)
    log.debug(stdout)
    return stdout
Run a command and return its stdout.

Args:
    args: the command-line arguments
    env: the operating system environment to use
    encoding: the encoding to use for ``stdout``

Returns:
    the command's ``stdout`` output
def check_no_element_by_selector(self, selector):
    elems = find_elements_by_jquery(world.browser, selector)
    if elems:
        raise AssertionError("Expected no matching elements, found {}.".format(len(elems)))
Assert an element does not exist matching the given selector.
def type(self):
    # Use ordered (predicate, name) pairs rather than a dict: boolean keys
    # collapse (True/False can each appear only once), which silently made
    # the original return the *last* matching property instead of the first.
    properties = ((self.is_code, "code"),
                  (self.is_data, "data"),
                  (self.is_string, "string"),
                  (self.is_tail, "tail"),
                  (self.is_unknown, "unknown"))
    for k, v in properties:
        if k:
            return v
return the type of the Line
def getExistingFile(name, max_suffix=1000):
    num = 1
    stem, ext = os.path.splitext(name)
    filename = name
    while not os.path.exists(filename):
        suffix = "-%d" % num
        filename = stem + suffix + ext
        num += 1
        if num >= max_suffix:
            raise ValueError("No file %r found" % name)
    return filename
Add filename suffix until file exists.

@return: filename if file is found
@raise: ValueError if maximum suffix number is reached while searching
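The suffix-probing logic can be sketched independently of the filesystem by injecting the `exists` predicate (the injectable parameter is an assumption added here for testability, not part of the original signature):

```python
import os

def get_existing_file(name, exists=os.path.exists, max_suffix=1000):
    """Try `name`, then `name-1`, `name-2`, ... until `exists` says yes."""
    stem, ext = os.path.splitext(name)
    filename = name
    num = 1
    while not exists(filename):
        if num >= max_suffix:
            raise ValueError("No file %r found" % name)
        # Insert the numeric suffix between the stem and the extension.
        filename = "%s-%d%s" % (stem, num, ext)
        num += 1
    return filename
```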
def __version_capture_slp(self, pkg_id, version_binary, version_display, display_name):
    if self.__pkg_obj and hasattr(self.__pkg_obj, 'version_capture'):
        version_str, src, version_user_str = \
            self.__pkg_obj.version_capture(pkg_id, version_binary, version_display, display_name)
        if src != 'use-default' and version_str and src:
            return version_str, src, version_user_str
        elif src != 'use-default':
            raise ValueError(
                'version capture within object \'{0}\' failed '
                'for pkg id: \'{1}\' it returned \'{2}\' \'{3}\' '
                '\'{4}\''.format(six.text_type(self.__pkg_obj), pkg_id,
                                 version_str, src, version_user_str))
    if version_display and re.match(r'\d+', version_display, flags=re.IGNORECASE + re.UNICODE) is not None:
        version_str = version_display
        src = 'display-version'
    elif version_binary and re.match(r'\d+', version_binary, flags=re.IGNORECASE + re.UNICODE) is not None:
        version_str = version_binary
        src = 'version-binary'
    else:
        src = 'none'
        version_str = '0.0.0.0.0'
    return version_str, src, version_str
This returns the version and where the version string came from, based on instructions under ``version_capture``; if ``version_capture`` is missing, it defaults to the value of display-version.

Args:
    pkg_id (str): Id of the package/component.
    version_binary (str): Version string taken from the binary.
    version_display (str): Version string as displayed.
    display_name (str): Display name of the software/component.

Returns:
    tuple: ``(version_str, src, version_user_str)`` -- the captured version string and the source it came from.
def log(self, level, *msg_elements):
    self.report.log(self._threadlocal.current_workunit, level, *msg_elements)
Log a message against the current workunit.
def call_later(fn, args=(), delay=0.001):
    thread = _Thread(target=lambda: (_time.sleep(delay), fn(*args)))
    thread.start()
Calls the provided function in a new thread after waiting some time. Useful for giving the system some time to process an event, without blocking the current execution flow.
def overall():
    return ZeroOrMore(Grammar.comment) + Dict(ZeroOrMore(Group(
        Grammar._section + ZeroOrMore(Group(Grammar.line)))))
The overall grammar for pulling apart the main input files.
def _add_slide_footer(self, slide_no):
    if self.builder.config.slide_footer:
        self.body.append(
            '\n<div class="slide-footer">%s</div>\n' % (
                self.builder.config.slide_footer,
            ),
        )
Add the slide footer to the output if enabled.
def createTable(dbconn, pd):
    cols = ('%s %s' % (defn.name, getTypename(defn)) for defn in pd.fields)
    sql = 'CREATE TABLE IF NOT EXISTS %s (%s)' % (pd.name, ', '.join(cols))
    dbconn.execute(sql)
    dbconn.commit()
Creates a database table for the given PacketDefinition.
def run(argv=None):
    cli = InfrascopeCLI()
    return cli.run(sys.argv[1:] if argv is None else argv)
Main CLI entry point.
def train_epoch(self, epoch_info, source: 'vel.api.Source', interactive=True):
    self.train()
    if interactive:
        iterator = tqdm.tqdm(source.train_loader(), desc="Training", unit="iter", file=sys.stdout)
    else:
        iterator = source.train_loader()
    for batch_idx, (data, target) in enumerate(iterator):
        batch_info = BatchInfo(epoch_info, batch_idx)
        batch_info.on_batch_begin()
        self.train_batch(batch_info, data, target)
        batch_info.on_batch_end()
        if interactive:
            # Only the tqdm wrapper supports progress-bar postfixes; the bare
            # loader has no set_postfix().
            iterator.set_postfix(loss=epoch_info.result_accumulator.intermediate_value('loss'))
Run a single training epoch
def load_tabs(self):
    tab_group = self.get_tabs(self.request, **self.kwargs)
    tabs = tab_group.get_tabs()
    for tab in [t for t in tabs if issubclass(t.__class__, TableTab)]:
        self.table_classes.extend(tab.table_classes)
        for table in tab._tables.values():
            self._table_dict[table._meta.name] = {'table': table, 'tab': tab}
Loads the tab group. It compiles the table instances for each table attached to any :class:`horizon.tabs.TableTab` instances on the tab group. This step is necessary before processing any tab or table actions.
def _to_full_dict(xmltree):
    xmldict = {}
    for attrName, attrValue in xmltree.attrib.items():
        xmldict[attrName] = attrValue
    if not len(xmltree):  # Element.getchildren() is deprecated; elements are sized
        if not xmldict:
            return xmltree.text
        elif xmltree.text:
            xmldict[_conv_name(xmltree.tag)] = xmltree.text
    for item in xmltree:
        name = _conv_name(item.tag)
        if name not in xmldict:
            xmldict[name] = _to_full_dict(item)
        else:
            if not isinstance(xmldict[name], list):
                xmldict[name] = [xmldict[name]]
            xmldict[name].append(_to_full_dict(item))
    return xmldict
Returns the full XML dictionary including attributes.
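A simplified, runnable sketch of the same recursion using the stdlib ElementTree (the original's `_conv_name` tag-mangling helper is omitted, an assumption for brevity):

```python
import xml.etree.ElementTree as ET

def to_full_dict(elem):
    """Convert an Element to a dict, keeping attributes; leaves become text."""
    d = dict(elem.attrib)
    children = list(elem)
    if not children:
        # A leaf with no attributes collapses to its text.
        if not d:
            return elem.text
        if elem.text and elem.text.strip():
            d[elem.tag] = elem.text
        return d
    for child in children:
        value = to_full_dict(child)
        if child.tag not in d:
            d[child.tag] = value
        else:
            # Repeated tags accumulate into a list.
            if not isinstance(d[child.tag], list):
                d[child.tag] = [d[child.tag]]
            d[child.tag].append(value)
    return d
```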
def is_namespace_preordered(self, namespace_id_hash):
    return self.get_namespace_preorder(namespace_id_hash) is not None
Given a namespace preorder hash, determine if it is preordered at the current block.
def color_pack2rgb(packed):
    r = packed & 255
    g = (packed & (255 << 8)) >> 8
    b = (packed & (255 << 16)) >> 16
    return r, g, b
Returns an (r, g, b) tuple from a packed wx.ColourGetRGB value.
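A self-contained sketch of the unpacking, with an inverse packing helper added for illustration (`color_rgb2pack` is an assumption, not part of the original API):

```python
def color_pack2rgb(packed):
    """Unpack a 0xBBGGRR integer into an (r, g, b) tuple."""
    r = packed & 0xFF
    g = (packed >> 8) & 0xFF
    b = (packed >> 16) & 0xFF
    return r, g, b

def color_rgb2pack(r, g, b):
    """Inverse: pack (r, g, b) back into the wx-style integer."""
    return r | (g << 8) | (b << 16)
```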
def spread(iterable):
    if len(iterable) == 1:
        return 0
    iterable = iterable.copy()
    iterable.sort()
    max_diff = max(abs(i - j) for (i, j) in zip(iterable[1:], iterable[:-1]))
    return max_diff
Returns the maximal spread of a sorted list of numbers.

Parameters
----------
iterable
    A list of numbers.

Returns
-------
max_diff
    The maximal difference when the iterable is sorted.

Examples
--------
>>> spread([1, 11, 13, 15])
10
>>> spread([1, 15, 11, 13])
10
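The same idea as a non-mutating standalone sketch (using `sorted` instead of copy-and-sort, and treating inputs of fewer than two elements uniformly; an illustration, not the library code):

```python
def spread(numbers):
    """Largest gap between consecutive elements of the sorted input."""
    if len(numbers) < 2:
        return 0
    ordered = sorted(numbers)
    # Gaps between neighbours in sorted order; the max is the spread.
    return max(b - a for a, b in zip(ordered, ordered[1:]))
```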
def objects_to_record(self, preference=None):
    from ambry.orm.file import File

    raise NotImplementedError("Still uses obsolete file_info_map")

    # Unreachable legacy implementation, kept for reference:
    for file_const, (file_name, clz) in iteritems(file_info_map):
        f = self.file(file_const)
        pref = preference if preference else f.record.preference
        if pref in (File.PREFERENCE.MERGE, File.PREFERENCE.OBJECT):
            self._bundle.logger.debug(' otr {}'.format(file_const))
            f.objects_to_record()
Create file records from objects.
def clear(self):
    for item in self.traverseItems():
        if isinstance(item, XTreeWidgetItem):
            item.destroy()
    super(XTreeWidget, self).clear()
Removes all the items from this tree widget. This will go through and also destroy any XTreeWidgetItems prior to the model clearing its references.
def _weight_opacity(weight, weight_range):
    min_opacity = 0.8
    if np.isclose(weight, 0) and np.isclose(weight_range, 0):
        rel_weight = 0.0
    else:
        rel_weight = abs(weight) / weight_range
    return '{:.2f}'.format(min_opacity + (1 - min_opacity) * rel_weight)
Return opacity value for given weight as a string.
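A standalone variant using only the stdlib (swapping `np.isclose` for `math.isclose`; the exposed `min_opacity` parameter is an assumption for illustration):

```python
import math

def weight_opacity(weight, weight_range, min_opacity=0.8):
    """Map |weight| / weight_range onto [min_opacity, 1.0], as a string."""
    if math.isclose(weight, 0.0) and math.isclose(weight_range, 0.0):
        rel = 0.0  # avoid 0/0 when both are (close to) zero
    else:
        rel = abs(weight) / weight_range
    return "{:.2f}".format(min_opacity + (1 - min_opacity) * rel)
```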
def generate_order_template(self, quote_id, extra, quantity=1):
    if not isinstance(extra, dict):
        raise ValueError("extra is not formatted properly")
    container = self.get_order_container(quote_id)
    container['quantity'] = quantity
    for key in extra.keys():
        container[key] = extra[key]
    return container
Generate a complete order template.

:param int quote_id: ID of target quote
:param dictionary extra: Overrides for the defaults of SoftLayer_Container_Product_Order
:param int quantity: Number of items to order.
def directionaldiff(f, x0, vec, **options):
    x0 = np.asarray(x0)
    vec = np.asarray(vec)
    if x0.size != vec.size:
        raise ValueError('vec and x0 must be the same shape')
    vec = np.reshape(vec / np.linalg.norm(vec.ravel()), x0.shape)
    return Derivative(lambda t: f(x0 + t * vec), **options)(0)
Return directional derivative of a function of n variables

Parameters
----------
fun : callable
    analytical function to differentiate.
x0 : array
    vector location at which to differentiate fun. If x0 is an nxm array, then fun is assumed to be a function of n*m variables.
vec : array
    vector defining the line along which to take the derivative. It should be the same size as x0, but need not be a vector of unit length.
**options :
    optional arguments to pass on to Derivative.

Returns
-------
dder : scalar
    estimate of the first derivative of fun in the specified direction.

Examples
--------
At the global minimizer (1, 1) of the Rosenbrock function, compute the directional derivative in the direction [1, 2]:

>>> import numpy as np
>>> import numdifftools as nd
>>> vec = np.r_[1, 2]
>>> rosen = lambda x: (1-x[0])**2 + 105*(x[1]-x[0]**2)**2
>>> dd, info = nd.directionaldiff(rosen, [1, 1], vec, full_output=True)
>>> np.allclose(dd, 0)
True
>>> np.abs(info.error_estimate) < 1e-14
True

See also
--------
Derivative, Gradient
def has_permission(self, request):
    if not self.object and not self.permission:
        return True
    if not self.permission:
        return request.user.has_perm(
            '{}_{}'.format(self.model_permission,
                           self.object.__class__.__name__.lower()),
            self.object)
    return request.user.has_perm(self.permission)
Check if user has permission
def bacpypes_debugging(obj):
    logger = logging.getLogger(obj.__module__ + '.' + obj.__name__)
    obj._logger = logger
    obj._debug = logger.debug
    obj._info = logger.info
    obj._warning = logger.warning
    obj._error = logger.error
    obj._exception = logger.exception
    obj._fatal = logger.fatal
    return obj
Function for attaching a debugging logger to a class or function.
def handle_starttag(self, tag, attrs):
    if tag != 'a':
        return
    for attr in attrs:
        if attr[0] == 'href':
            url = urllib.unquote(attr[1])
            self.active_url = url.rstrip('/').split('/')[-1]
            return
Callback for when a tag gets opened.
def set_dimmer(self, dimmer, transition_time=None):
    values = {
        ATTR_LIGHT_DIMMER: dimmer,
    }
    if transition_time is not None:
        values[ATTR_TRANSITION_TIME] = transition_time
    return self.set_values(values)
Set dimmer value of a group.

dimmer: Integer between 0..255
transition_time: Integer representing tenth of a second (default None)
def plotants(vis, figfile):
    from .scripting import CasapyScript

    script = os.path.join(os.path.dirname(__file__), 'cscript_plotants.py')
    with CasapyScript(script, vis=vis, figfile=figfile) as cs:
        pass
Plot the physical layout of the antennas described in the MS.

vis (str)
    Path to the input dataset
figfile (str)
    Path to the output image file.

The output image format will be inferred from the extension of *figfile*. Example::

    from pwkit.environments.casa import tasks
    tasks.plotants('dataset.ms', 'antennas.png')
def getKey(self, key):
    data = self.getDictionary()
    if key in data:
        return data[key]
    return None
Retrieves the value for the specified dictionary key
def _update_shared_response(X, S, W, features):
    subjs = len(X)
    TRs = X[0].shape[1]
    R = np.zeros((features, TRs))
    for i in range(subjs):
        R += W[i].T.dot(X[i] - S[i])
    R /= subjs
    return R
Update the shared response `R`.

Parameters
----------
X : list of 2D arrays, element i has shape=[voxels_i, timepoints]
    Each element in the list contains the fMRI data for alignment of one subject.
S : list of array, element i has shape=[voxels_i, timepoints]
    The individual component :math:`S_i` for each subject.
W : list of array, element i has shape=[voxels_i, features]
    The orthogonal transforms (mappings) :math:`W_i` for each subject.
features : int
    The number of features in the model.

Returns
-------
R : array, shape=[features, timepoints]
    The updated shared response.
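A standalone sketch of the update step; with one subject, zero individual component, and an orthonormal `W`, the update should recover the shared response exactly (the synthetic shapes below are assumptions for illustration):

```python
import numpy as np

def update_shared_response(X, S, W, features):
    """Average W_i^T (X_i - S_i) across subjects to refresh R."""
    subjects = len(X)
    timepoints = X[0].shape[1]
    R = np.zeros((features, timepoints))
    for i in range(subjects):
        # Project each subject's residual into the shared feature space.
        R += W[i].T.dot(X[i] - S[i])
    return R / subjects
```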
def detect_config(self):
    default_files = ("config.json", "config.yml")
    for file_ in default_files:
        if os.path.exists(file_):
            return file_
Check the current working directory for configuration files.
def GetUcsPropertyMetaAttributeList(classId):
    # Wrap keys() in list() so remove() works under Python 3 as well.
    if classId in _ManagedObjectMeta:
        attrList = list(_ManagedObjectMeta[classId].keys())
        attrList.remove("Meta")
        return attrList
    if classId in _MethodFactoryMeta:
        attrList = list(_MethodFactoryMeta[classId].keys())
        attrList.remove("Meta")
        return attrList
    nci = UcsUtils.FindClassIdInMoMetaIgnoreCase(classId)
    if nci is not None:
        attrList = list(_ManagedObjectMeta[nci].keys())
        attrList.remove("Meta")
        return attrList
    nci = UcsUtils.FindClassIdInMethodMetaIgnoreCase(classId)
    if nci is not None:
        attrList = list(_MethodFactoryMeta[nci].keys())
        attrList.remove("Meta")
        return attrList
    return None
Returns the list of meta attribute names for the given class ID, or None if the class is unknown.
def _getStore(self):
    storeDir = self.store.newDirectory(self.indexDirectory)
    if not storeDir.exists():
        store = Store(storeDir)
        self._initStore(store)
        return store
    return Store(storeDir)
Get the Store used for FTS. If it does not exist, it is created and initialised.
def instruction_ADD8(self, opcode, m, register):
    assert register.WIDTH == 8
    old = register.value
    r = old + m
    register.set(r)
    self.clear_HNZVC()
    self.update_HNZVC_8(old, m, r)
Adds the memory byte into an 8-bit accumulator.

source code forms: ADDA P; ADDB P

CC bits "HNZVC": aaaaa
def create_html_from_fragment(tag):
    # Explicit checks instead of the original's assert-then-reraise pattern,
    # which would be skipped entirely under `python -O`.
    if not isinstance(tag, bs4.element.Tag):
        raise TypeError
    if tag.find_all('body'):
        raise ValueError
    soup = BeautifulSoup('<html><head></head><body></body></html>', 'html.parser')
    soup.body.append(tag)
    return soup
Creates a full html tree from a fragment. Assumes that the tag should be wrapped in a body and currently is not.

Args:
    tag: a bs4.element.Tag

Returns:
    bs4.element.Tag: A bs4 tag representing a full html document
def _handle_component(sourcekey, comp_dict):
    if comp_dict.comp_key is None:
        fullkey = sourcekey
    else:
        fullkey = "%s_%s" % (sourcekey, comp_dict.comp_key)
    srcdict = make_sources(fullkey, comp_dict)
    if comp_dict.model_type == 'IsoSource':
        print("Writing xml for %s to %s: %s %s" % (fullkey, comp_dict.srcmdl_name,
                                                   comp_dict.model_type,
                                                   comp_dict.Spectral_Filename))
    elif comp_dict.model_type == 'MapCubeSource':
        print("Writing xml for %s to %s: %s %s" % (fullkey, comp_dict.srcmdl_name,
                                                   comp_dict.model_type,
                                                   comp_dict.Spatial_Filename))
    SrcmapsDiffuse_SG._write_xml(comp_dict.srcmdl_name, srcdict.values())
Make the source objects and write the xml for a component
def entry_archive_year_url():
    entry = Entry.objects.filter(published=True).latest()
    arg_list = [entry.published_on.strftime("%Y")]
    return reverse('blargg:entry_archive_year', args=arg_list)
Returns the ``entry_archive_year`` URL for the latest ``Entry``.
def id_source(source, full=False):
    if source not in source_ids:
        return ''
    if full:
        return source_ids[source][1]
    return source_ids[source][0]
Returns the name of a website-scrapping function.
def init_running_properties(self):
    for prop, entry in list(self.__class__.running_properties.items()):
        val = entry.default
        # Mutable defaults are copied so instances do not share state.
        setattr(self, prop, copy(val) if isinstance(val, (set, list, dict)) else val)
Initialize the running_properties. Each instance has its own copy of each property.

:return: None
def _scope_lookup(self, node, name, offset=0):
    try:
        stmts = node._filter_stmts(self.locals[name], self, offset)
    except KeyError:
        stmts = ()
    if stmts:
        return self, stmts
    if self.parent:
        pscope = self.parent.scope()
        if not pscope.is_function:
            pscope = pscope.root()
        return pscope.scope_lookup(node, name)
    return builtin_lookup(name)
XXX method for interfacing the scope lookup
def merge_configurations(configurations):
    configuration = {}
    for c in configurations:
        for k, v in c.items():  # .iteritems() in the original is Python 2 only
            if k in configuration:
                raise ValueError('%s already in a previous base configuration' % k)
            configuration[k] = v
    return configuration
Merge configurations together and raise an error if a conflict is detected.

:param configurations: configurations to merge together
:type configurations: list of :attr:`~pyextdirect.configuration.Base.configuration` dicts
:return: merged configurations as a single one
:rtype: dict
def find_parents(self):
    for i in range(len(self.vertices)):
        self.vertices[i].parents = []
    for i in range(len(self.vertices)):
        for child in self.vertices[i].children:
            if i not in self.vertices[child].parents:
                self.vertices[child].parents.append(i)
Take a tree and set the parents according to the children.

Takes a tree structure which lists the children of each vertex, computes the parents for each vertex, and places them in each vertex's ``parents`` list.
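The child-to-parent inversion can be sketched with a minimal stand-in vertex class (the `Vertex` class here is an assumption; the original operates on its own vertex objects):

```python
class Vertex:
    """Minimal stand-in for the tree's vertex objects."""
    def __init__(self, children):
        self.children = children  # indices of child vertices
        self.parents = []

def find_parents(vertices):
    """Derive each vertex's parents from the children lists."""
    for v in vertices:
        v.parents = []
    for i, v in enumerate(vertices):
        for child in v.children:
            if i not in vertices[child].parents:
                vertices[child].parents.append(i)
```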
def all(self):
    "execute query, get all list of lists"
    query, inputs = self._toedn()
    return self.db.q(query,
                     inputs=inputs,
                     limit=self._limit,
                     offset=self._offset,
                     history=self._history)
Execute the query and return all results as a list of lists.
def dirs(self, paths, access=None):
    self.failures = [path for path in paths
                     if not isvalid(path, access, filetype='dir')]
    return not self.failures
Verify list of directories
def patch_python_logging_handlers():
    logging.StreamHandler = StreamHandler
    logging.FileHandler = FileHandler
    logging.handlers.SysLogHandler = SysLogHandler
    logging.handlers.WatchedFileHandler = WatchedFileHandler
    logging.handlers.RotatingFileHandler = RotatingFileHandler
    if sys.version_info >= (3, 2):
        logging.handlers.QueueHandler = QueueHandler
Patch the python logging handlers with our mixed-in classes.
def create_source(
    self,
    parent,
    source,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    if "create_source" not in self._inner_api_calls:
        self._inner_api_calls[
            "create_source"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.create_source,
            default_retry=self._method_configs["CreateSource"].retry,
            default_timeout=self._method_configs["CreateSource"].timeout,
            client_info=self._client_info,
        )

    request = securitycenter_service_pb2.CreateSourceRequest(
        parent=parent, source=source
    )
    if metadata is None:
        metadata = []
    metadata = list(metadata)
    try:
        routing_header = [("parent", parent)]
    except AttributeError:
        pass
    else:
        routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
            routing_header
        )
        metadata.append(routing_metadata)

    return self._inner_api_calls["create_source"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
Creates a source.

Example:
    >>> from google.cloud import securitycenter_v1
    >>>
    >>> client = securitycenter_v1.SecurityCenterClient()
    >>>
    >>> parent = client.organization_path('[ORGANIZATION]')
    >>>
    >>> # TODO: Initialize `source`:
    >>> source = {}
    >>>
    >>> response = client.create_source(parent, source)

Args:
    parent (str): Resource name of the new source's parent. Its format should be "organizations/[organization\_id]".
    source (Union[dict, ~google.cloud.securitycenter_v1.types.Source]): The Source being created, only the display\_name and description will be used. All other fields will be ignored. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.securitycenter_v1.types.Source`
    retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried.
    timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt.
    metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method.

Returns:
    A :class:`~google.cloud.securitycenter_v1.types.Source` instance.

Raises:
    google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason.
    google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed.
    ValueError: If the parameters are invalid.
def Softmax(a):
    e = np.exp(a)
    # Note the trailing comma: the op returns a 1-tuple of outputs.
    return np.divide(e, np.sum(e, axis=-1, keepdims=True)),
Softmax op.
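The op above exponentiates directly, which can overflow for large inputs. A hedged standalone variant with the standard max-subtraction stability trick (mathematically equivalent, and returning the array rather than the op's 1-tuple):

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax along the last axis."""
    # Subtracting the row max does not change the result but avoids overflow.
    shifted = a - np.max(a, axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)
```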
def _sigma_inel(self, Tp):
    L = np.log(Tp / self._Tth)
    sigma = 30.7 - 0.96 * L + 0.18 * L ** 2
    sigma *= (1 - (self._Tth / Tp) ** 1.9) ** 3
    return sigma * 1e-27
Inelastic cross-section for p-p interaction. KATV14 Eq. 1

Parameters
----------
Tp : float
    Kinetic energy of proton (i.e. Ep - m_p*c**2) [GeV]

Returns
-------
sigma_inel : float
    Inelastic cross-section for p-p interaction [cm2].
def _export_graph(self): for brain_name in self.trainers.keys(): self.trainers[brain_name].export_model()
Exports latest saved models to .nn format for Unity embedding.
def generate_signature_class(cls): return type("%sSigs" % cls.__name__, (Base,), {'__tablename__': "%s_sigs" % cls.__tablename__, 'id': sa.Column(sa.Integer, sa.Sequence('%s_id_seq' % cls.__tablename__), primary_key=True, doc="primary key"), 'data': sa.Column(sa.Text(), nullable=False, doc="The signed data"), '%s_id' % cls.__tablename__: sa.Column(sa.Integer, sa.ForeignKey("%s.id" % cls.__tablename__), nullable=False)})
Generate a declarative model for storing signatures related to the given cls parameter. :param class cls: The declarative model to generate a signature class for. :return: The signature class, as a declarative derived from Base.
def disable(self): self.ticker_text_label.hide() if self.current_observed_sm_m: self.stop_sm_m_observation(self.current_observed_sm_m)
Stop observing the currently observed state machine (if any) and hide the widget
def inventory(uri, format): base_uri = dtoolcore.utils.sanitise_uri(uri) info = _base_uri_info(base_uri) if format is None: _cmd_line_report(info) elif format == "csv": _csv_tsv_report(info, ",") elif format == "tsv": _csv_tsv_report(info, "\t") elif format == "html": _html_report(info)
Generate an inventory of datasets in a base URI.
def dispose_qualification_type(self, qualification_id): return self._is_ok( self.mturk.delete_qualification_type(QualificationTypeId=qualification_id) )
Remove a qualification type we created
def from_dict(self, document): identifier = str(document['_id']) active = document['active'] directory = os.path.join(self.directory, identifier) timestamp = datetime.datetime.strptime(document['timestamp'], '%Y-%m-%dT%H:%M:%S.%f') properties = document['properties'] return FunctionalDataHandle(identifier, properties, directory, timestamp=timestamp, is_active=active)
Create functional data object from JSON document retrieved from database. Parameters ---------- document : JSON Json document in database Returns ------- FunctionalDataHandle Handle for functional data object
def max_ver(ver1, ver2):
    cmp_res = compare(ver1, ver2)
    if cmp_res >= 0:
        return ver1
    return ver2
Returns the greater version of two versions :param ver1: version string 1 :param ver2: version string 2 :return: the greater version of the two :rtype: :class:`VersionInfo` >>> import semver >>> semver.max_ver("1.0.0", "2.0.0") '2.0.0'
def get_rotated(self, angle): ca = math.cos(angle) sa = math.sin(angle) return Point(self.x*ca-self.y*sa, self.x*sa+self.y*ca)
Rotates this vector through the given anti-clockwise angle in radians.
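A minimal standalone sketch of the same rotation formula (the `rotate` helper here is illustrative, not part of the original class):

```python
import math

def rotate(x, y, angle):
    # Standard 2-D rotation matrix applied to (x, y),
    # anti-clockwise by `angle` radians.
    ca, sa = math.cos(angle), math.sin(angle)
    return (x * ca - y * sa, x * sa + y * ca)

# Rotating (1, 0) by 90 degrees anti-clockwise should land on (0, 1).
rx, ry = rotate(1.0, 0.0, math.pi / 2)
```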
def _getDetails(self, uriRef, associations_details): associationDetail = {} for detail in associations_details: if detail['subject'] == uriRef: associationDetail[detail['predicate']] = detail['object'] associationDetail['id'] = uriRef return associationDetail
Given a uriRef, return a dict of all the details for that ref, using the uriRef as the 'id' of the dict
def reload(self): self.buf = self.fetch(buf = self.buf) return self
This object reads the header when it is constructed, which means it
might be out-of-date if the file is updated through some other handle.
It's rarely required to call this method, and needing it is a symptom
of fragile code. However, if you have multiple handles to the same
header, it might be necessary. Consider the following example::

    >>> x = f.header[10]
    >>> y = f.header[10]
    >>> x[1, 5]
    { 1: 5, 5: 10 }
    >>> y[1, 5]
    { 1: 5, 5: 10 }
    >>> x[1] = 6
    >>> x[1], y[1] # write to x[1] is invisible to y
    6, 5
    >>> y.reload()
    >>> x[1], y[1]
    6, 6
    >>> x[1] = 5
    >>> x[1], y[1]
    5, 6
    >>> y[5] = 1
    >>> x.reload()
    >>> x[1], y[1, 5] # the write to x[1] is lost
    6, { 1: 6, 5: 1 }

In segyio, header writes are atomic, and the write to disk writes the
full cache. If this cache is out of date, some writes might get lost,
even though the updates are compatible. The fix is either to use
``reload`` and maintain buffer consistency, or simply not to let header
handles alias and overlap in lifetime.

Notes
-----
.. versionadded:: 1.6
def prepare_model_data(self, packages, linked, pip=None, private_packages=None): logger.debug('') return self._prepare_model_data(packages, linked, pip=pip, private_packages=private_packages)
Prepare downloaded package info along with pip packages info.
def cutoff(s, length=120):
    if length < 5:
        raise ValueError('length must be >= 5')
    if len(s) <= length:
        return s
    # Use floor division so the slice indices are ints on Python 3.
    i = (length - 2) // 2
    j = (length - 3) // 2
    return s[:i] + '...' + s[-j:]
Truncates a given string if it is longer than the given length, keeping its head and tail and replacing the middle with '...'.
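A standalone copy of the function (written with floor division so the slice indices are ints on Python 3) shows the behaviour:

```python
def cutoff(s, length=120):
    # Keep the head and tail of the string, joined by '...'.
    if length < 5:
        raise ValueError('length must be >= 5')
    if len(s) <= length:
        return s
    i = (length - 2) // 2
    j = (length - 3) // 2
    return s[:i] + '...' + s[-j:]

short = cutoff('hello', 10)     # fits within length: returned unchanged
clipped = cutoff('a' * 50, 11)  # truncated down to exactly 11 chars
```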
def get_output(src):
    output = ''
    with open(src.path, 'r') as f:
        lines = f.readlines()
    for line in lines:
        m = re.match(config.import_regex, line)
        if m:
            include_path = os.path.abspath(os.path.join(src.dir, m.group('script')))
            if include_path not in config.sources:
                script = Script(include_path)
                script.parents.append(src)
                config.sources[script.path] = script
            include_file = config.sources[include_path]
            if include_file not in config.stack or m.group('command') == 'import':
                config.stack.append(include_file)
                output += get_output(include_file)
        else:
            output += line
    return output
Parse the lines of a script, recursively expanding import/include commands into the contents of the referenced files
def items_at(self, depth): if depth < 1: yield ROOT, self elif depth == 1: for key, value in self.items(): yield key, value else: for dict_tree in self.values(): for key, value in dict_tree.items_at(depth - 1): yield key, value
Iterate items at specified depth.
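The recursion can be exercised with plain nested dicts. This hypothetical standalone `items_at` mirrors the method above for the `depth >= 1` cases (the `depth < 1` ROOT case is omitted for brevity):

```python
def items_at(tree, depth):
    # Yield (key, value) pairs found exactly `depth` levels down.
    if depth == 1:
        for key, value in tree.items():
            yield key, value
    else:
        for subtree in tree.values():
            for key, value in items_at(subtree, depth - 1):
                yield key, value

tree = {'a': {'x': 1, 'y': 2}, 'b': {'z': 3}}
level1 = dict(items_at(tree, 1))  # the two subtrees themselves
level2 = dict(items_at(tree, 2))  # the leaves, two levels down
```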
def to_dict(self): return { 'body': self._body, 'method': self._method, 'properties': self._properties, 'channel': self._channel }
Message to Dictionary. :rtype: dict
def simplify(self, eps, max_dist_error, max_speed_error, topology_only=False): for segment in self.segments: segment.simplify(eps, max_dist_error, max_speed_error, topology_only) return self
In-place simplification of segments

Args:
    eps (float): Distance threshold for the simplification algorithm
    max_dist_error (float): Max distance error, in meters
    max_speed_error (float): Max speed error, in km/h
    topology_only: Boolean, optional. True to keep the topology,
        neglecting velocity and time accuracy (use common
        Ramer-Douglas-Peucker). False (default) to simplify segments
        keeping the velocity between points.
Returns:
    This track
def align_options(options): l = 0 for opt in options: if len(opt[0]) > l: l = len(opt[0]) s = [] for opt in options: s.append(' {0}{1} {2}'.format(opt[0], ' ' * (l - len(opt[0])), opt[1])) return '\n'.join(s)
Indents flags and aligns help texts.
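Run over a small option table, the function pads every flag to the width of the widest one so the help texts start in the same column (standalone copy for a runnable illustration):

```python
def align_options(options):
    # Width of the widest flag determines the padding column.
    l = 0
    for opt in options:
        if len(opt[0]) > l:
            l = len(opt[0])
    s = []
    for opt in options:
        s.append(' {0}{1} {2}'.format(opt[0], ' ' * (l - len(opt[0])), opt[1]))
    return '\n'.join(s)

text = align_options([('-v', 'verbose output'),
                      ('--help', 'show this message')])
# Both help texts begin at the same column.
```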
def get_default_plugin(cls): from importlib import import_module from django.conf import settings default_plugin = getattr(settings, 'ACCESS_DEFAULT_PLUGIN', "access.plugins.DjangoAccessPlugin") if default_plugin not in cls.default_plugins: logger.info("Creating a default plugin: %s", default_plugin) path = default_plugin.split('.') plugin_path = '.'.join(path[:-1]) plugin_name = path[-1] DefaultPlugin = getattr(import_module(plugin_path), plugin_name) cls.default_plugins[default_plugin] = DefaultPlugin() return cls.default_plugins[default_plugin]
Return a default plugin.
def ParseFileObject(self, parser_mediator, file_object): file_object.seek(0, os.SEEK_SET) header = file_object.read(2) if not self.BENCODE_RE.match(header): raise errors.UnableToParseFile('Not a valid Bencoded file.') file_object.seek(0, os.SEEK_SET) try: data_object = bencode.bdecode(file_object.read()) except (IOError, bencode.BTFailure) as exception: raise errors.UnableToParseFile( '[{0:s}] unable to parse file: {1:s} with error: {2!s}'.format( self.NAME, parser_mediator.GetDisplayName(), exception)) if not data_object: raise errors.UnableToParseFile( '[{0:s}] missing decoded data for file: {1:s}'.format( self.NAME, parser_mediator.GetDisplayName())) for plugin in self._plugins: try: plugin.UpdateChainAndProcess(parser_mediator, data=data_object) except errors.WrongBencodePlugin as exception: logger.debug('[{0:s}] wrong plugin: {1!s}'.format( self.NAME, exception))
Parses a bencoded file-like object. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. file_object (dfvfs.FileIO): a file-like object. Raises: UnableToParseFile: when the file cannot be parsed.
def _set_state(self, new_state, response=None): if self._closed: raise RuntimeError("message tracker is closed") if (self._state == MessageState.ABORTED or new_state == MessageState.IN_TRANSIT or (self._state == MessageState.ERROR and new_state == MessageState.DELIVERED_TO_SERVER) or (self._state == MessageState.ERROR and new_state == MessageState.ABORTED) or (self._state == MessageState.DELIVERED_TO_RECIPIENT and new_state == MessageState.DELIVERED_TO_SERVER) or (self._state == MessageState.SEEN_BY_RECIPIENT and new_state == MessageState.DELIVERED_TO_SERVER) or (self._state == MessageState.SEEN_BY_RECIPIENT and new_state == MessageState.DELIVERED_TO_RECIPIENT)): raise ValueError( "message tracker transition from {} to {} not allowed".format( self._state, new_state ) ) self._state = new_state self._response = response self.on_state_changed(self._state, self._response)
Set the state of the tracker.

:param new_state: The new state of the tracker.
:type new_state: :class:`~.MessageState` member
:param response: A stanza related to the new state.
:type response: :class:`~.StanzaBase` or :data:`None`
:raise ValueError: if a forbidden state transition is attempted.
:raise RuntimeError: if the tracker is closed.

The state of the tracker is set to the `new_state`. The :attr:`response`
is also overridden with the new value, no matter if the new or old value
is :data:`None` or not. The :meth:`on_state_changed` event is emitted.

The following transitions are forbidden and attempting to perform them
will raise :class:`ValueError`:

* any state -> :attr:`~.MessageState.IN_TRANSIT`
* :attr:`~.MessageState.DELIVERED_TO_RECIPIENT` -> :attr:`~.MessageState.DELIVERED_TO_SERVER`
* :attr:`~.MessageState.SEEN_BY_RECIPIENT` -> :attr:`~.MessageState.DELIVERED_TO_RECIPIENT`
* :attr:`~.MessageState.SEEN_BY_RECIPIENT` -> :attr:`~.MessageState.DELIVERED_TO_SERVER`
* :attr:`~.MessageState.ABORTED` -> any state
* :attr:`~.MessageState.ERROR` -> any state

If the tracker is already :meth:`close`\\ -d, :class:`RuntimeError` is
raised. This check happens *before* a test is made whether the
transition is valid.

This method is part of the "protected" interface.
def annotate_with_XMLNS(tree, prefix, URI): if not ET.iselement(tree): tree = tree.getroot() tree.attrib['xmlns:' + prefix] = URI iterator = tree.iter() next(iterator) for e in iterator: e.tag = prefix + ":" + e.tag
Annotates the provided DOM tree with XMLNS attributes and adds XMLNS prefixes to the tags of the tree nodes. :param tree: the input DOM tree :type tree: an ``xml.etree.ElementTree.ElementTree`` or ``xml.etree.ElementTree.Element`` object :param prefix: XMLNS prefix for tree nodes' tags :type prefix: str :param URI: the URI for the XMLNS definition file :type URI: str
def __get_container_path(self, host_path): libname = os.path.split(host_path)[1] return os.path.join(_container_lib_location, libname)
A simple helper function to determine the path of a host library inside the container :param host_path: The path of the library on the host :type host_path: str
def reset(self): if self.running: raise RuntimeError('paco: executor is still running') self.pool.clear() self.observer.clear() self.semaphore = asyncio.Semaphore(self.limit, loop=self.loop)
Resets the executor scheduler internal state.

Raises:
    RuntimeError: if the executor is still running.
def create( project: 'projects.Project', destination_directory, destination_filename: str = None ) -> file_io.FILE_WRITE_ENTRY: template_path = environ.paths.resources('web', 'project.html') with open(template_path, 'r') as f: dom = f.read() dom = dom.replace( '<!-- CAULDRON:EXPORT -->', templating.render_template( 'notebook-script-header.html', uuid=project.uuid, version=environ.version ) ) filename = ( destination_filename if destination_filename else '{}.html'.format(project.uuid) ) html_out_path = os.path.join(destination_directory, filename) return file_io.FILE_WRITE_ENTRY( path=html_out_path, contents=dom )
Creates a FILE_WRITE_ENTRY for the rendered HTML file for the given project that will be saved in the destination directory with the given filename. :param project: The project for which the rendered HTML file will be created :param destination_directory: The absolute path to the folder where the HTML file will be saved :param destination_filename: The name of the HTML file to be written in the destination directory. Defaults to the project uuid. :return: A FILE_WRITE_ENTRY for the project's HTML file output
def get_cache_key(request, page, lang, site_id, title): from cms.cache import _get_cache_key from cms.templatetags.cms_tags import _get_page_by_untyped_arg from cms.models import Page if not isinstance(page, Page): page = _get_page_by_untyped_arg(page, request, site_id) if not site_id: try: site_id = page.node.site_id except AttributeError: site_id = page.site_id if not title: return _get_cache_key('page_tags', page, '', site_id) + '_type:tags_list' else: return _get_cache_key('title_tags', page, lang, site_id) + '_type:tags_list'
Create the cache key for the current page and tag type
def project_activity(index, start, end): results = { "metrics": [OpenedIssues(index, start, end), ClosedIssues(index, start, end)] } return results
Compute the metrics for the project activity section of the enriched
github issues index.

Returns a dictionary containing a "metrics" key with the metrics for
this section.

:param index: index object
:param start: start date to get the data from
:param end: end date to get the data up to
:return: dictionary with the value of the metrics
def print_param_values(self_): self = self_.self for name,val in self.param.get_param_values(): print('%s.%s = %s' % (self.name,name,val))
Print the values of all this object's Parameters.
def allow_capability(self, ctx, ops): nops = 0 for op in ops: if op != LOGIN_OP: nops += 1 if nops == 0: raise ValueError('no non-login operations required in capability') _, used = self._allow_any(ctx, ops) squasher = _CaveatSquasher() for i, is_used in enumerate(used): if not is_used: continue for cond in self._conditions[i]: squasher.add(cond) return squasher.final()
Checks that the user is allowed to perform all the given operations. If not, a discharge error will be raised. If allow_capability succeeds, it returns a list of first party caveat conditions that must be applied to any macaroon granting capability to execute the operations. Those caveat conditions will not include any declarations contained in login macaroons - the caller must be careful not to mint a macaroon associated with the LOGIN_OP operation unless they add the expected declaration caveat too - in general, clients should not create capabilities that grant LOGIN_OP rights. The operations must include at least one non-LOGIN_OP operation.
def destroy(self): sdl2.SDL_GL_DeleteContext(self.context) sdl2.SDL_DestroyWindow(self.window) sdl2.SDL_Quit()
Gracefully close the window
def configInputQueue(): def captureInput(iqueue): while True: c = getch() if c == '\x03' or c == '\x04': log.debug("Break received (\\x{0:02X})".format(ord(c))) iqueue.put(c) break log.debug( "Input Char '{}' received".format( c if c != '\r' else '\\r')) iqueue.put(c) input_queue = queue.Queue() input_thread = threading.Thread(target=lambda: captureInput(input_queue)) input_thread.daemon = True input_thread.start() return input_queue, input_thread
Configure a queue for accepting input characters and return the queue together with its reader thread
def set_tcp_reconnect(socket, config): reconnect_options = { 'zmq_reconnect_ivl': 'RECONNECT_IVL', 'zmq_reconnect_ivl_max': 'RECONNECT_IVL_MAX', } for key, const in reconnect_options.items(): if key in config: attr = getattr(zmq, const, None) if attr: socket.setsockopt(attr, config[key])
Set a series of TCP reconnect options on the socket if and only if
1) they are specified explicitly in the config and
2) the version of pyzmq has been compiled with support.

Once our fedmsg bus grew to include many hundreds of endpoints, we
started noticing a *lot* of SYN-ACKs in the logs. By default, if an
endpoint is unavailable, zeromq will attempt to reconnect every 100ms
until it gets a connection. With this code, you can reconfigure that to
back off exponentially to some max delay (like 1000ms) to reduce
reconnect storm spam.

See the following - http://api.zeromq.org/3-2:zmq-setsockopt
def is_allowed(self, subject_id, action, resource_id, policy_sets=[]): body = { "action": action, "subjectIdentifier": subject_id, "resourceIdentifier": resource_id, } if policy_sets: body['policySetsEvaluationOrder'] = policy_sets uri = self.uri + '/v1/policy-evaluation' logging.debug("URI=" + str(uri)) logging.debug("BODY=" + str(body)) response = self.service._post(uri, body) if 'effect' in response: if response['effect'] in ['NOT_APPLICABLE', 'PERMIT']: return True return False
Evaluate a policy-set against a subject and resource.

Example:
    is_allowed('/user/j12y', 'GET', '/asset/12')
def optimization_loop(self, timeSeries, forecastingMethod, remainingParameters, currentParameterValues=None):
    if currentParameterValues is None:
        currentParameterValues = {}
    if 0 == len(remainingParameters):
        for parameter in currentParameterValues:
            forecastingMethod.set_parameter(parameter, currentParameterValues[parameter])
        forecast = timeSeries.apply(forecastingMethod)
        error = self._errorClass(**self._errorMeasureKWArgs)
        if not error.initialize(timeSeries, forecast):
            return []
        return [[error, dict(currentParameterValues)]]
    localParameter = remainingParameters[-1]
    localParameterName = localParameter[0]
    localParameterValues = localParameter[1]
    # Strip the parameter being evaluated once, outside the loop;
    # reassigning remainingParameters inside the loop would drop an
    # extra parameter on every iteration.
    reducedParameters = remainingParameters[:-1]
    results = []
    for value in localParameterValues:
        currentParameterValues[localParameterName] = value
        results += self.optimization_loop(timeSeries, forecastingMethod, reducedParameters, currentParameterValues)
    return results
The optimization loop. This function is called recursively, until all parameter values were evaluated. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param BaseForecastingMethod forecastingMethod: ForecastingMethod that is used to optimize the parameters. :param list remainingParameters: List containing all parameters with their corresponding values that still need to be evaluated. When this list is empty, the most inner optimization loop is reached. :param dictionary currentParameterValues: The currently evaluated forecast parameter combination. :return: Returns a list containing a BaseErrorMeasure instance as defined in :py:meth:`BaseOptimizationMethod.__init__` and the forecastingMethods parameter. :rtype: list
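Stripped of the forecasting machinery, the recursion above enumerates the Cartesian product of all parameter values, scoring each combination. A hedged sketch of the same idea with a plain error function (every name here is illustrative, not part of the original API):

```python
import itertools

def grid_search(param_grid, score):
    # Evaluate `score` on every combination of parameter values --
    # exactly what the recursion enumerates one parameter at a time.
    names = list(param_grid)
    results = []
    for combo in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        results.append((score(params), params))
    return results

# Toy "error measure": distance of (alpha, beta) from (0.3, 0.7).
grid = {'alpha': [0.1, 0.3, 0.5], 'beta': [0.5, 0.7]}
results = grid_search(
    grid, lambda p: abs(p['alpha'] - 0.3) + abs(p['beta'] - 0.7))
best_error, best_params = min(results, key=lambda r: r[0])
```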