code (string · lengths 51–2.38k)
docstring (string · lengths 4–15.2k)
def merge_strings_files(old_strings_file, new_strings_file):
    old_localizable_dict = generate_localization_key_to_entry_dictionary_from_file(old_strings_file)
    output_file_elements = []
    f = open_strings_file(new_strings_file, "r+")
    for header_comment, comments, key, value in extract_header_comment_key_value_tuples_from_file(f):
        if len(header_comment) > 0:
            output_file_elements.append(Comment(header_comment))
        localize_value = value
        if key in old_localizable_dict:
            localize_value = old_localizable_dict[key].value
        output_file_elements.append(LocalizationEntry(comments, key, localize_value))
    f.close()
    write_file_elements_to_strings_file(old_strings_file, output_file_elements)
Merges the old strings file with the new one.

Args:
    old_strings_file (str): The path to the old strings file (previously produced, and possibly altered).
    new_strings_file (str): The path to the new strings file (newly produced).
def msg(self, level, s, *args):
    if s and level <= self.debug:
        print("%s%s %s" % (" " * self.indent, s, ' '.join(map(repr, args))))
Print a debug message with the given level
def get_smart_task(self, task_id):
    def process_result(result):
        return SmartTask(self, result)
    return Command('get', [ROOT_SMART_TASKS, task_id],
                   process_result=process_result)
Return the specified smart task. Returns a Command.
async def add_shade_to_scene(self, shade_id, scene_id, position=None):
    if position is None:
        _shade = await self.get_shade(shade_id)
        position = await _shade.get_current_position()
    await SceneMembers(self.request).create_scene_member(
        position, scene_id, shade_id)
Add a shade to a scene.
def cert_from_instance(instance):
    if instance.signature:
        if instance.signature.key_info:
            return cert_from_key_info(instance.signature.key_info,
                                      ignore_age=True)
    return []
Find certificates that are part of an instance.

:param instance: An instance
:return: possibly empty list of certificates
def get_conv_out_grad(net, image, class_id=None, conv_layer_name=None):
    return _get_grad(net, image, class_id, conv_layer_name, image_grad=False)
Get the output and gradients of output of a convolutional layer.

Parameters
----------
net: Block
    Network to use for visualization.
image: NDArray
    Preprocessed image to use for visualization.
class_id: int
    Category ID this image belongs to. If not provided, network's prediction will be used.
conv_layer_name: str
    Name of the convolutional layer whose output and output's gradients need to be captured.
def config_conf(obj):
    "Extracts the configuration of the underlying ConfigParser from obj"
    cfg = {}
    for name in dir(obj):
        if name in CONFIG_PARSER_CFG:
            cfg[name] = getattr(obj, name)
    return cfg
Extracts the configuration of the underlying ConfigParser from obj
def _create_event(self, alert_type, msg_title, msg, server, tags=None):
    msg_title = 'Couchbase {}: {}'.format(server, msg_title)
    msg = 'Couchbase instance {} {}'.format(server, msg)
    return {
        'timestamp': int(time.time()),
        'event_type': 'couchbase_rebalance',
        'msg_text': msg,
        'msg_title': msg_title,
        'alert_type': alert_type,
        'source_type_name': self.SOURCE_TYPE_NAME,
        'aggregation_key': server,
        'tags': tags,
    }
Create an event object
def __parameter_enum(self, final_subfield):
    if isinstance(final_subfield, messages.EnumField):
        enum_descriptor = {}
        for enum_value in final_subfield.type.to_dict().keys():
            enum_descriptor[enum_value] = {'backendValue': enum_value}
        return enum_descriptor
Returns enum descriptor of final subfield if it is an enum.

An enum descriptor is a dictionary with keys as the names from the enum and each value is a dictionary with a single key "backendValue" and value equal to the same enum name used to store it in the descriptor. The key "description" can also be used next to "backendValue", but protorpc Enum classes have no way of supporting a description for each value.

Args:
    final_subfield: A simple field from the end of a subfield list.

Returns:
    The enum descriptor for the field, if it's an enum descriptor, else returns None.
def tags_getListUserPopular(user_id='', count=''):
    method = 'flickr.tags.getListUserPopular'
    auth = user_id == ''
    data = _doget(method, auth=auth, user_id=user_id)
    result = {}
    if isinstance(data.rsp.tags.tag, list):
        for tag in data.rsp.tags.tag:
            result[tag.text] = tag.count
    else:
        result[data.rsp.tags.tag.text] = data.rsp.tags.tag.count
    return result
Gets the popular tags for a user in dictionary form tag=>count
def with_source(self, lease):
    super().with_source(lease)
    self.offset = lease.offset
    self.sequence_number = lease.sequence_number
Init Azure Blob Lease from existing.
def waitForResponse(self, timeOut=None):
    self.__evt.wait(timeOut)
    if self.waiting():
        raise Timeout()
    else:
        if self.response["error"]:
            raise Exception(self.response["error"])
        else:
            return self.response["result"]
Blocks until the response arrives or the timeout is reached.
def _get_snmpv2c(self, oid):
    snmp_target = (self.hostname, self.snmp_port)
    cmd_gen = cmdgen.CommandGenerator()
    (error_detected, error_status, error_index, snmp_data) = cmd_gen.getCmd(
        cmdgen.CommunityData(self.community),
        cmdgen.UdpTransportTarget(snmp_target, timeout=1.5, retries=2),
        oid,
        lookupNames=True,
        lookupValues=True,
    )
    if not error_detected and snmp_data[0][1]:
        return text_type(snmp_data[0][1])
    return ""
Try to send an SNMP GET operation using SNMPv2 for the specified OID.

Parameters
----------
oid : str
    The SNMP OID that you want to get.

Returns
-------
str
    The value retrieved for the OID, as a string.
def model_ext_functions(vk, model):
    model['ext_functions'] = {'instance': {}, 'device': {}}
    alias = {v: k for k, v in model['alias'].items()}
    for extension in get_extensions_filtered(vk):
        for req in extension['require']:
            if not req.get('command'):
                continue
            ext_type = extension['@type']
            for x in req['command']:
                name = x['@name']
                if name in alias:
                    model['ext_functions'][ext_type][name] = alias[name]
                else:
                    model['ext_functions'][ext_type][name] = name
Fill the model with extension functions
def setup():
    install_requirements = ["attrdict"]
    if sys.version_info[:2] < (3, 4):
        install_requirements.append("pathlib")
    setup_requirements = ['six', 'setuptools>=17.1', 'setuptools_scm']
    needs_sphinx = {
        'build_sphinx',
        'docs',
        'upload_docs',
    }.intersection(sys.argv)
    if needs_sphinx:
        setup_requirements.append('sphinx')
    setuptools.setup(
        author="David Gidwani",
        author_email="david.gidwani@gmail.com",
        classifiers=[
            "Development Status :: 4 - Beta",
            "Intended Audience :: Developers",
            "License :: OSI Approved :: BSD License",
            "Operating System :: OS Independent",
            "Programming Language :: Python",
            "Programming Language :: Python :: 2",
            "Programming Language :: Python :: 2.7",
            "Programming Language :: Python :: 3",
            "Programming Language :: Python :: 3.3",
            "Programming Language :: Python :: 3.4",
            "Topic :: Software Development",
            "Topic :: Software Development :: Libraries :: Python Modules",
        ],
        description="Painless access to namespaced environment variables",
        download_url="https://github.com/darvid/biome/tarball/0.1",
        install_requires=install_requirements,
        keywords="conf config configuration environment",
        license="BSD",
        long_description=readme(),
        name="biome",
        package_dir={'': 'src'},
        packages=setuptools.find_packages('./src'),
        setup_requires=setup_requirements,
        tests_require=["pytest"],
        url="https://github.com/darvid/biome",
        use_scm_version=True,
    )
Package setup entrypoint.
def get(self, timeout=None, block=True, throw_dead=True):
    _vv and IOLOG.debug('%r.get(timeout=%r, block=%r)', self, timeout, block)
    try:
        msg = self._latch.get(timeout=timeout, block=block)
    except LatchError:
        raise ChannelError(self.closed_msg)
    if msg.is_dead and throw_dead:
        msg._throw_dead()
    return msg
Sleep waiting for a message to arrive on this receiver.

:param float timeout:
    If not :data:`None`, specifies a timeout in seconds.
:raises mitogen.core.ChannelError:
    The remote end indicated the channel should be closed, communication with it was lost, or :meth:`close` was called in the local process.
:raises mitogen.core.TimeoutError:
    Timeout was reached.
:returns:
    :class:`Message` that was received.
def merge_blocks(a_blocks, b_blocks):
    assert a_blocks[-1][2] == b_blocks[-1][2] == 0
    assert a_blocks[-1] == b_blocks[-1]
    combined_blocks = sorted(list(set(a_blocks + b_blocks)))
    i = j = 0
    for a, b, size in combined_blocks:
        assert i <= a
        assert j <= b
        i = a + size
        j = b + size
    return combined_blocks
Given two lists of blocks, combine them, in the proper order. Ensure that there are no overlaps, and that they are for sequences of the same length.
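The docstring above assumes blocks in (a_start, b_start, size) form; the dataset entry does not say where they come from, but difflib's SequenceMatcher is one standard producer of exactly this shape, including the zero-size sentinel the function's assertions rely on:

```python
import difflib

# Matching blocks of the (a_offset, b_offset, size) shape that merge_blocks
# consumes can be produced with difflib.SequenceMatcher; the final
# (len(a), len(b), 0) sentinel is what the function's first two assertions check.
sm = difflib.SequenceMatcher(None, "abxcd", "abycd")
blocks = [tuple(b) for b in sm.get_matching_blocks()]
```

Two such lists over sequences of the same length can then be fed to merge_blocks.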
def recv_match(self, condition=None, type=None, blocking=False):
    if type is not None and not isinstance(type, list):
        type = [type]
    while True:
        m = self.recv_msg()
        if m is None:
            return None
        if type is not None and not m.get_type() in type:
            continue
        if not mavutil.evaluate_condition(condition, self.messages):
            continue
        return m
Receive the next message that matches the given condition. `type` can be a string or a list of strings.
def personality(self, category: str = 'mbti') -> Union[str, int]:
    mbtis = ('ISFJ', 'ISTJ', 'INFJ', 'INTJ',
             'ISTP', 'ISFP', 'INFP', 'INTP',
             'ESTP', 'ESFP', 'ENFP', 'ENTP',
             'ESTJ', 'ESFJ', 'ENFJ', 'ENTJ')
    if category.lower() == 'rheti':
        return self.random.randint(1, 10)
    return self.random.choice(mbtis)
Generate a type of personality.

:param category: Category.
:return: Personality type.
:rtype: str or int

:Example:
    ISFJ.
def create_table(self, model_class):
    if model_class.is_system_model():
        raise DatabaseException("You can't create system table")
    if getattr(model_class, 'engine') is None:
        raise DatabaseException("%s class must define an engine" % model_class.__name__)
    self._send(model_class.create_table_sql(self))
Creates a table for the given model class, if it does not exist already.
def set_timezone(tz=None, deploy=False):
    if not tz:
        raise CommandExecutionError("Timezone name option must not be none.")
    ret = {}
    query = {'type': 'config',
             'action': 'set',
             'xpath': '/config/devices/entry[@name=\'localhost.localdomain\']/deviceconfig/system/timezone',
             'element': '<timezone>{0}</timezone>'.format(tz)}
    ret.update(__proxy__['panos.call'](query))
    if deploy is True:
        ret.update(commit())
    return ret
Set the timezone of the Palo Alto proxy minion. A commit will be required before this is processed.

Args:
    tz (str): The name of the timezone to set.
    deploy (bool): If true then commit the full candidate configuration, if false only set pending change.

CLI Example:

.. code-block:: bash

    salt '*' panos.set_timezone UTC
    salt '*' panos.set_timezone UTC deploy=True
def create_bigquery_table(self, database, schema, table_name, callback, sql):
    conn = self.get_thread_connection()
    client = conn.handle
    view_ref = self.table_ref(database, schema, table_name, conn)
    view = google.cloud.bigquery.Table(view_ref)
    callback(view)
    with self.exception_handler(sql):
        client.create_table(view)
Create a bigquery table. The caller must supply a callback that takes one argument, a `google.cloud.bigquery.Table`, and mutates it.
def random_alphanum(length):
    charset = string.ascii_letters + string.digits
    return random_string(length, charset)
Return a random string of ASCII letters and digits.

:param int length: The length of string to return
:returns: A random string
:rtype: str
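The entry above delegates to a random_string helper that is not part of this record; a minimal stand-in (an assumption, not the original implementation) is enough to show the behavior:

```python
import random
import string

def random_string(length, charset):
    # Hypothetical stand-in for the helper random_alphanum delegates to:
    # draw `length` characters uniformly from `charset`.
    return ''.join(random.choice(charset) for _ in range(length))

def random_alphanum(length):
    charset = string.ascii_letters + string.digits
    return random_string(length, charset)

token = random_alphanum(12)
```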
def delete_table(table_name):
    to_delete = [
        make_key(table_name, rec['id'])
        for rec in read_by_indexes(table_name, [])
    ]
    with DatastoreTransaction() as tx:
        tx.get_commit_req().mutation.delete.extend(to_delete)
Mainly for testing.
def main():
    s = rawdata.content.DataFiles()
    all_ingredients = list(s.get_collist_by_name(
        data_files[1]['file'], data_files[1]['col'])[0])
    best_ingred, worst_ingred = find_best_ingredients(all_ingredients, dinner_guests)
    print('best ingred = ', best_ingred)
    print('worst ingred = ', worst_ingred)
    for have in ingredients_on_hand:
        if have in best_ingred:
            print('Use this = ', have)
Script to find a list of recipes for a group of people with specific likes and dislikes.

Output of script:
    best ingred = ['Tea', 'Tofu', 'Cheese', 'Cucumber', 'Salad', 'Chocolate']
    worst ingred = ['Fish', 'Lamb', 'Pie', 'Asparagus', 'Chicken', 'Turnips']
    Use this = Tofu
    Use this = Cheese
def add_review(self, reviewer, product, review, date=None):
    if not isinstance(reviewer, self._reviewer_cls):
        raise TypeError(
            "Type of given reviewer isn't acceptable:", reviewer,
            ", expected:", self._reviewer_cls)
    elif not isinstance(product, self._product_cls):
        raise TypeError(
            "Type of given product isn't acceptable:", product,
            ", expected:", self._product_cls)
    r = self._review_cls(review, date=date)
    self.graph.add_edge(reviewer, product, review=r)
    return r
Add a new review from a given reviewer to a given product.

Args:
    reviewer: an instance of Reviewer.
    product: an instance of Product.
    review: a float value.
    date: date the review issued.

Returns:
    the added new review object.

Raises:
    TypeError: when the given reviewer and product aren't instances of the reviewer and product classes specified when this graph was constructed.
def isAudio(self):
    val = False
    if self.__dict__['codec_type']:
        if str(self.__dict__['codec_type']) == 'audio':
            val = True
    return val
Is this stream labelled as an audio stream?
def run(self):
    while True:
        try:
            data = None
            if self.debug:
                self.py3_wrapper.log("waiting for a connection")
            connection, client_address = self.sock.accept()
            try:
                if self.debug:
                    self.py3_wrapper.log("connection from")
                data = connection.recv(MAX_SIZE)
                if data:
                    data = json.loads(data.decode("utf-8"))
                    if self.debug:
                        self.py3_wrapper.log(u"received %s" % data)
                    self.command_runner.run_command(data)
            finally:
                connection.close()
        except Exception:
            if data:
                self.py3_wrapper.log("Command error")
                self.py3_wrapper.log(data)
            self.py3_wrapper.report_exception("command failed")
Main thread: listen to the socket and send any commands to the CommandRunner.
def reset(self):
    for index in range(self.counts_len):
        self.counts[index] = 0
    self.total_count = 0
    self.min_value = sys.maxsize
    self.max_value = 0
    self.start_time_stamp_msec = sys.maxsize
    self.end_time_stamp_msec = 0
Reset the histogram to a pristine state
def month_name(self, locale=None):
    if self.tz is not None and not timezones.is_utc(self.tz):
        values = self._local_timestamps()
    else:
        values = self.asi8
    result = fields.get_date_name_field(values, 'month_name', locale=locale)
    result = self._maybe_mask_results(result, fill_value=None)
    return result
Return the month names of the DateTimeIndex with specified locale.

.. versionadded:: 0.23.0

Parameters
----------
locale : str, optional
    Locale determining the language in which to return the month name. Default is English locale.

Returns
-------
Index
    Index of month names.

Examples
--------
>>> idx = pd.date_range(start='2018-01', freq='M', periods=3)
>>> idx
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
              dtype='datetime64[ns]', freq='M')
>>> idx.month_name()
Index(['January', 'February', 'March'], dtype='object')
def symmetry(self, input_entity, coefficients, duplicate=True):
    d = {1: "Line", 2: "Surface", 3: "Volume"}
    entity = "{}{{{}}};".format(d[input_entity.dimension], input_entity.id)
    if duplicate:
        entity = "Duplicata{{{}}}".format(entity)
    self._GMSH_CODE.append(
        "Symmetry {{{}}} {{{}}}".format(
            ", ".join([str(co) for co in coefficients]), entity
        )
    )
    return
Transforms all elementary entities symmetrically to a plane. The vector should contain four expressions giving the coefficients of the plane's equation.
def main():
    alarm = XBeeAlarm('/dev/ttyUSB0', '\x56\x78')
    routine = SimpleWakeupRoutine(alarm)
    from time import sleep
    while True:
        try:
            print("Waiting 5 seconds...")
            sleep(5)
            print("Firing")
            routine.trigger()
        except KeyboardInterrupt:
            break
Run through simple demonstration of alarm concept
def get_ips(linode_id=None):
    if linode_id:
        ips = _query('linode', 'ip.list', args={'LinodeID': linode_id})
    else:
        ips = _query('linode', 'ip.list')
    ips = ips['DATA']
    ret = {}
    for item in ips:
        node_id = six.text_type(item['LINODEID'])
        if item['ISPUBLIC'] == 1:
            key = 'public_ips'
        else:
            key = 'private_ips'
        if ret.get(node_id) is None:
            ret.update({node_id: {'public_ips': [], 'private_ips': []}})
        ret[node_id][key].append(item['IPADDRESS'])
    if linode_id:
        _all_ips = {'public_ips': [], 'private_ips': []}
        matching_id = ret.get(six.text_type(linode_id))
        if matching_id:
            _all_ips['private_ips'] = matching_id['private_ips']
            _all_ips['public_ips'] = matching_id['public_ips']
        ret = _all_ips
    return ret
Returns public and private IP addresses. linode_id Limits the IP addresses returned to the specified Linode ID.
def filters(self):
    if self._filters is None:
        self._filters, self._attributes = self._fetch_configuration()
    return self._filters
List of filters available for the dataset.
def is_elem_ref(elem_ref):
    return (
        elem_ref
        and isinstance(elem_ref, tuple)
        and len(elem_ref) == 3
        and (elem_ref[0] == ElemRefObj or elem_ref[0] == ElemRefArr)
    )
Returns true if the elem_ref is an element reference

:param elem_ref:
:return:
def loads(s, encode_nominal=False, return_type=DENSE):
    decoder = ArffDecoder()
    return decoder.decode(s, encode_nominal=encode_nominal,
                          return_type=return_type)
Convert a string instance containing the ARFF document into a Python object.

:param s: a string object.
:param encode_nominal: boolean, if True perform a label encoding while reading the .arff file.
:param return_type: determines the data structure used to store the dataset. Can be one of `arff.DENSE`, `arff.COO`, `arff.LOD`, `arff.DENSE_GEN` or `arff.LOD_GEN`. Consult the sections on `working with sparse data`_ and `loading progressively`_.
:return: a dictionary.
def as_bool(self, key):
    val = self[key]
    if isinstance(val, bool):
        return val
    return val.lower() == "yes"
Express given key's value as a boolean type.

Typically, this is used for ``ssh_config``'s pseudo-boolean values which are either ``"yes"`` or ``"no"``. In such cases, ``"yes"`` yields ``True`` and any other value becomes ``False``.

.. note::
    If (for whatever reason) the stored value is already boolean in nature, it's simply returned.

.. versionadded:: 2.5
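The yes/no semantics described above can be exercised with a small standalone sketch (the class name here is illustrative, not the library's actual type):

```python
class SSHConfigDict(dict):
    # Minimal sketch of a dict exposing the as_bool behaviour described
    # above: "yes" is True, any other string is False, real booleans pass
    # through unchanged.
    def as_bool(self, key):
        val = self[key]
        if isinstance(val, bool):
            return val
        return val.lower() == "yes"

cfg = SSHConfigDict({"forwardagent": "yes", "compression": "no", "batchmode": True})
```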
def get_conditions(self, service_id):
    conditions = etree.Element('Conditions')
    conditions.set('NotBefore', self.instant())
    conditions.set('NotOnOrAfter', self.instant(offset=30))
    restriction = etree.SubElement(conditions, 'AudienceRestrictionCondition')
    audience = etree.SubElement(restriction, 'Audience')
    audience.text = service_id
    return conditions
Build a Conditions XML block for a SAML 1.1 Assertion.
def find_var_end(self, text):
    return self.find_end(text, self.start_var_token, self.end_var_token)
Find the end of a variable.
def find_courses(self, partial):
    partial = partial.lower()
    keys = self.courses.keys()
    keys = [k for k in keys if k.lower().find(partial) != -1]
    courses = [self.courses[k] for k in keys]
    return list(set(courses))
Finds all courses by a given substring. This is case-insensitive.
def config_loader(app, **kwargs):
    app.config.from_object(Config)
    app.config.update(**kwargs)
Custom config loader.
def _read_pdb(path):
    r_mode = 'r'
    openf = open
    if path.endswith('.gz'):
        r_mode = 'rb'
        openf = gzip.open
    with openf(path, r_mode) as f:
        txt = f.read()
    if path.endswith('.gz'):
        if sys.version_info[0] >= 3:
            txt = txt.decode('utf-8')
        else:
            txt = txt.encode('ascii')
    return path, txt
Read PDB file from local drive.
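The gzip branch of _read_pdb can be demonstrated in isolation with a throwaway file (the path and contents below are made up for the example):

```python
import gzip
import os
import tempfile

# Round-trip sketch of the gzip branch in _read_pdb: on Python 3 the
# bytes read from a .gz file are decoded to text before being returned.
path = os.path.join(tempfile.mkdtemp(), "mini.pdb.gz")
with gzip.open(path, "wb") as f:
    f.write(b"HEADER    EXAMPLE PDB\n")
with gzip.open(path, "rb") as f:
    txt = f.read().decode("utf-8")
```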
def install_board_with_programmer(mcu,
                                  programmer,
                                  f_cpu=16000000,
                                  core='arduino',
                                  replace_existing=False,
                                  ):
    bunch = AutoBunch()
    board_id = '{mcu}_{f_cpu}_{programmer}'.format(f_cpu=f_cpu,
                                                   mcu=mcu,
                                                   programmer=programmer,
                                                   )
    bunch.name = '{mcu}@{f} Prog:{programmer}'.format(f=strfreq(f_cpu),
                                                      mcu=mcu,
                                                      programmer=programmer,
                                                      )
    bunch.upload.using = programmer
    bunch.build.mcu = mcu
    bunch.build.f_cpu = str(f_cpu) + 'L'
    bunch.build.core = core
    install_board(board_id, bunch, replace_existing=replace_existing)
Install a board with a programmer.
def create_table(self):
    all_tables = self.aws_conn.list_tables()['TableNames']
    if self.table_name in all_tables:
        log.info("Table %s already exists" % self.table_name)
    else:
        log.info("Table %s does not exist: creating it" % self.table_name)
        self.table = Table.create(
            self.table_name,
            schema=[
                HashKey('key')
            ],
            throughput={
                'read': 10,
                'write': 10,
            },
            connection=self.aws_conn,
        )
Create the DynamoDB table used by this ObjectStore, only if it does not already exist.
def releaseNativeOverlayHandle(self, ulOverlayHandle, pNativeTextureHandle):
    fn = self.function_table.releaseNativeOverlayHandle
    result = fn(ulOverlayHandle, pNativeTextureHandle)
    return result
Release the pNativeTextureHandle provided from the GetOverlayTexture call, this allows the system to free the underlying GPU resources for this object, so only do it once you stop rendering this texture.
def regex_search(self, regex: str) -> List[HistoryItem]:
    regex = regex.strip()
    if regex.startswith(r'/') and regex.endswith(r'/'):
        regex = regex[1:-1]
    finder = re.compile(regex, re.DOTALL | re.MULTILINE)

    def isin(hi):
        return finder.search(hi) or finder.search(hi.expanded)

    return [itm for itm in self if isin(itm)]
Find history items which match a given regular expression

:param regex: the regular expression to search for.
:return: a list of history items, or an empty list if the string was not found
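The /pattern/ normalisation step from regex_search can be shown on its own (the helper name below is invented for the example):

```python
import re

def normalize_regex(regex):
    # Mirrors the normalisation step in regex_search above: surrounding
    # slashes (a /pattern/ form) are stripped before compiling with the
    # same DOTALL | MULTILINE flags.
    regex = regex.strip()
    if regex.startswith('/') and regex.endswith('/'):
        regex = regex[1:-1]
    return re.compile(regex, re.DOTALL | re.MULTILINE)

matches = [s for s in ("help history", "alias") if normalize_regex("/hi.*ry/").search(s)]
```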
def delete(name, profile="splunk"):
    client = _get_splunk(profile)
    try:
        client.saved_searches.delete(name)
        return True
    except KeyError:
        return None
Delete a splunk search

CLI Example:

    splunk_search.delete 'my search name'
def show(self, wait=1.2, scale=10, module_color=(0, 0, 0, 255),
         background=(255, 255, 255, 255), quiet_zone=4):
    import os
    import time
    import tempfile
    import webbrowser
    try:
        from urlparse import urljoin
        from urllib import pathname2url
    except ImportError:
        from urllib.parse import urljoin
        from urllib.request import pathname2url
    f = tempfile.NamedTemporaryFile('wb', suffix='.png', delete=False)
    self.png(f, scale=scale, module_color=module_color,
             background=background, quiet_zone=quiet_zone)
    f.close()
    webbrowser.open_new_tab(urljoin('file:', pathname2url(f.name)))
    time.sleep(wait)
    os.unlink(f.name)
Displays this QR code.

This method is mainly intended for debugging purposes.

This method saves the output of the :py:meth:`png` method (with a default scaling factor of 10) to a temporary file and opens it with the standard PNG viewer application or within the standard webbrowser. The temporary file is deleted afterwards.

If this method does not show any result, try to increase the `wait` parameter. This parameter specifies the time in seconds to wait till the temporary file is deleted. Note, that this method does not return until the provided amount of seconds (default: 1.2) has passed.

The other parameters are simply passed on to the `png` method.
def exit_and_fail(self, msg=None, out=None):
    self.exit(result=PANTS_FAILED_EXIT_CODE, msg=msg, out=out)
Exits the runtime with a nonzero exit code, indicating failure.

:param msg: A string message to print to stderr or another custom file descriptor before exiting. (Optional)
:param out: The file descriptor to emit `msg` to. (Optional)
def _org_path(self, which, payload):
    return Subscription(
        self._server_config,
        organization=payload['organization_id'],
    ).path(which)
A helper method for generating paths with organization IDs in them.

:param which: A path such as "manifest_history" that has an organization ID in it.
:param payload: A dict with an "organization_id" key in it.
:returns: A string. The requested path.
def ncontains(self, column, value):
    df = self.df[self.df[column].str.contains(value) == False]
    if df is None:
        self.err("Can not select contained data")
        return
    self.df = df
Set the main dataframe instance to rows that do not contain a string value in a column
def get(self, sid):
    return RecordingContext(
        self._version,
        account_sid=self._solution['account_sid'],
        sid=sid,
    )
Constructs a RecordingContext

:param sid: The unique string that identifies the resource

:returns: twilio.rest.api.v2010.account.recording.RecordingContext
:rtype: twilio.rest.api.v2010.account.recording.RecordingContext
def validate_answer(self, value):
    try:
        serialized = json.dumps(value)
    except (ValueError, TypeError):
        raise serializers.ValidationError("Answer value must be JSON-serializable")
    if len(serialized) > Submission.MAXSIZE:
        raise serializers.ValidationError("Maximum answer size exceeded.")
    return value
Check that the answer is JSON-serializable and not too long.
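The two checks described above can be sketched as a standalone predicate (the 1024-byte cap is an arbitrary stand-in for Submission.MAXSIZE, which is not given in this record):

```python
import json

def is_valid_answer(value, maxsize=1024):
    # Same two checks validate_answer performs: JSON-serializability,
    # then a cap on the serialized size.
    try:
        serialized = json.dumps(value)
    except (ValueError, TypeError):
        return False
    return len(serialized) <= maxsize
```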
def detail_dict(self):
    d = self.dict

    def aug_col(c):
        d = c.dict
        d['stats'] = [s.dict for s in c.stats]
        return d

    d['table'] = self.table.dict
    d['table']['columns'] = [aug_col(c) for c in self.table.columns]
    return d
A more detailed dict that includes the descriptions, sub descriptions, table and columns.
def AddStopTime(self, stop, problems=None, schedule=None, **kwargs):
    if problems is None:
        problems = problems_module.default_problem_reporter
    stoptime = self.GetGtfsFactory().StopTime(
        problems=problems, stop=stop, **kwargs)
    self.AddStopTimeObject(stoptime, schedule)
Add a stop to this trip. Stops must be added in the order visited.

Args:
    stop: A Stop object
    kwargs: remaining keyword args passed to StopTime.__init__

Returns:
    None
def is_method(arg):
    if inspect.ismethod(arg):
        return True
    if isinstance(arg, NonInstanceMethod):
        return True
    if inspect.isfunction(arg):
        return _get_first_arg_name(arg) == 'self'
    return False
Checks whether given object is a method.
def mpub(self, topic, *messages):
    with self.random_connection() as client:
        client.mpub(topic, *messages)
        return self.wait_response()
Publish messages to a topic
def model(self) -> 'modeltools.Model':
    model = vars(self).get('model')
    if model:
        return model
    raise AttributeError(
        f'The model object of element `{self.name}` has '
        f'been requested but not been prepared so far.')
The |Model| object handled by the actual |Element| object.

Directly after their initialisation, elements do not know which model they require:

>>> from hydpy import Element
>>> hland = Element('hland', outlets='outlet')
>>> hland.model
Traceback (most recent call last):
...
AttributeError: The model object of element `hland` has been \
requested but not been prepared so far.

During scripting and when working interactively in the Python shell, it is often convenient to assign a |model| directly.

>>> from hydpy.models.hland_v1 import *
>>> parameterstep('1d')
>>> hland.model = model
>>> hland.model.name
'hland_v1'

>>> del hland.model
>>> hasattr(hland, 'model')
False

For the "usual" approach to prepare models, please see the method |Element.init_model|.

The following examples show that assigning |Model| objects to property |Element.model| creates some connections required by the respective model type automatically. These examples should be relevant for developers only.

The following |hbranch| model branches a single input value (from node `inp`) to multiple outputs (nodes `out1` and `out2`):

>>> from hydpy import Element, Node, reverse_model_wildcard_import
>>> reverse_model_wildcard_import()
>>> element = Element('a_branch',
...                   inlets='branch_input',
...                   outlets=('branch_output_1', 'branch_output_2'))
>>> inp = element.inlets.branch_input
>>> out1, out2 = element.outlets
>>> from hydpy.models.hbranch import *
>>> parameterstep()
>>> xpoints(0.0, 3.0)
>>> ypoints(branch_output_1=[0.0, 1.0], branch_output_2=[0.0, 2.0])
>>> parameters.update()
>>> element.model = model

To show that the inlet and outlet connections are built properly, we assign a new value to the inlet node `inp` and verify that the suitable fractions of this value are passed to the outlet nodes `out1` and `out2` by calling method |Model.doit|:

>>> inp.sequences.sim = 999.0
>>> model.doit(0)
>>> fluxes.input
input(999.0)
>>> out1.sequences.sim
sim(333.0)
>>> out2.sequences.sim
sim(666.0)
def pull(self):
    for json_e in self.d.pull(repository=self.name, tag=self.tag,
                              stream=True, decode=True):
        logger.debug(json_e)
        status = graceful_get(json_e, "status")
        if status:
            logger.info(status)
        else:
            error = graceful_get(json_e, "error")
            logger.error(error)
            raise ConuException(
                "There was an error while pulling the image %s: %s"
                % (self.name, error))
    self.using_transport(SkopeoTransport.DOCKER_DAEMON)
Pull this image from registry. Raises an exception if the image is not found in the registry. :return: None
def get_interface_switch(self, nexus_host, intf_type, interface):
    if intf_type == "ethernet":
        path_interface = "phys-[eth" + interface + "]"
    else:
        path_interface = "aggr-[po" + interface + "]"
    action = snipp.PATH_IF % path_interface
    starttime = time.time()
    response = self.client.rest_get(action, nexus_host)
    self.capture_and_print_timeshot(starttime, "getif", switch=nexus_host)
    LOG.debug("GET call returned interface %(if_type)s %(interface)s "
              "config", {'if_type': intf_type, 'interface': interface})
    return response
Get the interface data from host.

:param nexus_host: IP address of Nexus switch
:param intf_type: String which specifies interface type. example: ethernet
:param interface: String indicating which interface. example: 1/19
:returns response: Returns interface data
def require_ajax_logged_in(func):
    @functools.wraps(func)
    def inner_func(self, *pargs, **kwargs):
        if not self._ajax_api.logged_in:
            logger.info('Logging into AJAX API for required meta method')
            if not self.has_credentials:
                raise ApiLoginFailure(
                    'Login is required but no credentials were provided')
            self._ajax_api.User_Login(name=self._state['username'],
                                      password=self._state['password'])
        return func(self, *pargs, **kwargs)
    return inner_func
Check if the AJAX API is logged in, and log in if not.
def retry(
        transport: 'UDPTransport',
        messagedata: bytes,
        message_id: UDPMessageID,
        recipient: Address,
        stop_event: Event,
        timeout_backoff: Iterable[int],
) -> bool:
    async_result = transport.maybe_sendraw_with_result(
        recipient,
        messagedata,
        message_id,
    )
    event_quit = event_first_of(
        async_result,
        stop_event,
    )
    for timeout in timeout_backoff:
        if event_quit.wait(timeout=timeout) is True:
            break
        log.debug(
            'retrying message',
            node=pex(transport.raiden.address),
            recipient=pex(recipient),
            msgid=message_id,
        )
        transport.maybe_sendraw_with_result(
            recipient,
            messagedata,
            message_id,
        )
    return async_result.ready()
Send messagedata until it's acknowledged.

Exit when:
- The message is delivered.
- Event_stop is set.
- The iterator timeout_backoff runs out.

Returns:
    bool: True if the message was acknowledged, False otherwise.
def base_list_parser():
    base_parser = ArgumentParser(add_help=False)
    base_parser.add_argument(
        '-F', '--format', action='store', default='default',
        choices=['csv', 'json', 'yaml', 'default'],
        help='choose the output format')
    return base_parser
Creates a parser with arguments specific to formatting lists of resources.

Returns:
    {ArgumentParser}: Base parser with default list args
def _coerce_client_id(client_id):
    if isinstance(client_id, type(u'')):
        client_id = client_id.encode('utf-8')
    if not isinstance(client_id, bytes):
        raise TypeError('{!r} is not a valid consumer group (must be'
                        ' str or bytes)'.format(client_id))
    return client_id
Ensure the provided client ID is a byte string. If a text string is provided, it is encoded as UTF-8 bytes.

:param client_id: :class:`bytes` or :class:`str` instance
def append_query_parameter(url, parameters, ignore_if_exists=True):
    if ignore_if_exists:
        # Iterate over a copy of the keys: deleting from `parameters` while
        # iterating its keys() view raises a RuntimeError on Python 3.
        for key in list(parameters.keys()):
            if key + "=" in url:
                del parameters[key]
    parameters_str = "&".join(k + "=" + v for k, v in parameters.items())
    append_token = "&" if "?" in url else "?"
    return url + append_token + parameters_str
Quick and dirty appending of query parameters to a URL.
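Typical behavior of the function, re-stated here (with the iteration-over-a-copy fix) so the example runs standalone:

```python
def append_query_parameter(url, parameters, ignore_if_exists=True):
    # Re-stated from the entry above; iterating over a copied key list
    # allows deletion while looping.
    if ignore_if_exists:
        for key in list(parameters.keys()):
            if key + "=" in url:
                del parameters[key]
    parameters_str = "&".join(k + "=" + v for k, v in parameters.items())
    append_token = "&" if "?" in url else "?"
    return url + append_token + parameters_str

with_query = append_query_parameter("https://example.com/p?a=1", {"b": "2"})
no_query = append_query_parameter("https://example.com/p", {"b": "2"})
```

Note the "quick and dirty" caveat is real: values are not URL-encoded, and a parameter already present in the URL is silently skipped.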
def fail_eof(self, end_tokens=None, lineno=None):
    stack = list(self._end_token_stack)
    if end_tokens is not None:
        stack.append(end_tokens)
    return self._fail_ut_eof(None, stack, lineno)
Like fail_unknown_tag but for end of template situations.
def main():
    parser = build_command_parser()
    options = parser.parse_args()
    file_pointer = options.filename
    input_format = options.input_format
    with file_pointer:
        json_data = json.load(file_pointer)
    if input_format == SIMPLE_JSON:
        report = Report(json_data)
    elif input_format == EXTENDED_JSON:
        report = Report(
            json_data.get('messages'),
            json_data.get('stats'),
            json_data.get('previous'))
    print(report.render(), file=options.output)
Pylint JSON to HTML Main Entry Point
def watch(self, limit=None, timeout=None):
    start_time = time.time()
    count = 0
    while not timeout or time.time() - start_time < timeout:
        new = self.read()
        if new != self.temp:
            count += 1
            self.callback(new)
            if count == limit:
                break
            self.temp = new
        time.sleep(self.interval)
Blocking method that watches the clipboard for changes, invoking the callback on each change.
def list_cleared_orders(self, bet_status='SETTLED', event_type_ids=None, event_ids=None,
                        market_ids=None, runner_ids=None, bet_ids=None,
                        customer_order_refs=None, customer_strategy_refs=None, side=None,
                        settled_date_range=time_range(), group_by=None,
                        include_item_description=None, locale=None, from_record=None,
                        record_count=None, session=None, lightweight=None):
    params = clean_locals(locals())
    method = '%s%s' % (self.URI, 'listClearedOrders')
    (response, elapsed_time) = self.request(method, params, session)
    return self.process_response(response, resources.ClearedOrders, elapsed_time, lightweight)
Returns a list of settled bets based on the bet status, ordered by settled date.

:param str bet_status: Restricts the results to the specified status
:param list event_type_ids: Optionally restricts the results to the specified Event Type IDs
:param list event_ids: Optionally restricts the results to the specified Event IDs
:param list market_ids: Optionally restricts the results to the specified market IDs
:param list runner_ids: Optionally restricts the results to the specified Runners
:param list bet_ids: If you ask for orders, restricts the results to orders with the specified bet IDs
:param list customer_order_refs: Optionally restricts the results to the specified customer order references
:param list customer_strategy_refs: Optionally restricts the results to the specified customer strategy references
:param str side: Optionally restricts the results to the specified side
:param dict settled_date_range: Optionally restricts the results to be from/to the specified settled date
:param str group_by: How to aggregate the lines, if not supplied then the lowest level is returned
:param bool include_item_description: If true then an ItemDescription object is included in the response
:param str locale: The language used for the response
:param int from_record: Specifies the first record that will be returned
:param int record_count: Specifies how many records will be returned from the index position 'fromRecord'
:param requests.session session: Requests session object
:param bool lightweight: If True will return dict not a resource

:rtype: resources.ClearedOrders
def validate(self, val):
    if self.nullable and val is None:
        return
    type_er = self.__type_check(val)
    if type_er:
        return type_er
    spec_err = self._specific_validation(val)
    if spec_err:
        return spec_err
Performs basic validation of field value and then passes it for specific validation. :param val: field value to validate :return: error message or None
def add_accounts_to_project(accounts_query, project):
    query = accounts_query.filter(date_deleted__isnull=True)
    for account in query:
        add_account_to_project(account, project)
Add accounts to project.
def runtime_import(object_path):
    obj_module, obj_element = object_path.rsplit(".", 1)
    loader = __import__(obj_module, globals(), locals(), [str(obj_element)])
    return getattr(loader, obj_element)
Import at runtime.
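A standalone sketch showing the dotted-path import in action against the standard library (the function body is reproduced so the example runs on its own):

```python
def runtime_import(object_path):
    # Split "pkg.module.attr" into module path and attribute name,
    # import the module, and pull the attribute off it.
    obj_module, obj_element = object_path.rsplit(".", 1)
    loader = __import__(obj_module, globals(), locals(), [str(obj_element)])
    return getattr(loader, obj_element)

join = runtime_import("os.path.join")
print(join("a", "b"))
print(runtime_import("math.pi"))
```

Passing a non-empty `fromlist` to `__import__` is what makes it return the leaf submodule (`os.path`) rather than the top-level package (`os`).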
def slim_stem(token):
    target_sulfixs = ['ic', 'tic', 'e', 'ive', 'ing', 'ical', 'nal', 'al',
                      'ism', 'ion', 'ation', 'ar', 'sis', 'us', 'ment']
    for sulfix in sorted(target_sulfixs, key=len, reverse=True):
        if token.endswith(sulfix):
            token = token[0:-len(sulfix)]
            break
    if token.endswith('ll'):
        token = token[:-1]
    return token
A very simple stemmer, for entity of GO stemming. >>> token = 'interaction' >>> slim_stem(token) 'interact'
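The stemmer is pure and deterministic, so its behaviour is easy to check. A standalone copy with a few illustrative inputs (chosen here, not from the source):

```python
def slim_stem(token):
    # Strip the longest matching suffix, then collapse a trailing double 'l'.
    target_sulfixs = ['ic', 'tic', 'e', 'ive', 'ing', 'ical', 'nal', 'al',
                      'ism', 'ion', 'ation', 'ar', 'sis', 'us', 'ment']
    for sulfix in sorted(target_sulfixs, key=len, reverse=True):
        if token.endswith(sulfix):
            token = token[0:-len(sulfix)]
            break
    if token.endswith('ll'):
        token = token[:-1]
    return token

print(slim_stem('interaction'))  # interact
print(slim_stem('chemical'))     # chem
print(slim_stem('signalling'))   # signal  ('ing' stripped, then 'll' -> 'l')
```

Sorting by length (descending) ensures the longest suffix wins, e.g. 'ical' is removed before the shorter 'al' could match.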
def iter_duration(self, iter_trigger):
    print
    process_info = ProcessInfo(self.frame_count, use_last_rates=4)
    start_time = time.time()
    next_status = start_time + 0.25
    old_pos = next(iter_trigger)
    for pos in iter_trigger:
        duration = pos - old_pos
        yield duration
        old_pos = pos
        if time.time() > next_status:
            next_status = time.time() + 1
            self._print_status(process_info)
    self._print_status(process_info)
    print
Yield the duration between two consecutive frames.
def _ConsumeInteger(tokenizer, is_signed=False, is_long=False):
    try:
        result = ParseInteger(tokenizer.token, is_signed=is_signed, is_long=is_long)
    except ValueError as e:
        raise tokenizer.ParseError(str(e))
    tokenizer.NextToken()
    return result
Consumes an integer number from tokenizer.

Args:
    tokenizer: A tokenizer used to parse the number.
    is_signed: True if a signed integer must be parsed.
    is_long: True if a long integer must be parsed.

Returns:
    The integer parsed.

Raises:
    ParseError: If an integer with given characteristics couldn't be consumed.
def dict_to_nvlist(dict):
    result = []
    for item in list(dict.keys()):
        result.append(SDOPackage.NameValue(item, omniORB.any.to_any(dict[item])))
    return result
Convert a dictionary into a CORBA namevalue list.
def service_present(name, service_type, description=None, profile=None, **connection_args):
    ret = {'name': name,
           'changes': {},
           'result': True,
           'comment': 'Service "{0}" already exists'.format(name)}
    role = __salt__['keystone.service_get'](name=name, profile=profile, **connection_args)
    if 'Error' not in role:
        return ret
    else:
        if __opts__.get('test'):
            ret['result'] = None
            ret['comment'] = 'Service "{0}" will be added'.format(name)
            return ret
    __salt__['keystone.service_create'](name, service_type, description,
                                        profile=profile, **connection_args)
    ret['comment'] = 'Service "{0}" has been added'.format(name)
    ret['changes']['Service'] = 'Created'
    return ret
Ensure service present in Keystone catalog

name
    The name of the service

service_type
    The type of OpenStack Service

description (optional)
    Description of the service
def yellow(cls):
    "Make the text foreground color yellow."
    wAttributes = cls._get_text_attributes()
    wAttributes &= ~win32.FOREGROUND_MASK
    wAttributes |= win32.FOREGROUND_YELLOW
    cls._set_text_attributes(wAttributes)
Make the text foreground color yellow.
def OnViewFrozen(self, event):
    self.grid._view_frozen = not self.grid._view_frozen
    self.grid.grid_renderer.cell_cache.clear()
    self.grid.ForceRefresh()
    event.Skip()
Toggle display of cells' frozen status.
def before(method_name):
    def decorator(function):
        @wraps(function)
        def wrapper(self, *args, **kwargs):
            returns = getattr(self, method_name)(*args, **kwargs)
            if returns is None:
                return function(self, *args, **kwargs)
            else:
                if isinstance(returns, HttpResponse):
                    return returns
                else:
                    return function(self, *returns)
        return wrapper
    return decorator
Run the given method prior to the decorated view.

If you return anything besides ``None`` from the given method, its return values will replace the arguments of the decorated view.

If you return an instance of ``HttpResponse`` from the given method, Respite will return it immediately without delegating the request to the decorated view.

Example usage::

    class ArticleViews(Views):

        @before('_load')
        def show(self, request, article):
            return self._render(
                request = request,
                template = 'show',
                context = {
                    'article': article
                }
            )

        def _load(self, request, id):
            try:
                return request, Article.objects.get(id=id)
            except Article.DoesNotExist:
                return self._error(request, 404, message='The article could not be found.')

:param method: A string describing a class method.
def get_bool(self, property):
    value = self.get(property)
    if isinstance(value, bool):
        return value
    return value.lower() == "true"
Gets the value of the given property as boolean. :param property: (:class:`~hazelcast.config.ClientProperty`), Property to get value from :return: (bool), Value of the given property
def _is_wildcard_match(self, domain_labels, valid_domain_labels):
    first_domain_label = domain_labels[0]
    other_domain_labels = domain_labels[1:]
    wildcard_label = valid_domain_labels[0]
    other_valid_domain_labels = valid_domain_labels[1:]
    if other_domain_labels != other_valid_domain_labels:
        return False
    if wildcard_label == '*':
        return True
    wildcard_regex = re.compile('^' + wildcard_label.replace('*', '.*') + '$')
    if wildcard_regex.match(first_domain_label):
        return True
    return False
Determines if the labels in a domain are a match for labels from a wildcard valid domain name

:param domain_labels: A list of unicode strings, with A-label form for IDNs, of the labels in the domain name to check

:param valid_domain_labels: A list of unicode strings, with A-label form for IDNs, of the labels in a wildcard domain pattern

:return: A boolean - if the domain matches the valid domain
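A standalone sketch of the matching logic, rewritten as a free function (dropping `self`) so it can be exercised directly; the example domains are illustrative:

```python
import re

def is_wildcard_match(domain_labels, valid_domain_labels):
    first_domain_label = domain_labels[0]
    other_domain_labels = domain_labels[1:]
    wildcard_label = valid_domain_labels[0]
    other_valid_domain_labels = valid_domain_labels[1:]
    # Everything to the right of the wildcard label must match exactly,
    # so '*' covers exactly one label, never two.
    if other_domain_labels != other_valid_domain_labels:
        return False
    if wildcard_label == '*':
        return True
    # Partial wildcards like 'www*' are turned into an anchored regex.
    wildcard_regex = re.compile('^' + wildcard_label.replace('*', '.*') + '$')
    return bool(wildcard_regex.match(first_domain_label))

print(is_wildcard_match(['www', 'example', 'com'], ['*', 'example', 'com']))     # True
print(is_wildcard_match(['a', 'b', 'example', 'com'], ['*', 'example', 'com']))  # False
print(is_wildcard_match(['www2', 'example', 'com'], ['www*', 'example', 'com'])) # True
```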
def _list_dir(self, path):
    try:
        elements = [
            os.path.join(path, x)
            for x in os.listdir(path)
        ] if os.path.isdir(path) else []
        elements.sort()
    except OSError:
        elements = None
    return elements
returns absolute paths for all entries in a directory
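A runnable sketch of the same helper as a free function (the leading underscore and `self` dropped for illustration), demonstrated against a throwaway temporary directory:

```python
import os
import tempfile

def list_dir(path):
    try:
        # Join entries onto the given path; non-directories yield [].
        elements = [os.path.join(path, x) for x in os.listdir(path)] \
            if os.path.isdir(path) else []
        elements.sort()
    except OSError:
        elements = None
    return elements

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "b.txt"), "w").close()
    open(os.path.join(d, "a.txt"), "w").close()
    print(list_dir(d))        # sorted full paths: a.txt before b.txt
print(list_dir("/no/such/dir"))  # non-directory path -> []
```

Note the paths are "absolute" only when `path` itself is absolute; the function simply prefixes `path` onto each entry.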
def install_dependencies(dependencies, verbose=False):
    if not dependencies:
        return
    stdout = stderr = None if verbose else subprocess.DEVNULL
    with tempfile.TemporaryDirectory() as req_dir:
        req_file = Path(req_dir) / "requirements.txt"
        with open(req_file, "w") as f:
            for dependency in dependencies:
                f.write(f"{dependency}\n")
        pip = ["python3", "-m", "pip", "install", "-r", req_file]
        if sys.base_prefix == sys.prefix and not hasattr(sys, "real_prefix"):
            pip.append("--user")
        try:
            subprocess.check_call(pip, stdout=stdout, stderr=stderr)
        except subprocess.CalledProcessError:
            raise Error(_("failed to install dependencies"))
    importlib.reload(site)
Install all packages in dependency list via pip.
def _create(self, **kwargs):
    tmos_ver = self._meta_data['bigip']._meta_data['tmos_version']
    legacy = kwargs.pop('legacy', False)
    publish = kwargs.pop('publish', False)
    self._filter_version_specific_options(tmos_ver, **kwargs)
    if LooseVersion(tmos_ver) < LooseVersion('12.1.0'):
        return super(Policy, self)._create(**kwargs)
    else:
        if legacy:
            return super(Policy, self)._create(legacy=True, **kwargs)
        else:
            if 'subPath' not in kwargs:
                msg = "The keyword 'subPath' must be specified when " \
                      "creating draft policy in TMOS versions >= 12.1.0. " \
                      "Try and specify subPath as 'Drafts'."
                raise MissingRequiredCreationParameter(msg)
            self = super(Policy, self)._create(**kwargs)
            if publish:
                self.publish()
            return self
Allow creation of draft policy and ability to publish a draft

Draft policies only exist in 12.1.0 and greater versions of TMOS. But there must be a method to create a draft, then publish it.

:raises: MissingRequiredCreationParameter
def ensure_unicode_args(function):
    @wraps(function)
    def wrapped(*args, **kwargs):
        if six.PY2:
            return function(
                *salt.utils.data.decode_list(args),
                **salt.utils.data.decode_dict(kwargs)
            )
        else:
            return function(*args, **kwargs)
    return wrapped
Decodes all arguments passed to the wrapped function
def default_output_name(self, input_file):
    irom_segment = self.get_irom_segment()
    if irom_segment is not None:
        irom_offs = irom_segment.addr - ESP8266ROM.IROM_MAP_START
    else:
        irom_offs = 0
    return "%s-0x%05x.bin" % (os.path.splitext(input_file)[0],
                              irom_offs & ~(ESPLoader.FLASH_SECTOR_SIZE - 1))
Derive a default output name from the ELF name.
def determineMaxWindowSize(dtype, limit=None):
    vmem = psutil.virtual_memory()
    maxSize = math.floor(math.sqrt(vmem.available / np.dtype(dtype).itemsize))
    if limit is None or limit >= maxSize:
        return maxSize
    else:
        return limit
Determines the largest square window size that can be used, based on the specified datatype and amount of currently available system memory. If `limit` is specified, then this value will be returned in the event that it is smaller than the maximum computed size.
def moys_dict(self):
    moy_dict = {}
    for val, dt in zip(self.values, self.datetimes):
        moy_dict[dt.moy] = val
    return moy_dict
Return a dictionary of this collection's values where the keys are the moys. This is useful for aligning the values with another list of datetimes.
def handle(self, connection_id, message_content):
    if self._network.get_connection_status(connection_id) != \
            ConnectionStatus.CONNECTION_REQUEST:
        LOGGER.debug("Connection's previous message was not a"
                     " ConnectionRequest, Remove connection to %s",
                     connection_id)
        violation = AuthorizationViolation(
            violation=RoleType.Value("NETWORK"))
        return HandlerResult(
            HandlerStatus.RETURN_AND_CLOSE,
            message_out=violation,
            message_type=validator_pb2.Message.AUTHORIZATION_VIOLATION)
    random_payload = os.urandom(PAYLOAD_LENGTH)
    self._challenge_payload_cache[connection_id] = random_payload
    auth_challenge_response = AuthorizationChallengeResponse(
        payload=random_payload)
    self._network.update_connection_status(
        connection_id,
        ConnectionStatus.AUTH_CHALLENGE_REQUEST)
    return HandlerResult(
        HandlerStatus.RETURN,
        message_out=auth_challenge_response,
        message_type=validator_pb2.Message.AUTHORIZATION_CHALLENGE_RESPONSE)
If the connection wants to take on a role that requires a challenge to be signed, it will request the challenge by sending an AuthorizationChallengeRequest to the validator it wishes to connect to. The validator will send back a random payload that must be signed.

If the connection has not sent a ConnectionRequest, or the connection has already received an AuthorizationChallengeResponse, an AuthorizationViolation will be returned and the connection will be closed.
def read_file(file_path_name):
    with io.open(os.path.join(os.path.dirname(__file__), file_path_name),
                 mode='rt', encoding='utf-8') as fd:
        return fd.read()
Read the content of the specified file.

@param file_path_name: path and name of the file to read.

@return: content of the specified file.
def _format_state_result(name, result, changes=None, comment=''):
    if changes is None:
        changes = {'old': '', 'new': ''}
    return {'name': name,
            'result': result,
            'changes': changes,
            'comment': comment}
Creates the state result dictionary.
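A quick standalone demonstration of the result shape this helper produces (the function is reproduced without the leading underscore so the example runs on its own; the state name and comment are made up):

```python
def format_state_result(name, result, changes=None, comment=''):
    # Default changes to an empty old/new pair rather than a shared mutable default.
    if changes is None:
        changes = {'old': '', 'new': ''}
    return {'name': name, 'result': result, 'changes': changes, 'comment': comment}

print(format_state_result('apache', True, comment='Service already running'))
print(format_state_result('nginx', False,
                          changes={'old': 'stopped', 'new': 'running'}))
```

Using `None` as the sentinel (instead of a dict literal in the signature) avoids the classic mutable-default-argument pitfall.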
def autodoc_skip(app, what, name, obj, skip, options):
    if name in config.EXCLUDE_MEMBERS:
        return True
    if name in config.INCLUDE_MEMBERS:
        return False
    return skip
Hook that tells autodoc to include or exclude certain fields. Sadly, it doesn't give a reference to the parent object, so only the ``name`` can be used for referencing. :type app: sphinx.application.Sphinx :param what: The parent type, ``class`` or ``module`` :type what: str :param name: The name of the child method/attribute. :type name: str :param obj: The child value (e.g. a method, dict, or module reference) :param options: The current autodoc settings. :type options: dict .. seealso:: http://www.sphinx-doc.org/en/stable/ext/autodoc.html#event-autodoc-skip-member
def cli(wio, send):
    command = send
    click.echo("UDP command: {}".format(command))
    result = udp.common_send(command)
    if result is None:
        return debug_error()
    else:
        click.echo(result)
Sends a UDP command to the wio device.

\b
DOES:
    Support "VERSION", "SCAN", "Blank?", "DEBUG", "ENDEBUG: 1", "ENDEBUG: 0",
    "APCFG: AP\\tPWDs\\tTOKENs\\tSNs\\tSERVER_Domains\\tXSERVER_Domain\\t\\r\\n"

Note:
    1. Ensure your device is in Configure Mode.
    2. Change your computer network to Wio's AP.

\b
EXAMPLE:
    wio udp --send [command], send UDP command
def now_micros(absolute=False) -> int:
    micros = int(time.time() * 1e6)
    if absolute:
        return micros
    return micros - EPOCH_MICROS
Return current micros since epoch as integer.
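A runnable sketch of both modes. `EPOCH_MICROS` is not shown in the source; here it is assumed to be a module-level constant captured when the module is first imported, so the relative mode measures microseconds since import:

```python
import time

# Assumption: a reference timestamp captured once at import time.
EPOCH_MICROS = int(time.time() * 1e6)

def now_micros(absolute=False) -> int:
    micros = int(time.time() * 1e6)
    if absolute:
        return micros
    return micros - EPOCH_MICROS

print(now_micros(absolute=True))  # microseconds since the Unix epoch
print(now_micros())               # microseconds since EPOCH_MICROS was set
```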
def serialize_options(options, block):
    for index, option_dict in enumerate(block.options):
        option = etree.SubElement(options, 'option')
        if index == block.correct_answer:
            option.set('correct', u'True')
            if hasattr(block, 'correct_rationale'):
                rationale = etree.SubElement(option, 'rationale')
                rationale.text = block.correct_rationale['text']
        text = etree.SubElement(option, 'text')
        text.text = option_dict.get('text', '')
        serialize_image(option_dict, option)
Serialize the options in peer instruction XBlock to xml

Args:
    options (lxml.etree.Element): The <options> XML element.
    block (PeerInstructionXBlock): The XBlock with configuration to serialize.

Returns:
    None
def create_typed_target(self, type, project, name, sources, requirements,
                        default_build, usage_requirements):
    assert isinstance(type, basestring)
    assert isinstance(project, ProjectTarget)
    assert is_iterable_typed(sources, basestring)
    assert is_iterable_typed(requirements, basestring)
    assert is_iterable_typed(default_build, basestring)
    return self.main_target_alternative(
        TypedTarget(name, project, type,
                    self.main_target_sources(sources, name),
                    self.main_target_requirements(requirements, project),
                    self.main_target_default_build(default_build, project),
                    self.main_target_usage_requirements(usage_requirements, project)))
Creates a TypedTarget with the specified properties. The 'name', 'sources', 'requirements', 'default_build' and 'usage_requirements' are assumed to be in the form specified by the user in Jamfile corresponding to 'project'.
def batch_put_attributes(self, items, replace=True):
    return self.connection.batch_put_attributes(self, items, replace)
Store attributes for multiple items.

:type items: dict or dict-like object
:param items: A dictionary-like object. The keys of the dictionary are the item names and the values are themselves dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call.

:type replace: bool
:param replace: Whether the attribute values passed in will replace existing values or will be added as additional values. Defaults to True.

:rtype: bool
:return: True if successful
def _get_known_noncoding_het_snp(data_dict):
    if data_dict['gene'] == '1':
        return None
    if data_dict['known_var'] == '1' and data_dict['ref_ctg_effect'] == 'SNP' \
            and data_dict['smtls_nts'] != '.' and ';' not in data_dict['smtls_nts']:
        nucleotides = data_dict['smtls_nts'].split(',')
        depths = data_dict['smtls_nts_depth'].split(',')
        if len(nucleotides) != len(depths):
            raise Error('Mismatch in number of inferred nucleotides from ctg_nt, smtls_nts, smtls_nts_depth columns. Cannot continue\n' + str(data_dict))
        try:
            var_nucleotide = data_dict['known_var_change'][-1]
            depths = [int(x) for x in depths]
            nuc_to_depth = dict(zip(nucleotides, depths))
            total_depth = sum(depths)
            var_depth = nuc_to_depth.get(var_nucleotide, 0)
            percent_depth = round(100 * var_depth / total_depth, 1)
        except:
            return None
        return data_dict['known_var_change'], percent_depth
    else:
        return None
If ref is coding, return None.

If the data dict has a known snp, and samtools made a call, then return the string ref_name_change and the % of reads supporting the variant type.

If noncoding, but no samtools call, then return None.
def collectData(reads1, reads2, square, matchAmbiguous):
    result = defaultdict(dict)
    for id1, read1 in reads1.items():
        for id2, read2 in reads2.items():
            if id1 != id2 or not square:
                match = compareDNAReads(
                    read1, read2, matchAmbiguous=matchAmbiguous)['match']
                if not matchAmbiguous:
                    assert match['ambiguousMatchCount'] == 0
                result[id1][id2] = result[id2][id1] = match
    return result
Get pairwise matching statistics for two sets of reads.

@param reads1: An C{OrderedDict} of C{str} read ids whose values are C{Read} instances. These will be the rows of the table.

@param reads2: An C{OrderedDict} of C{str} read ids whose values are C{Read} instances. These will be the columns of the table.

@param square: If C{True} we are making a square table of a set of sequences against themselves (in which case we show nothing on the diagonal).

@param matchAmbiguous: If C{True}, count ambiguous nucleotides that are possibly correct as actually being correct. Otherwise, we are strict and insist that only non-ambiguous nucleotides can contribute to the matching nucleotide count.