def set_distributed_assembled(self, irn_loc, jcn_loc, a_loc):
    self.set_distributed_assembled_rows_cols(irn_loc, jcn_loc)
    self.set_distributed_assembled_values(a_loc)
Set the distributed assembled matrix. Distributed assembled matrices require setting icntl(18) != 0.
def get_all_runs(self, only_largest_budget=False):
    all_runs = []
    for k in self.data.keys():
        runs = self.get_runs_by_id(k)
        if len(runs) > 0:
            if only_largest_budget:
                all_runs.append(runs[-1])
            else:
                all_runs.extend(runs)
    return all_runs
Returns all runs performed.

Parameters
----------
only_largest_budget: boolean
    If True, only the largest budget for each configuration is returned.
    This makes sense if the runs are continued across budgets and the info
    field contains the information you care about. If False, all runs of a
    configuration are returned.
def _adjusted_script_code(self, script):
    script_code = ByteData()
    if script[0] == len(script) - 1:
        return script
    script_code += VarInt(len(script))
    script_code += script
    return script_code
Checks if the script code passed in to the sighash function is already
length-prepended.

This will break if there's a redeem script that's just a pushdata.
That won't happen in practice.

Args:
    script (bytes): the spend script
Returns:
    (bytes): the length-prepended script (if necessary)
def do_application_actions_plus(parser, token):
    nodelist = parser.parse(('end_application_actions',))
    parser.delete_first_token()
    return ApplicationActionsPlus(nodelist)
Render actions available with extra text.
def auth_interactive(self, username, handler, submethods=""):
    if (not self.active) or (not self.initial_kex_done):
        raise SSHException("No existing session")
    my_event = threading.Event()
    self.auth_handler = AuthHandler(self)
    self.auth_handler.auth_interactive(
        username, handler, my_event, submethods
    )
    return self.auth_handler.wait_for_response(my_event)
Authenticate to the server interactively. A handler is used to answer
arbitrary questions from the server. On many servers, this is just a
dumb wrapper around PAM.

This method will block until the authentication succeeds or fails,
periodically calling the handler asynchronously to get answers to
authentication questions. The handler may be called more than once if
the server continues to ask questions.

The handler is expected to be a callable that will handle calls of the
form: ``handler(title, instructions, prompt_list)``. The ``title`` is
meant to be a dialog-window title, and the ``instructions`` are user
instructions (both are strings). ``prompt_list`` will be a list of
prompts, each prompt being a tuple of ``(str, bool)``. The string is
the prompt and the boolean indicates whether the user text should be
echoed.

A sample call would thus be:
``handler('title', 'instructions', [('Password:', False)])``.

The handler should return a list or tuple of answers to the server's
questions.

If the server requires multi-step authentication (which is very rare),
this method will return a list of auth types permissible for the next
step. Otherwise, in the normal case, an empty list is returned.

:param str username: the username to authenticate as
:param callable handler: a handler for responding to server questions
:param str submethods: a string list of desired submethods (optional)
:return:
    list of auth types permissible for the next stage of
    authentication (normally empty).

:raises: `.BadAuthenticationType` -- if public-key authentication isn't
    allowed by the server for this user
:raises: `.AuthenticationException` -- if the authentication failed
:raises: `.SSHException` -- if there was a network error

.. versionadded:: 1.5
def _split_path(self, path):
    if '\\' in path:
        p = path.find('\\')
        hive = path[:p]
        path = path[p+1:]
    else:
        hive = path
        path = None
    handle = self._hives_by_name[hive.upper()]
    return handle, path
Splits a Registry path and returns the hive and key.

@type  path: str
@param path: Registry path.

@rtype:  tuple( int, str )
@return: Tuple containing the hive handle and the subkey path.
    The hive handle is always one of the following integer constants:
     - L{win32.HKEY_CLASSES_ROOT}
     - L{win32.HKEY_CURRENT_USER}
     - L{win32.HKEY_LOCAL_MACHINE}
     - L{win32.HKEY_USERS}
     - L{win32.HKEY_PERFORMANCE_DATA}
     - L{win32.HKEY_CURRENT_CONFIG}
def _setup_transitions(tdef, states, prev=()):
    trs = list(prev)
    for transition in tdef:
        if len(transition) == 3:
            (name, source, target) = transition
            if is_string(source) or isinstance(source, State):
                source = [source]
            source = [states[src] for src in source]
            target = states[target]
            tr = Transition(name, source, target)
        else:
            raise TypeError(
                "Elements of the 'transition' attribute of a "
                "workflow should be three-tuples; got %r instead." % (transition,)
            )
        if any(prev_tr.name == tr.name for prev_tr in trs):
            trs = [tr if prev_tr.name == tr.name else prev_tr for prev_tr in trs]
        else:
            trs.append(tr)
    return TransitionList(trs)
Create a TransitionList object from a 'transitions' Workflow attribute.

Args:
    tdef: list of transition definitions
    states (StateList): already parsed state definitions.
    prev (TransitionList): transition definitions from a parent.

Returns:
    TransitionList: the list of transitions defined in the 'tdef' argument.
def det_cplx_loglr(self, det):
    try:
        return getattr(self._current_stats, '{}_cplx_loglr'.format(det))
    except AttributeError:
        self._loglr()
        return getattr(self._current_stats, '{}_cplx_loglr'.format(det))
Returns the complex log likelihood ratio in the given detector.

Parameters
----------
det : str
    The name of the detector.

Returns
-------
complex float :
    The complex log likelihood ratio.
def reflect_left(self, value):
    if value > self:
        value = self.reflect(value)
    return value
Only reflects the value if it is > self.
def _reference_keys(self, reference):
    if not isinstance(reference, six.string_types):
        raise TypeError(
            'When using ~ to reference dynamic attributes ref must be a str. '
            'a {0} was provided.'.format(type(reference).__name__)
        )
    if '~' in reference:
        reference = reference[1:]
        scheme = self._scheme_references.get(reference)
        if not scheme:
            raise LookupError(
                "Was unable to find {0} in the scheme references. "
                "available references {1}".format(
                    reference, ', '.join(self._scheme_references.keys()))
            )
        return scheme['keys']
    else:
        raise AttributeError(
            'references must start with ~. '
            'Please update {0} and retry.'.format(reference))
Returns a list of all keys for a given reference.

:param reference: a :string:
:rtype: A :list: of reference keys.
def ValidateServiceGaps(self, problems, validation_start_date,
                        validation_end_date, service_gap_interval):
    if service_gap_interval is None:
        return
    departures = self.GenerateDateTripsDeparturesList(validation_start_date,
                                                      validation_end_date)
    first_day_without_service = validation_start_date
    last_day_without_service = validation_start_date
    consecutive_days_without_service = 0
    for day_date, day_trips, _ in departures:
        if day_trips == 0:
            if consecutive_days_without_service == 0:
                first_day_without_service = day_date
            consecutive_days_without_service += 1
            last_day_without_service = day_date
        else:
            if consecutive_days_without_service >= service_gap_interval:
                problems.TooManyDaysWithoutService(
                    first_day_without_service,
                    last_day_without_service,
                    consecutive_days_without_service)
            consecutive_days_without_service = 0
    if consecutive_days_without_service >= service_gap_interval:
        problems.TooManyDaysWithoutService(
            first_day_without_service,
            last_day_without_service,
            consecutive_days_without_service)
Validate consecutive dates without service in the feed.

Issue a warning if it finds service gaps of at least
"service_gap_interval" consecutive days in the date range
[validation_start_date, last_service_date).

Args:
    problems: The problem reporter object
    validation_start_date: A date object representing the date from which
        the validation should take place
    validation_end_date: A date object representing the first day the feed
        is active
    service_gap_interval: An integer indicating how many consecutive days
        the service gaps need to have for a warning to be issued

Returns:
    None
def takes_parameters(count):
    def decorator(f):
        @wraps(f)
        def wrapper(filter_operation_info, location, context, parameters,
                    *args, **kwargs):
            if len(parameters) != count:
                raise GraphQLCompilationError(
                    u'Incorrect number of parameters, expected {} got '
                    u'{}: {}'.format(count, len(parameters), parameters))
            return f(filter_operation_info, location, context, parameters,
                     *args, **kwargs)
        return wrapper
    return decorator
Ensure the filter function has "count" parameters specified.
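A minimal self-contained sketch of how such a parameter-count decorator behaves. The `filter_operation_info`, `location`, and `context` arguments from the source are dropped here for brevity, and a plain `ValueError` stands in for `GraphQLCompilationError`, which is assumed to come from the surrounding library:

```python
from functools import wraps

def takes_parameters(count):
    """Fail fast unless the wrapped filter receives exactly `count` parameters."""
    def decorator(f):
        @wraps(f)
        def wrapper(parameters, *args, **kwargs):
            if len(parameters) != count:
                # ValueError stands in for GraphQLCompilationError.
                raise ValueError(
                    'Incorrect number of parameters, expected {} got {}: {}'
                    .format(count, len(parameters), parameters))
            return f(parameters, *args, **kwargs)
        return wrapper
    return decorator

@takes_parameters(2)
def between_filter(parameters):
    # A hypothetical two-parameter filter: just unpack and echo its bounds.
    low, high = parameters
    return (low, high)
```

Calling `between_filter(['$low', '$high'])` succeeds, while `between_filter(['$only'])` raises before the filter body ever runs.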
def OnCellBackgroundColor(self, event):
    with undo.group(_("Background color")):
        self.grid.actions.set_attr("bgcolor", event.color)
    self.grid.ForceRefresh()
    self.grid.update_attribute_toolbar()
    event.Skip()
Cell background color event handler
def bin(self):
    bits = 32 if self.v == 4 else 128
    return bin(self.ip).split('b')[1].rjust(bits, '0')
Full-length binary representation of the IP address.

>>> ip = IP("127.0.0.1")
>>> print(ip.bin())
01111111000000000000000000000001
def _loadError(ErrorType, thrownError, tracebackString):
    RemoteException = asRemoteException(ErrorType)
    return RemoteException(thrownError, tracebackString)
constructor of RemoteExceptions
def device_id_partition_keygen(request_envelope):
    try:
        device_id = request_envelope.context.system.device.device_id
        return device_id
    except AttributeError:
        raise PersistenceException("Couldn't retrieve device id from "
                                   "request envelope, for partition key use")
Retrieve device id from request envelope, to use as partition key.

:param request_envelope: Request Envelope passed during skill invocation
:type request_envelope: ask_sdk_model.RequestEnvelope
:return: Device Id retrieved from request envelope
:rtype: str
:raises: :py:class:`ask_sdk_core.exceptions.PersistenceException`
def find_element(self, value, by=By.ID, update=False) -> Elements:
    if update or not self._nodes:
        self.uidump()
    for node in self._nodes:
        if node.attrib[by] == value:
            bounds = node.attrib['bounds']
            coord = list(map(int, re.findall(r'\d+', bounds)))
            click_point = ((coord[0] + coord[2]) / 2,
                           (coord[1] + coord[3]) / 2)
            return self._element_cls(self, node.attrib, by, value,
                                     coord, click_point)
    raise NoSuchElementException(f'No such element: {by}={value!r}.')
Find an element, returning the first match.
def inheritance_diagram_directive(name, arguments, options, content, lineno,
                                  content_offset, block_text, state,
                                  state_machine):
    node = inheritance_diagram()
    class_names = arguments
    graph = InheritanceGraph(class_names)
    for name in graph.get_all_class_names():
        refnodes, x = xfileref_role(
            'class', ':class:`%s`' % name, name, 0, state)
        node.extend(refnodes)
    node['graph'] = graph
    node['parts'] = options.get('parts', 0)
    node['content'] = " ".join(class_names)
    return [node]
Run when the inheritance_diagram directive is first encountered.
def parallel_bulk(
    client,
    actions,
    thread_count=4,
    chunk_size=500,
    max_chunk_bytes=100 * 1024 * 1024,
    queue_size=4,
    expand_action_callback=expand_action,
    *args,
    **kwargs
):
    from multiprocessing.pool import ThreadPool

    actions = map(expand_action_callback, actions)

    class BlockingPool(ThreadPool):
        def _setup_queues(self):
            super(BlockingPool, self)._setup_queues()
            self._inqueue = Queue(max(queue_size, thread_count))
            self._quick_put = self._inqueue.put

    pool = BlockingPool(thread_count)
    try:
        for result in pool.imap(
            lambda bulk_chunk: list(
                _process_bulk_chunk(
                    client, bulk_chunk[1], bulk_chunk[0], *args, **kwargs
                )
            ),
            _chunk_actions(
                actions, chunk_size, max_chunk_bytes, client.transport.serializer
            ),
        ):
            for item in result:
                yield item
    finally:
        pool.close()
        pool.join()
Parallel version of the bulk helper run in multiple threads at once.

:arg client: instance of :class:`~elasticsearch.Elasticsearch` to use
:arg actions: iterator containing the actions
:arg thread_count: size of the threadpool to use for the bulk requests
:arg chunk_size: number of docs in one chunk sent to es (default: 500)
:arg max_chunk_bytes: the maximum size of the request in bytes (default: 100MB)
:arg raise_on_error: raise ``BulkIndexError`` containing errors (as `.errors`)
    from the execution of the last chunk when some occur. By default we raise.
:arg raise_on_exception: if ``False`` then don't propagate exceptions from
    call to ``bulk`` and just report the items that failed as failed.
:arg expand_action_callback: callback executed on each action passed in,
    should return a tuple containing the action line and the data line
    (`None` if data line should be omitted).
:arg queue_size: size of the task queue between the main thread (producing
    chunks to send) and the processing threads.
def authorize_download(dataset_name=None):
    print('Acquiring resource: ' + dataset_name)
    print('')
    dr = data_resources[dataset_name]
    print('Details of data: ')
    print(dr['details'])
    print('')
    if dr['citation']:
        print('Please cite:')
        print(dr['citation'])
        print('')
    if dr['size']:
        print('After downloading the data will take up '
              + str(dr['size']) + ' bytes of space.')
        print('')
    print('Data will be stored in '
          + os.path.join(data_path, dataset_name) + '.')
    print('')
    if overide_manual_authorize:
        if dr['license']:
            print('You have agreed to the following license:')
            print(dr['license'])
            print('')
        return True
    else:
        if dr['license']:
            print('You must also agree to the following license:')
            print(dr['license'])
            print('')
        return prompt_user('Do you wish to proceed with the download? [yes/no]')
Check with the user that they are happy with the terms and conditions for the data set.
def _setup_pyudev_monitoring(self):
    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    self.udev_observer = pyudev.MonitorObserver(monitor, self._udev_event)
    self.udev_observer.start()
    self.py3_wrapper.log("udev monitoring enabled")
Set up the udev monitor.
def process_api_config_response(self, config_json):
    with self._config_lock:
        self._add_discovery_config()
        for config in config_json.get('items', []):
            lookup_key = config.get('name', ''), config.get('version', '')
            self._configs[lookup_key] = config
        for config in self._configs.itervalues():
            name = config.get('name', '')
            api_version = config.get('api_version', '')
            path_version = config.get('path_version', '')
            sorted_methods = self._get_sorted_methods(config.get('methods', {}))
            for method_name, method in sorted_methods:
                self._save_rest_method(method_name, name, path_version, method)
Parses a JSON API config and registers methods for dispatch.

Side effects:
    Parses method name, etc. for all methods and updates the indexing
    data structures with the information.

Args:
    config_json: A dict, the JSON body of the getApiConfigs response.
def allow_unsigned(self, mav, msgId):
    if self.allow is None:
        self.allow = {
            mavutil.mavlink.MAVLINK_MSG_ID_RADIO: True,
            mavutil.mavlink.MAVLINK_MSG_ID_RADIO_STATUS: True,
        }
    if msgId in self.allow:
        return True
    if self.settings.allow_unsigned:
        return True
    return False
see if an unsigned packet should be allowed
def read_files(*filenames):
    output = []
    for filename in filenames:
        f = codecs.open(filename, encoding='utf-8')
        try:
            output.append(f.read())
        finally:
            f.close()
    return '\n\n'.join(output)
Output the contents of one or more files to a single concatenated string.
def has_image(self, digest, mime_type, index, size=500):
    cache_key = f"img:{index}:{size}:{digest}"
    return mime_type.startswith("image/") or cache_key in self.cache
Tell if there is a preview image.
def fetch_user(app_id, token, ticket,
               url_detail='https://pswdless.appspot.com/rest/detail'):
    return FetchUserWithValidation(app_id, token, ticket, url_detail)
Fetch the user detail from Passwordless.
def render(self):
    value = self.value
    if value is None:
        value = []
    fmt = [Int.fmt]
    data = [len(value)]
    for item_value in value:
        if issubclass(self.item_class, Primitive):
            item = self.item_class(item_value)
        else:
            item = item_value
        item_format, item_data = item.render()
        fmt.extend(item_format)
        data.extend(item_data)
    return "".join(fmt), data
Creates a composite ``struct`` format and the data to render with it. The format and data are prefixed with a 32-bit integer denoting the number of elements, after which each of the items in the array value are ``render()``-ed and added to the format and data as well.
def get_batch_result_ids(self, job_id, batch_id):
    response = requests.get(self._get_batch_results_url(job_id, batch_id),
                            headers=self._get_batch_info_headers())
    response.raise_for_status()
    root = ET.fromstring(response.text)
    result_ids = [r.text for r in root.findall('%sresult' % self.API_NS)]
    return result_ids
Get result IDs of a batch that has completed processing.

:param job_id: job_id as returned by 'create_operation_job(...)'
:param batch_id: batch_id as returned by 'create_batch(...)'
:return: list of batch result IDs to be used in 'get_batch_result(...)'
def split_by_idxs(seq, idxs):
    last = 0
    for idx in idxs:
        if not (-len(seq) <= idx < len(seq)):
            raise KeyError(f'Idx {idx} is out-of-bounds')
        yield seq[last:idx]
        last = idx
    yield seq[last:]
A generator that returns sequence pieces, separated by indexes specified in idxs.
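The generator above can be exercised directly; it is redefined here so the snippet is self-contained:

```python
def split_by_idxs(seq, idxs):
    """Yield pieces of seq, split at each index in idxs."""
    last = 0
    for idx in idxs:
        if not (-len(seq) <= idx < len(seq)):
            raise KeyError(f'Idx {idx} is out-of-bounds')
        yield seq[last:idx]
        last = idx
    yield seq[last:]

# Splitting at indexes 2 and 4 yields three pieces; the final piece
# (everything after the last index) is always emitted.
pieces = list(split_by_idxs([10, 20, 30, 40, 50], [2, 4]))
# pieces == [[10, 20], [30, 40], [50]]
```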
def getModule(self, moduleName):
    if moduleName not in self.moduleCache:
        modulePath = FilePath(
            athena.jsDeps.getModuleForName(moduleName)._cache.path)
        cachedModule = self.moduleCache[moduleName] = CachedJSModule(
            moduleName, modulePath)
    else:
        cachedModule = self.moduleCache[moduleName]
    return cachedModule
Retrieve a JavaScript module cache from the file path cache.

@returns: Module cache for the named module.
@rtype: L{CachedJSModule}
def update_stats_history(self):
    if self.get_key() is None:
        item_name = ''
    else:
        item_name = self.get_key()
    if self.get_export() and self.history_enable():
        for i in self.get_items_history_list():
            if isinstance(self.get_export(), list):
                for l in self.get_export():
                    self.stats_history.add(
                        nativestr(l[item_name]) + '_' + nativestr(i['name']),
                        l[i['name']],
                        description=i['description'],
                        history_max_size=self._limits['history_size'])
            else:
                self.stats_history.add(
                    nativestr(i['name']),
                    self.get_export()[i['name']],
                    description=i['description'],
                    history_max_size=self._limits['history_size'])
Update stats history.
def get_config_dict(self):
    config_dict = {}
    for dotted_key, value in self.get_config_values().items():
        subkeys = dotted_key.split('.')
        d = config_dict
        for key in subkeys:
            d = d.setdefault(key, value if key == subkeys[-1] else {})
    return config_dict
Reconstruct the nested structure of this object's configuration and return it as a dict.
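A standalone sketch of the dotted-key-to-nested-dict reconstruction the method performs. A plain dict stands in for `get_config_values()`, and the loop is written slightly differently from the source (leaf assignment separated from intermediate `setdefault` calls) while producing the same shape:

```python
def nest_config(flat):
    """Turn {'a.b.c': value} style keys into nested dicts."""
    config = {}
    for dotted_key, value in flat.items():
        subkeys = dotted_key.split('.')
        d = config
        # Walk/create intermediate dicts, then assign the leaf value.
        for key in subkeys[:-1]:
            d = d.setdefault(key, {})
        d[subkeys[-1]] = value
    return config

nested = nest_config({'db.host': 'localhost', 'db.port': 5432, 'debug': True})
# nested == {'db': {'host': 'localhost', 'port': 5432}, 'debug': True}
```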
def _consume_next(self):
    response = six.next(self._response_iterator)
    self._counter += 1
    if self._metadata is None:
        metadata = self._metadata = response.metadata
        source = self._source
        if source is not None and source._transaction_id is None:
            source._transaction_id = metadata.transaction.id
    if response.HasField("stats"):
        self._stats = response.stats
    values = list(response.values)
    if self._pending_chunk is not None:
        values[0] = self._merge_chunk(values[0])
    if response.chunked_value:
        self._pending_chunk = values.pop()
    self._merge_values(values)
Consume the next partial result set from the stream.

Parse the result set into new/existing rows in :attr:`_rows`.
def re_pipe(FlowRate, Diam, Nu):
    ut.check_range([FlowRate, ">0", "Flow rate"],
                   [Diam, ">0", "Diameter"],
                   [Nu, ">0", "Nu"])
    return (4 * FlowRate) / (np.pi * Diam * Nu)
Return the Reynolds Number for a pipe.
def get(self, sid):
    return AssignedAddOnContext(
        self._version,
        account_sid=self._solution['account_sid'],
        resource_sid=self._solution['resource_sid'],
        sid=sid,
    )
Constructs an AssignedAddOnContext.

:param sid: The unique string that identifies the resource

:returns: twilio.rest.api.v2010.account.incoming_phone_number.assigned_add_on.AssignedAddOnContext
:rtype: twilio.rest.api.v2010.account.incoming_phone_number.assigned_add_on.AssignedAddOnContext
def get_repo_url(pypirc, repository):
    pypirc = os.path.abspath(os.path.expanduser(pypirc))
    pypi_config = base.PyPIConfig(pypirc)
    repo_config = pypi_config.get_repo_config(repository)
    if repo_config:
        return repo_config.get_clean_url()
    else:
        return base.RepositoryURL(repository)
Fetch the RepositoryURL for a given repository, reading info from pypirc.

Will try to find the repository in the .pypirc, including username/password.

Args:
    pypirc (str): path to the .pypirc config file
    repository (str): URL or alias for the repository

Returns:
    base.RepositoryURL for the repository
def get_open_spaces(board):
    open_spaces = []
    for i in range(3):
        for j in range(3):
            if board[i][j] == 0:
                open_spaces.append(encode_pos(i, j))
    return open_spaces
Given a representation of the board, returns a list of open spaces.
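A runnable sketch of the scan over a 3x3 board. The `encode_pos` helper is not defined in the source, so a hypothetical flattening to indexes 0..8 is assumed here:

```python
def encode_pos(i, j):
    """Hypothetical encoding: flatten (row, col) on a 3x3 board to 0..8."""
    return 3 * i + j

def get_open_spaces(board):
    """Return encoded positions of every cell that holds 0."""
    return [encode_pos(i, j)
            for i in range(3) for j in range(3)
            if board[i][j] == 0]

board = [[1, 0, 0],
         [0, 2, 0],
         [0, 0, 1]]
open_cells = get_open_spaces(board)
# open_cells == [1, 2, 3, 5, 6, 7]
```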
def _rebuild_fields(self):
    new_field_lists = []
    field_list_len = self.min_elements
    while not field_list_len > self.max_elements:
        how_many = self.max_elements + 1 - field_list_len
        i = 0
        while i < how_many:
            current = self.random.sample(self._fields, field_list_len)
            if current not in new_field_lists:
                new_field_lists.append(current)
            i += 1
        field_list_len += 1
    new_containers = []
    for i, fields in enumerate(new_field_lists):
        dup_fields = [field.copy() for field in fields]
        if self.get_name():
            name = '%s_sublist_%d' % (self.get_name(), i)
        else:
            name = 'sublist_%d' % (i)
        new_containers.append(Container(fields=dup_fields,
                                        encoder=self.subcontainer_encoder,
                                        name=name))
    self.replace_fields(new_containers)
We take the original fields and create subsets of them; each subset will be
set into a container. All the resulting containers will then replace the
original _fields. Since we inherit from OneOf, each time only one of them
will be mutated and used. This is super ugly and dangerous; any idea how to
implement it in a better way is welcome.
def fromgroups(args):
    from jcvi.formats.bed import Bed

    p = OptionParser(fromgroups.__doc__)
    opts, args = p.parse_args(args)

    if len(args) < 2:
        sys.exit(not p.print_help())

    groupsfile = args[0]
    bedfiles = args[1:]
    beds = [Bed(x) for x in bedfiles]
    fp = open(groupsfile)
    groups = [row.strip().split(",") for row in fp]
    for b1, b2 in product(beds, repeat=2):
        extract_pairs(b1, b2, groups)
%prog fromgroups groupsfile a.bed b.bed ...

Flatten the gene families into pairs; the groupsfile is a file with each
line containing the members, separated by comma. The command also requires
several bed files in order to sort the pairs into different piles (e.g.
pairs of species in comparison).
def img2ascii(img_path, ascii_path, ascii_char="*", pad=0):
    if len(ascii_char) != 1:
        raise Exception("ascii_char has to be single character.")

    image = Image.open(img_path).convert("L")
    matrix = np.array(image)
    matrix[np.where(matrix >= 128)] = 255
    matrix[np.where(matrix < 128)] = 0

    lines = list()
    for vector in matrix:
        line = list()
        for i in vector:
            line.append(" " * pad)
            if i:
                line.append(" ")
            else:
                line.append(ascii_char)
        lines.append("".join(line))

    with open(ascii_path, "w") as f:
        f.write("\n".join(lines))
Convert an image to ascii art text.

Suppose we have an image like that:

.. image:: images/rabbit.png
    :align: left

Put some codes::

    >>> from weatherlab.math.img2waveform import img2ascii
    >>> img2ascii(r"testdata\img2waveform\rabbit.png",
    ...     r"testdata\img2waveform\asciiart.txt", pad=0)

Then you will see the rendered ascii-art rabbit in asciiart.txt
(the full character-art sample is omitted here; its layout does not
survive extraction).

:param img_path: the image file path
:type img_path: str

:param ascii_path: the output ascii text file path
:type ascii_path: str

:param pad: how many spaces are filled in between two pixels
:type pad: int
def find_typed_function(pytype, prefix, suffix, module=lal):
    laltype = to_lal_type_str(pytype)
    return getattr(module, '{0}{1}{2}'.format(prefix, laltype, suffix))
Returns the lal method for the correct type.

Parameters
----------
pytype : `type`, `numpy.dtype`
    the python type, or dtype, to map

prefix : `str`
    the function name prefix (before the type tag)

suffix : `str`
    the function name suffix (after the type tag)

Raises
------
AttributeError
    if the function is not found

Examples
--------
>>> from gwpy.utils.lal import find_typed_function
>>> find_typed_function(float, 'Create', 'Sequence')
<built-in function CreateREAL8Sequence>
def prepend(self, _, child, name=None):
    self._insert(child, prepend=True, name=name)
    return self
Adds children to this tag, starting from the first position.
def from_dictionary(cls, dictionary):
    if not isinstance(dictionary, dict):
        raise TypeError('dictionary has to be a dict type, got: {}'.format(
            type(dictionary)))
    return cls(dictionary)
Parse a dictionary representing all command line parameters.
def screenshot(filename="screenshot.png"):
    if not settings.plotter_instance.window:
        colors.printc('~bomb screenshot(): Rendering window is not present, skip.',
                      c=1)
        return
    w2if = vtk.vtkWindowToImageFilter()
    w2if.ShouldRerenderOff()
    w2if.SetInput(settings.plotter_instance.window)
    w2if.ReadFrontBufferOff()
    w2if.Update()
    pngwriter = vtk.vtkPNGWriter()
    pngwriter.SetFileName(filename)
    pngwriter.SetInputConnection(w2if.GetOutputPort())
    pngwriter.Write()
Save a screenshot of the current rendering window.
def upload(config, remote_loc, u_filename):
    rcode = False
    try:
        sftp, transport = get_sftp_conn(config)
        remote_dir = get_remote_path(remote_loc)
        for part in ['sha1', 'asc']:
            local_file = '%s.%s' % (u_filename, part)
            remote_file = os.path.join(remote_dir, local_file)
            sftp.put(local_file, remote_file)
        sftp.put(u_filename, os.path.join(remote_dir, u_filename))
        rcode = True
    except BaseException:
        pass
    finally:
        if 'transport' in locals():
            transport.close()
    return rcode
Upload the file and its signature files over SFTP.
def debug_callback(event, *args, **kwds):
    l = ['event %s' % (event.type,)]
    if args:
        l.extend(map(str, args))
    if kwds:
        l.extend(sorted('%s=%s' % t for t in kwds.items()))
    print('Debug callback (%s)' % ', '.join(l))
Example callback, useful for debugging.
def interm_fluent_variables(self) -> FluentParamsList:
    fluents = self.domain.intermediate_fluents
    ordering = self.domain.interm_fluent_ordering
    return self._fluent_params(fluents, ordering)
Returns the instantiated intermediate fluents in canonical order.

Returns:
    Sequence[Tuple[str, List[str]]]: A tuple of pairs of fluent name
    and a list of instantiated fluents represented as strings.
def parse_code(url):
    result = urlparse(url)
    query = parse_qs(result.query)
    return query['code']
Parse the code parameter from a URL.

:param str url: URL to parse
:return: code query parameter
:rtype: str
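Because the function uses only the standard library, it can be tried directly. Note that `parse_qs` maps each parameter to a *list* of values, so the function actually returns a list even though the docstring advertises `str`:

```python
from urllib.parse import urlparse, parse_qs

def parse_code(url):
    """Extract the 'code' query parameter (parse_qs yields a list of values)."""
    result = urlparse(url)
    query = parse_qs(result.query)
    return query['code']

# Hypothetical OAuth-style callback URL for illustration:
codes = parse_code('https://example.com/callback?code=abc123&state=xyz')
# codes == ['abc123']
```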
def add_action_to(cls, parser, action, subactions, level):
    p = parser.add_parser(action.name,
                          description=action.description,
                          argument_default=argparse.SUPPRESS)
    for arg in action.args:
        arg.add_argument_to(p)
    if subactions:
        subparsers = cls._add_subparsers_required(
            p,
            dest=settings.SUBASSISTANT_N_STRING.format(level),
            title=cls.subactions_str,
            description=cls.subactions_desc)
        for subact, subsubacts in sorted(subactions.items(),
                                         key=lambda x: x[0].name):
            cls.add_action_to(subparsers, subact, subsubacts, level + 1)
Adds given action to given parser.

Args:
    parser: instance of devassistant_argparse.ArgumentParser
    action: devassistant.actions.Action subclass
    subactions: dict with subactions - {SubA: {SubB: {}}, SubC: {}}
def save(self, *args, **kwargs):
    if self.send_html:
        self.content = get_text_for_html(self.html_content)
    else:
        self.html_content = None
    super(EmailTemplate, self).save(*args, **kwargs)
If this is an HTML template, then set the non-HTML content to be the stripped version of the HTML. If this is a plain text template, then set the HTML content to be null.
def lr_find(self, start_lr=1e-5, end_lr=10, wds=None, linear=False, **kwargs):
    self.save('tmp')
    layer_opt = self.get_layer_opt(start_lr, wds)
    self.sched = LR_Finder(layer_opt, len(self.data.trn_dl), end_lr,
                           linear=linear)
    self.fit_gen(self.model, self.data, layer_opt, 1, **kwargs)
    self.load('tmp')
Helps you find an optimal learning rate for a model.

It uses the technique developed in the 2015 paper `Cyclical Learning
Rates for Training Neural Networks`_, where we simply keep increasing
the learning rate from a very small value, until the loss starts
decreasing.

Args:
    start_lr (float/numpy array): Passing in a numpy array allows you to
        specify learning rates for a learner's layer_groups
    end_lr (float): The maximum learning rate to try.
    wds (iterable/float)

Examples:
    As training moves us closer to the optimal weights for a model, the
    optimal learning rate will be smaller. We can take advantage of that
    knowledge and provide lr_find() with a starting learning rate 1000x
    smaller than the model's current learning rate as such:

    >> learn.lr_find(lr/1000)

    >> lrs = np.array([ 1e-4, 1e-3, 1e-2 ])
    >> learn.lr_find(lrs / 1000)

Notes:
    lr_find() may finish before going through each batch of examples if
    the loss decreases enough.

.. _Cyclical Learning Rates for Training Neural Networks:
    http://arxiv.org/abs/1506.01186
def _compute_gas_price(probabilities, desired_probability):
    first = probabilities[0]
    last = probabilities[-1]

    if desired_probability >= first.prob:
        return int(first.gas_price)
    elif desired_probability <= last.prob:
        return int(last.gas_price)

    for left, right in sliding_window(2, probabilities):
        if desired_probability < right.prob:
            continue
        elif desired_probability > left.prob:
            raise Exception('Invariant')

        adj_prob = desired_probability - right.prob
        window_size = left.prob - right.prob
        position = adj_prob / window_size
        gas_window_size = left.gas_price - right.gas_price
        gas_price = int(math.ceil(right.gas_price + gas_window_size * position))
        return gas_price
    else:
        raise Exception('Invariant')
Given a sorted range of ``Probability`` named-tuples, returns a gas price computed based on where the ``desired_probability`` would fall within the range. :param probabilities: An iterable of `Probability` named-tuples sorted in reverse order. :param desired_probability: A floating point representation of the desired probability. (e.g. ``85% -> 0.85``)
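The interpolation above can be sketched in isolation. This is a minimal standalone version, assuming a hypothetical `Probability` named-tuple with `gas_price` and `prob` fields and input sorted by descending probability:

```python
import math
from collections import namedtuple

# Hypothetical stand-in for the named-tuple the function expects.
Probability = namedtuple('Probability', ['gas_price', 'prob'])

def compute_gas_price(probabilities, desired_probability):
    # Clamp to the ends of the range, then linearly interpolate between
    # the two entries whose probabilities bracket the desired one.
    first, last = probabilities[0], probabilities[-1]
    if desired_probability >= first.prob:
        return int(first.gas_price)
    if desired_probability <= last.prob:
        return int(last.gas_price)
    for left, right in zip(probabilities, probabilities[1:]):
        if right.prob <= desired_probability <= left.prob:
            position = (desired_probability - right.prob) / (left.prob - right.prob)
            gas_window = left.gas_price - right.gas_price
            return int(math.ceil(right.gas_price + gas_window * position))
    raise AssertionError('unreachable for sorted input')

# Sorted in reverse order of probability, as the docstring requires.
probs = [Probability(100, 0.75), Probability(20, 0.25)]
```

A desired probability of 0.5 sits halfway between the 0.25 and 0.75 entries, so the sketch returns the midpoint gas price of 60.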
def add_exception_handler(self, exception_handler): if exception_handler is None or not isinstance( exception_handler, AbstractExceptionHandler): raise DispatchException( "Input is not an AbstractExceptionHandler instance") self._exception_handlers.append(exception_handler)
Checks the type before adding it to the exception_handlers instance variable. :param exception_handler: Exception Handler instance. :type exception_handler: ask_sdk_runtime.dispatch_components.exception_components.AbstractExceptionHandler :raises: :py:class:`ask_sdk_runtime.exceptions.DispatchException` if a null input is provided or if the input is of invalid type
def week(self): self.magnification = 345600 self._update(self.baseNumber, self.magnification) return self
Set the unit to week.
def make_random_client_id(self): if PY2: return ('py_%s' % base64.b64encode(str(random.randint(1, 0x40000000)))) else: return ('py_%s' % base64.b64encode(bytes(str(random.randint(1, 0x40000000)), 'ascii')))
Returns a random client identifier
def _get_fct_number_of_arg(self, fct): py_version = sys.version_info[0] if py_version >= 3: return len(inspect.signature(fct).parameters) return len(inspect.getargspec(fct)[0])
Get the number of arguments of a function.
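On Python 3 alone the version branch is unnecessary; a sketch of the counting using only `inspect.signature`:

```python
import inspect

def arg_count(fct):
    # signature() works for plain functions, lambdas and methods alike;
    # defaults and keyword-only parameters are all counted.
    return len(inspect.signature(fct).parameters)

def example(a, b, c=1):
    return a + b + c
```

`arg_count(example)` counts all three parameters, the defaulted one included.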
def message(blockers): if not blockers: encoding = getattr(sys.stdout, 'encoding', '') if encoding: encoding = encoding.lower() if encoding == 'utf-8': flair = "\U0001F389 " else: flair = '' return [flair + 'You have 0 projects blocking you from using Python 3!'] flattened_blockers = set() for blocker_reasons in blockers: for blocker in blocker_reasons: flattened_blockers.add(blocker) need = 'You need {0} project{1} to transition to Python 3.' formatted_need = need.format(len(flattened_blockers), 's' if len(flattened_blockers) != 1 else '') can_port = ('Of {0} {1} project{2}, {3} {4} no direct dependencies ' 'blocking {5} transition:') formatted_can_port = can_port.format( 'those' if len(flattened_blockers) != 1 else 'that', len(flattened_blockers), 's' if len(flattened_blockers) != 1 else '', len(blockers), 'have' if len(blockers) != 1 else 'has', 'their' if len(blockers) != 1 else 'its') return formatted_need, formatted_can_port
Create a sequence of key messages based on what is blocking.
def to_json(self): sets = self.sets() return sorted(sorted(x) for x in sets)
Returns the equivalence classes as a sorted list of sorted lists.
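The double sort makes the serialization canonical: equal partitions always produce the same output regardless of iteration order. A standalone sketch of the same idea:

```python
def classes_to_json(sets):
    # Sort the members of each class, then sort the classes themselves
    # (lists compare lexicographically), giving a deterministic layout.
    return sorted(sorted(members) for members in sets)
```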
def configure(self, config): self.config = config self.update_monitors() for profile in ('worker', 'result'): for _ in range(config['threads'][profile]['number']): worker = threading.Thread(target=config['threads'][profile]['function']) worker.daemon = True worker.start() self.heartbeat() self.refresh_stopper = set_interval(config['interval']['refresh']*1000, self.update_monitors) self.heartbeat_stopper = set_interval(config['interval']['heartbeat']*1000, self.heartbeat) self.reporting_stopper = set_interval(config['interval']['reporting']*1000, self.reporting) return self
Configure the Monitor: pull the list of what to monitor and initialize the worker and result threads.
def prepare_uuid(data, schema): if isinstance(data, uuid.UUID): return str(data) else: return data
Converts uuid.UUID to string formatted UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
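A standalone sketch of the same pass-through conversion, without the unused `schema` parameter:

```python
import uuid

def prepare_uuid(data):
    # Only uuid.UUID instances are converted; everything else passes through.
    return str(data) if isinstance(data, uuid.UUID) else data
```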
def profile_add(user, profile): ret = {} profiles = profile.split(',') known_profiles = profile_list().keys() valid_profiles = [p for p in profiles if p in known_profiles] log.debug( 'rbac.profile_add - profiles=%s, known_profiles=%s, valid_profiles=%s', profiles, known_profiles, valid_profiles, ) if valid_profiles: res = __salt__['cmd.run_all']('usermod -P "{profiles}" {login}'.format( login=user, profiles=','.join(set(profile_get(user) + valid_profiles)), )) if res['retcode'] > 0: ret['Error'] = { 'retcode': res['retcode'], 'message': res['stderr'] if 'stderr' in res else res['stdout'] } return ret active_profiles = profile_get(user, False) for p in profiles: if p not in valid_profiles: ret[p] = 'Unknown' elif p in active_profiles: ret[p] = 'Added' else: ret[p] = 'Failed' return ret
Add profile to user user : string username profile : string profile name CLI Example: .. code-block:: bash salt '*' rbac.profile_add martine 'Primary Administrator' salt '*' rbac.profile_add martine 'User Management,User Security'
def withSize(cls, minimum, maximum): class X(cls): subtypeSpec = cls.subtypeSpec + constraint.ValueSizeConstraint( minimum, maximum) X.__name__ = cls.__name__ return X
Creates a subclass with a value size constraint.
def supports_caller(func): def wrap_stackframe(context, *args, **kwargs): context.caller_stack._push_frame() try: return func(context, *args, **kwargs) finally: context.caller_stack._pop_frame() return wrap_stackframe
Apply a caller_stack compatibility decorator to a plain Python function. See the example in :ref:`namespaces_python_modules`.
def ic_pos(self, row1, row2=None): if row2 is None: row2 = [0.25,0.25,0.25,0.25] score = 0 for a,b in zip(row1, row2): if a > 0: score += a * log(a / b) / log(2) return score
Calculate the information content of one position. Returns ------- score : float Information content.
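The score is the Kullback-Leibler divergence, in bits, between the observed base frequencies and the background distribution. A self-contained sketch without the class context:

```python
from math import log2

def ic_pos(row, background=(0.25, 0.25, 0.25, 0.25)):
    # Relative entropy of the observed frequencies against the background;
    # zero-frequency bases contribute nothing, matching the a > 0 guard above.
    return sum(a * log2(a / b) for a, b in zip(row, background) if a > 0)
```

A fully conserved position scores log2(4) = 2 bits against a uniform background, while a position matching the background exactly scores 0.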
def get_app_state(device_id, app_id): if not is_valid_app_id(app_id): abort(403) if not is_valid_device_id(device_id): abort(403) if device_id not in devices: abort(404) app_state = devices[device_id].app_state(app_id) return jsonify(state=app_state, status=app_state)
Get the state of the requested app
def strip_non_ascii(s): stripped = (c for c in s if 0 < ord(c) < 127) clean_string = u''.join(stripped) return clean_string
Returns the string without non-ASCII characters. Parameters ---------- s : string A string that may contain non-ASCII characters. Returns ------- clean_string : string A string that does not contain non-ASCII characters.
def get_app(): from bottle import default_app default_app.push() for module in ("mongo_orchestration.apps.servers", "mongo_orchestration.apps.replica_sets", "mongo_orchestration.apps.sharded_clusters"): __import__(module) app = default_app.pop() return app
Return the Bottle app that includes all sub-apps.
def get_event_string(self, evtype, code): if WIN and evtype == 'Key': try: code = self.codes['wincodes'][code] except KeyError: pass try: return self.codes[evtype][code] except KeyError: raise UnknownEventCode("We don't know this event.", evtype, code)
Get the string name of the event.
def parse(self, requires_cfg=True): self._parse_default() self._parse_config(requires_cfg) self._parse_env()
Parse the configuration sources into `Bison`. Args: requires_cfg (bool): Specify whether or not parsing should fail if a config file is not found. (default: True)
def num_fails(self): n = len(self.failed_phase_list) if self.phase_stack[-1].status in (SolverStatus.failed, SolverStatus.cyclic): n += 1 return n
Return the number of failed solve steps that have been executed. Note that num_solves is inclusive of failures.
def download(self): return self._client.download_object( self._instance, self._bucket, self.name)
Download this object.
def getActiveSegment(self, c, i, timeStep): nSegments = len(self.cells[c][i]) bestActivation = self.activationThreshold which = -1 for j,s in enumerate(self.cells[c][i]): activity = self.getSegmentActivityLevel(s, self.activeState[timeStep], connectedSynapsesOnly = True) if activity >= bestActivation: bestActivation = activity which = j if which != -1: return self.cells[c][i][which] else: return None
For a given cell, return the segment with the strongest _connected_ activation, i.e. sum up the activations of the connected synapses of the segments only. That is, a segment is active only if it has enough connected synapses.
def replace(self, new): assert self.parent is not None, str(self) assert new is not None if not isinstance(new, list): new = [new] l_children = [] found = False for ch in self.parent.children: if ch is self: assert not found, (self.parent.children, self, new) if new is not None: l_children.extend(new) found = True else: l_children.append(ch) assert found, (self.children, self, new) self.parent.changed() self.parent.children = l_children for x in new: x.parent = self.parent self.parent = None
Replace this node with a new one in the parent.
def block_offset_bounds(self, namespace): cursor = self.cursor cursor.execute('SELECT MIN("offset"), MAX("offset") ' 'FROM gauged_statistics WHERE namespace = %s', (namespace,)) return cursor.fetchone()
Get the minimum and maximum block offset for the specified namespace
def PostRegistration(method): if not isinstance(method, types.FunctionType): raise TypeError("@PostRegistration can only be applied on functions") validate_method_arity(method, "service_reference") _append_object_entry( method, constants.IPOPO_METHOD_CALLBACKS, constants.IPOPO_CALLBACK_POST_REGISTRATION, ) return method
The service post-registration callback decorator is called after a service of the component has been registered to the framework. The decorated method must accept the :class:`~pelix.framework.ServiceReference` of the registered service as argument:: @PostRegistration def callback_method(self, service_reference): ''' service_reference: The ServiceReference of the provided service ''' # ... :param method: The decorated method :raise TypeError: The decorated element is not a valid function
def _prepare_statement(sql_statement, parameters):
    placeholders = RdbmsConnection._get_placeholders(sql_statement, parameters)
    for (variable_name, (variable_type, variable_value)) in placeholders.iteritems():
        if isinstance(variable_value, (list, set, tuple)):
            sql_statement = RdbmsConnection._replace_placeholder(sql_statement, (variable_name, variable_type, variable_value))
            del parameters[variable_name]
    return sql_statement
Prepare the specified SQL statement, replacing the placeholders by the value of the given parameters @param sql_statement: the string expression of a SQL statement. @param parameters: a dictionary of parameters where the key represents the name of a parameter and the value represents the value of this parameter to replace in each placeholder of this parameter in the SQL statement. @return: a string representation of the SQL statement where the placeholders have been replaced by the value of the corresponding variables, depending on the type of these variables.
def rsync_upload(): excludes = ["*.pyc", "*.pyo", "*.db", ".DS_Store", ".coverage", "local_settings.py", "/static", "/.git", "/.hg"] local_dir = os.getcwd() + os.sep return rsync_project(remote_dir=env.proj_path, local_dir=local_dir, exclude=excludes)
Uploads the project with rsync excluding some files and folders.
def _dasd_reverse_conversion(cls, val, **kwargs): if val is not None: if val.upper() == 'ADMINISTRATORS': return '0' elif val.upper() == 'ADMINISTRATORS AND POWER USERS': return '1' elif val.upper() == 'ADMINISTRATORS AND INTERACTIVE USERS': return '2' elif val.upper() == 'NOT DEFINED': return '9999' else: return 'Invalid Value' else: return 'Not Defined'
Converts DASD string values to the corresponding reg_sz value.
def add_general_report_optgroup(parser): g = parser.add_argument_group("Reporting Options") g.add_argument("--report-dir", action="store", default=None) g.add_argument("--report", action=_opt_cb_report, help="comma-separated list of report formats")
General Reporting Options
def file_list(self, load): if 'env' in load: load.pop('env') ret = set() if 'saltenv' not in load: return [] if not isinstance(load['saltenv'], six.string_types): load['saltenv'] = six.text_type(load['saltenv']) for fsb in self.backends(load.pop('fsbackend', None)): fstr = '{0}.file_list'.format(fsb) if fstr in self.servers: ret.update(self.servers[fstr](load)) prefix = load.get('prefix', '').strip('/') if prefix != '': ret = [f for f in ret if f.startswith(prefix)] return sorted(ret)
Return a list of files from the dominant environment
def syscall_from_number(self, number, allow_unsupported=True, abi=None): abilist = self.syscall_abis if abi is None else [abi] if self.syscall_library is None: if not allow_unsupported: raise AngrUnsupportedSyscallError("%s does not have a library of syscalls implemented" % self.name) proc = P['stubs']['syscall']() elif not allow_unsupported and not self.syscall_library.has_implementation(number, self.arch, abilist): raise AngrUnsupportedSyscallError("No implementation for syscall %d" % number) else: proc = self.syscall_library.get(number, self.arch, abilist) if proc.abi is not None: baseno, minno, _ = self.syscall_abis[proc.abi] mapno = number - minno + baseno else: mapno = self.unknown_syscall_number proc.addr = mapno * self.syscall_addr_alignment + self.kernel_base return proc
Get a syscall SimProcedure from its number. :param number: The syscall number :param allow_unsupported: Whether to return a "stub" syscall for unsupported numbers instead of throwing an error :param abi: The name of the abi to use. If None, will assume that the abis have disjoint numbering schemes and pick the right one. :return: The SimProcedure for the syscall
def on_batch_end(self, train, **kwargs:Any)->None: "Take one step forward on the annealing schedule for the optim params." if train: if self.idx_s >= len(self.lr_scheds): return {'stop_training': True, 'stop_epoch': True} self.opt.lr = self.lr_scheds[self.idx_s].step() self.opt.mom = self.mom_scheds[self.idx_s].step() if self.lr_scheds[self.idx_s].is_done: self.idx_s += 1
Take one step forward on the annealing schedule for the optim params.
def field_value(key, label, color, padding): if not clr.has_colors and padding > 0: padding = 7 if color == "bright gray" or color == "dark gray": bright_prefix = "" else: bright_prefix = "bright " field = clr.stringc(key, "{0}{1}".format(bright_prefix, color)) field_label = clr.stringc(label, color) return "{0:>{1}} {2}".format(field, padding, field_label)
Return a specific field's stats formatted as a colorized string.
def sys_version(version_tuple): old_version = sys.version_info sys.version_info = version_tuple yield sys.version_info = old_version
Set a temporary sys.version_info tuple :param version_tuple: a fake sys.version_info tuple
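The generator body reads like half of a context manager; presumably it is wrapped with `contextlib.contextmanager` at the definition site. A sketch with the decorator spelled out, plus a try/finally so the real value is restored even if the block raises:

```python
import sys
from contextlib import contextmanager

@contextmanager
def fake_sys_version(version_tuple):
    # Swap sys.version_info for the duration of the with-block.
    old_version = sys.version_info
    sys.version_info = version_tuple
    try:
        yield version_tuple
    finally:
        sys.version_info = old_version
```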
def _GenerateSection(self, problem_type): if problem_type == transitfeed.TYPE_WARNING: dataset_problems = self._dataset_warnings heading = 'Warnings' else: dataset_problems = self._dataset_errors heading = 'Errors' if not dataset_problems: return '' prefix = '<h2 class="issueHeader">%s:</h2>' % heading dataset_sections = [] for dataset_merger, problems in dataset_problems.items(): dataset_sections.append('<h3>%s</h3><ol>%s</ol>' % ( dataset_merger.FILE_NAME, '\n'.join(problems))) body = '\n'.join(dataset_sections) return prefix + body
Generate a listing of the given type of problems. Args: problem_type: The type of problem. This is one of the problem type constants from transitfeed. Returns: The generated HTML as a string.
def is_link_inline(cls, tag, attribute): if tag in cls.TAG_ATTRIBUTES \ and attribute in cls.TAG_ATTRIBUTES[tag]: attr_flags = cls.TAG_ATTRIBUTES[tag][attribute] return attr_flags & cls.ATTR_INLINE return attribute != 'href'
Return whether the link is likely to be inline object.
def infer_doy_max(arr): cal = arr.time.encoding.get('calendar', None) if cal in calendars: doy_max = calendars[cal] else: doy_max = arr.time.dt.dayofyear.max().data if len(arr.time) < 360: raise ValueError("Cannot infer the calendar from a series less than a year long.") if doy_max not in [360, 365, 366]: raise ValueError("The target array's calendar is not recognized") return doy_max
Return the largest doy allowed by calendar. Parameters ---------- arr : xarray.DataArray Array with `time` coordinate. Returns ------- int The largest day of the year found in calendar.
def manager(self, value): "Set the manager object in the global _managers dict." pid = current_process().ident if _managers is None: raise RuntimeError("Can not set the manager following a system exit.") if pid not in _managers: _managers[pid] = value else: raise Exception("Manager already set for pid %s" % pid)
Set the manager object in the global _managers dict.
def read_json(self, xblock): self._warn_deprecated_outside_JSONField() return self.to_json(self.read_from(xblock))
Retrieve the serialized value for this field from the specified xblock
def privmsg_many(self, targets, text): target = ','.join(targets) return self.privmsg(target, text)
Send a PRIVMSG command to multiple targets.
def ncores_allocated(self): return sum(task.manager.num_cores for task in self if task.status in [task.S_SUB, task.S_RUN])
Returns the number of CPUs allocated in this moment. A core is allocated if it's running a task or if we have submitted a task to the queue manager but the job is still pending.
def search(self, query, pagination, result_field): result = [] url = "/".join((self.url, query)) while url: log.debug("Pagure query: {0}".format(url)) try: response = requests.get(url, headers=self.headers) log.data("Response headers:\n{0}".format(response.headers)) except requests.RequestException as error: log.error(error) raise ReportError("Pagure search {0} failed.".format(self.url)) data = response.json() objects = data[result_field] log.debug("Result: {0} fetched".format( listed(len(objects), "item"))) log.data(pretty(data)) if not objects: break result.extend(objects) url = data[pagination]['next'] return result
Perform a Pagure query, following pagination until all results are fetched.
def get_requires():
    # Use a context manager so the file handle is closed deterministically.
    with open("requirements.txt", "r") as requirements_file:
        requirements = requirements_file.read()
    return [requirement for requirement in requirements.split() if requirement]
Read requirements.txt.
def is_image(file): match = re.match(r'\.(png|jpe?g)', _get_extension(file), re.IGNORECASE) if match: return True else: return isinstance(resolve_bot_file_id(file), types.Photo)
Returns ``True`` if the file extension looks like an image file to Telegram.
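The extension check can be sketched without any Telegram types; this hedged version keeps only the regex heuristic and drops the `resolve_bot_file_id` fallback that the real helper uses:

```python
import re

def extension_looks_like_image(filename):
    # Extension-only heuristic; the real helper additionally inspects the
    # Telegram bot-file id when the extension is inconclusive.
    dot = filename.rfind('.')
    ext = filename[dot:] if dot != -1 else ''
    return bool(re.match(r'\.(png|jpe?g)$', ext, re.IGNORECASE))
```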
def profile_list(default_only=False): profiles = {} default_profiles = ['All'] with salt.utils.files.fopen('/etc/security/policy.conf', 'r') as policy_conf: for policy in policy_conf: policy = salt.utils.stringutils.to_unicode(policy) policy = policy.split('=') if policy[0].strip() == 'PROFS_GRANTED': default_profiles.extend(policy[1].strip().split(',')) with salt.utils.files.fopen('/etc/security/prof_attr', 'r') as prof_attr: for profile in prof_attr: profile = salt.utils.stringutils.to_unicode(profile) profile = profile.split(':') if len(profile) != 5: continue profiles[profile[0]] = profile[3] if default_only: for p in [p for p in profiles if p not in default_profiles]: del profiles[p] return profiles
List all available profiles default_only : boolean return only default profile CLI Example: .. code-block:: bash salt '*' rbac.profile_list
def connect(self, receiver): if not callable(receiver): raise ValueError('Invalid receiver: %s' % receiver) self.receivers.append(receiver)
Append receiver.
def compute_grouped_sigma(ungrouped_sigma, group_matrix): group_matrix = np.array(group_matrix, dtype=np.bool) sigma_masked = np.ma.masked_array(ungrouped_sigma * group_matrix.T, mask=(group_matrix ^ 1).T) sigma_agg = np.ma.mean(sigma_masked, axis=1) sigma = np.zeros(group_matrix.shape[1], dtype=np.float) np.copyto(sigma, sigma_agg, where=group_matrix.sum(axis=0) == 1) np.copyto(sigma, np.NAN, where=group_matrix.sum(axis=0) != 1) return sigma
Returns sigma for the groups of parameter values in the argument ungrouped_sigma. Sigma is only defined for groups consisting of exactly one parameter; all other groups are set to NaN.
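The masked-array machinery reduces to a simple rule: a group keeps its parameter's sigma only when it contains exactly one parameter, and is NaN otherwise. A pure-Python sketch using index lists instead of the boolean group matrix:

```python
def grouped_sigma(ungrouped_sigma, groups):
    # groups[j] lists the parameter indices that belong to group j.
    out = []
    for members in groups:
        if len(members) == 1:
            out.append(ungrouped_sigma[members[0]])
        else:
            out.append(float('nan'))
    return out
```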
def version_info(self): package = pkg.get_distribution('git-up') local_version_str = package.version local_version = package.parsed_version print('GitUp version is: ' + colored('v' + local_version_str, 'green')) if not self.settings['updates.check']: return print('Checking for updates...', end='') try: reader = codecs.getreader('utf-8') details = json.load(reader(urlopen(PYPI_URL))) online_version = details['info']['version'] except (HTTPError, URLError, ValueError): recent = True else: recent = local_version >= pkg.parse_version(online_version) if not recent: print( '\rRecent version is: ' + colored('v' + online_version, color='yellow', attrs=['bold']) ) print('Run \'pip install -U git-up\' to get the update.') else: sys.stdout.write('\r' + ' ' * 80 + '\n')
Tell what version we're running and whether it's up to date.
def create_environment(self, **kwargs): environment = super().create_environment(**kwargs) environment.tests.update({ 'type': self.test_type, 'kind': self.test_kind, 'opposite_before_self': self.test_opposite_before_self, }) environment.filters.update({ 'docstringline': self.filter_docstringline, 'pyquotesingle': self.filter_pyquotesingle, 'derivedname': self.filter_derived_name, 'refqualifiers': self.filter_refqualifiers, 'attrqualifiers': self.filter_attrqualifiers, 'supertypes': self.filter_supertypes, 'all_contents': self.filter_all_contents, 'pyfqn': self.filter_pyfqn, 're_sub': lambda v, p, r: re.sub(p, r, v), 'set': self.filter_set, }) from pyecore import ecore environment.globals.update({'ecore': ecore}) return environment
Return a new Jinja environment. Derived classes may override method to pass additional parameters or to change the template loader type.
def files_type(fs0, fs1, files): for file_meta in files['deleted_files']: file_meta['type'] = fs0.file(file_meta['path']) for file_meta in files['created_files'] + files['modified_files']: file_meta['type'] = fs1.file(file_meta['path']) return files
Inspects the file type of the given files.