Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def all_min(tensors): return _apply_all_reduce('min', tensors)
Returns a list of tensors with the all-reduce min across `tensors`. The computation is done with an all-reduce operation, so if only some of the returned tensors are evaluated then the computation will hang. Args: tensors: The input tensors across which to reduce; must be assigned to GPU devices. Returns: List of tensors, each with the minimum of the input tensors, where tensor i has the same device as `tensors[i]`.
github-repos
def parse(filename, encoding=None): with open(filename, encoding=encoding) as source: for line in source: for word in line.split(): yield word
!DEMO! Simple file parsing generator. Args: filename: absolute or relative path to a file on disk. encoding: encoding string that is passed to the open function. Yields: str: the next whitespace-separated word in the file.
juraj-google-style
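A quick usage sketch for the `parse` generator above (the function is repeated here so the example is self-contained; the temporary file is illustrative):

```python
import os
import tempfile

def parse(filename, encoding=None):
    # Yield whitespace-separated words from a file, one at a time.
    with open(filename, encoding=encoding) as source:
        for line in source:
            for word in line.split():
                yield word

# Write a small demo file, then stream its words lazily.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    with os.fdopen(fd, "w", encoding="utf8") as handle:
        handle.write("hello world\nfoo bar\n")
    words = list(parse(path, encoding="utf8"))
    print(words)  # ['hello', 'world', 'foo', 'bar']
finally:
    os.remove(path)
```

Because `parse` is a generator, the whole file is never held in memory; words are produced on demand.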
def _GetSourceFileSystem(self, source_path_spec, resolver_context=None): if not source_path_spec: raise RuntimeError('Missing source.') file_system = path_spec_resolver.Resolver.OpenFileSystem( source_path_spec, resolver_context=resolver_context) type_indicator = source_path_spec.type_indicator if path_spec_factory.Factory.IsSystemLevelTypeIndicator(type_indicator): mount_point = source_path_spec else: mount_point = source_path_spec.parent return file_system, mount_point
Retrieves the file system of the source. Args: source_path_spec (dfvfs.PathSpec): source path specification of the file system. resolver_context (dfvfs.Context): resolver context. Returns: tuple: containing: dfvfs.FileSystem: file system. dfvfs.PathSpec: mount point path specification that refers to the base location of the file system. Raises: RuntimeError: if source path specification is not set.
juraj-google-style
def abort_all_if(expr, reason, extras=None): if expr: abort_all(reason, extras)
Abort all subsequent tests, if the expression evaluates to True. Args: expr: The expression that is evaluated. reason: The reason to abort. extras: An optional field for extra information to be included in test result. Raises: signals.TestAbortAll: Abort all subsequent tests.
github-repos
def __init__(self, paths, case_sensitive=True, path_segment_separator='/'): super(PathFilterScanTree, self).__init__() self._case_sensitive = case_sensitive self._path_segment_separator = path_segment_separator self._root_node = None if not self._case_sensitive: paths = [path.lower() for path in paths] path_filter_table = _PathFilterTable( paths, [], path_segment_separator=self._path_segment_separator) if path_filter_table.paths: self._root_node = self._BuildScanTreeNode(path_filter_table, [])
Initializes and builds a path filter scan tree. Args: paths: a list of strings containing the paths. case_sensitive: optional boolean value to indicate string matches should be case sensitive. path_segment_separator: optional string containing the path segment separator.
juraj-google-style
def find_required_filehandlers(self, requirements, filename_info): req_fh = [] filename_info = set(filename_info.items()) if requirements: for requirement in requirements: for fhd in self.file_handlers[requirement]: if set(fhd.filename_info.items()).issubset(filename_info): req_fh.append(fhd) break else: raise RuntimeError('No matching requirement file of type {}'.format(requirement)) return req_fh
Find the necessary file handlers for the given requirements. We assume here requirements are available. Raises: KeyError, if no handler for the given requirements is available. RuntimeError, if there is a handler for the given requirements, but it doesn't match the filename info.
codesearchnet
def make_connection(self):
Makes a connection to the snippet server on the remote device. This function makes a connection to the server and sends a handshake request to ensure the server is available for upcoming RPCs. There are two types of connections used by snippet clients: * The client makes a new connection each time it needs to send an RPC. * The client makes a connection in this stage and uses it for all the RPCs. In this case, the client should implement `close_connection` to close the connection. Raises: errors.ProtocolError: something went wrong when exchanging data with the server.
github-repos
def _get_attributes(self, attributes): params = [] if isinstance(attributes, dict): for attribute_key in attributes.keys(): attribute_value = attributes.get(attribute_key) if validator.is_attribute_valid(attribute_key, attribute_value): attribute_id = self.config.get_attribute_id(attribute_key) if attribute_id: params.append({ 'entity_id': attribute_id, 'key': attribute_key, 'type': self.EventParams.CUSTOM, 'value': attribute_value }) bot_filtering_value = self._get_bot_filtering() if isinstance(bot_filtering_value, bool): params.append({ 'entity_id': enums.ControlAttributes.BOT_FILTERING, 'key': enums.ControlAttributes.BOT_FILTERING, 'type': self.EventParams.CUSTOM, 'value': bot_filtering_value }) return params
Get attribute(s) information. Args: attributes: Dict representing user attributes and values which need to be recorded. Returns: List consisting of valid attributes for the user. Empty otherwise.
juraj-google-style
def CheckFlowCanBeStartedOnClient(flow_name): flow_cls = flow.GRRFlow.GetPlugin(flow_name) if flow_cls.category: return True else: raise access_control.UnauthorizedAccess(("Flow %s can't be started on a client by non-suid users." % flow_name))
Checks if flow can be started on a particular client. Only flows with a category can be started. Having a category means that the flow will be accessible from the UI. Args: flow_name: Name of the flow to check access for. Returns: True if flow is externally accessible. Raises: access_control.UnauthorizedAccess: if flow is not externally accessible.
codesearchnet
def AddFile(self, fd, external=True): files_for_write = [] for sub_store in self.GetChildrenByPriority(allow_external=external): new_file = sub_store.AddFile(fd) if new_file: files_for_write.append(new_file) fd.Seek(0) while files_for_write: data = fd.Read(self.CHUNK_SIZE) if not data: break for child in files_for_write: child.Write(data) for child in files_for_write: child.Close()
Create a new file in the file store. We delegate the actual file addition to our contained implementations. Implementations can either implement the AddFile() method, returning a file like object which will be written on, or directly support the AddBlobToStore() method which can copy the VFSBlobImage efficiently. Args: fd: An AFF4 object open for read/write. external: If true, attempt to add files to stores defined as EXTERNAL.
juraj-google-style
def __init__(self, channel): self.Predict = channel.unary_unary( "/google.cloud.automl.v1beta1.PredictionService/Predict", request_serializer=google_dot_cloud_dot_automl__v1beta1_dot_proto_dot_prediction__service__pb2.PredictRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_automl__v1beta1_dot_proto_dot_prediction__service__pb2.PredictResponse.FromString, ) self.BatchPredict = channel.unary_unary( "/google.cloud.automl.v1beta1.PredictionService/BatchPredict", request_serializer=google_dot_cloud_dot_automl__v1beta1_dot_proto_dot_prediction__service__pb2.BatchPredictRequest.SerializeToString, response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def __init__(self, **namespaces): super(Configuration, self).__init__() for key, entry in compat.iteritems(namespaces): self.register(key, entry)
Initialize a configuration with a series of namespaces. Args: **namespaces: Each keyword should be a Namespace object which will be added to the configuration file. Raises: TypeError: If an entry is not a Namespace object. ValueError: If the namespace is already registered.
juraj-google-style
def persist_compilestats(run, session, stats): for stat in stats: stat.run_id = run.id session.add(stat)
Persist the run results in the database. Args: run: The run we attach the compilestats to. session: The db transaction we belong to. stats: The stats we want to store in the database.
juraj-google-style
def find_divisors(n): if not isinstance(n, int): raise TypeError("Expecting a strictly positive integer") if n <= 0: raise ValueError("Expecting a strictly positive integer") divisors = set() for i in range(1, int(n**0.5) + 1): if n % i == 0: divisors.add(i) divisors.add(n // i) for divisor in sorted(divisors): yield divisor
Find all the positive divisors of the given integer n. Args: n (int): strictly positive integer Returns: A generator of all the positive divisors of n Raises: TypeError: if n is not an integer ValueError: if n is zero or negative
juraj-google-style
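The divisor search only needs to scan up to the square root of n, pairing each small divisor i with its cofactor n // i; a minimal standalone sketch of that trick:

```python
def divisors(n):
    # Collect divisors in pairs (i, n // i) while scanning only up to sqrt(n).
    if not isinstance(n, int) or n <= 0:
        raise ValueError("expecting a strictly positive integer")
    found = set()
    i = 1
    while i * i <= n:
        if n % i == 0:
            found.add(i)
            found.add(n // i)
        i += 1
    return sorted(found)

print(divisors(36))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]
```

The set absorbs the duplicate pair when n is a perfect square (here 6 pairs with itself at i = 6).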
def _ParseCmdItem(self, cmd_input, template_file=None): fsm = textfsm.TextFSM(template_file) if not self._keys: self._keys = set(fsm.GetValuesByAttrib('Key')) table = texttable.TextTable() table.header = fsm.header for record in fsm.ParseText(cmd_input): table.Append(record) return table
Creates Texttable with output of command. Args: cmd_input: String, Device response. template_file: File object, template to parse with. Returns: TextTable containing command output. Raises: CliTableError: A template was not found for the given command.
juraj-google-style
def _rand_dtype(rand, shape, dtype, scale=1.0, post=lambda x: x): r = lambda: numpy_compat.np_asarray(scale * rand(*_dims_of_shape(shape)), dtype) if onp.issubdtype(dtype, onp.complexfloating): vals = r() + 1j * r() else: vals = r() return _cast_to_shape(numpy_compat.np_asarray(post(vals), dtype), shape, dtype)
Produce random values given shape, dtype, scale, and post-processor. Args: rand: a function for producing random values of a given shape, e.g. a bound version of either onp.RandomState.randn or onp.RandomState.rand. shape: a shape value as a tuple of positive integers. dtype: a numpy dtype. scale: optional, a multiplicative scale for the random values (default 1). post: optional, a callable for post-processing the random values (default identity). Returns: An ndarray of the given shape and dtype using random values based on a call to rand but scaled, converted to the appropriate dtype, and post-processed.
github-repos
def GetLogicalLines(self): self._StartNewLine() return self._logical_lines
Fetch the result of the tree walk. Note: only call this after visiting the whole tree. Returns: A list of LogicalLine objects.
github-repos
def _ParseCachedEntry2003(self, value_data, cached_entry_offset): try: cached_entry = self._ReadStructureFromByteStream(value_data[cached_entry_offset:], cached_entry_offset, self._cached_entry_data_type_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError('Unable to parse cached entry value with error: {0!s}'.format(exception)) path_size = cached_entry.path_size maximum_path_size = cached_entry.maximum_path_size path_offset = cached_entry.path_offset if ((path_offset > 0) and (path_size > 0)): path_size += path_offset maximum_path_size += path_offset try: path = value_data[path_offset:path_size].decode('utf-16-le') except UnicodeDecodeError: raise errors.ParseError('Unable to decode cached entry path to string') cached_entry_object = AppCompatCacheCachedEntry() cached_entry_object.cached_entry_size = self._cached_entry_data_type_map.GetByteSize() cached_entry_object.file_size = getattr(cached_entry, 'file_size', None) cached_entry_object.last_modification_time = cached_entry.last_modification_time cached_entry_object.path = path return cached_entry_object
Parses a Windows 2003 cached entry. Args: value_data (bytes): value data. cached_entry_offset (int): offset of the first cached entry data relative to the start of the value data. Returns: AppCompatCacheCachedEntry: cached entry. Raises: ParseError: if the value data could not be parsed.
codesearchnet
def object_hook(self, object_dict): instance = self.decoder(object_dict) self.condition_list.append(instance) self.index += 1 return self.index
Hook which when passed into a json.JSONDecoder will replace each dict in a json string with its index and convert the dict to an object as defined by the passed in condition_decoder. The newly created condition object is appended to the conditions_list. Args: object_dict: Dict representing an object. Returns: An index which will be used as the placeholder in the condition_structure
juraj-google-style
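The same pattern, letting `json.JSONDecoder` invoke a hook for every decoded dict, can be demonstrated with a minimal hook that swaps each dict for a placeholder index (the class and names below are illustrative, not the original API):

```python
import json

class IndexingDecoder:
    # Replace every decoded JSON object with an index, collecting the
    # original dicts in decoding order (innermost objects first).
    def __init__(self):
        self.collected = []

    def hook(self, obj):
        self.collected.append(obj)
        return len(self.collected) - 1

dec = IndexingDecoder()
structure = json.loads('[{"a": 1}, {"b": 2}]', object_hook=dec.hook)
print(structure)       # [0, 1]
print(dec.collected)   # [{'a': 1}, {'b': 2}]
```

The decoded structure now holds indices into `collected`, which is exactly the placeholder scheme the docstring describes.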
def diff(self, sym: Symbol, n: int = 1, expand_simplify: bool = True): if not isinstance(sym, sympy.Basic): raise TypeError("%s needs to be a Sympy symbol" % sym) if sym.free_symbols.issubset(self.free_symbols): deriv = QuantumDerivative.create(self, derivs={sym: n}, vals=None) if not deriv.is_zero and expand_simplify: deriv = deriv.expand().simplify_scalar() return deriv else: return self.__class__._zero
Differentiate by scalar parameter `sym`. Args: sym: What to differentiate by. n: How often to differentiate expand_simplify: Whether to simplify the result. Returns: The n-th derivative.
juraj-google-style
def put(self, item, *args, **kwargs): if (not self.enabled): return timeout = kwargs.pop('timeout', None) if (timeout is None): timeout = self.default_timeout cache_key = self.make_key(args, kwargs) with self._cache_lock: self._cache[cache_key] = ((time() + timeout), item)
Put an item into the cache, for this combination of args and kwargs. Args: item: the item to store in the cache. *args: any arguments. **kwargs: any keyword arguments. If ``timeout`` is specified as one of the keyword arguments, the item will remain available for retrieval for ``timeout`` seconds. If ``timeout`` is `None` or not specified, the ``default_timeout`` for this cache will be used. Specify a ``timeout`` of 0 (or ensure that the ``default_timeout`` for this cache is 0) if this item is not to be cached.
codesearchnet
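A minimal sketch of the expiry bookkeeping: store `(deadline, item)` and treat any entry past its deadline as absent. The class and method names are illustrative, not the library's API:

```python
import threading
import time

class TimedCache:
    def __init__(self, default_timeout=60.0):
        self.default_timeout = default_timeout
        self._cache = {}
        self._lock = threading.Lock()

    def put(self, key, item, timeout=None):
        # A timeout of 0 means "do not cache": the deadline is already past.
        if timeout is None:
            timeout = self.default_timeout
        with self._lock:
            self._cache[key] = (time.monotonic() + timeout, item)

    def get(self, key, default=None):
        with self._lock:
            entry = self._cache.get(key)
            if entry is None or time.monotonic() >= entry[0]:
                return default
            return entry[1]

cache = TimedCache(default_timeout=10.0)
cache.put("a", 123)
cache.put("b", 456, timeout=0)  # immediately stale
print(cache.get("a"), cache.get("b"))  # 123 None
```

Using `time.monotonic()` instead of `time()` avoids expiry glitches when the wall clock is adjusted.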
def render_header(image: np.ndarray, header: str, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs): requires_backends(render_header, 'vision') image = to_pil_image(image, input_data_format=input_data_format) header_image = render_text(header, **kwargs) new_width = max(header_image.width, image.width) new_height = int(image.height * (new_width / image.width)) new_header_height = int(header_image.height * (new_width / header_image.width)) new_image = Image.new('RGB', (new_width, new_height + new_header_height), 'white') new_image.paste(header_image.resize((new_width, new_header_height)), (0, 0)) new_image.paste(image.resize((new_width, new_height)), (0, new_header_height)) new_image = to_numpy_array(new_image) if infer_channel_dimension_format(new_image) == ChannelDimension.LAST: new_image = to_channel_dimension_format(new_image, ChannelDimension.LAST) return new_image
Renders the input text as a header on the input image. Args: image (`np.ndarray`): The image to render the header on. header (`str`): The header text. data_format (`Union[ChannelDimension, str]`, *optional*): The data format of the image. Can be either "ChannelDimension.channels_first" or "ChannelDimension.channels_last". Returns: `np.ndarray`: The image with the header rendered.
github-repos
def get_asset_filename_to_add(asset_filepath, asset_filename_map): asset_filename = os.path.basename(asset_filepath) if asset_filename not in asset_filename_map: return asset_filename other_asset_filepath = asset_filename_map[asset_filename] if other_asset_filepath == asset_filepath: return asset_filename if not file_io.filecmp(asset_filepath, other_asset_filepath): return _get_unique_asset_filename(asset_filename, asset_filename_map) return asset_filename
Get a unique basename to add to the SavedModel if this file is unseen. Assets come from users as full paths, and we save them out to the SavedModel as basenames. In some cases, the basenames collide. Here, we dedupe asset basenames by first checking if the file is the same, and, if different, generate and return an index-suffixed basename that can be used to add the asset to the SavedModel. Args: asset_filepath: the full path to the asset that is being saved asset_filename_map: a dict of filenames used for saving the asset in the SavedModel to full paths from which the filenames were derived. Returns: Uniquified filename string if the file is not a duplicate, or the original filename if the file has already been seen and saved.
github-repos
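The dedupe decision can be sketched with `filecmp` from the standard library (standing in for TensorFlow's `file_io.filecmp`; the index-suffixing scheme below is illustrative):

```python
import filecmp
import os
import tempfile
from pathlib import Path

def asset_filename_to_add(asset_filepath, asset_filename_map):
    # Reuse the basename when unseen, or when it points at identical content;
    # otherwise suffix the stem with an index until the name is free.
    base = os.path.basename(asset_filepath)
    other = asset_filename_map.get(base)
    if other is None or other == asset_filepath:
        return base
    if filecmp.cmp(asset_filepath, other, shallow=False):
        return base
    stem, ext = os.path.splitext(base)
    i = 1
    while f"{stem}_{i}{ext}" in asset_filename_map:
        i += 1
    return f"{stem}_{i}{ext}"

tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "one", "vocab.txt")
b = os.path.join(tmp, "two", "vocab.txt")
os.makedirs(os.path.dirname(a))
os.makedirs(os.path.dirname(b))
Path(a).write_text("apple")
Path(b).write_text("banana")
seen = {"vocab.txt": a}
new_name = asset_filename_to_add(b, seen)   # different content -> suffixed
same_name = asset_filename_to_add(a, seen)  # same path -> basename reused
print(new_name, same_name)  # vocab_1.txt vocab.txt
```

`shallow=False` forces a byte-for-byte comparison rather than trusting matching stat metadata.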
def resolve_variables(variables, context, provider): for variable in variables: variable.resolve(context, provider)
Given a list of variables, resolve all of them. Args: variables (list of :class:`stacker.variables.Variable`): list of variables context (:class:`stacker.context.Context`): stacker context provider (:class:`stacker.provider.base.BaseProvider`): subclass of the base provider
codesearchnet
def matrices_to_flat_transforms(transform_matrices): with ops.name_scope('matrices_to_flat_transforms'): transform_matrices = ops.convert_to_tensor(transform_matrices, name='transform_matrices') if transform_matrices.shape.ndims not in (2, 3): raise ValueError('Matrices should be 2D or 3D, got: %s' % transform_matrices) transforms = array_ops.reshape(transform_matrices, constant_op.constant([-1, 9])) transforms /= transforms[:, 8:9] return transforms[:, :8]
Converts affine matrices to `tf.contrib.image` projective transforms. Note that we expect matrices that map output coordinates to input coordinates. To convert forward transformation matrices, call `tf.linalg.inv` on the matrices and use the result here. Args: transform_matrices: One or more affine transformation matrices, for the reverse transformation in homogeneous coordinates. Shape `(3, 3)` or `(N, 3, 3)`. Returns: 2D tensor of flat transforms with shape `(N, 8)`, which may be passed into `tf.contrib.image.transform`. Raises: ValueError: If `transform_matrices` have an invalid shape.
github-repos
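The normalization step, dividing each 3x3 matrix by its bottom-right entry and then dropping it, can be sketched without TensorFlow for a single matrix:

```python
def matrix_to_flat_transform(matrix):
    # matrix: 3x3 affine/projective matrix as nested lists (row-major).
    flat = [v for row in matrix for v in row]
    if len(flat) != 9:
        raise ValueError("expected a 3x3 matrix")
    last = flat[8]
    # Homogeneous coordinates are scale-invariant, so divide through by the
    # last entry, then drop it to obtain the 8 flat transform parameters.
    return [v / last for v in flat[:8]]

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(matrix_to_flat_transform(identity))
# [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]

# A pure translation by (2, 3) in homogeneous coordinates:
shift = [[1.0, 0.0, 2.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]]
print(matrix_to_flat_transform(shift))
# [1.0, 0.0, 2.0, 0.0, 1.0, 3.0, 0.0, 0.0]
```

The TensorFlow version does the same per-row over a batch, which is why the output shape is `(N, 8)`.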
def loop_until_valid_response(prompt): responses = {"Y": True, "YES": True, "TRUE": True, "N": False, "NO": False, "FALSE": False} response = "" while response.upper() not in responses: response = raw_input(prompt) return responses[response.upper()]
Loop over entering input until it is a valid bool-ish response. Args: prompt: Text presented to user. Returns: The bool value equivalent of what was entered.
juraj-google-style
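The loop becomes testable when the input function is injected; a sketch (the `input_func` parameter is an assumption for illustration, not part of the original API):

```python
def ask_bool(prompt, input_func=input):
    # Keep asking until the reply maps to a boolean; matching is
    # case-insensitive, mirroring the response table above.
    responses = {"Y": True, "YES": True, "TRUE": True,
                 "N": False, "NO": False, "FALSE": False}
    reply = ""
    while reply.upper() not in responses:
        reply = input_func(prompt)
    return responses[reply.upper()]

# Simulate a user who first types garbage, then "yes".
scripted = iter(["maybe", "yes"])
answer = ask_bool("Continue? ", input_func=lambda _: next(scripted))
print(answer)  # True
```

An invalid reply simply falls through the loop and re-prompts, so only the final valid token determines the result.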
def path(self, source, target): visited = set(source.split('+')) targets = (set(target.split('+')) - visited) for tablename in visited.union(targets): self[tablename] if (len(targets) == 0): return [] paths = [[(tablename, None)] for tablename in visited] while True: newpaths = [] for path in paths: (laststep, pivot) = path[(- 1)] if (laststep in targets): return path[1:] else: for key in self[laststep].keys(): for step in (set(self.find(key)) - visited): visited.add(step) newpaths.append((path + [(step, key)])) if newpaths: paths = newpaths else: break raise ItsdbError('no relation path found from {} to {}'.format(source, target))
Find the path of id fields connecting two tables. This is just a basic breadth-first-search. The relations file should be small enough to not be a problem. Returns: list: (table, fieldname) pairs describing the path from the source to target tables Raises: :class:`delphin.exceptions.ItsdbError`: when no path is found Example: >>> relations.path('item', 'result') [('parse', 'i-id'), ('result', 'parse-id')] >>> relations.path('parse', 'item') [('item', 'i-id')] >>> relations.path('item', 'item') []
codesearchnet
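The breadth-first search over shared key fields can be sketched on a toy relations table (the table below is illustrative data, not the actual itsdb relations file):

```python
from collections import deque

# tablename -> set of key fields it contains (toy data).
RELATIONS = {
    "item":   {"i-id"},
    "parse":  {"i-id", "parse-id"},
    "result": {"parse-id"},
}

def find_path(source, target):
    # Breadth-first search from source to target, stepping between tables
    # that share a key field; returns (table, fieldname) pairs.
    if source == target:
        return []
    visited = {source}
    queue = deque([[(source, None)]])
    while queue:
        path = queue.popleft()
        last, _ = path[-1]
        for key in RELATIONS[last]:
            for step in RELATIONS:
                if step in visited or key not in RELATIONS[step]:
                    continue
                visited.add(step)
                if step == target:
                    return path[1:] + [(step, key)]
                queue.append(path + [(step, key)])
    raise LookupError(f"no relation path from {source} to {target}")

print(find_path("item", "result"))
# [('parse', 'i-id'), ('result', 'parse-id')]
```

As in the docstring's examples, the source table itself is dropped from the returned path (`path[1:]`).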
def graph(self, as_dot=False): if not self.has_graph: return None if not as_dot: if self.graph_ is None: self.graph_ = read_graph_from_string(self.graph_string) return self.graph_ if self.graph_string: if self.graph_string.startswith('{'): self.graph_ = read_graph_from_string(self.graph_string) else: return self.graph_string return write_dot(self.graph_)
Get the resolve graph. Args: as_dot: If True, get the graph as a dot-language string. Otherwise, a pygraph.digraph object is returned. Returns: A string or `pygraph.digraph` object, or None if there is no graph associated with the resolve.
juraj-google-style
def Serialize(self, writer): self.SerializeUnsigned(writer) writer.WriteSerializableArray(self.scripts)
Serialize object. Args: writer (neo.IO.BinaryWriter):
juraj-google-style
def event_stream(app, *, filter_by_prefix=None): q = Queue() def handle_event(event): if ((filter_by_prefix is None) or ((filter_by_prefix is not None) and event['type'].startswith(filter_by_prefix))): q.put(event) def receive_events(): with app.connection() as connection: recv = app.events.Receiver(connection, handlers={'*': handle_event}) recv.capture(limit=None, timeout=None, wakeup=True) t = threading.Thread(target=receive_events) t.start() while True: (yield q.get(block=True))
Generator function that returns celery events. This function turns the callback based celery event handling into a generator. Args: app: Reference to a celery application object. filter_by_prefix (str): If not None, only allow events that have a type that starts with this prefix to yield a generator event. Returns: generator: A generator that returns celery events.
codesearchnet
def _get_sample_generator(samples): if isinstance(samples, Mapping): def samples_generator(): for ind in range(samples[list(samples.keys())[0]].shape[0]): (yield np.array([samples[s][(ind, :)] for s in sorted(samples)])) elif isinstance(samples, np.ndarray): def samples_generator(): for ind in range(samples.shape[0]): (yield samples[ind]) else: samples_generator = samples return samples_generator
Get a sample generator from the given polymorphic input. Args: samples (ndarray, dict or generator): either a matrix of shape (d, p, n) with d problems, p parameters and n samples, or a dictionary with for every parameter a matrix with shape (d, n) or, finally, a generator function that yields sample arrays of shape (p, n). Returns: generator: a generator that yields a matrix of size (p, n) for every problem in the input.
codesearchnet
def _chglog(amend: bool = False, stage: bool = False, next_version: str = None, auto_next_version: bool = False): if config.CHANGELOG_DISABLE(): LOGGER.info('skipping changelog update as per config') else: epab.utils.ensure_exe('git') epab.utils.ensure_exe('gitchangelog') LOGGER.info('writing changelog') if auto_next_version: next_version = epab.utils.get_next_version() with gitchangelog_config(): with temporary_tag(next_version): changelog, _ = elib_run.run('gitchangelog', mute=True) changelog = re.sub(BOGUS_LINE_PATTERN, '\\1\n', changelog) Path(config.CHANGELOG_FILE_PATH()).write_text(changelog, encoding='utf8') if amend: CTX.repo.amend_commit( append_to_msg='update changelog [auto]', files_to_add=str(config.CHANGELOG_FILE_PATH()) ) elif stage: CTX.repo.stage_subset(str(config.CHANGELOG_FILE_PATH()))
Writes the changelog Args: amend: amend last commit with changes stage: stage changes next_version: version tag to use while writing the changelog auto_next_version: automatically infer the next version from the repository
juraj-google-style
def to_representation(self, instance): updated_course = copy.deepcopy(instance) enterprise_customer_catalog = self.context['enterprise_customer_catalog'] updated_course['enrollment_url'] = enterprise_customer_catalog.get_course_enrollment_url(updated_course['key']) for course_run in updated_course['course_runs']: course_run['enrollment_url'] = enterprise_customer_catalog.get_course_run_enrollment_url(course_run['key']) return updated_course
Return the updated course data dictionary. Arguments: instance (dict): The course data. Returns: dict: The updated course data.
codesearchnet
def guided_registration(request, page_number=None): PAGE_PROFILE = 1 PAGE_TICKET = 2 PAGE_PRODUCTS = 3 PAGE_PRODUCTS_MAX = 4 TOTAL_PAGES = 4 ticket_category = inventory.Category.objects.get(id=settings.TICKET_PRODUCT_CATEGORY) cart = CartController.for_user(request.user) attendee = people.Attendee.get_instance(request.user) if attendee.completed_registration: return redirect(review) has_profile = hasattr(attendee, 'attendeeprofilebase') if (not has_profile): max_page = PAGE_PROFILE redirect_page = PAGE_PROFILE else: products = inventory.Product.objects.filter(productitem__cart=cart.cart) products = products.filter(category=ticket_category) if (products.count() == 0): max_page = PAGE_TICKET redirect_page = PAGE_TICKET else: max_page = PAGE_PRODUCTS_MAX redirect_page = PAGE_PRODUCTS if ((page_number is None) or (int(page_number) > max_page)): return redirect('guided_registration', redirect_page) page_number = int(page_number) next_step = redirect('guided_registration', (page_number + 1)) with BatchController.batch(request.user): available = ProductController.available_products(request.user, category=ticket_category) if (not available): messages.error(request, 'There are no more tickets available.') return redirect('dashboard') sections = [] if (page_number == PAGE_PROFILE): title = 'Attendee information' sections = _guided_registration_profile_and_voucher(request) elif (page_number == PAGE_TICKET): title = 'Select ticket type' sections = _guided_registration_products(request, GUIDED_MODE_TICKETS_ONLY) elif (page_number == PAGE_PRODUCTS): title = 'Additional items' sections = _guided_registration_products(request, GUIDED_MODE_ALL_ADDITIONAL) elif (page_number == PAGE_PRODUCTS_MAX): title = 'More additional items' sections = _guided_registration_products(request, GUIDED_MODE_EXCLUDE_COMPLETE) if (not sections): attendee.completed_registration = True attendee.save() return redirect('review') if (sections and (request.method == 'POST')): for section in sections: if section.form.errors: break else: return next_step data = {'current_step': page_number, 'sections': sections, 'title': title, 'total_steps': TOTAL_PAGES} return render(request, 'registrasion/guided_registration.html', data)
Goes through the registration process in order, making sure user sees all valid categories. The user must be logged in to see this view. Parameter: page_number: 1) Profile form (and e-mail address?) 2) Ticket type 3) Remaining products 4) Mark registration as complete Returns: render: Renders ``registrasion/guided_registration.html``, with the following data:: { "current_step": int(), # The current step in the # registration "sections": sections, # A list of # GuidedRegistrationSections "title": str(), # The title of the page "total_steps": int(), # The total number of steps }
codesearchnet
def _get_required_params_for_conversion(self, event_key, event_tags): snapshot = {} event_dict = { self.EventParams.EVENT_ID: self.config.get_event(event_key).id, self.EventParams.TIME: self._get_time(), self.EventParams.KEY: event_key, self.EventParams.UUID: str(uuid.uuid4()) } if event_tags: revenue_value = event_tag_utils.get_revenue_value(event_tags) if revenue_value is not None: event_dict[event_tag_utils.REVENUE_METRIC_TYPE] = revenue_value numeric_value = event_tag_utils.get_numeric_value(event_tags, self.config.logger) if numeric_value is not None: event_dict[event_tag_utils.NUMERIC_METRIC_TYPE] = numeric_value if len(event_tags) > 0: event_dict[self.EventParams.TAGS] = event_tags snapshot[self.EventParams.EVENTS] = [event_dict] return snapshot
Get parameters that are required for the conversion event to register. Args: event_key: Key representing the event which needs to be recorded. event_tags: Dict representing metadata associated with the event. Returns: Dict consisting of the decisions and events info for conversion event.
juraj-google-style
def loadfile(method=True, writable=False, create=False): def convert_file_args(args, kwargs): filething = (args[0] if args else None) filename = kwargs.pop('filename', None) fileobj = kwargs.pop('fileobj', None) return (filething, filename, fileobj, args[1:], kwargs) def wrap(func): @wraps(func) def wrapper(self, *args, **kwargs): (filething, filename, fileobj, args, kwargs) = convert_file_args(args, kwargs) with _openfile(self, filething, filename, fileobj, writable, create) as h: return func(self, h, *args, **kwargs) @wraps(func) def wrapper_func(*args, **kwargs): (filething, filename, fileobj, args, kwargs) = convert_file_args(args, kwargs) with _openfile(None, filething, filename, fileobj, writable, create) as h: return func(h, *args, **kwargs) return (wrapper if method else wrapper_func) return wrap
A decorator for functions taking a `filething` as a first argument. Passes a FileThing instance as the first argument to the wrapped function. Args: method (bool): If the wrapped function is a method writable (bool): If a filename is passed opens the file read-write, if passed a file object verifies that it is writable. create (bool): If passed a filename that does not exist will create a new empty file.
codesearchnet
def Collect(self, top_frame): frame = top_frame top_line = self.breakpoint['location']['line'] breakpoint_frames = self.breakpoint['stackFrames'] try: if ('expressions' in self.breakpoint): self.breakpoint['evaluatedExpressions'] = [self._CaptureExpression(top_frame, expression) for expression in self.breakpoint['expressions']] while (frame and (len(breakpoint_frames) < self.max_frames)): line = (top_line if (frame == top_frame) else frame.f_lineno) code = frame.f_code if (len(breakpoint_frames) < self.max_expand_frames): (frame_arguments, frame_locals) = self.CaptureFrameLocals(frame) else: frame_arguments = [] frame_locals = [] breakpoint_frames.append({'function': _GetFrameCodeObjectName(frame), 'location': {'path': NormalizePath(code.co_filename), 'line': line}, 'arguments': frame_arguments, 'locals': frame_locals}) frame = frame.f_back except BaseException as e: self.breakpoint['status'] = {'isError': True, 'description': {'format': 'INTERNAL ERROR: Failed while capturing locals of frame $0: $1', 'parameters': [str(len(breakpoint_frames)), str(e)]}} num_vars = 1 while ((num_vars < len(self._var_table)) and (self._total_size < self.max_size)): self._var_table[num_vars] = self.CaptureVariable(self._var_table[num_vars], 0, self.default_capture_limits, can_enqueue=False) num_vars += 1 self.TrimVariableTable(num_vars) self._CaptureEnvironmentLabels() self._CaptureRequestLogId() self._CaptureUserId()
Collects call stack, local variables and objects. Starts collection from the specified frame. We don't start from the top frame to exclude the frames due to debugger. Updates the content of self.breakpoint. Args: top_frame: top frame to start data collection.
codesearchnet
def in_coord_list_pbc(fcoord_list, fcoord, atol=1e-8): return len(find_in_coord_list_pbc(fcoord_list, fcoord, atol=atol)) > 0
Tests if a particular fractional coord is within a fractional coord_list. Args: fcoord_list: List of fractional coords to test fcoord: A specific fractional coord to test. atol: Absolute tolerance. Defaults to 1e-8. Returns: True if coord is in the coord list.
juraj-google-style
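The periodic wrap test underneath reduces to checking whether each fractional difference is a whole number to within `atol`; a NumPy-free sketch:

```python
def pbc_close(fcoord_a, fcoord_b, atol=1e-8):
    # Two fractional coords match under periodic boundary conditions when
    # every component differs by (nearly) a whole number of lattice vectors.
    return all(abs(da - db - round(da - db)) <= atol
               for da, db in zip(fcoord_a, fcoord_b))

def in_coord_list_pbc(fcoord_list, fcoord, atol=1e-8):
    return any(pbc_close(c, fcoord, atol) for c in fcoord_list)

coords = [[0.1, 0.2, 0.3]]
hit = in_coord_list_pbc(coords, [1.1, -0.8, 0.3])   # differs by (1, -1, 0)
miss = in_coord_list_pbc(coords, [0.1, 0.2, 0.45])  # off by 0.15 in z
print(hit, miss)  # True False
```

Rounding the difference to the nearest integer is what absorbs the lattice-vector offsets before the tolerance check.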
def invoke_string(self, line): line = str(line) if len(line) == 0: return True if line[0] == u'#': return True args = self._split_line(line) return self.invoke(args)
Parse and invoke a string line. Args: line (str): The line that we want to parse and invoke. Returns: bool: A boolean specifying if the last function created a new context (False if a new context was created) and a list with the remainder of the command line if this function did not consume all arguments.)
juraj-google-style
def altitude_diff(msg): tc = common.typecode(msg) if (tc != 19): raise RuntimeError(('%s: Not a airborne velocity message, expecting TC=19' % msg)) msgbin = common.hex2bin(msg) sign = ((- 1) if int(msgbin[80]) else 1) value = common.bin2int(msgbin[81:88]) if ((value == 0) or (value == 127)): return None else: return ((sign * (value - 1)) * 25)
Decode the difference between GNSS and barometric altitude. Args: msg (string): 28 bytes hexadecimal message string, TC=19 Returns: int: Altitude difference in ft. Negative value indicates GNSS altitude below barometric altitude.
codesearchnet
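The bit-level decode, a sign bit at position 80 followed by a 7-bit magnitude in 25 ft steps, can be sketched with plain integer arithmetic. The helper names are illustrative, and the bit pattern below is synthetic rather than a real ADS-B message:

```python
def hex2bin(hexstr):
    # 28 hex characters -> 112-bit binary string.
    return bin(int(hexstr, 16))[2:].zfill(len(hexstr) * 4)

def decode_alt_diff(msgbin):
    # msgbin: 112-character binary string of a TC=19 velocity message.
    sign = -1 if msgbin[80] == "1" else 1
    value = int(msgbin[81:88], 2)
    if value in (0, 127):
        return None          # no information / out of range
    return sign * (value - 1) * 25

# Build a synthetic bit pattern: sign bit set (GNSS below baro),
# value = 5  ->  -(5 - 1) * 25 = -100 ft.
bits = ["0"] * 112
bits[80] = "1"
bits[81:88] = format(5, "07b")
print(decode_alt_diff("".join(bits)))  # -100
```

A magnitude of 0 or 127 is reserved, which is why the decoder returns `None` for those codes.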
def execute(self, triple_map, output, **kwargs): subjects = [] found_elements = self.source.xpath(str(triple_map.logicalSource.iterator), namespaces=self.xml_ns) for element in found_elements: subject = self.generate_term(term_map=triple_map.subjectMap, element=element, **kwargs) start = len(output) for row in triple_map.predicateObjectMap: predicate = row.predicate if (row.template is not None): obj_ = self.generate_term(term_map=row, **kwargs) output.add((subject, predicate, obj_)) if (row.parentTriplesMap is not None): self.__handle_parents__(output, parent_map=row.parentTriplesMap, subject=subject, predicate=predicate, **kwargs) new_subjects = self.__reference_handler__(output, predicate_obj_map=row, element=element, subject=subject) subjects.extend(new_subjects) if (row.constant is not None): output.add((subject, predicate, row.constant)) if (start < len(output)): if (triple_map.subjectMap.class_ is not None): output.add((subject, NS_MGR.rdf.type.rdflib, triple_map.subjectMap.class_)) subjects.append(subject) return subjects
Method executes mapping between source and output. Args: ----- triple_map: SimpleNamespace, Triple Map
codesearchnet
def delete(self, service): url = self._url_format(service) return self.rest_action( self._session.delete, url )
Generic DELETE operation for Learning Modules API. Args: service (str): The endpoint service to use, i.e. gradebook Raises: requests.RequestException: Exception connection error ValueError: Unable to decode response content Returns: list: the json-encoded content of the response
juraj-google-style
def transformer_prepare_decoder(targets, hparams, features=None): if hparams.causal_decoder_self_attention: if hparams.prepend_mode == "prepend_inputs_full_attention": decoder_self_attention_bias = ( common_attention.attention_bias_prepend_inputs_full_attention( common_attention.embedding_to_padding(targets))) else: decoder_self_attention_bias = ( common_attention.attention_bias_lower_triangle( common_layers.shape_list(targets)[1])) else: decoder_padding = common_attention.embedding_to_padding(targets) decoder_self_attention_bias = ( common_attention.attention_bias_ignore_padding(decoder_padding)) if features and "targets_segmentation" in features: targets_segmentation = features["targets_segmentation"] targets_position = features["targets_position"] decoder_self_attention_bias += common_attention.attention_bias_same_segment( targets_segmentation, targets_segmentation) else: targets_position = None if hparams.proximity_bias: decoder_self_attention_bias += common_attention.attention_bias_proximal( common_layers.shape_list(targets)[1]) decoder_input = common_layers.shift_right_3d(targets) if hparams.pos == "timing": if targets_position is not None: decoder_input = common_attention.add_timing_signal_1d_given_position( decoder_input, targets_position) else: decoder_input = common_attention.add_timing_signal_1d(decoder_input) elif hparams.pos == "emb": decoder_input = common_attention.add_positional_embedding( decoder_input, hparams.max_length, "targets_positional_embedding", targets_position) if hparams.activation_dtype == "bfloat16": decoder_self_attention_bias = tf.cast(decoder_self_attention_bias, tf.bfloat16) return (decoder_input, decoder_self_attention_bias)
Prepare one shard of the model for the decoder. Args: targets: a Tensor. hparams: run hyperparameters features: optionally pass the entire features dictionary as well. This is needed now for "packed" datasets. Returns: decoder_input: a Tensor, bottom of decoder stack decoder_self_attention_bias: a bias tensor for use in decoder self-attention
juraj-google-style
def create_or_update_video_transcript(video_id, language_code, metadata, file_data=None): metadata = { prop: value for prop, value in six.iteritems(metadata) if prop in ['provider', 'language_code', 'file_name', 'file_format'] and value } file_format = metadata.get('file_format') if file_format and file_format not in list(dict(TranscriptFormat.CHOICES).keys()): raise InvalidTranscriptFormat('{} transcript format is not supported'.format(file_format)) provider = metadata.get('provider') if provider and provider not in list(dict(TranscriptProviderType.CHOICES).keys()): raise InvalidTranscriptProvider('{} transcript provider is not supported'.format(provider)) try: video = Video.objects.get(edx_video_id=video_id) video_transcript, __ = VideoTranscript.create_or_update(video, language_code, metadata, file_data) except Video.DoesNotExist: return None return video_transcript.url()
Create or Update video transcript for an existing video. Arguments: video_id: it can be an edx_video_id or an external_id extracted from external sources in a video component. language_code: language code of a video transcript metadata (dict): A dict containing (to be overwritten) properties file_data (InMemoryUploadedFile): Transcript data to be saved for a course video. Returns: video transcript url
juraj-google-style
def f2format(filename):
    print('Now converting %r...' % filename)

    encoding = os.getenv('F2FORMAT_ENCODING', LOCALE_ENCODING)
    lineno = dict()
    content = list()

    with open(filename, 'r', encoding=encoding) as file:
        lineno[1] = 0
        for lnum, line in enumerate(file, start=1):
            content.append(line)
            lineno[lnum + 1] = lineno[lnum] + len(line)

    string = ''.join(content)
    text = convert(string, lineno)

    with open(filename, 'w', encoding=encoding) as file:
        file.write(text)
Wrapper function for conversion. Args: - filename -- str, file to be converted
codesearchnet
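The `lineno` dict built by `f2format` maps each 1-based line number to the character offset where that line starts, so line/column positions can be translated into absolute string offsets. A standalone sketch of the same mapping:

```python
def line_offsets(text):
    """Map 1-based line numbers to the character offset of each line start.

    Entry n+1 holds the offset just past line n, mirroring the
    lineno[lnum + 1] = lineno[lnum] + len(line) accumulation above.
    """
    lineno = {1: 0}
    for lnum, line in enumerate(text.splitlines(keepends=True), start=1):
        lineno[lnum + 1] = lineno[lnum] + len(line)
    return lineno
```

With this table, the offset of (line, col) is simply `lineno[line] + col`.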
def add_arguments(self, parser): group = parser.add_mutually_exclusive_group(required=True) group.add_argument('-l', '--list', nargs='?', type=str.lower, default='_', choices=['usb', 'ip'], help='list all the connected emulators') group.add_argument('-s', '--supported', nargs=1, help='query whether a device is supported') group.add_argument('-t', '--test', action='store_true', help='perform a self-test') return None
Adds the arguments for the emulator command. Args: self (EmulatorCommand): the ``EmulatorCommand`` instance parser (argparse.ArgumentParser): parser to add the commands to Returns: ``None``
codesearchnet
def list_documents(project_id, knowledge_base_id): import dialogflow_v2beta1 as dialogflow client = dialogflow.DocumentsClient() knowledge_base_path = client.knowledge_base_path(project_id, knowledge_base_id) print('Documents for Knowledge Id: {}'.format(knowledge_base_id)) for document in client.list_documents(knowledge_base_path): print(' - Display Name: {}'.format(document.display_name)) print(' - Knowledge ID: {}'.format(document.name)) print(' - MIME Type: {}'.format(document.mime_type)) print(' - Knowledge Types:') for knowledge_type in document.knowledge_types: print(' - {}'.format(KNOWLEDGE_TYPES[knowledge_type])) print(' - Source: {}\n'.format(document.content_uri))
Lists the Documents belonging to a Knowledge base. Args: project_id: The GCP project linked with the agent. knowledge_base_id: Id of the Knowledge base.
codesearchnet
def convert_upsample_bilinear(params, w_name, scope_name, inputs, layers, weights, names): print('Converting upsample...') if names == 'short': tf_name = 'UPSL' + random_string(4) elif names == 'keep': tf_name = w_name else: tf_name = w_name + str(random.random()) output_size = params['output_size'] align_corners = params['align_corners'] > 0 def target_layer(x, size=output_size, align_corners=align_corners): import tensorflow as tf x = tf.transpose(x, [0, 2, 3, 1]) x = tf.image.resize_images(x, size, align_corners=align_corners) x = tf.transpose(x, [0, 3, 1, 2]) return x lambda_layer = keras.layers.Lambda(target_layer) layers[scope_name] = lambda_layer(layers[inputs[0]])
Convert upsample_bilinear2d layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
juraj-google-style
def repsep(parser: Union[Parser, Sequence[Input]],
           separator: Union[Parser, Sequence[Input]]) -> RepeatedSeparatedParser:
    if isinstance(parser, str):
        parser = lit(parser)
    if isinstance(separator, str):
        separator = lit(separator)
    return RepeatedSeparatedParser(parser, separator)
Match a parser zero or more times separated by another parser. This matches repeated sequences of ``parser`` separated by ``separator``. A list is returned containing the value from each match of ``parser``. The values from ``separator`` are discarded. If there are no matches, an empty list is returned. Args: parser: Parser or literal separator: Parser or literal
codesearchnet
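The repsep semantics — zero or more items separated by a separator, keeping item values and discarding separators — can be sketched on plain strings without the combinator library. This is a simplified illustration, not the library's implementation:

```python
def repsep_parse(text, item, sep):
    """Match `item` zero or more times separated by `sep`.

    Returns (values, rest). A separator is only consumed when another
    item follows it, so a trailing separator is left in the remainder.
    """
    values, pos = [], 0
    while text.startswith(item, pos):
        values.append(item)
        pos += len(item)
        # Consume a separator only when another item follows it.
        if text.startswith(sep + item, pos):
            pos += len(sep)
        else:
            break
    return values, text[pos:]
```

The zero-match case returns an empty list with the input untouched, matching the docstring above.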
def encode(self, object_): if self.enforce_reversible: self.enforce_reversible = False if self.decode(self.encode(object_)) != object_: raise ValueError('Encoding is not reversible for "%s"' % object_) self.enforce_reversible = True return object_
Encodes an object. Args: object_ (object): Object to encode. Returns: object: Encoding of the object.
juraj-google-style
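The `enforce_reversible` pattern in `encode` above — temporarily disabling the flag so the round-trip check does not recurse forever — can be shown in a minimal self-contained class. The identity encode/decode here is a stand-in for a real codec:

```python
class ReversibleEncoder:
    """Sketch of the enforce_reversible pattern.

    When the flag is set, encode() verifies decode(encode(x)) == x,
    switching the flag off during the check to avoid infinite recursion
    and restoring it afterwards.
    """

    def __init__(self, enforce_reversible=False):
        self.enforce_reversible = enforce_reversible

    def encode(self, obj):
        if self.enforce_reversible:
            self.enforce_reversible = False
            try:
                if self.decode(self.encode(obj)) != obj:
                    raise ValueError('Encoding is not reversible for %r' % (obj,))
            finally:
                self.enforce_reversible = True
        return obj

    def decode(self, obj):
        return obj
```

The `try/finally` restores the flag even when the check raises, which the original entry does by reassigning after the check.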
def get(self, name: str) -> Optional[ListEntry]: parts = name.split(self._delimiter) try: node = self._find(self._root, *parts) except KeyError: return None else: marked = self._marked.get(name) return ListEntry(name, node.exists, marked, bool(node.children))
Return the named entry in the list tree. Args: name: The entry name.
juraj-google-style
def notebook_content(model, notebook_comms_target=None, theme=FromCurdoc):
    if not isinstance(model, Model):
        raise ValueError('notebook_content expects a single Model instance')

    with OutputDocumentFor([model], apply_theme=theme, always_new=True) as new_doc:
        (docs_json, [render_item]) = standalone_docs_json_and_render_items([model])

    div = div_for_render_item(render_item)

    render_item = render_item.to_json()
    if notebook_comms_target:
        render_item['notebook_comms_target'] = notebook_comms_target

    script = DOC_NB_JS.render(
        docs_json=serialize_json(docs_json),
        render_items=serialize_json([render_item]))

    return encode_utf8(script), encode_utf8(div), new_doc
Return script and div that will display a Bokeh plot in a Jupyter Notebook. The data for the plot is stored directly in the returned HTML. Args: model (Model) : Bokeh object to render notebook_comms_target (str, optional) : A target name for a Jupyter Comms object that can update the document that is rendered to this notebook div theme (Theme, optional) : Defaults to the ``Theme`` instance in the current document. Setting this to ``None`` uses the default theme or the theme already specified in the document. Any other value must be an instance of the ``Theme`` class. Returns: script, div, Document .. note:: Assumes :func:`~bokeh.io.notebook.load_notebook` or the equivalent has already been executed.
codesearchnet
def is44(msg):
    if allzeros(msg):
        return False

    d = hex2bin(data(msg))

    if wrongstatus(d, 5, 6, 23):
        return False
    if wrongstatus(d, 35, 36, 46):
        return False
    if wrongstatus(d, 47, 48, 49):
        return False
    if wrongstatus(d, 50, 51, 56):
        return False

    if bin2int(d[0:4]) > 4:
        return False

    vw = wind44(msg)
    if vw is not None and vw[0] > 250:
        return False

    temp, temp2 = temp44(msg)
    if min(temp, temp2) > 60 or max(temp, temp2) < -80:
        return False

    return True
Check if a message is likely to be BDS code 4,4. Meteorological routine air report Args: msg (String): 28 bytes hexadecimal message string Returns: bool: True or False
codesearchnet
def CheckCStyleCast(filename, clean_lines, linenum, cast_type, pattern, error):
    line = clean_lines.elided[linenum]
    match = Search(pattern, line)
    if not match:
        return False

    # Exclude lines preceded by sizeof/alignof/alignas or macro-like names.
    context = line[0:match.start(1) - 1]
    if Match(r'.*\b(?:sizeof|alignof|alignas|[_A-Z][_A-Z0-9]*)\s*$', context):
        return False

    # Try expanding the context over the previous few lines to catch macros.
    if linenum > 0:
        for i in xrange(linenum - 1, max(0, linenum - 5), -1):
            context = clean_lines.elided[i] + context
    if Match(r'.*\b[_A-Z][_A-Z0-9]*\s*\((?:\([^()]*\)|[^()])*$', context):
        return False

    # operator++(int) and operator--(int)
    if context.endswith(' operator++') or context.endswith(' operator--'):
        return False

    remainder = line[match.end(0):]
    if Match(r'^\s*(?:;|const\b|throw\b|final\b|override\b|[=>{),]|->)', remainder):
        # Looks like a function declaration with an unnamed parameter.
        if Match(r'^\s*>', remainder):
            return False
        matched_zero = Match(r'^\s=\s*(\S+)\s*;', remainder)
        if matched_zero and matched_zero.group(1) != '0':
            return False
        if Match(r'.*\)\s*$', line[0:match.start(0)]):
            return False
        raw_line = clean_lines.raw_lines[linenum]
        if '/*' in raw_line:
            return False
        error(filename, linenum, 'readability/function', 3,
              'All parameters should be named in a function')
        return True

    error(filename, linenum, 'readability/casting', 4,
          'Using C-style cast. Use %s<%s>(...) instead' %
          (cast_type, match.group(1)))
    return True
Checks for a C-style cast by looking for the pattern. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. cast_type: The string for the C++ cast to recommend. This is either reinterpret_cast, static_cast, or const_cast, depending. pattern: The regular expression used to find C-style casts. error: The function to call with any errors found. Returns: True if an error was emitted. False otherwise.
codesearchnet
def compute_serialized_parameters_size(num_parameters: int, dtype: ParameterFormat) -> int: return num_parameters * dtype.size
Compute the size taken by all the parameters in the given the storage format when serializing the model Args: num_parameters: Number of parameters to be saved dtype: The data format each parameter will be saved Returns: Size (in byte) taken to save all the parameters
github-repos
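The size computation above is a single multiplication: parameter count times bytes per value for the chosen storage dtype. A standalone sketch (the `ParameterFormat.size` lookup is replaced by an explicit byte width):

```python
def serialized_size(num_parameters, bytes_per_param):
    """Total bytes needed to serialize all parameters at a fixed width.

    bytes_per_param is e.g. 4 for float32, 2 for float16/bfloat16.
    """
    return num_parameters * bytes_per_param
```

For example, a 7B-parameter model in float16 needs `serialized_size(7_000_000_000, 2)` bytes, about 13 GiB.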
def _compute_static_batch_dim(self): new_batch_dim = tensor_util.constant_value(self._batch_sizes) if new_batch_dim is None: return None if isinstance(new_batch_dim, np.ndarray): if len(new_batch_dim.shape) == 1: if np.all(new_batch_dim == new_batch_dim[0]): new_batch_dim = new_batch_dim[0] else: return None elif len(new_batch_dim.shape) > 1: raise ValueError(f'Invalid `batch_sizes`. Expected `batch_sizes` to be a scalar or a vector. Received `batch_sizes` of rank {len(new_batch_dim.shape)}.') if self._may_form_partial_batches(new_batch_dim): return None return new_batch_dim
Computes the static batch dimension of a dataset if it can be determined. Given the RebatchDataset parameters, determines the batch dimension of this dataset statically. Returns None if this cannot be determined or is variable. Returns: An integer representing the batch dimension of the dataset. If it cannot be determined statically, returns None. Raises: ValueError: The batch_sizes parameter is malformed, input_dataset is not batched, or input_dataset batch sizes are incompatible with each other.
github-repos
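The core of `_compute_static_batch_dim` is the all-equal check: a vector of batch sizes collapses to a single static dimension only when every element agrees. A plain-Python sketch of that decision, without TensorFlow or NumPy:

```python
def static_batch_dim(batch_sizes):
    """Return the common batch size if all sizes agree, else None.

    Mirrors the np.all(new_batch_dim == new_batch_dim[0]) branch above:
    a uniform vector yields a static dimension, anything else is
    treated as variable.
    """
    if not batch_sizes:
        return None
    first = batch_sizes[0]
    if all(b == first for b in batch_sizes):
        return first
    return None
```

The real method additionally returns None when partial batches may form, which this sketch omits.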
def from_series(self, series, add_index_column=True): if series.name: self.headers = [series.name] else: self.headers = ["value"] self.type_hints = [self.__get_typehint_from_dtype(series.dtype)] if add_index_column: self.headers = [""] + self.headers if self.type_hints: self.type_hints = [None] + self.type_hints self.value_matrix = [ [index] + [value] for index, value in zip(series.index.tolist(), series.tolist()) ] else: self.value_matrix = [[value] for value in series.tolist()]
Set tabular attributes to the writer from :py:class:`pandas.Series`. Following attributes are set by the method: - :py:attr:`~.headers` - :py:attr:`~.value_matrix` - :py:attr:`~.type_hints` Args: series(pandas.Series): Input pandas.Series object. add_index_column(bool, optional): If |True|, add a column of ``index`` of the ``series``. Defaults to |True|.
juraj-google-style
def _cache_form_details(self, form): cache = FormCache() form['model']['form_key'] = cache.form_id form['model']['form_name'] = self.__class__.__name__ cache.set( { 'model': list(form['model'].keys()), 'non_data_fields': self.non_data_fields } )
Caches some form details to later process and validate incoming (response) form data Args: form: form dict
juraj-google-style
def __init__(self, fut, file_obj, tid=None): super().__init__() self._tid = tid if isinstance(file_obj, str): self.file_obj = File(file_obj) elif isinstance(file_obj, File): self.file_obj = file_obj else: raise ValueError("DataFuture must be initialized with a str or File") self.parent = fut self._exception = None if fut is None: logger.debug("Setting result to filepath since no future was passed") self.set_result(self.file_obj) else: if isinstance(fut, Future): self.parent.add_done_callback(self.parent_callback) else: raise NotFutureError("DataFuture can be created only with a FunctionFuture on None") logger.debug("Creating DataFuture with parent: %s", self.parent) logger.debug("Filepath: %s", self.filepath)
Construct the DataFuture object. If the file_obj is a string convert to a File. Args: - fut (AppFuture) : AppFuture that this DataFuture will track - file_obj (string/File obj) : Something representing file(s) Kwargs: - tid (task_id) : Task id that this DataFuture tracks
juraj-google-style
def _reset_build_compile_trackers(model): model.built = False model.inputs = None model.outputs = None model._is_compiled = False if not ops.executing_eagerly_outside_functions(): model._v1_compile_was_called = False model.optimizer = None
Reset state trackers for model. Note that we do not actually zero out attributes such as optimizer, but instead rely on the expectation that all of the attrs will be over-written on calling build/compile/etc. This is somewhat fragile, insofar as we check elsewhere for the presence of these attributes as evidence of having been built/compiled/etc. Pending a better way to do this, we reset key attributes here to allow building and compiling. Args: model: the model that is being reset
github-repos
def substring_evaluator(self, index): condition_name = self.condition_data[index][0] condition_value = self.condition_data[index][1] user_value = self.attributes.get(condition_name) if not isinstance(condition_value, string_types): self.logger.warning(audience_logs.UNKNOWN_CONDITION_VALUE.format( self._get_condition_json(index), )) return None if not isinstance(user_value, string_types): self.logger.warning(audience_logs.UNEXPECTED_TYPE.format( self._get_condition_json(index), type(user_value), condition_name )) return None return condition_value in user_value
Evaluate the given substring match condition for the given user attributes. Args: index: Index of the condition to be evaluated. Returns: Boolean: - True if the condition value is a substring of the user attribute value. - False if the condition value is not a substring of the user attribute value. None: if the condition value isn't a string or the user attribute value isn't a string.
juraj-google-style
def create_unique_autosave_filename(self, filename, autosave_dir):
    basename = osp.basename(filename)
    autosave_filename = osp.join(autosave_dir, basename)
    if autosave_filename in self.name_mapping.values():
        counter = 0
        root, ext = osp.splitext(basename)
        while autosave_filename in self.name_mapping.values():
            counter += 1
            autosave_basename = '{}-{}{}'.format(root, counter, ext)
            autosave_filename = osp.join(autosave_dir, autosave_basename)
    return autosave_filename
Create unique autosave file name for specified file name. Args: filename (str): original file name autosave_dir (str): directory in which autosave files are stored
codesearchnet
def _start_job(self, request: 'bigquery.BigqueryJobsInsertRequest', stream=None): try: upload = None if stream: upload = Upload.FromStream(stream, mime_type=UNKNOWN_MIME_TYPE) response = self.client.jobs.Insert(request, upload=upload) _LOGGER.info('Started BigQuery job: %s\n bq show -j --format=prettyjson --project_id=%s %s', response.jobReference, response.jobReference.projectId, response.jobReference.jobId) return response except HttpError as exn: if exn.status_code == 409: jobId = request.job.jobReference.jobId _LOGGER.info('BigQuery job %s already exists, will not retry inserting it: %s', request.job.jobReference, exn) job_location = self._parse_location_from_exc(exn.content, jobId) response = request.job if not response.jobReference.location and job_location: response.jobReference.location = job_location return response else: _LOGGER.info('Failed to insert job %s: %s', request.job.jobReference, exn) raise
Inserts a BigQuery job. If the job exists already, it returns it. Args: request (bigquery.BigqueryJobsInsertRequest): An insert job request. stream (IO[bytes]): A bytes IO object open for reading.
github-repos
def setKstar(self, term_i, Ks):
    assert Ks.shape[0] == self.N
    self.vd.getTerm(term_i).getKcf().setK0cross(Ks)
Set the kernel for predictions Args: term_i: index of the term we are interested in Ks: (TODO: is this the covariance between train and test or the covariance between test points?)
codesearchnet
def get_single_item_from_sequence(sequence,
                                  condition,
                                  ErrorClass=ValueError,
                                  no_item_error_message='No item matched condition',
                                  too_many_item_error_message='Too many items matched condition',
                                  append_sequence_to_error_message=True):
    filtered_sequence = [item for item in sequence if condition(item)]
    number_of_items_in_filtered_sequence = len(filtered_sequence)
    if number_of_items_in_filtered_sequence == 0:
        error_message = no_item_error_message
    elif number_of_items_in_filtered_sequence > 1:
        error_message = too_many_item_error_message
    else:
        return filtered_sequence[0]

    if append_sequence_to_error_message:
        error_message = '{}. Given: {}'.format(error_message, sequence)
    raise ErrorClass(error_message)
Return an item from a python sequence based on the given condition. Args: sequence (sequence): The sequence to filter condition: A function that serves to filter items from `sequence`. Function must have one argument (a single item from the sequence) and return a boolean. ErrorClass (Exception): The error type raised in case the item isn't unique no_item_error_message (str): The message raised when no item matched the condition too_many_item_error_message (str): The message raised when more than one item matched the condition append_sequence_to_error_message (bool): Show or hide what was the tested sequence in the error message. Hiding it may prevent sensitive data (such as password) to be exposed to public logs Returns: The only item in the sequence which matched the condition
codesearchnet
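The select-exactly-one pattern above compresses to a short helper: filter with the predicate, then fail loudly on zero or multiple matches. A condensed sketch with the configurable messages stripped out:

```python
def single(sequence, condition):
    """Return the unique item of `sequence` satisfying `condition`.

    Raises ValueError when zero or more than one item matches, the
    same shape as the entry above with default messages.
    """
    matches = [item for item in sequence if condition(item)]
    if len(matches) != 1:
        raise ValueError('expected exactly one match, got %d' % len(matches))
    return matches[0]
```

Usage: `single(users, lambda u: u.id == 42)` returns the one matching user or raises.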
def start(component, exact): version_file = conf.get_path('version_file', 'VERSION') develop = conf.get('git.devel_branch', 'develop') common.assert_on_branch(develop) with conf.within_proj_dir(): out = shell.run('git status --porcelain', capture=True).stdout lines = out.split(os.linesep) has_changes = any( not l.startswith('??') for l in lines if l.strip() ) if has_changes: log.info("Cannot release: there are uncommitted changes") exit(1) old_ver, new_ver = versioning.bump(component, exact) log.info("Bumping package version") log.info(" old version: <35>{}".format(old_ver)) log.info(" new version: <35>{}".format(new_ver)) with conf.within_proj_dir(): branch = 'release/' + new_ver common.git_checkout(branch, create=True) log.info("Creating commit for the release") shell.run('git add {ver_file} && git commit -m "{msg}"'.format( ver_file=version_file, msg="Releasing v{}".format(new_ver) ))
Create a new release branch. Args: component (str): Version component to bump when creating the release. Can be *major*, *minor* or *patch*. exact (str): The exact version to set for the release. Overrides the component argument. This allows to re-release a version if something went wrong with the release upload.
juraj-google-style
def _TSKFileTimeCopyToStatTimeTuple(self, tsk_file, time_value):
    if (not tsk_file or not tsk_file.info or not tsk_file.info.meta or
            not tsk_file.info.fs_info):
        raise errors.BackEndError(
            'Missing TSK File .info, .info.meta. or .info.fs_info')

    stat_time = getattr(tsk_file.info.meta, time_value, None)
    stat_time_nano = None
    if self._file_system_type in self._TSK_HAS_NANO_FS_TYPES:
        time_value_nano = '{0:s}_nano'.format(time_value)
        stat_time_nano = getattr(tsk_file.info.meta, time_value_nano, None)

    if stat_time_nano is not None and pytsk3.TSK_VERSION_NUM >= 67240191:
        stat_time_nano /= 100

    return stat_time, stat_time_nano
Copies a SleuthKit file object time value to a stat timestamp tuple. Args: tsk_file (pytsk3.File): TSK file. time_value (str): name of the time value. Returns: tuple[int, int]: number of seconds since 1970-01-01 00:00:00 and fraction of second in 100 nano seconds intervals. The number of seconds is None on error, or if the file system does not include the requested timestamp. The fraction of second is None on error, or if the file system does not support sub-second precision. Raises: BackEndError: if the TSK File .info, .info.meta or info.fs_info attribute is missing.
codesearchnet
def has_entities(status): try: if sum(len(v) for v in status.entities.values()) > 0: return True except AttributeError: if sum(len(v) for v in status['entities'].values()) > 0: return True return False
Returns true if a Status object has entities. Args: status: either a tweepy.Status object or a dict returned from Twitter API
juraj-google-style
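The `has_entities` entry above dispatches on attribute access versus dict lookup via try/except; the same duck-typing can be written with an explicit check. A self-contained sketch of the counting logic:

```python
def has_entities(status):
    """Duck-typed entity check: a status may be an object with an
    `entities` attribute or a plain dict from the Twitter API.

    True when the total length of all entity lists is non-zero.
    """
    entities = status['entities'] if isinstance(status, dict) else status.entities
    return sum(len(v) for v in entities.values()) > 0
```

The try/except form in the original additionally tolerates objects whose attribute access raises, which this variant trades for readability.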
def update_paths_and_config(self, config, pkg_dir_name, pkg_cache_dir=None):
    if pkg_cache_dir is None:
        pkg_cache_dir = self.package_cache_dir
    cached_dir_path = os.path.join(pkg_cache_dir, pkg_dir_name)

    if config.get('paths'):
        for path in config['paths']:
            path_to_append = os.path.join(cached_dir_path, path)
            logger.debug('Appending "%s" to python sys.path', path_to_append)
            sys.path.append(path_to_append)
    else:
        sys.path.append(cached_dir_path)

    if config.get('configs'):
        for config_filename in config['configs']:
            self.configs_to_merge.append(
                os.path.join(cached_dir_path, config_filename))
Handle remote source defined sys.paths & configs. Args: config (dict): git config dictionary pkg_dir_name (string): directory name of the stacker archive pkg_cache_dir (string): fully qualified path to stacker cache cache directory
codesearchnet
def load_file(file_path, credentials=None):
    if file_path.startswith('gs://'):
        return _load_file_from_gcs(file_path, credentials)
    else:
        return open(file_path, 'r')
Load a file from either local or gcs. Args: file_path: The target file path, which should have the prefix 'gs://' if to be loaded from gcs. credentials: Optional credential to be used to load the file from gcs. Returns: A python File object if loading file from local or a StringIO object if loading from gcs.
codesearchnet
def init_from_class_batches(self, class_batches, num_shards=None): shards_for_submissions = {} shard_idx = 0 for idx, (batch_id, batch_val) in enumerate(iteritems(class_batches)): work_id = DEFENSE_WORK_ID_PATTERN.format(idx) submission_id = batch_val['submission_id'] shard_id = None if num_shards: shard_id = shards_for_submissions.get(submission_id) if shard_id is None: shard_id = shard_idx % num_shards shards_for_submissions[submission_id] = shard_id shard_idx += 1 self.work[work_id] = { 'claimed_worker_id': None, 'claimed_worker_start_time': None, 'is_completed': False, 'error': None, 'elapsed_time': None, 'submission_id': submission_id, 'shard_id': shard_id, 'output_classification_batch_id': batch_id, }
Initializes work pieces from classification batches. Args: class_batches: dict with classification batches, could be obtained as ClassificationBatches.data num_shards: number of shards to split data into, if None then no sharding is done.
juraj-google-style
def ColumnTypeParser(description):
    if not description:
        raise DataTableException('Description error: empty description given')

    if not isinstance(description, (six.string_types, tuple)):
        raise DataTableException('Description error: expected either string or '
                                 'tuple, got %s.' % type(description))

    if isinstance(description, six.string_types):
        description = (description,)

    for elem in description[:3]:
        if not isinstance(elem, six.string_types):
            raise DataTableException('Description error: expected tuple of '
                                     'strings, current element of type %s.' %
                                     type(elem))

    desc_dict = {'id': description[0],
                 'label': description[0],
                 'type': 'string',
                 'custom_properties': {}}

    if len(description) > 1:
        desc_dict['type'] = description[1].lower()
        if len(description) > 2:
            desc_dict['label'] = description[2]
            if len(description) > 3:
                if not isinstance(description[3], dict):
                    raise DataTableException('Description error: expected custom '
                                             'properties of type dict, current '
                                             'element of type %s.' %
                                             type(description[3]))
                desc_dict['custom_properties'] = description[3]
                if len(description) > 4:
                    raise DataTableException('Description error: tuple of length > 4')

    if desc_dict['type'] not in ['string', 'number', 'boolean',
                                 'date', 'datetime', 'timeofday']:
        raise DataTableException("Description error: unsupported type '%s'" %
                                 desc_dict['type'])

    return desc_dict
Parses a single column description. Internal helper method. Args: description: a column description in the possible formats: 'id' ('id',) ('id', 'type') ('id', 'type', 'label') ('id', 'type', 'label', {'custom_prop1': 'custom_val1'}) Returns: Dictionary with the following keys: id, label, type, and custom_properties where: - If label not given, it equals the id. - If type not given, string is used by default. - If custom properties are not given, an empty dictionary is used by default. Raises: DataTableException: The column description did not match the RE, or unsupported type was passed.
codesearchnet
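The normalization rules in the docstring above — a bare string becomes a one-tuple, the label defaults to the id, the type defaults to `'string'` — fit in a short sketch. Custom properties and validation are omitted here:

```python
def parse_column(description):
    """Simplified column-description normalization.

    Accepts 'id', ('id',), ('id', 'type') or ('id', 'type', 'label')
    and fills the defaults: label = id, type = 'string'.
    """
    if isinstance(description, str):
        description = (description,)
    desc = {'id': description[0], 'label': description[0], 'type': 'string'}
    if len(description) > 1:
        desc['type'] = description[1].lower()
    if len(description) > 2:
        desc['label'] = description[2]
    return desc
```

Note the type is lower-cased on the way in, matching the entry's `description[1].lower()`.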
def run(self, row, **kwargs):
    self.source = row
    kwargs['output'] = self.__graph__()
    super(CSVRowProcessor, self).run(**kwargs)
    return kwargs['output']
Method takes a row and, depending on whether it is a dict or list, runs RML rules. Args: ----- row(Dict, List): Row from CSV Reader
juraj-google-style
def pb(scalars_layout): import tensorflow.compat.v1 as tf assert isinstance(scalars_layout, layout_pb2.Layout) tensor = tf.make_tensor_proto( scalars_layout.SerializeToString(), dtype=tf.string) tf_summary_metadata = tf.SummaryMetadata.FromString( metadata.create_summary_metadata().SerializeToString()) summary = tf.Summary() summary.value.add(tag=metadata.CONFIG_SUMMARY_TAG, metadata=tf_summary_metadata, tensor=tensor) return summary
Creates a summary that contains a layout. When users navigate to the custom scalars dashboard, they will see a layout based on the proto provided to this function. Args: scalars_layout: The scalars_layout_pb2.Layout proto that specifies the layout. Returns: A summary proto containing the layout.
juraj-google-style
def get_func_graphs(op): def _get_func_graph_for_branch(name_attr_list, cached_attr_name=None): func_graph = None if cached_attr_name is not None: func_graph = getattr(op, cached_attr_name, None) inputs = op.inputs[1:] if func_graph is None: input_shapes = [t.shape for t in inputs] func_graph = util.get_func_graph(op, input_shapes, name_attr_list.name) for external_t, internal_t in zip(inputs, func_graph.inputs): handle_data_util.copy_handle_data(external_t, internal_t) func_graph.function_captures.reset_captures(inputs, func_graph.inputs) func_graph._forward_cond = op return func_graph if op.type in ['If', 'StatelessIf']: return (_get_func_graph_for_branch(op.get_attr('then_branch'), '_true_graph'), _get_func_graph_for_branch(op.get_attr('else_branch'), '_false_graph')) elif op.type in ['Case', 'StatelessCase']: return [_get_func_graph_for_branch(branch_fn, '_branch_graph_{}'.format(i)) for i, branch_fn in enumerate(op.get_attr('branches'))] else: raise ValueError('Unsupported op type: {}'.format(op.type))
Returns `FuncGraph`s for the input op branches. Args: op: The If or Case Operation. Returns: A tuple of the `FuncGraph`s of the then_branch and else_branch (all branches for Case).
github-repos
def get_filename(self, tag):
    if tag.find('filename', recursive=False) is not None:
        return tag.filename.contents[0]
    elif tag.find('anchorfile', recursive=False) is not None:
        return tag.anchorfile.contents[0] + '#' + tag.anchor.contents[0]
Extract and return a documentation filename from a tag. Override as necessary, though this default implementation probably covers all the cases of interest. Args: tag: A BeautifulSoup Tag that satisfies match_criterion. Returns: A string that would be appropriate to use as the documentation filename for an entry in a Zeal database.
juraj-google-style
def convert_to_rgb(self, video: 'torch.Tensor') -> VideoInput: video = F.grayscale_to_rgb(video) if video.shape[-3] == 3 or not (video[..., 3, :, :] < 255).any(): return video alpha = video[..., 3, :, :] / 255.0 video = (1 - alpha[..., None, :, :]) * 255 + alpha[..., None, :, :] * video[..., :3, :, :] return video
Converts a video to RGB format. Args: video (`"torch.Tensor"`): The video to convert. Returns: `torch.Tensor`: The converted video.
github-repos
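The RGBA-to-RGB conversion above composites each pixel over a white background: normalize alpha to [0, 1], then blend `(1 - a) * 255 + a * c` per channel. The per-pixel arithmetic in isolation, without torch:

```python
def composite_on_white(rgb, alpha):
    """Blend one RGB pixel against a white (255) background.

    `alpha` is the 0-255 alpha channel value; fully opaque (255)
    returns the color unchanged, fully transparent (0) returns white.
    """
    a = alpha / 255.0
    return tuple((1 - a) * 255 + a * c for c in rgb)
```

The tensor version applies exactly this formula broadcast over the whole video; the shortcut branch skips the blend when every alpha is already 255.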
def __init__(self, regex: str, option_suffix: str): super().__init__(option_suffix) self._regex = self._build_matcher(regex)
Create a new instance. Args: regex: The regular expression describing the entry line to match. The first matching line is selected. The expression must contain a single capture group that contains the data to return. option_suffix: Suffix for each configuration option
juraj-google-style
def build_grab_exception(ex, curl):
    if ex.args[0] == 23:
        if getattr(curl, 'grab_callback_interrupted', None) is True:
            return None
        else:
            return error.GrabNetworkError(ex.args[1], ex)
    elif ex.args[0] == 28:
        return error.GrabTimeoutError(ex.args[1], ex)
    elif ex.args[0] == 7:
        return error.GrabConnectionError(ex.args[1], ex)
    elif ex.args[0] == 67:
        return error.GrabAuthError(ex.args[1], ex)
    elif ex.args[0] == 47:
        return error.GrabTooManyRedirectsError(ex.args[1], ex)
    elif ex.args[0] == 6:
        return error.GrabCouldNotResolveHostError(ex.args[1], ex)
    elif ex.args[0] == 3:
        return error.GrabInvalidUrl(ex.args[1], ex)
    else:
        return error.GrabNetworkError(ex.args[1], ex)
Build a Grab exception from the pycurl exception.

Args:
    ex - the original pycurl exception
    curl - the Curl instance that raised the exception
codesearchnet
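The if/elif chain above is a fixed mapping from pycurl error numbers to exception classes; the same idea can be sketched table-driven (the exception classes here are stand-ins for grab's `error` module):

```python
class GrabNetworkError(Exception):
    pass

class GrabTimeoutError(GrabNetworkError):
    pass

class GrabConnectionError(GrabNetworkError):
    pass

# pycurl errno -> exception class; anything unlisted falls back to the base error.
ERRNO_TO_EXC = {
    28: GrabTimeoutError,     # CURLE_OPERATION_TIMEDOUT
    7: GrabConnectionError,   # CURLE_COULDNT_CONNECT
}

def build_exception(errno, message):
    return ERRNO_TO_EXC.get(errno, GrabNetworkError)(message)
```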
def search(self, limit, start_date=None, end_date=None, clipper=None):
    search_string = self._query_builder(start_date, end_date, clipper)
    try:
        r = requests.get('%s?%s&&maxRecords=%s' % (self.api_url, search_string, limit))
        r.raise_for_status()
    except requests.HTTPError:
        exit("site is not available")
    r_dict = json.loads(r.text)
    result = {}
    if len(r_dict['features']) == 0:
        result['status'] = u'error'
        result['message'] = "error while loading data"
    else:
        result['status'] = u'SUCCESS'
        result['total'] = len(r_dict['features'])
        result['limit'] = limit
        result['ID'] = [i['id'] for i in r_dict['features']]
        result['downloads'] = [{"download": i['properties']['services']['download']['url'],
                                "id": i['id']} for i in r_dict['features']]
        result['results'] = {
            "features": [{
                'properties': {'sceneID': i['id'],
                               'sat_type': i['properties']['platform'],
                               'thumbnail': i['properties']['thumbnail'],
                               'date': i['properties']['completionDate'],
                               'download': i['properties']['services']['download']['url']},
                'geometry': i['geometry'],
                "type": "Feature"} for i in r_dict['features']],
            "type": "FeatureCollection"
        }
    return result
The main method of the Search class. It searches the Theia Landsat API
and returns a Python dictionary.

Arguments:
    start_date -- date string. format: YYYY-MM-DD
    end_date -- date string. format: YYYY-MM-DD
    limit -- integer specifying the maximum results to return.
    clipper -- clipper object : clipper.bbox / clipper.town
juraj-google-style
def check_supported_model_or_raise(model: Union['PreTrainedModel', 'TFPreTrainedModel'],
                                   feature: str = 'default') -> Tuple[str, Callable]:
    model_type = model.config.model_type.replace('_', '-')
    model_name = getattr(model, 'name', '')
    model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)
    if feature not in model_features:
        raise ValueError(
            f"{model.config.model_type} doesn't support feature {feature}. "
            f"Supported values are: {model_features}")
    return (model.config.model_type, FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature])
Check whether or not the model has the requested features.

Args:
    model: The model to export.
    feature: The name of the feature to check if it is available.

Returns:
    (str) The type of the model
    (OnnxConfig) The OnnxConfig instance holding the model export
        properties.
github-repos
def _analyze_indexed_fields(indexed_fields):
    result = {}
    for field_name in indexed_fields:
        if not isinstance(field_name, basestring):
            raise TypeError('Field names must be strings; got %r' % (field_name,))
        if '.' not in field_name:
            if field_name in result:
                raise ValueError('Duplicate field name %s' % field_name)
            result[field_name] = None
        else:
            head, tail = field_name.split('.', 1)
            if head not in result:
                result[head] = [tail]
            elif result[head] is None:
                raise ValueError('Field name %s conflicts with ancestor %s' % (field_name, head))
            else:
                result[head].append(tail)
    return result
Internal helper to check a list of indexed fields.

Args:
    indexed_fields: A list of names, possibly dotted names. (A dotted
        name is a string containing names separated by dots, e.g.
        'foo.bar.baz'. An undotted name is a string containing no
        dots, e.g. 'foo'.)

Returns:
    A dict whose keys are undotted names. For each undotted name in
    the argument, the dict contains that undotted name as a key with
    None as a value. For each dotted name in the argument, the dict
    contains the first component as a key with a list of remainders
    as values.

    Example: If the argument is ['foo.bar.baz', 'bar', 'foo.bletch'],
    the return value is {'foo': ['bar.baz', 'bletch'], 'bar': None}.

Raises:
    TypeError if an argument is not a string. ValueError for
    duplicate arguments and for conflicting arguments (when an
    undotted name also appears as the first component of a dotted
    name).
codesearchnet
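The docstring's example can be exercised with a standalone sketch of the same splitting logic; the duplicate/conflict validation is omitted here for brevity:

```python
def analyze_indexed_fields(indexed_fields):
    """Map undotted names to None and dotted names to lists of remainders."""
    result = {}
    for field_name in indexed_fields:
        if '.' not in field_name:
            result[field_name] = None
        else:
            head, tail = field_name.split('.', 1)
            result.setdefault(head, []).append(tail)
    return result
```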
def wait_for_task(self, task, timeout=-1):
    self.__wait_task_completion(task, timeout)
    task = self.get(task)
    logger.debug('Waiting for task. Percentage complete: ' + str(task.get('computedPercentComplete')))
    logger.debug('Waiting for task. Task state: ' + str(task.get('taskState')))
    task_response = self.__get_task_response(task)
    logger.debug('Task completed')
    return task_response
Wait for task execution and return the associated resource.

Args:
    task: task dict
    timeout: timeout in seconds

Returns:
    Associated resource when creating or updating; True when
    deleting.
codesearchnet
def get_list(self, obj_class, data, subset): url = obj_class.get_url(data) if obj_class.can_list and obj_class.can_get: if (subset and len(subset) == 1 and subset[0].upper() == "BASIC") and obj_class is jssobjects.Computer: url += "/subset/basic" result = self.jss.get(url) if obj_class.container: result = result.find(obj_class.container) return self._build_jss_object_list(result, obj_class) elif obj_class.can_get: xmldata = self.jss.get(url) return obj_class(self.jss, xmldata) else: raise JSSMethodNotAllowedError( obj_class.__class__.__name__)
Get a list of objects as JSSObjectList.

Args:
    obj_class: The JSSObject subclass type to search for.
    data: None
    subset: Some objects support a subset for listing; namely
        Computer, with subset="basic".

Returns:
    JSSObjectList
juraj-google-style
def get_subscript(self, sub_script_name):
    tree = self.treeWidget()
    items = tree.findItems(sub_script_name, QtCore.Qt.MatchExactly | QtCore.Qt.MatchRecursive)
    if len(items) >= 1:
        subscript_item = [sub_item for sub_item in items
                          if isinstance(sub_item.value, Script) and sub_item.parent() is self]
        subscript_item = subscript_item[0]
    else:
        raise ValueError('no element with name ' + sub_script_name)
    return subscript_item
Finds the item that contains the sub_script with name
sub_script_name.

Args:
    sub_script_name: name of subscript

Returns:
    B26QTreeItem in QTreeWidget which is a script
juraj-google-style
def reverse_transform_table(self, table, table_meta, missing=None): if missing is None: missing = self.missing else: self.missing = missing warnings.warn( DEPRECATION_MESSAGE.format('reverse_transform_table'), DeprecationWarning) result = pd.DataFrame(index=table.index) table_name = table_meta['name'] for field in table_meta['fields']: new_column = self._reverse_transform_column(table, field, table_name) if new_column is not None: result[field['name']] = new_column return result
Transform a `table` back to its original format.

Args:
    table(pandas.DataFrame): Contents of the table to be transformed.
    table_meta(dict): Metadata for the given table.
    missing(bool): Whether or not to use NullTransformer to handle
        missing values.

Returns:
    pandas.DataFrame: Table in original format.
juraj-google-style
def cudnn_gru(units, n_hidden, n_layers=1, trainable_initial_states=False,
              seq_lengths=None, input_initial_h=None, name='cudnn_gru', reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        gru = tf.contrib.cudnn_rnn.CudnnGRU(num_layers=n_layers, num_units=n_hidden)
        if trainable_initial_states:
            init_h = tf.get_variable('init_h', [n_layers, 1, n_hidden])
            init_h = tf.tile(init_h, (1, tf.shape(units)[0], 1))
        else:
            init_h = tf.zeros([n_layers, tf.shape(units)[0], n_hidden])
        # `input_initial_h or init_h` would force a boolean check on a tensor;
        # compare against None explicitly instead.
        initial_h = init_h if input_initial_h is None else input_initial_h
        h, h_last = gru(tf.transpose(units, (1, 0, 2)), (initial_h,))
        h = tf.transpose(h, (1, 0, 2))
        h_last = tf.squeeze(h_last, axis=0)[-1]
        if seq_lengths is not None:
            indices = tf.stack([tf.range(tf.shape(h)[0]), seq_lengths - 1], axis=1)
            h_last = tf.gather_nd(h, indices)
        return h, h_last
Fast CuDNN GRU implementation.

Args:
    units: tf.Tensor with dimensions [B x T x F], where
        B - batch size
        T - number of tokens
        F - features
    n_hidden: dimensionality of hidden state
    trainable_initial_states: whether to create a special trainable
        variable to initialize the hidden states of the network or
        use just zeros
    seq_lengths: tensor of sequence lengths with dimension [B]
    n_layers: number of layers
    input_initial_h: initial hidden state, tensor
    name: name of the variable scope to use
    reuse: whether to reuse an already initialized variable

Returns:
    h - all hidden states along the T dimension, tf.Tensor with
        dimensionality [B x T x F]
    h_last - last hidden state, tf.Tensor with dimensionality [B x H]
codesearchnet
def _PrintWarningCounters(self, storage_counters): warnings_by_pathspec = storage_counters.get('warnings_by_path_spec', {}) warnings_by_parser_chain = storage_counters.get( 'warnings_by_parser_chain', {}) if not warnings_by_parser_chain: self._output_writer.Write('No warnings stored.\n\n') return table_view = views.ViewsFactory.GetTableView( self._views_format_type, title='Warnings generated per parser', column_names=['Parser (plugin) name', 'Number of warnings']) for parser_chain, count in warnings_by_parser_chain.items(): parser_chain = parser_chain or '<No parser>' table_view.AddRow([parser_chain, '{0:d}'.format(count)]) table_view.Write(self._output_writer) table_view = views.ViewsFactory.GetTableView( self._views_format_type, title='Pathspecs with most warnings', column_names=['Number of warnings', 'Pathspec']) top_pathspecs = warnings_by_pathspec.most_common(10) for pathspec, count in top_pathspecs: for path_index, line in enumerate(pathspec.split('\n')): if not line: continue if path_index == 0: table_view.AddRow(['{0:d}'.format(count), line]) else: table_view.AddRow(['', line]) table_view.Write(self._output_writer)
Prints a summary of the warnings.

Args:
    storage_counters (dict): storage counters.
juraj-google-style
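The "top pathspecs" selection above relies on `collections.Counter.most_common`; a small sketch with made-up counts shows the ordering it produces:

```python
from collections import Counter

# Hypothetical warning counts keyed by path specification.
warnings_by_pathspec = Counter({'/img/p1': 5, '/img/p2': 12, '/img/p3': 1})

# The ten most frequent entries would feed the table view; here the top two.
top_pathspecs = warnings_by_pathspec.most_common(2)
```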
def auto_repr(obj: Any, with_addr: bool = False, sort_attrs: bool = True, joiner: str = COMMA_SPACE) -> str: if sort_attrs: keys = sorted(obj.__dict__.keys()) else: keys = obj.__dict__.keys() elements = ["{}={}".format(k, repr(getattr(obj, k))) for k in keys] return repr_result(obj, elements, with_addr=with_addr, joiner=joiner)
Convenience function for :func:`__repr__`.

Works its way through the object's ``__dict__`` and reports
accordingly.

Args:
    obj: object to display
    with_addr: include the memory address of ``obj``
    sort_attrs: sort the attributes into alphabetical order?
    joiner: string with which to join the elements

Returns:
    string: :func:`repr`-style representation
juraj-google-style
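A trimmed, self-contained sketch of the same idea; the real function delegates final assembly to a `repr_result` helper, while here the formatting is inlined and the address option dropped:

```python
def auto_repr(obj, sort_attrs=True, joiner=", "):
    """Build a repr from an object's __dict__, optionally sorted by key."""
    keys = sorted(obj.__dict__) if sort_attrs else list(obj.__dict__)
    elements = ["{}={!r}".format(k, getattr(obj, k)) for k in keys]
    return "{}({})".format(type(obj).__name__, joiner.join(elements))

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
```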
def pull_datapackage(descriptor, name, backend, **backend_options):
    warnings.warn('Functions "push/pull_datapackage" are deprecated. '
                  'Please use "Package" class', UserWarning)
    datapackage_name = name
    plugin = import_module('jsontableschema.plugins.%s' % backend)
    storage = plugin.Storage(**backend_options)
    resources = []
    for table in storage.buckets:
        schema = storage.describe(table)
        base = os.path.dirname(descriptor)
        path, name = _restore_path(table)
        fullpath = os.path.join(base, path)
        helpers.ensure_dir(fullpath)
        with io.open(fullpath, 'wb') as file:
            model = Schema(deepcopy(schema))
            data = storage.iter(table)
            writer = csv.writer(file, encoding='utf-8')
            writer.writerow(model.headers)
            for row in data:
                writer.writerow(row)
        resource = {'schema': schema, 'path': path}
        if name is not None:
            resource['name'] = name
        resources.append(resource)
    mode = 'w'
    encoding = 'utf-8'
    if six.PY2:
        mode = 'wb'
        encoding = None
    resources = _restore_resources(resources)
    helpers.ensure_dir(descriptor)
    with io.open(descriptor, mode=mode, encoding=encoding) as file:
        descriptor = {'name': datapackage_name, 'resources': resources}
        json.dump(descriptor, file, indent=4)
    return storage
Pull Data Package from storage.

All parameters should be used as keyword arguments.

Args:
    descriptor (str): path where to store descriptor
    name (str): name of the pulled datapackage
    backend (str): backend name like `sql` or `bigquery`
    backend_options (dict): backend options mentioned in backend docs
codesearchnet
def update_location_centroid(point, cluster, max_distance, min_samples): cluster.append(point) points = [p.gen2arr() for p in cluster] eps = estimate_meters_to_deg(max_distance, precision=6) p_cluster = DBSCAN(eps=eps, min_samples=min_samples) p_cluster.fit(points) clusters = {} for i, label in enumerate(p_cluster.labels_): if label in clusters.keys(): clusters[label].append(points[i]) else: clusters[label] = [points[i]] centroids = [] biggest_centroid_l = -float("inf") biggest_centroid = None for label, n_cluster in clusters.items(): centroid = compute_centroid(n_cluster) centroids.append(centroid) if label >= 0 and len(n_cluster) >= biggest_centroid_l: biggest_centroid_l = len(n_cluster) biggest_centroid = centroid if biggest_centroid is None: biggest_centroid = compute_centroid(points) return biggest_centroid, cluster
Updates the centroid of a location cluster with another point.

Args:
    point (:obj:`Point`): Point to add to the cluster
    cluster (:obj:`list` of :obj:`Point`): Location cluster
    max_distance (float): Max neighbour distance
    min_samples (int): Minimum number of samples

Returns:
    (:obj:`Point`, :obj:`list` of :obj:`Point`): Tuple with the
        location centroid and new point cluster (given cluster +
        given point)
juraj-google-style
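The `compute_centroid` helper used above is not shown; a plausible sketch, assuming it takes the arithmetic mean of the coordinate pairs:

```python
def compute_centroid(points):
    """Arithmetic mean of (lat, lon) pairs; a guess at the helper's behavior."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return (sum(lats) / len(points), sum(lons) / len(points))
```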
def _FormatDateTime(self, event): if not event.timestamp: return 'N/A' date_time = dfdatetime_posix_time.PosixTimeInMicroseconds( timestamp=event.timestamp) year, month, day_of_month = date_time.GetDate() hours, minutes, seconds = date_time.GetTimeOfDay() try: return '{0:04d}-{1:02d}-{2:02d} {3:02d}:{4:02d}:{5:02d}'.format( year, month, day_of_month, hours, minutes, seconds) except (TypeError, ValueError): self._ReportEventError(event, ( 'unable to copy timestamp: {0!s} to a human readable date and ' 'time. Defaulting to: "0000-00-00 00:00:00"').format(event.timestamp)) return '0000-00-00 00:00:00'
Formats the date and time.

Args:
    event (EventObject): event.

Returns:
    str: date and time string or "N/A" if no event timestamp is
        available.
juraj-google-style
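The same microsecond-POSIX-timestamp formatting can be sketched with only the standard library (the real code goes through dfdatetime and has its own error reporting):

```python
from datetime import datetime, timezone

def format_posix_microseconds(timestamp):
    """Render a POSIX timestamp in microseconds as 'YYYY-MM-DD HH:MM:SS'."""
    if not timestamp:
        return 'N/A'
    dt = datetime.fromtimestamp(timestamp / 1_000_000, tz=timezone.utc)
    return dt.strftime('%Y-%m-%d %H:%M:%S')
```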
def on_message(self, event): metadata = self._parse_metadata(event) message = Message(text=metadata['text'], metadata=metadata).__dict__ if message.get('text'): message['text'] = self.find_and_replace_userids(message['text']) message['text'] = self.find_and_replace_channel_refs( message['text'] ) return message
Runs when a message event is received.

Args:
    event: RTM API event.

Returns:
    Legobot.message
juraj-google-style
def write_rtt(jlink): try: while jlink.connected(): bytes = list(bytearray(input(), "utf-8") + b"\x0A\x00") bytes_written = jlink.rtt_write(0, bytes) except Exception: print("IO write thread exception, exiting...") thread.interrupt_main() raise
Writes keyboard input to JLink RTT buffer #0.

This method is a loop that blocks waiting on stdin. When enter is
pressed, LF and NUL bytes are added to the input and transmitted as
a byte list. If the JLink is disconnected, it will exit gracefully.
If any other exceptions are raised, they will be caught and
re-raised after interrupting the main thread.

Args:
    jlink (pylink.JLink): The JLink to write to.

Raises:
    Exception on error.
juraj-google-style
def __get_valid_form_data_elements(self, soup): elements = [] for element in soup.find_all(['input', 'button', 'textarea', 'select']): if element.has_attr('name'): elements.append(element) return elements
Get all valid form input elements.

Note:
    An element is valid when the value can be updated client-side
    and the element has a name attribute.

Args:
    soup (obj): The BeautifulSoup form.

Returns:
    list(obj): Soup elements.
codesearchnet
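The filtering rule (form controls that carry a `name` attribute) can be sketched without BeautifulSoup by modelling elements as plain dicts:

```python
def valid_form_elements(elements):
    """Keep form controls that have a name attribute, mirroring the soup filter."""
    form_tags = {'input', 'button', 'textarea', 'select'}
    return [e for e in elements
            if e.get('tag') in form_tags and 'name' in e.get('attrs', {})]
```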
def make_target(url, extra_opts=None):
    parts = compat.urlparse(url, allow_fragments=False)
    scheme = parts.scheme.lower()
    if scheme in ['ftp', 'ftps']:
        creds = (parts.username, parts.password)
        tls = (scheme == 'ftps')
        from ftpsync import ftp_target
        target = ftp_target.FtpTarget(parts.path, parts.hostname, parts.port,
                                      username=creds[0], password=creds[1],
                                      tls=tls, timeout=None, extra_opts=extra_opts)
    else:
        target = FsTarget(url, extra_opts)
    return target
Factory that creates `_Target` objects from URLs.

FTP targets must begin with the scheme ``ftp://`` or ``ftps://``
for TLS.

Note: TLS is only supported on Python 2.7/3.2+.

Args:
    url (str):
    extra_opts (dict, optional): Passed to Target constructor.
        Default: None.

Returns:
    :class:`_Target`
codesearchnet
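The scheme dispatch can be sketched with the stdlib parser; this classifies only and constructs no targets, and the return shape is an assumption for illustration:

```python
from urllib.parse import urlparse

def classify_target(url):
    """Return (kind, tls, hostname) for a sync-target URL."""
    parts = urlparse(url, allow_fragments=False)
    scheme = parts.scheme.lower()
    if scheme in ('ftp', 'ftps'):
        return ('ftp', scheme == 'ftps', parts.hostname)
    return ('fs', False, None)
```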
def undo_last_change(self):
    if len(self.history) == 0:
        raise IndexError("Can't undo. Already at oldest change.")
    if 'input_structure' not in self.history[-1]:
        raise IndexError("Can't undo. Latest history has no input_structure")
    h = self.history.pop()
    self._undone.append((h, self.final_structure))
    s = h['input_structure']
    if isinstance(s, dict):
        s = Structure.from_dict(s)
    self.final_structure = s
Undo the last change in the TransformedStructure.

Raises:
    IndexError: If already at the oldest change.
codesearchnet
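The undo pattern above (pop history, remember the undone step, restore the prior state) reduced to a minimal standalone class, with plain values standing in for structures:

```python
class UndoStack:
    def __init__(self, initial):
        self.current = initial
        self.history = []   # prior states, oldest first
        self._undone = []   # states removed by undo

    def apply(self, new_state):
        self.history.append(self.current)
        self.current = new_state

    def undo(self):
        if not self.history:
            raise IndexError("Can't undo. Already at oldest change.")
        self._undone.append(self.current)
        self.current = self.history.pop()
```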
def create_context(self, state_hash, base_contexts, inputs, outputs):
    for address in inputs:
        if not self.namespace_is_valid(address):
            raise CreateContextException(
                'Address or namespace {} listed in inputs is not valid'.format(address))
    for address in outputs:
        if not self.namespace_is_valid(address):
            raise CreateContextException(
                'Address or namespace {} listed in outputs is not valid'.format(address))
    addresses_to_find = [add for add in inputs if len(add) == 70]
    address_values, reads = self._find_address_values_in_chain(
        base_contexts=base_contexts, addresses_to_find=addresses_to_find)
    context = ExecutionContext(state_hash=state_hash, read_list=inputs,
                               write_list=outputs, base_context_ids=base_contexts)
    contexts_asked_not_found = [cid for cid in base_contexts if cid not in self._contexts]
    if contexts_asked_not_found:
        raise KeyError(
            'Basing a new context off of context ids {} that are not in '
            'context manager'.format(contexts_asked_not_found))
    context.create_initial(address_values)
    self._contexts[context.session_id] = context
    if reads:
        context.create_prefetch(reads)
        self._address_queue.put_nowait((context.session_id, state_hash, reads))
    return context.session_id
Create an ExecutionContext to run a transaction against.

Args:
    state_hash (str): Merkle root to base state on.
    base_contexts (list of str): Context ids of contexts that will
        have their state applied to make this context.
    inputs (list of str): Addresses that can be read from.
    outputs (list of str): Addresses that can be written to.

Returns:
    context_id (str): the unique context_id of the session
codesearchnet
def subscribe(self, peer_jid):
    self.roster.subscribe(aioxmpp.JID.fromstr(peer_jid).bare())
Asks for subscription.

Args:
    peer_jid (str): the JID you ask for subscription
juraj-google-style
def daylight_saving_start_day(self, value=None): if value is not None: try: value = str(value) except ValueError: raise ValueError( 'value {} need to be of type str ' 'for field `daylight_saving_start_day`'.format(value)) if ',' in value: raise ValueError('value should not contain a comma ' 'for field `daylight_saving_start_day`') self._daylight_saving_start_day = value
Corresponds to IDD Field `daylight_saving_start_day`.

Args:
    value (str): value for IDD Field `daylight_saving_start_day`
        if `value` is None it will not be checked against the
        specification and is assumed to be a missing value

Raises:
    ValueError: if `value` is not a valid value
juraj-google-style
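The validation pattern above (coerce to str, reject commas, let None pass) extracted into a standalone sketch; the field name is only a label for the error message:

```python
def validate_idd_str(value, field_name):
    """Validate a string-valued IDD field: None passes, commas are rejected."""
    if value is None:
        return None
    value = str(value)
    if ',' in value:
        raise ValueError('value should not contain a comma '
                         'for field `{}`'.format(field_name))
    return value
```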