Dataset columns: code (string, lengths 20–4.93k), docstring (string, lengths 33–1.27k), source (string, 3 classes)
def create_statement_inspection_table(sts: List[Influence]): columns = [ "un_groundings", "subj_polarity", "obj_polarity", "Sentence", "Source API", ] polarity_to_str = lambda x: "+" if x == 1 else "-" if x == -1 else "None" l = [] for s in sts: subj_un_grounding = s.subj.db_refs["UN"][0][0].split("/")[-1] obj_un_grounding = s.obj.db_refs["UN"][0][0].split("/")[-1] subj_polarity = s.subj_delta["polarity"] obj_polarity = s.obj_delta["polarity"] subj_adjectives = s.subj_delta["adjectives"] for e in s.evidence: l.append( ( (subj_un_grounding, obj_un_grounding), subj_polarity, obj_polarity, e.text, e.source_api, ) ) df = pd.DataFrame(l, columns=columns) df = df.pivot_table(index=["un_groundings", "Source API", "Sentence"]) def hover(hover_color="#ffff99"): return dict( selector="tr:hover", props=[("background-color", "%s" % hover_color)], ) styles = [ hover(), dict(props=[("font-size", "100%"), ("font-family", "Gill Sans")]), ] return df.style.set_table_styles(styles)
Display an HTML representation of a table with INDRA statements to manually inspect for validity. Args: sts: A list of INDRA statements to be manually inspected for validity.
juraj-google-style
def notify_owner(func): def wrapper(self, *args, **kwargs): old = self._saved_copy() result = func(self, *args, **kwargs) self._notify_owners(old) return result wrapper.__doc__ = ('Container method ``%s`` instrumented to notify property owners' % func.__name__) return wrapper
A decorator for mutating methods of property container classes that notifies owners of the property container about mutating changes. Args: func (callable) : the container method to wrap in a notification Returns: wrapped method Examples: A ``__setitem__`` could be wrapped like this: .. code-block:: python # x[i] = y @notify_owner def __setitem__(self, i, y): return super(PropertyValueDict, self).__setitem__(i, y) The returned wrapped method will have a docstring indicating what original method it is wrapping.
codesearchnet
def write_to_file(src, dst): n = 0 for block in src: dst.write(block) n += len(block) return n
Write data from `src` into `dst`. Args: src (iterable): iterable that yields blocks of data to write dst (file-like object): file-like object that must support .write(block) Returns: number of bytes written to `dst`
juraj-google-style
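A minimal usage sketch for `write_to_file` above (file names are hypothetical; assumes the function is in scope): any iterable of byte blocks works as `src`, such as a generator that reads another file in chunks.

def blocks(path, size=8192):
    # Yield fixed-size chunks from a file until EOF.
    with open(path, 'rb') as f:
        while True:
            block = f.read(size)
            if not block:
                return
            yield block

with open('copy.bin', 'wb') as dst:
    n = write_to_file(blocks('original.bin'), dst)
print(n, 'bytes written')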
def get_user(self, user_id): try: return self._user_dict[user_id] except KeyError: logger.warning('UserList returning unknown User for UserID %s', user_id) return User(user_id, None, None, None, [], False)
Get a user by its ID. Args: user_id (~hangups.user.UserID): The ID of the user. Returns: :class:`~hangups.user.User` with the given ID. If no such user is known, a placeholder User is returned and a warning is logged.
juraj-google-style
def _sign_operation(op): md5 = hashlib.md5() md5.update(op.consumerId.encode('utf-8')) md5.update(b'\x00') md5.update(op.operationName.encode('utf-8')) if op.labels: signing.add_dict_to_hash(md5, encoding.MessageToPyValue(op.labels)) return md5.digest()
Obtains a signature for an operation in a ReportRequest. Args: op (:class:`endpoints_management.gen.servicecontrol_v1_messages.Operation`): an operation used in a `ReportRequest` Returns: string: a unique signature for that operation
juraj-google-style
def handler_for_name(fq_name): resolved_name = for_name(fq_name) if isinstance(resolved_name, (type, types.ClassType)): return resolved_name() elif isinstance(resolved_name, types.MethodType): return getattr(resolved_name.im_class(), resolved_name.__name__) else: return resolved_name
Resolves and instantiates handler by fully qualified name. First resolves the name using for_name call. Then if it resolves to a class, instantiates a class, if it resolves to a method - instantiates the class and binds method to the instance. Args: fq_name: fully qualified name of something to find. Returns: handler instance which is ready to be called.
codesearchnet
def htmlcolor_to_rgb(str_color): if not (str_color.startswith('#') and len(str_color) == 7): raise ValueError("Bad html color format. Expected: '#RRGGBB'") result = [1.0 * int(n, 16) / 255 for n in (str_color[1:3], str_color[3:5], str_color[5:])] return result
Function to convert an HTML-style color string to RGB values Args: str_color: Color in HTML format ('#RRGGBB') Returns: list of three RGB color components
juraj-google-style
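A quick sanity check of the reconstructed validation and scaling in `htmlcolor_to_rgb` (assumes the function above is in scope): each two-digit hex pair maps to a float in [0, 1].

assert htmlcolor_to_rgb('#ffffff') == [1.0, 1.0, 1.0]
assert htmlcolor_to_rgb('#000000') == [0.0, 0.0, 0.0]
print(htmlcolor_to_rgb('#ff8000'))  # [1.0, 0.5019607843137255, 0.0]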
def trim_wav_ms(in_path: Path, out_path: Path, start_time: int, end_time: int) -> None: try: trim_wav_sox(in_path, out_path, start_time, end_time) except FileNotFoundError: trim_wav_pydub(in_path, out_path, start_time, end_time) except subprocess.CalledProcessError: trim_wav_pydub(in_path, out_path, start_time, end_time)
Extracts part of a WAV File. First attempts to call sox. If sox is unavailable, it backs off to pydub+ffmpeg. Args: in_path: A path to the source file to extract a portion of out_path: A path describing the to-be-created WAV file. start_time: The point in the source WAV file at which to begin extraction. end_time: The point in the source WAV file at which to end extraction.
codesearchnet
def UnlockScanNode(self, path_spec): if not self.HasScanNode(path_spec): raise KeyError('Scan node does not exist.') if path_spec not in self._locked_scan_nodes: raise KeyError('Scan node is not locked.') del self._locked_scan_nodes[path_spec] self._scan_nodes[path_spec].scanned = False
Marks a scan node as unlocked. Args: path_spec (PathSpec): path specification. Raises: KeyError: if the scan node does not exist or is not locked.
juraj-google-style
def get_tensor_file_paths(self, node_name, output_slot, debug_op, device_name=None): device_name = self._infer_device_name(device_name, node_name) watch_key = _get_tensor_watch_key(node_name, output_slot, debug_op) if watch_key not in self._watch_key_to_datum[device_name]: raise WatchKeyDoesNotExistInDebugDumpDirError('Watch key "%s" does not exist in the debug dump of device %s' % (watch_key, device_name)) return [datum.file_path for datum in self._watch_key_to_datum[device_name][watch_key]]
Get the file paths from a debug-dumped tensor. Args: node_name: (`str`) name of the node that the tensor is produced by. output_slot: (`int`) output slot index of tensor. debug_op: (`str`) name of the debug op. device_name: (`str`) name of the device. If there is only one device or if the specified debug_watch_key exists on only one device, this argument is optional. Returns: List of file path(s) loaded. This is a list because each debugged tensor may be dumped multiple times. Raises: WatchKeyDoesNotExistInDebugDumpDirError: If the tensor does not exist in the debug-dump data.
github-repos
def load_actor_class(self, driver_id, function_descriptor): function_id = function_descriptor.function_id actor_class = self._loaded_actor_classes.get(function_id, None) if (actor_class is None): if self._worker.load_code_from_local: driver_id = ray.DriverID.nil() actor_class = self._load_actor_from_local(driver_id, function_descriptor) else: actor_class = self._load_actor_class_from_gcs(driver_id, function_descriptor) self._loaded_actor_classes[function_id] = actor_class module_name = function_descriptor.module_name actor_class_name = function_descriptor.class_name actor_methods = inspect.getmembers(actor_class, predicate=is_function_or_method) for (actor_method_name, actor_method) in actor_methods: method_descriptor = FunctionDescriptor(module_name, actor_method_name, actor_class_name) method_id = method_descriptor.function_id executor = self._make_actor_method_executor(actor_method_name, actor_method, actor_imported=True) self._function_execution_info[driver_id][method_id] = FunctionExecutionInfo(function=executor, function_name=actor_method_name, max_calls=0) self._num_task_executions[driver_id][method_id] = 0 self._num_task_executions[driver_id][function_id] = 0 return actor_class
Load the actor class. Args: driver_id: Driver ID of the actor. function_descriptor: Function descriptor of the actor constructor. Returns: The actor class.
codesearchnet
def SetFlushInterval(self, flush_interval): self._flush_interval = flush_interval logger.debug('Elasticsearch flush interval: {0:d}'.format(flush_interval))
Set the flush interval. Args: flush_interval (int): number of events to buffer before doing a bulk insert.
codesearchnet
def intersection_update(self, *others): for other in map(self._as_mapping, others): for (element, current_count) in list(self.items()): multiplicity = other.get(element, 0) if (multiplicity < current_count): self[element] = multiplicity
r"""Update the multiset, keeping only elements found in it and all others. >>> ms = Multiset('aab') >>> ms.intersection_update('bc') >>> sorted(ms) ['b'] You can also use the ``&=`` operator for the same effect. However, the operator version will only accept a set as other operator, not any iterable, to avoid errors. >>> ms = Multiset('aabc') >>> ms &= Multiset('abbd') >>> sorted(ms) ['a', 'b'] For a variant of the operation which does not modify the multiset, but returns a new multiset instead see :meth:`intersection`. Args: others: The other sets to intersect this multiset with. Can also be any :class:`~typing.Iterable`\[~T] or :class:`~typing.Mapping`\[~T, :class:`int`] which are then converted to :class:`Multiset`\[~T].
codesearchnet
def proc_val(key, val): list_keys = ("LDAUU", "LDAUL", "LDAUJ", "MAGMOM", "DIPOL", "LANGEVIN_GAMMA", "QUAD_EFG", "EINT") bool_keys = ("LDAU", "LWAVE", "LSCALU", "LCHARG", "LPLANE", "LUSE_VDW", "LHFCALC", "ADDGRID", "LSORBIT", "LNONCOLLINEAR") float_keys = ("EDIFF", "SIGMA", "TIME", "ENCUTFOCK", "HFSCREEN", "POTIM", "EDIFFG", "AGGAC", "PARAM1", "PARAM2") int_keys = ("NSW", "NBANDS", "NELMIN", "ISIF", "IBRION", "ISPIN", "ICHARG", "NELM", "ISMEAR", "NPAR", "LDAUPRINT", "LMAXMIX", "ENCUT", "NSIM", "NKRED", "NUPDOWN", "ISPIND", "LDAUTYPE", "IVDW") def smart_int_or_float(numstr): if numstr.find(".") != -1 or numstr.lower().find("e") != -1: return float(numstr) else: return int(numstr) try: if key in list_keys: output = [] toks = re.findall( r"(-?\d+\.?\d*)\*?(-?\d+\.?\d*)?\*?(-?\d+\.?\d*)?", val) for tok in toks: if tok[2] and "3" in tok[0]: output.extend( [smart_int_or_float(tok[2])] * int(tok[0]) * int(tok[1])) elif tok[1]: output.extend([smart_int_or_float(tok[1])] * int(tok[0])) else: output.append(smart_int_or_float(tok[0])) return output if key in bool_keys: m = re.match(r"^\.?([T|F|t|f])[A-Za-z]*\.?", val) if m: if m.group(1) == "T" or m.group(1) == "t": return True else: return False raise ValueError(key + " should be a boolean type!") if key in float_keys: return float(re.search(r"^-?\d*\.?\d*[e|E]?-?\d*", val).group(0)) if key in int_keys: return int(re.match(r"^-?[0-9]+", val).group(0)) except ValueError: pass try: val = int(val) return val except ValueError: pass try: val = float(val) return val except ValueError: pass if "true" in val.lower(): return True if "false" in val.lower(): return False return val.strip().capitalize()
Static helper method to convert INCAR parameters to proper types, e.g., integers, floats, lists, etc. Args: key: INCAR parameter key val: Actual value of INCAR parameter.
juraj-google-style
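Illustrative calls covering the four key classes handled by `proc_val` above (assumes the function and its `re` import are in scope); the list branch expands VASP's "n*value" shorthand.

print(proc_val('NSW', '99'))            # 99, int key
print(proc_val('EDIFF', '1E-4'))        # 0.0001, float key
print(proc_val('LDAU', '.TRUE.'))       # True, bool key
print(proc_val('MAGMOM', '2*5.0 1.0'))  # [5.0, 5.0, 1.0], list key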
def api_request(self, method_name, params): url = self._method_url(method_name) data = json.dumps(params) return self._make_request(url=url, method='post', data=data)
Execute an arbitrary method. Args: method_name (str): include the controller name: 'devices/search' params (dict): the method parameters Returns: A dict with the response Raises: requests.exceptions.HTTPError
codesearchnet
def ensure_app_data_dir(appname, *args): from ubelt import util_path dpath = get_app_data_dir(appname, *args) util_path.ensuredir(dpath) return dpath
Calls `get_app_data_dir` but ensures the directory exists. Args: appname (str): the name of the application *args: any other subdirectories may be specified SeeAlso: get_app_data_dir Example: >>> import ubelt as ub >>> dpath = ub.ensure_app_data_dir('ubelt') >>> assert exists(dpath)
juraj-google-style
def _get_schema(cls, schema): if isinstance(schema, string_types): schema = cls._get_object_from_python_path(schema) if isclass(schema): schema = schema() if (not isinstance(schema, Schema)): raise TypeError('The schema must be a path to a Marshmallow schema or a Marshmallow schema.') return schema
Method that will fetch a Marshmallow schema flexibly. Args: schema (marshmallow.Schema|str): Either the schema class, an instance of a schema, or a Python path to a schema. Returns: marshmallow.Schema: The desired schema. Raises: TypeError: This is raised if the provided object isn't a Marshmallow schema.
codesearchnet
def fwd(self, x_data): x_data = numpy.asfarray(x_data) shape = x_data.shape x_data = x_data.reshape(len(self), (- 1)) (lower, upper) = evaluation.evaluate_bound(self, x_data) q_data = numpy.zeros(x_data.shape) indices = (x_data > upper) q_data[indices] = 1 indices = ((~ indices) & (x_data >= lower)) q_data[indices] = numpy.clip(evaluation.evaluate_forward(self, x_data), a_min=0, a_max=1)[indices] q_data = q_data.reshape(shape) return q_data
Forward Rosenblatt transformation. Args: x_data (numpy.ndarray): Location for the distribution function. ``x_data.shape`` must be compatible with distribution shape. Returns: (numpy.ndarray): Evaluated distribution function values, where ``out.shape==x_data.shape``.
codesearchnet
def delete_asset(self, asset_id, asset_type): return self.asset(asset_id, asset_type=asset_type, action='DELETE')
Delete the asset with the provided asset_id. Args: asset_id: The id of the asset. asset_type: The asset type. Returns: The response from the asset DELETE request.
codesearchnet
def translate_node_id(self, ni: PrefName, sctx: SchemaContext) -> QualName: p, s, loc = ni.partition(":") if not s: return (ni, sctx.default_ns) try: mdata = self.modules[sctx.text_mid] except KeyError: raise ModuleNotRegistered(*sctx.text_mid) from None try: return (loc, self.namespace(mdata.prefix_map[p])) except KeyError: raise UnknownPrefix(p, sctx.text_mid) from None
Translate node identifier to a qualified name. Args: ni: Node identifier (with optional prefix). sctx: SchemaContext. Raises: ModuleNotRegistered: If the module identifier in `sctx` is not registered in the data model. UnknownPrefix: If the prefix specified in `ni` is not declared.
juraj-google-style
def generate(self, descriptors): model_ids = self.search_tree.adj_list.keys() target_graph = None father_id = None descriptors = deepcopy(descriptors) elem_class = Elem if self.optimizemode is OptimizeMode.Maximize: elem_class = ReverseElem pq = PriorityQueue() temp_list = [] for model_id in model_ids: metric_value = self.searcher.get_metric_value_by_id(model_id) temp_list.append((metric_value, model_id)) temp_list = sorted(temp_list) for metric_value, model_id in temp_list: graph = self.searcher.load_model_by_id(model_id) graph.clear_operation_history() graph.clear_weights() pq.put(elem_class(metric_value, model_id, graph)) t = 1.0 t_min = self.t_min alpha = 0.9 opt_acq = self._get_init_opt_acq_value() while not pq.empty() and t > t_min: elem = pq.get() if self.optimizemode is OptimizeMode.Maximize: temp_exp = min((elem.metric_value - opt_acq) / t, 1.0) else: temp_exp = min((opt_acq - elem.metric_value) / t, 1.0) ap = math.exp(temp_exp) if ap >= random.uniform(0, 1): for temp_graph in transform(elem.graph): if contain(descriptors, temp_graph.extract_descriptor()): continue temp_acq_value = self.acq(temp_graph) pq.put(elem_class(temp_acq_value, elem.father_id, temp_graph)) descriptors.append(temp_graph.extract_descriptor()) if self._accept_new_acq_value(opt_acq, temp_acq_value): opt_acq = temp_acq_value father_id = elem.father_id target_graph = deepcopy(temp_graph) t *= alpha if father_id is None: return None, None nm_graph = self.searcher.load_model_by_id(father_id) for args in target_graph.operation_history: getattr(nm_graph, args[0])(*list(args[1:])) return nm_graph, father_id
Generate new architecture. Args: descriptors: All the searched neural architectures. Returns: graph: An instance of Graph. A morphed neural network with weights. father_id: The father node ID in the search tree.
juraj-google-style
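The acceptance test in `generate` above is standard simulated annealing. A minimal standalone sketch of that core loop under a toy objective (all names here are hypothetical, not part of the source):

import math
import random

def anneal(start, neighbor, score, t=1.0, t_min=0.01, alpha=0.9):
    # Maximizes `score`, mirroring the exp(min(delta / t, 1.0)) acceptance above.
    best, best_score = start, score(start)
    current = start
    while t > t_min:
        cand = neighbor(current)
        ap = math.exp(min((score(cand) - best_score) / t, 1.0))
        if ap >= random.uniform(0, 1):
            current = cand
            if score(cand) > best_score:
                best, best_score = cand, score(cand)
        t *= alpha  # cool down
    return best

# Toy run: random walk on integers, maximizing -|x| (optimum at 0).
print(anneal(10, lambda x: x + random.choice([-1, 1]), lambda x: -abs(x)))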
def read_as_base64(fn): with open(fn) as unpacked_file: with tempfile.TemporaryFile() as b64_file: base64.encode(unpacked_file, b64_file) b64_file.flush() b64_file.seek(0) return b64_file.read()
Convert given `fn` to base64 and return it. This method does the process in not-so-much memory consuming way. Args: fn (str): Path to the file which should be converted. Returns: str: File encoded as base64.
codesearchnet
def __init__(self, cumulative=IGNORED, name=IGNORED, scalar=IGNORED, kind=IGNORED): if name != IGNORED and (not isinstance(name, MetricStructuredNameMatcher)): raise ValueError('name must be a MetricStructuredNameMatcher.') self.cumulative = cumulative self.name = name self.scalar = scalar self.kind = kind
Creates a MetricUpdateMatcher. Any property not passed in to the constructor will be ignored when matching. Args: cumulative: A boolean. name: A MetricStructuredNameMatcher object that matches the name. scalar: An integer with the metric update. kind: A string defining the kind of counter.
github-repos
def get_config(self): all_args = tf_inspect.getfullargspec(self.__init__).args config = {'name': self.name, 'trainable': self.trainable} if hasattr(self, '_batch_input_shape'): config['batch_input_shape'] = self._batch_input_shape config['dtype'] = policy.serialize(self._dtype_policy) if hasattr(self, 'dynamic'): if self.dynamic: config['dynamic'] = self.dynamic elif 'dynamic' in all_args: all_args.remove('dynamic') expected_args = config.keys() extra_args = [arg for arg in all_args if arg not in expected_args] if len(extra_args) > 1 and hasattr(self.get_config, '_is_default'): raise NotImplementedError('Layers with arguments in `__init__` must override `get_config`.') return config
Returns the config of the layer. A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration. The config of a layer does not include connectivity information, nor the layer class name. These are handled by `Network` (one layer of abstraction above). Returns: Python dictionary.
github-repos
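The `NotImplementedError` branch above is why Keras layers with extra `__init__` arguments must override `get_config`. A minimal sketch of the expected pattern (`ScaledDense` is a hypothetical layer, not part of the source):

import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    def __init__(self, units, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.scale = scale

    def call(self, inputs):
        return inputs * self.scale  # placeholder computation

    def get_config(self):
        # Extend the base config with every extra constructor argument.
        config = super().get_config()
        config.update({'units': self.units, 'scale': self.scale})
        return config

layer = ScaledDense(4, scale=0.5)
clone = ScaledDense.from_config(layer.get_config())  # round-trips cleanly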
def _apply_credentials(auto_refresh=True, credentials=None, headers=None): token = credentials.get_credentials().access_token if (auto_refresh is True): if (token is None): token = credentials.refresh(access_token=None, timeout=10) elif credentials.jwt_is_expired(): token = credentials.refresh(timeout=10) headers.update({'Authorization': 'Bearer {}'.format(token)})
Update Authorization header. Update request headers with latest `access_token`. Perform token `refresh` if token is ``None``. Args: auto_refresh (bool): Perform token refresh if access_token is ``None`` or expired. Defaults to ``True``. credentials (class): Read-only credentials. headers (class): Requests `CaseInsensitiveDict`.
codesearchnet
def set_large_file_size(self, st_size): self._check_positive_int(st_size) if self.st_size: self.size = 0 if self.filesystem: self.filesystem.change_disk_usage(st_size, self.name, self.st_dev) self.st_size = st_size self._byte_contents = None
Sets the self.st_size attribute and replaces self.content with None. Provided specifically to simulate very large files without regards to their content (which wouldn't fit in memory). Note that read/write operations with such a file raise :py:class:`FakeLargeFileIoException`. Args: st_size: (int) The desired file size Raises: IOError: if the st_size is not a non-negative integer, or if st_size exceeds the available file system space
codesearchnet
def _attach_debugger_logic(model, debug_path: Optional[str]='.', do_prune_layers: Optional[bool]=True, use_repr: bool=True): class_name = model.__class__.__name__ model._call_tree = {'module_path': class_name, 'inputs': None, 'outputs': None, 'children': []} model._debugger_model_call_stack = [] model._debugger_module_dump_name = class_name if debug_path: try: os.makedirs(debug_path, exist_ok=True) except Exception as e: raise ValueError(f'Unexpected or existing debug_path={debug_path}.') from e def wrap_forward(module, full_path): orig_forward = module.forward @functools.wraps(orig_forward) def wrapped_forward(*inps, **kws): if _is_rank_zero(): dict_inputs = {'args': inps, 'kwargs': kws} dict_inputs = {k: dict_inputs[k] for k in dict_inputs if len(dict_inputs[k]) > 0} node = {'module_path': full_path, 'inputs': _serialize_io(dict_inputs, debug_path=debug_path, use_repr=use_repr, path_to_value=f'{full_path}_inputs'), 'outputs': None, 'children': []} model._debugger_model_call_stack.append(node) with torch.no_grad(): out = orig_forward(*inps, **kws) if _is_rank_zero(): if sum((1 for _ in module.named_children())) > 0: node['outputs'] = None else: node['outputs'] = _serialize_io(out, debug_path=debug_path, use_repr=use_repr, path_to_value=f'{full_path}_outputs') finished = model._debugger_model_call_stack.pop() if not finished['children']: finished.pop('children') if model._debugger_model_call_stack: model._debugger_model_call_stack[-1]['children'].append(finished) return out module.forward = wrapped_forward for name, submodule in model.named_modules(): if name == '': continue wrap_forward(submodule, f'{class_name}.{name}') real_top_forward = model.forward @functools.wraps(real_top_forward) def top_wrapped_forward(*inps, **kws): if _is_rank_zero(): top_node = {'module_path': f'{class_name} (top-level)', 'inputs': _serialize_io({'args': inps, 'kwargs': kws}, debug_path=debug_path, use_repr=use_repr, path_to_value=f'{class_name}_inputs'), 'outputs': None, 'children': []} model._debugger_model_call_stack.append(top_node) out = real_top_forward(*inps, **kws) if _is_rank_zero() and model._debugger_model_call_stack: top_node['outputs'] = _serialize_io(out, debug_path=debug_path, use_repr=use_repr, path_to_value=f'{class_name}_outputs') finished = model._debugger_model_call_stack.pop() model._call_tree['inputs'] = finished['inputs'] model._call_tree['outputs'] = finished['outputs'] model._call_tree['children'] = finished['children'] [model._call_tree.pop(k, None) for k in list(model._call_tree.keys()) if not model._call_tree[k]] if do_prune_layers: prune_intermediate_layers(model._call_tree) log_model_debug_trace(debug_path=debug_path, model=model) return out model.forward = top_wrapped_forward
Attaches a debugging wrapper to every module in the model. This records structured inputs and outputs during the forward pass into a call tree. Args: model (`PreTrainedModel`, `nn.Module`): Model to wrap. debug_path (`str`): Optional directory to dump debug JSON files. do_prune_layers (`bool`, *optional*, defaults to `True`): Whether to prune intermediate layers. use_repr (bool, *optional*, defaults to `True`): Whether to save a `repr()`-ized version of the tensors as the `value` property in the associated FULL_TENSORS.json file, or to store full tensors in separate SafeTensors files and store the relative path to that file in the `value` property.
github-repos
def save(self, clean=True): ret = {} if clean: self._dirty = False else: ret['_dirty'] = self._dirty return ret
Serialize into raw representation. Clears the dirty bit by default. Args: clean (bool): Whether to clear the dirty bit. Returns: dict: Raw.
codesearchnet
def evaluate(self, node: InstanceNode) -> XPathValue: return self._eval(XPathContext(node, node, 1, 1))
Evaluate the receiver and return the result. Args: node: Context node for XPath evaluation. Raises: XPathTypeError: If a subexpression of the receiver is of a wrong type.
juraj-google-style
def _rewrite_input_as_indexed_slices(body_grad_graph, grad_output_slices, forward_input, loop_vars): init_slices = _create_grad_indexed_slices_init(grad_output_slices, forward_input) with body_grad_graph.as_default(): input_slices = indexed_slices.IndexedSlices(values=body_grad_graph.capture(init_slices.values, allowlisted=True), indices=body_grad_graph.capture(init_slices.indices, allowlisted=True), dense_shape=body_grad_graph.capture(init_slices.dense_shape, allowlisted=True)) for t in _flatten(init_slices): captured_t = body_grad_graph.captures.pop(t) body_grad_graph.inputs.remove(captured_t) new_output_slices = _rewrite_grad_indexed_slices_output(grad_output_slices, input_slices) return _update_indexed_slices_param(body_grad_graph, loop_vars, init_slices, input_slices, new_output_slices, grad_output_slices)
Rewrites grad_output_slices's corresponding input to be an IndexedSlices. This rewrite requires that forward_input was captured in the forward loop, i.e. is not a user-specified loop variable. This is important because the rewrite assumes that forward_input is passed through to its corresponding output unchanged. This assumption is used in _rewrite_input_as_indexed_slices, which depends on the exact gradient structure produced by the input's fanout. This can yield a more efficient computation than using _rewrite_output_as_tensor, since it preserves the IndexedSlices structure instead of converting the IndexedSlices to a dense Tensor. Args: body_grad_graph: _WhileBodyGradFuncGraph. grad_output_slices: IndexedSlices output of body_grad_graph. forward_input: the corresponding Tensor input to the forward loop. loop_vars: list of Tensors. The inputs to body_grad_graph. Returns: The new loop_vars to pass to body_grad_graph.
github-repos
def chat(self, id): json = self.skype.conn("GET", "{0}/users/ME/conversations/{1}".format(self.skype.conn.msgsHost, id), auth=SkypeConnection.Auth.RegToken, params={"view": "msnp24Equivalent"}).json() cls = SkypeSingleChat if "threadProperties" in json: info = self.skype.conn("GET", "{0}/threads/{1}".format(self.skype.conn.msgsHost, json.get("id")), auth=SkypeConnection.Auth.RegToken, params={"view": "msnp24Equivalent"}).json() json.update(info) cls = SkypeGroupChat return self.merge(cls.fromRaw(self.skype, json))
Get a single conversation by identifier. Args: id (str): single or group chat identifier Returns: SkypeSingleChat or SkypeGroupChat: the requested conversation
juraj-google-style
def get_sequence_sliding_window_properties(self, scale, window, representative_only=True): if representative_only: if (not self.representative_sequence): log.warning('{}: no representative sequence set, cannot get sequence properties'.format(self.id)) return if (not self.representative_sequence.seq): log.warning('{}: representative sequence {} set, but no sequence stored. Cannot get sequence properties.'.format(self.id, self.representative_sequence.id)) return self.representative_sequence.get_sliding_window_properties(scale=scale, window=window) if (not representative_only): for s in self.sequences: if (not s.seq): log.warning('{}: no sequence stored. Cannot get sequence properties.'.format(s.id)) continue else: s.get_sliding_window_properties(scale=scale, window=window)
Run Biopython ProteinAnalysis with a sliding window to calculate a given property. Results are stored in the protein's respective SeqProp objects at ``.letter_annotations`` Args: scale (str): Scale name window (int): Sliding window size representative_only (bool): If analysis should only be run on the representative sequence
codesearchnet
def range_dimension_tensor(self, name='range_dimension_tensor'): with self._name_scope(name): return self._range_dimension_tensor()
Dimension (in the sense of vector spaces) of the range of this operator. Determined at runtime. If this operator acts like the batch matrix `A` with `A.shape = [B1,...,Bb, M, N]`, then this returns `M`. Args: name: A name for this `Op`. Returns: `int32` `Tensor`
github-repos
def create_trial_from_spec(spec, output_path, parser, **trial_kwargs): try: args = parser.parse_args(to_argv(spec)) except SystemExit: raise TuneError('Error parsing args, see above message', spec) if ('resources_per_trial' in spec): trial_kwargs['resources'] = json_to_resources(spec['resources_per_trial']) return Trial(trainable_name=spec['run'], config=spec.get('config', {}), local_dir=os.path.join(args.local_dir, output_path), stopping_criterion=spec.get('stop', {}), checkpoint_freq=args.checkpoint_freq, checkpoint_at_end=args.checkpoint_at_end, keep_checkpoints_num=args.keep_checkpoints_num, checkpoint_score_attr=args.checkpoint_score_attr, export_formats=spec.get('export_formats', []), restore_path=spec.get('restore'), upload_dir=args.upload_dir, trial_name_creator=spec.get('trial_name_creator'), loggers=spec.get('loggers'), sync_function=spec.get('sync_function'), max_failures=args.max_failures, **trial_kwargs)
Creates a Trial object from parsing the spec. Arguments: spec (dict): A resolved experiment specification. The args here should correspond to the command line flags in ray.tune.config_parser. output_path (str): A specific output path within the local_dir. Typically the name of the experiment. parser (ArgumentParser): An argument parser object from make_parser. trial_kwargs: Extra keyword arguments used in instantiating the Trial. Returns: A trial object with corresponding parameters to the specification.
codesearchnet
def from_text_files(cls, path, field, train, validation, test=None, bs=64, bptt=70, **kwargs): (trn_ds, val_ds, test_ds) = ConcatTextDataset.splits(path, text_field=field, train=train, validation=validation, test=test) return cls(path, field, trn_ds, val_ds, test_ds, bs, bptt, **kwargs)
Method used to instantiate a LanguageModelData object that can be used for a supported nlp task. Args: path (str): the absolute path in which temporary model data will be saved field (Field): torchtext field train (str): file location of the training data validation (str): file location of the validation data test (str): file location of the testing data bs (int): batch size to use bptt (int): back propagation through time hyper-parameter kwargs: other arguments Returns: a LanguageModelData instance, which most importantly, provides us the datasets for training, validation, and testing Note: The train, validation, and test path can be pointed to any file (or folder) that contains a valid text corpus.
codesearchnet
def remove_file(profile, branch, file_path, commit_message=None): branch_sha = get_branch_sha(profile, branch) tree = get_files_in_branch(profile, branch_sha) new_tree = remove_file_from_tree(tree, file_path) data = trees.create_tree(profile, new_tree) sha = data.get('sha') if (not commit_message): commit_message = (('Deleted ' + file_path) + '.') parents = [branch_sha] commit_data = commits.create_commit(profile, commit_message, sha, parents) commit_sha = commit_data.get('sha') ref_data = refs.update_ref(profile, ('heads/' + branch), commit_sha) return ref_data
Remove a file from a branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch The name of a branch. file_path The path of the file to delete. commit_message A commit message to give to the commit. Returns: A dict with data about the branch's new ref (it includes the new SHA the branch's HEAD points to, after committing the deletion).
codesearchnet
def resolve_topic(topic): try: (module_name, _, class_name) = topic.partition('#') module = importlib.import_module(module_name) except ImportError as e: raise TopicResolutionError('{}: {}'.format(topic, e)) try: cls = resolve_attr(module, class_name) except AttributeError as e: raise TopicResolutionError('{}: {}'.format(topic, e)) return cls
Return class described by given topic. Args: topic: A string describing a class. Returns: A class. Raises: TopicResolutionError: If there is no such class.
codesearchnet
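Topic strings take the form "<module path>#<attribute path>", which is what the `partition('#')` reconstruction above relies on. A usage sketch (assumes `resolve_topic` and its `resolve_attr` helper, a dotted `getattr` walker from the same module, are in scope):

cls = resolve_topic('collections#OrderedDict')
print(cls)  # <class 'collections.OrderedDict'>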
def copy_function(func, name=None): code = func.__code__ newname = name or func.__name__ newcode = CodeType( code.co_argcount, code.co_kwonlyargcount, code.co_nlocals, code.co_stacksize, code.co_flags, code.co_code, code.co_consts, code.co_names, code.co_varnames, code.co_filename, newname, code.co_firstlineno, code.co_lnotab, code.co_freevars, code.co_cellvars, ) newfunc = FunctionType( newcode, func.__globals__, newname, func.__defaults__, func.__closure__, ) newfunc.__dict__.update(func.__dict__) return newfunc
Copy a function object with different name. Args: func (function): Function to be copied. name (string, optional): Name of the new function. If not specified, the same name as `func` will be used. Returns: newfunc (function): New function with different name.
juraj-google-style
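On Python 3.8+ the `CodeType` constructor gained `co_posonlyargcount`, so the positional call in `copy_function` above raises `TypeError` there; `CodeType.replace` (added in 3.8) sidesteps the signature churn. A sketch of the same copy using it (hypothetical name `copy_function_38`):

from types import FunctionType

def copy_function_38(func, name=None):
    newname = name or func.__name__
    # replace() keeps every other code attribute intact.
    newcode = func.__code__.replace(co_name=newname)
    newfunc = FunctionType(newcode, func.__globals__, newname,
                           func.__defaults__, func.__closure__)
    newfunc.__dict__.update(func.__dict__)
    return newfunc

def greet():
    return 'hi'

print(copy_function_38(greet, 'hello').__name__)  # 'hello'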
def __init__(self, experimenter=None, exp_type=None): super().__init__() self.experimenter = experimenter self.exp_type = exp_type
Create an ExperimenterMultipartHeader with the parameters below. Args: experimenter: Experimenter ID which takes the same form as in struct ofp_experimenter_header ( :class:`~pyof.v0x04.symmetric.experimenter.ExperimenterHeader`) exp_type: Experimenter defined.
juraj-google-style
def get_symmetry_operations(self, cartesian=False): (rotation, translation) = self._get_symmetry() symmops = [] mat = self._structure.lattice.matrix.T invmat = np.linalg.inv(mat) for (rot, trans) in zip(rotation, translation): if cartesian: rot = np.dot(mat, np.dot(rot, invmat)) trans = np.dot(trans, self._structure.lattice.matrix) op = SymmOp.from_rotation_and_translation(rot, trans) symmops.append(op) return symmops
Return symmetry operations as a list of SymmOp objects. By default returns fractional coord symmops, but Cartesian can be returned too. Args: cartesian (bool): Whether to return SymmOps in Cartesian coordinates instead of fractional. Defaults to False. Returns: ([SymmOp]): List of symmetry operations.
codesearchnet
def bloom_gelu_forward(x: torch.Tensor) -> torch.Tensor: return x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))
Custom bias GELU function. Adapted from Megatron-DeepSpeed code. Here we use a simple implementation (inference) to make the model jitable. Args: x (`torch.tensor`): input hidden states
github-repos
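The magic constant above is sqrt(2/pi) from the tanh approximation gelu(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3))). A quick check against PyTorch's built-in tanh GELU (assumes `bloom_gelu_forward` is in scope; `approximate='tanh'` requires torch >= 1.12):

import math
import torch

print(math.sqrt(2 / math.pi))  # 0.7978845608028654, matches 0.79788456

x = torch.linspace(-3, 3, 7)
print(torch.allclose(bloom_gelu_forward(x),
                     torch.nn.functional.gelu(x, approximate='tanh'),
                     atol=1e-6))  # True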
def add_listener(self, callback, event_type=None): listener_uid = uuid4() self.listeners.append( { 'uid': listener_uid, 'callback': callback, 'event_type': event_type } ) return listener_uid
Add a listener that will send a callback when the client receives an event. Args: callback (func(roomchunk)): Callback called when an event arrives. event_type (str): The event_type to filter for. Returns: uuid.UUID: Unique id of the listener, can be used to identify the listener.
juraj-google-style
def find_certs() -> str: bundle = path.realpath(path.dirname(httplib2.CA_CERTS)) if (not bundle.startswith(path.dirname(httplib2.__file__))): return bundle for (platform, files) in PLATFORM_FILES.items(): if sys.platform.startswith(platform): for cert_file in files: if path.exists(cert_file): return cert_file if path.exists(getenv('CURL_CA_BUNDLE', '')): return getenv('CURL_CA_BUNDLE') if ALLOW_FALLBACK: warnings.warn('No system certs detected, falling back to bundled', RuntimeWarning) return httplib2.CA_CERTS else: raise RuntimeError('No system certs detected!')
Find suitable certificates for ``httplib2``. Warning: The default behaviour is to fall back to the bundled certificates when no system certificates can be found. If you're packaging ``jnrbase`` *please* set ``ALLOW_FALLBACK`` to ``False`` to disable this very much unwanted behaviour, but please maintain the option so that downstream users can inspect the configuration easily. See also: :pypi:`httplib2` Returns: Path to SSL certificates Raises: RuntimeError: When no suitable certificates are found
codesearchnet
def find_wells_without_curve(self, mnemonic, alias=None): return Project([w for w in self if (w.get_curve(mnemonic, alias=alias) is None)])
Returns a new Project with only the wells which DO NOT have the named curve. Args: mnemonic (str): the name of the curve to look for. alias (dict): a welly alias dictionary. Returns: project.
codesearchnet
def __init__(self, path, auto_reboot_args=None, keep_explorer=False, add_all_devices=False): super(SimpleTestResult, self).__init__() self.path = path self.auto_reboot_args = auto_reboot_args self.result = json.load(open(self.path, 'r')) self.log_handler = None self.started = None self.keep_explorer = keep_explorer self.add_all_devices = add_all_devices SimpleTestResult.executions += 1 logger.info('Initial state is %s', json.dumps(self.result, indent=2))
Record test results in json file Args: path (str): File path to record the results auto_reboot_args: Arguments for automatically rebooting when the harness dies
juraj-google-style
def group_sub_entities(self, entities: List[dict]) -> dict: entity = entities[0]['entity'].split('-', 1)[-1] scores = np.nanmean([entity['score'] for entity in entities]) tokens = [entity['word'] for entity in entities] entity_group = {'entity_group': entity, 'score': np.mean(scores), 'word': self.tokenizer.convert_tokens_to_string(tokens), 'start': entities[0]['start'], 'end': entities[-1]['end']} return entity_group
Group together the adjacent tokens with the same entity predicted. Args: entities (`List[dict]`): The entities predicted by the pipeline.
github-repos
def assertRaisesWithPredicateMatch(self, exception_type, expected_err_re_or_predicate): if callable(expected_err_re_or_predicate): predicate = expected_err_re_or_predicate else: def predicate(e): if isinstance(e, errors.OpError): e = cast(errors.OpError, e) err_str = cast(str, e.message) op = e.op else: err_str = str(e) op = None while op is not None: err_str += '\nCaused by: ' + op.name op = op._original_op logging.info("Searching within error strings: '%s' within '%s'", expected_err_re_or_predicate, err_str) return re.search(expected_err_re_or_predicate, err_str) try: yield self.fail(exception_type.__name__ + ' not raised') except Exception as e: if not isinstance(e, exception_type) or not predicate(e): raise AssertionError('Exception of type %s: %s' % (str(type(e)), str(e)))
Returns a context manager to enclose code expected to raise an exception. If the exception is an OpError, the op stack is also included in the message predicate search. Args: exception_type: The expected type of exception that should be raised. expected_err_re_or_predicate: If this is callable, it should be a function of one argument that inspects the passed-in exception and returns True (success) or False (please fail the test). Otherwise, the error message is expected to match this regular expression partially. Returns: A context manager to surround code that is expected to raise an exception.
github-repos
def __call__(self, request: Union[Chunk, List[Chunk]], *args, **kwargs) -> List[Tuple[Chunk, Dict[str, Any]]]: requests = request if isinstance(request, list) else [request] query = self.vector_search_parameters.format_query(requests) if self.log_query: _LOGGER.info('Executing query %s', query) query_job = self.client.query(query) results = query_job.result() results_by_id = {} for result_row in results: result_dict = dict(result_row.items()) results_by_id[result_row.id] = result_dict response = [] for chunk in requests: result_dict = results_by_id.get(chunk.id, {}) response.append((chunk, result_dict)) return response
Process request(s) using BigQuery vector search. Args: request: Single Chunk with embedding or list of Chunk's with embeddings to process Returns: Chunk(s) where chunk.metadata['enrichment_output'] contains the data retrieved via BigQuery VECTOR_SEARCH.
github-repos
def min_count(self, n=1): word_count = {w:c for w,c in iteritems(self.word_count) if c >= n} return CountedVocabulary(word_count=word_count)
Returns a vocabulary after eliminating the words that appear fewer than `n` times. Args: n (integer): specifies the minimum word frequency allowed.
juraj-google-style
def rename(self, source_file_names, destination_file_names): raise NotImplementedError
Rename the files at the source list to the destination list. Source and destination lists should be of the same size. Args: source_file_names: List of file paths that need to be moved destination_file_names: List of destination_file_names for the files Raises: ``BeamIOError``: if any of the rename operations fail
github-repos
def _init_metadata_service(self, version): metadata_cfg = self._load_config_section(CONFIG_METADATA_SECTION) self._token_metadata = metadata_cfg[CONFIG_TOKEN] proto = metadata_cfg[CONFIG_PROTOCOL] host = metadata_cfg[CONFIG_HOST] self._metadata = MetadataService(host, version) self._metadata.base_protocol = proto self._metadata.set_auth(self._token_metadata)
Method to initialize the Metadata Service from the config data Args: version (string): Version of Boss API to use. Returns: None Raises: (KeyError): if given invalid version.
juraj-google-style
def get_op_name(tensor_name): if not tensor_name: raise ValueError(f'Tensor name cannot be empty or None. Received: {tensor_name}.') if tensor_name.startswith('^'): tensor_name = tensor_name[1:] if ':' in tensor_name: op_name, _ = tensor_name.split(':') return op_name return tensor_name
Extract the Op name from a Tensor name. The Op name is everything before a colon, if present, not including any ^ prefix denoting a control dependency. Args: tensor_name: the full name of a Tensor in the graph. Returns: The name of the Op of which the given Tensor is an output. Raises: ValueError: if tensor_name is None or empty.
github-repos
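Behavior check for the two stripping rules in `get_op_name` above (assumes the function is in scope): the `^` control-dependency prefix and the `:slot` suffix are both dropped.

assert get_op_name('^while/Add:0') == 'while/Add'
assert get_op_name('MatMul') == 'MatMul'  # bare op names pass through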
def job_history(backend): year = widgets.Output(layout=widgets.Layout(display='flex-inline', align_items='center', min_height='400px')) month = widgets.Output(layout=widgets.Layout(display='flex-inline', align_items='center', min_height='400px')) week = widgets.Output(layout=widgets.Layout(display='flex-inline', align_items='center', min_height='400px')) tabs = widgets.Tab(layout=widgets.Layout(max_height='620px')) tabs.children = [year, month, week] tabs.set_title(0, 'Year') tabs.set_title(1, 'Month') tabs.set_title(2, 'Week') tabs.selected_index = 1 _build_job_history(tabs, backend) return tabs
Widget for displaying job history Args: backend (IBMQbackend): The backend. Returns: Tab: A tab widget for history images.
juraj-google-style
def create(cls, **kwargs): try: return cls.add(cls.new(**kwargs)) except: cls.session.rollback() raise
Initializes a new instance, adds it to the db and commits the transaction. Args: **kwargs: The keyword arguments for the init constructor. Examples: >>> user = User.create(name="Vicky", email="vicky@h.com") >>> user.id 35
juraj-google-style
def getSwarmModelParams(modelID): cjDAO = ClientJobsDAO.get() (jobID, description) = cjDAO.modelsGetFields(modelID, ['jobId', 'genDescription']) (baseDescription,) = cjDAO.jobGetFields(jobID, ['genBaseDescription']) descriptionDirectory = tempfile.mkdtemp() try: baseDescriptionFilePath = os.path.join(descriptionDirectory, 'base.py') with open(baseDescriptionFilePath, mode='wb') as f: f.write(baseDescription) descriptionFilePath = os.path.join(descriptionDirectory, 'description.py') with open(descriptionFilePath, mode='wb') as f: f.write(description) expIface = helpers.getExperimentDescriptionInterfaceFromModule(helpers.loadExperimentDescriptionScriptFromDir(descriptionDirectory)) return json.dumps(dict(modelConfig=expIface.getModelDescription(), inferenceArgs=expIface.getModelControl().get('inferenceArgs', None))) finally: shutil.rmtree(descriptionDirectory, ignore_errors=True)
Retrieve the Engine-level model params from a Swarm model Args: modelID - Engine-level model ID of the Swarm model Returns: JSON-encoded string containing Model Params
codesearchnet
def close(self): if self._session and (not self._closed): self._closed = True tf_session.TF_CloseSession(self._session)
Closes this session. Calling this method frees all resources associated with the session. Raises: tf.errors.OpError: Or one of its subclasses if an error occurs while closing the TensorFlow session.
github-repos
def y_score(estimator, X): try: y = estimator.predict_proba(X) return y[:, 1] except(AttributeError): return estimator.decision_function(X)
Score examples from a new matrix X Args: estimator: an sklearn estimator object X: design matrix with the same features that the estimator was trained on Returns: a vector of scores of the same length as X Note that estimator.predict_proba is preferred but when unavailable (e.g. SVM without probability calibration) decision_function is used.
juraj-google-style
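A sketch exercising both code paths of `y_score` with scikit-learn (illustrative only; assumes the function above is in scope): `LogisticRegression` exposes `predict_proba`, while `LinearSVC` does not and falls back to `decision_function`.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(random_state=0)
print(y_score(LogisticRegression().fit(X, y), X)[:3])  # via predict_proba
print(y_score(LinearSVC().fit(X, y), X)[:3])           # via decision_function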
def has_inf_or_nan(datum, tensor): _ = datum if isinstance(tensor, InconvertibleTensorProto): return False elif np.issubdtype(tensor.dtype, np.floating) or np.issubdtype(tensor.dtype, np.complexfloating) or np.issubdtype(tensor.dtype, np.integer): return np.any(np.isnan(tensor)) or np.any(np.isinf(tensor)) else: return False
A predicate for whether a tensor consists of any bad numerical values. This predicate is common enough to merit definition in this module. Bad numerical values include `nan`s and `inf`s. The signature of this function follows the requirement of the method `DebugDumpDir.find()`. Args: datum: (`DebugTensorDatum`) Datum metadata. tensor: (`numpy.ndarray` or None) Value of the tensor. None represents an uninitialized tensor. Returns: (`bool`) True if and only if tensor consists of any nan or inf values.
github-repos
def _handle_port_request(self, client_data, writer): try: pid = int(client_data) except ValueError as error: self._client_request_errors += 1 log.warning('Could not parse request: %s', error) return log.info('Request on behalf of pid %d.', pid) log.info('cmdline: %s', _get_process_command_line(pid)) if not _should_allocate_port(pid): self._denied_allocations += 1 return port = self._port_pool.get_port_for_process(pid) if port > 0: self._total_allocations += 1 writer.write('{:d}\n'.format(port).encode('utf-8')) log.debug('Allocated port %d to pid %d', port, pid) else: self._denied_allocations += 1
Given a port request body, parse it and respond appropriately. Args: client_data: The request bytes from the client. writer: The asyncio Writer for the response to be written to.
juraj-google-style
def GetShadowMap(self, since=None): return ShadowUpdateGetter().GetUpdates(self._GetClient(), self.conf['bucket'], self.conf['shadow_object'], since)
Return the shadow map from this source. Args: since: Get data only changed since this timestamp (inclusive) or None for all data. Returns: instance of shadow.ShadowMap
github-repos
def _write_session(self): base_name = ('%ssession' % self._product_accronym.lower()) filename = ('%s%s.py' % (self._class_prefix.lower(), base_name)) override_content = self._extract_override_content(base_name) self.write(destination=self.output_directory, filename=filename, template_name='session.py.tpl', version=self.api_version, product_accronym=self._product_accronym, class_prefix=self._class_prefix, root_api=self.api_root, api_prefix=self.api_prefix, override_content=override_content, header=self.header_content)
Write SDK session file Args: version (str): the version of the server
codesearchnet
def input_waiting(self): buf = array.array('I', [0]) try: fcntl.ioctl(self._fd, termios.TIOCINQ, buf, True) except OSError as e: raise SerialError(e.errno, ('Querying input waiting: ' + e.strerror)) return buf[0]
Query the number of bytes waiting to be read from the serial port. Returns: int: number of bytes waiting to be read. Raises: SerialError: if an I/O or OS error occurs.
codesearchnet
def loss(logits, labels): labels = tf.to_int64(labels) cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits( logits=logits, labels=labels, name='xentropy') return tf.reduce_mean(cross_entropy, name='xentropy_mean')
Calculates the loss from the logits and the labels. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size]. Returns: loss: Loss tensor of type float.
juraj-google-style
def path(self, goal): if goal == self.name: return [self] if goal not in self.routes: raise ValueError("Unknown '{0}'".format(goal)) obj = self path = [obj] while True: obj = obj.routes[goal].direction path.append(obj) if obj.name == goal: break return path
Get the shortest way between two nodes of the graph Args: goal (str): Name of the targeted node Return: list of Node
juraj-google-style
def _lookup_model(cls, kind, default_model=None): modelclass = cls._kind_map.get(kind, default_model) if modelclass is None: raise KindError( "No model class found for kind '%s'. Did you forget to import it?" % kind) return modelclass
Get the model class for the kind. Args: kind: A string representing the name of the kind to lookup. default_model: The model class to use if the kind can't be found. Returns: The model class for the requested kind. Raises: KindError: The kind was not found and no default_model was provided.
juraj-google-style
def vectorize(density_matrix, method='col'): density_matrix = np.array(density_matrix) if (method == 'col'): return density_matrix.flatten(order='F') elif (method == 'row'): return density_matrix.flatten(order='C') elif (method in ['pauli', 'pauli_weights']): num = int(np.log2(len(density_matrix))) if (len(density_matrix) != (2 ** num)): raise Exception('Input state must be n-qubit state') if (method == 'pauli_weights'): pgroup = pauli_group(num, case='weight') else: pgroup = pauli_group(num, case='tensor') vals = [np.trace(np.dot(p.to_matrix(), density_matrix)) for p in pgroup] return np.array(vals) return None
Flatten an operator to a vector in a specified basis. Args: density_matrix (ndarray): a density matrix. method (str): the method of vectorization. Allowed values are - 'col' (default) flattens to column-major vector. - 'row' flattens to row-major vector. - 'pauli' flattens in the n-qubit Pauli basis. - 'pauli_weights': flattens in the n-qubit Pauli basis ordered by weight. Returns: ndarray: the resulting vector. Raises: Exception: if input state is not a n-qubit state
codesearchnet
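The 'col' and 'row' methods above are plain Fortran- vs C-order flattening; a small demonstration (assumes `vectorize` and its numpy import are in scope):

import numpy as np

rho = np.array([[1, 2], [3, 4]])
print(vectorize(rho, method='col'))  # [1 3 2 4], column-major
print(vectorize(rho, method='row'))  # [1 2 3 4], row-major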
def has_request(self, request): queue_item = QueueItem(request, Response(request.url)) key = queue_item.get_hash() for status in QueueItem.STATUSES: if (key in self.__get_var(('items_' + status)).keys()): return True return False
Check if the given request already exists in the queue. Args: request (:class:`nyawc.http.Request`): The request to check. Returns: bool: True if already exists, False otherwise.
codesearchnet
def command(self, cmd_name, callback, *args): cmd = JLinkCommand(cmd_name, args, callback) self._commands.put(cmd)
Run an asynchronous command. Args: cmd_name (int): The unique code for the command to execute. callback (callable): The optional callback to run when the command finishes. The signature should be callback(cmd_name, result, exception) *args: Any arguments that are passed to the underlying command handler
codesearchnet
def get_model(self, opt_fn, emb_sz, n_hid, n_layers, **kwargs): m = get_language_model(self.nt, emb_sz, n_hid, n_layers, self.pad_idx, **kwargs) model = SingleModel(to_gpu(m)) return RNN_Learner(self, model, opt_fn=opt_fn)
Method returns a RNN_Learner object, that wraps an instance of the RNN_Encoder module. Args: opt_fn (Optimizer): the torch optimizer function to use emb_sz (int): embedding size n_hid (int): number of hidden inputs n_layers (int): number of hidden layers kwargs: other arguments Returns: An instance of the RNN_Learner class.
codesearchnet
def prefetch_users(persistent_course_grades): users = User.objects.filter( id__in=[grade.user_id for grade in persistent_course_grades] ) return { user.id: user for user in users }
Prefetch Users from the list of user_ids present in the persistent_course_grades. Arguments: persistent_course_grades (list): A list of PersistentCourseGrade. Returns: (dict): A dictionary containing user_id to user mapping.
juraj-google-style
def Deserialize(self, reader): self.name = reader.ReadVarString().decode('utf-8') self.symbol = reader.ReadVarString().decode('utf-8') self.decimals = reader.ReadUInt8()
Read serialized data from byte stream Args: reader (neocore.IO.BinaryReader): reader to read byte data from
juraj-google-style
def from_file(cls, filename, directory=None, format=None, engine=None, encoding=File._encoding): filepath = os.path.join(directory or '', filename) if encoding is None: encoding = locale.getpreferredencoding() with io.open(filepath, encoding=encoding) as fd: source = fd.read() return cls(source, filename, directory, format, engine, encoding)
Return an instance with the source string read from the given file. Args: filename: Filename for loading/saving the source. directory: (Sub)directory for source loading/saving and rendering. format: Rendering output format (``'pdf'``, ``'png'``, ...). engine: Layout command used (``'dot'``, ``'neato'``, ...). encoding: Encoding for loading/saving the source.
juraj-google-style
def train(self, X_train, Y_train, X_test, Y_test): while True: print(1) time.sleep(1) if (random.randint(0, 9) >= 5): break
Train and validate the LR on a train and test dataset Args: X_train (np.array): Training data Y_train (np.array): Training labels X_test (np.array): Test data Y_test (np.array): Test labels
codesearchnet
def stitch_map(tiles, width, height, bbox, dpi): size = (int((width * dpi_to_dpmm(dpi))), int((height * dpi_to_dpmm(dpi)))) background = Image.new('RGBA', size, (255, 255, 255)) for layer in tiles: layer_img = Image.new('RGBA', size) for ((x, y), tile_path) in layer.items(): tile = Image.open(tile_path) layer_img.paste(tile, (((x - bbox.min.x) * TILE_SIZE), ((y - bbox.min.y) * TILE_SIZE))) background = Image.alpha_composite(background, layer_img) add_scales_bar(background, bbox) return background.convert('RGB')
Merge tiles together into one image. Args: tiles (list of dict of file): tiles for each layer width (float): page width in mm height (float): page height in mm dpi (int): resolution in dots per inch Returns: PIL.Image: merged map.
codesearchnet
def subproc_call(cmd, timeout=None): try: output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True, timeout=timeout) return (output, 0) except subprocess.TimeoutExpired as e: logger.warn("Command '{}' timeout!".format(cmd)) logger.warn(e.output.decode('utf-8')) return (e.output, (- 1)) except subprocess.CalledProcessError as e: logger.warn("Command '{}' failed, return code={}".format(cmd, e.returncode)) logger.warn(e.output.decode('utf-8')) return (e.output, e.returncode) except Exception: logger.warn("Command '{}' failed to run.".format(cmd)) return ('', (- 2))
Execute a command with timeout, and return STDOUT and STDERR Args: cmd(str): the command to execute. timeout(float): timeout in seconds. Returns: output(bytes), retcode(int). If timeout, retcode is -1.
codesearchnet
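A usage sketch for `subproc_call` on a POSIX shell (assumes the function and its `logger` are in scope): the return code is 0 on success, -1 on timeout, and the command's own exit code on failure.

out, ret = subproc_call('echo hello', timeout=5)
print(ret, out.decode('utf-8').strip())  # 0 hello

out, ret = subproc_call('sleep 10', timeout=0.5)
print(ret)  # -1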
def get_variant_type(variant_source): file_type = get_file_type(variant_source) variant_type = 'sv' if file_type == 'vcf': variants = VCF(variant_source) elif file_type == 'gemini': variants = GeminiQuery(variant_source) gemini_query = "SELECT * from variants" variants.run(gemini_query) for i,variant in enumerate(variants): if file_type == 'vcf': if variant.is_snp: variant_type = 'snv' elif file_type == 'gemini': if variant['type'] == 'snp': variant_type = 'snv' if i > 1000: break return variant_type
Try to find out what type of variants exist in a variant source Args: variant_source (str): Path to variant source Returns: variant_type (str): 'sv' or 'snv'
juraj-google-style
async def client_event_handler(self, client_id, event_tuple, user_data): (conn_string, event_name, event) = event_tuple if (event_name == 'report'): report = event.serialize() report['encoded_report'] = base64.b64encode(report['encoded_report']) msg_payload = dict(connection_string=conn_string, serialized_report=report) msg_name = OPERATIONS.NOTIFY_REPORT elif (event_name == 'trace'): encoded_payload = base64.b64encode(event) msg_payload = dict(connection_string=conn_string, payload=encoded_payload) msg_name = OPERATIONS.NOTIFY_TRACE elif (event_name == 'progress'): msg_payload = dict(connection_string=conn_string, operation=event.get('operation'), done_count=event.get('finished'), total_count=event.get('total')) msg_name = OPERATIONS.NOTIFY_PROGRESS elif (event_name == 'device_seen'): msg_payload = event msg_name = OPERATIONS.NOTIFY_DEVICE_FOUND elif (event_name == 'broadcast'): report = event.serialize() report['encoded_report'] = base64.b64encode(report['encoded_report']) msg_payload = dict(connection_string=conn_string, serialized_report=report) msg_name = OPERATIONS.NOTIFY_BROADCAST else: self._logger.debug('Not forwarding unknown event over websockets: %s', event_tuple) return try: self._logger.debug('Sending event %s: %s', msg_name, msg_payload) (await self.server.send_event(user_data, msg_name, msg_payload)) except websockets.exceptions.ConnectionClosed: self._logger.debug('Could not send notification because connection was closed for client %s', client_id)
Forward an event on behalf of a client. This method is called by StandardDeviceServer when it has an event that should be sent to a client. Args: client_id (str): The client that we should send this event to event_tuple (tuple): The conn_string, event_name and event object passed from the call to notify_event. user_data (object): The user data passed in the call to :meth:`setup_client`.
codesearchnet
def _write_session(self): base_name = "%ssession" % self._product_accronym.lower() filename = "%s%s.py" % (self._class_prefix.lower(), base_name) override_content = self._extract_override_content(base_name) self.write(destination=self.output_directory, filename=filename, template_name="session.py.tpl", version=self.api_version, product_accronym=self._product_accronym, class_prefix=self._class_prefix, root_api=self.api_root, api_prefix=self.api_prefix, override_content=override_content, header=self.header_content)
Write SDK session file.

Takes no arguments; the API version, product acronym, class prefix and
header content are read from the generator instance itself.
juraj-google-style
def get_port_from_port_server(portserver_address, pid=None):
    if not portserver_address:
        return None

    if portserver_address[0] == '@':
        # A leading '@' denotes an address in the Linux abstract namespace.
        portserver_address = '\x00' + portserver_address[1:]

    if pid is None:
        pid = os.getpid()

    try:
        if hasattr(socket, 'AF_UNIX'):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        else:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect(portserver_address)
            sock.sendall(('%d\n' % pid).encode('ascii'))
            buf = sock.recv(1024)
        finally:
            sock.close()
    except socket.error as e:
        print('Socket error when connecting to portserver:', e, file=sys.stderr)
        return None

    try:
        port = int(buf.split(b'\n')[0])
    except ValueError:
        print('Portserver failed to find a port.', file=sys.stderr)
        return None

    _owned_ports.add(port)
    return port
Request a free port from a system-wide portserver.

This follows a very simple portserver protocol: The request consists of
our pid (in ASCII) followed by a newline. The response is a port number
and a newline, 0 on failure.

This function is an implementation detail of pick_unused_port().
It should not normally be called by code outside of this module.

Args:
    portserver_address: The address (path) of a unix domain socket
        with which to connect to the portserver. A leading '@'
        character indicates an address in the "abstract namespace."
        On systems without socket.AF_UNIX, this is an AF_INET address.
    pid: The PID to tell the portserver to associate the reservation with.
        If None, the current process's PID is used.

Returns:
    The port number on success or None on failure.
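A usage sketch; the abstract-namespace socket path is hypothetical and a
portserver must actually be listening there for a port to come back:

port = get_port_from_port_server('@unittest-portserver')
if port is None:
    raise RuntimeError('no portserver available; fall back to random probing')
print('reserved port', port)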
codesearchnet
def __init__(self, interface, logger, base_configs=None): raise NotImplementedError('Base class should not be called directly!')
The constructor for the Sniffer. It constructs a sniffer and configures it to be ready for capture. Args: interface: A string specifying the interface used to configure the sniffer. logger: Mobly logger object. base_configs: A dictionary containing baseline configurations of the sniffer. These can be overridden when staring a capture. The keys are specified by Sniffer.CONFIG_KEY_*. Returns: self: A configured sniffer. Raises: InvalidDataError: if the config_path is invalid. NoPermissionError: if an error occurs while configuring the sniffer.
github-repos
def flatten_per_replica_values(distribution_strategy, per_replica_values): return [e for flattened in nest.flatten(per_replica_values) for e in distribution_strategy.unwrap(flattened)]
Unwraps and flattens a nest of PerReplica parameters. PerReplica values have one value associated with each device. Each entry in the PerReplica dict has a device `key` and the corresponding value on the device as the `value`. In this function we take a PerReplica value or a list of PerReplica values and return all the values in the PerReplica dict. Args: distribution_strategy: DistributionStrategy used to distribute training and validation. per_replica_values: List of PerReplica object or a single PerReplica object. Returns: List of values of all the PerReplica objects.
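A minimal sketch of the intent, assuming a TensorFlow version whose Strategy
still exposes `unwrap` (newer releases rename it `experimental_local_results`);
with more than one device, `strategy.run` returns PerReplica values that this
helper flattens into one tensor per device:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica = strategy.run(lambda: tf.constant(1.0))
flat = flatten_per_replica_values(strategy, [per_replica])  # one value per device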
github-repos
def rename_style(self, old_name, new_name):
    if old_name not in self.styles:
        raise KeyError('Style %r not found' % old_name)
    if new_name in self.styles:
        raise ValueError('There is already a style called %r' % new_name)
    if not is_valid_field_content(new_name):
        raise ValueError('%r is not a valid name' % new_name)

    self.styles[new_name] = self.styles[old_name]
    del self.styles[old_name]

    # Update every line that referenced the old style name.
    for line in self:
        if line.style == old_name:
            line.style = new_name
Rename a style, including references to it. Arguments: old_name (str): Style to be renamed. new_name (str): New name for the style (must be unused). Raises: KeyError: No style named old_name. ValueError: new_name is not a legal name (cannot use commas) or new_name is taken.
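This matches pysubs2's SSAFile API; assuming that is the host class, a usage
sketch with a hypothetical subtitle file:

import pysubs2

subs = pysubs2.load('episode.ass')
subs.rename_style('Default', 'Dialogue')  # lines using 'Default' now reference 'Dialogue'
subs.save('episode_renamed.ass')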
codesearchnet
from fabric.api import env, local, settings  # Fabric 1.x helpers


def ssh(cmd=''):
    with settings(warn_only=True):
        local('ssh -A -o StrictHostKeyChecking=no -i "%s" %s@%s "%s"' % (
            env.key_filename, env.user, env.host, cmd))
SSH into the server(s) (sequentially if more than one) Args: cmd (str) ='': Command to run on the server
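Under Fabric 1.x this runs as a task (`fab ssh:cmd=...`), which fills `env`
per host; calling it directly means setting those fields yourself. All values
below are hypothetical:

from fabric.api import env

env.key_filename = '~/.ssh/deploy_key'
env.user = 'deploy'
env.host = '203.0.113.10'
ssh('uptime')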
juraj-google-style
def encode_corpus(self, corpus, output_path): out_container = containers.Container(output_path) out_container.open() for utterance in corpus.utterances.values(): data = self.encode_utterance(utterance, corpus=corpus) out_container.set(utterance.idx, data) out_container.close() return out_container
Encode all utterances of the given corpus and store them in a :class:`audiomate.container.Container`. Args: corpus (Corpus): The corpus to process. output_path (str): The path to store the container with the encoded data. Returns: Container: The container with the encoded data.
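A usage sketch assuming the audiomate package and some concrete Encoder
subclass instance (`encoder` below); the paths are hypothetical:

import audiomate

corpus = audiomate.Corpus.load('/data/my-corpus')
container = encoder.encode_corpus(corpus, '/data/encoded.hdf5')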
juraj-google-style
import pandas as pd

# validate_set_ops is a helper from the same module that checks the two
# frames are compatible for a set operation.


def intersect(df, other, index=False, keep='first'):
    validate_set_ops(df, other)
    if index:
        df_reset_index = df.reset_index()
        other_reset_index = other.reset_index()
        index_cols = [col for col in df_reset_index.columns if col not in df.columns]
        df_index_names = df.index.names
        return_df = (pd.merge(df_reset_index, other_reset_index,
                              how='inner',
                              left_on=df_reset_index.columns.values.tolist(),
                              right_on=df_reset_index.columns.values.tolist())
                     .set_index(index_cols))
        return_df.index.names = df_index_names
        return_df = return_df.drop_duplicates(keep=keep)
        return return_df
    else:
        return_df = pd.merge(df, other,
                             how='inner',
                             left_on=df.columns.values.tolist(),
                             right_on=df.columns.values.tolist())
        return_df = return_df.drop_duplicates(keep=keep)
        return return_df
Returns rows that appear in both DataFrames. Args: df (pandas.DataFrame): data passed in through the pipe. other (pandas.DataFrame): other DataFrame to use for set operation with the first. Kwargs: index (bool): Boolean indicating whether to consider the pandas index as part of the set operation (default `False`). keep (str): Indicates which duplicate should be kept. Options are `'first'` and `'last'`.
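A small worked example, calling the function directly as defined above (in
dfply it is normally reached through the pipe operator):

import pandas as pd

a = pd.DataFrame({'x': [1, 2, 3]})
b = pd.DataFrame({'x': [2, 3, 4]})
intersect(a, b)  # rows where x is 2 or 3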
juraj-google-style
def add_implem(self, transition, attribute, function, **kwargs): implem = ImplementationProperty(field_name=self.state_field, transition=transition, workflow=self.workflow, implementation=function, **kwargs) self.implementations[transition.name] = implem self.transitions_at[transition.name] = attribute return implem
Add an implementation. Args: transition (Transition): the transition for which the implementation is added attribute (str): the name of the attribute where the implementation will be available function (callable): the actual implementation function **kwargs: extra arguments for the related ImplementationProperty.
codesearchnet
def set_image(self, text):
    if exercises.CONTENT_STORAGE_PLACEHOLDER in text:
        return text, []

    stripped_text = text.strip().replace('\\n', '')

    graphie_regex = re.compile(WEB_GRAPHIE_URL_REGEX, flags=re.IGNORECASE)
    graphie_match = graphie_regex.match(stripped_text)
    if graphie_match:
        is_web_plus_graphie = True
        graphie_rawpath = graphie_match.groupdict()['rawpath']
        # NOTE: the arguments of the .replace(...) call below (a string
        # literal beginning with '#') were lost when this source was
        # extracted; the call as written is a labelled placeholder, not
        # the original code.
        graphie_path = graphie_rawpath.replace("#", "")
        exercise_image_file = _ExerciseGraphieFile(graphie_path)
    elif get_base64_encoding(stripped_text):
        is_web_plus_graphie = False
        exercise_image_file = _ExerciseBase64ImageFile(stripped_text)
    else:
        is_web_plus_graphie = False
        exercise_image_file = _ExerciseImageFile(stripped_text)

    exercise_image_file.assessment_item = self
    _filename = exercise_image_file.process_file()
    new_text = exercises.CONTENT_STORAGE_FORMAT.format(exercise_image_file.get_replacement_str())
    if is_web_plus_graphie:
        new_text = "web+graphie:" + new_text

    return new_text, [exercise_image_file]
Save image resource at `text` (path or url) to storage, then return the
replacement string and the necessary exercise image file object.

Args:
    text (str): path or url to parse as an exercise image resource

Returns: (new_text, files)
    - `new_text` (str): replacement string for the original `text` string
    - `files` (list): list of files that were downloaded from `text`
juraj-google-style
def texture3d(self, size, components, data=None, *, alignment=1, dtype='f1') -> 'Texture3D': res = Texture3D.__new__(Texture3D) res.mglo, res._glo = self.mglo.texture3d(size, components, data, alignment, dtype) res.ctx = self res.extra = None return res
Create a :py:class:`Texture3D` object. Args: size (tuple): The width, height and depth of the texture. components (int): The number of components 1, 2, 3 or 4. data (bytes): Content of the texture. Keyword Args: alignment (int): The byte alignment 1, 2, 4 or 8. dtype (str): Data type. Returns: :py:class:`Texture3D` object
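A minimal sketch using ModernGL's standalone context; the raw bytes are a
placeholder fill for a 4x4x4 texture with 3 one-byte components per voxel:

import moderngl

ctx = moderngl.create_standalone_context()
tex = ctx.texture3d((4, 4, 4), 3, data=b'\x80' * (4 * 4 * 4 * 3))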
juraj-google-style
def _SetExtractionPreferredTimeZone(self, knowledge_base): if self._preferred_time_zone: try: knowledge_base.SetTimeZone(self._preferred_time_zone) except ValueError: logger.warning('Unsupported time zone: {0:s}, defaulting to {1:s}'.format(self._preferred_time_zone, knowledge_base._time_zone.zone))
Sets the preferred time zone before extraction. Args: knowledge_base (KnowledgeBase): contains information from the source data needed for parsing.
codesearchnet
def filter_devices(ads, func): results = [] for ad in ads: if func(ad): results.append(ad) return results
Finds the AndroidDevice instances from a list that match certain conditions. Args: ads: A list of AndroidDevice instances. func: A function that takes an AndroidDevice object and returns True if the device satisfies the filter condition. Returns: A list of AndroidDevice instances that satisfy the filter condition.
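A usage sketch; `ads` would come from Mobly's device registration and the
serial numbers are hypothetical:

wanted = {'ABC123', 'DEF456'}
matched = filter_devices(ads, lambda ad: ad.serial in wanted)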
codesearchnet
def _prune_heads(self, heads_to_prune): for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads)
Prunes heads of the model. Args: heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
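Callers normally go through the public `prune_heads` wrapper on the model
rather than this private hook; a sketch with a Hugging Face BERT checkpoint:

from transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
model.prune_heads({0: [0, 1], 2: [3]})  # drop heads 0 and 1 of layer 0, head 3 of layer 2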
github-repos
def design_stat_extremes(self, value="Extremes"):
    if value is not None:
        try:
            value = str(value)
        except ValueError:
            raise ValueError(
                'value {} needs to be of type str '
                'for field `design_stat_extremes`'.format(value))
        if ',' in value:
            raise ValueError('value should not contain a comma '
                             'for field `design_stat_extremes`')
        vals = set()
        vals.add("Extremes")
        if value not in vals:
            raise ValueError('value {} is not an accepted value for '
                             'field `design_stat_extremes`'.format(value))

    self._design_stat_extremes = value
Corresponds to IDD Field `design_stat_extremes` Args: value (str): value for IDD Field `design_stat_extremes` Accepted values are: - Extremes Default value: Extremes if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def get_block(self, height_or_hash, id=None, endpoint=None): return self._call_endpoint(GET_BLOCK, params=[height_or_hash, 1], id=id, endpoint=endpoint)
Look up a block by the height or hash of the block. Args: height_or_hash: (int or str) either the height of the desired block or its hash in the form '1e67372c158a4cfbb17b9ad3aaae77001a4247a00318e354c62e53b56af4006f' id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: block: a json object or the ``neorpc.Core.Block.Block`` object
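A hedged sketch assuming the neo-python-rpc client package; the block height
is arbitrary and the hash is the one from the docstring above:

from neorpc.Client import RPCClient

client = RPCClient()
block = client.get_block(2000357)
same_block = client.get_block('1e67372c158a4cfbb17b9ad3aaae77001a4247a00318e354c62e53b56af4006f')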
juraj-google-style
def _Matches(path, pattern_list): return any(fnmatch.fnmatchcase(path, pattern) for pattern in pattern_list)
Returns true if path matches any pattern found in pattern_list.

Args:
    path: A dot separated path to a package, class, method or variable
    pattern_list: A list of wildcard patterns

Returns:
    True if path matches any wildcard found in pattern_list.
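A few worked calls; `fnmatchcase` lets '*' match across the dots, so a prefix
glob matches whole subpaths:

_Matches('pkg.module.Class', ['pkg.*'])           # True
_Matches('pkg.module.Class', ['*.Class', 'x.*'])  # True
_Matches('pkg.module.Class', ['pkg'])             # False: no wildcard, not an exact match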
juraj-google-style
def parse_rule(cls, txt):
    types = {'glob': GlobRule,
             'regex': RegexRule,
             'range': RangeRule,
             'before': TimestampRule,
             'after': TimestampRule}

    label, txt = Rule._parse_label(txt)
    if label is None:
        # No explicit label: infer glob vs range from the presence of '*'.
        if '*' in txt:
            label = 'glob'
        else:
            label = 'range'
    elif label not in types:
        raise ConfigurationError(
            "'%s' is not a valid package filter type" % label)

    rule_cls = types[label]
    txt_ = '%s(%s)' % (label, txt)

    try:
        rule = rule_cls._parse(txt_)
    except Exception as e:
        raise ConfigurationError(
            "Error parsing package filter '%s': %s: %s"
            % (txt_, e.__class__.__name__, str(e)))

    return rule
Parse a rule from a string. See rezconfig.package_filter for an overview of valid strings. Args: txt (str): String to parse. Returns: `Rule` instance.
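A sketch of the label inference described above, assuming `Rule` is rez's
package-filter Rule class:

r1 = Rule.parse_rule('foo-*')              # '*' present, parsed as glob(foo-*)
r2 = Rule.parse_rule('foo-1.2+')           # no '*', parsed as range(foo-1.2+)
r3 = Rule.parse_rule('after(1426155600)')  # explicit label, a timestamp rule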
codesearchnet
def update(self, other): if isinstance(other, NdMapping): dims = [d for d in other.kdims if d not in self.kdims] if len(dims) == other.ndims: raise KeyError("Cannot update with NdMapping that has" " a different set of key dimensions.") elif dims: other = other.drop_dimension(dims) other = other.data for key, data in other.items(): self._add_item(key, data, sort=False) if self.sort: self._resort()
Merge the items from `other` into this object.

Args:
    other: Object containing items to merge into this object
        Must be a dictionary or NdMapping type
juraj-google-style
def create_graph_from_data(self, data): self.arguments['{SCORE}'] = self.scores[self.score] self.arguments['{VERBOSE}'] = str(self.verbose).upper() results = self._run_gies(data, verbose=self.verbose) return nx.relabel_nodes(nx.DiGraph(results), {idx: i for idx, i in enumerate(data.columns)})
Run the GIES algorithm. Args: data (pandas.DataFrame): DataFrame containing the data Returns: networkx.DiGraph: Solution given by the GIES algorithm.
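A usage sketch assuming the Causal Discovery Toolbox layout and a hypothetical
CSV of observations; note that GIES delegates to R, so an R installation with
the relevant packages is required:

import pandas as pd
from cdt.causality.graph import GIES

data = pd.read_csv('observations.csv')
graph = GIES().create_graph_from_data(data)
print(list(graph.edges()))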
juraj-google-style
def has_register(self, register):
    has_reg = False
    if isinstance(register, QuantumRegister) and register in self.qregs:
        has_reg = True
    elif isinstance(register, ClassicalRegister) and register in self.cregs:
        has_reg = True
    return has_reg
Test if this circuit has the given register.

Args:
    register (Register): a quantum or classical register.

Returns:
    bool: True if the register is contained in this circuit.
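A worked example with Qiskit registers; only `qr` is added to the circuit:

from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister

qr = QuantumRegister(2, 'q')
cr = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(qr)
circuit.has_register(qr)  # True
circuit.has_register(cr)  # False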
juraj-google-style
def get(self, key, state_manager, training=None): if key in self._feature_tensors: return self._feature_tensors[key] if key in self._features: feature_tensor = self._get_raw_feature_as_tensor(key) self._feature_tensors[key] = feature_tensor return feature_tensor if isinstance(key, six.string_types): raise ValueError('Feature {} is not in features dictionary.'.format(key)) if not isinstance(key, fc_types.FeatureColumn): raise TypeError('"key" must be either a "str" or "FeatureColumn". Provided: {}'.format(key)) column = key logging.debug('Transforming feature_column %s.', column) try: transformed = column.transform_feature(self, state_manager, training=training) except TypeError: transformed = column.transform_feature(self, state_manager) if transformed is None: raise ValueError('Column {} is not supported.'.format(column.name)) self._feature_tensors[column] = transformed return transformed
Returns a `Tensor` for the given key. A `str` key is used to access a base feature (not-transformed). When a `FeatureColumn` is passed, the transformed feature is returned if it already exists, otherwise the given `FeatureColumn` is asked to provide its transformed output, which is then cached. Args: key: a `str` or a `FeatureColumn`. state_manager: A StateManager object that holds the FeatureColumn state. training: Boolean indicating whether to the column is being used in training mode. This argument is passed to the transform_feature method of any `FeatureColumn` that takes a `training` argument. For example, if a `FeatureColumn` performed dropout, it could expose a `training` argument to control whether the dropout should be applied. Returns: The transformed `Tensor` corresponding to the `key`. Raises: ValueError: if key is not found or a transformed `Tensor` cannot be computed.
github-repos
def write(self, output_buffer, kmip_version=enums.KMIPVersion.KMIP_1_0):
    local_buffer = utils.BytearrayStream()

    if self._object_type:
        self._object_type.write(local_buffer, kmip_version=kmip_version)
    else:
        raise exceptions.InvalidField(
            'The Create response payload is missing the object type field.')

    if self._unique_identifier:
        self._unique_identifier.write(local_buffer, kmip_version=kmip_version)
    else:
        raise exceptions.InvalidField(
            'The Create response payload is missing the unique identifier field.')

    if kmip_version < enums.KMIPVersion.KMIP_2_0:
        if self._template_attribute:
            self._template_attribute.write(local_buffer, kmip_version=kmip_version)

    self.length = local_buffer.length()
    super(CreateResponsePayload, self).write(output_buffer, kmip_version=kmip_version)
    output_buffer.write(local_buffer.buffer)
Write the data encoding the Create response payload to a buffer. Args: output_buffer (stream): A data buffer in which to encode object data, supporting a write method. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0. Raises: InvalidField: Raised if the object type attribute or unique identifier is not defined.
codesearchnet