Columns:
- docstring: string (lengths 52 – 499)
- function: string (lengths 67 – 35.2k)
- __index_level_0__: int64 (values 52.6k – 1.16M)
"Enforce" the intent parser interface at registration time. Args: intent_parser(intent): Intent to be registered. Raises: ValueError: on invalid intent
def register_intent_parser(self, intent_parser):
    if hasattr(intent_parser, 'validate') and callable(intent_parser.validate):
        self.intent_parsers.append(intent_parser)
    else:
        raise ValueError("%s is not an intent parser" % str(intent_parser))
376,095
Register a domain with the intent engine. Args: tokenizer(tokenizer): The tokenizer you wish to use. trie(Trie): the Trie() you wish to use. domain(str): a string representing the domain you wish to add
def register_domain(self, domain=0, tokenizer=None, trie=None):
    self.domains[domain] = IntentDeterminationEngine(
        tokenizer=tokenizer, trie=trie)
376,102
Register an entity to be tagged in potential parse results. Args: entity_value(str): the value/proper name of an entity instance (Ex: "The Big Bang Theory") entity_type(str): the type/tag of an entity instance (Ex: "Television Show") domain(str): a string representing the domain you wish to add the entity to
def register_entity(self, entity_value, entity_type, alias_of=None, domain=0):
    if domain not in self.domains:
        self.register_domain(domain=domain)
    self.domains[domain].register_entity(entity_value=entity_value,
                                         entity_type=entity_type,
                                         alias_of=alias_of)
376,103
Register a regular expression that makes use of Python named-group expressions. Example: (?P<Artist>.*) Args: regex_str(str): a string representing a regular expression as defined above domain(str): a string representing the domain you wish to add the entity to
def register_regex_entity(self, regex_str, domain=0):
    if domain not in self.domains:
        self.register_domain(domain=domain)
    self.domains[domain].register_regex_entity(regex_str=regex_str)
376,104
Register an intent parser with a domain. Args: intent_parser(intent): The intent parser you wish to register. domain(str): a string representing the domain you wish to register the intent parser to.
def register_intent_parser(self, intent_parser, domain=0):
    if domain not in self.domains:
        self.register_domain(domain=domain)
    self.domains[domain].register_intent_parser(
        intent_parser=intent_parser)
376,106
Initialize a ContextManagerFrame. Args: entities(list): list of entities to seed the frame with. metadata(dict): metadata describing the context of the frame.
def __init__(self, entities=None, metadata=None):
    # Avoid mutable default arguments ([] and {}), which are created
    # once and shared across all calls that omit the argument.
    self.entities = entities if entities is not None else []
    self.metadata = metadata if metadata is not None else {}
376,110
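The `entities=[]` and `metadata={}` defaults in the `__init__` above are a classic Python pitfall worth illustrating. A minimal sketch (the helper name `append_entity` is hypothetical, purely for demonstration) of the shared-state behaviour:

```python
# Why a mutable default like `entities=[]` is risky: the default list is
# created once, when the function is defined, and is then shared by
# every call that omits the argument.
def append_entity(entities=[]):      # hypothetical helper, mutable default
    entities.append("entity")
    return entities

first = append_entity()
second = append_entity()             # same list object as `first`!
print(first is second)               # True
print(second)                        # ['entity', 'entity']
```

The usual remedy, as in many style guides, is a `None` default with an explicit `entities = entities if entities is not None else []` in the body.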
Merge a new entity and metadata into the ContextManagerFrame. Appends tag as a new entity and adds any missing keys from metadata to self.metadata. Args: tag(str): entity to be added to self.entities metadata(dict): metadata containing keys to be added to self.metadata
def merge_context(self, tag, metadata):
    self.entities.append(tag)
    for k in metadata.keys():
        if k not in self.metadata:
            # The original read `self.metadata[k] = k`, which stores the
            # key name as its own value; copying the value appears intended.
            self.metadata[k] = metadata[k]
376,112
Constructs a list of entities from the context. Args: max_frames(int): maximum number of frames to look back missing_entities(list of str): a list or set of tag names, as strings Returns: list: a list of entities
def get_context(self, max_frames=None, missing_entities=[]):
    if not max_frames or max_frames > len(self.frame_stack):
        max_frames = len(self.frame_stack)

    missing_entities = list(missing_entities)
    context = []
    for i in range(max_frames):  # xrange is Python 2 only
        frame_entities = [entity.copy()
                          for entity in self.frame_stack[i].entities]
        for entity in frame_entities:
            entity['confidence'] = entity.get('confidence', 1.0) / (2.0 + i)
        context += frame_entities

    result = []
    if len(missing_entities) > 0:
        for entity in context:
            if entity.get('data') in missing_entities:
                result.append(entity)
                # NOTE: this implies that we will only ever get one
                # of an entity kind from context, unless specified
                # multiple times in missing_entities. Cannot get
                # an arbitrary number of an entity kind.
                missing_entities.remove(entity.get('data'))
    else:
        result = context
    return result
376,114
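The `1.0 / (2.0 + i)` factor in `get_context` above gives the most recent frame half weight and decays older frames harmonically. A quick standalone check of that decay schedule:

```python
# Confidence multiplier applied to an entity found i frames back in the
# context stack: newest frame gets 0.5, then 1/3, 1/4, ...
decays = [1.0 / (2.0 + i) for i in range(4)]
print(decays)  # [0.5, 0.3333..., 0.25, 0.2]
```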
Takes a list of lists and yields lists with one item taken from each input list. The number of combinations is the product of the input lengths, e.g. 18 for lists of lengths 3, 2 and 3. Each yielded list has the same length as the number of lists passed in. Args: lists(list of lists): A list of lists Returns: generator of lists: combinations built by taking one item from each list in lists.
def choose_1_from_each(lists):
    if len(lists) == 0:
        yield []
    else:
        for el in lists[0]:
            for next_list in choose_1_from_each(lists[1:]):
                yield [el] + next_list
376,117
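A sanity check of `choose_1_from_each`, reproduced verbatim here so the snippet runs standalone; the two-list input is an arbitrary example:

```python
# choose_1_from_each yields the Cartesian product of its input lists,
# one combination at a time.
def choose_1_from_each(lists):
    if len(lists) == 0:
        yield []
    else:
        for el in lists[0]:
            for next_list in choose_1_from_each(lists[1:]):
                yield [el] + next_list

combos = list(choose_1_from_each([[1, 2], ["a", "b"]]))
print(combos)  # [[1, 'a'], [1, 'b'], [2, 'a'], [2, 'b']]
```

The count matches the docstring's example: lists of lengths 3, 2 and 3 produce 3 × 2 × 3 = 18 combinations.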
Searches tags for the entities in at_least_one and returns any match. Args: tags(list): list of tags with entities to search at_least_one(list): list of entities to find in tags Returns: object: None if no match is found, otherwise the matching resolution.
def resolve_one_of(tags, at_least_one):
    if len(tags) < len(at_least_one):
        return None
    for possible_resolution in choose_1_from_each(at_least_one):
        resolution = {}
        pr = possible_resolution[:]
        for entity_type in pr:
            last_end_index = -1
            if entity_type in resolution:
                # Index into the dict; `resolution.get[...]` was a typo
                # (subscripting the bound method raises TypeError).
                last_end_index = resolution[entity_type][-1].get('end_token')
            tag, value, c = find_first_tag(tags, entity_type,
                                           after_index=last_end_index)
            if not tag:
                break
            else:
                if entity_type not in resolution:
                    resolution[entity_type] = []
                resolution[entity_type].append(tag)
        if len(resolution) == len(possible_resolution):
            return resolution
    return None
376,118
Create Intent object Args: name(str): Name for Intent requires(list): Entities that are required at_least_one(list): One of these Entities are required optional(list): Optional Entities used by the intent
def __init__(self, name, requires, at_least_one, optional):
    self.name = name
    self.requires = requires
    self.at_least_one = at_least_one
    self.optional = optional
376,119
Validate whether tags contain the required entities for this intent to fire. Args: tags(list): tags and entities used for validation confidence(float): confidence factor applied to the computed intent confidence Returns: intent, tags: the intent and the tags used by the intent. On failure to meet the required entities, returns the intent with a confidence of 0.0 and an empty list for tags.
def validate_with_tags(self, tags, confidence):
    result = {'intent_type': self.name}
    intent_confidence = 0.0
    local_tags = tags[:]
    used_tags = []

    for require_type, attribute_name in self.requires:
        required_tag, canonical_form, confidence = find_first_tag(
            local_tags, require_type)
        if not required_tag:
            result['confidence'] = 0.0
            return result, []

        result[attribute_name] = canonical_form
        if required_tag in local_tags:
            local_tags.remove(required_tag)
        used_tags.append(required_tag)
        # TODO: use confidence based on edit distance and context
        intent_confidence += confidence

    if len(self.at_least_one) > 0:
        best_resolution = resolve_one_of(tags, self.at_least_one)
        if not best_resolution:
            result['confidence'] = 0.0
            return result, []
        else:
            for key in best_resolution:
                # TODO: at least one must support aliases
                result[key] = best_resolution[key][0].get('key')
            intent_confidence += 1.0
            used_tags.append(best_resolution)
            if best_resolution in local_tags:
                local_tags.remove(best_resolution)

    for optional_type, attribute_name in self.optional:
        optional_tag, canonical_form, conf = find_first_tag(
            local_tags, optional_type)
        if not optional_tag or attribute_name in result:
            continue
        result[attribute_name] = canonical_form
        if optional_tag in local_tags:
            local_tags.remove(optional_tag)
        used_tags.append(optional_tag)
        intent_confidence += 1.0

    total_confidence = intent_confidence / len(tags) * confidence

    target_client, canonical_form, confidence = find_first_tag(
        local_tags, CLIENT_ENTITY_NAME)

    result['target'] = target_client.get('key') if target_client else None
    result['confidence'] = total_confidence

    return result, used_tags
376,121
Constructor Args: intent_name(str): the name of the intents that this parser parses/validates
def __init__(self, intent_name):
    self.at_least_one = []
    self.requires = []
    self.optional = []
    self.name = intent_name
376,122
The intent parser should require an entity of the provided type. Args: entity_type(str): an entity type attribute_name(str): the name of the attribute on the parsed intent. Defaults to match entity_type. Returns: self: to continue modifications.
def require(self, entity_type, attribute_name=None):
    if not attribute_name:
        attribute_name = entity_type
    self.requires += [(entity_type, attribute_name)]
    return self
376,123
Parsed intents from this parser can optionally include an entity of the provided type. Args: entity_type(str): an entity type attribute_name(str): the name of the attribute on the parsed intent. Defaults to match entity_type. Returns: self: to continue modifications.
def optionally(self, entity_type, attribute_name=None):
    if not attribute_name:
        attribute_name = entity_type
    self.optional += [(entity_type, attribute_name)]
    return self
376,124
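The `require`/`optionally` rows above form a fluent builder: each method records a `(entity_type, attribute_name)` pair and returns `self` so calls can be chained. A simplified, standalone stand-in (not the real adapt `IntentBuilder`; the intent and entity names are made up) shows the pattern:

```python
# Minimal sketch of the fluent-builder pattern used by require() and
# optionally(): each call appends a tuple and returns self for chaining.
class IntentBuilder(object):
    def __init__(self, intent_name):
        self.name = intent_name
        self.requires = []
        self.optional = []

    def require(self, entity_type, attribute_name=None):
        self.requires.append((entity_type, attribute_name or entity_type))
        return self

    def optionally(self, entity_type, attribute_name=None):
        self.optional.append((entity_type, attribute_name or entity_type))
        return self

builder = (IntentBuilder("WeatherIntent")
           .require("Location")
           .optionally("Date", "when"))
print(builder.requires)  # [('Location', 'Location')]
print(builder.optional)  # [('Date', 'when')]
```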
Fetch associations for a species and pair of categories in bulk. Arguments: - subject_category: String (not None) - object_category: String (not None) - taxon: String - rows: int Additionally, any argument for search_associations can be passed
def bulk_fetch(subject_category, object_category, taxon, rows=MAX_ROWS,
               **kwargs):
    assert subject_category is not None
    assert object_category is not None
    time.sleep(1)
    logging.info("Bulk query: {} {} {}".format(
        subject_category, object_category, taxon))
    assocs = search_associations_compact(subject_category=subject_category,
                                         object_category=object_category,
                                         subject_taxon=taxon,
                                         rows=rows,
                                         iterate=True,
                                         **kwargs)
    logging.info("Rows retrieved: {}".format(len(assocs)))
    if len(assocs) == 0:
        logging.error("No associations returned for query: {} {} {}".format(
            subject_category, object_category, taxon))
    return assocs
376,169
Basic boolean query, using inference. Arguments: - terms: list list of class ids. Returns the set of subjects that have at least one inferred annotation to each of the specified classes. - negated_terms: list list of class ids. Filters the set of subjects so that there are no inferred annotations to any of the specified classes
def query(self, terms=None, negated_terms=None):
    if terms is None:
        terms = []
    matches_all = 'owl:Thing' in terms
    if negated_terms is None:
        negated_terms = []
    termset = set(terms)
    negated_termset = set(negated_terms)
    matches = []
    n_terms = len(termset)
    for subj in self.subjects:
        if matches_all or len(termset.intersection(
                self.inferred_types(subj))) == n_terms:
            if len(negated_termset.intersection(
                    self.inferred_types(subj))) == 0:
                matches.append(subj)
    return matches
376,575
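The boolean query above keeps each subject whose inferred types cover every positive term and avoid every negated term. A self-contained sketch with a plain dict standing in for the association model (subject and class names are invented):

```python
# Toy inferred-type table: subject -> set of inferred class ids.
inferred = {
    "gene1": {"A", "B"},
    "gene2": {"A"},
    "gene3": {"A", "B", "C"},
}

def query(terms, negated_terms=()):
    # Keep subjects whose types are a superset of `terms` and disjoint
    # from `negated_terms` (same set logic as the method above).
    ts, ns = set(terms), set(negated_terms)
    return sorted(s for s, types in inferred.items()
                  if ts <= types and not (ns & types))

print(query(["A", "B"]))    # ['gene1', 'gene3']
print(query(["A"], ["C"]))  # ['gene1', 'gene2']
```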
Joins a package name and a relative name. Args: package: A dotted name, e.g. foo.bar.baz relative_name: A dotted name with possibly some leading dots, e.g. ..x.y Returns: The relative name appended to the parent's package, after going up one level for each leading dot beyond the first. e.g. foo.bar.baz + ..hello.world -> foo.bar.hello.world (one dot means the current package). The unchanged relative_name if it has too many leading dots.
def get_absolute_name(package, relative_name):
    path = package.split('.') if package else []
    name = relative_name.lstrip('.')
    ndots = len(relative_name) - len(name)
    if ndots > len(path):
        return relative_name
    absolute_path = path[:len(path) + 1 - ndots]
    if name:
        absolute_path.append(name)
    return '.'.join(absolute_path)
376,660
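Tracing `get_absolute_name` on a few inputs makes the leading-dot arithmetic concrete; the function is reproduced verbatim so the examples run standalone:

```python
def get_absolute_name(package, relative_name):
    path = package.split('.') if package else []
    name = relative_name.lstrip('.')
    ndots = len(relative_name) - len(name)
    if ndots > len(path):
        return relative_name
    absolute_path = path[:len(path) + 1 - ndots]
    if name:
        absolute_path.append(name)
    return '.'.join(absolute_path)

# One leading dot keeps the whole package; each extra dot drops a level.
print(get_absolute_name("foo.bar.baz", ".x.y"))           # foo.bar.baz.x.y
print(get_absolute_name("foo.bar.baz", "..hello.world"))  # foo.bar.hello.world
# Too many dots: returned unchanged.
print(get_absolute_name("foo", "....far"))                # ....far
```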
Simulate how Python resolves imports. Returns the filename of the source file Python would load when processing a statement like 'import name' in the module we're currently under. Args: item: An instance of ImportItem Returns: A filename Raises: ImportException: If the module doesn't exist.
def resolve_import(self, item):
    name = item.name
    # The last part in `from a.b.c import d` might be a symbol rather than a
    # module, so we try a.b.c and a.b.c.d as names.
    short_name = None
    if item.is_from and not item.is_star:
        if '.' in name.lstrip('.'):
            # The name is something like `a.b.c`, so strip off `.c`.
            rindex = name.rfind('.')
        else:
            # The name is something like `..c`, so strip off just `c`.
            rindex = name.rfind('.') + 1
        short_name = name[:rindex]

    if import_finder.is_builtin(name):
        filename = name + '.so'
        return Builtin(filename, name)

    filename, level = convert_to_path(name)
    if level:
        # This is a relative import; we need to resolve the filename
        # relative to the importing file path.
        filename = os.path.normpath(
            os.path.join(self.current_directory, filename))

    files = [(name, filename)]
    if short_name:
        short_filename = os.path.dirname(filename)
        files.append((short_name, short_filename))

    for module_name, path in files:
        for fs in self.fs_path:
            f = self._find_file(fs, path)
            if not f or f == self.current_module.path:
                # We cannot import a file from itself.
                continue
            if item.is_relative():
                package_name = self.current_module.package_name
                if package_name is None:
                    # Relative import in non-package
                    raise ImportException(name)
                module_name = get_absolute_name(package_name, module_name)
            if isinstance(self.current_module, System):
                return System(f, module_name)
            return Local(f, module_name, fs)

    # If the module isn't found in the explicit pythonpath, see if python
    # itself resolved it.
    if item.source:
        prefix, ext = os.path.splitext(item.source)
        mod_name = name
        # We need to check for importing a symbol here too.
        if short_name:
            mod = prefix.replace(os.path.sep, '.')
            mod = utils.strip_suffix(mod, '.__init__')
            if not mod.endswith(name) and mod.endswith(short_name):
                mod_name = short_name
        if ext == '.pyc':
            pyfile = prefix + '.py'
            if os.path.exists(pyfile):
                return System(pyfile, mod_name)
        elif not ext:
            pyfile = os.path.join(prefix, "__init__.py")
            if os.path.exists(pyfile):
                return System(pyfile, mod_name)
        return System(item.source, mod_name)

    raise ImportException(name)
376,669
Use python to resolve an import. Args: name: The fully qualified module name. Returns: The path to the module source file or None.
def resolve_import(name, is_from, is_star):
    # Don't try to resolve relative imports or builtins here; they will be
    # handled by resolve.Resolver
    if name.startswith('.') or is_builtin(name):
        return None
    ret = _resolve_import(name)
    if ret is None and is_from and not is_star:
        package, _ = name.rsplit('.', 1)
        ret = _resolve_import(package)
    return ret
376,685
Add a file and all its recursive dependencies to the graph. Args: filename: The name of the file. trim: Whether to trim the dependencies of builtin and system files.
def add_file_recursive(self, filename, trim=False):
    assert not self.final, 'Trying to mutate a final graph.'
    self.add_source_file(filename)
    queue = collections.deque([filename])
    seen = set()
    while queue:
        filename = queue.popleft()
        self.graph.add_node(filename)
        try:
            deps, broken = self.get_file_deps(filename)
        except parsepy.ParseError:
            # Python couldn't parse `filename`. If we're sure that it is a
            # Python file, we mark it as unreadable and keep the node in the
            # graph so importlab's callers can do their own syntax error
            # handling if desired.
            if filename.endswith('.py'):
                self.unreadable_files.add(filename)
            else:
                self.graph.remove_node(filename)
            continue
        for f in broken:
            self.broken_deps[filename].add(f)
        for f in deps:
            if self.follow_file(f, seen, trim):
                queue.append(f)
                seen.add(f)
            self.graph.add_node(f)
            self.graph.add_edge(filename, f)
376,693
Create and return a final graph. Args: env: An environment.Environment object filenames: A list of filenames trim: Whether to trim the dependencies of builtin and system files. Returns: An immutable ImportGraph with the recursive dependencies of all the files in filenames
def create(cls, env, filenames, trim=False):
    import_graph = cls(env)
    for filename in filenames:
        import_graph.add_file_recursive(os.path.abspath(filename), trim)
    import_graph.build()
    return import_graph
376,700
Expand a list of filenames passed in as sources. This is a helper function for handling command line arguments that specify a list of source files and directories. Any directories in filenames will be scanned recursively for .py files. Any files that do not end with ".py" will be dropped. Args: filenames: A list of filenames to process. cwd: An optional working directory to expand relative paths Returns: A list of sorted full paths to .py files
def expand_source_files(filenames, cwd=None):
    out = []
    for f in expand_paths(filenames, cwd):
        if os.path.isdir(f):
            # If we have a directory, collect all the .py files within it.
            out += collect_files(f, ".py")
        else:
            if f.endswith(".py"):
                out.append(f)
    return sorted(set(out))
376,718
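The core of `expand_source_files` is the "keep only `.py` files" filter. A small demonstration against a throwaway directory (filenames are arbitrary):

```python
import os
import tempfile

# Create a scratch directory with a mix of .py and non-.py files, then
# apply the same endswith(".py") filter used above.
with tempfile.TemporaryDirectory() as d:
    for fname in ("a.py", "b.txt", "c.py"):
        open(os.path.join(d, fname), "w").close()
    picked = sorted(f for f in os.listdir(d) if f.endswith(".py"))
print(picked)  # ['a.py', 'c.py']
```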
Add additional worker assignments or minutes to a HIT. Args: hit_id: A list containing one hit_id string. assignments: <int> number of assignments to add. minutes: <int> number of minutes to add. Returns: None. As a side effect, the state of the HIT changes on AMT servers.
def hit_extend(self, hit_id, assignments, minutes):
    assert type(hit_id) is list
    assert type(hit_id[0]) is str
    if self.amt_services.extend_hit(hit_id[0], assignments, minutes):
        print("HIT extended.")  # print() for Python 3 compatibility
376,817
Given JSON objects from client, perform actual processing Arguments: plugin (dict): JSON representation of plug-in to process instance (dict, optional): JSON representation of Instance to be processed. action (str, optional): Id of action to process
def process(self, plugin, instance=None, action=None):
    plugin_obj = self.__plugins[plugin["id"]]
    instance_obj = (self.__instances[instance["id"]]
                    if instance is not None else None)
    result = pyblish.plugin.process(
        plugin=plugin_obj,
        context=self._context,
        instance=instance_obj,
        action=action)
    return formatting.format_result(result)
377,170
Locate `program` in PATH Arguments: program (str): Name of program, e.g. "python"
def which(program):
    def is_exe(fpath):
        if os.path.isfile(fpath) and os.access(fpath, os.X_OK):
            return True
        return False

    for path in os.environ["PATH"].split(os.pathsep):
        for ext in os.getenv("PATHEXT", "").split(os.pathsep):
            fname = program + ext.lower()
            abspath = os.path.join(path.strip('"'), fname)
            if is_exe(abspath):
                return abspath
    return None
377,197
Start the Qt runtime and show the window. Arguments: demo (bool, optional): Run against a mock service. aschild (bool, optional): Run as child of parent process. targets (list, optional): Publishing targets.
def main(demo=False, aschild=False, targets=[]):
    if aschild:
        print("Starting pyblish-qml")
        compat.main()
        app = Application(APP_PATH, targets)
        app.listen()
        print("Done, don't forget to call `show()`")
        return app.exec_()
    else:
        print("Starting pyblish-qml server..")
        service = ipc.service.MockService() if demo else ipc.service.Service()
        server = ipc.server.Server(service, targets=targets)
        proxy = ipc.server.Proxy(server)
        proxy.show(settings.to_dict())
        server.listen()
        server.wait()
377,202
Display GUI Once the QML interface has been loaded, use this to display it. Arguments: port (int): Client asking to show GUI. client_settings (dict, optional): Visual settings, see settings.py
def show(self, client_settings=None):
    window = self.window

    if client_settings:
        # Apply client-side settings
        settings.from_dict(client_settings)
        window.setWidth(client_settings["WindowSize"][0])
        window.setHeight(client_settings["WindowSize"][1])
        window.setTitle(client_settings["WindowTitle"])
        window.setFramePosition(
            QtCore.QPoint(
                client_settings["WindowPosition"][0],
                client_settings["WindowPosition"][1]
            )
        )

    message = list()
    message.append("Settings: ")
    for key, value in settings.to_dict().items():
        message.append("  %s = %s" % (key, value))
    print("\n".join(message))

    window.requestActivate()
    window.showNormal()

    # Work-around for window appearing behind
    # other windows upon being shown once hidden.
    previous_flags = window.flags()
    window.setFlags(previous_flags | QtCore.Qt.WindowStaysOnTopHint)
    window.setFlags(previous_flags)

    # Give statemachine enough time to boot up
    if not any(state in self.controller.states
               for state in ["ready", "finished"]):
        util.timer("ready")

        ready = QtTest.QSignalSpy(self.controller.ready)

        count = len(ready)
        ready.wait(1000)
        if len(ready) != count + 1:
            print("Warning: Could not enter ready state")

        util.timer_end("ready", "Awaited statemachine for %.2f ms")

    if client_settings:
        self.controller.data['autoValidate'] = client_settings.get(
            'autoValidate', False)
        self.controller.data['autoPublish'] = client_settings.get(
            'autoPublish', False)

    self.controller.show.emit()

    # Allow time for QML to initialise
    util.schedule(self.controller.reset, 500, channel="main")
377,207
Perform operation in thread with callback Instances are cached until finished, at which point they are garbage collected. If we didn't do this, Python would step in and garbage collect the thread before having had time to finish, resulting in an exception. Arguments: target (callable): Method or function to call callback (callable, optional): Method or function to call once `target` has finished. Returns: The deferred thread object.
def defer(target, args=None, kwargs=None, callback=None):
    obj = _defer(target, args, kwargs, callback)
    obj.finished.connect(lambda: _defer_cleanup(obj))
    obj.start()
    _defer_threads.append(obj)
    return obj
377,211
Transmit a `process` request to host Arguments: plugin (PluginProxy): Plug-in to process context (ContextProxy): Filtered context instance (InstanceProxy, optional): Instance to process action (str, optional): Action to process
def process(self, plugin, context, instance=None, action=None):
    plugin = plugin.to_json()
    instance = instance.to_json() if instance is not None else None
    return self._dispatch("process", args=[plugin, instance, action])
377,224
Send message to parent process Arguments: func (str): Name of function for parent to call args (list, optional): Arguments passed to function when called
def _dispatch(self, func, args=None):
    data = json.dumps(
        {
            "header": "pyblish-qml:popen.request",
            "payload": {
                "name": func,
                "args": args or list(),
            }
        }
    )

    # This should never happen. Each request is immediately
    # responded to, always. If it isn't the next line will block.
    # If multiple responses were made, then this will fail.
    # Both scenarios are bugs.
    assert self.channels["response"].empty(), (
        "There were pending messages in the response channel")

    sys.stdout.write(data + "\n")
    sys.stdout.flush()

    try:
        message = self.channels["response"].get()
        if six.PY3:
            response = json.loads(message)
        else:
            response = _byteify(json.loads(message, object_hook=_byteify))
    except TypeError as e:
        raise e
    else:
        assert response["header"] == "pyblish-qml:popen.response", response
        return response["payload"]
377,230
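The request envelope written by `_dispatch` is plain JSON with a fixed header string. A round-trip of that envelope (the function name `"show"` is just an example payload):

```python
import json

# Build and decode the same request envelope _dispatch writes to stdout.
data = json.dumps({
    "header": "pyblish-qml:popen.request",
    "payload": {
        "name": "show",
        "args": [],
    },
})
decoded = json.loads(data)
print(decoded["header"])           # pyblish-qml:popen.request
print(decoded["payload"]["name"])  # show
```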
Append `plugin` to model Arguments: plugin (dict): Serialised plug-in from pyblish-rpc Schema: plugin.json
def add_plugin(self, plugin):
    item = {}
    item.update(defaults["common"])
    item.update(defaults["plugin"])

    for member in ["pre11", "name", "label", "optional", "category",
                   "actions", "id", "order", "doc", "type", "module",
                   "match", "hasRepair", "families", "contextEnabled",
                   "instanceEnabled", "__instanceEnabled__", "path"]:
        item[member] = plugin[member]

    # Visualised in Perspective
    item["familiesConcatenated"] = ", ".join(plugin["families"])

    # converting links to HTML
    pattern = r"(https?:\/\/(?:w{1,3}.)?[^\s]*?(?:\.[a-z]+)+)"
    pattern += r"(?![^<]*?(?:<\/\w+>|\/?>))"
    if item["doc"] and re.search(pattern, item["doc"]):
        html = r"<a href='\1'><font color='FF00CC'>\1</font></a>"
        item["doc"] = re.sub(pattern, html, item["doc"])

    # Append GUI-only data
    item["itemType"] = "plugin"
    item["hasCompatible"] = True
    item["isToggled"] = plugin.get("active", True)
    item["verb"] = {
        "Selector": "Collect",
        "Collector": "Collect",
        "Validator": "Validate",
        "Extractor": "Extract",
        "Integrator": "Integrate",
        "Conformer": "Integrate",
    }.get(item["type"], "Other")

    for action in item["actions"]:
        if action["on"] == "all":
            item["actionsIconVisible"] = True

    self.add_section(item["verb"])

    item = self.add_item(item)
    self.plugins.append(item)
377,247
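The link-detection regex in `add_plugin` wraps bare URLs in an anchor tag while the negative lookahead skips URLs already inside HTML. Exercised on a sample string (the URL is illustrative only):

```python
import re

# The same pattern and replacement used above, applied to a doc string.
pattern = r"(https?:\/\/(?:w{1,3}.)?[^\s]*?(?:\.[a-z]+)+)"
pattern += r"(?![^<]*?(?:<\/\w+>|\/?>))"
html = r"<a href='\1'><font color='FF00CC'>\1</font></a>"

doc = "See https://pyblish.com for docs"
linked = re.sub(pattern, html, doc)
print(linked)
```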
Append `instance` to model Arguments: instance (dict): Serialised instance Schema: instance.json
def add_instance(self, instance):
    assert isinstance(instance, dict)

    item = defaults["common"].copy()
    item.update(defaults["instance"])
    item.update(instance["data"])
    item.update(instance)

    item["itemType"] = "instance"
    item["isToggled"] = instance["data"].get("publish", True)
    item["hasCompatible"] = True
    item["category"] = item["category"] or item["family"]

    self.add_section(item["category"])

    # Visualised in Perspective
    families = [instance["data"]["family"]]
    families.extend(instance["data"].get("families", []))
    item["familiesConcatenated"] += ", ".join(families)

    item = self.add_item(item)
    self.instances.append(item)
377,248
Append `section` to model Arguments: name (str): Name of section
def add_section(self, name):
    assert isinstance(name, str)

    # Skip existing sections
    for section in self.sections:
        if section.name == name:
            return section

    item = defaults["common"].copy()
    item["name"] = name
    item["itemType"] = "section"

    item = self.add_item(item)
    self.sections.append(item)

    return item
377,250
Append `context` to model Arguments: context (dict): Serialised context to add Schema: context.json
def add_context(self, context, label=None):
    assert isinstance(context, dict)

    item = defaults["common"].copy()
    item.update(defaults["instance"])
    item.update(context)

    item["family"] = None
    item["label"] = context["data"].get("label") or settings.ContextLabel
    item["itemType"] = "instance"
    item["isToggled"] = True
    item["optional"] = False
    item["hasCompatible"] = True

    item = self.add_item(item)
    self.instances.append(item)
377,251
Update item-model with result from host State is sent from the host after processing has taken place and represents the events that took place, including log messages and completion status. Arguments: result (dict): Dictionary following the Result schema
def update_with_result(self, result):
    assert isinstance(result, dict), "%s is not a dictionary" % result

    for type in ("instance", "plugin"):
        id = (result[type] or {}).get("id")

        is_context = not id
        if is_context:
            item = self.instances[0]
        else:
            item = self.items.get(id)

        if item is None:
            # If an item isn't there yet
            # no worries. It's probably because
            # reset is still running and the
            # item in question is a new instance
            # not yet added to the model.
            continue

        item.isProcessing = False
        item.currentProgress = 1
        item.processed = True
        item.hasWarning = item.hasWarning or any([
            record["levelno"] == logging.WARNING
            for record in result["records"]
        ])

        if result.get("error"):
            item.hasError = True
            item.amountFailed += 1
        else:
            item.succeeded = True
            item.amountPassed += 1

        item.duration += result["duration"]
        item.finishedAt = time.time()

        if item.itemType == "plugin" and not item.actionsIconVisible:
            actions = list(item.actions)

            # Context specific actions
            for action in list(actions):
                if action["on"] == "failed" and not item.hasError:
                    actions.remove(action)
                if action["on"] == "succeeded" and not item.succeeded:
                    actions.remove(action)
                if action["on"] == "processed" and not item.processed:
                    actions.remove(action)

            if actions:
                item.actionsIconVisible = True

        # Update section item
        class DummySection(object):
            hasWarning = False
            hasError = False
            succeeded = False

        section_item = DummySection()
        for section in self.sections:
            if item.itemType == "plugin" and section.name == item.verb:
                section_item = section
            if (item.itemType == "instance" and
                    section.name == item.category):
                section_item = section

        section_item.hasWarning = (
            section_item.hasWarning or item.hasWarning
        )
        section_item.hasError = section_item.hasError or item.hasError
        section_item.succeeded = section_item.succeeded or item.succeeded
        section_item.isProcessing = False
377,252
Remove exclusion rule Arguments: role (int, string): Qt role or name to remove value (object, optional): Value to remove. If none is supplied, the entire role will be removed.
def remove_exclusion(self, role, value=None):
    self._remove_rule(self.excludes, role, value)
377,265
Return actions from plug-in at `index` Arguments: index (int): Index at which item is located in model
def getPluginActions(self, index):
    index = self.data["proxies"]["plugin"].mapToSource(
        self.data["proxies"]["plugin"].index(
            index, 0, QtCore.QModelIndex())).row()
    item = self.data["models"]["item"].items[index]

    # Inject reference to the original index
    actions = [
        dict(action, **{"index": index})
        for action in item.actions
    ]

    # Context specific actions
    for action in list(actions):
        if action["on"] == "failed" and not item.hasError:
            actions.remove(action)
        if action["on"] == "succeeded" and not item.succeeded:
            actions.remove(action)
        if action["on"] == "processed" and not item.processed:
            actions.remove(action)
        if action["on"] == "notProcessed" and item.processed:
            actions.remove(action)

    # Discard empty categories, separators
    remaining_actions = list()
    index = 0
    try:
        action = actions[index]
    except IndexError:
        pass
    else:
        while action:
            try:
                action = actions[index]
            except IndexError:
                break

            isempty = False

            if action["__type__"] in ("category", "separator"):
                try:
                    next_ = actions[index + 1]
                    if next_["__type__"] != "action":
                        isempty = True
                except IndexError:
                    isempty = True

            if not isempty:
                remaining_actions.append(action)

            index += 1

    return remaining_actions
377,288
Exclude a `role` of `value` at `target` Arguments: target (str): Destination proxy model operation (str): "add" or "remove" exclusion role (str): Role to exclude value (str): Value of `role` to exclude
def exclude(self, target, operation, role, value):
    target = {"result": self.data["proxies"]["result"],
              "instance": self.data["proxies"]["instance"],
              "plugin": self.data["proxies"]["plugin"]}[target]

    if operation == "add":
        target.add_exclusion(role, value)
    elif operation == "remove":
        target.remove_exclusion(role, value)
    else:
        raise TypeError("operation must be either `add` or `remove`")
377,294
Apply settings from dictionary Arguments: settings (dict): Settings in the form of a dictionary
def from_dict(settings):
    assert isinstance(settings, dict), "`settings` must be of type dict"
    # `self` is undefined in a module-level function; setting attributes
    # on the settings module itself appears to be the intent.
    module = sys.modules[__name__]
    for key, value in settings.items():
        setattr(module, key, value)
377,320
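The `from_dict` above applies a dict of settings as attributes on an object via `setattr`. A sketch of the same idea against a `SimpleNamespace` standing in for the settings module (attribute names here are illustrative):

```python
from types import SimpleNamespace

# Stand-in for the settings module, pre-populated with defaults.
ns = SimpleNamespace(WindowTitle="pyblish", WindowSize=(400, 600))

def from_dict(target, settings):
    assert isinstance(settings, dict), "`settings` must be of type dict"
    for key, value in settings.items():
        setattr(target, key, value)

from_dict(ns, {"WindowTitle": "My Pipeline"})
print(ns.WindowTitle)  # My Pipeline
```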
Apply misplaced members from `binding` to Qt.py Arguments: binding (dict): Misplaced members
def _reassign_misplaced_members(binding):
    for src, dst in _misplaced_members[binding].items():
        src_module, src_member = src.split(".")
        dst_module, dst_member = dst.split(".")

        try:
            src_object = getattr(Qt, dst_module)
        except AttributeError:
            # Skip reassignment of non-existing members.
            # This can happen if a request was made to
            # rename a member that didn't exist, for example
            # if QtWidgets isn't available on the target platform.
            continue

        dst_value = getattr(getattr(Qt, "_" + src_module), src_member)

        setattr(
            src_object,
            dst_member,
            dst_value
        )
377,404
Attempt to show GUI Requires install() to have been run first, and a live instance of Pyblish QML in the background. Arguments: parent (None, optional): Deprecated targets (list, optional): Publishing targets modal (bool, optional): Block interactions to parent
def show(parent=None, targets=[], modal=None,
         auto_publish=False, auto_validate=False):
    # Get modal mode from environment
    if modal is None:
        modal = bool(os.environ.get("PYBLISH_QML_MODAL", False))

    # Automatically install if not already installed.
    install(modal)

    show_settings = settings.to_dict()
    show_settings['autoPublish'] = auto_publish
    show_settings['autoValidate'] = auto_validate

    # Show existing GUI
    if _state.get("currentServer"):
        server = _state["currentServer"]
        proxy = ipc.server.Proxy(server)

        try:
            proxy.show(show_settings)
            return server
        except IOError:
            # The running instance has already been closed.
            _state.pop("currentServer")

    if not host.is_headless():
        host.splash()

    try:
        service = ipc.service.Service()
        server = ipc.server.Server(service, targets=targets, modal=modal)
    except Exception:
        # If for some reason, the GUI fails to show.
        traceback.print_exc()
        return host.desplash()

    proxy = ipc.server.Proxy(server)
    proxy.show(show_settings)

    # Store reference to server for future calls
    _state["currentServer"] = server

    log.info("Success. QML server available as "
             "pyblish_qml.api.current_server()")

    server.listen()

    return server
377,412
Add log file. Args: path (:obj:`str`): Path to the log file.
def add_log_file(path):
    logfile_handler = RotatingFileHandler(
        path, maxBytes=50000, backupCount=2)
    formatter = logging.Formatter(
        fmt='%(asctime)s %(levelname)s %(module)s - %(message)s',
        datefmt="%d-%b-%Y %H:%M:%S")
    logfile_handler.setFormatter(formatter)
    geoparse_logger.addHandler(logfile_handler)
377,532
Initialize base GEO object. Args: name (:obj:`str`): Name of the object. metadata (:obj:`dict`): Metadata information. Raises: TypeError: Metadata should be a dict.
def __init__(self, name, metadata):
    if not isinstance(metadata, dict):
        raise TypeError("Metadata should be a dictionary not a %s" % str(
            type(metadata)))
    self.name = name
    self.metadata = metadata
    self.relations = {}
    if 'relation' in self.metadata:
        for relation in self.metadata['relation']:
            # Split only on the first ": " so values that themselves
            # contain colons (e.g. URLs) stay intact.
            relname, relval = re.split(r":\s+", relation, maxsplit=1)
            if relname in self.relations:
                self.relations[relname].append(relval)
            else:
                self.relations[relname] = [relval]
377,535
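The relation-parsing loop above can be exercised in isolation; the sample relation strings below are hypothetical, not taken from a real GEO record, and the split keeps URL values intact because `:\s+` only matches a colon followed by whitespace:

```python
import re
from collections import defaultdict

# Hypothetical relation lines in the "Name: value" form the loop expects.
relation_lines = [
    "BioSample: https://www.ncbi.nlm.nih.gov/biosample/SAMN00000001",
    "SRA: https://www.ncbi.nlm.nih.gov/sra?term=SRX0000001",
    "SRA: https://www.ncbi.nlm.nih.gov/sra?term=SRX0000002",
]

relations = defaultdict(list)
for relation in relation_lines:
    # Split only on the first ": " separator.
    relname, relval = re.split(r":\s+", relation, maxsplit=1)
    relations[relname].append(relval)

print(dict(relations))
```

Repeated relation names (`SRA` here) accumulate into a list, mirroring the `if relname in self.relations` branch above.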
Get the metadata attribute by the name. Args: metaname (:obj:`str`): Name of the attribute Returns: :obj:`list` or :obj:`str`: Value(s) of the requested metadata attribute Raises: NoMetadataException: Attribute error TypeError: Metadata should be a list
def get_metadata_attribute(self, metaname): metadata_value = self.metadata.get(metaname, None) if metadata_value is None: raise NoMetadataException( "No metadata attribute named %s" % metaname) if not isinstance(metadata_value, list): raise TypeError("Metadata is not a list and it should be.") if len(metadata_value) > 1: return metadata_value else: return metadata_value[0]
377,536
Save the object in a SOFT format. Args: path_or_handle (:obj:`str` or :obj:`file`): Path or handle to output file as_gzip (:obj:`bool`): Save as gzip
def to_soft(self, path_or_handle, as_gzip=False): if isinstance(path_or_handle, str): if as_gzip: with gzip.open(path_or_handle, 'wt') as outfile: outfile.write(self._get_object_as_soft()) else: with open(path_or_handle, 'w') as outfile: outfile.write(self._get_object_as_soft()) else: path_or_handle.write(self._get_object_as_soft())
377,538
Pivot samples by specified column. Construct a table in which columns (names) are the samples, index is a specified column, e.g. ID_REF, and values in the columns are of one specified type. Args: values (:obj:`str`): Column name present in all GSMs. index (:obj:`str`, optional): Column name that will become an index in pivoted table. Defaults to "ID_REF". Returns: :obj:`pandas.DataFrame`: Pivoted data
def pivot_samples(self, values, index="ID_REF"): data = [] for gsm in self.gsms.values(): tmp_data = gsm.table.copy() tmp_data["name"] = gsm.name data.append(tmp_data) ndf = concat(data).pivot(index=index, values=values, columns="name") return ndf
377,555
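A minimal sketch of the concat-then-pivot step above; the GSM names and values are made up for illustration:

```python
from pandas import DataFrame, concat

# Two hypothetical GSM tables, each tagged with its sample "name".
gsm1 = DataFrame({"ID_REF": ["p1", "p2"], "VALUE": [1.0, 2.0]})
gsm1["name"] = "GSM1"
gsm2 = DataFrame({"ID_REF": ["p1", "p2"], "VALUE": [3.0, 4.0]})
gsm2["name"] = "GSM2"

# Stack the tables, then pivot so each sample becomes a column.
pivoted = concat([gsm1, gsm2]).pivot(index="ID_REF", values="VALUE",
                                     columns="name")
print(pivoted)
```

The resulting frame is indexed by probe ID with one column per sample, which is the shape downstream analyses typically expect.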
Download file with Aspera Connect. For details see the documentation of Aspera Connect. Args: user (:obj:`str`): FTP user. host (:obj:`str`): FTP host. Defaults to "ftp-trace.ncbi.nlm.nih.gov". silent (:obj:`bool`, optional): Suppress logging of Aspera stdout/stderr. Defaults to False.
def download_aspera(self, user, host, silent=False): aspera_home = os.environ.get("ASPERA_HOME", None) if not aspera_home: raise ValueError("environment variable $ASPERA_HOME not set") if not os.path.exists(aspera_home): raise ValueError( "$ASPERA_HOME directory {} does not exist".format(aspera_home)) ascp = os.path.join(aspera_home, "connect/bin/ascp") key = os.path.join(aspera_home, "connect/etc/asperaweb_id_dsa.openssh") if not os.path.exists(ascp): raise ValueError("could not find ascp binary") if not os.path.exists(key): raise ValueError("could not find openssh key") parsed_url = urlparse(self.url) cmd = "{} -i {} -k1 -T -l400m {}@{}:{} {}".format( ascp, key, user, host, parsed_url.path, self._temp_file_name) logger.debug(cmd) try: pr = sp.Popen(cmd, shell=True, stdout=sp.PIPE, stderr=sp.PIPE) stdout, stderr = pr.communicate() if not silent: logger.debug("Aspera stdout: " + str(stdout)) logger.debug("Aspera stderr: " + str(stderr)) if pr.returncode == 0: logger.debug("Moving %s to %s" % ( self._temp_file_name, self.destination)) shutil.move(self._temp_file_name, self.destination) logger.debug("Successfully downloaded %s" % self.url) else: logger.error( "Failed to download %s using Aspera Connect" % self.url) finally: try: os.remove(self._temp_file_name) except OSError: pass
377,565
Parse the SOFT file entry name line that starts with '^', '!' or '#'. Args: entry_line (:obj:`str`): Line from SOFT to be parsed. Returns: :obj:`2-tuple`: Type of entry, value of entry.
def __parse_entry(entry_line): if entry_line.startswith("!"): entry_line = sub(r"!\w*?_", '', entry_line) else: entry_line = entry_line.strip()[1:] try: entry_type, entry_name = [i.strip() for i in entry_line.split("=", 1)] except ValueError: entry_type = [i.strip() for i in entry_line.split("=", 1)][0] entry_name = '' return entry_type, entry_name
377,572
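The entry-line handling can be tried standalone; `parse_entry` below mirrors the logic of `__parse_entry`, and the accession and title strings are made-up examples:

```python
import re

def parse_entry(entry_line):
    # Mirrors __parse_entry: "!"-lines lose their "!Xxx_" prefix,
    # "^"/"#"-lines lose only the leading symbol.
    if entry_line.startswith("!"):
        entry_line = re.sub(r"!\w*?_", "", entry_line)
    else:
        entry_line = entry_line.strip()[1:]
    try:
        entry_type, entry_name = [i.strip() for i in entry_line.split("=", 1)]
    except ValueError:
        entry_type = entry_line.split("=", 1)[0].strip()
        entry_name = ''
    return entry_type, entry_name

print(parse_entry("^SERIES = GSE0000"))        # ('SERIES', 'GSE0000')
print(parse_entry("!Series_title = A title"))  # ('title', 'A title')
```

Note that the non-greedy `!\w*?_` pattern strips only up to the first underscore, so `!Series_title` yields the key `title`.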
Parse list of lines with metadata information from SOFT file. Args: lines (:obj:`Iterable`): Iterator over the lines. Returns: :obj:`dict`: Metadata from SOFT file.
def parse_metadata(lines): meta = defaultdict(list) for line in lines: line = line.rstrip() if line.startswith("!"): if "_table_begin" in line or "_table_end" in line: continue key, value = __parse_entry(line) meta[key].append(value) return dict(meta)
377,573
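A sketch of how the metadata accumulates: repeated keys collect into a list and table delimiter lines are skipped, as in `parse_metadata` above. The SOFT lines are invented for the example:

```python
import re
from collections import defaultdict

# Hypothetical metadata lines from a SOFT file.
lines = [
    "!Series_title = Example title",
    "!Series_sample_id = GSM1",
    "!Series_sample_id = GSM2",
    "!series_table_begin",
]

meta = defaultdict(list)
for line in lines:
    line = line.rstrip()
    if line.startswith("!"):
        # Table delimiters carry no metadata.
        if "_table_begin" in line or "_table_end" in line:
            continue
        # Same prefix-stripping and split as __parse_entry.
        key, value = [i.strip()
                      for i in re.sub(r"!\w*?_", "", line).split("=", 1)]
        meta[key].append(value)

print(dict(meta))
```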
Parse list of lines with columns description from SOFT file. Args: lines (:obj:`Iterable`): Iterator over the lines. Returns: :obj:`pandas.DataFrame`: Columns description.
def parse_columns(lines): data = [] index = [] for line in lines: line = line.rstrip() if line.startswith("#"): tmp = __parse_entry(line) data.append(tmp[1]) index.append(tmp[0]) return DataFrame(data, index=index, columns=['description'])
377,574
Parse list of line with columns description from SOFT file of GDS. Args: lines (:obj:`Iterable`): Iterator over the lines. subsets (:obj:`dict` of :obj:`GEOparse.GDSSubset`): Subsets to use. Returns: :obj:`pandas.DataFrame`: Columns description.
def parse_GDS_columns(lines, subsets):
    data = []
    index = []
    for line in lines:
        line = line.rstrip()
        if line.startswith("#"):
            tmp = __parse_entry(line)
            data.append(tmp[1])
            index.append(tmp[0])

    df = DataFrame(data, index=index, columns=['description'])
    subset_ids = defaultdict(dict)
    for subsetname, subset in iteritems(subsets):
        subset_type = subset.get_type()
        for expid in subset.metadata["sample_id"][0].split(","):
            try:
                subset_ids[subset_type][expid] = \
                    subset.metadata['description'][0]
            except Exception as err:
                logger.error("Error processing subset %s of type %s: %s" % (
                    subsetname, subset_type, err))

    return df.join(DataFrame(subset_ids))
377,575
Parse list of lines from SOFT file into DataFrame. Args: lines (:obj:`Iterable`): Iterator over the lines. Returns: :obj:`pandas.DataFrame`: Table data.
def parse_table_data(lines): # filter lines that do not start with symbols data = "\n".join([i.rstrip() for i in lines if not i.startswith(("^", "!", "#")) and i.rstrip()]) if data: return read_csv(StringIO(data), index_col=None, sep="\t") else: return DataFrame()
377,576
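The filter-then-read step above, run on a hypothetical table fragment with its delimiter lines:

```python
from io import StringIO
from pandas import read_csv

# A made-up SOFT table fragment; delimiter lines start with "!".
lines = [
    "!sample_table_begin",
    "ID_REF\tVALUE",
    "p1\t1.5",
    "p2\t2.5",
    "!sample_table_end",
]

# Drop entry/comment lines and blanks, then parse the rest as TSV.
data = "\n".join(i.rstrip() for i in lines
                 if not i.startswith(("^", "!", "#")) and i.rstrip())
table = read_csv(StringIO(data), index_col=None, sep="\t")
print(table)
```

Only the header row and data rows survive the filter, so `read_csv` sees a clean tab-separated table.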
Parse GSM entry from SOFT file. Args: filepath (:obj:`str` or :obj:`Iterable`): Path to file with 1 GSM entry or list of lines representing GSM from GSE file. entry_name (:obj:`str`, optional): Name of the entry. By default it is inferred from the data. Returns: :obj:`GEOparse.GSM`: A GSM object.
def parse_GSM(filepath, entry_name=None):
    if isinstance(filepath, str):
        with utils.smart_open(filepath) as f:
            soft = []
            has_table = False
            for line in f:
                if "_table_begin" in line or (
                        not line.startswith(("^", "!", "#"))):
                    has_table = True
                soft.append(line.rstrip())
    else:
        soft = []
        has_table = False
        for line in filepath:
            if "_table_begin" in line or (
                    not line.startswith(("^", "!", "#"))):
                has_table = True
            soft.append(line.rstrip())

    if entry_name is None:
        sets = [i for i in soft if i.startswith("^")]
        if len(sets) > 1:
            raise Exception("More than one entry in GSM")
        if len(sets) == 0:
            raise NoEntriesException(
                "No entries found. Check if the accession is correct!")
        entry_name = parse_entry_name(sets[0])

    columns = parse_columns(soft)
    metadata = parse_metadata(soft)
    if has_table:
        table_data = parse_table_data(soft)
    else:
        table_data = DataFrame()

    gsm = GSM(name=entry_name,
              table=table_data,
              metadata=metadata,
              columns=columns)

    return gsm
377,577
Parse GSE SOFT file. Args: filepath (:obj:`str`): Path to GSE SOFT file. Returns: :obj:`GEOparse.GSE`: A GSE object.
def parse_GSE(filepath): gpls = {} gsms = {} series_counter = 0 database = None metadata = {} gse_name = None with utils.smart_open(filepath) as soft: groupper = groupby(soft, lambda x: x.startswith("^")) for is_new_entry, group in groupper: if is_new_entry: entry_type, entry_name = __parse_entry(next(group)) logger.debug("%s: %s" % (entry_type.upper(), entry_name)) if entry_type == "SERIES": gse_name = entry_name series_counter += 1 if series_counter > 1: raise Exception( "GSE file should contain only one series entry!") is_data, data_group = next(groupper) message = ("The key is not False, probably there is an " "error in the SOFT file") assert not is_data, message metadata = parse_metadata(data_group) elif entry_type == "SAMPLE": is_data, data_group = next(groupper) gsms[entry_name] = parse_GSM(data_group, entry_name) elif entry_type == "PLATFORM": is_data, data_group = next(groupper) gpls[entry_name] = parse_GPL(data_group, entry_name) elif entry_type == "DATABASE": is_data, data_group = next(groupper) database_metadata = parse_metadata(data_group) database = GEODatabase(name=entry_name, metadata=database_metadata) else: logger.error("Cannot recognize type %s" % entry_type) gse = GSE(name=gse_name, metadata=metadata, gpls=gpls, gsms=gsms, database=database) return gse
377,579
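The entry splitting in `parse_GSE` relies on `itertools.groupby` alternating between `^` header groups and the data lines that follow them; calling `next(groupper)` inside the loop consumes the data group belonging to the header just seen. A stripped-down sketch with invented entries:

```python
from itertools import groupby

# Invented SOFT lines: two sample entries, each followed by its data.
soft = [
    "^SAMPLE = GSM1\n",
    "!Sample_title = first\n",
    "^SAMPLE = GSM2\n",
    "!Sample_title = second\n",
]

entries = {}
groupper = groupby(soft, lambda x: x.startswith("^"))
for is_new_entry, group in groupper:
    if is_new_entry:
        entry_name = next(group).split("=", 1)[1].strip()
        # The very next group holds this entry's data lines.
        _, data_group = next(groupper)
        entries[entry_name] = [line.rstrip() for line in data_group]

print(entries)
```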
Parse GDS SOFT file. Args: filepath (:obj:`str`): Path to GDS SOFT file. Returns: :obj:`GEOparse.GDS`: A GDS object.
def parse_GDS(filepath): dataset_lines = [] subsets = {} database = None dataset_name = None with utils.smart_open(filepath) as soft: groupper = groupby(soft, lambda x: x.startswith("^")) for is_new_entry, group in groupper: if is_new_entry: entry_type, entry_name = __parse_entry(next(group)) logger.debug("%s: %s" % (entry_type.upper(), entry_name)) if entry_type == "SUBSET": is_data, data_group = next(groupper) message = ("The key is not False, probably there is an " "error in the SOFT file") assert not is_data, message subset_metadata = parse_metadata(data_group) subsets[entry_name] = GDSSubset(name=entry_name, metadata=subset_metadata) elif entry_type == "DATABASE": is_data, data_group = next(groupper) message = ("The key is not False, probably there is an " "error in the SOFT file") assert not is_data, message database_metadata = parse_metadata(data_group) database = GEODatabase(name=entry_name, metadata=database_metadata) elif entry_type == "DATASET": is_data, data_group = next(groupper) dataset_name = entry_name for line in data_group: dataset_lines.append(line.rstrip()) else: logger.error("Cannot recognize type %s" % entry_type) metadata = parse_metadata(dataset_lines) columns = parse_GDS_columns(dataset_lines, subsets) table = parse_table_data(dataset_lines) return GDS(name=dataset_name, metadata=metadata, columns=columns, table=table, subsets=subsets, database=database)
377,580
Make directory(ies). This function behaves like mkdir -p. Args: path_to_dir (:obj:`str`): Path to the directory to make.
def mkdir_p(path_to_dir): try: os.makedirs(path_to_dir) except OSError as e: # Python >2.5 if e.errno == EEXIST and os.path.isdir(path_to_dir): logger.debug( "Directory %s already exists. Skipping." % path_to_dir) else: raise e
377,581
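A quick check of the `mkdir -p` behavior; note that on Python 3.2+ the same effect is available directly as `os.makedirs(path, exist_ok=True)`:

```python
import errno
import os
import tempfile

def mkdir_p(path_to_dir):
    # Same idea as above: tolerate an already-existing directory.
    try:
        os.makedirs(path_to_dir)
    except OSError as e:
        if not (e.errno == errno.EEXIST and os.path.isdir(path_to_dir)):
            raise

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "a", "b")
    mkdir_p(target)
    mkdir_p(target)  # second call is a no-op rather than an error
    created = os.path.isdir(target)

print(created)
```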
Open file intelligently depending on the source and python version. Args: filepath (:obj:`str`): Path to the file. Yields: Context manager for file handle.
def smart_open(filepath):
    if filepath.endswith("gz"):
        mode = "rt"
        fopen = gzip.open
    else:
        mode = "r"
        fopen = open
    if sys.version_info[0] < 3:
        fh = fopen(filepath, mode)
    else:
        fh = fopen(filepath, mode, errors="ignore")
    try:
        yield fh
    finally:
        # Closing in ``finally`` is sufficient; a separate
        # ``except IOError`` branch would silently swallow errors
        # raised inside the calling ``with`` block.
        fh.close()
377,583
Tunes a specified pipeline with the specified tuner for TUNING_BUDGET_PER_ITER (3) iterations. Params: X: np.array of X training data y: np.array of y training data X_val: np.array of X validation data y_val: np.array of y validation data generate_model: function that returns an sklearn model to fit tuner: BTB tuner object for tuning hyperparameters
def tune_pipeline(X, y, X_val, y_val, generate_model, tuner):
    print("Tuning with GP tuner for %s iterations" % TUNING_BUDGET_PER_ITER)
    for i in range(TUNING_BUDGET_PER_ITER):
        params = tuner.propose()
        # create model using proposed hyperparams from tuner
        model = generate_model(params)
        model.fit(X, y)
        predicted = model.predict(X_val)
        # accuracy_score expects (y_true, y_pred)
        score = accuracy_score(y_val, predicted)
        # record hyper-param combination and score for tuning
        tuner.add(params, score)
    print("Final score:", tuner._best_score)
377,585
Finds the row of self.dpp_matrix that most closely corresponds to X by means of Kendall tau distance. https://en.wikipedia.org/wiki/Kendall_tau_distance Args: dpp_vector (np.array): Array with shape (n_components, )
def fit(self, dpp_vector):
    # decompose X and generate the rankings of the elements in the
    # decomposed matrix
    dpp_vector_decomposed = self.mf_model.transform(dpp_vector)
    dpp_vector_ranked = stats.rankdata(
        dpp_vector_decomposed,
        method='dense',
    )

    max_agreement_index = None
    max_agreement = -1  # min value of Kendall Tau agreement
    for i in range(self.dpp_ranked.shape[0]):
        # calculate agreement between current row and X
        agreement, _ = stats.kendalltau(
            dpp_vector_ranked,
            self.dpp_ranked[i, :],
        )
        if agreement > max_agreement:
            max_agreement_index = i
            max_agreement = agreement

    if max_agreement_index is None:
        max_agreement_index = np.random.randint(self.dpp_matrix.shape[0])

    # store the row with the highest agreement for prediction
    self.matching_dataset = self.dpp_matrix[max_agreement_index, :]
377,592
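The ranking and agreement computation leans on scipy; a small standalone check with made-up score vectors shows the two calls involved:

```python
from scipy import stats

# Dense-rank a hypothetical decomposed score vector: ranks 1, 3, 2.
ranked = stats.rankdata([0.2, 0.9, 0.5], method='dense')
print(ranked)

# Kendall tau of identical rankings is perfect agreement (1.0).
tau, _ = stats.kendalltau(ranked, [1, 3, 2])
print(tau)
```

Agreement of -1 would mean a fully reversed ranking, which is why the loop above initializes `max_agreement` to -1.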
Fit Args: X (np.array): Array of hyperparameter values with shape (n_samples, len(tunables)) y (np.array): Array of scores with shape (n_samples, )
def fit(self, X, y): self.X = X self.y = y
377,630
Generate random hyperparameter vectors Args: n (int, optional): number of candidates to generate. Defaults to 1000. Returns: candidates (np.array): Array of candidate hyperparameter vectors with shape (n_samples, len(tunables))
def _create_candidates(self, n=1000): # If using a grid, generate a list of previously unused grid points if self.grid: return self._candidates_from_grid(n) # If not using a grid, generate a list of vectors where each parameter # is chosen uniformly at random else: return self._random_candidates(n)
377,633
Use the trained model to propose a new set of parameters. Args: n (int, optional): number of candidates to propose Returns: Mapping of tunable name to proposed value. If called with n>1 then proposal is a list of dictionaries.
def propose(self, n=1):
    proposed_params = []

    for _ in range(n):
        # generate a list of random candidate vectors. If self.grid == True
        # each candidate will be a vector that has not been used before.
        candidate_params = self._create_candidates()

        # create_candidates() returns None when every grid point
        # has been tried
        if candidate_params is None:
            return None

        # predict() returns a tuple of predicted values for each candidate
        predictions = self.predict(candidate_params)

        # acquire() evaluates the list of predictions, selects one,
        # and returns its index.
        idx = self._acquire(predictions)

        # inverse transform acquired hyperparameters
        # based on hyperparameter type
        params = {}
        for i in range(candidate_params[idx, :].shape[0]):
            inverse_transformed = self.tunables[i][1].inverse_transform(
                candidate_params[idx, i]
            )
            params[self.tunables[i][0]] = inverse_transformed
        proposed_params.append(params)

    return params if n == 1 else proposed_params
377,634
Add data about known pipeline and scores. Updates ``dpp_vector`` and refits model with all data. Args: X (dict): mapping of pipeline indices to scores. Keys must correspond to the index of a column in ``dpp_matrix`` and values are the corresponding score for pipeline on the dataset.
def add(self, X): for each in X: self.dpp_vector[each] = X[each] self.fit(self.dpp_vector.reshape(1, -1))
377,640
Parses a logline timestamp into a tuple. Args: t: Timestamp in logline format. Returns: An iterable of date and time elements in the order of month, day, hour, minute, second, microsecond.
def _parse_logline_timestamp(t): date, time = t.split(' ') month, day = date.split('-') h, m, s = time.split(':') s, ms = s.split('.') return (month, day, h, m, s, ms)
377,855
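The tuple layout produced by the parser, demonstrated on a hypothetical logline timestamp (`parse_logline_timestamp` mirrors the private helper above):

```python
def parse_logline_timestamp(t):
    # Same splitting as _parse_logline_timestamp above.
    date, time = t.split(' ')
    month, day = date.split('-')
    h, m, s = time.split(':')
    s, ms = s.split('.')
    return (month, day, h, m, s, ms)

parsed = parse_logline_timestamp('05-17 15:02:01.123')
print(parsed)  # ('05', '17', '15', '02', '01', '123')
```

The elements stay as zero-padded strings, which is what makes the element-wise comparison in `logline_timestamp_comparator` work lexicographically.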
Comparator for timestamps in logline format. Args: t1: Timestamp in logline format. t2: Timestamp in logline format. Returns: -1 if t1 < t2; 1 if t1 > t2; 0 if t1 == t2.
def logline_timestamp_comparator(t1, t2): dt1 = _parse_logline_timestamp(t1) dt2 = _parse_logline_timestamp(t2) for u1, u2 in zip(dt1, dt2): if u1 < u2: return -1 elif u1 > u2: return 1 return 0
377,857
Converts an epoch timestamp in ms to log line timestamp format, which is readable for humans. Args: epoch_time: integer, an epoch timestamp in ms. time_zone: instance of tzinfo, time zone information. Using pytz rather than python 3.2 time_zone implementation for python 2 compatibility reasons. Returns: A string that is the corresponding timestamp in log line timestamp format.
def epoch_to_log_line_timestamp(epoch_time, time_zone=None):
    s, ms = divmod(epoch_time, 1000)
    d = datetime.datetime.fromtimestamp(s, tz=time_zone)
    # Zero-pad the milliseconds so e.g. 5 ms renders as ".005", not ".5".
    return d.strftime('%m-%d %H:%M:%S.') + '%03d' % ms
377,859
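The conversion can be checked against a known UTC instant; the timestamp value below is arbitrary, chosen because it falls exactly on a calendar boundary:

```python
import datetime

epoch_ms = 1609459200123  # 2021-01-01 00:00:00.123 UTC
s, ms = divmod(epoch_ms, 1000)
d = datetime.datetime.fromtimestamp(s, tz=datetime.timezone.utc)
line_ts = d.strftime('%m-%d %H:%M:%S.') + '%03d' % ms
print(line_ts)  # 01-01 00:00:00.123
```

Passing an explicit `tzinfo` makes the result deterministic regardless of the local machine's time zone.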
Customizes the root logger for a test run. The logger object has a stream handler and a file handler. The stream handler logs INFO level to the terminal, the file handler logs DEBUG level to files. Args: log_path: Location of the log file. prefix: A prefix for each log line in terminal.
def _setup_test_logger(log_path, prefix=None): log = logging.getLogger() kill_test_logger(log) log.propagate = False log.setLevel(logging.DEBUG) # Log info to stream terminal_format = log_line_format if prefix: terminal_format = '[%s] %s' % (prefix, log_line_format) c_formatter = logging.Formatter(terminal_format, log_line_time_format) ch = logging.StreamHandler(sys.stdout) ch.setFormatter(c_formatter) ch.setLevel(logging.INFO) # Log everything to file f_formatter = logging.Formatter(log_line_format, log_line_time_format) # Write logger output to files fh_info = logging.FileHandler( os.path.join(log_path, records.OUTPUT_FILE_INFO_LOG)) fh_info.setFormatter(f_formatter) fh_info.setLevel(logging.INFO) fh_debug = logging.FileHandler( os.path.join(log_path, records.OUTPUT_FILE_DEBUG_LOG)) fh_debug.setFormatter(f_formatter) fh_debug.setLevel(logging.DEBUG) log.addHandler(ch) log.addHandler(fh_info) log.addHandler(fh_debug) log.log_path = log_path logging.log_path = log_path
377,860
Cleans up a test logger object by removing all of its handlers. Args: logger: The logging object to clean up.
def kill_test_logger(logger): for h in list(logger.handlers): logger.removeHandler(h) if isinstance(h, logging.FileHandler): h.close()
377,861
Creates a symlink to the latest test run logs. Args: actual_path: The source directory where the latest test run's logs are.
def create_latest_log_alias(actual_path): alias_path = os.path.join(os.path.dirname(actual_path), 'latest') utils.create_alias(actual_path, alias_path)
377,862
Customizes the root logger for a test run. Args: log_path: Location of the report file. prefix: A prefix for each log line in terminal. filename: Name of the files. The default is the time the objects are requested.
def setup_test_logger(log_path, prefix=None, filename=None): utils.create_dir(log_path) _setup_test_logger(log_path, prefix) logging.info('Test output folder: "%s"', log_path) create_latest_log_alias(log_path)
377,863
Executes multiple test classes as a suite. This is the default entry point for running a test suite script file directly. Args: test_classes: List of python classes containing Mobly tests. argv: A list that is then parsed as cli args. If None, defaults to cli input.
def run_suite(test_classes, argv=None): # Parse cli args. parser = argparse.ArgumentParser(description='Mobly Suite Executable.') parser.add_argument( '-c', '--config', nargs=1, type=str, required=True, metavar='<PATH>', help='Path to the test configuration file.') parser.add_argument( '--tests', '--test_case', nargs='+', type=str, metavar='[ClassA[.test_a] ClassB[.test_b] ...]', help='A list of test classes and optional tests to execute.') if not argv: argv = sys.argv[1:] args = parser.parse_args(argv) # Load test config file. test_configs = config_parser.load_test_config_file(args.config[0]) # Check the classes that were passed in for test_class in test_classes: if not issubclass(test_class, base_test.BaseTestClass): logging.error('Test class %s does not extend ' 'mobly.base_test.BaseTestClass', test_class) sys.exit(1) # Find the full list of tests to execute selected_tests = compute_selected_tests(test_classes, args.tests) # Execute the suite ok = True for config in test_configs: runner = test_runner.TestRunner(config.log_path, config.test_bed_name) for (test_class, tests) in selected_tests.items(): runner.add_test_class(config, test_class, tests) try: runner.run() ok = runner.results.is_all_pass and ok except signals.TestAbortAll: pass except: logging.exception('Exception when executing %s.', config.test_bed_name) ok = False if not ok: sys.exit(1)
377,864
Opens a telnet connection to the desired AttenuatorDevice and queries basic information. Args: host: A valid hostname (IP address or DNS-resolvable name) to an MC-DAT attenuator instrument. port: An optional port number (defaults to telnet default 23)
def open(self, host, port=23): self._telnet_client.open(host, port) config_str = self._telnet_client.cmd("MN?") if config_str.startswith("MN="): config_str = config_str[len("MN="):] self.properties = dict( zip(['model', 'max_freq', 'max_atten'], config_str.split("-", 2))) self.max_atten = float(self.properties['max_atten'])
377,870
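The model-string parsing can be tried without an instrument; the reply below is a fabricated example of the "MN=" response format (model, max frequency, max attenuation):

```python
# Fabricated "MN?" reply from a Mini-Circuits-style attenuator.
config_str = "MN=RUDAT-6000-110"

if config_str.startswith("MN="):
    config_str = config_str[len("MN="):]

# Split on at most two dashes so only three fields are produced.
properties = dict(
    zip(['model', 'max_freq', 'max_atten'], config_str.split("-", 2)))
max_atten = float(properties['max_atten'])
print(properties, max_atten)
```

The `split("-", 2)` bound matters: a model name containing extra dashes would otherwise spill into the frequency and attenuation fields.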
This function returns the current attenuation from an attenuator at a given index in the instrument. Args: idx: This zero-based index is the identifier for a particular attenuator in an instrument. Raises: Error: The underlying telnet connection to the instrument is not open. Returns: A float that is the current attenuation value.
def get_atten(self, idx=0): if not self.is_open: raise attenuator.Error( "Connection to attenuator at %s is not open!" % self._telnet_client.host) if idx + 1 > self.path_count or idx < 0: raise IndexError("Attenuator index out of range!", self.path_count, idx) atten_val_str = self._telnet_client.cmd("CHAN:%s:ATT?" % (idx + 1)) atten_val = float(atten_val_str) return atten_val
377,872
Initializes an Sl4aClient. Args: ad: AndroidDevice object.
def __init__(self, ad): super(Sl4aClient, self).__init__(app_name=_APP_NAME, ad=ad) self._ad = ad self.ed = None self._adb = ad.adb
377,873
Initializes a SnippetClient. Args: package: (str) The package name of the apk where the snippets are defined. ad: (AndroidDevice) the device object associated with this client.
def __init__(self, package, ad): super(SnippetClient, self).__init__(app_name=package, ad=ad) self.package = package self._ad = ad self._adb = ad.adb self._proc = None
377,883
Starts snippet apk on the device and connects to it. After prechecks, this launches the snippet apk with an adb cmd in a standing subprocess, checks the cmd response from the apk for protocol version, then sets up the socket connection over adb port-forwarding. Raises: ProtocolVersionError, if protocol info or port info cannot be retrieved from the snippet apk.
def _start_app_and_connect(self): self._check_app_installed() self.disable_hidden_api_blacklist() persists_shell_cmd = self._get_persist_command() # Use info here so people can follow along with the snippet startup # process. Starting snippets can be slow, especially if there are # multiple, and this avoids the perception that the framework is hanging # for a long time doing nothing. self.log.info('Launching snippet apk %s with protocol %d.%d', self.package, _PROTOCOL_MAJOR_VERSION, _PROTOCOL_MINOR_VERSION) cmd = _LAUNCH_CMD % (persists_shell_cmd, self.package) start_time = time.time() self._proc = self._do_start_app(cmd) # Check protocol version and get the device port line = self._read_protocol_line() match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', line) if not match or match.group(1) != '1': raise ProtocolVersionError(self._ad, line) line = self._read_protocol_line() match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line) if not match: raise ProtocolVersionError(self._ad, line) self.device_port = int(match.group(1)) # Forward the device port to a new host port, and connect to that port self.host_port = utils.get_available_host_port() self._adb.forward( ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port]) self.connect() # Yaaay! We're done! self.log.debug('Snippet %s started after %.1fs on host port %s', self.package, time.time() - start_time, self.host_port)
377,885
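The protocol handshake boils down to two regex matches; here they are run against fabricated startup lines of the kind a snippet apk would print:

```python
import re

# Fabricated lines a snippet apk would print during startup.
start_line = 'SNIPPET START, PROTOCOL 1 0'
serving_line = 'SNIPPET SERVING, PORT 12345'

match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', start_line)
# Only major protocol version 1 is accepted.
protocol_ok = bool(match) and match.group(1) == '1'

match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', serving_line)
device_port = int(match.group(1)) if match else None

print(protocol_ok, device_port)
```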
Registers a service. This will create a service instance, start the service, and add the instance to the manager. Args: alias: string, the alias for this instance. service_class: class, the service class to instantiate. configs: (optional) config object to pass to the service class's constructor. start_service: bool, whether to start the service instance or not. Default is True.
def register(self, alias, service_class, configs=None, start_service=True): if not inspect.isclass(service_class): raise Error(self._device, '"%s" is not a class!' % service_class) if not issubclass(service_class, base_service.BaseService): raise Error( self._device, 'Class %s is not a subclass of BaseService!' % service_class) if alias in self._service_objects: raise Error( self._device, 'A service is already registered with alias "%s".' % alias) service_obj = service_class(self._device, configs) if start_service: service_obj.start() self._service_objects[alias] = service_obj
377,896
Unregisters a service instance. Stops a service and removes it from the manager. Args: alias: string, the alias of the service instance to unregister.
def unregister(self, alias): if alias not in self._service_objects: raise Error(self._device, 'No service is registered with alias "%s".' % alias) service_obj = self._service_objects.pop(alias) if service_obj.is_alive: with expects.expect_no_raises( 'Failed to stop service instance "%s".' % alias): service_obj.stop()
377,897
Syntactic sugar to enable direct access of service objects by alias. Args: name: string, the alias a service object was registered under.
def __getattr__(self, name): if self.has_service_by_name(name): return self._service_objects[name] return self.__getattribute__(name)
377,903
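A toy class showing the alias-lookup fallback (this is a sketch, not the real Mobly service manager): `__getattr__` is only invoked when normal attribute lookup fails, so registered aliases resolve while genuinely missing attributes still raise `AttributeError`:

```python
class ToyServiceManager:
    def __init__(self):
        self._service_objects = {'logcat': '<logcat service>'}

    def has_service_by_name(self, name):
        return name in self._service_objects

    def __getattr__(self, name):
        # Reached only when regular attribute lookup fails.
        if self.has_service_by_name(name):
            return self._service_objects[name]
        return self.__getattribute__(name)

mgr = ToyServiceManager()
print(mgr.logcat)  # resolved through __getattr__
```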
Creates AndroidDevice controller objects. Args: configs: A list of dicts, each representing a configuration for an Android device. Returns: A list of AndroidDevice objects.
def create(configs): if not configs: raise Error(ANDROID_DEVICE_EMPTY_CONFIG_MSG) elif configs == ANDROID_DEVICE_PICK_ALL_TOKEN: ads = get_all_instances() elif not isinstance(configs, list): raise Error(ANDROID_DEVICE_NOT_LIST_CONFIG_MSG) elif isinstance(configs[0], dict): # Configs is a list of dicts. ads = get_instances_with_configs(configs) elif isinstance(configs[0], basestring): # Configs is a list of strings representing serials. ads = get_instances(configs) else: raise Error('No valid config found in: %s' % configs) valid_ad_identifiers = list_adb_devices() + list_adb_devices_by_usb_id() for ad in ads: if ad.serial not in valid_ad_identifiers: raise DeviceError(ad, 'Android device is specified in config but' ' is not attached.') _start_services_on_ads(ads) return ads
377,904
Cleans up AndroidDevice objects. Args: ads: A list of AndroidDevice objects.
def destroy(ads): for ad in ads: try: ad.services.stop_all() except: ad.log.exception('Failed to clean up properly.')
377,905
Starts long running services on multiple AndroidDevice objects. If any one AndroidDevice object fails to start services, cleans up all existing AndroidDevice objects and their services. Args: ads: A list of AndroidDevice objects whose services to start.
def _start_services_on_ads(ads): running_ads = [] for ad in ads: running_ads.append(ad) start_logcat = not getattr(ad, KEY_SKIP_LOGCAT, DEFAULT_VALUE_SKIP_LOGCAT) try: ad.services.register( SERVICE_NAME_LOGCAT, logcat.Logcat, start_service=start_logcat) except Exception: is_required = getattr(ad, KEY_DEVICE_REQUIRED, DEFAULT_VALUE_DEVICE_REQUIRED) if is_required: ad.log.exception('Failed to start some services, abort!') destroy(running_ads) raise else: ad.log.exception('Skipping this optional device because some ' 'services failed to start.')
377,906
Parses a byte string representing a list of devices. The string is generated by calling either adb or fastboot. The tokens in each string is tab-separated. Args: device_list_str: Output of adb or fastboot. key: The token that signifies a device in device_list_str. Returns: A list of android device serial numbers.
def parse_device_list(device_list_str, key): clean_lines = new_str(device_list_str, 'utf-8').strip().split('\n') results = [] for line in clean_lines: tokens = line.strip().split('\t') if len(tokens) == 2 and tokens[1] == key: results.append(tokens[0]) return results
377,907
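The token logic above, applied to a fabricated `adb devices` reply: the header line produces a single token and is skipped, and only lines whose second token equals the key are kept:

```python
# Fabricated output of `adb devices`; fields are tab-separated.
device_list_str = (b"List of devices attached\n"
                   b"SERIAL1\tdevice\n"
                   b"SERIAL2\tunauthorized\n")

results = []
for line in device_list_str.decode('utf-8').strip().split('\n'):
    tokens = line.strip().split('\t')
    if len(tokens) == 2 and tokens[1] == 'device':
        results.append(tokens[0])

print(results)
```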
Create AndroidDevice instances from a list of serials. Args: serials: A list of android device serials. Returns: A list of AndroidDevice objects.
def get_instances(serials): results = [] for s in serials: results.append(AndroidDevice(s)) return results
377,909
Create AndroidDevice instances from a list of dict configs. Each config should have the required key-value pair 'serial'. Args: configs: A list of dicts each representing the configuration of one android device. Returns: A list of AndroidDevice objects.
def get_instances_with_configs(configs): results = [] for c in configs: try: serial = c.pop('serial') except KeyError: raise Error( 'Required value "serial" is missing in AndroidDevice config %s.' % c) is_required = c.get(KEY_DEVICE_REQUIRED, True) try: ad = AndroidDevice(serial) ad.load_config(c) except Exception: if is_required: raise ad.log.exception('Skipping this optional device due to error.') continue results.append(ad) return results
377,910
Create AndroidDevice instances for all attached android devices. Args: include_fastboot: Whether to include devices in bootloader mode or not. Returns: A list of AndroidDevice objects each representing an android device attached to the computer.
def get_all_instances(include_fastboot=False): if include_fastboot: serial_list = list_adb_devices() + list_fastboot_devices() return get_instances(serial_list) return get_instances(list_adb_devices())
377,911
Finds the AndroidDevice instances from a list that match certain conditions. Args: ads: A list of AndroidDevice instances. func: A function that takes an AndroidDevice object and returns True if the device satisfies the filter condition. Returns: A list of AndroidDevice instances that satisfy the filter condition.
def filter_devices(ads, func): results = [] for ad in ads: if func(ad): results.append(ad) return results
377,912
Finds a list of AndroidDevice instance from a list that has specific attributes of certain values. Example: get_devices(android_devices, label='foo', phone_number='1234567890') get_devices(android_devices, model='angler') Args: ads: A list of AndroidDevice instances. kwargs: keyword arguments used to filter AndroidDevice instances. Returns: A list of target AndroidDevice instances. Raises: Error: No devices are matched.
def get_devices(ads, **kwargs): def _get_device_filter(ad): for k, v in kwargs.items(): if not hasattr(ad, k): return False elif getattr(ad, k) != v: return False return True filtered = filter_devices(ads, _get_device_filter) if not filtered: raise Error( 'Could not find a target device that matches condition: %s.' % kwargs) else: return filtered
377,913
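The kwargs-based filter in `get_devices` can be sketched with `SimpleNamespace` stand-ins for `AndroidDevice` objects; the labels and serials are invented:

```python
from types import SimpleNamespace

# Stand-ins for AndroidDevice objects with arbitrary attributes.
ads = [SimpleNamespace(serial='A', label='foo'),
       SimpleNamespace(serial='B', label='bar')]

def get_devices(ads, **kwargs):
    def _get_device_filter(ad):
        # A device matches only if it has every attribute with the
        # requested value.
        return all(hasattr(ad, k) and getattr(ad, k) == v
                   for k, v in kwargs.items())
    return [ad for ad in ads if _get_device_filter(ad)]

matched = get_devices(ads, label='foo')
print([ad.serial for ad in matched])
```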
Add attributes to the AndroidDevice object based on config. Args: config: A dictionary representing the configs. Raises: Error: The config is trying to overwrite an existing attribute.
def load_config(self, config): for k, v in config.items(): if hasattr(self, k): raise DeviceError( self, ('Attribute %s already exists with value %s, cannot set ' 'again.') % (k, getattr(self, k))) setattr(self, k, v)
377,927
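The attribute-collision guard in `load_config` can be sketched with a minimal stand-in device class (the class name and config keys below are made up for illustration):

```python
class DeviceError(Exception):
    pass

class FakeDevice:
    # Minimal stand-in showing the guard: a config key that collides with an
    # existing attribute raises instead of silently overwriting it.
    def load_config(self, config):
        for k, v in config.items():
            if hasattr(self, k):
                raise DeviceError(
                    'Attribute %s already exists with value %s, cannot set '
                    'again.' % (k, getattr(self, k)))
            setattr(self, k, v)

d = FakeDevice()
d.load_config({'label': 'foo', 'phone_number': '1234567890'})
try:
    d.load_config({'label': 'bar'})
    collided = False
except DeviceError:
    collided = True
```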
Takes a bug report on the device and stores it in a file. Args: test_name: Name of the test method that triggered this bug report. begin_time: Timestamp of when the test started. timeout: float, the number of seconds to wait for bugreport to complete, default is 5min. destination: string, path to the directory where the bugreport should be saved.
def take_bug_report(self, test_name, begin_time, timeout=300, destination=None): new_br = True try: stdout = self.adb.shell('bugreportz -v').decode('utf-8') # This check is necessary for builds before N, where adb shell's ret # code and stderr are not propagated properly. if 'not found' in stdout: new_br = False except adb.AdbError: new_br = False if destination: br_path = utils.abs_path(destination) else: br_path = os.path.join(self.log_path, 'BugReports') utils.create_dir(br_path) base_name = ',%s,%s.txt' % (begin_time, self._normalized_serial) if new_br: base_name = base_name.replace('.txt', '.zip') test_name_len = utils.MAX_FILENAME_LEN - len(base_name) out_name = test_name[:test_name_len] + base_name full_out_path = os.path.join(br_path, out_name.replace(' ', r'\ ')) # in case device restarted, wait for adb interface to return self.wait_for_boot_completion() self.log.info('Taking bugreport for %s.', test_name) if new_br: out = self.adb.shell('bugreportz', timeout=timeout).decode('utf-8') if not out.startswith('OK'): raise DeviceError(self, 'Failed to take bugreport: %s' % out) br_out_path = out.split(':')[1].strip() self.adb.pull([br_out_path, full_out_path]) else: # shell=True as this command redirects the stdout to a local file # using shell redirection. self.adb.bugreport( ' > "%s"' % full_out_path, shell=True, timeout=timeout) self.log.info('Bugreport for %s taken at %s.', test_name, full_out_path)
377,930
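The file-naming step of `take_bug_report` can be isolated as a sketch. `MAX_FILENAME_LEN` is assumed here to be 255 (the value in `mobly.utils` at the time of writing), and the test name, timestamp, and serial below are invented:

```python
# Assumption: mobly.utils.MAX_FILENAME_LEN is 255.
MAX_FILENAME_LEN = 255

def bugreport_name(test_name, begin_time, serial, new_br):
    # Mirror the naming logic above: ',<begin_time>,<serial>.txt', switched
    # to '.zip' when bugreportz (N and later) is available.
    base_name = ',%s,%s.txt' % (begin_time, serial)
    if new_br:
        base_name = base_name.replace('.txt', '.zip')
    # Truncate the test name so the full file name stays within the limit.
    test_name_len = MAX_FILENAME_LEN - len(base_name)
    return test_name[:test_name_len] + base_name

name = bugreport_name('test_wifi', '07-22-2025_12-00-00', 'SER123', new_br=True)
```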
Start iperf client on the device. Returns status as True if the iperf client starts successfully, along with data flow information as results. Args: server_host: Address of the iperf server. extra_args: A string representing extra arguments for iperf client, e.g. '-i 1 -t 30'. Returns: status: True if the iperf client started successfully. results: A list of strings containing data flow information.
def run_iperf_client(self, server_host, extra_args=''): out = self.adb.shell('iperf3 -c %s %s' % (server_host, extra_args)) clean_out = new_str(out, 'utf-8').strip().split('\n') if 'error' in clean_out[0].lower(): return False, clean_out return True, clean_out
377,931
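The output parsing in `run_iperf_client` can be sketched on its own; the sample output strings below are abbreviated, illustrative iperf3 output, not captured from a real run:

```python
def parse_iperf_output(raw):
    # Mirrors the success check above: on failure, iperf3's first output
    # line contains 'error' (e.g. 'iperf3: error - ...').
    clean = raw.strip().split('\n')
    if 'error' in clean[0].lower():
        return False, clean
    return True, clean

ok, lines = parse_iperf_output(
    'Connecting to host 1.2.3.4, port 5201\n[ ID] Interval ...')
bad, _ = parse_iperf_output('iperf3: error - unable to connect to server')
```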
Waits for Android framework to broadcast ACTION_BOOT_COMPLETED. By default, this function times out after 15 minutes. Args: timeout: float, the number of seconds to wait before timing out. If not specified, the default timeout of 15 minutes is used.
def wait_for_boot_completion( self, timeout=DEFAULT_TIMEOUT_BOOT_COMPLETION_SECOND): timeout_start = time.time() self.adb.wait_for_device(timeout=timeout) while time.time() < timeout_start + timeout: try: if self.is_boot_completed(): return except adb.AdbError: # adb shell calls may fail during certain period of booting # process, which is normal. Ignoring these errors. pass time.sleep(5) raise DeviceError(self, 'Booting process timed out')
377,932
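The poll-until-ready loop in `wait_for_boot_completion` follows a generic pattern worth isolating. This sketch swallows `RuntimeError` as a stand-in for `adb.AdbError`, and uses a short interval so it runs quickly; the real code sleeps 5 seconds between polls:

```python
import time

def wait_for(predicate, timeout, interval=0.01):
    # Generic poll-until-true-with-deadline loop, the same shape as
    # wait_for_boot_completion above. Transient errors are swallowed
    # because the condition may legitimately fail mid-boot.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if predicate():
                return
        except RuntimeError:
            pass  # stand-in for adb.AdbError during early boot
        time.sleep(interval)
    raise TimeoutError('Condition not met within %s seconds' % timeout)

calls = []
def booted():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError('adb not ready')  # simulated transient failure
    return True

wait_for(booted, timeout=1)
```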
Expects an expression to evaluate to True. If the expectation is not met, the test is marked as failed after its execution finishes. Args: condition: The expression that is evaluated. msg: A string explaining the details in case of failure. extras: An optional field for extra information to be included in test result.
def expect_true(condition, msg, extras=None): try: asserts.assert_true(condition, msg, extras) except signals.TestSignal as e: logging.exception('Expected a `True` value, got `False`.') recorder.add_error(e)
377,938
Expects an expression to evaluate to False. If the expectation is not met, the test is marked as failed after its execution finishes. Args: condition: The expression that is evaluated. msg: A string explaining the details in case of failure. extras: An optional field for extra information to be included in test result.
def expect_false(condition, msg, extras=None): try: asserts.assert_false(condition, msg, extras) except signals.TestSignal as e: logging.exception('Expected a `False` value, got `True`.') recorder.add_error(e)
377,939
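The key behavior of `expect_true` / `expect_false`, as opposed to the `asserts` API, is that a failed expectation is recorded and execution continues. A minimal sketch of that pattern, with a made-up `_Recorder` in place of the module-level recorder:

```python
class _Recorder:
    # Minimal stand-in for the module-level recorder: failed expectations
    # are collected, not raised.
    def __init__(self):
        self.errors = []

    def add_error(self, error):
        self.errors.append(error)

recorder = _Recorder()

def expect_true(condition, msg):
    # Unlike an assert, a failed expectation does not stop the test.
    if not condition:
        recorder.add_error(AssertionError(msg))

expect_true(1 + 1 == 2, 'math works')
expect_true(1 + 1 == 3, 'this one is recorded')
expect_true(2 * 2 == 4, 'execution continued past the failure')
```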
Expects that no exception is raised in a context. If the expectation is not met, the test is marked as failed after its execution finishes. A default message is added to the exception's `details`. Args: message: string, custom message to add to exception's `details`. extras: An optional field for extra information to be included in test result.
def expect_no_raises(message=None, extras=None): try: yield except Exception as e: e_record = records.ExceptionRecord(e) if extras: e_record.extras = extras msg = message or 'Got an unexpected exception' details = '%s: %s' % (msg, e_record.details) logging.exception(details) e_record.details = details recorder.add_error(e_record)
377,941
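`expect_no_raises` is a context manager that records exceptions with a composed message instead of letting them propagate. A simplified sketch, using a plain list in place of the records/recorder machinery:

```python
import contextlib

errors = []

@contextlib.contextmanager
def expect_no_raises(message=None):
    # Sketch of the context manager above: an exception inside the block is
    # recorded with a composed message rather than aborting the test.
    try:
        yield
    except Exception as e:
        msg = message or 'Got an unexpected exception'
        errors.append('%s: %s' % (msg, e))

with expect_no_raises('step 1 should not raise'):
    raise ValueError('boom')

with expect_no_raises():
    pass  # a clean block records nothing
```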
Resets the internal state of the recorder. Args: record: records.TestResultRecord, the test record for a test.
def reset_internal_states(self, record=None): self._record = None self._count = 0 self._record = record
377,942
Record an error from expect APIs. This method generates a position stamp for the expect. The stamp is composed of a timestamp and the number of errors recorded so far. Args: error: Exception or signals.ExceptionRecord, the error to add.
def add_error(self, error): self._count += 1 self._record.add_error('expect@%s+%s' % (time.time(), self._count), error)
377,943
Constructor of the class. The constructor is the only place to pass in a config. If you need to change the config later, you should unregister the service instance from `ServiceManager` and register again with the new config. Args: device: the device object this service is associated with. configs: optional configuration defined by the author of the service class.
def __init__(self, device, configs=None): self._device = device self._configs = configs
377,945
Convenient method for creating excerpts of adb logcat. To use this feature, call this method at the end of: `setup_class`, `teardown_test`, and `teardown_class`. This moves the current content of `self.adb_logcat_file_path` to the log directory specific to the current test. Args: current_test_info: `self.current_test_info` in a Mobly test.
def create_per_test_excerpt(self, current_test_info): self.pause() dest_path = current_test_info.output_path utils.create_dir(dest_path) self._ad.log.debug('AdbLog excerpt location: %s', dest_path) shutil.move(self.adb_logcat_file_path, dest_path) self.resume()
377,950