Get a method definition from the method repository.

Args:
    namespace (str): Methods namespace
    method (str): Method name
    snapshot_id (int): snapshot_id of the method
    wdl_only (bool): If True, return only the WDL payload and exclude metadata

Swagger: https://api.firecloud.org/#!/Method_Repository/get_api_methods_namespace_name_snapshotId

def get_repository_method(namespace, method, snapshot_id, wdl_only=False):
    uri = "methods/{0}/{1}/{2}?onlyPayload={3}".format(
        namespace, method, snapshot_id, str(wdl_only).lower())
    return __get(uri)
668,245
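These FireCloud helpers are thin wrappers around an HTTP client and return response objects. A minimal usage sketch, assuming the wrappers are exposed from a module imported as fapi, credentials are already configured, and responses follow the requests API (the namespace and method name below are invented):

from firecloud import api as fapi

# Fetch snapshot 1 of a hypothetical method, WDL payload only.
resp = fapi.get_repository_method("demo-namespace", "bwa-align", 1, wdl_only=True)
if resp.status_code == 200:
    print(resp.text)  # raw WDL source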
Redacts a method and all of its associated configurations. The method
should exist in the methods repository.

Args:
    namespace (str): Methods namespace
    name (str): Method name
    snapshot_id (int): snapshot_id of the method

Swagger: https://api.firecloud.org/#!/Method_Repository/delete_api_methods_namespace_name_snapshotId

def delete_repository_method(namespace, name, snapshot_id):
    uri = "methods/{0}/{1}/{2}".format(namespace, name, snapshot_id)
    return __delete(uri)
668,247
Redacts a configuration. The configuration should exist in the methods
repository.

Args:
    namespace (str): Configuration namespace
    name (str): Configuration name
    snapshot_id (int): snapshot_id of the configuration

Swagger: https://api.firecloud.org/#!/Method_Repository/delete_api_configurations_namespace_name_snapshotId

def delete_repository_config(namespace, name, snapshot_id):
    uri = "configurations/{0}/{1}/{2}".format(namespace, name, snapshot_id)
    return __delete(uri)
668,248
Get permissions for a method. The method should exist in the methods
repository.

Args:
    namespace (str): Methods namespace
    method (str): Method name
    snapshot_id (int): snapshot_id of the method

Swagger: https://api.firecloud.org/#!/Method_Repository/getMethodACL

def get_repository_method_acl(namespace, method, snapshot_id):
    uri = "methods/{0}/{1}/{2}/permissions".format(namespace, method, snapshot_id)
    return __get(uri)
668,249
Set method permissions. The method should exist in the methods repository.

Args:
    namespace (str): Methods namespace
    method (str): Method name
    snapshot_id (int): snapshot_id of the method
    acl_updates (list(dict)): List of access control updates

Swagger: https://api.firecloud.org/#!/Method_Repository/setMethodACL

def update_repository_method_acl(namespace, method, snapshot_id, acl_updates):
    uri = "methods/{0}/{1}/{2}/permissions".format(namespace, method, snapshot_id)
    return __post(uri, json=acl_updates)
668,250
Get configuration permissions. The configuration should exist in the
methods repository.

Args:
    namespace (str): Configuration namespace
    config (str): Configuration name
    snapshot_id (int): snapshot_id of the configuration

Swagger: https://api.firecloud.org/#!/Method_Repository/getConfigACL

def get_repository_config_acl(namespace, config, snapshot_id):
    uri = "configurations/{0}/{1}/{2}/permissions".format(namespace, config, snapshot_id)
    return __get(uri)
668,251
Set configuration permissions. The configuration should exist in the
methods repository.

Args:
    namespace (str): Configuration namespace
    config (str): Configuration name
    snapshot_id (int): snapshot_id of the configuration
    acl_updates (list(dict)): List of access control updates

Swagger: https://api.firecloud.org/#!/Method_Repository/setConfigACL

def update_repository_config_acl(namespace, config, snapshot_id, acl_updates):
    uri = "configurations/{0}/{1}/{2}/permissions".format(namespace, config, snapshot_id)
    return __post(uri, json=acl_updates)
668,252
Abort running job in a workspace.

Args:
    namespace (str): project to which workspace belongs
    workspace (str): Workspace name
    submission_id (str): Submission's unique identifier

Swagger: https://api.firecloud.org/#!/Submissions/deleteSubmission

def abort_submission(namespace, workspace, submission_id):
    uri = "workspaces/{0}/{1}/submissions/{2}".format(namespace, workspace, submission_id)
    return __delete(uri)
668,254
Request submission information.

Args:
    namespace (str): project to which workspace belongs
    workspace (str): Workspace name
    submission_id (str): Submission's unique identifier

Swagger: https://api.firecloud.org/#!/Submissions/monitorSubmission

def get_submission(namespace, workspace, submission_id):
    uri = "workspaces/{0}/{1}/submissions/{2}".format(namespace, workspace, submission_id)
    return __get(uri)
668,255
Request the metadata for a workflow in a submission.

Args:
    namespace (str): project to which workspace belongs
    workspace (str): Workspace name
    submission_id (str): Submission's unique identifier
    workflow_id (str): Workflow's unique identifier

Swagger: https://api.firecloud.org/#!/Submissions/workflowMetadata

def get_workflow_metadata(namespace, workspace, submission_id, workflow_id):
    uri = "workspaces/{0}/{1}/submissions/{2}/workflows/{3}".format(
        namespace, workspace, submission_id, workflow_id)
    return __get(uri)
668,256
Request the outputs for a workflow in a submission.

Args:
    namespace (str): project to which workspace belongs
    workspace (str): Workspace name
    submission_id (str): Submission's unique identifier
    workflow_id (str): Workflow's unique identifier

Swagger: https://api.firecloud.org/#!/Submissions/workflowOutputsInSubmission

def get_workflow_outputs(namespace, workspace, submission_id, workflow_id):
    uri = "workspaces/{0}/{1}/".format(namespace, workspace)
    uri += "submissions/{0}/workflows/{1}/outputs".format(submission_id, workflow_id)
    return __get(uri)
668,257
Create a new FireCloud Workspace.

Args:
    namespace (str): project to which workspace belongs
    name (str): Workspace name
    authorizationDomain (str): Name of the group whose members may access
        this workspace; pass an empty string for no authorization domain
    attributes (dict): Workspace attributes as key/value pairs

Swagger: https://api.firecloud.org/#!/Workspaces/createWorkspace

def create_workspace(namespace, name, authorizationDomain="", attributes=None):
    if not attributes:
        attributes = dict()
    body = {
        "namespace": namespace,
        "name": name,
        "attributes": attributes
    }
    if authorizationDomain:
        authDomain = [{"membersGroupName": authorizationDomain}]
    else:
        authDomain = []
    body["authorizationDomain"] = authDomain
    return __post("workspaces", json=body)
668,258
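A usage sketch for workspace creation, assuming the same fapi module import as above (project, workspace, and group names are invented):

from firecloud import api as fapi

resp = fapi.create_workspace("my-billing-project", "demo-workspace",
                             authorizationDomain="my-auth-group",
                             attributes={"description": "scratch space"})
resp.raise_for_status()  # responses follow the requests API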
Add a user to a group the caller owns.

Args:
    group (str): Group name
    role (str): Role of user for group; either 'member' or 'admin'
    email (str): Email of user or group to add

Swagger: https://api.firecloud.org/#!/Groups/addUserToGroup

def add_user_to_group(group, role, email):
    uri = "groups/{0}/{1}/{2}".format(group, role, email)
    return __put(uri)
668,262
Remove a user from a group the caller owns.

Args:
    group (str): Group name
    role (str): Role of user for group; either 'member' or 'admin'
    email (str): Email of user or group to remove

Swagger: https://api.firecloud.org/#!/Groups/removeUserFromGroup

def remove_user_from_group(group, role, email):
    uri = "groups/{0}/{1}/{2}".format(group, role, email)
    return __delete(uri)
668,263
Create a new FireCloud method. If the namespace + name already exists, a new
snapshot is created.

Args:
    namespace (str): Method namespace for this method
    name (str): Method name
    wdl (file): WDL description
    synopsis (str): Short description of task
    documentation (file): Extra documentation for method
    api_url (str): FireCloud API root; defaults to fapi.PROD_API_ROOT

def new(namespace, name, wdl, synopsis, documentation=None,
        api_url=fapi.PROD_API_ROOT):
    r = fapi.update_workflow(namespace, name, synopsis, wdl, documentation, api_url)
    fapi._check_response_code(r, 201)
    d = r.json()
    return Method(namespace, name, d["snapshotId"])
668,267
Set permissions for this method.

Args:
    role (str): Access level, one of {"OWNER", "READER", "WRITER", "NO ACCESS"}
    users (list(str)): List of users to give role to

def set_acl(self, role, users):
    acl_updates = [{"user": user, "role": role} for user in users]
    r = fapi.update_repository_method_acl(
        self.namespace, self.name, self.snapshot_id, acl_updates, self.api_url)
    fapi._check_response_code(r, 200)
668,271
Remove attribute from a workspace.

Args:
    attr (str): attribute name

def remove_attribute(self, attr):
    update = [fapi._attr_rem(attr)]
    r = fapi.update_workspace_attributes(self.namespace, self.name,
                                         update, self.api_url)
    # Verify the API call succeeded before mutating the local cache
    # (the original popped the attribute before checking the response).
    fapi._check_response_code(r, 200)
    self.data["workspace"]["attributes"].pop(attr, None)
668,291
Upload entity data to workspace from tsv loadfile.

Args:
    tsv_file (file): Tab-delimited file of entity data

def import_tsv(self, tsv_file):
    # Pass the argument itself; the original referenced a nonexistent
    # ``self.tsv_file`` attribute.
    r = fapi.upload_entities_tsv(self.namespace, self.name,
                                 tsv_file, self.api_url)
    fapi._check_response_code(r, 201)
668,292
Return entity in this workspace.

Args:
    etype (str): Entity type
    entity_id (str): Entity name/unique id

def get_entity(self, etype, entity_id):
    r = fapi.get_entity(self.namespace, self.name, etype, entity_id, self.api_url)
    fapi._check_response_code(r, 200)
    dresp = r.json()
    return Entity(etype, entity_id, dresp['attributes'])
668,293
Delete an entity in this workspace.

Args:
    etype (str): Entity type
    entity_id (str): Entity name/unique id

def delete_entity(self, etype, entity_id):
    r = fapi.delete_entity(self.namespace, self.name, etype, entity_id, self.api_url)
    fapi._check_response_code(r, 202)
668,294
Upload entity objects.

Args:
    entities: iterable of firecloud.Entity objects.

def import_entities(self, entities):
    edata = Entity.create_payload(entities)
    r = fapi.upload_entities(self.namespace, self.name, edata, self.api_url)
    fapi._check_response_code(r, 201)
668,295
Copy entities from another workspace.

Args:
    from_namespace (str): Source workspace namespace
    from_workspace (str): Source workspace name
    etype (str): Entity type
    enames (list(str)): List of entity names to copy

def copy_entities(self, from_namespace, from_workspace, etype, enames):
    r = fapi.copy_entities(from_namespace, from_workspace, self.namespace,
                           self.name, etype, enames, self.api_url)
    fapi._check_response_code(r, 201)
668,301
Clone this workspace.

Args:
    to_namespace (str): Target workspace namespace
    to_name (str): Target workspace name

def clone(self, to_namespace, to_name):
    r = fapi.clone_workspace(self.namespace, self.name,
                             to_namespace, to_name, self.api_url)
    fapi._check_response_code(r, 201)
    return Workspace(to_namespace, to_name, self.api_url)
668,304
Open a file for atomic writing: write to a temp file, then rename it to the
value of ``path``.

Arguments:
    ``permissions``: Permissions to set (default: umask)
    ``file_factory``: If given, the handle yielded will be the result of
        calling file_factory(path)

Additional arguments are passed to tempfile.NamedTemporaryFile.

def atomic_write(path, mode='wt', permissions=None, file_factory=None, **kwargs):
    # Presumably decorated with @contextlib.contextmanager in the source,
    # since it yields a handle for use in a ``with`` block.
    if permissions is None:
        permissions = apply_umask()
    # Handle stdout:
    if path == '-':
        yield sys.stdout
    else:
        base_dir = os.path.dirname(path)
        kwargs['suffix'] = os.path.basename(path)
        tf = tempfile.NamedTemporaryFile(dir=base_dir, mode=mode,
                                         delete=False, **kwargs)
        # If a file_factory is given, close and re-open a handle using the
        # file_factory
        if file_factory is not None:
            tf.close()
            tf = file_factory(tf.name)
        try:
            with tf:
                yield tf
            # Move the temp file into place
            os.rename(tf.name, path)
            os.chmod(path, permissions)
        except:
            os.remove(tf.name)
            raise
668,497
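A minimal usage sketch, assuming atomic_write is registered as a context manager as noted above (the path is illustrative):

# Readers of /tmp/report.txt never observe a half-written file: content is
# staged in a temp file and renamed into place only on success.
with atomic_write('/tmp/report.txt') as fh:
    fh.write('all-or-nothing contents\n')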
Parse cgMLST alleles from a fasta file. The cgMLST FASTA file must have
headers in the format ">{marker name}|{allele name}".

Args:
    cgmlst_fasta (str): cgMLST fasta file path

Returns:
    dict of list: Marker name to list of allele sequences

def parse_cgmlst_alleles(cgmlst_fasta):
    out = defaultdict(list)
    for header, seq in parse_fasta(cgmlst_fasta):
        if '|' not in header:
            raise Exception('Unexpected format for cgMLST fasta file header. '
                            'No "|" (pipe) delimiter present! Header="{}"'.format(header))
        marker_name, allele_name = header.split('|')
        out[marker_name].append(seq)
    return out
668,514
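For illustration, a tiny cgMLST FASTA in the expected ">{marker}|{allele}" format and the resulting dict (marker names and sequences are invented; parse_fasta is assumed to yield (header, sequence) pairs):

# contents of toy.fasta:
#   >NEIS0001|1
#   ACGTACGT
#   >NEIS0001|2
#   ACGTACGA
#   >NEIS0002|1
#   TTGACGTA
alleles = parse_cgmlst_alleles('toy.fasta')
# alleles == {'NEIS0001': ['ACGTACGT', 'ACGTACGA'], 'NEIS0002': ['TTGACGTA']}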
Check that a file is in valid FASTA format.

- The first non-blank line must begin with a '>' header character.
- Sequence lines may only contain valid IUPAC nucleotide characters.

Args:
    fasta_path (str): FASTA file path
    logger (logging.Logger): logger for error/info messages

Raises:
    Exception: If invalid FASTA format

def fasta_format_check(fasta_path, logger):
    header_count = 0
    nt_count = 0
    with open(fasta_path) as f:
        # enumerate so reported line numbers count every line, including
        # blank and header lines (the original only counted sequence lines)
        for line_count, l in enumerate(f, start=1):
            l = l.strip()
            if l == '':
                continue
            if l[0] == '>':
                header_count += 1
                continue
            if header_count == 0:
                error_msg = ('First non-blank line (L:{line_count}) does not contain '
                             'FASTA header. Line beginning with ">" expected.'
                             .format(line_count=line_count))
                logger.error(error_msg)
                raise Exception(error_msg)
            non_nucleotide_chars_in_line = set(l) - VALID_NUCLEOTIDES
            if len(non_nucleotide_chars_in_line) > 0:
                error_msg = ('Line {line} contains the following non-nucleotide '
                             'characters: {non_nt_chars}'
                             .format(line=line_count,
                                     non_nt_chars=', '.join(non_nucleotide_chars_in_line)))
                logger.error(error_msg)
                raise Exception(error_msg)
            nt_count += len(l)
    if nt_count == 0:
        error_msg = 'File "{}" does not contain any nucleotide sequence.'.format(fasta_path)
        logger.error(error_msg)
        raise Exception(error_msg)
    logger.info('Valid FASTA format "{}" ({} bp)'.format(fasta_path, nt_count))
668,520
Convert a list of ACGT strings to a matrix of ints 1-4.

Args:
    seqs (list of str): nucleotide sequences with only 'ACGT' characters

Returns:
    numpy.array of int: matrix of integers from 1 to 4 inclusive
    representing A, C, G, and T

def seq_int_arr(seqs):
    return np.array([[NT_TO_INT[c] for c in x.upper()] for x in seqs])
668,524
Group alleles by matching ends.

Args:
    arr (numpy.array): 2D int matrix of alleles
    bp (int): length of ends to group by

Returns:
    dict of lists: key of start + end strings to list of indices of alleles
    with matching ends

def group_alleles_by_start_end_Xbp(arr, bp=28):
    starts = arr[:, 0:bp]
    ends = arr[:, -bp:]
    starts_ends_idxs = defaultdict(list)
    n_alleles = arr.shape[0]
    for i in range(n_alleles):
        start_i_str = ''.join(str(x) for x in starts[i])
        end_i_str = ''.join(str(x) for x in ends[i])
        starts_ends_idxs[start_i_str + end_i_str].append(i)
    return starts_ends_idxs
668,525
Flat clusters from a distance matrix.

Args:
    dists (numpy.array): pdist condensed distance matrix
    t (float): fcluster (tree cutting) distance threshold

Returns:
    dict of lists: cluster number to list of indices of distances in cluster

def allele_clusters(dists, t=0.025):
    # use the ``t`` argument; the original hard-coded the 0.025 threshold
    clusters = fcluster(linkage(dists), t, criterion='distance')
    cluster_idx = defaultdict(list)
    for idx, cl in enumerate(clusters):
        cluster_idx[cl].append(idx)
    return cluster_idx
668,526
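A sketch of how this might be driven end-to-end with SciPy, assuming dists is a condensed distance matrix as produced by pdist (the profiles below are toy data):

import numpy as np
from scipy.spatial.distance import pdist

# Four toy allele profiles encoded as small int vectors.
profiles = np.array([[1, 2, 3], [1, 2, 3], [4, 4, 4], [4, 4, 3]])
dists = pdist(profiles, metric='hamming')  # condensed distance matrix
clusters = allele_clusters(dists, t=0.4)
# clusters maps a cluster id to the indices of its member profiles,
# e.g. {1: [0, 1], 2: [2, 3]}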
Find the index of the row with the minimum row distance sum, i.e. the row
with the least overall distance to all other rows.

Args:
    dists (np.array): must be a square distance matrix

Returns:
    int: index of row with min dist row sum

def min_row_dist_sum_idx(dists):
    # For a symmetric distance matrix, summing along axis 0 gives the same
    # totals as summing each row.
    row_sums = np.apply_along_axis(arr=dists, axis=0, func1d=np.sum)
    return row_sums.argmin()
668,527
Compute Mash distances from a genome fasta file to the RefSeq sketch DB.

Args:
    fasta_path (str): genome fasta file path (the Mash binary and RefSeq
        sketch DB paths come from the MASH_BIN and MASH_SKETCH_FILE constants)

Returns:
    str: Mash STDOUT string

def mash_dist_trusted(fasta_path):
    args = [MASH_BIN, 'dist', MASH_SKETCH_FILE, fasta_path]
    p = Popen(args, stderr=PIPE, stdout=PIPE)
    (stdout, stderr) = p.communicate()
    retcode = p.returncode
    if retcode != 0:
        raise Exception('Could not run Mash dist {}'.format(stderr))
    return stdout
668,529
Collapse redundant cgMLST profiles so each distinct profile is represented
only once.

Args:
    arr (numpy.array): cgMLST profile matrix, one row per genome
    genomes (list): Genome names corresponding to the rows of ``arr``

Returns:
    (numpy.array, list of list): profile matrix with one row per distinct
    profile, and genome names grouped by shared profile

def nr_profiles(arr, genomes):
    gs_collapse = []
    genome_idx_dict = {}
    indices = []
    patt_dict = {}
    for i, g in enumerate(genomes):
        # Hash each profile row by its raw bytes; ``tobytes()`` is the
        # non-deprecated spelling of the original ``tostring()``.
        p = arr[i, :].tobytes()
        if p in patt_dict:
            parent = patt_dict[p]
            idx = genome_idx_dict[parent]
            gs_collapse[idx].append(g)
        else:
            indices.append(i)
            patt_dict[p] = g
            genome_idx_dict[g] = len(gs_collapse)
            gs_collapse.append([g])
    return arr[indices, :], gs_collapse
668,533
Alleles to retrieve from the genome fasta.

Get a dict of genome fasta contig title to a list of blastn results for the
allele sequences that must be retrieved from that contig.

Args:
    df (pandas.DataFrame): blastn results dataframe

Returns:
    {str: [pandas.Series]}: dict of contig title (header name) to list of
    top blastn result records for each marker for which the allele sequence
    must be retrieved from the original sequence.

def alleles_to_retrieve(df):
    contig_blastn_records = defaultdict(list)
    markers = df.marker.unique()
    for m in markers:
        dfsub = df[df.marker == m]
        # only the top blastn result for each marker is considered
        for i, r in dfsub.iterrows():
            if r.coverage < 1.0:
                contig_blastn_records[r.stitle].append(r)
            break
    return contig_blastn_records
668,550
Perfect BLAST matches to marker results dict.

Parse perfect BLAST matches to a marker results dict.

Args:
    df (pandas.DataFrame): DataFrame of perfect BLAST matches

Returns:
    dict: cgMLST330 marker names to matching allele numbers

def matches_to_marker_results(df):
    assert isinstance(df, pd.DataFrame)
    from collections import defaultdict
    d = defaultdict(list)
    for idx, row in df.iterrows():
        marker = row['marker']
        d[marker].append(row)
    marker_results = {}
    for k, v in d.items():
        if len(v) > 1:
            logging.debug('Multiple potential cgMLST allele matches (n=%s) found for '
                          'marker %s. Selecting match on longest contig.', len(v), k)
            df_marker = pd.DataFrame(v)
            df_marker.sort_values('slen', ascending=False, inplace=True)
            for i, r in df_marker.iterrows():
                allele = r['allele_name']
                slen = r['slen']
                logging.debug('Selecting allele %s from contig with length %s', allele, slen)
                seq = r['sseq']
                if '-' in seq:
                    logging.warning('Gaps found in allele. Removing gaps. %s', r)
                    seq = seq.replace('-', '').upper()
                    allele = allele_name(seq)
                marker_results[k] = allele_result_dict(allele, seq, r.to_dict())
                break
        elif len(v) == 1:
            row = v[0]
            seq = row['sseq']
            if '-' in seq:
                logging.warning('Gaps found in allele. Removing gaps. %s', row)
                seq = seq.replace('-', '').upper()
            allele = allele_name(seq)
            marker_results[k] = allele_result_dict(allele, seq, row.to_dict())
        else:
            err_msg = 'Empty list of matches for marker {}'.format(k)
            logging.error(err_msg)
            raise Exception(err_msg)
    return marker_results
668,552
Perform in silico cgMLST on an input genome.

Args:
    blast_runner (sistr.src.blast_wrapper.BlastRunner): blastn runner object
        with genome fasta initialized

Returns:
    dict: cgMLST ref genome match, distance to closest ref genome,
        subspecies and serovar predictions
    dict: marker allele match results (seq, allele name, blastn results)

def run_cgmlst(blast_runner, full=False):
    from sistr.src.serovar_prediction.constants import genomes_to_serovar
    df_cgmlst_profiles = ref_cgmlst_profiles()
    logging.debug('{} distinct cgMLST330 profiles'.format(df_cgmlst_profiles.shape[0]))
    logging.info('Running BLAST on serovar predictive cgMLST330 alleles')
    cgmlst_fasta_path = CGMLST_CENTROID_FASTA_PATH if not full else CGMLST_FULL_FASTA_PATH
    blast_outfile = blast_runner.blast_against_query(cgmlst_fasta_path)
    logging.info('Reading BLAST output file "{}"'.format(blast_outfile))
    blast_reader = BlastReader(blast_outfile)
    if blast_reader.df is None:
        logging.error('No cgMLST330 alleles found!')
        return ({'distance': 1.0,
                 'genome_match': None,
                 'serovar': None,
                 'matching_alleles': 0,
                 'subspecies': None,
                 'cgmlst330_ST': None},
                {})
    logging.info('Found {} cgMLST330 allele BLAST results'.format(blast_reader.df.shape[0]))
    df_cgmlst_blastn = process_cgmlst_results(blast_reader.df)
    marker_match_results = matches_to_marker_results(df_cgmlst_blastn[df_cgmlst_blastn.is_match])
    contig_blastn_records = alleles_to_retrieve(df_cgmlst_blastn)
    retrieved_marker_alleles = get_allele_sequences(blast_runner.fasta_path,
                                                    contig_blastn_records,
                                                    full=full)
    logging.info('Type retrieved_marker_alleles %s', type(retrieved_marker_alleles))
    all_marker_results = marker_match_results.copy()
    for marker, res in retrieved_marker_alleles.items():
        all_marker_results[marker] = res
    for marker in df_cgmlst_profiles.columns:
        if marker not in all_marker_results:
            all_marker_results[marker] = {'blast_result': None, 'name': None, 'seq': None}
    cgmlst_results = {}
    for marker, res in all_marker_results.items():
        try:
            cgmlst_results[marker] = int(res['name'])
        except:
            logging.error('Missing cgmlst_results for %s', marker)
            logging.debug(res)
    logging.info('Calculating number of matching alleles to serovar predictive cgMLST330 profiles')
    df_relatives = find_closest_related_genome(cgmlst_results, df_cgmlst_profiles)
    genome_serovar_dict = genomes_to_serovar()
    df_relatives['serovar'] = [genome_serovar_dict[genome] for genome in df_relatives.index]
    logging.debug('Top 5 serovar predictive cgMLST profiles:\n{}'.format(df_relatives.head()))
    spp = None
    subspeciation_tuple = cgmlst_subspecies_call(df_relatives)
    if subspeciation_tuple is not None:
        spp, distance, spp_counter = subspeciation_tuple
        logging.info('Top subspecies by cgMLST is "{}" (min dist={}, Counter={})'.format(
            spp, distance, spp_counter))
    else:
        logging.warning('Subspeciation by cgMLST was not possible!')
    cgmlst_serovar = None
    cgmlst_matching_genome = None
    cgmlst_matching_alleles = 0
    cgmlst_distance = 1.0
    for idx, row in df_relatives.iterrows():
        cgmlst_distance = row['distance']
        cgmlst_matching_alleles = row['matching']
        cgmlst_serovar = row['serovar'] if cgmlst_distance <= 1.0 else None
        cgmlst_matching_genome = idx if cgmlst_distance <= 1.0 else None
        logging.info('Top serovar by cgMLST profile matching: "{}" with {} matching alleles, '
                     'distance={:.1%}'.format(cgmlst_serovar, cgmlst_matching_alleles,
                                              cgmlst_distance))
        break
    cgmlst_st = None
    cgmlst_markers_sorted = sorted(all_marker_results.keys())
    cgmlst_allele_names = []
    marker = None
    for marker in cgmlst_markers_sorted:
        try:
            aname = all_marker_results[marker]['name']
            if aname:
                cgmlst_allele_names.append(str(aname))
            else:
                break
        except:
            break
    if len(cgmlst_allele_names) == len(cgmlst_markers_sorted):
        cgmlst_st = allele_name('-'.join(cgmlst_allele_names))
        logging.info('cgMLST330 Sequence Type=%s', cgmlst_st)
    else:
        logging.warning('Could not compute cgMLST330 Sequence Type due to missing data '
                        '(marker %s)', marker)
    return ({'distance': cgmlst_distance,
             'genome_match': cgmlst_matching_genome,
             'serovar': cgmlst_serovar,
             'matching_alleles': cgmlst_matching_alleles,
             'subspecies': spp,
             'cgmlst330_ST': cgmlst_st},
            all_marker_results)
668,555
Extract genome name from a fasta filename: take the filename without the
directory and remove the file extension.

Example:
    With fasta file path ``/path/to/genome_1.fasta``::

        fasta_path = '/path/to/genome_1.fasta'
        genome_name = genome_name_from_fasta_path(fasta_path)
        print(genome_name)  # => "genome_1"

Args:
    fasta_path (str): fasta file path

Returns:
    str: genome name

def genome_name_from_fasta_path(fasta_path):
    filename = os.path.basename(fasta_path)
    return re.sub(r'(\.fa$)|(\.fas$)|(\.fasta$)|(\.fna$)|(\.\w{1,}$)', '', filename)
668,565
Helper function for flatten_dict: recursively flatten all nested values
within a dict.

Args:
    key (str): parent key
    x (object): object to flatten or add to out dict
    out (dict): 1D output dict
    sep (str): flattened key separator string

Returns:
    dict: flattened 1D dict

def _recur_flatten(key, x, out, sep='.'):
    if x is None or isinstance(x, (str, int, float, bool)):
        out[key] = x
        return out
    if isinstance(x, list):
        for i, v in enumerate(x):
            new_key = '{}{}{}'.format(key, sep, i)
            out = _recur_flatten(new_key, v, out, sep)
    if isinstance(x, dict):
        for k, v in x.items():
            new_key = '{}{}{}'.format(key, sep, k)
            out = _recur_flatten(new_key, v, out, sep)
    return out
668,574
Flatten a dict.

Flatten an arbitrarily nested dict as output by to_dict.

.. note:: Keys in the flattened dict may get very long.

Args:
    x (dict): Arbitrarily nested dict (maybe resembling a tree) with
        literal/scalar leaf values

Returns:
    dict: flattened 1D dict

def flatten_dict(x):
    out = {}
    for k, v in x.items():
        out = _recur_flatten(k, v, out)
    return out
668,575
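An illustrative round-trip with toy data, traced directly from the two functions above:

nested = {'a': 1, 'b': {'c': [2, 3]}}
flat = flatten_dict(nested)
# flat == {'a': 1, 'b.c.0': 2, 'b.c.1': 3}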
Read a BLASTN output file into a pandas DataFrame and sort it by BLAST
bitscore. If there are no BLASTN results, then no results can be returned.

Args:
    blast_outfile (str): `blastn` output file path

Raises:
    EmptyDataError: No data could be parsed from the `blastn` output file

def __init__(self, blast_outfile):
    self.blast_outfile = blast_outfile
    try:
        self.df = pd.read_table(self.blast_outfile, header=None)
        self.df.columns = BLAST_TABLE_COLS
        # calculate the coverage for when results need to be validated
        self.df.loc[:, 'coverage'] = self.df.length / self.df.qlen
        self.df.sort_values(by='bitscore', ascending=False, inplace=True)
        self.df.loc[:, 'is_trunc'] = BlastReader.trunc(qstart=self.df.qstart,
                                                       qend=self.df.qend,
                                                       qlen=self.df.qlen,
                                                       sstart=self.df.sstart,
                                                       send=self.df.send,
                                                       slen=self.df.slen)
        logging.debug(self.df.head())
        self.is_missing = False
    except EmptyDataError:
        logging.warning('No BLASTN results to parse from file %s', blast_outfile)
        self.is_missing = True
668,585
First DataFrame row to dict.

Args:
    df (pandas.DataFrame): A DataFrame with at least one row

Returns:
    A dict that looks like ``{'C1': 'x', 'C2': 'y', 'C3': 'z'}`` from a
    DataFrame that looks like::

        C1 C2 C3
        1  x  y  z

    Else if `df` is `None`, returns `None`.

def df_first_row_to_dict(df):
    if df is not None:
        return [dict(r) for i, r in df.head(1).iterrows()][0]
668,586
Check if a query sequence is truncated by the end of a subject sequence.

Args:
    qstart (int): Query sequence start index
    qend (int): Query sequence end index
    sstart (int): Subject sequence start index
    send (int): Subject sequence end index
    qlen (int): Query sequence length
    slen (int): Subject sequence length

Returns:
    bool: Result truncated by subject sequence end?

def is_blast_result_trunc(qstart, qend, sstart, send, qlen, slen):
    q_match_len = abs(qstart - qend) + 1
    s_max = max(sstart, send)
    s_min = min(sstart, send)
    return (q_match_len < qlen) and (s_max >= slen or s_min <= 1)
668,587
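A worked toy example: a 100 bp query whose partial match runs off the start of a 500 bp subject (all coordinate values invented):

# The query aligns over only 90 of its 100 bp, and the subject match touches
# position 1, so the hit is flagged as truncated by the subject's edge.
print(is_blast_result_trunc(qstart=11, qend=100, sstart=90, send=1,
                            qlen=100, slen=500))  # True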
Create a Mash sketch from an input fasta file.

Args:
    fasta_path (str): input fasta file path; the genome name is taken from
        the fasta filename
    outdir (str): output directory path to write the Mash sketch file to

Returns:
    str: output Mash sketch file path

def sketch_fasta(fasta_path, outdir):
    genome_name = genome_name_from_fasta_path(fasta_path)
    outpath = os.path.join(outdir, genome_name)
    args = ['mash', 'sketch', '-o', outpath, fasta_path]
    logging.info('Running Mash sketch with command: %s', ' '.join(args))
    p = Popen(args)
    p.wait()
    sketch_path = outpath + '.msh'
    assert os.path.exists(sketch_path), \
        'Mash sketch for genome {} was not created at {}'.format(genome_name, sketch_path)
    return sketch_path
668,592
Merge new Mash sketches with current Mash sketches.

Args:
    outdir (str): output directory to write the merged Mash sketch file
    sketch_paths (list of str): Mash sketch file paths for input fasta files

Returns:
    str: output path for Mash sketch file with new and old sketches

def merge_sketches(outdir, sketch_paths):
    merge_sketch_path = os.path.join(outdir, 'sistr.msh')
    args = ['mash', 'paste', merge_sketch_path]
    args.extend(sketch_paths)
    args.append(MASH_SKETCH_FILE)
    logging.info('Running Mash paste with command: %s', ' '.join(args))
    p = Popen(args)
    p.wait()
    assert os.path.exists(merge_sketch_path), \
        'Merged sketch was not created at {}'.format(merge_sketch_path)
    return merge_sketch_path
668,593
Adds a triple to the rdflib Graph.

The triple can have any subject, predicate, and object for the entity;
no particular order is required.

Args:
    subj: Entity subject
    pred: Entity predicate
    obj: Entity object

Example:
    In [1]: add_triple(
       ...:     'http://uri.interlex.org/base/ilx_0101431',
       ...:     RDF.type,
       ...:     'http://www.w3.org/2002/07/owl#Class')

def add_triple(
        self,
        subj: Union[URIRef, str],
        pred: Union[URIRef, str],
        obj: Union[URIRef, Literal, str]) -> None:
    if obj in [None, "", " "]:
        return  # Empty objects are bad practice
    _subj = self.process_subj_or_pred(subj)
    _pred = self.process_subj_or_pred(pred)
    _obj = self.process_obj(obj)
    self.g.add((_subj, _pred, _obj))
669,483
Gives the component the proper node type.

Args:
    obj: Entity object to be converted to its appropriate node type

Returns:
    URIRef or Literal type of the object provided.

Raises:
    SystemExit: If the object is a dict or list, since converting it to str
        would produce broken data. Objects need to come in one at a time.

def process_obj(self, obj: Union[URIRef, Literal, str]) -> Union[URIRef, Literal]:
    if isinstance(obj, (dict, list)):
        exit(str(obj) + ': should be str or intended to be a URIRef or Literal.')
    if isinstance(obj, (Literal, URIRef)):
        prefix = self.find_prefix(obj)
        if prefix:
            self.process_prefix(prefix)
        return obj
    if len(obj) > 8:
        if obj[:4] == 'http' and '://' in obj and ' ' not in obj:
            prefix = self.find_prefix(obj)
            if prefix:
                self.process_prefix(prefix)
            return URIRef(obj)
    if ':' in str(obj):
        presumed_prefix, info = obj.split(':', 1)
        namespace: Union[Namespace, None] = self.process_prefix(presumed_prefix)
        if namespace:
            return namespace[info]
    return Literal(obj)
669,486
Removes a triple from the rdflib Graph.

You must input the triple with each node in its URIRef or Literal form,
exactly the way it was input, or the triple will not be deleted.

Args:
    subj: Entity subject to be removed if it's the only node with this
        subject; otherwise this just deletes one description, i.e. one
        predicate/object pair, of this entity
    pred: Entity predicate to be removed
    obj: Entity object to be removed

def remove_triple(
        self,
        subj: URIRef,
        pred: URIRef,
        obj: Union[URIRef, Literal]) -> None:
    self.g.remove((subj, pred, obj))
669,488
Tags a term in InterLex to warn that the term is no longer used.

There isn't a proper way to delete a term, so we mark it instead and
extrapolate that in mysql/ttl loads.

Args:
    ilx_id: id of the term to be deprecated
    note: optional editor note explaining the deprecation

Example:
    deprecate_entity('ilx_0101431')

def deprecate_entity(
        self,
        ilx_id: str,
        note=None) -> None:
    term_id, term_version = [
        (d['id'], d['version'])
        for d in self.ilxSearches([ilx_id], crawl=True, _print=False).values()
    ][0]
    annotations = [{
        'tid': term_id,
        'annotation_tid': '306375',  # id for annotation "deprecated"
        'value': 'True',
        'term_version': term_version,
        'annotation_term_version': '1',  # term version for annotation "deprecated"
    }]
    if note:
        editor_note = {
            'tid': term_id,
            'annotation_tid': '306378',  # id for annotation "editorNote"
            'value': note,
            'term_version': term_version,
            'annotation_term_version': '1',  # term version for annotation "editorNote"
        }
        annotations.append(editor_note)
    self.addAnnotations(annotations, crawl=True, _print=False)
    print(annotations)
669,512
Returns the row in InterLex associated with the curie.

Note:
    Presumed to not have duplicate curies in InterLex.

Args:
    curie: The "prefix:fragment_id" of the existing_id pertaining to the
        ontology

Returns:
    None or dict

def curie_search(self, curie: str) -> dict:
    ilx_row = self.curie2row.get(curie)
    if not ilx_row:
        return None
    return ilx_row
669,690
Returns the rows in InterLex associated with the fragment.

Note:
    Presumed to have duplicate fragments in InterLex.

Args:
    fragment: The fragment_id of the curie pertaining to the ontology

Returns:
    None or List[dict]

def fragment_search(self, fragment: str) -> List[dict]:
    # parameter name corrected from the original misspelling "fragement"
    fragment = self.extract_fragment(fragment)
    ilx_rows = self.fragment2rows.get(fragment)
    if not ilx_rows:
        return None
    return ilx_rows
669,691
Returns the rows in InterLex associated with that label.

Note:
    Presumed to have duplicate labels in InterLex.

Args:
    label: label of the entity you want to find

Returns:
    None or List[dict]

def label_search(self, label: str) -> List[dict]:
    # label2rows is a mapping keyed by the degraded label, analogous to
    # curie2row and fragment2rows above; the original called it like a
    # function, which would raise a TypeError.
    ilx_rows = self.label2rows.get(self.local_degrade(label))
    if not ilx_rows:
        return None
    return ilx_rows
669,692
Recursively collect all tests in a test description.

Args:
    name (str): Yaml test description file name.
    descriptions (dict): Dict of test description name (key) and absolute
        file paths (value).
    parsed (list): List of description paths which have already been parsed,
        to prevent infinite recursion.

Returns:
    A list of expanded test files.

def get_tests_from_description(name, descriptions, parsed=None):
    tests = []
    if not parsed:
        parsed = []
    description = descriptions.get(name, None)
    if not description:
        raise IpaUtilsException(
            'Test description file with name: %s cannot be located.' % name)
    if description in parsed:
        return tests
    parsed.append(description)
    test_data = get_yaml_config(description)
    if 'tests' in test_data:
        tests += test_data.get('tests')
    if 'include' in test_data:
        for description_name in test_data.get('include'):
            tests += get_tests_from_description(
                description_name, descriptions, parsed)
    return tests
669,715
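For illustration, a pair of toy description files (names and contents invented) and how they expand, assuming get_yaml_config loads the YAML into a dict:

# base.yaml:
#   tests:
#     - test_image
# core.yaml:
#   tests:
#     - test_services
#   include:
#     - base
descriptions = {'base': '/abs/base.yaml', 'core': '/abs/core.yaml'}
tests = get_tests_from_description('core', descriptions)
# => ['test_services', 'test_image']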
Put the IM into All-Linking mode.

Puts the IM into All-Linking mode for 4 minutes.

Parameters:
    mode: 0 | 1 | 3 | 255
        0 - PLM is responder
        1 - PLM is controller
        3 - Device that initiated All-Linking is Controller
        255 - Delete All-Link
    group: All-Link group number (0 - 255)

def start_all_linking(self, mode, group):
    msg = StartAllLinking(mode, group)
    self.send_msg(msg)
670,163
Write to the device All-Link Database.

Parameters:
    Required:
        mem_addr (int): Memory address of the ALDB record to write
        mode: r - device is a responder of target
              c - device is a controller of target
        group: Link group
        target: Address of the other device
    Optional:
        data1: Device dependent
        data2: Device dependent
        data3: Device dependent

def write_aldb(self, mem_addr: int, mode: str, group: int, target,
               data1=0x00, data2=0x00, data3=0x00):
    if not (isinstance(mode, str) and mode.lower() in ['c', 'r']):
        _LOGGER.error('Insteon link mode: %s', mode)
        raise ValueError("Mode must be 'c' or 'r'")
    if not isinstance(group, int):
        raise ValueError("Group must be an integer")
    target_addr = Address(target)
    _LOGGER.debug('calling aldb write_record')
    self._aldb.write_record(mem_addr, mode, group, target_addr,
                            data1, data2, data3)
    self._aldb.add_loaded_callback(self._aldb_loaded_callback)
670,225
Test the on/off method of a device.

Usage:
    on_off_test address [group]

Arguments:
    address: Required - INSTEON address of the device
    group: Optional - All-Link group number. Defaults to 1

async def on_off_test(self, addr, group):
    device = None
    state = None
    if addr:
        dev_addr = Address(addr)
        device = self.plm.devices[dev_addr.id]
    if device:
        state = device.states[group]
    if state:
        if hasattr(state, 'on') and hasattr(state, 'off'):
            # Cycle the device on and off twice.
            for _ in range(2):
                _LOGGING.info('Send on request')
                _LOGGING.info('----------------------')
                device.states[group].on()
                await asyncio.sleep(2, loop=self.loop)
                _LOGGING.info('Send off request')
                _LOGGING.info('----------------------')
                device.states[group].off()
                await asyncio.sleep(2, loop=self.loop)
        else:
            # %s for the state name (the original used %d and dropped a space)
            _LOGGING.warning('Device %s with state %s is not an on/off device.',
                             device.id, state.name)
    else:
        _LOGGING.error('Could not find device %s', addr)
670,374
Connect to the PLM device.

Usage:
    connect [device [workdir]]

Arguments:
    device: PLM device (default /dev/ttyUSB0)
    workdir: Working directory to save and load device information

async def do_connect(self, args):
    params = args.split()
    device = '/dev/ttyUSB0'
    workdir = None
    try:
        device = params[0]
    except IndexError:
        if self.tools.device:
            device = self.tools.device
    try:
        workdir = params[1]
    except IndexError:
        if self.tools.workdir:
            workdir = self.tools.workdir
    if device:
        await self.tools.connect(False, device=device, workdir=workdir)
    _LOGGING.info('Connection complete.')
670,393
Test the on/off method of a device.

Usage:
    on_off_test address [group]

Arguments:
    address: Required - INSTEON address of the device
    group: Optional - All-Link group number. Defaults to 1

async def do_on_off_test(self, args):
    params = args.split()
    addr = None
    group = None
    try:
        addr = params[0]
    except IndexError:
        addr = None
    try:
        group = int(params[1])
    except ValueError:
        group = None
    except IndexError:
        group = 1
    if addr and group:
        await self.tools.on_off_test(addr, group)
    else:
        _LOGGING.error('Invalid address or group')
        self.do_help('on_off_test')
670,395
Add an All-Link record to the IM and a device.

Usage:
    add_all_link [linkcode] [group] [address]

Arguments:
    linkcode:
        0 - PLM is responder
        1 - PLM is controller
        3 - PLM is controller or responder
        Default 1
    group: All-Link group number (0 - 255). Default 0.
    address: INSTEON device to link with (not supported by all devices)

def do_add_all_link(self, args):
    linkcode = 1
    group = 0
    addr = None
    params = args.split()
    if params:
        try:
            linkcode = int(params[0])
        except IndexError:
            linkcode = 1
        except ValueError:
            linkcode = None
        try:
            group = int(params[1])
        except IndexError:
            group = 0
        except ValueError:
            group = None
        try:
            addr = params[2]
        except IndexError:
            addr = None
    if linkcode in [0, 1, 3] and 255 >= group >= 0:
        self.loop.create_task(
            self.tools.start_all_linking(linkcode, group, addr))
    else:
        _LOGGING.error('Link code %d or group number %d not valid',
                       linkcode, group)
        self.do_help('add_all_link')
670,396
Print the All-Link database for a device.

Usage:
    print_aldb address|plm|all

Arguments:
    address: INSTEON address of the device
    plm: Print the All-Link database for the PLM
    all: Print the All-Link database for all devices

This method requires that the device ALDB has been loaded. To load the
device ALDB use the command: load_aldb address|plm|all

def do_print_aldb(self, args):
    params = args.split()
    addr = None
    try:
        addr = params[0]
    except IndexError:
        _LOGGING.error('Device address required.')
        self.do_help('print_aldb')
    if addr:
        if addr.lower() == 'all':
            self.tools.print_all_aldb()
        elif addr.lower() == 'plm':
            addr = self.tools.plm.address.id
            self.tools.print_device_aldb(addr)
        else:
            self.tools.print_device_aldb(addr)
670,397
Set Hub connection parameters.

Usage:
    set_hub_connection username password host [port]

Arguments:
    username: Hub username
    password: Hub password
    host: host name or IP address
    port: IP port [default 25105]

def do_set_hub_connection(self, args):
    params = args.split()
    username = None
    password = None
    host = None
    port = None
    try:
        username = params[0]
        password = params[1]
        host = params[2]
        port = params[3]
    except IndexError:
        pass
    if username and password and host:
        if not port:
            port = 25105
        self.tools.username = username
        self.tools.password = password
        self.tools.host = host
        self.tools.port = port
    else:
        _LOGGING.error('username password host are required')
        self.do_help('set_hub_connection')
670,398
Set the log file.

Usage:
    set_log_file filename

Parameters:
    filename: log file name to write to

THIS CAN ONLY BE CALLED ONCE AND MUST BE CALLED BEFORE ANY LOGGING STARTS.

def do_set_log_file(self, args):
    params = args.split()
    try:
        filename = params[0]
        logging.basicConfig(filename=filename)
    except IndexError:
        self.do_help('set_log_file')
670,399
Set the log level.

Usage:
    set_log_level i|v

Parameters:
    log_level: i - info | v - verbose

def do_set_log_level(self, arg):
    if arg in ['i', 'v']:
        _LOGGING.info('Setting log level to %s', arg)
        if arg == 'i':
            _LOGGING.setLevel(logging.INFO)
            _INSTEONPLM_LOGGING.setLevel(logging.INFO)
        else:
            _LOGGING.setLevel(logging.DEBUG)
            _INSTEONPLM_LOGGING.setLevel(logging.DEBUG)
    else:
        _LOGGING.error('Log level value error.')
        self.do_help('set_log_level')
670,403
Set the PLM OS device. Device defaults to /dev/ttyUSB0.

Usage:
    set_device device

Arguments:
    device: Required - INSTEON PLM device

def do_set_device(self, args):
    params = args.split()
    device = None
    try:
        device = params[0]
    except IndexError:
        _LOGGING.error('Device name required.')
        self.do_help('set_device')
    if device:
        self.tools.device = device
670,404
Help command.

Usage:
    help [command]

Parameters:
    command: Optional - command name to display detailed help

def do_help(self, arg):
    cmds = arg.split()
    if cmds:
        # default to None so an unknown command is reported rather than
        # raising AttributeError
        func = getattr(self, 'do_{}'.format(cmds[0]), None)
        if func:
            _LOGGING.info(func.__doc__)
        else:
            _LOGGING.error('Command %s not found', cmds[0])
    else:
        _LOGGING.info("Available command list: ")
        for curr_cmd in dir(self.__class__):
            if curr_cmd.startswith("do_") and not curr_cmd == 'do_test':
                print(" - ", curr_cmd[3:])
        _LOGGING.info("For help with a command type `help command`")
670,406
Add an X10 device to the IM.

Usage:
    add_x10_device housecode unitcode type

Arguments:
    housecode: Device housecode (A - P)
    unitcode: Device unitcode (1 - 16)
    type: Device type; current device types are:
        - OnOff
        - Dimmable
        - Sensor

Example:
    add_x10_device M 12 OnOff

def do_add_x10_device(self, args):
    params = args.split()
    housecode = None
    unitcode = None
    dev_type = None
    try:
        housecode = params[0]
        unitcode = int(params[1])
        if unitcode not in range(1, 17):
            raise ValueError
        dev_type = params[2]
    except IndexError:
        pass
    except ValueError:
        _LOGGING.error('X10 unit code must be an integer 1 - 16')
        unitcode = None
    if housecode and unitcode and dev_type:
        device = self.tools.add_x10_device(housecode, unitcode, dev_type)
        if not device:
            _LOGGING.error('Device not added. Please check the '
                           'information you provided.')
            self.do_help('add_x10_device')
    else:
        _LOGGING.error('Device housecode, unitcode and type are required.')
        self.do_help('add_x10_device')
670,409
Writes data to the HID device on its endpoint.

Parameters:
    data: data to send on the HID endpoint
    report_id: the report ID to use

Returns:
    The number of bytes written including the report ID.

def write(self, data, report_id=0):
    if not self._is_open:
        raise HIDException("HIDDevice not open")
    write_data = bytearray([report_id]) + bytearray(data)
    cdata = ffi.new("const unsigned char[]", bytes(write_data))
    num_written = hidapi.hid_write(self._device, cdata, len(write_data))
    if num_written < 0:
        raise HIDException("Failed to write to HID device: " + str(num_written))
    return num_written
670,646
Read from the HID device on its endpoint.

Parameters:
    size: number of bytes to read
    timeout: length to wait in milliseconds

Returns:
    The HID report read from the device. The first byte in the result will
    be the report ID if used.

def read(self, size=64, timeout=None):
    if not self._is_open:
        raise HIDException("HIDDevice not open")
    data = [0] * size
    cdata = ffi.new("unsigned char[]", data)
    if timeout is None:
        bytes_read = hidapi.hid_read(self._device, cdata, len(cdata))
    else:
        bytes_read = hidapi.hid_read_timeout(self._device, cdata, len(cdata), timeout)
    if bytes_read < 0:
        raise HIDException("Failed to read from HID device: " + str(bytes_read))
    if bytes_read == 0:
        return []
    return bytearray(cdata)
670,647
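A usage sketch for the read/write pair. The constructor arguments and the open()/close() methods here are hypothetical; only write() and read() are taken from the code above:

# Hypothetical setup: open a device, send a report, read a reply.
dev = HIDDevice(vendor_id=0x1234, product_id=0x5678)  # illustrative constructor
dev.open()
dev.write([0x01, 0x02, 0x03], report_id=0)  # 3 data bytes after the report ID
reply = dev.read(size=64, timeout=500)      # wait up to 500 ms
dev.close()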
Send a Feature report to a HID device.

Feature reports are sent over the Control endpoint as a Set_Report
transfer.

Parameters:
    data: The data to send
    report_id: The report ID to use (default 0x00)

Returns:
    The actual number of bytes written.

def send_feature_report(self, data, report_id=0x00):
    if not self._is_open:
        raise HIDException("HIDDevice not open")
    report = bytearray([report_id]) + bytearray(data)
    cdata = ffi.new("const unsigned char[]", bytes(report))
    bytes_written = hidapi.hid_send_feature_report(self._device, cdata, len(report))
    if bytes_written == -1:
        raise HIDException("Failed to send feature report to HID device")
    return bytes_written
670,650
Get a feature report from a HID device.

Feature reports are sent over the Control endpoint as a Get_Report
transfer.

Parameters:
    size: The number of bytes to read.
    report_id: The report id to read.

Returns:
    The bytes read from the HID report.

def get_feature_report(self, size, report_id=0x00):
    data = [0] * (size + 1)
    cdata = ffi.new("unsigned char[]", bytes(data))
    cdata[0] = report_id
    bytes_read = hidapi.hid_get_feature_report(self._device, cdata, len(cdata))
    if bytes_read == -1:
        raise HIDException("Failed to get feature report from HID device")
    return bytearray(cdata[1:size + 1])
670,651
Recurses through lists and converts lists of strings to floats.

Args:
    x: string or list of strings

def str2fl(x):
    def helper_to_fl(s_):
        if s_ == "":
            return "null"
        elif "," in s_:
            s_ = s_.replace(",", "")
        try:
            return float(s_)
        except ValueError:
            # not a number; return the (comma-stripped) string unchanged
            return s_

    fl_lst = []
    if isinstance(x[0], str):
        # list of strings: convert each element
        for item in x:
            fl_lst.append(helper_to_fl(item))
    elif isinstance(x[0], list):
        # list of lists: recurse
        for sub in x:
            fl_lst.append(str2fl(sub))
    else:
        return False
    return fl_lst
670,695
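Two illustrative calls traced from the code above, showing number conversion, empty-string handling, and recursion into nested lists:

str2fl(['1,234.5', '', 'abc'])    # => [1234.5, 'null', 'abc']
str2fl([['1', '2'], ['3', '4']])  # => [[1.0, 2.0], [3.0, 4.0]]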
Interpret the mutation type (missense, etc.) and set appropriate flags.

Args:
    hgvs_string (str): hgvs syntax with "p." removed

def __set_mutation_type(self, hgvs_string):
    self.__set_lost_stop_status(hgvs_string)
    self.__set_lost_start_status(hgvs_string)
    self.__set_missense_status(hgvs_string)  # missense mutations
    self.__set_indel_status()  # indel mutations
    self.__set_frame_shift_status()  # check for fs
    self.__set_premature_stop_codon_status(hgvs_string)
671,814
Sets a flag for unknown effect according to HGVS syntax. The COSMIC
database also uses unconventional question marks to denote missing
information.

Args:
    hgvs_string (str): hgvs syntax with "p." removed

def __set_unkown_effect(self, hgvs_string):
    # NOTE: the method name preserves the original spelling ("unkown")
    # so existing callers keep working.
    # Standard use by HGVS of indicating unknown effect.
    unknown_effect_list = ['?', '(=)', '=']  # unknown effect symbols
    if hgvs_string in unknown_effect_list:
        self.unknown_effect = True
    elif "(" in hgvs_string:
        # parentheses in HGVS indicate expected outcomes
        self.unknown_effect = True
    else:
        self.unknown_effect = False
    # Detect if there is missing information. Commonly COSMIC will have
    # insertions with p.?_?ins? or deletions with ?del indicating missing
    # information.
    if "?" in hgvs_string:
        self.is_missing_info = True
    else:
        self.is_missing_info = False
671,821
Set a flag for no protein expected. ("p.0" or "p.0?")

Args:
    hgvs_string (str): hgvs syntax with "p." removed

def __set_no_protein(self, hgvs_string):
    no_protein_list = ['0', '0?']  # no protein symbols
    if hgvs_string in no_protein_list:
        self.is_no_protein = True
        self.is_non_silent = True
    else:
        self.is_no_protein = False
671,822
Convert HGVS syntax for an amino acid change into attributes.

Specific details of the mutation are stored in attributes like
self.initial (residue prior to mutation), self.pos (mutation position),
self.mutated (mutated residue), and self.stop_pos (position of the stop
codon, if any).

Args:
    aa_hgvs (str): amino acid string following HGVS syntax

def __parse_hgvs_syntax(self, aa_hgvs):
    self.is_valid = True  # assume initially the syntax is legitimate
    self.is_synonymous = False  # assume not synonymous until proven
    if self.unknown_effect or self.is_no_protein:
        # unknown effect from mutation. usually denoted as p.?
        self.pos = None
    elif self.is_lost_stop:
        self.initial = aa_hgvs[0]
        self.mutated = re.findall(r'([A-Z?*]+)$', aa_hgvs)[0]
        self.pos = int(re.findall(r'^\*(\d+)', aa_hgvs)[0])
        self.stop_pos = None
    elif self.is_lost_start:
        self.initial = aa_hgvs[0]
        self.mutated = aa_hgvs[-1]
        self.pos = int(aa_hgvs[1:-1])
    elif self.is_missense:
        self.initial = aa_hgvs[0]
        self.mutated = aa_hgvs[-1]
        self.pos = int(aa_hgvs[1:-1])
        self.stop_pos = None  # not a nonsense mutation
        if self.initial == self.mutated:
            self.is_synonymous = True
            self.is_non_silent = False
        elif self.mutated == '*':
            self.is_nonsense_mutation = True
    elif self.is_indel:
        if self.is_insertion:
            if not self.is_missing_info:
                self.initial = re.findall(r'([A-Z])\d+', aa_hgvs)[:2]  # first two
                self.pos = tuple(map(int, re.findall(r'[A-Z](\d+)', aa_hgvs)[:2]))  # first two
                self.mutated = re.findall(r'(?<=INS)[A-Z0-9?*]+', aa_hgvs)[0]
                self.mutated = self.mutated.strip('?')  # remove the missing info '?'
            else:
                self.initial = ''
                self.pos = tuple()
                self.mutated = ''
        elif self.is_deletion:
            if not self.is_missing_info:
                self.initial = re.findall(r'([A-Z])\d+', aa_hgvs)
                self.pos = tuple(map(int, re.findall(r'[A-Z](\d+)', aa_hgvs)))
                self.mutated = re.findall(r'(?<=DEL)[A-Z]*', aa_hgvs)[0]
            else:
                self.initial = ''
                self.pos = tuple()
                self.mutated = ''
    elif self.is_frame_shift:
        self.initial = aa_hgvs[0]
        self.mutated = ''
        try:
            self.pos = int(re.findall(r'[A-Z*](\d+)', aa_hgvs)[0])
            if self.is_premature_stop_codon:
                self.stop_pos = int(re.findall(r'\*>?(\d+)$', aa_hgvs)[0])
            else:
                self.stop_pos = None
        except IndexError:
            # unconventional usage of indicating frameshifts will cause
            # index errors. For example, in some cases 'fs' is not used.
            # In other cases, either amino acids were not included or
            # just designated as a '?'
            self.logger.debug('(Parsing-Problem) frame shift hgvs string: "%s"' % aa_hgvs)
            self.pos = None
            self.stop_pos = None
            self.is_missing_info = True
    elif self.is_nonsense_mutation:
        self.initial = aa_hgvs[0]
        self.mutated = '*'  # there is actually a stop codon
        self.stop_pos = 0  # indicates same position is stop codon
        try:
            self.pos = int(aa_hgvs[1:-1])
        except ValueError:
            # weird error of p.E217>D*
            self.is_valid = False
            self.pos = None
            self.logger.debug('(Parsing-Problem) Invalid HGVS Amino Acid '
                              'syntax: ' + aa_hgvs)
        if self.initial == self.mutated:
            # classify nonsense-to-nonsense mutations as synonymous
            self.is_synonymous = True
            self.is_non_silent = False
    else:
        self.is_valid = False  # did not match any of the possible cases
        self.logger.debug('(Parsing-Problem) Invalid HGVS Amino Acid '
                          'syntax: ' + aa_hgvs)
671,823
Simply ensure that the passed item is a tuple. If it is not, then convert
it if possible, or raise a NotImplementedError.

Args:
    item: the item that needs to become a tuple

Returns:
    the item cast as a tuple

Raises:
    NotImplementedError: if converting the given item to a tuple is not
        implemented.

def _ensure_tuple(item):
    if isinstance(item, tuple):
        return item
    elif isinstance(item, list):
        return tuple(item)
    elif isinstance(item, np.ndarray):
        return tuple(item.tolist())
    else:
        raise NotImplementedError
672,601
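A few illustrative calls covering the three supported input types:

import numpy as np

_ensure_tuple((5,))              # => (5,) (already a tuple)
_ensure_tuple([1, 2])            # => (1, 2)
_ensure_tuple(np.array([3, 4]))  # => (3, 4)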
Get the Column or SuperColumn at the given column_path.

If no value is present, NotFoundException is thrown. (This is the only
method that can throw an exception under non-failure conditions.)

Parameters:
 - key
 - column_path
 - consistency_level

def get(self, key, column_path, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get(key, column_path, consistency_level)
    return d
672,947
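These Thrift-generated stubs all follow the same pattern: bump a sequence id, register a Twisted Deferred for the reply, and fire the request. A usage sketch; the client construction and column_path value are assumed rather than shown in this code, and ConsistencyLevel is taken from the generated Thrift types:

from twisted.internet import reactor

def on_result(result):
    print('got:', result)
    reactor.stop()

def on_error(failure):
    print('error:', failure)
    reactor.stop()

# ``client`` is assumed to be a connected instance of this client class and
# ``column_path`` a ColumnPath from the same Thrift module.
d = client.get(b'row-key', column_path, ConsistencyLevel.QUORUM)
d.addCallbacks(on_result, on_error)
reactor.run()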
Get the group of columns contained by column_parent (either a ColumnFamily
name or a ColumnFamily/SuperColumn name pair) specified by the given
SlicePredicate. If no matching values are found, an empty list is returned.

Parameters:
 - key
 - column_parent
 - predicate
 - consistency_level

def get_slice(self, key, column_parent, predicate, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get_slice(key, column_parent, predicate, consistency_level)
    return d
672,949
returns the number of columns matching <code>predicate</code> for a
particular <code>key</code>, <code>ColumnFamily</code> and optionally
<code>SuperColumn</code>.

Parameters:
 - key
 - column_parent
 - predicate
 - consistency_level

def get_count(self, key, column_parent, predicate, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get_count(key, column_parent, predicate, consistency_level)
    return d
672,950
Performs a get_slice for column_parent and predicate for the given keys in
parallel.

Parameters:
 - keys
 - column_parent
 - predicate
 - consistency_level

def multiget_slice(self, keys, column_parent, predicate, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_multiget_slice(keys, column_parent, predicate, consistency_level)
    return d
672,952
Perform a get_count in parallel on the given list<binary> keys. The return
value maps keys to the count found.

Parameters:
 - keys
 - column_parent
 - predicate
 - consistency_level

def multiget_count(self, keys, column_parent, predicate, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_multiget_count(keys, column_parent, predicate, consistency_level)
    return d
672,954
returns a subset of columns for a contiguous range of keys.

Parameters:
 - column_parent
 - predicate
 - range
 - consistency_level

def get_range_slices(self, column_parent, predicate, range, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get_range_slices(column_parent, predicate, range, consistency_level)
    return d
672,955
returns a range of columns, wrapping to the next rows if necessary to
collect max_results.

Parameters:
 - column_family
 - range
 - start_column
 - consistency_level

def get_paged_slice(self, column_family, range, start_column, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get_paged_slice(column_family, range, start_column, consistency_level)
    return d
672,958
Returns the subset of columns specified in SlicePredicate for the rows
matching the IndexClause.

@deprecated use get_range_slices instead with range.row_filter specified

Parameters:
 - column_parent
 - index_clause
 - column_predicate
 - consistency_level

def get_indexed_slices(self, column_parent, index_clause, column_predicate,
                       consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_get_indexed_slices(column_parent, index_clause, column_predicate,
                                 consistency_level)
    return d
672,960
Insert a Column at the given column_parent.column_family and optional
column_parent.super_column.

Parameters:
 - key
 - column_parent
 - column
 - consistency_level

def insert(self, key, column_parent, column, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_insert(key, column_parent, column, consistency_level)
    return d
672,962
Increment or decrement a counter.

Parameters:
 - key
 - column_parent
 - column
 - consistency_level

def add(self, key, column_parent, column, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_add(key, column_parent, column, consistency_level)
    return d
672,965
Remove data from the row specified by key at the granularity specified by
column_path, and the given timestamp. Note that all the values in
column_path besides column_path.column_family are truly optional: you can
remove the entire row by just specifying the ColumnFamily, or you can
remove a SuperColumn or a single Column by specifying those levels too.

Parameters:
 - key
 - column_path
 - timestamp
 - consistency_level

def remove(self, key, column_path, timestamp, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_remove(key, column_path, timestamp, consistency_level)
    return d
672,966
Remove a counter at the specified location. Note that counters have limited
support for deletes: if you remove a counter, you must wait to issue any
following update until the delete has reached all the nodes and all of them
have been fully compacted.

Parameters:
 - key
 - path
 - consistency_level

def remove_counter(self, key, path, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_remove_counter(key, path, consistency_level)
    return d
672,968
Mutate many columns or super columns for many row keys. See also: Mutation.

mutation_map maps key to column family to a list of Mutation objects to
take place at that scope.

Parameters:
 - mutation_map
 - consistency_level

def batch_mutate(self, mutation_map, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_batch_mutate(mutation_map, consistency_level)
    return d
672,970
Atomically mutate many columns or super columns for many row keys. See
also: Mutation.

mutation_map maps key to column family to a list of Mutation objects to
take place at that scope.

Parameters:
 - mutation_map
 - consistency_level

def atomic_batch_mutate(self, mutation_map, consistency_level):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_atomic_batch_mutate(mutation_map, consistency_level)
    return d
672,971
Truncate will mark an entire column family as deleted. From the user's
perspective, a successful call to truncate will result in complete data
deletion from cfname. Internally, however, disk space will not be
immediately released; as with all deletes in Cassandra, this one only marks
the data as deleted. The operation succeeds only if all hosts in the
cluster are available, and will throw an UnavailableException if some hosts
are down.

Parameters:
 - cfname

def truncate(self, cfname):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_truncate(cfname)
    return d
672,973
get the token ring: a map of ranges to host addresses, represented as a set
of TokenRange instead of a map from range to list of endpoints, because you
can't use Thrift structs as map keys:
https://issues.apache.org/jira/browse/THRIFT-162

for the same reason, we can't return a set here, even though order is
neither important nor predictable.

Parameters:
 - keyspace

def describe_ring(self, keyspace):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_describe_ring(keyspace)
    return d
672,985
describe specified keyspace

Parameters:
 - keyspace

def describe_keyspace(self, keyspace):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_describe_keyspace(keyspace)
    return d
672,993
experimental API for hadoop/parallel query support. may change violently
and without warning.

returns list of token strings such that first subrange is
(list[0], list[1]], next is (list[1], list[2]], etc.

Parameters:
 - cfName
 - start_token
 - end_token
 - keys_per_split

def describe_splits(self, cfName, start_token, end_token, keys_per_split):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_describe_splits(cfName, start_token, end_token, keys_per_split)
    return d
672,996
adds a column family. returns the new schema id.

Parameters:
 - cf_def

def system_add_column_family(self, cf_def):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_add_column_family(cf_def)
    return d
673,001
drops a column family. returns the new schema id.

Parameters:
 - column_family

def system_drop_column_family(self, column_family):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_drop_column_family(column_family)
    return d
673,004
adds a keyspace and any column families that are part of it. returns the
new schema id.

Parameters:
 - ks_def

def system_add_keyspace(self, ks_def):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_add_keyspace(ks_def)
    return d
673,006
drops a keyspace and any column families that are part of it. returns the
new schema id.

Parameters:
 - keyspace

def system_drop_keyspace(self, keyspace):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_drop_keyspace(keyspace)
    return d
673,008
updates properties of a keyspace. returns the new schema id.

Parameters:
 - ks_def

def system_update_keyspace(self, ks_def):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_update_keyspace(ks_def)
    return d
673,010
updates properties of a column family. returns the new schema id.

Parameters:
 - cf_def

def system_update_column_family(self, cf_def):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_update_column_family(cf_def)
    return d
673,012
Executes a CQL (Cassandra Query Language) statement and returns a CqlResult
containing the results.

Parameters:
 - query
 - compression

def execute_cql_query(self, query, compression):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_execute_cql_query(query, compression)
    return d
673,014
Prepare a CQL (Cassandra Query Language) statement by compiling and
returning:
 - the type of CQL statement
 - an id token of the compiled CQL stored on the server side
 - a count of the discovered bound markers in the statement

Parameters:
 - query
 - compression

def prepare_cql_query(self, query, compression):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_prepare_cql_query(query, compression)
    return d
673,017