Dataset columns: idx (int64, 0–63k) · question (string, lengths 53–5.28k) · target (string, lengths 5–805)
31,800
def get_data_filename(filename):
    global _data_map
    if _data_map is None:
        _data_map = {}
        for root, dirs, files in os.walk(specdir):
            for fname in files:
                _data_map[fname] = os.path.join(root, fname)
    if filename not in _data_map:
        raise KeyError(filename + ' not found in ' + specdir)
    return _dat...
Map a filename to its actual path.
31,801
def StdRenorm(spectrum, band, RNval, RNunitstring, force=False):
    if not force:
        stat = band.check_overlap(spectrum)
        if stat == 'full':
            pass
        elif stat == 'partial':
            if band.check_sig(spectrum):
                spectrum.warnings['PartialRenorm'] = True
                print('Warning: Spectrum is not defined everywhere in '...
This is used by ~pysynphot.spectrum.SourceSpectrum for renormalization.
31,802
def setdirs(outfiles):
    for k in outfiles:
        fname = outfiles[k]
        dname = os.path.dirname(fname)
        if not os.path.isdir(dname):
            os.mkdir(dname)
Create the output directories if they do not already exist.
31,803
def bb_photlam_arcsec(wave, temperature):
    lam = wave * 1.0E-10
    return F * llam_SI(lam, temperature) / (HS * C / lam)
Evaluate Planck's law in photlam per square arcsec.
31,804
def Units(uname):
    if isinstance(uname, BaseUnit):
        return uname
    else:
        try:
            if issubclass(uname, BaseUnit):
                return uname()
        except TypeError:
            try:
                return factory(uname)
            except KeyError:
                if uname == str(None):
                    return None
                else:
                    raise ValueError("Unknown units %s" % uname)
Generate a unit object.
31,805
def ismatch(a, b):
    if a == b:
        return True
    else:
        try:
            if isinstance(a, b):
                return True
        except TypeError:
            try:
                if isinstance(b, a):
                    return True
            except TypeError:
                try:
                    if isinstance(a, type(b)):
                        return True
                except TypeError:
                    try:
                        if isinstance(b, type(a)):
                            return True
                    except TypeErr...
Allow smart comparisons between classes, instances, and string representations of units, and give the right answer. For internal use only.
31,806
def ToABMag(self, wave, flux, **kwargs):
    arg = H * flux * wave
    return -1.085736 * N.log(arg) + ABZERO
Convert to abmag.
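The constant 1.085736 in the magnitude conversions above is just 2.5/ln(10), so `-1.085736 * ln(x)` is the same quantity as the more familiar `-2.5 * log10(x)`. A quick numeric check of that identity (the value 250.0 is an arbitrary example input):

```python
import math

x = 250.0
pogson = -2.5 * math.log10(x)       # standard Pogson magnitude form
natural = -1.085736 * math.log(x)  # form used above; 2.5 / ln(10) ≈ 1.085736
assert abs(pogson - natural) < 1e-4
```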
31,807
def ToSTMag(self, wave, flux, **kwargs):
    arg = H * C * flux / wave
    return -1.085736 * N.log(arg) + STZERO
Convert to stmag.
31,808
def ToOBMag(self, wave, flux, area=None):
    area = area if area else refs.PRIMARY_AREA
    bin_widths = binning.calculate_bin_widths(binning.calculate_bin_edges(wave))
    arg = flux * bin_widths * area
    return -1.085736 * N.log(arg)
Convert to obmag.
31,809
def ToVegaMag(self, wave, flux, **kwargs):
    from . import spectrum
    resampled = spectrum.Vega.resample(wave)
    normalized = flux / resampled._fluxtable
    return -2.5 * N.log10(normalized)
Convert to vegamag.
31,810
def acquire(graftm_package_path):
    contents_hash = json.load(open(os.path.join(graftm_package_path, GraftMPackage._CONTENTS_FILE_NAME),))
    v = contents_hash[GraftMPackage.VERSION_KEY]
    logging.debug("Loading version %i GraftM package: %s" % (v, graftm_package_path))
    if v == 2:
        pkg = Gr...
Acquire a new GraftM package.
31,811
def check_required_keys(self, required_keys):
    h = self._contents_hash
    for key in required_keys:
        if key not in h:
            raise InsufficientGraftMPackageException("Package missing key %s" % key)
Raise InsufficientGraftMPackageException if this package does not conform to the standard of the given package.
31,812
def compile(output_package_path, refpkg_path, hmm_path, diamond_database_file, max_range, trusted_cutoff=False, search_hmm_files=None):
    if os.path.exists(output_package_path):
        raise Exception("Not writing new GraftM package to already existing file/directory with name %s" % output_package_path)...
Create a new GraftM package with the given inputs. Any files specified as parameters are copied into the final package, so they can be removed after calling this function.
31,813
def create_diamond_db(self):
    base = self.unaligned_sequence_database_path()
    cmd = "diamond makedb --in '%s' -d '%s'" % (self.unaligned_sequence_database_path(), base)
    extern.run(cmd)
    diamondb = '%s.dmnd' % base
    os.rename(diamondb, self.diamond_database_path())
    return diamondb
Create a diamond database from the unaligned sequences in this package.
31,814
def graftm_package_is_protein(graftm_package):
    found = None
    with open(graftm_package.alignment_hmm_path()) as f:
        r = f.read().split("\n")
        for line in r:
            if line == 'ALPH DNA':
                found = False
                break
            elif line == 'ALPH amino':
                found = True
                break
    if found is None:
        raise Exception("Unable to dete...
Return True if this package is an amino acid alignment package, otherwise False (i.e. it is a nucleotide package). In general it is best to use is_protein_package instead.
31,815
def is_protein_package(self):
    if not hasattr(self, '_is_protein_package'):
        self._is_protein_package = GraftMPackageVersion3.graftm_package_is_protein(self)
    return self._is_protein_package
Return True if this package is an amino acid alignment package, otherwise False (i.e. it is a nucleotide package). Cache the result for speed.
31,816
def basename(self):
    return os.path.basename(self.read_file)[:-len(self._get_extension(self.read_file))]
Return the name of the file with the .fasta, .fq.gz, etc. extension removed.
31,817
def match_alignment_and_tree_sequence_ids(self, sequence_names, tree):
    tip_names_count = {}
    for t in tree.leaf_node_iter():
        name = t.taxon.label.replace(' ', '_')
        if name in tip_names_count:
            raise Exception("Duplicate tip name found in tree: '%s'" % name)
        else:
            tip_names_count[name] = 1
    fo...
Check that the sequences specified in the alignment and the tree are the same; otherwise raise an Exception detailing the problem for the user to fix.
31,818
def remove_sequences(self, tree, sequence_names):
    tree.prune_taxa_with_labels(sequence_names)
    tree.prune_taxa_with_labels([s.replace('_', ' ') for s in sequence_names])
Remove sequences within the given sequence_names array from the tree, in place. Assumes the sequences are found in the tree and that they are all unique.
31,819
def _hmmalign(self, input_path, directions, pipeline, forward_reads_output_path, reverse_reads_output_path):
    if pipeline == PIPELINE_AA:
        reverse_direction_reads_present = False
    else:
        reverse_direction_reads_present = False in directions.values()
    with tempfile.NamedTemporaryFile(prefix='for_file', ...
Align reads to the aln_hmm. Receives unaligned sequences and aligns them.
31,820
def hmmalign_sequences(self, hmm, sequences, output_file):
    cmd = 'hmmalign --trim %s %s' % (hmm, sequences)
    output = extern.run(cmd)
    with open(output_file, 'w') as f:
        SeqIO.write(SeqIO.parse(StringIO(output), 'stockholm'), f, 'fasta')
Run hmmalign and convert its output to aligned FASTA format.
31,821
def hmmsearch(self, output_path, input_path, unpack, seq_type, threads, cutoff, orfm):
    logging.debug("Using %i HMMs to search" % (len(self.search_hmm)))
    output_table_list = []
    if len(self.search_hmm) > 1:
        for hmm in self.search_hmm:
            out = os.path.join(os.path.split(output_pa...
hmmsearch - Search raw reads for hits using the search_hmm list.
31,822
def merge_forev_aln(self, forward_aln_list, reverse_aln_list, outputs):
    orfm_regex = OrfM.regular_expression()
    def remove_orfm_end(records):
        new_dict = {}
        for key, record in records.iteritems():
            orfmregex = orfm_regex.match(key)
            if orfmregex:
                new_dict[orfmregex.groups(0)[0]] = rec...
merge_forev_aln - Merge forward and reverse alignments for a given run.
31,823
def nhmmer(self, output_path, unpack, threads, evalue):
    logging.debug("Using %i HMMs to search" % (len(self.search_hmm)))
    output_table_list = []
    if len(self.search_hmm) > 1:
        for hmm in self.search_hmm:
            out = os.path.join(os.path.split(output_path)[0], os.path.basename...
nhmmer - Search the input path using nhmmer.
31,824
def _check_euk_contamination(self, hmm_hit_tables):
    euk_hit_table = HMMreader(hmm_hit_tables.pop(-1))
    other_hit_tables = [HMMreader(x) for x in hmm_hit_tables]
    reads_unique_to_eukaryotes = []
    reads_with_better_euk_hit = []
    for hit in euk_hit_table.names():
        bits = []
        for hit_table in other_hit...
check_euk_contamination - Check the output HMM tables for reads that hit the 18S HMM with a higher bit score.
31,825
def _extract_multiple_hits(self, hits, reads_path, output_path):
    complement_information = {}
    try:
        reads = SeqIO.to_dict(SeqIO.parse(reads_path, "fasta"))
    except:
        logging.error("Multiple sequences found with the same ID. The input sequences are either ill formatted or are interleaved. If you pro...
Split out regions of a read that hit the HMM. For example, when two copies of the same gene are identified within the same contig, the regions mapping to the HMM will be split out and written to a new file as new records.
31,826
def alignment_correcter(self, alignment_file_list, output_file_name, filter_minimum=None):
    corrected_sequences = {}
    for alignment_file in alignment_file_list:
        insert_list = []
        sequence_list = list(SeqIO.parse(open(alignment_file, 'r'), 'fasta'))
        for sequence in sequence_list:
            for idx, nt in e...
Remove lower-case insertions in alignment outputs from hmmalign. Given a list of alignments and an output file name, each alignment will be corrected and written to a single file, ready to be placed together using pplacer.
31,827
def _extract_orfs(self, input_path, orfm, hit_readnames, output_path, search_method, sequence_frame_info_list=None):
    if search_method == "hmmsearch":
        orfm_cmd = orfm.command_line()
        cmd = 'fxtract -H -X -f /dev/stdin <(%s %s) > %s' % (orfm_cmd, input_path, output_path)
        process = subprocess.Popen(...
Call ORFs on a file of nucleotide sequences and extract the proteins whose names are in hit_readnames.
31,828
def aa_db_search(self, files, base, unpack, search_method, maximum_range, threads, evalue, min_orf_length, restrict_read_length, diamond_database):
    if search_method == 'hmmsearch':
        output_search_file = files.hmmsearch_output_path(base)
    elif search_method == 'diamond':
        output_search_file = files.d...
Amino acid database search pipeline: reads are searched as amino acids and hits are identified using hmmsearch or diamond searches.
31,829
def nt_db_search(self, files, base, unpack, euk_check, search_method, maximum_range, threads, evalue):
    hmmsearch_output_table = files.hmmsearch_output_path(base)
    hit_reads_fasta = files.fa_output_path(base)
    return self.search_and_extract_nucleotides_matching_nucleotide_database(unpack, euk_ch...
Nucleotide database search pipeline: reads are searched as nucleotides and hits are identified using nhmmer searches.
31,830
def align(self, input_path, output_path, directions, pipeline, filter_minimum):
    with tempfile.NamedTemporaryFile(prefix='for_conv_file', suffix='.fa') as fwd_fh:
        fwd_conv_file = fwd_fh.name
    with tempfile.NamedTemporaryFile(prefix='rev_conv_file', suffix='.fa') as rev_fh:
        rev_conv_file =...
align - Take an input path to a FASTA of unaligned reads, align them to an HMM, and return the aligned reads in the output path.
31,831
def reroot_by_tree(self, old_tree, new_tree):
    old_tree.is_rooted = True
    new_tree.is_rooted = True
    if len(old_tree.seed_node.child_nodes()) != 2:
        raise Exception("Unexpectedly found a non-binary tree. Perhaps need to use Rerooter.reroot()?")
    new_tip_names = set([tip.taxon.label for tip in ...
Reroot the new tree so that it matches the old tree's root, if possible. If more than one rerooting is possible, root at the longest internal branch that is consistent with the root of the old_tree.
31,832
def _assign_taxonomy_with_diamond(self, base_list, db_search_results, graftm_package, graftm_files):
    runner = Diamond(graftm_package.diamond_database_path(), self.args.threads, self.args.evalue)
    taxonomy_definition = Getaxnseq().read_taxtastic_taxonomy_and_seqinfo(open(graftm_package....
Run diamond to assign taxonomy.
31,833
def _concatenate_file(self, file_list, output):
    to_cat = ' '.join(file_list)
    logging.debug("Concatenating files: %s" % (to_cat))
    cmd = "cat %s > %s" % (to_cat, output)
    extern.run(cmd)
Call unix cat to concatenate a list of files.
31,834
def write(self, output_io):
    for name, tax in self.taxonomy.items():
        output_io.write("%s\t%s\n" % (name, '; '.join(tax)))
Write a taxonomy to an open output stream in GG format. Code calling this function must open and close the IO object.
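The GG (GreenGenes-style) format above is simply one tab-separated line per sequence, with lineage ranks joined by "; ". A small standalone sketch using an in-memory stream (the function name `write_gg` and the example lineage are illustrative):

```python
import io

def write_gg(taxonomy, output_io):
    """Write a {name: [lineage ranks]} dict to output_io, one line per name."""
    for name, tax in taxonomy.items():
        output_io.write("%s\t%s\n" % (name, '; '.join(tax)))

out = io.StringIO()
write_gg({'seq1': ['k__Bacteria', 'p__Firmicutes']}, out)
# out now holds: "seq1\tk__Bacteria; p__Firmicutes\n"
```

Because the caller supplies the stream, the same function works against a file handle, a socket wrapper, or a StringIO in tests.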
31,835
def _reroot(self):
    rerooter = Rerooter()
    self.tree = rerooter.reroot_by_tree(self.reference_tree, self.tree)
Run the re-rooting algorithm in the Rerooter class.
31,836
def file_basename(self, file):
    valid_extensions = {'.tree', '.tre'}
    split_file = os.path.basename(file).split('.')
    base, suffix = '.'.join(split_file[:-1]), split_file[-1]
    if '.' + suffix in valid_extensions:
        return base
    else:
        logging.error("Invalid file extension found on file: ...
Strip the path and last extension from the file variable. If the extension is valid the basename is returned; otherwise an error is raised and graftM exits.
31,837
def set_euk_hmm(self, args):
    'Set the hmm used by graftM to cross check for euks.'
    if hasattr(args, 'euk_hmm_file'):
        pass
    elif not hasattr(args, 'euk_hmm_file'):
        setattr(args, 'euk_hmm_file', os.path.join(os.path.dirname(inspect.stack()[-1][1]), '..', 'share', '18S.hmm'))...
Set the HMM used by graftM to cross-check for euks.
31,838
def get_maximum_range(self, hmm):
    length = int([x for x in open(hmm) if x.startswith("LENG")][0].split()[1])
    max_length = round(length * 1.5, 0)
    return max_length
If no maximum range has been specified, and an HMM search is being used, a maximum range can be determined from the length of the HMM.
31,839
def place(self, reverse_pipe, seqs_list, resolve_placements, files, args, slash_endings, tax_descr, clusterer):
    trusted_placements = {}
    files_to_delete = []
    alias_hash = self.alignment_merger(seqs_list, files.comb_aln_fa())
    files_to_delete += seqs_list
    files_to_delete.append(files.comb_aln...
placement - This is the placement pipeline in GraftM, in which aligned reads are placed into phylogenetic trees and the results interpreted. If reverse reads are used, this is where the comparisons are made between placements, for the summary tables to be built in the next stage.
31,840
def extract(self, reads_to_extract, database_fasta_file, output_file):
    cmd = "fxtract -XH -f /dev/stdin '%s' > %s" % (database_fasta_file, output_file)
    extern.run(cmd, stdin='\n'.join(reads_to_extract))
Extract the reads_to_extract from the database_fasta_file and put them in output_file.
31,841
def extract_forward_and_reverse_complement(self, forward_reads_to_extract, reverse_reads_to_extract, database_fasta_file, output_file):
    self.extract(forward_reads_to_extract, database_fasta_file, output_file)
    cmd_rev = "fxtract -XH -f /dev/stdin '%s'" % database_fasta_file
    output = extern.run(cmd_rev...
As per extract, except also reverse complement the sequences.
31,842
def write_tabular_otu_table(self, sample_names, read_taxonomies, combined_output_otu_table_io):
    delim = u'\t'
    combined_output_otu_table_io.write(delim.join(['#ID', delim.join(sample_names), 'ConsensusLineage']))
    combined_output_otu_table_io.write(u"\n")
    for otu_id, tax, counts in self....
A function that takes a hash of trusted placements and compiles them into an OTU-esque table.
31,843
def write_krona_plot(self, sample_names, read_taxonomies, output_krona_filename):
    tempfiles = []
    for n in sample_names:
        tempfiles.append(tempfile.NamedTemporaryFile(prefix='GraftMkronaInput', suffix=n))
    delim = u'\t'
    for _, tax, counts in self._iterate_otu_table_rows(read_taxonomies):
        fo...
Create a krona plot at the given location. Assumes the krona executable ktImportText is available on the shell PATH.
31,844
def uncluster_annotations(self, input_annotations, reverse_pipe):
    output_annotations = {}
    for placed_alignment_file_path, clusters in self.seq_library.iteritems():
        if reverse_pipe and placed_alignment_file_path.endswith("_reverse_clustered.fa"):
            continue
        placed_alignment_file = os.path.basename...
Update the annotations hash provided by pplacer to include all representatives within each cluster.
31,845
def cluster(self, input_fasta_list, reverse_pipe):
    output_fasta_list = []
    for input_fasta in input_fasta_list:
        output_path = input_fasta.replace('_hits.aln.fa', '_clustered.fa')
        cluster_dict = {}
        logging.debug('Clustering reads')
        if os.path.exists(input_fasta):
            reads = self.seqio.read_fa...
cluster - Cluster reads at the 100% identity level and write them to file. Resets the input_fasta variable as the FASTA file containing the clusters.
31,846
def create(self, input_package_path, output_package_path, **kwargs):
    force = kwargs.pop('force', ArchiveDefaultOptions.force)
    if len(kwargs) > 0:
        raise Exception("Unexpected arguments detected: %s" % kwargs)
    logging.info("Archiving GraftM package '%s' as '%s'" % (input_package_path, output_...
Create an archived GraftM package.
31,847
def extract(self, archive_path, output_package_path, **kwargs):
    force = kwargs.pop('force', ArchiveDefaultOptions.force)
    if len(kwargs) > 0:
        raise Exception("Unexpected arguments detected: %s" % kwargs)
    logging.info("Un-archiving GraftM package '%s' from '%s'" % (output_package_path, archiv...
Extract an archived GraftM package.
31,848
def _setup_output(self, path, force):
    if os.path.isdir(path) or os.path.isfile(path):
        if force:
            logging.warn("Deleting previous file/directory '%s'" % path)
            if os.path.isfile(path):
                os.remove(path)
            else:
                shutil.rmtree(path)
        else:
            raise Exception("Cowardly refusing to ove...
Clear the way for an output to be placed at path.
31,849
def import_from_nhmmer_table(hmmout_path):
    res = HMMSearchResult()
    res.fields = [SequenceSearchResult.QUERY_ID_FIELD,
                  SequenceSearchResult.HMM_NAME_FIELD,
                  SequenceSearchResult.ALIGNMENT_LENGTH_FIELD,
                  SequenceSearchResult.QUERY_FROM_FIELD,
                  SequenceSearchResult.QUERY_TO_FIELD,
                  SequenceSearchResult...
Generate a new results object from the output of an nhmmer search.
31,850
def hmmsearch(self, input_pipe, hmms, output_files):
    if len(hmms) != len(output_files):
        raise Exception("Programming error: number of supplied HMMs differs from the number of supplied output files")
    queue = []
    for i, hmm in enumerate(hmms):
        queue.append([hmm, output_files[i]])
    while ...
Run hmmsearch with all the HMMs, generating output files.
31,851
def _munch_off_batch(self, queue):
    if len(queue) == 1 or self._num_cpus == 1:
        pairs_to_run = [[queue.pop(0), self._num_cpus]]
    else:
        pairs_to_run = []
        while len(queue) > 0 and len(pairs_to_run) < self._num_cpus:
            pairs_to_run.append([queue.pop(0), 1])
        num_cpus_left = self...
Take a batch off the queue and return pairs_to_run. The queue given as a parameter is modified in place.
31,852
def _hmm_command(self, input_pipe, pairs_to_run):
    element = pairs_to_run.pop()
    hmmsearch_cmd = self._individual_hmm_command(element[0][0], element[0][1], element[1])
    while len(pairs_to_run) > 0:
        element = pairs_to_run.pop()
        hmmsearch_cmd = "tee >(%s) | %s" % (self._individua...
INTERNAL method for getting the command line for running a batch of HMMs.
31,853
def _parse_contents(self, contents_file_path):
    logging.debug("Parsing %s" % (contents_file_path))
    contents_dict = json.load(open(contents_file_path))
    return contents_dict
Parse the contents.json file and return the dictionary.
31,854
def _check_reads_hit(self, alignment_io, min_aligned_fraction):
    to_return = []
    alignment_length = None
    for s in SeqIO.parse(alignment_io, "fasta"):
        if not alignment_length:
            alignment_length = len(s.seq)
            min_length = int(min_aligned_fraction * alignment_length)
            logging.debug("Determined min nu...
Given an alignment, return a list of sequence names whose aligned fraction is less than min_aligned_fraction.
31,855
def _align_sequences(self, input_sequences_path, output_alignment_path, threads):
    logging.debug("Aligning sequences using mafft")
    cmd = "mafft --anysymbol --thread %s --auto /dev/stdin > %s" % (threads, output_alignment_path)
    inputs = []
    with open(input_sequences_path) as f:
        for name, seq, _ in Se...
Align sequences into alignment_file.
31,856
def _get_hmm_from_alignment(self, alignment, hmm_filename, output_alignment_filename):
    logging.info("Building HMM from alignment")
    with tempfile.NamedTemporaryFile(suffix='.fasta', prefix='graftm') as tempaln:
        cmd = "hmmbuild -O /dev/stdout -o /dev/stderr '%s' '%s'" % (hmm_filename, alignment)...
Return an HMM file and an alignment of sequences to that HMM.
31,857
def _align_sequences_to_hmm(self, hmm_file, sequences_file, output_alignment_file):
    ss = SequenceSearcher(hmm_file)
    with tempfile.NamedTemporaryFile(prefix='graftm', suffix='.aln.fasta') as tempalign:
        ss.hmmalign_sequences(hmm_file, sequences_file, tempalign.name)
        ss.alignment_correcter...
Align sequences to an HMM, and write an alignment of these proteins after cleanup so that they can be used for tree-making.
31,858
def _define_range(self, sequences):
    sequence_count = 0
    total_sequence = 0
    for record in SeqIO.parse(open(sequences), 'fasta'):
        total_sequence += 1
        sequence_count += len(record.seq)
    max_range = (sequence_count / total_sequence) * 1.5
    return max_range
define_range - Define the maximum range within which two hits in a db search can be linked. This is defined as 1.5X the average length of all reads in the database.
31,859
def _generate_tree_log_file(self, tree, alignment, output_tree_file_path, output_log_file_path, residue_type, fasttree):
    if residue_type == Create._NUCLEOTIDE_PACKAGE_TYPE:
        cmd = "%s -quiet -gtr -nt -nome -mllen -intree '%s' -log %s -out %s %s" % (fasttree, tree, output_log_file_path, output_tree_file_...
Generate the FastTree log file, given a tree and the alignment that made that tree.
31,860
def _remove_sequences_from_alignment(self, sequence_names, input_alignment_file, output_alignment_file):
    nameset = set(sequence_names)
    num_written = 0
    with open(output_alignment_file, 'w') as f:
        for s in SeqIO.parse(open(input_alignment_file), "fasta"):
            if s.name not in nameset:
                SeqIO.writ...
Remove sequences from the alignment file that have names in sequence_names.
31,861
def _create_dmnd_database(self, unaligned_sequences_path, daa_output):
    logging.debug("Building diamond database")
    cmd = "diamond makedb --in '%s' -d '%s'" % (unaligned_sequences_path, daa_output)
    extern.run(cmd)
Build a diamond database using diamond makedb.
31,862
def _check_for_duplicate_sequence_names(self, fasta_file_path):
    found_sequence_names = set()
    for record in SeqIO.parse(fasta_file_path, 'fasta'):
        name = record.name
        if name in found_sequence_names:
            return name
        found_sequence_names.add(name)
    return False
Test whether the given FASTA file contains sequences with duplicate sequence names; return the first duplicate found, or False if there are none.
31,863
def read_taxtastic_taxonomy_and_seqinfo(self, taxonomy_io, seqinfo_io):
    lineages = []
    taxon_to_lineage_index = {}
    expected_number_of_fields = None
    for line in taxonomy_io:
        splits = line.strip().split(',')
        if expected_number_of_fields is None:
            expected_number_of_fields = len(splits)
            lineages = [{...
Read the taxonomy and seqinfo files into a dictionary of sequence_name => taxonomy, where the taxonomy is an array of lineages given to that sequence. Possibly this method is unable to handle the full definition of these files? It doesn't return what each of the ranks are, for starters. Doesn't deal with duplicate tax...
31,864
def each_sequence(self, fp):
    for name, seq, _ in self.each(fp):
        yield Sequence(name, seq)
Like each, except iterate over Sequence objects.
31,865
def _fields_list_to_dict(fields):
    for key in fields:
        assert isinstance(key, (str, unicode))
    return dict([[key, 1] for key in fields])
Take a list of field names and return a matching dictionary.
31,866
def _socket_connect(self):
    self.usage_count = 0
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
        s.connect((self._host, self._port))
        self.__stream = self.__backend.register_stream(s, **self.__kwargs)
        self.__stream.set_close_callback(self._socket_close)
        se...
Create a socket connection and register a stream with the async backend.
31,867
def _socket_close(self):
    callback = self.__callback
    self.__callback = None
    try:
        if callback:
            callback(None, InterfaceError('connection closed'))
    finally:
        self.__job_queue = []
        self.__alive = False
        self.__pool.cache(self)
Clean up after the socket is closed by the other end.
31,868
def _close(self):
    callback = self.__callback
    self.__callback = None
    try:
        if callback:
            callback(None, InterfaceError('connection closed'))
    finally:
        self.__job_queue = []
        self.__alive = False
        self.__stream.close()
Close the socket and clean up.
31,869
def send_message(self, message, callback):
    if self.__callback is not None:
        raise ProgrammingError('connection already in use')
    if callback:
        err_callback = functools.partial(callback, None)
    else:
        err_callback = None
    for job in self.__job_queue:
        if isinstance(job, asyncjobs.AsyncJob):
            job....
Send a message over the wire; callback=None indicates a safe=False call where we write and forget about it.
31,870
def _next_job(self):
    if self.__job_queue:
        job = self.__job_queue.pop()
        job.process()
Execute the next job from the top of the queue.
31,871
def get_connection_pool(self, pool_id, *args, **kwargs):
    assert isinstance(pool_id, (str, unicode))
    if not hasattr(self, '_pools'):
        self._pools = {}
    if pool_id not in self._pools:
        self._pools[pool_id] = ConnectionPool(*args, **kwargs)
    return self._pools[pool_id]
Get a connection pool, transparently creating it if it doesn't already exist.
31,872
def close_idle_connections(self, pool_id=None):
    if not hasattr(self, '_pools'):
        return
    if pool_id:
        if pool_id not in self._pools:
            raise ProgrammingError("pool %r does not exist" % pool_id)
        else:
            pool = self._pools[pool_id]
            pool.close()
    else:
        for pool_id, pool in self._pools.items()...
Close idle connections to mongo.
31,873
def connection(self):
    self._condition.acquire()
    try:
        if (self._maxconnections and self._connections >= self._maxconnections):
            raise TooManyConnections("%d connections are already equal to the max: %d" % (self._connections, self._maxconnections))
        try:
            con = self._idle_cache.pop(0)
        e...
Get a cached connection from the pool.
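The pool pattern above (reuse an idle connection if one is cached, otherwise create one up to a hard cap) can be sketched in a toy form. This simplified variant uses a plain lock and raises immediately at the cap rather than blocking on a condition variable as the original does; all names here (`TinyPool`, `factory`) are illustrative:

```python
import threading

class TooManyConnections(Exception):
    pass

class TinyPool:
    """Toy connection pool: reuse idle connections, cap the total count."""
    def __init__(self, factory, maxconnections=2):
        self._factory = factory          # callable that creates a new connection
        self._max = maxconnections
        self._count = 0
        self._idle = []
        self._lock = threading.Lock()

    def connection(self):
        with self._lock:
            if self._idle:
                return self._idle.pop(0)  # reuse a cached connection
            if self._count >= self._max:
                raise TooManyConnections(
                    "%d connections are already equal to the max: %d"
                    % (self._count, self._max))
            self._count += 1
            return self._factory()        # create a fresh connection

    def cache(self, con):
        with self._lock:
            self._idle.append(con)        # return the connection to the idle cache
```

A real pool would also block waiting for a connection to be returned instead of raising, which is what the original's `self._condition` is for.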
31,874
def __query_options(self):
    options = 0
    if self.__tailable:
        options |= _QUERY_OPTIONS["tailable_cursor"]
    if self.__slave_okay or self.__pool._slave_okay:
        options |= _QUERY_OPTIONS["slave_okay"]
    if not self.__timeout:
        options |= _QUERY_OPTIONS["no_timeout"]
    return options
Get the query options bitmask to use for this query.
31,875
def update_context(app, pagename, templatename, context, doctree):
    if doctree is None:
        return
    visitor = _FindTabsDirectiveVisitor(doctree)
    doctree.walk(visitor)
    if not visitor.found_tabs_directive:
        paths = [posixpath.join('_static', 'sphinx_tabs/' + f) for f in FILES]
        if 'css_files' in conte...
Remove sphinx-tabs CSS and JS asset files if they are not used on a page.
31,876
def copy_assets(app, exception):
    if 'getLogger' in dir(logging):
        log = logging.getLogger(__name__).info
    else:
        log = app.info
    builders = get_compatible_builders(app)
    if exception:
        return
    if app.builder.name not in builders:
        if not app.config['sphinx_tabs_nowarn']:
            app.warn('Not copy...
Copy asset files to the output.
31,877
def setup(app):
    app.add_config_value('sphinx_tabs_nowarn', False, '')
    app.add_config_value('sphinx_tabs_valid_builders', [], '')
    app.add_directive('tabs', TabsDirective)
    app.add_directive('tab', TabDirective)
    app.add_directive('group-tab', GroupTabDirective)
    app.add_directive('c...
Set up the plugin.
31,878
def run(self):
    self.assert_has_content()
    env = self.state.document.settings.env
    node = nodes.container()
    node['classes'] = ['sphinx-tabs']
    if 'next_tabs_id' not in env.temp_data:
        env.temp_data['next_tabs_id'] = 0
    if 'tabs_stack' not in env.temp_data:
        env.temp_data['tabs_stack']...
Parse a tabs directive.
31,879
def connection(self, collectionname, dbname=None):
    if not collectionname or ".." in collectionname:
        raise DataError("collection names cannot be empty")
    if "$" in collectionname and not (collectionname.startswith("oplog.$main") or collectionname.startswith("$cmd")):
        raise DataError("collection...
Get a cursor to a collection by name.
31,880
def collection_names(self, callback):
    callback = partial(self._collection_names_result, callback)
    self["system.namespaces"].find(_must_use_master=True, callback=callback)
Get a list of all the collection names in the selected database.
31,881
def _collection_names_result(self, callback, results, error=None):
    names = [r['name'] for r in results if r['name'].count('.') == 1]
    assert error == None, repr(error)
    strip = len(self._pool._dbname) + 1
    callback([name[strip:] for name in names])
Callback for the collection-names query; filters the returned namespace names and strips the database prefix.
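The filtering logic above keeps only `db.collection` namespaces (exactly one dot, which drops entries like `db.system.indexes`) and then strips the `db.` prefix. In isolation (function name illustrative):

```python
def collection_names(namespaces, dbname):
    """Keep 'db.collection' entries (one dot) and strip the 'db.' prefix."""
    names = [n for n in namespaces if n.count('.') == 1]
    strip = len(dbname) + 1  # +1 for the dot separator
    return [n[strip:] for n in names]

# collection_names(['mydb.users', 'mydb.system.indexes', 'mydb.posts'], 'mydb')
# -> ['users', 'posts']
```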
31,882
def parent_images(self, parents):
    parents = list(parents)
    change_instrs = []
    for instr in self.structure:
        if instr['instruction'] != 'FROM':
            continue
        old_image, stage = image_from(instr['value'])
        if not old_image:
            continue
        if not parents:
            raise RuntimeError("not enough parents to match build ...
Setter for images in FROM instructions. Images are updated per build stage with the given parents, in the order they appear. Raises RuntimeError if a different number of parents is given than there are stages, as that is likely to be a mistake.
31,883
def baseimage(self, new_image):
    images = self.parent_images or [None]
    images[-1] = new_image
    self.parent_images = images
Change the image of the final stage's FROM instruction.
31,884
def cmd(self, value):
    cmd = None
    for insndesc in self.structure:
        if insndesc['instruction'] == 'FROM':
            cmd = None
        elif insndesc['instruction'] == 'CMD':
            cmd = insndesc
    new_cmd = 'CMD ' + value
    if cmd:
        self.add_lines_at(cmd, new_cmd, replace=True)
    else:
        self.add_lines(new_cmd)
Setter for the final CMD instruction in the final build stage.
31,885
def _instruction_getter(self, name, env_replace):
    if name != 'LABEL' and name != 'ENV':
        raise ValueError("Unsupported instruction '%s'" % name)
    instructions = {}
    envs = {}
    for instruction_desc in self.structure:
        this_instruction = instruction_desc['instruction']
        if this_instruction == 'FROM':
            instruc...
Get LABEL or ENV instructions with environment replacement.
31,886
def b2u(string):
    if isinstance(string, bytes) or (PY2 and isinstance(string, str)):
        return string.decode('utf-8')
    return string
bytes to unicode
31,887
def u2b(string):
    if (PY2 and isinstance(string, unicode)) or \
            ((not PY2) and isinstance(string, str)):
        return string.encode('utf-8')
    return string
unicode to bytes
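On Python 3 only, the PY2 branches disappear and the pair reduces to a minimal sketch like the following (these standalone names mirror the originals but are not the library's code):

```python
def b2u(s):
    # bytes -> str; str input passes through unchanged (Python 3 sketch)
    return s.decode('utf-8') if isinstance(s, bytes) else s

def u2b(s):
    # str -> bytes; bytes input passes through unchanged (Python 3 sketch)
    return s.encode('utf-8') if isinstance(s, str) else s

u2b(b2u(b'caf\xc3\xa9'))  # round-trips back to b'caf\xc3\xa9'
```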
31,888
def _update_quoting_state(self, ch):
    is_escaped = self.escaped
    self.escaped = (not self.escaped and ch == '\\'
                    and self.quotes != self.SQUOTE)
    if self.escaped:
        return ''
    if is_escaped:
        if self.quotes == self.DQUOTE:
            if ch == '"':
                return ch
            return "{0}{1}".format('\\', ch)
        return ch
    if s...
Update self . quotes and self . escaped
31,889
def split(self, maxsplit=None, dequote=True):
    class Word(object):
        def __init__(self):
            self.value = None

        @property
        def valid(self):
            return self.value is not None

        def append(self, s):
            if self.value is None:
                self.value = s
            else:
                self.value += s

    num_splits = 0
    word = Word()
    while T...
Generator for the words of the string
31,890
def get_line_value(self, context_type):
    if context_type.upper() == "ENV":
        return self.line_envs
    elif context_type.upper() == "LABEL":
        return self.line_labels
Get the values defined on this line .
31,891
def get_values(self, context_type):
    if context_type.upper() == "ENV":
        return self.envs
    elif context_type.upper() == "LABEL":
        return self.labels
Get all values valid at this line, accumulated from earlier lines.
31,892
def uniqueStates(states, rates):
    order = np.lexsort(states.T)
    states = states[order]
    diff = np.ones(len(states), 'bool')
    diff[1:] = (states[1:] != states[:-1]).any(-1)
    sums = np.bincount(diff.cumsum() - 1, rates[order])
    return states[diff], sums
Returns unique states and sums up the corresponding rates. states should be a 2d numpy array with one state per row, and rates a 1d numpy array with length equal to the number of rows in states. This may be helpful in the transition function for summing up the rates of different transitions that lead to the same sta...
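The lexsort/bincount trick above can be run standalone: sort rows so duplicates become adjacent, flag the first occurrence of each distinct row, and use the cumulative sum of those flags as group labels for a weighted bincount. A minimal demonstration (function and data are illustrative, not from the library):

```python
import numpy as np

def unique_states(states, rates):
    # Sort rows lexicographically so duplicate states become adjacent.
    order = np.lexsort(states.T)
    states = states[order]
    # Mark the first occurrence of each distinct row.
    diff = np.ones(len(states), dtype=bool)
    diff[1:] = (states[1:] != states[:-1]).any(-1)
    # cumsum over the flags yields a group label per row; a weighted
    # bincount then sums the rates within each group.
    sums = np.bincount(diff.cumsum() - 1, weights=rates[order])
    return states[diff], sums

states = np.array([[0, 1], [1, 0], [0, 1]])
rates = np.array([2.0, 3.0, 4.0])
uniq, summed = unique_states(states, rates)
# uniq == [[1, 0], [0, 1]] (sorted order), summed == [3.0, 6.0]
```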
31,893
def checkInitialState(self, initialState):
    assert initialState is not None, "Initial state has not been specified."
    assert isinstance(initialState, (int, list, tuple, np.ndarray, set)), \
        "initialState %r is not an int, tuple, list, set or numpy array" % initialState
    if isinstance(initialState, list...
Check whether the initial state is of the correct type. The state should be an int, list, tuple or np.array, and all its elements must be integers. Returns an int if the state is an integer, otherwise a tuple.
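A standalone sketch of this normalization (the function name and error messages are mine, and the set-of-states case from the original is omitted): validate the type, then collapse everything to either a plain int or a tuple of ints.

```python
import numpy as np

def check_initial_state(initial_state):
    # Accept an int, or a sequence/array of integers; normalize to
    # a plain int or a tuple of ints so states are hashable.
    assert initial_state is not None, "Initial state has not been specified."
    if isinstance(initial_state, (int, np.integer)):
        return int(initial_state)
    if isinstance(initial_state, (list, tuple, np.ndarray)):
        assert all(int(x) == x for x in initial_state), \
            "all elements of the initial state must be integers"
        return tuple(int(x) for x in initial_state)
    raise TypeError("unsupported initial state type: %r" % type(initial_state))

check_initial_state(3)          # 3
check_initial_state([1, 2, 0])  # (1, 2, 0)
```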
31,894
def indirectInitialMatrix(self, initialState):
    mapping = {}
    rates = OrderedDict()
    convertedState = self.checkInitialState(initialState)
    if isinstance(convertedState, set):
        frontier = set(convertedState)
        for idx, state in enumerate(convertedState):
            mapping[state] = idx
            if idx == 0:
                usesNump...
Given some initial state, this iteratively determines new states. We repeatedly call the transition function on unvisited states in the frontier set. Each newly visited state is assigned an index in the mapping dictionary, and its transition rates are stored in a separate rates dictionary.
31,895
def getStateCode(self, state):
    return np.dot(state - self.minvalues, self.statecode)
Calculates the state code for a specific state or set of states. We shift the states so that they are nonnegative and take an inner product. The resulting number is unique because we use a positional numeral system with a large enough base.
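The inner product works like reading a number in a mixed-radix positional system. A small worked example, with hypothetical ranges and weights (in the library, minvalues and statecode would be precomputed attributes):

```python
import numpy as np

# Hypothetical component ranges: first component in [0, 4],
# second in [0, 9]; sizes are therefore [5, 10].
minvalues = np.array([0, 0])
sizes = np.array([5, 10])
# Positional weights: the last component counts units, the one
# before it counts blocks of 10, so every in-range state maps to
# a distinct integer.
statecode = np.array([sizes[1], 1])  # [10, 1]

def get_state_code(state):
    # Shift components to be nonnegative, then take the inner product
    # with the weights; works on a single state or a batch of rows.
    return np.dot(state - minvalues, statecode)

get_state_code(np.array([2, 7]))            # 2*10 + 7 = 27
get_state_code(np.array([[2, 7], [0, 3]]))  # array([27, 3])
```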
31,896
def getStateIndex(self, state):
    statecodes = self.getStateCode(state)
    return np.searchsorted(self.codes, statecodes).astype(int)
Returns the index of a state by calculating the state code and searching for this code in a sorted list. Can be called on multiple states at once.
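The lookup is a vectorized binary search via np.searchsorted. A minimal illustration with an assumed, already-sorted array of known state codes:

```python
import numpy as np

codes = np.array([3, 27, 41])  # assumed sorted array of known state codes

def get_state_index(statecodes):
    # Binary search: the position of each code in the sorted array,
    # for a scalar or an array of codes at once.
    return np.searchsorted(codes, statecodes).astype(int)

get_state_index(27)                 # 1
get_state_index(np.array([41, 3]))  # array([2, 0])
```

Note that searchsorted only returns the insertion point; it assumes each queried code is actually present in the array.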
31,897
def transitionStates(self, state):
    newstates, rates = self.transition(state)
    newindices = self.getStateIndex(newstates)
    return newindices, rates
Return the indices of new states and their rates .
31,898
def convertToRateMatrix(self, Q):
    rowSums = Q.sum(axis=1).getA1()
    idxRange = np.arange(Q.shape[0])
    Qdiag = coo_matrix((rowSums, (idxRange, idxRange)), shape=Q.shape).tocsr()
    return Q - Qdiag
Converts the initial matrix to a rate matrix . We make all rows in Q sum to zero by subtracting the row sums from the diagonal .
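A self-contained sketch of the same construction, assuming SciPy is available: build a diagonal matrix holding the row sums, and subtract it so every row of the result sums to zero (the defining property of a rate/generator matrix).

```python
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix

def convert_to_rate_matrix(Q):
    # Sparse .sum(axis=1) returns an np.matrix; getA1() flattens it.
    row_sums = Q.sum(axis=1).getA1()
    idx = np.arange(Q.shape[0])
    # Diagonal matrix of row sums, subtracted to zero out each row sum.
    Q_diag = coo_matrix((row_sums, (idx, idx)), shape=Q.shape).tocsr()
    return Q - Q_diag

Q = csr_matrix(np.array([[0.0, 2.0], [1.0, 0.0]]))
convert_to_rate_matrix(Q).toarray()  # [[-2., 2.], [1., -1.]]
```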
31,899
def getTransitionMatrix(self, probabilities=True):
    if self.P is not None:
        if isspmatrix(self.P):
            if not isspmatrix_csr(self.P):
                self.P = self.P.tocsr()
        else:
            assert isinstance(self.P, np.ndarray) and self.P.ndim == 2 and \
                self.P.shape[0] == self.P.shape[1], 'P ne...
If self.P has been given already, we reuse it and convert it to a sparse csr matrix if needed. Otherwise we generate it using the direct or indirect method. Since most solution methods use a probability matrix, this is the default setting; by setting probabilities=False we can also return a rate matrix.
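One standard way to obtain a probability matrix from a rate matrix (not necessarily the conversion this library uses) is uniformization: choose a rate λ at least as large as the largest exit rate and set P = I + Q/λ, which makes every row a probability distribution. A dense sketch:

```python
import numpy as np

def uniformize(Q):
    # Q is a rate matrix: zero row sums, nonpositive diagonal.
    # lambda must dominate the largest exit rate -|q_ii|.
    lam = np.max(-np.diag(Q))
    assert lam > 0, "Q must have at least one nonzero exit rate"
    # P = I + Q/lam: off-diagonal entries become q_ij/lam >= 0 and
    # each row sums to 1, so P is a stochastic matrix.
    return np.eye(Q.shape[0]) + Q / lam

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
uniformize(Q)  # [[0., 1.], [0.5, 0.5]]
```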