INSTRUCTION
RESPONSE
Determine the path to the config file. This will return, in this order of precedence: - the value of $BUGWARRIORRC if set - $XDG_CONFIG_HOME/bugwarrior/bugwarriorc if exists - ~/.bugwarriorrc if exists - <dir>/bugwarrior/bugwarriorc if exists, for dir in $XDG_CONFIG_DIRS - $XDG_CONFIG_HOME/bugwarrior/bugwarriorc ...
def get_config_path(): """ Determine the path to the config file. This will return, in this order of precedence: - the value of $BUGWARRIORRC if set - $XDG_CONFIG_HOME/bugwarrior/bugwarriorc if exists - ~/.bugwarriorrc if exists - <dir>/bugwarrior/bugwarriorc if exists, for dir in $XDG_CONFI...
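The precedence chain above can be sketched as a small pure function (the name `resolve_config_path` and the injected `isfile` predicate are assumptions for testability; the real `get_config_path` also walks `$XDG_CONFIG_DIRS`, omitted here):

```python
import os

def resolve_config_path(environ, home, isfile):
    """Return the config path using bugwarrior's order of precedence.

    environ: mapping of environment variables
    home: user home directory
    isfile: predicate used to test file existence
    """
    # 1. explicit override always wins
    if environ.get("BUGWARRIORRC"):
        return environ["BUGWARRIORRC"]
    xdg_home = environ.get("XDG_CONFIG_HOME", os.path.join(home, ".config"))
    # 2. XDG location, if the file exists
    candidate = os.path.join(xdg_home, "bugwarrior", "bugwarriorc")
    if isfile(candidate):
        return candidate
    # 3. legacy dotfile in the home directory
    legacy = os.path.join(home, ".bugwarriorrc")
    if isfile(legacy):
        return legacy
    # 4. fall back to the XDG location even if it does not exist yet
    return candidate
```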
Expand environment variables and user home (~) in the log.file and return as relative path.
def fix_logging_path(config, main_section): """ Expand environment variables and user home (~) in the log.file and return as relative path. """ log_file = config.get(main_section, 'log.file') if log_file: log_file = os.path.expanduser(os.path.expandvars(log_file)) if os.path.isab...
Accepts both integers and empty values.
def getint(self, section, option): """ Accepts both integers and empty values. """ try: return super(BugwarriorConfigParser, self).getint(section, option) except ValueError: if self.get(section, option) == u'': return None else: ...
Default longdescs/flags case to [] since they may not be present.
def _get_bug_attr(bug, attr): """Default longdescs/flags case to [] since they may not be present.""" if attr in ("longdescs", "flags"): return getattr(bug, attr, []) return getattr(bug, attr)
Pull down tasks from forges and add them to your taskwarrior tasks.
def pull(dry_run, flavor, interactive, debug): """ Pull down tasks from forges and add them to your taskwarrior tasks. Relies on configuration in bugwarriorrc """ try: main_section = _get_section_name(flavor) config = _try_load_config(main_section, interactive) lockfile_path =...
Perform a request to the fully qualified url and return json.
def get_data(self, url): """ Perform a request to the fully qualified url and return json. """ return self.json_response(requests.get(url, **self.requests_kwargs))
Pages through an object collection from the bitbucket API. Returns an iterator that lazily goes through all the values of all the pages in the collection.
def get_collection(self, url): """ Pages through an object collection from the bitbucket API. Returns an iterator that lazily goes through all the 'values' of all the pages in the collection. """ url = self.BASE_API2 + url while url is not None: response = self.get_da...
Count the # of differences between equal length strings str1 and str2
def hamdist(str1, str2): """Count the # of differences between equal length strings str1 and str2""" diffs = 0 for ch1, ch2 in zip(str1, str2): if ch1 != ch2: diffs += 1 return diffs
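The loop above is equivalent to a one-line sum over `zip`; a sketch with an explicit equal-length guard added, since `zip` would silently truncate the longer string (the guard and the name `hamming_distance` are additions, not part of the original):

```python
def hamming_distance(str1, str2):
    """Count the number of positions at which two equal-length strings differ."""
    # zip truncates silently, so unequal lengths would undercount differences
    assert len(str1) == len(str2), "expects equal-length strings"
    return sum(ch1 != ch2 for ch1, ch2 in zip(str1, str2))

print(hamming_distance("karolin", "kathrin"))  # → 3
```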
For a given issue, find its local UUID.
def find_local_uuid(tw, keys, issue, legacy_matching=False): """ For a given issue issue, find its local UUID. Assembles a list of task IDs existing in taskwarrior matching the supplied issue (`issue`) on the combination of any set of supplied unique identifiers (`keys`) or, optionally, the task's ...
Merge array field from the remote_issue into local_task
def merge_left(field, local_task, remote_issue, hamming=False): """ Merge array field from the remote_issue into local_task * Local 'left' entries are preserved without modification * Remote 'left' are appended to task if not present in local. :param `field`: Task field to merge. :param `local_tas...
Returns a list of UDAs defined by given targets
def build_uda_config_overrides(targets): """ Returns a list of UDAs defined by given targets For all targets in `targets`, build a dictionary of configuration overrides representing the UDAs defined by the passed-in services (`targets`). Given a hypothetical situation in which you have two services, t...
Parse the big ugly sprint string stored by JIRA.
def _parse_sprint_string(sprint): """ Parse the big ugly sprint string stored by JIRA. They look like: com.atlassian.greenhopper.service.sprint.Sprint@4c9c41a5[id=2322,rapid ViewId=1173,state=ACTIVE,name=Sprint 1,startDate=2016-09-06T16:08:07.4 55Z,endDate=2016-09-23T16:08:00.000Z,compl...
Gets valid user credentials from storage.
def get_credentials(self): """Gets valid user credentials from storage. If nothing has been stored, or if the stored credentials are invalid, the OAuth2 flow is completed to obtain the new credentials. Returns: Credentials, the obtained credential. """ with ...
Efficient way to compute highly repetitive scoring, i.e. when sequences are involved multiple times.
def multi_rouge_n(sequences, scores_ids, n=2): """ Efficient way to compute highly repetitive scoring i.e. sequences are involved multiple time Args: sequences(list[str]): list of sequences (either hyp or ref) scores_ids(list[tuple(int)]): list of pairs (hyp_id, ref_id) ie. ...
Computes ROUGE-N of two text collections of sentences. Source: http://research.microsoft.com/en-us/um/people/cyl/download/papers/rouge-working-note-v1.3.1.pdf
def rouge_n(evaluated_sentences, reference_sentences, n=2): """ Computes ROUGE-N of two text collections of sentences. Sourece: http://research.microsoft.com/en-us/um/people/cyl/download/ papers/rouge-working-note-v1.3.1.pdf Args: evaluated_sentences: The sentences that have been picked by th...
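A set-based sketch of the recall side of ROUGE-N (the reference implementation counts clipped n-gram occurrences rather than distinct n-grams, so this is a simplification):

```python
def ngrams(words, n):
    """Return the set of n-grams (as tuples) in a token list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def rouge_n_recall(evaluated, reference, n=2):
    """Recall-oriented ROUGE-N: overlapping n-grams / reference n-grams."""
    eval_ngrams = ngrams(evaluated.split(), n)
    ref_ngrams = ngrams(reference.split(), n)
    if not ref_ngrams:
        return 0.0
    return len(eval_ngrams & ref_ngrams) / len(ref_ngrams)
```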
Returns LCS_u(r_i, C) which is the LCS score of the union longest common subsequence between reference sentence r_i and candidate summary C. For example: if r_i = w1 w2 w3 w4 w5 and C contains two sentences: c1 = w1 w2 w6 w7 w8 and c2 = w1 w3 w8 w9 w5, then the longest common subsequence of r_i and c1 is w1 w2 and the ...
def _union_lcs(evaluated_sentences, reference_sentence, prev_union=None): """ Returns LCS_u(r_i, C) which is the LCS score of the union longest common subsequence between reference sentence ri and candidate summary C. For example: if r_i= w1 w2 w3 w4 w5, and C contains two sentences: c1 = w1 w2 w6 w...
Computes ROUGE-L (summary level) of two text collections of sentences. http://research.microsoft.com/en-us/um/people/cyl/download/papers/rouge-working-note-v1.3.1.pdf
def rouge_l_summary_level(evaluated_sentences, reference_sentences): """ Computes ROUGE-L (summary level) of two text collections of sentences. http://research.microsoft.com/en-us/um/people/cyl/download/papers/ rouge-working-note-v1.3.1.pdf Calculated according to: R_lcs = SUM(1, u)[LCS<union>(...
Calculate ROUGE scores between each pair of lines (hyp_file[i], ref_file[i]). Args: * hyp_path: hypothesis file path * ref_path: references file path * avg (False): whether to get average scores or a list
def get_scores(self, avg=False, ignore_empty=False): """Calculate ROUGE scores between each pair of lines (hyp_file[i], ref_file[i]). Args: * hyp_path: hypothesis file path * ref_path: references file path * avg (False): whether to get an average scores or a list ...
calculate pvalues for all categories in the graph
def calc_pvalues(query, gene_sets, background=20000, **kwargs): """ calculate pvalues for all categories in the graph :param set query: set of identifiers for which the p value is calculated :param dict gene_sets: gmt file dict after background was set :param set background: total number of genes in yo...
Benjamini-Hochberg FDR correction, inspired by statsmodels.
def fdrcorrection(pvals, alpha=0.05): """ Benjamini-Hochberg FDR correction, inspired by statsmodels """ # Implement copy from GOATools. pvals = np.asarray(pvals) pvals_sortind = np.argsort(pvals) pvals_sorted = np.take(pvals, pvals_sortind) ecdffactor = _ecdf(pvals_sorted) reject = p...
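The Benjamini-Hochberg procedure this function implements can be written without numpy: sort the p-values, scale each by n/rank, then enforce monotonicity from the largest p-value down. A dependency-free sketch (the name `bh_fdr` is an assumption):

```python
def bh_fdr(pvals, alpha=0.05):
    """Benjamini-Hochberg: return (reject flags, adjusted p-values)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * n
    prev = 1.0
    # walk from the largest p-value down, enforcing monotone adjusted values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    reject = [q <= alpha for q in adjusted]
    return reject, adjusted
```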
Standardize the mean and variance of the data axis.
def zscore(data2d, axis=0): """Standardize the mean and variance of the data axis Parameters. :param data2d: DataFrame to normalize. :param axis: int, Which axis to normalize across. If 0, normalize across rows, if 1, normalize across columns. If None, don't change data ...
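A dependency-free sketch of standardizing each row to mean 0 and unit variance, using the population standard deviation (numpy's `ddof=0` default); the name `zscore_rows` is an assumption:

```python
import math

def zscore_rows(data2d):
    """Standardize each row of a 2D list to mean 0 and unit variance."""
    out = []
    for row in data2d:
        mean = sum(row) / len(row)
        # population standard deviation (divide by n, not n-1)
        std = math.sqrt(sum((x - mean) ** 2 for x in row) / len(row))
        out.append([(x - mean) / std for x in row])
    return out
```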
Visualize the dataframe.
def heatmap(df, z_score=None, title='', figsize=(5,5), cmap='RdBu_r', xticklabels=True, yticklabels=True, ofname=None, **kwargs): """Visualize the dataframe. :param df: DataFrame from expression table. :param z_score: z_score axis{0, 1}. If None, don't normalize data. :param title: gene se...
This is the main function for reproducing the gsea plot.
def gseaplot(rank_metric, term, hits_indices, nes, pval, fdr, RES, pheno_pos='', pheno_neg='', figsize=(6,5.5), cmap='seismic', ofname=None, **kwargs): """This is the main function for reproducing the gsea plot. :param rank_metric: pd.Series for rankings, rank_metric.values. :p...
Visualize enrichr results.
def dotplot(df, column='Adjusted P-value', title='', cutoff=0.05, top_term=10, sizes=None, norm=None, legend=True, figsize=(6, 5.5), cmap='RdBu_r', ofname=None, **kwargs): """Visualize enrichr results. :param df: GSEApy DataFrame results. :param column: which column of DataFrame t...
Visualize enrichr results.
def barplot(df, column='Adjusted P-value', title="", cutoff=0.05, top_term=10, figsize=(6.5,6), color='salmon', ofname=None, **kwargs): """Visualize enrichr results. :param df: GSEApy DataFrame results. :param column: which column of DataFrame to show. Default: Adjusted P-value :param title...
function for removing spines and ticks.
def adjust_spines(ax, spines): """function for removing spines and ticks. :param ax: axes object :param spines: a list of spines names to keep. e.g [left, right, top, bottom] if spines = []. remove all spines and ticks. """ for loc, spine in ax.spines.items(): if loc in...
The main function/pipeline for GSEApy.
def main(): """The Main function/pipeline for GSEApy.""" # Parse options... argparser = prepare_argparser() args = argparser.parse_args() subcommand = args.subcommand_name if subcommand == "replot": # reproduce plots using GSEAPY from .gsea import Replot rep = Replot(in...
Prepare argparser object. New options will be added in this function first.
def prepare_argparser(): """Prepare argparser object. New options will be added in this function first.""" description = "%(prog)s -- Gene Set Enrichment Analysis in Python" epilog = "For command line options of each command, type: %(prog)s COMMAND -h" # top-level parser argparser = ap.ArgumentPars...
output option
def add_output_option(parser): """output option""" parser.add_argument("-o", "--outdir", dest="outdir", type=str, default='GSEApy_reports', metavar='', action="store", help="The GSEApy output directory. Default: the current working directory") parser.add_argu...
output group
def add_output_group(parser, required=True): """output group""" output_group = parser.add_mutually_exclusive_group(required=required) output_group.add_argument("-o", "--ofile", dest="ofile", type=str, default='GSEApy_reports', help="Output file name. Mutually exclusive with --...
Add main function gsea argument parsers.
def add_gsea_parser(subparsers): """Add main function 'gsea' argument parsers.""" argparser_gsea = subparsers.add_parser("gsea", help="Main GSEApy Function: run GSEApy instead of GSEA.") # group for input files group_input = argparser_gsea.add_argument_group("Input files arguments") group_input.ad...
Add function prerank argument parsers.
def add_prerank_parser(subparsers): """Add function 'prerank' argument parsers.""" argparser_prerank = subparsers.add_parser("prerank", help="Run GSEApy Prerank tool on preranked gene list.") # group for input files prerank_input = argparser_prerank.add_argument_group("Input files arguments") prer...
Add function plot argument parsers.
def add_plot_parser(subparsers): """Add function 'plot' argument parsers.""" argparser_replot = subparsers.add_parser("replot", help="Reproduce GSEA desktop output figures.") group_replot = argparser_replot.add_argument_group("Input arguments") group_replot.add_argument("-i", "--indir", action="store...
Add function enrichr argument parsers.
def add_enrichr_parser(subparsers): """Add function 'enrichr' argument parsers.""" argparser_enrichr = subparsers.add_parser("enrichr", help="Using Enrichr API to perform GO analysis.") # group for required options. enrichr_opt = argparser_enrichr.add_argument_group("Input arguments") enrichr_opt....
Add function biomart argument parsers.
def add_biomart_parser(subparsers): """Add function 'biomart' argument parsers.""" argparser_biomart = subparsers.add_parser("biomart", help="Using BioMart API to convert gene ids.") # group for required options. biomart_opt = argparser_biomart.add_argument_group("Input arguments") biomart_opt.add...
This is the most important function of GSEApy. It has the same algorithm as GSEA and ssGSEA.
def enrichment_score(gene_list, correl_vector, gene_set, weighted_score_type=1, nperm=1000, rs=np.random.RandomState(), single=False, scale=False): """This is the most important function of GSEApy. It has the same algorithm with GSEA and ssGSEA. :param gene_list: The ordered gene li...
Next generation algorithm of GSEA and ssGSEA.
def enrichment_score_tensor(gene_mat, cor_mat, gene_sets, weighted_score_type, nperm=1000, rs=np.random.RandomState(), single=False, scale=False): """Next generation algorithm of GSEA and ssGSEA. :param gene_mat: the ordered gene list(vector) with or without gene indices ...
Build shuffled ranking matrix when permutation_type equals 'phenotype'.
def ranking_metric_tensor(exprs, method, permutation_num, pos, neg, classes, ascending, rs=np.random.RandomState()): """Build shuffled ranking matrix when permutation_type eq to phenotype. :param exprs: gene_expression DataFrame, gene_name indexed. :param str method: calc...
The main function to rank an expression table.
def ranking_metric(df, method, pos, neg, classes, ascending): """The main function to rank an expression table. :param df: gene_expression DataFrame. :param method: The method used to calculate a correlation or ranking. Default: 'log2_ratio_of_classes'. Others methods are...
compute enrichment scores and enrichment nulls.
def gsea_compute_tensor(data, gmt, n, weighted_score_type, permutation_type, method, pheno_pos, pheno_neg, classes, ascending, processes=1, seed=None, single=False, scale=False): """compute enrichment scores and enrichment nulls. :param data: preprocessed expression datafr...
compute enrichment scores and enrichment nulls.
def gsea_compute(data, gmt, n, weighted_score_type, permutation_type, method, pheno_pos, pheno_neg, classes, ascending, processes=1, seed=None, single=False, scale=False): """compute enrichment scores and enrichment nulls. :param data: preprocessed expression dataframe or ...
Compute nominal p-value.
def gsea_pval(es, esnull): """Compute nominal p-value. From article (PNAS): estimate nominal p-value for S from esnull by using the positive or negative portion of the distribution corresponding to the sign of the observed ES(S). """ # to speed up, using numpy function to compute pval in p...
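The sign-matched estimate described in the docstring can be sketched in plain Python: for a positive ES, count how much of the positive portion of the null is at least as large (the scalar form and the name `nominal_pval` are assumptions; the real function vectorizes over many scores with numpy):

```python
def nominal_pval(es, esnull):
    """Estimate the nominal p-value of es against a null distribution,
    using only the portion of the null matching the sign of es."""
    if es >= 0:
        same_sign = [x for x in esnull if x >= 0]
        hits = sum(x >= es for x in same_sign)
    else:
        same_sign = [x for x in esnull if x < 0]
        hits = sum(x <= es for x in same_sign)
    # if nothing in the null shares the sign, fall back to 1.0
    return hits / len(same_sign) if same_sign else 1.0
```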
Normalize the ES(S, pi) and the observed ES(S), separately rescaling the positive and negative scores by dividing by the mean of the ES(S, pi). return: NES, NESnull
def normalize(es, esnull): """normalize the ES(S,pi) and the observed ES(S), separately rescaling the positive and negative scores by dividing the mean of the ES(S,pi). return: NES, NESnull """ nEnrichmentScores =np.zeros(es.shape) nEnrichmentNulls=np.zeros(esnull.shape) esnu...
Compute nominal pvals, normalized ES, and FDR q-value.
def gsea_significance(enrichment_scores, enrichment_nulls): """Compute nominal pvals, normalized ES, and FDR q value. For a given NES(S) = NES* >= 0. The FDR is the ratio of the percentage of all (S,pi) with NES(S,pi) >= 0, whose NES(S,pi) >= NES*, divided by the percentage of observed S wi...
logging start
def log_init(outlog, log_level=logging.INFO): """logging start""" # clear old root logger handlers logging.getLogger("gseapy").handlers = [] # init a root logger logging.basicConfig(level = logging.DEBUG, format = 'LINE %(lineno)-4d: %(asctime)s [%(levelname)-8s] %(mess...
log stop
def log_stop(logger): """log stop""" handlers = logger.handlers[:] for handler in handlers: handler.close() logger.removeHandler(handler)
Retry connection, defining the max number of tries. If the backoff_factor is 0.1, then sleep() will sleep for [0.1s, 0.2s, 0.4s, ...] between retries. It will also force a retry if the status code returned is 500, 502, 503 or 504.
def retry(num=5): """retry connection. define max tries num if the backoff_factor is 0.1, then sleep() will sleep for [0.1s, 0.2s, 0.4s, ...] between retries. It will also force a retry if the status code returned is 500, 502, 503 or 504. """ s = requests.Sessi...
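The real function builds a requests.Session wired to urllib3's Retry machinery; a dependency-free decorator sketching the same exponential backoff schedule (the decorator form and the injectable `sleep` parameter are assumptions for testability, not the library's API):

```python
import time

def retry_with_backoff(num=5, backoff_factor=0.1, sleep=time.sleep):
    """Decorator: retry a flaky callable up to `num` times with
    exponential backoff (backoff_factor * 2**attempt seconds)."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(num):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == num - 1:
                        raise  # out of tries: propagate the last error
                    # 0.1s, 0.2s, 0.4s, ... mirroring the schedule above
                    sleep(backoff_factor * (2 ** attempt))
        return wrapper
    return decorator
```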
Extract class (phenotype) name from .cls file.
def gsea_cls_parser(cls): """Extract class(phenotype) name from .cls file. :param cls: the a class list instance or .cls file which is identical to GSEA input . :return: phenotype name and a list of class vector. """ if isinstance(cls, list) : classes = cls sample_name= unique(clas...
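Assuming the standard three-line .cls layout (counts, then "# name1 name2", then one label per sample), the parse can be sketched as (the name `parse_cls` is an assumption):

```python
def parse_cls(text):
    """Parse GSEA .cls content: return (phenotype names, class vector)."""
    lines = [line for line in text.splitlines() if line.strip()]
    # line 1: counts; line 2: "# name1 name2"; line 3: per-sample labels
    phenotypes = lines[1].lstrip("#").split()
    labels = lines[2].split()
    return phenotypes, labels
```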
Parse results.edb file stored under **edb** file folder.
def gsea_edb_parser(results_path, index=0): """Parse results.edb file stored under **edb** file folder. :param results_path: the .results file located inside edb folder. :param index: gene_set index of gmt database, used for iterating items. :return: enrichment_term, hit_index,nes, pval, fdr. """ ...
Parse gene_sets.gmt (gene set database) file or download from enrichr server.
def gsea_gmt_parser(gmt, min_size = 3, max_size = 1000, gene_list=None): """Parse gene_sets.gmt(gene set database) file or download from enrichr server. :param gmt: the gene_sets.gmt file of GSEA input or an enrichr library name. checkout full enrichr library name here: http://amp.pharm.mssm.ed...
Return active enrichr library names. :param str database: select one from {'Human', 'Mouse', 'Yeast', 'Fly', 'Fish', 'Worm'}
def get_library_name(database='Human'): """return enrichr active enrichr library name. :param str database: Select one from { 'Human', 'Mouse', 'Yeast', 'Fly', 'Fish', 'Worm' } """ # make a get request to get the gmt names and meta data from Enrichr # old code # response = requests.get(...
Get available marts and their names.
def get_marts(self): """Get available marts and their names.""" mart_names = pd.Series(self.names, name="Name") mart_descriptions = pd.Series(self.displayNames, name="Description") return pd.concat([mart_names, mart_descriptions], axis=1)
Get available datasets from mart you've selected.
def get_datasets(self, mart='ENSEMBL_MART_ENSEMBL'): """Get available datasets from mart you've selected""" datasets = self.datasets(mart, raw=True) return pd.read_csv(StringIO(datasets), header=None, usecols=[1, 2], names = ["Name", "Description"],sep="\t")
Get available attributes from dataset you've selected.
def get_attributes(self, dataset): """Get available attributes from dataset you've selected""" attributes = self.attributes(dataset) attr_ = [ (k, v[0]) for k, v in attributes.items()] return pd.DataFrame(attr_, columns=["Attribute","Description"])
Get available filters from dataset you've selected.
def get_filters(self, dataset): """Get available filters from dataset you've selected""" filters = self.filters(dataset) filt_ = [ (k, v[0]) for k, v in filters.items()] return pd.DataFrame(filt_, columns=["Filter", "Description"])
mapping ids using BioMart.
def query(self, dataset='hsapiens_gene_ensembl', attributes=[], filters={}, filename=None): """mapping ids using BioMart. :param dataset: str, default: 'hsapiens_gene_ensembl' :param attributes: str, list, tuple :param filters: dict, {'filter name': list(filter value)} ...
Run Gene Set Enrichment Analysis.
def gsea(data, gene_sets, cls, outdir='GSEA_', min_size=15, max_size=500, permutation_num=1000, weighted_score_type=1,permutation_type='gene_set', method='log2_ratio_of_classes', ascending=False, processes=1, figsize=(6.5,6), format='pdf', graph_num=20, no_plot=False, seed=None, verbose=False...
Run Gene Set Enrichment Analysis with single sample GSEA tool
def ssgsea(data, gene_sets, outdir="ssGSEA_", sample_norm_method='rank', min_size=15, max_size=2000, permutation_num=0, weighted_score_type=0.25, scale=True, ascending=False, processes=1, figsize=(7,6), format='pdf', graph_num=20, no_plot=False, seed=None, verbose=False): """Run Gene Set Enric...
Run Gene Set Enrichment Analysis with pre-ranked correlation defined by user.
def prerank(rnk, gene_sets, outdir='GSEA_Prerank', pheno_pos='Pos', pheno_neg='Neg', min_size=15, max_size=500, permutation_num=1000, weighted_score_type=1, ascending=False, processes=1, figsize=(6.5,6), format='pdf', graph_num=20, no_plot=False, seed=None, verbose=False): """ Ru...
The main function to reproduce GSEA desktop outputs.
def replot(indir, outdir='GSEA_Replot', weighted_score_type=1, min_size=3, max_size=1000, figsize=(6.5,6), graph_num=20, format='pdf', verbose=False): """The main function to reproduce GSEA desktop outputs. :param indir: GSEA desktop results directory. In the sub folder, you must contain edb file fo...
create temp directory.
def prepare_outdir(self): """create temp directory.""" self._outdir = self.outdir if self._outdir is None: self._tmpdir = TemporaryDirectory() self.outdir = self._tmpdir.name elif isinstance(self.outdir, str): mkdirs(self.outdir) else: ...
set cpu numbers to be used
def _set_cores(self): """set cpu numbers to be used""" cpu_num = cpu_count()-1 if self._processes > cpu_num: cores = cpu_num elif self._processes < 1: cores = 1 else: cores = self._processes # have to be int if user input is float ...
Parse ranking file. This file contains the ranking correlation vector (or expression values) and gene names or ids.
def _load_ranking(self, rnk): """Parse ranking file. This file contains ranking correlation vector( or expression values) and gene names or ids. :param rnk: the .rnk file of GSEA input or a Pandas DataFrame, Series instance. :return: a Pandas Series with gene name indexed ran...
load gene set dict
def load_gmt(self, gene_list, gmt): """load gene set dict""" if isinstance(gmt, dict): genesets_dict = gmt elif isinstance(gmt, str): genesets_dict = self.parse_gmt(gmt) else: raise Exception("Error parsing gmt parameter for gene sets") ...
gmt parser
def parse_gmt(self, gmt): """gmt parser""" if gmt.lower().endswith(".gmt"): with open(gmt) as genesets: genesets_dict = { line.strip().split("\t")[0]: line.strip().split("\t")[2:] for line in genesets.readlines()} return geneset...
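Each GMT line is tab-separated: set name, description, then member genes; the dict comprehension above keys fields [2:] on field [0]. A standalone sketch that also skips malformed lines (the name `parse_gmt_lines` is an assumption):

```python
from io import StringIO

def parse_gmt_lines(fileobj):
    """Parse GMT lines: set name, description, then member genes."""
    genesets = {}
    for line in fileobj:
        fields = line.strip().split("\t")
        if len(fields) < 3:
            continue  # skip malformed lines
        # fields[1] is the description / source and is discarded
        genesets[fields[0]] = fields[2:]
    return genesets
```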
Return active enrichr library names. Official API.
def get_libraries(self, database=''): """return active enrichr library name.Offical API """ lib_url='http://amp.pharm.mssm.edu/%sEnrichr/datasetStatistics'%database libs_json = json.loads(requests.get(lib_url).text) libs = [lib['libraryName'] for lib in libs_json['statistics']] ...
download enrichr libraries.
def _download_libraries(self, libname): """ download enrichr libraries.""" self._logger.info("Downloading and generating Enrichr library gene sets......") s = retry(5) # query string ENRICHR_URL = 'http://amp.pharm.mssm.edu/Enrichr/geneSetLibrary' query_string = '?mode=t...
only use for gsea heatmap
def _heatmat(self, df, classes, pheno_pos, pheno_neg): """only use for gsea heatmap""" width = len(classes) if len(classes) >= 6 else 5 cls_booA =list(map(lambda x: True if x == pheno_pos else False, classes)) cls_booB =list(map(lambda x: True if x == pheno_neg else False, classes)) ...
Plotting API. :param rank_metric: sorted pd.Series with ranking values. :param results: self.results. :param data: preprocessed expression table.
def _plotting(self, rank_metric, results, graph_num, outdir, format, figsize, pheno_pos='', pheno_neg=''): """ Plotting API. :param rank_metric: sorted pd.Series with rankings values. :param results: self.results :param data: preprocessed expression table ...
reformat gsea results and save to txt
def _save_results(self, zipdata, outdir, module, gmt, rank_metric, permutation_type): """reformat gsea results, and save to txt""" res = OrderedDict() for gs, gseale, ind, RES in zipdata: rdict = OrderedDict() rdict['es'] = gseale[0] rdict['nes'] = gseale[1] ...
Pre-process the data frame. New filtering methods will be implemented here.
def load_data(self, cls_vec): """pre-processed the data frame.new filtering methods will be implement here. """ # read data in if isinstance(self.data, pd.DataFrame) : exprs = self.data.copy() # handle index is gene_names if exprs.index.dtype == 'O': ...
GSEA main procedure
def run(self): """GSEA main procedure""" assert self.permutation_type in ["phenotype", "gene_set"] assert self.min_size <= self.max_size # Start Analysis self._logger.info("Parsing data files for GSEA.............................") # phenotype labels parsing phe...
GSEA prerank workflow
def run(self): """GSEA prerank workflow""" assert self.min_size <= self.max_size # parsing rankings dat2 = self._load_ranking(self.rnk) assert len(dat2) > 1 # cpu numbers self._set_cores() # Start Analysis self._logger.info("Parsing data files f...
Normalize samples; see here: http://rowley.mit.edu/caw_web/ssGSEAProjection/ssGSEAProjection.Library.R
def norm_samples(self, dat): """normalization samples see here: http://rowley.mit.edu/caw_web/ssGSEAProjection/ssGSEAProjection.Library.R """ if self.sample_norm_method == 'rank': data = dat.rank(axis=0, method='average', na_option='bottom') data = 10000*data ...
run entry
def run(self): """run entry""" self._logger.info("Parsing data files for ssGSEA...........................") # load data data = self.load_data() # normalized samples, and rank normdat = self.norm_samples(data) # filtering out gene sets and build gene sets dictiona...
Single Sample GSEA workflow with permutation procedure
def runSamplesPermu(self, df, gmt=None): """Single Sample GSEA workflow with permutation procedure""" assert self.min_size <= self.max_size mkdirs(self.outdir) self.resultsOnSamples = OrderedDict() outdir = self.outdir # iter throught each sample for name, ser in...
Single Sample GSEA workflow. multiprocessing utility on samples.
def runSamples(self, df, gmt=None): """Single Sample GSEA workflow. multiprocessing utility on samples. """ # df.index.values are gene_names # Save each sample results to odict self.resultsOnSamples = OrderedDict() outdir = self.outdir # run ssgsea for...
save es and stats
def _save(self, outdir): """save es and stats""" # save raw ES to one csv file samplesRawES = pd.DataFrame(self.resultsOnSamples) samplesRawES.index.name = 'Term|ES' # normalize enrichment scores by using the entire data set, as indicated # by Barbie et al., 2009, online ...
main replot function
def run(self): """main replot function""" assert self.min_size <= self.max_size assert self.fignum > 0 import glob from bs4 import BeautifulSoup # parsing files....... try: results_path = glob.glob(self.indir+'*/edb/results.edb')[0] rank_p...
Enrichr API.
def enrichr(gene_list, gene_sets, organism='human', description='', outdir='Enrichr', background='hsapiens_gene_ensembl', cutoff=0.05, format='pdf', figsize=(8,6), top_term=10, no_plot=False, verbose=False): """Enrichr API. :param gene_list: Flat file with list of genes, one gene id per...
create temp directory.
def prepare_outdir(self): """create temp directory.""" self._outdir = self.outdir if self._outdir is None: self._tmpdir = TemporaryDirectory() self.outdir = self._tmpdir.name elif isinstance(self.outdir, str): mkdirs(self.outdir) else: ...
parse gene_sets input file type
def parse_genesets(self): """parse gene_sets input file type""" enrichr_library = self.get_libraries() if isinstance(self.gene_sets, list): gss = self.gene_sets elif isinstance(self.gene_sets, str): gss = [ g.strip() for g in self.gene_sets.strip().split(",") ] ...
parse gene list
def parse_genelists(self): """parse gene list""" if isinstance(self.gene_list, list): genes = self.gene_list elif isinstance(self.gene_list, pd.DataFrame): # input type is bed file if self.gene_list.shape[1] >=3: genes= self.gene_list.iloc[:,:3...
send gene list to enrichr server
def send_genes(self, gene_list, url): """ send gene list to enrichr server""" payload = { 'list': (None, gene_list), 'description': (None, self.descriptions) } # response response = requests.post(url, files=payload) if not response.ok: r...
Compare the genes sent and received to get successfully recognized genes
def check_genes(self, gene_list, usr_list_id): ''' Compare the genes sent and received to get successfully recognized genes ''' response = requests.get('http://amp.pharm.mssm.edu/Enrichr/view?userListId=%s' % usr_list_id) if not response.ok: raise Exception('Error get...
Enrichr API
def get_results(self, gene_list): """Enrichr API""" ADDLIST_URL = 'http://amp.pharm.mssm.edu/%sEnrichr/addList'%self._organism job_id = self.send_genes(gene_list, ADDLIST_URL) user_list_id = job_id['userListId'] RESULTS_URL = 'http://amp.pharm.mssm.edu/%sEnrichr/export'%self._or...
get background gene
def get_background(self): """get background gene""" # input is a file if os.path.isfile(self.background): with open(self.background) as b: bg2 = b.readlines() bg = [g.strip() for g in bg2] return set(bg) # package included ...
Select Enrichr organism from below:
def get_organism(self): """Select Enrichr organism from below: Human & Mouse: H. sapiens & M. musculus Fly: D. melanogaster Yeast: S. cerevisiae Worm: C. elegans Fish: D. rerio """ organism = {'default': ['', 'hs', 'mm', 'human','mouse', ...
Use local mode: p = p-value computed using the Fisher exact test (hypergeometric test).
def enrich(self, gmt): """use local mode p = p-value computed using the Fisher exact test (Hypergeometric test) Not implemented here: combine score = log(p)·z see here: http://amp.pharm.mssm.edu/Enrichr/help#background&q=4 columns contain: ...
run enrichr for one sample gene list but multi-libraries
def run(self): """run enrichr for one sample gene list but multi-libraries""" # set organism self.get_organism() # read input file genes_list = self.parse_genelists() gss = self.parse_genesets() # if gmt self._logger.info("Connecting to Enrichr Server to ...
Create a cube primitive
def cube(script, size=1.0, center=False, color=None): """Create a cube primitive Note that this is made of 6 quads, not triangles """ """# Convert size to list if it isn't already if not isinstance(size, list): size = list(size) # If a single value was supplied use it for all 3 axes ...
Create a cylinder or cone primitive. Usage is based on OpenSCAD. # height = height of the cylinder # radius1 = radius of the cone on bottom end # radius2 = radius of the cone on top end # center = If true will center the height of the cone/cylinder around # the origin. Default is false, placing the base of the cylinder...
def cylinder(script, up='z', height=1.0, radius=None, radius1=None, radius2=None, diameter=None, diameter1=None, diameter2=None, center=False, cir_segments=32, color=None): """Create a cylinder or cone primitive. Usage is based on OpenSCAD. # height = height of the cylinder # radiu...
create an icosphere mesh
def icosphere(script, radius=1.0, diameter=None, subdivisions=3, color=None): """create an icosphere mesh radius Radius of the sphere # subdivisions = Subdivision level; Number of the recursive subdivision of the # surface. Default is 3 (a sphere approximation composed by 1280 faces). # Admitted va...
# angle = Angle of the cone subtending the cap. It must be less than 180. # subdivisions = Subdivision level; Number of the recursive subdivision of the # surface. Default is 3 (a sphere approximation composed by 1280 faces). # Admitted values are in the range 0 (an icosahedron) to 8 (a 1.3 MegaTris # approx...
def sphere_cap(script, angle=1.0, subdivisions=3, color=None): """# angle = Angle of the cone subtending the cap. It must be <180 less than 180 # subdivisions = Subdivision level; Number of the recursive subdivision of the # surface. Default is 3 (a sphere approximation composed by 1280 faces). # Admitt...
Create a torus mesh
def torus(script, major_radius=3.0, minor_radius=1.0, inner_diameter=None, outer_diameter=None, major_segments=48, minor_segments=12, color=None): """Create a torus mesh Args: major_radius (float, (optional)): radius from the origin to the center of the cross sections ...
2D square/plane/grid created on XY plane
def grid(script, size=1.0, x_segments=1, y_segments=1, center=False, color=None): """2D square/plane/grid created on XY plane x_segments # Number of segments in the X direction. y_segments # Number of segments in the Y direction. center="false" # If true square will be centered on origin; ...
Create a 2D (surface) circle or annulus. radius1 = 1 # Outer radius of the circle radius2 = 0 # Inner radius of the circle (if non-zero it creates an annulus) color = # specify a color name to apply vertex colors to the newly created mesh
def annulus(script, radius=None, radius1=None, radius2=None, diameter=None, diameter1=None, diameter2=None, cir_segments=32, color=None): """Create a 2D (surface) circle or annulus radius1=1 # Outer radius of the circle radius2=0 # Inner radius of the circle (if non-zero it creates an annulus) ...
Creates a round open tube, e.g. a cylinder with no top or bottom.
def cylinder_open_hires(script, height=1.0, radius=1, diameter=None, cir_segments=48, height_segments=1, invert_normals=False, center=False, color=None): """ Creates a round open tube, e.g. a cylinder with no top or bottom. Useful if you want to wrap it around an...
Creates a square open tube, e.g. a box with no top or bottom.
def cube_open_hires_old(script, size=1.0, x_segments=1, y_segments=1, z_segments=1, center=False, color=None): """ Creates a square open tube, e.g. a box with no top or bottom. Useful if you want to wrap it around and join the open ends together, forming a torus. """ """# Convert si...
Creates a square open tube, e.g. a box with no top or bottom.
def cube_open_hires(script, size=1.0, x_segments=1, y_segments=1, z_segments=1, center=False, color=None): """ Creates a square open tube, e.g. a box with no top or bottom. Useful if you want to wrap it around and join the open ends together, forming a torus. """ """# Convert size t...