| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q22200 | normalize_per_cell | train | def normalize_per_cell(
data,
counts_per_cell_after=None,
counts_per_cell=None,
key_n_counts=None,
copy=False,
layers=[],
use_rep=None,
min_counts=1,
) -> Optional[AnnData]:
"""Normalize total counts per cell.
.. warning::
.. deprecated:: 1.3.7
Use :func:`~scanpy.api.pp.normalize_total` instead.
The new function is equivalent to the present
function, except that
* the new function doesn't filter cells based on `min_counts`,
use :func:`~scanpy.api.pp.filter_cells` if filtering is needed.
* some arguments were renamed
* `copy` is replaced by `inplace`
Normalize each cell by total counts over all genes, so that every cell has
the same total count after normalization.
Similar functions are used, for example, by Seurat [Satija15]_, Cell Ranger
[Zheng17]_ or SPRING [Weinreb17]_.
Parameters
----------
data : :class:`~anndata.AnnData`, `np.ndarray`, `sp.sparse`
The (annotated) data matrix of shape `n_obs` × `n_vars`. Rows correspond
to cells and columns to genes.
counts_per_cell_after : `float` or `None`, optional (default: `None`)
If `None`, after normalization, each cell has a total count equal
to the median of the *counts_per_cell* before normalization.
counts_per_cell : `np.array`, optional (default: `None`)
Precomputed counts per cell.
key_n_counts : `str`, optional (default: `'n_counts'`)
Name of the field in `adata.obs` where the total counts per cell are
stored.
copy : `bool`, optional (default: `False`)
If an :class:`~anndata.AnnData` is passed, determines whether a copy
is returned.
min_counts : `int`, optional (default: 1)
Cells with counts less than `min_counts` are filtered out during
normalization.
Returns
-------
Returns or updates `adata` with normalized version of the original
`adata.X`, depending on `copy`.
Examples
--------
>>> adata = AnnData(
...     data=np.array([[1, 0], [3, 0], [5, 6]]))
>>> print(adata.X.sum(axis=1))
[ 1. 3. 11.]
>>> sc.pp.normalize_per_cell(adata)
>>> print(adata.obs)
>>> print(adata.X.sum(axis=1))
n_counts
0 1.0
1 3.0
2 11.0
[ 3. 3. 3.]
>>> sc.pp.normalize_per_cell(adata, counts_per_cell_after=1,
...                          key_n_counts='n_counts2')
>>> print(adata.obs)
>>> print(adata.X.sum(axis=1))
n_counts n_counts2
0 1.0 3.0
1 3.0 3.0
2 11.0 3.0
[ 1. 1. 1.]
"""
if key_n_counts is None: key_n_counts = 'n_counts'
if isinstance(data, AnnData):
logg.msg('normalizing | python | {
"resource": ""
} |
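The snippet body is truncated right after the AnnData branch. As an illustrative stand-in (the function name and the list-of-lists input are assumptions, not scanpy's API), the normalization logic the docstring describes can be sketched in plain Python:

```python
from statistics import median

def normalize_per_cell_sketch(matrix, counts_per_cell_after=None, min_counts=1):
    """Scale each row (cell) so its total matches a common target count.

    Cells with a total below `min_counts` are dropped first, mirroring
    the filtering the docstring describes.
    """
    kept = [row for row in matrix if sum(row) >= min_counts]
    target = counts_per_cell_after
    if target is None:
        # default target: median of the per-cell totals before normalization
        target = median(sum(row) for row in kept)
    return [[v * target / sum(row) for v in row] for row in kept]
```

With the docstring's example matrix `[[1, 0], [3, 0], [5, 6]]`, every row sums to the median total (3.0) afterwards, matching the printed output above.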
q22201 | scale | train | def scale(data, zero_center=True, max_value=None, copy=False) -> Optional[AnnData]:
"""Scale data to unit variance and zero mean.
.. note::
Variables (genes) that do not display any variation (are constant across
all observations) are retained and set to 0 during this operation. In
the future, they might be set to NaNs.
Parameters
----------
data : :class:`~anndata.AnnData`, `np.ndarray`, `sp.sparse`
The (annotated) data matrix of shape `n_obs` × `n_vars`. Rows correspond
to cells and columns to genes.
zero_center : `bool`, optional (default: `True`)
If `False`, omit zero-centering variables, which allows handling sparse
input efficiently.
max_value : `float` or `None`, optional (default: `None`)
Clip (truncate) to this value after scaling. If `None`, do not clip.
copy : `bool`, optional (default: `False`)
If an :class:`~anndata.AnnData` is passed, determines whether a copy
is returned.
Returns
-------
Depending on `copy` returns or updates `adata` with a scaled `adata.X`.
"""
if isinstance(data, AnnData):
adata = data.copy() if copy else data
# need to add the following here to make inplace logic work
if zero_center and issparse(adata.X):
logg.msg(
'... scale_data: as `zero_center=True`, sparse input is '
| python | {
"resource": ""
} |
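Since the `scale` body above is cut off, here is a minimal pure-Python sketch of the per-gene scaling it describes (the helper name and the list-of-columns input are illustrative assumptions):

```python
def scale_columns(cols, zero_center=True, max_value=None):
    """Scale each column (gene) to unit variance, optionally zero-centering
    and clipping, in the spirit of the function above."""
    out = []
    for col in cols:
        n = len(col)
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        std = var ** 0.5
        if std == 0:
            # constant genes are retained and set to 0, as the note says
            out.append([0.0] * n)
            continue
        scaled = [((v - mean) if zero_center else v) / std for v in col]
        if max_value is not None:
            scaled = [min(v, max_value) for v in scaled]
        out.append(scaled)
    return out
```

Note the explicit constant-column branch: it reproduces the behavior flagged in the docstring's note, where variables without variation are kept but set to 0.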
q22202 | subsample | train | def subsample(data, fraction=None, n_obs=None, random_state=0, copy=False) -> Optional[AnnData]:
"""Subsample to a fraction of the number of observations.
Parameters
----------
data : :class:`~anndata.AnnData`, `np.ndarray`, `sp.sparse`
The (annotated) data matrix of shape `n_obs` × `n_vars`. Rows correspond
to cells and columns to genes.
fraction : `float` in [0, 1] or `None`, optional (default: `None`)
Subsample to this `fraction` of the number of observations.
n_obs : `int` or `None`, optional (default: `None`)
Subsample to this number of observations.
random_state : `int` or `None`, optional (default: 0)
Random seed to change subsampling.
copy : `bool`, optional (default: `False`)
If an :class:`~anndata.AnnData` is passed, determines whether a copy
| python | {
"resource": ""
} |
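The `subsample` body is truncated above; a rough sketch of the row-subsampling it documents, using only the standard library (names are illustrative, not scanpy's implementation):

```python
import random

def subsample_sketch(rows, fraction=None, n_obs=None, random_state=0):
    """Pick a random subset of observations (rows), by count or by fraction."""
    if n_obs is None:
        if fraction is None or not 0 < fraction <= 1:
            raise ValueError('provide `n_obs` or a `fraction` in (0, 1]')
        n_obs = int(fraction * len(rows))
    rng = random.Random(random_state)  # seeded for reproducibility
    indices = sorted(rng.sample(range(len(rows)), n_obs))
    return [rows[i] for i in indices]
```

Passing the same `random_state` twice yields the same subset, matching the "Random seed to change subsampling" parameter above.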
q22203 | downsample_counts | train | def downsample_counts(
adata: AnnData,
counts_per_cell: Optional[Union[int, Collection[int]]] = None,
total_counts: Optional[int] = None,
random_state: Optional[int] = 0,
replace: bool = False,
copy: bool = False,
) -> Optional[AnnData]:
"""Downsample counts from count matrix.
If `counts_per_cell` is specified, each cell will be downsampled. If
`total_counts` is specified, the expression matrix will be downsampled to
contain at most `total_counts`.
Parameters
----------
adata
Annotated data matrix.
counts_per_cell
Target total counts per cell. If a cell has more than `counts_per_cell`
counts, it will be downsampled to this number. Resulting counts can be
specified on a per-cell basis by passing an array. Should be an integer or
an integer ndarray with the same length as the number of obs.
total_counts
Target total counts. If the count matrix has more than `total_counts`
it will be downsampled to have this number.
random_state
Random seed for subsampling.
replace
Whether to sample the counts with replacement.
copy
If an :class:`~anndata.AnnData` is passed, determines whether a copy
is returned.
Returns
-------
Depending on `copy` returns | python | {
"resource": ""
} |
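The body of `downsample_counts` is cut off above. One way to downsample a single cell's integer count vector, sketched with the standard library only (names and mechanics are assumptions; the real implementation presumably works on the sparse matrix directly):

```python
import random

def downsample_cell(counts, target, random_state=0, replace=False):
    """Downsample one cell's integer count vector to `target` total counts."""
    total = sum(counts)
    if total <= target:
        return list(counts)
    rng = random.Random(random_state)
    # one list entry per observed count unit, then sample `target` units
    units = [i for i, c in enumerate(counts) for _ in range(c)]
    chosen = rng.choices(units, k=target) if replace else rng.sample(units, target)
    out = [0] * len(counts)
    for i in chosen:
        out[i] += 1
    return out
```

Without replacement, no gene can end up with more counts than it started with; cells already at or below the target are returned unchanged, matching "If a cell has more than `counts_per_cell`, it will be downsampled."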
q22204 | _downsample_array | train | def _downsample_array(col: np.array, target: int, random_state: int=0,
replace: bool = True, inplace: bool=False):
"""
Evenly reduce counts in cell to target amount.
This is an internal function and has some restrictions:
* `dtype` of col must be an integer (i.e. satisfy issubclass(col.dtype.type, np.integer))
| python | {
"resource": ""
} |
q22205 | _sec_to_str | train | def _sec_to_str(t):
"""Format time in seconds.
Parameters
----------
t : int
Time in seconds.
"""
from functools import reduce
return | python | {
"resource": ""
} |
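The `_sec_to_str` body stops right after the `reduce` import. One plausible completion, formatting seconds as `h:mm:ss.cc` via repeated `divmod` (the exact format string is an assumption):

```python
from functools import reduce

def sec_to_str(t):
    """Format a time in seconds as 'h:mm:ss.cc' (cc = hundredths)."""
    return "%d:%02d:%02d.%02d" % reduce(
        lambda ll, b: divmod(ll[0], b) + ll[1:], [(t * 100,), 100, 60, 60]
    )
```

The `reduce` peels off hundredths, seconds, and minutes in turn, leaving hours in the leading slot.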
q22206 | paga_degrees | train | def paga_degrees(adata) -> List[int]:
"""Compute the degree of each node in the abstracted graph.
Parameters
----------
adata : AnnData
Annotated data matrix.
Returns
-------
List of degrees for each node.
"""
import networkx as nx | python | {
"resource": ""
} |
q22207 | paga_expression_entropies | train | def paga_expression_entropies(adata) -> List[float]:
"""Compute the median expression entropy for each node-group.
Parameters
----------
adata : AnnData
Annotated data matrix.
Returns
-------
Entropies of median expressions for each node.
"""
from scipy.stats import entropy
groups_order, groups_masks = utils.select_groups(
adata, key=adata.uns['paga']['groups'])
entropies = []
for mask in groups_masks:
| python | {
"resource": ""
} |
q22208 | _calc_density | train | def _calc_density(
x: np.ndarray,
y: np.ndarray,
):
"""
Function to calculate the density of cells in an embedding.
"""
# Calculate the point density
xy = np.vstack([x,y])
z | python | {
"resource": ""
} |
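`_calc_density` is truncated just as it stacks the coordinates; the real function presumably computes a Gaussian kernel density estimate. A crude stand-alone stand-in (fixed-radius neighbor counting, with an assumed `radius` parameter) illustrates the idea without scipy:

```python
def calc_density_sketch(x, y, radius=1.0):
    """Per-point density as the fraction of points within `radius`.

    A crude stand-in for the kernel density estimate the real function
    presumably computes (its body is truncated above).
    """
    n = len(x)
    return [
        sum(
            1 for j in range(n)
            if (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 <= radius ** 2
        ) / n
        for i in range(n)
    ]
```

Points sitting inside a cluster get a higher value than isolated points, which is all downstream density plotting needs.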
q22209 | read_10x_h5 | train | def read_10x_h5(filename, genome=None, gex_only=True) -> AnnData:
"""Read 10x-Genomics-formatted hdf5 file.
Parameters
----------
filename : `str` | :class:`~pathlib.Path`
Filename.
genome : `str`, optional (default: ``None``)
Filter expression to genes within this genome. For legacy 10x h5
files, this must be provided if the data contains more than one genome.
gex_only : `bool`, optional (default: `True`)
Only keep 'Gene Expression' data and ignore other feature types,
e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'
Returns
-------
Annotated data matrix, where observations/cells are named by their
barcode and variables/genes by gene name. The data matrix is stored in
`adata.X`, cell names in `adata.obs_names` and gene names in
`adata.var_names`. The gene IDs are stored in `adata.var['gene_ids']`.
The feature types are stored in `adata.var['feature_types']`
"""
logg.info('reading', filename, r=True, end=' ')
with tables.open_file(str(filename), 'r') as f:
v3 = '/matrix' in f
if v3:
adata = _read_v3_10x_h5(filename)
if genome:
if genome not in adata.var['genome'].values:
raise ValueError(
| python | {
"resource": ""
} |
q22210 | _read_legacy_10x_h5 | train | def _read_legacy_10x_h5(filename, genome=None):
"""
Read hdf5 file from Cell Ranger v2 or earlier versions.
"""
with tables.open_file(str(filename), 'r') as f:
try:
children = [x._v_name for x in f.list_nodes(f.root)]
if not genome:
if len(children) > 1:
raise ValueError(
"'{filename}' contains more than one genome. For legacy 10x h5 "
"files you must specify the genome if more than one is present. "
"Available genomes are: {avail}"
.format(filename=filename, avail=children)
)
genome = children[0]
elif genome not in children:
raise ValueError(
"Could not find genome '{genome}' in '{filename}'. "
"Available genomes are: {avail}"
.format(
genome=genome, filename=str(filename),
avail=children,
)
)
dsets = {}
for node in f.walk_nodes('/' + genome, 'Array'):
dsets[node.name] = node.read()
# AnnData works with csr matrices
# 10x stores the transposed data, so we do the transposition right away
from scipy.sparse import csr_matrix
M, N = dsets['shape']
data = dsets['data']
if dsets['data'].dtype == np.dtype('int32'):
data = dsets['data'].view('float32')
| python | {
"resource": ""
} |
q22211 | _read_v3_10x_h5 | train | def _read_v3_10x_h5(filename):
"""
Read hdf5 file from Cell Ranger v3 or later versions.
"""
with tables.open_file(str(filename), 'r') as f:
try:
dsets = {}
for node in f.walk_nodes('/matrix', 'Array'):
dsets[node.name] = node.read()
from scipy.sparse import csr_matrix
M, N = dsets['shape']
data = dsets['data']
if dsets['data'].dtype == np.dtype('int32'):
data = dsets['data'].view('float32')
data[:] = dsets['data']
matrix = csr_matrix((data, dsets['indices'], dsets['indptr']),
shape=(N, M))
adata = AnnData(matrix,
{'obs_names': dsets['barcodes'].astype(str)},
{'var_names': dsets['name'].astype(str),
| python | {
"resource": ""
} |
q22212 | read_10x_mtx | train | def read_10x_mtx(path, var_names='gene_symbols', make_unique=True, cache=False, gex_only=True) -> AnnData:
"""Read 10x-Genomics-formatted mtx directory.
Parameters
----------
path : `str`
Path to directory for `.mtx` and `.tsv` files,
e.g. './filtered_gene_bc_matrices/hg19/'.
var_names : {'gene_symbols', 'gene_ids'}, optional (default: 'gene_symbols')
The variables index.
make_unique : `bool`, optional (default: `True`)
Whether to make the variables index unique by appending '-1',
'-2' etc. or not.
cache : `bool`, optional (default: `False`)
If `False`, read from source, if `True`, read from fast 'h5ad' cache.
gex_only : `bool`, optional (default: `True`)
Only keep 'Gene Expression' data and ignore other feature types,
e.g. 'Antibody Capture', 'CRISPR Guide Capture', or 'Custom'
| python | {
"resource": ""
} |
q22213 | _read_legacy_10x_mtx | train | def _read_legacy_10x_mtx(path, var_names='gene_symbols', make_unique=True, cache=False):
"""
Read mex from output from Cell Ranger v2 or earlier versions
"""
path = Path(path)
adata = read(path / 'matrix.mtx', cache=cache).T # transpose the data
genes = pd.read_csv(path / 'genes.tsv', header=None, sep='\t')
if var_names == 'gene_symbols':
| python | {
"resource": ""
} |
q22214 | read_params | train | def read_params(filename, asheader=False, verbosity=0) -> Dict[str, Union[int, float, bool, str, None]]:
"""Read parameter dictionary from text file.
Assumes that parameters are specified in the format:
par1 = value1
par2 = value2
Comments that start with '#' are allowed.
Parameters
----------
filename : str, Path
Filename of data file.
asheader : bool, optional
Read the dictionary from the header (comment section) of a file.
Returns
-------
Dictionary that stores parameters.
"""
filename = str(filename) # allow passing pathlib.Path objects
from collections import OrderedDict
params = OrderedDict([])
| python | {
"resource": ""
} |
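`read_params` is truncated after the `OrderedDict` setup. A self-contained sketch of the parsing the docstring specifies, operating on lines of text (the function name, the comment-stripping details, and the type-conversion helper are assumptions):

```python
def parse_params(lines, asheader=False):
    """Parse 'key = value' lines into a dict, as the docstring describes.

    '#' starts a comment; with `asheader=True`, only comment lines are
    read (after stripping the leading '#').
    """
    def convert(s):
        # try int, then float, then the literal keywords, else keep the string
        for cast in (int, float):
            try:
                return cast(s)
            except ValueError:
                pass
        return {'True': True, 'False': False, 'None': None}.get(s, s)

    params = {}
    for line in lines:
        if asheader:
            if not line.startswith('#'):
                continue
            line = line[1:]
        line = line.split('#', 1)[0]  # drop trailing comments
        if '=' in line:
            key, val = (s.strip() for s in line.split('=', 1))
            params[key] = convert(val)
    return params
```

This mirrors the `par1 = value1` format from the docstring and the int/float/bool conversion performed by the `convert_string` helper further down.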
q22215 | write_params | train | def write_params(path, *args, **dicts):
"""Write parameters to file, so that it's readable by read_params.
Uses INI file format.
"""
path = Path(path)
if not path.parent.is_dir():
path.parent.mkdir(parents=True)
if len(args) == 1:
d = args[0]
with path.open('w') as f:
for key in d:
f.write(key + ' = ' + str(d[key]) + '\n')
else:
| python | {
"resource": ""
} |
q22216 | get_params_from_list | train | def get_params_from_list(params_list):
"""Transform params list to dictionary.
"""
params = {}
for i in range(0, len(params_list)):
if '=' not in params_list[i]:
try:
if not isinstance(params[key], list): params[key] = [params[key]]
params[key] += [params_list[i]]
except | python | {
"resource": ""
} |
q22217 | _slugify | train | def _slugify(path: Union[str, PurePath]) -> str:
"""Make a path into a filename."""
if not isinstance(path, PurePath):
path = PurePath(path)
parts = list(path.parts)
if parts[0] == '/':
parts.pop(0)
elif len(parts[0]) == 3 and parts[0][1:] == ':\\':
parts[0] = parts[0][0] | python | {
"resource": ""
} |
q22218 | _read_softgz | train | def _read_softgz(filename) -> AnnData:
"""Read a SOFT format data file.
The SOFT format is documented here
http://www.ncbi.nlm.nih.gov/geo/info/soft2.html.
Notes
-----
The function is based on a script by Kerby Shedden.
http://dept.stat.lsa.umich.edu/~kshedden/Python-Workshop/gene_expression_comparison.html
"""
filename = str(filename) # allow passing pathlib.Path objects
import gzip
with gzip.open(filename, mode='rt') as file:
# The header part of the file contains information about the
# samples. Read that information first.
samples_info = {}
for line in file:
if line.startswith("!dataset_table_begin"):
break
elif line.startswith("!subset_description"):
subset_description = line.split("=")[1].strip()
elif line.startswith("!subset_sample_id"):
subset_ids = line.split("=")[1].split(",")
subset_ids = [x.strip() for x in subset_ids]
for k in subset_ids:
samples_info[k] = subset_description
# Next line is the column headers (sample id's)
sample_names = file.readline().strip().split("\t")
# The column indices that contain gene expression data
I = [i for i, x in enumerate(sample_names) if x.startswith("GSM")]
# Restrict the column headers to those that we keep
sample_names = [sample_names[i] for i in I]
| python | {
"resource": ""
} |
q22219 | convert_bool | train | def convert_bool(string):
"""Check whether string is boolean.
"""
if string == 'True':
return True, True
elif | python | {
"resource": ""
} |
q22220 | convert_string | train | def convert_string(string):
"""Convert string to int, float or bool.
"""
if is_int(string):
return int(string)
elif is_float(string):
return float(string)
elif convert_bool(string)[0]:
| python | {
"resource": ""
} |
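`convert_string` above is cut off before its final branches. A completed sketch, including assumed `is_int`/`is_float` helpers that the snippet references but does not show (the real module's versions may differ, e.g. `convert_bool` above returns a tuple):

```python
def is_int(string):
    try:
        int(string)
        return True
    except ValueError:
        return False

def is_float(string):
    try:
        float(string)
        return True
    except ValueError:
        return False

def convert_string(string):
    """Convert a string to int, float, bool or None, else return it unchanged."""
    if is_int(string):
        return int(string)
    if is_float(string):
        return float(string)
    if string in ('True', 'False'):
        return string == 'True'
    if string == 'None':
        return None
    return string
```

The order matters: `'3'` must be tried as an int before a float, and the literal keywords are only checked once both numeric casts have failed.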
q22221 | get_used_files | train | def get_used_files():
"""Get files used by processes with name scanpy."""
import psutil
loop_over_scanpy_processes = (proc for proc in psutil.process_iter()
if proc.name() == 'scanpy')
filenames = []
for proc in loop_over_scanpy_processes:
try:
flist = proc.open_files()
for nt in flist:
filenames.append(nt.path)
| python | {
"resource": ""
} |
q22222 | check_datafile_present_and_download | train | def check_datafile_present_and_download(path, backup_url=None):
"""Check whether the file is present, otherwise download.
"""
path = Path(path)
if path.is_file(): return True
if backup_url is None: return False
logg.info('try downloading from url\n' + backup_url + '\n' +
'... this may take a while but only happens once')
if not | python | {
"resource": ""
} |
q22223 | is_valid_filename | train | def is_valid_filename(filename, return_ext=False):
"""Check whether the argument is a filename."""
ext = Path(filename).suffixes
if len(ext) > 2:
logg.warn('Your filename has more than two extensions: {}.\n'
'Only considering the two last: {}.'.format(ext, ext[-2:]))
ext = ext[-2:]
# cases for gzipped/bzipped text files
if len(ext) == 2 and ext[0][1:] in text_exts and ext[1][1:] in ('gz', 'bz2'):
return ext[0][1:] if return_ext else True
elif ext and ext[-1][1:] in avail_exts:
return ext[-1][1:] if return_ext else True
elif ''.join(ext) == '.soft.gz':
return 'soft.gz' if return_ext else True
elif ''.join(ext) == '.mtx.gz':
return 'mtx.gz' | python | {
"resource": ""
} |
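The extension logic of `is_valid_filename` can be exercised in isolation. In this sketch the `text_exts`/`avail_exts` sets are assumptions (the real module defines its own), but the branch structure follows the snippet above:

```python
from pathlib import Path

# assumed extension sets; the real module defines its own
text_exts = {'csv', 'tsv', 'txt', 'data'}
avail_exts = {'h5', 'h5ad', 'mtx', 'xlsx', 'loom'} | text_exts

def valid_ext(filename):
    """Return the recognized extension of `filename`, or None."""
    ext = Path(filename).suffixes[-2:]  # consider at most the last two
    if len(ext) == 2 and ext[0][1:] in text_exts and ext[1][1:] in ('gz', 'bz2'):
        return ext[0][1:]  # e.g. 'data.csv.gz' -> 'csv'
    if ext and ext[-1][1:] in avail_exts:
        return ext[-1][1:]
    if ''.join(ext) == '.soft.gz':
        return 'soft.gz'
    if ''.join(ext) == '.mtx.gz':
        return 'mtx.gz'
    return None
```

Note how `.soft.gz` and `.mtx.gz` need their own branches: their first component is not a plain text extension, so the gzip branch at the top does not catch them.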
q22224 | correlation_matrix | train | def correlation_matrix(adata,groupby=None ,group=None, corr_matrix=None, annotation_key=None):
"""Plot correlation matrix.
Plot a correlation matrix for genes stored in the sample annotation using rank_genes_groups.
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix.
groupby : `str`, optional (default: None)
If specified, searches data_annotation for correlation_matrix+groupby+str(group)
group : int
Identifier of the group (necessary if and only if groupby is also specified)
corr_matrix : DataFrame, optional (default: None)
Correlation matrix as a DataFrame (annotated axis) that can be transferred manually if wanted
annotation_key: `str`, optional (default: None)
If specified, looks in data annotation for this key.
"""
# TODO: At the moment, only works for int identifiers
if corr_matrix is None:
# This will produce an error if the annotation doesn't exist, which is okay
if annotation_key is None:
if groupby is None:
corr_matrix = adata.uns['Correlation_matrix']
else:
| python | {
"resource": ""
} |
q22225 | tqdm_hook | train | def tqdm_hook(t):
"""
Wraps tqdm instance.
Don't forget to close() or __exit__()
the tqdm instance once you're done with it (easiest using `with` syntax).
Example
-------
>>> with tqdm(...) as t:
... reporthook = tqdm_hook(t)
... urllib.urlretrieve(..., reporthook=reporthook)
"""
last_b = [0]
def update_to(b=1, bsize=1, tsize=None):
"""
b : int, optional
Number of blocks transferred so far [default: 1].
bsize : int, optional
| python | {
"resource": ""
} |
q22226 | matrix | train | def matrix(matrix, xlabel=None, ylabel=None, xticks=None, yticks=None,
title=None, colorbar_shrink=0.5, color_map=None, show=None,
save=None, ax=None):
"""Plot a matrix."""
if ax is None: ax = pl.gca()
img = ax.imshow(matrix, cmap=color_map)
if xlabel is not None: ax.set_xlabel(xlabel)
if ylabel is not None: ax.set_ylabel(ylabel)
if title is not None: ax.set_title(title)
if xticks is not None:
| python | {
"resource": ""
} |
q22227 | timeseries | train | def timeseries(X, **kwargs):
"""Plot X. See timeseries_subplot."""
pl.figure(figsize=(2*rcParams['figure.figsize'][0], rcParams['figure.figsize'][1]), | python | {
"resource": ""
} |
q22228 | timeseries_subplot | train | def timeseries_subplot(X,
time=None,
color=None,
var_names=(),
highlightsX=(),
xlabel='',
ylabel='gene expression',
yticks=None,
xlim=None,
legend=True,
palette=None,
color_map='viridis'):
"""Plot X.
Parameters
----------
X : np.ndarray
Call this with:
X with one column, color categorical.
X with one column, color continuous.
X with n columns, color is of length n.
"""
if color is not None:
use_color_map = isinstance(color[0], float) or isinstance(color[0], np.float32)
palette = default_palette(palette)
x_range = np.arange(X.shape[0]) if time is None else time
if X.ndim == 1: X = X[:, None]
if X.shape[1] > 1:
colors = palette[:X.shape[1]].by_key()['color']
subsets = [(x_range, X[:, i]) for i in range(X.shape[1])]
elif use_color_map:
colors = [color]
subsets = [(x_range, X[:, 0])]
else:
levels, _ = np.unique(color, return_inverse=True)
colors = np.array(palette[:len(levels)].by_key()['color'])
subsets = [(x_range[color == l], X[color == l, :]) for l in levels]
| python | {
"resource": ""
} |
q22229 | timeseries_as_heatmap | train | def timeseries_as_heatmap(X, var_names=None, highlightsX=None, color_map=None):
"""Plot timeseries as heatmap.
Parameters
----------
X : np.ndarray
Data array.
var_names : array_like
Array of strings naming variables stored in columns of X.
"""
if highlightsX is None:
highlightsX = []
if var_names is None:
var_names = []
if len(var_names) == 0:
var_names = np.arange(X.shape[1])
if var_names.ndim == 2:
var_names = var_names[:, 0]
# transpose X
X = X.T
minX = np.min(X)
# insert space into X
if False:
# generate new array with highlightsX
space = 10 # integer
Xnew = np.zeros((X.shape[0], X.shape[1] + space*len(highlightsX)))
hold = 0
_hold = 0
space_sum = 0
for ih, h in enumerate(highlightsX):
_h = h + space_sum
Xnew[:, _hold:_h] = X[:, hold:h]
Xnew[:, _h:_h+space] = minX * np.ones((X.shape[0], space))
# update variables | python | {
"resource": ""
} |
q22230 | savefig | train | def savefig(writekey, dpi=None, ext=None):
"""Save current figure to file.
The `filename` is generated as follows:
filename = settings.figdir + writekey + settings.plot_suffix + '.' + settings.file_format_figs
"""
if dpi is None:
# we need this because, in notebooks, the internal figures are also influenced by 'savefig.dpi'
if not isinstance(rcParams['savefig.dpi'], str) and rcParams['savefig.dpi'] < 150:
if settings._low_resolution_warning:
logg.warn(
'You are using a low resolution (dpi<150) for saving figures.\n'
'Consider running `set_figure_params(dpi_save=...)`, which will '
'adjust `matplotlib.rcParams[\'savefig.dpi\']`') | python | {
"resource": ""
} |
q22231 | scatter_group | train | def scatter_group(ax, key, imask, adata, Y, projection='2d', size=3, alpha=None):
"""Scatter of group using representation of data Y.
"""
mask = adata.obs[key].cat.categories[imask] == adata.obs[key].values
color = adata.uns[key + '_colors'][imask]
if not isinstance(color[0], str):
from matplotlib.colors import rgb2hex
color = rgb2hex(adata.uns[key + '_colors'][imask])
if not is_color_like(color):
raise ValueError('"{}" is not a valid matplotlib color.'.format(color))
data = [Y[mask, 0], Y[mask, | python | {
"resource": ""
} |
q22232 | setup_axes | train | def setup_axes(
ax=None,
panels='blue',
colorbars=[False],
right_margin=None,
left_margin=None,
projection='2d',
show_ticks=False):
"""Grid of axes for plotting, legends and colorbars.
"""
if '3d' in projection: from mpl_toolkits.mplot3d import Axes3D
avail_projections = {'2d', '3d'}
if projection not in avail_projections:
raise ValueError('choose projection from', avail_projections)
if left_margin is not None:
raise ValueError('Currently not supporting to pass `left_margin`.')
if np.any(colorbars) and right_margin is None:
right_margin = 1 - rcParams['figure.subplot.right'] + 0.21 # 0.25
elif right_margin is None:
right_margin = 1 - rcParams['figure.subplot.right'] + 0.06 # 0.10
# make a list of right margins for each panel
if not isinstance(right_margin, list):
right_margin_list = [right_margin for i in range(len(panels))]
else:
right_margin_list = right_margin
# make a figure with len(panels) panels in a row side by side
top_offset = 1 - rcParams['figure.subplot.top']
bottom_offset = 0.15 if show_ticks else 0.08
left_offset = 1 if show_ticks else 0.3 # in units of base_height
base_height = rcParams['figure.figsize'][1]
height = base_height
base_width = rcParams['figure.figsize'][0]
if show_ticks: base_width *= 1.1
draw_region_width = base_width - left_offset - top_offset - 0.5 # this is kept constant throughout
right_margin_factor = sum([1 + right_margin for right_margin in right_margin_list])
width_without_offsets = right_margin_factor * draw_region_width # this is the total width that keeps draw_region_width
right_offset = (len(panels) - 1) * left_offset
figure_width = width_without_offsets + left_offset + right_offset
draw_region_width_frac = draw_region_width / figure_width
left_offset_frac = left_offset / figure_width
right_offset_frac = 1 - (len(panels) - 1) * left_offset_frac
if ax is None:
| python | {
"resource": ""
} |
q22233 | arrows_transitions | train | def arrows_transitions(ax, X, indices, weight=None):
"""
Plot arrows of transitions in data matrix.
Parameters
----------
ax : matplotlib.axis
Axis object from matplotlib.
X : np.array
Data array, any representation wished (X, psi, phi, etc).
indices : array_like
Indices storing the transitions.
"""
step = 1
width = axis_to_data(ax, 0.001)
if X.shape[0] > 300:
step = 5
width = axis_to_data(ax, 0.0005)
if X.shape[0] > 500:
step = 30
width = axis_to_data(ax, 0.0001)
head_width = 10*width
for ix, x in enumerate(X):
if ix % step == 0:
X_step = X[indices[ix]] - x
# don't plot arrow of length 0
for itrans in range(X_step.shape[0]):
alphai = 1
widthi = width
head_widthi = head_width
| python | {
"resource": ""
} |
q22234 | scale_to_zero_one | train | def scale_to_zero_one(x):
"""Scale 1d data so that its minimum maps to 0 and its maximum to 1.
"""
xscaled = x - | python | {
"resource": ""
} |
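The min-max scaling that `scale_to_zero_one` starts (the snippet ends mid-expression) is small enough to complete as a sketch; a list-based version of the same idea:

```python
def scale_to_zero_one_sketch(x):
    """Min-max scale a 1d sequence so min -> 0 and max -> 1."""
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]
```

(A constant input would divide by zero here; the truncated original may or may not guard against that.)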
q22235 | hierarchy_pos | train | def hierarchy_pos(G, root, levels=None, width=1., height=1.):
"""Tree layout for networkx graph.
See https://stackoverflow.com/questions/29586520/can-one-get-hierarchical-graphs-from-networkx-with-python-3
answer by burubum.
If there is a cycle that is reachable from root, this will result in
infinite recursion.
Parameters
----------
G: the graph
root: the root node
levels: a dictionary
key: level number (starting from 0)
value: number of nodes in this level
width: horizontal space allocated for drawing
height: vertical space allocated for drawing
"""
TOTAL = "total"
CURRENT = "current"
def make_levels(levels, node=root, currentLevel=0, parent=None):
"""Compute the number of nodes for each level
"""
if currentLevel not in levels:
levels[currentLevel] = {TOTAL: 0, CURRENT: 0}
levels[currentLevel][TOTAL] += 1
neighbors = list(G.neighbors(node))
if parent is not None:
neighbors.remove(parent)
for neighbor in neighbors:
levels = make_levels(levels, neighbor, currentLevel + 1, node)
return levels
| python | {
"resource": ""
} |
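The inner `make_levels` helper of `hierarchy_pos` can be demonstrated without networkx. This stand-alone sketch counts nodes per depth level on a plain adjacency dict (the function name and dict-based graph are illustrative):

```python
def count_levels(neighbors, root):
    """Count nodes per depth level of a tree, like the `make_levels`
    helper above, but on a plain adjacency dict instead of a networkx graph."""
    levels = {}

    def walk(node, depth, parent):
        levels[depth] = levels.get(depth, 0) + 1
        for nb in neighbors[node]:
            if nb != parent:  # don't walk back up the tree
                walk(nb, depth + 1, node)

    walk(root, 0, None)
    return levels
```

As the docstring warns, a cycle reachable from `root` would recurse forever; the parent check only prevents immediate backtracking, not larger cycles.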
q22236 | zoom | train | def zoom(ax, xy='x', factor=1):
"""Zoom into axis.
Parameters
----------
"""
limits = ax.get_xlim() if xy == 'x' else ax.get_ylim()
| python | {
"resource": ""
} |
q22237 | get_ax_size | train | def get_ax_size(ax, fig):
"""Get axis size
Parameters
----------
ax : matplotlib.axis
Axis object from matplotlib.
fig : matplotlib.Figure
Figure.
""" | python | {
"resource": ""
} |
q22238 | axis_to_data | train | def axis_to_data(ax, width):
"""For a width in axis coordinates, return the corresponding in data
coordinates.
Parameters
----------
ax : matplotlib.axis
Axis object from matplotlib.
width : float
Width in xaxis coordinates.
""" | python | {
"resource": ""
} |
q22239 | axis_to_data_points | train | def axis_to_data_points(ax, points_axis):
"""Map points in axis coordinates to data coordinates.
Uses matplotlib.transform.
Parameters
----------
ax : matplotlib.axis
Axis object from matplotlib.
points_axis : np.array
Points in | python | {
"resource": ""
} |
q22240 | console_main | train | def console_main():
"""This serves as CLI entry point and will not show a Python traceback if a called command fails"""
| python | {
"resource": ""
} |
q22241 | filter_rank_genes_groups | train | def filter_rank_genes_groups(adata, key=None, groupby=None, use_raw=True, log=True,
key_added='rank_genes_groups_filtered',
min_in_group_fraction=0.25, min_fold_change=2,
max_out_group_fraction=0.5):
"""Filters out genes based on fold change and the fraction of cells expressing the gene within and outside the `groupby` categories.
See :func:`~scanpy.tl.rank_genes_groups`.
Results are stored in `adata.uns[key_added]` (default: 'rank_genes_groups_filtered').
To preserve the original structure of adata.uns['rank_genes_groups'], filtered genes
are set to `NaN`.
Parameters
----------
adata: :class:`~anndata.AnnData`
key
groupby
use_raw
log : `bool`. If `True`, the values to work with are assumed to be in log scale.
key_added
min_in_group_fraction
min_fold_change
max_out_group_fraction
Returns
-------
Same output as :func:`~scanpy.tl.rank_genes_groups` but with filtered gene names set to
`nan`
Examples
--------
>>> adata = sc.datasets.pbmc68k_reduced()
>>> sc.tl.rank_genes_groups(adata, 'bulk_labels', method='wilcoxon')
>>> sc.tl.filter_rank_genes_groups(adata, min_fold_change=3)
>>> # visualize results
>>> sc.pl.rank_genes_groups(adata, key='rank_genes_groups_filtered')
>>> # visualize results using dotplot
>>> sc.pl.rank_genes_groups_dotplot(adata, key='rank_genes_groups_filtered')
"""
if key is None:
key = 'rank_genes_groups'
if groupby is None:
groupby = str(adata.uns[key]['params']['groupby'])
# convert structured numpy array into DataFrame
gene_names = pd.DataFrame(adata.uns[key]['names'])
fraction_in_cluster_matrix = pd.DataFrame(np.zeros(gene_names.shape), columns=gene_names.columns,
index=gene_names.index)
fold_change_matrix = pd.DataFrame(np.zeros(gene_names.shape), columns=gene_names.columns, index=gene_names.index)
fraction_out_cluster_matrix = pd.DataFrame(np.zeros(gene_names.shape), columns=gene_names.columns,
index=gene_names.index)
logg.info("Filtering genes using: min_in_group_fraction: {} "
"min_fold_change: {}, max_out_group_fraction: {}".format(min_in_group_fraction, min_fold_change,
max_out_group_fraction))
from ..plotting._anndata import _prepare_dataframe
for cluster in gene_names.columns:
# iterate per column
var_names = gene_names[cluster].values
# add column to adata as __is_in_cluster__. This facilitates to measure fold change
# of each gene with respect to all other clusters
adata.obs['__is_in_cluster__'] = pd.Categorical(adata.obs[groupby] == cluster)
# obs_tidy has rows=groupby, columns=var_names
categories, obs_tidy = _prepare_dataframe(adata, var_names, groupby='__is_in_cluster__', use_raw=use_raw)
# for if category defined by groupby (if any) compute | python | {
"resource": ""
} |
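The filtering step itself (the part cut off above) reduces to applying three thresholds per gene. A self-contained sketch of that decision, with an assumed `stats` mapping in place of the per-cluster DataFrames:

```python
def filter_marker_genes(stats, min_in_group_fraction=0.25, min_fold_change=2,
                        max_out_group_fraction=0.5):
    """Apply the three thresholds described above to per-gene statistics.

    `stats` maps gene name -> (in_group_fraction, out_group_fraction,
    fold_change); filtered genes map to None, standing in for the NaN
    the real function writes into `adata.uns`.
    """
    return {
        name: (name if (in_frac >= min_in_group_fraction
                        and out_frac <= max_out_group_fraction
                        and fc >= min_fold_change) else None)
        for name, (in_frac, out_frac, fc) in stats.items()
    }
```

A gene survives only if it is expressed in enough cells of its group, in few enough cells outside it, and with a sufficient fold change; failing any one test removes it.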
q22242 | blobs | train | def blobs(n_variables=11, n_centers=5, cluster_std=1.0, n_observations=640) -> AnnData:
"""Gaussian Blobs.
Parameters
----------
n_variables : `int`, optional (default: 11)
Dimension of feature space.
n_centers : `int`, optional (default: 5)
Number of cluster centers.
cluster_std : `float`, optional (default: 1.0)
Standard deviation of clusters.
n_observations : `int`, optional (default: 640)
Number of observations. By default, this is the same observation number as in
``sc.datasets.krumsiek11()``.
Returns
-------
Annotated data matrix containing a observation annotation 'blobs' that
indicates cluster identity.
"""
import sklearn.datasets | python | {
"resource": ""
} |
q22243 | toggleswitch | train | def toggleswitch() -> AnnData:
"""Simulated toggleswitch.
Data obtained simulating a simple toggleswitch `Gardner *et al.*, Nature
(2000) <https://doi.org/10.1038/35002131>`__.
Simulate via :func:`~scanpy.api.sim`.
Returns
-------
Annotated data matrix.
"""
filename | python | {
"resource": ""
} |
q22244 | pbmc68k_reduced | train | def pbmc68k_reduced() -> AnnData:
"""Subsampled and processed 68k PBMCs.
10x PBMC 68k dataset from
https://support.10xgenomics.com/single-cell-gene-expression/datasets
The original PBMC 68k dataset was preprocessed using scanpy and was saved
keeping only 724 cells and 221 highly variable genes.
The saved file contains the annotation of cell types (key: 'bulk_labels'), UMAP coordinates,
louvain | python | {
"resource": ""
} |
q22245 | OnFlySymMatrix.restrict | train | def restrict(self, index_array):
"""Generate a view restricted to a subset of indices.
"""
new_shape = index_array.shape[0], index_array.shape[0]
return OnFlySymMatrix(self.get_row, new_shape, DC_start=self.DC_start,
| python | {
"resource": ""
} |
q22246 | Neighbors.compute_neighbors | train | def compute_neighbors(
self,
n_neighbors: int = 30,
knn: bool = True,
n_pcs: Optional[int] = None,
use_rep: Optional[str] = None,
method: str = 'umap',
random_state: Optional[Union[RandomState, int]] = 0,
write_knn_indices: bool = False,
metric: str = 'euclidean',
metric_kwds: Mapping[str, Any] = {}
) -> None:
"""\
Compute distances and connectivities of neighbors.
Parameters
----------
n_neighbors
Use this number of nearest neighbors.
knn
Restrict result to `n_neighbors` nearest neighbors.
{n_pcs}
{use_rep}
Returns
-------
Writes sparse graph attributes `.distances` and `.connectivities`.
Also writes `.knn_indices` and `.knn_distances` if
`write_knn_indices==True`.
"""
if n_neighbors > self._adata.shape[0]: # very small datasets
n_neighbors = 1 + int(0.5*self._adata.shape[0])
logg.warn('n_obs too small: adjusting to `n_neighbors = {}`'
.format(n_neighbors))
if method == 'umap' and not knn:
        raise ValueError("`method = 'umap'` only with `knn = True`.")
if method not in {'umap', 'gauss'}:
raise ValueError('`method` needs to be \'umap\' or \'gauss\'.')
if self._adata.shape[0] >= 10000 and not knn:
logg.warn(
'Using high n_obs without `knn=True` takes a lot of memory...')
self.n_neighbors = n_neighbors
self.knn = knn
X = choose_representation(self._adata, use_rep=use_rep, n_pcs=n_pcs)
# neighbor search
use_dense_distances = (metric == 'euclidean' and X.shape[0] < 8192) or knn == False
if use_dense_distances:
_distances = pairwise_distances(X, metric=metric, **metric_kwds)
knn_indices, knn_distances = get_indices_distances_from_dense_matrix(
_distances, n_neighbors)
if knn:
self._distances = get_sparse_matrix_from_indices_distances_numpy(
knn_indices, knn_distances, X.shape[0], n_neighbors)
else:
self._distances = _distances
else:
# non-euclidean case and approx nearest neighbors
if X.shape[0] < 4096:
X = pairwise_distances(X, metric=metric, **metric_kwds)
| python | {
"resource": ""
} |
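The dense branch above delegates to a helper that extracts kNN indices and distances from a full pairwise distance matrix. A minimal numpy sketch of that step (function name hypothetical):

```python
import numpy as np

def knn_from_dense(distances, n_neighbors):
    """For each row, keep the indices and distances of the n_neighbors
    closest points (including the point itself at distance 0)."""
    order = np.argsort(distances, axis=1)[:, :n_neighbors]
    knn_distances = np.take_along_axis(distances, order, axis=1)
    return order, knn_distances

# toy 1-D data: three points at positions 0, 1 and 3
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)                    # dense pairwise euclidean distances
idx, dst = knn_from_dense(D, n_neighbors=2)
```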
q22247 | Neighbors.compute_transitions | train | def compute_transitions(self, density_normalize=True):
"""Compute transition matrix.
Parameters
----------
density_normalize : `bool`
            The density rescaling of Coifman and Lafon (2006): then only the
geometry of the data matters, not the sampled density.
Returns
-------
Makes attributes `.transitions_sym` and `.transitions` available.
"""
W = self._connectivities
# density normalization as of Coifman et al. (2005)
# ensures that kernel matrix is independent of sampling density
if density_normalize:
# q[i] is an estimate for the sampling density at point i
# it's also the degree of the underlying graph
q = np.asarray(W.sum(axis=0))
if not issparse(W):
Q = np.diag(1.0/q)
else:
Q = scipy.sparse.spdiags(1.0/q, 0, W.shape[0], W.shape[0])
| python | {
"resource": ""
} |
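The density normalization above rescales the kernel by the inverse degree on both sides, `K = Q^{-1} W Q^{-1}`. A dense-matrix sketch with made-up connectivities:

```python
import numpy as np

# toy symmetric kernel matrix W
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
q = W.sum(axis=0)              # q[i]: sampling-density estimate (degree) at point i
Qinv = np.diag(1.0 / q)
K = Qinv @ W @ Qinv            # density-normalized kernel, still symmetric
```

After this rescaling the kernel depends only on the geometry of the data, not on how densely it was sampled.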
q22248 | Neighbors.compute_eigen | train | def compute_eigen(self, n_comps=15, sym=None, sort='decrease'):
"""Compute eigen decomposition of transition matrix.
Parameters
----------
n_comps : `int`
Number of eigenvalues/vectors to be computed, set `n_comps = 0` if
you need all eigenvectors.
sym : `bool`
            Instead of computing the eigendecomposition of the asymmetric
            transition matrix, compute the eigendecomposition of the symmetric
            Ktilde matrix.
matrix : sparse matrix, np.ndarray, optional (default: `.connectivities`)
Matrix to diagonalize. Merely for testing and comparison purposes.
Returns
-------
Writes the following attributes.
eigen_values : numpy.ndarray
Eigenvalues of transition matrix.
eigen_basis : numpy.ndarray
Matrix of eigenvectors (stored in columns). `.eigen_basis` is
projection of data matrix on right eigenvectors, that is, the
| python | {
"resource": ""
} |
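For the symmetric case, the eigendecomposition with `sort='decrease'` can be sketched with numpy (toy transition matrix; not the actual scanpy code):

```python
import numpy as np

# a small symmetric transition matrix: eigenvalues are real
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
evals, evecs = np.linalg.eigh(T)       # eigh returns ascending order
order = np.argsort(evals)[::-1]        # 'decrease' sorting as in the API
evals, evecs = evals[order], evecs[:, order]
```

The leading eigenvalue of a transition matrix is 1; the remaining spectrum carries the diffusion-map geometry.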
q22249 | Neighbors._set_pseudotime | train | def _set_pseudotime(self):
        """Set pseudotime with respect to the root point.
"""
| python | {
"resource": ""
} |
q22250 | Neighbors._set_iroot_via_xroot | train | def _set_iroot_via_xroot(self, xroot):
"""Determine the index of the root cell.
Given an expression vector, find the observation index that is closest
to this vector.
Parameters
----------
xroot : np.ndarray
Vector that marks the root cell, the vector storing the initial
condition, only relevant for computing pseudotime.
"""
if self._adata.shape[1] != xroot.size:
raise ValueError(
| python | {
"resource": ""
} |
q22251 | mitochondrial_genes | train | def mitochondrial_genes(host, org) -> pd.Index:
"""Mitochondrial gene symbols for specific organism through BioMart.
Parameters
----------
host : {{'www.ensembl.org', ...}}
A valid BioMart host URL.
org : {{'hsapiens', 'mmusculus', 'drerio'}}
Organism to query. Currently available are human ('hsapiens'), mouse
('mmusculus') and zebrafish ('drerio').
Returns
-------
A :class:`pandas.Index` containing mitochondrial gene symbols.
"""
try:
from bioservices import biomart
except ImportError:
raise ImportError(
'You need to install the `bioservices` module.')
from io import StringIO
s = biomart.BioMart(host=host)
# building query
s.new_query()
if org == 'hsapiens':
s.add_dataset_to_xml('hsapiens_gene_ensembl')
s.add_attribute_to_xml('hgnc_symbol')
elif org == 'mmusculus':
s.add_dataset_to_xml('mmusculus_gene_ensembl')
| python | {
"resource": ""
} |
q22252 | highest_expr_genes | train | def highest_expr_genes(
adata, n_top=30, show=None, save=None,
ax=None, gene_symbols=None, **kwds
):
"""\
Fraction of counts assigned to each gene over all cells.
Computes, for each gene, the fraction of counts assigned to that gene within
a cell. The `n_top` genes with the highest mean fraction over all cells are
plotted as boxplots.
This plot is similar to the `scater` package function `plotHighestExprs(type
= "highest-expression")`, see `here
<https://bioconductor.org/packages/devel/bioc/vignettes/scater/inst/doc/vignette-qc.html>`__. Quoting
from there:
*We expect to see the “usual suspects”, i.e., mitochondrial genes, actin,
ribosomal protein, MALAT1. A few spike-in transcripts may also be
present here, though if all of the spike-ins are in the top 50, it
suggests that too much spike-in RNA was added. A large number of
pseudo-genes or predicted genes may indicate problems with alignment.*
-- Davis McCarthy and Aaron Lun
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix.
    n_top : `int`, optional (default: 30)
        Number of top genes to plot.
{show_save_ax}
gene_symbols : `str`, optional (default:None)
Key for field in .var that stores gene symbols if you do not want to use .var_names.
**kwds : keyword arguments
Are passed to `seaborn.boxplot`.
Returns
-------
If `show==False` a :class:`~matplotlib.axes.Axes`.
"""
from scipy.sparse import issparse
| python | {
"resource": ""
} |
q22253 | filter_genes_cv_deprecated | train | def filter_genes_cv_deprecated(X, Ecutoff, cvFilter):
    """Filter genes by coefficient of variation and mean.
See `filter_genes_dispersion`.
Reference: Weinreb et al. (2017).
"""
if issparse(X):
| python | {
"resource": ""
} |
q22254 | filter_genes_fano_deprecated | train | def filter_genes_fano_deprecated(X, Ecutoff, Vcutoff):
    """Filter genes by Fano factor and mean.
See `filter_genes_dispersion`.
Reference: Weinreb et al. (2017).
"""
if issparse(X):
raise ValueError('Not defined for sparse input. See `filter_genes_dispersion`.')
| python | {
"resource": ""
} |
q22255 | materialize_as_ndarray | train | def materialize_as_ndarray(a):
"""Convert distributed arrays to ndarrays."""
if type(a) in (list, tuple):
if da is not None and any(isinstance(arr, da.Array) for arr in a):
| python | {
"resource": ""
} |
q22256 | mnn_concatenate | train | def mnn_concatenate(*adatas, geneset=None, k=20, sigma=1, n_jobs=None, **kwds):
"""Merge AnnData objects and correct batch effects using the MNN method.
Batch effect correction by matching mutual nearest neighbors [Haghverdi18]_
has been implemented as a function 'mnnCorrect' in the R package
`scran <https://bioconductor.org/packages/release/bioc/html/scran.html>`__
This function provides a wrapper to use the mnnCorrect function when
concatenating Anndata objects by using the Python-R interface `rpy2
<https://pypi.org/project/rpy2/>`__.
Parameters
----------
adatas : :class:`~anndata.AnnData`
AnnData matrices to concatenate with. Each dataset should generally be
log-transformed, e.g., log-counts. Datasets should have the same number
of genes, or at lease have all the genes in geneset.
geneset : `list`, optional (default: `None`)
A list specifying the genes with which distances between cells are
calculated in mnnCorrect, typically the highly variable genes.
All genes are used if no geneset provided. See the `scran manual
<https://bioconductor.org/packages/release/bioc/html/scran.html>`__ for
details.
    k : `int`, optional (default: 20)
See the `scran manual <https://bioconductor.org/packages/release/bioc/html/scran.html>`__
for details.
    sigma : `int`, optional (default: 1)
See the `scran manual <https://bioconductor.org/packages/release/bioc/html/scran.html>`__
for details.
n_jobs : `int` or `None` (default: `sc.settings.n_jobs`)
Number of jobs.
kwds :
Keyword arguments passed to Anndata.concatenate
Returns
-------
An :class:`~anndata.AnnData` object with MNN corrected data matrix X.
Example
-------
>>> adata1
AnnData object with | python | {
"resource": ""
} |
q22257 | _design_matrix | train | def _design_matrix(
model: pd.DataFrame,
batch_key: str,
batch_levels: Collection[str],
) -> pd.DataFrame:
"""
Computes a simple design matrix.
Parameters
--------
model
Contains the batch annotation
batch_key
Name of the batch column
batch_levels
Levels of the batch annotation
Returns
--------
The design matrix for the regression problem
"""
import patsy
design = patsy.dmatrix(
"~ 0 + C(Q('{}'), levels=batch_levels)".format(batch_key),
model,
return_type="dataframe",
)
model = model.drop([batch_key], axis=1)
numerical_covariates = model.select_dtypes('number').columns.values
logg.info("Found {} batches\n".format(design.shape[1]))
other_cols = [c for c in model.columns.values if c not in numerical_covariates]
if other_cols:
col_repr = " + ".join("Q('{}')".format(x) for x in other_cols)
factor_matrix = patsy.dmatrix("~ 0 + {}".format(col_repr),
model[other_cols],
| python | {
"resource": ""
} |
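The patsy call above one-hot encodes the batch column. Assuming no extra covariates, roughly the same design matrix can be obtained with pandas (illustrative only; the real code keeps patsy's column naming):

```python
import pandas as pd

# model frame containing only the batch annotation (values hypothetical)
model = pd.DataFrame({"batch": ["a", "a", "b", "c"]})

# one indicator column per batch level, exactly one 1 per row
design = pd.get_dummies(model["batch"])
```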
q22258 | _standardize_data | train | def _standardize_data(
model: pd.DataFrame,
data: pd.DataFrame,
batch_key: str,
) -> Tuple[pd.DataFrame, pd.DataFrame, np.ndarray, np.ndarray]:
"""
Standardizes the data per gene.
    The aim here is to make mean and variance comparable across batches.
Parameters
--------
model
Contains the batch annotation
data
Contains the Data
batch_key
Name of the batch column in the model matrix
Returns
--------
s_data : pandas.DataFrame
Standardized Data
design : pandas.DataFrame
Batch assignment as one-hot encodings
var_pooled : numpy.ndarray
Pooled variance per gene
stand_mean : numpy.ndarray
Gene-wise mean
"""
# compute the design matrix
batch_items = model.groupby(batch_key).groups.items()
batch_levels, batch_info = zip(*batch_items)
n_batch = len(batch_info)
| python | {
"resource": ""
} |
q22259 | _it_sol | train | def _it_sol(s_data, g_hat, d_hat, g_bar, t2, a, b, conv=0.0001) -> Tuple[float, float]:
"""
Iteratively compute the conditional posterior means for gamma and delta.
    gamma is an estimator for the additive batch effect, delta is an estimator
    for the multiplicative batch effect. We use an EB framework to estimate
    these two. Analytical expressions exist for both parameters, but they
    depend on each other. We therefore iteratively evaluate the two
    expressions until convergence is reached.
Parameters
--------
s_data : pd.DataFrame
Contains the standardized Data
g_hat : float
Initial guess for gamma
d_hat : float
Initial guess for delta
g_bar, t_2, a, b : float
Hyperparameters
    conv : float, optional (default: `0.0001`)
        convergence criterion
Returns:
--------
gamma : float
estimated value for gamma
delta : float
estimated value for delta
"""
n = (1 - np.isnan(s_data)).sum(axis=1)
g_old = g_hat.copy()
d_old = d_hat.copy()
change = 1
count = 0
    # we place a normally distributed prior on gamma and an inverse gamma prior on | python | {
"resource": ""
} |
q22260 | top_proportions | train | def top_proportions(mtx, n):
"""
Calculates cumulative proportions of top expressed genes
Parameters
----------
mtx : `Union[np.array, sparse.spmatrix]`
Matrix, where each row is a sample, each column a feature.
n : `int`
Rank to calculate proportions up to. Value is treated as 1-indexed,
`n=50` will calculate cumulative proportions up to the 50th most
expressed gene. | python | {
"resource": ""
} |
q22261 | top_segment_proportions | train | def top_segment_proportions(mtx, ns):
"""
Calculates total percentage of counts in top ns genes.
Parameters
----------
mtx : `Union[np.array, sparse.spmatrix]`
Matrix, where each row is a sample, each column a feature.
ns : `Container[Int]`
Positions to calculate cumulative proportion at. Values are considered
1-indexed, e.g. `ns=[50]` will calculate cumulative | python | {
"resource": ""
} |
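For a single cell, the cumulative proportion both helpers compute reduces to sorting counts and summing the top segment. A pure-Python sketch with hypothetical counts:

```python
# counts per gene in one cell (hypothetical)
counts = [50, 30, 10, 5, 5]

top = sorted(counts, reverse=True)   # most expressed genes first
total = sum(counts)
prop_top2 = sum(top[:2]) / total     # fraction of counts in the top 2 genes
```

`top_proportions` computes this for every rank up to `n`; `top_segment_proportions` only at the positions listed in `ns`.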
q22262 | add_args | train | def add_args(p):
"""
Update parser with tool specific arguments.
    This overwrites what is done in utils.uns_args.
"""
# dictionary for adding arguments
dadd_args = {
| python | {
"resource": ""
} |
q22263 | _check_branching | train | def _check_branching(X,Xsamples,restart,threshold=0.25):
"""\
Check whether time series branches.
Parameters
----------
X (np.array): current time series data.
Xsamples (np.array): list of previous branching samples.
restart (int): counts number of restart trials.
threshold (float, optional): sets threshold for attractor
identification.
Returns
-------
check : bool
true if branching realization
Xsamples
updated list
"""
check = True
if restart == 0:
| python | {
"resource": ""
} |
q22264 | check_nocycles | train | def check_nocycles(Adj, verbosity=2):
"""\
    Checks that there are no cycles in the graph described by the adjacency matrix.
Parameters
----------
    Adj (np.array): adjacency matrix of dimension (dim, dim)
Returns
-------
True if there is no cycle, False otherwise.
"""
dim = Adj.shape[0]
for g in range(dim):
v = np.zeros(dim)
v[g] = 1
for i in range(dim):
v = Adj.dot(v)
| python | {
"resource": ""
} |
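The loop above propagates an indicator vector through the adjacency matrix; if any mass returns to the starting node within `dim` steps, the graph has a cycle. A self-contained sketch of that test on toy graphs (edge convention `Adj[i, j] != 0` meaning j -> i, as assumed from the code):

```python
import numpy as np

def has_cycle(Adj):
    """Return True if the directed graph given by Adj contains a cycle."""
    dim = Adj.shape[0]
    for g in range(dim):
        v = np.zeros(dim)
        v[g] = 1                       # start all mass on node g
        for _ in range(dim):
            v = Adj.dot(v)             # propagate one step along edges
            if v[g] > 1e-10:           # mass came back: cycle through g
                return True
    return False

chain = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)   # 0 -> 1 -> 2, acyclic
cycle = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)   # 0 -> 1 -> 2 -> 0
```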
q22265 | sample_coupling_matrix | train | def sample_coupling_matrix(dim=3,connectivity=0.5):
"""\
Sample coupling matrix.
Checks that returned graphs contain no self-cycles.
Parameters
----------
dim : int
dimension of coupling matrix.
connectivity : float
        fraction of connectivity; fully connected means 1.0, not connected
        means 0. In the fully connected case, one has dim*(dim-1)/2 edges
        in the graph.
Returns
-------
    Tuple (Coupl, Adj, Adj_signed) of coupling matrix, adjacency and
    signed adjacency matrix.
"""
max_trial = 10
check = False
for trial in range(max_trial):
# random topology for a given connectivity / edge density
Coupl = np.zeros((dim,dim))
n_edges = 0
for gp in range(dim):
for g in range(dim):
if gp != g:
# need to have the factor 0.5, otherwise
# connectivity=1 would lead to dim*(dim-1) edges
| python | {
"resource": ""
} |
q22266 | GRNsim.sim_model | train | def sim_model(self,tmax,X0,noiseDyn=0,restart=0):
""" Simulate the model.
"""
self.noiseDyn = noiseDyn
#
X = np.zeros((tmax,self.dim))
X[0] = X0 + noiseDyn*np.random.randn(self.dim)
# run simulation
for t in range(1,tmax):
if self.modelType == 'hill':
Xdiff = self.Xdiff_hill(X[t-1])
elif self.modelType == 'var':
| python | {
"resource": ""
} |
q22267 | GRNsim.Xdiff_hill | train | def Xdiff_hill(self,Xt):
""" Build Xdiff from coefficients of boolean network,
that is, using self.boolCoeff. The employed functions
are Hill type activation and deactivation functions.
See Wittmann et al., BMC Syst. Biol. 3, 98 (2009),
doi:10.1186/1752-0509-3-98 for more details.
"""
verbosity = self.verbosity>0 and self.writeOutputOnce
self.writeOutputOnce = False
Xdiff = np.zeros(self.dim)
for ichild,child in enumerate(self.pas.keys()):
# check whether list of parents is non-empty,
# otherwise continue
if self.pas[child]:
Xdiff_syn = 0 # synthesize term
if verbosity > 0:
Xdiff_syn_str = ''
else:
continue
# loop over all tuples for which the boolean update
# rule returns true, these are stored in self.boolCoeff
for ituple,tuple in enumerate(self.boolCoeff[child]):
Xdiff_syn_tuple = 1
Xdiff_syn_tuple_str = ''
for iv,v in enumerate(tuple):
iparent = self.varNames[self.pas[child][iv]]
x = Xt[iparent] | python | {
"resource": ""
} |
q22268 | GRNsim.hill_a | train | def hill_a(self,x,threshold=0.1,power=2):
""" Activating hill function. """
x_pow = np.power(x,power)
threshold_pow | python | {
"resource": ""
} |
q22269 | GRNsim.hill_i | train | def hill_i(self,x,threshold=0.1,power=2):
""" Inhibiting hill function.
Is equivalent to 1-hill_a(self,x,power,threshold).
"""
x_pow | python | {
"resource": ""
} |
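As the docstring notes, the inhibiting function is the complement of the activating one: `hill_i(x) = 1 - hill_a(x)` for the same threshold and power. A pure-Python sketch (standalone functions, not the class methods):

```python
def hill_a(x, threshold=0.1, power=2):
    """Activating Hill function: 0 for x << threshold, 1 for x >> threshold."""
    x_pow = x ** power
    return x_pow / (x_pow + threshold ** power)

def hill_i(x, threshold=0.1, power=2):
    """Inhibiting Hill function, the complement of hill_a."""
    return 1.0 - hill_a(x, threshold, power)
```

At `x == threshold` the activating function is exactly 0.5, the half-activation point.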
q22270 | GRNsim.nhill_a | train | def nhill_a(self,x,threshold=0.1,power=2,ichild=2):
""" Normalized activating hill function. """
x_pow = np.power(x,power)
| python | {
"resource": ""
} |
q22271 | GRNsim.nhill_i | train | def nhill_i(self,x,threshold=0.1,power=2):
""" Normalized inhibiting hill function.
Is equivalent to 1-nhill_a(self,x,power,threshold).
"""
x_pow | python | {
"resource": ""
} |
q22272 | GRNsim.read_model | train | def read_model(self):
""" Read the model and the couplings from the model file.
"""
if self.verbosity > 0:
settings.m(0,'reading model',self.model)
# read model
boolRules = []
for line in open(self.model):
if line.startswith('#') and 'modelType =' in line:
keyval = line
if '|' in line:
keyval, type = line.split('|')[:2]
self.modelType = keyval.split('=')[1].strip()
if line.startswith('#') and 'invTimeStep =' in line:
keyval = line
if '|' in line:
keyval, type = line.split('|')[:2]
self.invTimeStep = float(keyval.split('=')[1].strip())
if not line.startswith('#'):
boolRules.append([s.strip() for s in line.split('=')])
if line.startswith('# coupling list:'):
break
self.dim = len(boolRules)
self.boolRules = collections.OrderedDict(boolRules)
self.varNames = collections.OrderedDict([(s, i)
for i, s in enumerate(self.boolRules.keys())])
names = self.varNames
# read couplings via names
self.Coupl = np.zeros((self.dim, self.dim))
| python | {
"resource": ""
} |
q22273 | GRNsim.set_coupl_old | train | def set_coupl_old(self):
""" Using the adjacency matrix, sample a coupling matrix.
"""
if self.model == 'krumsiek11' or self.model == 'var':
# we already built the coupling matrix in set_coupl20()
return
self.Coupl = np.zeros((self.dim,self.dim))
for i in range(self.Adj.shape[0]):
for j,a in enumerate(self.Adj[i]):
# if there is a 1 in Adj, specify co and antiregulation
# and strength of regulation
if a != 0:
co_anti = np.random.randint(2)
# set a lower bound for the coupling parameters
# they ought not to be smaller than 0.1
# and not be larger than 0.4
self.Coupl[i,j] = 0.0*np.random.rand() + 0.1
| python | {
"resource": ""
} |
q22274 | GRNsim.coupl_model1 | train | def coupl_model1(self):
""" In model 1, we want enforce the following signs
on the couplings. Model 2 has the same couplings
| python | {
"resource": ""
} |
q22275 | GRNsim.coupl_model5 | train | def coupl_model5(self):
""" Toggle switch.
"""
self.Coupl = -0.2*self.Adj
self.Coupl[2,0] *= -1
| python | {
"resource": ""
} |
q22276 | GRNsim.coupl_model8 | train | def coupl_model8(self):
""" Variant of toggle switch.
"""
self.Coupl = 0.5*self.Adj_signed
# reduce the value of the coupling of the repressing genes
# otherwise completely unstable solutions | python | {
"resource": ""
} |
q22277 | GRNsim.sim_model_backwards | train | def sim_model_backwards(self,tmax,X0):
""" Simulate the model backwards in time.
"""
X = np.zeros((tmax,self.dim))
X[tmax-1] = X0
for t in range(tmax-2,-1,-1):
sol = sp.optimize.root(self.sim_model_back_help,
| python | {
"resource": ""
} |
q22278 | GRNsim.parents_from_boolRule | train | def parents_from_boolRule(self,rule):
""" Determine parents based on boolean updaterule.
Returns list of parents.
"""
rule_pa = rule.replace('(','').replace(')','').replace('or','').replace('and','').replace('not','')
rule_pa = rule_pa.split()
# if there are no parents, continue
if not rule_pa:
return []
# check whether these are meaningful parents
pa_old = []
pa_delete = []
for pa in rule_pa:
if pa not in self.varNames.keys():
settings.m(0,'list of available variables:')
settings.m(0,list(self.varNames.keys()))
message = ('processing of rule "' + rule
                + '" yields an invalid parent: ' + pa
+ ' | check whether the syntax is correct: \n'
| python | {
"resource": ""
} |
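The parsing above strips parentheses and boolean operators and splits on whitespace. A standalone sketch with a hypothetical rule (note that the naive `str.replace` would also mangle gene names containing 'or', 'and' or 'not' as substrings):

```python
# a boolean update rule as it might appear in a model file (hypothetical)
rule = "(Gata1 or Fli1) and not Pu1"

# remove parentheses and boolean operators, keep only variable names
stripped = (rule.replace('(', '').replace(')', '')
                .replace('or', '').replace('and', '').replace('not', ''))
parents = stripped.split()   # candidate parent variable names
```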
q22279 | GRNsim.build_boolCoeff | train | def build_boolCoeff(self):
''' Compute coefficients for tuple space.
'''
# coefficients for hill functions from boolean update rules
self.boolCoeff = collections.OrderedDict([(s,[]) for s in self.varNames.keys()])
# parents
self.pas = collections.OrderedDict([(s,[]) for s in self.varNames.keys()])
#
for key in self.boolRules.keys():
rule = self.boolRules[key]
self.pas[key] = self.parents_from_boolRule(rule)
pasIndices = [self.varNames[pa] for pa in self.pas[key]]
# check whether there are coupling matrix entries for each parent
for g in range(self.dim):
if g in pasIndices:
if np.abs(self.Coupl[self.varNames[key],g]) < 1e-10:
raise ValueError('specify coupling value for '+str(key)+' <- '+str(g))
else:
if np.abs(self.Coupl[self.varNames[key],g]) > 1e-10:
| python | {
"resource": ""
} |
q22280 | GRNsim.process_rule | train | def process_rule(self,rule,pa,tuple):
''' Process a string that denotes a boolean rule.
'''
| python | {
"resource": ""
} |
q22281 | StaticCauseEffect.sim_givenAdj | train | def sim_givenAdj(self, Adj: np.array, model='line'):
"""\
        Simulate data given only an adjacency matrix and a model.
        The model is a bivariate functional dependence. The adjacency matrix
        needs to be acyclic.
Parameters
----------
Adj
            adjacency matrix of shape (dim, dim).
Returns
-------
Data array of shape (n_samples,dim).
"""
# nice examples
examples = [{'func' : 'sawtooth', 'gdist' : 'uniform',
'sigma_glob' : 1.8, 'sigma_noise' : 0.1}]
# nr of samples
n_samples = 100
# noise
sigma_glob = 1.8
sigma_noise = 0.4
# coupling function / model
func = self.funcs[model]
# glob distribution
sourcedist = 'uniform'
# loop over source nodes
dim = Adj.shape[0]
X = np.zeros((n_samples,dim))
# source nodes have no parents themselves
nrpar = 0
children = list(range(dim))
parents = []
for gp in range(dim):
if Adj[gp,:].sum() == nrpar:
if sourcedist == 'gaussian':
X[:,gp] = np.random.normal(0,sigma_glob,n_samples)
if sourcedist == 'uniform':
X[:,gp] = np.random.uniform(-sigma_glob,sigma_glob,n_samples)
parents.append(gp)
children.remove(gp)
# all of the following guarantees for 3 dim, that we generate the data
# in the correct sequence
# then compute all nodes that have 1 parent, then those with 2 parents
children_sorted = []
nrchildren_par = np.zeros(dim)
nrchildren_par[0] = len(parents)
for nrpar in range(1,dim):
# loop over child nodes
for gp in children:
if Adj[gp,:].sum() == nrpar:
children_sorted.append(gp)
| python | {
"resource": ""
} |
q22282 | StaticCauseEffect.sim_combi | train | def sim_combi(self):
""" Simulate data to model combi regulation.
"""
n_samples = 500
sigma_glob = 1.8
X = np.zeros((n_samples,3))
X[:,0] = np.random.uniform(-sigma_glob,sigma_glob,n_samples)
X[:,1] = np.random.uniform(-sigma_glob,sigma_glob,n_samples)
func = self.funcs['tanh']
| python | {
"resource": ""
} |
q22283 | _calc_overlap_count | train | def _calc_overlap_count(
markers1: dict,
markers2: dict,
):
"""Calculate overlap count between the values of two dictionaries
Note: dict values must be sets
| python | {
"resource": ""
} |
q22284 | _calc_overlap_coef | train | def _calc_overlap_coef(
markers1: dict,
markers2: dict,
):
"""Calculate overlap coefficient between the values of two dictionaries
Note: dict values must be sets
"""
overlap_coef=np.zeros((len(markers1), len(markers2)))
j=0
for marker_group in markers1:
tmp = [len(markers2[i].intersection(markers1[marker_group]))/
| python | {
"resource": ""
} |
q22285 | _calc_jaccard | train | def _calc_jaccard(
markers1: dict,
markers2: dict,
):
    """Calculate Jaccard index between the values of two dictionaries
Note: dict values must be sets
"""
jacc_results=np.zeros((len(markers1), len(markers2)))
| python | {
"resource": ""
} |
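For two marker sets A and B, the three scores computed by these helpers reduce to simple set arithmetic (marker names hypothetical):

```python
# two marker gene sets to compare
A = {"Gata1", "Klf1", "Epor"}
B = {"Gata1", "Klf1", "Cebpa", "Csf1r"}

inter = len(A & B)
overlap_count = inter                          # raw intersection size
overlap_coef = inter / min(len(A), len(B))     # normalized by the smaller set
jaccard = inter / len(A | B)                   # normalized by the union
```

The full helpers evaluate these scores for every pair of groups across the two dictionaries.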
q22286 | pca_overview | train | def pca_overview(adata, **params):
"""\
Plot PCA results.
The parameters are the ones of the scatter plot. Call pca_ranking separately
if you want to change the default settings.
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix.
color : string or list of strings, optional (default: `None`)
Keys for observation/cell annotation either as list `["ann1", "ann2"]` or
string `"ann1,ann2,..."`.
use_raw : `bool`, optional (default: `True`)
Use `raw` attribute of `adata` if present.
{scatter_bulk}
show : bool, optional (default: `None`)
Show the plot, do not return axis.
save : `bool` or `str`, optional (default: `None`)
| python | {
"resource": ""
} |
q22287 | pca_loadings | train | def pca_loadings(adata, components=None, show=None, save=None):
"""Rank genes according to contributions to PCs.
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix.
components : str or list of integers, optional
For example, ``'1,2,3'`` means ``[1, 2, 3]``, first, second, third
principal component.
show : bool, optional (default: `None`)
Show the plot, do not return axis.
save : `bool` or `str`, optional (default: `None`)
If `True` or a `str`, save the figure. A string is appended to the
default filename. Infer the filetype if ending on {'.pdf', '.png', '.svg'}.
| python | {
"resource": ""
} |
q22288 | pca_variance_ratio | train | def pca_variance_ratio(adata, n_pcs=30, log=False, show=None, save=None):
"""Plot the variance ratio.
Parameters
----------
n_pcs : `int`, optional (default: `30`)
Number of PCs to show.
log : `bool`, optional (default: `False`)
        Plot on logarithmic scale.
show : `bool`, optional (default: `None`)
| python | {
"resource": ""
} |
q22289 | dpt_timeseries | train | def dpt_timeseries(adata, color_map=None, show=None, save=None, as_heatmap=True):
"""Heatmap of pseudotime series.
Parameters
----------
    as_heatmap : bool (default: True)
Plot the timeseries as heatmap.
"""
if adata.n_vars > 100:
        logg.warn('Plotting more than 100 genes might take some while, '
                  'consider selecting only highly variable genes, for example.')
# only if number of genes is not too high
if as_heatmap:
| python | {
"resource": ""
} |
q22290 | dpt_groups_pseudotime | train | def dpt_groups_pseudotime(adata, color_map=None, palette=None, show=None, save=None):
"""Plot groups and pseudotime."""
pl.figure()
pl.subplot(211)
timeseries_subplot(adata.obs['dpt_groups'].cat.codes,
time=adata.obs['dpt_order'].values,
color=np.asarray(adata.obs['dpt_groups']),
highlightsX=adata.uns['dpt_changepoints'],
ylabel='dpt groups',
yticks=(np.arange(len(adata.obs['dpt_groups'].cat.categories), dtype=int)
if len(adata.obs['dpt_groups'].cat.categories) < 5 else None),
palette=palette)
pl.subplot(212) | python | {
"resource": ""
} |
q22291 | _rank_genes_groups_plot | train | def _rank_genes_groups_plot(adata, plot_type='heatmap', groups=None,
n_genes=10, groupby=None, key=None,
show=None, save=None, **kwds):
"""\
Plot ranking of genes using the specified plot type
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix.
groups : `str` or `list` of `str`
The groups for which to show the gene ranking.
n_genes : `int`, optional (default: 10)
Number of genes to show.
groupby : `str` or `None`, optional (default: `None`)
The key of the observation grouping to consider. By default,
the groupby is chosen from the rank genes groups parameter but
other groupby options can be used.
{show_save_ax}
"""
if key is None:
key = 'rank_genes_groups'
if 'dendrogram' not in kwds:
kwds['dendrogram'] = True
if groupby is None:
groupby = str(adata.uns[key]['params']['groupby'])
group_names = (adata.uns[key]['names'].dtype.names
if groups is None else groups)
gene_names = []
start = 0
group_positions = []
group_names_valid = []
for group in group_names:
# get all genes that are 'not-nan'
genes_list = [gene for gene in adata.uns[key]['names'][group] if not pd.isnull(gene)][:n_genes]
if len(genes_list) == 0:
logg.warn("No genes found for group {}".format(group))
continue
gene_names.extend(genes_list)
end = start + len(genes_list)
group_positions.append((start, end -1))
group_names_valid.append(group)
start = end
group_names = | python | {
"resource": ""
} |
q22292 | sim | train | def sim(adata, tmax_realization=None, as_heatmap=False, shuffle=False,
show=None, save=None):
"""Plot results of simulation.
Parameters
----------
as_heatmap : bool (default: False)
Plot the timeseries as heatmap.
    tmax_realization : int or None (default: None)
        Number of observations in one realization of the time series. The data
        matrix adata.X consists of concatenated realizations.
shuffle : bool, optional (default: False)
Shuffle the data.
save : `bool` or `str`, optional (default: `None`)
If `True` or a `str`, save the figure. A string is appended to the
default filename. Infer the filetype if ending on {{'.pdf', '.png', '.svg'}}.
show : bool, optional (default: `None`)
Show the plot, do not return axis.
"""
from ... import utils as sc_utils
if tmax_realization is not None: tmax = tmax_realization
elif 'tmax_write' in adata.uns: tmax = adata.uns['tmax_write']
else: tmax = adata.n_obs
n_realizations = adata.n_obs/tmax
if not shuffle:
if not as_heatmap:
timeseries(adata.X,
var_names=adata.var_names,
xlim=[0, 1.25*adata.n_obs],
highlightsX=np.arange(tmax, n_realizations*tmax, tmax),
| python | {
"resource": ""
} |
q22293 | cellbrowser | train | def cellbrowser(
adata, data_dir, data_name,
embedding_keys = None,
annot_keys = ["louvain", "percent_mito", "n_genes", "n_counts"],
cluster_field = "louvain",
nb_marker = 50,
skip_matrix = False,
html_dir = None,
port = None,
do_debug = False
):
"""
Export adata to a UCSC Cell Browser project directory. If `html_dir` is
set, subsequently build the html files from the project directory into
`html_dir`. If `port` is set, start an HTTP server in the background and
serve `html_dir` on `port`.
By default, export all gene expression data from `adata.raw`, the
annotations `louvain`, `percent_mito`, `n_genes` and `n_counts` and the top
`nb_marker` cluster markers. All existing files in data_dir are
overwritten, except cellbrowser.conf.
See `UCSC Cellbrowser <https://github.com/maximilianh/cellBrowser>`__ for
details.
Parameters
----------
adata : :class:`~anndata.AnnData`
Annotated data matrix
data_dir : `str`
Path to directory for exported Cell Browser files.
Usually these are the files `exprMatrix.tsv.gz`, `meta.tsv`,
coordinate files like `tsne.coords.tsv`,
and cluster marker gene lists like `markers.tsv`.
A file `cellbrowser.conf` is also created with pointers to these files.
As a result, each adata object should have its own project_dir.
data_name : `str`
Name of dataset in Cell Browser, a string without special characters.
This is written to `data_dir`/cellbrowser.conf.
Ideally this is a short unique name for the dataset,
like "pbmc3k" or "tabulamuris".
embedding_keys: `list` of `str` or `dict` of `key (str)`->`display label (str)`
2-D embeddings in `adata.obsm` to export.
The prefix "`X_`" or "`X_draw_graph_`" is not necessary.
Coordinates missing from `adata` are skipped.
By default, these keys are tried: ["tsne", "umap", "pagaFa", "pagaFr",
"pagaUmap", "phate", "fa", "fr", "kk", "drl", "rt"].
For these, default display labels are automatically used.
For other values, you can specify a dictionary instead of a list,
the values of the dictionary are then the display labels for the
coordinates, e.g. `{'tsne' : "t-SNE by Scanpy"}`
annot_keys: `list` of `str` or `dict` of `key (str)`->`display label (str)`
Annotations in `adata.obsm` to export.
Can be a dictionary with key -> display label.
skip_matrix: `boolean`
Do not export the matrix.
If you had previously exported this adata into the same `data_dir`,
then there is no need to export the whole matrix again.
This option will make the export a lot faster,
e.g. when only coordinates or meta data were changed.
html_dir: `str`
If this variable is set, the export will build html
files from | python | {
"resource": ""
} |
q22294 | umap | train |

def umap(adata, **kwargs) -> Union[Axes, List[Axes], None]:
    """\
    Scatter plot in UMAP basis.

    Parameters
    ----------
    {adata_color_etc}
    """
q22295 | tsne | train |

def tsne(adata, **kwargs) -> Union[Axes, List[Axes], None]:
    """\
    Scatter plot in tSNE basis.

    Parameters
    ----------
    {adata_color_etc}
    """
q22296 | diffmap | train |

def diffmap(adata, **kwargs) -> Union[Axes, List[Axes], None]:
    """\
    Scatter plot in Diffusion Map basis.

    Parameters
    ----------
    """
q22297 | draw_graph | train |

def draw_graph(adata, layout=None, **kwargs) -> Union[Axes, List[Axes], None]:
    """\
    Scatter plot in graph-drawing basis.

    Parameters
    ----------
    {adata_color_etc}
    layout : {{'fa', 'fr', 'drl', ...}}, optional (default: last computed)
        One of the `draw_graph` layouts, see
        :func:`~scanpy.api.tl.draw_graph`. By default, the last computed
        layout is used.
    {edges_arrows}
    {scatter_bulk}
    {show_save_ax}

    Returns
    -------
    If `show==False` a :class:`~matplotlib.axes.Axes` or a list of it.
    """
    if layout is None:
        layout = str(adata.uns['draw_graph']['params']['layout'])
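The "default to the last computed layout" behavior can be sketched in isolation. The nested key structure below mirrors what `adata.uns['draw_graph']` is assumed to hold after a `tl.draw_graph` run, and is not taken verbatim from scanpy:

```python
def resolve_layout(uns, layout=None, default="fa"):
    """Pick the draw_graph layout: an explicit argument wins,
    otherwise fall back to the last computed layout stored in `uns`."""
    if layout is None:
        # assumed storage location: uns['draw_graph']['params']['layout']
        layout = uns.get("draw_graph", {}).get("params", {}).get("layout", default)
    # the embedding key then follows the 'X_draw_graph_<layout>' convention
    return layout, "X_draw_graph_" + layout

print(resolve_layout({"draw_graph": {"params": {"layout": "fr"}}}))
# ('fr', 'X_draw_graph_fr')
print(resolve_layout({}, layout="drl"))
# ('drl', 'X_draw_graph_drl')
```

An explicit `layout=` argument always takes precedence; the stored value is only consulted when the caller passes `None`.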
q22298 | pca | train |

def pca(adata, **kwargs) -> Union[Axes, List[Axes], None]:
    """\
    Scatter plot in PCA coordinates.

    Parameters
    ----------
    {adata_color_etc}
    {scatter_bulk}
    {show_save_ax}

    Returns
    -------
    If `show==False` a :class:`~matplotlib.axes.Axes` or a list of it.
    """
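All of the basis-specific functions above (`umap`, `tsne`, `diffmap`, `pca`, and the layout-resolved `draw_graph`) share one shape: forward `adata` plus keyword arguments to a generic scatter routine with the basis name filled in. A minimal sketch of that pattern follows; the `plot_scatter` stand-in is hypothetical, not scanpy's internal function:

```python
def plot_scatter(adata, basis, **kwargs):
    # stand-in: real code would look up adata.obsm['X_' + basis] and draw it
    return {"basis": basis, **kwargs}

def make_basis_plot(basis):
    """Build a thin wrapper like umap()/tsne()/pca() for one embedding."""
    def plot(adata, **kwargs):
        return plot_scatter(adata, basis, **kwargs)
    plot.__name__ = basis
    return plot

umap_plot = make_basis_plot("umap")
result = umap_plot(None, color="louvain")
print(result)  # {'basis': 'umap', 'color': 'louvain'}
```

Keeping the wrappers this thin is why their docstrings can be assembled from shared fragments like `{adata_color_etc}` and `{show_save_ax}`: only the basis name and one or two extra parameters differ per function.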
q22299 | _add_legend_or_colorbar | train |

def _add_legend_or_colorbar(adata, ax, cax, categorical, value_to_plot, legend_loc,
                            scatter_array, legend_fontweight, legend_fontsize,
                            groups, multi_panel):
    """
    Adds a color bar or a legend to the given ax. A legend is added when the
    data is categorical; a color bar is added when a continuous value was used.
    """
    # add legends or colorbars
    if categorical is True:
        # add legend to figure
        categories = list(adata.obs[value_to_plot].cat.categories)
        colors = adata.uns[value_to_plot + '_colors']
        if multi_panel is True:
            # Shrink current axis by 10% to fit legend and match
            # size of plots that are not categorical
            box = ax.get_position()
            ax.set_position([box.x0, box.y0, box.width * 0.91, box.height])
        if groups is not None:
            # only label groups with the respective color
            colors = [colors[categories.index(x)] for x in groups]
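The categorical-versus-continuous split this helper makes can be reproduced with plain matplotlib: one scatter call per category plus `ax.legend` for categorical data, a single scatter with `c=` plus `fig.colorbar` for continuous data. This is a standalone sketch of that split, not the scanpy code path:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
xy = rng.random((30, 2))
fig, (ax_cat, ax_cont) = plt.subplots(1, 2, figsize=(8, 3))

# categorical: one scatter per category, then a legend outside the axes
labels = rng.choice(["A", "B"], size=30)
for cat in ("A", "B"):
    mask = labels == cat
    ax_cat.scatter(xy[mask, 0], xy[mask, 1], label=cat)
ax_cat.legend(loc="center left", bbox_to_anchor=(1, 0.5))

# continuous: a single scatter colored by value, then a colorbar
mappable = ax_cont.scatter(xy[:, 0], xy[:, 1], c=rng.random(30))
fig.colorbar(mappable, ax=ax_cont)

n_legend_entries = len(ax_cat.get_legend().get_texts())
print(n_legend_entries)  # 2
```

Placing the legend with `bbox_to_anchor=(1, 0.5)` puts it outside the plotting area, which is why the helper above shrinks the axis width when a legend must share a multi-panel figure.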