Computing connectivities using UMAP gives us quantitatively different results for the pseudotime.
sc.pp.neighbors(adata, n_neighbors=20, use_rep='X', method='umap')
sc.tl.diffmap(adata)
sc.tl.dpt(adata, n_branchings=1)
sc.pl.diffmap(adata, color=['dpt_pseudotime', 'dpt_groups', 'paul15_clusters'])
170502_paul15/paul15.ipynb
theislab/scanpy_usage
bsd-3-clause
Two variant implementations of the Theis well function. W_theis0: exp1 directly from scipy.special. W_theis1: by integration using scipy and numpy functionality.
def W_theis0(u):
    """Return Theis well function using scipy.special function exp1 directly."""
    return exp1(u)

def W_theis1(u):
    """Return Theis well function by integrating using scipy functionality.

    This turns out to be a very accurate yet fast implementation, about as fast as the exp1 function ...
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
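The cell above is truncated, so as a sketch only: here W_theis1 is reconstructed with scipy.integrate.quad (an assumption — the notebook's own integration scheme is elided), integrating the well-function integrand exp(-y)/y from u to infinity:

```python
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

def W_theis0(u):
    """Theis well function via scipy.special.exp1 directly."""
    return exp1(u)

def W_theis1(u):
    """Theis well function W(u) = int_u^inf exp(-y)/y dy by numerical integration.

    Sketch only: quad stands in for the notebook's (elided) integration scheme.
    """
    val, _ = quad(lambda y: np.exp(-y) / y, u, np.inf)
    return val

print(abs(W_theis0(0.5) - W_theis1(0.5)) < 1e-8)  # True: the two variants agree
```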
Four variant implementations of the Hantush well function
def W_hantush0(u, rho, tol=1e-14):
    '''Hantush well function implemented as power series.

    This implementation works but has a limited reach; for very small
    values of u (u < 0.001) the solution will deteriorate into nonsense.
    '''
    tau = (rho/2)**2 / u
    f0 = 1
    E = exp1(u)
    w0 = f0 * E
    W ...
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
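Since the power-series cell is elided, a useful cross-check is direct numerical integration of the Hantush well function's defining integral (a hypothetical W_hantush_quad, not one of the notebook's four variants). For rho = 0 it reduces to the Theis function:

```python
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

def W_hantush_quad(u, rho):
    """Hantush well function W(u, rho) = int_u^inf exp(-y - rho**2/(4*y))/y dy.

    Hypothetical reference implementation by numerical integration; slow but
    handy for validating series implementations such as W_hantush0.
    """
    val, _ = quad(lambda y: np.exp(-y - rho**2 / (4 * y)) / y, u, np.inf)
    return val

print(abs(W_hantush_quad(0.5, 0.0) - exp1(0.5)) < 1e-8)  # True: reduces to Theis
```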
Results of the timing Theis: W_theis0 : 6.06 µs ± 261 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) W_theis1(u) : 7.11 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) W_theis2(u) : 299 µs ± 6.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) W_theis3(u) : 553 µs ± 33.7 µs p...
rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)
ax = newfig('Hantush type curves', '1/u', 'Wh(u, rho)', xscale='log', yscale='log')
ax.plot(1/u, W_theis0(u), lw=3, label='Theis', zorder=100)
for rho in rhos:
    ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
    ax.plot(1/u, W_hantush3(...
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Crawl the article
resp = requests.get(ARTICLE_URL, cookies={'over18': '1'})
assert resp.status_code == 200

soup = BeautifulSoup(resp.text, 'lxml')
main_content = soup.find(id='main-content')
img_link = main_content.findAll('a', recursive=False)
pprint(img_link)
appendix_ptt/03_crawl_image.ipynb
afunTW/dsc-crawling
apache-2.0
Check and download the images
def check_and_download_img(url, savedir='download_img'):
    image_resp = requests.get(url, stream=True)
    image = Image.open(image_resp.raw)
    filename = os.path.basename(url)
    # check format
    real_filename = '{}.{}'.format(
        filename.split('.')[0], image.format.lower()
    )
    print('c...
appendix_ptt/03_crawl_image.ipynb
afunTW/dsc-crawling
apache-2.0
interactions is a matrix with entries equal to 1 if the i-th user posted an answer to the j-th question; the goal is to recommend the questions to users who might answer them. question_features is a sparse matrix containing question metadata in the form of tags. vectorizer is a sklearn.feature_extraction.DictVectorize...
print(repr(interactions))
print(repr(question_features))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
The tags matrix contains rows such as
print(question_vectorizer.inverse_transform(question_features[:3]))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
User features are exactly what we would expect from processing raw text:
print(user_vectorizer.inverse_transform(user_features[2]))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Fitting models Train/test split We can split the dataset into train and test sets by using utility functions defined in model.py.
import model
import inspect

print(inspect.getsource(model.train_test_split))
train, test = model.train_test_split(interactions)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
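The split's source is only printed above, not shown; a minimal sketch of such an interactions split (assuming a scipy.sparse interactions matrix and random assignment of observed entries — a hypothetical helper, not the real model.train_test_split) might look like:

```python
import numpy as np
import scipy.sparse as sp

def split_interactions(interactions, test_fraction=0.2, seed=0):
    """Randomly assign each observed interaction to train or test (sketch)."""
    coo = sp.coo_matrix(interactions)
    rng = np.random.RandomState(seed)
    in_test = rng.rand(coo.nnz) < test_fraction

    def subset(mask):
        # keep only the masked entries, preserving the full matrix shape
        return sp.coo_matrix((coo.data[mask], (coo.row[mask], coo.col[mask])),
                             shape=coo.shape)

    return subset(~in_test), subset(in_test)

train, test = split_interactions(sp.eye(100).tocoo(), test_fraction=0.2)
print(train.nnz + test.nnz)  # 100: every interaction lands in exactly one split
```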
Traditional MF model Let's start with a traditional collaborative filtering model that does not use any metadata. We can do this using lightfm -- we simply do not pass in any metadata matrices. We'll use the following function to train a WARP model.
print(inspect.getsource(model.fit_lightfm_model))
mf_model = model.fit_lightfm_model(train, epochs=1)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
The following function will compute the AUC score on the test set:
print(inspect.getsource(model.auc_lightfm))
mf_score = model.auc_lightfm(mf_model, test)
print(mf_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
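For intuition, the AUC here is the probability that a randomly chosen positive (answered) question is ranked above a randomly chosen negative one. A tiny numpy-only sketch (hypothetical helper, not model.auc_lightfm):

```python
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half. Sketch for intuition only."""
    pos = np.asarray(pos_scores, float)[:, None]
    neg = np.asarray(neg_scores, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.shape[0] * neg.shape[1])

print(pairwise_auc([2.0, 3.0], [1.0, 1.0]))  # 1.0: every positive outranks every negative
print(pairwise_auc([1.0], [1.0]))            # 0.5: a tie counts as a coin flip
```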
Oops. That's worse than random (possibly due to overfitting). In this case, this is because the CrossValidated dataset is very sparse: there just aren't enough interactions to support a traditional collaborative filtering model. In general, we'd also like to recommend questions that have no answers yet, making the col...
print(inspect.getsource(model.fit_content_models))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Running this and evaluating the AUC score gives
content_models = model.fit_content_models(train, question_features)
content_score = model.auc_content_models(content_models, test, question_features)
print(content_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
That's a bit better, but not great. In addition, a linear model of this form fails to capture tag similarity. For example, probit and logistic regression are closely related, yet the model will not automatically infer knowledge of one from knowledge of the other. Hybrid LightFM model What happens if we estimate the LightFM ...
lightfm_model = model.fit_lightfm_model(train, post_features=question_features)
lightfm_score = model.auc_lightfm(lightfm_model, test, post_features=question_features)
print(lightfm_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
We can add user features on top for a small additional improvement:
lightfm_model = model.fit_lightfm_model(train, post_features=question_features,
                                        user_features=user_features)
lightfm_score = model.auc_lightfm(lightfm_model, test, post_features=question_features,
                                  user_features=user_features)
print(lightfm_score...
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
This is quite a bit better, illustrating the fact that an embedding-based model can capture more interesting relationships between content features. Feature embeddings One additional advantage of metadata-based latent models is that they give us useful latent representations of the metadata features themselves --- much...
print(inspect.getsource(model.similar_tags))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
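The similar_tags source is only printed, not shown; its core idea — cosine similarity between rows of the tag embedding matrix — can be sketched as (hypothetical helper and toy embeddings):

```python
import numpy as np

def most_similar(embeddings, idx, topn=5):
    """Indices of the rows most cosine-similar to row idx (sketch)."""
    norms = np.linalg.norm(embeddings, axis=1) + 1e-12
    sims = embeddings @ embeddings[idx] / (norms * norms[idx])
    order = np.argsort(-sims)            # highest similarity first
    return [int(i) for i in order if i != idx][:topn]

# toy 2-D embeddings: rows 0 and 1 point the same way, row 2 is orthogonal
E = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(most_similar(E, 0, topn=1))  # [1]
```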
Let's demonstrate this.
for tag in ['bayesian', 'regression', 'survival', 'p-value']: print('Tags similar to %s:' % tag) print(model.similar_tags(lightfm_model, question_vectorizer, tag)[:5])
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
However, we still have the challenge of visually associating the value of the prices in a neighborhood with the value of the spatial lag of values for the focal unit. The latter is a weighted average of homicide rates in the focal county's neighborhood. To complement the geovisualization of these associations we can tu...
y.median()
yb = y > y.median()
sum(yb)
notebooks/explore/esda/Spatial Autocorrelation for Areal Unit Data.ipynb
pysal/pysal
bsd-3-clause
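The spatial lag described above — a weighted average of neighboring values — can be sketched with a toy row-standardized weights matrix (a hypothetical 3-unit example, not the homicide data):

```python
import numpy as np

# toy adjacency: unit 0 neighbors units 1 and 2; units 1 and 2 neighbor only unit 0
W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
W /= W.sum(axis=1, keepdims=True)  # row-standardize so each row averages its neighbors
y = np.array([10., 20., 30.])
lag = W @ y                        # spatial lag: the neighborhood mean for each unit
print(lag)  # [25. 10. 10.]
```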
Let's make a significant mass ratio and radius ratio...
b['q'] = 0.7
b['requiv@primary'] = 1.0
b['requiv@secondary'] = 0.5
b['teff@secondary@component'] = 5000
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets Now we'll add radial velocity and mesh datasets. We'll add two identical datasets for RVs so that we can have one computed dynamically and the other computed numerically (these options will need to be set later).
b.add_dataset('rv', times=np.linspace(0,2,201), dataset='dynamicalrvs')
b.add_dataset('rv', times=np.linspace(0,2,201), dataset='numericalrvs')
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Storing the mesh at every timestep is overkill, and will be both computationally and memory intensive. So let's just sample at the times we care about.
times = b.get_value('times@primary@numericalrvs@dataset')
times = times[times<0.1]
print times
b.add_dataset('mesh', dataset='mesh01', times=times, columns=['vws', 'rvs*'])
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting Now let's plot the radial velocities. First we'll plot the dynamical RVs. Note that dynamical RVs show the true radial velocity of the center of mass of each star, and so we do not see the Rossiter McLaughlin effect.
afig, mplfig = b['dynamicalrvs@model'].plot(c={'primary': 'b', 'secondary': 'r'}, show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter McLaughlin effect. You'll also notice that RVs are not available for the secondary star when it's completely occulted (they're nans in the ...
afig, mplfig = b['numericalrvs@model'].plot(c={'primary': 'b', 'secondary': 'r'}, show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
To visualize what is happening, we can plot the radial velocities of each surface element in the mesh at one of these times. Here we just plot on the mesh@model parameterset - the mesh will automatically get coordinates from mesh01, and then we point to rvs@numericalrvs for the facecolors.
afig, mplfig = b['mesh@model'].plot(time=0.03, fc='rvs@numericalrvs', ec="None", show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Here you can see that the secondary star is blocking part of the "red" RVs of the primary star. This is essentially the same as plotting the negative z-component of the velocities (by convention, our system is right-handed with +z towards the viewer, but the RV convention has negative RVs for blueshifts). We...
afig, mplfig = b['mesh01@model'].plot(time=0.09, fc='vws', ec="None", show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
About IPython From Python for Data Analysis: The IPython project began in 2001 as Fernando Perez's side project to make a better interactive Python interpreter. In the subsequent 11 years it has grown into what's widely considered one of the most important tools in the modern scientific Python computing stack. While i...
b = [1, 2, 3]
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
In [2]: b.<Tab>
b.append  b.extend  b.insert  b.remove  b.sort  b.count  b.index  b.pop  b.reverse
Tab completion works in many contexts outside of searching the interactive namespace and completing object or module attributes. When typing anything that looks like a file path (even in a P...
b?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        list
String form: [1, 2, 3]
Length:      3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
```
This is referred to as object introspection. If the object is a function or instance method, the docstring, if defined, will also be shown. Suppose we'd written ...
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Then using ? shows us the docstring:
add_numbers?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        function
String form: <function add_numbers at 0x7facfc177488>
File:        /home/alethiometryst/mathlan/public_html/courses/python/course-material/ipynbs/<ipython-input-11-5b88597b2522>
Definition:  add_numbers(a, b)
Docstring:
Add two numbers together

Returns
-------
the_sum : type of arguments
```
Using ?? ...
add_numbers??
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        function
String form: <function add_numbers at 0x7facfc177488>
File:        /home/alethiometryst/mathlan/public_html/courses/python/course-material/ipynbs/<ipython-input-11-5b88597b2522>
Definition:  add_numbers(a, b)
Source:
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    t...
```
import numpy as np
np.*load*?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
np.load  np.loads  np.loadtxt  np.pkgload
The %run Command Any file can be run as a Python program inside the environment of your IPython session using the %run command. Keyboard Shortcuts IPython has many keyboard shortcuts for navigating the prompt (which will be familiar to users of the Emacs text editor or the UNIX b...
def func(a):
    return a + 2

func(3, 4)
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Magic Commands IPython has many special commands, known as "magic" commands, which are designed to facilitate common tasks and enable you to easily control the behavior of the IPython system. A magic command is any command prefixed by the percent symbol %. Magic commands can be viewed as command line programs to be run...
def square_one(x):
    return x ** 2

def square_two(x):
    return x * x

%timeit square_one(500)
%timeit square_two(500)
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
It's good to remember that $x\cdot x$ is always faster to compute than $x^2$. %timeit can be used for more complicated functions. For example, consider the Fibonacci numbers, which are computed according to the following rule: \begin{align} F_1 &= 1 \\ F_2 &= 1 \\ F_n &= F_{n-1} + F_{n-2} \end{align} So, $F_3$ is the sum...
# Recursive implementation
def fibonacci_one(n):
    if n == 1 or n == 2:
        return 1
    else:
        return fibonacci_one(n-1) + fibonacci_one(n-2)

# Iterative implementation
def fibonacci_two(n):
    a = 1
    b = 1
    if n < 3:
        return 1
    for i in range(3, n+1):
        c = a
        a += b
        ...
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
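The iterative implementation is truncated above; as a completion sketch (the loop body is a guess at the elided code, written with tuple assignment):

```python
# Recursive implementation (as in the notebook)
def fibonacci_one(n):
    if n == 1 or n == 2:
        return 1
    return fibonacci_one(n - 1) + fibonacci_one(n - 2)

# Iterative implementation: sketch of the truncated cell
def fibonacci_two(n):
    a = b = 1
    for _ in range(3, n + 1):
        a, b = b, a + b   # slide the (F_{n-2}, F_{n-1}) window forward
    return b

print(fibonacci_one(10), fibonacci_two(10))  # 55 55
```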
Remember that there are 1000 nanoseconds per microsecond. In other words, the iterative implementation is 150 times faster than the recursive one. Debugging Anyone who has dealt with computers for any length of time has had to deal with bugs in their code (I'm looking at you, 161ers). Sometimes you make a (or many) sm...
import matplotlib.pyplot as plt
%matplotlib inline

# A simple, naive solution
def invert(list_of_doubles):
    inverted_list = []
    for i in list_of_doubles:
        inverted_list.append(1./i)
    return inverted_list

# A silly, useless function
def plot_demo(inverted_list):
    plt.plot(inverted_list)
    plt.titl...
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Everything looks good, so what's the problem?
plot_demo(invert([1., 2., 3., 4., 5., 6., 7., 0.]))
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Okay, so we're getting a divide by zero... let's run %debug to see if it can enlighten us on where the function broke.
%debug
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Debugger commands This example was extremely simplified, but %debug is a very useful command to have in your toolbelt. Here are the key commands inside the debugger to help you navigate. | Command | Action | Command | Action | |---------|--------|---------|--------| |h(elp) | Display command list |s(tep)| Step into fun...
from numpy import random

def is_sorted(lst):
    for i in range(1, len(lst)):
        if lst[i] < lst[i-1]:
            return False
    return True

def bubblesort(lst):
    while not is_sorted(lst):
        for i in range(0, len(lst) - 1):
            if lst[i+1] < lst[i]:
                temp = lst[i]
                ...
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
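The bubblesort cell is cut off; a minimal completion sketch (the swap via tuple assignment is an assumption where the original used a temp variable):

```python
def is_sorted(lst):
    for i in range(1, len(lst)):
        if lst[i] < lst[i - 1]:
            return False
    return True

def bubblesort(lst):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    while not is_sorted(lst):
        for i in range(len(lst) - 1):
            if lst[i + 1] < lst[i]:
                lst[i], lst[i + 1] = lst[i + 1], lst[i]
    return lst

print(bubblesort([5, 3, 8, 1]))  # [1, 3, 5, 8]
```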
Spatiotemporal permutation F-test on full sensor data Tests for differential evoked responses in at least one condition using a permutation clustering test. The FieldTrip neighbor templates will be used to determine the adjacency between sensors. This serves as a spatial prior to the clustering. Significant spatiotempo...
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mne.viz import plot_topomap
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
fro...
0.15/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud_L': 1, 'Aud_R': 2, 'Vis_L': 3, 'Vis_R': 4}
tmin = -0.2
tmax = 0.5

# Setup for reading the raw data
raw = mne.io.read_raw_fif(...
0.15/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compare Cumulative Return
aapl['Cumulative'] = aapl['Close'] / aapl['Close'].iloc[0]
spy_etf['Cumulative'] = spy_etf['Close'] / spy_etf['Close'].iloc[0]
aapl['Cumulative'].plot(label = 'AAPL', figsize = (10,8))
spy_etf['Cumulative'].plot(label = 'SPY Index')
plt.legend()
plt.title('Cumulative Return')
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
Get Daily Return
aapl['Daily Return'] = aapl['Close'].pct_change(1)
spy_etf['Daily Return'] = spy_etf['Close'].pct_change(1)
fig = plt.figure(figsize = (12, 8))
plt.scatter(aapl['Daily Return'], spy_etf['Daily Return'], alpha = 0.3)
aapl['Daily Return'].hist(bins = 100, figsize = (12, 8))
spy_etf['Daily Return'].hist(bin...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
What if our stock was completely related to SP500?
spy_etf['Daily Return'].head()

import numpy as np
noise = np.random.normal(0, 0.001, len(spy_etf['Daily Return'].iloc[1:]))
noise
spy_etf['Daily Return'].iloc[1:] + noise
beta, alpha, r_value, p_value, std_err = stats.linregress(spy_etf['Daily Return'].iloc[1:]+noise, ...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
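The linregress call above is truncated; the idea — regressing a stock's daily returns on the index's to estimate beta — can be sketched with synthetic data (hypothetical returns, not the AAPL/SPY series):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
index_ret = rng.normal(0.0, 0.01, 500)                 # synthetic "market" daily returns
stock_ret = index_ret + rng.normal(0.0, 0.001, 500)   # a stock tracking the market closely

beta, alpha, r_value, p_value, std_err = stats.linregress(index_ret, stock_ret)
print(round(beta, 2))  # close to 1.0: the stock moves one-for-one with the index
```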
Question: Suppose there is a feature with a score far better than the other features. Is this feature a good choice? Unsupervised Learning: Principal Component Analysis (PCA) Principal component analysis is a dimensionality reduction algorithm that we can use to find structure in our data. The main aim is to find surface o...
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import gzip, pickle, sys

f = gzip.open('Datasets/mnist.pkl.gz', 'rb')
(input_train, output_train), (input_test, output_test), _ = pickle.load(f, encoding='bytes')
for i in ran...
07-pca-feature-selection.ipynb
msadegh97/machine-learning-course
gpl-3.0
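Before applying it to MNIST, PCA itself can be sketched in a few lines via SVD of the centered data (a numpy-only, hypothetical pca helper rather than sklearn's):

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                  # coordinates in the PC basis

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
Z = pca(X, 2)
print(Z.shape)  # (200, 2)
```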
Array CNVs I'm going to make a table of all CNVs identified by arrays. Some iPSC didn't have any CNVs. For now, if an iPSC is not in the CNV table, that means that it either didn't have CNVs or we didn't test that clone/passage number for CNVs.
cnv = baseline_cnv.merge(baseline_snpa, left_on='snpa_id', right_index=True,
                         suffixes=['_cnv', '_snpa'])
cnv = cnv.merge(baseline_analyte, left_on='analyte_id', right_index=True,
                suffixes=['_cnv', '_analyte'])
cnv = cnv.merge(baseline_tissue, left_on='tissue_id', right_index=Tru...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
RNA-seq Samples for this Study I'm going to use baseline and family 1070 samples.
# Get family1070 samples.
tdf = family1070_rnas[family1070_rnas.comment.isnull()]
tdf = tdf.merge(family1070_tissue, left_on='tissue_id', right_index=True,
                suffixes=['_rna', '_tissue'])
tdf = tdf[tdf.cell_type == 'iPSC']
tdf.index = tdf.rnas_id
tdf['status'] = data_rnas.ix[tdf.index, 'status']
tdf = td...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
I can use all of these samples that passed QC for various expression analyses. eQTL samples Now I'm going to identify one sample per subject to use for eQTL analysis. I'll start by keeping samples whose clone/passage number matches up with those from the 222 cohort.
rna['in_eqtl'] = False
samples = (cohort222.subject_id + ':' + cohort222.clone.astype(int).astype(str) +
           ':' + cohort222.passage.astype(int).astype(str))
t = rna.dropna(subset=['passage'])
t.loc[:, ('sample')] = (t.subject_id + ':' + t.clone.astype(int).astype(str) +
                        ':' + t.passa...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in any samples for which we have CNVs but weren't in the 222.
samples = (cnv.subject_id + ':' + cnv.clone.astype(int).astype(str) +
           ':' + cnv.passage.astype(int).astype(str))
t = rna.dropna(subset=['passage'])
t.loc[:, ('sample')] = (t.subject_id + ':' + t.clone.astype(int).astype(str) +
                        ':' + t.passage.astype(int).astype(str))
t = t[t['sampl...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in samples where the clone was in the 222 but we don't have the same passage number.
samples = (cohort222.subject_id + ':' + cohort222.clone.astype(int).astype(str))
t = rna[rna.in_eqtl == False]
t = t[t.subject_id.apply(lambda x: x not in rna.ix[rna.in_eqtl, 'subject_id'].values)]
t['samples'] = t.subject_id + ':' + t.clone.astype(int).astype(str)
t = t[t.samples.apply(lambda x: x in samples.values)]...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in any samples from subjects we don't yet have in the eQTL analysis.
t = rna[rna.in_eqtl == False]
t = t[t.subject_id.apply(lambda x: x not in rna.ix[rna.in_eqtl, 'subject_id'].values)]
rna.ix[t.index, 'in_eqtl'] = True
n = rna.in_eqtl.value_counts()[True]
print('We potentially have {} distinct subjects in the eQTL analysis.'.format(n))
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
WGS Samples Now I'll assign WGS IDs for each RNA-seq sample. Some subjects have multiple WGS samples for different cell types. I'll preferentially use blood, fibroblast, and finally iPSC WGS.
wgs = baseline_wgs.merge(baseline_analyte, left_on='analyte_id', right_index=True,
                         suffixes=['_wgs', '_analyte'])
wgs = wgs.merge(baseline_tissue, left_on='tissue_id', right_index=True,
                suffixes=['_wgs', '_tissue'])
wgs = wgs.merge(baseline_analyte, left_on='analyte_id', right...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
I'm going to keep one WGS sample per person in the cohort (preferentially blood, fibroblast, and finally iPSC) even if we don't have RNA-seq in case we want to look at phasing etc.
vc = wgs.subject_id.value_counts()
vc = vc[vc > 1]
keep = []
for s in vc.index:
    t = wgs[wgs.subject_id == s]
    if t.shape[0] == 1:
        keep.append(t.index[0])
    elif t.shape[0] > 1:
        if 'Blood' in t.source.values:
            t = t[t.source == 'Blood']
        elif 'iPSC' in t.source.values:
            ...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
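The selection loop above is truncated; the blood > fibroblast > iPSC preference can also be sketched with a rank column and drop_duplicates (toy data, not the real wgs table):

```python
import pandas as pd

# toy stand-in for the wgs table
wgs = pd.DataFrame({
    'subject_id': ['s1', 's1', 's2', 's2', 's3'],
    'source':     ['iPSC', 'Blood', 'Fibroblast', 'iPSC', 'iPSC'],
})
priority = {'Blood': 0, 'Fibroblast': 1, 'iPSC': 2}
keep = (wgs.assign(rank=wgs.source.map(priority))
           .sort_values('rank')           # most-preferred source first
           .drop_duplicates('subject_id'))  # keep one row per subject
print(sorted(zip(keep.subject_id, keep.source)))
# [('s1', 'Blood'), ('s2', 'Fibroblast'), ('s3', 'iPSC')]
```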
RNA-seq Data
dy = '/projects/CARDIPS/pipeline/RNAseq/combined_files'

# STAR logs.
fn = os.path.join(dy, 'star_logs.tsv')
logs = pd.read_table(fn, index_col=0, low_memory=False)
logs = logs.ix[rna.index]
logs.index.name = 'sample_id'
fn = os.path.join(outdir, 'star_logs.tsv')
if not os.path.exists(fn):
    logs.to_csv(fn, sep='\t')...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Variant Calls
fn = os.path.join(private_outdir, 'autosomal_variants.vcf.gz')
if not os.path.exists(fn):
    os.symlink('/projects/CARDIPS/pipeline/WGS/mergedVCF/CARDIPS_201512.PASS.vcf.gz', fn)
    os.symlink('/projects/CARDIPS/pipeline/WGS/mergedVCF/CARDIPS_201512.PASS.vcf.gz.tbi', fn + '.tbi')
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
External Data I'm going to use the expression estimates for some samples from GSE73211.
fn = os.path.join(outdir, 'GSE73211.tsv')
if not os.path.exists(fn):
    os.symlink('/projects/CARDIPS/pipeline/RNAseq/combined_files/GSE73211.tsv', fn)
GSE73211 = pd.read_table(fn, index_col=0)

dy = '/projects/CARDIPS/pipeline/RNAseq/combined_files'
fn = os.path.join(dy, 'rsem_tpm.tsv')
tpm = pd.read_table(fn, index_...
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
The Data object is initialized with the path to the directory of .pickle files. On creation it reads in the pickle files, but does not transform the data.
data = ICO.Data(os.getcwd()+'/data/')
ICO Data Object Example.ipynb
MATH497project/MATH497-DiabeticRetinopathy
mit
The data object can be accessed like a dictionary of the underlying DataFrames. These will be transformed on their first access into a normalized form. (This might take a while for the first access.)
start = time.time()
data["all_encounter_data"]
print(time.time() - start)

data["all_encounter_data"].describe(include='all')
data["all_encounter_data"].columns.values
data['all_encounter_data'].shape[0]
data['all_encounter_data'].to_pickle('all_encounter_data_Dan_20170415.pickle')

start = time.time()
data["all_per...
ICO Data Object Example.ipynb
MATH497project/MATH497-DiabeticRetinopathy
mit
The "random" agent selects (uniformly) at random from the set of valid moves. In Connect Four, a move is considered valid if there's still space in the column to place a disc (i.e., if the board has seven rows, the column has fewer than seven discs). In the code cell below, this agent plays one game round against a co...
# Two random agents play one game round
env.run(["random", "random"])

# Show the game
env.render(mode="ipython")
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
You can use the player above to view the game in detail: every move is captured and can be replayed. Try this now! As you'll soon see, this information will prove incredibly useful for brainstorming ways to improve our agents. Defining agents To participate in the competition, you'll create your own agents. Your age...
#$HIDE_INPUT$
import random
import numpy as np

# Selects random valid column
def agent_random(obs, config):
    valid_moves = [col for col in range(config.columns) if obs.board[col] == 0]
    return random.choice(valid_moves)

# Selects middle column
def agent_middle(obs, config):
    return config.columns//2

# Selec...
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
So, what are obs and config, exactly? obs obs contains two pieces of information: - obs.board - the game board (a Python list with one item for each grid location) - obs.mark - the piece assigned to the agent (either 1 or 2) obs.board is a Python list that shows the locations of the discs, where the first row appears f...
# Agents play one game round
env.run([agent_leftmost, agent_random])

# Show the game
env.render(mode="ipython")
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
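An agent that relies only on the two fields described above can be exercised without the kaggle-environments package by faking obs and config (the SimpleNamespace stand-ins are an assumption; the real objects expose the same attributes):

```python
import random
from types import SimpleNamespace

# hypothetical stand-ins for the competition's obs/config objects
config = SimpleNamespace(rows=6, columns=7, inarow=4)
obs = SimpleNamespace(board=[0] * (config.rows * config.columns), mark=1)

def agent_random(obs, config):
    # a column is valid while its top cell (one of the first `columns` entries) is empty
    valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]
    return random.choice(valid_moves)

move = agent_random(obs, config)
print(0 <= move < config.columns)  # True: the move is always a valid column index
```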
The outcome of a single game is usually not enough information to figure out how well our agents are likely to perform. To get a better idea, we'll calculate the win percentages for each agent, averaged over multiple games. For fairness, each agent goes first half of the time. To do this, we'll use the get_win_percen...
#$HIDE_INPUT$
def get_win_percentages(agent1, agent2, n_rounds=100):
    # Use default Connect Four setup
    config = {'rows': 6, 'columns': 7, 'inarow': 4}
    # Agent 1 goes first (roughly) half the time
    outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2)
    # Agent 2 goes first...
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
Which agent do you think performs better against the random agent: the agent that always plays in the middle (agent_middle), or the agent that chooses the leftmost valid column (agent_leftmost)? Let's find out!
get_win_percentages(agent1=agent_middle, agent2=agent_random)
get_win_percentages(agent1=agent_leftmost, agent2=agent_random)
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
Change $\lambda$ below to see its effect on the profile shape.
pohlPlot(lam=7)
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 1 What value of $\lambda$ denotes separated flow? $\lambda<-12$ $\lambda=0$ $\lambda>12$ Using the Pohlhausen profile, the various factors in the momentum integral equation are defined as $\frac{\delta_1}{\delta} = \int_0^1 (1-f)\, d\eta = \frac{3}{10}-\frac{\lambda}{120}$ $\frac{\delta_2}{\delta} = \int_0^1 f(1-f) ...
def disp_ratio(lam): return 3./10.-lam/120.
def mom_ratio(lam): return 37./315.-lam/945.-lam**2/9072.
def df_0(lam): return 2+lam/6.

pyplot.xlabel(r'$\lambda$', fontsize=16)
lam = numpy.linspace(-12,12,100)
pyplot.plot(lam,disp_ratio(lam), lw=2, label=r'$\delta_1/\delta$')
pyplot.plot(lam,mom_ratio(lam), lw=2, label=r...
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Note that these are all polynomial functions of $\lambda$. Since $u_e$ is given by potential flow and $\lambda = \frac{\delta^2}{\nu} u_e'$, the only unknown in the momentum equation is now $\delta(x)$! Stagnation point condition Now we need to write the momentum equation in terms of $\delta$ (and $\lambda$) and solve. ...
def g_1(lam): return df_0(lam)-lam*(disp_ratio(lam)+2*mom_ratio(lam))
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Using this definition, the momentum equation is $$ g_1(\lambda) = Re_\delta \delta_2'$$ Quiz 3 The equation above further simplifies at the stagnation point. Which is correct? $g_1 = 0$ $g_1 = Re_\delta$ $\frac 12 c_f = 0$ Solving this equation will determine our initial condition $\lambda_0$. Using my vast googl...
from scipy.optimize import bisect

lam0 = bisect(g_1,-12,12)  # use bisect method to find root between -12...12
print 'lambda_0 = ',lam0
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
With the value of $\lambda_0$ determined, the initial condition $\delta_0$ is simply $$ \delta_0 = \sqrt{\frac{\nu \lambda_0}{u_e'(x_0)}} $$ Pohlhausen momentum equation The only thing left to do is write $\delta_2'$ in terms of $\delta'$. Using $F=\frac{\delta_2}\delta$ we have $$ \delta_2' = \frac{d}{dx}(F\delta) $$ ...
def ddx_delta(Re_d,lam):
    if Re_d==0: return 0                    # Stagnation point condition
    return g_1(lam)/mom_ratio(lam)/Re_d    # delta'
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Let's plot the functions of $\lambda$ to get a feel for how the boundary layer will develop.
pyplot.xlabel(r'$\lambda$', fontsize=16)
pyplot.ylabel(r'$g_1/F$', fontsize=16)
pyplot.plot(lam,ddx_delta(1,lam), lw=2)
pyplot.scatter(lam0,0, s=100, c='r')
pyplot.text(lam0,3, r'$\lambda_0$',fontsize=15)
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 4 What will happen if $\lambda>\lambda_0$? Flat plate boundary layer flow. The boundary layer will shrink. The Pohlhausen equation will be singular. Ordinary differential equations The momentum equation above is an ordinary differential equation (ODE), having the form $$ \psi' = g(\psi(x),x) $$ where the derivat...
def heun(g,psi_i,i,dx,*args):
    g_i = g(psi_i,i,*args)            # integrand at i
    tilde_psi = psi_i+g_i*dx          # predicted estimate at i+1
    g_i_1 = g(tilde_psi,i+1,*args)    # integrand at i+1
    return psi_i+0.5*(g_i+g_i_1)*dx   # corrected estimate
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
In this code we've made the integrand g a function of psi and the index i. Note that g_i_1 $= g_{i+1}$ and we've passed $i+1$ as the index. We've also left the option for additional arguments to be passed to g as *args, which is required for the boundary layer ODE. Before we get to that, let's test heun using $\psi'=\psi$...
N = 20 # number of points x = numpy.linspace(0,numpy.pi,N) # set up x array from 0..pi psi = numpy.full_like(x,1.) # psi array with psi0=1 def g_test(psi,i): return psi # define derivative function for i in range(N-1): #...
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
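As a sanity check on the scheme (not part of the original lesson), halving the step size should cut the error by roughly a factor of four, since Heun's method is second-order accurate. A self-contained sketch on the same test problem $\psi'=\psi$:

```python
import math

def heun(g, psi_i, i, dx):
    """One predictor-corrector step, mirroring the heun() defined above."""
    g_i = g(psi_i, i)                         # integrand at i
    tilde_psi = psi_i + g_i * dx              # predicted estimate at i+1
    return psi_i + 0.5 * (g_i + g(tilde_psi, i + 1)) * dx   # corrected estimate

def solve(N):
    """Integrate psi' = psi from psi(0)=1 over [0, pi] using N points."""
    dx = math.pi / (N - 1)
    psi = 1.0
    for i in range(N - 1):
        psi = heun(lambda p, i: p, psi, i, dx)
    return psi

exact = math.exp(math.pi)
ratio = abs(solve(21) - exact) / abs(solve(41) - exact)
print(ratio)   # close to 4, as expected for a second-order scheme
```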
Looks good, only 1% error. Bonus: What is the error if we don't do the correction step? Boundary layer on a circle Returning to the boundary layer ODE, we first define a function which can be integrated by heun
def g_pohl(delta_i,i,u_e,du_e,nu): Re_d = delta_i*u_e[i]/nu # compute local Reynolds number lam = delta_i**2*du_e[i]/nu # compute local lambda return ddx_delta(Re_d,lam) # get derivative
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
where u_e, du_e, and nu are the extra arguments, needed to compute $Re_\delta$ and $\lambda$. Then we use this function and heun to march from the initial condition $\lambda_0,\delta_0$ along the boundary layer until we reach the point of separation at $\lambda<-12$
def march(x,u_e,du_e,nu): delta0 = numpy.sqrt(lam0*nu/du_e[0]) # set delta0 delta = numpy.full_like(x,delta0) # delta array lam = numpy.full_like(x,lam0) # lambda array for i in range(len(x)-1): # march! delta[i+1] ...
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
and we're done! Let's test it on the flow around a circle. In this case the boundary layer will march around the circle from $s=0,\ldots,R\pi$. Let's set the parameters $R=1$, $U_\infty=1$ and $Re_R=10^4$, such that $\nu=10^{-4}$. The tangential velocity around a circular cylinder using potential flow is simply $$u_e...
nu = 1e-4 # viscosity N = 32 # number of steps s = numpy.linspace(0,numpy.pi,N) # distance goes from 0..pi u_e = 2.*numpy.sin(s) # velocity du_e = 2.*numpy.cos(s) # gradient delta,lam,iSep = marc...
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Let's plot the boundary layer thickness on the circle compared to the exact solution for a laminar flat plate boundary layer from Blasius.
pyplot.ylabel(r'$\delta/R$', fontsize=16) pyplot.xlabel(r'$s/R$', fontsize=16) pyplot.plot(s[:iSep+1],delta[:iSep+1],lw=2,label='Circle') pyplot.plot(s,s*5/numpy.sqrt(s/nu),lw=2,label='Flat plate') pyplot.legend(loc='upper left') pyplot.scatter(s[iSep],delta[iSep], s=100, c='r') pyplot.text(s[iSep]+0.1,delta[iSep],'sep...
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
The circle solution is completely different due to the external pressure gradients. The boundary layer growth is stunted on the front body. $\delta$ increases rapidly as the flow approaches the midbody. The flow separates around $1.87$ radians $\approx 107^\circ$. This is in good agreement with Hoerner's Fluid-Dynamic Drag,...
# your code here
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
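The separation angle quoted above is easy to verify: on a unit circle the arc length equals the angle in radians, so the conversion is one line:

```python
import math

s_sep = 1.87                  # separation arc length on the unit circle (radians)
print(math.degrees(s_sep))    # about 107 degrees, matching the text
```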
Please ignore the cell below. It just loads our style for the notebooks.
from IPython.core.display import HTML def css_styling(): styles = open('../styles/custom.css', 'r').read() return HTML(styles) css_styling()
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Hantush versus HantushWellModel The reason there are two implementations in Pastas is that each currently has its own advantages and disadvantages. We will discuss those shortly, but first let's introduce the two implementations. The two Hantush response functions are very similar, but differ in the definition of t...
# A defined so that 100 m3/day results in 5 m drawdown A = -5 / 100.0 a = 200 b = 0.5 d = 0.0 # reference level # auto-correlated residuals AR(1) sigma_n = 0.05 alpha = 50 sigma_r = sigma_n / np.sqrt(1 - np.exp(-2 * 14 / alpha)) print(f'sigma_r = {sigma_r:.2f} m')
concepts/hantush_response.ipynb
pastas/pasta
mit
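The last line above uses the stationary-variance relation for an AR(1) process, $\sigma_r = \sigma_n / \sqrt{1 - \exp(-2\Delta t/\alpha)}$. A quick Monte-Carlo sketch (not part of the original notebook) confirms the relation numerically:

```python
import numpy as np

sigma_n, alpha, dt = 0.05, 50.0, 14.0
phi = np.exp(-dt / alpha)                  # AR(1) coefficient over one 14-day step
sigma_r = sigma_n / np.sqrt(1 - phi**2)    # stationary residual standard deviation

rng = np.random.default_rng(0)
n = 100_000
eps = rng.normal(0.0, sigma_n, n)          # innovations with std sigma_n
r = np.empty(n)
r[0] = 0.0
for t in range(1, n):
    r[t] = phi * r[t - 1] + eps[t]         # AR(1) recursion

print(sigma_r, r[5000:].std())             # simulated std matches the formula
```

The slice skips the first 5000 samples as burn-in so the simulated series is effectively stationary.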
Two ways to get a Markov chain There are two ways to generate a Markov chain: Parse one or more sequences of states. This will be turned into a transition matrix, from which a probability matrix will be computed. Directly from a transition matrix, if you already have that data. Let's look at the transition matrix fir...
from striplog.markov import Markov_chain data = [[ 0, 37, 3, 2], [21, 0, 41, 14], [20, 25, 0, 0], [ 1, 14, 1, 0]] m = Markov_chain(data, states=['A', 'B', 'C', 'D']) m
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
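The counting step behind option 1 can be sketched in plain Python. This is a simplified stand-in, not striplog's implementation — the real Markov_chain also handles multiple sequences and multi-step transitions:

```python
def transition_matrix(seq, states):
    """Count i -> j transitions in a single sequence of states."""
    idx = {s: k for k, s in enumerate(states)}
    counts = [[0] * len(states) for _ in states]
    for a, b in zip(seq, seq[1:]):
        counts[idx[a]][idx[b]] += 1
    # row-normalise the counts into transition probabilities
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return counts, probs

counts, probs = transition_matrix('ABBDDDDDCCCC', 'ABCD')
print(counts[3])   # transitions out of D
```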
Note that they do not include self-transitions in their data. So the elements being counted are simply 'beds', not 'depth samples' (say). If you build a Markov chain using a matrix with self-transitions, these will be preserved; note the nonzero diagonal in the example here:
data = [[10, 37, 3, 2], [21, 20, 41, 14], [20, 25, 20, 0], [ 1, 14, 1, 10]] Markov_chain(data)
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Testing for independence We use the model of quasi-independence given in Powers & Easterling, as opposed to a fully independent model such as the one scipy.stats.chi2_contingency() uses, for computing chi-squared and the expected transitions: First, let's look at the expected transition frequencies of the original Powers & Easterling data:
import numpy as np np.set_printoptions(precision=3) m.expected_counts
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
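For contrast, the fully independent model (the one chi2_contingency assumes) expects $E_{ij} = \text{row}_i \times \text{col}_j / \text{total}$, with no correction for the structural zeros on the diagonal. A minimal sketch using the matrix from earlier in the tutorial:

```python
def expected_independent(counts):
    """Expected counts under full independence: E_ij = row_i * col_j / total."""
    rows = [sum(r) for r in counts]
    cols = [sum(c) for c in zip(*counts)]
    total = sum(rows)
    return [[ri * cj / total for cj in cols] for ri in rows]

data = [[ 0, 37,  3,  2],
        [21,  0, 41, 14],
        [20, 25,  0,  0],
        [ 1, 14,  1,  0]]
E = expected_independent(data)
print(E[0])   # note E[0][0] > 0: full independence 'expects' impossible self-transitions
```

That nonzero diagonal is exactly why the quasi-independence model is preferred for bed sequences recorded without self-transitions.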
The $\chi^2$ statistic shows the value for the observed ordering, along with the critical value at (by default) the 95% confidence level. If the first number is higher than the second number (ideally much higher), then we can reject the hypothesis that the ordering is quasi-independent. That is, we have shown that the ...
m.chi_squared()
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
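The critical value itself comes from the $\chi^2$ distribution. Assuming the quasi-independence degrees of freedom $(n-1)^2 - n$ from Powers & Easterling (so 5 for four states — an assumption about striplog's internals, not confirmed by this tutorial), it can be reproduced with scipy:

```python
from scipy.stats import chi2

n_states = 4
df = (n_states - 1)**2 - n_states   # assumed df for the quasi-independence model
crit = chi2.ppf(0.95, df)           # critical value at the 95% confidence level
print(df, crit)
```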
Which transitions are interesting? The normalized difference shows which transitions are 'interesting'. These numbers can be interpreted as standard deviations away from the model of quasi-independence. That is, transitions with large positive numbers represent passages that occur more often than might be expected. Any...
m.normalized_difference
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
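The 'standard deviations' interpretation corresponds to Pearson-style residuals, $Z_{ij} = (O_{ij} - E_{ij})/\sqrt{E_{ij}}$ — a sketch of that computation, assuming this is the form striplog uses:

```python
import math

def normalized_difference(observed, expected):
    """Pearson residuals: (O - E) / sqrt(E), elementwise."""
    return [[(o - e) / math.sqrt(e) if e else 0.0
             for o, e in zip(orow, erow)]
            for orow, erow in zip(observed, expected)]

z = normalized_difference([[37, 3]], [[17.8, 19.1]])
print(z)   # positive = more transitions than expected, negative = fewer
```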
We can visualize this as an image:
m.plot_norm_diff()
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
We can also interpret this matrix as a graph. The upward transitions from C to A are particularly strong in this one, while transitions from A to C happen less often than we'd expect. Those from B to D and D to B, less so. It's arguably easier to read the data when the matrix is drawn as a directed graph:
m.plot_graph()
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
We can look at an undirected version of the graph too. It downplays non-reciprocal relationships. I'm not sure this is useful...
m.plot_graph(directed=False)
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Generating random sequences We can generate a random succession of beds with the same transition statistics:
''.join(m.generate_states(n=30))
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
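The sampling behind generate_states can be sketched with the standard library: pick the next state from the current state's row of transition probabilities. This is a hypothetical helper, not striplog's implementation:

```python
import random

def generate_states(states, probs, n, seed=42):
    """Sample a state sequence from a row-stochastic transition matrix."""
    rng = random.Random(seed)
    seq = [rng.choice(states)]
    for _ in range(n - 1):
        row = probs[states.index(seq[-1])]         # probabilities out of current state
        seq.append(rng.choices(states, weights=row)[0])
    return seq

# With zero diagonal probabilities, self-transitions never occur:
seq = generate_states(['A', 'B', 'C'],
                      [[0, .5, .5], [.7, 0, .3], [.9, .1, 0]], 30)
print(''.join(seq))
```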
Again, this will respect the absence or presence of self-transitions, e.g.:
data = [[10, 37, 3, 2], [21, 20, 41, 14], [20, 25, 20, 0], [ 1, 14, 1, 10]] x = Markov_chain(data) ''.join(map(str, x.generate_states(n=30)))
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Parse states Striplog can interpret various kinds of data as sequences of states. For example, it can get the unique elements and a 'sequence of sequences' from: A simple list of states, eg [1,2,2,2,1,1] A string of states, eg 'ABBDDDDDCCCC' A list of lists of states, eg [[1,2,2,3], [2,4,2]] (NB, not same length) A l...
data = { 'log7': [1, 3, 1, 3, 5, 1, 2, 1, 3, 1, 5, 6, 1, 2, 1, 2, 1, 2, 1, 3, 5, 6, 5, 1], 'log9': [1, 3, 1, 5, 1, 5, 3, 1, 2, 1, 2, 1, 3, 5, 1, 5, 6, 5, 6, 1, 2, 1, 5, 6, 1], 'log11': [1, 3, 1, 2, 1, 5, 3, 1, 2, 1, 2, 1, 3, 5, 3, 5, 1, 9, 5, 5, 5, 5, 6, 1], 'log12': [1, 5, 3, 1, 2, 1, 2, 1, 2, 1, 4, ...
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Let's check out the normalized difference matrix:
m.expected_counts
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
It's hard to read this but we can change NumPy's display options:
np.set_printoptions(suppress=True, precision=1, linewidth=120) m.normalized_difference
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Or use a graphical view:
m.plot_norm_diff()
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
And the graph version. Note you can re-run this cell to rearrange the graph.
m.plot_graph(figsize=(15,15), max_size=2400, edge_labels=True, seed=13)
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
<div style="border: solid red 2px; border-radius:5px; background-color:#ffeeee; padding:10px 10px 20px 10px;"> <h3>Experimental implementation!</h3> <p>Multistep Markov chains are a bit of an experiment in `striplog`. Please [get in touch](mailto:matt@agilescientific.com) if you have thoughts about how it should work....
m = Markov_chain.from_sequence(logs, step=2) m
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Note that self-transitions are ignored by default. With multi-step transitions, this results in a lot of zeros because we don't just eliminate transitions like 1 > 1 > 1, but also 1 > 1 > 2 and 1 > 1 > 3, etc, as well as 1 > 2 > 2, 1 > 3 > 3, etc, and 2 > 1 > 1, 3 > 1 > 1, etc. (But we will keep 1 > 2 > 1.) Now we hav...
m.normalized_difference.shape
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
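The filtering described above can be sketched directly: drop any window that contains a self-transition anywhere, but keep windows like 1 > 2 > 1. The helper name and signature here are illustrative, not striplog's API:

```python
from collections import Counter

def count_windows(seqs, step=2, include_self=False):
    """Count length-(step+1) state windows across sequences, optionally
    dropping every window that contains a self-transition."""
    counts = Counter()
    for seq in seqs:
        for i in range(len(seq) - step):
            window = tuple(seq[i:i + step + 1])
            if not include_self and any(a == b for a, b in zip(window, window[1:])):
                continue   # window contains a self-transition, e.g. 1 > 1 > 2
            counts[window] += 1
    return counts

c = count_windows([[1, 1, 2, 1, 3, 3]])
print(c)   # (1,1,2) and (1,3,3) dropped; (1,2,1) and (2,1,3) kept
```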
This is hard to inspect! Let's just get the indices of the highest values. If we add one to the indices, we'll have a handy list of facies number transitions, since these are just 1 to 11. So we can interpret these as transitions with anomalously high probability. <img src="Normal_distribution.png" /> There are 11 &ti...
cutoff = 5 # 1.96 is 95% confidence idx = np.where(m.normalized_difference > cutoff) locs = np.array(list(zip(*idx))) scores = {tuple(loc+1):score for score, loc in zip(m.normalized_difference[idx], locs)} for (a, b, c), score in sorted(scores.items(), key=lambda pair: pair[1], reverse=True): print(f"{a:<2}...
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Unfortunately, it's a bit harder to draw this as a graph. Technically, it's a hypergraph.
# This should error for now. # m.plot_graph(figsize=(15,15), max_size=2400, edge_labels=True)
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Also, the expected counts or frequencies are a bit... hard to interpret:
np.set_printoptions(suppress=True, precision=3, linewidth=120) m.expected_counts
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Step = 3 We can actually make a model for any number of steps, but we will need commensurately more data, especially if we're not going to include self-transitions:
m = Markov_chain.from_sequence(logs, step=3) m
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
I have no idea how to visualize or interpret this thing, let me know if you do something with it! From Striplog() instance I use striplog to represent stratigraphy. Let's make a Markov chain model from an instance of striplog! First, we'll make a striplog by applying a couple of cut-offs to a GR log:
from welly import Well from striplog import Striplog, Component w = Well.from_las("P-129_out.LAS") gr = w.data['GR'] comps = [Component({'lithology': 'sandstone'}), Component({'lithology': 'greywacke'}), Component({'lithology': 'shale'}), ] s = Striplog.from_log(gr, cutoff=[30, 90], compone...
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
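Under the hood, Striplog.from_log applies the cutoffs as bin edges. Assuming components are assigned in cutoff order (low GR first — an assumption about striplog's behavior), the mapping is essentially a binary search over the edges:

```python
from bisect import bisect_right

cutoffs = [30, 90]
lithologies = ['sandstone', 'greywacke', 'shale']   # below 30, 30-90, above 90

def classify(gr_value):
    """Map one GR reading onto a lithology bin via the cutoffs."""
    return lithologies[bisect_right(cutoffs, gr_value)]

print([classify(v) for v in (12.0, 55.0, 140.0)])
```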