Dataset columns: markdown (string, lengths 0–37k), code (string, lengths 1–33.3k), path (string, lengths 8–215), repo_name (string, lengths 6–77), license (15 classes).
Computing connectivities using UMAP gives us quantitatively different results for the pseudotime.
sc.pp.neighbors(adata, n_neighbors=20, use_rep='X', method='umap')
sc.tl.diffmap(adata)
sc.tl.dpt(adata, n_branchings=1)
sc.pl.diffmap(adata, color=['dpt_pseudotime', 'dpt_groups', 'paul15_clusters'])
170502_paul15/paul15.ipynb
theislab/scanpy_usage
bsd-3-clause
Two variant implementations of the Theis well function. W_theis0: exp1 directly from scipy.special. W_theis1: by integration using scipy and numpy functionality.
def W_theis0(u):
    """Return Theis well function using the scipy.special function exp1 directly."""
    return exp1(u)


def W_theis1(u):
    """Return Theis well function by integrating using scipy functionality.

    This turns out to be a very accurate yet fast implementation, about as
    fast as the exp1 function from scipy.special. We define two helper
    functions and compute the desired answer with a vectorized wrapper; the
    helpers are nicely packaged inside the overall W_theis1 function.
    """
    def funcTh(y):
        return np.exp(-y) / y

    def Wth2(u):
        return quad(funcTh, u, np.inf)

    WTh = np.frompyfunc(Wth2, 1, 2)  # quad returns (value, error): 1 input, 2 outputs
    return WTh(u)[0]  # keep the value, drop the error estimate


def W_theis2(u, practically_log_inf=20, steps_per_log_cycle=50):
    """Theis well function using smart integration."""
    if np.isscalar(u):
        u = np.array([u])
    # Generate integration points from the first u to practically infinity and
    # merge in the given u values, so they appear exactly in the grid.
    lu0 = np.log10(u[0])
    n = int((practically_log_inf - lu0) * steps_per_log_cycle)
    uu = np.unique(np.hstack((np.logspace(lu0, practically_log_inf, n), u)))
    kernel = np.exp(-uu)
    dlnu = np.diff(np.log(uu))
    Wuu = np.cumsum(np.hstack((0, (kernel[:-1] + kernel[1:]) * dlnu / 2)))
    Wuu = Wuu[-1] - Wuu  # this holds the integral from each uu to infinity
    # So now just look up the Wuu values where uu equals u
    W = np.zeros_like(u)
    for i, ui in enumerate(u):
        W[i] = Wuu[np.where(uu == ui)[0][0]]
    return W


def W_theis3(u):
    """Return Theis well function using a power series."""
    tol = 1e-16
    gam = 0.577216  # Euler-Mascheroni constant
    if np.isscalar(u):
        u = np.array([u])
    u1 = u[u <= 15]  # all outcomes for u > 15 are practically zero
    terms0 = u1
    W = -gam - np.log(u1) + terms0
    for i in range(2, 250):
        terms1 = -terms0 * u1 * (i - 1) / (i * i)
        W += terms1
        if np.max(np.abs(terms0 + terms1)) < tol:
            break
        terms0 = terms1
    return np.hstack((W, np.zeros_like(u[u > 15])))
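As a quick cross-check on the implementations above, the same power series can be evaluated in plain Python without NumPy or SciPy; the recurrence mirrors the one in W_theis3 (the function name and tolerance below are illustrative choices, not from the notebook):

```python
import math

EULER_GAMMA = 0.5772156649015329

def theis_w_scalar(u, tol=1e-16, max_terms=250):
    """W(u) = -gamma - ln(u) + sum_{n>=1} (-1)**(n+1) * u**n / (n * n!)."""
    total = -EULER_GAMMA - math.log(u)
    term = u  # the n = 1 term: u / (1 * 1!)
    n = 1
    while n < max_terms and abs(term) > tol:
        total += term
        n += 1
        term *= -u * (n - 1) / (n * n)  # same recurrence as in W_theis3
    return total

# W(1) should be close to exp1(1) ~ 0.2194
print(round(theis_w_scalar(1.0), 4))  # 0.2194
```

For u = 0.01 this gives about 4.0379, matching the tabulated value of exp1(0.01), so the recurrence agrees with the scipy-based versions over the useful range.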
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Four variant implementations of the Hantush well function
def W_hantush0(u, rho, tol=1e-14):
    """Hantush well function implemented as a power series.

    This implementation works but has a limited reach; for very small values
    of u (u < 0.001) the solution deteriorates into nonsense.
    """
    tau = (rho / 2) ** 2 / u
    f0 = 1
    E = exp1(u)
    w0 = f0 * E
    W = w0
    for n in range(1, 500):
        E = (1 / n) * (np.exp(-u) - u * E)
        f0 = -f0 / n * tau
        w1 = f0 * E
        if np.max(abs(w0 + w1)) < tol:  # use w0 + w1 because terms alternate in sign
            break
        W += w1
        w0 = w1  # remember previous value
    return W


def W_hantush1(u, rho):
    """Return Hantush well function by straightforward integration.

    A large number of points is required to be accurate, and it still won't
    be as accurate as the quad method from scipy.integrate, which is also at
    least as fast.
    """
    if np.isscalar(u):
        u = np.asarray([u])
    w = np.zeros_like(u)
    for i, uu in enumerate(u):
        y = np.logspace(np.log10(uu), 10, 5000)
        arg = np.exp(-y - (rho / 2) ** 2 / y) / y
        w[i] = np.sum(np.diff(y) * 0.5 * (arg[:-1] + arg[1:]))
    return w


def W_hantush2(u, rho):
    """Return Hantush well function by integration, trying to be smarter.

    This function is no faster than the previous one with 5000 points.

    Parameters
    ----------
    u : np.ndarray of floats
        an array of u values, u = r**2 S / (4 kD t)
    rho : float
        value of r/lambda with lambda = sqrt(kD c)
    """
    if np.isscalar(u):
        u = np.asarray([u])
    uu = np.unique(np.hstack((np.logspace(np.log10(np.min(u)), 10, 5000), u)))
    arg = np.exp(-uu - (rho / 2) ** 2 / uu) / uu
    duu = np.diff(uu)
    S = np.hstack((0, (arg[1:] + arg[:-1]) * duu / 2))
    Wsum = np.zeros_like(u)
    for i, ui in enumerate(u):
        Wsum[i] = np.sum(S[uu > ui])
    return Wsum


def W_hantush3(u, rho):
    """Return Hantush well function by integration using scipy functionality.

    This turns out to be a very accurate yet fast implementation, about as
    fast as the exp1 function from scipy.special. We define two helper
    functions and compute the desired answer with a vectorized wrapper,
    nicely packaged inside the overall W_hantush3 function.
    """
    def whkernel(y, rho):
        return np.exp(-y - (rho / 2) ** 2 / y) / y

    def whquad(u, rho):
        return quad(whkernel, u, np.inf, args=(rho,))

    Wh = np.frompyfunc(whquad, 2, 2)  # 2 inputs and two outputs: value and error
    return Wh(u, rho)[0]  # cut off the error estimate
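For readers without SciPy at hand, the integral form can also be checked in plain Python with a trapezoid rule on a log grid: substituting y = exp(s) removes the 1/y weight from the integrand. The function name and grid parameters below are illustrative choices:

```python
import math

def hantush_w(u, rho, n=20000, log10_upper=8):
    """Wh(u, rho) = integral_u^inf exp(-y - (rho/2)**2 / y) / y dy.

    With y = exp(s) this becomes integral exp(-exp(s) - (rho/2)**2 * exp(-s)) ds,
    integrated here with the trapezoid rule on a uniform s-grid.
    """
    lo, hi = math.log(u), log10_upper * math.log(10.0)
    h = (hi - lo) / n
    c = (rho / 2.0) ** 2

    def f(s):
        y = math.exp(s)
        return math.exp(-y - c / y)

    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# With rho = 0 this reduces to the Theis well function W(u) = exp1(u)
print(round(hantush_w(1.0, 0.0), 4))  # 0.2194
```

For rho > 0 the extra exp(-(rho/2)**2 / y) factor is below 1 everywhere, so the Hantush value is always smaller than the Theis value at the same u, which is a cheap sanity check on any implementation.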
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Results of the timing.

Theis:
W_theis0(u): 6.06 µs ± 261 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
W_theis1(u): 7.11 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
W_theis2(u): 299 µs ± 6.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
W_theis3(u): 553 µs ± 33.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

There is almost no difference in speed between directly using exp1 from scipy and integrating numerically using quad, and both are equally accurate. The explicit integration is slow, just like the summation.

Hantush:
W_hantush0(u, rho): 86 µs ± 1.69 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
W_hantush1(u, rho): 7.53 ms ± 72.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
W_hantush2(u, rho): 882 µs ± 26.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
W_hantush3(u, rho): 8.64 ms ± 75.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Note that my "smart" integration (W_hantush2) is about 9 times faster than the simple integration and the quad solution, so it turns out to be smart enough after all. The smart and normal integration methods are equally accurate to 5 digits with 5000 points and 1e10 as the upper limit; the quad method is accurate to 10 digits. The series method (W_hantush0) is the fastest of all, about 10 times faster than my smart method and 100 times faster than the quad and simple integration methods. It is very effective within its range, but it is not accurate for very small u: the number of terms to include would have to be much larger there, which would make it slower to compute.
rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)

ax = newfig('Hantush type curves', '1/u', 'Wh(u, rho)', xscale='log', yscale='log')
ax.plot(1/u, W_theis0(u), lw=3, label='Theis', zorder=100)
for rho in rhos:
    ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
    ax.plot(1/u, W_hantush3(u, rho), label='rho={:.1f}'.format(rho))
ax.legend()
plt.show()

rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)

ax = newfig('Hantush type curves', '1/u', 'W(u, rho)', xscale='log')
for rho in rhos:
    ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
    ax.plot(1/u, W_hantush3(u, rho), label='rho={:.1f}'.format(rho))
ax.legend()
plt.show()
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
Crawl the article
resp = requests.get(ARTICLE_URL, cookies={'over18': '1'})
assert resp.status_code == 200

soup = BeautifulSoup(resp.text, 'lxml')
main_content = soup.find(id='main-content')
img_link = main_content.findAll('a', recursive=False)
pprint(img_link)
appendix_ptt/03_crawl_image.ipynb
afunTW/dsc-crawling
apache-2.0
Check and download the images
def check_and_download_img(url, savedir='download_img'):
    image_resp = requests.get(url, stream=True)
    image = Image.open(image_resp.raw)
    filename = os.path.basename(url)

    # check the format and fix the file extension if needed
    real_filename = '{}.{}'.format(filename.split('.')[0], image.format.lower())
    print('check and fixed filename {} -> {}'.format(filename, real_filename))

    # download
    if not os.path.exists(savedir):
        os.makedirs(savedir)
    savepath = os.path.join(savedir, real_filename)
    image.save(savepath)
    print('save image - {}'.format(savepath))

for tag in img_link:
    check_and_download_img(tag['href'])
appendix_ptt/03_crawl_image.ipynb
afunTW/dsc-crawling
apache-2.0
interactions is a matrix with entries equal to 1 if the i-th user posted an answer to the j-th question; the goal is to recommend questions to users who might answer them. question_features is a sparse matrix containing question metadata in the form of tags. vectorizer is a sklearn.feature_extraction.DictVectorizer instance that translates the tags into vector form. user_features and user_vectorizer pertain to user features. In this case, we take users' 'About' sections: short snippets of natural language that (often) describe a given user's interests. Printing the matrices shows that we have around 3,200 users and 40,000 questions. Questions are described by one or more tags from a set of about 1,200.
print(repr(interactions))
print(repr(question_features))
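To make the tag-to-matrix step concrete, here is a toy stand-in for what a DictVectorizer does with tag sets; the function name is hypothetical, and the real class also produces sparse output and handles feature values, not just presence:

```python
def vectorize_tags(tagged_items):
    """One-hot encode lists of tag strings into dense rows (toy DictVectorizer)."""
    # build a stable vocabulary over all tags seen
    vocab = sorted({tag for tags in tagged_items for tag in tags})
    index = {tag: j for j, tag in enumerate(vocab)}
    rows = []
    for tags in tagged_items:
        row = [0] * len(vocab)
        for tag in tags:
            row[index[tag]] = 1  # mark the tag as present for this question
        rows.append(row)
    return rows, vocab

rows, vocab = vectorize_tags([['bayesian', 'mcmc'], ['regression']])
print(vocab)  # ['bayesian', 'mcmc', 'regression']
print(rows)   # [[1, 1, 0], [0, 0, 1]]
```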
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
The tags matrix contains rows such as
print(question_vectorizer.inverse_transform(question_features[:3]))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
User features are exactly what we would expect from processing raw text:
print(user_vectorizer.inverse_transform(user_features[2]))
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Fitting models Train/test split We can split the dataset into train and test sets by using utility functions defined in model.py.
import model
import inspect

print(inspect.getsource(model.train_test_split))

train, test = model.train_test_split(interactions)
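A minimal sketch of what such a split could look like, written here over a list of (user, question) pairs (the repo's model.py operates on a sparse interactions matrix instead; the function name, seed, and 80/20 ratio are assumptions):

```python
import random

def split_interactions(pairs, test_fraction=0.2, seed=0):
    """Randomly assign each interaction pair to a train or a test set."""
    rng = random.Random(seed)
    train, test = [], []
    for pair in pairs:
        # each observed interaction lands in exactly one of the two sets
        (test if rng.random() < test_fraction else train).append(pair)
    return train, test

pairs = [(u, q) for u in range(100) for q in range(10)]
train, test = split_interactions(pairs)
print(len(train) + len(test))  # 1000
```

Splitting individual interactions (rather than whole users) keeps every user present in the training set, which matters for models that learn per-user parameters.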
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Traditional MF model Let's start with a traditional collaborative filtering model that does not use any metadata. We can do this using lightfm -- we simply do not pass in any metadata matrices. We'll use the following function to train a WARP model.
print(inspect.getsource(model.fit_lightfm_model))

mf_model = model.fit_lightfm_model(train, epochs=1)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
The following function will compute the AUC score on the test set:
print(inspect.getsource(model.auc_lightfm))

mf_score = model.auc_lightfm(mf_model, test)
print(mf_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Oops. That's worse than random (possibly due to overfitting). In this case, it is because the CrossValidated dataset is very sparse: there just aren't enough interactions to support a traditional collaborative filtering model. In general, we'd also like to recommend questions that have no answers yet, making the collaborative model doubly ineffective. Content-based model To remedy this, we can try a content-based model. The following code uses question tags to estimate a logistic regression model for each user, predicting the probability that the user would want to answer a given question.
print(inspect.getsource(model.fit_content_models))
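The idea of a per-user content model can be sketched with a bare-bones logistic regression trained by gradient descent on one-hot tag vectors. Everything below (function names, learning rate, toy data) is illustrative, not the notebook's actual model.py code:

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=200):
    """Train one user's model: P(answers question | tags) = sigmoid(w.x + b)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g

    def predict(x):
        z = sum(wj * xj for wj, xj in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))

    return predict

# Toy example: a user who answers 'bayesian' questions but not 'regression' ones
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = [1, 1, 0, 0]
predict = fit_logistic(X, y)
print(predict([1, 0]) > 0.5, predict([0, 1]) < 0.5)  # True True
```

One such model per user is cheap to fit when tag vectors are sparse, but, as the text notes next, it treats every tag as independent.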
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Running this and evaluating the AUC score gives
content_models = model.fit_content_models(train, question_features)
content_score = model.auc_content_models(content_models, test, question_features)
print(content_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
That's a bit better, but not great. In addition, a linear model of this form fails to capture tag similarity. For example, probit and logistic regression are closely related, yet the model will not automatically transfer knowledge about one to the other. Hybrid LightFM model What happens if we estimate the LightFM model with question features?
lightfm_model = model.fit_lightfm_model(train, post_features=question_features)
lightfm_score = model.auc_lightfm(lightfm_model, test, post_features=question_features)
print(lightfm_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
We can add user features on top for a small additional improvement:
lightfm_model = model.fit_lightfm_model(train, post_features=question_features, user_features=user_features)
lightfm_score = model.auc_lightfm(lightfm_model, test, post_features=question_features, user_features=user_features)
print(lightfm_score)
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
This is quite a bit better, illustrating the fact that an embedding-based model can capture more interesting relationships between content features. Feature embeddings One additional advantage of metadata-based latent models is that they give us useful latent representations of the metadata features themselves, much in the way word embedding approaches like word2vec do. The code below takes an input CrossValidated tag and finds tags that are close to it (in the cosine similarity sense) in the latent embedding space:
print(inspect.getsource(model.similar_tags))
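A minimal version of such a similarity lookup, given a {tag: embedding} dict (the names and toy vectors below are hypothetical; the real function pulls embeddings out of the fitted LightFM model):

```python
import math

def similar_tags(embeddings, tag, topn=5):
    """Return the topn tags closest to `tag` by cosine similarity."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    query = embeddings[tag]
    # rank every other tag by similarity to the query tag's vector
    ranked = sorted(((other, cosine(query, vec))
                     for other, vec in embeddings.items() if other != tag),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:topn]

toy = {'bayesian': [1.0, 0.1], 'mcmc': [0.9, 0.2], 'regression': [0.1, 1.0]}
print(similar_tags(toy, 'bayesian', topn=1)[0][0])  # mcmc
```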
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
Let's demonstrate this.
for tag in ['bayesian', 'regression', 'survival', 'p-value']:
    print('Tags similar to %s:' % tag)
    print(model.similar_tags(lightfm_model, question_vectorizer, tag)[:5])
examples/crossvalidated/example.ipynb
qqwjq/lightFM
apache-2.0
However, we still have the challenge of visually associating the value of prices in a neighborhood with the value of the spatial lag for the focal unit. The latter is a weighted average of the values in the focal unit's neighborhood. To complement the geovisualization of these associations we can turn to formal statistical measures of spatial autocorrelation. Global Spatial Autocorrelation We begin with a simple case where the variable under consideration is binary. This is useful for unpacking the logic of spatial autocorrelation tests. So even though our attribute is continuously valued, we will convert it to a binary one to illustrate the key concepts: Binary Case
y.median()
yb = y > y.median()
sum(yb)
notebooks/explore/esda/Spatial Autocorrelation for Areal Unit Data.ipynb
pysal/pysal
bsd-3-clause
Let's make a significant mass ratio and radius ratio...
b['q'] = 0.7
b['requiv@primary'] = 1.0
b['requiv@secondary'] = 0.5
b['teff@secondary@component'] = 5000
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets Now we'll add radial velocity and mesh datasets. We'll add two identical datasets for RVs so that we can have one computed dynamically and the other computed numerically (these options will need to be set later).
b.add_dataset('rv', times=np.linspace(0, 2, 201), dataset='dynamicalrvs')
b.add_dataset('rv', times=np.linspace(0, 2, 201), dataset='numericalrvs')
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Storing the mesh at every timestep is overkill, and will be both computationally expensive and memory intensive. So let's just sample at the times we care about.
times = b.get_value('times@primary@numericalrvs@dataset')
times = times[times < 0.1]
print(times)

b.add_dataset('mesh', dataset='mesh01', times=times, columns=['vws', 'rvs*'])
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting Now let's plot the radial velocities. First we'll plot the dynamical RVs. Note that dynamical RVs show the true radial velocity of the center of mass of each star, and so we do not see the Rossiter McLaughlin effect.
afig, mplfig = b['dynamicalrvs@model'].plot(c={'primary': 'b', 'secondary': 'r'}, show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter-McLaughlin effect. You'll also notice that RVs are not available for the secondary star when it's completely occulted (they're nans in the array).
afig, mplfig = b['numericalrvs@model'].plot(c={'primary': 'b', 'secondary': 'r'}, show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
To visualize what is happening, we can plot the radial velocities of each surface element in the mesh at one of these times. Here we just plot on the mesh@model parameterset - the mesh will automatically get coordinates from mesh01, and then we point to rvs@numericalrvs for the facecolors.
afig, mplfig = b['mesh@model'].plot(time=0.03, fc='rvs@numericalrvs', ec="None", show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Here you can see that the secondary star is blocking part of the "red" RVs of the primary star. This is essentially the same as plotting the negative z-component of the velocities (by convention, our system is right-handed with +z towards the viewer, but the RV convention has negative RVs for blue shifts). We could also plot the RV per triangle by plotting 'vws'. Note that this actually defaults to an inverted colormap to show you the same color scheme ('RdBu_r' vs 'RdBu').
afig, mplfig = b['mesh01@model'].plot(time=0.09, fc='vws', ec="None", show=True)
2.1/examples/rossiter_mclaughlin.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
About IPython

From Python for Data Analysis: The IPython project began in 2001 as Fernando Perez's side project to make a better interactive Python interpreter. In the subsequent 11 years it has grown into what's widely considered one of the most important tools in the modern scientific Python computing stack. While it does not provide any computational or data analytical tools by itself, IPython is designed from the ground up to maximize your productivity in both interactive computing and software development. It encourages an execute-explore workflow instead of the typical edit-compile-run workflow of many other programming languages.

The IPython Notebook uses an input-output programming paradigm centered around the cell. Executing a cell saves all of its data into the notebook, so you can use it later. But you can also execute a cell, write more code, and come back to the same cell to make changes, which gives the notebook an incredible amount of versatility. Each notebook uses an execution kernel to keep track of all of the data. If you run into a challenge, say, an accidental infinite loop, you can interrupt and restart the kernel from the options. Restarting the kernel drops all of the data saved in main memory, so you need to re-execute all of your cells.

Key features of IPython

Users of Mathematica may feel familiar with the overall layout of the IPython Notebook, but there are some important differences (and advantages) of using the Notebook. The IPython Notebook is (lovingly, but jokingly) referred to as the "poor person's Mathematica", which is unfair to both IPython and Mathematica. The Notebook serves a different purpose from Wolfram's product, and it does so exceptionally well.

Tab completion

From Python for Data Analysis: One of the major improvements over the standard Python shell is tab completion, a feature common to most interactive data analysis environments.
While entering expressions in the shell, pressing <Tab> will search the namespace for any variables (objects, functions, etc.) matching the characters you have typed so far:

```Python
In [1]: an_apple = 27

In [2]: an_example = 42

In [3]: an<Tab>
an_apple    and    an_example    any
```

In this example, note that IPython displayed both of the two variables defined as well as the Python keyword and and the built-in function any. Naturally, you can also complete methods and attributes on any object after typing a period:
b = [1, 2, 3]
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```Python
In [2]: b.<Tab>
b.append    b.extend    b.insert    b.remove    b.sort
b.count     b.index     b.pop       b.reverse
```

Tab completion works in many contexts outside of searching the interactive namespace and completing object or module attributes. When typing anything that looks like a file path (even in a Python string), pressing <Tab> will complete anything on your computer's file system matching what you've typed. Combined with the %run command (see later section), this functionality will undoubtedly save you many keystrokes. Another area where tab completion saves time is in the completion of function keyword arguments (arguments that include the = sign). Introspection Typing a question mark (?) before or after a variable will display some general information about the object:
b?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        list
String form: [1, 2, 3]
Length:      3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
```

This is referred to as object introspection. If the object is a function or instance method, the docstring, if defined, will also be shown. Suppose we'd written the following function:
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Then using ? shows us the docstring:
add_numbers?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        function
String form: <function add_numbers at 0x7facfc177488>
File:        /home/alethiometryst/mathlan/public_html/courses/python/course-material/ipynbs/<ipython-input-11-5b88597b2522>
Definition:  add_numbers(a, b)
Docstring:
Add two numbers together

Returns
-------
the_sum : type of arguments
```

Using ?? will also show the function's source code if possible:
add_numbers??
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
```
Type:        function
String form: <function add_numbers at 0x7facfc177488>
File:        /home/alethiometryst/mathlan/public_html/courses/python/course-material/ipynbs/<ipython-input-11-5b88597b2522>
Definition:  add_numbers(a, b)
Source:
def add_numbers(a, b):
    """
    Add two numbers together

    Returns
    -------
    the_sum : type of arguments
    """
    return a + b
```

? has a final usage, which is for searching the IPython namespace in a manner similar to the standard UNIX or Windows command line. A number of characters combined with the wildcard (*) will show all names matching the wildcard expression. For example, we could get a list of all functions in the top-level NumPy namespace containing load:
import numpy as np

np.*load*?
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
np.load np.loads np.loadtxt np.pkgload The %run Command Any file can be run as a Python program inside the environment of your IPython session using the %run command. Keyboard Shortcuts IPython has many keyboard shortcuts for navigating the prompt (which will be familiar to users of the Emacs text editor or the UNIX bash shell). Here are the most commonly used elements: Command Mode (press Esc to enable) | Command | Description | Command | Description | |---------|-------------|---------|-------------| | Enter | edit mode| Ctrl-j | move cell down | | Shift-Enter | run cell, select below | a | insert cell above | | Ctrl-Enter | run cell | b | insert cell below | | Alt-Enter | run cell, insert below | x | cut cell | | y | to code | c | copy cell | | m | to markdown | Shift-v | paste cell above | | r | to raw | v | paste cell below | | 1 | to heading 1 | d | delete cell (press twice) | | 2 | to heading 2 | Shift-m | merge cell below | | 3 | to heading 3 | s | save notebook | | 4 | to heading 4 | Ctrl-s | save notebook | | 5 | to heading 5 | l | toggle line numbers | | 6 | to heading 6 | o | toggle output | | Up | select previous cell | Shift-o | toggle output scrolling | | Down | select next cell | q | close pager | | k | select previous cell | h | keyboard shortcuts | | j | select next cell | i | interrupt kernel (press twice) | | Ctrl-k | move cell up | 0 | restart kernel (press twice) | Edit Mode (press Enter to enable) | Command | Description | Command | Description | |---------|-------------|---------|-------------| | Tab | code completion or indent | Ctrl-Down | go to cell end | | Shift-Tab | tooltip | Ctrl-Left| go one word left | | Ctrl-] | indent | Ctrl-Right | go one word right | | Ctrl-[ | dedent | Ctrl-Backspace | del word before | | Ctrl-a | select all | Ctrl-Delete | del word after | | Ctrl-z | undo | Esc | command mode | | Ctrl-Shift-z | redo | Ctrl-m | command mode | | Ctrl-y | redo | Shift-Enter | run cell, select below | | Ctrl-Home | go to cell 
start | Ctrl-Enter | run cell, select below | | Ctrl-Up | go to cell start | Alt-Enter | run cell, insert below | | Ctrl-End | go to cell end | Ctrl-Shift-- | split cell | | Ctrl-s | save notebook | Exceptions and Tracebacks If an exception is raised while executing any statement, IPython will by default print a full call stack trace (traceback) with a few lines of context around the position at each point in the stack.
def func(a):
    return a + 2

func(3, 4)
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Magic Commands IPython has many special commands, known as "magic" commands, which are designed to facilitate common tasks and enable you to easily control the behavior of the IPython system. A magic command is any command prefixed by the percent symbol %. Magic commands can be viewed as command line programs to be run within the IPython system. Many of them have additional "command line" options, which can all be viewed (as you might expect) using ?. Magic functions can be used by default without the percent sign, as long as no variable is defined with the same name as the magic function. This feature is called automagic and can be enabled or disabled using %automagic. Since IPython's documentation is easily accessible from within the system, I encourage you to explore all of the special commands available by typing %quickref or %magic. Here are a few more of the most critical ones for being productive in interactive computing and Python development in IPython. |Command | Description | |--------|-------------| | %quickref | Display the IPython Quick Reference Card | | %magic | Display detailed documentation for all of the available magic commands | | %debug | Enter the interactive debugger at the bottom of the last exception traceback | | %hist | Print command input (and optionally output) history | | %pdb | Automatically enter debugger after any exception | | %paste | Execute pre-formatted Python code from clipboard | | %cpaste | Open a special prompt for manually pasting Python code to be executed | | %reset | Delete all variables / names defined in interactive namespace | | %page OBJECT | Pretty print the object and display it through a pager | | %run script.py | Run a Python script inside IPython | | %prun statement | Execute statement with cProfile and report the profiler output | | %time statement | Report the execution time of a single statement | | %timeit statement | Run a (short) statement multiple times to compute an ensemble average execution time.| | 
%who, %who_ls, %whos | Display variables defined in interactive namespace, varying info depth| | %xdel variable | Delete a variable and clear references to the object in the IPython internals | Try it! Which "squaring" function do you think is more efficient?
def square_one(x):
    return x ** 2

def square_two(x):
    return x * x

%timeit square_one(500)
%timeit square_two(500)
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
It's good to remember that $x\cdot x$ is generally faster to compute than $x^2$. %timeit can be used for more complicated functions. For example, consider the Fibonacci numbers, which are computed according to the following rule: \begin{align} F_1 &= 1 \\ F_2 &= 1 \\ F_n &= F_{n-1} + F_{n-2} \end{align} So, $F_3$ is the sum of $F_1$ and $F_2$, i.e. $F_3=1+1=2$. Then $F_4 = F_2 + F_3 = 3$, and so on. We can write a Python function to calculate Fibonacci numbers with two different strategies: recursion or iteration. One might ask which implementation is more efficient, so we can use %timeit to get a good idea.
# Recursive implementation
def fibonacci_one(n):
    if n == 1 or n == 2:
        return 1
    else:
        return fibonacci_one(n-1) + fibonacci_one(n-2)

# Iterative implementation
def fibonacci_two(n):
    a = 1
    b = 1
    if n < 3:
        return 1
    for i in range(3, n+1):
        c = a
        a += b
        b = c
    return a

%timeit fibonacci_one(15)
%timeit fibonacci_two(15)
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Remember that there are 1000 nanoseconds per microsecond. In other words, the iterative implementation is 150 times faster than the recursive one. Debugging Anyone who has dealt with computers for any length of time has had to deal with bugs in their code (I'm looking at you, 161ers). Sometimes you make a (or many) small mistake(s) that you don't catch before running your code. Your program won't work properly, and you need to figure out what's causing the problem. Luckily, with IPython, we get to use the interactive debugger, %debug, which lets you step through your code to help you spot errors. Consider the following problem. You need to write a function that takes a list of doubles and computes the list of its multiplicative inverses. Then, you need to plot the data (don't worry about the matplotlib code; it's a demonstration). For example, if you are given $[1, 2, 3, 4, 5]$, you should obtain \begin{align} f([1,2,3,4,5]) = \left [1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5} \right ] \end{align} represented as doubles. Suppose your first attempt at a solution is as follows.
import matplotlib.pyplot as plt
%matplotlib inline

# A simple, naive solution
def invert(list_of_doubles):
    inverted_list = []
    for i in list_of_doubles:
        inverted_list.append(1./i)
    return inverted_list

# A silly, useless function
def plot_demo(inverted_list):
    plt.plot(inverted_list)
    plt.title("Debug Demo")

plot_demo(invert([1., 2., 3., 4., 5., 6., 7.]))
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Everything looks good, so what's the problem?
plot_demo(invert([1., 2., 3., 4., 5., 6., 7., 0.]))
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Okay, so we're getting a divide-by-zero error... let's run %debug to see if it can enlighten us on where the function broke.
%debug
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Debugger commands This example was extremely simplified, but %debug is a very useful command to have in your toolbelt. Here are the key commands inside the debugger to help you navigate. | Command | Action | Command | Action | |---------|--------|---------|--------| |h(elp) | Display command list |s(tep)| Step into function call| |help command| Show documentation for command | n(ext) | Execute current line and advance| |c(ontinue) | Resume program execution | u(p) / d(own) | Move up/down in function call stack | |q(uit) | Exit debugger without executing any more code | a(rgs) | Show arguments for current function | |b(reak) number| Set a breakpoint at number in current file | debug statement | Invoke statement in new (recursive) debugger | |b path/to/file.py:number| Set breakpoint at line number in specified file | l(ist) statement | Show current position with context| | w(here) | Print full stack trace with context at current position | Tips for Productive Code Development Using IPython Wes McKinney has a number of helpful tips regarding using IPython for code development. Importantly, he says Writing code in a way that makes it easy to develop, debug, and ultimately use interactively may be a paradigm shift for many users. There are procedural details like code reloading that may require some adjustment as well as coding style concerns. As such, most of this section is more of an art than a science and will require some experimentation on your part to determine a way to write your Python code that is effective and productive for you. Ultimately you want to structure your code in a way that makes it easy to use iteratively and to be able to explore the results of running a program or function as effortlessly as possible. On these lines, here are some of McKinney's tips for good code design in Python. Flat is better than nested Deeply nested code makes me think about the many layers of an onion. 
When testing or debugging a function, how many layers of the onion must you peel back in order to reach the code of interest? Making functions and classes as decoupled and modular as possible makes them easier to test (if you are writing unit tests), debug, and use interactively. Overcome a fear of longer files If you come from a Java background, you may have been told to keep files short. However, while developing code using IPython, working with 10 small, but interconnected files (under, say, 100 lines each) is likely to cause you more headache in general than a single large file or two or three longer files. Fewer files means fewer modules to reload and less jumping between files while editing, too. Lab: Testing Efficiency One fundamental problem in computing is array (or list) sorting. There are many different ways to sort, each providing distinct advantages. As a general rule, the best objective way to compare sorting algorithms is their efficiency, but this can change depending on the size of the array (or list) under consideration. This lab invites you to try to implement two common sorting algorithms: Bubble sort, Selection sort Neither algorithm is inherently efficient, but both are comparatively simple to put to code. The bubble sort works as follows: Start with a list of numbers Check if it is sorted If it is sorted, then return the list If not, continue Compare the first and second numbers If the first number is less than the second number, continue If the second number is less than the first number, swap, and then continue Shift over one element of the list, and repeat. After reaching the end of the list, go back to step (2). 
The selection sort works as follows: Start with a list of numbers Set the "swap" index to 0 Check if the list after the swap index is sorted If it is sorted, then return the list If not, continue Cycle through the remaining list and identify the location of the smallest number Exchange the minimum with the current swap index Increment the swap index Go back to step (2). Your task: implement these sorting algorithms in Python, and use IPython tools to determine which algorithm is more efficient.
from numpy import random

def is_sorted(lst):
    # True when every element is >= its predecessor
    for i in range(1, len(lst)):
        if lst[i] < lst[i-1]:
            return False
    return True

def bubblesort(lst):
    # Repeatedly sweep the list, swapping adjacent out-of-order pairs
    while not is_sorted(lst):
        for i in range(0, len(lst) - 1):
            if lst[i+1] < lst[i]:
                lst[i], lst[i+1] = lst[i+1], lst[i]
    return lst

def selection_sort(lst):
    # Swap each position with the minimum of the remaining sublist
    for i, e in enumerate(lst):
        mn = min(range(i, len(lst)), key=lst.__getitem__)
        lst[i], lst[mn] = lst[mn], e
    return lst

%timeit bubblesort(random.randn(1000))
%timeit selection_sort(random.randn(1000))
courses/python/material/ipynbs/Using the IPython Notebook.ipynb
kraemerd17/kraemerd17.github.io
mit
Spatiotemporal permutation F-test on full sensor data Tests for differential evoked responses in at least one condition using a permutation clustering test. The FieldTrip neighbor templates will be used to determine the adjacency between sensors. This serves as a spatial prior to the clustering. Significant spatiotemporal clusters will then be visualized using custom matplotlib code.
# Authors: Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable from mne.viz import plot_topomap import mne from mne.stats import spatio_temporal_cluster_test from mne.datasets import sample from mne.channels import find_ch_connectivity print(__doc__)
0.15/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id = {'Aud_L': 1, 'Aud_R': 2, 'Vis_L': 3, 'Vis_R': 4} tmin = -0.2 tmax = 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 30, fir_design='firwin') events = mne.read_events(event_fname)
0.15/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compare Cumulative Return
aapl['Cumulative'] = aapl['Close'] / aapl['Close'].iloc[0] spy_etf['Cumulative'] = spy_etf['Close'] / spy_etf['Close'].iloc[0] aapl['Cumulative'].plot(label = 'AAPL', figsize = (10,8)) spy_etf['Cumulative'].plot(label = 'SPY Index') plt.legend() plt.title('Cumulative Return')
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
Get Daily Return
aapl['Daily Return'] = aapl['Close'].pct_change(1) spy_etf['Daily Return'] = spy_etf['Close'].pct_change(1) fig = plt.figure(figsize = (12, 8)) plt.scatter(aapl['Daily Return'], spy_etf['Daily Return'], alpha = 0.3) aapl['Daily Return'].hist(bins = 100, figsize = (12, 8)) spy_etf['Daily Return'].hist(bins = 100, figsize = (12, 8)) beta,alpha, r_value, p_value, std_err = stats.linregress(aapl['Daily Return'].iloc[1:],spy_etf['Daily Return'].iloc[1:]) beta alpha r_value
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
What if our stock was completely related to SP500?
spy_etf['Daily Return'].head() import numpy as np noise = np.random.normal(0, 0.001, len(spy_etf['Daily Return'].iloc[1:])) noise spy_etf['Daily Return'].iloc[1:] + noise beta, alpha, r_value, p_value, std_err = stats.linregress(spy_etf['Daily Return'].iloc[1:]+noise, spy_etf['Daily Return'].iloc[1:]) beta alpha
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/09-Python-Finance-Fundamentals/03-CAPM-Capital-Asset-Pricing-Model.ipynb
arcyfelix/Courses
apache-2.0
Question: Consider there is a feature with a score far better than other features. Is this feature a good choice? Unsupervised Learning: Principal Component Analysis (PCA) Principal component analysis is a dimensionality reduction algorithm that we can use to find structure in our data. The main aim is to find a surface onto which projection errors are minimized. This surface is a lower-dimensional subspace spanned by the principal components of the data. These principal components are the directions along which the projection of the data onto that axis has the maximum variance. The main component along which the data varies is called the principal axis of variation. Intuitive interpretation of PCA In the first step we plot data with two dimensions and try to reduce it to one Don't get confused: PCA is different from linear regression How about if we have three-dimensional data? Using PCA in Sklearn
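To make the two-dimensional picture above concrete before moving to MNIST, here is a minimal sketch (the synthetic data below is invented for illustration): points scattered along a line have one dominant principal component that captures nearly all of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 2-D data stretched along the line y ~ x
rng = np.random.RandomState(0)
x = rng.normal(size=200)
X = np.column_stack([x, x + 0.1 * rng.normal(size=200)])

# The first principal component should point along the stretch direction
# and explain almost all of the variance
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)
```

The first ratio comes out close to 1, which is exactly the situation in which projecting onto a single principal axis loses very little information.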
import sklearn import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import gzip, pickle, sys f = gzip.open('Datasets/mnist.pkl.gz', 'rb') (input_train, output_train), (input_test, output_test), _ = pickle.load(f, encoding='bytes') for i in range(4): plt.subplot(2,2,i+1) plt.imshow(input_train[i].reshape((28,28)), cmap=plt.cm.gray_r, interpolation='nearest') plt.show() from sklearn.ensemble import RandomForestClassifier randomforest = RandomForestClassifier(n_estimators=30) randomforest.fit(input_train,output_train) from sklearn.metrics import classification_report print(classification_report(output_test, randomforest.predict(input_test))) from sklearn.decomposition import PCA pca = PCA(n_components=500) pca.fit(input_train) plt.figure(figsize=(12,6)) plt.plot(np.cumsum(pca.explained_variance_ratio_[0:500]),marker = 'o') plt.show() np.cumsum(pca.explained_variance_ratio_[0:500])[200] pca = PCA(n_components= 200) pca.fit(input_train) x_train = pca.transform(input_train) x_test = pca.transform(input_test) for i in range(4): plt.subplot(2,2,i+1) plt.imshow(pca.inverse_transform(x_train)[i].reshape((28, 28)), cmap=plt.cm.gray_r, interpolation='nearest' ) plt.show() randomforest = RandomForestClassifier(n_estimators=30) randomforest.fit(x_train,output_train) from sklearn.metrics import classification_report print(classification_report(output_test, randomforest.predict(x_test)))
07-pca-feature-selection.ipynb
msadegh97/machine-learning-course
gpl-3.0
Array CNVs I'm going to make a table of all CNVs identified by arrays. Some iPSCs didn't have any CNVs. For now, if an iPSC is not in the CNV table, that means that it either didn't have CNVs or we didn't test that clone/passage number for CNVs.
cnv = baseline_cnv.merge(baseline_snpa, left_on='snpa_id', right_index=True, suffixes=['_cnv', '_snpa']) cnv = cnv.merge(baseline_analyte, left_on='analyte_id', right_index=True, suffixes=['_cnv', '_analyte']) cnv = cnv.merge(baseline_tissue, left_on='tissue_id', right_index=True, suffixes=['_cnv', '_tissue']) cnv = cnv[['type', 'chr', 'start', 'end', 'len', 'primary_detect_method', 'clone', 'passage', 'subject_id']]
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
RNA-seq Samples for this Study I'm going to use baseline and family 1070 samples.
# Get family1070 samples. tdf = family1070_rnas[family1070_rnas.comment.isnull()] tdf = tdf.merge(family1070_tissue, left_on='tissue_id', right_index=True, suffixes=['_rna', '_tissue']) tdf = tdf[tdf.cell_type == 'iPSC'] tdf.index = tdf.rnas_id tdf['status'] = data_rnas.ix[tdf.index, 'status'] tdf = tdf[tdf.status == 0] tdf = tdf[['ipsc_clone_number', 'ipsc_passage', 'subject_id']] tdf.columns = ['clone', 'passage', 'subject_id'] tdf['isolated_by'] = 'p' tdf.index.name = 'rna_id' # Get the iPSC eQTL samples. rna = baseline_rnas[baseline_rnas.rnas_id.isnull() == False] rna.index = rna.rnas_id rna.index.name = 'rna_id' rna['status'] = data_rnas.ix[rna.index, 'status'] rna = rna[rna.status == 0] #rna = rna.ix[censor[censor == False].index] rna = rna.merge(baseline_analyte, left_on='analyte_id', right_index=True, suffixes=['_rnas', '_analyte']) rna = rna.merge(baseline_tissue, left_on='tissue_id', right_index=True, suffixes=['_rnas', '_tissue']) rna = rna[['clone', 'passage', 'subject_id']] rna['isolated_by'] = 'a' rna = pd.concat([rna, tdf]) # Get 222 subjects. cohort222 = baseline_ipsc.merge(baseline_tissue, left_on='tissue_id', right_index=True, suffixes=['_ipsc', '_tissue']) n = len(set(rna.subject_id) & set(cohort222.subject_id)) print('We have {} of the 222 subjects in the "222 cohort."'.format(n)) rna['sequence_id'] = data_rnas.ix[rna.index, 'sequence_id']
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
I can use all of these samples that passed QC for various expression analyses. eQTL samples Now I'm going to identify one sample per subject to use for eQTL analysis. I'll start by keeping samples whose clone/passage number matches up with those from the 222 cohort.
rna['in_eqtl'] = False samples = (cohort222.subject_id + ':' + cohort222.clone.astype(int).astype(str) + ':' + cohort222.passage.astype(int).astype(str)) t = rna.dropna(subset=['passage']) t.loc[:, ('sample')] = (t.subject_id + ':' + t.clone.astype(int).astype(str) + ':' + t.passage.astype(int).astype(str)) t = t[t['sample'].apply(lambda x: x in samples.values)] # These samples are in the 222 cohort and the eQTL analysis. rna['in_222'] = False rna.ix[t.index, 'in_222'] = True rna.ix[t.index, 'in_eqtl'] = True
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in any samples for which we have CNVs but weren't in the 222.
samples = (cnv.subject_id + ':' + cnv.clone.astype(int).astype(str) + ':' + cnv.passage.astype(int).astype(str)) t = rna.dropna(subset=['passage']) t.loc[:, ('sample')] = (t.subject_id + ':' + t.clone.astype(int).astype(str) + ':' + t.passage.astype(int).astype(str)) t = t[t['sample'].apply(lambda x: x in samples.values)] t = t[t.subject_id.apply(lambda x: x not in rna.ix[rna.in_eqtl, 'subject_id'].values)] # These samples aren't in the 222 but we have a measured CNV for them. rna.ix[t.index, 'in_eqtl'] = True
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in samples where the clone was in the 222 but we don't have the same passage number.
samples = (cohort222.subject_id + ':' + cohort222.clone.astype(int).astype(str)) t = rna[rna.in_eqtl == False] t = t[t.subject_id.apply(lambda x: x not in rna.ix[rna.in_eqtl, 'subject_id'].values)] t['samples'] = t.subject_id + ':' + t.clone.astype(int).astype(str) t = t[t.samples.apply(lambda x: x in samples.values)] # These clones are in the 222, we just have a different passage number. rna['clone_in_222'] = False rna.ix[rna.in_222, 'clone_in_222'] = True rna.ix[t.index, 'clone_in_222'] = True rna.ix[t.index, 'in_eqtl'] = True
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Now I'll add in any samples from subjects we don't yet have in the eQTL analysis.
t = rna[rna.in_eqtl == False] t = t[t.subject_id.apply(lambda x: x not in rna.ix[rna.in_eqtl, 'subject_id'].values)] rna.ix[t.index, 'in_eqtl'] = True n = rna.in_eqtl.value_counts()[True] print('We potentially have {} distinct subjects in the eQTL analysis.'.format(n))
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
WGS Samples Now I'll assign WGS IDs for each RNA-seq sample. Some subjects have multiple WGS samples for different cell types. I'll preferentially use blood, fibroblast, and finally iPSC WGS.
wgs = baseline_wgs.merge(baseline_analyte, left_on='analyte_id', right_index=True, suffixes=['_wgs', '_analyte']) wgs = wgs.merge(baseline_tissue, left_on='tissue_id', right_index=True, suffixes=['_wgs', '_tissue']) wgs = wgs.merge(baseline_analyte, left_on='analyte_id', right_index=True, suffixes=['_wgs', '_analyte']) wgs = wgs.dropna(subset=['wgs_id']) wgs.index = wgs.wgs_id wgs['status'] = data_wgs.ix[wgs.index, 'status'] wgs = wgs[wgs.status == 0] rna['wgs_id'] = '' for i in rna.index: s = rna.ix[i, 'subject_id'] t = wgs[wgs.subject_id == s] if t.shape[0] == 1: rna.ix[i, 'wgs_id'] = t.index[0] elif t.shape[0] > 1: if 'Blood' in t.source.values: t = t[t.source == 'Blood'] elif 'iPSC' in t.source.values: t = t[t.source == 'iPSC'] if t.shape[0] == 1: rna.ix[i, 'wgs_id'] = t.index[0] else: print('?: {}'.format(i)) else: #print('No WGS: {}'.format(i)) print('No WGS: {}'.format(rna.ix[i, 'subject_id'])) rna.ix[i, 'in_eqtl'] = False rna.ix[rna['wgs_id'] == '', 'wgs_id'] = np.nan n = rna.in_eqtl.value_counts()[True] print('We are left with {} subjects for the eQTL analysis.'.format(n))
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
I'm going to keep one WGS sample per person in the cohort (preferentially blood, fibroblast, and finally iPSC) even if we don't have RNA-seq in case we want to look at phasing etc.
vc = wgs.subject_id.value_counts()
vc = vc[vc > 1]
keep = []
for s in vc.index:
    t = wgs[wgs.subject_id == s]
    if t.shape[0] == 1:
        keep.append(t.index[0])
    elif t.shape[0] > 1:
        if 'Blood' in t.source.values:
            t = t[t.source == 'Blood']
        elif 'iPSC' in t.source.values:
            t = t[t.source == 'iPSC']
        if t.shape[0] == 1:
            keep.append(t.index[0])
        else:
            print('?: {}'.format(s))
wgs = wgs.drop(set(wgs[wgs.subject_id.apply(lambda x: x in vc.index)].index) - set(keep))
wgs = wgs[['source', 'subject_id']]
wgs.columns = ['cell', 'subject_id']
subject = subject_subject.copy(deep=True)
subject = subject.ix[set(rna.subject_id) | set(wgs.subject_id)]
subject = subject[['sex', 'age', 'family_id', 'father_id', 'mother_id', 'twin_id', 'ethnicity_group']]
fn = os.path.join(outdir, 'cnvs.tsv')
if not os.path.exists(fn):
    cnv.to_csv(fn, sep='\t')
rna.index.name = 'sample_id'
fn = os.path.join(outdir, 'rnaseq_metadata.tsv')
if not os.path.exists(fn):
    rna.to_csv(fn, sep='\t')
fn = os.path.join(outdir, 'subject_metadata.tsv')
if not os.path.exists(fn):
    subject.to_csv(fn, sep='\t')
fn = os.path.join(outdir, 'wgs_metadata.tsv')
if not os.path.exists(fn):
    wgs.to_csv(fn, sep='\t')
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
RNA-seq Data
dy = '/projects/CARDIPS/pipeline/RNAseq/combined_files' # STAR logs. fn = os.path.join(dy, 'star_logs.tsv') logs = pd.read_table(fn, index_col=0, low_memory=False) logs = logs.ix[rna.index] logs.index.name = 'sample_id' fn = os.path.join(outdir, 'star_logs.tsv') if not os.path.exists(fn): logs.to_csv(fn, sep='\t') # Picard stats. fn = os.path.join(dy, 'picard_metrics.tsv') picard = pd.read_table(fn, index_col=0, low_memory=False) picard = picard.ix[rna.index] picard.index.name = 'sample_id' fn = os.path.join(outdir, 'picard_metrics.tsv') if not os.path.exists(fn): picard.to_csv(fn, sep='\t') # Expression values. fn = os.path.join(dy, 'rsem_tpm_isoforms.tsv') tpm = pd.read_table(fn, index_col=0, low_memory=False) tpm = tpm[rna.index] fn = os.path.join(outdir, 'rsem_tpm_isoforms.tsv') if not os.path.exists(fn): tpm.to_csv(fn, sep='\t') fn = os.path.join(dy, 'rsem_tpm.tsv') tpm = pd.read_table(fn, index_col=0, low_memory=False) tpm = tpm[rna.index] fn = os.path.join(outdir, 'rsem_tpm.tsv') if not os.path.exists(fn): tpm.to_csv(fn, sep='\t') fn = os.path.join(dy, 'rsem_expected_counts.tsv') ec = pd.read_table(fn, index_col=0, low_memory=False) ec = ec[rna.index] fn = os.path.join(outdir, 'rsem_expected_counts.tsv') if not os.path.exists(fn): ec.to_csv(fn, sep='\t') ec_sf = cpb.analysis.deseq2_size_factors(ec.astype(int), meta=rna, design='~subject_id') fn = os.path.join(outdir, 'rsem_expected_counts_size_factors.tsv') if not os.path.exists(fn): ec_sf.to_csv(fn, sep='\t') ec_n = (ec / ec_sf) fn = os.path.join(outdir, 'rsem_expected_counts_norm.tsv') if not os.path.exists(fn): ec_n.to_csv(fn, sep='\t') fn = os.path.join(dy, 'gene_counts.tsv') gc = pd.read_table(fn, index_col=0, low_memory=False) gc = gc[rna.index] fn = os.path.join(outdir, 'gene_counts.tsv') if not os.path.exists(fn): gc.to_csv(fn, sep='\t') gc_sf = cpb.analysis.deseq2_size_factors(gc, meta=rna, design='~subject_id') fn = os.path.join(outdir, 'gene_counts_size_factors.tsv') if not os.path.exists(fn): 
gc_sf.to_csv(fn, sep='\t') gc_n = (gc / gc_sf) fn = os.path.join(outdir, 'gene_counts_norm.tsv') if not os.path.exists(fn): gc_n.to_csv(fn, sep='\t') # Allele counts. cpy.makedir(os.path.join(private_outdir, 'allele_counts')) fns = glob.glob('/projects/CARDIPS/pipeline/RNAseq/sample/' '*/*mbased/*mbased_input.tsv') fns = [x for x in fns if x.split('/')[-3] in rna.index] for fn in fns: new_fn = os.path.join(private_outdir, 'allele_counts', os.path.split(fn)[1]) if not os.path.exists(new_fn): os.symlink(fn, new_fn) # MBASED ASE results. dy = '/projects/CARDIPS/pipeline/RNAseq/combined_files' df = pd.read_table(os.path.join(dy, 'mbased_major_allele_freq.tsv'), index_col=0) df = df[rna.index].dropna(how='all') df.to_csv(os.path.join(outdir, 'mbased_major_allele_freq.tsv'), sep='\t') df = pd.read_table(os.path.join(dy, 'mbased_p_val_ase.tsv'), index_col=0) df = df[rna.index].dropna(how='all') df.to_csv(os.path.join(outdir, 'mbased_p_val_ase.tsv'), sep='\t') df = pd.read_table(os.path.join(dy, 'mbased_p_val_het.tsv'), index_col=0) df = df[rna.index].dropna(how='all') df.to_csv(os.path.join(outdir, 'mbased_p_val_het.tsv'), sep='\t') cpy.makedir(os.path.join(private_outdir, 'mbased_snv')) fns = glob.glob('/projects/CARDIPS/pipeline/RNAseq/sample/*/*mbased/*_snv.tsv') fns = [x for x in fns if x.split('/')[-3] in rna.index] for fn in fns: new_fn = os.path.join(private_outdir, 'mbased_snv', os.path.split(fn)[1]) if not os.path.exists(new_fn): os.symlink(fn, new_fn)
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
Variant Calls
fn = os.path.join(private_outdir, 'autosomal_variants.vcf.gz') if not os.path.exists(fn): os.symlink('/projects/CARDIPS/pipeline/WGS/mergedVCF/CARDIPS_201512.PASS.vcf.gz', fn) os.symlink('/projects/CARDIPS/pipeline/WGS/mergedVCF/CARDIPS_201512.PASS.vcf.gz.tbi', fn + '.tbi')
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
External Data I'm going to use the expression estimates for some samples from GSE73211.
fn = os.path.join(outdir, 'GSE73211.tsv') if not os.path.exists(fn): os.symlink('/projects/CARDIPS/pipeline/RNAseq/combined_files/GSE73211.tsv', fn) GSE73211 = pd.read_table(fn, index_col=0) dy = '/projects/CARDIPS/pipeline/RNAseq/combined_files' fn = os.path.join(dy, 'rsem_tpm.tsv') tpm = pd.read_table(fn, index_col=0, low_memory=False) tpm = tpm[GSE73211.index] fn = os.path.join(outdir, 'GSE73211_tpm.tsv') if not os.path.exists(fn): tpm.to_csv(fn, sep='\t')
notebooks/Input Data.ipynb
frazer-lab/cardips-ipsc-eqtl
mit
The Data object is initialized with the path to the directory of .pickle files. On creation it reads in the pickle files, but does not transform the data.
data = ICO.Data(os.getcwd()+'/data/')
ICO Data Object Example.ipynb
MATH497project/MATH497-DiabeticRetinopathy
mit
The data object can be accessed like a dictionary to retrieve the underlying DataFrames. These will be transformed into a normalized form on their first access. (This might take a while for the first access.)
start = time.time() data["all_encounter_data"] print(time.time() - start) data["all_encounter_data"].describe(include='all') data["all_encounter_data"].columns.values data['all_encounter_data'].shape[0] data['all_encounter_data'].to_pickle('all_encounter_data_Dan_20170415.pickle') start = time.time() data["all_person_data"] print(time.time() - start) data["all_person_data"].describe(include='all') data["all_person_data"].columns.values data['all_person_data'].shape[0] data['all_person_data'].to_pickle('all_person_data_Dan_20170415.pickle')
ICO Data Object Example.ipynb
MATH497project/MATH497-DiabeticRetinopathy
mit
The "random" agent selects (uniformly) at random from the set of valid moves. In Connect Four, a move is considered valid if there's still space in the column to place a disc (i.e., since the board has six rows, the column must contain fewer than six discs). In the code cell below, this agent plays one game round against a copy of itself.
# Two random agents play one game round env.run(["random", "random"]) # Show the game env.render(mode="ipython")
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
You can use the player above to view the game in detail: every move is captured and can be replayed. Try this now! As you'll soon see, this information will prove incredibly useful for brainstorming ways to improve our agents. Defining agents To participate in the competition, you'll create your own agents. Your agent should be implemented as a Python function that accepts two arguments: obs and config. It returns an integer with the selected column, where indexing starts at zero. So, the returned value is one of 0-6, inclusive. We'll start with a few examples, to provide some context. In the code cell below: - The first agent behaves identically to the "random" agent above. - The second agent always selects the middle column, whether it's valid or not! Note that if any agent selects an invalid move, it loses the game. - The third agent selects the leftmost valid column.
#$HIDE_INPUT$ import random import numpy as np # Selects random valid column def agent_random(obs, config): valid_moves = [col for col in range(config.columns) if obs.board[col] == 0] return random.choice(valid_moves) # Selects middle column def agent_middle(obs, config): return config.columns//2 # Selects leftmost valid column def agent_leftmost(obs, config): valid_moves = [col for col in range(config.columns) if obs.board[col] == 0] return valid_moves[0]
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
So, what are obs and config, exactly? obs obs contains two pieces of information: - obs.board - the game board (a Python list with one item for each grid location) - obs.mark - the piece assigned to the agent (either 1 or 2) obs.board is a Python list that shows the locations of the discs, where the first row appears first, followed by the second row, and so on. We use 1 to track player 1's discs, and 2 to track player 2's discs. For instance, for this game board: <center> <img src="https://i.imgur.com/kSYx4Nx.png" width=25%><br/> </center> obs.board would be [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 2, 1, 2, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 2, 1, 2, 0, 2, 0]. config config contains three pieces of information: - config.columns - number of columns in the game board (7 for Connect Four) - config.rows - number of rows in the game board (6 for Connect Four) - config.inarow - number of pieces a player needs to get in a row in order to win (4 for Connect Four) Take the time now to investigate the three agents we've defined above. Make sure that the code makes sense to you! Evaluating agents To have the custom agents play one game round, we use the same env.run() method as before.
# Agents play one game round env.run([agent_leftmost, agent_random]) # Show the game env.render(mode="ipython")
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
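Since obs.board is a flat list, it can help to reshape it into a (rows, columns) grid when reasoning about positions and valid moves. A small sketch using the example board from the text above (the reshape approach is just one convenient option, not part of the competition API):

```python
import numpy as np

# The example board from the text, as one flat list with the top row first
board = [0, 0, 0, 0, 0, 0, 0,
         0, 0, 1, 1, 0, 0, 0,
         0, 0, 2, 2, 0, 0, 0,
         0, 2, 1, 2, 0, 0, 0,
         0, 1, 1, 1, 0, 0, 0,
         0, 2, 1, 2, 0, 2, 0]
rows, columns = 6, 7

# Reshape into a 2D grid: grid[r][c] is the disc at row r, column c
grid = np.asarray(board).reshape(rows, columns)
print(grid)

# A column is a valid move when its top cell (row 0) is still empty
valid_moves = [c for c in range(columns) if board[c] == 0]
print(valid_moves)
```

With this particular board the top row is empty everywhere, so every column is still a valid move.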
The outcome of a single game is usually not enough information to figure out how well our agents are likely to perform. To get a better idea, we'll calculate the win percentages for each agent, averaged over multiple games. For fairness, each agent goes first half of the time. To do this, we'll use the get_win_percentages() function (defined in a hidden code cell). To view the details of this function, click on the "Code" button below.
#$HIDE_INPUT$ def get_win_percentages(agent1, agent2, n_rounds=100): # Use default Connect Four setup config = {'rows': 6, 'columns': 7, 'inarow': 4} # Agent 1 goes first (roughly) half the time outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2) # Agent 2 goes first (roughly) half the time outcomes += [[b,a] for [a,b] in evaluate("connectx", [agent2, agent1], config, [], n_rounds-n_rounds//2)] print("Agent 1 Win Percentage:", np.round(outcomes.count([1,-1])/len(outcomes), 2)) print("Agent 2 Win Percentage:", np.round(outcomes.count([-1,1])/len(outcomes), 2)) print("Number of Invalid Plays by Agent 1:", outcomes.count([None, 0])) print("Number of Invalid Plays by Agent 2:", outcomes.count([0, None]))
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
Which agent do you think performs better against the random agent: the agent that always plays in the middle (agent_middle), or the agent that chooses the leftmost valid column (agent_leftmost)? Let's find out!
get_win_percentages(agent1=agent_middle, agent2=agent_random) get_win_percentages(agent1=agent_leftmost, agent2=agent_random)
notebooks/game_ai/raw/tut1.ipynb
Kaggle/learntools
apache-2.0
Change $\lambda$ below to see its effect on the profile shape.
pohlPlot(lam=7)
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 1 What value of $\lambda$ denotes separated flow? $\lambda$<-12 $\lambda$=0 $\lambda$>12 Using the Pohlhausen profile, the various factors in the momentum integral equation are defined as $\frac{\delta_1}\delta = \int_0^1 (1-f) d\eta = \frac3{10}-\lambda\frac1{120}$ $\frac{\delta_2}\delta = \int_0^1 f(1-f) d\eta = \frac{37}{315}-\lambda\frac1{945}-\lambda^2\frac1{9072}$ $\frac 12 c_f Re_\delta =f'(0)= 2+\lambda\frac1{6}$ where $Re_\delta = \frac{u_e\delta}\nu$ is the local boundary layer Reynolds number.
def disp_ratio(lam): return 3./10.-lam/120. def mom_ratio(lam): return 37./315.-lam/945.-lam**2/9072. def df_0(lam): return 2+lam/6. pyplot.xlabel(r'$\lambda$', fontsize=16) lam = numpy.linspace(-12,12,100) pyplot.plot(lam,disp_ratio(lam), lw=2, label=r'$\delta_1/\delta$') pyplot.plot(lam,mom_ratio(lam), lw=2, label=r'$\delta_2/\delta$') pyplot.plot(lam,df_0(lam)/10., lw=2, label=r'$c_f Re_\delta/20$') pyplot.legend(loc='upper right')
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Note that these are all polynomial functions of $\lambda$. Since $u_e$ is given by potential flow and $\lambda = \frac {\delta^2}\nu u_e'$, the only unknown in the momentum equation is now $\delta(x)$! Stagnation point condition Now we need to write the momentum equation in terms of $\delta$ (and $\lambda$) and solve. This equation needs to be valid from the leading edge all the way to the point of separation. For any body with finite thickness the boundary layer will begin at the stagnation point at the front of the body. However, describing the boundary layer at a stagnation point is somewhat tricky. Quiz 2 Which relationships are true at a stagnation point? $u_e = 0$ $u_e' = 0$ $\delta/x << 1$ $c_f$ is singular That's no good - the momentum equation will be singular at the leading edge. We can avoid this problem by multiplying the whole equation by $Re_\delta$, leading to: $$ \frac 12 c_f Re_\delta = \frac\delta\nu u_e' [\delta_1+2\delta_2]+Re_\delta \delta_2'$$ The first term on the RHS can be simplified by dividing the brackets by $\delta$ and multiplying by $\delta$ outside to produce the definition of $\lambda$. This lets us group the terms only dependent on $\lambda$ together to define $$ g_1(\lambda) = \frac 12 c_f Re_\delta - \lambda \left[\frac{\delta_1}{\delta}+2\frac{\delta_2}\delta\right]$$
def g_1(lam): return df_0(lam)-lam*(disp_ratio(lam)+2*mom_ratio(lam))
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Using this definition, the momentum equation is $$ g_1(\lambda) = Re_\delta \delta_2'$$ Quiz 3 The equation above further simplifies at the stagnation point. Which is correct? $g_1 = 0$ $g_1 = Re_\delta$ $ \frac 12 c_f = 0$ Solving this equations will determine our initial condition $\lambda_0$. Using my vast google skills I found the bisect function in scipy.optimize which will solve for the root.
from scipy.optimize import bisect lam0 = bisect(g_1,-12,12) # use bisect method to find root between -12...12 print 'lambda_0 = ',lam0
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
With the value of $\lambda_0$ determined, the initial condition $\delta_0$ is simply $$ \delta_0 = \sqrt{\frac{\nu \lambda_0}{u_e'(x_0)}} $$ Pohlhausen momentum equation The only thing left to do is write $\delta_2'$ in terms of $\delta'$. Using $F=\frac{\delta_2}\delta$ we have $$ \delta_2' = \frac{d}{dx}(F\delta) $$ From the line plot above, we see that $F$ is nearly unchanged across the whole range of $\lambda$, so we will treat it as a constant. Therefore the complete Pohlhausen momentum equation is $$ g_1 = Re_\delta F \delta'$$ Isolating the derivative, we have $$ \delta'= \frac{g_1(\lambda)}{Re_\delta F(\lambda)} $$
def ddx_delta(Re_d,lam): if Re_d==0: return 0 # Stagnation point condition return g_1(lam)/mom_ratio(lam)/Re_d # delta'
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Lets plot the functions of $\lambda$ to get a feel for how the boundary layer will develop.
pyplot.xlabel(r'$\lambda$', fontsize=16) pyplot.ylabel(r'$g_1/F$', fontsize=16) pyplot.plot(lam,ddx_delta(1,lam), lw=2) pyplot.scatter(lam0,0, s=100, c='r') pyplot.text(lam0,3, r'$\lambda_0$',fontsize=15)
lessons/.ipynb_checkpoints/BoundaryLayerSolver-checkpoint.ipynb
ultiyuan/test0
gpl-2.0
Quiz 4 What will happen if $\lambda>\lambda_0$? Flat plate boundary layer flow. The boundary layer will shrink. The Pohlhausen equation will be singular. Ordinary differential equations The momentum equation above is an ordinary differential equation (ODE), having the form $$ \psi' = g(\psi(x),x) $$ where the derivative is only a function of the variable $\psi$ and one independent variable $x$. All ODEs have an important feature in common: Mathematics fundamental: ODEs Systems whose evolution depends only on their current state This makes them easier to solve. If we integrate the ODE from $x_0$ to $x_1$ we have $$ \psi(x_1) = \psi_1= \psi_0+\int_{x_0}^{x_1} g(\psi(x),x) dx $$ which means all we need to solve for $\psi_1$ is the initial condition $\psi_0$ and an estimate of the RHS integral. And once we have $\psi_1$, we can get $\psi_2$, etc. In general we have $$ \psi_{i+1}= \psi_i+\int_{x_i}^{x_{i+1}} g(\psi(x),x) dx \quad i=0,\ldots, N-1$$ This means the ODE can be solved by marching from $x=0$ to $x=L$. Compare this to the vortex panel method and its linear system of equations that needed to be solved simultaneously using matrices... This is easy. Numerical integration You've seen numerical ways to determine the area under a curve before, like the trapezoidal rule $$ \int_{x_i}^{x_{i+1}} f(x) dx \approx \frac12[f(x_i)+f(x_{i+1})] \Delta x$$ where $\Delta x=x_{i+1}-x_i$ Quiz 5 What is the important difference between the integral above and the ODE integral? $\psi_{i+1}$ is unknown $g$ is unknown $g$ is nonlinear This means we have to split the numerical method into two steps. First we estimate the integral as $g(\psi_i,x_i)\Delta x$. This lets us predict an estimate of $\psi_{i+1}$ $$ \tilde\psi_{i+1}= \psi_i+ g(\psi_i,x_i)\Delta x $$ However, this one-sided estimate of the integral is very rough. 
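As an aside, the trapezoidal rule itself is easy to sanity-check on an integral with a known answer. A minimal sketch, using $f(x)=e^x$ on $[0,1]$ purely for illustration:

```python
import numpy

def trapz(f, a, b, n):
    # composite trapezoidal rule with n panels on [a, b]
    x = numpy.linspace(a, b, n+1)
    y = f(x)
    dx = (b-a)/float(n)
    return dx*(0.5*y[0] + y[1:-1].sum() + 0.5*y[-1])

# integrate exp(x) from 0 to 1; the exact answer is e-1
approx = trapz(numpy.exp, 0., 1., 100)
exact = numpy.e - 1.
print(abs(approx - exact))
```

With 100 panels the error is tiny, which is why the trapezoidal rule makes a good corrector below.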
In the next step we correct the prediction using the trapezoidal rule

$$ \psi_{i+1}= \psi_i+ \frac12[g(\psi_i,x_i)+g(\tilde\psi_{i+1},x_{i+1})]\Delta x$$

This is often called the predictor/corrector method, or Heun's method. Let's code it up:
def heun(g, psi_i, i, dx, *args):
    g_i = g(psi_i, i, *args)             # integrand at i
    tilde_psi = psi_i + g_i*dx           # predicted estimate at i+1
    g_i_1 = g(tilde_psi, i+1, *args)     # integrand at i+1
    return psi_i + 0.5*(g_i + g_i_1)*dx  # corrected estimate
In this code we've made the integrand `g` a function of `psi` and the index `i`. Note that `g_i_1` $=g_{i+1}$ and we've passed $i+1$ as the index. We've also left the option for additional arguments to be passed to `g` as `*args`, which is required for the boundary layer ODE. Before we get to that, let's test `heun` using $\psi'=\psi$ with $\psi_0=1$, since we know the solution is $\psi = e^x$
N = 20                              # number of steps
x = numpy.linspace(0, numpy.pi, N)  # set up x array from 0..pi
psi = numpy.full_like(x, 1.)        # psi array with psi0=1

def g_test(psi, i): return psi      # define derivative function

for i in range(N-1):                # march!
    psi[i+1] = heun(g_test, psi[i], i, (x[i+1]-x[i]))

pyplot.plot(x, psi)
pyplot.plot(x, numpy.exp(x))
print('exp(pi) ~ ', psi[N-1], ', error = ', 1 - psi[N-1]/numpy.exp(numpy.pi))
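As an aside, we can quantify how much accuracy the corrector step buys by comparing plain forward Euler (the predictor alone) against Heun's method on the same test problem, $\psi'=\psi$. This is a self-contained sketch with standalone helper functions rather than the notebook's `heun`:

```python
import numpy as np

def euler_step(g, y, x, dx):
    return y + g(y, x)*dx                           # one-sided estimate only

def heun_step(g, y, x, dx):
    y_tilde = y + g(y, x)*dx                        # predict (forward Euler)
    return y + 0.5*(g(y, x) + g(y_tilde, x+dx))*dx  # correct (trapezoidal)

g = lambda y, x: y                                  # test ODE: y' = y, y(0) = 1
N = 20
x = np.linspace(0, np.pi, N)
y_euler, y_heun = 1.0, 1.0
for i in range(N-1):
    dx = x[i+1] - x[i]
    y_euler = euler_step(g, y_euler, x[i], dx)
    y_heun = heun_step(g, y_heun, x[i], dx)

err_euler = abs(1 - y_euler/np.exp(np.pi))          # relative error vs exact e^pi
err_heun = abs(1 - y_heun/np.exp(np.pi))
print(f"Euler error: {err_euler:.1%}, Heun error: {err_heun:.1%}")
```

With 20 steps the prediction-only march loses roughly a fifth of the exact value, while the corrected march stays near 1%.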
Looks good, only 1% error.

**Bonus:** What is the error if we don't do the correction step?

## Boundary layer on a circle

Returning to the boundary layer ODE, we first define a function which can be integrated by `heun`:
def g_pohl(delta_i, i, u_e, du_e, nu):
    Re_d = delta_i*u_e[i]/nu         # compute local Reynolds number
    lam = delta_i**2*du_e[i]/nu      # compute local lambda
    return ddx_delta(Re_d, lam)      # get derivative
where `u_e`, `du_e`, and `nu` are the extra arguments needed to compute $Re_\delta$ and $\lambda$. Then we use this function and `heun` to march from the initial condition $\lambda_0,\delta_0$ along the boundary layer until we reach the point of separation at $\lambda<-12$:
def march(x, u_e, du_e, nu):
    delta0 = numpy.sqrt(lam0*nu/du_e[0])      # set delta0
    delta = numpy.full_like(x, delta0)        # delta array
    lam = numpy.full_like(x, lam0)            # lambda array
    for i in range(len(x)-1):                 # march!
        delta[i+1] = heun(g_pohl, delta[i], i, x[i+1]-x[i],  # integrate BL using...
                          u_e, du_e, nu)      # additional arguments
        lam[i+1] = delta[i+1]**2*du_e[i+1]/nu # compute lambda
        if abs(lam[i+1]) > 12: break          # check stop condition
    return delta, lam, i                      # return with separation index
and we're done! Let's test it on the flow around a circle. In this case the boundary layer will march around the circle from $s=0,\ldots,R\pi$. Let's set the parameters $R=1$, $U_\infty=1$ and $Re_R=10^4$, such that $\nu=10^{-4}$. The tangential velocity around a circular cylinder using potential flow is simply

$$u_e = 2\sin(s)$$

Now that we've defined `march` we can set up and solve for the boundary layer in just a few lines of code:
nu = 1e-4                                   # viscosity
N = 32                                      # number of steps
s = numpy.linspace(0, numpy.pi, N)          # distance goes from 0..pi
u_e = 2.*numpy.sin(s)                       # velocity
du_e = 2.*numpy.cos(s)                      # gradient
delta, lam, iSep = march(s, u_e, du_e, nu)  # solve!
Let's plot the boundary layer thickness on the circle compared to the exact solution for a laminar flat-plate boundary layer from Blasius.
pyplot.ylabel(r'$\delta/R$', fontsize=16)
pyplot.xlabel(r'$s/R$', fontsize=16)
pyplot.plot(s[:iSep+1], delta[:iSep+1], lw=2, label='Circle')
pyplot.plot(s, s*5/numpy.sqrt(s/nu), lw=2, label='Flat plate')
pyplot.legend(loc='upper left')
pyplot.scatter(s[iSep], delta[iSep], s=100, c='r')
pyplot.text(s[iSep]+0.1, delta[iSep], 'separation between\n'
            + '%.2f' % s[iSep] + '<s<' + '%.2f' % s[iSep+1], fontsize=12)
The circle solution is completely different due to the external pressure gradients:

- The boundary layer growth is stunted on the front body
- $\delta$ increases rapidly as the flow approaches the midbody
- The flow separates around $1.87$ radians $\approx 107^o$

This is in good agreement with Hoerner's *Fluid-Dynamic Drag*, which states that theoretical laminar separation occurs at ~$110^o$ for this case.

## Quiz 6

How does the separation point depend on $\nu$?

1. Increasing $\nu$ delays separation
2. Decreasing $\nu$ delays separation
3. Changing $\nu$ has no effect on separation

We know analytically that $\delta$ scales as $\sqrt\nu$ (which you can double-check using the code above), and therefore $\lambda=\frac{\delta^2}\nu u_e'$ doesn't depend on $\nu$ at all. Since the separation point is determined by $\lambda$, it is also independent of $\nu$.

> **Fluids fundamental: Separation Point.** The point of laminar separation is independent of $Re$.

This is not true of a turbulent boundary layer.

## Quiz 7

How can you compute the total friction drag coefficient $C_F$ on the circle?

1. Use the flat plate estimate, I'm sure that will be fine...
2. Compute $\tau_w=\frac 12 c_f \rho u_e^2 $ and integrate numerically

*Hint:* `numpy.trapz`

## Your turn

Determine $C_F=\frac {2F_F}{\rho U_\infty^2 S}$, where $F_F = \int \tau_w s_x ds$ is the 2D friction drag and $S$ is the 2D surface area, and compare it to the flat plate solution: $1.33 Re^{-1/2}$.
# your code here
Please ignore the cell below. It just loads our style for the notebooks.
from IPython.core.display import HTML

def css_styling():
    styles = open('../styles/custom.css', 'r').read()
    return HTML(styles)

css_styling()
## Hantush versus HantushWellModel

The reason there are two implementations in Pastas is that each implementation currently has advantages and disadvantages. We will discuss those soon, but first let's introduce the two implementations.

The two Hantush response functions are very similar, but differ in the definition of the parameters. The table below shows the formulas for both implementations.

| Name | Parameters | Formula | Description |
|------------------|-------------|:------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| Hantush | 3 - A, a, b | $$ \theta(t) = At^{-1} e^{-t/a - ab/t} $$ | Response function commonly used for groundwater abstraction wells. |
| HantushWellModel | 3 - A, a, b | $$ \theta(t) = A K_0 \left( \sqrt{4b} \right) t^{-1} e^{-t/a - ab/t} $$ | Implementation of the Hantush well function that allows scaling with distance. |

In the first implementation the parameters $A$, $a$, and $b$ can be written as:

$$
\begin{align}
A &= \frac{1}{2 \pi T} K_0 \left( \sqrt{4b} \right) \\
a &= cS \\
b &= \frac{r^2}{4 \lambda^2}
\end{align}
$$

In this case parameter $A$ is also known as the "gain", which is equal to the steady-state contribution of a stress with unit 1. For example, the drawdown caused by a well with a continuous extraction rate of 1.0 (the units don't really matter here and are determined by what units the user puts in).

In the second implementation, the definition of the parameter $A$ is different, which allows the distance $r$ between an extraction well and an observation well to be passed as a variable. This allows multiple wells to share the same response function, which can be useful to e.g. reduce the number of parameters in a model with multiple extraction wells. When $r$ is passed as a parameter, the formula for $b$ below is simplified by substituting in $1$ for $r$. Note that $r$ is never optimized, but has to be provided by the user.

$$
\begin{align}
A &= \frac{1}{2 \pi T} \\
a &= cS \\
b &= \frac{r^2}{4 \lambda^2}
\end{align}
$$

## Which Hantush should I use?

So why two implementations? Well, there are advantages and disadvantages to both implementations, which are listed below.

<!-- Table does not render pretty in docs...
|Name | Pro| Con|
|:--|:----|:-----|
|**Hantush**|<ul><li>Parameter A is the gain, which makes it easier to interpret the results.</li> <li>Estimates the uncertainty of the gain directly.</li></ul>|<ul><li>Cannot be used to simulate multiple wells.</li><li>More challenging to relate to aquifer characteristics.</li></ul>|
|**HantushWellModel**|<ul><li>Can be used with WellModel to simulate multiple wells with one response function.</li><li>Easier to relate parameters to aquifer characteristics.</li></ul>|<ul><li>Does not directly estimate the uncertainty of the gain but can be calculated using special methods.</li><li>More sensitive to the initial value of parameters, in rare cases the initial parameter values have to be tweaked to get a good fit result.</li></ul>|
-->

**Hantush**

Pro:

- Parameter A is the gain, which makes it easier to interpret the results.
- Estimates the uncertainty of the gain directly.

Con:

- Cannot be used to simulate multiple wells.
- More challenging to relate to aquifer characteristics.

**HantushWellModel**

Pro:

- Can be used with WellModel to simulate multiple wells with one response function.
- Easier to relate parameters to aquifer characteristics.

Con:

- Does not directly estimate the uncertainty of the gain, but this can be calculated using special methods.
- More sensitive to the initial value of parameters; in rare cases the initial parameter values have to be tweaked to get a good fit result.

So which one should you use? It depends on your use case:

- Use **Hantush** if you are considering a single extraction well and you're interested in calculating the gain and the uncertainty of the gain.
- Use **HantushWellModel** if you are simulating multiple extraction wells or want to pass the distance between extraction and observation well as a known parameter.

Of course these aren't strict rules, and it is encouraged to explore different model structures when building your time series models. But as a first general guiding principle, this should help in selecting which approach is appropriate for your specific problem.

## Synthetic example

A synthetic example is used to show both Hantush implementations. First, we create a synthetic time series generated with the Hantush response function, to which we add autocorrelated residuals. We set the parameter values for the Hantush response function:
# A defined so that 100 m3/day results in 5 m drawdown
A = -5 / 100.0
a = 200
b = 0.5
d = 0.0  # reference level

# auto-correlated residuals AR(1)
sigma_n = 0.05
alpha = 50
sigma_r = sigma_n / np.sqrt(1 - np.exp(-2 * 14 / alpha))
print(f'sigma_r = {sigma_r:.2f} m')
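The auto-correlated residuals can then be generated as an AR(1) process whose lag-one correlation over the 14-day observation interval is $\varphi = e^{-\Delta t/\alpha}$. The generation code is not shown in this excerpt, but a minimal sketch (with a long series so the sample statistics converge) looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, sigma_n, dt = 50.0, 0.05, 14.0
n = 100_000                        # long series so the sample std converges

phi = np.exp(-dt / alpha)          # AR(1) coefficient for a 14-day step
innovations = rng.normal(0.0, sigma_n, n)
r = np.zeros(n)
for i in range(1, n):
    r[i] = phi * r[i-1] + innovations[i]

# the stationary standard deviation should match sigma_r above
print(r.std(), sigma_n / np.sqrt(1 - phi**2))
```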
concepts/hantush_response.ipynb
pastas/pasta
mit
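As an aside, the $K_0$ factor that separates the two parameterizations can be checked numerically: the time integral of the Hantush kernel $t^{-1} e^{-t/a - ab/t}$ equals $2 K_0\left(\sqrt{4b}\right)$ exactly, which is where the extra $K_0$ term in the HantushWellModel formula comes from. A sketch using scipy, with the $a$ and $b$ values above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

a, b = 200.0, 0.5  # parameter values from the synthetic example

# numerically integrate the Hantush kernel over all time
integral, _ = quad(lambda t: np.exp(-t/a - a*b/t) / t, 0, np.inf)

# closed form for the same integral
closed_form = 2 * k0(np.sqrt(4 * b))
print(integral, closed_form)
```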
## Two ways to get a Markov chain

There are two ways to generate a Markov chain:

1. Parse one or more sequences of states. This will be turned into a transition matrix, from which a probability matrix will be computed.
2. Directly from a transition matrix, if you already have that data.

Let's look at the transition matrix first, since that's how Powers & Easterling presented the data in their paper on this topic.

## Powers & Easterling data

Key reference: Powers, DW and RG Easterling (1982). Improved methodology for using embedded Markov chains to describe cyclical sediments. *Journal of Sedimentary Petrology* **52** (3), p 913&ndash;923.

Let's use one of the examples in Powers & Easterling &mdash; they use this transition matrix from Gingerich, PD (1969). Markov analysis of cyclic alluvial sediments. *Journal of Sedimentary Petrology*, **39**, p. 330&ndash;332. https://doi.org/10.1306/74D71C4E-2B21-11D7-8648000102C1865D
from striplog.markov import Markov_chain

data = [[ 0, 37,  3,  2],
        [21,  0, 41, 14],
        [20, 25,  0,  0],
        [ 1, 14,  1,  0]]

m = Markov_chain(data, states=['A', 'B', 'C', 'D'])
m
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
Note that they do not include self-transitions in their data, so the elements being counted are simple 'beds', not 'depth samples' (say). If you build a Markov chain using a matrix with self-transitions (i.e. non-zero diagonal entries), these will be preserved, as in the example here:
data = [[10, 37,  3,  2],
        [21, 20, 41, 14],
        [20, 25, 20,  0],
        [ 1, 14,  1, 10]]

Markov_chain(data)
## Testing for independence

We use the model of quasi-independence given in Powers & Easterling, as opposed to a full-independence model like `scipy.stats.chi2_contingency()`, for computing chi-squared and the expected transitions.

First, let's look at the expected transition frequencies of the original Powers & Easterling data:
import numpy as np

np.set_printoptions(precision=3)
m.expected_counts
The $\chi^2$ statistic shows the value for the observed ordering, along with the critical value at (by default) the 95% confidence level. If the first number is higher than the second number (ideally much higher), then we can reject the hypothesis that the ordering is quasi-independent. That is, we have shown that the ordering is non-random.
m.chi_squared()
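For contrast with the quasi-independence model, the full-independence expected counts and $\chi^2$ for the same matrix can be computed with `scipy.stats.chi2_contingency`. The numbers will differ from `m.chi_squared()`, because this model does not exclude the diagonal (self-transition) cells:

```python
import numpy as np
from scipy.stats import chi2_contingency

data = np.array([[ 0, 37,  3,  2],
                 [21,  0, 41, 14],
                 [20, 25,  0,  0],
                 [ 1, 14,  1,  0]])

# full-independence model: expected[i, j] = row_sum[i] * col_sum[j] / total
chi2, p, dof, expected = chi2_contingency(data)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```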
## Which transitions are interesting?

The normalized difference shows which transitions are 'interesting'. These numbers can be interpreted as standard deviations away from the model of quasi-independence. That is, transitions with large positive numbers represent passages that occur more often than might be expected. Any numbers greater than 2 are likely to be important.
m.normalized_difference
We can visualize this as an image:
m.plot_norm_diff()
We can also interpret this matrix as a graph. The upward transitions from C to A are particularly strong in this one, while transitions from A to C happen less often than we'd expect; those from B to D and D to B, less so. It's arguably easier to read the data when the matrix is drawn as a directed graph:
m.plot_graph()
We can look at an undirected version of the graph too. It downplays non-reciprocal relationships. I'm not sure this is useful...
m.plot_graph(directed=False)
## Generating random sequences

We can generate a random succession of beds with the same transition statistics:
''.join(m.generate_states(n=30))
Again, this will respect the absence or presence of self-transitions, e.g.:
data = [[10, 37,  3,  2],
        [21, 20, 41, 14],
        [20, 25, 20,  0],
        [ 1, 14,  1, 10]]

x = Markov_chain(data)
''.join(map(str, x.generate_states(n=30)))
## Parse states

Striplog can interpret various kinds of data as sequences of states. For example, it can get the unique elements and a 'sequence of sequences' from:

- A simple list of states, e.g. `[1,2,2,2,1,1]`
- A string of states, e.g. `'ABBDDDDDCCCC'`
- A list of lists of states, e.g. `[[1,2,2,3], [2,4,2]]` (NB, not the same length)
- A list of strings of states, e.g. `['aaabb', 'aabbccc']` (NB, not the same length)
- A list of state names, e.g. `['sst', 'mud', 'sst']` (requires optional argument)
- A list of lists of state names, e.g. `[['SS', 'M', 'SS'], ['M', 'M', 'LS']]`

The corresponding sets of unique states look like:

- `[1, 2]`
- `['A', 'B', 'C', 'D']`
- `[1, 2, 3, 4]`
- `['a', 'b', 'c']`
- `['mud', 'sst']`
- `['LS', 'M', 'SS']`

Let's look at a data example...

## Data from Matt's thesis

These are the transitions from some measured sections in my PhD thesis. They start at the bottom, so in Log 7, we start with lithofacies 1 (offshore mudstone) and pass upwards into lithofacies 3, then back into 1, then 3, and so on.

We can instantiate a `Markov_chain` object from a sequence using its `from_sequence()` method. This expects either a sequence of 'states' (numbers or letters or strings representing rock types) or a sequence of sequences of states.
data = {
    'log7':  [1, 3, 1, 3, 5, 1, 2, 1, 3, 1, 5, 6, 1, 2, 1, 2, 1, 2, 1, 3, 5, 6, 5, 1],
    'log9':  [1, 3, 1, 5, 1, 5, 3, 1, 2, 1, 2, 1, 3, 5, 1, 5, 6, 5, 6, 1, 2, 1, 5, 6, 1],
    'log11': [1, 3, 1, 2, 1, 5, 3, 1, 2, 1, 2, 1, 3, 5, 3, 5, 1, 9, 5, 5, 5, 5, 6, 1],
    'log12': [1, 5, 3, 1, 2, 1, 2, 1, 2, 1, 4, 5, 6, 1, 2, 1, 4, 5, 1, 5, 5, 5, 1, 2, 1, 8, 9, 10, 9, 5, 1],
    'log13': [1, 6, 1, 3, 1, 3, 5, 3, 6, 1, 6, 5, 3, 1, 5, 1, 2, 1, 4, 3, 5, 3, 4, 3, 5, 1, 5, 9, 11, 9, 1],
    'log14': [1, 3, 1, 5, 8, 5, 6, 1, 3, 4, 5, 3, 1, 3, 5, 1, 7, 7, 7, 1, 7, 1, 3, 8, 5, 5, 1, 5, 9, 9, 11, 9, 1],
    'log15': [1, 8, 1, 3, 5, 1, 2, 3, 6, 3, 6, 5, 2, 1, 2, 1, 8, 5, 1, 5, 9, 9, 11, 1],
    'log16': [1, 8, 1, 5, 1, 5, 5, 6, 1, 3, 5, 3, 5, 5, 5, 8, 5, 1, 9, 9, 3, 1],
    'log17': [1, 3, 8, 1, 8, 5, 1, 8, 9, 5, 10, 5, 8, 9, 10, 8, 5, 1, 8, 9, 1],
    'log18': [1, 8, 2, 1, 2, 1, 10, 8, 9, 5, 5, 1, 2, 1, 2, 9, 5, 9, 5, 8, 5, 9, 1],
}

logs = list(data.values())
m = Markov_chain.from_sequence(logs)
m.states
m.observed_counts
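Under the hood, `from_sequence()` has to count first-order transitions. A minimal sketch of that counting step (a hypothetical helper, not striplog's actual implementation), skipping self-transitions as striplog does by default:

```python
import numpy as np

def count_transitions(seq, states):
    """Count first-order transitions in seq, skipping self-transitions."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        if a != b:  # ignore self-transitions
            counts[idx[a], idx[b]] += 1
    return counts

count_transitions('ABBCAB', list('ABC'))
# array([[0, 2, 0],
#        [0, 0, 1],
#        [1, 0, 0]])
```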
Let's check out the matrix of expected transition counts:
m.expected_counts
It's hard to read this but we can change NumPy's display options:
np.set_printoptions(suppress=True, precision=1, linewidth=120)
m.normalized_difference
Or use a graphical view:
m.plot_norm_diff()
And the graph version. Note you can re-run this cell to rearrange the graph.
m.plot_graph(figsize=(15,15), max_size=2400, edge_labels=True, seed=13)
<div style="border: solid red 2px; border-radius:5px; background-color:#ffeeee; padding:10px 10px 20px 10px;">
<h3>Experimental implementation!</h3>
<p>Multistep Markov chains are a bit of an experiment in <code>striplog</code>. Please <a href="mailto:matt@agilescientific.com">get in touch</a> if you have thoughts about how it should work.</p>
</div>

## Multistep transitions

### Step = 2

So far we've just been looking at direct this-to-that transitions, i.e. only considering each previous transition. What if we use the previous-but-one?
m = Markov_chain.from_sequence(logs, step=2)
m
Note that self-transitions are ignored by default. With multi-step transitions, this results in a lot of zeros because we don't just eliminate transitions like 1 > 1 > 1, but also 1 > 1 > 2 and 1 > 1 > 3, etc, as well as 1 > 2 > 2, 1 > 3 > 3, etc, and 2 > 1 > 1, 3 > 1 > 1, etc. (But we will keep 1 > 2 > 1.) Now we have a 3D array of transition probabilities.
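Counting two-step transitions can be sketched by tallying triples and dropping any triple that contains a self-transition (a hypothetical helper, not striplog's code). Note that a triple like 1 > 2 > 1 is kept, since neither of its two steps is a self-transition:

```python
import numpy as np

def count_transitions_2step(seq, states):
    """Count two-step transitions, dropping triples with a self-transition."""
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = np.zeros((n, n, n), dtype=int)
    for a, b, c in zip(seq[:-2], seq[1:-1], seq[2:]):
        if a != b and b != c:  # drop e.g. 1 > 1 > 2, but keep 1 > 2 > 1
            counts[idx[a], idx[b], idx[c]] += 1
    return counts

count_transitions_2step('ABCAB', list('ABC')).sum()  # -> 3 (A>B>C, B>C>A, C>A>B)
```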
m.normalized_difference.shape
This is hard to inspect! Let's just get the indices of the highest values. If we add one to the indices, we'll have a handy list of facies number transitions, since these are just 1 to 11. So we can interpret these as transitions with anomalously high probability.

<img src="Normal_distribution.png" />

There are 11 &times; 11 &times; 11 = 1331 transitions in this array (including self-transitions).
cutoff = 5  # 1.96 is 95% confidence
idx = np.where(m.normalized_difference > cutoff)
locs = np.array(list(zip(*idx)))
scores = {tuple(loc+1): score for score, loc in zip(m.normalized_difference[idx], locs)}
for (a, b, c), score in sorted(scores.items(), key=lambda pair: pair[1], reverse=True):
    print(f"{a:<2} -> {b:<2} -> {c:<2} {score:.3f}")

m.chi_squared()
Unfortunately, it's a bit harder to draw this as a graph. Technically, it's a hypergraph.
# This should error for now.
# m.plot_graph(figsize=(15,15), max_size=2400, edge_labels=True)
Also, the expected counts or frequencies are a bit... hard to interpret:
np.set_printoptions(suppress=True, precision=3, linewidth=120)
m.expected_counts
### Step = 3

We can actually make a model for any number of steps, but we will need commensurately more data, especially if we're not going to include self-transitions:
m = Markov_chain.from_sequence(logs, step=3)
m
I have no idea how to visualize or interpret this thing, so let me know if you do something with it!

## From `Striplog()` instance

I use striplog to represent stratigraphy. Let's make a Markov chain model from an instance of striplog!

First, we'll make a striplog by applying a couple of cut-offs to a GR log:
from welly import Well
from striplog import Striplog, Component

w = Well.from_las("P-129_out.LAS")
gr = w.data['GR']

comps = [Component({'lithology': 'sandstone'}),
         Component({'lithology': 'greywacke'}),
         Component({'lithology': 'shale'}),
         ]

s = Striplog.from_log(gr, cutoff=[30, 90], components=comps, basis=gr.basis)
s
s.plot()