vascotenner/holoviews
doc/Tutorials/Bokeh_Backend.ipynb
bsd-3-clause
import numpy as np import pandas as pd import holoviews as hv hv.notebook_extension('bokeh') """ Explanation: <div class="alert alert-info" role="alert"> This tutorial contains a lot of bokeh plots, which may take a little while to load and render. </div> One of the major design principles of HoloViews is that the declaration of data is completely independent from the plotting implementation. This means that the visualization of HoloViews data structures can be performed by different plotting backends. As part of the 1.4 release of HoloViews, a Bokeh backend was added in addition to the default matplotlib backend. Bokeh provides a powerful platform to generate interactive plots using HTML5 canvas and WebGL, and is ideally suited towards interactive exploration of data. By combining the ease of generating interactive, high-dimensional visualizations with the interactive widgets and fast rendering provided by Bokeh, HoloViews becomes even more powerful. This tutorial will cover some basic options on how to style and change various plot attributes and explore some of the more advanced features like interactive tools, linked axes, and brushing. As usual, the first thing we do is initialize the HoloViews notebook extension, but we now specify the backend specifically. 
End of explanation """ from holoviews.plotting.bokeh.element import (line_properties, fill_properties, text_properties) print(""" Line properties: %s\n Fill properties: %s\n Text properties: %s """ % (line_properties, fill_properties, text_properties)) """ Explanation: We could instead leave the default backend as 'matplotlib', and then switch only some specific cells to use bokeh using a cell magic: python %%output backend='bokeh' obj Element Style options Most Bokeh Elements support a mixture of the following fill, line, and text style customization options: End of explanation """ curve_opts = dict(line_width=10, line_color='indianred', line_dash='dotted', line_alpha=0.5) point_opts = dict(fill_color='#00AA00', fill_alpha=0.5, line_width=1, line_color='black', size=5) text_opts = dict(text_align='center', text_baseline='middle', text_color='gray', text_font='Arial') xs = np.linspace(0, np.pi*4, 100) data = (xs, np.sin(xs)) (hv.Curve(data)(style=curve_opts) + hv.Points(data)(style=point_opts) + hv.Text(6, 0, 'Here is some text')(style=text_opts)) """ Explanation: Here's an example of HoloViews Elements using a Bokeh backend, with bokeh's style options: End of explanation """ %%opts Points.A [width=300 height=300] Points.B [width=600 height=300] hv.Points(data, group='A') + hv.Points(data, group='B') """ Explanation: Notice that because the first two plots use the same underlying data, they become linked, such that zooming or panning one of the plots makes the corresponding change on the other. Sizing Elements Sizing and aspect of Elements in bokeh are always computed in absolute pixels. To change the size or aspect of an Element set the width and height plot options. 
End of explanation """ points = hv.Points(np.exp(xs)) axes_opts = [('Plain', {}), ('Log', {'logy': True}), ('None', {'yaxis': None}), ('Rotate', {'xrotation': 90}), ('N Ticks', {'xticks': 3}), ('List Ticks', {'xticks': [0, 100, 300, 500]})] hv.Layout([points.relabel(group=group)(plot=opts) for group, opts in axes_opts]).cols(3).display('all') """ Explanation: Controlling axes Bokeh provides a variety of options to control the axes. Here we provide a quick overview of linked plots for the same data displayed differently by applying log axes, disabling axes, rotating ticks, specifying the number of ticks, and supplying an explicit list of ticks. End of explanation """ %%opts Overlay [width=600 legend_position='top_left'] try: import bokeh.sampledata.stocks except: import bokeh.sampledata bokeh.sampledata.download() from bokeh.sampledata.stocks import GOOG, AAPL goog_dates = np.array(GOOG['date'], dtype=np.datetime64) aapl_dates = np.array(AAPL['date'], dtype=np.datetime64) hv.Curve((goog_dates, GOOG['adj_close']), kdims=['Date'], vdims=['Stock Index'], label='Google') *\ hv.Curve((aapl_dates, AAPL['adj_close']), kdims=['Date'], vdims=['Stock Index'], label='Apple') """ Explanation: Datetime axes Both the matplotlib and the bokeh backends allow plotting datetime data, if you ensure the dates array is of a datetime dtype. End of explanation """ %%opts Distribution (kde_kws=dict(shade=True)) d1 = np.random.randn(500) + 450 d2 = np.random.randn(500) + 540 sines = np.array([np.sin(np.linspace(0, np.pi*2, 100)) + np.random.normal(0, 1, 100) for _ in range(20)]) hv.Distribution(d1) + hv.Bivariate((d1, d2)) + hv.TimeSeries(sines) """ Explanation: Matplotlib/Seaborn conversions Bokeh also allows converting a subset of existing matplotlib plot types to Bokeh. 
This allows us to work with some of the Seaborn plot types, including Distribution, Bivariate, and TimeSeries: End of explanation """ %%opts Overlay [tabs=True width=600 height=600] RGB [width=600 height=600] x,y = np.mgrid[-50:51, -50:51] * 0.1 img = hv.Image(np.sin(x**2+y**2), bounds=(-1,-1,1,1)) img.relabel('Low') * img.clone(img.data*2).relabel('High') + img """ Explanation: Containers Tabs Using bokeh, both (Nd)Overlay and (Nd)Layout types may be displayed inside a tabs widget. This may be enabled via a plot option tabs, and may even be nested inside a Layout. End of explanation """ points = hv.Points(np.random.randn(500,2)) points.hist(num_bins=51, dimension=['x','y']) """ Explanation: Another reason to use tabs is that some Layout combinations may not be able to be displayed directly using HoloViews. For example, it is not currently possible to display a GridSpace as part of a Layout in any backend, and this combination will automatically switch to a tab representation for the bokeh backend. Marginals The Bokeh backend also supports marginal plots to generate adjoined plots. The most convenient way to build an AdjointLayout is with the .hist() method. End of explanation """ img.hist(num_bins=100, dimension=['x', 'y'], weight_dimension='z', mean_weighted=True) +\ img.hist(dimension='z') """ Explanation: When the histogram represents a quantity that is mapped to a value dimension with a corresponding colormap, it will automatically share the colormap, making it useful as a colorbar for that dimension as well as a histogram. End of explanation """ hmap = hv.HoloMap({phase: img.clone(np.sin(x**2+y**2+phase)) for phase in np.linspace(0, np.pi*2, 6)}, kdims=['Phase']) hmap.hist(num_bins=100, dimension=['x', 'y'], weight_dimension='z', mean_weighted=True) """ Explanation: HoloMaps HoloMaps work in bokeh just as in other backends. 
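The arrays behind such a phase stack can be checked without any rendering; a sketch of the same construction as above, using NumPy only (no plotting, no HoloViews):

```python
import numpy as np

# Same grid as the Image examples: mgrid[-50:51] gives 101 samples per axis
x, y = np.mgrid[-50:51, -50:51] * 0.1
phases = np.linspace(0, np.pi * 2, 6)

# One 2D frame per phase value, exactly what the HoloMap dictionary holds
frames = {phase: np.sin(x**2 + y**2 + phase) for phase in phases}
```

Each value is a 101x101 array; the HoloMap simply attaches the Phase key dimension to these frames and lets the widget scrub through them.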
End of explanation """ %%opts Points [tools=['hover']] (size=5) HeatMap [tools=['hover']] Histogram [tools=['hover']] Layout [shared_axes=False] error = np.random.rand(100, 3) heatmap_data = {(chr(65+i),chr(97+j)):i*j for i in range(5) for j in range(5) if i!=j} data = [np.random.normal() for i in range(10000)] hist = np.histogram(data, 20) hv.Points(error) + hv.HeatMap(heatmap_data).sort() + hv.Histogram(hist) """ Explanation: Tools: Interactive widgets Hover tools Some Elements allow revealing additional data by hovering over the data. To enable the hover tool, simply supply 'hover' as a list to the tools plot option. End of explanation """ %%opts Points [tools=['box_select', 'lasso_select', 'poly_select']] (size=10 unselected_color='red' color='blue') hv.Points(error) """ Explanation: Selection tools Bokeh provides a number of tools for selecting data points including box_select, lasso_select and poly_select. To distinguish between selected and unselected data points we can also set the unselected_color. You can try out any of these selection tools and see how the plot is affected: End of explanation """ macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t') """ Explanation: Selection widget with shared axes and linked brushing When dealing with complex multi-variate data it is often useful to explore interactions between variables across plots. HoloViews will automatically link the data sources of plots in a Layout if they draw from the same data, allowing for both linked axes and brushing. 
We'll see what this looks like in practice using a small dataset of macro-economic data: End of explanation """ %%opts Scatter [tools=['box_select', 'lasso_select']] Layout [shared_axes=True shared_datasource=True] hv.Scatter(macro_df, kdims=['year'], vdims=['gdp']) +\ hv.Scatter(macro_df, kdims=['gdp'], vdims=['unem']) """ Explanation: By creating two Points Elements, which both draw their data from the same pandas DataFrame, the two plots become automatically linked. This linking behavior can be toggled with the shared_datasource plot option on a Layout or GridSpace. You can try selecting data in one plot, and see how the corresponding data (those on the same rows of the DataFrame, even if the plots show different data, will be highlighted in each. End of explanation """ %%opts Scatter [tools=['box_select', 'lasso_select', 'hover'] border=0] Histogram {+axiswise} table = hv.Table(macro_df, kdims=['year', 'country']) matrix = hv.operation.gridmatrix(table.groupby('country')) matrix.select(country=['West Germany', 'United Kingdom', 'United States']) """ Explanation: A gridmatrix is a clear use case for linked plotting. This operation plots any combination of numeric variables against each other, in a grid, and selecting datapoints in any plot will highlight them in all of them. Such linking can thus reveal how values in a particular range (e.g. very large outliers along one dimension) relate to each of the other dimensions. End of explanation """
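The linked-brushing behaviour rests on one idea: views that share a data source share row indices, so a selection in one view is just a set of row numbers applied everywhere. A toy sketch with made-up macro-style rows (not the real macro.csv values):

```python
# One shared table; two 'views' draw different column pairs from it
table = [
    {'year': 1966, 'gdp': 2.9, 'unem': 1.5},
    {'year': 1967, 'gdp': 4.1, 'unem': 2.9},
    {'year': 1968, 'gdp': 5.5, 'unem': 3.4},
]

# A box-select in the (year, gdp) view picks row indices...
selected = [i for i, row in enumerate(table) if row['gdp'] > 4.0]

# ...and the same indices are highlighted in the (gdp, unem) view
highlighted = [(table[i]['gdp'], table[i]['unem']) for i in selected]
```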
SylvainCorlay/bqplot
examples/Tutorials/Linking Plots With Widgets.ipynb
apache-2.0
import numpy as np import ipywidgets as widgets import bqplot.pyplot as plt """ Explanation: Building interactive plots using bqplot and ipywidgets bqplot is built on top of the ipywidgets framework ipwidgets and bqplot widgets can be seamlessly integrated to build interactive plots bqplot figure widgets can be stacked with UI controls available in ipywidgets by using Layout classes (Box, HBox, VBox) in ipywidgets (Note that only Figure objects (not Mark objects) inherit from DOMWidget class and can be combined with other widgets from ipywidgets) Trait attributes of widgets can be linked using callbacks. Callbacks should be registered using the observe method Please follow these links for detailed documentation on: 1. Layout and Styling of Jupyter Widgets * Linking Widgets <br>Let's look at examples of linking plots with UI controls End of explanation """ y = np.random.randn(100).cumsum() # simple random walk # create a button update_btn = widgets.Button(description='Update', button_style='success') # create a figure widget fig1 = plt.figure(animation_duration=750) line = plt.plot(y) # define an on_click function def on_btn_click(btn): # update the y attribute of line mark line.y = np.random.randn(100).cumsum() # another random walk # register the on_click function update_btn.on_click(on_btn_click) # stack button and figure using VBox widgets.VBox([fig1, update_btn]) """ Explanation: Update the plot on a button click End of explanation """ import pandas as pd # create a dummy time series for 5 dummy stock tickers dates = pd.date_range(start='20180101', end='20181231') n = len(dates) tickers = list('ABCDE') prices = pd.DataFrame(np.random.randn(n, 5).cumsum(axis=0), columns=tickers) # create a dropdown menu for tickers dropdown = widgets.Dropdown(description='Ticker', options=tickers) # create figure for plotting time series current_ticker = dropdown.value fig_title_tmpl = '"{}" Time Series' # string template for title of the figure fig2 = 
plt.figure(title=fig_title_tmpl.format(current_ticker)) fig2.layout.width = '900px' time_series = plt.plot(dates, prices[current_ticker]) plt.xlabel('Date') plt.ylabel('Price') # 1. create a callback which updates the plot when dropdown item is selected def update_plot(*args): selected_ticker = dropdown.value # update the y attribute of the mark by selecting # the column from the price data frame time_series.y = prices[selected_ticker] # update the title of the figure fig2.title = fig_title_tmpl.format(selected_ticker) # 2. register the callback by using the 'observe' method dropdown.observe(update_plot, 'value') # stack the dropdown and fig widgets using VBox widgets.VBox([dropdown, fig2]) """ Explanation: Let's look at an example where we link a plot to a dropdown menu End of explanation """ # create two dropdown menus for X and Y attributes of scatter x_dropdown = widgets.Dropdown(description='X', options=tickers, value='A') y_dropdown = widgets.Dropdown(description='Y', options=tickers, value='B') # create figure for plotting the scatter x_ticker = x_dropdown.value y_ticker = y_dropdown.value # set up fig_margin to allow space to display color bar fig_margin = dict(top=20, bottom=40, left=60, right=80) fig3 = plt.figure(animation_duration=1000, fig_margin=fig_margin) # custom axis options for color data axes_options = {'color': {'tick_format': '%m/%y', 'side': 'right', 'num_ticks': 5}} scatter = plt.scatter(x=prices[x_ticker], y=prices[y_ticker], color=dates, # represent chronology using color scale stroke='black', colors=['red'], default_size=32, axes_options=axes_options) plt.xlabel(x_ticker) plt.ylabel(y_ticker) # 1. 
create a callback which updates the plot when dropdown item is selected def update_scatter(*args): x_ticker = x_dropdown.value y_ticker = y_dropdown.value # update the x and y attributes of the mark by selecting # the column from the price data frame with scatter.hold_sync(): scatter.x = prices[x_ticker] scatter.y = prices[y_ticker] # update the title of the figure plt.xlabel(x_ticker) plt.ylabel(y_ticker) # 2. register the callback by using the 'observe' method x_dropdown.observe(update_scatter, 'value') y_dropdown.observe(update_scatter, 'value') # stack the dropdown and fig widgets using VBox widgets.VBox([widgets.HBox([x_dropdown, y_dropdown]), fig3]) """ Explanation: Let's now create a scatter plot where we select X and Y data from the two dropdown menus End of explanation """ funcs = dict(sin=np.sin, cos=np.cos, tan=np.tan, sinh=np.sinh, tanh=np.tanh) dropdown = widgets.Dropdown(options=funcs, description='Function') fig = plt.figure(title='sin(x)', animation_duration=1000) # create x and y data attributes for the line chart x = np.arange(-10, 10, .1) y = np.sin(x) line = plt.plot(x, y ,'m') def update_line(*args): f = dropdown.value fig.title = f'{f.__name__}(x)' line.y = f(line.x) dropdown.observe(update_line, 'value') widgets.VBox([dropdown, fig]) """ Explanation: In the example below, we'll look at plots of trigonometic functions End of explanation """
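The `observe`/callback pattern used throughout can be reduced to a few lines of plain Python. This is a toy stand-in for a traitlets attribute, not ipywidgets' actual implementation:

```python
class ToyTrait:
    """Minimal observable attribute, mimicking widget.observe(handler, 'value')."""

    def __init__(self, value):
        self._value = value
        self._handlers = []

    def observe(self, handler):
        # register a callback fired on every value change
        self._handlers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for handler in self._handlers:
            handler({'old': old, 'new': new})


seen = []
dropdown = ToyTrait('sin')
dropdown.observe(lambda change: seen.append(change['new']))
dropdown.value = 'cos'   # triggers the callback, like selecting a menu item
```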
spm2164/foundations-homework
14/14 - TF-IDF Homework.ipynb
artistic-2.0
# If you'd like to download it through the command line... !curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz # And then extract it through the command line... !tar -zxf convote_v1.1.tar.gz """ Explanation: Homework 14 (or so): TF-IDF text analysis and clustering Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat. No, just kidding, we're professionals now. Investigating the Congressional Record The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe? Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here. End of explanation """ # glob finds files matching a certain filename pattern import glob # Give me all the text files paths = glob.glob('convote_v1.1/data_stage_one/development_set/*') paths[:5] len(paths) """ Explanation: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files. End of explanation """ speeches = [] for path in paths: with open(path) as speech_file: speech = { 'pathname': path, 'filename': path.split('/')[-1], 'content': speech_file.read() } speeches.append(speech) speeches_df = pd.DataFrame(speeches) speeches_df.head() """ Explanation: So great, we have 702 of them. Now let's import them. End of explanation """ speeches_df['content'].head(5) """ Explanation: In class we had the texts variable. For the homework can just do speeches_df['content'] to get the same sort of list of stuff. 
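The import loop's bookkeeping is just a path split plus a file read; a toy version with in-memory stand-ins (hypothetical filenames and contents, not real convote files):

```python
# Hypothetical paths and contents standing in for the extracted text files
fake_files = {
    'convote_v1.1/data_stage_one/development_set/052_1.txt': 'mr chairman , i rise today ...',
    'convote_v1.1/data_stage_one/development_set/052_2.txt': 'mr speaker , i yield back ...',
}

speeches = [
    {
        'pathname': path,
        'filename': path.split('/')[-1],  # last path component, as in the loop above
        'content': text,
    }
    for path, text in fake_files.items()
]
```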
Take a look at the contents of the first 5 speeches End of explanation """ count_vectorizer = CountVectorizer(stop_words='english') X=count_vectorizer.fit_transform(speeches_df['content']) len(count_vectorizer.get_feature_names()) tokens_df=pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) tokens_df """ Explanation: Doing our analysis Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns. Be sure to include English-language stopwords End of explanation """ count_vectorizer = CountVectorizer(stop_words='english', max_features=100) X=count_vectorizer.fit_transform(speeches_df['content']) len(count_vectorizer.get_feature_names()) """ Explanation: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words. End of explanation """ tokens_df=pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) tokens_df """ Explanation: Now let's push all of that into a dataframe with nicely named columns. End of explanation """ # 702 rows means 702 speeches, since each speech is a single string len(tokens_df) # if the speech doesnt contain a chairman, the column entry will be 0. so, 250 no-chairmain speeches. granted, # we have no idea if they stared the speech with chairman or just mentioned him somewhere len(tokens_df[tokens_df['chairman']==0]) # 76 times no mr or chairman. which means they must call the chairman just 'chairman' a lot. rude! len(tokens_df[(tokens_df['mr']==0) & (tokens_df['chairman']==0)]) """ Explanation: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"? 
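Counting questions like these reduce to boolean filters over per-document token counts; a stdlib sketch with three hypothetical speeches:

```python
from collections import Counter

speeches = [
    'mr chairman i rise today',
    'mr speaker i yield back',
    'i reserve the balance of my time',
]
counts = [Counter(s.split()) for s in speeches]

total = len(counts)
no_chairman = sum(c['chairman'] == 0 for c in counts)               # missing 'chairman'
neither = sum(c['mr'] == 0 and c['chairman'] == 0 for c in counts)  # missing both words
```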
End of explanation """ # so speech index 375 tokens_df[tokens_df['thank']==tokens_df['thank'].max()].index # lets look at the speech speeches_df['content'][375] # wow that was long. but its the most thankful one, so whatever. """ Explanation: What is the index of the speech which is the most thankful, a.k.a. includes the word 'thank' the most times? End of explanation """ # i sorted by china here, on a lark. lets see if it holds true if i sort by trade tokens_df[(tokens_df['china']>0) & (tokens_df['trade']>0)].sort_values(by='china', ascending=False)[['china', 'trade']].head(3) tokens_df[(tokens_df['china']>0) & (tokens_df['trade']>0)].sort_values(by='trade', ascending=False)[['china', 'trade']].head(3) # kind of! at any rate, speech 294 seems to be the most china and trade related. lets look at it! speeches_df['content'][294] # thats another super long speech, but it does seem to mostly be about trade and china. """ Explanation: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser? End of explanation """ l2_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = l2_vectorizer.fit_transform(speeches_df['content']) tfidf_tokens_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names()) china_trade_df=pd.DataFrame([tfidf_tokens_df['china'], tfidf_tokens_df['trade'], tfidf_tokens_df['china'] + tfidf_tokens_df['trade']], index=["china", "trade", "china + trade"]).T china_trade_df[china_trade_df.any(axis=1)].sort_values(by='china + trade', ascending=False).head(3) # wow, that comes up with a totally different list of speeches. lets look at speech 636 speeches_df['content'][636] # that one is very short by comparison and this time, its really only about china and trade """ Explanation: Now what if I'm using a TfidfVectorizer? End of explanation """ # index 0 is the first speech, which was the first one imported. 
paths[0] # Pass that into 'cat' using { } which lets you put variables in shell commands # that way you can pass the path to cat !cat {paths[0]} # i guess i probably should have read ahead to this part. oh well, dumping the index of the speeches_df # was still mostly readable """ Explanation: What's the content of the speeches? Here's a way to get them: End of explanation """ election_chaos_df=pd.DataFrame([tfidf_tokens_df['election'], tfidf_tokens_df['chaos'], tfidf_tokens_df['election'] + tfidf_tokens_df['chaos']], index=["election", "chaos", "election + chaos"]).T election_chaos_df[election_chaos_df.any(axis=1)].sort_values(by='chaos', ascending=False).head(10) # i did the sort that way because i guess they dont talk about chaos much. lets look at that speech !cat {paths[257]} # thats pretty weak. i tried to come up with something spicier but i cant tell what years these are from. # how about this? clinton_welfare_df=pd.DataFrame([tfidf_tokens_df['clinton'], tfidf_tokens_df['welfare'], tfidf_tokens_df['clinton'] + tfidf_tokens_df['welfare']], index=["clinton", "welfare", "clinton + welfare"]).T clinton_welfare_df[clinton_welfare_df.any(axis=1)].sort_values(by='clinton + welfare', ascending=False).head(10) !cat {paths[346]} # still pretty dull. oh well. """ Explanation: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you thnik might be interesting. 
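Scoring a two-term search by summed TF-IDF can be sketched without sklearn. The formula here is the textbook tf × log(N/df); sklearn's TfidfVectorizer adds smoothing and L2 normalisation, so its exact numbers differ:

```python
import math
from collections import Counter

docs = [
    'the election results were chaos in the chamber',
    'welfare reform and the election',
    'trade policy with china',
]

def tfidf(term, doc):
    tokens = doc.split()
    tf = Counter(tokens)[term] / len(tokens)
    df = sum(term in d.split() for d in docs)   # document frequency across the corpus
    return tf * math.log(len(docs) / df) if df else 0.0

# rank documents by the summed score of both query terms
scores = [tfidf('election', d) + tfidf('chaos', d) for d in docs]
best = scores.index(max(scores))
```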
End of explanation """ from sklearn.cluster import KMeans count_vectorizer = CountVectorizer(stop_words='english') X=count_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = count_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) tf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=False) X = tf_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) tfidf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = tfidf_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) """ Explanation: Enough of this garbage, let's cluster Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. 
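However the vectorizer builds the matrix, the top-terms-per-cluster readout is the same NumPy idiom: argsort each cluster centre and reverse. With a toy centre matrix (made-up weights and vocabulary):

```python
import numpy as np

terms = ['chairman', 'horses', 'lawsuits', 'trade']
centers = np.array([
    [0.1, 0.9, 0.0, 0.2],   # cluster 0 weights, highest on 'horses'
    [0.3, 0.0, 0.8, 0.5],   # cluster 1 weights, highest on 'lawsuits'
])

# per-row term indices sorted by weight, descending
order = centers.argsort()[:, ::-1]
top_terms = [[terms[i] for i in row[:2]] for row in order]
```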
End of explanation """ # well, it seems to be between results including wild horses and those including frivolous lawsuits. # i prefer wild horses but the tfidf is probably the best representation of the actual document """ Explanation: Which one do you think works the best? End of explanation """ paths = glob.glob('hp/hp/*') paths[:5] hp_fics = [] for path in paths: with open(path) as hp_file: hp_fic = { 'pathname': path, 'filename': path.split('/')[-1], 'content': hp_file.read() } hp_fics.append(hp_fic) hp_fics_df = pd.DataFrame(hp_fics) hp_fics_df.head() tfidf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = tfidf_vectorizer.fit_transform(hp_fics_df['content']) number_of_clusters = 2 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) # i would say people are either writing about harry and hermione or lily and james # ive never read harry potter, but i think that means either the harry generation or their parents generation? # seems legit """ Explanation: Harry Potter time I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip. I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis? End of explanation """
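The KMeans step itself is Lloyd's algorithm: assign points to the nearest centre, recompute centres as means, repeat. A one-dimensional sketch with toy numbers (nothing to do with the fanfiction vectors):

```python
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # update step: each centre moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # two obvious groups
centers = kmeans_1d(points, centers=[0.0, 1.0])
```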
mayank-johri/LearnSeleniumUsingPython
Section 2 - Advance Python/Chapter S2.05 - REST API - Server & Clients/Web%20scraping%20with%20Python.ipynb
gpl-3.0
import urllib2 response = urllib2.urlopen("http://example.com") print response.read() """ Explanation: Web scraping with Python This is an introduction to web scraping using Python, where our task is to extract information from web pages. Prerequisites (knowledge): basic Python (its data structures, string manipulation) basic HTML basic HTTP (know what a GET request is: this will be reviewed) bonus: knowledge of how to use XPath Prerequisites (software): the lxml package Rather than using Scrapy or another Python web scraping framework, we'll go the barebones route. HTTP basics HTTP is a protocol for transferring data across the internet. Every communication looks like this: a client (such as a browser) sends a request to a web server, and the server sends a response back to the client. There's a few types of requests that the client can send, such as the GET and POST request types. The GET request is the most common type. Semantically, it is used for requesting data from the web server. POST requests are usually used when we're sending data to the server that's large or changes some kind of state on the server, such as if it causes a database to be updated. The request and response each follow a simple text-based format: the first line is specific to requests and responses, then several lines of headers are specified in a Header-Name: value format, then a blank line follows the headers and precedes the body. The body contains the main payload, and a header tells the client/server how large the body is. 
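That text framing can be reproduced byte-for-byte; a sketch that only builds the request string (it sends nothing over the network):

```python
def build_get_request(host, path='/index.html'):
    # request line, then headers, then the blank line that ends the header block
    lines = [
        'GET %s HTTP/1.1' % path,
        'Host: %s' % host,
        '',   # blank separator line
        '',   # empty body
    ]
    return '\r\n'.join(lines)   # HTTP lines are CRLF-terminated

request = build_get_request('www.example.com')
```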
An example request (from Wikipedia): GET /index.html HTTP/1.1 Host: www.example.com And a corresponding response, showing us a status code (everyone's seen 404 Not Found) among other things: HTTP/1.1 200 OK Date: Mon, 23 May 2005 22:38:34 GMT Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux) Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT Etag: "3f80f-1b6-3e1cb03b" Content-Type: text/html; charset=UTF-8 Content-Length: 131 Connection: close &lt;html&gt; &lt;head&gt; &lt;title&gt;An Example Page&lt;/title&gt; &lt;/head&gt; &lt;body&gt; Hello World, this is a very simple HTML document. &lt;/body&gt; &lt;/html&gt; The body doesn't necessarily have to be plain text as in this example: it could be a sequence of non-text bytes whose length is specified by Content-Length. Let's try simulating that same request. End of explanation """ print response.info() print "The content type is '%s'." % response.info()['content-type'] """ Explanation: Python's urlopen function in the urllib2 module returns a file-like object, so we can call read() to read all its contents. We can access the response's headers, too, using the .info() method of the response object. It returns a mimetools.Message instance that we can use like a dict: End of explanation """ import re source = urllib2.urlopen("http://ncix.com").read() print re.search(r'<title>(.*?)</title>', source).group(1) import urllib2 import re response = urllib.urlopen("https://money.cnn.com") print (response.read()) print (re.search(r'<ul class="module-body wsod currencies">(.*?)</ul>', source).group(1)) """ Explanation: This was a simple GET request. We can also send POST requests using the data keyword argument of the urlopen() function. Massaging out information For those familiar with regular expressions (affectionately referred to as regex's), regexes look like something we could use here to extract information from HTML. That's partially correct. Let's try extracting the page title of a website. 
End of explanation """ import lxml.html html = lxml.html.fromstring(source) print html.xpath('//a')[:10] # just print out the first 10 """ Explanation: We can handle simple stuff with regexes, but HTML tags are simply too complicated for all but the simplest of cases. A tag with attributes can span multiple lines, there can be arbitrary whitespace in a tag, etc. However, regexes will still prove useful to process text that's inside an HTML page, and might be useful for extracting text from some Javascript source in a page. What can we do instead? HTML, like XML, has a recursive containment structure, so lends itself well to a recursive (nested) data structure with classes representing each tag. There's parsers for HTML source that nicely handle constructing these representations of HTML. For Python, we've got BeautifulSoup html5lib lxml (a wrapper for the C++ libxml library) We'll stick with the last one, but the other two are good too. Let's try using it to read all the &lt;a&gt; tags (hyperlinks) from the NCIX homepage. End of explanation """ html.xpath('//div[@id="sublinks"]/a[@class="sub_link"]/@href') """ Explanation: So, we've got a bunch of &lt;a&gt; tags. The xpath() method performs an XPath query. XPath is a query language for XML that searches the hierarchical structure that XML or HTML has (called the DOM or Document Object Model). Our query was //a. An XPath query consists of slash-delimited parts. Here, the double slash // means "any number of parents", followed by an &lt;a&gt; tag. Using an HTML inspector like the one built into Google Chrome (press F12 to activate it on a page), we can determine what the structure of a certain node in the DOM is. For example, if we open up the NCIX page and inspect the "Popular Categories" on the side, we find that each category link is inside a certain div tag: &lt;div id="sublinks"&gt; ... 
and each link looks like: &lt;a href="http://www.ncix.com/products/?minorcatid=1263" class="sub_link"&gt;Blu-Ray Drives&lt;span class="qtycount"&gt; (6)&lt;/span&gt;&lt;/a&gt; Let's grab all of these links using an Xpath query. In XPath, an at-sign @ before an identifier means "attribute": End of explanation """
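For well-formed markup, the standard library's `xml.etree.ElementTree` understands a useful subset of XPath, enough to mimic that query on a toy snippet (the second link is invented; real pages need lxml's full XPath, e.g. the trailing `/@href`):

```python
import xml.etree.ElementTree as ET

snippet = """<div id="sublinks">
  <a href="http://www.ncix.com/products/?minorcatid=1263" class="sub_link">Blu-Ray Drives</a>
  <a href="http://www.ncix.com/products/?minorcatid=9999" class="sub_link">Other Category</a>
</div>"""

root = ET.fromstring(snippet)
# ElementTree XPath cannot end in @href, so select the elements and read the attribute
links = [a.get('href') for a in root.findall(".//a[@class='sub_link']")]
```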
giacomov/3ML
examples/Joint fitting XRT and GBM data with XSPEC models.ipynb
bsd-3-clause
%matplotlib inline %matplotlib notebook from threeML import * import os """ Explanation: Joint fitting XRT and GBM data with XSPEC models Goals 3ML is designed to properly joint fit data from different instruments with thier instrument dependent likelihoods. We demostrate this with joint fitting data from GBM and XRT while simultaneously showing hwo to use the XSPEC models form astromodels Setup You must have you HEASARC initiated so that astromodels can find the XSPEC libraries. End of explanation """ trigger="GRB110731A" dec=-28.546 ra=280.52 xrt_dir='xrt' xrt = SwiftXRTLike("XRT",pha_file=os.path.join(xrt_dir,"xrt_src.pha"), bak_file=os.path.join(xrt_dir,"xrt_bkg.pha"), rsp_file=os.path.join(xrt_dir,"xrt.rmf"), arf_file=os.path.join(xrt_dir,"xrt.arf")) xrt.view_count_spectrum() """ Explanation: Load XRT data Make a likelihood for the XRT including all the appropriate files End of explanation """ data_dir_gbm=os.path.join('gbm','bn110731A') trigger_number = 'bn110731465' gbm_data = download_GBM_trigger_data(trigger_number,detectors=['n3'],destination_directory=data_dir_gbm,compress_tte=True) # Select the time interval src_selection = "100.169342-150.169342" nai3 = FermiGBMTTELike('NAI3', os.path.join(data_dir_gbm,"glg_tte_n3_bn110731465_v00.fit.gz"), "20-90,160-250", # background selection src_selection, # source interval rsp_file=os.path.join(data_dir_gbm, "glg_cspec_n3_bn110731465_v00.rsp2")) """ Explanation: Load GBM data Load all the GBM data you need and make appropriate background, source time, and energy selections. Make sure to check the light curves! 
End of explanation """ nai3.view_lightcurve(20,250) """ Explanation: View the light curve End of explanation """ nai3.set_active_measurements("8-900") nai3.view_count_spectrum() """ Explanation: Make energy selections and check them out End of explanation """ xspec_abund('angr') spectral_model = XS_phabs()* XS_zphabs() * XS_powerlaw() spectral_model.nh_1=0.101 spectral_model.nh_1.fix = True spectral_model.nh_2=0.1114424 spectral_model.nh_2.fix = True spectral_model.redshift_2 = 0.618 spectral_model.redshift_2.fix =True spectral_model.display() """ Explanation: Setup the model astromodels allows you to use XSPEC models if you have XSPEC installed. Set all the normal parameters you would in XSPEC and build a model the normal 3ML/astromodel way! End of explanation """ ptsrc = PointSource(trigger,ra,dec,spectral_shape=spectral_model) model = Model(ptsrc) data = DataList(xrt,nai3) jl = JointLikelihood(model, data, verbose=False) model.display() """ Explanation: Setup the joint likelihood Create a point source object and model. Load the data into a data list and create the joint likelihood End of explanation """ res = jl.fit() res = jl.get_errors() res = jl.get_contours(spectral_model.phoindex_3,1.5,2.5,50) res = jl.get_contours(spectral_model.norm_3,.1,.3,25,spectral_model.phoindex_3,1.5,2.5,50) """ Explanation: Fitting Maximum Likelihood style End of explanation """ spectral_model.phoindex_3.prior = Uniform_prior(lower_bound=-5.0, upper_bound=5.0) spectral_model.norm_3.prior = Log_uniform_prior(lower_bound=1E-5, upper_bound=1) bayes = BayesianAnalysis(model, data) samples = bayes.sample(n_walkers=50,burn_in=100, n_samples=1000) fig = bayes.corner_plot(plot_contours=True, plot_density=False) bayes.get_highest_density_interval() cleanup_downloaded_GBM_data(gbm_data) """ Explanation: And then go Bayesian! End of explanation """
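3ML hides the bookkeeping, but the essence of a joint fit is that each instrument contributes its own log-likelihood term and the fit minimizes their sum. A toy illustration with two made-up Gaussian "instruments" observing the same flux (pure Python, hypothetical numbers, nothing 3ML-specific):

```python
import random

random.seed(0)

# Two "instruments" observe the same source flux with different noise levels.
true_flux = 5.0
inst_a = [random.gauss(true_flux, 0.5) for _ in range(200)]  # low-noise instrument
inst_b = [random.gauss(true_flux, 2.0) for _ in range(200)]  # high-noise instrument

def neg_log_like(mu, data, sigma):
    # Gaussian negative log-likelihood, dropping constant terms.
    return sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

# Each instrument keeps its own likelihood; the joint fit minimizes the sum.
grid = [3.0 + 0.002 * i for i in range(2001)]
joint_stat = [neg_log_like(m, inst_a, 0.5) + neg_log_like(m, inst_b, 2.0) for m in grid]
best = grid[joint_stat.index(min(joint_stat))]
# best lands close to the inverse-variance weighted mean of the two datasets,
# with the low-noise instrument dominating the answer.
```

In the real analysis above, the XRT term is a spectral (PHA-style) likelihood and the GBM term a TTE-based one, but they combine in exactly this additive way.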
drphilmarshall/OM10
notebooks/tutorial.ipynb
mit
from __future__ import division, print_function
import os, numpy as np
import matplotlib
matplotlib.use('TkAgg')
%matplotlib inline
import om10
%load_ext autoreload
%autoreload 2
"""
Explanation: OM10 Tutorial
In this notebook we demonstrate the basic functionality of the om10 package, including how to:

Make some "standard" mock lensed quasar samples;
Visualize those samples;
Inspect individual systems.

Requirements
You will need to have followed the installation instructions in the OM10 README.
End of explanation
"""
quads, doubles = {}, {}

DES = om10.DB()
DES.select_random(maglim=23.6, area=5000.0, IQ=0.9)
quads['DES'] = DES.sample[DES.sample['NIMG'] == 4]
doubles['DES'] = DES.sample[DES.sample['NIMG'] == 2]
print('Predicted number of DES quads, doubles: ', len(quads['DES']),',',len(doubles['DES']))
print('Predicted DES quad fraction: ', str(int(100.0*len(quads['DES'])/(1.0*len(doubles['DES']))))+'%')

LSST = om10.DB()
LSST.select_random(maglim=23.3, area=18000.0, IQ=0.7)
quads['LSST'] = LSST.sample[LSST.sample['NIMG'] == 4]
doubles['LSST'] = LSST.sample[LSST.sample['NIMG'] == 2]
print('Predicted number of LSST quads, doubles: ', len(quads['LSST']),',',len(doubles['LSST']))
print('Predicted LSST quad fraction: ', str(int(100.0*len(quads['LSST'])/(1.0*len(doubles['LSST']))))+'%')

fig = om10.plot_sample(doubles['LSST'], color='blue')
fig = om10.plot_sample(quads['LSST'], color='red', fig=fig)
"""
Explanation: Selecting Mock Lens Samples
Let's look at what we might expect from DES and LSST, by making two different selections from the OM10 database.
End of explanation """ db = om10.DB() # Pull out a specific lens and plot it: id = 7176527 lens = db.get_lens(id) om10.plot_lens(lens) # Plot 3 random lenses from a given survey and plot them: db.select_random(maglim=21.4, area=30000.0, IQ=1.0, Nlens=3) for id in db.sample['LENSID']: lens = db.get_lens(id) om10.plot_lens(lens, IQ=1.0) """ Explanation: Visualizing Lens Systems Let's pull out some lenses and see what they look like. End of explanation """
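Stepping back to the sample selection above: splitting a survey sample into quads and doubles is just a partition on image multiplicity (NIMG), and the quoted "quad fraction" is quads per double, in percent. With the om10 machinery stripped away (made-up rows, not real OM10 lenses):

```python
# Hypothetical stand-in for db.sample: one dict per lens system.
sample = [
    {'LENSID': 1, 'NIMG': 2},
    {'LENSID': 2, 'NIMG': 4},
    {'LENSID': 3, 'NIMG': 2},
    {'LENSID': 4, 'NIMG': 2},
    {'LENSID': 5, 'NIMG': 4},
]

quads = [s for s in sample if s['NIMG'] == 4]
doubles = [s for s in sample if s['NIMG'] == 2]

# Same arithmetic as the notebook's "quad fraction": quads per double, in percent.
quad_fraction = int(100.0 * len(quads) / len(doubles))
```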
terrydolan/lfctransfers
lfctransfers.ipynb
mit
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sys
import collections
from datetime import datetime
from __future__ import division

# enable inline plotting
%matplotlib inline
"""
Explanation: LFC Data Analysis: The Transfers
See Terry's blog Inspiring Transfers for a discussion of the data analysis.
This notebook analyses Liverpool FC's transfers over the last 5 seasons, from 2011-2012 to 2015-2016. It also compares Liverpool's average net transfer spend, revenue and wage bill to the other teams in the top 6.
The analysis uses IPython Notebook, Python, pandas and matplotlib to explore the data.
Set-up
Import the modules needed for the analysis.
End of explanation
"""
print 'python version: {}'.format(sys.version)
print 'pandas version: {}'.format(pd.__version__)
print 'matplotlib version: {}'.format(mpl.__version__)
print 'numpy version: {}'.format(np.__version__)
"""
Explanation: Print version numbers.
End of explanation """ LFC_PLAYERS_CSV_FILE = 'data\lfc_players_september2015_upd.csv' dflfc_players = pd.read_csv(LFC_PLAYERS_CSV_FILE, parse_dates=['birthdate']) dflfc_players.rename(columns={'player': 'Player', 'birthdate': 'Birthdate', 'country': 'Country'}, inplace=True) dflfc_players.shape dflfc_players.dtypes dflfc_players.head(10) """ Explanation: Create dataframe of LFC players giving birthdate of each player Data source: lfchistory.net End of explanation """ PREM_TRANSFERS_CSV_FILE = 'data\prem_transfers_2011-2012_2015-2016.csv' dfprem_transfers = pd.read_csv(PREM_TRANSFERS_CSV_FILE, skiprows=2, header=True) # convert money to millions of pounds dfprem_transfers.Purchased = np.round(dfprem_transfers.Purchased/1000000, 1) dfprem_transfers.Sold = np.round(dfprem_transfers.Sold/1000000, 1) dfprem_transfers.NetSpend = np.round(dfprem_transfers.NetSpend/1000000, 1) dfprem_transfers.PerSeasonSpend = np.round(dfprem_transfers.PerSeasonSpend/1000000, 1) # show shape dfprem_transfers.shape dfprem_transfers.head() dfprem_transfers.tail() """ Explanation: Create dataframe of premier league transfers over last 5 seasons Data source: transferleague.co.uk End of explanation """ PREM_WAGES_CSV_FILE = 'data\prem_wages_2012-2013_2014-2015.csv' dfprem_wages = pd.read_csv(PREM_WAGES_CSV_FILE, skiprows=4, header=True) # show shape dfprem_wages.shape dfprem_wages """ Explanation: Create dataframe of premier league wage bill over last 5 seasons Data source: guardian.co.uk etc End of explanation """ PREM_TABLE_CSV_FILE = 'data\prem_table_2014-2015.csv' dfprem_table_2014_2015 = pd.read_csv(PREM_TABLE_CSV_FILE, skiprows=2, header=True) # show shape dfprem_table_2014_2015.shape dfprem_table_2014_2015.head(6) dfprem_table_2014_2015.tail() """ Explanation: Create dataframe of league table for 2014-2015 Data source: http://www.sportsmole.co.uk/football/premier-league/2014-15/table.html End of explanation """ # note that the revenue figures are for 2014 deloitte_review_2015 = {'Team': 
['Manchester United', 'Manchester City', 'Chelsea', 'Arsenal', 'Liverpool', 'Tottenham Hotspur'],
                        'Revenue': [433.2, 346.5, 324.4, 300.5, 255.8, 180.5]}
df_revenue = pd.DataFrame(data=deloitte_review_2015, columns=['Team', 'Revenue'])
df_revenue = df_revenue.set_index('Team')
df_revenue
"""
Explanation: Create dataframe of Deloitte Football Money League
Data source: Deloitte Football Money League
The revenue (in pounds) is in the detailed report.
End of explanation
"""
dflfc_transfers[dflfc_transfers.Season == '2015-2016']
"""
Explanation: Analyse the data
Ask a question and find the answer!
Show ins and outs for 2015-2016
End of explanation
"""
dflfc_transfers_in = dflfc_transfers[dflfc_transfers.Direction == 'In']
dflfc_transfers_in[dflfc_transfers_in.Fee == dflfc_transfers_in.Fee.max()]
"""
Explanation: What was the biggest fee paid for a Liverpool player over the data period?
End of explanation
"""
dflfc_transfers_out = dflfc_transfers[dflfc_transfers.Direction == 'Out']
dflfc_transfers_out[dflfc_transfers_out.Fee == dflfc_transfers_out.Fee.max()]
"""
Explanation: What was the biggest fee received for a player?
End of explanation
"""
dflfc_transfers.groupby(['Season', 'Direction']).sum()
"""
Explanation: Summarise Fees In and Out
End of explanation
"""
df_fsg = dflfc_transfers.groupby(['Season', 'Direction']).sum().unstack()
df_fsg.columns = df_fsg.columns.droplevel()
del df_fsg.columns.name
df_fsg
df_fsg['NetSpend'] = df_fsg.In - df_fsg.Out
df_fsg
"""
Explanation: Show as 'unstacked' dataframe
End of explanation
"""
df_fsg.sum()
"""
Explanation: Calculate total Fees and NetSpend, over the 6 seasons
End of explanation
"""
df_fsg.mean()
"""
Explanation: Calculate average (mean) Fees and NetSpend, over the 6 seasons
End of explanation
"""
df_fsg['2011-2012':'2015-2016'].NetSpend.mean()
"""
Explanation: What is the average Net Spend over last 5 seasons?
End of explanation
"""
dflfc_transfers_in.Club.value_counts().head()
"""
Explanation: Where do most players come from?
End of explanation
"""
dflfc_transfers_out.Club.value_counts().head()
"""
Explanation: Where do most players go to?
End of explanation
"""
dflfc_transfers_in[dflfc_transfers_in.Fee >= 15]
"""
Explanation: Which players were bought for more than £15M?
End of explanation
"""
dflfc_transfers_out[dflfc_transfers_out.Fee >= 15]
"""
Explanation: Which players were sold for more than £15M?
End of explanation
"""
df_fsg.ix['2011-2012':]
avg_netspend = df_fsg.ix['2011-2012':].NetSpend.mean()
print 'Average Transfer Net Spend per Season is £{}M'.format(round(avg_netspend, 1))
ax = df_fsg.ix['2011-2012':].plot(kind='bar', figsize=(12, 9), color=('r', 'y', 'b'), legend=False)
plt.axhline(avg_netspend, color='b', linestyle='--')
ax.set_ylabel('Transfers in Pounds (Millions)')
ax.set_title("FSG's LFC Transfers per Season for the Last 5 Seasons\n(2011-2012 to 2015-2016)")
ax.text(-.4, 97, 'prepared by: @lfcsorted', fontsize=9)

# create legend
l1 = plt.Line2D([], [], linewidth=7, color='r')
l2 = plt.Line2D([], [], linewidth=7, color='y')
l3 = plt.Line2D([], [], linewidth=7, color='b')
l4 = plt.Line2D([], [], linewidth=1, color='b', linestyle='--')
labels = ['Transfers In', 'Transfers Out', 'Transfer Net Spend', 'Average Transfer Net Spend per Season']
ax.legend([l1, l2, l3, l4], labels, fancybox=True, shadow=True, framealpha=0.8, loc='upper left')

# plot and save current figure
fig = plt.gcf()
plt.show()
fig.savefig('FSGTransfersPerSeason2011-2015.png', bbox_inches='tight')
"""
Explanation: Plot FSG's LFC Transfers per Season for last 5 seasons (2011-12 to 2015-16)
Note that the analysis is carried out in September 2015. Therefore the 2015-2016 NetSpend does not include the January 2016 transfer window.
End of explanation
"""
dfprem_transfers
top6_2014_15 = [ 'Chelsea', 'Manchester City', 'Manchester United', 'Arsenal', 'Tottenham Hotspur', 'Liverpool']
df_prem_transfers_top6 = dfprem_transfers[['Team', 'PerSeasonSpend']][dfprem_transfers.Team.isin(top6_2014_15)]
df_prem_transfers_top6
"""
Explanation: Now let's compare Liverpool's transfers to the other top 6 teams.
What is net spend per season for top 6 teams over last 5 seasons?
End of explanation
"""
ax = df_prem_transfers_top6.plot(x='Team', kind='bar', figsize=(12, 9), color=('b', 'b', 'b', 'r', 'b', 'b'), legend=False)
ax.set_ylabel('Net Spend per Season in Pounds (Millions)')
ax.set_title("Top 6 Premier League Teams Average Transfer Net Spend per Season for Last 5 Seasons\n(2011-2012 to 2015-2016)")
plt.axhline(0, color='grey') # x axis at net spend = 0
ax.text(-0.45, -9, 'prepared by: @lfcsorted', fontsize=9)

# plot and save current figure
fig = plt.gcf()
plt.show()
fig.savefig('PremTransferSpendPerSeason2011-2015.png', bbox_inches='tight')
"""
Explanation: Plot Top 6 Premier League Teams Average Transfer Net Spend per Season for Last 5 Seasons
End of explanation
"""
df_revenue.head(6)
"""
Explanation: What is the revenue of top 6 premier league clubs?
End of explanation
"""
ax = df_revenue.plot(kind='bar', figsize=(12, 9), color=('b', 'b', 'b', 'b', 'r', 'b'), legend=False)
ax.set_ylabel('Revenue in Pounds (Millions)')
ax.set_title('Top 6 English Premier League Teams by Revenue (2014)')
ax.text(4.45, 440, 'prepared by: @lfcsorted', fontsize=9)

# plot and save current figure
fig = plt.gcf()
plt.show()
fig.savefig('PremRevenue2014.png', bbox_inches='tight')
"""
Explanation: Plot Top 6 English Premier League Teams by Revenue
End of explanation
"""
dflfc_transfers.head()
dflfc_transfers.shape
dflfc_players.head()
dflfc_transfers_with_dob = pd.DataFrame.merge(dflfc_transfers, dflfc_players, how='left')
dflfc_transfers_with_dob.shape
dflfc_transfers_with_dob.head()
dflfc_transfers_with_dob.dtypes

# check to see if any Birthdates are missing
dflfc_transfers_with_dob.Birthdate.isnull().any()

# show missing entries (these have been reported to lfchistory.net)
dflfc_transfers_with_dob[dflfc_transfers_with_dob.Birthdate.isnull()]

# fill in missing data
dflfc_transfers_with_dob.loc[dflfc_transfers_with_dob.Player == 'Chris Mavinga', 'Country'] = 'France'
dflfc_transfers_with_dob.loc[dflfc_transfers_with_dob.Player == 'Chris Mavinga', 'Birthdate'] = pd.Timestamp('19910526')
dflfc_transfers_with_dob.loc[dflfc_transfers_with_dob.Player == 'Kristoffer Peterson', 'Country'] = 'Sweden'
dflfc_transfers_with_dob.loc[dflfc_transfers_with_dob.Player == 'Kristoffer Peterson', 'Birthdate'] = pd.Timestamp('19941128')
dflfc_transfers_with_dob[(dflfc_transfers_with_dob.Player == 'Chris Mavinga') | (dflfc_transfers_with_dob.Player == 'Kristoffer Peterson')]
"""
Explanation: Let's now examine the age of the Liverpool transfers
First create new transfer dataframe with birthdate of each player
End of explanation
"""
def age_at(dob, this_date):
    """Return age in years at this_date for given date of birth.
Note that both dob and this_date are of type datetime.""" return round((this_date - dob).days/365.0, 1) dflfc_transfers_with_dob['AgeAtTransfer'] = dflfc_transfers_with_dob.apply(lambda row: age_at(row.Birthdate, row.Date), axis=1) dflfc_transfers_with_dob.tail() """ Explanation: Add age at transfer date to the dataframe End of explanation """ dflfc_transfers_with_dob[(dflfc_transfers_with_dob.Direction == 'In') & (dflfc_transfers_with_dob.Fee >= 0)]\ ['AgeAtTransfer'].mean() """ Explanation: What is average age of incoming players by season? Average age of all transfers in, including free End of explanation """ dflfc_transfers_with_dob[(dflfc_transfers_with_dob.Direction == 'In') & (dflfc_transfers_with_dob.Fee > 0)]\ ['AgeAtTransfer'].mean() """ Explanation: Average age of all transfers in, excluding free End of explanation """ dflfc_transfers_with_dob[(dflfc_transfers_with_dob.Direction == 'In') & (dflfc_transfers_with_dob.Fee > 0)]\ [['Season', 'AgeAtTransfer']].groupby('Season').mean() """ Explanation: Average age of all transfers in, excluding free, by season End of explanation """ dflfc_transfers_with_dob[dflfc_transfers_with_dob.Direction == 'In']\ [['Season', 'AgeAtTransfer']].groupby('Season').agg(lambda x: round(x.mean(), 1)) """ Explanation: Average age of all transfers in, including free, by season (rounded) End of explanation """ dflfc_transfers_with_dob[dflfc_transfers_with_dob.Direction == 'In']\ [['Season', 'AgeAtTransfer']].groupby('Season').agg(lambda x: round(x.mean(), 1)).plot(kind='bar', legend=False, title='Age at Transfer In') """ Explanation: Plot age at transfer in End of explanation """ dflfc_transfers_with_dob[['Season', 'Direction', 'AgeAtTransfer']].groupby(['Season', 'Direction'])\ .agg(lambda x: round(x.mean(), 1)).unstack()\ .plot(kind='bar', title='Age at Transfer', ylim=(0,40), yticks=range(0,35,5)) """ Explanation: What is average age of incoming and outgoing players by season End of explanation """ 
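As an aside on the age_at helper used throughout this section: it approximates a year as 365 days and rounds to one decimal place. A quick standalone check of that behaviour (the helper is re-declared so the snippet runs on its own):

```python
from datetime import datetime

def age_at(dob, this_date):
    """Return age in years at this_date for given date of birth (365-day years)."""
    return round((this_date - dob).days / 365.0, 1)

# Ten calendar years spanning two leap days is 3652 days, and the
# one-decimal rounding brings 3652/365 back to a clean 10.0.
ten_years = age_at(datetime(1990, 1, 1), datetime(2000, 1, 1))  # -> 10.0

# Half a (leap) year: 182 days / 365 rounds to 0.5.
half_year = age_at(datetime(2000, 1, 1), datetime(2000, 7, 1))  # -> 0.5
```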
dflfc_transfers_with_dob[dflfc_transfers_with_dob.Fee > 0][['Season', 'Direction', 'AgeAtTransfer']].groupby(['Season', 'Direction'])\ .agg(lambda x: round(x.mean(), 1)).unstack()\ .plot(kind='bar', title='Age at Transfer (non-zero)', ylim=(0,40), yticks=range(0,35,5)) dflfc_transfers_with_dob[['Season', 'Direction', 'AgeAtTransfer']].groupby(['Season', 'Direction']).agg(lambda x: round(x.mean(), 1)) dflfc_transfers_with_dob[dflfc_transfers_with_dob.Fee > 0][['Season', 'Direction', 'AgeAtTransfer']].groupby(['Season', 'Direction']).agg(lambda x: round(x.mean(), 1)) """ Explanation: Excluding free End of explanation """ LFC_GAME1_CSV_FILE = 'data\lfc_pl_opening_games_aug2015.csv' dflfc_openers = pd.read_csv(LFC_GAME1_CSV_FILE, parse_dates=['Date']) # show shape dflfc_openers.shape dflfc_openers.tail() """ Explanation: Note that Outs do not include players leaving at end of contract e.g. Gerrard or those retiring e.g. Carragher Analyse age of starting line-ups over last 3 seasons End of explanation """ dflfc_openers = pd.DataFrame.merge(dflfc_openers, dflfc_players[['Player', 'Birthdate']], how='left') dflfc_openers.shape dflfc_openers.head() dflfc_openers['AgeAtOpener'] = dflfc_openers.apply(lambda row: age_at(row.Birthdate, row.Date), axis=1) dflfc_openers.tail() """ Explanation: Add age at opening game End of explanation """ dflfc_openers[['Season', 'AgeAtOpener']].groupby('Season').agg(lambda x: round(x.mean(), 1)) dflfc_openers[['Season', 'AgeAtOpener']].groupby('Season').agg(lambda x: round(x.mean(), 1)).plot(kind='bar', ylim=(24,28)) dflfc_transfers_with_dob[dflfc_transfers_with_dob.Direction == 'In'][['Season', 'Player', 'Fee', 'AgeAtTransfer']] """ Explanation: Calculate average age of team End of explanation """ LFC_ARSENAL_CSV_FILE = 'data\lfc_pl_vs_Arsenal_aug2015.csv' dflfc_arsenal = pd.read_csv(LFC_ARSENAL_CSV_FILE, parse_dates=['Date'], skiprows=2) # show shape dflfc_arsenal.shape dflfc_arsenal = pd.merge(dflfc_arsenal, dflfc_players[['Player', 
'Birthdate']])
dflfc_arsenal
dflfc_arsenal['AgeAtGame'] = dflfc_arsenal.apply(lambda row: age_at(row.Birthdate, row.Date), axis=1)
dflfc_arsenal
dflfc_arsenal.mean()
"""
Explanation: What is the average age of the team that finished against Arsenal?
End of explanation
"""
dfprem_wages[['Team', '2014-2015']].head()
dfprem_table_2014_2015[['Team', 'PTS']].head()
dfprem_table_2014_2015_wages = pd.merge(dfprem_table_2014_2015[['Team', 'PTS']], dfprem_wages[['Team', '2014-2015']])
dfprem_table_2014_2015_wages
dfprem_table_2014_2015_wages.rename(columns={'2014-2015': 'WageBill','PTS': 'Points'}, inplace=True)
dfprem_table_2014_2015_wages.head()
dfprem_table_2014_2015.Rank[dfprem_table_2014_2015.Team == 'Chelsea'].values[0]
dfprem_table_2014_2015_wages.plot(kind='scatter', x='WageBill', y='Points')
df = dfprem_table_2014_2015_wages.set_index('Team')
(a,b) = df.ix['Liverpool']
print a, b

# Ref: http://stackoverflow.com/questions/739241/date-ordinal-output
def n_plus_suffix(n):
    """Return n plus the suffix e.g.
1 becomes 1st, 2 becomes 2nd.""" assert isinstance(n, (int, long)), '{} is not an integer'.format(n) if 10 <= n % 100 < 20: return str(n) + 'th' else: return str(n) + {1 : 'st', 2 : 'nd', 3 : 'rd'}.get(n % 10, "th") """ Explanation: Compare team wage bills for 2014-15 End of explanation """ # calculate points total for 6th place sixth_place_points = df.iloc[5].Points # plot top 6 as blue circles ax = df[df.Points >= sixth_place_points].plot(kind='scatter', x='WageBill', y='Points', figsize=(12,9), color='b') # plot others as black circles df[df.Points < sixth_place_points].plot(ax=ax, kind='scatter', x='WageBill', y='Points', figsize=(12,9), color='k') # calculate Liverpool's points and plot as red circles lfcpoints, lfcwage = df.ix['Liverpool'] ax.plot(lfcwage, lfcpoints, 'ro') # add x and y labels etc ax.set_xlabel('Wage Bill in Pounds (millions)') ax.set_ylabel('Points') ax.set_title('Wage Bill vs Points, with Top 6 highlighted (2014-2015)') ax.text(2, 21, 'prepared by: @lfcsorted', fontsize=9) # add text showing team and position for top 6 for team, (points, wagebill) in df.iterrows(): pos = int(dfprem_table_2014_2015.Rank[dfprem_table_2014_2015.Team == team].values[0]) team_pos = '{} ({})'.format(team, n_plus_suffix(pos)) if points >= sixth_place_points: ax.annotate(s=team_pos, xy=(wagebill,points), xytext=(wagebill-len(team)-3, points+1)) """ Explanation: Plot Wage Bill vs Points, with Top 6 highlighted (2014-2015) End of explanation """ # list all of the teams df.index.values # set position of text, default is centred on top of the circle # note that posotionis decided by trial and error, to give clearest plot TEAM_CL = ['Sunderland', 'West Ham United'] TEAM_CR = ['Stoke City', 'Aston Villa', 'Everton'] TEAM_CT = [] for team in df.index.values: if team not in TEAM_CL + TEAM_CR: TEAM_CT.append(team) # calculate points total for 6th place sixth_place_points = df.iloc[5].Points # plot top 6 as blue circles ax = df[df.Points >= 
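Before moving on: the teens special case in n_plus_suffix is the part that is easy to get wrong (11th, not 11st). A standalone check of the same logic, with the Python-2-only long check dropped so the snippet is version-neutral:

```python
def n_plus_suffix(n):
    """Return n plus its ordinal suffix, e.g. 1 -> '1st', 2 -> '2nd'."""
    assert isinstance(n, int), '{} is not an integer'.format(n)
    if 10 <= n % 100 < 20:  # 11th, 12th, 13th ... and 111th, 112th, ...
        return str(n) + 'th'
    return str(n) + {1: 'st', 2: 'nd', 3: 'rd'}.get(n % 10, 'th')

ordinals = [n_plus_suffix(n) for n in (1, 2, 3, 4, 11, 12, 13, 21, 111)]
```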
sixth_place_points].plot(kind='scatter', x='WageBill', y='Points', figsize=(12,9), color='b') # plot others as black circles df[df.Points < sixth_place_points].plot(ax=ax, kind='scatter', x='WageBill', y='Points', figsize=(12,9), color='k') # calculate Liverpool's points and plot as red circles lfcpoints, lfcwage = df.ix['Liverpool'] ax.plot(lfcwage, lfcpoints, 'ro') # add x and y labels etc ax.set_xlabel('Wage Bill in Pounds (millions)') ax.set_ylabel('Points') ax.set_title('Wage Bill vs Points, with Top 6 highlighted (2014-2015)') ax.text(2, 21, 'prepared by: @lfcsorted', fontsize=9) # add text showing team and position for team, (points, wagebill) in df.iterrows(): pos = int(dfprem_table_2014_2015.Rank[dfprem_table_2014_2015.Team == team].values[0]) team_pos = '{} ({})'.format(team, n_plus_suffix(pos)) if team in TEAM_CT: ax.annotate(s=team_pos, xy=(wagebill,points), xytext=(wagebill-len(team)-3, points+1)) elif team in TEAM_CR: #print 'team cr: {}'.format(team) ax.annotate(s=team_pos, xy=(wagebill,points), xytext=(wagebill+2, points-0.5)) elif team in TEAM_CL: #print 'team cl: {}'.format(team) ax.annotate(s=team_pos, xy=(wagebill,points), xytext=(wagebill-3*len(team)-6, points-0.5)) else: raise('unexpected error') # add key areas to the graph and label from matplotlib.patches import Rectangle XTEXT_OFFSET = 1 YTEXT_OFFSET = -2 # add top 4 area, with annotation in top left hand corner of rectangle top4rect_bl_x = df[0:4].WageBill.min() # bottom left x co-ord of rectangle top4rect_bl_y = df[0:4].Points.min() # bottom left y co-ord of rectangle top4rect_width = df[0:4].WageBill.max() - top4rect_bl_x # width of rectangle top4rect_height = df[0:4].Points.max() - top4rect_bl_y # height of rectangle top4rect_tl_x = df[0:4].WageBill.min() # top left x co-ord for annotation top4rect_tl_y = df[0:4].Points.max() # top left y co-ord for annotation top4rect_xtext = top4rect_tl_x + XTEXT_OFFSET # text x co-ord for annotation top4rect_ytext = top4rect_tl_y + YTEXT_OFFSET # 
text y co-ord for annotation ax.add_patch(Rectangle((top4rect_bl_x, top4rect_bl_y), top4rect_width, top4rect_height, facecolor="blue", alpha=0.2)) ax.annotate(s='Top 4 area', xy=(top4rect_tl_x, top4rect_tl_y), xytext=(top4rect_xtext, top4rect_ytext), color='blue') # add top 6 area, with annotation in top left hand corner of rectangle top6rect_bl_x = df[0:6].WageBill.min() # bottom left x co-ord of rectangle top6rect_bl_y = df[0:6].Points.min() # bottom left y co-ord of rectangle top6rect_width = df[0:6].WageBill.max() - top6rect_bl_x # width of rectangle top6rect_height = df[0:6].Points.max() - top6rect_bl_y # height of rectangle top6rect_tl_x = df[0:6].WageBill.min() # top left x co-ord for annotation top6rect_tl_y = df[0:6].Points.max() # top left y co-ord for annotation top6rect_xtext = top6rect_tl_x + XTEXT_OFFSET # text x co-ord for annotation top6rect_ytext = top6rect_tl_y + YTEXT_OFFSET # text y co-ord for annotation ax.add_patch(Rectangle((top6rect_bl_x, top6rect_bl_y), top6rect_width, top6rect_height, facecolor="lightblue", alpha=0.2)) ax.annotate(s='Top 6 area', xy=(top6rect_tl_x, top6rect_tl_y), xytext=(top6rect_xtext, top6rect_ytext), color='blue', alpha=0.7) # plot and save current figure fig = plt.gcf() plt.show() fig.savefig('PremWageBillvsPoints2014-2015.png', bbox_inches='tight') """ Explanation: Prettify the plot - label all teams and highlight the top 4 and top 6 areas. End of explanation """
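The Rectangle placement above boils down to taking a min/max bounding box over the top-k (wage bill, points) pairs; the arithmetic in isolation, with made-up numbers rather than the real 2014-15 table:

```python
# Hypothetical (wage_bill, points) pairs, already sorted by league position.
table = [(215.6, 87), (205.0, 79), (192.7, 75), (203.0, 70), (100.4, 64), (144.8, 62)]

def bbox(points):
    """Bottom-left corner plus width/height, as a matplotlib Rectangle expects."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

top4 = bbox(table[:4])  # "Top 4 area"
top6 = bbox(table[:6])  # "Top 6 area"
```

The notebook does the same thing with `df[0:4].WageBill.min()` and friends; the top-6 box necessarily contains the top-4 box, which is why the two shaded regions nest in the plot.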
CUFCTL/DLBD
Fall2017/Module1.ipynb
mit
import sys, os import pickle import torch import torch.utils.data as data import glob from PIL import Image import numpy as np def unpickle(fname): with open(fname, 'rb') as f: Dict = pickle.load(f, encoding='bytes') return Dict def load_data(batch): print ("Loading batch:{}".format(batch)) return unpickle(batch) class CIFARLoader(data.Dataset): """ CIFAR-10 Loader: Loads the CIFAR-10 data according to an index value and returns the data and the labels. args: root: Root of the data directory. Optional args: transforms: The transforms you wish to apply to the data. target_transforms: The transforms you wish to apply to the labels. """ def __init__(self, root, train=True, transform=None, target_transform=None): self.root = root self.transform = transform self.target_transform = target_transform self.train = train patt = os.path.join(self.root, 'data_batch_*') # create the pattern we want to search for. self.batches = sorted(glob.glob(patt)) self.train_data = [] self.train_labels = [] self.test_data = [] self.test_labels = [] if self.train: for batch in self.batches: entry = {} entry = load_data(batch) self.train_data.append(entry[b'data']) self.train_labels += entry[b'labels'] else: entry = load_data(os.path.join(self.root, 'test_batch')) self.test_data.append(entry[b'data']) self.test_labels += entry[b'labels'] ############################################# # We need to "concatenate" all the different # # training samples into one big array. For # # doing that we're going to use a numpy # # function called "concatenate". # ############################################## if self.train: self.train_data = np.concatenate(self.train_data) self.train_data = self.train_data.reshape((50000, 3, 32,32)) self.train_data = self.train_data.transpose((0,2,3,1)) # pay attention to this step! 
        else:
            self.test_data = np.concatenate(self.test_data)
            self.test_data = self.test_data.reshape((10000, 3,32,32))
            self.test_data = self.test_data.transpose((0,2,3,1))

    def __getitem__(self, index):
        if self.train:
            image = self.train_data[index]
            label = self.train_labels[index]
        else:
            image = self.test_data[index]
            label = self.test_labels[index]

        if self.transform is not None:
            image = self.transform(image)

        if self.target_transform is not None:
            label = self.target_transform(label)
        # print(image.size())
        return image, label

    def __len__(self):
        if self.train:
            return len(self.train_data)
        else:
            return len(self.test_data)
"""
Explanation: Module 1: Introduction to Neural Nets
The aim of this module is to introduce you to designing simple neural networks. You've already seen how to load data in PyTorch and a sample script of the overall workflow. In this notebook, you'll implement your own neural network and report on its performance.
Gathering data
We'll use the dataloading module we developed earlier.
End of explanation
"""
import torchvision.transforms as transforms
import torch.utils.data as data
import numpy as np
import matplotlib.pyplot as plt
import torchvision

def imshow(torch_tensor):
    torch_tensor = torch_tensor/2 + 0.5
    npimg = torch_tensor.numpy()
    plt.imshow(npimg.transpose(1,2,0))
    plt.show()

tfs = transforms.Compose([transforms.ToTensor(),
                          transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))])

root='/home/akulshr/cifar-10-batches-py/'

cifar_train = CIFARLoader(root, train=True, transform=tfs) # create a "CIFARLoader instance".
cifar_loader = data.DataLoader(cifar_train, batch_size=4, shuffle=True, num_workers=2)

# all possible classes in the CIFAR-10 dataset
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

data_iter = iter(cifar_loader)
data,label = data_iter.next()

#visualize data.
imshow(torchvision.utils.make_grid(data))

# print the labels
' '.join(classes[label[j]] for j in range(4))
"""
Explanation: Given the dataset, let's check our loader by viewing some of the images and their corresponding labels. PyTorch provides us with a neat little function called make_grid which plots "x" number of images together in a grid.
End of explanation
"""
import torch.nn as nn

class MyNet(nn.Module):
    """
    Your neural network here.

    bs: Batch size, you can include or leave it out.
    """
    def __init__(self, bs):
        super(MyNet, self).__init__()
        pass

    def forward(self, x):
        pass

net = MyNet(4) # be sure to put any additional parameters you pass to __init__ here
print(net)
"""
Explanation: Creating a Neural Network
Now that we're through with the boring part, let's move on to the fun stuff! In the code stub being provided you can write your own network definition and then print it. We've not covered Convolutional Layers yet, so the fun will be limited to just using Linear Layers. When using linear layers keep in mind that the input features are 3*32*32. When writing out the layers it is important to think in terms of matrix multiplication: if your input batch has dimensions 4x3x32x32, the first linear layer's input features must match the flattened size of each sample.
I'll define some terms so that you can use them while designing the net:

N: The batch size --> This determines how many images are pushed through the network during an iteration.
C: The number of channels --> It's an RGB image hence we set this to 3.
H,W: The height and width of the image.

Your input to a network is usually NxCxHxW. Now a linear layer expects a single number as an input feature, so for a batch size of 1 your input features will be 3072 (3*32*32).
End of explanation """ import torch.optim as optim import torch.utils.data as data from torch.autograd import Variable tfs = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))]) root='/home/akulshr/cifar-10-batches-py/' cifar_train = CIFARLoader(root, transform=tfs) # create a "CIFARLoader instance". cifar_train_loader = data.DataLoader(cifar_train, batch_size=4, shuffle=True, num_workers=2) lossfn = nn.NLLLoss() optimz = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9) def train(net): net.train() for ep in range(2): running_loss = 0.0 for ix, (img,label) in enumerate(cifar_train_loader, 0): img_var = Variable(img) label_var = Variable(label) optimz.zero_grad() # print(img_var.size()) op = net(img_var) loss = lossfn(op, label_var) loss.backward() optimz.step() running_loss += loss.data[0] if ix%2000 == 1999: print("[%d/%5d] Loss: %f"%(ep+1, ix+1, running_loss/2000)) running_loss = 0.0 print("Finished Training\n") train(net) """ Explanation: Training the Network Having defined our network and tested that our dataloader works to our satisfaction, we're going to train the network. For your convenience, the training script is included and it is highly recommended that you try to gain a sense of what's happening. We'll talk more about training in the coming meetings. 
End of explanation """ def imshow(torch_tensor): torch_tensor = torch_tensor/2 + 0.5 npimg = torch_tensor.numpy() plt.imshow(npimg.transpose(1,2,0)) plt.show() tfs = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))]) root='/home/akulshr/cifar-10-batches-py/' cifar_test = CIFARLoader(root, train=False, transform=tfs) cifar_test_loader = data.DataLoader(cifar_test,batch_size=4, shuffle=False, num_workers=2) # all possible classes in the CIFAR-10 dataset classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') data_iter = iter(cifar_test_loader) imgs,label = data_iter.next() # Show the test images. imshow(torchvision.utils.make_grid(imgs)) # Print the "Ground Truth labels" print("Ground Truth: ") print(' '.join(classes[label[j]] for j in range(4))) """ Explanation: So far we've trained the network and we're seeing some output loss. However, that's only the part of the story, since we need the model to perform well on unseen inputs. In order to do that we'll evaluate the dataset on the test_batch. End of explanation """ data_iter = iter(cifar_test_loader) imgs,label = data_iter.next() op = net(Variable(imgs)) _, pred = torch.max(op.data, 1) print("Guessed class: ") print(' '.join(classes[pred[j]] for j in range(4))) """ Explanation: So we've got these images along with their labels as "ground truth". Now let's ask the neural network we just trained as to what it thinks the images are End of explanation """ correct = 0.0 total = 0.0 for cache in cifar_test_loader: img, label = cache op = net(Variable(img)) _, pred = torch.max(op.data, 1) total += label.size(0) correct += (pred==label).sum() print("accuracy: %f"%(100*(correct/total))) """ Explanation: Pretty sweet! our neural network seems to have learnt something. Let's see how it does on the overall dataset: End of explanation """
Saytiras/StalkerML
Calculate Opinion with Base.ipynb
gpl-2.0
# import logging # logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s') # logging.root.level = logging.INFO from os import path from random import shuffle from corputil import FileCorpus, ListCorpus from corputil.utils import load_stopwords from gensim.models.word2vec import LineSentence, Word2Vec spd = [ path.join('data', 'Politics', 'SPD.txt'), path.join('data', 'Politics', 'SPD_EU.txt'), path.join('data', 'Politics', 'SPD_Fraktion.txt') ] linke = [ path.join('data', 'Politics', 'Linke.txt'), path.join('data', 'Politics', 'Linke_EU.txt'), path.join('data', 'Politics', 'Linke_Fraktion.txt') ] gruene = [ path.join('data', 'Politics', 'Grüne.txt'), path.join('data', 'Politics', 'Grüne_EU.txt'), path.join('data', 'Politics', 'Grüne_Fraktion.txt') ] fdp = [ path.join('data', 'Politics', 'FDP.txt'), path.join('data', 'Politics', 'FDP_EU.txt'), path.join('data', 'Politics', 'FDP_Fraktion.txt') ] cdu = [ path.join('data', 'Politics', 'CDU.txt'), path.join('data', 'Politics', 'CDU_EU.txt'), path.join('data', 'Politics', 'CDU_Fraktion.txt') ] npd = [ path.join('data', 'Politics', 'NPD_Fraktion_MV.txt'), path.join('data', 'Politics', 'NPD_Fraktion_Sachsen.txt'), path.join('data', 'Politics', 'NPD_Jung.txt') ] corpora = [ FileCorpus(linke), FileCorpus(spd), FileCorpus(gruene), FileCorpus(fdp), FileCorpus(cdu), FileCorpus(npd) ] parties = [ 'Linke', 'SPD', 'Gruene', 'FDP', 'CDU', 'NPD' ] """ Explanation: Calculate Political Opinion Models End of explanation """ sentences = LineSentence(path.join('data', 'Archive', 'Corpus_Wiki.txt')) base = Word2Vec(sentences, workers=4, iter=4, size=100, window=2, sg=1) """ Explanation: Training the Base Model Calculate the base model (from german wiki), that is later used as a base for training the classification models. End of explanation """ base.save(path.join('models', 'word2vec', 'Base.w2v')) base = None sentences = None """ Explanation: Save model to disk. 
Don't finalize the model because we need to train it with new data later! End of explanation """ for party, corpus in zip(parties, corpora): sentences = list(corpus.sentences_token()) shuffle(sentences) model = Word2Vec.load(path.join('models', 'word2vec', 'Base.w2v')) model.train(sentences, total_examples=len(sentences)) model.save(path.join('models', 'word2vec', '{}.w2v'.format(party))) """ Explanation: Training the Classifier End of explanation """ models = [path.join('models', 'word2vec', '{}.w2v'.format(party)) for party in parties] labels = ['2015-44', '2015-45', '2015-46', '2015-47', '2015-48', '2015-49', '2015-50', '2015-51', '2015-52', '2015-53', '2016-01', '2016-02', '2016-03', '2016-04', '2016-05', '2016-06'] files = [path.join('data', 'CurrentNews', '{}.csv').format(label) for label in labels] out = [path.join('data', 'CurrentNews', 's_{}.csv').format(label) for label in labels] import pandas as pd import numpy as np def calc_score(doc, mod): model = Word2Vec.load(mod) score = model.score(doc, len(doc)) return score # Taken from Matt Taddy: https://github.com/TaddyLab/gensim/blob/deepir/docs/notebooks/deepir.ipynb def calc_probability(df, mods): docs = list(ListCorpus(list(df.loc[:, 'text'])).doc_sentences_token()) sentlist = [s for d in docs for s in d] llhd = np.array( [ calc_score(sentlist, m) for m in mods ] ) lhd = np.exp(llhd - llhd.max(axis=0)) prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() ) prob["doc"] = [i for i,d in enumerate(docs) for s in d] prob = prob.groupby("doc").mean() return prob # raw = pd.concat([pd.read_csv(file, sep='|', encoding='utf-8') for file in files], ignore_index=True) # prob = calc_probability(raw, models) # data = pd.concat([raw, prob], axis=1) # data.groupby('site').mean() for file, o in zip(files, out): data = pd.read_csv(file, sep='|', encoding='utf-8') sentiment = calc_probability(data, models) csv = pd.concat([data, sentiment], axis=1) csv.rename(columns={ 0: 'LINKE', 1: 'SPD', 2: 'GRÜNE', 3: 'FDP', 4: 
'CDU', 5: 'NPD' }, inplace=True) csv.to_csv(o, index=False, encoding='utf-8', sep='|') """ Explanation: Political Ideology Detection End of explanation """
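The core of the `calc_probability` helper above is the conversion from per-model log-likelihoods to probabilities: subtracting the column-wise maximum before exponentiating keeps `np.exp` from underflowing on large negative scores, and dividing by the column sums normalizes each sentence's model weights to one. A standalone NumPy sketch of just that step, with made-up scores:

```python
import numpy as np

# Hypothetical log-likelihood scores: 3 party models (rows) x 4 sentences (columns).
llhd = np.array([[-101.0, -250.0, -70.0, -12.0],
                 [-102.0, -245.0, -71.0, -11.0],
                 [-100.0, -248.0, -69.0, -13.0]])

# Shift by the per-sentence maximum so the largest term is exp(0) = 1 ...
lhd = np.exp(llhd - llhd.max(axis=0))
# ... then normalize so each sentence's probabilities over the models sum to 1.
prob = lhd / lhd.sum(axis=0)
print(prob.sum(axis=0))
```

Without the shift, scores like -250 would exponentiate to exactly zero in float64 and every model would tie; the shift leaves the ratios unchanged.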
robertoalotufo/ia898
master/tutorial_numpy_1_10.ipynb
mit
import numpy as np
a = np.array([11,1,2,3,4,5,12,-3,-4,7,4])
print('a = ',a)
print('np.clip(a,0,10) = ', np.clip(a,0,10))
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Clip" data-toc-modified-id="Clip-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Clip</a></div><div class="lev2 toc-item"><a href="#Exemplos" data-toc-modified-id="Exemplos-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Examples</a></div><div class="lev2 toc-item"><a href="#Exemplo-com-ponto-flutuante" data-toc-modified-id="Exemplo-com-ponto-flutuante-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Floating-point example</a></div><div class="lev1 toc-item"><a href="#Documentação-Oficial-Numpy" data-toc-modified-id="Documentação-Oficial-Numpy-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Official NumPy documentation</a></div>
# Clip
The clip function replaces the values of an array that fall below a minimum threshold, or rise above a maximum threshold, with those minimum and maximum thresholds, respectively. It is especially useful in image processing to keep indices from running past the image boundaries.
## Examples
End of explanation
"""

a = np.arange(10).astype(int)
print('a=',a)
print('np.clip(a,2.5,7.5)=',np.clip(a,2.5,7.5))
"""
Explanation: Floating-point example
Note that if the clip parameters are floating point, the result will also be floating point:
End of explanation
"""
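As a concrete instance of the image-processing use mentioned above — keeping neighborhood indices inside an image — here is a small sketch; the 5×5 image size is just an assumption for illustration:

```python
import numpy as np

H, W = 5, 5  # hypothetical image dimensions
# A 3x3 neighborhood around the corner pixel (0, 0) would index row/column -1;
# negative indices wrap around in NumPy, so clip pins them to the valid range.
rows = np.clip(np.arange(-1, 2), 0, H - 1)
cols = np.clip(np.arange(-1, 2), 0, W - 1)
print(rows, cols)  # [0 0 1] [0 0 1]
```

The border pixel is simply repeated, which is the usual "replicate" behavior at image edges.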
mne-tools/mne-tools.github.io
0.20/_downloads/3927e2933ae8d1b19effcbd5c5341bd0/plot_20_visualize_evoked.ipynb
bsd-3-clause
import os import numpy as np import mne """ Explanation: Visualizing Evoked data This tutorial shows the different visualization methods for :class:~mne.Evoked objects. :depth: 2 As usual we'll start by importing the modules we need: End of explanation """ sample_data_folder = mne.datasets.sample.data_path() sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis-ave.fif') evokeds_list = mne.read_evokeds(sample_data_evk_file, baseline=(None, 0), proj=True, verbose=False) # show the condition names for e in evokeds_list: print(e.comment) """ Explanation: Instead of creating the :class:~mne.Evoked object from an :class:~mne.Epochs object, we'll load an existing :class:~mne.Evoked object from disk. Remember, the :file:.fif format can store multiple :class:~mne.Evoked objects, so we'll end up with a :class:list of :class:~mne.Evoked objects after loading. Recall also from the tut-section-load-evk section of the introductory Evoked tutorial &lt;tut-evoked-class&gt; that the sample :class:~mne.Evoked objects have not been baseline-corrected and have unapplied projectors, so we'll take care of that when loading: End of explanation """ conds = ('aud/left', 'aud/right', 'vis/left', 'vis/right') evks = dict(zip(conds, evokeds_list)) # ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ this is equivalent to: # {'aud/left': evokeds_list[0], 'aud/right': evokeds_list[1], # 'vis/left': evokeds_list[2], 'vis/right': evokeds_list[3]} """ Explanation: To make our life easier, let's convert that list of :class:~mne.Evoked objects into a :class:dictionary &lt;dict&gt;. We'll use /-separated dictionary keys to encode the conditions (like is often done when epoching) because some of the plotting methods can take advantage of that style of coding. End of explanation """ evks['aud/left'].plot(exclude=[]) """ Explanation: Plotting signal traces .. 
sidebar:: Butterfly plots Plots of superimposed sensor timeseries are called "butterfly plots" because the positive- and negative-going traces can resemble butterfly wings. The most basic plot of :class:~mne.Evoked objects is a butterfly plot of each channel type, generated by the :meth:evoked.plot() &lt;mne.Evoked.plot&gt; method. By default, channels marked as "bad" are suppressed, but you can control this by passing an empty :class:list to the exclude parameter (default is exclude='bads'): End of explanation """ evks['aud/left'].plot(picks='mag', spatial_colors=True, gfp=True) """ Explanation: Notice the completely flat EEG channel and the noisy gradiometer channel plotted in red color. Like many MNE-Python plotting functions, :meth:evoked.plot() &lt;mne.Evoked.plot&gt; has a picks parameter that can select channels to plot by name, index, or type. In the next plot we'll show only magnetometer channels, and also color-code the channel traces by their location by passing spatial_colors=True. Finally, we'll superimpose a trace of the :term:global field power &lt;GFP&gt; across channels: End of explanation """ times = np.linspace(0.05, 0.13, 5) evks['aud/left'].plot_topomap(ch_type='mag', times=times, colorbar=True) fig = evks['aud/left'].plot_topomap(ch_type='mag', times=0.09, average=0.1) fig.text(0.5, 0.05, 'average from 40-140 ms', ha='center') """ Explanation: Plotting scalp topographies In an interactive session, the butterfly plots seen above can be click-dragged to select a time region, which will pop up a map of the average field distribution over the scalp for the selected time span. You can also generate scalp topographies at specific times or time spans using the :meth:~mne.Evoked.plot_topomap method: End of explanation """ mags = evks['aud/left'].copy().pick_types(meg='mag') mne.viz.plot_arrowmap(mags.data[:, 175], mags.info, extrapolate='local') """ Explanation: Additional examples of plotting scalp topographies can be found in ex-evoked-topomap. 
Arrow maps Scalp topographies at a given time point can be augmented with arrows to show the estimated magnitude and direction of the magnetic field, using the function :func:mne.viz.plot_arrowmap: End of explanation """ evks['vis/right'].plot_joint() """ Explanation: Joint plots Joint plots combine butterfly plots with scalp topographies, and provide an excellent first-look at evoked data; by default, topographies will be automatically placed based on peak finding. Here we plot the right-visual-field condition; if no picks are specified we get a separate figure for each channel type: End of explanation """ def custom_func(x): return x.max(axis=1) for combine in ('mean', 'median', 'gfp', custom_func): mne.viz.plot_compare_evokeds(evks, picks='eeg', combine=combine) """ Explanation: Like :meth:~mne.Evoked.plot_topomap you can specify the times at which you want the scalp topographies calculated, and you can customize the plot in various other ways as well. See :meth:mne.Evoked.plot_joint for details. Comparing Evoked objects To compare :class:~mne.Evoked objects from different experimental conditions, the function :func:mne.viz.plot_compare_evokeds can take a :class:list or :class:dict of :class:~mne.Evoked objects and plot them all on the same axes. Like most MNE-Python visualization functions, it has a picks parameter for selecting channels, but by default will generate one figure for each channel type, and combine information across channels of the same type by calculating the :term:global field power &lt;GFP&gt;. 
Information may be combined across channels in other ways too; support for combining via mean, median, or standard deviation is built-in, and custom callable functions may also be used, as shown here:
End of explanation
"""

def custom_func(x):
    return x.max(axis=1)

for combine in ('mean', 'median', 'gfp', custom_func):
    mne.viz.plot_compare_evokeds(evks, picks='eeg', combine=combine)
"""
Explanation: One nice feature of :func:~mne.viz.plot_compare_evokeds is that when passing evokeds in a dictionary, it allows specifying plot styles based on /-separated substrings of the dictionary keys (similar to epoch selection; see tut-section-subselect-epochs). Here, we specify colors for "aud" and "vis" conditions, and linestyles for "left" and "right" conditions, and the traces and legend are styled accordingly.
End of explanation
"""

mne.viz.plot_compare_evokeds(evks, picks='MEG 1811', colors=dict(aud=0, vis=1),
                             linestyles=dict(left='solid', right='dashed'))
"""
Explanation: Image plots
Like :class:~mne.Epochs, :class:~mne.Evoked objects also have a :meth:~mne.Evoked.plot_image method, but unlike :meth:epochs.plot_image() &lt;mne.Epochs.plot_image&gt;, :meth:evoked.plot_image() &lt;mne.Evoked.plot_image&gt; shows one channel per row instead of one epoch per row. Again, a picks parameter is available, as well as several other customization options; see :meth:~mne.Evoked.plot_image for details.
End of explanation
"""

evks['vis/right'].plot_image(picks='meg')
"""
Explanation: Topographical subplots
For sensor-level analyses it can be useful to plot the response at each sensor in a topographical layout. 
The :func:~mne.viz.plot_compare_evokeds function can do this if you pass axes='topo', but it can be quite slow if the number of sensors is too large, so here we'll plot only the EEG channels: End of explanation """ mne.viz.plot_evoked_topo(evokeds_list) """ Explanation: For larger numbers of sensors, the method :meth:evoked.plot_topo() &lt;mne.Evoked.plot_topo&gt; and the function :func:mne.viz.plot_evoked_topo can both be used. The :meth:~mne.Evoked.plot_topo method will plot only a single condition, while the :func:~mne.viz.plot_evoked_topo function can plot one or more conditions on the same axes, if passed a list of :class:~mne.Evoked objects. The legend entries will be automatically drawn from the :class:~mne.Evoked objects' comment attribute: End of explanation """ subjects_dir = os.path.join(sample_data_folder, 'subjects') sample_data_trans_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw-trans.fif') """ Explanation: By default, :func:~mne.viz.plot_evoked_topo will plot all MEG sensors (if present), so to get EEG sensors you would need to modify the evoked objects first (e.g., using :func:mne.pick_types). <div class="alert alert-info"><h4>Note</h4><p>In interactive sessions, both approaches to topographical plotting allow you to click one of the sensor subplots to pop open a larger version of the evoked plot at that sensor.</p></div> 3D Field Maps The scalp topographies above were all projected into 2-dimensional overhead views of the field, but it is also possible to plot field maps in 3D. To do this requires a :term:trans file to transform locations between the coordinate systems of the MEG device and the head surface (based on the MRI). You can compute 3D field maps without a trans file, but it will only work for calculating the field on the MEG helmet from the MEG sensors. 
End of explanation """ maps = mne.make_field_map(evks['aud/left'], trans=sample_data_trans_file, subject='sample', subjects_dir=subjects_dir) evks['aud/left'].plot_field(maps, time=0.1) """ Explanation: By default, MEG sensors will be used to estimate the field on the helmet surface, while EEG sensors will be used to estimate the field on the scalp. Once the maps are computed, you can plot them with :meth:evoked.plot_field() &lt;mne.Evoked.plot_field&gt;: End of explanation """ for ch_type in ('mag', 'grad', 'eeg'): evk = evks['aud/right'].copy().pick(ch_type) _map = mne.make_field_map(evk, trans=sample_data_trans_file, subject='sample', subjects_dir=subjects_dir, meg_surf='head') fig = evk.plot_field(_map, time=0.1) mne.viz.set_3d_title(fig, ch_type, size=20) """ Explanation: You can also use MEG sensors to estimate the scalp field by passing meg_surf='head'. By selecting each sensor type in turn, you can compare the scalp field estimates from each. End of explanation """
pysal/pysal
notebooks/explore/pointpats/pointpattern.ipynb
bsd-3-clause
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern
"""
Explanation: Planar Point Patterns in PySAL
Author: Serge Rey sjsrey@gmail.com and Wei Kang weikang9009@gmail.com
Introduction
This notebook introduces the basic PointPattern class in PySAL and covers the following:
What is a point pattern?
Creating Point Patterns
Attributes of Point Patterns
Intensity Estimates
Next steps
What is a point pattern?
We introduce basic terminology here and point the interested reader to more detailed references on the underlying theory of the statistical analysis of point patterns.
Points and Event Points
To start we consider a series of point locations, $(s_1, s_2, \ldots, s_n)$ in a study region $\Re$. We limit our focus here to a two-dimensional space so that $s_j = (x_j, y_j)$ is the spatial coordinate pair for point location $j$.
We will be interested in two different types of points.
Event Points
Event Points are locations where something of interest has occurred. The term event is very general here and could be used to represent a wide variety of phenomena. Some examples include:
locations of individual plants of a certain species
archeological sites
addresses of disease cases
locations of crimes
the distribution of neurons
among many others.
It is important to recognize that in the statistical analysis of point patterns the interest extends beyond the observed point pattern at hand. The observed patterns are viewed as realizations from some underlying spatial stochastic process.
Arbitrary Points
The second type of point we consider are those locations where the phenomenon of interest has not been observed. These go by various names such as "empty space" or "regular" points, and at first glance might seem less interesting to a spatial analyst.
However, these types of points play a central role in a class of point pattern methods that we explore below.
Point Pattern Analysis
The analysis of event points focuses on a number of different characteristics of the collective spatial pattern that is observed. Often the pattern is judged against the hypothesis of complete spatial randomness (CSR). That is, one assumes that the point events arise independently of one another and with constant probability across $\Re$, loosely speaking. Of course, many of the empirical point patterns we encounter do not appear to be generated from such a simple stochastic process. The departures from CSR can be due to two types of effects.
First order effects
For a point process, the first-order properties pertain to the intensity of the process across space. Whether and how the intensity of the point pattern varies within our study region are questions that assume center stage. Such variation in the intensity of the pattern of, say, addresses of individuals with a certain type of non-infectious disease may reflect the underlying population density. In other words, although the point pattern of disease cases may display variation in intensity in our study region, and thus violate the constant probability of an event condition, that spatial drift in the pattern intensity could be driven by an underlying covariate.
Second order effects
The second channel by which departures from CSR can arise is through interaction and dependence between events in space. The canonical example is contagious disease, whereby the presence of an infected individual increases the probability of subsequent additional cases nearby. When a pattern departs from expectation under CSR, this suggests that the underlying process may have some spatial structure that merits further investigation. Thus methods for detection of deviations from CSR and testing for alternative processes have given rise to a large literature in point pattern statistics.
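Since CSR means events fall independently and with constant probability over the region, a synthetic CSR realization is nothing more than independent uniform draws over the window. A hedged NumPy sketch (unit-square window and seed chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.RandomState(12345)
n = 200
# n independent uniform draws over a unit-square window: one CSR realization.
pts = rng.uniform(0, 1, size=(n, 2))

# Constant intensity implies the expected count in any subregion is n times
# its area; e.g. the left half of the window should hold about n/2 points.
left_half = (pts[:, 0] < 0.5).sum()
print(left_half)
```

Point patterns whose counts deviate strongly from this expectation across subregions are the ones flagged by the quadrat and distance-based tests mentioned below.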
Methods of Point Pattern Analysis in PySAL
The points module in PySAL implements basic methods of point pattern analysis organized into the following groups:
Point Processing
Centrography and Visualization
Quadrat Based Methods
Distance Based Methods
In the remainder of this notebook we shall focus on point processing.
End of explanation
"""

points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21], [9.47, 31.02],
          [30.78, 60.10], [75.21, 58.93], [79.26, 7.68], [8.23, 39.93],
          [98.73, 77.17], [89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
p1 = PointPattern(points)
p1.mbb
"""
Explanation: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows:
End of explanation
"""

p1.summary()
type(p1.points)
np.asarray(p1.points)
p1.mbb
"""
Explanation: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
End of explanation
"""

points = np.asarray(points)
points
p1_np = PointPattern(points)
p1_np.summary()
"""
Explanation: From numpy arrays
End of explanation
"""

f = ps.examples.get_path('vautm17n_points.shp')
fo = ps.io.open(f)
pp_va = PointPattern(np.asarray([pnt for pnt in fo]))
fo.close()
pp_va.summary()
"""
Explanation: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
End of explanation
"""

pp_va.summary()
pp_va.points
pp_va.head()
pp_va.tail()
"""
Explanation: Attributes of PySAL Point Patterns
End of explanation
"""

pp_va.lambda_mbb
"""
Explanation: Intensity Estimates
The intensity of a point process at point $s_j$ can be defined as:
$$\lambda(s_j) = \lim \limits_{|\mathbf{A}s_j| \to 0} \left \{ \frac{E(Y(\mathbf{A}s_j))}{|\mathbf{A}s_j|} \right \} $$
where $\mathbf{A}s_j$ is a small region surrounding location $s_j$ with area $|\mathbf{A}s_j|$, and $E(Y(\mathbf{A}s_j))$ is the expected number of event points in $\mathbf{A}s_j$.
The intensity is the mean number of event points per unit of area at point $s_j$.
Recall that one of the implications of CSR is that the intensity of the point process is constant in our study area $\Re$. In other words $\lambda(s_j) = \lambda(s_{j+1}) = \ldots = \lambda(s_n) = \lambda \ \forall s_j \in \Re$. Thus, if the area of $\Re$ = $|\Re|$ the expected number of event points in the study region is: $E(Y(\Re)) = \lambda |\Re|.$
In PySAL, the intensity is estimated by using a geometric object to encode the study region. We refer to this as the window, $W$. The reason for distinguishing between $\Re$ and $W$ is that the latter permits alternative definitions of the bounding object.
Intensity estimates are based on the following:
$$\hat{\lambda} = \frac{n}{|W|}$$
where $n$ is the number of points in the window $W$, and $|W|$ is the area of $W$.
Intensity based on minimum bounding box:
$$\hat{\lambda}_{mbb} = \frac{n}{|W_{mbb}|}$$
where $W_{mbb}$ is the minimum bounding box for the point pattern.
End of explanation
"""

pp_va.lambda_hull
"""
Explanation: Intensity based on convex hull:
$$\hat{\lambda}_{hull} = \frac{n}{|W_{hull}|}$$
where $W_{hull}$ is the convex hull for the point pattern.
End of explanation
"""
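The bounding-box estimate can be checked by hand: for the twelve sample points used earlier, the area of the minimum bounding box is just the product of the coordinate ranges, and the intensity is n divided by that area. A sketch of the arithmetic, reimplementing the formula rather than calling PointPattern:

```python
import numpy as np

pts = np.array([[66.22, 32.54], [22.52, 22.39], [31.01, 81.21], [9.47, 31.02],
                [30.78, 60.10], [75.21, 58.93], [79.26, 7.68], [8.23, 39.93],
                [98.73, 77.17], [89.78, 42.53], [65.19, 92.08], [54.46, 8.48]])

x_min, y_min = pts.min(axis=0)
x_max, y_max = pts.max(axis=0)
area_mbb = (x_max - x_min) * (y_max - y_min)   # minimum bounding box area
lambda_mbb = len(pts) / area_mbb               # n / |W_mbb|
print(lambda_mbb)
```

The convex hull is a tighter window than the bounding box, so the hull-based estimate is always at least as large for the same points.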
reetawwsum/Jupyter-Notebooks
Data analysis in Python with pandas.ipynb
mit
import pandas as pd
"""
Explanation: Data analysis in Python with pandas
What is pandas?
pandas: Open source library in Python for data analysis, data manipulation, and data visualisation.
Pros:
1. Tons of functionality
2. Well supported by community
3. Active development
4. Lots of documentation
5. Plays well with other packages, e.g. NumPy and scikit-learn
End of explanation
"""

orders = pd.read_table('http://bit.ly/chiporders')
orders.head()
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('http://bit.ly/movieusers', delimiter='|', header=None, names=user_cols)
users.head()
"""
Explanation: How do I read a tabular data file into pandas?
Tabular data file: read_table assumes a tab-separated file (TSV) by default
End of explanation
"""

ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
type(ufo)
type(ufo['City'])
city = ufo.City
city.head()
"""
Explanation: Tip: skiprows and skipfooter params are useful to omit extra data at the beginning or end of a file.
How do I select a pandas Series from a DataFrame?
Two basic data structures in pandas
1. DataFrame: Table with rows and columns
2. Series: Each column is a pandas Series
End of explanation
"""

ufo['location'] = ufo.City + ', ' + ufo.State
ufo.head()
"""
Explanation: Tip: Create a new Series in a DataFrame
End of explanation
"""

movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.describe()
movies.shape
movies.dtypes
type(movies)
movies.describe(include=['object'])
"""
Explanation: Why do some pandas commands end with parentheses, and other commands don't? 
End of explanation
"""

ufo.head()
ufo.columns
ufo.rename(columns={'Colors Reported': 'Colors_Reported', 'Shape Reported': 'Shape_Reported'}, inplace=True)
ufo.columns
ufo_cols = ['city', 'colors reported', 'state reported', 'state', 'time', 'location']
ufo.columns = ufo_cols
ufo.head()
ufo_cols = ['City', 'Colors Reported', 'State Reported', 'State', 'Time']
ufo = pd.read_csv('http://bit.ly/uforeports', names=ufo_cols, header=0)
ufo.head()
"""
Explanation: Tip: Hit "Shift+Tab" inside a method's parentheses to get the list of arguments
How do I rename columns in a pandas DataFrame?
End of explanation
"""

ufo.columns = ufo.columns.str.replace(' ', '_')
ufo.columns
"""
Explanation: Tip: Use the str.replace method to drop spaces from column names
End of explanation
"""

ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
ufo.shape
ufo.drop('Colors Reported', axis=1, inplace=True)
ufo.head()
ufo.drop(labels=['City', 'State'], axis=1, inplace=True)
ufo.head()
"""
Explanation: How do I remove columns from a pandas DataFrame?
End of explanation
"""

ufo.drop([0, 1], axis=0, inplace=True)
ufo.shape
"""
Explanation: Tip: To remove rows instead of columns, choose axis=0
End of explanation
"""

movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.title.sort_values()
movies.sort_values('title')
"""
Explanation: How do I sort a pandas DataFrame or Series?
End of explanation
"""

movies.sort_values(['content_rating', 'duration'])
"""
Explanation: Tip: Sort by multiple columns
End of explanation
"""

movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.shape
movies[movies.duration >= 200]
"""
Explanation: How do I filter rows of a pandas DataFrame by column value? 
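Boolean filtering like `movies[movies.duration >= 200]` works because the comparison produces a boolean Series that is then used as a row mask. A self-contained sketch on toy data rather than the IMDb file:

```python
import pandas as pd

movies = pd.DataFrame({'title': ['A', 'B', 'C'],
                       'duration': [142, 210, 95]})

mask = movies.duration >= 200        # boolean Series, one value per row
long_movies = movies[mask]           # keeps only rows where the mask is True
print(long_movies.title.tolist())
```

The mask here is [False, True, False], so only the middle row survives.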
End of explanation
"""

movies.loc[movies.duration >= 200, 'genre']
"""
Explanation: Tip: A filtered DataFrame is also a DataFrame
End of explanation
"""

movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies[movies.duration >= 200]
movies[(movies.duration >= 200) & (movies.genre == 'Drama')]
"""
Explanation: How do I apply multiple filter criteria to a pandas DataFrame?
End of explanation
"""

movies[movies.genre.isin(['Crime', 'Drama', 'Action'])]
"""
Explanation: Tip: Multiple conditions on a single column
End of explanation
"""

drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.dtypes
drinks.beer_servings = drinks.beer_servings.astype(float)
drinks.dtypes
drinks = pd.read_csv('http://bit.ly/drinksbycountry', dtype={'beer_servings': float})
drinks.dtypes
orders = pd.read_table('http://bit.ly/chiporders')
orders.head()
orders.dtypes
orders.item_price = orders.item_price.str.replace('$', '').astype(float)
orders.head()
orders.item_name.str.contains('Chicken').astype(int).head()
"""
Explanation: How do I change the data type of a pandas Series?
End of explanation
"""

movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.dtypes
movies.genre.describe()
movies.genre.value_counts()
movies.genre.value_counts(normalize=True)
movies.genre.unique()
movies.genre.nunique()
pd.crosstab(movies.genre, movies.content_rating)
movies.duration.describe()
movies.duration.mean()
"""
Explanation: How do I explore a pandas Series? 
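`value_counts(normalize=True)` is just the raw counts divided by the number of counted values, so the two calls agree up to that scaling. A quick self-contained check on a toy Series instead of the IMDb genres:

```python
import pandas as pd

genre = pd.Series(['Drama', 'Comedy', 'Drama', 'Action', 'Drama', 'Comedy'])

counts = genre.value_counts()
shares = genre.value_counts(normalize=True)
print(counts['Drama'], shares['Drama'])
```

Here 'Drama' appears 3 times out of 6 values, so its normalized share is 0.5, and the shares sum to one.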
End of explanation """ %matplotlib inline movies.duration.plot(kind='hist') movies.genre.value_counts().plot(kind='bar') """ Explanation: Tip: Visualisation End of explanation """ ufo = pd.read_csv('http://bit.ly/uforeports') ufo.tail() ufo.isnull().tail() ufo.notnull().tail() ufo.isnull().sum() ufo[ufo.City.isnull()] ufo.shape ufo.dropna(how='any').shape ufo.dropna(how='all').shape ufo.dropna(subset=['City', 'Shape Reported'], how='any').shape ufo['Shape Reported'].value_counts(dropna=False) ufo['Shape Reported'].fillna(value='VARIOUS', inplace=True) ufo['Shape Reported'].value_counts() """ Explanation: How do I handle missing values in pandas? End of explanation """ train = pd.read_csv('http://bit.ly/kaggletrain') train.head() train['Sex_num'] = train.Sex.map({'female': 0, 'male': 1}) train.loc[0:4, ['Sex', 'Sex_num']] train['Name_length'] = train.Name.apply(len) train.loc[0:4, ['Name', 'Name_length']] import numpy as np train['Fare_ceil'] = train.Fare.apply(np.ceil) train.loc[0:4, ['Fare', 'Fare_ceil']] train.Name.str.split(',').head() def get_element(my_list, position): return my_list[position] train['Last_name'] = train.Name.str.split(',').apply(get_element, position=0) train['Last_name'] = train.Name.str.split(',').apply(lambda x: x[0]) train.loc[0:4, ['Name', 'Last_name']] drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.loc[:, 'beer_servings':'wine_servings'].apply(np.argmax, axis=1) drinks.loc[:, 'beer_servings':'wine_servings'].applymap(float) """ Explanation: How do I apply a function to a pandas Series or a DataFrame? 
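The three methods above divide the work as follows: `map` substitutes Series values via a dict or function, `apply` runs a function per Series element (or per DataFrame row/column with `axis`), and `applymap` runs one per DataFrame cell. A toy comparison on synthetic rows rather than the Titanic data; note that newer pandas releases rename `DataFrame.applymap` to `DataFrame.map`:

```python
import pandas as pd

train = pd.DataFrame({'Sex': ['female', 'male', 'female'],
                      'Fare': [7.25, 71.28, 8.05]})

sex_num = train.Sex.map({'female': 0, 'male': 1})   # Series: dict substitution
name_len = train.Sex.apply(len)                     # Series: function per element
as_text = train.applymap(str)                       # DataFrame: function per cell
print(sex_num.tolist(), name_len.tolist())
```

`map` leaves unmatched values as NaN, which makes it easy to spot categories the dict forgot.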
End of explanation """ movies = pd.read_csv('http://bit.ly/imdbratings') movies.head() movies.content_rating.isnull().sum() movies[movies.content_rating.isnull()] movies.content_rating.value_counts() movies.loc[movies.content_rating == 'NOT RATED', 'content_rating'] = np.nan movies.content_rating.isnull().sum() top_movies = movies.loc[movies.star_rating >= 9, :] top_movies top_movies.loc[0, 'duration'] = 150 top_movies top_movies = movies.loc[movies.star_rating >= 9, :].copy() top_movies top_movies.loc[0, 'duration'] = 150 top_movies """ Explanation: How do I avoid a SettingWithCopyWarning in pandas? End of explanation """ drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks pd.get_option('display.max_rows') pd.set_option('display.max_rows', None) drinks pd.reset_option('display.max_rows') pd.get_option('display.max_rows') train = pd.read_csv('http://bit.ly/kaggletrain') train pd.get_option('display.max_colwidth') pd.set_option('display.max_colwidth', 1000) pd.get_option('display.precision') pd.set_option('display.precision', 2) drinks.head() drinks['x'] = drinks.wine_servings * 1000 drinks['y'] = drinks.total_litres_of_pure_alcohol * 1000 drinks.head() pd.set_option('display.float_format', '{:,}'.format) pd.describe_option('rows') pd.reset_option('all') """ Explanation: How do I change display options in pandas? End of explanation """ df = pd.DataFrame({'id': [100, 101, 102], 'color': ['red', 'blue', 'red']}, columns=['id', 'color'], index=['A', 'B', 'C']) pd.DataFrame([[100, 'red'], [101, 'blue'], [102, 'red']], columns=['id', 'color']) import numpy as np arr = np.random.rand(4, 2) arr pd.DataFrame(arr, columns=['one', 'two']) pd.DataFrame({'student': np.arange(100, 110, 1), 'test': np.random.randint(60, 101, 10)}).set_index('student') s = pd.Series(['round', 'square'], index=['C', 'B'], name='shape') s df pd.concat([df, s], axis=1) """ Explanation: How do I create a pandas DataFrame from another object? 
End of explanation """ train = pd.read_csv('http://bit.ly/kaggletrain') train train['Sex_male'] = train.Sex.map({'female': 0, 'male': 1}) train.head() pd.get_dummies(train.Sex, prefix='Sex').iloc[:, 1:] train.Embarked.value_counts() embarked_dummies = pd.get_dummies(train.Embarked, prefix='Embarked').iloc[:, 1:] train = pd.concat([train, embarked_dummies], axis=1) train.head() pd.get_dummies(train, columns=['Sex', 'Embarked'], drop_first=True) """ Explanation: How do I create dummy variables in pandas? End of explanation """ user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code'] users = pd.read_table('http://bit.ly/movieusers', delimiter='|', header=None, names=user_cols, index_col='user_id') users.head() users.shape users.zip_code.duplicated().sum() users.duplicated().sum() users.loc[users.duplicated(keep=False), :] users.drop_duplicates(keep='first').shape users.duplicated(subset=['age', 'zip_code']).sum() """ Explanation: How do I find and remove duplicate rows in pandas? End of explanation """ drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.info() drinks.info(memory_usage='deep') drinks.memory_usage(deep=True) sorted(drinks.continent.unique()) drinks['continent'] = drinks.continent.astype('category') drinks.dtypes drinks.continent.cat.codes.head() drinks.memory_usage(deep=True) drinks['country'] = drinks.country.astype('category') drinks.memory_usage(deep=True) df = pd.DataFrame({'ID': [100, 101, 102, 103], 'quality': ['good', 'very good', 'good', 'excellent']}) df """ Explanation: How do I make my pandas DataFrame smaller and faster? End of explanation """ ufo = pd.read_csv('http://bit.ly/uforeports') ufo.head(3) ufo.loc[0, :] ufo.loc[0:2, :] ufo.loc[0, 'City':'State'] ufo.loc[ufo.City=='Oakland', :] ufo.iloc[:, [0, 3]] """ Explanation: How do I select multiple rows and columns from a pandas DataFrame? 
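The distinction above is that `loc` selects by label — and its slices include both endpoints — while `iloc` selects by integer position and is end-exclusive, like ordinary Python slicing. A toy contrast:

```python
import pandas as pd

df = pd.DataFrame({'City': ['Ithaca', 'Willingboro', 'Holyoke'],
                   'State': ['NY', 'NJ', 'CO'],
                   'Time': ['6/1/1930', '6/30/1930', '2/15/1931']})

by_label = df.loc[0:1, 'City':'State']   # rows 0..1 and both columns, inclusive
by_position = df.iloc[0:1, [0, 1]]       # row 0 only: iloc's end is exclusive
print(by_label.shape, by_position.shape)
```

The same slice bounds thus yield two rows under `loc` but only one under `iloc`.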
End of explanation """ train = pd.read_csv('http://bit.ly/kaggletrain') train.head() feature_cols = ['Pclass', 'Parch'] X = train.loc[:, feature_cols] y = train.Survived from sklearn import linear_model logreg = linear_model.LogisticRegression() logreg.fit(X, y) test = pd.read_csv('http://bit.ly/kaggletest') X_new = test.loc[:, feature_cols] y_predict = logreg.predict(X_new) y_predict.shape pd.DataFrame({'PassengerId': test.PassengerId, 'Survived': y_predict}).set_index('PassengerId').to_csv('sub.csv') train.to_pickle('train.pkl') pd.read_pickle('train.pkl') """ Explanation: How do I use pandas with scikit-learn to create Kaggle submissions? End of explanation """ orders = pd.read_table('http://bit.ly/chiporders') orders.head() orders.item_name.str.upper() orders[orders.item_name.str.contains('Chicken')] """ Explanation: How do I use string methods in pandas? End of explanation """ drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.drop(2, axis=0).head() drinks.mean(axis=1).head() """ Explanation: How do I use the "axis" parameter in pandas? End of explanation """ ufo = pd.read_csv('http://bit.ly/uforeports') ufo.head() ufo.dtypes ufo.Time.str.slice(-5, -3).astype(int).head() ufo['Time'] = pd.to_datetime(ufo.Time) ufo.dtypes ufo.head() ufo.Time.dt.dayofyear.head() ts = pd.to_datetime('1/1/1999') ufo.loc[ufo.Time > ts, :].head() %matplotlib inline ufo['Year'] = ufo.Time.dt.year ufo.head() ufo.Year.value_counts().sort_index().plot() ufo.sample(3) ufo.sample(frac=0.001) """ Explanation: How do I work with dates and times in pandas? 
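As a self-contained preview (synthetic timestamps rather than the UFO file), the usual flow is: parse strings with pd.to_datetime, then use the .dt accessor and ordinary comparisons:

```python
import pandas as pd

log = pd.DataFrame({'Time': ['1/1/1999 12:00', '6/15/2005 8:30']})
log['Time'] = pd.to_datetime(log['Time'])  # strings -> datetime64

years = log['Time'].dt.year                             # datetime accessor
recent = log[log['Time'] > pd.to_datetime('1/1/2000')]  # plain comparison
```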
End of explanation """ drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.index drinks.loc[23, :] drinks.set_index('country', inplace=True) drinks.head() drinks.loc['Brazil', 'beer_servings'] drinks.index.name = None drinks.head() drinks.reset_index(inplace=True) drinks.head() drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.set_index('country', inplace=True) drinks.head() drinks.continent.head() drinks.continent.value_counts()['Africa'] drinks.continent.value_counts().sort_index().head() people = pd.Series([3000000, 85000], index=['Albania', 'Andorra'], name='population') people drinks.beer_servings * people pd.concat([drinks, people], axis=1).head() """ Explanation: What do I need to know about the pandas index? End of explanation """ drinks = pd.read_csv('http://bit.ly/drinksbycountry') drinks.head() drinks.beer_servings.mean() drinks.groupby('continent').beer_servings.mean() drinks[drinks.continent=='Africa'].beer_servings.mean() drinks.groupby('continent').beer_servings.agg(['count', 'min', 'max', 'mean']) %matplotlib inline drinks.groupby('continent').mean().plot(kind='bar') """ Explanation: When should I use a "groupby" in pandas? End of explanation """ ufo = pd.read_csv('http://bit.ly/uforeports') ufo.head() ufo.drop('City', axis=1, inplace=True) ufo.head() ufo.dropna(how='any').shape ufo.shape """ Explanation: When should I use "inplace" parameter in pandas? End of explanation """
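To round off the inplace question above, a minimal sketch on a throwaway frame: without inplace=True a modified copy is returned and the original is untouched; with it, the frame is mutated and the call returns None.

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, None, 3.0]})

dropped = df.dropna()      # new frame; df still has 3 rows at this point
df.dropna(inplace=True)    # mutates df itself and returns None
```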
tyamamot/h29iro
codes/4_Link_Analysis.ipynb
mit
import numpy as np import numpy.linalg as lg import networkx as nx import matplotlib.pyplot as plt %matplotlib inline %precision 2 """ Explanation: Session 4: Link Analysis The goals of this exercise are to deepen your understanding of the PageRank algorithm through an example implementation, and to learn to draw graphs and compute PageRank using an existing library. This exercise uses the following library: - NetworkX - a Python library for various operations on graphs, such as creating, analyzing, and drawing them 1. Implementing the PageRank algorithm First, we implement the PageRank computation based on the power method. The power-method PageRank algorithm can be written as follows. End of explanation """ def pagerank(A, d = 0.85, eps = 1e-6): """ A: transition probability matrix d: damping factor eps: tolerance (stop once the change falls below eps) """ n = A.shape[0] # number of pages n e = np.ones(A.shape[0]) # n-dimensional vector of ones p = e / n # initialize the PageRank vector (initial state distribution) while True: # we use `while True` here, but in practice it is better to also terminate after a fixed maximum number of iterations p_next = d * np.dot(A.T, p) + (1.0 - d) * e / n # update the PageRank vector if lg.norm(p_next - p, ord=1) <= eps: # stop when the L1 norm of the difference falls below eps p = p_next break p = p_next return p """ Explanation: Now let us compute PageRank on an actual graph using the algorithm above. For simplicity, consider the following 3-node directed graph $G$. End of explanation """ # draw the directed graph G = nx.DiGraph() G.add_nodes_from([1,2,3]) G.add_edges_from([(1,2),(3,2)]) pos = nx.spring_layout(G) # graph layout based on a spring model nx.draw(G, pos, with_labels=True, node_size=500, node_color="w") plt.show() """ Explanation: The transition probability matrix $A$ corresponding to this graph $G$ can be defined as below (note that the transition probabilities for page 2 have been adjusted so that $A$ is a proper stochastic matrix). End of explanation """ A = np.array([ [0, 1, 0], [1/3, 1/3, 1/3], [0, 1, 0] ]) A """ Explanation: With $d = 0.85$ and $\epsilon = 10^{-6}$, the PageRank vector ${\mathbf p} = { p_1, p_2, p_3 }$ is obtained as follows. End of explanation """ p = pagerank(A, d = 0.85, eps=1e-6) p """ Explanation: We obtain $p_1=0.21, p_2=0.57, p_3=0.21$, matching the intuition that page $2$, which has many in-links, should receive a high PageRank value. 
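As a quick, self-contained sanity check (this sketch restates the iteration so it does not depend on the cells above), the converged vector should sum to 1 and satisfy the fixed-point equation $p = dA^{T}p + (1-d)e/n$:

```python
import numpy as np

d = 0.85
A = np.array([[0, 1, 0],
              [1/3, 1/3, 1/3],
              [0, 1, 0]])
n = A.shape[0]
e = np.ones(n)

p = e / n
for _ in range(200):  # fixed iteration count instead of a tolerance test
    p = d * A.T.dot(p) + (1 - d) * e / n

# how far p is from being a fixed point of the update
residual = np.abs(d * A.T.dot(p) + (1 - d) * e / n - p).sum()
```

Because $A$ is row-stochastic, each update preserves the total probability mass, so the sum stays at 1 throughout the iteration.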
2. Drawing graphs and computing PageRank with NetworkX We now show an example of visualizing a graph and computing PageRank values using NetworkX, a graph-processing library for Python. Here we use the example graph from p.12 of the lecture material "Link Analysis (1)". End of explanation """ G = nx.DiGraph() # create a directed graph G.add_nodes_from([1,2,3,4,5,6]) # define the nodes G.add_edges_from([(1,2),(1,3), # define the edges; (1,2) means there is a directed edge from node 1 to node 2 (2,1),(2,3), (3,2), (4,3),(4,5), (6,4),(6,5)]) pos = nx.spring_layout(G) # lay out the graph with a spring model nx.draw(G, pos, with_labels=True, node_size=1000, node_color="w") plt.show() """ Explanation: Now, because node 5 has no outgoing links, we add edges from it to every node and thereby fix up the transition probability matrix. End of explanation """ G.add_edges_from([(5,1),(5,2),(5,3),(5,4),(5,5),(5,6)]) pos = nx.spring_layout(G) nx.draw(G, pos, with_labels=True, node_size=1000, node_color="w") plt.show() """ Explanation: Next, we compute PageRank for this graph using NetworkX's built-in functionality. NetworkX provides a range of graph-analysis algorithms; for example, PageRank and HITS are both implemented. To compute PageRank, we use NetworkX's pagerank function. End of explanation """ p = nx.pagerank(G, alpha=0.85, tol=1e-6) # PageRank computation by the power method; the argument alpha is the damping factor, and tol corresponds to eps p """ Explanation: NetworkX also provides several other variants of the pagerank function. End of explanation """ p = nx.pagerank_numpy(G, alpha=0.85) # PageRank computation based on numpy's eigenvalue routines p p = nx.pagerank_scipy(G, alpha=0.85) # power method using scipy sparse matrices p """ Explanation: End of explanation """
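The HITS algorithm mentioned above can be sketched without NetworkX as well — a rough numpy-only power iteration on a small, hypothetical adjacency matrix (sum-normalized for simplicity, where classic HITS uses the L2 norm):

```python
import numpy as np

# M[i, j] = 1 means a directed edge i -> j (made-up 3-node graph)
M = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

auth = np.ones(3)
hub = np.ones(3)
for _ in range(100):
    auth = M.T.dot(hub)       # good authorities are pointed to by good hubs
    auth /= auth.sum()
    hub = M.dot(auth)         # good hubs point to good authorities
    hub /= hub.sum()
```

Node 2 ends up with the highest authority score here, since both other nodes link to it.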
stephank16/enes_graph_use_case
.ipynb_checkpoints/ENES1-checkpoint.ipynb
gpl-3.0
import ENESNeoTools from py2neo import Graph, Node, Relationship, authenticate authenticate("localhost:7474", ENESNeoTools.user_name, ENESNeoTools.pass_word) # connect to authenticated graph database graph = Graph("http://localhost:7474/db/data/") """ Explanation: ENES use case graph example A graph is generated representing a server infrastructure providing different types of data services hosted by data servers The data is organized in collections Collections can be hosted by multiple servers (replication) Servers provide different average bandwidth to different geographical regions Setup Connection to neo4j instance End of explanation """ from neo4jrestclient.client import GraphDatabase from neo4jrestclient.query import Q gdb = GraphDatabase("http://localhost:7474/db/data/",username="neo4j",password="prolog16") """ Explanation: also rest client possible End of explanation """ # collection organization reflects directory structure: # e.g. cordex/output/EUR-11/MPI-CSC/MPI-M-MPI-ESM-LR/rcp85/r1i1p1/MPI-CSC-REMO2009/v1/day/tas # generic structure: <activity>/<product>/<Domain>/<Institution>/<GCMModelName>/<CMIP5ExperimentName> # /<CMIP5EnsembleMember>/<RCMModelName>/<RCMVersionID>/<Frequency>/<VariableName>. 
# facets describing collection facet_nodes = [] for key, value in ENESNeoTools.facet_list1.iteritems(): facet_node = Node("Collection",name=value[1], level=value[0]) facet_nodes.append(facet_node) facet_chain = [] for i in range(1,len(facet_nodes)): rel = Relationship(facet_nodes[i],"belongs_to",facet_nodes[i-1]) facet_chain.append(rel) for rel in facet_chain: graph.create(rel) cordex_file_set1 = ENESNeoTools.get_files(ENESNeoTools.facet_list1) #cordex_set1 = [] cordex_rel1 = [] for cordexfile in cordex_file_set1: node = Node("File", name=cordexfile, group="file") # cordex_set1.append(node) cordex_rel1.append(Relationship(node,"belongs_to",facet_nodes[0])) for rel in cordex_rel1: graph.create(rel) """ Explanation: Set up a data collection graph data is organized in collections collections are hierarchically organized according to levels (analogous to a file directory) End of explanation """ server_list = ENESNeoTools.get_servers() service_rels = [] server_nodes = [] for (sname, surl) in server_list: new_node = Node('data_server',name=sname, url=surl) server_nodes.append(new_node) data_services = ENESNeoTools.data_service_nodes(sname) for data_service in data_services: service_rels.append(Relationship(data_service,"service",new_node)) for rel in service_rels: graph.create(rel) """ Explanation: Data servers graph setup servers expose three types of data access services (http, globus, opendap) services and servers can be non-operational ("down") End of explanation """ orig1 = Relationship(facet_nodes[1],"served_by",server_nodes[0]) replica1 = Relationship(facet_nodes[1],"served_by",server_nodes[1]) graph.create(orig1) graph.create(replica1) """ Explanation: Combine data set graph with server graph a data collection is "served_by" a data_server End of explanation """ region_germany = Node("country", name="Germany", provider="DFN") region_australia = Node("country", name="Australia", provider="RNet") region_sweden = Node("country", name="Sweden", provider="SweNet") user1 = Node("user",name="Stephan Kindermann")
user2 = Node("user",name="Mr Spock") user3 = Node("user",name="Michael Kolax") home1 = Relationship(user1,"connects_to",region_germany) home2 = Relationship(user2,"connects_to",region_australia) home3 = Relationship(user3,"connects_to",region_sweden) link1 = Relationship(server_nodes[0],"nw_link",region_germany, bandwidth=2000000) link2 = Relationship(server_nodes[0],"nw_link",region_sweden, bandwidth=1000000) link3 = Relationship(server_nodes[0],"nw_link",region_australia,bandwidth=500000) link4 = Relationship(server_nodes[1],"nw_link",region_germany, bandwidth=1500000) link5 = Relationship(server_nodes[1],"nw_link",region_sweden, bandwidth=3000000) link6 = Relationship(server_nodes[1],"nw_link",region_australia, bandwidth=400000) graph.create(link1,link2,link3,link4,link5,link6) """ Explanation: Data servers provide different bandwidth to different regions / countries and end users belong to different regions (temporarily) End of explanation """ server_nodes[0].properties["status"] = "UP" server_nodes[1].properties["status"] = "UP" server_nodes[0].push() server_nodes[1].push() server_nodes[0].properties """ Explanation: Data servers are sometimes down (not operational and thus do not serve data to users) End of explanation """ from IPython.display import HTML HTML('<iframe src=http://localhost:7474/browser/ width=1000 height=800> </iframe>') %load_ext cypher statement = """MATCH (myfile:File {name:"tas_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r1i1p1_MPI-CSC-REMO2009_v1_day_20660101-20701231.nc"}) RETURN myfile""" results = graph.cypher.execute(statement) results results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (myfile:File {name:"tas_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r1i1p1_MPI-CSC-REMO2009_v1_day_20660101-20701231.nc"}) RETURN myfile results.get_dataframe() graph.open_browser() """ Explanation: Interactive cells to play with graph End of explanation """
(a:File)-[:belongs_to*]-(b:Collection) -[:served_by]- (c:data_server) WHERE c.status = 'UP' AND a.name = 'tas_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r1i1p1_MPI-CSC-REMO2009_v1_day_20760101-20801231.nc' RETURN c """ Explanation: return operational servers for a specific file End of explanation """ server_nodes[1].properties["status"] = "DOWN" server_nodes[1].push() %%cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a:File)-[:belongs_to*]-(b:Collection) -[:served_by]- (c:data_server) WHERE c.status = 'UP' AND a.name = 'tas_EUR-11_MPI-M-MPI-ESM-LR_rcp85_r1i1p1_MPI-CSC-REMO2009_v1_day_20760101-20801231.nc' RETURN c results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]-(b) RETURN a,r, b %%bash ls """ Explanation: switch off a server and rerun query End of explanation """ %%cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r graph.delete_all() """ Explanation: Simple cells to clean graphdb End of explanation """ %matplotlib inline results.get_graph() results.draw() """ Explanation: simple graph visualizations End of explanation """
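The selection logic that the graph above encodes — among operational servers, prefer the one with the highest-bandwidth link to the user's region — can be sketched in plain Python (illustrative numbers mirroring the nodes created earlier):

```python
# status and per-region bandwidth for two hypothetical data servers
servers = {
    'server_a': ('UP',   {'Germany': 2000000, 'Sweden': 1000000, 'Australia': 500000}),
    'server_b': ('DOWN', {'Germany': 1500000, 'Sweden': 3000000, 'Australia': 400000}),
}

def best_server(region):
    # keep only operational servers, then rank by bandwidth to the region
    up = {name: links[region] for name, (status, links) in servers.items() if status == 'UP'}
    return max(up, key=up.get) if up else None

choice = best_server('Sweden')  # server_b is DOWN, so server_a wins despite its slower link
```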
GoogleCloudPlatform/mlops-on-gcp
immersion/kubeflow_pipelines/walkthrough/labs/lab-01.ipynb
apache-2.0
import json import os import numpy as np import pandas as pd import pickle import uuid import time import tempfile from googleapiclient import discovery from googleapiclient import errors from google.cloud import bigquery from jinja2 import Template from kfp.components import func_to_container_op from typing import NamedTuple from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.linear_model import SGDClassifier from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer """ Explanation: Using custom containers with AI Platform Training Learning Objectives: 1. Learn how to create a train and a validation split with Big Query 1. Learn how to wrap a machine learning model into a Docker container and train in on CAIP 1. Learn how to use the hyperparameter tunning engine on GCP to find the best hyperparameters 1. Learn how to deploy a trained machine learning model GCP as a rest API and query it. In this lab, you develop, package as a docker image, and run on AI Platform Training a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository. The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with AI Platform hyperparameter tuning. 
End of explanation """ !gsutil ls REGION = 'us-central1' ARTIFACT_STORE = 'gs://hostedkfp-default-l2iv13wnek' PROJECT_ID = !(gcloud config get-value core/project) PROJECT_ID = PROJECT_ID[0] os.environ['PROJECT_ID'] = PROJECT_ID DATA_ROOT='{}/data'.format(ARTIFACT_STORE) JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE) TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv') VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv') """ Explanation: Configure environment settings Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment. REGION - the compute region for AI Platform Training and Prediction ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the hostedkfp-default- prefix. End of explanation """ %%bash DATASET_LOCATION=US DATASET_ID=covertype_dataset TABLE_ID=covertype DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv SCHEMA=Elevation:INTEGER,\ Aspect:INTEGER,\ Slope:INTEGER,\ Horizontal_Distance_To_Hydrology:INTEGER,\ Vertical_Distance_To_Hydrology:INTEGER,\ Horizontal_Distance_To_Roadways:INTEGER,\ Hillshade_9am:INTEGER,\ Hillshade_Noon:INTEGER,\ Hillshade_3pm:INTEGER,\ Horizontal_Distance_To_Fire_Points:INTEGER,\ Wilderness_Area:STRING,\ Soil_Type:STRING,\ Cover_Type:INTEGER bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \ --source_format=CSV \ --skip_leading_rows=1 \ --replace \ $TABLE_ID \ $DATA_SOURCE \ $SCHEMA """ Explanation: Importing the dataset into BigQuery End of explanation """ %%bigquery SELECT * FROM `covertype_dataset.covertype` """ Explanation: Explore the Covertype dataset End of explanation """ !bq query \ -n 0 \ --destination_table covertype_dataset.training \ --replace \ --use_legacy_sql=false \ 'SELECT * \ FROM 
`covertype_dataset.covertype` AS cover \ WHERE \ MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)' !bq extract \ --destination_format CSV \ covertype_dataset.training \ $TRAINING_FILE_PATH """ Explanation: Create training and validation splits Use BigQuery to sample training and validation splits and save them to GCS storage Create a training split End of explanation """ # TODO: Your code to create the BQ table validation split # TODO: Your code to export the validation table to GCS df_train = pd.read_csv(TRAINING_FILE_PATH) df_validation = pd.read_csv(VALIDATION_FILE_PATH) print(df_train.shape) print(df_validation.shape) """ Explanation: Create a validation split Exercise In the first cell below, create a validation split that takes 10% of the data using the bq command and export this split into the BigQuery table covertype_dataset.validation. In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH. End of explanation """ numeric_feature_indexes = slice(0, 10) categorical_feature_indexes = slice(10, 12) preprocessor = ColumnTransformer( transformers=[ ('num', StandardScaler(), numeric_feature_indexes), ('cat', OneHotEncoder(), categorical_feature_indexes) ]) pipeline = Pipeline([ ('preprocessor', preprocessor), ('classifier', SGDClassifier(loss='log', tol=1e-3)) ]) """ Explanation: Develop a training application Configure the sklearn training pipeline. The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses a stochastic gradient descent linear classifier (SGDClassifier) for modeling. 
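The deterministic sampling idea behind the FARM_FINGERPRINT-modulo splits above can be mimicked locally with a stdlib hash — a rough sketch showing why such splits are reproducible, roughly proportional, and non-overlapping:

```python
import hashlib

def bucket(row_repr, n_buckets=10):
    # stand-in for BigQuery's FARM_FINGERPRINT: a stable hash of the serialized row
    digest = hashlib.md5(row_repr.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_buckets

rows = ['row-{}'.format(i) for i in range(1000)]
train = [r for r in rows if bucket(r) in (1, 2, 3, 4)]   # roughly 40% of rows
valid = [r for r in rows if bucket(r) == 8]              # roughly 10% of rows
```

Because the bucket depends only on the row's content, rerunning the split yields exactly the same partition, and a row can never land in both splits.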
End of explanation """ num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]} df_train = df_train.astype(num_features_type_map) df_validation = df_validation.astype(num_features_type_map) """ Explanation: Convert all numeric features to float64 To avoid warning messages from StandardScaler all numeric features are converted to float64. End of explanation """ X_train = df_train.drop('Cover_Type', axis=1) y_train = df_train['Cover_Type'] X_validation = df_validation.drop('Cover_Type', axis=1) y_validation = df_validation['Cover_Type'] pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200) pipeline.fit(X_train, y_train) """ Explanation: Run the pipeline locally. End of explanation """ accuracy = pipeline.score(X_validation, y_validation) print(accuracy) """ Explanation: Calculate the trained model's accuracy. End of explanation """ TRAINING_APP_FOLDER = 'training_app' os.makedirs(TRAINING_APP_FOLDER, exist_ok=True) """ Explanation: Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training. End of explanation """ %%writefile {TRAINING_APP_FOLDER}/train.py # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import os import subprocess import sys import fire import pickle import numpy as np import pandas as pd import hypertune from sklearn.compose import ColumnTransformer from sklearn.linear_model import SGDClassifier from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler, OneHotEncoder def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune): df_train = pd.read_csv(training_dataset_path) df_validation = pd.read_csv(validation_dataset_path) if not hptune: df_train = pd.concat([df_train, df_validation]) numeric_feature_indexes = slice(0, 10) categorical_feature_indexes = slice(10, 12) preprocessor = ColumnTransformer( transformers=[ ('num', StandardScaler(), numeric_feature_indexes), ('cat', OneHotEncoder(), categorical_feature_indexes) ]) pipeline = Pipeline([ ('preprocessor', preprocessor), ('classifier', SGDClassifier(loss='log',tol=1e-3)) ]) num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]} df_train = df_train.astype(num_features_type_map) df_validation = df_validation.astype(num_features_type_map) print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter)) X_train = df_train.drop('Cover_Type', axis=1) y_train = df_train['Cover_Type'] pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter) pipeline.fit(X_train, y_train) if hptune: # TODO: Score the model with the validation data and capture the result # with the hypertune library # Save the model if not hptune: model_filename = 'model.pkl' with open(model_filename, 'wb') as model_file: pickle.dump(pipeline, model_file) gcs_model_path = "{}/{}".format(job_dir, model_filename) subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout) print("Saved model in: {}".format(gcs_model_path)) if __name__ == "__main__": fire.Fire(train_evaluate) """ Explanation: Write the tuning script. 
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service. Exercise Complete the code below to capture the metric that the hyper parameter tunning engine will use to optimize the hyper parameter. End of explanation """ %%writefile {TRAINING_APP_FOLDER}/Dockerfile FROM gcr.io/deeplearning-platform-release/base-cpu RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2 # TODO """ Explanation: Package the script into a docker image. Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's Container Registry. Exercise Complete the Dockerfile below so that it copies the 'train.py' file into the container at /app and runs it when the container is started. End of explanation """ IMAGE_NAME='trainer_image' IMAGE_TAG='latest' IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG) !gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER """ Explanation: Build the docker image. You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker. End of explanation """ %%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. trainingInput: hyperparameters: goal: MAXIMIZE maxTrials: 4 maxParallelTrials: 4 hyperparameterMetricTag: accuracy enableTrialEarlyStopping: TRUE params: # TODO: Your code goes here """ Explanation: Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier: - Max iterations - Alpha The file below configures AI Platform hypertuning to run up to 4 trials, up to 4 of them in parallel, and to choose from two discrete values of max_iter and the linear range between 0.00001 and 0.001 for alpha. Exercise Complete the hptuning_config.yaml file below so that the hyperparameter tuning engine tries the following parameter values: * max_iter the two values 200 and 300 * alpha a linear range of values between 0.00001 and 0.001 End of explanation """ JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S")) JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME) SCALE_TIER = "BASIC" !gcloud ai-platform jobs submit training $JOB_NAME \ --region=# TODO\ --job-dir=# TODO \ --master-image-uri=# TODO \ --scale-tier=# TODO \ --config # TODO \ -- \ # TODO """ Explanation: Start the hyperparameter tuning job. Exercise Use the gcloud command to start the hyperparameter tuning job. End of explanation """ !gcloud ai-platform jobs describe $JOB_NAME !gcloud ai-platform jobs stream-logs $JOB_NAME """ Explanation: Monitor the job. You can monitor the job using GCP console or from within the notebook using gcloud commands. 
End of explanation """ ml = discovery.build('ml', 'v1') job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME) request = ml.projects().jobs().get(name=job_id) try: response = request.execute() except errors.HttpError as err: print(err) except: print("Unexpected error") response """ Explanation: Retrieve HP-tuning results. After the job completes you can review the results using GCP Console or programmatically by calling the AI Platform Training REST end-point. End of explanation """ response['trainingOutput']['trials'][0] """ Explanation: The returned run results are sorted by the value of the optimization metric. The best run is the first item on the returned list. End of explanation """ alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha'] max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter'] JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S")) JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME) SCALE_TIER = "BASIC" !gcloud ai-platform jobs submit training $JOB_NAME \ --region=$REGION \ --job-dir=$JOB_DIR \ --master-image-uri=$IMAGE_URI \ --scale-tier=$SCALE_TIER \ -- \ --training_dataset_path=$TRAINING_FILE_PATH \ --validation_dataset_path=$VALIDATION_FILE_PATH \ --alpha=$alpha \ --max_iter=$max_iter \ --nohptune !gcloud ai-platform jobs stream-logs $JOB_NAME """ Explanation: Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job End of explanation """ !gsutil ls $JOB_DIR """ Explanation: Examine the training output The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS. 
End of explanation """ model_name = 'forest_cover_classifier' labels = "task=classifier,domain=forestry" !gcloud # TODO: Your code goes here """ Explanation: Deploy the model to AI Platform Prediction Create a model resource Exercise Complete the gcloud command below to create a model with model_name in $REGION tagged with labels: End of explanation """ model_version = 'v01' !gcloud # TODO \ --model=# TODO \ --origin=# TODO \ --runtime-version=# TODO \ --framework=# TODO \ --python-version=# TODO \ --region=global """ Explanation: Create a model version Exercise Complete the gcloud command below to create a version of the model: End of explanation """ input_file = 'serving_instances.json' with open(input_file, 'w') as f: for index, row in X_validation.head().iterrows(): f.write(json.dumps(list(row.values))) f.write('\n') !cat $input_file """ Explanation: Serve predictions Prepare the input file with JSON formatted instances. End of explanation """ !gcloud # TODO: Complete the command """ Explanation: Invoke the model Exercise Using the gcloud command send the data in $input_file to your model deployed as a REST API: End of explanation """
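The instance file written above is newline-delimited JSON, one list of feature values per line; a stdlib-only sketch with made-up rows shows the shape the prediction service expects:

```python
import json

# hypothetical feature rows standing in for X_validation.head()
instances = [
    [2596, 51, 3, 258, 0, 510, 221, 232, 148, 6279, 'Rawah', 'C7745'],
    [2590, 56, 2, 212, -6, 390, 220, 235, 151, 6225, 'Rawah', 'C7745'],
]

payload = '\n'.join(json.dumps(row) for row in instances)
```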
shareactorIO/pipeline
source.ml/jupyterhub.ml/notebooks/spark/Deploy_SparkML_Census_DecisionTree.ipynb
apache-2.0
# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these settings import os master = '--master spark://127.0.0.1:47077' conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m' packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1' jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar' py_files = '--py-files /root/lib/jpmml.py' os.environ['PYSPARK_SUBMIT_ARGS'] = master \ + ' ' + conf \ + ' ' + packages \ + ' ' + jars \ + ' ' + py_files \ + ' ' + 'pyspark-shell' print(os.environ['PYSPARK_SUBMIT_ARGS']) from pyspark.ml import Pipeline from pyspark.ml.feature import RFormula from pyspark.ml.classification import DecisionTreeClassifier from pyspark.sql import SparkSession sparkSession = SparkSession.builder.getOrCreate() data = sparkSession.read.format("csv") \ .option("inferSchema", "true").option("header", "true") \ .load("hdfs://127.0.0.1:39000/datasets/census/census.csv") data.head() """ Explanation: Generate Spark ML Decision Tree Step 0: Load Libraries, Data, and SparkSession End of explanation """ formula = RFormula(formula = "income ~ .") classifier = DecisionTreeClassifier() pipeline = Pipeline(stages = [formula, classifier]) pipelineModel = pipeline.fit(data) print(pipelineModel) print(pipelineModel.stages[1].toDebugString) """ Explanation: Step 2: Build Spark ML Pipeline with Decision Tree Classifier End of explanation """ from jpmml import toPMMLBytes pmmlBytes = toPMMLBytes(sparkSession, data, pipelineModel) print(pmmlBytes.decode("utf-8")) """ Explanation: Step 3: Convert Spark ML Pipeline to PMML End of explanation """ from urllib import request update_url = 'http://<your-ip>:39040/update-pmml/pmml_census' update_headers = {} update_headers['Content-type'] = 'application/xml' req = request.Request(update_url, headers=update_headers, data=pmmlBytes) resp = request.urlopen(req) print(resp.status) # Should return Http Status 200 from urllib import request evaluate_url = 'http://<your-ip>:39040/evaluate-pmml/pmml_census'
evaluate_headers = {} evaluate_headers['Content-type'] = 'application/json' input_params = '{"age":39,"workclass":"State-gov","education":"Bachelors","education_num":13,"marital_status":"Never-married","occupation":"Adm-clerical","relationship":"Not-in-family","race":"White","sex":"Male","capital_gain":2174,"capital_loss":0,"hours_per_week":40,"native_country":"United-States"}' encoded_input_params = input_params.encode('utf-8') req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params) resp = request.urlopen(req) print(resp.read()) # Should return valid classification with probabilities """ Explanation: Deployment Option 1: Mutable Model Deployment End of explanation """ !mkdir -p /root/src/pmml/census/ with open('/root/src/pmml/census/pmml_census.pmml', 'wb') as f: f.write(pmmlBytes) !ls /root/src/pmml/census/pmml_census.pmml """ Explanation: Model Server Dashboard Fill in <your-ip> below, then copy/paste to your browser http://&lt;your-ip&gt;:47979/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F&lt;your-ip&gt;%3A39043%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F&lt;your-ip&gt;%3A39042%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F&lt;your-ip&gt;%3A39041%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2F&lt;your-ip&gt;%3A39040%2Fhystrix.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D Deployment Option 2: Immutable Model Deployment Save Model to Disk End of explanation """ !start-loadtest.sh $SOURCE_HOME/loadtest/RecommendationServiceStressTest-local-census.jmx """ Explanation: TODO: Trigger Airflow to Build New Docker Image (i.e. via GitHub commit) Load Test End of explanation """
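Rather than hand-writing the JSON string as above, the evaluation payload can be built from a dict — a sketch using only the standard library (the localhost URL is a placeholder standing in for &lt;your-ip&gt;):

```python
import json
from urllib import request

features = {"age": 39, "workclass": "State-gov", "education": "Bachelors",
            "education_num": 13, "marital_status": "Never-married",
            "occupation": "Adm-clerical", "relationship": "Not-in-family",
            "race": "White", "sex": "Male", "capital_gain": 2174,
            "capital_loss": 0, "hours_per_week": 40,
            "native_country": "United-States"}

body = json.dumps(features).encode('utf-8')
req = request.Request('http://localhost:39040/evaluate-pmml/pmml_census',
                      headers={'Content-type': 'application/json'}, data=body)
# request.urlopen(req) would POST it once the model server is reachable
```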
chris1610/pbpython
notebooks/Combining-Multiple-Excel-File-with-Pandas.ipynb
bsd-3-clause
import pandas as pd
import numpy as np
"""
Explanation: Introduction
One of the most common tasks for pandas and python is to automate the process to aggregate data from multiple spreadsheets and files. This article will walk through the basic flow required to parse multiple excel files, combine some data, clean it up and analyze it. Please refer to this post for the full article.
Collecting the Data
Import pandas and numpy
End of explanation
"""
!ls ../data
"""
Explanation: Let's take a look at the files in our input directory, using the convenient shell commands in ipython.
End of explanation
"""
!ls ../data/sales-*-2014.xlsx
"""
Explanation: There are a lot of files, but we only want to look at the sales .xlsx files.
End of explanation
"""
import glob
glob.glob("../data/sales-*-2014.xlsx")
"""
Explanation: Use the python glob module to easily list out the files we need.
End of explanation
"""
all_data = pd.DataFrame()
for f in glob.glob("../data/sales-*-2014.xlsx"):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)
"""
Explanation: This gives us what we need; let's import each of our files and combine them into one file. Pandas' concat and append can do this for us. I'm going to use append in this example. The code snippet below will initialize a blank DataFrame then append all of the individual files into the all_data DataFrame.
End of explanation
"""
all_data.describe()
"""
Explanation: Now we have all the data in our all_data DataFrame. You can use describe to look at it and make sure your data looks good.
End of explanation
"""
all_data.head()
"""
Explanation: A lot of this data may not make much sense for this data set, but I'm most interested in the count row to make sure the number of data elements makes sense.
End of explanation
"""
all_data['date'] = pd.to_datetime(all_data['date'])
"""
Explanation: It is not critical in this example but the best practice is to convert the date column to a date time object.
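As a quick illustration of that conversion on toy data (the column values below are invented for this sketch):

```python
import pandas as pd

# Toy frame standing in for the combined sales data -- values invented for this sketch
df = pd.DataFrame({'date': ['2014-01-01', '2014-02-15', '2014-03-30']})
print(df['date'].dtype)              # object -- the dates are still plain strings

df['date'] = pd.to_datetime(df['date'])
print(df['date'].dtype)              # datetime64[ns]

# A real datetime column unlocks the .dt accessor for date arithmetic
print(df['date'].dt.month.tolist())  # [1, 2, 3]
```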
End of explanation
"""
status = pd.read_excel("../data/customer-status.xlsx")
status
"""
Explanation: Combining Data
Now that we have all of the data into one DataFrame, we can do any manipulations the DataFrame supports. In this case, the next thing we want to do is read in another file that contains the customer status by account. You can think of this as a company's customer segmentation strategy or some other mechanism for identifying their customers.
First, we read in the data.
End of explanation
"""
all_data_st = pd.merge(all_data, status, how='left')
all_data_st.head()
"""
Explanation: We want to merge this data with our concatenated data set of sales. We use pandas' merge function and tell it to do a left join, which is similar to Excel's vlookup function.
End of explanation
"""
all_data_st[all_data_st["account number"]==737550].head()
"""
Explanation: This looks pretty good but let's look at a specific account.
End of explanation
"""
all_data_st['status'].fillna('bronze',inplace=True)
all_data_st.head()
"""
Explanation: This account number was not in our status file, so we have a bunch of NaN's. We can decide how we want to handle this situation. For this specific case, let's label all missing accounts as bronze. Use the fillna function to easily accomplish this on the status column.
End of explanation
"""
all_data_st[all_data_st["account number"]==737550].head()
"""
Explanation: Check the data just to make sure we're all good.
Now we have all of the data along with the status column filled in. We can do our normal data manipulations using the full suite of pandas capability.
Using Categories
One of the relatively new functions in pandas is support for categorical data.
From the pandas documentation - "Categoricals are a pandas data type, which correspond to categorical variables in statistics: a variable, which can take on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood types, country affiliations, observation time or ratings via Likert scales."
For our purposes, the status field is a good candidate for a category type. You must make sure you have a recent version of pandas installed for this example to work.
End of explanation
"""
pd.__version__
"""
Explanation: First, we typecast it to a category using astype.
End of explanation
"""
all_data_st["status"] = all_data_st["status"].astype("category")
"""
Explanation: This doesn't immediately appear to change anything yet.
End of explanation
"""
all_data_st.head()
"""
Explanation: But you can see that it is a new data type.
End of explanation
"""
all_data_st.dtypes
"""
Explanation: Categories get more interesting when you assign order to the categories. Right now, if we call sort on the column, it will sort alphabetically.
End of explanation
"""
all_data_st.sort_values(by=["status"]).head()
"""
Explanation: We use set_categories to tell it the order we want to use for this category object. In this case, we use the Olympic medal ordering.
End of explanation
"""
all_data_st["status"].cat.set_categories([ "gold","silver","bronze"],inplace=True)
"""
Explanation: Now, we can sort it so that gold shows on top.
End of explanation
"""
all_data_st.sort_values(by=["status"]).head()
all_data_st["status"].describe()
"""
Explanation: For instance, if you want to take a quick look at how your top tier customers are performing compared to the bottom, use groupby to give us the average of the values.
End of explanation
"""
all_data_st.groupby(["status"])["quantity","unit price","ext price"].mean()
"""
Explanation: Of course, you can run multiple aggregation functions on the data to get really useful information.
End of explanation
"""
all_data_st.groupby(["status"])["quantity","unit price","ext price"].agg([np.sum, np.mean, np.std])
"""
Explanation: So, what does this tell you? Well, the data is completely random, but my first observation is that we sell more units to our bronze customers than gold. Even when you look at the total dollar value associated with bronze vs. gold, it looks backwards. Maybe we should look at how many bronze customers we have and see what is going on.
What I plan to do is filter out the unique accounts and see how many gold, silver and bronze customers there are. I'm purposely stringing a lot of commands together, which is not necessarily best practice but does show how powerful pandas can be. Feel free to review my previous articles and play with this command yourself to understand what all these commands mean.
End of explanation
"""
all_data_st.drop_duplicates(subset=["account number","name"]).iloc[:,[0,1,7]].groupby(["status"])["name"].count()
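To round out the walkthrough, the same multi-function aggregation can be reproduced on a tiny, self-contained frame; the column names and values below are invented for illustration, and an ordered categorical keeps the gold/silver/bronze ranking in the output:

```python
import pandas as pd

# Tiny invented stand-in for all_data_st -- just enough columns to aggregate
sales = pd.DataFrame({
    'status': ['gold', 'bronze', 'gold', 'silver', 'bronze'],
    'ext price': [100.0, 20.0, 250.0, 75.0, 30.0],
})

# An ordered categorical keeps the gold > silver > bronze ranking in the result
sales['status'] = pd.Categorical(sales['status'],
                                 categories=['gold', 'silver', 'bronze'],
                                 ordered=True)

summary = sales.groupby('status', observed=False)['ext price'].agg(['sum', 'mean', 'count'])
print(summary)
# Rows come out in category order: gold, silver, bronze
```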
JannesKlaas/MLiFC
Week 4/Ch. 17 - NLP and Word Embeddings.ipynb
mit
import os

imdb_dir = './aclImdb' # Data directory
train_dir = os.path.join(imdb_dir, 'train') # Get the path of the train set

# Setup empty lists to fill
labels = []
texts = []

# First go through the negatives, then through the positives
for label_type in ['neg', 'pos']:
    # Get the sub path
    dir_name = os.path.join(train_dir, label_type)
    # Loop over all files in path
    for fname in os.listdir(dir_name):
        # Only consider text files
        if fname[-4:] == '.txt':
            # Read the text file and put it in the list
            f = open(os.path.join(dir_name, fname))
            texts.append(f.read())
            f.close()
            # Attach the corresponding label
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)
"""
Explanation: Ch. 17 - NLP and Word Embeddings
Welcome to week 4! This week, we will take a look at natural language processing. From Wikipedia:
Natural language processing (NLP) is a field of computer science, artificial intelligence concerned with the interactions between computers and human (natural) languages, and, in particular, concerned with programming computers to fruitfully process large natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
While last week was about making computers able to see, this week is about making them able to read. This is useful in the financial industry, where large amounts of information are usually presented in the form of text. Starting from ticker headlines, to news reports, to analyst reports, all the way to off-the-record chit chat by industry figures on social media, text is in many ways at the very center of what the financial industry does. In this week, we will take a look at text classification problems and sentiment analysis.
Sentiment analysis with the IMDB dataset
Sentiment analysis is about judging how positive or negative the tone in a document is.
The output of a sentiment analysis is a score between zero and one, where one means the tone is very positive and zero means it is very negative. Sentiment analysis is used for trading quite frequently. For example, the sentiment of quarterly reports issued by firms is automatically analyzed to see how the firm judges its own position. Sentiment analysis is also applied to the tweets of traders to estimate an overall market mood. Today, there are many data providers that offer sentiment analysis as a service.
In principle, training a sentiment analysis model works just like training a binary text classifier. The text gets classified into positive (1) or not positive (0). This works exactly like other binary classification, only that we need some new tools to handle text.
A common dataset for sentiment analysis is the corpus of Internet Movie Database (IMDB) movie reviews. Since each review comes with a text and a numerical rating, the number of stars, it is easy to label the training data. In the IMDB dataset, movie reviews that gave fewer than five stars were labeled negative while movies that gave more than seven stars were labeled positive (IMDB works with a ten star scale). Let's give the data a look:
End of explanation
"""
len(labels), len(texts)
"""
Explanation: We should have 25,000 texts and labels.
End of explanation
"""
import numpy as np
np.mean(labels)
"""
Explanation: Half of the reviews are positive.
End of explanation
"""
print('Label',labels[24002])
print(texts[24002])
"""
Explanation: Let's look at a positive review:
End of explanation
"""
print('Label',labels[1])
print(texts[1])
"""
Explanation: And a negative review:
End of explanation
"""
from keras.preprocessing.text import Tokenizer
import numpy as np

max_words = 10000 # We will only consider the 10K most used words in this dataset

tokenizer = Tokenizer(num_words=max_words) # Setup
tokenizer.fit_on_texts(texts) # Generate tokens by counting frequency
sequences = tokenizer.texts_to_sequences(texts) # Turn text into sequence of numbers
"""
Explanation: Tokenizing text
Computers cannot work with words directly. To them, a word is just a meaningless row of characters. To work with words, we need to turn words into so-called 'tokens'. A token is a number that represents that word. Each word gets assigned a token. Tokens are usually assigned by word frequency. The most frequent words like 'a' or 'the' get tokens like 1 or 2, while less often used words like 'profusely' get assigned very high numbers.
We can tokenize text directly with Keras. When we tokenize text, we usually choose a maximum number of words we want to consider, our vocabulary so to speak. This prevents us from assigning tokens to words that are hardly ever used, mostly because of typos, or because they are not actual words, or because they are just very uncommon. This prevents us from overfitting to texts that contain strange words or weird spelling errors. Words that are beyond that cutoff point get assigned the token 0, unknown.
End of explanation
"""
word_index = tokenizer.word_index
print('Token for "the"',word_index['the'])
print('Token for "Movie"',word_index['movie'])
print('Token for "generator"',word_index['generator'])
"""
Explanation: The tokenizer's word index is a dictionary that maps each word to a number.
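That frequency-ranked mapping can be sketched without Keras; this toy index builder mimics the behavior (simplified -- the real Tokenizer also filters punctuation and applies other preprocessing):

```python
from collections import Counter

def build_word_index(texts):
    # Count word frequencies across all texts
    counts = Counter(word for text in texts
                     for word in text.lower().split())
    # The most frequent word gets token 1, the next gets 2, and so on
    return {word: i + 1
            for i, (word, _) in enumerate(counts.most_common())}

docs = ['the movie was great', 'the movie was bad', 'the plot was thin']
word_index = build_word_index(docs)
print(word_index['the'])   # 1 -- most frequent word, lowest token
print(word_index['plot'])  # appears once, so it gets a higher token number
```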
You can see that words that are frequently used in discussions about movies have a lower token number.
End of explanation
"""
sequences[24002]
"""
Explanation: Our positive review from earlier has now been converted into a sequence of numbers.
End of explanation
"""
from keras.preprocessing.sequence import pad_sequences

maxlen = 100 # Make all sequences 100 words long
data = pad_sequences(sequences, maxlen=maxlen)
print(data.shape) # We have 25K, 100 word sequences now
"""
Explanation: To proceed, we now have to make sure that all text sequences we feed into the model have the same length. We can do this with Keras' pad_sequences tool. It cuts off sequences that are too long and adds zeros to sequences that are too short.
End of explanation
"""
labels = np.asarray(labels)

# Shuffle data
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]

training_samples = 20000  # We will be training on 20K samples
validation_samples = 5000  # We will be validating on 5,000 samples

# Split data
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
"""
Explanation: Now we can turn all data into proper training and validation data.
End of explanation
"""
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

embedding_dim = 50

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
"""
Explanation: Embeddings
As the attuned reader might have already guessed, words and word tokens are categorical features. As such, we cannot directly feed them into the neural net. Just because a word has a larger token value, it does not express a higher value in any way. It is just a different category.
Previously, we have dealt with categorical data by turning it into one-hot encoded vectors. But for words, this is impractical. Since our vocabulary is 10,000 words, each vector would contain 10,000 numbers which are all zeros except for one. This is highly inefficient. Instead we will use an embedding.
Embeddings also turn categorical data into vectors. But instead of creating a one-hot vector, we create a vector in which all elements are numbers. In practice, embeddings work like a look-up table. For each token, they store a vector. When the token is given to the embedding layer, it returns the vector for that token and passes it through the neural network. As the network trains, the embeddings get optimized as well.
Remember that neural networks work by calculating the derivative of the loss function with respect to the parameters (weights) of the model. Through backpropagation we can also calculate the derivative of the loss function with respect to the input of the model. Thus we can optimize the embeddings to deliver ideal inputs that help our model.
In practice it looks like this: We have to specify how large we want the word vectors to be. A 50-dimensional vector is able to capture good embeddings even for quite large vocabularies. We also have to specify for how many words we want embeddings and how long our sequences are.
End of explanation
"""
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])

history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_val, y_val))
"""
Explanation: You can see that the embedding layer has 500,000 trainable parameters, that is 50 parameters for each of the 10K words.
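For intuition, the look-up-table behaviour of an embedding layer reduces to row indexing on a matrix; here is a NumPy sketch of that idea (an illustration, not Keras internals):

```python
import numpy as np

vocab_size, embedding_dim = 10, 4
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embedding_dim))

tokens = np.array([3, 1, 7])        # one tokenized "sentence"
vectors = embedding_matrix[tokens]  # row lookup: one vector per token
print(vectors.shape)                # (3, 4)

# Training an embedding just means updating the selected rows via gradient descent
assert np.array_equal(vectors[0], embedding_matrix[3])
```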
End of explanation
"""
glove_dir = './glove.6B' # This is the folder with the dataset

embeddings_index = {} # We create a dictionary of word -> embedding

f = open(os.path.join(glove_dir, 'glove.6B.100d.txt')) # Open file

# In the dataset, each line represents a new word embedding
# The line starts with the word and the embedding values follow
for line in f:
    values = line.split()
    word = values[0] # The first value is the word, the rest are the values of the embedding
    embedding = np.asarray(values[1:], dtype='float32') # Load embedding
    embeddings_index[word] = embedding # Add embedding to our embedding dictionary
f.close()

print('Found %s word vectors.' % len(embeddings_index))
"""
Explanation: Note that training your own embeddings is prone to overfitting. As you can see, our model achieves 100% accuracy on the training set but only 83% accuracy on the validation set -- a clear sign of overfitting. In practice it is therefore quite rare to train new embeddings unless you have a massive dataset. Much more commonly, pre-trained embeddings are used. A common pre-trained embedding is GloVe, Global Vectors for Word Representation. It has been trained on billions of words from Wikipedia and the Gigaword 5 dataset, more than we could ever hope to train from our movie reviews. After downloading the GloVe embeddings from the GloVe website we can load them into our model.
Not all words that are in our IMDB vocabulary might be in the GloVe embeddings though.
For missing words it is wise to use random embeddings with the same mean and standard deviation as the GloVe embeddings.
End of explanation
"""
# Create a matrix of all embeddings
all_embs = np.stack(embeddings_index.values())
emb_mean = all_embs.mean() # Calculate mean
emb_std = all_embs.std() # Calculate standard deviation
emb_mean,emb_std
"""
Explanation: We can now create an embedding matrix holding all word vectors.
End of explanation
"""
embedding_dim = 100 # We now use larger embeddings

word_index = tokenizer.word_index
nb_words = min(max_words, len(word_index)) # How many words are there actually

# Create a random matrix with the same mean and std as the embeddings
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embedding_dim))

# The vectors need to be in the same position as their index.
# Meaning a word with token 1 needs to be in the second row (rows start with zero) and so on

# Loop over all words in the word index
for word, i in word_index.items():
    # If we are above the amount of words we want to use we do nothing
    if i >= max_words:
        continue
    # Get the embedding vector for the word
    embedding_vector = embeddings_index.get(word)
    # If there is an embedding vector, put it in the embedding matrix
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
"""
Explanation: This embedding matrix can be used as weights for the embedding layer. This way, the embedding layer uses the pre-trained GloVe weights instead of random ones. We can also set the embedding layer to not trainable. This means Keras won't change the weights of the embeddings while training, which makes sense since our embeddings are already trained.
End of explanation
"""
model = Sequential()
model.add(Embedding(max_words, embedding_dim,
                    input_length=maxlen,
                    weights = [embedding_matrix],
                    trainable = False))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
"""
Explanation: Notice that we now have far fewer trainable parameters.
End of explanation
"""
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])

history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_val, y_val))
"""
Explanation: Now our model overfits less but also does worse on the validation set.
Using our model
To determine the sentiment of a text, we can now use our trained model.
End of explanation
"""
# Demo on a positive text
my_text = 'I love dogs. Dogs are the best. They are lovely, cuddly animals that only want the best for humans.'

seq = tokenizer.texts_to_sequences([my_text])
print('raw seq:',seq)
seq = pad_sequences(seq, maxlen=maxlen)
print('padded seq:',seq)
prediction = model.predict(seq)
print('positivity:',prediction)

# Demo on a negative text
my_text = 'The bleak economic outlook will force many small businesses into bankruptcy.'

seq = tokenizer.texts_to_sequences([my_text])
print('raw seq:',seq)
seq = pad_sequences(seq, maxlen=maxlen)
print('padded seq:',seq)
prediction = model.predict(seq)
print('positivity:',prediction)
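As a closing aside, once an embedding matrix is available (trained or loaded from GloVe), the semantic closeness of two words is commonly measured with cosine similarity. The vectors below are invented stand-ins, not real GloVe rows:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1 = same direction, -1 = opposite
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny made-up 3-dimensional vectors standing in for real embedding rows
embeddings = {
    'good':  np.array([0.9, 0.1, 0.3]),
    'great': np.array([0.8, 0.2, 0.4]),
    'bleak': np.array([-0.7, 0.6, -0.2]),
}

print(cosine_similarity(embeddings['good'], embeddings['great']))  # close to 1
print(cosine_similarity(embeddings['good'], embeddings['bleak']))  # much lower
```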
ozorich/phys202-2015-work
assignments/assignment07/AlgorithmsEx02.ipynb
mit
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
a = [1,2,3,4,5,3]
for x in a:
    print(x)

def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    a = list(a)
    index = []
    # Check the left endpoint
    if a[0] > a[1]:
        index.append(0)
    # Check the interior points by index, so repeated values are handled correctly
    for i in range(1, len(a) - 1):
        if a[i] > a[i - 1] and a[i] > a[i + 1]:
            index.append(i)
    # Check the right endpoint
    if a[-1] > a[-2]:
        index.append(len(a) - 1)
    return np.array(index)

find_peaks([3,2,1,0])

p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N

pi_digits_str = str(N(pi, 10001))[2:]
pi_digits_list = np.array([int(x) for x in pi_digits_str])
pi_digits_list

x = np.diff(find_peaks(pi_digits_list))
plt.hist(x, 20)
plt.title('Distance between local maxes in the digits of $\pi$')
plt.xlabel('Distance between Maxima')
plt.ylabel('Number of Maxima')
plt.xticks(range(0,30,2))
plt.xlim(right=22)

assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
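For long inputs such as the pi digits, the same peak-finding logic can also be vectorized with NumPy comparisons; this alternative sketch uses the same endpoint convention (an endpoint counts as a peak if it exceeds its single neighbour):

```python
import numpy as np

def find_peaks_vec(a):
    a = np.asarray(a)
    # Interior peaks: strictly greater than both neighbours
    mid = np.nonzero((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1
    idx = list(mid)
    # Endpoint handling, same convention as the loop version
    if len(a) > 1 and a[0] > a[1]:
        idx.insert(0, 0)
    if len(a) > 1 and a[-1] > a[-2]:
        idx.append(len(a) - 1)
    return np.array(sorted(idx))

print(find_peaks_vec([2, 0, 1, 0, 2, 0, 1]))  # [0 2 4 6]
```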
tomlyscan/Ordenacao
Notas Ordenacao.ipynb
gpl-3.0
# Finding the maximum and minimum values in a list:
a = [1, -2, 2, 0, 3, 4, 5, 10, -3, -1]
print('Largest value in the list: ', max(a))
print('Smallest value in the list: ', min(a))

# Create a fixed-size list initialized with 0:
contador = max(a) + abs(min(a)) + 1
pos_zero = abs(min(a))
lista_contador = [0]*contador
print('Counting list: ', lista_contador)
print('Index list: ', [i for i in range(min(a), max(a)+1)])

# Increment the positions of lista_contador using the elements of the list to be sorted
for i in range(len(a)):
    if a[i] == 0:
        lista_contador[pos_zero] += 1
    elif a[i] > 0:
        lista_contador[pos_zero + a[i]] += 1
    elif a[i] < 0:
        lista_contador[pos_zero + a[i]] += 1
print('Counting list: ', lista_contador)
print('Index list: ', [i for i in range(min(a), max(a)+1)])

# Read lista_contador and generate the sorted list
ordenada = []
for i in range(len(lista_contador)):
    if lista_contador[i] > 0:
        for j in range(lista_contador[i]):
            if i < pos_zero:
                ordenada += [i - pos_zero]
            elif i == pos_zero:
                ordenada += [0]
            elif i > pos_zero:
                ordenada += [i - pos_zero]
ordenada

# Creating the counting sort function:
def counting_sort(a):
    ordenada = []
    contador = max(a) + abs(min(a)) + 1
    pos_zero = abs(min(a))
    lista_contador = [0]*contador
    for i in range(len(a)):
        if a[i] == 0:
            lista_contador[pos_zero] += 1
        else:
            lista_contador[pos_zero + a[i]] += 1
    for i in range(len(lista_contador)):
        if lista_contador[i] > 0:
            for j in range(lista_contador[i]):
                if i == pos_zero:
                    ordenada += [0]
                else:
                    ordenada += [i - pos_zero]
    return ordenada

# Testing counting sort:
print(counting_sort(a))
"""
Explanation: Counting sort algorithm
Find the largest value of the list to be sorted:
[8, 10, 12, 4, 7, 3, 0] -> largest = 12
Create a counting array with largest+1 positions.
End of explanation
"""
import math

a = [1000, 100, 10, 1, -1, -10, -10, -100, -1000, -10000]

# Computing the digit count of the largest number in the list, in absolute terms
menor = abs(min(a))
maior = abs(max(a))
digitos_menor = int(math.log10(menor)) + 1
digitos_maior = int(math.log10(maior)) + 1
print('Number of digits of the smallest number: ', digitos_menor)
print('Number of digits of the largest number: ', digitos_maior)

# Turning a number into a list of its digits
# Input: integer; Output: list of integers
def number_to_list(num):
    negative = False
    res = []
    for x in str(num):
        if x == '-':
            negative = True
            continue
        if negative == True:
            res += [int(x)*-1]
            negative = False
            continue
        res += [int(x)]
    return res

number_to_list(-1000)

lista = [-1, 0, 0, 0]
len_maior = 10
for x in range(0, len_maior - len(lista)):
    lista = [0] + lista
lista

import pandas as pd
a_row = pd.Series(lista)
index = [x for x in range(len(lista))]
df = pd.DataFrame(columns=index)
df = df.append(a_row, ignore_index=True)
df

import math
def largest_digit_number(a):
    menor = abs(min(a))
    maior = abs(max(a))
    digitos_menor = int(math.log10(menor)) + 1
    digitos_maior = int(math.log10(maior)) + 1
    return digitos_maior if digitos_maior > digitos_menor else digitos_menor

#largest_digit_number(a)

def list_to_row(num, largest_digit_number, index=1):
    list = number_to_list(num)
    for x in range(0, (largest_digit_number+1) - len(list)):
        list = [0] + list
    list[0] = index
    return list

list_to_row(-1000, 6)

# The first number of each row is the index into the list being sorted; it serves as a hash
def list_to_matrix(a):
    matrix = []
    largest = largest_digit_number(a)
    for i in range(len(a)):
        matrix += [list_to_row(a[i], largest, i)]
    return matrix

list_to_matrix(a)

import numpy as np
arr2D = np.array(list_to_matrix(a))
print(arr2D)
arr2D = arr2D[arr2D[:, -1].argsort()]  # Sorting by the last column of the matrix
print(arr2D)

def sorted_matrix(a):
    arr2D = np.array(list_to_matrix(a))
    for i in range(1, largest_digit_number(a)+1):
        arr2D = arr2D[arr2D[:, -i].argsort()]
    return arr2D

sm = sorted_matrix(a)
sm

index = []
for i in range(len(sm)):
    index += [sm[i][0]]
index

def radix_sort(a):
    sorted_list = []
    radix_matrix = sorted_matrix(a)
    for i in range(len(radix_matrix)):
        sorted_list += [a[radix_matrix[i][0]]]
    return sorted_list

radix_sort(a)

arr[:20]
list_to_matrix(arr[:20])
sorted_matrix(arr[:20])

import numpy as np
arr2D = np.array([[ 0, 9, 3, 2, 8, 7],
                  [ 1, 0, 0, 0, 0, -2],
                  [ 2, 9, 4, 6, 5, 6],
                  [ 3, 0, 0, 0, -2, 9],
                  [ 4, 0, 0, 0, -1, 4],
                  [ 5, 0, 0, 0, 2, 4],
                  [ 6, 0, 0, 0, -4, 8],
                  [ 7, 0, 0, 0, 0, 8],
                  [ 8, 5, 5, 4, 8, 6],
                  [ 9, 8, 3, 5, 3, 4],
                  [10, 6, 7, 5, 6, 7],
                  [11, 0, 0, 0, 1, 5],
                  [12, 0, 0, 0, 2, 2],
                  [13, 0, 0, 0, -2, 4],
                  [14, 0, 0, 0, -3, 0],
                  [15, 5, 0, 7, 1, 2],
                  [16, 0, 0, 0, -3, 3],
                  [17, 8, 4, 6, 4, 8],
                  [18, 5, 2, 2, 7, 4],
                  [19, 0, 0, 0, 3, 9]])

#array2D_temp = arr2D
#col = {}
#array = arr2D[:,-1]
#for i in range(len(array)):
#    col[i] = array[i]
#col = {k: v for k, v in sorted(col.items(), key=lambda item: item[1])}
#list = [[]]*arr2D.shape[0]
#a = 0
#for i in col:
#    list[a] = arr2D[i].tolist()
#    a += 1
#list

for i in range(1, 6):
    radix_col = {}
    array = arr2D[:,-i]
    matrix = [[]]*arr2D.shape[0]
    a = 0
    for i in range(len(array)):
        radix_col[i] = array[i]
    radix_col = {k: v for k, v in sorted(radix_col.items(), key=lambda item: item[1])}
    for i in radix_col:
        matrix[a] = arr2D[i].tolist()
        a += 1
    arr2D = np.array(matrix)
arr2D
"""
Explanation: Radix Sort algorithm
End of explanation
"""
bucket = [x for x in range(10)]
list_0 = [1, 12, 13, 14, 15]
bucket[0] = list_0
bucket

a = 16
bucket[0] += [a]
bucket

# Creating the buckets for the sort (0 to 9, split between positive and negative numbers)
bucket_positive = [x for x in range(10)]
bucket_negative = bucket_positive

def bucket_sort(a):
    bucket_length = 10
    bucket_positive = [[]]*bucket_length
    bucket_negative = [[]]*bucket_length
    ldn = largest_digit_number(a)
    num = 0.0
    res = []
    for i in range(len(a)):
        num = a[i]/math.pow(bucket_length, ldn-1)
        if num < 0:
            if not bucket_negative[int(abs(num))]:
                bucket_negative[int(abs(num))] = [a[i]]
            else:
                bucket_negative[int(abs(num))] += [a[i]]
        else:
            if not bucket_positive[int(abs(num))]:
                bucket_positive[int(abs(num))] = [a[i]]
            else:
                bucket_positive[int(abs(num))] += [a[i]]
    for i in range(bucket_length):
        if bucket_negative[i]:
            bucket_negative[i] = counting_sort(bucket_negative[i])
        if bucket_positive[i]:
            bucket_positive[i] = counting_sort(bucket_positive[i])
    bucket_negative.reverse()
    res = sum(bucket_negative, []) + sum(bucket_positive, [])
    return res

#bucket_sort([-2, -4, -8, -16])
bucket_sort(arr[:20])

arr = [[]]*10
if not arr[0]:
    arr[0] = [14]
else:
    arr[0] += [14]
arr

arr[0] += [17]
if arr[0]:
    print("Exists!")

k = [[8, 15, 22, 24, 39], [], [], [], [], [50712, 52274, 55486], [67567], [], [83534, 84648], [93287, 94656]]
res = []
for i in range(len(k)-1, 0, -1):
    if k[i]:
        res += k[i]
res
"""
Explanation: Bucket Sort algorithm
End of explanation
"""
def parent(i):
    if i % 2 == 0:
        return (i // 2) - 1
    return i // 2

def left(i):
    return 2*i + 1

def right(i):
    return 2*i + 2

def max_heapify(A, tamanho_heap, i):
    maior = 0
    l = left(i)
    r = right(i)
    if l < tamanho_heap and A[l] > A[i]:
        maior = l
    else:
        maior = i
    if r < tamanho_heap and A[r] > A[maior]:
        maior = r
    if maior != i:
        A[i], A[maior] = A[maior], A[i]
        max_heapify(A, tamanho_heap, maior)

def build_max_heap(A):
    for i in range(len(A)//2 - 1, -1, -1):
        max_heapify(A, len(A), i)

def heap_sort(A):
    build_max_heap(A)
    for i in range(len(A)-1, 0, -1):
        A[i], A[0] = A[0], A[i]
        max_heapify(A, i, 0)

A = [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]
heap_sort(A)
A

heap_sort(arr)
arr
"""
Explanation: Max-Heap
The value of a node is at most equal to the value of its parent: A[Parent(i)] >= A[i]
End of explanation
"""
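The counting sort explored above can also be condensed into a single offset-based function; this compact restatement of the same idea handles negative values through a min-value offset:

```python
def counting_sort_compact(a):
    lo, hi = min(a), max(a)
    counts = [0] * (hi - lo + 1)   # one slot per possible value
    for x in a:
        counts[x - lo] += 1        # shift by lo so negatives index from 0
    out = []
    for i, c in enumerate(counts):
        out.extend([i + lo] * c)   # rebuild in ascending order
    return out

print(counting_sort_compact([1, -2, 2, 0, 3, 4, 5, 10, -3, -1]))
# [-3, -2, -1, 0, 1, 2, 3, 4, 5, 10]
```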
aldian/tensorflow
tensorflow/lite/examples/experimental_new_converter/keras_lstm.ipynb
apache-2.0
!pip install tf-nightly --upgrade
"""
Explanation: Overview
This CodeLab demonstrates how to build an LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.
The CodeLab is very similar to the tf.lite.experimental.nn.TFLiteLSTMCell CodeLab. However, with the control flow support in the experimental new converter, we can define the model with control flow directly without refactoring the code.
Also note: We're not trying to build the model to be a real world application, but only demonstrate how to use TensorFlow Lite. You can build a much better model using CNN models. For a more canonical lstm codelab, please see here.
Step 0: Prerequisites
It's recommended to try this feature with the newest TensorFlow nightly pip build.
End of explanation
"""
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(28, 28), name='input'),
    tf.keras.layers.LSTM(20),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
"""
Explanation: Step 1: Build the MNIST LSTM model.
End of explanation
"""
# Load MNIST dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)

# Change this to True if you want to test the flow rapidly.
# Train with a small dataset and only 1 epoch. The model will work poorly
# but this provides a fast way to test if the conversion works end to end.
_FAST_TRAINING = False
_EPOCHS = 5
if _FAST_TRAINING:
  _EPOCHS = 1
  _TRAINING_DATA_COUNT = 1000
  x_train = x_train[:_TRAINING_DATA_COUNT]
  y_train = y_train[:_TRAINING_DATA_COUNT]

model.fit(x_train, y_train, epochs=_EPOCHS)
model.evaluate(x_test, y_test, verbose=0)
"""
Explanation: Step 2: Train & Evaluate the model.
We will train the model using MNIST data.
We will train the model using MNIST data. End of explanation """ converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() """ Explanation: Step 3: Convert the Keras model to TensorFlow Lite model. Note here: we just convert to TensorFlow Lite model as usual. End of explanation """ # Run the model with TensorFlow to get expected results. expected = model.predict(x_test[0:1]) # Run the model with TensorFlow Lite interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() interpreter.set_tensor(input_details[0]["index"], x_test[0:1, :, :]) interpreter.invoke() result = interpreter.get_tensor(output_details[0]["index"]) # Assert if the result of TFLite model is consistent with the TF model. np.testing.assert_almost_equal(expected, result) print("Done. The result of TensorFlow matches the result of TensorFlow Lite.") """ Explanation: Step 4: Check the converted TensorFlow Lite model. Now load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results. End of explanation """
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session07/Day0/TooBriefVizSolutions.ipynb
mit
import numpy as np import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Introduction to Visualization: Density Estimation and Data Exploration Version 0.1 There are many flavors of data analysis that fall under the "visualization" umbrella in astronomy. Today, by way of example, we will focus on 2 basic problems. By AA Miller 16 September 2017 End of explanation """ from sklearn.datasets import load_linnerud linnerud = load_linnerud() chinups = linnerud.data[:,0] """ Explanation: Problem 1) Density Estimation Starting with 2MASS and SDSS and extending through LSST, we are firmly in an era where data and large statistical samples are cheap. With this explosion in data volume comes a problem: we do not know the underlying probability density function (PDF) of the random variables measured via our observations. Hence - density estimation: an attempt to recover the unknown PDF from observations. In some cases theory can guide us to a parametric form for the PDF, but more often than not such guidance is not available. There is a common, simple, and very familiar tool for density estimation: histograms. But there is also a problem: HISTOGRAMS LIE! We will "prove" this to be the case in a series of examples. For this exercise, we will load the famous Linnerud data set, which tested 20 middle aged men by measuring the number of chinups, situps, and jumps they could do in order to compare these numbers to their weight, pulse, and waist size. To load the data (just chinups for now) we will run the following: from sklearn.datasets import load_linnerud linnerud = load_linnerud() chinups = linnerud.data[:,0] End of explanation """ plt.hist(chinups, histtype = "step", lw = 3) """ Explanation: Problem 1a Plot the histogram for the number of chinups using the default settings in pyplot. 
End of explanation """ plt.hist(chinups, bins = 5, histtype="step", lw = 3) plt.hist(chinups, align = "left", histtype="step", lw = 3) """ Explanation: Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers. Problem 1b Using the same data make 2 new histograms: (i) one with 5 bins (bins = 5), and (ii) one with the bars centered on the left bin edges (align = "left"). Hint - if overplotting the results, you may find it helpful to use the histtype = "step" option End of explanation """ bins = np.append(np.sort(chinups)[::5], np.max(chinups)) plt.hist(chinups, bins = bins, histtype = "step", normed = True, lw = 3) """ Explanation: These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups. What if we instead allow the bin width to vary and require the same number of points in each bin? You can determine the bin edges for bins with 5 sources using the following command: bins = np.append(np.sort(chinups)[::5], np.max(chinups)) Problem 1c Plot a histogram with variable width bins, each with the same number of points. Hint - setting normed = True will normalize the bin heights so that the PDF integrates to 1. End of explanation """ plt.hist(chinups, histtype = 'step') # this is the code for the rug plot plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4) """ Explanation: Ending the lie Earlier I stated that histograms lie. One simple way to combat this lie: show all the data. 
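The equal-count bin-edge trick is easy to sanity check on a toy array first (illustrative numbers, separate from the chinups data):

```python
import numpy as np

data = np.array([1, 2, 2, 3, 5, 8, 8, 9, 12, 15])
# every 5th value of the sorted data becomes a left bin edge; the max closes the last bin
bins = np.append(np.sort(data)[::5], np.max(data))
counts, edges = np.histogram(data, bins=bins)
```

Each variable-width bin ends up holding the same number of points, which is exactly what Problem 1c asks for.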
Displaying the original data points allows viewers to somewhat intuit the effects of the particular bin choices that have been made (though this can also be cumbersome for very large data sets, which these days is essentially all data sets). The standard for showing individual observations relative to a histogram is a "rug plot," which shows a vertical tick (or other symbol) at the location of each source used to estimate the PDF. Problem 1d Execute the cell below to see an example of a rug plot. End of explanation """ # execute this cell from sklearn.neighbors import KernelDensity def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs): kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs) kde_skl.fit(data[:, np.newaxis]) log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density) return np.exp(log_pdf) """ Explanation: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect. To recap, histograms are not ideal for density estimation for the following reasons: They introduce discontinuities that are not present in the data They are strongly sensitive to user choices ($N_\mathrm{bins}$, bin centering, bin grouping), without any mathematical guidance to what these choices should be They are difficult to visualize in higher dimensions Histograms are useful for generating a quick representation of univariate data, but for the reasons listed above they should never be used for analysis. Most especially, functions should not be fit to histograms given how greatly the number of bins and bin centering affects the output histogram. Okay - so if we are going to rail on histograms this much, there must be a better option. 
There is: Kernel Density Estimation (KDE), a nonparametric form of density estimation whereby a normalized kernel function is convolved with the discrete data to obtain a continuous estimate of the underlying PDF. As a rule, the kernel must integrate to 1 over the interval $-\infty$ to $\infty$ and be symmetric. There are many possible kernels (gaussian is highly popular, though Epanechnikov, an inverted parabola, produces the minimal mean square error).
KDE is not completely free of the problems we illustrated for histograms above (in particular, both a kernel and the width of the kernel need to be selected), but it does manage to correct a number of the ills. We will now demonstrate this via a few examples using the scikit-learn implementation of KDE: KernelDensity, which is part of the sklearn.neighbors module.
Note There are many implementations of KDE in Python, and Jake VanderPlas has put together an excellent description of the strengths and weaknesses of each. We will use the scikit-learn version as it is in many cases the fastest implementation.
To demonstrate the basic idea behind KDE, we will begin by representing each point in the dataset as a block (i.e. we will adopt the tophat kernel). Borrowing some code from Jake, we can estimate the KDE using the following code:
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
    kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
    kde_skl.fit(data[:, np.newaxis])
    log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)

    return np.exp(log_pdf)
The two main options to set are the bandwidth and the kernel.
End of explanation
"""
grid = np.arange(0 + 1e-4,20,0.01)
PDFtophat = kde_sklearn(chinups, grid, bandwidth = 0.1, kernel = 'tophat')
plt.plot(grid, PDFtophat)
"""
Explanation: Problem 1e
Plot the KDE of the PDF for the number of chinups middle aged men can do using a bandwidth of 0.1 and a tophat kernel.
Hint - as a general rule, the grid spacing should be smaller than the bandwidth when plotting the PDF.
End of explanation
"""
PDFtophat1 = kde_sklearn(chinups, grid, bandwidth = 1, kernel = 'tophat')
plt.plot(grid, PDFtophat1, 'MediumAquaMarine', lw = 3, label = "bw = 1")
PDFtophat5 = kde_sklearn(chinups, grid, bandwidth = 5, kernel = 'tophat')
plt.plot(grid, PDFtophat5, 'Tomato', lw = 3, label = "bw = 5")
plt.legend()
"""
Explanation: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width (i.e. the bandwidth) of the blocks.
Problem 1f
Plot the KDE of the PDF for the number of chinups middle aged men can do using bandwidths of 1 and 5 and a tophat kernel. How do the results differ from the histogram plots above?
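For intuition about what KernelDensity is doing with the gaussian kernel, the estimate can be written out by hand as an average of normalized bumps centered on each observation (a sketch only - use the scikit-learn version in practice):

```python
import numpy as np

def gaussian_kde_by_hand(data, grid, bandwidth):
    # average one normalized Gaussian bump per observation
    z = (grid[:, None] - data[None, :]) / bandwidth
    bumps = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))
    return bumps.mean(axis=1)

data = np.array([0.0, 1.0, 5.0])          # illustrative observations
grid = np.linspace(-5, 10, 301)
pdf = gaussian_kde_by_hand(data, grid, bandwidth=1.0)
```

Because each bump integrates to 1 and they are averaged, the resulting estimate also integrates to 1, as any PDF must.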
End of explanation """ x = np.arange(0, 6*np.pi, 0.1) y = np.cos(x) plt.plot(x,y, lw = 2) plt.xlabel('X') plt.ylabel('Y') plt.xlim(0, 6*np.pi) """ Explanation: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then Silverman's rule of thumb can be used: $$h = 1.059 \sigma n^{-1/5}$$ where $h$ is the bandwidth, $\sigma$ is the standard deviation of the samples, and $n$ is the total number of samples. Note - in situations with bimodal or more complicated distributions, this rule of thumb can lead to woefully inaccurate PDF estimates. The most general way to estimate the choice of bandwidth is via cross validation (we will cover cross-validation later today). What about multidimensional PDFs? It is possible using many of the Python implementations of KDE to estimate multidimensional PDFs, though it is very very important to beware the curse of dimensionality in these circumstances. Problem 2) Data Exploration Now a more open ended topic: data exploration. In brief, data exploration encompases a large suite of tools (including those discussed above) to examine data that live in large dimensional spaces. There is no single best method or optimal direction for data exploration. Instead, today we will introduce some of the tools available via python. As an example we will start with a basic line plot - and examine tools beyond matplotlib. End of explanation """ import seaborn as sns fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x,y, lw = 2) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_xlim(0, 6*np.pi) """ Explanation: Seaborn Seaborn is a plotting package that enables many useful features for exploration. 
In fact, a lot of the functionality that we developed above can readily be handled with seaborn. To begin, we will make the same plot that we created in matplotlib.
End of explanation
"""
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
"""
Explanation: These plots look identical, but it is possible to change the style with seaborn.
seaborn has 5 style presets: darkgrid, whitegrid, dark, white, and ticks. You can change the preset using the following:
    sns.set_style("whitegrid")
which will change the output for all subsequent plots.
Note - if you want to change the style for only a single plot, that can be accomplished with the following:
    with sns.axes_style("dark"):
with all plotting commands inside the with statement.
Problem 3a
Re-plot the sine curve using each seaborn preset to see which you like best - then adopt this for the remainder of the notebook.
End of explanation
"""
sns.set_style("ticks")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
"""
Explanation: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial. Here we load the default:
End of explanation
"""
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
"""
Explanation: which we will now change to colorblind, which is clearer to those that are colorblind.
End of explanation """ iris = sns.load_dataset("iris") iris """ Explanation: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set. Note - for those familiar with pandas seaborn is designed to integrate easily and directly with pandas DataFrame objects. In the example below the Iris data are loaded into a DataFrame. iPython notebooks also display the DataFrame data in a nice readable format. End of explanation """ # note - hist, kde, and rug all set to True, set to False to turn them off with sns.axes_style("dark"): sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True) """ Explanation: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function. Problem 3b Plot the distribution of petal lengths for the Iris data set. End of explanation """ plt.scatter(iris['petal_length'], iris['petal_width']) plt.xlabel("petal length (cm)") plt.ylabel("petal width (cm)") """ Explanation: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots. At this point, we are familiar with basic scatter plots in matplotlib. Problem 3c Make a matplotlib scatter plot showing the Iris petal length against the Iris petal width. 
End of explanation """ with sns.axes_style("darkgrid"): xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000) yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000) plt.scatter(xexample, yexample) """ Explanation: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below: End of explanation """ # hexbin w/ bins = "log" returns the log of counts/bin # mincnt = 1 displays only hexpix with at least 1 source present with sns.axes_style("darkgrid"): plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1) plt.colorbar() """ Explanation: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above). End of explanation """ with sns.axes_style("darkgrid"): sns.kdeplot(xexample, yexample,shade=False) """ Explanation: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function. End of explanation """ sns.jointplot(x=iris['petal_length'], y=iris['petal_width']) """ Explanation: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set. Suppose we want to see univariate distributions in addition to the scatter plot? This is certainly possible with matplotlib and you can find examples on the web, however, with seaborn this is really easy. 
End of explanation
"""
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
"""
Explanation: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Problem 3d
Re-create the plot above but set kind='kde' to produce density estimates of the distributions.
End of explanation
"""
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'], kind = 'kde', shade = False)
"""
Explanation: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
End of explanation
"""
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
"""
Explanation: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_kind = 'kde'.
End of explanation
"""
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
             hue = "species", diag_kind = 'kde')
"""
Explanation: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
End of explanation
"""
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
                 hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
"""
Explanation: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
End of explanation
"""
jvines/Metodos-Numericos
Catedras/Catedra_04.ipynb
gpl-3.0
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Catedra 04
End of explanation
"""
def bisection(f, a, b, eps=1e-5):
    '''
    bisection finds a root of the function f using the bisection method.

    Parameters
    ----------
    f : function
        Function to evaluate.
    a : int or double
        Left endpoint of the interval.
    b : int or double
        Right endpoint of the interval.
    eps : int or double
        Requested tolerance.

    Returns
    -------
    root : float
        Zero of the function.

    Raises
    ------
    ValueError
        When f(a) and f(b) have the same sign.
    '''
    if f(a)*f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs (a={}, b={}).".format(a, b))
    root = (a+b)/2.
    while abs(root-a) > eps or abs(root-b) > eps:
        if f(a)*f(root) < 0:
            b = root
        else:
            a = root
        root = (a+b)/2.
    return root
"""
Explanation: Roots of a function
We want to find, numerically, the roots or zeros of a function $f(x)$.
Note that a fixed point satisfies $f(x^{*}) = x^{*}$,
so we can define $g(x) = f(x) - x$, and in this way $g(x^{*}) = 0$.
The bisection method
This method rests on the following hypotheses:
- $f$ must be a continuous function.
- The search interval must be bounded, $[a, b]$.
- $f(a)$ must have the opposite sign to $f(b)$ ($f(a)\cdot f(b) < 0$).
Under these conditions, the Intermediate Value Theorem guarantees that at least one root of $f$ always exists in $[a, b]$.
The bisection algorithm proceeds as follows:
$$
p_{n} = \frac{a_{n}+b_{n}}{2}, \qquad
a_{n+1} = \begin{cases} p_{n} & \text{if } g(a_{n})\cdot g(p_{n}) > 0 \\ a_{n} & \text{otherwise} \end{cases}, \qquad
b_{n+1} = \begin{cases} p_{n} & \text{if } g(b_{n})\cdot g(p_{n}) > 0 \\ b_{n} & \text{otherwise} \end{cases}
$$
In words:
1. Compute $p = \frac{a+b}{2}$.
2. Evaluate $f(p)$.
3. If convergence is reached, that is, $|a-p|$ or $|b-p|$ is sufficiently small, return $p$ and stop the algorithm.
4.
Examine the sign of $f(p)$ and replace $(a,\,\, f(a))$ or $(b,\,\, f(b))$ with $(p,\,\, f(p))$ so that a zero lies between the two new endpoints.
A simple implementation of the algorithm follows:
End of explanation
"""
x = np.linspace(0, np.pi, 100)
plt.plot(x, np.sin(x), 'b', label='sin(x)')
plt.plot(x, np.cos(x), 'r', label='cos(x)')
plt.xlabel('x [radians]')
plt.legend()

def sin_menos_cos(x):
    '''
    sin_menos_cos computes the difference between sin(x) and cos(x).

    Parameters
    ----------
    x : int or float
        Point at which to evaluate the function.

    Returns
    -------
    res : float
        sin(x)-cos(x).
    '''
    res = np.sin(x) - np.cos(x)
    return res

cero = bisection(sin_menos_cos, 0., 1.5, eps=.1)

plt.plot(x, np.sin(x), 'b', label='sin(x)')
plt.plot(x, np.cos(x), 'r', label='cos(x)')
plt.xlabel('x [radians]')
plt.axvline(cero, color='g')
plt.legend()

cero_2 = bisection(sin_menos_cos, 0., 1.5, eps=1e-10)

plt.plot(x, np.sin(x), 'b', label='sin(x)')
plt.plot(x, np.cos(x), 'r', label='cos(x)')
plt.xlabel('x [radians]')
plt.axvline(cero, color='brown')
plt.axvline(cero_2, color='g')
plt.legend()
"""
Explanation: An example:
End of explanation
"""
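As a quick sanity check on the method: because the bracket halves every iteration, roughly $\log_2((b-a)/\mathrm{eps})$ steps are needed, and for $\sin(x)-\cos(x)$ on $[0, 1.5]$ the result should converge to $\pi/4$. A standalone sketch (independent of the notebook's own bisection, using only the standard library):

```python
import math

def bisect(f, a, b, eps=1e-10):
    # plain bisection; assumes f(a) and f(b) have opposite signs
    steps = 0
    while (b - a) / 2 > eps:
        p = (a + b) / 2
        if f(a) * f(p) < 0:
            b = p
        else:
            a = p
        steps += 1
    return (a + b) / 2, steps

root, steps = bisect(lambda t: math.sin(t) - math.cos(t), 0.0, 1.5)
```

The iteration count stays close to the $\log_2$ bound, which is why bisection is slow but utterly reliable.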
krosaen/ml-study
python-ml-book/ch12/ch12.ipynb
mit
import os import struct import numpy as np def load_mnist(path, kind='train'): """Load MNIST data from `path`""" labels_path = os.path.join(path, '%s-labels-idx1-ubyte' % kind) images_path = os.path.join(path, '%s-images-idx3-ubyte' % kind) with open(labels_path, 'rb') as lbpath: magic, n = struct.unpack('>II', lbpath.read(8)) labels = np.fromfile(lbpath, dtype=np.uint8) with open(images_path, 'rb') as imgpath: magic, num, rows, cols = struct.unpack(">IIII", imgpath.read(16)) images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784) return images, labels X_train, y_train = load_mnist('mnist', kind='train') print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1])) X_test, y_test = load_mnist('mnist', kind='t10k') print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1])) import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True,) ax = ax.flatten() for i in range(10): img = X_train[y_train == i][0].reshape(28, 28) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show() """ Explanation: Chapter 12: Training Artificial Neural Networks for Image Recognition In this notebook I work through chapter 12 of Python Machine Learning—see the author's definitive notes. 
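load_mnist relies on the IDX file layout: a big-endian header (magic number, item count, row and column dimensions) parsed with struct.unpack('>...'), followed by raw uint8 pixel bytes. A tiny round trip with synthetic bytes (not a real MNIST file) shows the idea:

```python
import struct
import numpy as np

# build a synthetic IDX-style images blob: big-endian header + uint8 pixels
pixels = (np.arange(2 * 28 * 28) % 256).astype(np.uint8)
blob = struct.pack('>IIII', 2051, 2, 28, 28) + pixels.tobytes()

# parse it back the same way load_mnist does
magic, num, rows, cols = struct.unpack('>IIII', blob[:16])
images = np.frombuffer(blob[16:], dtype=np.uint8).reshape(num, rows * cols)
```

The magic number 2051 is the conventional tag for IDX image files; labels files use 2049 and a shorter '>II' header.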
Loading in the MNIST handwritten digit data set
End of explanation
"""
fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True,)
ax = ax.flatten()
for i in range(25):
    img = X_train[y_train == 4][i].reshape(28, 28)
    ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
"""
Explanation: Show a bunch of 4s
jansoe/FUImaging
examples/Chaining/CompareMFInitialisation.ipynb
mit
import sys import os import pickle import matplotlib.pyplot as plt import numpy as np from collections import defaultdict from scipy.spatial.distance import pdist from scipy.stats import gaussian_kde pythonpath_for_regnmf = os.path.realpath(os.path.join(os.path.pardir, os.path.pardir)) sys.path.append(pythonpath_for_regnmf) from regnmf import ImageAnalysisComponents as ia from regnmf import datamaker from regnmf.regularizedHALS import convex_cone import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) %matplotlib inline """ Explanation: Effects of Chaining rNMF and sICA This Notebook reproduces some results of the manuscript http://dx.doi.org/10.1016/j.neuroimage.2014.04.041. Please make sure to have the regNMF module (available at https://github.com/jansoe/FUImaging) in your PYTHONPATH. End of explanation """ param = {'act_time': [0.01, 0.1, 0.3, 0.8, 1.0, 1.0], 'cov': 0.3, 'latents': 40, 'mean': 0.2, 'no_samples': 50, 'noisevar': 0.2, 'shape': (50, 50), 'width':0.1, 'var': 0.08} """ Explanation: Parameter for creation of surrogate Data End of explanation """ anal_param = {'sparse_param': 0.5, 'factors': 80, 'smooth_param': 2, 'init':'convex', 'sparse_fct':'global_sparse', 'verbose':0 } """ Explanation: Parameter for Matrix Factorization End of explanation """ def violin_plot(ax, data, color='b'): ''' create violin plots on an axis ''' w = 0.4 for p, d in enumerate(data): k = gaussian_kde(d) #calculates the kernel density m = k.dataset.min() #lower bound of violin M = k.dataset.max() #upper bound of violin x = np.arange(m,M,(M-m)/100.) 
# support for violin v = k.evaluate(x) #violin profile (density curve) scale = w/v.max() v = v*scale #scaling the violin to the available space ax.fill_betweenx(x,p,v+p, facecolor=color, edgecolor = color, alpha=1) ax.fill_betweenx(x,p,-v+p, facecolor=color, edgecolor = color, alpha=1) #median perc = np.percentile(d, [25,50,75]) perc_width = k.evaluate(perc)*scale l1, = ax.plot([p-perc_width[1],p+perc_width[1]],[perc[1], perc[1]], 'k', lw=0.5) l2, = ax.plot([p-perc_width[0],p+perc_width[0]],[perc[0], perc[0]], '0.25', lw=0.5) ax.plot([p-perc_width[2],p+perc_width[2]],[perc[2], perc[2]], '0.25', lw=0.5) ax.legend([l1, l2], ['median', 'quartiles'], prop={'size':fontsize}, numpoints=1, loc = 'lower right', labelspacing=0.1, handletextpad=0.5, bbox_to_anchor = (1, 0.9), handlelength=1, borderaxespad=-0.5, frameon=False) def cor(time1, time2, num_sources): '''calculate crosscorrelation between sources and latents''' return np.corrcoef(np.vstack((time1, time2)))[num_sources:, :num_sources] """ Explanation: Helper Functions End of explanation """ num_datasets = 5 #number of independent datasets mse = defaultdict(list) cor = defaultdict(list) for dummy in range(num_datasets): compare = {} # create data tempdata = datamaker.Dataset(param) # plain NMF nnma = ia.NNMF(maxcount=50, num_components=anal_param['factors']) anal_param.update({'init':'convex'}) nnma.param.update(anal_param) compare['nmf'] = nnma(ia.TimeSeries(tempdata.observed, shape=param['shape'])) # plain sICA sica = ia.sICA(num_components=anal_param['factors']) compare['sica'] = sica(ia.TimeSeries(tempdata.observed, shape=param['shape'])) # NMF on sICA reduced data reduced_data = np.dot(compare['sica']._series, compare['sica'].base._series) compare['sicareduced_nmf'] = nnma(ia.TimeSeries(reduced_data, shape=param['shape'])) # sICA initialized NMF nnma = ia.NNMF(maxcount=50, num_components=anal_param['factors']) A = compare['sica']._series.copy() X = compare['sica'].base._series.copy() A[A<0]=0 X[X<0]=0 
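One detail worth spelling out in cor: np.corrcoef on the stacked array returns a block matrix, and the [num_sources:, :num_sources] slice keeps only the recovered-versus-source block. A small sketch with a permuted, sign-flipped copy of two sources:

```python
import numpy as np

rng = np.random.RandomState(0)
sources = rng.randn(2, 100)                       # two "true" time courses
recovered = np.vstack([sources[1], -sources[0]])  # permuted and sign-flipped estimates

full = np.corrcoef(np.vstack((sources, recovered)))
cross = full[2:, :2]   # rows: recovered components, columns: true sources
```

The cross block makes the permutation and sign flip explicit: each recovered component correlates at +/-1 with exactly one source.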
anal_param.update({'init':{'A':A, 'X':X}}) nnma.param.update(anal_param) compare['sicainit_nmf'] = nnma(ia.TimeSeries(tempdata.observed, shape=param['shape'])) # NMF initialized sICA compare['nmfinit_sica'] = compare['nmf'].copy() sica = ia.sICA(num_components=anal_param['factors']) out_temp = sica(compare['nmfinit_sica'].base.copy()) compare['nmfinit_sica'].base = out_temp.base compare['nmfinit_sica']._series = np.dot(compare['nmfinit_sica']._series, out_temp._series) # sICA on NMF reduced data nmf_reduced = np.dot(compare['nmf']._series, compare['nmf'].base._series) sica = ia.sICA(num_components=anal_param['factors']) compare['nmfreduced_sica'] = sica(ia.TimeSeries(nmf_reduced, shape=param['shape'])) # sICA on convex cone compare['ccinit_sica'] = compare['nmf'].copy() init = convex_cone(tempdata.observed, anal_param['factors']) out_temp = sica(ia.TimeSeries(np.array(init['base']), shape=param['shape'])) compare['ccinit_sica'].base = out_temp.base compare['ccinit_sica']._series = np.dot(np.array(init['timecourses']).T, out_temp._series) #collect performance measures for k in compare: cor[k] += list(tempdata.cor2source(compare[k])[1]) mse[k] += list(tempdata.mse2source(compare[k], local=0.05)) """ Explanation: Perform chained matrix factorization applied factorizations: plain rNMF and plain sICA rNMF on data from sICA (i.e. 
PCA) reconstruction, that is A*X rNMF initialized by rectified sICA components sICA on pixel participation of rNMF sICA on data from rNMF reconstruction End of explanation """ fig = plt.figure(figsize=(15, 6)) fontsize = 10 ax = fig.add_axes([0.1,0.2,0.35,0.75]) keys = ['nmf', 'sicainit_nmf', 'sicareduced_nmf'] data = [1-np.array(mse[i]) for i in keys] violin_plot(ax, data, '0.5') ax.set_xticks(range(len(keys))) ax.set_xticklabels(['NMF', 'sICA init\nNMF', 'sICA reconst.\nNMF'], rotation='0', ha='center', size=fontsize) ax.set_ylabel('SR', size=fontsize) ax.set_ylim([0,0.9]) ax.set_yticks([0,0.4,0.8]) ax.yaxis.set_tick_params(labelsize=fontsize) ax.yaxis.set_ticks_position('left') ax.xaxis.set_tick_params(size=0) for pos in ['right', 'bottom', 'top']: ax.spines[pos].set_color('none') ax = fig.add_axes([0.6,0.2,0.35,0.75]) keys = ['sica', 'nmfinit_sica', 'nmfreduced_sica'] data = [1-np.array(mse[i]) for i in keys] violin_plot(ax, data, '0.5') ax.set_xticks(range(len(keys))) ax.set_xticklabels(['sICA', 'NMF init\nsICA', 'NMF reconst.\nsICA'], rotation='0', ha='center', size=fontsize) ax.set_ylabel('SR', size=fontsize) ax.set_ylim([0,0.9]) ax.set_yticks([0,0.4,0.8]) ax.yaxis.set_tick_params(labelsize=fontsize) ax.yaxis.set_ticks_position('left') ax.xaxis.set_tick_params(size=0) for pos in ['right', 'bottom', 'top']: ax.spines[pos].set_color('none') plt.show() """ Explanation: Violinplots of Source Recovery (SR) End of explanation """
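The violins above plot 1 - mse, i.e. a source-recovery score SR where 1 means perfect local reconstruction, and the median/quartile lines inside each violin come from np.percentile. A sketch of that summary on hypothetical error values:

```python
import numpy as np

mse_values = np.array([0.10, 0.25, 0.40, 0.05])  # hypothetical per-source errors
sr = 1 - mse_values                              # source recovery: higher is better
q25, med, q75 = np.percentile(sr, [25, 50, 75])  # the lines drawn by violin_plot
```

Flipping the error this way just makes "higher is better" hold in every panel, so the factorization variants can be compared at a glance.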
raybuhr/pyfolio
pyfolio/examples/bayesian.ipynb
apache-2.0
%matplotlib inline
import pyfolio as pf
"""
Explanation: Bayesian performance analysis example in pyfolio
There are also a few more advanced (and still experimental) analysis methods in pyfolio based on Bayesian statistics.
The main benefit of these methods is uncertainty quantification. All the values you saw above, like the Sharpe ratio, are just single numbers. These estimates are noisy because they have been computed over a limited number of data points. So how much can you trust these numbers? You don't know, because there is no sense of uncertainty. That is where Bayesian statistics helps: instead of single values, we are dealing with probability distributions that assign degrees of belief to all possible parameter values.
Let's create the Bayesian tear sheet. Under the hood this is running MCMC sampling in PyMC3 to estimate the posteriors, which can take quite a while (that's the reason why we don't generate this by default in create_full_tear_sheet()).
Import pyfolio
End of explanation
"""
stock_rets = pf.utils.get_symbol_rets('FB')
"""
Explanation: Fetch the daily returns for a stock
End of explanation
"""
out_of_sample = stock_rets.index[-40]
pf.create_bayesian_tear_sheet(stock_rets, live_start_date=out_of_sample)
"""
Explanation: Create Bayesian tear sheet
End of explanation
"""
help(pf.bayesian.run_model)
"""
Explanation: Let's go through these row by row:
The first one is the Bayesian cone plot that is the result of a summer internship project of Sepideh Sadeghi here at Quantopian. It's similar to the cone plot you already saw in the tear sheet above but has two critical additions: (i) it takes uncertainty into account (i.e. a short backtest length will result in a wider cone), and (ii) it does not assume normality of returns but instead uses a Student-T distribution with heavier tails.
The next row is comparing mean returns of the in-sample (backtest) and OOS (forward) period.
As you can see, mean returns are not a single number but a (posterior) distribution that gives us an indication of how certain we can be in our estimates. Note that the green distribution on the left side is much wider, representing our increased uncertainty due to having less OOS data. We can then calculate the difference between these two distributions, as shown on the right side. The grey lines denote the 2.5% and 97.5% percentiles. Intuitively, if the right grey line is lower than 0 you can say that with probability > 97.5% the OOS mean returns are below what is suggested by the backtest. The model used here is called BEST and was developed by John Kruschke.
The next couple of rows follow the same pattern but are estimates of annual volatility, the Sharpe ratio, and their respective differences.
The 5th row shows the effect size, or the difference of means normalized by the standard deviation, and gives you a general sense of how far apart the two distributions are. Intuitively, even if the means are significantly different, it may not be very meaningful if the standard deviation is huge, amounting to a tiny difference between the two returns distributions.
The 6th row shows predicted returns (based on the backtest) for tomorrow, and 5 days from now. The blue line indicates the probability of losing more than 5% of your portfolio value and can be interpreted as a Bayesian VaR estimate.
Lastly, a Bayesian estimate of annual alpha and beta. In addition to uncertainty estimates, this model, like all the above ones, assumes returns to be T-distributed, which leads to more robust estimates than a standard linear regression would.
Running models directly
You can also run individual models. All models can be found in pyfolio.bayesian and run via the run_model() function.
End of explanation
"""
# Run model that assumes returns to be T-distributed
trace = pf.bayesian.run_model('t', stock_rets)
"""
Explanation: For example, to run a model that assumes returns to be T-distributed, you can call:
End of explanation
"""
# Check what frequency of samples from the sharpe posterior are above 0.
print('Probability of Sharpe ratio > 0 = {:3}%'.format((trace['sharpe'] > 0).mean() * 100))
"""
Explanation: The returned trace object can be directly inquired. For example, we might ask what the probability of the Sharpe ratio being larger than 0 is, by checking what percentage of posterior samples of the Sharpe ratio are > 0:
End of explanation
"""
import pymc3 as pm
pm.traceplot(trace);
"""
Explanation: But we can also interact with it like with any other pymc3 trace:
End of explanation
"""
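The `(trace['sharpe'] > 0).mean()` query above generalizes to any posterior summary; a sketch using synthetic draws in place of MCMC samples (`sharpe_samples` is an assumed stand-in for `trace['sharpe']`, not pyfolio output):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for trace['sharpe']: 5000 posterior draws of the Sharpe ratio.
sharpe_samples = rng.normal(loc=0.8, scale=0.4, size=5000)

# Posterior probability that the Sharpe ratio is positive.
p_positive = (sharpe_samples > 0).mean()

# 95% credible interval from the 2.5% / 97.5% posterior percentiles,
# matching the grey lines in the tear sheet plots.
lo, hi = np.percentile(sharpe_samples, [2.5, 97.5])
```

The same two lines work on any 1-D array of posterior samples pulled from a trace.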
ES-DOC/esdoc-jupyterhub
notebooks/awi/cmip6/models/sandbox-1/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'awi', 'sandbox-1', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: AWI Source ID: SANDBOX-1 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:37 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. 
Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration
23. Carbon Cycle --&gt; Vegetation --&gt; Allocation
24. Carbon Cycle --&gt; Vegetation --&gt; Phenology
25. Carbon Cycle --&gt; Vegetation --&gt; Mortality
26. Carbon Cycle --&gt; Litter
27. Carbon Cycle --&gt; Soil
28. Carbon Cycle --&gt; Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --&gt; Oceanic Discharge
32. Lakes
33. Lakes --&gt; Method
34. Lakes --&gt; Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. 
Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --&gt; Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. 
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. 
Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. 
Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile vary with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10.
Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. 
Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. 
Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2.
Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2.
Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. 
Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. 
Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2.
Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. 
Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
CSB-book/CSB
scientific/solutions/Lord_of_the_flies_solution.ipynb
gpl-3.0
from Bio import Entrez import re """ Explanation: Solution of 6.6.1, Lord of the Fruit Flies Identify the number of papers in PubMed that have Drosophila virilis in the title or abstract End of explanation """ # Always tell NCBI who you are (edit the e-mail below!) Entrez.email = "your_name@yourmailhost.com" handle = Entrez.esearch(db="pubmed", term="Drosophila virilis[Title/Abstract]", usehistory="y") record = Entrez.read(handle) # generate a Python list with all Pubmed IDs of articles about D. virilis id_list = record["IdList"] record["Count"] webenv = record["WebEnv"] query_key = record["QueryKey"] """ Explanation: We construct an esearch request and use the NCBI history function in order to refer to this search in our subsequent efetch call. End of explanation """ handle = Entrez.efetch(db="pubmed", rettype="medline", retmode="text", retstart=0, retmax=543, webenv=webenv, query_key=query_key) out_handle = open("D_virilis_pubs.txt", "w") data = handle.read() handle.close() out_handle.write(data) out_handle.close() """ Explanation: Retrieve the PubMed entries using our search history End of explanation """ with open("D_virilis_pubs.txt") as datafile: author_dict = {} for line in datafile: if re.match("AU", line): # capture author author = line.split("-", 1)[1] # remove leading and trailing whitespace author = author.strip() # if key is present, add 1 # if it's not present, initialize at 1 author_dict[author] = 1 + author_dict.get(author, 0) """ Explanation: Count the number of contributions per author We construct a dictionary with all authors as keys and the number of contributions as values. End of explanation """ for author in sorted(author_dict, key = author_dict.get, reverse = True)[:5]: print(author, ":", author_dict[author]) """ Explanation: Find the top five researchers Dictionaries do not have a natural order but we can sort a dictionary based on the values using the function sorted. 
We retrieve the number of contributions per author from our author_dict using author_dict.get and pass it as the key function to sorted. sorted returns a list that can be indexed to return only the top 5 researchers. End of explanation """
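The counting idiom (author_dict.get with a default of 0) and the ranking idiom (sorted with key=author_dict.get) used above can be checked on a toy MEDLINE-style snippet; the author names below are made up for illustration:

```python
# toy stand-in for the downloaded MEDLINE file
records = [
    "AU  - Smith J", "TI  - Some title", "AU  - Lee K",
    "AU  - Smith J", "AU  - Garcia M", "AU  - Smith J", "AU  - Lee K",
]

# count contributions per author, initializing missing keys at 0
author_dict = {}
for line in records:
    if line.startswith("AU"):
        author = line.split("-", 1)[1].strip()
        author_dict[author] = 1 + author_dict.get(author, 0)

# rank authors by their counts, descending, and keep the top entries
top_two = sorted(author_dict, key=author_dict.get, reverse=True)[:2]
print(top_two)  # ['Smith J', 'Lee K']
```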
cancilla/streamsx.health
samples/HealthcareJupyterDemo/notebooks/experimental/HealthcareDemo-AnalyticsService.ipynb
apache-2.0
!pip install --user --upgrade streamsx !pip install --user --upgrade "git+https://github.com/IBMStreams/streamsx.health.git#egg=healthdemo&subdirectory=samples/HealthcareJupyterDemo/package" """ Explanation: Healthcare Python Streaming Application Demo This application demonstrates how users can develop Python Streaming Applications on DSX. The DSX Notebook ultimately submits two Streams applications to a local Streams cluster. The first application is a pre-compiled SPL application that simulates patient waveform and vital data, as well as publishes the data to a topic. The second application is a Python Topology application written using the Streams Python API. This application subscribes to the topic containing the patient data, performs analysis on the waveforms and sends all of the data, including the results of the analysis, to the Streams view server. Submitting the Python application from the Notebook allows for connecting to the Streams view server in order to retrieve the data. Once the data has been retrieved, it can be analyzed, manipulated or visualized like any other data accessed from a notebook. In the case of this demo, waveform graphs and numerical widgets are being used to display the healthcare data of the patient. The following diagram outlines the architecture of the demo. Cell Description This cell is responsible for installing python modules required for running this notebook End of explanation """ from streamsx.topology.topology import Topology, schema from streamsx.topology.context import ConfigParams, submit from streamsx.topology import functions print ('Submitting to streaming analytic service.') # 'vcap_services.json' is created by running the "Create VCAP Service Credential" notebook # Change the service name accordingly config = { ConfigParams.VCAP_SERVICES: 'vcap_services.json', ConfigParams.SERVICE_NAME: 'Streaming-Analytics' } numPatients = 3 print ("Don't forget to submit patient ingest microservice manually. 
(Set num.patients >= 3)") """ Explanation: Cell Description This cell is responsible for building and submitting the Streams applications to the Streams cluster. PhysionetIngestServiceMulti microservice This microservice comes in the form of a pre-compiled SAB file. The microservice retrieves patient waveform and vital data from a Physionet database (https://www.physionet.org/). Three different sets of data are used as sources. The patient data is submitted to the ingest-physionet topic, which allows it to be consumed from downstream applications or services. End of explanation """ from healthdemo.utils import get_patient_id, get_sampled_data_values topo = Topology('HealthcareDemo_PatientAlert') def patientNeedsAttention(tup): pulse = get_sampled_data_values(tup, 'PULSE')[0] return pulse > 0 and pulse < 80 ## Create a view that shows patients that require attention patients_requiring_attention = topo.subscribe('ingest-physionet', schema.CommonSchema.Json) \ .map(functions.identity) \ .filter(patientNeedsAttention) view = patients_requiring_attention.view() submit('ANALYTICS_SERVICE', topo, config) print ("DONE") """ Explanation: Cell Description This cell contains source code for the Python Topology application. This is a Streaming Python application that ingests the patient data from the ingest-physionet topic, performs filtering on patients that require attention, and then sends the data to the Streams view server. 
End of explanation """ from healthdemo.medgraphs import NumericText import re ## load BokehJS visualization library (must be loaded in a separate cell) from bokeh.io import output_notebook, push_notebook from bokeh.resources import INLINE output_notebook(resources=INLINE) %autosave 0 %reload_ext autoreload %aimport healthdemo.utils %aimport healthdemo.medgraphs %autoreload 1 ## create the graphs ## graphs = [] alert_div = ''' <div style="background-color: #152935; border: 3px solid black; height: 120px; width: 700px; font-family: arial; clear: both; color: {0}"> <div style="margin-left: 5px; margin-top: 20px; font-size: 30px; float:left">{1} requires Attention!</div> <div style="margin-right: 5px; margin-top: 20px; font-size: 30px; float:right">pulse: {2} {3}</div> </div> ''' for id in range(numPatients): pulse_numeric = NumericText(signal_label='PULSE', title='Patient-%d' % (id+1), color='#e71d32', override_str=alert_div) graphs.append(pulse_numeric) ## retrieve data from Streams view in a background job ## def data_collector(view, graphs): for d in iter(view.get, None): patientId = int(re.findall('\d+', get_patient_id(d))[0]) if patientId < numPatients: graphs[patientId-1].add(d) from IPython.lib import backgroundjobs as bg jobs = bg.BackgroundJobManager() jobs.new(data_collector, view.start_data_fetch(), graphs) """ Explanation: Cell Description This cell creates the background job that access the view data. The view data is continuously retrieved from the Streams view server in a background job. Each graph object represent a patient and receives data when the patient requires attention. 
End of explanation """ import time from bokeh.io import show from bokeh.layouts import column, widgetbox plots = [] for g in graphs: plots.append(widgetbox(g.get_figure())) ## display graphs print ("Monitoring patients' pulses...") show(column(plots), # If using bokeh > 0.12.2, uncomment the following statement #notebook_handle=True ) cnt = 0 while True: ## update graphs for g in graphs: g.update() ## update notebook cnt += 1 if cnt % 125 == 0: push_notebook() ## refresh the graphs cnt = 0 time.sleep(0.008) """ Explanation: Cell Description This cell is responsible for laying out and displaying the graphs. There is an infinite loop that continuously calls the update() method on each of the graphs. After each graph has been updated, a call to push_notebook() is made, which causes the notebook to update the graphics. End of explanation """ from healthdemo.patientmonitoring_functions import streaming_rpeak from healthdemo.healthcare_functions import PatientFilter, GenTimestamp, aggregate from healthdemo.windows import SlidingWindow def getPatientView(patient_id): ''' Select data of given patient_id, perform analysis and return view. 
Parameters ---------- patient_id: int patient_id (1-based) Returns ------- view: topology.View view data from Streams server ''' topo = Topology('HealthcareDemo_Patient%d' % (patient_id)) ## Ingest, preprocess and aggregate patient data sample_rate = 125 patient_data = topo.subscribe('ingest-physionet', schema.CommonSchema.Json) \ .map(functions.identity) \ .filter(PatientFilter('patient-%d' % (patient_id))) \ .transform(GenTimestamp(sample_rate)) \ .transform(SlidingWindow(length=sample_rate, trigger=sample_rate-1)) \ .transform(aggregate) ## Calculate RPeak and RR delta patient_data = streaming_rpeak(patient_data, sample_rate, data_label='ECG Lead II') ## Create a view of the data patient_view = patient_data.view() submit('ANALYTICS_SERVICE', topo, config) return patient_view # Retrieve view for a patient patient_view = getPatientView(2) print('DONE') """ Explanation: Cell Description This cell is responsible for building and submitting the Streams applications to the Streams cluster. Healthcare Patient Python Topology Application This cell contains source code for the Python Topology application. As described in the above architecture, this is a Streaming Python application that ingests the patient data from the ingest-physionet topic, performs filtering and analysis on the data, and then sends the data to the Streams view server. 
End of explanation """ from healthdemo.medgraphs import ECGGraph, PoincareGraph, NumericText, ABPNumericText ## load BokehJS visualization library (must be loaded in a separate cell) from bokeh.io import output_notebook, push_notebook from bokeh.resources import INLINE output_notebook(resources=INLINE) %autosave 0 %reload_ext autoreload %aimport healthdemo.utils %aimport healthdemo.medgraphs %autoreload 1 ## create the graphs ## graphs = [] ecg_leadII_graph = ECGGraph(signal_label='ECG Lead II', title='ECG Lead II', plot_width=600, min_range=-0.5, max_range=2.0) graphs.append(ecg_leadII_graph) leadII_poincare = PoincareGraph(signal_label='Poincare - ECG Lead II', title='Poincare - ECG Lead II') graphs.append(leadII_poincare) ecg_leadV_graph = ECGGraph(signal_label='ECG Lead V', title='ECG Lead V', plot_width=600) graphs.append(ecg_leadV_graph) resp_graph = ECGGraph(signal_label='Resp', title='Resp', min_range=-1, max_range=3, plot_width=600) graphs.append(resp_graph) pleth_graph = ECGGraph(signal_label='Pleth', title='Pleth', min_range=0, max_range=5, plot_width=600) graphs.append(pleth_graph) hr_numeric = NumericText(signal_label='HR', title='HR', color='#7cc7ff') graphs.append(hr_numeric) pulse_numeric = NumericText(signal_label='PULSE', title='PULSE', color='#e71d32') graphs.append(pulse_numeric) spo2_numeric = NumericText(signal_label='SpO2', title='SpO2', color='#8cd211') graphs.append(spo2_numeric) abp_numeric = ABPNumericText(abp_sys_label='ABP Systolic', abp_dia_label='ABP Diastolic', title='ABP', color='#fdd600') graphs.append(abp_numeric) ## retrieve data from Streams view in a background job ## def data_collector(view, graphs): for d in iter(view.get, None): for g in graphs: g.add(d) from IPython.lib import backgroundjobs as bg jobs = bg.BackgroundJobManager() jobs.new(data_collector, patient_view.start_data_fetch(), graphs) """ Explanation: Cell Description This cell initializes all of the graphs that will be used as well as creates the background job 
that accesses the view data. The view data is continuously retrieved from the Streams view server in a background job. Each graph object receives a copy of the data. Each graph object extracts and stores the data that is relevant for that particular graph. Each time a call to update() is made on a graph object, the next data point is retrieved and displayed. Each graph object maintains an internal queue so that each time a call to update() is made, the next element in the queue is retrieved and removed. End of explanation """ import time from bokeh.io import show from bokeh.layouts import column, row, widgetbox ## display graphs show( row( column( ecg_leadII_graph.get_figure(), ecg_leadV_graph.get_figure(), resp_graph.get_figure(), pleth_graph.get_figure() ), column( leadII_poincare.get_figure(), widgetbox(hr_numeric.get_figure()), widgetbox(pulse_numeric.get_figure()), widgetbox(spo2_numeric.get_figure()), widgetbox(abp_numeric.get_figure()) ) ), # If using bokeh > 0.12.2, uncomment the following statement #notebook_handle=True ) cnt = 0 while True: ## update graphs for g in graphs: g.update() ## update notebook cnt += 1 if cnt % 5 == 0: push_notebook() ## refresh the graphs cnt = 0 time.sleep(0.008) """ Explanation: Cell Description This cell is responsible for laying out and displaying the graphs. There is an infinite loop that continuously calls the update() method on each of the graphs. After each graph has been updated, a call to push_notebook() is made, which causes the notebook to update the graphics. End of explanation """
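Stripped of Streams and Bokeh, the background-job pattern used throughout this notebook — a collector thread pushes incoming tuples into a buffer while the foreground loop drains it for display — can be sketched with the standard library alone. The finite range below is a stand-in for the endless view.start_data_fetch() iterator:

```python
import queue
import threading

def data_collector(source, q):
    # background job: push every item from the (possibly blocking) source into a queue
    for item in source:
        q.put(item)

buffer = queue.Queue()
fake_view = iter(range(10))  # stand-in for the Streams view iterator
job = threading.Thread(target=data_collector, args=(fake_view, buffer))
job.start()
job.join()  # the real job runs forever; here we let the finite source finish

# foreground loop: drain the buffer and hand each point to the graphs
drained = []
while not buffer.empty():
    drained.append(buffer.get())
print(drained)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```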
htwangtw/Patterns-of-Thought
notebooks/2.0-FC-vs-NYCQ-nestedKFold-Yeo7nodes.ipynb
mit
import copy import os, sys import numpy as np import pandas as pd import joblib os.chdir('../') # load my modules from src.utils import load_pkl from src.file_io import save_output from src.models import nested_kfold_cv_scca, clean_confound, permutate_scca from src.visualise import set_text_size, show_results, write_pdf, write_png dat_path = './data/processed/dict_SCCA_data_prepro_revision1.pkl' # load data dataset = load_pkl(dat_path) dataset.keys() FC_nodes = dataset['FC_nodes'] MRIQ = dataset['MRIQ'] mot = dataset['Motion_Jenkinson'] sex = dataset['Gender'] age = dataset['Age'] confound_raw = np.hstack((mot, sex, age)) out_folds = 5 in_folds = 5 n_selected = 4 """ Explanation: Pipeline Cleaning confounds We first created the confound matrix according to Smith et al. (2015). The confound variables are motion (Jenkinson), sex, and age. We also created squared confound measures to help account for potentially nonlinear effects of these confounds. Nested k-fold cross validation We employed the nested approach to accommodate the hyper-parameter selection and model selection. This is a complex and costly method but the sample size allows us to use this approach. 
End of explanation """ %%time para_search, best_model, pred_errors = nested_kfold_cv_scca( FC_nodes, MRIQ, R=confound_raw, n_selected=n_selected, out_folds=5, in_folds=5, reg_X=(0.1, 0.9), reg_Y=(0.1, 0.9) ) """ Explanation: confound cleaning in CV loops End of explanation """ X, Y, R = clean_confound(FC_nodes, MRIQ, confound_raw) from sklearn.linear_model import LinearRegression from scipy.stats.mstats import zscore lr = LinearRegression(fit_intercept=False) lr.fit(R, np.arctanh(FC_nodes)) rec_ = lr.coef_.dot(R.T).T r_2 = 1 - (np.var(np.arctanh(FC_nodes) - rec_) / np.var(np.arctanh(FC_nodes))) print "confounds explained {}% of the FC data".format(np.round(r_2 * 100), 0) lr = LinearRegression(fit_intercept=False) lr.fit(R, zscore(MRIQ)) rec_ = lr.coef_.dot(R.T).T r_2 = 1 - (np.var(zscore(MRIQ) - rec_) / np.var(zscore(MRIQ))) print "confounds explained {}% of the self-report data".format(np.round(r_2 * 100), 0) """ Explanation: Examing the explained variance % End of explanation """ %%time df_permute = permutate_scca(X, Y, best_model.cancorr_, best_model, n_permute=5000) df_permute u, v = best_model.u, best_model.v set_text_size(12) figs = show_results(u, v, range(1,58), dataset['MRIQ_labels'], rank_v=True, sparse=True) write_png('./reports/revision/bestModel_yeo7nodes_component_{:}.png', figs) X_scores, Y_scores, df_z = save_output(dataset, confound_raw, best_model, X, Y, path=None) df_z.to_csv('./data/processed/NYCQ_CCA_score_revision_yeo7nodes_{0:1d}_{1:.1f}_{2:.1f}.csv'.format( best_model.n_components, best_model.penX, best_model.penY)) df_z.to_pickle('./data/processed/NYCQ_CCA_score_revision_yeo7nodes_{0:1d}_{1:.1f}_{2:.1f}.pkl'.format( best_model.n_components, best_model.penX, best_model.penY)) joblib.dump(best_model, './models/SCCA_Yeo7nodes_revision_{:1d}_{:.2f}_{:.2f}.pkl'.format( best_model.n_components, best_model.penX, best_model.penY)) """ Explanation: Permutation test with data augmentation After data decomposition with SCCA, one way to access the 
reliability of the canonical component is a permutation test. The purpose of a permutation test is to construct a null distribution of the target statistic to assess the confidence level of our discovery. The target statistic should be the optimisation goal of the original model. In SCCA, the canonical correlations are used to construct the null distribution. There are two possible ways to permute the data - scrambling the subject-wise or variable-wise links. To perform subject-wise scrambling, you shuffle one dataset by row, so each observation will have non-matching variables. This permutation scheme assesses the significance of the individual information to the model. We can, otherwise, shuffle the order of the variables for each participant to disturb the variable property, hence it can assess the contribution of the variable profile to the modeling results. We adopt the permutation test with the FWE-corrected p-value in the Smith et al. 2015 paper with data augmentation to increase the size of the resampling datasets. End of explanation """
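The actual test is wrapped inside permutate_scca, but the subject-wise scheme described above can be sketched in a few lines. Here a plain Pearson correlation stands in for the SCCA canonical correlation (the logic — shuffle one block's rows, recompute the statistic, compare to the observed value — is the same), and the data are synthetic:

```python
import numpy as np

rng = np.random.RandomState(0)
n_obs = 200
x = rng.randn(n_obs)
y = 0.5 * x + rng.randn(n_obs)  # two genuinely linked variables

observed = abs(np.corrcoef(x, y)[0, 1])

# subject-wise scrambling: shuffle the rows of one block only, which breaks
# the across-block link while leaving each block's internal structure intact
null = np.array([
    abs(np.corrcoef(x, rng.permutation(y))[0, 1])
    for _ in range(1000)
])
p_value = (null >= observed).mean()
print(p_value)  # near 0 for a genuine association
```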
monaen/CellClassification
shape/analysis_shape_classification.ipynb
mit
import numpy as np import os, sys import matplotlib.pyplot as plt from pylab import * import glob import collections import random import math from PIL import Image, ImageDraw %matplotlib inline caffe_root = '../../../' import caffe from caffe import layers as L, params as P ## define workspace paramsworkspace workspace='examples/mywork/shape/' ## params setting Numtrain = 6000 Numval = 1000 Numtest = 2000 """ Explanation: This ipynb shows the results of shapes classification labels: circle --> 0 rectangle --> 1 triangle --> 2 End of explanation """ def calcu_loss_acc(net, batch_size = 1, Numval = 0): ''' calculate the average loss and accuracy of dataset (default: validation dataset) input solver(Caffe solver) batch_size Numval(default = 0) return avg_loss, avg_accuracy ''' # batch_size = net.blobs['data'].num _loss = [] _accuracy = [] for i in range(Numval/batch_size): rs = net.forward() #print rs _loss.append(rs['loss'].tolist()) _accuracy.append(rs['accuracy'].tolist()) avg_loss = mean(_loss) avg_accuracy = mean(_accuracy) # print 'avg_loss: ', avg_loss, 'avg_accuracy ', avg_accuracy return avg_loss, avg_accuracy def calcu_ave_acc(model_def, model_weights, Numdata): net = None caffe.set_mode_gpu() net = caffe.Net(model_def, # defines the structure of the model model_weights, # contains the trained weights caffe.TEST) # use test mode (e.g., don't perform dropout) batch_size = net.blobs['data'].num tlab_result = np.array([]) ground_truths = np.array([]) for i in range(Numdata/batch_size): rs = net.forward() tlab_result = np.append(tlab_result, rs['prob'].argmax(1)) ground_truths = np.append(ground_truths, net.blobs['label'].data) ave_acc = float(sum(tlab_result == ground_truths))/float(len(tlab_result)) print "average accuracy: ", ave_acc return ave_acc, tlab_result, ground_truths ## calculate tp, tn, fp, fn def acc_prec_recall(tlab_result, ground_truths): types = set(tlab_result) if len(tlab_result) != len(ground_truths): assert len(tlab_result) == 
len(ground_truths), 'The length of predicted results and ground truth labels are not match!' N = len(tlab_result) # N = tp + tn + fp + fn precision = [] recall = [] for _type in types: ind_tlab = np.where(tlab_result == _type)[0] # 37 ind_truth = np.where(ground_truths == _type)[0] # 39 ind_flab = np.where(tlab_result != _type)[0] # 163 ind_false = np.where(ground_truths != _type)[0] # 161 tp_list = [i for i in ind_tlab if i in ind_truth] fp_list = [i for i in ind_tlab if i not in ind_truth] tn_list = [i for i in ind_flab if i in ind_false] fn_list = [i for i in ind_flab if i not in ind_false] tp = float(len(tp_list)) fp = float(len(fp_list)) tn = float(len(tn_list)) fn = float(len(fn_list)) precision.append(tp/(tp+fp)) recall.append(tp/(tp+fn)) return precision, recall ## change the work root !!! os.chdir(caffe_root) print "the work root now is: ", os.getcwd() """ Explanation: analysis function definition End of explanation """ ## params setting val_net_path = "train_val_5layers_bn_shape.prototxt" test_net_path = "train_test_5layers_bn_shape.prototxt" deploy_net_path = "deploy_5layers_bn_shape.prototxt" caffemodel_path = "model/5layers_bn_iter_1200.caffemodel" model_val = os.path.join(workspace + val_net_path) model_test = os.path.join(workspace + test_net_path) model_deploy = os.path.join(workspace + deploy_net_path) model_weights = os.path.join(workspace + caffemodel_path) ## calculation val ave_acc, tlab_result, ground_truths = calcu_ave_acc(model_val, model_weights, Numval) print 'The val images accuracy is: ',ave_acc """ Explanation: 3 types of shapes average accuracy: validation End of explanation """ cell_types = ['circle', 'rectangle', 'triangle'] precision, recall = acc_prec_recall(tlab_result, ground_truths) print 'precision: ', zip(cell_types, precision) print 'recall: ', zip(cell_types, recall) """ Explanation: precision and recall of each cell type: validation End of explanation """ ## calculation test ave_acc, tlab_result, ground_truths = 
calcu_ave_acc(model_test, model_weights, Numtest) print 'The test images accuracy is: ',ave_acc cell_types = ['circle', 'rectangle', 'triangle'] precision, recall = acc_prec_recall(tlab_result, ground_truths) print 'precision: ', zip(cell_types, precision) print 'recall: ', zip(cell_types, recall) """ Explanation: average accuracy: test End of explanation """ def draw_triangle(img, cxy, r, fill, fuzzy=0): x, y = cxy[0], cxy[1] ax = x ay = y-r bx = x + math.floor(r*math.cos(math.pi/6)) by = y+math.floor(r*math.sin(math.pi/6)) cx = x - math.floor(r*math.cos(math.pi/6)) cy = y+math.floor(r*math.sin(math.pi/6)) if fuzzy > 0: ax = math.floor(ax * random.uniform(fuzzy,1.0)) ay = math.floor(ay * random.uniform(fuzzy,1.0)) bx = math.floor(bx * random.uniform(fuzzy,1.0)) by = math.floor(by * random.uniform(fuzzy,1.0)) cx = math.floor(cx * random.uniform(fuzzy,1.0)) cy = math.floor(cy * random.uniform(fuzzy,1.0)) pts = [(ax, ay), (bx, by), (cx, cy)] draw = ImageDraw.Draw(img) draw.polygon(pts, fill, outline=None) def draw_circle(img, cxy, r, fill, bb=False, fuzzy=0): draw = ImageDraw.Draw(img) tlx=cxy[0]-r tly=cxy[1]-r brx=cxy[0]+r bry=cxy[1]+r if fuzzy > 0: tlx=math.floor(tlx * random.uniform(fuzzy,1.0)) tly=math.floor(tly * random.uniform(fuzzy,1.0)) brx=math.floor(brx * random.uniform(fuzzy,1.0)) bry=math.floor(bry * random.uniform(fuzzy,1.0)) #print ("Fuzzy is set. 
(%d,%d,%d,%d) => (%d,%d,%d,%d)" % # (cxy[0]-r,cxy[1]-r,cxy[0]+r,cxy[1]+r, tlx, tly, brx, bry)) if bb: draw.rectangle([tlx, tly, brx, bry],fill,outline=None) else: draw.ellipse([tlx, tly, brx, bry],fill,outline=None) del draw def BgColor(): # fixed color: white bcA = 225 # fixed color: grey bcB = 150 # fixed color: darkgrey bcC = 80 # pick one, or use a fixed one: #bgcolor = bcA bgcolor = random.choice([bcA, bcB, bcC]) return bgcolor def FgColor(): fgcolor = random.randint(0,250) #fgcolor = 100 return fgcolor def GenShapeFA(shape, crmin, crmax, n, size, clipok, fuzzy=0): for i in range(0,n): bgcolor = BgColor() img = Image.new('L', (size,size), bgcolor) imgcx = size/2 imgcy = size/2 if clipok: cdelta = size/2 else: cdelta = size/2 - crmax cx = imgcx + random.randint(-cdelta,cdelta) cy = imgcy + random.randint(-cdelta,cdelta) if clipok: # this line may clip r = random.randint(crmin,crmax) else: # I want r to always fall within image boundary maxr_noclip = min([cx, size-cx, cy, size-cy]) rmax = min([crmax, maxr_noclip]) rmin = min([crmin, rmax]) r = random.randint(rmin,rmax) fgcolor = FgColor() if shape == "rectangle": prefix="s" draw_circle(img, (cx, cy), r, fgcolor, bb=True, fuzzy=fuzzy) elif shape == "circle": prefix="c" draw_circle(img, (cx, cy), r, fgcolor, bb=False, fuzzy=fuzzy) elif shape == "triangle": prefix="t" draw_triangle(img, (cx, cy), r, fgcolor, fuzzy=fuzzy) elif shape == "arc": prefix="a" draw_arc(img, (cx, cy), r, fgcolor, fuzzy=fuzzy) # img.save(outdir + "/" + prefix + "_%04d_%03d_%03d" % (i,fgcolor,bgcolor) + ".jpg") return img """ Explanation: now we test the network with a generated shape The following part shows the network performance for classifying a randomly generated shape. We first generate three types of shapes (circle, rectangle, triangle) and then classify them individually and get the result. 
remember our label assignments are: circle --> 0 rectangle --> 1 triangle --> 2 random shape generation functions definition End of explanation """ size=305 clipok=False # shapes are defined relative to a bounding/inscribing circle crmin=40 crmax=70 nrcircle=1 fuzziness=0.8 classes = ['circle', 'rectangle', 'triangle'] """ Explanation: parameters setting parameters End of explanation """ net = None caffe.set_mode_gpu() net = caffe.Net(model_deploy, # defines the structure of the model model_weights, # contains the trained weights caffe.TEST) # use test mode (e.g., don't perform dropout) # set the input shape net.blobs['data'].reshape(1, # batch size 3, # 3-channel (BGR) images 305, 305) # image size is 305x305 """ Explanation: load model End of explanation """ img = GenShapeFA("rectangle", crmin, crmax, nrcircle, size, clipok, fuzzy=fuzziness) img = np.asarray(img) img = np.tile(img, (3,1,1)) img = img.astype('float32')/255 img = img.transpose(1,2,0) plt.imshow(img) plt.axis("off") print img.shape """ Explanation: generate random shape we first generate the shape and then change the format to fit the caffe requirement format: float32 color channel: BGR End of explanation """ # load the mean for subtraction mu = np.load(os.path.join(workspace, 'data/shape_mean.npy')) mu = mu.mean(1).mean(1) # average over pixels to obtain the mean (BGR) pixel values print 'mean-subtracted values:', zip('BGR', mu) # create transformer for the input called 'data' transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape}) transformer.set_transpose('data', (2,0,1)) # move image channels to outermost dimension transformer.set_mean('data', mu) # subtract the dataset-mean value in each channel transformer.set_raw_scale('data', 255) # rescale from [0, 1] to [0, 255] transformer.set_channel_swap('data', (2,1,0)) # swap channels from RGB to BGR transformed_image = transformer.preprocess('data', img) # copy the image data into the memory allocated for the net 
net.blobs['data'].data[...] = transformed_image ### perform classification output = net.forward() output_prob = output['prob'][0] # the output probability vector for the first image in the batch print 'predicted class is:', classes[output_prob.argmax()] """ Explanation: change the color channel to BGR End of explanation """ def generate_shape(choice): img = GenShapeFA(choice, crmin, crmax, nrcircle, size, clipok, fuzzy=fuzziness) img = np.asarray(img) img = np.tile(img, (3,1,1)) img = img.astype('float32')/255 img = img.transpose(1,2,0) plt.imshow(img) plt.axis("off") return img def classify_shape(transformer, img): transformed_image = transformer.preprocess('data', img) # copy the image data into the memory allocated for the net net.blobs['data'].data[...] = transformed_image ### perform classification output = net.forward() output_prob = output['prob'][0] # the output probability vector for the first image in the batch print 'predicted class is:', classes[output_prob.argmax()] return """ Explanation: Next, we integrate all the process into a function and make it clear End of explanation """ img = generate_shape("circle") classify_shape(transformer, img) img = generate_shape("circle") classify_shape(transformer, img) img = generate_shape("circle") classify_shape(transformer, img) """ Explanation: see the classification performance !!! 
circle
End of explanation
"""
img = generate_shape("rectangle")
classify_shape(transformer, img)
img = generate_shape("rectangle")
classify_shape(transformer, img)
img = generate_shape("rectangle")
classify_shape(transformer, img)
"""
Explanation: rectangle
End of explanation
"""
img = generate_shape("triangle")
classify_shape(transformer, img)
img = generate_shape("triangle")
classify_shape(transformer, img)
img = generate_shape("triangle")
classify_shape(transformer, img)
"""
Explanation: triangle
End of explanation
"""
img = generate_shape("triangle")
classify_shape(transformer, img)
img = generate_shape("triangle")
classify_shape(transformer, img)
img = generate_shape("rectangle")
classify_shape(transformer, img)
img = generate_shape("circle")
classify_shape(transformer, img)
"""
Explanation: wrong classification examples Sometimes the model outputs a wrong answer, since the overall accuracy does not reach 100 percent, as shown below.
End of explanation
"""
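The Transformer configured earlier chains four array operations before the forward pass: rescale to [0, 255], swap RGB to BGR, subtract the per-channel mean, and move channels first. A dependency-free sketch of the same chain on a tiny nested-list image; the mean and scale values here are placeholders, not the notebook's actual shape_mean.npy:

```python
def preprocess(img, mean_bgr, raw_scale=255.0):
    # img is an H x W x 3 nested list of RGB floats in [0, 1]
    h, w = len(img), len(img[0])
    out = [[[0.0] * w for _ in range(h)] for _ in range(3)]  # C x H x W
    for y in range(h):
        for x in range(w):
            r, g, b = img[y][x]
            bgr = (b, g, r)  # channel swap: RGB -> BGR
            for c in range(3):
                # rescale to [0, 255], then subtract the per-channel mean
                out[c][y][x] = bgr[c] * raw_scale - mean_bgr[c]
    return out

# one "red" pixel, with made-up BGR channel means
chw = preprocess([[(1.0, 0.0, 0.0)]], mean_bgr=(10.0, 20.0, 30.0))
print(chw)  # [[[-10.0]], [[-20.0]], [[225.0]]]
```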
kwant-project/kwant-tutorial-2016
3.4.graphene_qshe.ipynb
bsd-2-clause
# We'll have 3D plotting and 2D band structure, so we need a handful of helper functions. %run matplotlib_setup.ipy from types import SimpleNamespace from ipywidgets import interact import matplotlib from matplotlib import pyplot from mpl_toolkits import mplot3d import numpy as np import kwant from wraparound import wraparound def momentum_to_lattice(k): """Transform momentum to the basis of reciprocal lattice vectors. See https://en.wikipedia.org/wiki/Reciprocal_lattice#Generalization_of_a_dual_lattice """ B = np.array(graphene.prim_vecs).T A = B.dot(np.linalg.inv(B.T.dot(B))) return np.linalg.solve(A, k) def dispersion_2D(syst, args=None, lim=1.5*np.pi, num_points=200): """A simple plot of 2D band structure.""" if args is None: args = [] momenta = np.linspace(-lim, lim, num_points) energies = [] for kx in momenta: for ky in momenta: lattice_k = momentum_to_lattice([kx, ky]) h = syst.hamiltonian_submatrix(args=(list(args) + list(lattice_k))) energies.append(np.linalg.eigvalsh(h)) energies = np.array(energies).reshape(num_points, num_points, -1) emin, emax = np.min(energies), np.max(energies) kx, ky = np.meshgrid(momenta, momenta) fig = pyplot.figure() axes = fig.add_subplot(1, 1, 1, projection='3d') for band in range(energies.shape[-1]): axes.plot_surface(kx, ky, energies[:, :, band], cstride=2, rstride=2, cmap=matplotlib.cm.RdBu_r, vmin=emin, vmax=emax, linewidth=0.1) """ Explanation: Graphene and Kane-Mele model We are going to: * Deal with 2D band structures * Use a more general lattice (honeycomb lattice of graphene) * Construct the very first topological insulator * Learn about topological protection in presence of time-reversal symmetry Parts of this tutorial are based on the online course on topology in condensed matter End of explanation """ graphene = kwant.lattice.general([[1, 0], [1/2, np.sqrt(3)/2]], # lattice vectors [[0, 0], [0, 1/np.sqrt(3)]]) # Coordinates of the sites a, b = graphene.sublattices """ Explanation: Graphene Quantum Hall effect 
creates protected edge states using a strong magnetic field. Another way to create those is to start from a system with Dirac cones, and open gaps in those. There is a real (and a very important) two-dimensional system which has Dirac cones: graphene. So in this chapter we will take graphene and make it into a topological system with chiral edge states. Graphene is a single layer of carbon atoms arranged in a honeycomb lattice. It is a triangular lattice with two atoms per unit cell, type $A$ and type $B$, represented by red and blue sites in the figure: End of explanation """ bulk_graphene = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs)) bulk_graphene[graphene.shape((lambda pos: True), (0, 0))] = 0 bulk_graphene[graphene.neighbors(1)] = 1 dispersion_2D(wraparound(bulk_graphene).finalized()) """ Explanation: We now create a Builder with the translational symmetries of graphene, and calculate the bulk dispersion of graphene. Hence, the wave function in a unit cell can be written as a vector $(\Psi_A, \Psi_B)^T$ of amplitudes on the two sites $A$ and $B$. Taking a simple tight-binding model where electrons can hop between neighboring sites with hopping strength $t$, one obtains the Bloch Hamiltonian: $$ H_0(\mathbf{k})= \begin{pmatrix} 0 & h(\mathbf{k}) \ h^\dagger(\mathbf{k}) & 0 \end{pmatrix}\,, $$ with $\mathbf{k}=(k_x, k_y)$ and $$h(\mathbf{k}) = t_1\,\sum_i\,\exp\,\left(i\,\mathbf{k}\cdot\mathbf{a}_i\right)\,.$$ Here $\mathbf{a}_i$ are the three vectors in the figure, connecting nearest neighbors of the lattice [we set the lattice spacing to one, so that for instance $\mathbf{a}_1=(1,0)$]. 
Introducing a set of Pauli matrices $\sigma$ which act on the sublattice degree of freedom, we can write the Hamiltonian in a compact form as $$H_0(\mathbf{k}) = t_1\,\sum_i\,\left[\sigma_x\,\cos(\mathbf{k}\cdot\mathbf{a}_i)-\sigma_y \,\sin(\mathbf{k}\cdot\mathbf{a}_i)\right]\,.$$ The energy spectrum $E(\mathbf{k}) = \pm \,\left|h(\mathbf{k})\right|$ gives rise to the famous band structure of graphene, with the two bands touching at the six corners of the Brillouin zone: End of explanation """ zigzag_ribbon = kwant.Builder(kwant.TranslationalSymmetry([1, 0])) zigzag_ribbon[graphene.shape((lambda pos: abs(pos[1]) < 9), (0, 0))] = 0 zigzag_ribbon[graphene.neighbors(1)] = 1 kwant.plotter.bands(zigzag_ribbon.finalized()); """ Explanation: Let's also create 1D ribbons of graphene. There are two nontrivial directions: armchair and zigzag End of explanation """ nnn_hoppings_a = (((-1, 0), a, a), ((0, 1), a, a), ((1, -1), a, a)) nnn_hoppings_b = (((1, 0), b, b), ((0, -1), b, b), ((-1, 1), b, b)) nnn_hoppings = nnn_hoppings_a + nnn_hoppings_b def nnn_hopping(site1, site2, params): return 1j * params.t_2 def onsite(site, params): return params.m * (1 if site.family == a else -1) def add_hoppings(syst): syst[graphene.neighbors(1)] = 1 syst[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = nnn_hopping haldane = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs)) haldane[graphene.shape((lambda pos: True), (0, 0))] = onsite haldane[graphene.neighbors(1)] = 1 haldane[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = nnn_hopping @interact(t_2=(0, .08, .01)) def qshe_dispersion(t_2=0, m=.2): dispersion_2D(wraparound(haldane).finalized(), [SimpleNamespace(t_2=t_2, m=m)], num_points=100) """ Explanation: Your turn! Calculate a dispersion of an armchair nanoribbon. You'll need to figure out what is its period. Your turn! Add potentials of opposite sign to the zigzag nanoribbon, and see what happens to the dispersion relation. 
We have now opened a gap, but there are no protected states inside it. Haldane model of anomalous quantum Hall effect The more interesting way to open the gap in graphene dispersion is introduced by Duncan Haldane, Phys. Rev. Lett. 61, 2015 (1988) The idea of this model is to break inversion symmetry that protects the Dirac points by adding next-nearest neighbor hoppings End of explanation """ # Pauli matrices s0 = np.identity(2) sx = np.array([[0, 1], [1, 0]]) sy = np.array([[0, -1j], [1j, 0]]) sz = np.diag([1, -1]) def spin_orbit(site1, site2, params): return 1j * params.t_2 * sz def onsite(site, params): return s0 * params.m * (1 if site.family == a else -1) def add_hoppings(syst): syst[graphene.neighbors(1)] = s0 syst[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = spin_orbit bulk_kane_mele = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs)) bulk_kane_mele[graphene.shape((lambda pos: True), (0, 0))] = onsite add_hoppings(bulk_kane_mele) @interact(t_2=(0, .3, .01)) def qshe_dispersion(t_2=0, m=.2): dispersion_2D(wraparound(bulk_kane_mele).finalized(), [SimpleNamespace(t_2=t_2, m=m)], num_points=100) zigzag_kane_mele = kwant.Builder(kwant.TranslationalSymmetry([1, 0])) zigzag_kane_mele[graphene.shape((lambda pos: abs(pos[1]) < 9), (0, 0))] = onsite add_hoppings(zigzag_kane_mele) @interact(t_2=(0, .12, .01)) def qshe_zigzag_dispersion(t_2=0, m=.2): kwant.plotter.bands(zigzag_kane_mele.finalized(), [SimpleNamespace(t_2=t_2, m=m)]) """ Explanation: Now we see that the gap closes in one of the Dirac cones, and does not close in the other half. Let's see what this means for the dispersion relation in a ribbon. Your turn! Plot a dispersion of either nanoribbon, and see what happens to the edge states Quantum spin Hall effect in Kane-Mele model (Following: C.L. Kane and E.J. Mele, Phys. Rev. Lett. 95, 226801 (2005)) Haldane model breaks time-reversal symmetry and inversion symmetry. 
Lattice-scale hoppings that break time-reversal symmetry do not appear in non-magnetic materials. We can make the Hamiltonian invariant under inversion and time-reversal by making the next-nearest neighbor hoppings spin-dependent. So if we take those hoppings equal to $i t_2 \sigma_z$, we get the Kane-Mele model.
End of explanation
"""
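Written out, the spin-dependent next-nearest-neighbor hopping described above is the Kane-Mele spin-orbit term, in the standard form of the Kane and Mele paper cited in this section (shown for reference):

```latex
H_{\mathrm{SO}} = i\, t_2 \sum_{\langle\langle i,j \rangle\rangle} \nu_{ij}\, c_i^{\dagger}\, \sigma_z\, c_j ,
\qquad \nu_{ij} = \pm 1 ,
```

where the sign $\nu_{ij}$ depends on the orientation of the two-bond path connecting the next-nearest neighbors $i$ and $j$. Since $\sigma_z$ enters with opposite sign for the two spin species, each spin sector is a copy of the Haldane model with opposite chirality, so time-reversal symmetry is preserved overall.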
tpin3694/tpin3694.github.io
python/pandas_lowercase_column_names.ipynb
mit
# Import modules
import pandas as pd

# Set ipython's max row display
pd.set_option('display.max_row', 1000)

# Set iPython's max column width to 50
pd.set_option('display.max_columns', 50)
"""
Explanation: Title: Lower Case Column Names In Pandas Dataframe
Slug: pandas_lowercase_column_names
Summary: Lower Case Column Names In Pandas Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create an example dataframe
data = {'NAME': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
        'YEAR': [2012, 2012, 2013, 2014, 2014],
        'REPORTS': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
"""
Explanation: Create an example dataframe
End of explanation
"""
# Map the lowering function to all column names
df.columns = map(str.lower, df.columns)
df
"""
Explanation: Lowercase column names
End of explanation
"""
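The same idea works on any list of column names; a small stdlib sketch (a hypothetical helper, not part of the notebook) that lowercases, trims, and snake_cases names before they are assigned back to `df.columns`:

```python
def normalize_columns(names):
    # lowercase, trim whitespace, and replace spaces with underscores
    return [n.strip().lower().replace(" ", "_") for n in names]

print(normalize_columns(["NAME", "Report Count", " YEAR "]))  # ['name', 'report_count', 'year']
```

In pandas itself, `df.rename(columns=str.lower)` performs the same rename and returns a new DataFrame instead of mutating `df.columns` in place.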
csiu/100daysofcode
misc/day44_querying_database.ipynb
mit
dbname="kick" tblname="info" engine = create_engine( 'postgresql://localhost:5432/{dbname}'.format(dbname=dbname)) # Connect to database conn = psycopg2.connect(dbname=dbname) cur = conn.cursor() """ Explanation: Questions to answer What kind of projects are popular on Kickstarter? How much are people asking for? What kind of projects tend to be more funded? Connect to database End of explanation """ cur.execute("SELECT column_name,data_type FROM information_schema.columns WHERE table_name = '{table}';".format(table=tblname)) rows = cur.fetchall() pd.DataFrame(rows, columns=["column_name", "data_type"]) """ Explanation: Remind myself of the columns in the table: End of explanation """ cur.execute("SELECT COUNT(*) from {table}".format(table=tblname)) cur.fetchone() """ Explanation: Number of records in table: End of explanation """ cur.execute("SELECT topic, COUNT(*) from {table} GROUP BY topic ORDER BY count DESC;".format(table=tblname)) rows = cur.fetchall() df = pd.DataFrame(rows, columns=["topic", "count"]) # Plot findings plt.rcParams["figure.figsize"] = [17,5] df.plot(kind="bar", x="topic", y="count", legend=False) plt.ylabel("Kickstarter projects") plt.xlabel("Topic") plt.title("Kickstarter projects by topic") plt.tick_params(axis='x', labelsize=7) "There are {num_topics} different types of Kickstarter projects".format(num_topics=df.shape[0]) # Most popular project topic is df[df["count"] == df["count"].max()] # Most rare project topic is df[df["count"] == df["count"].min()] """ Explanation: Question 1: Project topics How many different types of projects are on Kickstarter? What is most popular? What is most rare? End of explanation """ cur.execute("SELECT id, blurb, goal*static_usd_rate as goal_usd FROM {table} WHERE topic = '{topic}'".format(table=tblname, topic="Taxidermy")) rows = cur.fetchall() for row in rows: row_id, blurb, goal = row print(">>> $%d | id: %s" % (goal, row_id), blurb, sep="\n") """ Explanation: What are the rare projects? 
End of explanation """ sql = "SELECT id, topic, goal*static_usd_rate as goal_usd FROM {table}".format(table=tblname) cur.execute(sql) rows = cur.fetchall() df = pd.DataFrame(rows, columns=["id", "topic", "goal_usd"]) # Asking average np.log10(df.goal_usd).plot.kde() plt.xlabel("log(funding goal in USD)") "Most projects are asking for: $%d - $%d" % (10**2.5, 10**5) sns.barplot(x="topic", y="goal_usd", data=df.groupby("topic").mean().reset_index().sort_values(by="goal_usd", ascending=False)) _ = plt.xticks(rotation='vertical') plt.ylabel("Average goal (USD)") plt.xlabel("Kickstarter project topic") plt.title("Funding goals on Kickstarter by topic") plt.tick_params(axis='x', labelsize=7) """ Explanation: Question 2: Project funding goals How much are people asking for in general? by topics? End of explanation """ sql = "SELECT id, topic, goal, pledged, pledged/goal as progress FROM info ORDER BY progress DESC;" cur.execute(sql) rows = cur.fetchall() df = pd.DataFrame(rows, columns=["id", "topic", "goal", "pledged", "progress"]) df["well_funded"] = df.progress >= 1 plt.rcParams["figure.figsize"] = [17,5] sns.boxplot(x="topic", y="progress", data=df[df.well_funded].sort_values(by="topic")) _ = plt.xticks(rotation='vertical') plt.yscale('log') plt.ylabel("Percent of funding goal") plt.xlabel("Topic") plt.title("Projects that were successfully funded by Topic") plt.tick_params(axis='x', labelsize=7) sns.barplot(x="topic", y="progress", data=df[df.well_funded].groupby("topic").count().reset_index().sort_values(by="progress", ascending=False)) _ = plt.xticks(rotation='vertical') plt.ylabel("Project that were successfully funded") plt.xlabel("Topic") plt.title("Projects that were successfully funded by Topic") plt.tick_params(axis='x', labelsize=7) plt.rcParams["figure.figsize"] = [17,5] sns.boxplot(x="topic", y="progress", data=df[np.invert(df.well_funded)].sort_values(by="topic")) _ = plt.xticks(rotation='vertical') plt.ylabel("Percent of funding goal met") 
plt.xlabel("Topic")
plt.title("Projects that have yet to meet their funding goals")
plt.tick_params(axis='x', labelsize=7)
sns.barplot(x="topic", y="progress", data=df[np.invert(df.well_funded)].groupby("topic").count().reset_index().sort_values(by="progress", ascending=False))
_ = plt.xticks(rotation='vertical')
plt.ylabel("Projects that were not yet successfully funded")
plt.xlabel("Topic")
plt.title("Projects that have yet to meet their funding goals")
plt.tick_params(axis='x', labelsize=7)
"""
Explanation: "Movie Theaters" and "Space exploration" have the highest average funding goals
Question 3: Funding success What tends to get funded?
End of explanation
"""
# close communication with the PostgreSQL database server
cur.close()
# commit the changes
conn.commit()
# close connection
conn.close()
"""
Explanation: Close connection
End of explanation
"""
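The GROUP BY / ORDER BY pattern used throughout this notebook does not depend on Postgres; the same query shape can be tried against an in-memory SQLite database (illustrative rows, not the Kickstarter table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE info (topic TEXT, goal REAL, pledged REAL)")
cur.executemany(
    "INSERT INTO info VALUES (?, ?, ?)",
    [("Music", 100, 150), ("Music", 200, 100), ("Film", 50, 200)],
)
# projects per topic, most common first -- same shape as the Question 1 query above
cur.execute("SELECT topic, COUNT(*) AS n FROM info GROUP BY topic ORDER BY n DESC")
rows = cur.fetchall()
print(rows)  # [('Music', 2), ('Film', 1)]
conn.close()
```

Note the `?` placeholders in the INSERT: passing parameters this way (psycopg2's equivalent placeholder is `%s`) avoids the SQL-injection risk of building queries with `.format(...)` as the notebook does.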
plopd/music-mining-massive-datasets
Duplicate Detection with LSH Cosine Similarity.ipynb
mit
data_path = os.path.join('MillionSongSubset', 'AdditionalFiles', 'subset_msd_summary_file.h5') features = ['duration', 'end_of_fade_in','key', 'loudness', 'mode', 'start_of_fade_out', 'tempo', 'time_signature'] verbose = False """ Explanation: Reading the data data has to be a .h5 data file. data_path should contain the path to this .h5 data file. End of explanation """ def get_feature_matrix(feature, data): ''' Reads the data and the feature names and returns the track ids and the feature matrix. The track_id field from the data is mandatory, therefore it will always be included Args: feature_names(list of strings): list containing the feature names that will be included in the feature matrix. data(pandas.io.pytables.HDFStore table): table containing the data. Returns: (numpy.ndarray, numpy.ndarray): (N, 1) of track_ids, feature matrix (N, D). ''' if 'track_id' in feature: songs = np.asarray(data[osp.join('analysis','songs')][feature]) else: songs = np.asarray(data[osp.join('analysis','songs')][['track_id'] + feature]) return np.array(songs[:, 0]), np.array(songs[:, 1:], dtype=np.float64) def get_random_vector(n): ''' Returns a vector with normal distributed values {-1,1}. Args: n (int) : size of the vector. Returns: ndarray : list of length n of normal distributed values {-1,1}. ''' return 2*np.random.randint(0, 2, n) - 1 def cosine_angle(a, b): ''' Returns the cosine of the angle of two given vectors Args: a(numpy.ndarray): vector of real values. b(numpy.ndarray): vector of real values. Returns: double: the cosine of the angle between a and b. ''' return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)) def cosine_distance(a, b): ''' Returns the cosine distance between two vectors. Args: a(numpy.ndarray): vector of real values. b(numpy.ndarray): vector of real values. Returns: double: the cosine distance of a and b. 
''' return 1.0 - cosine_angle(a, b) def get_hash_bands(sketch, r, b): ''' Computes the signature matrix (#samples, #bands) with hash-values for the different song based on the respective bands. Args: sketch(numpy.ndarray): sketch matrix (#samples, #rows*#bands) with values in domain {0,1}. r(int): number of rows b(int): number of bands Returns: (numpy.ndarray, numpy.ndarray): (#samples, #bands) with hash-values for the different song based on the respective bands. ''' twos = np.array([1<<i for i in range(r)]) hashes = np.zeros((np.size(sketch, 0), b)) for i in range(b): hashes[:,i] = np.dot(sketch[:, (r*i):(r*(i+1))], twos) return hashes.astype(np.uint64) ''' Operations: - Get data - Build feature matrix - 0-1 normalize it track_ids: matrix(#samples x 1) feature_matrix: matrix(#samples x #features) ''' songs = pd.HDFStore(data_path) track_ids, feature_matrix = get_feature_matrix(features, songs) feature_matrix = preprocessing.scale(feature_matrix) if verbose: print("Shape track_ids:",track_ids.shape) print("Shape feature matrix:",feature_matrix.shape) """ Explanation: Helper functions End of explanation """ # data and algorithm parameters ''' D = number of features N = number of samples b = number of bands r = number of rows eps = angle threshold(degrees) ''' D = np.size(feature_matrix,1) N = np.size(feature_matrix, 0) b = 3 r = 64 eps = 2 ''' Operations: - Generate matrix of random vectors with values in {-1,1}. RV: matrix(#bands*#rows x n_features) ''' RV = np.array([get_random_vector(D) for i in range(b*r)]) if verbose: print("Shape RV:",np.shape(RV)) print("Random vectors matrix RV:\n",RV) ''' Operations: - Generate sketch matrix, by performing Clip sketch matrix to 0-1 range for hashing. 
Dimensionality: n_samples x n_bands*n_rows ''' Sketch = np.dot(feature_matrix, RV.T) Sketch[Sketch < 0] = -1 Sketch[Sketch > 0] = 1 Sketch[Sketch == 0] = np.random.randint(0,2)*2-1 if verbose: print("Shape Sketch:",Sketch.shape) print("Sketch:\n",Sketch) # clip values of Sketch matrix in domain {0,1} to easily hash them. Sketch[Sketch < 0] = 0 if verbose: print("Shape Binary Sketch:",Sketch.shape) print("Binary Sketch:\n",Sketch) hb = get_hash_bands(Sketch,r, b) if verbose: print("Shape hb:",hb.shape) print("hb:\n",hb) """ Explanation: LSH Cosine Similarity End of explanation """ ''' candidates(dict): Dictionary with key=(song_id,song_id), value=cosine_distance(song_id,song_id) duplicates(list): List of tuples (songid, songid) buckets(dict) : Dictionary with key=band_id, value=dict with key=hash_key, value = list of song_id ''' candidates = {} duplicates = [] buckets = { i : {} for i in range(b) } start = time.time() for i in range(b): for j in range(N): hash_key = hb[j,i] if hash_key not in buckets[i]: buckets[i][hash_key] = [] buckets[i][hash_key].append(j) for candidates_list in buckets[i].values(): if len(candidates_list) > 1: for _i in range(len(candidates_list)): for _j in range(_i+1,len(candidates_list)): songA = candidates_list[_i] songB = candidates_list[_j] if (songA,songB) not in candidates: candidates[(songA,songB)] = cosine_distance(feature_matrix[songA,:],feature_matrix[songB,:]) cos_eps_dist = 1-math.cos(math.radians(eps)) for key in candidates.keys(): if candidates[key] < cos_eps_dist: songA = key[0] songB = key[1] duplicates.append((songA,songB)) print("LSH Duration:", time.time() - start,"sec") print("Nr. candidates:", len(candidates.keys())) print("Nr. duplicates:",len(duplicates)) """ Explanation: Algorithm Description for each band b we store a hash_table in the dictionary buckets[b]. for each band b: for each song s: add song s to the bucket to which it hashes in the dictionary buckets[b]. 
For each key in buckets[b]: if it contains more than one element, it contains candidates for duplication. For all elements in this list, generate all unordered pairs. For each such pair, if it is not already contained in the candidates dictionary, add the pair with the corresponding cosine distance. For each pair in candidates: check whether its cosine distance is below the cosine distance of epsilon, in which case we consider it a duplicate.
End of explanation
"""
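The sketch matrix above works because, for a random hyperplane, the probability that two vectors receive different sketch bits equals their angle divided by pi. A quick self-contained check of that identity (Gaussian normals are used here so the identity is exact; the notebook's {-1, 1} random vectors approximate it):

```python
import math
import random

random.seed(0)

def same_side(v, w, r):
    # do v and w land on the same side of the hyperplane with normal r?
    sv = sum(a * b for a, b in zip(v, r)) >= 0
    sw = sum(a * b for a, b in zip(w, r)) >= 0
    return sv == sw

a = (1.0, 0.0)
theta = math.pi / 3  # 60 degrees between a and b
b = (math.cos(theta), math.sin(theta))

trials = 20000
split = sum(1 for _ in range(trials)
            if not same_side(a, b, (random.gauss(0, 1), random.gauss(0, 1))))
estimate = split / trials
print(estimate, theta / math.pi)  # the estimate should land close to 1/3
```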
nicolas998/wmf
Examples/Calibracion_Barbosa_NSGAII.ipynb
gpl-3.0
%matplotlib inline
import numpy as np
import pylab as pl
from wmf import wmf
import pandas as pnd
# Tools for DEAP
from deap import base, creator
import random
from deap import tools
"""
Explanation: BARBOSA NSGAII calibration This is a test of how the NSGAII algorithm can be implemented for the calibration of a hydrological model.
End of explanation
"""
# Load the basin and set it up
cu = wmf.SimuBasin(rute='/media/nicolas/Home/nicolas/01_SIATA/nc_cuencas/Cuenca_AMVA_Barbosa_C.nc')
wmf.models.show_storage = 1
wmf.models.separate_fluxes = 1
wmf.models.dt=300.0
# Rainfall file paths
ruta_lluvia = '/media/nicolas/Home/nicolas/01_SIATA/bin_rain/Barbosa60m/Barbosa60m_201702210500-201702212000.bin'
ruta_hdr = '/media/nicolas/Home/nicolas/01_SIATA/bin_rain/Barbosa60m/Barbosa60m_201702210500-201702212000.hdr'
# Evaluation node
p =np.where(wmf.models.control[wmf.models.control<>0] == 371)
nodo=p[0]
nodo = nodo+1
# Observed streamflow
Qaula = pnd.read_msgpack('/media/nicolas/Home/nicolas/01_SIATA/series/Q_Aula-20170221-20170222.bin')
Qaula = Qaula.rolling(window=3).median()
Qobs = Qaula['2017-02-21-15:00':'2017-02-21-21:20'].values
Qobs.shape
"""
Explanation: Preparing the basin for simulation
End of explanation
"""
# Conditions
cu.set_Storage(wmf.models.max_capilar*0.07,0) # I want to understand this..
cu.set_Storage(1500,3)
"""
Explanation: Storage conditions
End of explanation
"""
# Start and number of steps
inicio = 125
npasos = 50
# Set up the element
Ns = wmf.nsgaii_element(ruta_lluvia, np.roll(Qobs[5:55], -5), npasos, inicio, cu, evp = [0.00001, 0.0002], infil = [80, 200], perco = [10, 50], losses = [0,0], velRun = [0.2, 3], velSub = [0.5, 4], velSup = [0.2, 0.2], velStream = [0.99, 0.99], Hu = [1,1], Hg = [1,1], rangosMutacion=[[0.00001, 0.0002], [80, 200], [10, 50], [0, 0], [0.2, 3], [0.5, 4], [0.2, 0.2], [0.99, 0.99], [1.0, 1.0], [1.0, 1.0]],)
#probCruce = [0.5, 0.2, 0.3, 1.0, 0.2, 0.2, 1.0, 1.0, 1.0],
#probMutacion = [0.5, 0.2, 0.3, 1.0, 0.2, 0.2, 1.0, 1.0, 1.0])
"""
Explanation: Execution: Calibration: model calibration parameters, in order: - Evaporation. - Infiltration. - Percolation. - Losses. - Surface velocity. - Sub-surface velocity. - Groundwater velocity. - Channel velocity. - Max capillary storage. - Max gravitational storage. NSGAII test From wmf we bring in a new class, which is the tool that gives SimuBasin the instructions to carry out the automatic multi-objective calibration process. Below, the number of steps, the simulation start, and the basic simulation variables are set, such as: - the path to the binary rainfall file. - the observed streamflow against which the algorithm is compared. - the simulation object cu. - the random ranges used to build individuals, and their mutation ranges.
End of explanation
"""
pop, Qsim, fit = cu.Calib_NSGAII(Ns, nodo, pop_size=40, process=20, MUTPB=0.5, CXPB=0.5)
"""
Explanation: Next, the calibration algorithm is launched; it belongs to the object itself and is configured with: - Ns: the object with the instructions for the evolution. - nodo: the simulated basin output that will be used to evaluate the model. - pop_size: population size. - process: number of system cores to be used in parallel. 
- MUTPB: generic probability of mutating an individual. - CXPB: generic probability of crossover between two individuals.
End of explanation
"""
for i in Qsim:
    pl.plot(i, 'b')
pl.plot(np.roll(Qobs[5:45], -5), 'r', lw = 3)
"""
Explanation: The result obtained for the 40 realizations is shown below.
End of explanation
"""
fit = np.array(fit).T
pl.scatter(np.array(fit).T[0],np.array(fit).T[1] )
pl.grid(True)
"""
Explanation: Figure 1: Simulation results (blue) vs observed streamflow.
End of explanation
"""
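The fitness scatter plotted above is a two-objective cloud, and NSGA-II keeps its non-dominated members. A minimal Pareto filter for minimization, as a pure-Python illustration (not the DEAP implementation that wmf uses internally):

```python
def dominates(p, q):
    # p dominates q (minimization): no worse in every objective, strictly better in one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0) and drops out
```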
junghao/fdsn
examples/GeoNet_FDSN_demo_station.ipynb
mit
from obspy import UTCDateTime
from obspy.clients.fdsn import Client as FDSN_Client
from obspy import read_inventory
"""
Explanation: GeoNet FDSN webservice with Obspy demo - Station Service This demo introduces some simple code that requests data using GeoNet's FDSN webservices and the obspy module in python. This notebook uses Python 3. Getting Started - Import Modules
End of explanation
"""
client = FDSN_Client("GEONET")
"""
Explanation: Define GeoNet FDSN client
End of explanation
"""
inventory = client.get_stations(latitude=-42.693,longitude=173.022,maxradius=0.5, starttime = "2016-11-13 11:05:00.000",endtime = "2016-11-14 11:00:00.000")
print(inventory)
_=inventory.plot(projection="local")
"""
Explanation: Accessing Station Metadata Use the station service to access station metadata from GeoNet stations. Note that the metadata provided is predominantly associated with the data types available from the FDSN archive, and therefore does not include things such as geodetic station information. This example gets all stations that were operating at the time of the Kaikoura earthquake and that are located within a 0.5 degree radius of the epicentre. It lists the station codes and plots them on a map.
End of explanation
"""
inventory = client.get_stations(station="KUZ",level="response", starttime = "2016-11-13 11:05:00.000",endtime = "2016-11-14 11:00:00.000")
print(inventory)
"""
Explanation: The following examples dive into retrieving different information from the inventory object. This object is based on FDSN stationXML and therefore can provide much the same information. To get all available information into the inventory you will want to request data down to the response level. The default requests information just to a station level. For more information, see the obspy inventory class. 
This example gets data from a station, KUZ, and prints a summary of the inventory contents.
End of explanation
"""
network = inventory[0]
station = network[0]  # equivalent to inventory[0][0]
num_channels = len(station)
print(station)
"""
Explanation: Now, we can look at more information about the station, such as the time it opened and its location.
End of explanation
"""
channel = station[0]  # equivalent to inventory[0][0][0]
print(channel)
"""
Explanation: We can drill down even further into a particular channel and look at the time it was operating for, whether it was continuously recording, the sample rate, and some basic sensor information.
End of explanation
"""
resp = channel.response
print(resp)
resp.plot(0.001,output="VEL",label='KUZ HHZ')
"""
Explanation: This channel states that there is response information available, so we can look at a summary of the response and plot it.
End of explanation
"""
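The `maxradius` argument used in the `get_stations` call above is measured in degrees of epicentral distance. A stdlib sketch of that great-circle check, with hypothetical station coordinates (not taken from the GeoNet inventory):

```python
import math

def epicentral_distance_deg(lat1, lon1, lat2, lon2):
    # great-circle separation of two points, in degrees of arc
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_d = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

# is a hypothetical station at (-42.9, 173.3) within 0.5 degrees of the epicentre used above?
d = epicentral_distance_deg(-42.693, 173.022, -42.9, 173.3)
print(d, d <= 0.5)
```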
ledeprogram/algorithms
class7/donow/Kandrach_Sasha_7_donow.ipynb
gpl-3.0
import pandas as pd %matplotlib inline import numpy as np from sklearn.linear_model import LogisticRegression """ Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination 1. Import the necessary packages to read in the data, plot, and create a logistic regression model End of explanation """ df = pd.read_csv("hanford.csv") df """ Explanation: 2. Read in the hanford.csv file in the data/ folder End of explanation """ df.describe() df['Exposure'].max() - df['Exposure'].min() df['Mortality'].max() - df['Mortality'].min() df['Exposure'].quantile(q=0.25) df['Exposure'].quantile(q=0.25) df['Exposure'].quantile(q=0.5) df['Exposure'].quantile(q=0.75) iqr_ex = df['Exposure'].quantile(q=0.75) - df['Exposure'].quantile(q=0.25) iqr_ex df['Mortality'].quantile(q=0.25) df['Mortality'].quantile(q=0.5) df['Mortality'].quantile(q=0.75) iqr_mort = df['Mortality'].quantile(q=0.75) - df['Mortality'].quantile(q=0.25) iqr_mort df.std() """ Explanation: <img src="../../images/hanford_variables.png"></img> 3. Calculate the basic descriptive statistics on the data End of explanation """
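The notebook imports LogisticRegression but this chunk stops at descriptive statistics. A dependency-free sketch of the classification step it is heading toward, fit by plain gradient descent on made-up numbers (not the Hanford data, and not sklearn's solver):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    # one-feature logistic regression fit by batch gradient descent
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# made-up stand-in data: exposure level -> "high mortality" label
exposure = [1.0, 2.0, 3.0, 6.0, 8.0, 11.0]
high_mortality = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(exposure, high_mortality)
print(sigmoid(w * 1.0 + b) < 0.5, sigmoid(w * 10.0 + b) > 0.5)  # True True
```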
bollwyvl/ipymd
examples/ex2.notebook.ipynb
bsd-3-clause
# some code in python def f(x): y = x * x return y """ Explanation: Test notebook This is a text notebook. Here are some rich text, code, $\pi\simeq 3.1415$ equations. Another equation: $$\sum_{i=1}^n x_i$$ Python code: End of explanation """ import IPython print("Hello world!") 2*2 def decorator(f): return f @decorator def f(x): pass 3*3 """ Explanation: Random code: javascript console.log("hello" + 3); Python code: End of explanation """ print(4*4) %%bash echo 'hello' """ Explanation: some text End of explanation """ import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.imshow(np.random.rand(5,5,4), interpolation='none'); """ Explanation: An image: Subtitle a list One small link! Two 2.1 2.2 Three and Un Deux End of explanation """
vadim-ivlev/STUDY
coding/.ipynb_checkpoints/hacker rank-checkpoint.ipynb
mit
# This is the only comment that makes sense
def find_index(m,a):
    try:
        return a.index(m)
    except :
        return -1

def find_two_sum(a, s):
    '''
    >>> (1, 4) == find_two_sum([1, 3, 5, 7, 9], 12)
    True
    '''
    if len(a)<2: return (-1,-1)
    idx = dict( (v,i) for i,v in enumerate(a) )
    for i, v in enumerate(a):
        k = idx.get(s - v, -1)
        if k != -1 and k != i:
            return (i, k)
    return (-1, -1)

print(find_two_sum([1, 3, 5, 7, 9], 12))

if __name__ == '__main__':
    import doctest; doctest.testmod()
"""
Explanation: https://www.testdome.com/questions/python/two-sum/14289?questionIds=14288,14289&generatorId=92&type=fromtest&testDifficulty=Easy Write a function that, given a list and a target sum, returns zero-based indices of any two distinct elements whose sum is equal to the target sum. If there are no such elements, the function should return (-1, -1). For example, find_two_sum([1, 3, 5, 7, 9], 12) should return a tuple containing any of the following pairs of indices: 1 and 4 (3 + 9 = 12) 2 and 3 (5 + 7 = 12) 3 and 2 (7 + 5 = 12) 4 and 1 (9 + 3 = 12)
End of explanation
"""
%%javascript
IPython.keyboard_manager.command_shortcuts.add_shortcut('g', {
    handler : function (event) {
        var input = IPython.notebook.get_selected_cell().get_text();
        var cmd = "f = open('.toto.py', 'w');f.close()";
        if (input != "") {
            cmd = '%%writefile .toto.py\n' + input;
        }
        IPython.notebook.kernel.execute(cmd);
        //cmd = "import os;os.system('open -a /Applications/MacVim.app .toto.py')";
        //cmd = "!open -a /Applications/MacVim.app .toto.py";
        cmd = "!code .toto.py";
        IPython.notebook.kernel.execute(cmd);
        return false;
    }}
);

IPython.keyboard_manager.command_shortcuts.add_shortcut('u', {
    handler : function (event) {
        function handle_output(msg) {
            var ret = msg.content.text;
            IPython.notebook.get_selected_cell().set_text(ret);
        }
        var callback = {'output': handle_output};
        var cmd = "f = open('.toto.py', 'r');print(f.read())";
        IPython.notebook.kernel.execute(cmd, {iopub: callback}, {silent: false});
        return false;
    }}
);

# v=getattr(a, 'pop')(1)
s='print 4 7 '
commands={
    'print':print,
    'len':len
}
def exec_string(s):
    global commands
    chunks=s.split()
    func_name=chunks[0] if len(chunks) else 'blbl'
    func=commands.get(func_name,None)
    params=[int(x) for x in chunks[1:]]
    if func:
        func(*params)

exec_string(s)
"""
Explanation: https://stackoverflow.com/questions/28309430/edit-ipython-cell-in-an-external-editor Edit IPython cell in an external editor This is what I came up with. I added 2 shortcuts: 'g' to launch gvim with the content of the current cell (you can replace gvim with whatever text editor you like). 'u' to update the content of the current cell with what was saved by gvim. So, when you want to edit the cell with your preferred editor, hit 'g', make the changes you want to the cell, save the file in your editor (and quit), then hit 'u'. Just execute this cell to enable these features:
End of explanation
"""
M = int(input())
m =set((map(int,input().split())))
N = int(input())
n =set((map(int,input().split())))
m ^ n
S='add 5 6'
method, *args = S.split()
print(method)
print(*map(int,args))
method, [*map(int,args)]
# methods
# (*map(int,args))
# command='add'.split()
# method, args = command[0], list(map(int,command[1:]))
# method, args
for _ in range(2):
    met, *args = input().split()
    print(met, args)
    try:
        pass
        # methods[met](*list(map(int,args)))
    except:
        pass
"""
Explanation: Symmetric Difference https://www.hackerrank.com/challenges/symmetric-difference/problem Task Given 2 sets of integers, M and N, print their symmetric difference in ascending order. The term symmetric difference indicates those values that exist in either M or N but do not exist in both. Input Format The first line of input contains an integer, M. The second line contains M space-separated integers. The third line contains an integer, N. The fourth line contains N space-separated integers. Output Format Output the symmetric difference integers in ascending order, one per line. 
Sample Input

4
2 4 5 9
4
2 4 11 12

Sample Output

5
9
11
12
End of explanation
"""
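The whole task above reduces to Python's set symmetric-difference operator; a sketch with the sample values hard-coded in place of the input() calls:

```python
m = {2, 4, 5, 9}
n = {2, 4, 11, 12}

# ^ is set symmetric difference: values in exactly one of the two sets
for value in sorted(m ^ n):
    print(value)
```

`m ^ n` is equivalent to `m.symmetric_difference(n)`; sorting gives the required ascending order.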
jinntrance/MOOC
coursera/deep-neural-network/quiz and assignments/week 5/Gradient+Checking+v1.ipynb
cc0-1.0
# Packages import numpy as np from testCases import * from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector """ Explanation: Gradient Checking Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking". Let's do it! End of explanation """ # GRADED FUNCTION: forward_propagation def forward_propagation(x, theta): """ Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x) Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: J -- the value of function J, computed using the formula J(theta) = theta * x """ ### START CODE HERE ### (approx. 1 line) J = theta * x ### END CODE HERE ### return J x, theta = 2, 4 J = forward_propagation(x, theta) print ("J = " + str(J)) """ Explanation: 1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. 
Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:

$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.

Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation """ # GRADED FUNCTION: backward_propagation def backward_propagation(x, theta): """ Computes the derivative of J with respect to theta (see Figure 1). Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: dtheta -- the gradient of the cost with respect to theta """ ### START CODE HERE ### (approx. 1 line) dtheta = x ### END CODE HERE ### return dtheta x, theta = 2, 4 dtheta = backward_propagation(x, theta) print ("dtheta = " + str(dtheta)) """ Explanation: Expected Output: <table style=> <tr> <td> ** J ** </td> <td> 8</td> </tr> </table> Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$. End of explanation """ # GRADED FUNCTION: gradient_check def gradient_check(x, theta, epsilon = 1e-7): """ Implement the backward propagation presented in Figure 1. Arguments: x -- a real-valued input theta -- our parameter, a real number as well epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit. ### START CODE HERE ### (approx. 5 lines) thetaplus = theta + epsilon # Step 1 thetaminus = theta - epsilon # Step 2 J_plus = forward_propagation(x, thetaplus) # Step 3 J_minus = forward_propagation(x, thetaminus) # Step 4 gradapprox = (J_plus - J_minus) / 2 / epsilon # Step 5 ### END CODE HERE ### # Check if gradapprox is close enough to the output of backward_propagation() ### START CODE HERE ### (approx. 1 line) grad = backward_propagation(x, theta) ### END CODE HERE ### ### START CODE HERE ### (approx. 
1 line) numerator = np.linalg.norm(grad - gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator / denominator # Step 3' ### END CODE HERE ### if difference < 1e-7: print ("The gradient is correct!") else: print ("The gradient is wrong!") return difference x, theta = 2, 4 difference = gradient_check(x, theta) print("difference = " + str(difference)) """ Explanation: Expected Output: <table> <tr> <td> ** dtheta ** </td> <td> 2 </td> </tr> </table> Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking. Instructions: - First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$ - Then compute the gradient using backward propagation, and store the result in a variable "grad" - Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula: $$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$ You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them. - If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation. End of explanation """ def forward_propagation_n(X, Y, parameters): """ Implements the forward propagation (and computes the cost) presented in Figure 3. 
Arguments: X -- training set for m examples Y -- labels for m examples parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (5, 4) b1 -- bias vector of shape (5, 1) W2 -- weight matrix of shape (3, 5) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) Returns: cost -- the cost function (logistic cost for one example) """ # retrieve parameters m = X.shape[1] W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) # Cost logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y) cost = 1./m * np.sum(logprobs) cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) return cost, cache """ Explanation: Expected Output: The gradient is correct! <table> <tr> <td> ** difference ** </td> <td> 2.9193358103083e-10 </td> </tr> </table> Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. 
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption> Let's look at your implementations for forward propagation and backward propagation. End of explanation """ def backward_propagation_n(X, Y, cache): """ Implement the backward propagation presented in figure 2. Arguments: X -- input datapoint, of shape (input size, 1) Y -- true "label" cache -- cache output from forward_propagation_n() Returns: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables. """ m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients """ Explanation: Now, run backward propagation. End of explanation """ # GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): """ Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. 
x -- input datapoint, of shape (input size, 1) y -- true "label" epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Set-up variables parameters_values, _ = dictionary_to_vector(parameters) grad = gradients_to_vector(gradients) num_parameters = parameters_values.shape[0] J_plus = np.zeros((num_parameters, 1)) J_minus = np.zeros((num_parameters, 1)) gradapprox = np.zeros((num_parameters, 1)) # Compute gradapprox for i in range(num_parameters): # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]". # "_" is used because the function you have to outputs two parameters but we only care about the first one ### START CODE HERE ### (approx. 3 lines) thetaplus = np.copy(parameters_values) # Step 1 thetaplus[i][0] += epsilon # Step 2 J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3 ### END CODE HERE ### # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]". ### START CODE HERE ### (approx. 3 lines) thetaminus = np.copy(parameters_values) # Step 1 thetaminus[i][0] -= epsilon # Step 2 J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3 ### END CODE HERE ### # Compute gradapprox[i] ### START CODE HERE ### (approx. 1 line) gradapprox[i] = (J_plus[i] - J_minus[i]) / 2 / epsilon ### END CODE HERE ### # Compare gradapprox to backward propagation gradients by computing difference. ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad - gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator / denominator # Step 3' ### END CODE HERE ### if difference > 2e-7: print ("\033[93m" + "There is a mistake in the backward propagation! 
difference = " + str(difference) + "\033[0m") else: print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m") return difference X, Y, parameters = gradient_check_n_test_case() cost, cache = forward_propagation_n(X, Y, parameters) gradients = backward_propagation_n(X, Y, cache) difference = gradient_check_n(parameters, gradients, X, Y) """ Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. How does gradient checking work?. As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still: $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them. The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary. <img src="images/dictionary_to_vector.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption> We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that. Exercise: Implement gradient_check_n(). Instructions: Here is pseudo-code that will help you implement the gradient check. For each i in num_parameters: - To compute J_plus[i]: 1. Set $\theta^{+}$ to np.copy(parameters_values) 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. 
Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )). - To compute J_minus[i]: do the same thing with $\theta^{-}$ - Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$ Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$ End of explanation """
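The two ingredients of the check — the centered-difference approximation from equation (1) and the normalized difference from equation (3) — can be sketched numerically on the 1D model $J(\theta) = \theta x$, using only plain numpy and none of the course helper functions:

```python
import numpy as np

x, theta, eps = 2.0, 4.0, 1e-7
J = lambda t: t * x

# equation (1): centered-difference approximation of dJ/dtheta
gradapprox = (J(theta + eps) - J(theta - eps)) / (2.0 * eps)
grad = x  # analytic derivative of J(theta) = theta * x

# equation (3): normalized difference between the two estimates
num = np.linalg.norm(grad - gradapprox)
den = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
difference = num / den
print(difference)  # tiny, well below the 1e-7 threshold
```

For a correct gradient the difference is dominated by floating-point round-off; a buggy analytic gradient (try `grad = 2 * x`) pushes it far above the threshold.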
james-prior/cohpy
20160523-cohpy-speed-of-searching-sets-and-lists-simplified.ipynb
mit
def make_list(n):
    if True:
        return list(range(n))
    else:
        return list(str(i) for i in range(n))

n = int(25e6)
# n = 5
m = (0, n // 2, n-1, n)
a_list = make_list(n)
a_set = set(a_list)
n, m

# Finding something that is in a set is fast.
# The key one is looking for has little effect on the speed.
beginning = 0
middle = n//2
end = n-1
%timeit beginning in a_set
%timeit middle in a_set
%timeit end in a_set

# Finding something that is _not_ in a set is also fast.
%timeit n in a_set

# Searching for something in a list
# starts at the beginning and compares each value.
# The search time depends on where the value is in the list.
# That can be slow.
beginning = 0
middle = n//2
end = n-1
%timeit beginning in a_list
%timeit middle in a_list
%timeit end in a_list

# Finding something that is not in a list is the worst case.
# It has to be compared to all values of the list.
%timeit n in a_list

max_exponent = 6
for n in (10 ** i for i in range(1, max_exponent+1)):
    a_list = make_list(n)
    a_set = set(a_list)
    m = (0, n // 2, n-1, n)
    for j in m:
        print('length is %s, looking for %s' % (n, j))
        %timeit j in a_set
"""
Explanation: This notebook explores the speed of searching for values in sets and lists.
After reading this notebook, watch Brandon Rhodes' videos All Your Ducks In A Row: Data Structures in the Standard Library and Beyond and The Mighty Dictionary.
End of explanation
"""
for n in (10 ** i for i in range(1, max_exponent+1)):
    a_list = make_list(n)
    a_set = set(a_list)
    m = (0, n // 2, n-1, n)
    for j in m:
        print('length is %s, looking for %s' % (n, j))
        %timeit j in a_list
"""
Explanation: Notice that the difference between searching small sets and large sets is not large. This is the magic of Python sets and dictionaries. Read the hash table Wikipedia article for an explanation of how this works.
End of explanation
"""
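The timings above show that set membership is effectively O(1) while list membership is O(n). A middle ground worth knowing: when the list is sorted (as make_list's output is), the standard-library bisect module gives O(log n) membership without building a separate set. A sketch:

```python
import bisect

def in_sorted_list(a, x):
    """Binary-search membership test for a sorted list."""
    i = bisect.bisect_left(a, x)
    return i < len(a) and a[i] == x

a = list(range(1000))
print(in_sorted_list(a, 500))   # True
print(in_sorted_list(a, 2000))  # False: larger than every element
```

Unlike a set, this keeps the list's ordering and memory footprint, at the cost of logarithmic rather than constant-time lookups.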
mne-tools/mne-tools.github.io
0.20/_downloads/d5764d6befb13ad52368247a508e45f6/plot_3d_to_2d.ipynb
bsd-3-clause
# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)

from scipy.io import loadmat
import numpy as np
from matplotlib import pyplot as plt
from os import path as op

import mne
from mne.viz import ClickableImage  # noqa
from mne.viz import (plot_alignment, snapshot_brain_montage, set_3d_view)

print(__doc__)

subjects_dir = mne.datasets.sample.data_path() + '/subjects'
path_data = mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat'

# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: How to convert 3D electrode positions to a 2D image.
Sometimes we want to convert a 3D representation of electrodes into a 2D image. For example, if we are using electrocorticography it is common to create scatterplots on top of a brain, with each point representing an electrode.
In this example, we'll show two ways of doing this in MNE-Python. First, if we have the 3D locations of each electrode then we can use Mayavi to take a snapshot of a view of the brain. If we do not have these 3D locations, and only have a 2D image of the electrodes on the brain, we can use the :class:mne.viz.ClickableImage class to choose our own electrode positions on the image.
End of explanation
"""
mat = loadmat(path_data)
ch_names = mat['ch_names'].tolist()
elec = mat['elec']  # electrode coordinates in meters

# Now we make a montage stating that the sEEG contacts are in head
# coordinate system (although they are in MRI). This is compensated
# by the fact that below we do not specify a trans file so the Head<->MRI
# transform is the identity.
montage = mne.channels.make_dig_montage(ch_pos=dict(zip(ch_names, elec)),
                                        coord_frame='head')
info = mne.create_info(ch_names, 1000., 'ecog').set_montage(montage)
print('Created %s channel positions' % len(ch_names))
"""
Explanation: Load data
First we'll load a sample ECoG dataset which we'll use for generating a 2D snapshot.
End of explanation """ fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir, surfaces=['pial'], meg=False) set_3d_view(figure=fig, azimuth=200, elevation=70) xy, im = snapshot_brain_montage(fig, montage) # Convert from a dictionary to array to plot xy_pts = np.vstack([xy[ch] for ch in info['ch_names']]) # Define an arbitrary "activity" pattern for viz activity = np.linspace(100, 200, xy_pts.shape[0]) # This allows us to use matplotlib to create arbitrary 2d scatterplots fig2, ax = plt.subplots(figsize=(10, 10)) ax.imshow(im) ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm') ax.set_axis_off() # fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage """ Explanation: Project 3D electrodes to a 2D snapshot Because we have the 3D location of each electrode, we can use the :func:mne.viz.snapshot_brain_montage function to return a 2D image along with the electrode positions on that image. We use this in conjunction with :func:mne.viz.plot_alignment, which visualizes electrode positions. End of explanation """ # This code opens the image so you can click on it. Commented out # because we've stored the clicks as a layout file already. 
# # The click coordinates are stored as a list of tuples # im = plt.imread('./brain.png') # click = ClickableImage(im) # click.plot_clicks() # # Generate a layout from our clicks and normalize by the image # print('Generating and saving layout...') # lt = click.to_layout() # lt.save(op.join(layout_path, layout_name)) # To save if we want # # We've already got the layout, load it lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False) x = lt.pos[:, 0] * float(im.shape[1]) y = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position fig, ax = plt.subplots() ax.imshow(im) ax.scatter(x, y, s=120, color='r') plt.autoscale(tight=True) ax.set_axis_off() plt.show() """ Explanation: Manually creating 2D electrode positions If we don't have the 3D electrode positions then we can still create a 2D representation of the electrodes. Assuming that you can see the electrodes on the 2D image, we can use :class:mne.viz.ClickableImage to open the image interactively. You can click points on the image and the x/y coordinate will be stored. We'll open an image file, then use ClickableImage to return 2D locations of mouse clicks (or load a file already created). Then, we'll return these xy positions as a layout for use with plotting topo maps. End of explanation """
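The layout-to-pixel conversion in the cell above (`x = pos[:, 0] * width`, `y = (1 - pos[:, 1]) * height`) has a simple inverse that normalizes raw pixel clicks into [0, 1] layout positions. A standalone sketch of that round trip — note this is an illustrative reconstruction, and ClickableImage.to_layout's exact normalization may differ:

```python
import numpy as np

def clicks_to_layout(xy, im_shape):
    """Normalize pixel click coordinates into [0, 1] layout positions,
    flipping y so the layout origin sits at the bottom-left."""
    xy = np.asarray(xy, dtype=float)
    h, w = im_shape[:2]
    pos = np.empty_like(xy)
    pos[:, 0] = xy[:, 0] / w        # x: left edge -> 0, right edge -> 1
    pos[:, 1] = 1.0 - xy[:, 1] / h  # y: image row 0 (top) -> 1
    return pos

# two clicks on a 400-px-tall, 800-px-wide image
print(clicks_to_layout([(100, 50), (400, 300)], (400, 800)))
```

Applying the plotting cell's `x = pos[:, 0] * w; y = (1 - pos[:, 1]) * h` to the result recovers the original pixel coordinates.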
privong/pythonclub
sessions/06-mcmc/MCMC with emcee.ipynb
gpl-3.0
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import emcee
import corner
"""
Explanation: MCMC Demonstration
Markov Chain Monte Carlo is a useful technique for fitting models to data and obtaining estimates for the uncertainties of the model parameters. There are a slew of python modules and interfaces to do MCMC including:

emcee
PyMC
pymultinest

emcee is fairly straightforward to use, so this demonstration is written to use that. pymultinest is worth investigating if you have large numbers of parameters (say, > 30) and/or multi-modal solution spaces.
Required Packages
For this demo you should have the following packages installed (in addition to standard ones like numpy, scipy, matplotlib, and astropy):

emcee
corner

Optionally, install the dust_emissivity package for the last section, on fitting a blackbody.
Demo Overview
This demo will proceed very simply, and will follow the emcee tutorial for fitting a line. At the end is a short, astrophysical example, which includes non-flat priors.
Preliminaries
End of explanation
"""
nthreads = 3
"""
Explanation: Emcee has multithreading support. Set this to the number of cores you would like to use. In this demo we will use the python multiprocessing module support built in to emcee. Emcee can also use MPI if you're working on a cluster and want to distribute the job across nodes. See the documentation for that.
End of explanation """ # define our true relation m_true = 1.7 b_true = 2.7 f_true = 0.3 # generate some data N = 30 x = np.sort(10*np.random.rand(N)) yerr = 0.2+0.6*np.random.rand(N) y = m_true*x+b_true y += np.abs(f_true*y) * np.random.randn(N) y += yerr * np.random.randn(N) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') """ Explanation: Fitting a Line Generate and Plot Some Random Data Give it both x and y errors. End of explanation """ A = np.vstack((np.ones_like(x), x)).T C = np.diag(yerr * yerr) cov = np.linalg.inv(np.dot(A.T, np.linalg.solve(C, A))) b_ls, m_ls = np.dot(cov, np.dot(A.T, np.linalg.solve(C, y))) print('Least squares fitting result:') print('slope: {0:1.2f}'.format(m_ls)) print('y-intercept: {0:1.2f}'.format(b_ls)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') """ Explanation: Least-Squares Fit (ignoring the x-errors) End of explanation """ import scipy.optimize as op def lnlike(theta, x, y, yerr): b, m, lnf = theta model = m * x + b inv_sigma2 = 1.0/(yerr**2 + model**2*np.exp(2*lnf)) return -0.5*(np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2))) # let's make some initial guesses for our parameters # remember this is now theta and b_perp p2 = [b_true, m_true, f_true] nll = lambda *args: -lnlike(*args) result = op.minimize(nll, p2, args=(x, y, yerr)) if not(result['success']): print("Max likelihood failed.") print(result['message']) 
ml_b, ml_m, ml_f = result['x'] print("Maximum likelihood result:") print("slope: {0:1.2f}".format(ml_m)) print("y-intercept: {0:1.2f}".format(ml_b)) print("ln(f): {0:1.2f}".format(ml_f)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.plot(x, ml_m * x + ml_b, color='blue', ls='--', label='Max likelihood') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') """ Explanation: Maximum likelihood So, we need to define a likelihood function. End of explanation """ def lnprior(theta): b, m, lnf = theta if lnf >= 0.0: return -np.inf return 0.0 def lnprob(theta, x, y, yerr): lp = lnprior(theta) if not np.isfinite(lp): return -np.inf return lp + lnlike(theta, x, y, yerr) # now let's set up the MCMC chains ndim = 3 nwalkers = 500 steps = 500 # initialize the walkers to the vicinity of the parameters derived from # ML pos = [result["x"] + 1e-3*np.random.randn(ndim) for i in range(nwalkers)] # initialze the sampler sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y, yerr), threads=nthreads) # go! go! go! # run the sampler for 500 steps sampler.run_mcmc(pos, steps) samples = sampler.chain """ Explanation: What about the Errors? This is where MCMC comes in. But we need to add some priors for the parameters and use those priors End of explanation """ print("Mean acceptance rate is: {0:1.2f}".format(np.mean(sampler.acceptance_fraction))) """ Explanation: That took about 10 seconds on my desktop (3.4 GHz Core i7). What is the acceptance rate? Lore has it that this should be between $0.3-0.5$. 
End of explanation """ fig = plt.figure() dim_name = [r'$b$', r'$m$', r'$\ln f$'] for dim in range(ndim): ax = fig.add_subplot(ndim, 1, dim+1) for i in range(nwalkers): ax.plot(np.arange(steps), samples[i, :, dim], ls='-', color='black', alpha=10./nwalkers) ax.set_ylabel(dim_name[dim], fontsize='large') ax.set_xlabel('step', fontsize='large') """ Explanation: This acceptance rate is okay. If it is too low, the emcee documentation suggests increasing the number of walkers until the acceptance fraction is at the desired level. Let's visualize the chains. End of explanation """ samples = sampler.chain[:, 50:, :].reshape((-1, ndim)) """ Explanation: It looks like the walkers have "burned in" by 50 steps, so keep only those samples after 50 steps. End of explanation """ fig = corner.corner(samples, labels=[r"$b$", r"$m$", r"$\ln\,f$"], quantiles=[0.16, 0.5, 0.84], truths=[b_true, m_true, np.log(f_true)], show_titles=True) """ Explanation: What does this look like? Let's visualize with the traditional corner plot. I will give it the actual line parameters with the "truths" parameter, so we can see how our results compare to the actual values. 
End of explanation """ fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.plot(x, ml_m * x + ml_b, color='blue', ls='--', label='Max likelihood') for b, m, lnf in samples[np.random.randint(len(samples), size=100)]: ax.plot(x, m * x + b, color='green', alpha=0.1) ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') samples[:, 2] = np.exp(samples[:, 2]) b_mcmc, m_mcmc, f_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) print("MCMC Parameter estimates:") print("slope: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(m_mcmc[0], m_mcmc[1], m_mcmc[2])) print("y-intercept: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(b_mcmc[0], b_mcmc[1], b_mcmc[2])) print("\nTrue values:") print("slope: {0:1.2f}".format(m_true)) print("y-intercept: {0:1.2f}".format(b_true)) """ Explanation: Now let's plot a bunch of sample fits from the MCMC chain, on top of our data and other models. End of explanation """ from dust_emissivity.blackbody import modified_blackbody import astropy.units as u def fit_bb(x, *p): """ simpler wrapper function to get the units right I don't care about the absolute amplitude, so the 1e-9 factor is just for numerical happiness. 
""" return 1.e-9* p[1] * modified_blackbody((x*u.micron).to(u.Hz, equivalencies=u.spectral()), p[0] * u.K, beta=p[2], kappa0=0.48*u.m**2/u.kg, nu0=(250*u.micron).to('Hz', u.spectral())).to('Jy').value FIRm = np.array([(70., 50., 2.6), (100., 55., 2.3), (160., 34., 1.6), (250., 12., 0.8), (350., 4.6, 0.3), (500., 1.3, 0.1)], dtype=[('wave', float), ('flux', float), ('dflux', float)]) plotrange = np.arange(FIRm['wave'][0], FIRm['wave'][-1], 1) def lnlike(theta, x, y, yerr): T, amp, beta, lnf = theta model = fit_bb(x, T, amp, beta) inv_sigma2 = 1.0 / (yerr**2 + model**2*np.exp(2*lnf)) return -0.5 * np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2)) # initial guesses. 25K, arbitrary p0 = [25, 1, 1.8, -1] nll = lambda *args: -lnlike(*args) maxlike = op.minimize(nll, p0, args=(FIRm['wave'], FIRm['flux'], FIRm['dflux']),method='Nelder-Mead') Tfit, Ampfit, betafit, lnffit = maxlike["x"] print("Max likelihood:") print("T: {0:1.1f} K".format(Tfit)) print("amp: {0:1.1f}".format(Ampfit)) print("beta: {0:1.2f}".format(betafit)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(FIRm['wave'], FIRm['flux'], yerr=FIRm['dflux'], ls='', marker='.', color='black', label='Herschel PACS+SPIRE') ax.plot(plotrange, fit_bb(plotrange, Tfit, Ampfit, betafit), color='red', label='Max likelihood') ax.set_ylabel(r'F$_{\nu}$') ax.set_xlabel('$\lambda$ ($\mu m$)') ax.set_xlim([60, 600]) ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc='best') """ Explanation: Astrophysical Example: FIR SED Let's say we have Herschel PACS+SPIRE photometry and we want to get the dust temperature... 
End of explanation """ def lnprior(theta): T, amp, lnf, beta = theta if T >= 2.73 and amp > 0.: return -1 * (T - 25)**2 / (2 * 2.5**2) return -np.inf def lnprob(theta, x, y, yerr): lp = lnprior(theta) if not(np.isfinite(lp)): return -np.inf return lp + lnlike(theta, x, y, yerr) ndim, nwalkers = 4, 300 pos = [maxlike["x"] + 1e-4 * np.random.randn(ndim) for i in range(nwalkers)] """ Explanation: MCMC This will show how you might use informative priors. Let's make sure it knows that the dust needs to be warmer than the CMB and that the amplitude needs to be positive. Also, "normal" galaxies have dust temperatures of ~25 K, with a dispersion of a few 2K. Let's set the prior on temperature to be a Gaussian centered at 25 K with a sigma of 2.5K. End of explanation """ sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(FIRm['wave'], FIRm['flux'], FIRm['dflux']), threads=nthreads) sampler.run_mcmc(pos, 1000) samples = sampler.chain[:, 100:, :].reshape((-1, ndim)) """ Explanation: Because of the larger parameter space and more complex model, this will take longer to run. End of explanation """ # show best-fit values as the "truth" values fig = corner.corner(samples, labels=["T", "Amp", r"$\beta$", r"$\ln\,f$"], quantiles=[0.16, 0.5, 0.84], show_titles=True, truths=[Tfit, Ampfit, betafit, lnffit]) """ Explanation: Again, look at the distribution of parameter estimates. But here, show the estimated parameters from the maximum likelihood model as the "true" values. 
End of explanation """ fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(FIRm['wave'], FIRm['flux'], yerr=FIRm['dflux'], ls='', marker='.', color='black', label='Herschel PACS+SPIRE') ax.plot(plotrange, fit_bb(plotrange, Tfit, Ampfit, betafit), color='red', label='Max likelihood') for T, A, b, lnf in samples[np.random.randint(len(samples), size=100)]: ax.plot(plotrange, fit_bb(plotrange, T, A, b), color='green', alpha=0.05) ax.set_ylabel(r'F$_{\nu}$') ax.set_xlabel('$\lambda$ ($\mu m$)') ax.set_xlim([60, 600]) ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc='best') samples[:, 3] = np.exp(samples[:, 3]) T_mcmc, A_mcmc, beta_mcmc, f_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) print("MCMC Parameter estimates:") print("T: {0:1.2f} (+{1:1.2f}, -{2:1.2f}) K".format(T_mcmc[0], T_mcmc[1], T_mcmc[2])) print("beta: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(beta_mcmc[0], beta_mcmc[1], beta_mcmc[2])) """ Explanation: The offsets between the MCMC median values and the maximum likelihood are at least partially a consequence of our chosen prior on the temperature. End of explanation """
mattgiguere/EPRV
code/make_missings.ipynb
mit
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib
#%matplotlib inline
"""
Explanation: manipulate_regonline_output
This notebook reads the RegOnline output into a pandas DataFrame and reworks it so that each row contains the attendee, the Doppler Primer session, the Monday breakout session, and the Tuesday breakout session.
End of explanation
"""

df = pd.read_excel('/Users/matt/projects/EPRV/data/missings2.xls', encoding='utf-8')
df.columns
df.loc[16:17]
"""
Explanation: Read the RegOnline output into a pandas DataFrame
End of explanation
"""

#df['AgendaItem'].str.contains('Doppler Primer:')
sundf = df[df['AgendaItem'].str.contains('Doppler Primer:')].copy()
len(sundf)
"""
Explanation: Extract the Sunday Sessions
RegOnline outputs multiple entries for each person, and each entry differs by the AgendaItem. AgendaItems exist for all sessions happening on all days. In this section, we extract the sessions happening on Sunday, which are all prefixed by "Doppler Primer: ".
End of explanation
"""

sundf['PrimerID'] = 0
sundf['Primer'] = [re.search(r'(.*):\s(.*)$', item).group(2) for item in sundf['AgendaItem']]
sundf[['AgendaItem', 'Primer']].head(3)
sundf['Primer'].unique()
"""
Explanation: Let's create two new columns in our DataFrame: the Primer, and the PrimerID. The Primer column will contain the name of the Doppler Primer session (minus the "Doppler Primer: " prefix), and the PrimerID will be a session identifier that will later be used in plotting.
End of explanation
"""

dopID = 0
for agItem in sundf['Primer'].unique():
    sundf.loc[sundf['Primer'] == agItem, 'PrimerID'] = dopID
    dopID += 1
"""
Explanation: Now loop through the five unique sessions, updating the PrimerID column for each participant:
End of explanation
"""

sun_ses = ['NA', 'SA', 'IC', 'DC', 'SM']
"""
Explanation: Create an abbreviated code for each session.
This will be added to the nametag to spark conversation among participants.
End of explanation
"""

sundf[['AgendaItem', 'Primer', 'PrimerID']].head(4)
"""
Explanation: A quick preview of the first few rows to see the result:
End of explanation
"""

mondf = df[df['AgendaItem'].str.contains('Monday Break-out:')].copy()
len(mondf)
mondf['MonID'] = 0
mondf['Monday'] = [re.search(r'(.*):\s(.*)$', item).group(2) for item in mondf['AgendaItem']]
mondf['Monday'].unique()

monID = 0
for agItem in mondf['Monday'].unique():
    mondf.loc[mondf['Monday'] == agItem, 'MonID'] = monID
    monID += 1

mondf['Monday'].unique()
mon_ses = ['NA', 'FS', 'TC', 'BC', 'FC']
mondf[['AgendaItem', 'Monday', 'MonID']].head(4)
"""
Explanation: Extract the Monday Sessions
Now to do the same for the Monday sessions.
End of explanation
"""

tuedf = df[df['AgendaItem'].str.contains('Tuesday Break-out:')].copy()
len(tuedf)
tuedf['TueID'] = 0
tuedf['Tuesday'] = [re.search(r'(.*):\s(.*)$', item).group(2) for item in tuedf['AgendaItem']]
tuedf['Tuesday'].unique()

tuesID = 0
for agItem in tuedf['Tuesday'].unique():
    tuedf.loc[tuedf['Tuesday'] == agItem, 'TueID'] = tuesID
    tuesID += 1

tuedf['Tuesday'].unique()
tue_ses = ['NA', 'ST', 'DC', 'LB', 'PS']
tuedf[['AgendaItem', 'Tuesday', 'TueID']].head(4)
"""
Explanation: Extract Tuesday Sessions
End of explanation
"""

fulldf = df[['RegId', 'GroupId', 'FirstName', 'LastName', 'Company']]
print(len(fulldf))
fulldf = fulldf.drop_duplicates()
print(len(fulldf))
print(len(sundf))
print(len(mondf))
print(len(tuedf))
fulldf.columns
sundf.columns

newdf = pd.merge(fulldf, sundf, on=['RegId', 'GroupId', 'FirstName', 'LastName', 'Company'], how='left')
print(len(newdf))
newdf = pd.merge(newdf, mondf, on=['RegId', 'GroupId', 'FirstName', 'LastName', 'Company'], how='left')
print(len(newdf))
newdf = pd.merge(newdf, tuedf, on=['RegId', 'GroupId', 'FirstName', 'LastName', 'Company'], how='left')
print(len(newdf))
newdf.head(5)
newdf.columns
"""
Explanation: Combine the DataFrames
We
only need to join on one field. However, when joining multiple times pandas suffixes the overlapping columns, creating multiple columns such as GroupId_x and GroupId_y. The simple solution is just to join on multiple columns since we know they're all consistent.
End of explanation
"""

finaldf = newdf[['FirstName', 'LastName', 'Company',
                 'Primer', 'PrimerID',
                 'Monday', 'MonID',
                 'Tuesday', 'TueID']].sort_values('LastName').reset_index().copy()
finaldf.head(5)
len(finaldf)
finaldf.columns
"""
Explanation: Now create a new DataFrame that is a subset of the newdf with only the columns of interest. Also, make sure the DataFrame is sorted by last name, the index is reset, and it's a copy of newdf instead of a pointer to newdf.
End of explanation
"""

finaldf.Company = ['Earth' if pd.isnull(company_el) else company_el for company_el in finaldf.Company]
"""
Explanation: Now replace all empty cells for "Company" with a very general location:
End of explanation
"""

finaldf.PrimerID = [4 if pd.isnull(primerid_el) else primerid_el for primerid_el in finaldf.PrimerID]
"""
Explanation: Replace NaNs for PrimerID with the "Not Attending" ID:
End of explanation
"""

len(finaldf[pd.isnull(finaldf['MonID'])])
"""
Explanation: Check for NaNs in the Monday ID:
End of explanation
"""

finaldf.MonID = [4 if pd.isnull(monid_el) else monid_el for monid_el in finaldf.MonID]
len(finaldf[pd.isnull(finaldf['MonID'])])
"""
Explanation: Replace NaNs for the MonID with the "Not Attending" ID:
End of explanation
"""

len(finaldf[pd.isnull(finaldf['TueID'])])
finaldf.TueID = [4 if pd.isnull(tueid_el) else tueid_el for tueid_el in finaldf.TueID]
len(finaldf[pd.isnull(finaldf['TueID'])])
"""
Explanation: Replace NaNs for the TueID with the "Not Attending" ID:
End of explanation
"""

p = re.compile(r'(/|^(?!.*/).*-|^(?!.*/).*,|^(?!.*/).*\sat\s)')
p.subn(r'\1\n', finaldf.loc[2].Company)[0]
"""
Explanation: Test out the wrap-around text for the institutions of participants that have long institution names.
This regular expression will look for institutions (or Companies, as RegOnline refers to them), and find items that have a '/', and if no '/', either a '-', ',', or 'at' in the text. If so, add a newline character to make the text wrap around to the next line. We'll first test the output on a participant's institution that contains both a '/' and a '-':
End of explanation
"""

#p.subn(r'\1\n', finaldf.loc[53].Company)[0]
"""
Explanation: And test a cell that is long and contains 'at', but where 'at' is part of a longer word:
End of explanation
"""

[p.sub(r'\1\n', company_el) if len(company_el) > 30 else company_el
 for company_el in finaldf.head(5).Company.values]
"""
Explanation: And a quick test on a few more institutions:
End of explanation
"""

finaldf.Company = [p.sub(r'\1\n', company_el) if len(company_el) > 30 else company_el
                   for company_el in finaldf.Company.values]
"""
Explanation: Now update the full Company column of the DataFrame:
End of explanation
"""

png = mpimg.imread('/Users/matt/projects/EPRV/images/NameTag2.png')
png.shape

import matplotlib.font_manager as mfm
fontpaths = ['/System/Library/Fonts/', '/Library/Fonts', '/Library/Fonts/Microsoft',
             '/usr/X11/lib/X11/fonts', '/opt/X11/share/fonts', '/Users/matt/Library/Fonts']
blaa = mfm.findSystemFonts(fontpaths=fontpaths)

colors = ['#FFE2A9', '#4BA4D8', '#768085', '#BF5338', '#335B8F']
colors2 = ['#335B8F', '#BF5338', '#768085', '#4BA4D8', '#FFE2A9']
colors3 = ['#4BA4D8', '#FFE2A9', '#BF5338', '#768085', '#335B8F']
circ_ypos = 775

name_dict = {'family': 'YaleNew-Roman',
             'color': '#D6E8E1',
             'weight': 'bold',
             'size': 28}

company_dict = {'family': 'YaleNew-Roman',
                'color': '#D6E8E1',
                'weight': 'bold',
                'size': 16}

circle_dict = {'family': 'YaleNew-Roman',
               'color': '#1D2523',
               'weight': 'normal',
               'size': 20}

def change_name_size(name, name_dict):
    if len(name) < 16:
        name_dict['size'] = 28
    elif ((len(name) >= 16) and (len(name) < 19)):
        name_dict['size'] = 24
    elif ((len(name) >= 19) and (len(name) < 24)):
        name_dict['size'] = 20
    elif ((len(name) >= 24) and (len(name) < 30)):
        name_dict['size'] = 17
    else:
        name_dict['size'] = 16
    return name_dict

def change_company_size(company, company_dict):
    newlines = len(re.findall(r'\n', finaldf.loc[0].Company))
    if newlines == 0:
        if len(company) < 15:
            company_dict['size'] = 18
        elif ((len(company) >= 15) and (len(company) < 30)):
            company_dict['size'] = 14
        elif ((len(company) >= 30) and (len(company) < 40)):
            company_dict['size'] = 12
        elif ((len(company) >= 40) and (len(company) < 50)):
            company_dict['size'] = 10
        else:
            company_dict['size'] = 8
    else:
        if len(company) < 15:
            company_dict['size'] = 18
        elif ((len(company) >= 15) and (len(company) < 40)):
            company_dict['size'] = 14
        elif ((len(company) >= 40) and (len(company) < 50)):
            company_dict['size'] = 12
        else:
            company_dict['size'] = 10
    return company_dict

# The HP Color LaserJet CP4020 offsets things by 1/16th of an inch left-to-right.
# This fudge factor should fix that:
hrz_fdg = 1. / 16. / 8.5
leftarr = np.array([0.0294, 0.5, 0.0294, 0.5, 0.0294, 0.5]) + hrz_fdg
bottomarr = [0.091, 0.091, 0.364, 0.364, 0.637, 0.637]
width = 0.4706
height = 0.273

# loop through the total number of pages:
for page in range(int(np.ceil((len(finaldf))/6.))):
    print('Now on page: {}'.format(page))
    fig = plt.figure(figsize=(8.5, 11))
    for indx in range(6):
        # add an if statement to handle the last page if there are less than
        # six participants remaining:
        if ((page*6 + indx) < len(finaldf)):
            rect = [leftarr[indx], bottomarr[indx], width, height]
            ax = fig.add_axes(rect)
            ax.imshow(png)
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)
            print(u'Now making name tag for: {} {}'.format(finaldf.loc[page*6 + indx].FirstName,
                                                           finaldf.loc[page*6 + indx].LastName))
            # add name text:
            name = finaldf.loc[page*6 + indx].FirstName + ' ' + finaldf.loc[page*6 + indx].LastName
            this_name_dict = change_name_size(name, name_dict)
            ax.text(600, 500, name, fontdict=this_name_dict, horizontalalignment='center')
            # add company text:
            company = finaldf.loc[page*6 + indx].Company
            this_co_dict = change_company_size(company, company_dict)
            ax.text(600, 625, company, fontdict=this_co_dict, horizontalalignment='center')
            # add circles for sessions:
            circ1 = plt.Circle((750, circ_ypos), 70, color=colors[int(finaldf.loc[page*6 + indx].PrimerID)])
            fig.gca().add_artist(circ1)
            ax.text(750, circ_ypos + 27.5, sun_ses[int(finaldf.loc[page*6 + indx].PrimerID)],
                    fontdict=circle_dict, horizontalalignment='center')
            circ2 = plt.Circle((925, circ_ypos), 70, color=colors2[int(finaldf.loc[page*6 + indx].MonID)])
            fig.gca().add_artist(circ2)
            ax.text(925, circ_ypos + 27.5, mon_ses[int(finaldf.loc[page*6 + indx].MonID)],
                    fontdict=circle_dict, horizontalalignment='center')
            circ3 = plt.Circle((1100, circ_ypos), 70, color=colors3[int(finaldf.loc[page*6 + indx].TueID)])
            fig.gca().add_artist(circ3)
            ax.text(1100, circ_ypos + 27.5, tue_ses[int(finaldf.loc[page*6 + indx].TueID)],
                    fontdict=circle_dict, horizontalalignment='center')
    plt.savefig('../nametags/more_missing_nameTags_bold_p'+str(page)+'.png', dpi=300)

finaldf.columns
finaldf.FirstName.values
finaldf.LastName.values
hrz_fdg = 1. / 16. / 8.5
leftarr = np.array([0.0294, 0.5, 0.0294, 0.5, 0.0294, 0.5])
leftarr + hrz_fdg
"""
Explanation: Plot Labels
Now that we have our DataFrame cleaned up the way we want it we can print the data to the Avery 5392 format. This format contains 6 4"x3" nametags per sheet.
End of explanation
"""
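The institution-wrapping regex above can be exercised on a few synthetic names (hypothetical examples, not actual attendee data) to confirm where the line breaks get inserted — after a '/', or, failing that, after the text leading up to a '-':

```python
import re

p = re.compile(r'(/|^(?!.*/).*-|^(?!.*/).*,|^(?!.*/).*\sat\s)')

examples = [
    'Department of Physics/University of Somewhere',  # has a '/': break right after it
    'Institute for Astronomy - Somewhere Campus',     # no '/': break after the '-' clause
]
for ex in examples:
    print(repr(p.sub(r'\1\n', ex)))
```

Because the last three alternatives are anchored with `^` and guarded by the `(?!.*/)` lookahead, only one break is ever inserted when no '/' is present.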
dchandan/rebound
ipython_examples/OrbitPlot.ipynb
gpl-3.0
import rebound
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)
sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)
sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)
sim.add(a=-2.7, e=1.4, f=-1.5, omega=-0.7)  # hyperbolic orbit
"""
Explanation: Orbit Plot
REBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's set up a test simulation with 4 planets.
End of explanation
"""

%matplotlib inline
fig = rebound.OrbitPlot(sim)
"""
Explanation: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
End of explanation
"""

fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, trails=True, periastron=True)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2)
"""
Explanation: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
End of explanation
"""
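The orbital-element keywords passed to sim.add above are alternative parameterizations of the same orbit. As a hedged reminder (following the usual convention, which REBOUND also uses): the longitude of periastron is $\varpi = \Omega + \omega$, and the mean longitude is $l = \varpi + M$. A tiny sketch of those relations:

```python
import math

def pomega_from(Omega, omega):
    """Longitude of periastron: varpi = Omega + omega (mod 2*pi)."""
    return (Omega + omega) % (2 * math.pi)

def mean_longitude(Omega, omega, M):
    """Mean longitude: l = varpi + M (mod 2*pi)."""
    return (pomega_from(Omega, omega) + M) % (2 * math.pi)

# the second particle added above used Omega=0.82, omega=2.98
print(round(pomega_from(0.82, 2.98), 2))  # 3.8
```

So passing `pomega=2.14` (as for the third particle) is equivalent to any `Omega`/`omega` pair summing to 2.14.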
phoebe-project/phoebe2-docs
development/tutorials/pblum.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.4,<2.5"

import phoebe
from phoebe import u  # units
import numpy as np

logger = phoebe.logger()

b = phoebe.default_binary()
"""
Explanation: Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""

b.add_dataset('lc', times=phoebe.linspace(0,1,101), dataset='lc01')
"""
Explanation: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
End of explanation
"""

b.set_value('irrad_method', 'none')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'linear')
b.set_value_all('ld_coeffs', [0.])
b.set_value_all('ld_mode_bol', 'manual')
b.set_value_all('ld_func_bol', 'linear')
b.set_value_all('ld_coeffs_bol', [0.])
b.set_value_all('atm', 'blackbody')
"""
Explanation: Lastly, just to make things a bit easier and faster, we'll turn off irradiation (reflection), use blackbody atmospheres, and disable limb-darkening (so that we can play with weird temperatures without having to worry about falling off the grids).
End of explanation
"""

print(b.get_parameter(qualifier='pblum_mode', dataset='lc01'))
"""
Explanation: Relevant Parameters & Methods
A pblum_mode parameter exists for each LC dataset in the bundle. This parameter defines how passband luminosities are handled. The subsections below describe the use and parameters exposed depending on the value of this parameter.
End of explanation
"""

print(b.compute_pblums())
"""
Explanation: For any of these modes, you can expose the intrinsic (excluding extrinsic effects such as spots and irradiation) and extrinsic computed luminosities of each star (in each dataset) by calling b.compute_pblums. Note that as it's an aspect-dependent effect, boosting is ignored in all of these output values.
End of explanation
"""

print(b.filter(qualifier='pblum'))
print(b.get_parameter(qualifier='pblum_component'))
b.set_value('pblum_component', 'secondary')
print(b.filter(qualifier='pblum'))
"""
Explanation: For more details, see the section below on "Accessing Model Luminosities" as well as the b.compute_pblums API docs.
The table below provides a brief summary of all available pblum_mode options. Details are given in the remainder of the tutorial.
| pblum_mode        | intent |
|-------------------|--------|
| component-coupled | provide pblum for one star (by default L1), compute pblums for other stars from atmosphere tables |
| decoupled         | provide pblums for each star independently |
| absolute          | obtain unscaled pblums, in passband watts, computed from atmosphere tables |
| dataset-scaled    | calculate each pblum from the scaling factor between absolute fluxes and each dataset |
| dataset-coupled   | same as above, but all datasets are scaled with the same scaling factor |
pblum_mode = 'component-coupled'
pblum_mode='component-coupled' is the default option and maintains the default behavior from previous releases. Here the user provides passband luminosities for a single star in the system for the given dataset/passband, and all other stars are scaled accordingly. By default, the value of pblum is set for the primary star in the system, but we can instead provide pblum for the secondary star by changing the value of pblum_component.
End of explanation
"""

b.set_value('pblum_component', 'primary')
print(b.get_parameter(qualifier='pblum', component='primary'))
"""
Explanation: Note that in general (for the case of a spherical star), a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Now let's just reset to the default case where the primary star has a provided (default) pblum of 4pi.
End of explanation
"""

print(b.compute_pblums())
"""
Explanation: NOTE: other parameters also affect flux-levels, including limb darkening, third light, boosting, irradiation, and distance
If we call b.compute_pblums, we'll see that the computed intrinsic luminosity of the primary star (pblum@primary@lc01) matches the value of the parameter above.
End of explanation
"""

b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
End of explanation
"""

b.set_value('pblum', component='primary', value=2*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: If we now set pblum to be only 2 pi, we should expect the luminosities as well as entire light curve to be scaled in half.
End of explanation
"""

b.set_value('teff', component='secondary', value=0.5 * b.get_value('teff', component='primary'))
print(b.filter(qualifier='teff'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
End of explanation
"""

b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
"""
Explanation: Let us undo our changes before we look at decoupled luminosities.
End of explanation
"""

b.set_value('pblum_mode', 'decoupled')
"""
Explanation: pblum_mode = 'decoupled'
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, set pblum_mode to 'decoupled'.
End of explanation
"""

print(b.filter(qualifier='pblum'))
"""
Explanation: Now we see that both pblum parameters are available and can have different values.
End of explanation
"""

b.set_value_all('pblum', 4*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
End of explanation
"""

print(b.filter(qualifier='teff'))
b.set_value('teff', component='secondary', value=3000)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii - although in that case the eclipse widths would still change due to the change in geometry).
End of explanation
"""

b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
"""
Explanation: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
End of explanation
"""

b.set_value('pblum_mode', 'absolute')
"""
Explanation: pblum_mode = 'absolute'
By setting pblum_mode to 'absolute', luminosities and fluxes will be returned in absolute units and not rescaled. Note that third light and distance will still affect the resulting flux levels.
End of explanation
"""

print(b.filter(qualifier='pblum'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: As we no longer provide pblum values to scale, those parameters are not visible when filtering.
End of explanation
"""

fluxes = b.get_value('fluxes', context='model') * 0.8 + (np.random.random(101) * 0.1)
b.set_value('fluxes', context='dataset', value=fluxes)
afig, mplfig = b.plot(context='dataset', show=True)
"""
Explanation: (note the exponent on the y-axis of the above figure)
pblum_mode = 'dataset-scaled'
Setting pblum_mode to 'dataset-scaled' is only allowed if fluxes are attached to the dataset itself. Let's use our existing model to generate "fake" data and then populate the dataset.
End of explanation
"""

b.set_value('pblum_mode', 'dataset-scaled')
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
"""
Explanation: Now if we set pblum_mode to 'dataset-scaled', the resulting model will be scaled to best fit the data. Note that in this mode we cannot access computed luminosities via b.compute_pblums (without providing model - we'll get back to that in a minute), nor can we access scaled intensities from the mesh.
End of explanation
"""

print(b.get_parameter(qualifier='flux_scale', context='model'))
"""
Explanation: The model stores the scaling factor used between the absolute fluxes and the relative fluxes that best fit the observational data.
End of explanation
"""

print(b.compute_pblums(model='latest'))
"""
Explanation: We can then access the scaled luminosities by passing the model tag to b.compute_pblums. Keep in mind this only scales the absolute luminosities by flux_scale, so it assumes a fixed distance@system. This is useful though if we wanted to use 'dataset-scaled' to get an estimate for pblum before changing to 'component-coupled' and optimizing or marginalizing over pblum.
End of explanation
"""

b.set_value('pblum_mode', 'component-coupled')
b.set_value('fluxes', context='dataset', value=[])
"""
Explanation: Before moving on, let's remove our fake data (and reset pblum_mode, or else PHOEBE will complain about the lack of data).
End of explanation
"""

b.add_dataset('lc', times=phoebe.linspace(0,1,101),
              ld_mode='manual', ld_func='linear', ld_coeffs=[0],
              passband='Johnson:B', dataset='lc02')
b.set_value('pblum_mode', dataset='lc02', value='dataset-coupled')
"""
Explanation: pblum_mode = 'dataset-coupled'
Setting pblum_mode to 'dataset-coupled' allows for the same scaling factor to be applied to two different datasets. In order to see this in action, we'll add another LC dataset in a different passband.
End of explanation
"""

print(b.filter('pblum*'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True, legend=True)
"""
Explanation: Here we see that pblum_mode@lc01 is set to 'component-coupled', meaning it will follow the rules described earlier where pblum is provided for the primary component and the secondary is coupled to that. pblum_mode@lc02 is set to 'dataset-coupled' with pblum_dataset@lc02 pointing to 'lc01'.
End of explanation
"""

print(b.compute_pblums())
"""
Explanation: Accessing Model Luminosities
Passband luminosities at t0@system per-star (including following all coupling logic) can be computed and exposed on the fly by calling compute_pblums.
End of explanation
"""

print(b.compute_pblums(dataset='lc01', component='primary'))
"""
Explanation: By default this exposes 'pblum' and 'pblum_ext' for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
End of explanation
"""

b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01',
              columns=['areas', 'pblum_ext@lc01', 'ldint@lc01', 'ptfarea@lc01',
                       'abs_normal_intensities@lc01', 'normal_intensities@lc01'])
b.run_compute()
"""
Explanation: For more options, see the b.compute_pblums API docs.
Note that this same logic is applied (at t0) to initialize all passband luminosities within the backend, so there is no need to call compute_pblums before run_compute.
In order to access passband luminosities at times other than t0, you can add a mesh dataset and request the pblum_ext column to be exposed. For stars that have pblum defined (as opposed to coupled to another star or dataset), this value should be equivalent to the value of the parameter (at t0 if no features or irradiation are present, and in simple circular cases will probably be equivalent at all times).
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
End of explanation
"""

print(b.filter(qualifier='pblum_ext', context='model').twigs)
"""
Explanation: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
End of explanation
"""

t0 = b.get_value('t0@system')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(component='primary', dataset='lc01'))
"""
Explanation: Now let's compare the value of the synthetic luminosities to those of the input pblum
End of explanation
"""

print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
"""
Explanation: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
End of explanation
"""

b['teff@secondary@component'] = 3000
print(b.compute_pblums(dataset='lc01'))
b.run_compute()
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
"""
Explanation: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
End of explanation
"""

print(b['ld_mode'])
print(b['atm'])
b.run_compute(irrad_method='horvat')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(dataset='lc01', irrad_method='horvat'))
"""
Explanation: And lastly, if we re-enable irradiation, we'll see that the extrinsic luminosities do not match the prescribed value of pblum (an intrinsic luminosity).
EnergyID/opengrid
scripts/TimeSeries.ipynb
gpl-2.0
import os, sys import inspect import numpy as np import datetime as dt import time import pytz import pandas as pd import pdb script_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) # add the path to opengrid to sys.path sys.path.append(os.path.join(script_dir, os.pardir, os.pardir)) from opengrid.library import config c=config.Config() DEV = c.get('env', 'type') == 'dev' # DEV is True if we are in development environment, False if on the droplet if not DEV: # production environment: don't try to display plots import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator # find tmpo sys.path.append(c.get('tmpo', 'folder')) try: if os.path.exists(c.get('tmpo', 'data')): path_to_tmpo_data = c.get('tmpo', 'data') except: path_to_tmpo_data = None from opengrid.library.houseprint import houseprint if DEV: if c.get('env', 'plots') == 'inline': %matplotlib inline else: %matplotlib qt else: pass # don't try to render plots plt.rcParams['figure.figsize'] = 12,8 import charts """ Explanation: This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark. 
End of explanation """ number_of_days = 7 """ Explanation: Script settings End of explanation """ hp = houseprint.load_houseprint_from_file('new_houseprint.pkl') hp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data) start = pd.Timestamp(time.time() - number_of_days*86400, unit='s') sensors = hp.get_sensors() #sensors.remove('b325dbc1a0d62c99a50609e919b9ea06') for sensor in sensors: s = sensor.get_data(head=start, resample='s') try: s = s.resample(rule='60s', how='max') s = s.diff()*3600/60 # plot with charts (don't show it) and save html charts.plot(pd.DataFrame(s), stock=True, save=os.path.join(c.get('data', 'folder'), 'figures', 'TimeSeries_'+sensor.key+'.html'), show=True) except: pass len(sensors) """ Explanation: We create one big dataframe, the columns are the sensors End of explanation """
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
doc/notebooks/automaton.infiltration.ipynb
gpl-3.0
import vcsn
c = vcsn.context('lal_char, seriesset<lal_char, z>')
std = lambda exp: c.expression(exp).standard()
c
"""
Explanation: automaton.infiltration
Create the (accessible part of the) infiltration product of two automata. In a way the infiltration product combines the conjunction (synchronized) and the shuffle product.
Preconditions:
- all the labelsets are letterized

See also:
- conjunction
- shuffle

Examples
End of explanation
"""
x = std("<x>a"); x
y = std("<y>a"); y
x & y
x.shuffle(y)
x.infiltration(y)
"""
Explanation: The following simple example aims at emphasizing that the transitions of the infiltration combine those of the shuffle and the conjunction products.
End of explanation
"""
xx = x * x
xx
yy = y * y
xx & yy
xx.shuffle(yy)
xx.infiltration(yy)
"""
Explanation: Don't be mistaken though: while in this example the sum of the shuffle and conjunction products does match the infiltration product, this no longer applies to larger automata. In the following example (which nicely highlights the features of these three types of product) the transition from $(1, 0)$ to $(2, 1)$ would be missing.
End of explanation
"""
x = std('<x>a')
y = std('<y>a')
z = std('<z>a')
a = x.infiltration(y).infiltration(z); a
b = x.infiltration(y.infiltration(z)); b
a.strip().is_isomorphic(b.strip())
"""
Explanation: Associativity
This operator is associative.
End of explanation
"""
c = x.infiltration(y, z); c
c.strip().is_isomorphic(a.strip())
"""
Explanation: Variadicity
As a convenience, infiltration is variadic: it may accept more than two arguments. However, it's (currently) only a wrapper around repeated calls to the binary operation (as can be seen by the parentheses in the state names below).
End of explanation
"""
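For intuition about the operations above, the support of the shuffle product of two words (every way of interleaving their letters) can be sketched in plain Python. This is only an unweighted illustration, not vcsn's implementation: vcsn also tracks the $\mathbb{Z}$ coefficients, e.g. the multiplicity 2 that interleaving "a" with "a" produces on "aa", which a set-based sketch collapses.

```python
def shuffle(u, v):
    """Support of the shuffle product: all interleavings of u and v.

    Multiplicities are collapsed (a set is returned), unlike vcsn,
    which keeps a Z-weight per interleaving.
    """
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)}
            | {v[0] + w for w in shuffle(u, v[1:])})

shuffle("ab", "c")  # {'abc', 'acb', 'cab'}
```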
vlad17/vlad17.github.io
assets/2020-11-01-lbfgs-vs-gd.ipynb
apache-2.0
from numpy_ringbuffer import RingBuffer
import numpy as np
from scipy.stats import special_ortho_group
from scipy import linalg as sla
%matplotlib inline
from matplotlib import pyplot as plt
from scipy.optimize import line_search

class LBFGS:
    def __init__(self, m, d, x0, g0):
        self.s = RingBuffer(capacity=m, dtype=(float, d))
        self.y = RingBuffer(capacity=m, dtype=(float, d))
        self.x = x0.copy()
        self.g = g0.copy()

    def mvm(self, q):
        q = q.copy()
        m = len(self.s)
        alphas = np.zeros(m, dtype=float)
        for i, (s, y) in enumerate(zip(reversed(self.s), reversed(self.y))):
            inv_rho = s.dot(y)
            alphas[m - i - 1] = s.dot(q) / inv_rho
            q -= alphas[m - i - 1] * y
        if m > 0:
            s = next(reversed(self.s))
            y = next(reversed(self.y))
            gamma = s.dot(y) / y.dot(y)
        else:
            gamma = 1
        z = gamma * q
        for (alpha, s, y) in zip(alphas, self.s, self.y):
            inv_rho = s.dot(y)
            beta = y.dot(z) / inv_rho
            z += s * (alpha - beta)
        return -z

    # mvm(self, self.g) gives current lbfgs direction
    # - H g

    def update(self, x, g):
        s = x - self.x
        y = g - self.g
        if self.s.is_full:
            assert self.y.is_full
            self.s.popleft()
            self.y.popleft()
        self.s.append(s)
        self.y.append(y)
        self.x = x.copy()
        self.g = g.copy()

from scipy.optimize.linesearch import line_search_armijo

def haar(n, d, rng=np.random):
    # https://nhigham.com/2020/04/22/what-is-a-random-orthogonal-matrix/
    assert n >= d
    z = rng.normal(size=(n, d))
    if n > d:
        q, r = sla.qr(z, mode='economic')
    else:
        q, r = sla.qr(z, mode='full')
    assert q.shape[1] == d, (q.shape[1], d)
    return q

np.random.seed(1234)
d = 100
n = 1000
vt = haar(d, d)
u = haar(n, d)

# bottom singular value we'll keep at 1
# so top determines the condition number
# for a vector s of singular values
# A = u diag(s) vt
# objective = 1/2 ||Ax - 1||_2^2

x0 = np.zeros(d)
b = np.ones(n)

def xopt(A):
    u, s, vt = A
    return vt.T.dot(u.T.dot(b) / s)

def objective(A, x):
    u, s, vt = A
    vtx = vt.dot(x)
    Ax = u.dot(s * vtx)
    diff = Ax - b
    f = diff.dot(diff) / 2
    g = vt.T.dot(s * (s * vtx - u.T.dot(b)))
    return f, g

def hessian_mvm(A, q):
    u, s, vt = A
    return vt.T.dot(s * (s * vt.dot(q)))

def gd(A, max_iter=1000, tol=1e-11, c1=0.2, c2=0.8, armijo=False):
    x = x0.copy()
    xsol = xopt(A)
    fsol = objective(A, xsol)[0]
    gaps = []
    for _ in range(max_iter):
        f, g = objective(A, x)
        gaps.append(abs(f - fsol))
        if gaps[-1] < tol:
            break
        if armijo:
            alpha, *_ = line_search_armijo(
                lambda x: objective(A, x)[0], x, -g, g, f)
        else:
            alpha = line_search(
                lambda x: objective(A, x)[0],
                lambda x: objective(A, x)[1],
                x, -g, maxiter=1000, c1=c1, c2=c2)
            if alpha[0] is None:
                raise RuntimeError((alpha, g, x))
            alpha = alpha[0]
        x -= alpha * g
    return gaps

def lbfgs(A, m, max_iter=1000, tol=1e-11, extras=False, c1=0.2, c2=0.8, armijo=False):
    x = x0.copy()
    xsol = xopt(A)
    fsol = objective(A, xsol)[0]
    gaps = []
    if extras:
        newton = []
        cosine = []
    f, g = objective(A, x)
    opt = LBFGS(m, d, x, g)
    for i in range(max_iter):
        gaps.append(abs(f - fsol))
        p = opt.mvm(opt.g)
        if extras:
            newton.append(np.linalg.norm(
                hessian_mvm(A, p) - opt.mvm(p)
            ) / np.linalg.norm(p))
            cosine.append(1 - p.dot(-g) / np.linalg.norm(p) / np.linalg.norm(g))
        if gaps[-1] < tol:
            break
        if armijo:
            alpha, *_ = line_search_armijo(
                lambda x: objective(A, x)[0], x, p, opt.g, f)
        else:
            alpha = line_search(
                lambda x: objective(A, x)[0],
                lambda x: objective(A, x)[1],
                x, p, maxiter=1000, c1=c1, c2=c2)
            if alpha[0] is None:
                raise RuntimeError(alpha)
            alpha = alpha[0]
        x += alpha * p
        f, g = objective(A, x)
        opt.update(x, g)
    if extras:
        return gaps, newton, cosine
    return gaps

for kappa, ls in [(10, '-'), (50, '--')]:
    s = np.linspace(1, kappa, d)
    A = (u, s, vt)
    gds = gd(A)
    memory = 10
    lbs = lbfgs(A, memory)
    matrix_name = 'linspace eigenvals'
    plt.semilogy(gds, c='b', label=r'GD ($\kappa = {kappa}$)'.format(kappa=kappa), ls=ls)
    plt.semilogy(lbs, c='r', ls=ls,
                 label=r'L-BFGS ($m = {memory}, \kappa = {kappa}$)'.format(
                     kappa=kappa, memory=memory))
plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left')
plt.xlabel('iterations')
plt.ylabel('function optimality gap')
plt.title(matrix_name)
plt.show()
""" Explanation: L-BFGS vs GD Curiously, the original L-BFGS convergence proof essentially reduces the L-BFGS iteration to GD. This establishes L-BFGS converges globally for sufficiently regular functions and also that it has local linear convergence, just like GD, for smooth and strongly convex functions. But if you look carefully at the proof, the construction is very strange: the more memory L-BFGS uses the less it looks like GD, the worse the smoothness constants are for the actual local rate of convergence. I go to into more detail on this in my SO question on the topic, but I was curious about some empirical assessments of how these compare. I found a study which confirms high-level intuition: L-BFGS interpolates between CG and BFGS as you increase memory. This relationship is true in a limiting sense: when $L=0$ L-BFGS is equal to a flavor of CG (with exact line search) and when $L=\infty$ it's BFGS. BFGS, in turn, iteratively constructs approximations $B_k$ to the Hessian which eventually satisfy a directional inequality $\|(B_k-\nabla_k^2)\mathbf{p}_k\|=o(\|\mathbf{p}_k\|)$ where $\mathbf{p}_k=-B_k^{-1}\nabla_k$ is the descent direction, which it turns out is enough to be "close enough" to Newton that you can achieve superlinear convergence rates. So, to what extent does agreement between $\mathbf{p}_k,\nabla_k$ (measured as $\cos^2 \theta_k$, the square of the cosine of the angle between the two) explain fast L-BFGS convergence? How about the magnitude of the Hessian-approximate-BFGS-Hessian agreement along the descent direction $\|(B_k-\nabla_k^2)\mathbf{p}_k\|$? What about the secant equation difference? One interesting hypothesis is that the low-rank view L-BFGS has into the Hessian means that it can't approximate the Hessian well if its eigenvalues are spread far apart (since you need to "spend" rank to explore parts of the eigenspace). 
Let's take some simple overdetermined least squares systems with varying eigenspectra and see how all the metrics above respond. End of explanation """ for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: s = np.linspace(1, kappa, d) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show('linspace eigenvals (L-BFGS)') """ Explanation: OK, so pretty interestingly, L-BFGS is still fundamentally linear in terms of its convergence rate (which translates to $\log \epsilon^{-1}$ speed for quadratic problems like ours). But clearly it gets better bang for its buck in the rate itself. And this is obviously important---even though the $\kappa = 50$ GD is still "exponentially fast", it's clear that the small slope means it'll still take a practically long time to converge. We know from explicit analysis that the GD linear rate will be something like $((\kappa - 1)/\kappa)^2$. If you squint really hard, that's basically $((1-\kappa^{-1})^\kappa)^{2/\kappa}\approx e^{-2/\kappa}$ for large $\kappa$, which is why our "exponential rates" look not-so-exponential, especially for $\kappa$ near the number of iterations (because then the suboptimality gap looks like $r^{T/\kappa}$ for $T$ iterations and $r=e^{-2}$). It's interesting to compare how sensitive L-BFGS and GD are to the condition number increase. Yes, we're fixing the eigenvalue pattern to be a linear spread, but let's save inspecting that to the end. The linear rate is effectively the slope that the log plot has at the end. While we're at it, what's the effect of more memory? 
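The $((\kappa-1)/\kappa)^2 \approx e^{-2/\kappa}$ squinting above is easy to sanity-check numerically (a quick aside, not part of the original experiments):

```python
import numpy as np

# The GD linear rate ((k-1)/k)^2 approaches exp(-2/k) as k grows,
# so the relative gap between the two should shrink roughly like 1/k.
for kappa in [10.0, 50.0, 200.0]:
    exact = ((kappa - 1) / kappa) ** 2
    approx = np.exp(-2 / kappa)
    assert abs(exact - approx) / exact < 2 / kappa
```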
End of explanation """ for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: bot, mid, top = d // 3, d // 3, d - 2 * d // 3 s = [1] * bot + [kappa / 2] * mid + [kappa] * top s = np.array(s) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title('tri-cluster eigenvals (L-BFGS)') plt.show() for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title('logspace eigenvals (L-BFGS)') plt.show() """ Explanation: Now that's pretty cool, it looks like the limiting behavior is still ultimately linear (as expected, it takes about as many iterations as the memory size for the limiting behavior to "kick in"), but as memory increases, the rate gets better. What if we make the eigenvalues clumpy? 
End of explanation """ from scipy.stats import linregress kappa = 30 memory = list(range(5, 100 + 1, 5)) for kappa, color in [(10, 'r'), (30, 'b'), (50, 'g')]: rates = [] for m in memory: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, m) y = np.log(lbs) x = np.arange(len(lbs)) + 1 slope, *_ = linregress(x, y) rates.append(np.exp(slope)) plt.plot(memory, rates, c=color, label=r'$\kappa = {kappa}$'.format(kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('memory (dimension = {d})'.format(d=d)) plt.ylabel('linear convergence rate') plt.title(r'logspace eigenvals, L-BFGS' .format(kappa=kappa)) plt.show() kappas = list(range(5, 50 + 1, 5)) # interestingly, large memory becomes unstable for memory, color in [(10, 'r'), (15, 'b'), (20, 'g')]: rates = [] for kappa in kappas: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, memory) y = np.log(lbs) x = np.arange(len(lbs)) + 1 slope, *_ = linregress(x, y) rates.append(np.exp(slope)) plt.plot(kappas, rates, c = color, label=r'$m = {memory}$'.format(memory=memory)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel(r'$\kappa$') plt.ylabel('linear convergence rate') plt.title(r'logspace eigenvals, L-BFGS' .format(memory=memory)) plt.show() """ Explanation: Wow, done in 12 iterations for the clustered eigenvalues. It looks like the hardest spectrum for L-BFGS (and coincedentally the one with the cleanest descent curves) is evenly log-spaced spectra. Let's try to map out the relationship between memory, kappa, and the linear convergence rate. 
End of explanation """ kappa = 30 s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) newtons, cosines = [], [] memories = [5, 50] for color, memory in zip(['r', 'g'], memories): lbs, newton, cosine = lbfgs(A, memory, extras=True) matrix_name = r'logspace eigenvals, $\kappa = {kappa}$'.format(kappa=kappa) plt.semilogy(lbs, c=color, label=r'L-BFGS ($m = {memory}$)'.format(memory=memory)) newtons.append(newton) cosines.append(cosine) gds = gd(A, max_iter=len(lbs)) plt.semilogy(gds, c='b', label='GD') plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show() for newton, memory in zip(newtons, memories): newton = np.array(newton) index = np.arange(len(newton)) q = np.percentile(newton, 95) plt.semilogy(index[newton < q], newton[newton < q], label=r'$m = {memory}$'.format(memory=memory)) plt.xlabel('iterations') plt.ylabel(r'$\|(B_k -\nabla_k^2)\mathbf{p}_k\|_2/\|\mathbf{p}_k\|_2$') plt.title('Directional Newton Approximation') plt.show() for cosine, memory in zip(cosines, memories): cosine = np.array(cosine) plt.plot(cosine, label=r'$m = {memory}$'.format(memory=memory)) plt.xlabel('iterations') plt.ylabel(r'$\cos \theta_k$') plt.title('L-BFGS and GD Cosine') plt.show() """ Explanation: OK, so the dependence in $\kappa$ still smells like $1-\kappa^{-1}$, but at least there's a very interesting linear trend between the linear convergence rate and memory (which does seem to bottom out for well-conditioned problems, but those don't matter so much). What's cool is that it's preserved across different $\kappa$. To finish off, just out of curiosity, do any of the BFGS diagnostics tell us much about the convergence rate? 
End of explanation """ kappa_log10 = 5 s = np.logspace(0, kappa_log10, d) memory = 5 import ray ray.init(ignore_reinit_error=True) # https://vladfeinberg.com/2019/10/20/prngs.html from numpy.random import SeedSequence, default_rng ss = SeedSequence(12345) trials = 16 child_seeds = ss.spawn(trials) maxit = 1000 * 100 @ray.remote(num_cpus=1) def descent(A, algo): if algo == 'lbfgs': return lbfgs(A, memory, armijo=True, max_iter=maxit) else: return gd(A, max_iter=maxit, armijo=True) @ray.remote def trial(seed): rng = default_rng(seed) vt = haar(d, d, rng) u = haar(n, d, rng) A = (u, s, vt) lbs = descent.remote(A, 'lbfgs') gds = descent.remote(A, 'gd') lbs = ray.get(lbs) gds = ray.get(gds) lbsnp = np.full(maxit, min(lbs)) gdsnp = np.full(maxit, min(gds)) lbsnp[:len(lbs)] = lbs gdsnp[:len(gds)] = gds return lbsnp, gdsnp lbs_gm = np.zeros(maxit) gds_gm = np.zeros(maxit) for i, fut in enumerate([trial.remote(seed) for seed in child_seeds]): lbs, gds = ray.get(fut) lbs_gm += np.log(lbs) gds_gm += np.log(gds) lbs_gm /= trials gds_gm /= trials matrix_name = r'logspace eigenvals, $\kappa = 10^{{{kappa_log10}}}$, GM over {trials} trials'.format(kappa_log10=kappa_log10, trials=trials) plt.semilogy(np.exp(gds_gm), c='b', label='GD') plt.semilogy(np.exp(lbs_gm), c=color, label=r'L-BFGS ($m = {memory}$)'.format(memory=memory)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show() """ Explanation: So, as we can see above, it's not quite right to look to either $\cos\theta_k$ nor $\|(B_k-\nabla^2_k)\mathbf{p}_k\|/\|\mathbf{p}_k\|$ to demonstrate L-BFGS convergence (the latter should tend to zero per BFGS theory as memory tends to infinity). But at least for quadratic functions, perhaps it's possible to work out the linear rate acceleration observed earlier via some matrix algebra. A follow-up question by Brian Borchers was what happens in the ill-conditioned regime. 
Unfortunately, the Wolfe search no longer converges, for GD and L-BFGS. Switching to backtracking-only stabilizes the descent. We end up with noisier curves so I geometrically average over a few samples. Note the rates are all still linear but much worse. End of explanation """
ldiary/marigoso
notebooks/an_example_of_using_jupyter_for_documenting_and_automating_bdd_style_tests.ipynb
mit
from marigoso import Test
browser = Test().launch_browser("Firefox")
browser.get_url("https://www.blogger.com/")
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Blogger"
"""
Explanation: An example of using Jupyter for Documenting and Automating BDD Style Tests
This is a sample document which contains both software feature requirements and their corresponding manual and automated tests. It is an executable document, which can be shared between business analysts, developers, (manual and/or automation) testers and other stakeholders. Aggregating all this information in a single executable file helps maintain the synchronization between the requirements, the manual tests and the automated tests, and makes outdated requirements, manual test steps or automation test steps easy to identify.
Because the document is written as a Jupyter (IPython) notebook, tests can easily be arranged in BDD (Gherkin) style without requiring any third-party BDD framework. The tests can be executed one cell at a time, or all in one go. They can also be discovered and run using pytest, which means they can be imported into a Continuous Integration environment such as Jenkins.
Requirements Summary
|Scenario | Can Not Post Comment as Anonymous|
|:-------:|------------------------------------------------------------------------|
|Given| I am a Blogger anonymous user. |
|And | I am not logged in to any Google account. |
|When | I post a comment to a blog post. |
|Then | the comment input must be successful. |
|But | I must be prompted to login first before post can be completed. |
Manual and Automated Test Steps
Given I am a Blogger anonymous user.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|01| Launch a browser and navigate to the Blogger website.| The loaded page should contain a header asking you to sign in to Blogger.|
End of explanation
"""
browser.get_url("https://mail.google.com/")
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Gmail"
"""
Explanation: And I am not logged in to any Google account.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|02| Navigate to any other Google service you are subscribed to, e.g. Gmail.| The loaded page should contain a header asking you to sign in to that Google service.|
End of explanation
"""
browser.get_url("http://pytestuk.blogspot.co.uk/2015/11/testing.html")
browser.press_available("id=cookieChoiceDismiss")
iframe = browser.get_element("css=div#bc_0_0T_box iframe")
browser.switch_to.frame(iframe)
browser.kb_type("id=commentBodyField", "An example of Selenium automation in Python.")
assert browser.select_text("id=identityMenu", "Google Account")
"""
Explanation: When I post a comment to a blog post.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|03| Navigate to a particular post in Blogger.| Page must load successfully.|
|04| If there is a Cookie Notice from Google, dismiss it.| Cookie notice must be dismissed successfully.|
|05| Provide the following input: | Input must be successful.|
| | Comment body| An example of Selenium automation in Python.|
| | Comment as | Google Account|
End of explanation
"""
browser.submit_btn("Publish")
assert not browser.is_available("id=main-error")
"""
Explanation: Then the comment input must be successful.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|06| Press the "Publish" button at the bottom of the page.| The page must be submitted without errors.|
End of explanation
"""
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Blogger"
browser.quit()

import time
localtime = time.asctime(time.localtime(time.time()))
print("All tests passed on {}.".format(localtime))
"""
Explanation: But I must be prompted to login first before post can be completed.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|07| Observe the landing page after submitting the "Publish" button.| The page must ask you to login to Blogger.|
End of explanation
"""
CalPolyPat/phys202-2015-work
days/day12/Integration.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import numpy as np """ Explanation: Numerical Integration Learning Objectives: Learn how to numerically integrate 1d and 2d functions that are represented as Python functions or numerical arrays of data using scipy.integrate. This lesson was orginally developed by Jennifer Klay under the terms of the MIT license. The original version is in this repo (https://github.com/Computing4Physics/C4P). Her materials was in turn based on content from the Computational Physics book by Mark Newman at University of Michigan, materials developed by Matt Moelter and Jodi Christiansen for PHYS 202 at Cal Poly, as well as the SciPy tutorials. Imports End of explanation """ func = lambda x: x**4 - 2*x + 1 N = 10 a = 0.0 b = 2.0 h = (b-a)/N k = np.arange(1,N) I = h*(0.5*func(a) + 0.5*func(b) + func(a+k*h).sum()) print(I) """ Explanation: Introduction We often calculate integrals in physics (electromagnetism, thermodynamics, quantum mechanics, etc.). In calculus, you learned how to evaluate integrals analytically. Some functions are too difficult to integrate analytically and for these we need to use the computer to integrate numerically. A numerical integral goes back to the basic principles of calculus. Given a function $f(x)$, we need to find the area under the curve between two limits, $a$ and $b$: $$ I(a,b) = \int_a^b f(x) dx $$ There is no known way to calculate such an area exactly in all cases on a computer, but we can do it approximately by dividing up the area into rectangular slices and adding them all together. Unfortunately, this is a poor approximation, since the rectangles under and overshoot the function: <img src="rectangles.png" width=400> Trapezoidal Rule A better approach, which involves very little extra work, is to divide the area into trapezoids rather than rectangles. 
The area under the trapezoids is a considerably better approximation to the area under the curve, and this approach, though simple, often gives perfectly adequate results. <img src="trapz.png" width=420> We can improve the approximation by making the size of the trapezoids smaller. Suppose we divide the interval from $a$ to $b$ into $N$ slices or steps, so that each slice has width $h = (b − a)/N$ . Then the right-hand side of the $k$ th slice falls at $a+kh$, and the left-hand side falls at $a+kh−h$ = $a+(k−1)h$ . Thus the area of the trapezoid for this slice is $$ A_k = \tfrac{1}{2}h[ f(a+(k−1)h)+ f(a+kh) ] $$ This is the trapezoidal rule. It gives us a trapezoidal approximation to the area under one slice of our function. Now our approximation for the area under the whole curve is the sum of the areas of the trapezoids for all $N$ slices $$ I(a,b) \simeq \sum\limits_{k=1}^N A_k = \tfrac{1}{2}h \sum\limits_{k=1}^N [ f(a+(k−1)h)+ f(a+kh) ] = h \left[ \tfrac{1}{2}f(a) + \tfrac{1}{2}f(b) + \sum\limits_{k=1}^{N-1} f(a+kh)\right] $$ Note the structure of the formula: the quantity inside the square brackets is a sum over values of $f(x)$ measured at equally spaced points in the integration domain, and we take a half of the values at the start and end points but one times the value at all the interior points. Applying the Trapezoidal rule Use the trapezoidal rule to calculate the integral of $x^4 − 2x + 1$ from $x$ = 0 to $x$ = 2. This is an integral we can do by hand, so we can check our work. To define the function, let's use a lambda expression (you learned about these in the advanced python section of CodeCademy). It's basically just a way of defining a function of some variables in one line. 
For this case, it is just a function of x: End of explanation """ N = 10 a = 0.0 b = 2.0 h = (b-a)/N k1 = np.arange(1,N/2+1) k2 = np.arange(1,N/2) I = (1./3.)*h*(func(a) + func(b) + 4.*func(a+(2*k1-1)*h).sum() + 2.*func(a+2*k2*h).sum()) print(I) """ Explanation: The correct answer is $$ I(0,2) = \int_0^2 (x^4-2x+1)dx = \left[\tfrac{1}{5}x^5-x^2+x\right]_0^2 = 4.4 $$ So our result is off by about 2%. Simpson's Rule The trapezoidal rule estimates the area under a curve by approximating the curve with straight-line segments. We can often get a better result if we approximate the function instead with curves of some kind. Simpson's rule uses quadratic curves. In order to specify a quadratic completely one needs three points, not just two as with a straight line. So in this method we take a pair of adjacent slices and fit a quadratic through the three points that mark the boundaries of those slices. Given a function $f(x)$ and spacing between adjacent points $h$, if we fit a quadratic curve $ax^2 + bx + c$ through the points $x$ = $-h$, 0, $+h$, we get $$ f(-h) = ah^2 - bh + c, \hspace{1cm} f(0) = c, \hspace{1cm} f(h) = ah^2 +bh +c $$ Solving for $a$, $b$, and $c$ gives: $$ a = \frac{1}{h^2}\left[\tfrac{1}{2}f(-h) - f(0) + \tfrac{1}{2}f(h)\right], \hspace{1cm} b = \frac{1}{2h}\left[f(h)-f(-h)\right], \hspace{1cm} c = f(0) $$ and the area under the curve of $f(x)$ from $-h$ to $+h$ is given approximately by the area under the quadratic: $$ I(-h,h) \simeq \int_{-h}^h (ax^2+bx+c)dx = \tfrac{2}{3}ah^3 + 2ch = \tfrac{1}{3}h[f(-h)+4f(0)+f(h)] $$ This is Simpson’s rule. It gives us an approximation to the area under two adjacent slices of our function. Note that the final formula for the area involves only $h$ and the value of the function at evenly spaced points, just as with the trapezoidal rule. So to use Simpson’s rule we don’t actually have to worry about the details of fitting a quadratic—we just plug numbers into this formula and it gives us an answer. 
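The weighted-sum structure can be made concrete: with $N$ even, Simpson's rule is just the samples dotted against weights $\tfrac{h}{3}[1, 4, 2, 4, \dots, 2, 4, 1]$. A standalone sketch (this is not one of the lesson's own cells):

```python
import numpy as np

f = lambda x: x**4 - 2*x + 1
a, b, N = 0.0, 2.0, 10                      # N must be even
x, h = np.linspace(a, b, N + 1, retstep=True)
w = np.ones(N + 1)
w[1:-1:2] = 4.0                             # odd interior sample points
w[2:-1:2] = 2.0                             # even interior sample points
I = (h / 3) * w.dot(f(x))                   # Simpson's rule as a dot product
```

This evaluates to about 4.40043, very close to the exact 4.4.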
This makes Simpson’s rule almost as simple to use as the trapezoidal rule, and yet Simpson’s rule often gives much more accurate answers. Applying Simpson’s rule involves dividing the domain of integration into many slices and using the rule to separately estimate the area under successive pairs of slices, then adding the estimates for all pairs to get the final answer. If we are integrating from $x = a$ to $x = b$ in slices of width $h$ then Simpson’s rule gives the area under the $k$ th pair, approximately, as $$ A_k = \tfrac{1}{3}h[f(a+(2k-2)h)+4f(a+(2k-1)h) + f(a+2kh)] $$ With $N$ slices in total, there are $N/2$ pairs of slices, and the approximate value of the entire integral is given by the sum $$ I(a,b) \simeq \sum\limits_{k=1}^{N/2}A_k = \tfrac{1}{3}h\left[f(a)+f(b)+4\sum\limits_{k=1}^{N/2}f(a+(2k-1)h)+2\sum\limits_{k=1}^{N/2-1}f(a+2kh)\right] $$ Note that the total number of slices must be even for Simpson's rule to work. Applying Simpson's rule Now let's code Simpson's rule to compute the integral of the same function from before, $f(x) = x^4 - 2x + 1$ from 0 to 2. End of explanation """ import scipy.integrate as integrate integrate? """ Explanation: Adaptive methods and higher order approximations In some cases, particularly for integrands that are rapidly varying, a very large number of steps may be needed to achieve the desired accuracy, which means the calculation can become slow. So how do we choose the number $N$ of steps for our integrals? In our example calculations we just chose round numbers and looked to see if the results seemed reasonable. A more common situation is that we want to calculate the value of an integral to a given accuracy, such as four decimal places, and we would like to know how many steps will be needed. 
So long as the desired accuracy does not exceed the fundamental limit set by the machine precision of our computer— the rounding error that limits all calculations—then it should always be possible to meet our goal by using a large enough number of steps. At the same time, we want to avoid using more steps than are necessary, since more steps take more time and our calculation will be slower. Ideally we would like an $N$ that gives us the accuracy we want and no more. A simple way to achieve this is to start with a small value of $N$ and repeatedly double it until we achieve the accuracy we want. This method is an example of an adaptive integration method, one that changes its own parameters to get a desired answer. The trapezoidal rule is based on approximating an integrand $f(x)$ with straight-line segments, while Simpson’s rule uses quadratics. We can create higher-order (and hence potentially more accurate) rules by using higher-order polynomials, fitting $f(x)$ with cubics, quartics, and so forth. The general form of the trapezoidal and Simpson rules is $$ \int_a^b f(x)dx \simeq \sum\limits_{k=1}^{N}w_kf(x_k) $$ where the $x_k$ are the positions of the sample points at which we calculate the integrand and the $w_k$ are some set of weights. In the trapezoidal rule, the first and last weights are $\tfrac{1}{2}$ and the others are all 1, while in Simpson’s rule the weights are $\tfrac{1}{3}$ for the first and last slices and alternate between $\tfrac{4}{3}$ and $\tfrac{2}{3}$ for the other slices. For higher-order rules the basic form is the same: after fitting to the appropriate polynomial and integrating we end up with a set of weights that multiply the values $f(x_k)$ of the integrand at evenly spaced sample points. Notice that the trapezoidal rule is exact if the function being integrated is actually a straight line, because then the straight-line approximation isn’t an approximation at all. 
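The doubling strategy described above is straightforward to sketch. Reusing the old sample points means each doubling only evaluates the integrand at the new midpoints (an illustration under those assumptions, not code from the lesson):

```python
import numpy as np

def adaptive_trapz(f, a, b, tol=1e-6, max_doublings=25):
    """Trapezoidal rule, doubling N until successive estimates agree."""
    N, h = 1, b - a
    I = 0.5 * h * (f(a) + f(b))
    for _ in range(max_doublings):
        mid = a + h / 2 + h * np.arange(N)      # midpoints of the current slices
        I_new = 0.5 * I + (h / 2) * f(mid).sum()  # reuse old points, add midpoints
        N, h = 2 * N, h / 2
        if abs(I_new - I) < tol:
            return I_new
        I = I_new
    return I

I = adaptive_trapz(lambda x: x**4 - 2*x + 1, 0.0, 2.0)
```

For the quartic used in this lesson the loop stops after a dozen or so doublings, with an answer within about $10^{-6}$ of the exact 4.4.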
Similarly, Simpson’s rule is exact if the function being integrated is a quadratic, and so on for higher order polynomials. There are other more advanced schemes for calculating integrals that can achieve high accuracy while still arriving at an answer quickly. These typically combine the higher order polynomial approximations with adaptive methods for choosing the number of slices, in some cases allowing their sizes to vary over different regions of the integrand. One such method, called Gaussian Quadrature - after its inventor, Carl Friedrich Gauss, uses Legendre polynomials to choose the $x_k$ and $w_k$ such that we can obtain an integration rule accurate to the highest possible order of $2N−1$. It is beyond the scope of this course to derive the Gaussian quadrature method, but you can learn more about it by searching the literature. Now that we understand the basics of numerical integration and have even coded our own trapezoidal and Simpson's rules, we can feel justified in using scipy's built-in library of numerical integrators that build on these basic ideas, without coding them ourselves. scipy.integrate It is time to look at scipy's built-in functions for integrating functions numerically. Start by importing the library. End of explanation """ fun = lambda x : np.exp(-x)*np.sin(x) result,error = integrate.quad(fun, 0, 2*np.pi) print(result,error) """ Explanation: An overview of the module is provided by the help command, but it produces a lot of output. Here's a quick summary: Methods for Integrating Functions given function object. quad -- General purpose integration. dblquad -- General purpose double integration. tplquad -- General purpose triple integration. fixed_quad -- Integrate func(x) using Gaussian quadrature of order n. quadrature -- Integrate with given tolerance using Gaussian quadrature. romberg -- Integrate func using Romberg integration. Methods for Integrating Functions given fixed samples. 
trapz -- Use trapezoidal rule to compute integral from samples.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from (2**k + 1) evenly-spaced samples.
See the <code>special</code> module's orthogonal polynomials (<code>scipy.special</code>) for Gaussian quadrature roots and weights for other weighting factors and regions.
Interface to numerical integrators of ODE systems.
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.
General integration (quad)
The scipy function quad is provided to integrate a function of one variable between two points. The points can be $\pm\infty$ ($\pm$ np.infty) to indicate infinite limits. For example, suppose you wish to integrate the following:
$$ I = \int_0^{2\pi} e^{-x}\sin(x)dx $$
This could be computed using quad as:
End of explanation
"""
I = integrate.quad(fun, 0, np.infty)
print(I)
"""
Explanation: The first argument to quad is a “callable” Python object (i.e. a function, method, or class instance). Notice that we used a lambda function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. The analytic solution to the integral is
$$ \int_0^{2\pi} e^{-x} \sin(x) dx = \frac{1}{2}\left(1 - e^{-2\pi}\right) \simeq \textrm{0.499066} $$
so that is pretty good. Here it is again, integrated from 0 to infinity:
End of explanation
"""
print(abs(I[0]-0.5))
"""
Explanation: In this case the analytic solution is exactly 1/2, so again pretty good.
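For the record, the exact value follows in one line by writing $\sin x$ as the imaginary part of $e^{ix}$ (a derivation we have added):
$$ \int_0^{\infty} e^{-x}\sin x\,dx = \operatorname{Im}\int_0^{\infty} e^{-(1-i)x}\,dx = \operatorname{Im}\frac{1}{1-i} = \operatorname{Im}\frac{1+i}{2} = \frac{1}{2}. $$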
We can calculate the error in the result by looking at the difference between the exact result and the numerical value from quad with
End of explanation
"""
x = np.arange(0, 20, 2)
y = np.array([0, 3, 5, 2, 8, 9, 0, -3, 4, 9], dtype = float)
plt.plot(x,y)
plt.xlabel('x')
plt.ylabel('y')
#Show the integration area as a filled region
plt.fill_between(x, y, y2=0,color='red',hatch='//',alpha=0.2);
I = integrate.simps(y,x)
print(I)
"""
Explanation: In this case, the numerically-computed integral is within $10^{-16}$ of the exact result — well below the reported error bound.
Integrating array data
When you want to compute the integral for an array of data (such as our thermistor resistance-temperature data from the Interpolation lesson), you don't have the luxury of varying your choice of $N$, the number of slices (unless you create an interpolated approximation to your data). There are three functions for computing integrals given only samples: trapz, simps, and romb. The trapezoidal rule approximates the function as a straight line between adjacent points, while Simpson's rule approximates the function between three adjacent points as a parabola, as we have already seen. The first two functions can also handle non-equally-spaced samples (something we did not code ourselves), which is a useful extension to these integration rules. If the samples are equally-spaced and the number of samples available is $2^k+1$ for some integer $k$, then Romberg integration can be used to obtain high-precision estimates of the integral using the available samples. Romberg integration is an adaptive method that uses the trapezoid rule at step-sizes related by a power of two and then performs something called Richardson extrapolation on these estimates to approximate the integral with a higher degree of accuracy. (A different interface, romberg, is available when the function itself can be provided.) 
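The key ingredient, one Richardson extrapolation step combining trapezoidal estimates whose step sizes differ by a factor of two, can be sketched on its own (our illustration; the exact value of $\int_0^\pi \sin x\,dx$ is 2):

```python
from math import sin, pi

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

coarse = trapezoid(sin, 0, pi, 8)     # step size h
fine = trapezoid(sin, 0, pi, 16)      # step size h/2
richardson = (4 * fine - coarse) / 3  # cancels the leading h**2 error term
print(coarse, fine, richardson)       # richardson lands far closer to 2
```

One such step turns two second-order trapezoidal estimates into a fourth-order one; Romberg integration simply repeats this at every doubling.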
Applying simps to array data Here is an example of using simps to compute the integral for some discrete data: End of explanation """ from scipy.integrate import dblquad #NOTE: the order of arguments matters - inner to outer integrand = lambda x,y: y * np.sin(x) + x * np.cos(y) ymin = 0 ymax = np.pi #The callable functions for the x limits are just constants in this case: xmin = lambda y : np.pi xmax = lambda y : 2*np.pi #See the help for correct order of limits I, err = dblquad(integrand, ymin, ymax, xmin, xmax) print(I, err) dblquad? """ Explanation: Multiple Integrals Multiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad and tplquad. The function dblquad performs double integration. Use the help function to be sure that you define the arguments in the correct order. The limits on all inner integrals are actually functions (which can be constant). Double integrals using dblquad Suppose we want to integrate $f(x,y)=y\sin(x)+x\cos(y)$ over $\pi \le x \le 2\pi$ and $0 \le y \le \pi$: $$\int_{x=\pi}^{2\pi}\int_{y=0}^{\pi} y \sin(x) + x \cos(y) dxdy$$ To use dblquad we have to provide callable functions for the range of the x-variable. Although here they are constants, the use of functions for the limits enables freedom to integrate over non-constant limits. In this case we create trivial lambda functions that return the constants. Note the order of the arguments in the integrand. If you put them in the wrong order you will get the wrong answer. 
End of explanation """ from scipy.integrate import tplquad #AGAIN: the order of arguments matters - inner to outer integrand = lambda x,y,z: y * np.sin(x) + z * np.cos(x) zmin = -1 zmax = 1 ymin = lambda z: 0 ymax = lambda z: 1 #Note the order of these arguments: xmin = lambda y,z: 0 xmax = lambda y,z: np.pi #Here the order of limits is outer to inner I, err = tplquad(integrand, zmin, zmax, ymin, ymax, xmin, xmax) print(I, err) """ Explanation: Triple integrals using tplquad We can also numerically evaluate a triple integral: $$ \int_{x=0}^{\pi}\int_{y=0}^{1}\int_{z=-1}^{1} y\sin(x)+z\cos(x) dxdydz$$ End of explanation """
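Both multiple integrals above separate into products of one-dimensional integrals, so the numerical answers can be checked by hand (derivations added by us). Using $\int_{\pi}^{2\pi}\sin x\,dx = -2$, $\int_{0}^{\pi}\cos y\,dy = 0$, $\int_{0}^{\pi}\sin x\,dx = 2$ and $\int_{-1}^{1}z\,dz = 0$:
$$ \int_{x=\pi}^{2\pi}\int_{y=0}^{\pi} \big(y\sin x + x\cos y\big)\,dy\,dx = \frac{\pi^2}{2}\cdot(-2) + \frac{3\pi^2}{2}\cdot 0 = -\pi^2 \simeq -9.8696 $$
$$ \int_{x=0}^{\pi}\int_{y=0}^{1}\int_{z=-1}^{1} \big(y\sin x + z\cos x\big)\,dz\,dy\,dx = 2\cdot\frac{1}{2}\cdot 2 + 0\cdot 1\cdot 0 = 2 $$
which is what the dblquad and tplquad calls should report, up to their error estimates.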
Cairo4/pythonkurs
02 jupyter notebook, python/02 Jupyter Notebook & Python Intro.ipynb
mit
#With a hashtag at the start of a line we can comment our code; that too is very important.
#Always, really always, comment your own code. Especially in the beginning.
print('hello world')
#The print command simply prints everything out. Not really all that amazing.
#But it is very useful later on, above all when it comes to finding errors in your own code.
#With the input command you can interact with the user.
input('How old are you?')
"""
Explanation: Jupyter Notebook & Python Intro
First we use the command line to navigate to the folder where we want to save the Jupyter notebook. Then we go into our virtual environment and launch the notebook with "jupyter notebook". Jupyter Notebook is a working environment that is very easy for coding beginners to use, because individual pieces of code can be run one at a time. Cells come in two formats: code format and so-called Markdown. The latter is a text format that attaches as little formatting information to the text as possible, unlike Word, for example. When you develop large notebooks it is very helpful to work with it. For example Titel Titel Titel Titel Titel Einmal Zweimal Doppelt? DasischnureBemerkig,wegemGartehaagvornedra. Or bullet lists and bold text. All of that works with Markdown. You can even build tables or set hyperlinks, for example to this Markdown cheatsheet. Here are more very practical formats. As a rule, though, we use Jupyter notebooks not for writing text but for coding. Let's get started.

1. Print and input
2. Data types
3. Operations
4. Variables and assignments
5. If, elif, else
6. Lists
7. Dictionaries
8. Tuples
9. Simple functions: len, sort, sorted
10. For loops

Python
Print and input
End of explanation
"""
#Strings
'Hello how are you'
"12345"
str(124)
str(1230847120934)
#Integers
567
int('1234')
#Floats
4.542323
float(12)
#Dates, really just strings
#Python reads dates as strings, but they are one of the most important special cases. Apart from that, the basic types are str, int and float.
'15-11-2019'
type("12")
"""
Explanation: Data types
End of explanation
"""
#Adding strings with +
print('Hello' + 'how' + 'are' + 'you')
#Joining strings with , adds spaces!
print('Hello','how','are','you')
#All the other usual operators:
#minus -
#times *
#divided by /
#Special case: modulo. %, division plus the remainder that is left over. So 13 contains the five 2 times and then the remainder = 3. Modulo divides out everything that is divisible by (here) five and tells you what is left over. Here: 3
13 % 5
"""
Explanation: Operations
End of explanation
"""
#Greater than and less than:
#< >
#Equality == (important: double equals sign) compares things.
#Because a single = simply *defines* a variable
"Schweiz" == 'reich'
Schweiz = 'reich'
Schweiz == 'reich'
'Schweiz' = 'reich'
1 = 6
a = b
a = 'b'
a == 'b'
a = a
"""
Explanation: Variables, comparisons and assignments of variables
End of explanation
"""
elem = int(input('How old are you?'))
elem
if elem < 0:
    print('That is impossible')
else:
    print('You are quite old')
if elem == 12:
    print("Congratulations on the dozen!")
#elif adds further conditions that are checked in turn if the previous ones did not match.
elem = int(input('How old are you?'))
if elem < 0:
    print('That is impossible')
elif elem < 25:
    print('You are quite young')
else:
    print('You are quite old')
"""
Explanation: if - else - (elif)
End of explanation
"""
#Square brackets
[1,2,"a string in between",3,4,"now comes a float:",5.23,6,7]
lst = [1,2,3,4,5,6,7]
lst
#Single elements - 0 means the first element.
lst[0]
#Whole slices, "up to four" in this example.
lst[:4]
#More complex slices, in this example every second element.
lst[::2]
#append (add), pop (cut off - with empty parentheses the last element is meant by default), etc.
lst.pop()
lst
lst.append(7)
lst.pop()
lst
#Careful with the command list, because it turns things into a list. Even strings:
list('hello how are')
#The most elegant way to write a list. And very importantly,
#the computer always starts counting at 0.
list(range(10))
list(range(5,-1,-1))
"""
Explanation: Lists
End of explanation
"""
#Strange curly braces
{'Animal': 'Dog', 'Size': 124, 'Age': 10}
dct = {'Animal': 'Dog', 'Size': 124, 'Age': 10}
dct
dct['Size']
#List of dictionaries
dct_lst = [{'Animal': 'Dog', 'Size': 124, 'Age': 10}, {'Animal': 'Cat', 'Size': 130, 'Age': 8}]
dct_lst[0]
dct_lst[0]['Age']
dct_lst[1]["Age"]
"""
Explanation: Dictionaries
Use curly braces here.
End of explanation
"""
tuple(lst)
#Immutable, so a good format for storing things away.
#But this really is only here for completeness.
"""
Explanation: Tuples
Round brackets are king here.
End of explanation
"""
#len with strings (len for length) - simply counts the elements.
len('hello how are you')
#len with lists
len([1,2,3,4,4,5])
#len with dictionaries
len({'Animal': 'Dog', 'Age': 345})
#len with tuples
len((1,1,1,2,2,1))
#sorted gives a temporarily sorted copy
sorted('hello how are you')
a = 'hello how are you'
sorted(a)
a
#sort, however, "only" works with lists
lst = [1, 5, 9, 10, 34, 12, 12, 14]
lst.sort()
lst
dic = {'Animal': 'Dog', 'Age': 345}
dic.sort()
"""
Explanation: Simple functions - len and sort
Note how they are called: with round brackets.
End of explanation
"""
lst
for x in lst:
    print(x)
dic = {'Animal': 'Dog', 'Age': 345}
for key, value in dic.items():
    print(key, value)
#for loop to make new lists
lst
#Let's assume we only want the even numbers in the list
new_lst = []
for elem in lst:
    if elem % 2 == 0:
        new_lst.append(elem)
    else:
        continue
new_lst
lst
"""
Explanation: For Loop
End of explanation
"""
dic_lst = [{'Animal': 'Dog', 'Size': 45}, {'Animal': 'Cat', 'Size': 23}, {'Animal': 'Bird', 'Size': 121212}]
for dic in dic_lst:
    print(dic)
for dic in dic_lst:
    print(dic['Animal'])
for dic in dic_lst:
    print(dic['Animal'] + ': ' + str(dic['Size']))
"""
Explanation: For loop with list of dictionaries
End of explanation
"""
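As a compact alternative to the even-number filter above (our addition, not part of the original course), the same loop can be written as a list comprehension:

```python
lst = [1, 5, 9, 10, 34, 12, 12, 14]
# keep only the even numbers, just like the for-loop with % 2 above
new_lst = [elem for elem in lst if elem % 2 == 0]
print(new_lst)  # [10, 34, 12, 12, 14]
```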
EvanBianco/striplog
tutorial/Basic_objects.ipynb
apache-2.0
import striplog
striplog.__version__
"""
Explanation: Basic objects
A striplog depends on a hierarchy of objects. This notebook shows the objects and their basic functionality.

- Lexicon: A dictionary containing the words and word categories to use for rock descriptions.
- Component: A set of attributes.
- Interval: One element from a Striplog — consists of a top, base, a description, one or more Components, and a source.

Striplogs (a set of Intervals) are described in a separate notebook. Decors and Legends are also described in another notebook.
End of explanation
"""
from striplog import Lexicon
print(Lexicon.__doc__)
lexicon = Lexicon.default()
lexicon
lexicon.synonyms
"""
Explanation: <hr /> Lexicon
End of explanation
"""
lexicon.find_synonym('Halite')
s = "grysh gn ss w/ sp gy sh"
lexicon.expand_abbreviations(s)
"""
Explanation: Most of the lexicon works 'behind the scenes' when processing descriptions into Rock components.
End of explanation
"""
from striplog import Component
print(Component.__doc__)
"""
Explanation: <hr /> Component
A set of attributes. All are optional.
End of explanation
"""
r = {'colour': 'grey', 'grainsize': 'vf-f', 'lithology': 'sand'}
rock = Component(r)
rock
"""
Explanation: We define a new rock with a Python dict object:
End of explanation
"""
rock.colour
"""
Explanation: The Rock has a colour:
End of explanation
"""
rock.summary()
"""
Explanation: And it has a summary, which is generated from its attributes. 
End of explanation
"""
rock.summary(fmt="My rock: {lithology} ({colour}, {GRAINSIZE})")
"""
Explanation: We can format the summary if we wish:
End of explanation
"""
rock2 = Component({'grainsize': 'VF-F', 'colour': 'Grey', 'lithology': 'Sand'})
rock == rock2
"""
Explanation: We can compare rocks with the usual == operator:
End of explanation
"""
rock3 = Component.from_text('Grey fine sandstone.', lexicon)
rock3
rock4 = Component.from_text('Grey, sandstone, vf-f ', lexicon)
rock4
"""
Explanation: In order to create a Component object from text, we need a lexicon to compare the text against. The lexicon describes the language we want to extract, and what it means.
End of explanation
"""
from striplog import Interval
print(Interval.__doc__)
"""
Explanation: <hr /> Interval
Intervals are where it gets interesting. An interval can have:

- a top
- a base
- a description (in natural language)
- a list of Components

Intervals don't have a 'way up'; it's implied by the order of top and base.
End of explanation
"""
Interval(10, 20, components=[rock])
"""
Explanation: I might make an Interval explicitly from a Component...
End of explanation
"""
Interval(20, 40, "Grey sandstone with shale flakes.", lexicon=lexicon)
"""
Explanation: ... or I might pass a description and a lexicon and Striplog will parse the description and attempt to extract structured Component objects from it.
End of explanation
"""
interval = Interval(20, 40, "Grey sandstone with black shale flakes.", lexicon=lexicon, max_component=2)
interval
"""
Explanation: Notice I only got one Component, even though the description contains a subordinate lithology. This is the default behaviour; we have to ask for more components:
End of explanation
"""
interval.primary
"""
Explanation: Intervals have a primary attribute, which holds the first component, no matter how many components there are. 
End of explanation
"""
interval.summary(fmt="{colour} {lithology} {amount}")
"""
Explanation: Ask for the summary to see the thickness and a Rock summary of the primary component. Note that the format code only applies to the Rock part of the summary.
End of explanation
"""
interval_2 = Interval(40, 65, "Red sandstone.", lexicon=lexicon)
"""
Explanation: We can compare intervals, based on their thickness. Let's make one which is 5 m thicker than the previous one.
End of explanation
"""
print(interval_2 == interval)
print(interval_2 > interval)
print(max(interval, interval_2).summary())
"""
Explanation: Technical aside: The Interval class is a functools.total_ordering, so providing __eq__ and one other comparison (such as __lt__) in the class definition means that instances of the class have implicit order. So you can use sorted on a Striplog, for example. It wasn't clear to me whether this should compare tops (say), so that '>' might mean 'deeper', or if it should be keyed on thickness. I chose the latter, and implemented other comparisons instead.
End of explanation
"""
interval_2 + interval
"""
Explanation: We can combine intervals with the + operator. (However, you cannot subtract intervals.)
End of explanation
"""
interval + 5
"""
Explanation: If we add a number to an interval, it adds thickness to the base.
End of explanation
"""
interval + rock3
"""
Explanation: Adding a rock adds a (minor) component and adds to the description.
End of explanation
"""
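The total_ordering mechanism from the technical aside can be demonstrated in isolation with a toy class (our sketch; this is not striplog's actual implementation):

```python
from functools import total_ordering

@total_ordering
class Toy:
    """Ordered by thickness alone, like the Interval comparisons above."""
    def __init__(self, top, base):
        self.top, self.base = top, base

    @property
    def thickness(self):
        return abs(self.base - self.top)

    def __eq__(self, other):
        return self.thickness == other.thickness

    def __lt__(self, other):
        return self.thickness < other.thickness

a, b = Toy(20, 40), Toy(40, 65)
print(b > a)                # True: __gt__ is derived automatically
print(max(a, b).thickness)  # 25
```

Only __eq__ and __lt__ are written out; total_ordering fills in the remaining comparisons, which is why sorted works on a collection of such objects.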
maviator/Kaggle_home_price_prediction
Script/SKlearn models.ipynb
mit
# Adding needed libraries and reading data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.utils import shuffle
from xgboost.sklearn import XGBRegressor
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# get home price train & test csv files as a DataFrame
train = pd.read_csv("../Data/train.csv")
test = pd.read_csv("../Data/test.csv")
full = train.append(test, ignore_index=True)
print(train.shape, test.shape, full.shape)
train.head()
"""
Explanation: How to score 0.11952 and get top 19% by Mohtadi Ben Fraj
Credits
Part of the code for data exploration is taken from this notebook (https://www.kaggle.com/neviadomski/how-to-get-to-top-25-with-simple-model-sklearn/notebook). 
The idea of averaging the models is inspired by this notebook (https://www.kaggle.com/serigne/stacked-regressions-top-4-on-leaderboard)
Importing libraries and data
End of explanation
"""
#Checking for missing data
NAs = pd.concat([train.isnull().sum(), test.isnull().sum()], axis=1, keys=['Train', 'Test'])
NAs[NAs.sum(axis=1) > 0]
"""
Explanation: Check for missing data
End of explanation
"""
# Prints R2 and RMSE scores
def get_score(prediction, labels):
    print('R2: {}'.format(r2_score(prediction, labels)))
    print('RMSE: {}'.format(np.sqrt(mean_squared_error(prediction, labels))))

# Shows scores for train and validation sets
def train_test(estimator, x_trn, x_tst, y_trn, y_tst):
    prediction_train = estimator.predict(x_trn)
    # Printing estimator
    print(estimator)
    # Printing train scores
    get_score(prediction_train, y_trn)
    prediction_test = estimator.predict(x_tst)
    # Printing test scores
    print("Test")
    get_score(prediction_test, y_tst)
"""
Explanation: Helper functions
End of explanation
"""
sns.lmplot(x='GrLivArea', y='SalePrice', data=train)
train = train[train.GrLivArea < 4500]
sns.lmplot(x='GrLivArea', y='SalePrice', data=train)
"""
Explanation: Removing outliers
End of explanation
"""
# Splitting to features and labels
train_labels = train.pop('SalePrice')
features = pd.concat([train, test], keys=['train', 'test'])
# Deleting features that are more than 50% missing
features.drop(['PoolQC', 'MiscFeature', 'FireplaceQu', 'Fence', 'Alley'], axis=1, inplace=True)
features.shape
"""
Explanation: Splitting to features and labels and deleting variables I don't need
End of explanation
"""
# MSZoning NA in pred. Filling with the most popular value
features['MSZoning'] = features['MSZoning'].fillna(features['MSZoning'].mode()[0])
# LotFrontage NA in all. Filling with the mean value
features['LotFrontage'] = features['LotFrontage'].fillna(features['LotFrontage'].mean())
# MasVnrType NA in all. Filling with the most popular value
features['MasVnrType'] = features['MasVnrType'].fillna(features['MasVnrType'].mode()[0])
# MasVnrArea NA in all. Filling with the mean value
features['MasVnrArea'] = features['MasVnrArea'].fillna(features['MasVnrArea'].mean())
# BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1, BsmtFinType2
# NA in all. NA means no basement
for col in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):
    features[col] = features[col].fillna('NoBSMT')
# BsmtFinSF1 and BsmtFinSF2 NA in pred. I suppose NA means 0
features['BsmtFinSF1'] = features['BsmtFinSF1'].fillna(0)
features['BsmtFinSF2'] = features['BsmtFinSF2'].fillna(0)
# BsmtFullBath and BsmtHalfBath NA in all. Filling with the median value
features['BsmtFullBath'] = features['BsmtFullBath'].fillna(features['BsmtFullBath'].median())
features['BsmtHalfBath'] = features['BsmtHalfBath'].fillna(features['BsmtHalfBath'].median())
# BsmtUnfSF NA in all. Filling with the mean value
features['BsmtUnfSF'] = features['BsmtUnfSF'].fillna(features['BsmtUnfSF'].mean())
# Exterior1st and Exterior2nd NA in all. Filling with the most popular value
features['Exterior1st'] = features['Exterior1st'].fillna(features['Exterior1st'].mode()[0])
features['Exterior2nd'] = features['Exterior2nd'].fillna(features['Exterior2nd'].mode()[0])
# Functional NA in all. Filling with the most popular value
features['Functional'] = features['Functional'].fillna(features['Functional'].mode()[0])
# TotalBsmtSF NA in pred. I suppose NA means 0
features['TotalBsmtSF'] = features['TotalBsmtSF'].fillna(0)
# Electrical NA in pred. Filling with the most popular value
features['Electrical'] = features['Electrical'].fillna(features['Electrical'].mode()[0])
# KitchenQual NA in pred. Filling with the most popular value
features['KitchenQual'] = features['KitchenQual'].fillna(features['KitchenQual'].mode()[0])
# GarageArea NA in all. NA means no garage, so 0
features['GarageArea'] = features['GarageArea'].fillna(0.0)
# GarageType, GarageFinish, GarageQual NA in all. NA means no garage
for col in ('GarageType', 'GarageFinish', 'GarageQual', 'GarageCond'):
    features[col] = features[col].fillna('NoGRG')
# GarageCars NA in pred. I suppose NA means 0
features['GarageCars'] = features['GarageCars'].fillna(0.0)
# SaleType NA in pred. Filling with the most popular value
features['SaleType'] = features['SaleType'].fillna(features['SaleType'].mode()[0])
# Utilities NA in all. Filling with the most popular value
features['Utilities'] = features['Utilities'].fillna(features['Utilities'].mode()[0])
# Adding a total square-footage feature and removing the Basement, 1st and 2nd floor features
features['TotalSF'] = features['TotalBsmtSF'] + features['1stFlrSF'] + features['2ndFlrSF']
features.drop(['TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'GarageYrBlt'], axis=1, inplace=True)
features.shape
"""
Explanation: Filling missing values
End of explanation
"""
# Our SalePrice is skewed right (check plot below). I'm log-transforming it.
ax = sns.distplot(train_labels)
## Log transformation of labels
train_labels = np.log(train_labels)
## Now it looks much better
ax = sns.distplot(train_labels)
"""
Explanation: Log transformation
End of explanation
"""
def num2cat(x):
    return str(x)
features['MSSubClass_str'] = features['MSSubClass'].apply(num2cat)
features.pop('MSSubClass')
features.shape
"""
Explanation: Converting categorical features with order to numerical
Converting categorical variables with choices: Ex, Gd, TA, Fa and Po

def cat2numCondition(x):
    if x == 'Ex':
        return 5
    if x == 'Gd':
        return 4
    if x == 'TA':
        return 3
    if x == 'Fa':
        return 2
    if x == 'Po':
        return 1
    return -1

features.shape
cols = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC', 'KitchenQual', 'GarageQual', 'GarageCond']
for col in cols:
    features[col+'_num'] = features[col].apply(cat2numCondition)
    features.pop(col)
features.shape

Converting categorical condition: Gd, Av, Mn, No

def cat2numBsmnt(x):
    if x == 'Gd':
        return 3
    if x == 'Av':
        return 2
    if x == 'Mn':
        return 1
    if x == 'No':
        return 0
    return -1

features['BsmtExposure_num'] = features['BsmtExposure'].apply(cat2numBsmnt)
features.pop('BsmtExposure')
features.shape

Converting categorical values: GLQ, ALQ, BLQ, Rec, LwQ, Unf

'''
def cat2numQual(x):
    if x == 'GLQ':
        return 5
    if x == 'ALQ':
        return 4
    if x == 'BLQ':
        return 3
    if x == 'Rec':
        return 2
    if x == 'LwQ':
        return 1
    if x == 'Unf':
        return 0
    return -1
'''

cols = ['BsmtFinType1', 'BsmtFinType2']
for col in cols:
    features[col+'_num'] = features[col].apply(cat2numCondition)
    features.pop(col)
features.shape
End of explanation
"""
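The ordinal encoding sketched in the explanation above can also be expressed with a plain mapping dictionary; this standalone version (our sketch, with hypothetical names) behaves like cat2numCondition:

```python
# ordinal quality scale shared by several columns: Ex > Gd > TA > Fa > Po
quality_scale = {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1}

def quality_to_num(value, default=-1):
    """Map a quality label to its rank; unknown labels (e.g. 'NoBSMT') get default."""
    return quality_scale.get(value, default)

print(quality_to_num('Ex'), quality_to_num('Po'), quality_to_num('NoBSMT'))  # 5 1 -1
```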
Overfitting columns End of explanation """ ### Splitting features train_features = features.loc['train'].drop('Id', axis=1).select_dtypes(include=[np.number]).values test_features = features.loc['test'].drop('Id', axis=1).select_dtypes(include=[np.number]).values """ Explanation: Splitting train and test features End of explanation """ ### Splitting x_train, x_test, y_train, y_test = train_test_split(train_features, train_labels, test_size=0.1, random_state=200) """ Explanation: Splitting to train and validation sets End of explanation """ GBR = GradientBoostingRegressor(n_estimators=12000, learning_rate=0.05, max_depth=3, max_features='sqrt', min_samples_leaf=15, min_samples_split=10, loss='huber') GBR.fit(x_train, y_train) train_test(GBR, x_train, x_test, y_train, y_test) # Average R2 score and standart deviation of 5-fold cross-validation scores = cross_val_score(GBR, train_features, train_labels, cv=5) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) """ Explanation: Modeling 1. Gradient Boosting Regressor End of explanation """ lasso = make_pipeline(RobustScaler(), Lasso(alpha =0.0005, random_state=1)) lasso.fit(x_train, y_train) train_test(lasso, x_train, x_test, y_train, y_test) # Average R2 score and standart deviation of 5-fold cross-validation scores = cross_val_score(lasso, train_features, train_labels, cv=5) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) """ Explanation: 2. LASSO regression End of explanation """ ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3)) ENet.fit(x_train, y_train) train_test(ENet, x_train, x_test, y_train, y_test) # Average R2 score and standart deviation of 5-fold cross-validation scores = cross_val_score(ENet, train_features, train_labels, cv=5) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) """ Explanation: 3. 
Elastic Net Regression End of explanation """ # Retraining models on all train data GBR.fit(train_features, train_labels) lasso.fit(train_features, train_labels) ENet.fit(train_features, train_labels) def averaginModels(X, train, labels, models=[]): for model in models: model.fit(train, labels) predictions = np.column_stack([ model.predict(X) for model in models ]) return np.mean(predictions, axis=1) test_y = averaginModels(test_features, train_features, train_labels, [GBR, lasso, ENet]) test_y = np.exp(test_y) """ Explanation: Averaging models End of explanation """ test_id = test.Id test_submit = pd.DataFrame({'Id': test_id, 'SalePrice': test_y}) test_submit.shape test_submit.head() test_submit.to_csv('house_price_pred_avg_gbr_lasso_enet.csv', index=False) """ Explanation: Submission End of explanation """
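One detail worth noting (our observation, not stated in the notebook): because the models are fit to log prices and their predictions are averaged before np.exp is applied, the blend is the geometric mean of the individual price predictions rather than the arithmetic mean. A small standalone check:

```python
import numpy as np

log_preds = np.array([11.0, 11.2, 11.6])  # hypothetical log-price predictions from three models
blended = np.exp(log_preds.mean())        # what averaging-then-exponentiating computes
geometric = np.prod(np.exp(log_preds)) ** (1 / 3)  # geometric mean of the price predictions
print(np.isclose(blended, geometric))     # True
```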
amozie/amozie
testzie/keras_logistic_regression.ipynb
apache-2.0
from keras.layers import *
from keras.models import *
from keras.optimizers import *
from keras.callbacks import *
import keras
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import itertools
%matplotlib inline
"""
Explanation: Table of Contents

1. logistic regression: data, model, predict, contour
2. polynomial logistic regression: data, model, predict, contour

End of explanation
"""
X = np.random.rand(1000, 2)
Y = np.where(X[:, 0] * X[:, 1] > 0.16, 1, 0)[:, np.newaxis]
plt.scatter(X[:, 0], 
X[:, 1], c=Y[:, 0]) """ Explanation: logistic regression data End of explanation """ model_x = Input((2, )) model_y = Dense(1, activation='sigmoid')(model_x) model = Model(model_x, model_y) model.compile( loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy']) hist = model.fit(X, Y, batch_size=50, epochs=500, verbose=0) print(model.evaluate(X, Y, verbose=0)) """ Explanation: model End of explanation """ pred = model.predict(X) > 0.5 Y_pred = np.where(pred, 1, 0) cond1 = np.logical_and(Y == 1, Y != Y_pred).flatten() cond0 = np.logical_and(Y == 0, Y != Y_pred).flatten() plt.scatter(X[:, 0], X[:, 1], c=Y[:, 0], marker='.') plt.scatter(X[cond1][:, 0], X[cond1][:, 1], c='r', marker='x') plt.scatter(X[cond0][:, 0], X[cond0][:, 1], c='g', marker='x') """ Explanation: predict End of explanation """ px, py = np.meshgrid(np.linspace(0, 1), np.linspace(0, 1)) pxy = np.vstack((px.flatten(), py.flatten())).T pz = model.predict(pxy).reshape(50, 50) # pz = np.where(pz > 0.5, 1, 0) plt.contourf(px, py, pz, 1, cmap=plt.cm.binary_r) # plt.pcolormesh(px, py, pz, cmap=plt.cm.binary_r) plt.colorbar() plt.contour(px, py, pz, [0.5], colors='k') plt.scatter(X[:, 0], X[:, 1], c=Y[:, 0], marker='.') plt.tricontourf(X[:,0], X[:,1], Y_pred[:,0], 1, cmap=plt.cm.binary_r) plt.colorbar() plt.tricontour(X[:,0], X[:,1], Y_pred[:,0], [0.5], colors='k') plt.scatter(X[:, 0], X[:, 1], c=Y[:, 0], marker='.') """ Explanation: contour End of explanation """ X = np.random.rand(1000, 2) Y = np.where((X[:, 0]-0.5)**2/9 + (X[:, 1]-0.5)**2/6 < 0.01 + np.random.randn(1000)/300, 1, 0)[:, np.newaxis] plt.scatter(X[:, 0], X[:, 1], c=Y[:, 0]) """ Explanation: polynomial logistic regression data End of explanation """ def to_polynomial(x, y, n): l = [] for i in range(n+1): for j in range(i+1): if i==0: continue l.append(x**(i-j) * y**j) return l model_x = Input((2, )) model_y = Lambda(lambda x: K.map_fn(lambda y: K.stack(to_polynomial(y[0], y[1], 6)), x))(model_x) model_y = Dense(1, 
activation='sigmoid')(model_y) model = Model(model_x, model_y) model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy']) hist = model.fit(X, Y, batch_size=50, epochs=500, verbose=0) """ Explanation: model End of explanation """ pred = model.predict(X) Y_pred = np.where(pred>0.5, 1, 0) cond0 = np.logical_and(Y==0, Y!=Y_pred).flatten() cond1 = np.logical_and(Y==1, Y!=Y_pred).flatten() plt.scatter(X[:,0], X[:,1], c=Y[:,0]) plt.scatter(X[cond0][:,0], X[cond0][:,1], c='g', marker='x') plt.scatter(X[cond1][:,0], X[cond1][:,1], c='r', marker='x') """ Explanation: predict End of explanation """ px, py = np.meshgrid(np.linspace(0, 1), np.linspace(0, 1)) pxy = np.vstack([px.flatten(), py.flatten()]).T pz = model.predict(pxy).reshape(50, 50) plt.contourf(px, py, pz, cmap=plt.cm.binary_r) plt.colorbar() plt.contour(px, py, pz, [0.5], colors='k') plt.scatter(X[:,0], X[:,1], c=Y[:,0], marker='.') """ Explanation: contour End of explanation """
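The polynomial feature map inside the Lambda layer can be sanity-checked without Keras; this plain-Python equivalent (our addition) mirrors to_polynomial:

```python
def to_polynomial_py(x, y, n):
    """All monomials x**(i-j) * y**j for total degree i = 1..n, as in the Lambda layer."""
    return [x ** (i - j) * y ** j for i in range(1, n + 1) for j in range(i + 1)]

print(to_polynomial_py(2.0, 3.0, 2))       # [2.0, 3.0, 4.0, 6.0, 9.0] -> x, y, x**2, x*y, y**2
print(len(to_polynomial_py(1.0, 1.0, 6)))  # 27 features feed the Dense layer
```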
jcmgray/xyzpy
docs/examples/visualize matrix.ipynb
mit
import xyzpy as xyz import numpy as np import scipy.linalg as sla """ Explanation: Visualizing Linear Algebra Decompositions In this notebook we just demonstrate the utility function xyzpy.visualize_matrix on various linear algebra decompositions taken from scipy. This function plots matrices with the values of numbers directly mapped to color. By default, complex phase gives the hue, with real positive = blue real negative = orange imaginary positive = purple imaginary negative = green whereas the magnitude gives the saturation, such that $|z| \sim 0$ gives white. End of explanation """ X = np.random.randn(20, 20) + 0.01j * np.random.rand(20, 20) xyz.visualize_matrix(X, figsize=(2, 2)) """ Explanation: First we'll start with a non-symmetric random matrix with some small complex parts: End of explanation """ xyz.visualize_matrix(sla.svd(X), gridsize=(1, 3), figsize=(6, 6)) """ Explanation: Singular Value Decomposition End of explanation """ xyz.visualize_matrix(sla.eig(X), figsize=(4, 4)) """ Explanation: The 1D array of real singular values in decreasing magnitude is shown as a diagonal. Eigen-decomposition End of explanation """ xyz.visualize_matrix(sla.schur(X), figsize=(4, 4)) """ Explanation: Here we see the introduction of many complex numbers far from the real axis. Schur decomposition End of explanation """ xyz.visualize_matrix(sla.schur(X.real), figsize=(4, 4)) """ Explanation: If you look closely here at the color sequence of the left diagonal it follows the eigen decomposition. 
End of explanation """ xyz.visualize_matrix(sla.qr(X), figsize=(4, 4)) """ Explanation: QR Decomposition End of explanation """ xyz.visualize_matrix(sla.polar(X), figsize=(4, 4)) """ Explanation: Polar Decomposition End of explanation """ xyz.visualize_matrix(sla.lu(X), figsize=(6, 6), gridsize=(1, 3)) """ Explanation: LU Decomposition End of explanation """ xyz.visualize_matrix(sla.lu(X, permute_l=True), figsize=(4, 4)) """ Explanation: Multiplying the permutation into the left matrix reorders the rows of the $L$ factor: End of explanation """
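As a quick numerical cross-check of what the LU panels show, scipy's factors really do reconstruct the original matrix — a small sketch independent of xyzpy:

```python
import numpy as np
import scipy.linalg as sla

A = np.random.randn(5, 5)

P, L, U = sla.lu(A)           # A = P @ L @ U
assert np.allclose(P @ L @ U, A)

PL, U2 = sla.lu(A, permute_l=True)  # PL = P @ L: permutation folded into L
assert np.allclose(PL @ U2, A)
assert np.allclose(PL, P @ L)
```

The `permute_l=True` variant is exactly what the last panel visualizes: the same $L$ with its rows reordered by $P$.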
miykael/nipype_tutorial
notebooks/introduction_quickstart.ipynb
bsd-3-clause
import os from os.path import abspath from nipype import Workflow, Node, MapNode, Function from nipype.interfaces.fsl import BET, IsotropicSmooth, ApplyMask from nilearn.plotting import plot_anat %matplotlib inline import matplotlib.pyplot as plt """ Explanation: Nipype Quickstart Existing documentation Visualizing the evolution of Nipype This notebook is taken from reproducible-imaging repository Import a few things from nipype and external libraries End of explanation """ # will use a T1w from ds000114 dataset input_file = abspath("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz") # we will be typing here """ Explanation: Interfaces Interfaces are the core pieces of Nipype. The interfaces are python modules that allow you to use various external packages (e.g. FSL, SPM or FreeSurfer), even if they themselves are written in another programming language than python. Let's try to use bet from FSL: End of explanation """ bet = BET() bet.inputs.in_file = input_file bet.inputs.out_file = "/output/T1w_nipype_bet.nii.gz" res = bet.run() """ Explanation: If you're lost the code is here: End of explanation """ res.outputs """ Explanation: let's check the output: End of explanation """ plot_anat('/output/T1w_nipype_bet.nii.gz', display_mode='ortho', dim=-1, draw_cross=False, annotate=False); """ Explanation: and we can plot the output file End of explanation """ BET.help() """ Explanation: you can always check the list of arguments using help method End of explanation """ # type your code here from nipype.interfaces.fsl import IsotropicSmooth # all this information can be found when we run `help` method. # note that you can either provide `in_file` and `fwhm` or `in_file` and `sigma` IsotropicSmooth.help() """ Explanation: Exercise 1a Import IsotropicSmooth from nipype.interfaces.fsl and find out the FSL command that is being run. What are the mandatory inputs for this interface? 
End of explanation """ # type your solution here smoothing = IsotropicSmooth() smoothing.inputs.in_file = "/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz" smoothing.inputs.fwhm = 4 smoothing.inputs.out_file = "/output/T1w_nipype_smooth.nii.gz" smoothing.run() # plotting the output plot_anat('/output/T1w_nipype_smooth.nii.gz', display_mode='ortho', dim=-1, draw_cross=False, annotate=False); """ Explanation: Exercise 1b Run the IsotropicSmooth for /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz file with a smoothing kernel 4mm: End of explanation """ # we will be typing here """ Explanation: Nodes and Workflows Interfaces are the core pieces of Nipype that run the code of your desire. But to streamline your analysis and to execute multiple interfaces in a sensible order, you have to put them in something that we call a Node and create a Workflow. In Nipype, a node is an object that executes a certain function. This function can be anything from a Nipype interface to a user-specified function or an external script. Each node consists of a name, an interface, and at least one input field and at least one output field. Once you have multiple nodes you can use Workflow to connect with each other and create a directed graph. Nipype workflow will take care of input and output of each interface and arrange the execution of each interface in the most efficient way. Let's create the first node using BET interface: End of explanation """ # Create Node bet_node = Node(BET(), name='bet') # Specify node inputs bet_node.inputs.in_file = input_file bet_node.inputs.mask = True # bet node can be also defined this way: #bet_node = Node(BET(in_file=input_file, mask=True), name='bet_node') """ Explanation: If you're lost the code is here: End of explanation """ # Type your solution here: # smooth_node = smooth_node = Node(IsotropicSmooth(in_file=input_file, fwhm=4), name="smooth") """ Explanation: Exercise 2 Create a Node for IsotropicSmooth interface. 
End of explanation """ mask_node = Node(ApplyMask(), name="mask") """ Explanation: We will now create one more Node for our workflow End of explanation """ ApplyMask.help() """ Explanation: Let's check the interface: End of explanation """ # will be writing the code here: """ Explanation: As you can see the interface takes two mandatory inputs: in_file and mask_file. We want to use the output of smooth_node as in_file and one of the outputs of bet_node (the mask_file) as the mask_file input. Let's initialize a Workflow: End of explanation """ # Initiation of a workflow wf = Workflow(name="smoothflow", base_dir="/output/working_dir") """ Explanation: if you're lost, the full code is here: End of explanation """ # we will be typing here: """ Explanation: It's very important to specify base_dir (as an absolute path), because otherwise all the outputs would be saved somewhere in temporary files. let's connect the bet_node output to the mask_node input End of explanation """ wf.connect(bet_node, "mask_file", mask_node, "mask_file") """ Explanation: if you're lost, the code is here: End of explanation """ # type your code here wf.connect(smooth_node, "out_file", mask_node, "in_file") """ Explanation: Exercise 3 Connect out_file of smooth_node to in_file of mask_node.
End of explanation """ wf.write_graph("workflow_graph.dot") from IPython.display import Image Image(filename="/output/working_dir/smoothflow/workflow_graph.png") """ Explanation: Let's see a graph describing our workflow: End of explanation """ wf.write_graph(graph2use='flat') from IPython.display import Image Image(filename="/output/working_dir/smoothflow/graph_detailed.png") """ Explanation: you can also plot a more detailed graph: End of explanation """ # we will type our code here """ Explanation: and now let's run the workflow End of explanation """ # Execute the workflow res = wf.run() """ Explanation: if you're lost, the full code is here: End of explanation """ # we can check the output of specific nodes from the workflow list(res.nodes)[0].result.outputs """ Explanation: and let's look at the results End of explanation """ ! tree -L 3 /output/working_dir/smoothflow/ """ Explanation: we can see the file structure that has been created: End of explanation """ import numpy as np import nibabel as nb #import matplotlib.pyplot as plt # Let's create a short helper function to plot 3D NIfTI images def plot_slice(fname): # Load the image img = nb.load(fname) data = img.get_data() # Cut in the middle of the brain cut = int(data.shape[-1]/2) + 10 # Plot the data plt.imshow(np.rot90(data[..., cut]), cmap="gray") plt.gca().set_axis_off() f = plt.figure(figsize=(12, 4)) for i, img in enumerate(["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz", "/output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz", "/output/working_dir/smoothflow/bet/sub-01_ses-test_T1w_brain_mask.nii.gz", "/output/working_dir/smoothflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz"]): f.add_subplot(1, 4, i + 1) plot_slice(img) """ Explanation: Iterables Some steps in a neuroimaging analysis are repetitive.
Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin for Workflow, called iterables. <img src="../static/images/iterables.png" width="240"> Let's assume we have a workflow with two nodes: node (A) does simple skull stripping and is followed by a node (B) that does isotropic smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 4mm, 8mm, and 16mm. let's just modify smooth_node: End of explanation """ smooth_node_it = Node(IsotropicSmooth(in_file=input_file), name="smooth") smooth_node_it.iterables = ("fwhm", [4, 8, 16]) """ Explanation: if you're lost the code is here: End of explanation """ bet_node_it = Node(BET(in_file=input_file, mask=True), name='bet_node') mask_node_it = Node(ApplyMask(), name="mask") """ Explanation: we will define again bet and smooth nodes: End of explanation """ # Initiation of a workflow wf_it = Workflow(name="smoothflow_it", base_dir="/output/working_dir") wf_it.connect(bet_node_it, "mask_file", mask_node_it, "mask_file") wf_it.connect(smooth_node_it, "out_file", mask_node_it, "in_file") """ Explanation: will create a new workflow with a new base_dir: End of explanation """ res_it = wf_it.run() """ Explanation: let's run the workflow and check the output End of explanation """ list(res_it.nodes) """ Explanation: let's see the graph End of explanation """ ! tree -L 3 /output/working_dir/smoothflow_it/ """ Explanation: We can see the file structure that was created: End of explanation """ def square_func(x): return x ** 2 square = Function(input_names=["x"], output_names=["f_x"], function=square_func) """ Explanation: you now have 7 nodes instead of 3!
MapNode If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow: <img src="../static/images/mapnode.png" width="325"> Node A outputs a list of files, but node B accepts only one file. Additionally, C expects a list of files. What you would like is to run B for every file in the output of A and collect the results as a list and feed it to C. Let's run a simple numerical example using the nipype Function interface End of explanation """ square_node = Node(square, name="square") square_node.inputs.x = 2 res = square_node.run() res.outputs """ Explanation: If we want to know the result for only one x, we can use a Node: End of explanation """ # NBVAL_SKIP square_node = Node(square, name="square") square_node.inputs.x = [2, 4] res = square_node.run() res.outputs """ Explanation: let's try to ask for more values of x End of explanation """ square_mapnode = MapNode(square, name="square", iterfield=["x"]) square_mapnode.inputs.x = [2, 4] res = square_mapnode.run() res.outputs """ Explanation: It will give an error, since square_func does not accept a list. But we can try MapNode: End of explanation """
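Conceptually, what MapNode adds over a plain Node is just "run the interface once per element of the iterfield and collect the outputs into a list"; a minimal pure-Python sketch of that behaviour (`map_node` here is our stand-in, not a nipype API):

```python
def square_func(x):
    return x ** 2

def map_node(func, inputs):
    # run the function once per input and collect the outputs,
    # mimicking what MapNode(..., iterfield=["x"]) does
    return [func(x) for x in inputs]

print(map_node(square_func, [2, 4]))  # [4, 16]
```

The collected list is then available as a single output, which is exactly what a downstream node expecting a list (node C in the diagram above) consumes.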
gfeiden/MagneticUpperSco
notes/convective_structure.ipynb
mit
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import interp1d """ Explanation: Radiative Cores & Convective Envelopes Analysis of how magnetic fields influence the extent of radiative cores and convective envelopes in young, pre-main-sequence stars. Begin with some preliminaries. End of explanation """ # read standard 10 Myr isochrone iso_std = np.genfromtxt('../models/iso/std/dmestar_00010.0myr_z+0.00_a+0.00_phx.iso') # read standard 5 Myr isochrone iso_5my = np.genfromtxt('../models/iso/std/dmestar_00005.0myr_z+0.00_a+0.00_phx.iso') # read magnetic isochrone iso_mag = np.genfromtxt('../models/iso/mag/dmestar_00010.0myr_z+0.00_a+0.00_phx_magBeq.iso') """ Explanation: Load a standard and magnetic isochrone with equivalent ages. Here, the adopted age is 10 Myr to look specifically at the predicted internal structure of stars in Upper Scorpius. End of explanation """ masses = np.arange(0.09, 1.70, 0.01) # new mass domain # create an interpolation curve for a standard isochrone icurve = interp1d(iso_std[:,0], iso_std, axis=0, kind='cubic') # and transform to new mass domain iso_std_eq = icurve(masses) # create interpolation curve for standard 5 Myr isochrone icurve = interp1d(iso_5my[:,0], iso_5my, axis=0, kind='linear') # and transform to a new mass domain iso_5my_eq = icurve(masses) # create an interpolation curve for a magnetic isochrone icurve = interp1d(iso_mag[:,0], iso_mag, axis=0, kind='cubic') # and transform to new mass domain iso_mag_eq = icurve(masses) """ Explanation: The magnetic isochrone is known to begin at a lower mass than the standard isochrone and both isochrones have gaps where individual models failed to converge. Gaps need not occur at the same masses along each isochrone. To overcome these inconsistencies, we can interpolate both isochrones onto a pre-defined mass domain. 
End of explanation """ plt.plot(10**iso_std[:, 1], iso_std[:, 3], '-', lw=4, color='red') plt.plot(10**iso_std_eq[:, 1], iso_std_eq[:, 3], '--', lw=4, color='black') plt.plot(10**iso_mag[:, 1], iso_mag[:, 3], '-', lw=4, color='blue') plt.plot(10**iso_mag_eq[:, 1], iso_mag_eq[:, 3], '--', lw=4, color='black') plt.grid() plt.xlim(2500., 8000.) plt.ylim(-2, 1.1) plt.xlabel('$T_{\\rm eff}\ [K]$', fontsize=20) plt.ylabel('$\\log(L / L_{\\odot})$', fontsize=20) """ Explanation: Let's compare the interpolated isochrones to the original, just to be sure that the resulting isochrones are smooth. End of explanation """ # as a function of stellar mass plt.plot(iso_std_eq[:, 0], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0], '--', lw=3, color='#333333') plt.plot(iso_5my_eq[:, 0], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0], '-.', lw=3, color='#333333') plt.plot(iso_mag_eq[:, 0], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0], '-' , lw=4, color='#01a9db') plt.grid() plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20) plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20) # as a function of effective temperature plt.plot(10**iso_std_eq[:, 1], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0], '--', lw=3, color='#333333') plt.plot(10**iso_5my_eq[:, 1], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0], '-.', lw=3, color='#333333') plt.plot(10**iso_mag_eq[:, 1], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0], '-' , lw=4, color='#01a9db') plt.grid() plt.xlim(3000., 7000.) plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20) plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20) """ Explanation: The interpolation appears to have worked well as there are no egregious discrepancies between the real and interpolated isochrones. We can now analyze the properties of the radiative cores and the convective envelopes. Beginning with the radiative core, we can look as a function of stellar properties, how much of the total stellar mass is contained in the radiative core. 
End of explanation """ # as a function of stellar mass (note, there is a minus sign switch b/c we tabulate # convective envelope mass) plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_std_eq[:, -1]), '-' , lw=4, color='#01a9db') plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_5my_eq[:, -1]), '--' , lw=4, color='#01a9db') plt.grid() plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20) plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20) """ Explanation: Now let's look at the relative difference in radiative core mass as a function of these stellar properties. End of explanation """ # interpolate into the temperature domain Teffs = np.log10(np.arange(3050., 7000., 50.)) icurve = interp1d(iso_std[:, 1], iso_std, axis=0, kind='linear') iso_std_te = icurve(Teffs) icurve = interp1d(iso_5my[:, 1], iso_5my, axis=0, kind='linear') iso_5my_te = icurve(Teffs) icurve = interp1d(iso_mag[:, 1], iso_mag, axis=0, kind='linear') iso_mag_te = icurve(Teffs) # as a function of stellar mass # (note, there is a minus sign switch b/c we tabulate convective envelope mass) # # plotting: standard - magnetic where + implies plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] - iso_std_te[:, 0] + iso_std_te[:, -1]), '-' , lw=4, color='#01a9db') plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] - iso_5my_te[:, 0] + iso_5my_te[:, -1]), '--' , lw=4, color='#01a9db') np.savetxt('../models/rad_core_comp.txt', np.column_stack((iso_std_te, iso_mag_te)), fmt="%10.6f") np.savetxt('../models/rad_core_comp_dage.txt', np.column_stack((iso_5my_te, iso_mag_te)), fmt="%10.6f") plt.grid() plt.xlim(3000., 7000.) plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20) plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20) """ Explanation: Analysis End of explanation """
AaronCWong/phys202-2015-work
assignments/assignment05/MatplotlibEx03.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 3 Imports End of explanation """ def well2d(x, y, nx, ny, L=1.0): firstsine = (nx*np.pi*x)/L secondsine = ((ny*np.pi*y)/L) psi = np.array(2/L*((np.sin(firstsine)*(np.sin(secondsine))))) return psi psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1) assert len(psi)==10 assert psi.shape==(10,) """ Explanation: Contour plots of 2d wavefunctions The wavefunction of a 2d quantum well is: $$ \psi_{n_x,n_y}(x,y) = \frac{2}{L} \sin{\left( \frac{n_x \pi x}{L} \right)} \sin{\left( \frac{n_y \pi y}{L} \right)} $$ This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well. Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays. End of explanation """ x = np.linspace(0,1,100) y = np.linspace(0,1,100) a,b = np.meshgrid(x,y) well = well2d(a,b,3,2,1) plt.figure(figsize = (15,10)) plt.colorbar(plt.contourf(a,b,well,cmap='cool')) plt.title('Contour Plot of Wavefunction'); axis = plt.gca() axis.get_xaxis().tick_bottom() axis.get_yaxis().tick_left() plt.box(False) plt.xlabel('X') plt.ylabel('Y'); assert True # use this cell for grading the contour plot """ Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction: Use $n_x=3$, $n_y=2$ and $L=1$. Use the limits $[0,1]$ for the x and y axis. Customize your plot to make it effective and beautiful. Use a non-default colormap. Add a colorbar to your visualization.
First make a plot using one of the contour functions: End of explanation """ x = np.linspace(0,1,100) y = np.linspace(0,1,100) a,b = np.meshgrid(x,y) well = well2d(a,b,3,2,1) plt.figure(figsize = (15,10)) plt.colorbar(plt.pcolor(a,b,well,cmap='cool')) plt.title('Contour Plot of Wavefunction'); axis = plt.gca() axis.get_xaxis().tick_bottom() axis.get_yaxis().tick_left() plt.box(False) plt.xlabel('X') plt.ylabel('Y'); assert True # use this cell for grading the pcolor plot """ Explanation: Next make a visualization using one of the pcolor functions: End of explanation """
MLIME/12aMostra
src/Tensorflow Tutorial.ipynb
gpl-3.0
import numpy as np import tensorflow as tf import pandas as pd import util %matplotlib inline """ Explanation: TensorFlow Tutorial: Linear Regression In this tutorial we will build a linear regression model using the TensorFlow library. End of explanation """ # We can look at the beginning of this table df = pd.read_excel('data/fire_theft.xls') df.head() # And we can also look at some basic descriptive statistics df.describe() # converting the dataset into a matrix data = df.as_matrix() data = data.astype('float32') """ Explanation: We will use a very simple dataset: Fire and Theft in Chicago The observations are pairs $(X,Y)$ where $X =$ fires per 1000 housing units $Y =$ thefts per 1000 inhabitants for the city of Chicago. End of explanation """ num_samples = data.shape[0] learning_rate=0.001 num_epochs=101 show_epoch=10 """ Explanation: Before building the model, let's define all the hyperparameters End of explanation """ session = tf.InteractiveSession() # creating the placeholders for the pair (X, Y) tf_number_fire = tf.placeholder(tf.float32, shape=[], name="X") tf_number_theft = tf.placeholder(tf.float32, shape=[], name="Y") # defining the model weights. Both are initialized to 0. tf_weight = tf.get_variable("w", dtype=tf.float32, initializer=0.) tf_bias = tf.get_variable("b", dtype=tf.float32, initializer=0.) # creating the model prediction: prediction = w*x + b tf_prediction = (tf_weight * tf_number_fire) + tf_bias # Defining the loss function as # the mean squared error: (prediction - Y)^2 tf_loss = tf.square(tf_prediction - tf_number_theft) # Defining the optimizer to perform SGD tf_opt = tf.train.GradientDescentOptimizer(learning_rate) tf_optimizer = tf_opt.minimize(tf_loss) """ Explanation: Graph and Session are two central classes in TensorFlow. We assemble the operations in the Graph class (the computation graph) and execute those operations inside a Session. There is always a default graph.
When we use tf.Graph.as_default we override the default graph with the graph defined in that context. An interactive way of running a graph is through tf.InteractiveSession() Let's define the linear regression in the default graph End of explanation """ print('Start training\n') session.run(tf.global_variables_initializer()) step = 0 for i in range(num_epochs): total_loss = 0 for x, y in data: feed_dict = {tf_number_fire: x, tf_number_theft: y} _,loss,w,b = session.run([tf_optimizer,tf_loss, tf_weight, tf_bias], feed_dict=feed_dict) total_loss += loss if i % show_epoch == 0: print("\nEpoch {0}: {1}".format(i, total_loss/num_samples)) """ Explanation: Since we have little data (42 observations), we can train the model by going through each observation one at a time. End of explanation """ r2 = util.r_squared(data,w,b) util.plot_line(data, w, b, "Linear Regression with MSE", r2) """ Explanation: With the model trained, we have the new values for $w$ and $b$. We can then compute the $R^2$ and plot the resulting line End of explanation """ class Config(): """ Class to hold all model hyperparams. :type learning_rate: float :type delta: float :type huber: boolean :type num_epochs: int :type show_epoch: int :type log_path: None or str """ def __init__(self, learning_rate=0.001, delta=1.0, huber=False, num_epochs=101, show_epoch=10, log_path=None): self.learning_rate = learning_rate self.delta = delta self.huber = huber self.num_epochs = num_epochs self.show_epoch = show_epoch if log_path is None: self.log_path = util.get_log_path() else: self.log_path = log_path class LinearRegression: """ Class for the linear regression model :type config: Config """ def __init__(self, config): self.learning_rate = config.learning_rate self.delta = config.delta self.huber = config.huber self.log_path = config.log_path self.build_graph() def create_placeholders(self): """ Method for creating placeholders for input X (number of fire) and label Y (number of theft).
""" self.number_fire = tf.placeholder(tf.float32, shape=[], name="X") self.number_theft = tf.placeholder(tf.float32, shape=[], name="Y") def create_variables(self): """ Method for creating weight and bias variables. """ with tf.name_scope("Weights"): self.weight = tf.get_variable("w", dtype=tf.float32, initializer=0.) self.bias = tf.get_variable("b", dtype=tf.float32, initializer=0.) def create_summaries(self): """ Method to create the histogram summaries for all variables """ tf.summary.histogram('weights_summ', self.weight) tf.summary.histogram('bias_summ', self.bias) def create_prediction(self): """ Method for creating the linear regression prediction. """ with tf.name_scope("linear-model"): self.prediction = (self.number_fire * self.weight) + self.bias def create_MSE_loss(self): """ Method for creating the mean square error loss function. """ with tf.name_scope("loss"): self.loss = tf.square(self.prediction - self.number_theft) tf.summary.scalar("loss", self.loss) def create_Huber_loss(self): """ Method for creating the Huber loss function. """ with tf.name_scope("loss"): residual = tf.abs(self.prediction - self.number_theft) condition = tf.less(residual, self.delta) small_residual = 0.5 * tf.square(residual) large_residual = self.delta * residual - 0.5 * tf.square(self.delta) self.loss = tf.where(condition, small_residual, large_residual) tf.summary.scalar("loss", self.loss) def create_optimizer(self): """ Method to create the optimizer of the graph """ with tf.name_scope("optimizer"): opt = tf.train.GradientDescentOptimizer(self.learning_rate) self.optimizer = opt.minimize(self.loss) def build_graph(self): """ Method to build the computation graph in tensorflow """ self.graph = tf.Graph() with self.graph.as_default(): self.create_placeholders() self.create_variables() self.create_summaries() self.create_prediction() if self.huber: self.create_Huber_loss() else: self.create_MSE_loss() self.create_optimizer() """ Explanation: The code above can be improved.
We can encapsulate the hyperparameters in a class, and do the same for the linear regression model itself. End of explanation """ def run_training(model, config, data, verbose=True): """ Function to train the linear regression model :type model: LinearRegression :type config: Config :type data: np array :type verbose: boolean :rtype total_loss: float :rtype w: float :rtype b: float """ num_samples = data.shape[0] num_epochs = config.num_epochs show_epoch = config.show_epoch log_path = model.log_path with tf.Session(graph=model.graph) as sess: if verbose: print('Start training\n') # functions to write the tensorboard logs summary_writer = tf.summary.FileWriter(log_path,sess.graph) all_summaries = tf.summary.merge_all() # initializing variables tf.global_variables_initializer().run() step = 0 for i in range(num_epochs): # run num_epochs epochs total_loss = 0 for x, y in data: step += 1 feed_dict = {model.number_fire: x, model.number_theft: y} _,loss,summary,w,b = sess.run([model.optimizer, # run optimizer to perform minimization model.loss, all_summaries, model.weight, model.bias], feed_dict=feed_dict) #writing the log summary_writer.add_summary(summary,step) summary_writer.flush() total_loss += loss if i % show_epoch == 0: print("\nEpoch {0}: {1}".format(i, total_loss/num_samples)) if verbose: print("\n========= For TensorBoard visualization type ===========") print("\ntensorboard --logdir={}\n".format(log_path)) return total_loss,w,b my_config = Config() my_model = LinearRegression(my_config) l,w,b = run_training(my_model, my_config, data) """ Explanation: In this model we defined two types of loss function. One of them is called the Huber loss. Recalling the function: $L_{\delta}(y,f(x)) = \frac{1}{2}(y-f(x))^{2}$ if $|y-f(x)|\leq \delta$ $L_{\delta}(y,f(x)) = \delta|y-f(x)| -\frac{1}{2}\delta^{2}$ otherwise End of explanation """ # !tensorboard --logdir= """ Explanation: TensorBoard is a great visualization tool.
We can view the computation graph and track certain metrics over the course of training End of explanation """
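The piecewise Huber loss defined above can be checked with a small NumPy sketch (`huber` here is our own helper, mirroring the `create_Huber_loss` method rather than part of the tutorial's code):

```python
import numpy as np

def huber(residual, delta=1.0):
    # 0.5 * r^2 for |r| <= delta, else delta * |r| - 0.5 * delta^2
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * r - 0.5 * delta ** 2)

out = huber(np.array([0.5, 2.0]))
# 0.5 -> 0.125 (quadratic branch), 2.0 -> 1.5 (linear branch)
```

The two branches meet smoothly at $|r| = \delta$, which is why the Huber loss is more robust to outliers than the plain mean squared error used earlier.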
mne-tools/mne-tools.github.io
0.23/_downloads/da9f4c4e77e7268fbe1384cfc1b249a5/70_eeg_mri_coords.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD Style. import os.path as op import nibabel from nilearn.plotting import plot_glass_brain import numpy as np import mne from mne.channels import compute_native_head_t, read_custom_montage from mne.viz import plot_alignment """ Explanation: EEG source localization given electrode locations on an MRI This tutorial explains how to compute the forward operator from EEG data when the electrodes are in MRI voxel coordinates. End of explanation """ data_path = mne.datasets.sample.data_path() subjects_dir = op.join(data_path, 'subjects') fname_raw = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif') bem_dir = op.join(subjects_dir, 'sample', 'bem') fname_bem = op.join(bem_dir, 'sample-5120-5120-5120-bem-sol.fif') fname_src = op.join(bem_dir, 'sample-oct-6-src.fif') misc_path = mne.datasets.misc.data_path() fname_T1_electrodes = op.join(misc_path, 'sample_eeg_mri', 'T1_electrodes.mgz') fname_mon = op.join(misc_path, 'sample_eeg_mri', 'sample_mri_montage.elc') """ Explanation: Prerequisites For this we will assume that you have: raw EEG data your subject's MRI reconstructed using FreeSurfer an appropriate boundary element model (BEM) an appropriate source space (src) your EEG electrodes in Freesurfer surface RAS coordinates, stored in one of the formats :func:mne.channels.read_custom_montage supports Let's set the paths to these files for the sample dataset, including a modified sample MRI showing the electrode locations plus a .elc file corresponding to the points in MRI coords (these were synthesized <https://gist.github.com/larsoner/0ac6fad57e31cb2d9caa77350a9ff366>__, and thus are stored as part of the misc dataset).
End of explanation """ img = nibabel.load(fname_T1_electrodes) # original subject MRI w/EEG ras_mni_t = mne.transforms.read_ras_mni_t('sample', subjects_dir) # from FS mni_affine = np.dot(ras_mni_t['trans'], img.affine) # vox->ras->MNI img_mni = nibabel.Nifti1Image(img.dataobj, mni_affine) # now in MNI coords! plot_glass_brain(img_mni, cmap='hot_black_bone', threshold=0., black_bg=True, resampling_interpolation='nearest', colorbar=True) """ Explanation: Visualizing the MRI Let's take our MRI-with-eeg-locations and adjust the affine to put the data in MNI space, and plot using :func:nilearn.plotting.plot_glass_brain, which does a maximum intensity projection (easy to see the fake electrodes). This plotting function requires data to be in MNI space. Because img.affine gives the voxel-to-world (RAS) mapping, if we apply a RAS-to-MNI transform to it, it becomes the voxel-to-MNI transformation we need. Thus we create a "new" MRI image in MNI coordinates and plot it as: End of explanation """ dig_montage = read_custom_montage(fname_mon, head_size=None, coord_frame='mri') dig_montage.plot() """ Explanation: Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords Let's load our :class:~mne.channels.DigMontage using :func:mne.channels.read_custom_montage, making note of the fact that we stored our locations in Freesurfer surface RAS (MRI) coordinates. .. collapse:: |question| What if my electrodes are in MRI voxels? :class: info If you have voxel coordinates in MRI voxels, you can transform these to FreeSurfer surface RAS (called "mri" in MNE) coordinates using the transformations that FreeSurfer computes during reconstruction. ``nibabel`` calls this transformation the ``vox2ras_tkr`` transform and operates in millimeters, so we can load it, convert it to meters, and then apply it:: &gt;&gt;&gt; pos_vox = ... 
# loaded from a file somehow &gt;&gt;&gt; img = nibabel.load(fname_T1) &gt;&gt;&gt; vox2mri_t = img.header.get_vox2ras_tkr() # voxel -&gt; mri trans &gt;&gt;&gt; pos_mri = mne.transforms.apply_trans(vox2mri_t, pos_vox) &gt;&gt;&gt; pos_mri /= 1000. # mm -&gt; m You can also verify that these are correct (or manually convert voxels to MRI coords) by looking at the points in Freeview or tkmedit. End of explanation """ trans = compute_native_head_t(dig_montage) print(trans) # should be mri->head, as the "native" space here is MRI """ Explanation: We can then get our transformation from the MRI coordinate frame (where our points are defined) to the head coordinate frame from the object. End of explanation """ raw = mne.io.read_raw_fif(fname_raw) raw.pick_types(meg=False, eeg=True, stim=True, exclude=()).load_data() raw.set_montage(dig_montage) raw.plot_sensors(show_names=True) """ Explanation: Let's apply this digitization to our dataset, and in the process automatically convert our locations to the head coordinate frame, as shown by :meth:~mne.io.Raw.plot_sensors. End of explanation """ raw.set_eeg_reference(projection=True) events = mne.find_events(raw) epochs = mne.Epochs(raw, events) cov = mne.compute_covariance(epochs, tmax=0.) evoked = epochs['1'].average() # trigger 1 in auditory/left evoked.plot_joint() """ Explanation: Now we can do standard sensor-space operations like make joint plots of evoked data. 
End of explanation """ fig = plot_alignment( evoked.info, trans=trans, show_axes=True, surfaces='head-dense', subject='sample', subjects_dir=subjects_dir) """ Explanation: Getting a source estimate New we have all of the components we need to compute a forward solution, but first we should sanity check that everything is well aligned: End of explanation """ fwd = mne.make_forward_solution( evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True) """ Explanation: Now we can actually compute the forward: End of explanation """ inv = mne.minimum_norm.make_inverse_operator( evoked.info, fwd, cov, verbose=True) stc = mne.minimum_norm.apply_inverse(evoked, inv) brain = stc.plot(subjects_dir=subjects_dir, initial_time=0.1) """ Explanation: Finally let's compute the inverse and apply it: End of explanation """
blackjax-devs/blackjax
examples/TemperedSMC.ipynb
apache-2.0
import jax import jax.numpy as jnp import matplotlib.pyplot as plt import numpy as np from jax.scipy.stats import multivariate_normal jax.config.update("jax_platform_name", "cpu") import blackjax import blackjax.smc.resampling as resampling """ Explanation: Use Tempered SMC to improve exploration of MCMC methods. Multimodal distributions are typically hard to sample from, in particular using energy based methods such as HMC, as you need high energy levels to escape a potential well. Tempered SMC helps with this by considering a sequence of distributions: $$p_{\lambda_k}(x) \propto p_0(x) \exp(-\lambda_k V(x))$$ where the tempering parameter $\lambda_k$ takes increasing values between $0$ and $1$. Tempered SMC will also particularly shine when the MCMC step is not well calibrated (too small step size, etc) like in the example below. Imports End of explanation """ def V(x): return 5 * jnp.square(jnp.sum(x**2) - 1) def prior_log_prob(x): d = x.shape[0] return multivariate_normal.logpdf(x, jnp.zeros((d,)), jnp.eye(d)) linspace = jnp.linspace(-2, 2, 5000).reshape(-1, 1) lambdas = jnp.linspace(0.0, 1.0, 5) prior_logvals = jnp.vectorize(prior_log_prob, signature="(d)->()")(linspace) potential_vals = jnp.vectorize(V, signature="(d)->()")(linspace) log_res = prior_logvals.reshape(1, -1) - jnp.expand_dims( lambdas, 1 ) * potential_vals.reshape(1, -1) density = jnp.exp(log_res) normalizing_factor = jnp.sum(density, axis=1, keepdims=True) * ( linspace[1] - linspace[0] ) density /= normalizing_factor fig, ax = plt.subplots(figsize=(12, 8)) ax.plot(linspace.squeeze(), density.T) ax.legend(list(lambdas)) def inference_loop(rng_key, mcmc_kernel, initial_state, num_samples): @jax.jit def one_step(state, k): state, _ = mcmc_kernel(k, state) return state, state keys = jax.random.split(rng_key, num_samples) _, states = jax.lax.scan(one_step, initial_state, keys) return states def full_logprob(x): return -V(x) + prior_log_prob(x) inv_mass_matrix = jnp.eye(1) n_samples = 10_000 """ 
Explanation: Sampling from a bimodal potential Experimental setup We consider a prior distribution $p_0(x) = \mathcal{N}(x \mid 0, 1)$ and a potential function $V(x) = 5 (x^2 - 1)^2$. This corresponds to the following distribution. We plot the resulting tempered density for 5 different values of $\lambda_k$: from $\lambda_k = 1$, which corresponds to the original density, to $\lambda_k = 0$. The lower the value of $\lambda_k$, the easier it is to sample from the posterior density. End of explanation """ %%time key = jax.random.PRNGKey(42) hmc_parameters = dict( step_size=1e-4, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=50 ) hmc = blackjax.hmc(full_logprob, **hmc_parameters) hmc_state = hmc.init(jnp.ones((1,))) hmc_samples = inference_loop(key, hmc.step, hmc_state, n_samples) samples = np.array(hmc_samples.position[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) """ Explanation: Sample with HMC We first try to sample from the posterior density using an HMC kernel. End of explanation """ %%time nuts_parameters = dict(step_size=1e-4, inverse_mass_matrix=inv_mass_matrix) nuts = blackjax.nuts(full_logprob, **nuts_parameters) nuts_state = nuts.init(jnp.ones((1,))) nuts_samples = inference_loop(key, nuts.step, nuts_state, n_samples) samples = np.array(nuts_samples.position[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) """ Explanation: Sample with NUTS We now use a NUTS kernel. End of explanation """ def smc_inference_loop(rng_key, smc_kernel, initial_state): """Run the tempered SMC algorithm. We run the adaptive algorithm until the tempering parameter lambda reaches the value lambda=1.
""" def cond(carry): i, state, _k = carry return state.lmbda < 1 def one_step(carry): i, state, k = carry k, subk = jax.random.split(k, 2) state, _ = smc_kernel(subk, state) return i + 1, state, k n_iter, final_state, _ = jax.lax.while_loop( cond, one_step, (0, initial_state, rng_key) ) return n_iter, final_state %%time loglikelihood = lambda x: -V(x) hmc_parameters = dict( step_size=1e-4, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=1 ) tempered = blackjax.adaptive_tempered_smc( prior_log_prob, loglikelihood, blackjax.hmc, hmc_parameters, resampling.systematic, 0.5, mcmc_iter=1, ) initial_smc_state = jax.random.multivariate_normal( jax.random.PRNGKey(0), jnp.zeros([1]), jnp.eye(1), (n_samples,) ) initial_smc_state = tempered.init(initial_smc_state) n_iter, smc_samples = smc_inference_loop(key, tempered.step, initial_smc_state) print("Number of steps in the adaptive algorithm: ", n_iter.item()) samples = np.array(smc_samples.particles[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) """ Explanation: Tempered SMC with HMC kernel We now use the adaptive tempered SMC algorithm with an HMC kernel. We only take one HMC step before resampling. The algorithm is run until $\lambda_k$ crosses the $\lambda_k = 1$ limit. 
End of explanation """ def prior_log_prob(x): d = x.shape[0] return multivariate_normal.logpdf(x, jnp.zeros((d,)), 2 * jnp.eye(d)) def V(x): d = x.shape[-1] res = -10 * d + jnp.sum(x**2 - 10 * jnp.cos(2 * jnp.pi * x), -1) return res linspace = jnp.linspace(-5, 5, 5000).reshape(-1, 1) lambdas = jnp.linspace(0.0, 1.0, 5) potential_vals = jnp.vectorize(V, signature="(d)->()")(linspace) log_res = jnp.expand_dims(lambdas, 1) * potential_vals.reshape(1, -1) density = jnp.exp(-log_res) normalizing_factor = jnp.sum(density, axis=1, keepdims=True) * ( linspace[1] - linspace[0] ) density /= normalizing_factor fig, ax = plt.subplots(figsize=(12, 8)) ax.semilogy(linspace.squeeze(), density.T) ax.legend(list(lambdas)) def inference_loop(rng_key, mcmc_kernel, initial_state, num_samples): def one_step(state, k): state, _ = mcmc_kernel(k, state) return state, state keys = jax.random.split(rng_key, num_samples) _, states = jax.lax.scan(one_step, initial_state, keys) return states inv_mass_matrix = jnp.eye(1) n_samples = 1_000 """ Explanation: Sampling from the Rastrigin potential Experimental setup We consider a prior distribution $p_0(x) = \mathcal{N}(x \mid 0_2, 2 I_2)$ and we want to sample from a Rastrigin type potential function $V(x) = -2 A + \sum_{i=1}^2x_i^2 - A \cos(2 \pi x_i)$ where we choose $A=10$. These potential functions are known to be particularly hard to sample. We plot the resulting tempered density for 5 different values of $\lambda_k$: from $\lambda_k =1$ which correponds to the original density to $\lambda_k=0$. The lower the value of $\lambda_k$ the easier it is to sampler from the posterior log-density. 
End of explanation """ %%time key = jax.random.PRNGKey(42) loglikelihood = lambda x: -V(x) hmc_parameters = dict( step_size=1e-2, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=50 ) hmc = blackjax.hmc(full_logprob, **hmc_parameters) hmc_state = hmc.init(jnp.ones((1,))) hmc_samples = inference_loop(key, hmc.step, hmc_state, n_samples) samples = np.array(hmc_samples.position[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) _ = plt.yscale("log") """ Explanation: HMC sampler We first try to sample from the posterior density using an HMC kernel. End of explanation """ %%time nuts_parameters = dict(step_size=1e-2, inverse_mass_matrix=inv_mass_matrix) nuts = blackjax.nuts(full_logprob, **nuts_parameters) nuts_state = nuts.init(jnp.ones((1,))) nuts_samples = inference_loop(key, nuts.step, nuts_state, n_samples) samples = np.array(nuts_samples.position[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) _ = plt.yscale("log") """ Explanation: NUTS sampler We do the same using a NUTS kernel. 
End of explanation """ %%time loglikelihood = lambda x: -V(x) hmc_parameters = dict( step_size=1e-2, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=100 ) tempered = blackjax.adaptive_tempered_smc( prior_log_prob, loglikelihood, blackjax.hmc, hmc_parameters, resampling.systematic, 0.75, mcmc_iter=1, ) initial_smc_state = jax.random.multivariate_normal( jax.random.PRNGKey(0), jnp.zeros([1]), jnp.eye(1), (n_samples,) ) initial_smc_state = tempered.init(initial_smc_state) n_iter, smc_samples = smc_inference_loop(key, tempered.step, initial_smc_state) print("Number of steps in the adaptive algorithm: ", n_iter.item()) samples = np.array(smc_samples.particles[:, 0]) _ = plt.hist(samples, bins=100, density=True) _ = plt.plot(linspace.squeeze(), density[-1]) _ = plt.yscale("log") """ Explanation: Tempered SMC with HMC kernel We now use the adaptive tempered SMC algorithm with an HMC kernel. We only take one HMC step before resampling. The algorithm is run until $\lambda_k$ crosses the $\lambda_k = 1$ limit. We correct the bias introduced by the (arbitrary) prior. End of explanation """
GlobalFishingWatch/vessel-scoring
notebooks/Model-Sensitivity-to-Seed.ipynb
apache-2.0
%matplotlib inline from vessel_scoring import data, utils from vessel_scoring.models import train_model_on_data from vessel_scoring.evaluate_model import evaluate_model, compare_models from IPython.core.display import display, HTML, Markdown import numpy as np import sys from sklearn import metrics from vessel_scoring.logistic_model import LogisticModel def make_model(seed=4321): return LogisticModel(colspec=dict( windows=[1800, 3600, 10800, 21600, 43200, 86400], measures=['measure_daylight', 'measure_speed']), order=6, random_state=seed) def load_data(seed=4321): # Data supplied by Kristina _, train_lline, valid_lline, test_lline = data.load_dataset_by_vessel( '../datasets/kristina_longliner.measures.npz', seed) _, train_trawl, valid_trawl, test_trawl = data.load_dataset_by_vessel( '../datasets/kristina_trawl.measures.npz', seed) _, train_pseine, valid_pseine, test_pseine = data.load_dataset_by_vessel( '../datasets/kristina_ps.measures.npz', seed) # Slow transits (used to train models to avoid classifying slow transits as fishing) TRANSIT_WEIGHT = 10 x_tran, xtrain_tran, xcross_tran, xtest_tran = data.load_dataset_by_vessel( '../datasets/slow-transits.measures.npz', even_split=False, seed=seed) xtrain_tran = utils.clone_subset(xtrain_tran, test_lline.dtype) xcross_tran = utils.clone_subset(xcross_tran, test_lline.dtype) xtest_tran = utils.clone_subset(xtest_tran, test_lline.dtype) train_tran = np.concatenate([xtrain_tran, xcross_tran] * TRANSIT_WEIGHT) train = {'longliner': np.concatenate([train_lline, valid_lline, train_tran]), 'trawler': np.concatenate([train_trawl, valid_trawl, train_tran]), 'purse_seine': np.concatenate([train_pseine, valid_pseine, train_tran])} test = {'longliner': test_lline, 'trawler': test_trawl, 'purse_seine': test_pseine} return train, test def get_seeds(count): np.random.seed(4321) return np.random.randint(4294967295, size=count) """ Explanation: Examine sensitivity of current best model (Gear-Specific, Multi-Window Logistic Model with 
is-daylight) with respect to the various random seeds that are used during training. Random seed for the model itself (expect minimal effect). Random seed for splitting training / testing data (perhaps a larger effect). As it turns out (see below), neither makes much of a difference. Since this test was originally conceived, the MMSI selection became a lot less random in order to better match the number of test and training MMSI. As a result, this lack of sensitivity may not hold as we add new MMSI. That said, we can expect the results to improve as we add more MMSI going forward. End of explanation """ N_SEEDS = 10 train_data, test_data = load_data() for gear in ['purse_seine', 'trawler', 'longliner']: X_test = test_data[gear] display(HTML("<h2>{}</h2>".format(gear.replace('_', ' ').title()))) predictions = [] trained_models = [] for seed in get_seeds(N_SEEDS): mdl = make_model(seed) train_model_on_data(mdl, train_data[gear]) trained_models.append((seed, mdl)) predictions.append((seed, (mdl.predict_proba(X_test)[:,1] > 0.5), X_test['classification'] > 0.5)) lines = ["|Model|Recall|Precision|F1-Score|", "|-----|------|---------|--------|"] for name, pred, actual in predictions: lines.append("|{}|{:.2f}|{:.2f}|{:.2f}|".format(name, metrics.recall_score(actual, pred), metrics.precision_score(actual, pred), metrics.f1_score(actual, pred))) display(Markdown('\n'.join(lines))) compare_models(trained_models, X_test) display(HTML("<hr/>")) """ Explanation: First investigate sensitivity of the LogisticModels to different seeds End of explanation """ for gear in ['purse_seine', 'trawler', 'longliner']: X_test = test_data[gear] display(HTML("<h2>{}</h2>".format(gear.replace('_', ' ').title()))) predictions = [] trained_models = [] for seed in get_seeds(N_SEEDS): mdl = make_model() train_data, test_data = load_data(seed) train_model_on_data(mdl, train_data[gear]) trained_models.append((seed, mdl)) predictions.append((seed, (mdl.predict_proba(X_test)[:,1] > 0.5),
X_test['classification'] > 0.5)) lines = ["|Model|Recall|Precision|F1-Score|", "|-----|------|---------|--------|"] for name, pred, actual in predictions: lines.append("|{}|{:.2f}|{:.2f}|{:.2f}|".format(name, metrics.recall_score(actual, pred), metrics.precision_score(actual, pred), metrics.f1_score(actual, pred))) display(Markdown('\n'.join(lines))) compare_models(trained_models, X_test) display(HTML("<hr/>")) """ Explanation: Essentially no difference when setting the seed for different runs. What about setting the seed when loading the data? End of explanation """
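For reference, the recall/precision/F1 numbers in the tables above come from sklearn, but the definitions are short enough to spell out by hand; this toy snippet (hypothetical data, no sklearn) makes them explicit:

```python
# Hand-rolled versions of the scores reported in the tables above, checked on a
# toy prediction vector; this only makes the definitions explicit.
def precision_recall_f1(actual, pred):
    tp = sum(a and p for a, p in zip(actual, pred))  # true positives
    precision = tp / sum(pred)    # fraction of positive predictions that are right
    recall = tp / sum(actual)     # fraction of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

actual = [True, True, True, False, False]
pred = [True, True, False, True, False]
p, r, f1 = precision_recall_f1(actual, pred)
print("|toy|{:.2f}|{:.2f}|{:.2f}|".format(r, p, f1))  # same row format as above
```

Seeing all three collapse to the same value here is a coincidence of the toy vector, not a general property.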
cloudera/ibis
docs/source/user_guide/geospatial_analysis.ipynb
apache-2.0
# Launch the postgis container. # This may take a bit of time if it needs to download the image. !docker run -d -p 5432:5432 --name postgis-db -e POSTGRES_PASSWORD=supersecret mdillon/postgis:9.6-alpine """ Explanation: Ibis and Geospatial Operations One of the most popular extensions to PostgreSQL is PostGIS, which adds support for storing geospatial geometries, as well as functionality for reasoning about and performing operations on those geometries. This is a demo showing how to assemble ibis expressions for a PostGIS-enabled database. We will be using a database that has been loaded with an Open Street Map extract for Southern California. This extract can be found here, and loaded into PostGIS using a tool like ogr2ogr. Preparation We first need to set up a demonstration database and load it with the sample data. If you have Docker installed, you can download and launch a PostGIS database with the following: End of explanation """ !wget https://download.geofabrik.de/north-america/us/california/socal-latest.osm.pbf """ Explanation: Next, we download our OSM extract (about 400 MB): End of explanation """ !ogr2ogr -f PostgreSQL PG:"dbname='postgres' user='postgres' password='supersecret' port=5432 host='localhost'" -lco OVERWRITE=yes --config PG_USE_COPY YES socal-latest.osm.pbf """ Explanation: Finally, we load it into the database using ogr2ogr (this may take some time): End of explanation """ import os import geopandas import ibis %matplotlib inline client = ibis.postgres.connect( url='postgres://postgres:supersecret@localhost:5432/postgres' ) """ Explanation: Connecting to the database We first make the relevant imports, and connect to the PostGIS database: End of explanation """ client.list_tables() """ Explanation: Let's look at the tables available in the database: End of explanation """ polygons = client.table('multipolygons') lines = client.table('lines') """ Explanation: As you can see, this Open Street Map extract stores its data according to the 
geometry type. Let's grab references to the polygon and line tables: End of explanation """ cities = polygons[polygons.admin_level == '8'] cities = cities[ cities.name.name('city_name'), cities.wkb_geometry.name('city_geometry') ] """ Explanation: Querying the data We query the polygons table for shapes with an administrative level of 8, which corresponds to municipalities. We also rename some of the columns so we don't have a name collision later. End of explanation """ los_angeles = cities[cities.city_name == 'Los Angeles'] la_city = los_angeles.execute() la_city_geom = la_city.iloc[0].city_geometry la_city_geom """ Explanation: We can assemble a specific query for the city of Los Angeles, and execute it to get the geometry of the city. This will be useful later when reasoning about other geospatial relationships in the LA area: End of explanation """ highways = lines[(lines.highway == 'motorway')] highways = highways[ highways.name.name('highway_name'), highways.wkb_geometry.name('highway_geometry') ] """ Explanation: Let's also extract freeways from the lines table, which are indicated by the value 'motorway' in the highway column: End of explanation """ la_neighbors_expr = cities[ cities.city_geometry.intersects( ibis.literal(la_city_geom, type='multipolygon;4326:geometry') ) ] la_neighbors = la_neighbors_expr.execute().dropna() la_neighbors """ Explanation: Making a spatial join Let's test a spatial join by selecting all the highways that intersect the city of Los Angeles, or one of its neighbors. We begin by assembling an expression for Los Angeles and its neighbors. We consider a city to be a neighbor if it has any point of intersection (by this criterion we also get Los Angeles itself).
We can pass in the city geometry that we selected above when making our query by marking it as a literal value in ibis: End of explanation """ la_highways_expr = highways.inner_join( la_neighbors_expr, highways.highway_geometry.intersects(la_neighbors_expr.city_geometry) ).materialize() la_highways = la_highways_expr.execute() la_highways.plot() """ Explanation: Now we join the neighbors expression with the freeways expression, on the condition that the highways intersect any of the city geometries: End of explanation """ ocean = geopandas.read_file('https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_ocean.zip') land = geopandas.read_file('https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_land.zip') ax = la_neighbors.dropna().plot(figsize=(16,16), cmap='rainbow', alpha=0.9) ax.set_autoscale_on(False) ax.set_axis_off() land.plot(ax=ax, color='tan', alpha=0.4) ax = ocean.plot(ax=ax, color='navy') la_highways.plot(ax=ax, color='maroon') """ Explanation: Combining the results Now that we have made a number of queries and joins, let's combine them into a single plot. To make the plot a bit nicer, we can also load some shapefiles for the coast and land: End of explanation """
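The intersects-based joins above are evaluated by PostGIS with exact geometry predicates. For intuition only, the same join shape can be mimicked in plain Python with axis-aligned bounding boxes and invented coordinates (real geometries are of course not boxes):

```python
# Toy spatial join: pair every "highway" with every "city" whose bounding box it
# overlaps. Boxes are (xmin, ymin, xmax, ymax); the coordinates are made up.
def boxes_intersect(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

cities = {"Los Angeles": (0, 0, 4, 4), "Pasadena": (4, 3, 6, 5), "Irvine": (9, 9, 11, 11)}
highways = {"I-110": (1, -1, 2, 6), "I-210": (3, 4, 7, 5)}

joined = [(h, c) for h, hbox in highways.items()
          for c, cbox in cities.items() if boxes_intersect(hbox, cbox)]
print(sorted(joined))
```

Spatial databases use exactly this kind of cheap bounding-box test as a first-pass filter (via a spatial index) before running the expensive exact predicate.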
mercybenzaquen/foundations-homework
databases_hw/db04/Homework_4-graded.ipynb
mit
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120' """ Explanation: Graded = 10/11 Homework #4 These problem sets focus on list comprehensions, string operations and regular expressions. Problem set #1: List slices and list comprehensions Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str: End of explanation """ new_list = numbers_str.split(",") numbers = [int(item) for item in new_list] max(numbers) """ Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985'). End of explanation """ #len(numbers) sorted(numbers)[10:] """ Explanation: Great! We'll be using the numbers list you created above in the next few problems. In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output: [506, 528, 550, 581, 699, 721, 736, 804, 855, 985] (Hint: use a slice.) End of explanation """ sorted([item for item in numbers if item % 3 == 0]) """ Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output: [120, 171, 258, 279, 528, 699, 804, 855] End of explanation """ from math import sqrt # your code here squared = [] for item in numbers: if item < 100: squared_numbers = sqrt(item) squared.append(squared_numbers) squared """ Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. 
In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output: [2.6457513110645907, 8.06225774829855, 8.246211251235321] (These outputs might vary slightly depending on your platform.) End of explanation """ planets = [ {'diameter': 0.382, 'mass': 0.06, 'moons': 0, 'name': 'Mercury', 'orbital_period': 0.24, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 0.949, 'mass': 0.82, 'moons': 0, 'name': 'Venus', 'orbital_period': 0.62, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 1.00, 'mass': 1.00, 'moons': 1, 'name': 'Earth', 'orbital_period': 1.00, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 0.532, 'mass': 0.11, 'moons': 2, 'name': 'Mars', 'orbital_period': 1.88, 'rings': 'no', 'type': 'terrestrial'}, {'diameter': 11.209, 'mass': 317.8, 'moons': 67, 'name': 'Jupiter', 'orbital_period': 11.86, 'rings': 'yes', 'type': 'gas giant'}, {'diameter': 9.449, 'mass': 95.2, 'moons': 62, 'name': 'Saturn', 'orbital_period': 29.46, 'rings': 'yes', 'type': 'gas giant'}, {'diameter': 4.007, 'mass': 14.6, 'moons': 27, 'name': 'Uranus', 'orbital_period': 84.01, 'rings': 'yes', 'type': 'ice giant'}, {'diameter': 3.883, 'mass': 17.2, 'moons': 14, 'name': 'Neptune', 'orbital_period': 164.8, 'rings': 'yes', 'type': 'ice giant'}] """ Explanation: Problem set #2: Still more list comprehensions Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed. End of explanation """ [item['name'] for item in planets if item['diameter'] > 2] #I got one more planet! #Ta-Stephan: We asked for greater than 4, not greater than 2. 
""" Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output: ['Jupiter', 'Saturn', 'Uranus'] End of explanation """ #sum([int(item['mass']) for item in planets]) sum([item['mass'] for item in planets]) """ Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79 End of explanation """ import re planet_with_giant= [item['name'] for item in planets if re.search(r'\bgiant\b', item['type'])] planet_with_giant """ Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output: ['Jupiter', 'Saturn', 'Uranus', 'Neptune'] End of explanation """ import re poem_lines = ['Two roads diverged in a yellow wood,', 'And sorry I could not travel both', 'And be one traveler, long I stood', 'And looked down one as far as I could', 'To where it bent in the undergrowth;', '', 'Then took the other, as just as fair,', 'And having perhaps the better claim,', 'Because it was grassy and wanted wear;', 'Though as for that the passing there', 'Had worn them really about the same,', '', 'And both that morning equally lay', 'In leaves no step had trodden black.', 'Oh, I kept the first for another day!', 'Yet knowing how way leads on to way,', 'I doubted if I should ever come back.', '', 'I shall be telling this with a sigh', 'Somewhere ages and ages hence:', 'Two roads diverged in a wood, and I---', 'I took the one less travelled by,', 'And that has made all the difference.'] """ Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. 
(The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output: ['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter'] Problem set #3: Regular expressions In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed. End of explanation """ [item for item in poem_lines if re.search(r'\b[a-zA-Z]{4}\b \b[a-zA-Z]{4}\b', item)] """ Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library. In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.) Expected result: ['Then took the other, as just as fair,', 'Had worn them really about the same,', 'And both that morning equally lay', 'I doubted if I should ever come back.', 'I shall be telling this with a sigh'] End of explanation """ [item for item in poem_lines if re.search(r'\b[a-zA-Z]{5}\b.?$',item)] """ Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) 
Expected output: ['And be one traveler, long I stood', 'And looked down one as far as I could', 'And having perhaps the better claim,', 'Though as for that the passing there', 'In leaves no step had trodden black.', 'Somewhere ages and ages hence:'] End of explanation """ all_lines = " ".join(poem_lines) """ Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell. End of explanation """ re.findall(r'[I] (\b\w+\b)', all_lines) """ Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output: ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took'] End of explanation """ entrees = [ "Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95", "Lavender and Pepperoni Sandwich $8.49", "Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v", "Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v", "Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95", "Rutabaga And Cucumber Wrap $8.49 - v" ] """ Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries. End of explanation """ #Ta-Stephan: Careful - price should be an int, not a string. 
menu = [] for item in entrees: entrees_dictionary= {} match = re.search(r'(.*) .(\d*\d\.\d{2})\ ?( - v+)?$', item) if match: name = match.group(1) price= match.group(2) #vegetarian= match.group(3) if match.group(3): entrees_dictionary['vegetarian']= True else: entrees_dictionary['vegetarian']= False entrees_dictionary['name']= name entrees_dictionary['price']= price menu.append(entrees_dictionary) menu """ Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop. Expected output: [{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ', 'price': 10.95, 'vegetarian': False}, {'name': 'Lavender and Pepperoni Sandwich ', 'price': 8.49, 'vegetarian': False}, {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ', 'price': 12.95, 'vegetarian': True}, {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ', 'price': 9.95, 'vegetarian': True}, {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ', 'price': 19.95, 'vegetarian': False}, {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}] Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework End of explanation """
root-mirror/training
NCPSchool2021/RDataFrame/04-rdataframe-advanced.ipynb
gpl-2.0
import numpy import ROOT np_dict = {colname: numpy.random.rand(100) for colname in ["a","b","c"]} df = ROOT.RDF.MakeNumpyDataFrame(np_dict) print(f"Columns in the RDataFrame: {df.GetColumnNames()}") co = df.Count() m_a = df.Mean("a") fil1 = df.Filter("c < 0.7") def1 = fil1.Define("d", "a+b+c") h = def1.Histo1D("d") c = ROOT.TCanvas() h.Draw() print(f"Number of rows in the dataset: {co.GetValue()}") print(f"Average value of column a: {m_a.GetValue()}") c.Draw() """ Explanation: RDataFrame advanced features There are still many features available with RDataFrame that might serve your analysis needs! Working with numpy arrays RDataFrame offers interoperability with numpy arrays. It can be created from a dictionary of such arrays and it can also export its contents to the same format. All operations are also available when using the numpy-based dataset. Note: this support is limited to one-dimensional numpy arrays, which are directly mapped to columns in the RDataFrame. End of explanation """ ROOT.EnableImplicitMT() treename1 = "myDataset" filename1 = "https://github.com/root-project/root/raw/master/tutorials/dataframe/df017_vecOpsHEP.root" treename2 = "dataset" filename2 = "data/example_file.root" df1 = ROOT.RDataFrame(treename1, filename1) df2 = ROOT.RDataFrame(treename2, filename2) h1 = df1.Histo1D("px") h2 = df2.Histo1D("a") ROOT.RDF.RunGraphs((h1, h2)) c = ROOT.TCanvas() h1.Draw() c.Draw() c = ROOT.TCanvas() h2.Draw() c.Draw() """ Explanation: Multiple concurrent RDataFrame runs If your analysis needs multiple RDataFrames to run (for example multiple dataset samples, data vs simulation etc.), the ROOT.RDF.RunGraphs helper lets you trigger all of their event loops concurrently. End of explanation """ import pyspark sc = pyspark.SparkContext.getOrCreate() """ Explanation: Distributed RDataFrame An RDataFrame analysis written in Python can be executed both locally - possibly in parallel on the cores of the machine - and in a distributed fashion by offloading computations to external resources, including Spark and Dask clusters.
This feature is enabled by the architecture depicted below, which shows that RDataFrame computation graphs can be mapped to different kinds of resources via backends. In this notebook we will exercise the Spark backend, which divides an RDataFrame input dataset into logical ranges and submits the computation for each of those ranges to Spark resources.
<center><img src="images/DistRDF_architecture.png"></center>
Create a Spark context
In order to work with a Spark cluster we need a SparkContext object, which represents the connection to that cluster and allows configuring execution-related parameters (e.g. number of cores, memory).
When running this notebook from SWAN, a SparkContext object is already created for us when connecting to the selected cluster via the graphical interface. Alternatively, we could create a SparkContext as described in the Spark documentation.
End of explanation
"""

# Use a Spark RDataFrame
RDataFrame = ROOT.RDF.Experimental.Distributed.Spark.RDataFrame

df = RDataFrame("Events",
                "root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root",
                npartitions=4,
                sparkcontext=sc)
"""
Explanation: Create a ROOT dataframe
We now create an RDataFrame based on the same dataset seen in the exercise rdataframe-dimuon. A Spark RDataFrame receives two extra parameters: the number of partitions to apply to the dataset (npartitions) and the SparkContext object (sparkcontext).
Besides that detail, a Spark RDataFrame is not different from a local RDataFrame: the analysis presented in this notebook would not change if we wanted to execute it locally.
End of explanation
"""

%%time
df_2mu = df.Filter("nMuon == 2", "Events with exactly two muons")
df_c = df_2mu.Count()
print(f"Number of events after filter: {df_c.GetValue()}")
"""
Explanation: Run your analysis unchanged
From now on, the rest of your application can be written exactly as we have seen with a local RDataFrame.
The goal of the distributed RDataFrame module is to support all the traditional RDataFrame operations (at least those that make sense in a distributed context). Currently only a subset of those is available; it can be found in the corresponding section of the documentation.
End of explanation
"""
rsignell-usgs/notebook
NEXRAD/THREDDS_NEXRAD.ipynb
mit
import matplotlib import warnings warnings.filterwarnings("ignore", category=matplotlib.cbook.MatplotlibDeprecationWarning) %matplotlib inline """ Explanation: Using Python to Access NEXRAD Level 2 Data from Unidata THREDDS Server This is a modified version of Ryan May's notebook here: http://nbviewer.jupyter.org/gist/dopplershift/356f2e14832e9b676207 The TDS provides a mechanism to query for available data files, as well as provides access to the data as native volume files, through OPeNDAP, and using its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers. Bookmark these resources for when you want to use Siphon later! + latest Siphon documentation + Siphon github repo + TDS documentation Downloading the single latest volume Just a bit of initial set-up to use inline figures and quiet some warnings. End of explanation """ # The archive of data on S3 URL did not work for me, despite .edu domain #url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/' #Trying motherlode URL url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/' from siphon.radarserver import RadarServer rs = RadarServer(url) """ Explanation: First we'll create an instance of RadarServer to point to the appropriate radar server access URL. End of explanation """ from datetime import datetime, timedelta query = rs.query() query.stations('KLVX').time(datetime.utcnow()) """ Explanation: Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL. 
End of explanation """ rs.validate_query(query) """ Explanation: We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s) End of explanation """ catalog = rs.get_catalog(query) """ Explanation: Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information. End of explanation """ catalog.datasets """ Explanation: We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time. End of explanation """ ds = list(catalog.datasets.values())[0] ds.access_urls """ Explanation: We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download). End of explanation """ from siphon.cdmr import Dataset data = Dataset(ds.access_urls['CdmRemote']) """ Explanation: We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL. End of explanation """ import numpy as np def raw_to_masked_float(var, data): # Values come back signed. If the _Unsigned attribute is set, we need to convert # from the range [-127, 128] to [0, 255]. if var._Unsigned: data = data & 255 # Mask missing points data = np.ma.array(data, mask=data==0) # Convert to float using the scale and offset return data * var.scale_factor + var.add_offset def polar_to_cartesian(az, rng): az_rad = np.deg2rad(az)[:, None] x = rng * np.sin(az_rad) y = rng * np.cos(az_rad) return x, y """ Explanation: We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y). 
End of explanation """ sweep = 0 ref_var = data.variables['Reflectivity_HI'] ref_data = ref_var[sweep] rng = data.variables['distanceR_HI'][:] az = data.variables['azimuthR_HI'][sweep] """ Explanation: The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself. End of explanation """ ref = raw_to_masked_float(ref_var, ref_data) x, y = polar_to_cartesian(az, rng) """ Explanation: Then convert the raw data to floating point values and the polar coordinates to Cartesian. End of explanation """ from metpy.plots import ctables # For NWS colortable ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5) """ Explanation: MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data. End of explanation """ import matplotlib.pyplot as plt import cartopy def new_map(fig, lon, lat): # Create projection centered on the radar. This allows us to use x # and y relative to the radar. proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat) # New axes with the specified projection ax = fig.add_subplot(1, 1, 1, projection=proj) # Add coastlines ax.coastlines('50m', 'black', linewidth=2, zorder=2) # Grab state borders state_borders = cartopy.feature.NaturalEarthFeature( category='cultural', name='admin_1_states_provinces_lines', scale='50m', facecolor='none') ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3) return ax """ Explanation: Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later. 
End of explanation """ # Our specified time #dt = datetime(2012, 10, 29, 15) # Superstorm Sandy #dt = datetime(2016, 6, 18, 1) dt = datetime(2016, 6, 8, 18) query = rs.query() query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1)) """ Explanation: Download a collection of historical data This time we'll make a query based on a longitude, latitude point and using a time range. End of explanation """ cat = rs.get_catalog(query) cat.datasets """ Explanation: The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets. End of explanation """ ds = list(cat.datasets.values())[0] data = Dataset(ds.access_urls['CdmRemote']) # Pull out the data of interest sweep = 0 rng = data.variables['distanceR_HI'][:] az = data.variables['azimuthR_HI'][sweep] ref_var = data.variables['Reflectivity_HI'] # Convert data to float and coordinates to Cartesian ref = raw_to_masked_float(ref_var, ref_var[sweep]) x, y = polar_to_cartesian(az, rng) """ Explanation: Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map. 
End of explanation """ fig = plt.figure(figsize=(10, 10)) ax = new_map(fig, data.StationLongitude, data.StationLatitude) # Set limits in lat/lon space ax.set_extent([-77, -70, 38, 43]) # Add ocean and land background ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m', edgecolor='face', facecolor=cartopy.feature.COLORS['water']) land = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m', edgecolor='face', facecolor=cartopy.feature.COLORS['land']) ax.add_feature(ocean, zorder=-1) ax.add_feature(land, zorder=-1) ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0); """ Explanation: Use the function to make a new map and plot a colormapped view of the data End of explanation """
bhargavvader/pycobra
docs/notebooks/voronoi_clustering.ipynb
mit
%matplotlib inline import numpy as np from pycobra.cobra import Cobra from pycobra.visualisation import Visualisation from pycobra.diagnostics import Diagnostics import matplotlib.pyplot as plt from sklearn import cluster """ Explanation: Visualising Clustering with Voronoi Tesselations When experimenting with using the Voronoi Tesselation to identify which machines are picked up by certain points, it was easy to extend the idea to visualising clustering through a voronoi. Using the voronoi_finite_polygons_2d method from pycobra.visualisation, it's easy to do this End of explanation """ from sklearn.datasets.samples_generator import make_blobs X, Y = make_blobs(n_samples=200, centers=2, n_features=2) Y = np.power(X[:,0], 2) + np.power(X[:,1], 2) """ Explanation: Let's make some blobs so clustering is easy. End of explanation """ two_means = cluster.KMeans(n_clusters=2) spectral = cluster.SpectralClustering(n_clusters=2, eigen_solver='arpack', affinity="nearest_neighbors") dbscan = cluster.DBSCAN(eps=.6) affinity_propagation = cluster.AffinityPropagation(damping=.9, preference=-200) birch = cluster.Birch(n_clusters=2) from pycobra.visualisation import voronoi_finite_polygons_2d from scipy.spatial import Voronoi, voronoi_plot_2d """ Explanation: We set up a few scikit-learn clustering machines which we'd like to visualise the results of. End of explanation """ def plot_cluster_voronoi(data, algo): # passing input space to set up voronoi regions. 
points = np.hstack((np.reshape(data[:,0], (len(data[:,0]), 1)), np.reshape(data[:,1], (len(data[:,1]), 1)))) vor = Voronoi(points) # use helper Voronoi regions, vertices = voronoi_finite_polygons_2d(vor) fig, ax = plt.subplots() plot = ax.scatter([], []) indice = 0 for region in regions: ax.plot(data[:,0][indice], data[:,1][indice], 'ko') polygon = vertices[region] # if it isn't gradient based we just color red or blue depending on whether that point uses the machine in question color = algo.labels_[indice] # we assume only two if color == 0: color = 'r' else: color = 'b' ax.fill(*zip(*polygon), alpha=0.4, color=color, label="") indice += 1 ax.axis('equal') plt.xlim(vor.min_bound[0] - 0.1, vor.max_bound[0] + 0.1) plt.ylim(vor.min_bound[1] - 0.1, vor.max_bound[1] + 0.1) two_means.fit(X) plot_cluster_voronoi(X, two_means) dbscan.fit(X) plot_cluster_voronoi(X, dbscan) spectral.fit(X) plot_cluster_voronoi(X, spectral) affinity_propagation.fit(X) plot_cluster_voronoi(X, affinity_propagation) birch.fit(X) plot_cluster_voronoi(X, birch) """ Explanation: Helper function to implement the Voronoi. End of explanation """
fisicatyc/Cuantica_Jupyter
estados_ligados.ipynb
mit
from tecnicas_numericas import *
import tecnicas_numericas
print(dir(tecnicas_numericas))
"""
Explanation: <div class="alert alert-success">
This IPython notebook depends on the modules:
<li> `tecnicas_numericas`, illustrated in the notebook [Técnicas numéricas](tecnicas_numericas.ipynb).
<li> `vis_int`, illustrated in the notebook [Visualización e Interacción](vis_int.ipynb) (it is included through the `import` of `tecnicas_numericas`).
</div>
End of explanation
"""

def V_inf(x):
    return 0

def V_fin(V_0, a, x):
    if abs(x) < a/2:
        return 0
    else:
        return V_0
"""
Explanation: Bound states
This document interactively illustrates the behavior of the solutions of the stationary Schrödinger equation for bound states in 1D problems, obtained by applying the Numerov method.
Simulation
Applying the shooting method with the Numerov algorithm involves a root search to find the energy eigenvalues. It proceeds by advancing in regular energy steps between a minimum and a maximum until a sign change is found in the evaluation of the wavefunction (or of an equivalent criterion, such as its logarithmic derivative) at the matching point. The presence of this sign change indicates that there is an energy $E$ in the interval $[E_i, E_{i+1}]$ that is either a root of the Numerov function (and therefore an eigenvalue of the system) or a discontinuity. These roots and discontinuities are associated with the equivalent energy-quantization function, which in the typical finite-well problem is the transcendental equation (although it does not appear explicitly in the numerical model).
Potential functions
The energy eigenvalue solutions of a system are associated with the potential and the geometry of the system (boundary conditions on a line, since the problem is 1D), which must be imposed according to the physical conditions of interest.
For the boundary conditions we know that the wavefunction must vanish at the extremes, so all that is left is to describe the potential the system is subject to.
Infinite and finite well
The potential of the infinite well is described as:
\begin{equation}
V(x) = \begin{cases} 0 & |x| < \frac{a}{2}\\ \infty & |x| \geq \frac{a}{2} \end{cases}
\end{equation}
For a finite well, its potential can be described as:
\begin{equation}
V(x) = \begin{cases} 0 & |x| < \frac{a}{2}\\ V_0 & |x| \geq \frac{a}{2} \end{cases}
\end{equation}
These wells are the basic cases of study because of the ease of their analytic development and their simple interpretation. Applications of these cases of study can be seen in ...
End of explanation
"""

control_pozo = fun_contenedor_base()
agregar_control(control_pozo, FloatSlider(value = 5.2, min = .5, max= 10., step= .1, description='a'))
pozo_link = link((control_pozo.children[1], 'min'), (control_pozo.children[4], 'value'))

boton_pozo = Button(description='Simular pozo')

def click_pozo(boton):
    V_max = control_pozo.children[0].value
    L = control_pozo.children[1].value
    N = control_pozo.children[2].value
    n = control_pozo.children[3].value
    a = control_pozo.children[4].value

    Vx = lambda x: V_fin(V_max, a, x)
    Solve_Schr(Vx, V_max, L, N, n)
    clear_output(wait=True)

boton_pozo.on_click(click_pozo)
display(control_pozo, boton_pozo)
"""
Explanation: For numerical purposes, infinity is moved to a length that is large compared with the width of the well; this length will be denoted $L$. The case $a = L$ corresponds exactly to the infinite well, so the simulation of these two cases requires a single control and is based on the finite potential.
End of explanation
"""

def V_arm(omega, x):
    return omega**2 * x**2 / 4

control_arm = fun_contenedor_base()
agregar_control(control_arm, FloatSlider(value = 1., min = .1, max= 4., step= .1, description='$\omega$'))

boton_arm = Button(description='Simular potencial')

def click_arm(boton):
    E_max = control_arm.children[0].value
    L = control_arm.children[1].value
    N = control_arm.children[2].value
    n = control_arm.children[3].value
    omega = control_arm.children[4].value

    Vx = lambda x: V_arm(omega, x)
    Solve_Schr(Vx, E_max, L, N, n)
    clear_output(wait=True)

boton_arm.on_click(click_arm)
display(control_arm, boton_arm)
"""
Explanation: Harmonic potential
The harmonic potential follows the description given by
\begin{equation}
V(x) = \frac{\omega^2 x^2}{4}
\end{equation}
End of explanation
"""

control_arb = fun_contenedor_base()
E_max = control_arb.children[0]
L = control_arb.children[1]
N = control_arb.children[2]
n = control_arb.children[3]
n.value = 300
L.value = 20.

str_potencial = Text(value='x**2 / 4 + x**3 / 50', description= 'Potencial')
str_potencial.funcion = lambda x: eval(str_potencial.value)
agregar_control(control_arb, str_potencial)

# Enter text in python syntax depending only on 'x'.
def ingreso_potencial(str_potencial):
    str_potencial.funcion = lambda x: eval(str_potencial.value)
    Vx = str_potencial.funcion
    h = L.value / n.value
    V_vec = [Vx(-L.value/2 + h*i) for i in range(n.value + 1)]
    V_min = min(V_vec)
    V_max = max(V_vec)
    dV = (V_max - V_min) / 50
    E_max.step = dV
    E_max.min = V_min
    E_max.max = V_max + (V_max - V_min)
    E_max.value = V_max

ingreso_potencial(str_potencial)

boton_arb = Button(description='Simular potencial')

def click_arbitrario(boton):
    Vx = str_potencial.funcion
    Solve_Schr(Vx, E_max.value, L.value, N.value, n.value)
    clear_output(wait=True)

str_potencial.on_submit(ingreso_potencial)
boton_arb.on_click(click_arbitrario)
display(control_arb, boton_arb)
"""
Explanation: Arbitrary potential
The 1D problem can be solved for an arbitrary potential $V(x)$, where str_potencial is a string representing the potential function. Keep in mind, both in this case and in the previous ones, that numerically infinity is represented as a scale larger by some amount than the scale of interest of the system.
Activity: Propose a potential function of interest and develop the code block required to simulate it with this notebook. Use as a basis the functions developed in the notebooks and the following block.
The following block illustrates a harmonic-potential problem with anharmonicity.
End of explanation
"""
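The Numerov recurrence that the shooting method relies on never appears explicitly in the interface code above. The following is a minimal, self-contained sketch of one integration sweep, in units where $\hbar^2/2m = 1$; the name `numerov_sweep` is hypothetical and is not part of `tecnicas_numericas`:

```python
import numpy as np

def numerov_sweep(E, V, x):
    """Integrate psi'' = (V(x) - E) psi left to right with the Numerov scheme.

    E: trial energy; V: potential evaluated at a single point; x: uniform grid.
    Returns the (unnormalized) wavefunction on the grid.
    """
    h = x[1] - x[0]
    f = np.array([V(xi) for xi in x]) - E      # psi'' = f(x) psi
    k = 1.0 - (h**2 / 12.0) * f                # Numerov weights
    psi = np.zeros_like(x, dtype=float)
    psi[0], psi[1] = 0.0, 1e-6                 # hard wall on the left
    for i in range(1, len(x) - 1):
        psi[i + 1] = ((12.0 - 10.0 * k[i]) * psi[i]
                      - k[i - 1] * psi[i - 1]) / k[i + 1]
    return psi
```

Scanning the trial energy `E` and watching the sign of `psi[-1]` is exactly the sign-change criterion described in the Simulation section: for an infinite well of width 1 the endpoint value changes sign around the ground-state energy $E_1 = \pi^2 \approx 9.87$.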
iurilarosa/thesis
codici/Archiviati/numpy/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
gpl-3.0
unimatr = numpy.ones((10,10))
#unimatr
duimatr = unimatr*2
#duimatr
uniarray = numpy.ones((10,1))
#uniarray
triarray = uniarray*3

scalarray = numpy.arange(10)
scalarray = scalarray.reshape(10,1)
#NB reshaping from horizontal to vertical is as if it added
#a dimension to the array, turning it into an ndarray
#(before it was a plain array, then it becomes an (x,1) array, so you can take the transpose)
#NB NUMPY DOES NOT TRANSPOSE A PLAIN 1D ARRAY!
#scalarray
scalarray.T

ramatricia = numpy.random.randint(2, size=36).reshape((6,6))
ramatricia2 = numpy.random.randint(2, size=36).reshape((6,6))

#WARNING this operation multiplies element by element
#if the object has a lower dimension it multiplies each row/column
# or vertical/horizontal matrix depending on the shape of the object
duimatr*scalarray
#duimatr*scalarray.T
#duimatr*duimatr
ramatricia*ramatricia2

#numpy dot instead performs the rows-by-columns matrix product
numpy.dot(duimatr,scalarray)
#numpy.dot(duimatr,duimatr)
numpy.dot(ramatricia2,ramatricia)

unimatricia = numpy.ones((3,3))
rangematricia = numpy.arange(9).reshape((3,3))
numpy.dot(rangematricia, rangematricia)

duimatr + scalarray
"""
Explanation: Array manipulation experiments
End of explanation
"""

scalarray = numpy.arange(4)
uniarray = numpy.ones(4)
matricia = numpy.outer(scalarray, uniarray)
matricia
tensorio = numpy.outer(matricia,scalarray).reshape(4,4,4)
tensorio
# method for creating an nd array (numpy.ndarray)
"""
Explanation: Experiments creating a 3D matrix with outer products
End of explanation
"""

tensorio = numpy.ones(1000).reshape(10,10,10)
tensorio
# method for creating an nd array (numpy.ndarray)
#another method is with the direct command
#tensorio = numpy.ndarray((3,3,3), dtype = int, buffer=numpy.arange(30))
#it could be useful with the sparse matrix of the peakmap, even though it is hardly manageable as a dense matrix
#or
# I FINALLY FOUND OUT HOW TO SET THE DTYPE PROPERLY!! with "numpy.float32"!
#tensorio = numpy.zeros((3,3,3), dtype = numpy.float32)
#tensorio.dtype
#tensorio

scalarray = numpy.arange(10)
uniarray = numpy.ones(10)
scalamatricia = numpy.outer(scalarray,scalarray)
#scalamatricia

tensorio * 2
tensorio + 2
tensorio + scalamatricia

%time tensorio + scalarray
%time tensorio.__add__(scalarray)
#they give the same result with comparable times
"""
Explanation: Experiments manipulating 3D numpy matrices
End of explanation
"""

from scipy import sparse

ramatricia = numpy.random.randint(2, size=25).reshape((5,5))
ramatricia

#efficient by columns
#sparsamatricia = sparse.csc_matrix(ramatricia)
#print(sparsamatricia)

#by rows
sparsamatricia = sparse.csr_matrix(ramatricia)
print(sparsamatricia)
sparsamatricia.toarray()

righe = numpy.array([0,0,0,1,2,3,3,4])
colonne = numpy.array([0,0,4,2,1,4,3,0])
valori = numpy.ones(righe.size)
sparsamatricia = sparse.coo_matrix((valori, (righe,colonne)))
print(sparsamatricia)
sparsamatricia.toarray()
"""
Explanation: Sparse matrix experiments
End of explanation
"""

#various ways of taking matrix products (with sums via the + operator it is the same)
densamatricia = sparsamatricia.toarray()

#dense-dense
prodottoPerElementiDD = densamatricia*densamatricia
prodottoMatricialeDD = numpy.dot(densamatricia, densamatricia)

#sparse-dense
prodottoMatricialeSD = sparsamatricia*densamatricia
prodottoMatricialeSD2 = sparsamatricia.dot(densamatricia)

#sparse-sparse
prodottoMatricialeSS = sparsamatricia*sparsamatricia
prodottoMatricialeSS2 = sparsamatricia.dot(sparsamatricia)

# "SPARSE".dot("SPARSE OR DENSE") PERFORMS THE MATRIX PRODUCT
# "SPARSE * SPARSE" PERFORMS THE MATRIX PRODUCT

prodottoMatricialeDD - prodottoMatricialeSS
#nb sums and subtractions between sparse and dense matrices are ok
# the matrix product between dense and sparse works like sparse with sparse
"""
Explanation: Matrix products
Inner products
Consider having 2 matrices, a and b, as numpy arrays:
a*b
performs the element-by-element product (only if a and b have the same dimensions)
numpy.dot(a,b)
performs the rows-by-columns matrix product
Now consider having 2 matrices, a and b, in scipy.sparse form:
a*b
performs the rows-by-columns matrix product
numpy.dot(a,b)
does not work at all
a.dot(b)
performs the rows-by-columns matrix product
End of explanation
"""

densarray = numpy.array(["a","b"],dtype = object)
densarray2 = numpy.array(["c","d"],dtype = object)
numpy.outer(densarray,[1,2])

densamatricia = numpy.array([[1,2],[3,4]])
densamatricia2 = numpy.array([["a","b"],["c","d"]], dtype = object)
numpy.outer(densamatricia2,densamatricia).reshape(4,2,2)

densarray1 = numpy.array([0,2])
densarray2 = numpy.array([5,0])
densamatricia = numpy.array([[1,2],[3,4]])
densamatricia2 = numpy.array([[0,2],[5,0]])

nrighe = 2
ncolonne = 2
npiani = 4

prodottoEstDD = numpy.outer(densamatricia,densamatricia2).reshape(npiani,ncolonne,nrighe)
#prodottoEstDD
#prodottoEstDD = numpy.dstack((prodottoEstDD[0,:],prodottoEstDD[1,:]))
prodottoEstDD

sparsarray1 = sparse.csr_matrix(densarray1)
sparsarray2 = sparse.csr_matrix(densarray2)
sparsamatricia = sparse.csr_matrix(densamatricia)
sparsamatricia2 = sparse.csr_matrix(densamatricia2)

prodottoEstSS = sparse.kron(sparsamatricia,sparsamatricia2).toarray()
prodottoEstSD = sparse.kron(sparsamatricia,densamatricia2).toarray()
prodottoEstSD

#outer product experiments
# numpy.outer
# scipy.sparse.kron

#dense-dense
prodottoEsternoDD = numpy.outer(densamatricia,densamatricia)

#sparse-dense
prodottoEsternoSD = sparse.kron(sparsamatricia,densamatricia)

#sparse-sparse
prodottoEsternoSS = sparse.kron(sparsamatricia,sparsamatricia)

prodottoEsternoDD-prodottoEsternoSS

# other outer product experiments
rarray1 = numpy.random.randint(2, size=4)
rarray2 = numpy.random.randint(2, size=4)
print(rarray1,rarray2)
ramatricia = numpy.outer(rarray1,rarray2)
unimatricia = numpy.ones((4,4)).astype(int)
#ramatricia2 = rarray1 * rarray2.T
print(ramatricia,unimatricia)
#print(ramatricia)
#print("and then")
#print(ramatricia2)
#sparsarray = sparse.csr_matrix(rarray1)
#print(sparsarray)
#ramatricia2 =

#my problematic case is that I have a matrix of which I know all the nonzero elements,
#I know how many rows I have (the times), but I do not know how many frequency columns I have
randomcolonne = numpy.random.randint(10)+1
ramatricia = numpy.random.randint(2, size=10*randomcolonne).reshape((10,randomcolonne))
print(ramatricia.shape)
#ramatricia

nonzeri = numpy.nonzero(ramatricia)
ndati = len(nonzeri[0])
ndati
ramatricia

#now I try to build the sparse matrix
print(ndati)
dati = numpy.ones(2*ndati).reshape(ndati,2)
dati
coordinateRighe = nonzeri[0]
coordinateColonne = nonzeri[1]
sparsamatricia = sparse.coo_matrix((dati,(coordinateRighe,coordinateColonne)))
densamatricia = sparsamatricia.toarray()
densamatricia
"""
Explanation: Outer products
End of explanation
"""
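The inner-product asymmetry summarized earlier (for dense numpy arrays `*` is element by element, while for `scipy.sparse` matrices both `*` and `.dot` perform the rows-by-columns matrix product) can be checked directly. A small sketch with made-up 2x2 matrices:

```python
import numpy as np
from scipy import sparse

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

dense_elementwise = a * b                 # element by element
dense_matmul = np.dot(a, b)               # rows-by-columns matrix product

sa, sb = sparse.csr_matrix(a), sparse.csr_matrix(b)
sparse_star = (sa * sb).toarray()         # matrix product, NOT elementwise
sparse_dot = sa.dot(sb).toarray()         # same matrix product
sparse_elementwise = sa.multiply(sb).toarray()  # the elementwise escape hatch

print(dense_elementwise.tolist())  # [[5, 12], [21, 32]]
print(dense_matmul.tolist())       # [[19, 22], [43, 50]]
print(sparse_star.tolist())        # [[19, 22], [43, 50]]
```

Note the `.multiply` method: it is the sparse call that reproduces the dense `*` semantics when an elementwise product is actually what you want.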
pyplot.spy(sparsamatricia,precision=0.01, marker = ".", markersize=10) #in alternativa, scatterplot delle coordinate dal dataframe b = pyplot.scatter(coordinateColonne,coordinateRighe, s = 2) import seaborn %matplotlib inline sbRegplot = seaborn.regplot(x=coordinateRighe, y=coordinateColonne, color="g", fit_reg=False) import pandas coordinateRighe = coordinateRighe.reshape(len(coordinateRighe),1) coordinateColonne = coordinateColonne.reshape(len(coordinateColonne),1) #print([coordinateRighe,coordinateColonne]) coordinate = numpy.concatenate((coordinateRighe,coordinateColonne),axis = 1) coordinate tabella = pandas.DataFrame(coordinate) tabella.columns = ["righe", "colonne"] sbPlmplot = seaborn.lmplot(x = "righe", y = "colonne", data = tabella, fit_reg=False) """ Explanation: Prove plots End of explanation """ import numpy from scipy import sparse import multiprocessing from matplotlib import pyplot #first i build a matrix of some x positions vs time datas in a sparse format matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10) x = numpy.nonzero(matrix)[0] times = numpy.nonzero(matrix)[1] weights = numpy.random.rand(x.size) import scipy.io mint = numpy.amin(times) maxt = numpy.amax(times) scipy.io.savemat('debugExamples/numpy.mat',{ 'matrix':matrix, 'x':x, 'times':times, 'weights':weights, 'mint':mint, 'maxt':maxt, }) times #then i define an array of y positions nStepsY = 5 y = numpy.arange(1,nStepsY+1) # provo a iterare # VERSIONE CON HACK CON SPARSE verificato viene uguale a tutti gli altri metodi più semplici che ho provato # ma ha problemi con parallelizzazione nRows = nStepsY nColumns = 80 y = numpy.arange(1,nStepsY+1) image = numpy.zeros((nRows, nColumns)) def itermatrix(ithStep): yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) fakeRow = numpy.zeros(positions.size) matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense() matrix = numpy.ravel(matrix) missColumns = (nColumns-matrix.size) zeros = 
numpy.zeros(missColumns) matrix = numpy.concatenate((matrix, zeros)) return matrix #for i in numpy.arange(nStepsY): # image[i] = itermatrix(i) #or imageSparsed = list(map(itermatrix, range(nStepsY))) imageSparsed = numpy.array(imageSparsed) scipy.io.savemat('debugExamples/numpyResult.mat', {'imageSparsed':imageSparsed}) a = pyplot.imshow(imageSparsed, aspect = 10) pyplot.show() import numpy from scipy import sparse import multiprocessing from matplotlib import pyplot #first i build a matrix of some x positions vs time datas in a sparse format matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10) times = numpy.nonzero(matrix)[0] freqs = numpy.nonzero(matrix)[1] weights = numpy.random.rand(times.size) #then i define an array of y positions nStepsSpindowns = 5 spindowns = numpy.arange(1,nStepsSpindowns+1) #PROVA CON BINCOUNT def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) sdTimed = spindowns[ithStep]*times positions = (numpy.round(freqs-sdTimed)+50).astype(int) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imageMapped = list(map(mapIt, range(nStepsSpindowns))) imageMapped = numpy.array(imageMapped) %matplotlib inline a = pyplot.imshow(imageMapped, aspect = 10) # qui provo fully vectorial def fullmatrix(nRows, nColumns): spindowns = numpy.arange(1,nStepsSpindowns+1) image = numpy.zeros((nRows, nColumns)) sdTimed = numpy.outer(spindowns,times) freqs3d = numpy.outer(numpy.ones(nStepsSpindowns),freqs) weights3d = numpy.outer(numpy.ones(nStepsSpindowns),weights) spindowns3d = numpy.outer(spindowns,numpy.ones(times.size)) positions = (numpy.round(freqs3d-sdTimed)+50).astype(int) matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(spindowns3d), numpy.ravel(positions)))).todense() return matrix %time image = fullmatrix(nStepsSpindowns, 80) a = pyplot.imshow(image, aspect = 10) pyplot.show() """ Explanation: Un 
esempio semplice del mio problema End of explanation """ #confronto con codice ORIGINALE in matlab immagineOrig = scipy.io.loadmat('debugExamples/dbOrigResult.mat')['binh_df0'] a = pyplot.imshow(immagineOrig[:,0:80], aspect = 10) pyplot.show() #PROVA CON BINCOUNT def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) values = numpy.bincount(positions,weights) where = tf.not_equal(values, zero) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imageMapped = list(map(mapIt, range(nStepsY))) imageMapped = numpy.array(imageMapped) %matplotlib inline a = pyplot.imshow(imageMapped, aspect = 10) # qui provo con vettorializzazione di numpy (apply along axis) nrows = nStepsY ncolumns = 80 matrix = numpy.zeros(nrows*ncolumns).reshape(nrows,ncolumns) def applyIt(image): ithStep = 1 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) #print(positions) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image imageApplied = numpy.apply_along_axis(applyIt,1,matrix) a = pyplot.imshow(imageApplied, aspect = 10) # qui provo fully vectorial def fullmatrix(nRows, nColumns): y = numpy.arange(1,nStepsY+1) image = numpy.zeros((nRows, nColumns)) yTimed = numpy.outer(y,times) x3d = numpy.outer(numpy.ones(nStepsY),x) weights3d = numpy.outer(numpy.ones(nStepsY),weights) y3d = numpy.outer(y,numpy.ones(x.size)) positions = (numpy.round(x3d-yTimed)+50).astype(int) matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions)))).todense() return matrix %time image = fullmatrix(nStepsY, 80) a = pyplot.imshow(image, aspect = 10) pyplot.show() imageMapped = list(map(itermatrix, range(nStepsY))) imageMapped = numpy.array(imageMapped) a = 
pyplot.imshow(imageMapped, aspect = 10) pyplot.show() # prova con numpy.put nStepsY = 5 def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imagePutted = list(map(mapIt, range(nStepsY))) imagePutted = numpy.array(imagePutted) %matplotlib inline a = pyplot.imshow(image, aspect = 10) pyplot.show() """ Explanation: Confronti Debug! End of explanation """ ramatricia = numpy.random.randint(10, size=120).reshape((5,4,3,2)) print(ramatricia[0,0,:,:]) #print(ramatricia) print(ramatricia.reshape(20,3,3)) """ Explanation: Documentazione Roba di array vari di numpy Domanda interessante su creazione matrici (stackoverflow) Creazione array ND Operatore add equivalente ad a+b per array ND Data types Prodotto tensore (da vedere ancora) Generazione array ND random Generazione array 1D random intero (eg binario) Dà le coordinate di tutti gli elementi nonzero Concatenate: unisce due array in un solo array (mette il secondo dopo il primo nello stesso array, poi eventualmenete va reshapato se si vuole fare una matrice da più arrays) Stack: unisce due array, forse migliore di concatenate, forse li aggiunge facendo una matrice Roba di matrici sparse Creazione sparse (nb vedi esempio finale per mio caso) Creazione sparsa random Forma in cui fa prodotto esterno Roba scatterplot et similia Scatterplot (nb attenti alle coordinate) Plot di matrici (imshow) Tutorial per imshow Spy FA PLOT DI MATRICI SPARSE! Plots con seaborn: regplot) (più semplice, come pyplot vuole solo due array delle coordinate),lmplot (vuole dataframe),pairplot (non mi dovrebbe servire) Esempio scatterplot con lmplot (v anche siscomp) End of explanation """
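The `numpy.bincount` accumulation that all of the attempts above rely on can be illustrated on its own. This is a minimal, self-contained sketch with made-up data (not the notebook's arrays): repeated positions have their weights summed into the same bin, which is exactly the scatter-add the notebook performs per row.

```python
import numpy

# Positions may repeat; bincount sums the weights landing in the same bin.
positions = numpy.array([0, 2, 2, 5])
weights = numpy.array([0.5, 1.0, 2.0, 0.25])

ncolumns = 8
# minlength pads the result out to the full row width.
values = numpy.bincount(positions, weights, minlength=ncolumns)

print(values)  # [0.5, 0.0, 3.0, 0.0, 0.0, 0.25, 0.0, 0.0]
```

Because `minlength` already gives the row its final width, the nonzero/unique bookkeeping used in `mapIt` is not strictly needed when every position is in range.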
csyhuang/hn2016_falwa
examples/.ipynb_checkpoints/example_barotropic-checkpoint.ipynb
mit
from hn2016_falwa.wrapper import barotropic_eqlat_lwa
# Module for plotting local wave activity (LWA) plots and
# the corresponding equivalent-latitude profile
from math import pi
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# --- Parameters --- #
Earth_radius = 6.378e+6  # Earth's radius

# --- Load the absolute vorticity field [256x512] --- #
readFile = Dataset('barotropic_vorticity.nc', mode='r')

# --- Read in longitude and latitude arrays --- #
xlon = readFile.variables['longitude'][:]
ylat = readFile.variables['latitude'][:]
clat = np.abs(np.cos(ylat*pi/180.))  # cosine latitude
nlon = xlon.size
nlat = ylat.size

# --- Parameters needed to use the module HN2015_LWA --- #
dphi = (ylat[2]-ylat[1])*pi/180.  # Equal spacing between latitude grid points, in radians
area = 2.*pi*Earth_radius**2 * (np.cos(ylat[:,np.newaxis]*pi/180.)*dphi)/float(nlon) * np.ones((nlat,nlon))
area = np.abs(area)  # To make sure the area element is always positive (given floating-point errors).

# --- Read in the absolute vorticity field from the netCDF file --- #
absVorticity = readFile.variables['absolute_vorticity'][:]
readFile.close()
"""
Explanation: Instructions
This sample code demonstrates how the wrapper function "barotropic_eqlat_lwa" in the python package "hn2016_falwa" computes the finite-amplitude local wave activity (LWA) from absolute vorticity fields in a barotropic model with spherical geometry, according to the definition in Huang & Nakamura (2016, JAS) equation (13). This sample code reproduces the LWA plots (Fig. 4 in HN15) computed based on an absolute vorticity map.
Contact
Please make inquiries and report issues via Github: https://github.com/csyhuang/hn2016_falwa/issues
End of explanation
"""
# --- Obtain equivalent-latitude relationship and also the LWA from the absolute vorticity snapshot --- #
Q_ref, LWA = barotropic_eqlat_lwa(ylat, absVorticity, area, Earth_radius*clat*dphi, nlat)  # Full domain included
"""
Explanation: Obtain equivalent-latitude relationship and also the LWA from an absolute vorticity snapshot
End of explanation
"""
# --- Color axis for plotting LWA --- #
LWA_caxis = np.linspace(0, LWA.max(), 31, endpoint=True)

# --- Plot the abs. vorticity field, LWA and equivalent-latitude relationship --- #
fig = plt.figure(figsize=(14,4))

plt.subplot(1,3,1)  # Absolute vorticity map
c = plt.contourf(xlon, ylat, absVorticity, 31)
cb = plt.colorbar(c)
cb.formatter.set_powerlimits((0, 0))
cb.ax.yaxis.set_offset_position('right')
cb.update_ticks()
plt.title('Absolute vorticity [1/s]')
plt.xlabel('Longitude (degree)')
plt.ylabel('Latitude (degree)')

plt.subplot(1,3,2)  # LWA (full domain)
plt.contourf(xlon, ylat, LWA, LWA_caxis)
plt.colorbar()
plt.title('Local Wave Activity [m/s]')
plt.xlabel('Longitude (degree)')
plt.ylabel('Latitude (degree)')

plt.subplot(1,3,3)  # Equivalent-latitude relationship Q(y)
plt.plot(Q_ref, ylat, 'b', label='Equivalent-latitude relationship')
plt.plot(np.mean(absVorticity, axis=1), ylat, 'g', label='zonal mean abs. vorticity')
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.ylim(-90, 90)
plt.legend(loc=4, fontsize=10)
plt.title('Equivalent-latitude profile')
plt.ylabel('Latitude (degree)')
plt.xlabel('Q(y) [1/s] | y = latitude')

plt.tight_layout()
plt.show()
"""
Explanation: Plotting the results
End of explanation
"""
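As a sanity check on the grid-cell area element used above: summed over a regular latitude-longitude grid, the cells should recover the sphere's surface area 4*pi*R^2. The sketch below is an independent check under an assumed grid (cell-centred latitudes on a 256x512 mesh; the actual latitudes in `barotropic_vorticity.nc` may differ):

```python
import numpy as np
from math import pi

R = 6.378e+6           # Earth's radius, as in the notebook
nlat, nlon = 256, 512  # assumed grid size for the vorticity field

# Cell-centred latitudes (an assumption) avoid double-counting the poles.
ylat = np.linspace(-90 + 90./nlat, 90 - 90./nlat, nlat)
dphi = (ylat[1] - ylat[0]) * pi / 180.

# Same formula as the notebook: area of one grid cell at each latitude.
area = 2.*pi*R**2 * (np.cos(ylat[:, np.newaxis]*pi/180.)*dphi) / float(nlon) \
       * np.ones((nlat, nlon))

print(area.sum() / (4.*pi*R**2))  # close to 1
```

The check passes because the midpoint sum of cos(latitude) times the latitude spacing converges to the exact integral, 2, over the full latitude range.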
ES-DOC/esdoc-jupyterhub
notebooks/inpe/cmip6/models/besm-2-7/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: INPE Source ID: BESM-2-7 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:06 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5.
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2
Carbon dioxide forcing
12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
seifip/udacity-deep-learning-nanodegree
!P3 - TV Script Generation/dlnd_tv_script_generation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] """ Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. 
End of explanation """ import numpy as np import problem_unittests as tests def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ vocab = sorted(set(text)) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) # TODO: Implement Function return vocab_to_int, int_to_vocab """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables) """ Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation """ def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function return {'.':'||period||', ',':'||comma||', '"':'||quotation_mark||', ';':'||semicolon||', '!':'||exclamation_mark||', '?':'||question_mark||', '(':'||left_paren||', ')':'||right_paren||', '--':'||dash||', '\n':'||return||'} """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup) """ Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" 
into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found.
Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation """ def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') # TODO: Implement Function return inputs, targets, learning_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs) """ Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation """ def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ num_layers = 1 basic_cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)] cell = tf.contrib.rnn.MultiRNNCell(basic_cells) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name="initial_state") return cell, initial_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell) """ Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. 
- The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation """ def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, input_data) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed) """ Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation """ def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) """ Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation """ def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ embed = get_embed(input_data, vocab_size, embed_dim) rnn, final_state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected(inputs=rnn, \ num_outputs=vocab_size, \ activation_fn=None,\ weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),\ biases_initializer=tf.zeros_initializer()) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) """ Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. 
Return the logits and final state in the following tuple (Logits, FinalState) End of explanation """ def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ n_batches = len(int_text) // (batch_size * seq_length) total_size = n_batches * batch_size * seq_length x = np.array(int_text[:total_size]) y = np.roll(x, -1) input_batch = np.split(x.reshape(batch_size, -1), n_batches, 1) target_batch = np.split(y.reshape(batch_size, -1), n_batches, 1) batches = np.array(list(zip(input_batch, target_batch))) return batches """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) """ Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1.
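The wrap-around behaviour in the example above can be checked directly with a small standalone NumPy script — this mirrors the get_batches cell (no TensorFlow needed), so you can run it on its own to verify the shapes and the final target value:

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    # Drop any trailing words that don't fill a complete batch
    n_batches = len(int_text) // (batch_size * seq_length)
    total = n_batches * batch_size * seq_length
    x = np.array(int_text[:total])
    # Targets are the inputs shifted left by one; the final target
    # wraps around to the very first input word
    y = np.roll(x, -1)
    inputs = np.split(x.reshape(batch_size, -1), n_batches, 1)
    targets = np.split(y.reshape(batch_size, -1), n_batches, 1)
    return np.array(list(zip(inputs, targets)))

batches = get_batches(list(range(1, 21)), 3, 2)
print(batches.shape)       # (3, 2, 3, 2) -> (n batches, input/target, batch size, seq length)
print(batches[0][0])       # first batch of inputs: [[1 2], [7 8], [13 14]]
print(batches[-1][1][-1])  # last target sequence: [18 1] -- wraps to the first word
```

Running this reproduces the toy example from the text, including the [18 1] wrap-around in the last target batch.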
This is a common technique used when creating sequence batches, although it is rather unintuitive. End of explanation """ # Number of Epochs num_epochs = 500 # Batch Size batch_size = 512 # RNN Size rnn_size = 256 # Embedding Dimension Size embed_dim = 400 # Sequence Length seq_length = 14 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 25 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) """ Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() """ Explanation: Checkpoint End of explanation """ def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ input_0 = loaded_graph.get_tensor_by_name("input:0") initial_state_0 = loaded_graph.get_tensor_by_name("initial_state:0") final_state_0 = loaded_graph.get_tensor_by_name("final_state:0") probs_0 = loaded_graph.get_tensor_by_name("probs:0") return input_0, initial_state_0, final_state_0, probs_0 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) """ Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation """ def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ predicted_word = int_to_vocab[np.argmax(probabilities)] return predicted_word """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) """ Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. 
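A side note on pick_word: the argmax above always returns the single most likely word, which makes generation deterministic and can cause repetitive loops. A common alternative (my suggestion here, not a project requirement) is to sample from the predicted distribution instead:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Draw a word id in proportion to its predicted probability,
    # rather than always taking the single most likely one.
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

# Tiny illustration with a made-up vocabulary and distribution
vocab = {0: 'homer', 1: 'moe', 2: 'barney'}
print(pick_word_sampled(np.array([0.1, 0.7, 0.2]), vocab))
```

With a degenerate distribution the two strategies agree; with a spread-out one, sampling yields more varied scripts.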
End of explanation """ gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) """ Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation """
Source: eneskemalergin/OldBlog, _oldnotebooks/Basic_Sequence_Analysis.ipynb (MIT license)
from Bio import Entrez, SeqIO # Using my email Entrez.email = "eneskemalergin@gmail.com" # Get the FASTA file hdl = Entrez.efetch(db='nucleotide', id=['NM_002299'],rettype='fasta') # Lactase gene # Read it and store it in seq seq = SeqIO.read(hdl, 'fasta') print "First 10 and last 10: " + seq.seq[:10] + "..." + seq.seq[-10:] """ Explanation: Performing Basic Sequence Analysis Now I am continuing to my bioinformatics cookbook tutorial series. Today's topic is to perform basic sequence analysis which is the basics of Next Generation Sequencing. We will do some basic sequence analysis on DNA sequences. FASTA files are our main target on this, also Biopython as a main library of Python. Let's first download a FASTA sequence End of explanation """ from Bio import SeqIO # Open a new fasta file and make it ready to write on w_hdl = open('example.fasta', 'w') # specify the part to write w_seq = seq[11:5795] # Write it SeqIO.write([w_seq], w_hdl, 'fasta') # And of course close it w_hdl.close() """ Explanation: Let's save the Biopython object in FASTA file; End of explanation """ # Parse the fasta file and store it in recs recs = SeqIO.parse('example.fasta', 'fasta') # Iterate over each records for rec in recs: # Get the sequences of each rec seq = rec.seq # Show the desription print(rec.description) # Show the first 10 letter in sequence print(seq[:10]) # print(seq.alphabet) """ Explanation: If you want to write many sequences (easily millions with NGS), do not use a list, as shown in the preceding code because this will allocate massive amounts of memory.Either use an iterator or use the SeqIO.write function several times with a subset of sequence on each write. We need to read the sequence of course to be able to use it End of explanation """ from Bio import Seq from Bio.Alphabet import IUPAC seq = Seq.Seq(str(seq), IUPAC.unambiguous_dna) """ Explanation: In our example code we have only 1 sequence in 1 FASTA file so we did not have to iterate through each record. 
Since we won't know in advance how many records a FASTA file will contain, the code above is suitable for most cases. The first line of the FASTA file is the description of the gene, in this case: gi|32481205|ref|NM_002299.2| Homo sapiens lactase (LCT), mRNA The second line shows the first 10 letters of the sequence. The last line shows how the sequence is represented. Now let's change the alphabet of the sequence we got: we create a new sequence with a more informative alphabet. End of explanation """ from Bio import Seq from Bio.Alphabet import IUPAC seq = Seq.Seq(str(seq), IUPAC.unambiguous_dna) """ Explanation: Now that we have an unambiguous DNA alphabet, we can transcribe it as follows: End of explanation """ rna = Seq.Seq(str(seq), IUPAC.unambiguous_dna) rna = seq.transcribe() # Changing DNA into RNA print "some of the rna variable: "+rna[:10]+"..."+rna[-10:] """ Explanation: Note that the Seq constructor takes a string, not a sequence. You will see that the alphabet of the rna variable is now IUPACUnambiguousRNA. Finally, let's translate it into a protein: End of explanation """ prot = seq.translate() # Translate into the corresponding protein print "some of the resulting protein sequence: "+prot[:10]+"..."+prot[-10:] """ Explanation: Now, we have a protein alphabet with the annotation that there is a stop codon (so, our protein is complete). There are other file formats for storing and representing sequences, and we talked about some of them in the first blog post of the series. Now I will show you how to work with modern file formats such as FASTQ format. FASTQ files are the standard format output by modern sequencers. The purpose of the following content is to make you comfortable with quality scores and how to work with them. To be able to explain the concepts, we will use really big data from the "1000 Genomes Project". Next-generation datasets, like those of the 1000 Genomes Project, are generally very large.
You will need to download some data, so get ready to wait :) Let's start by downloading the dataset: End of explanation """ !wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz """ Explanation: Now we have the file "SRR003265.filt.fastq.gz"; the fastq extension tells us the format, and the final gz extension tells us it is compressed, which we will handle with Python's gzip library while opening it. First we need to open the file: End of explanation """ import gzip # This is the library we need to unzip .gz from Bio import SeqIO # The usual SeqIO # Unzip and read the fastq file, storing the result in recs recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq') rec = next(recs) # Print the id, description and sequence of the record print(rec.id, rec.description, rec.seq) # Print the letter_annotations # Biopython will convert all the Phred encoding letters to logarithmic scores print(rec.letter_annotations) """ Explanation: You should usually store your FASTQ files in a compressed format, to save both space and processing time. Don't use list(recs) if you don't want to sacrifice a lot of memory, since FASTQ files are usually big.
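A note on the phred_quality values printed above: a Phred score Q encodes the probability that the base call is wrong via P_err = 10^(-Q/10). A quick pure-Python illustration of the conversion (independent of Biopython):

```python
def phred_to_error_prob(q):
    # Phred quality Q -> probability that the base call is wrong
    return 10 ** (-q / 10.0)

for q in (10, 20, 30, 40):
    print(q, phred_to_error_prob(q))
# Q=20 corresponds to a 1-in-100 error chance; Q=40 to 1-in-10000.
```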
Then, let's take a look at the distribution of nucleotide reads: End of explanation """ from collections import defaultdict # Unzip and read the fastq file recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq') # Make integer dictionary cnt = defaultdict(int) # Iterate over records for rec in recs: # In each letter of the sequence for letter in rec.seq: # Count the letters and store the number of count in dictionary cnt cnt[letter] += 1 # Find the total of cnt counts tot = sum(cnt.values()) # Iterate over the dictionary cnt for letter, cnt_value in cnt.items(): print('%s: %.2f %d' % (letter, 100. * cnt_value / tot, cnt_value)) # Prints the following # For each Letter inside # Print the percentage of appearance in sequences # and the total number of letter # Do this for each letter (even for NONE(N)) """ Explanation: Note that there is a residual number for N calls. These are calls in which a sequencer reports an unknown base. Now, let's plot the distribution of Ns according to their read position: End of explanation """ %matplotlib inline # Plot it in IPython Directly # Calling libraries import seaborn as sns import matplotlib.pyplot as plt # Again unzip, read the fastq file recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'), 'fastq') # Make a dictionary n_cnt = defaultdict(int) # The same code as before until here # iterate through the file and get the position of any references to N. for rec in recs: for i, letter in enumerate(rec.seq): pos = i + 1 if letter == 'N': n_cnt[pos] += 1 seq_len = max(n_cnt.keys()) positions = range(1, seq_len + 1) fig, ax = plt.subplots() ax.plot(positions, [n_cnt[x] for x in positions]) ax.set_xlim(1, seq_len) """ Explanation: Until position 25, there are no errors. This is not what you will get from a typical sequencer output; our example file is already filtered, and the 1000 Genomes filtering rules enforce that no N calls can occur before position 25. Beyond that point, the quantity of uncalled bases is position-dependent. So, what about the quality of reads? Let's study the distribution of Phred scores and plot the distribution of qualities according to their read position: End of explanation """ # Reopen and read recs = SeqIO.parse(gzip.open('SRR003265.filt.fastq.gz'),'fastq') # default dictionary qual_pos = defaultdict(list) for rec in recs: for i, qual in enumerate(rec.letter_annotations['phred_quality']): if i < 25 or qual == 40: continue pos = i + 1 qual_pos[pos].append(qual) vps = [] poses = sorted(qual_pos.keys()) for pos in poses: vps.append(qual_pos[pos]) fig, ax = plt.subplots() ax.boxplot(vps) ax.set_xticklabels([str(x) for x in range(26, max(qual_pos.keys()) + 1)])
Source: tudarmstadt-lt/sensegram, QuickStart.ipynb (Apache-2.0 license)
import sensegram # see README for model download information sense_vectors_fpath = "model/dewiki.txt.clusters.minsize5-1000-sum-score-20.sense_vectors" sv = sensegram.SenseGram.load_word2vec_format(sense_vectors_fpath, binary=False) """ Explanation: Demonstrating various stages of word sense disambiguation The example below relies on a model for the German language. Usage of the toolkit for other languages is the same. You just need to download a model for the corresponding language. 1. Loading pre-trained sense vectors To test with word sense embeddings you can use a pretrained model (sense vectors and sense probabilities). These sense vectors were induced from Wikipedia using word2vec similarities between words in ego-networks. Sense probabilities are stored in a separate file which is located next to the file with sense vectors. End of explanation """ word = "Hund" sv.get_senses(word) """ Explanation: 2. Getting the list of senses of a word Probabilities of senses will be loaded automatically if placed in the same folder as sense vectors and named according to the same scheme as our pretrained files. To examine how many senses were learned for a word, call the get_senses function: End of explanation """ word = "Hund" for sense_id, prob in sv.get_senses(word): print(sense_id) print("="*20) for rsense_id, sim in sv.wv.most_similar(sense_id): print("{} {:f}".format(rsense_id, sim)) print("\n") """ Explanation: 3. Sense aware nearest neighbors The function returns a list of sense names with probabilities for each sense. As one can see, our model has learned two senses for the word "Hund". To understand which word sense is represented with a sense vector, use the most_similar function: End of explanation """ from gensim.models import KeyedVectors word_vectors_fpath = "model/dewiki.txt.word_vectors" wv = KeyedVectors.load_word2vec_format(word_vectors_fpath, binary=False, unicode_errors="ignore") """ Explanation: 4.
Word sense disambiguation: loading word embeddings To use our word sense disambiguation mechanism you also need word vectors or context vectors, depending on the disambiguation strategy. Those vectors are located in the model directory and have the extension .vectors. Our WSD mechanism is based on word similarities (sim) and requires word vectors to represent context words. In the following, we provide a disambiguation example using the similarity strategy. First, load word vectors using the gensim library: End of explanation """ from wsd import WSD wsd_model = WSD(sv, wv, window=5, method='sim', filter_ctx=3) """ Explanation: Then initialise the WSD object with sense and word vectors: End of explanation """ word = "Hund" context = "Die beste Voraussetzung für die Hund-Katze-Freundschaft ist, dass keiner von beiden in der Vergangenheit unangenehme Erlebnisse mit der anderen Gattung hatte. Am einfachsten ist die ungleiche WG, wenn sich zwei Jungtiere ein Zuhause teilen. Bei erwachsenen Tieren ist es einfacher, wenn sich Miezi in Bellos Haushalt einnistet – nicht umgekehrt, da Hunde Rudeltiere sind. Damit ein Hund das Kätzchen aber auch als Rudelmitglied sieht und nicht als Futter sollten ein paar Regeln beachtet werden" wsd_model.dis_text(context, word, 0, 4) """ Explanation: The settings have the following meaning: it will extract at most window*2 words around the target word from the sentence as context, and it will use only the three most discriminative context words for disambiguation. Now you can disambiguate the word "Hund" in the sentence above using the dis_text function. As input it takes a sentence with space-separated tokens, a target word, and start/end indices of the target word in the given sentence.
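Under the sim strategy used here, disambiguation essentially compares each sense vector against an averaged context vector and keeps the best match. A simplified NumPy sketch with toy vectors (illustrative only, not sensegram's actual internals):

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-d vectors: sense 0 ~ the "animal" mouse, sense 1 ~ the "computer" mouse
sense_vectors = {'Maus#0': np.array([1.0, 0.1, 0.0]),
                 'Maus#1': np.array([0.0, 0.1, 1.0])}
context_vectors = [np.array([0.9, 0.2, 0.1]),   # e.g. a vector for "Katze"
                   np.array([0.8, 0.0, 0.2])]   # e.g. a vector for "Tier"
ctx = np.mean(context_vectors, axis=0)          # average the context word vectors

best_sense = max(sense_vectors, key=lambda s: cosine(sense_vectors[s], ctx))
print(best_sense)  # the "animal" sense wins for this animal-themed context
```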
End of explanation """ import sensegram from wsd import WSD from gensim.models import KeyedVectors # Input data and paths (see README for model download information) sense_vectors_fpath = "model/wiki.txt.clusters.minsize5-1000-sum-score-20.sense_vectors" word_vectors_fpath = "model/wiki.txt.word_vectors" context_words_max = 3 # change this parameter to 1, 2, 5, 10, 15, 20 : it may improve the results context_window_size = 5 # this parameter can also be changed during experiments word = "python" context = "Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace." ignore_case = True lang = "en" # to filter out stopwords # Load models (takes long time) sv = sensegram.SenseGram.load_word2vec_format(sense_vectors_fpath, binary=False) wv = KeyedVectors.load_word2vec_format(word_vectors_fpath, binary=False, unicode_errors="ignore") # Play with the model (is quick) print("Probabilities of the senses:\n{}\n\n".format(sv.get_senses(word, ignore_case=ignore_case))) for sense_id, prob in sv.get_senses(word, ignore_case=ignore_case): print(sense_id) print("="*20) for rsense_id, sim in sv.wv.most_similar(sense_id): print("{} {:f}".format(rsense_id, sim)) print("\n") # Disambiguate a word in a context wsd_model = WSD(sv, wv, window=context_window_size, lang=lang, filter_ctx=context_words_max, ignore_case=ignore_case) print(wsd_model.disambiguate(context, word)) """ Explanation: Putting the four steps described above together An example of word sense induction, in this case for the English language End of explanation """ import sensegram from wsd import WSD from gensim.models import KeyedVectors # Input data and paths sense_vectors_fpath = "model/sdewac-v3.corpus.clusters.minsize5-1000-sum-score-20.sense_vectors" word_vectors_fpath = "model/sdewac-v3.corpus.word_vectors" context_words_max =
3 # change this parameter to 1, 2, 5, 10, 15, 20 : it may improve the results context_window_size = 5 # this parameter can also be changed during experiments word = "Maus" context = "Die Maus ist ein Eingabegerät (Befehlsgeber) bei Computern. Der allererste Prototyp wurde 1963 nach Zeichnungen von Douglas C. Engelbart gebaut; seit Mitte der 1980er Jahre bildet die Maus für fast alle Computertätigkeiten zusammen mit dem Monitor und der Tastatur eine der wichtigsten Mensch-Maschine-Schnittstellen. Die Entwicklung grafischer Benutzeroberflächen hat die Computermaus zu einem heute praktisch an jedem Desktop-PC verfügbaren Standardeingabegerät gemacht." ignore_case = True # Load models (takes long time) sv = sensegram.SenseGram.load_word2vec_format(sense_vectors_fpath, binary=False) wv = KeyedVectors.load_word2vec_format(word_vectors_fpath, binary=False, unicode_errors="ignore") # Play with the model (is quick) print("Probabilities of the senses:\n{}\n\n".format(sv.get_senses(word, ignore_case=ignore_case))) for sense_id, prob in sv.get_senses(word, ignore_case=ignore_case): print(sense_id) print("="*20) for rsense_id, sim in sv.wv.most_similar(sense_id): print("{} {:f}".format(rsense_id, sim)) print("\n") # Disambiguate a word in a context wsd_model = WSD(sv, wv, window=context_window_size, lang="de", filter_ctx=context_words_max, ignore_case=ignore_case) print(wsd_model.disambiguate(context, word)) """ Explanation: SDEWaC corpus End of explanation """ import sensegram from wsd import WSD from gensim.models import KeyedVectors # Input data and paths sense_vectors_fpath = "model/wikipedia-ru-2018.txt.clusters.minsize5-1000-sum-score-20.sense_vectors" word_vectors_fpath = "model/wikipedia-ru-2018.txt.word_vectors" max_context_words = 3 # change this parameter to 1, 2, 5, 10, 15, 20 : it may improve the results context_window_size = 20 # this parameter can also be changed during experiments word = "ключ" context = "Ключ — это секретная информация, используемая
криптографическим алгоритмом при зашифровании/расшифровании сообщений, постановке и проверке цифровой подписи, вычислении кодов аутентичности (MAC). При использовании одного и того же алгоритма результат шифрования зависит от ключа. Для современных алгоритмов сильной криптографии утрата ключа приводит к практической невозможности расшифровать информацию." ignore_case = True lang = "ru" # to filter out stopwords # Load models (takes long time) # wv = KeyedVectors.load_word2vec_format(word_vectors_fpath, binary=False, unicode_errors="ignore") # sv = sensegram.SenseGram.load_word2vec_format(sense_vectors_fpath, binary=False) # Play with the model (is quick) print("Probabilities of the senses:\n{}\n\n".format(sv.get_senses(word, ignore_case=ignore_case))) for sense_id, prob in sv.get_senses(word, ignore_case=ignore_case): print(sense_id) print("="*20) for rsense_id, sim in sv.wv.most_similar(sense_id): print("{} {:f}".format(rsense_id, sim)) print("\n") # Disambiguate a word in a context wsd_model = WSD(sv, wv, window=context_window_size, lang=lang, max_context_words=max_context_words, ignore_case=ignore_case) print(wsd_model.disambiguate(context, word)) ########################### from pandas import read_csv # you can download the WSI evaluation dataset with 'git clone https://github.com/nlpub/russe-wsi-kit.git' wikiwiki_fpath = "../russe-wsi-kit/data/main/wiki-wiki/train.csv" activedict_fpath = "../russe-wsi-kit/data/main/active-dict/test.csv" btsrnc_fpath = "../russe-wsi-kit/data/main/bts-rnc/test.csv" def evaluate(dataset_fpath): output_fpath = dataset_fpath + ".pred.csv" df = read_csv(dataset_fpath, sep="\t", encoding="utf-8") for i, row in df.iterrows(): sense_id, _ = wsd_model.disambiguate(row.context, row.word) df.loc[i, "predict_sense_id"] = sense_id df.to_csv(output_fpath, sep="\t", encoding="utf-8") print("Output:", output_fpath) return output_fpath evaluate(wikiwiki_fpath) evaluate(btsrnc_fpath) evaluate(activedict_fpath) %load_ext autoreload %autoreload 2 
""" Explanation: Word sense induction exepriment for the Russian language End of explanation """
Source: GoogleCloudPlatform/vertex-ai-samples, notebooks/official/explainable_ai/sdk_automl_tabular_classification_online_explain.ipynb (Apache-2.0 license)
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex SDK: AutoML training tabular classification model for online explanation <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to create tabular classification models and do online prediction with explanation using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor. Objective In this tutorial, you create an AutoML tabular classification model and deploy for online prediction with explainability from a Python script using the Vertex SDK. 
You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Deploy the Model resource to a serving Endpoint resource. Make a prediction request with explainability. Undeploy the Model. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter. Run jupyter notebook on the command line in a terminal shell to launch Jupyter. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG """ Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation """ ! pip3 install -U tabulate $USER_FLAG if os.getenv("IS_TESTING"): ! pip3 install --upgrade tensorflow $USER_FLAG """ Explanation: Install the latest GA version of Tabulate library as well. 
End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. End of explanation """ REGION = "us-central1" # @param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. 
Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. 
A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil ls -al $BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import google.cloud.aiplatform as aip """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation """ aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) """ Explanation: Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. End of explanation """ IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv" """ Explanation: Tutorial Now you are ready to start creating your own AutoML tabular classification model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. 
End of explanation """ count = ! gsutil cat $IMPORT_FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $IMPORT_FILE | head heading = ! gsutil cat $IMPORT_FILE | head -n1 label_column = str(heading).split(",")[-1].split("'")[0] print("Label Column Name", label_column) if label_column is None: raise Exception("label column missing") """ Explanation: Quick peek at your data You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file. End of explanation """ dataset = aip.TabularDataset.create( display_name="Iris" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE] ) print(dataset.resource_name) """ Explanation: Create the Dataset Next, create the Dataset resource using the create method for the TabularDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. bq_source: Alternatively, import data items from a BigQuery table into the Dataset resource. This operation may take several minutes. End of explanation """ dag = aip.AutoMLTabularTrainingJob( display_name="iris_" + TIMESTAMP, optimization_prediction_type="classification", optimization_objective="minimize-log-loss", ) print(dag) """ Explanation: Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. 
Create training pipeline An AutoML training pipeline is created with the AutoMLTabularTrainingJob class, with the following parameters: display_name: The human readable name for the TrainingJob resource. optimization_prediction_type: The type of task to train the model for. classification: A tabular classification model. regression: A tabular regression model. column_transformations: (Optional): Transformations to apply to the input columns optimization_objective: The optimization objective to minimize or maximize. binary classification: minimize-log-loss maximize-au-roc maximize-au-prc maximize-precision-at-recall maximize-recall-at-precision multi-class classification: minimize-log-loss regression: minimize-rmse minimize-mae minimize-rmsle The instantiated object is the DAG (directed acyclic graph) for the training pipeline. End of explanation """ model = dag.run( dataset=dataset, model_display_name="iris_" + TIMESTAMP, training_fraction_split=0.6, validation_fraction_split=0.2, test_fraction_split=0.2, budget_milli_node_hours=8000, disable_early_stopping=False, target_column=label_column, ) """ Explanation: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for training. test_fraction_split: The percentage of the dataset to use for test (holdout data). validation_fraction_split: The percentage of the dataset to use for validation. target_column: The name of the column to train as the label. budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour). disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. 
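As a quick arithmetic illustration of the three split fractions listed above (plain Python, independent of the Vertex SDK — the service performs the actual split):

```python
# Hypothetical illustration: how a 0.6/0.2/0.2 split divides the 1000-row Iris CSV.
n_rows = 1000  # iris_1000.csv used in this tutorial
fractions = {"train": 0.6, "validation": 0.2, "test": 0.2}

# The three fractions must sum to 1.
assert abs(sum(fractions.values()) - 1.0) < 1e-9

sizes = {name: round(n_rows * frac) for name, frac in fractions.items()}
print(sizes)  # {'train': 600, 'validation': 200, 'test': 200}
```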
The run method, when completed, returns the Model resource. The execution of the training pipeline will take up to 20 minutes. End of explanation """ # Get model resource ID models = aip.Model.list(filter="display_name=iris_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation) """ Explanation: Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model, or you can list all of the models in your project. End of explanation """ endpoint = model.deploy(machine_type="n1-standard-4") """ Explanation: Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters: machine_type: The type of compute machine. End of explanation """ INSTANCE = { "petal_length": "1.4", "petal_width": "1.3", "sepal_length": "5.1", "sepal_width": "2.8", } """ Explanation: Send an online prediction request with explainability Send an online prediction with explainability to your deployed model. In this method, the predicted response will include an explanation of how the features contributed to the prediction. Make test item You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. 
End of explanation """ instances_list = [INSTANCE] prediction = endpoint.explain(instances_list) print(prediction) """ Explanation: Make the prediction with explanation Now that your Model resource is deployed to an Endpoint resource, you can do online explanations by sending prediction requests to the Endpoint resource. Request The format of each instance is: [feature_list] Since the explain() method can take multiple items (instances), send your single test item as a list of one test item. Response The response from the explain() call is a Python dictionary with the following entries: ids: The internally assigned unique identifiers for each prediction request. displayNames: The class names for each class label. confidences: For classification, the predicted confidence, between 0 and 1, per class label. values: For regression, the predicted value. deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions. explanations: The feature attributions End of explanation """ import numpy as np try: label = np.argmax(prediction[0][0]["scores"]) cls = prediction[0][0]["classes"][label] print("Predicted Value:", cls, prediction[0][0]["scores"][label]) except: pass """ Explanation: Understanding the explanations response First, you will look at what your model predicted and compare it to the actual value. End of explanation """ from tabulate import tabulate feature_names = ["sepal_length", "sepal_width", "petal_length", "petal_width"] attributions = prediction.explanations[0].attributions[0].feature_attributions rows = [] for i, val in enumerate(feature_names): rows.append([val, INSTANCE[val], attributions[val]]) print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"])) """ Explanation: Examine feature attributions Next, you will look at the feature attributions for this particular example. 
Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values. End of explanation """ import random # Prepare 10 test examples to your model for prediction using a random distribution to generate # test instances instances = [] for i in range(10): pl = str(random.uniform(1.0, 2.0)) pw = str(random.uniform(1.0, 2.0)) sl = str(random.uniform(4.0, 6.0)) sw = str(random.uniform(2.0, 4.0)) instances.append( {"petal_length": pl, "petal_width": pw, "sepal_length": sl, "sepal_width": sw} ) response = endpoint.explain(instances) """ Explanation: Check your explanations and baselines To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline. In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method. Get explanations End of explanation """ import numpy as np def sanity_check_explanations( explanation, prediction, mean_tgt_value=None, variance_tgt_value=None ): passed_test = 0 total_test = 1 # `attributions` is a dict where keys are the feature names # and values are the feature attributions for each feature baseline_score = explanation.attributions[0].baseline_output_value print("baseline:", baseline_score) # Sanity check 1 # The prediction at the input is equal to that at the baseline. # Please use a different baseline. Some suggestions are: random input, training # set mean. 
if abs(prediction - baseline_score) <= 0.05: print("Warning: example score and baseline score are too close.") print("You might not get attributions.") else: passed_test += 1 print("Sanity Check 1: Passed") print(passed_test, " out of ", total_test, " sanity checks passed.") i = 0 for explanation in response.explanations: try: prediction = np.max(response.predictions[i]["scores"]) except TypeError: prediction = np.max(response.predictions[i]) sanity_check_explanations(explanation, prediction) i += 1 """ Explanation: Sanity check In the function below, you perform a sanity check on the explanations. End of explanation """ endpoint.undeploy_all() """ Explanation: Undeploy the model When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model. End of explanation """ delete_all = True if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) # Delete the endpoint using the Vertex endpoint object try: if "endpoint" in globals(): endpoint.delete() except Exception as e: print(e) # Delete the AutoML or Pipeline training job try: if "dag" in globals(): dag.delete() except Exception as e: print(e) # Delete the custom training job try: if "job" in globals(): job.delete() except Exception as e: print(e) # Delete the batch prediction job using the Vertex batch prediction object try: if "batch_predict_job" in globals(): batch_predict_job.delete() except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object try: if "hpt_job" in globals(): hpt_job.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! 
gsutil rm -r $BUCKET_NAME """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint AutoML Training Job Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation """
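To make the baseline comparison in the sanity check above concrete, here is a tiny standalone sketch with made-up numbers (pure Python, not real Vertex output): the sum of the feature attributions plus the baseline score should approximately recover the model's predicted score.

```python
# Hypothetical attribution values and baseline -- for illustration only.
baseline_score = 0.31
attributions = {"sepal_length": 0.05, "sepal_width": -0.02,
                "petal_length": 0.38, "petal_width": 0.25}
predicted_score = 0.97

# Additivity property: baseline + sum of attributions should be close to the prediction.
reconstructed = baseline_score + sum(attributions.values())
print(round(reconstructed, 2))  # 0.97

assert abs(reconstructed - predicted_score) < 1e-6
```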
nealjean/predicting-poverty
figures/Figure 4.ipynb
mit
from fig_utils import * import matplotlib.pyplot as plt import time %matplotlib inline """ Explanation: Figure 4: Evaluation of model performance This notebook generates individual panels of Figure 4 in "Combining satellite imagery and machine learning to predict poverty". End of explanation """ country_path = '../data/output/LSMS/pooled/' percentiles = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 1.00] survey = 'lsms' dimension = 10 k = 10 k_inner = 5 trials = 5 poverty_line = 1.90 multiples = [1, 2, 3] t0 = time.time() compare_models(country_path, survey, percentiles, dimension, k, k_inner, trials, poverty_line, multiples) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) """ Explanation: Transfer learning vs. nightlights In these experiments, we compare the performance of the transfer learning model based on satellite imagery with the performance of a model that uses nightlights. The parameters needed to produce the plots for Panels A and B are as follows: country_paths: Paths of directories containing pooled survey data percentiles: Wealth percentiles to evaluate survey: Either 'lsms' or 'dhs' dimension: Number of dimensions to reduce image features to using PCA k: Number of cross validation folds k_inner: Number of inner cross validation folds for selection of regularization parameter trials: Number of trials to average over poverty_line: International poverty line ($1.90/capita/day) multiples: Multiples of the poverty line to plot For many trials, it will take several minutes or more to produce the plots. For 100 trials, it should take 40-60 minutes for LSMS and longer for DHS. 
Each data directory should contain the following 4 files: conv_features.npy: (n, 4096) array containing image features corresponding to n clusters nightlights.npy: (n,) vector containing the average nightlights value for each cluster households.npy: (n,) vector containing the number of households for each cluster image_counts.npy: (n,) vector containing the number of images available for each cluster Each data directory should also contain one of the following: consumptions.npy: (n,) vector containing average cluster consumption expenditures for LSMS surveys assets.npy: (n,) vector containing average cluster asset index for DHS surveys Exact results may differ slightly with each run due to randomly splitting data into training and test sets. Panel A: Pooled LSMS End of explanation """ country_path = '../data/output/DHS/pooled/' percentiles = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 1.00] survey = 'dhs' dimension = 10 k = 5 k_inner = 3 trials = 3 t0 = time.time() compare_models(country_path, survey, percentiles, dimension, k, k_inner, trials, poverty_line, multiples) t1 = time.time() print 'Finished in {} seconds'.format(t1-t0) """ Explanation: Panel B: Pooled DHS End of explanation """ # Parameters country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'pooled'] country_paths = ['../data/output/LSMS/nigeria/', '../data/output/LSMS/tanzania/', '../data/output/LSMS/uganda/', '../data/output/LSMS/malawi/', '../data/output/LSMS/pooled/'] survey = 'lsms' dimension = 100 k = 3 k_inner = 3 points = 10 alpha_low = 0 alpha_high = 3 trials = 100 run_randomization_test(country_names, country_paths, survey, dimension, k, k_inner, points, alpha_low, alpha_high, trials) """ Explanation: Randomization tests In these experiments, we randomly reassign daytime imagery to survey locations and retrain the model on incorrect images (see SM 1.7). 
The parameters needed to produce the plots for Panels C and D are as follows: country_names: Names of countries as a list of lower-case strings country_paths: Paths of directories containing survey data corresponding to specified countries survey: Either 'lsms' or 'dhs' dimension: Number of dimensions to reduce image features to using PCA k: Number of cross validation folds k_inner: Number of inner cross validation folds for selection of regularization parameter points: Number of regularization parameters to try alpha_low: Log of smallest regularization parameter to try alpha_high: Log of largest regularization parameter to try trials: Number of trials for shuffled distribution If trials is large (>100), producing the plots will take more than a couple of minutes. Each data directory should contain the following 3 files: cluster_conv_features.npy: (n, 4096) array containing image features corresponding to n clusters cluster_households.npy: (n,) vector containing the number of households for each cluster cluster_image_counts.npy: (n,) vector containing the number of images available for each cluster Each data directory should also contain one of the following: cluster_consumptions.npy: (n,) vector containing average cluster consumption expenditures for LSMS surveys cluster_assets.npy: (n,) vector containing average cluster asset index for DHS surveys Exact results may differ slightly with each run due to randomly splitting data into training and test sets. 
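The reassignment step itself can be sketched in a few lines (illustrative only — run_randomization_test in fig_utils is what actually performs it): shuffle the rows of the feature matrix so that each cluster's survey outcome is paired with another cluster's image features, while the outcomes themselves stay untouched.

```python
import numpy

# Illustrative sketch of the shuffling step; not the fig_utils implementation.
rng = numpy.random.RandomState(0)
features = numpy.arange(12).reshape(6, 2)   # stand-in for cluster image features
perm = rng.permutation(len(features))       # random reassignment of clusters
shuffled = features[perm]

# Same rows as before, just paired with the (unchanged) survey outcomes differently.
assert sorted(map(tuple, shuffled.tolist())) == sorted(map(tuple, features.tolist()))
print(perm)
```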
Panel C: LSMS consumption expenditures End of explanation """ # Parameters country_names = ['nigeria', 'tanzania', 'uganda', 'malawi', 'rwanda', 'pooled'] country_paths = ['../data/output/DHS/nigeria/', '../data/output/DHS/tanzania/', '../data/output/DHS/uganda/', '../data/output/DHS/malawi/', '../data/output/DHS/rwanda/', '../data/output/DHS/pooled/'] survey = 'dhs' dimension = 100 k = 3 k_inner = 3 points = 10 alpha_low = 0 alpha_high = 3 trials = 100 run_randomization_test(country_names, country_paths, survey, dimension, k, k_inner, points, alpha_low, alpha_high, trials) """ Explanation: Panel D: DHS assets End of explanation """
evanmiltenburg/python-for-text-analysis
Chapters/Chapter 10 - Dictionaries.ipynb
apache-2.0
student_grades = ['Frank', 8, 'Susan', 7, 'Guido', 10] student = 'Frank' index_of_student = student_grades.index(student) # we use the index method (list.index) print('grade of', student, 'is', student_grades[index_of_student + 1]) """ Explanation: Chapter 10 - Dictionaries This notebook uses code snippets and explanation from this course The last type of container we will introduce in this topic is dictionaries. Programming is mostly about solving real-world problems as efficiently as possible, but it is also important to write and organize code in a human-readable fashion. A dictionary offers a kind of abstraction that comes in handy often: it is a type of "associative memory" or key:value storage. It allows you to describe two pieces of data and their relationship. At the end of this chapter, you will: * understand the relevance of dictionaries * know how to create a dictionary * know how to add items to a dictionary * know how to inspect/extract items from a dictionary * know how to count with a dictionary * know how to create nested dictionaries If you want to learn more about these topics, you might find the following links useful: * Python documentation If you have questions about this chapter, please contact us (cltl.python.course@gmail.com). 1. Dictionaries Imagine that you are a teacher, and you've graded exams (everyone got high grades, of course). You would like to store this information so that you can ask the program for the grade of a particular student. After some thought, you first try to accomplish this using a list. End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} student_grades['Frank'] """ Explanation: However, you're not happy about the solution. Every time you request a grade, we need to first determine the position of the student in the list and then use that index + 1 to obtain the grade. That's pretty inefficient. 
The take-home message here is that lists are not really good if we want two pieces of information together. Dictionaries to the rescue! End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} """ Explanation: 2. How to create a dictionary Let's take another look at the student_grades dictionary. End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} """ Explanation: a dictionary is surrounded by curly brackets, and the key/value pairs are separated by commas. A dictionary consists of one or more key:value pairs. The key is the 'identifier' or "name" that is used to describe the value. the keys in a dictionary are unique the syntax for a key/value pair is: KEY : VALUE the keys (e.g. 'Frank') in a dictionary have to be immutable the values (e.g., 8) in a dictionary can be any Python object a dictionary can be empty Please note that keys in a dictionary have to be immutable. This works (strings as keys) End of explanation """ a_dict = {['a', 'list']: 8} """ Explanation: This does not (list as keys) End of explanation """ a_dict = {'Frank': 8, 'Susan': 7} """ Explanation: Please note that the values in a dictionary can be any Python object This works (integers as values) End of explanation """ another_dict = {'Frank' : [8], 'Susan' : [7]} """ Explanation: But this as well (lists as values) End of explanation """ an_empty_dict = dict() another_empty_dict = {} # This works too, but it is less readable and confusing (looks similar to sets) print(type(another_empty_dict), type(an_empty_dict)) """ Explanation: Please note that a dictionary can be empty (use dict()): End of explanation """ a_dict = dict() print(a_dict) a_dict['Frank'] = 8 print(a_dict) """ Explanation: 3. How to add items to a dictionary There is a very simple way to add a key:value pair to a dictionary. 
Please look at the following code snippet: End of explanation """ a_dict = dict() a_dict['Frank'] = 8 print(a_dict) a_dict['Frank'] = 7 print(a_dict) a_dict['Frank'] = 9 print(a_dict) """ Explanation: Please note that dictionary keys should be unique identifiers for the values in the dictionary. Key:value pairs get overwritten if you assign a different value to an existing key. End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} print(student_grades['Frank']) """ Explanation: 4. How to access data in a dictionary The most basic operation on a dictionary is a look-up. Simply enter the key and the dictionary returns the value. End of explanation """ student_grades['Piet'] """ Explanation: If the key is not in the dictionary, it will raise a KeyError. End of explanation """ key = 'Piet' if key in student_grades: print(student_grades[key]) else: print(key, 'not in dictionary') key = 'Frank' if key in student_grades: print(student_grades[key]) else: print(key, 'not in dictionary') """ Explanation: In order to avoid getting an error, you can use an if-statement End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} the_keys = student_grades.keys() print(the_keys) """ Explanation: the keys method returns the keys in a dictionary End of explanation """ the_values = student_grades.values() print(the_values) """ Explanation: the values method returns the values in a dictionary End of explanation """ the_values = student_grades.values() print(len(the_values)) # number of values in a dict print(max(the_values)) # highest value of values in a dict print(min(the_values)) # lowest value of values in a dict print(sum(the_values)) # sum of all values of values in a dict """ Explanation: We can use the built-in functions to inspect the keys and values. 
For example: End of explanation """ student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} print(student_grades.items()) """ Explanation: However, what if we want to know which students got an 8 or higher? The items method is very useful for this scenario. Please carefully look at the following code snippet. End of explanation """ for key, value in student_grades.items(): # please note the tuple unpacking print(key, value) """ Explanation: The items method returns a list of tuples. We can combine what we have learnt about looping and tuples to access the keys (the students' names) and values (their grades): End of explanation """ for student, grade in student_grades.items(): if grade > 7: print(student, grade) """ Explanation: This also makes it possible to detect which students obtained a grade of 8 or higher. End of explanation """ letter2freq = dict() word = 'hippo' for letter in word: if letter in letter2freq: # add 1 to the dictionary if the key exists letter2freq[letter] += 1 # note: x +=1 does the same as x = x + 1 else: letter2freq[letter] = 1 # set default value to 1 if the key does not exist print(letter, letter2freq) print() print(letter2freq) """ Explanation: 5. Counting with a dictionary Dictionaries are very useful to derive statistics. For example, we can easily determine the frequency of each letter in a word. End of explanation """ a_sentence = ['Obama', 'was', 'the', 'president', 'of', 'the', 'USA'] word2freq = dict() for word in a_sentence: if word in word2freq: # add 1 to the dictionary if the key exists word2freq[word] += 1 else: word2freq[word] = 1 # set default value to 1 if the key does not exist print(word, word2freq) print() print(word2freq) """ Explanation: Python actually has a module, which is very useful for counting. It's called collections. 
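For instance, the Counter class from the standard-library collections module replaces the entire if/else counting pattern above and also gives you the most frequent items directly:

```python
from collections import Counter

# Counter counts the elements of any iterable in one call.
letter2freq = Counter('hippo')
print(letter2freq)                 # Counter({'p': 2, 'h': 1, 'i': 1, 'o': 1})
print(letter2freq.most_common(1))  # [('p', 2)]
```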
End of explanation """ a_nested_dictionary = {'a_key': {'nested_key1': 1, 'nested_key2': 2, 'nested_key3': 3} } print(a_nested_dictionary) """ Explanation: Feel free to start using this module after the assignment of this block. 6. Nested dictionaries Since dictionaries consist of key:value pairs, we can actually make another dictionary the value of a dictionary. End of explanation """ print(a_nested_dictionary['a_key']) """ Explanation: Please note that the value is in fact a dictionary: End of explanation """ the_nested_value = a_nested_dictionary['a_key']['nested_key1'] print(the_nested_value) """ Explanation: In order to access the nested value, we must do a look-up for each key on each nested level End of explanation """ # your code here """ Explanation: Practice questions: What do sets and dictionaries have in common? What do lists and tuples have in common? Can you add things to a list? Can you add things to a tuple? An overview: | property | set | list | tuple | dict keys | dict values | |------------------------------- |-------------------|-----------------|-------------|-----------|-------------| | mutable (can you add/remove?) | yes | yes | no | yes | yes | | can contain duplicates | no | yes | yes | no | yes | | ordered | no | yes | yes | yes, but do not rely on it | depends on type of value | | finding element(s) | quick | slow | slow | quick | depends on type of value | | can contain | immutables | all | all | immutables | all | Exercises Exercise 1: You are trying to keep track of your groceries using a Python dictionary. Please add 'tomatoes', 'bread', 'chocolate bars' and 'pineapples' to your shopping dictionary and assign values according to how many items of each you would like to buy. End of explanation """ # your code here """ Explanation: Exercise 2: Print the number of pineapples you would like to buy by using only one line of code and without printing the entire dictionary. 
End of explanation """ # your code here """ Explanation: Exercise 3: Use a loop and unpacking to print the items and numbers on your shopping list in the following format: Item: [Item], number: [number] e.g. Item: tomatoes, number: 3 End of explanation """
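As a footnote to the counting pattern used in section 5 of this chapter, the if/else can also be collapsed with the dictionary method get, which returns a default value when the key is missing:

```python
word2freq = {}
for word in ['Obama', 'was', 'the', 'president', 'of', 'the', 'USA']:
    # get(word, 0) returns 0 the first time we see a word, so no if/else is needed
    word2freq[word] = word2freq.get(word, 0) + 1

print(word2freq)  # {'Obama': 1, 'was': 1, 'the': 2, 'president': 1, 'of': 1, 'USA': 1}
```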
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_ml/td2a_tree_selection_correction.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline """ Explanation: 2A.ml - Réduction d'une forêt aléatoire - correction Le modèle Lasso permet de sélectionner des variables, une forêt aléatoire produit une prédiction comme étant la moyenne d'arbres de régression. Et si on mélangeait les deux ? End of explanation """ from sklearn.datasets import load_diabetes data = load_diabetes() X, y = data.data, data.target from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) """ Explanation: Datasets Comme il faut toujours des données, on prend ce jeu Boston. End of explanation """ from sklearn.ensemble import RandomForestRegressor as model_class clr = model_class() clr.fit(X_train, y_train) """ Explanation: Une forêt aléatoire End of explanation """ len(clr.estimators_) from sklearn.metrics import r2_score r2_score(y_test, clr.predict(X_test)) """ Explanation: Le nombre d'arbres est... End of explanation """ import numpy dest = numpy.zeros((X_test.shape[0], len(clr.estimators_))) estimators = numpy.array(clr.estimators_).ravel() for i, est in enumerate(estimators): pred = est.predict(X_test) dest[:, i] = pred average = numpy.mean(dest, axis=1) r2_score(y_test, average) """ Explanation: Random Forest = moyenne des prédictions On recommence en faisant la moyenne soi-même. End of explanation """ from sklearn.linear_model import LinearRegression def new_features(forest, X): dest = numpy.zeros((X.shape[0], len(forest.estimators_))) estimators = numpy.array(forest.estimators_).ravel() for i, est in enumerate(estimators): pred = est.predict(X) dest[:, i] = pred return dest X_train_2 = new_features(clr, X_train) lr = LinearRegression() lr.fit(X_train_2, y_train) X_test_2 = new_features(clr, X_test) r2_score(y_test, lr.predict(X_test_2)) """ Explanation: A priori, c'est la même chose. 
Pondérer les arbres à l'aide d'une régression linéaire La forêt aléatoire est une façon de créer de nouvelles features, 100 exactement qu'on utilise pour caler une régression linéaire. End of explanation """ lr.coef_ import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(12, 4)) ax.bar(numpy.arange(0, len(lr.coef_)), lr.coef_) ax.set_title("Coefficients pour chaque arbre calculés avec une régression linéaire"); """ Explanation: Un peu moins bien, un peu mieux, le risque d'overfitting est un peu plus grand avec ces nombreuses features car la base d'apprentissage ne contient que 379 observations (regardez X_train.shape pour vérifier). End of explanation """ lr_raw = LinearRegression() lr_raw.fit(X_train, y_train) r2_score(y_test, lr_raw.predict(X_test)) """ Explanation: Le score est avec une régression linéaire sur les variables initiales est nettement moins élevé. End of explanation """ from sklearn.linear_model import Lasso lrs = Lasso(max_iter=10000) lrs.fit(X_train_2, y_train) lrs.coef_ """ Explanation: Sélection d'arbres L'idée est d'utiliser un algorithme de sélection de variables type Lasso pour réduire la forêt aléatoire sans perdre en performance. C'est presque le même code. End of explanation """ r2_score(y_test, lrs.predict(X_test_2)) """ Explanation: Pas mal de zéros donc pas mal d'arbres non utilisés. 
End of explanation """ r2_score(y_test, lrs.predict(X_test_2)) """ Explanation: Not much loss... It makes you want to try several values of alpha. End of explanation """ from tqdm import tqdm alphas = [0.01 * i for i in range(100)] + [1 + 0.1 * i for i in range(100)] obs = [] for i in tqdm(range(0, len(alphas))): alpha = alphas[i] lrs = Lasso(max_iter=20000, alpha=alpha) lrs.fit(X_train_2, y_train) obs.append(dict( alpha=alpha, nonzero=len(lrs.coef_[lrs.coef_ != 0]), r2=r2_score(y_test, lrs.predict(X_test_2)) )) from pandas import DataFrame df = DataFrame(obs) df.tail() fig, ax = plt.subplots(1, 2, figsize=(12, 4)) df[["alpha", "nonzero"]].set_index("alpha").plot(ax=ax[0], logx=True) ax[0].set_title("Number of non-zero coefficients") df[["alpha", "r2"]].set_index("alpha").plot(ax=ax[1], logx=True) ax[1].set_title("r2"); """ Explanation: End of explanation """
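Why larger alpha values zero out more trees can be illustrated with soft-thresholding, the elementary operation inside Lasso's coordinate-descent updates (a sketch of the mechanism, not sklearn's actual solver):

```python
import numpy as np

def soft_threshold(w, alpha):
    # Shrink each coefficient toward zero; anything smaller than alpha
    # in absolute value becomes exactly zero.
    return np.sign(w) * np.maximum(np.abs(w) - alpha, 0.0)

w = np.array([0.05, -0.2, 0.8, -0.01, 0.3])
nonzeros = [int(np.count_nonzero(soft_threshold(w, a))) for a in (0.0, 0.1, 0.5)]
print(nonzeros)  # fewer and fewer non-zero coefficients as alpha grows
```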
naifrec/cnn-dropout
cnn-scyfer-project.ipynb
mit
import cPickle import gzip import os import sys import timeit import numpy import theano import theano.tensor as T from theano.tensor.signal import downsample from theano.tensor.nnet import conv rng = numpy.random.RandomState(23455) # instantiate 4D tensor for input input = T.tensor4(name='input') w_shp = (2, 3, 9, 9) """ Explanation: Convolutional Neural Network (LeNet) Update of the example of CNN given on deeplearning.net. This notebook tries to explain all the code as if reader had no knowledge of Theano whatsoever. Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features: tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions. transparent use of a GPU – Perform data-intensive calculations up to 140x faster than with CPU.(float32 only) efficient symbolic differentiation – Theano does your derivatives for function with one or many inputs. speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny. dynamic C code generation – Evaluate expressions faster. extensive unit-testing and self-verification – Detect and diagnose many types of mistake. Theano has been powering large-scale computationally intensive scientific investigations since 2007. Outline of this document: A. The tools to implement CNNs 1. The Convolution Operator 2. Testing ConvOp on an image 3. MaxPooling 4. Convolution + MaxPooling layer B. Full LeNet model 1. HiddenLayer class 2. LogisticRegression class 3. Loading dataset 4. Putting it all together C. Implementation of Learning Rate Decay D. Implementation of dropout 1. Creating dropout function 2. Creating dropout classes 3. Rewriting evaluate_lenet5 E. Visualization of the convolutional filters 1. Visualization function 2. Testing the function on a single untrained LeNetConvPoolLayer 3. Displaying the learned filters after training F. Automated creation of a CNN + MLP A. 
The tools to implement CNNs 1. The Convolution Operator ConvOp is the main workhorse for implementing a convolutional layer in Theano. ConvOp is used by theano.tensor.signal.conv2d, which takes two symbolic inputs: a 4D tensor corresponding to a mini-batch of input images. The shape of the tensor is as follows: [mini-batch size, number of input feature maps, image height, image width]. a 4D tensor corresponding to the weight matrix W. The shape of the tensor is: [number of feature maps at layer m, number of feature maps at layer m-1, filter height, filter width] Below is the Theano code for implementing a convolutional layer similar to the one of Figure 1. The input consists of 3 features maps (an RGB color image) of size 120x160. We use two convolutional filters with 9x9 receptive fields. End of explanation """ w_bound = numpy.sqrt(3 * 9 * 9) """ Explanation: The shape of the 4D tensor corresponding to the weight matrix W is: number of feature maps at layer 2: as we chose to have only 2 convolutional filters, we will have 2 resulting feature maps. number of feature maps at layer 1: the original image being RGB, it has 3 layers on top of each other, so 3 feature maps. filter height: the convolutional filters has 9x9 receptive fields, so height = 9 pixels filter width: similarly, width = 9 pixels End of explanation """ W = theano.shared( numpy.asarray( rng.uniform( low=-1.0 / w_bound, high=1.0 / w_bound, size=w_shp), dtype=input.dtype), name ='W') """ Explanation: Note that we use the same weight initialization formula as with the MLP. Weights are sampled randomly from a uniform distribution in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit. For MLPs, this was the number of units in the layer below. For CNNs however, we have to take into account the number of input feature maps and the size of the receptive fields. 
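The shape bookkeeping can be checked with a pure-NumPy sketch of a "valid" convolution (cross-correlation, strictly speaking): one 3x120x160 RGB image filtered by the (2, 3, 9, 9) weight tensor above yields two 112x152 feature maps, since each 9x9 filter trims 8 rows and 8 columns.

```python
import numpy as np

def conv_valid(img, filt):
    # img: (in_maps, H, W); filt: (out_maps, in_maps, fh, fw)
    out_maps, in_maps, fh, fw = filt.shape
    _, H, W = img.shape
    out = np.zeros((out_maps, H - fh + 1, W - fw + 1))
    for o in range(out_maps):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(img[:, i:i + fh, j:j + fw] * filt[o])
    return out

rng = np.random.RandomState(23455)
w_bound = np.sqrt(3 * 9 * 9)  # fan-in of one hidden unit
filt = rng.uniform(-1.0 / w_bound, 1.0 / w_bound, size=(2, 3, 9, 9))
img = rng.rand(3, 120, 160)
out = conv_valid(img, filt)
print(out.shape)  # (2, 112, 152)
```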
End of explanation """ # initialize shared variable for bias (1D tensor) with random values # IMPORTANT: biases are usually initialized to zero. However in this # particular application, we simply apply the convolutional layer to # an image without learning the parameters. We therefore initialize # them to random values to "simulate" learning. b_shp = (2,) b = theano.shared(numpy.asarray( rng.uniform(low=-.5, high=.5, size=b_shp), dtype=input.dtype), name ='b') """ Explanation: RandomState.uniform(low=0.0, high=1.0, size=None): draw samples from a uniform distribution: samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by uniform. source theano.shared: the main benefits of using shared constructors are you can use them to initialise important variables with predefined numerical values (weight matrices in a neural network, for example). source The distinction between Theano-managed memory and user-managed memory can be broken down by some Theano functions (e.g. shared, get_value and the constructors for In and Out) by using a borrow=True flag. This can make those methods faster (by avoiding copy operations) at the expense of risking subtle bugs in the overall program (by aliasing memory). Take home message: It is a safe practice (and a good idea) to use borrow=True in a shared variable constructor when the shared variable stands for a large object (in terms of memory footprint) and you do not want to create copies of it in memory. It is not a reliable technique to use borrow=True to modify shared variables through side-effect, because with some devices (e.g. GPU devices) this technique will not work. End of explanation """ # build symbolic expression that computes the convolution of input with filters in w conv_out = conv.conv2d(input, W) """ Explanation: We chose to have only 2 filters, so 2 bias terms need to be initialized. 
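The bias addition in the next cell relies on broadcasting; dimshuffle('x', 0, 'x', 'x') plays the role of a NumPy reshape to (1, 2, 1, 1), as this small sketch shows:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

conv_out = np.zeros((1, 2, 112, 152))  # minibatch of 1, two feature maps
b = np.array([0.5, -0.5])              # one bias per output feature map

# b.reshape(1, 2, 1, 1) broadcasts over minibatch, height and width,
# exactly what dimshuffle('x', 0, 'x', 'x') achieves in Theano.
out = sigmoid(conv_out + b.reshape(1, 2, 1, 1))
print(out.shape)
```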
End of explanation """ # build symbolic expression to add bias and apply activation function, i.e. produce neural net layer output output = T.nnet.sigmoid(conv_out + b.dimshuffle('x', 0, 'x', 'x')) """ Explanation: nnet.conv2d: This is the standard operator for convolutional neural networks working with batches of multi-channel 2D images, available for CPU and GPU. source End of explanation """ # create theano function to compute filtered images f = theano.function([input], output) """ Explanation: tensor.nnet.sigmoid(x): returns the standard sigmoid nonlinearity applied to x. Parameters: x - symbolic Tensor (or compatible) Return type: same as x Returns: element-wise sigmoid: $$sigmoid(x) = \frac{1}{1 + \exp(-x)}$$. Note: in numpy and in Theano, the transpose of a vector is exactly the same vector! Use reshape or dimshuffle to turn your vector into a row or column matrix. source End of explanation """ import pylab from PIL import Image # open random image of dimensions 1936×2592 img = Image.open(open('images/profilepic4.jpg')) img = numpy.asarray(img, dtype='float64') / 256. # divide by 256 to have RGB 0-1 scale and not 0 - 256 #put image in 4D tensor of shape (1, 3, height, width) img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936) filtered_img = f(img_) # plot original image and first and second components of output pylab.subplot(1, 3, 1); pylab.axis('off'); pylab.imshow(img) pylab.gray(); # recall that the convOp output (filtered image) is actually a "minibatch", # of size 1 here, so we take index 0 in the first dimension: pylab.subplot(1, 3, 2); pylab.axis('off'); pylab.imshow(filtered_img[0, 0, :, :]) pylab.subplot(1, 3, 3); pylab.axis('off'); pylab.imshow(filtered_img[0, 1, :, :]) pylab.show() """ Explanation: 2. 
Testing ConvOp on an image End of explanation """ from theano.tensor.signal import downsample input = T.dtensor4('input') maxpool_shape = (2, 2) pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=True) g = theano.function([input],pool_out) invals = numpy.random.RandomState(1).rand(3, 2, 5, 5) print 'With ignore_border set to True:' print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :] print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :] pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=False) g = theano.function([input],pool_out) print 'With ignore_border set to False:' print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :] print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :] """ Explanation: <img src="images/figure_3.png"> 3. MaxPooling Another important concept of CNNs is max-pooling, which is a form of non-linear down-sampling. Max-pooling partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum value. Max-pooling is useful in vision for two reasons: By eliminating non-maximal values, it reduces computation for upper layers. It provides a form of translation invariance. Imagine cascading a max-pooling layer with a convolutional layer. There are 8 directions in which one can translate the input image by a single pixel. If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8. Since it provides additional robustness to position, max-pooling is a “smart” way of reducing the dimensionality of intermediate representations. Max-pooling is done in Theano by way of theano.tensor.signal.downsample.max_pool_2d. This function takes as input an N dimensional tensor (where N >= 2) and a downscaling factor and performs max-pooling over the 2 trailing dimensions of the tensor. 
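The pooling operation itself is easy to reproduce in NumPy; this sketch mimics 2x2 non-overlapping max-pooling over the two trailing dimensions with the ignore_border=True behaviour (odd borders dropped):

```python
import numpy as np

def max_pool_2x2(x):
    # Keep only full 2x2 blocks, then take the max inside each block.
    h = x.shape[-2] // 2 * 2
    w = x.shape[-1] // 2 * 2
    x = x[..., :h, :w]
    x = x.reshape(x.shape[:-2] + (h // 2, 2, w // 2, 2))
    return x.max(axis=(-3, -1))

a = np.arange(25.0).reshape(5, 5)
pooled = max_pool_2x2(a)
print(pooled.shape)  # (2, 2): the 5th row and column are ignored
```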
End of explanation """ class LeNetConvPoolLayer(object): """Pool Layer of a convolutional network """ def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)): """ Allocate a LeNetConvPoolLayer with shared variable internal parameters. :type rng: numpy.random.RandomState :param rng: a random number generator used to initialize weights :type input: theano.tensor.dtensor4 :param input: symbolic image tensor, of shape image_shape :type filter_shape: tuple or list of length 4 :param filter_shape: (number of filters, num input feature maps, filter height, filter width) :type image_shape: tuple or list of length 4 :param image_shape: (batch size, num input feature maps, image height, image width) :type poolsize: tuple or list of length 2 :param poolsize: the downsampling (pooling) factor (#rows, #cols) """ assert image_shape[1] == filter_shape[1] # assert just checks if the number of feature maps is consistent between filter shape and image_shape self.input = input # there are "num input feature maps * filter height * filter width" # inputs to each hidden unit # reminder: Weights are sampled randomly from a uniform distribution # in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit fan_in = numpy.prod(filter_shape[1:]) # each unit in the lower layer receives a gradient from: # "num output feature maps * filter height * filter width" / # pooling size fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) / numpy.prod(poolsize)) # initialize weights with random weights W_bound = numpy.sqrt(6. 
/ (fan_in + fan_out)) self.W = theano.shared( numpy.asarray( rng.uniform(low=-W_bound, high=W_bound, size=filter_shape), dtype=theano.config.floatX ), borrow=True # see above the def of theano.shared for explanation of borrow ) # the bias is a 1D tensor -- one bias per output feature map b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX) self.b = theano.shared(value=b_values, borrow=True) # convolve input feature maps with filters conv_out = conv.conv2d( input=input, filters=self.W, filter_shape=filter_shape, image_shape=image_shape ) # downsample each feature map individually, using maxpooling pooled_out = downsample.max_pool_2d( input=conv_out, ds=poolsize, ignore_border=True ) # add the bias term. Since the bias is a vector (1D array), we first # reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will # thus be broadcasted across mini-batches and feature map # width & height self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x')) # store parameters of this layer self.params = [self.W, self.b] # keep track of model input self.input = input """ Explanation: theano.tensor.signal.downsample.max_pool_2d(input, ds, ignore_border=None, st=None, padding=(0, 0), mode='max'): takes as input a N-D tensor, where N >= 2. It downscales the input image by the specified factor, by keeping only the maximum value of non-overlapping patches of size (ds[0],ds[1]) Parameters: input (N-D theano tensor of input images) – Input images. Max pooling will be done over the 2 last dimensions. ds (tuple of length 2) – Factor by which to downscale (vertical ds, horizontal ds). (2,2) will halve the image in each dimension. ignore_border (bool (default None, will print a warning and set to False)) – When True, (5,5) input with ds=(2,2) will generate a (2,2) output. (3,3) otherwise. st (tuple of lenght 2) – Stride size, which is the number of shifts over rows/cols to get the next pool region. 
If st is None, it is considered equal to ds (no overlap on pooling regions). padding (tuple of two ints) – (pad_h, pad_w), pad zeros to extend beyond four borders of the images, pad_h is the size of the top and bottom margins, and pad_w is the size of the left and right margins. mode ({‘max’, ‘sum’, ‘average_inc_pad’, ‘average_exc_pad’}) – Operation executed on each window. max and sum always exclude the padding in the computation. average gives you the choice to include or exclude it. source 4. Convolution + MaxPooling layer We now have all we need to implement a LeNet model in Theano. We start with the LeNetConvPoolLayer class, which implements a {convolution + max-pooling} layer. End of explanation """ class HiddenLayer(object): def __init__(self, rng, input, n_in, n_out, W=None, b=None, activation=T.tanh): """ Typical hidden layer of a MLP: units are fully-connected and have sigmoidal activation function. Weight matrix W is of shape (n_in,n_out) and the bias vector b is of shape (n_out,). NOTE : The nonlinearity used here is tanh Hidden unit activation is given by: tanh(dot(input,W) + b) :type rng: numpy.random.RandomState :param rng: a random number generator used to initialize weights :type input: theano.tensor.dmatrix :param input: a symbolic tensor of shape (n_examples, n_in) :type n_in: int :param n_in: dimensionality of input :type n_out: int :param n_out: number of hidden units :type activation: theano.Op or function :param activation: Non linearity to be applied in the hidden layer """ self.input = input # `W` is initialized with `W_values` which is uniformely sampled # from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden)) # for tanh activation function # the output of uniform if converted using asarray to dtype # theano.config.floatX so that the code is runable on GPU # Note : optimal initialization of weights is dependent on the # activation function used (among other things). 
# For example, results presented in [Xavier10] suggest that you # should use 4 times larger initial weights for sigmoid # compared to tanh # We have no info for other function, so we use the same as # tanh. if W is None: W_values = numpy.asarray( rng.uniform( low=-numpy.sqrt(6. / (n_in + n_out)), high=numpy.sqrt(6. / (n_in + n_out)), size=(n_in, n_out) ), dtype=theano.config.floatX ) if activation == theano.tensor.nnet.sigmoid: W_values *= 4 W = theano.shared(value=W_values, name='W', borrow=True) if b is None: b_values = numpy.zeros((n_out,), dtype=theano.config.floatX) b = theano.shared(value=b_values, name='b', borrow=True) self.W = W self.b = b lin_output = T.dot(input, self.W) + self.b self.output = ( lin_output if activation is None else activation(lin_output) ) # parameters of the model self.params = [self.W, self.b] """ Explanation: Notice that when initializing the weight values, the fan-in is determined by the size of the receptive fields and the number of input feature maps. B. Full LeNet model Sparse, convolutional layers and max-pooling are at the heart of the LeNet family of models. While the exact details of the model will vary greatly, the figure below shows a graphical depiction of a LeNet model. <img src="images/mylenet.png"> The lower-layers are composed to alternating convolution and max-pooling layers. The upper-layers however are fully-connected and correspond to a traditional MLP (hidden layer + logistic regression). The input to the first fully-connected layer is the set of all features maps at the layer below. From an implementation point of view, this means lower-layers operate on 4D tensors. These are then flattened to a 2D matrix of rasterized feature maps, to be compatible with our previous MLP implementation. Using the LogisticRegression class defined in Classifying MNIST digits using Logistic Regression and the HiddenLayer class defined in Multilayer Perceptron, we can instantiate the network as follows. 1. 
HiddenLayer class The original code for this class can be found here: source End of explanation """ class LogisticRegression(object): """Multi-class Logistic Regression Class The logistic regression is fully described by a weight matrix :math:`W` and bias vector :math:`b`. Classification is done by projecting data points onto a set of hyperplanes, the distance to which is used to determine a class membership probability. """ def __init__(self, input, n_in, n_out): """ Initialize the parameters of the logistic regression :type input: theano.tensor.TensorType :param input: symbolic variable that describes the input of the architecture (one minibatch) :type n_in: int :param n_in: number of input units, the dimension of the space in which the datapoints lie :type n_out: int :param n_out: number of output units, the dimension of the space in which the labels lie """ # initialize with 0 the weights W as a matrix of shape (n_in, n_out) self.W = theano.shared( value=numpy.zeros( (n_in, n_out), dtype=theano.config.floatX ), name='W', borrow=True ) # initialize the biases b as a vector of n_out 0s self.b = theano.shared( value=numpy.zeros( (n_out,), dtype=theano.config.floatX ), name='b', borrow=True ) # symbolic expression for computing the matrix of class-membership # probabilities # Where: # W is a matrix where column-k represent the separation hyperplane for # class-k # x is a matrix where row-j represents input training sample-j # b is a vector where element-k represent the free parameter of # hyperplane-k self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b) # symbolic description of how to compute prediction as class whose # probability is maximal self.y_pred = T.argmax(self.p_y_given_x, axis=1) # parameters of the model self.params = [self.W, self.b] # keep track of model input self.input = input def negative_log_likelihood(self, y): """Return the mean of the negative log-likelihood of the prediction of this model under a given target distribution. .. 
math:: \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|} \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\ \ell (\theta=\{W,b\}, \mathcal{D}) :type y: theano.tensor.TensorType :param y: corresponds to a vector that gives for each example the correct label Note: we use the mean instead of the sum so that the learning rate is less dependent on the batch size """ # y.shape[0] is (symbolically) the number of rows in y, i.e., # number of examples (call it n) in the minibatch # T.arange(y.shape[0]) is a symbolic vector which will contain # [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of # Log-Probabilities (call it LP) with one row per example and # one column per class LP[T.arange(y.shape[0]),y] is a vector # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ..., # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is # the mean (across minibatch examples) of the elements in v, # i.e., the mean log-likelihood across the minibatch. return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y]) def errors(self, y): """Return a float representing the number of errors in the minibatch over the total number of examples of the minibatch ; zero one loss over the size of the minibatch :type y: theano.tensor.TensorType :param y: corresponds to a vector that gives for each example the correct label """ # check if y has same dimension of y_pred if y.ndim != self.y_pred.ndim: raise TypeError( 'y should have the same shape as self.y_pred', ('y', y.type, 'y_pred', self.y_pred.type) ) # check if y is of the correct datatype if y.dtype.startswith('int'): # the T.neq operator returns a vector of 0s and 1s, where 1 # represents a mistake in prediction return T.mean(T.neq(self.y_pred, y)) else: raise NotImplementedError() """ Explanation: The class uses tanh as activation function by default. 
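The preference for tanh can be made concrete with a quick comparison: tanh is a rescaled sigmoid, tanh(x) = 2*sigmoid(2x) - 1, and its slope at the origin is 4 times larger, which is one way to read the "4 times larger initial weights for sigmoid" comment in the code above.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 7)
sigmoid = 1.0 / (1.0 + np.exp(-x))  # outputs in (0, 1), not zero-centred
tanh = np.tanh(x)                   # outputs in (-1, 1), zero-centred

# tanh is a shifted and rescaled sigmoid
assert np.allclose(tanh, 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0)
# slopes at the origin: sigmoid'(0) = 1/4, tanh'(0) = 1
print(sigmoid[3], tanh[3])  # values at x = 0: 0.5 and 0.0
```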
This can be supported by the results presented in the scientific paper by called Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks by Ahmet V Olgac and Bekir Karlik. In this study, we have used five conventional differentiable and monotonic activation functions for the evolution of MLP architecture along with Generalized Delta rule learning. These proposed well-known and effective activation functions are Bi-polar sigmoid, Uni-polar sigmoid, Tanh, Conic Section, and Radial Bases Function (RBF). Having compared their performances, simulation results show that Tanh (hyperbolic tangent) function performs better recognition accuracy than those of the other functions. In other words, the neural network computed good results when “Tanh-Tanh” combination of activation functions was used for both neurons (or nodes) of hidden and output layers. The paper by Xavier can be found at: 2. LogisticRegression class The original code for this class can be found here: source End of explanation """ def evaluate_lenet5(learning_rate=0.1, n_epochs=200, dataset='mnist.pkl.gz', nkerns=[20, 50], batch_size=500): """ Demonstrates lenet on MNIST dataset :type learning_rate: float :param learning_rate: learning rate used (factor for the stochastic gradient) :type n_epochs: int :param n_epochs: maximal number of epochs to run the optimizer :type dataset: string :param dataset: path to the dataset used for training /testing (MNIST here) :type nkerns: list of ints :param nkerns: number of kernels on each layer (so 20 convolutional filters, and then 50 activation units) """ rng = numpy.random.RandomState(23455) datasets = load_data(dataset) train_set_x, train_set_y = datasets[0] valid_set_x, valid_set_y = datasets[1] test_set_x, test_set_y = datasets[2] # compute number of minibatches for training, validation and testing n_train_batches = train_set_x.get_value(borrow=True).shape[0] n_valid_batches = 
valid_set_x.get_value(borrow=True).shape[0] n_test_batches = test_set_x.get_value(borrow=True).shape[0] n_train_batches /= batch_size n_valid_batches /= batch_size n_test_batches /= batch_size # allocate symbolic variables for the data index = T.lscalar() # index to a [mini]batch # start-snippet-1 x = T.matrix('x') # the data is presented as rasterized images y = T.ivector('y') # the labels are presented as 1D vector of # [int] labels ###################### # BUILD ACTUAL MODEL # ###################### print '... building the model' # Reshape matrix of rasterized images of shape (batch_size, 28 * 28) # to a 4D tensor, compatible with our LeNetConvPoolLayer # (28, 28) is the size of MNIST images. layer0_input = x.reshape((batch_size, 1, 28, 28)) # Construct the first convolutional pooling layer: # filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24) # maxpooling reduces this further to (24/2, 24/2) = (12, 12) # 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12) layer0 = LeNetConvPoolLayer( rng, input=layer0_input, image_shape=(batch_size, 1, 28, 28), filter_shape=(nkerns[0], 1, 5, 5), poolsize=(2, 2) ) ''' Reminder of LeNetConvPoolLayer input parameters and types :type rng: numpy.random.RandomState :param rng: a random number generator used to initialize weights :type input: theano.tensor.dtensor4 :param input: symbolic image tensor, of shape image_shape :type filter_shape: tuple or list of length 4 :param filter_shape: (number of filters, num input feature maps, filter height, filter width) :type image_shape: tuple or list of length 4 :param image_shape: (batch size, num input feature maps, image height, image width) :type poolsize: tuple or list of length 2 :param poolsize: the downsampling (pooling) factor (#rows, #cols) ''' # Construct the second convolutional pooling layer # filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8) # maxpooling reduces this further to (8/2, 8/2) = (4, 4) # 4D output tensor is thus of shape 
(batch_size, nkerns[1], 4, 4) layer1 = LeNetConvPoolLayer( rng, input=layer0.output, image_shape=(batch_size, nkerns[0], 12, 12), filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2) ) # the HiddenLayer being fully-connected, it operates on 2D matrices of # shape (batch_size, num_pixels) (i.e matrix of rasterized images). # This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4), # or (500, 50 * 4 * 4) = (500, 800) with the default values. layer2_input = layer1.output.flatten(2) # construct a fully-connected sigmoidal layer layer2 = HiddenLayer( rng, input=layer2_input, n_in=nkerns[1] * 4 * 4, n_out=500, activation=T.tanh ) # classify the values of the fully-connected sigmoidal layer layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10) # the cost we minimize during training is the NLL of the model cost = layer3.negative_log_likelihood(y) # create a function to compute the mistakes that are made by the model test_model = theano.function( [index], layer3.errors(y), givens={ x: test_set_x[index * batch_size: (index + 1) * batch_size], y: test_set_y[index * batch_size: (index + 1) * batch_size] } ) validate_model = theano.function( [index], layer3.errors(y), givens={ x: valid_set_x[index * batch_size: (index + 1) * batch_size], y: valid_set_y[index * batch_size: (index + 1) * batch_size] } ) # create a list of all model parameters to be fit by gradient descent params = layer3.params + layer2.params + layer1.params + layer0.params # create a list of gradients for all model parameters grads = T.grad(cost, params) # train_model is a function that updates the model parameters by # SGD Since this model has many parameters, it would be tedious to # manually create an update rule for each model parameter. We thus # create the updates list by automatically looping over all # (params[i], grads[i]) pairs. 
updates = [ (param_i, param_i - learning_rate * grad_i) for param_i, grad_i in zip(params, grads) ] train_model = theano.function( [index], cost, updates=updates, givens={ x: train_set_x[index * batch_size: (index + 1) * batch_size], y: train_set_y[index * batch_size: (index + 1) * batch_size] } ) # end-snippet-1 ############### # TRAIN MODEL # ############### print '... training' # early-stopping parameters patience = 10000 # look as this many examples regardless patience_increase = 2 # wait this much longer when a new best is # found improvement_threshold = 0.995 # a relative improvement of this much is # considered significant validation_frequency = min(n_train_batches, patience / 2) # go through this many # minibatche before checking the network # on the validation set; in this case we # check every epoch best_validation_loss = numpy.inf best_iter = 0 test_score = 0. start_time = timeit.default_timer() epoch = 0 done_looping = False while (epoch < n_epochs) and (not done_looping): epoch = epoch + 1 for minibatch_index in xrange(n_train_batches): # This function is very similar to range(), but returns an xrange object instead of a list. # This is an opaque sequence type which yields the same values as the corresponding list, # without actually storing them all simultaneously. The advantage of xrange() over range() # is minimal (since xrange() still has to create the values when asked for them) except when a # very large range is used on a memory-starved machine or when all of the range’s elements # are never used (such as when the loop is usually terminated with break). # For more information on xrange objects, see XRange Type and Sequence Types — str, # unicode, list, tuple, bytearray, buffer, xrange iter = (epoch - 1) * n_train_batches + minibatch_index # for epoch = 1 (first value while entering the "while" loop; iter = 0 * n_train_batches + minibtach_index # so iter = 0. This will call train_model over the index of train_set_x[0:500] and train_set_y[0:500]. 
# the (epoch -1) * n_train_batches keep track of the iteration number while looping over and over on # the train set. if iter % 100 == 0: print 'training @ iter = ', iter cost_ij = train_model(minibatch_index) # Only at this moment all the symbolic expression that were called during "Building the model" are # called with real values replacing the symbolic tensors. That is how theano works. if (iter + 1) % validation_frequency == 0: # compute zero-one loss on validation set validation_losses = [validate_model(i) for i in xrange(n_valid_batches)] this_validation_loss = numpy.mean(validation_losses) print('epoch %i, minibatch %i/%i, validation error %f %%' % (epoch, minibatch_index + 1, n_train_batches, this_validation_loss * 100.)) # if we got the best validation score until now if this_validation_loss < best_validation_loss: #improve patience if loss improvement is good enough if this_validation_loss < best_validation_loss * \ improvement_threshold: patience = max(patience, iter * patience_increase) # save best validation score and iteration number best_validation_loss = this_validation_loss best_iter = iter # test it on the test set test_losses = [ test_model(i) for i in xrange(n_test_batches) ] test_score = numpy.mean(test_losses) print((' epoch %i, minibatch %i/%i, test error of ' 'best model %f %%') % (epoch, minibatch_index + 1, n_train_batches, test_score * 100.)) if patience <= iter: done_looping = True break end_time = timeit.default_timer() print('Optimization complete.') print('Best validation score of %f %% obtained at iteration %i, ' 'with test performance %f %%' % (best_validation_loss * 100., best_iter + 1, test_score * 100.)) print >> sys.stderr, ('The code for file ' + os.path.split(__file__)[1] + ' ran for %.2fm' % ((end_time - start_time) / 60.)) """ Explanation: .negative_log_likelihood(y): this method returns the mean of the negative log-likelihood of the prediction of this model under a given target distribution: $$\frac{1}{|\mathcal{D}|} 
\mathcal{L} (\theta={W,b}, \mathcal{D}) =\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|} \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \ \ell (\theta={W,b}, \mathcal{D})$$ type y: theano.tensor.TensorType param y: corresponds to a vector that gives for each example the correct label Note: we use the mean instead of the sum so that the learning rate is less dependent on the batch size. 3. Loading dataset Original code can be found here. This piece of code loads the dataset and partitions it into: train set, validation set and test set. 4. Putting it all together End of explanation """ def evaluate_lenet5_ldr(learning_rate=0.1, learning_rate_decay = 0.98, n_epochs=200, dataset='mnist.pkl.gz', nkerns=[20, 50], batch_size=500): """ :type learning_rate_decay: float :param learning_rate_decay: learning rate decay used """ rng = numpy.random.RandomState(23455) """ ... """ #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!# # Theano function to decay the learning rate, this is separate from the # training function because we only want to do this once each epoch instead # of after each minibatch. decay_learning_rate = theano.function(inputs=[], outputs=learning_rate, updates={learning_rate: learning_rate * learning_rate_decay}) #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!# ############### # TRAIN MODEL # ############### """ ... 
""" while (epoch < n_epochs) and (not done_looping): epoch = epoch + 1 for minibatch_index in xrange(n_train_batches): iter = (epoch - 1) * n_train_batches + minibatch_index if iter % 100 == 0: print 'training @ iter = ', iter cost_ij = train_model(minibatch_index) if (iter + 1) % validation_frequency == 0: # compute zero-one loss on validation set validation_losses = [validate_model(i) for i in xrange(n_valid_batches)] this_validation_loss = numpy.mean(validation_losses) print('epoch %i, minibatch %i/%i, validation error %f %%' % (epoch, minibatch_index + 1, n_train_batches, this_validation_loss * 100.)) # if we got the best validation score until now if this_validation_loss < best_validation_loss: #improve patience if loss improvement is good enough if this_validation_loss < best_validation_loss * \ improvement_threshold: patience = max(patience, iter * patience_increase) # save best validation score and iteration number best_validation_loss = this_validation_loss best_iter = iter # test it on the test set test_losses = [ test_model(i) for i in xrange(n_test_batches) ] test_score = numpy.mean(test_losses) print((' epoch %i, minibatch %i/%i, test error of ' 'best model %f %%') % (epoch, minibatch_index + 1, n_train_batches, test_score * 100.)) #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! new_learning_rate = decay_learning_rate() #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! if patience <= iter: done_looping = True break """ ... """ """ Explanation: C. Implementation of Learning Rate Decay Let's modify the code of evaluate_lenet5 function so it allows Learning Rate Decay. Definition: the learning rate is the step-size of the update of the parameters during gradient descent. It is typically between 0.1 and 0.01. However, if it is too big, gradient descent can overshoot the minimum and diverge. If it is too small, the optimization is very slow and may get stuck into a local minimum. 
The learning rate decay allows for the learning rate to be big at the beginning and then slowly decrease when nearing the global minimum: initial learning rate: $$\alpha = \alpha_0$$ learning rate decay: $$\alpha_d$$ at each iteration update: $$\alpha = \alpha_d*\alpha$$ The full code can be found at code/convolutional_mlp_ldr.py End of explanation """ def _dropout_from_layer(rng, layer, p): """p is the probablity of dropping a unit """ srng = theano.tensor.shared_randomstreams.RandomStreams( rng.randint(999999)) # p=1-p because 1's indicate keep and p is probability of dropping mask = srng.binomial(n=1, p=1-p, size=layer.shape) # The cast is important because # int * float32 = float64 which pulls things off the gpu output = layer * T.cast(mask, theano.config.floatX) return output """ Explanation: D. Implementation of dropout Dropout is a technique that was presented in G. Hinton's work "Dropout: A simple Way to Prevent Neural Networks from Overfitting". As can be read in the abstract: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. 
At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights <img src="images/hinton1.png" width=650> Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal. The implementation presented here is largely inspired by GitHub user mdenil's work on dropout, who implemented Hinton's dropout on a Multi-Layer Perceptron (base code from deeplearning.net). 1. Creating dropout function This function takes a layer (which can be either a layer of units in an MLP or a layer of feature maps in a CNN) and drops units from the layer with a probability of p (or, in the case of a CNN, pixels from feature maps with a probability of p). End of explanation """

class DropoutHiddenLayer(HiddenLayer):
    def __init__(self, rng, input, n_in, n_out, activation, dropout_rate, W=None, b=None):
        super(DropoutHiddenLayer, self).__init__(
                rng=rng, input=input, n_in=n_in, n_out=n_out, W=W, b=b,
                activation=activation)
        self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)

class DropoutLeNetConvPoolLayer(LeNetConvPoolLayer):
    def __init__(self, rng, input, filter_shape, image_shape, poolsize,
                 dropout_rate, W=None, b=None):
        super(DropoutLeNetConvPoolLayer, self).__init__(
                rng=rng, input=input, filter_shape=filter_shape,
                image_shape=image_shape, poolsize=poolsize, W=W, b=b)
        self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)

""" Explanation: 2. Creating dropout classes We create child classes from HiddenLayer and LeNetConvPoolLayer so that they take into account dropout.
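The behaviour of `_dropout_from_layer` can be sanity-checked with a small standalone NumPy sketch (no Theano here; a plain array of ones stands in for the layer's output): about a fraction p of the units get zeroed, and scaling by (1 - p) reproduces the expected training-time activation, which is the idea behind the test-time weight scaling used in the next section.

```python
import numpy as np

rng = np.random.RandomState(0)
p = 0.5                                   # probability of dropping a unit
layer = np.ones((1000, 100))              # stand-in for a layer's activations

# 1's indicate "keep", so the Bernoulli parameter is 1 - p
mask = rng.binomial(n=1, p=1 - p, size=layer.shape)
dropped = layer * mask

kept_fraction = mask.mean()               # should be close to 1 - p
train_mean = dropped.mean()               # average activation under dropout
test_scaled = (layer * (1 - p)).mean()    # (1 - p)-scaled activation, no dropout
print(kept_fraction, train_mean, test_scaled)
```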
End of explanation """

def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
                    dropout_rates = [0.2, 0.2, 0.2, 0.5],
                    n_epochs=200,
                    dataset='mnist.pkl.gz',
                    nkerns=[20, 50], batch_size=500):
    """
    :type dropout_rates: list of float
    :param dropout_rates: dropout rate used for each layer (input layer,
                          1st filtered layer, 2nd filtered layer, fully connected layer)
    """

    """
    ...
    """

    ######################
    # BUILD ACTUAL MODEL #
    ######################
    print '... building the model'

    # Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
    # to a 4D tensor, compatible with our LeNetConvPoolLayer
    # (28, 28) is the size of MNIST images.
    layer0_input = x.reshape((batch_size, 1, 28, 28))

    # Dropping out pixels from the original image randomly, with a probability of dropping
    # low enough not to drop too much information (20% was found to be ideal)
    layer0_input_dropout = _dropout_from_layer(rng, layer0_input, dropout_rates[0])

    # Construct the first convolutional pooling layer:
    # filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
    # maxpooling reduces this further to (24/2, 24/2) = (12, 12)
    # 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
    layer0_dropout = DropoutLeNetConvPoolLayer(
        rng,
        input=layer0_input_dropout,
        image_shape=(batch_size, 1, 28, 28),
        filter_shape=(nkerns[0], 1, 5, 5),
        poolsize=(2, 2),
        dropout_rate= dropout_rates[1]
    )

    # Creating in parallel a normal LeNetConvPoolLayer that shares the same
    # W and b as the dropout layer, with W scaled by (1 - p).
layer0 = LeNetConvPoolLayer( rng, input=layer0_input, image_shape=(batch_size, 1, 28, 28), filter_shape=(nkerns[0], 1, 5, 5), poolsize=(2, 2), W=layer0_dropout.W * (1 - dropout_rates[0]), b=layer0_dropout.b ) # Construct the second convolutional pooling layer # filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8) # maxpooling reduces this further to (8/2, 8/2) = (4, 4) # 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4) layer1_dropout = DropoutLeNetConvPoolLayer( rng, input=layer0_dropout.output, image_shape=(batch_size, nkerns[0], 12, 12), filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2), dropout_rate = dropout_rates[2] ) layer1 = LeNetConvPoolLayer( rng, input=layer0.output, image_shape=(batch_size, nkerns[0], 12, 12), filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2), W=layer1_dropout.W * (1 - dropout_rates[1]), b=layer1_dropout.b ) # the HiddenLayer being fully-connected, it operates on 2D matrices of # shape (batch_size, num_pixels) (i.e matrix of rasterized images). # This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4), # or (500, 50 * 4 * 4) = (500, 800) with the default values. 
layer2_dropout_input = layer1_dropout.output.flatten(2) layer2_input = layer1.output.flatten(2) # construct a fully-connected sigmoidal layer layer2_dropout = DropoutHiddenLayer( rng, input=layer2_dropout_input, n_in=nkerns[1] * 4 * 4, n_out=500, activation=T.tanh, dropout_rate = dropout_rates[3] ) layer2 = HiddenLayer( rng, input=layer2_input, n_in=nkerns[1] * 4 * 4, n_out=500, activation=T.tanh, W=layer2_dropout.W * (1 - dropout_rates[2]), b=layer2_dropout.b ) # classify the values of the fully-connected sigmoidal layer layer3_dropout = LogisticRegression( input = layer2_dropout.output, n_in = 500, n_out = 10) layer3 = LogisticRegression( input=layer2.output, n_in=500, n_out=10, W=layer3_dropout.W * (1 - dropout_rates[-1]), b=layer3_dropout.b ) # the cost we minimize during training is the NLL of the model cost = layer3.negative_log_likelihood(y) dropout_cost = layer3_dropout.negative_log_likelihood(y) # create a function to compute the mistakes that are made by the model test_model = theano.function( [index], layer3.errors(y), givens={ x: test_set_x[index * batch_size: (index + 1) * batch_size], y: test_set_y[index * batch_size: (index + 1) * batch_size] } ) validate_model = theano.function( [index], layer3.errors(y), givens={ x: valid_set_x[index * batch_size: (index + 1) * batch_size], y: valid_set_y[index * batch_size: (index + 1) * batch_size] } ) # create a list of all model parameters to be fit by gradient descent params = layer3_dropout.params + layer2_dropout.params + layer1_dropout.params + layer0_dropout.params # create a list of gradients for all model parameters grads = T.grad(dropout_cost, params) # train_model is a function that updates the model parameters by SGD updates = [ (param_i, param_i - learning_rate * grad_i) for param_i, grad_i in zip(params, grads) ] train_model = theano.function( [index], dropout_cost, updates=updates, givens={ x: train_set_x[index * batch_size: (index + 1) * batch_size], y: train_set_y[index * batch_size: (index + 1) 
                                              * batch_size]
        }
    )

    """
    ...
    """

""" Explanation: Note: we drop out pixels after pooling. 3. Rewriting evaluate_lenet5 Each time a layer is instantiated, two layers actually need to be created in parallel: the dropout layer, which drops out some of its units with a probability of p, and an associated layer sharing the same coefficients W and b, except that W is scaled by (1 - p). Again, the full code can be found at code/convolutional_mlp_dropout.py End of explanation """

import pylab
from PIL import Image

def display_filter(W, n_cols = 5):
    """
    :type W: numpy_nd_array
    :param W: parameter W of a convolutional + max pooling layer

    :type n_cols: int
    :param n_cols: number of columns in the grid of filter images
    """
    W_shape = W.shape
    n_filters = W_shape[0]
    #param filter_shape: (number of filters, num input feature maps, filter height, filter width)
    filter_height = W_shape[2]
    filter_width = W_shape[3]
    n_lines = int(numpy.ceil(n_filters / float(n_cols)))
    for n in range(n_filters):
        Wn = W[n,0,:,:]
        Wn = Wn / Wn.max()  # Scaling W to get a 0-1 gray scale
        pylab.subplot(n_lines, n_cols, n + 1); pylab.axis('off'); pylab.imshow(Wn, cmap=pylab.gray())
    pylab.show()

""" Explanation: After running the code for 50 epochs (237 minutes of computation) we get: Best validation score of 1.560000 % obtained at iteration 5000, with test performance 1.450000 % Full result file at results/dropout_good_percent.txt E. Visualization of the convolutional filters Read this article on understanding convolutional neural networks. Many methods of visualizing what the convolutional networks learned are described. We will retain the first one, as it is the most straight-forward to implement: Visualizing the activations and first-layer weights: Layer Activations: the most straight-forward visualization technique is to show the activations of the network during the forward pass.
For ReLU networks, the activations usually start out looking relatively blobby and dense, but as the training progresses the activations usually become more sparse and localized. One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters, and can be a symptom of high learning rates. Conv/FC Filters: The second common strategy is to visualize the weights. These are usually most interpretable on the first CONV layer which is looking directly at the raw pixel data, but it is possible to also show the filter weights deeper in the network. The weights are useful to visualize because well-trained networks usually display nice and smooth filters without any noisy patterns. Noisy patterns can be an indicator of a network that hasn't been trained for long enough, or possibly a very low regularization strength that may have led to overfitting. I would like to visualize the filters, so I implement the second strategy (visualizing the weights) to see the first 20 filters. M. D. Zeiler wrote an interesting paper about Deconvolutional Networks (DeConvNet) for visualizing and understanding convolutional filters. The only code I found for this subject can be found here. 1. Visualization function Let's create a function that displays the weights of the filters when fed the weight parameter W. End of explanation """

rng = numpy.random.RandomState(1234)

img = Image.open(open('images/profilepic4.jpg'))
img = numpy.asarray(img, dtype='float64') / 256.
# divide by 256 to have RGB 0-1 scale and not 0 - 256
img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936)

input = img_
filter_shape = [20,3,12,12]
image_shape = [1,3,2592,1936]
poolsize = (2, 2)

layer_test = LeNetConvPoolLayer(rng, input, filter_shape, image_shape, poolsize)

f = theano.function([], layer_test.params)
W = f()[0]  # call the compiled function to get the current parameter values

display_filter(W)

""" Explanation: 2.
Testing the function on a single untrained LeNetConvPoolLayer To test the function, let's reuse the example of the image of me when I was 5 years old. I will feed it to a LeNetConvPoolLayer, retrieve the weights, and display them. End of explanation """

def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
                    dropout_rates = [0.2, 0.2, 0.2, 0.5],
                    n_epochs=200,
                    dataset='mnist.pkl.gz',
                    display_filters = True,
                    nkerns=[20, 50], batch_size=500):
    """
    :type display_filters: Bool
    :param display_filters: True if we want to display the learned filters after training

    we skip to the very end of the code, after training is done
    """

    if display_filters:
        # Retrieving the filters from the first and second layer
        first_convlayer_params = theano.function([], layer0_dropout.params)
        second_convlayer_params = theano.function([], layer1_dropout.params)
        W0 = first_convlayer_params()[0]
        W1 = second_convlayer_params()[0]

        # Display filters from first layer (20 filters)
        display_filter(W0)

        # Display filters from second layer (50 filters)
        display_filter(W1)

""" Explanation: <img src="images/filters2.png" width = 400 > As the weights are randomly initialized, we of course see a random pattern in each filter. 3. Displaying the learned filters after training Let's now modify the code of evaluate_lenet5 so that it displays the filters after training. The full code can be found at code/filter_visualization.py. End of explanation """
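One caveat about display_filter: dividing a filter by its maximum does not map the weights onto a 0-1 range when some weights are negative. A standalone NumPy sketch of a per-filter min-max scaling that always lands in [0, 1] (a hypothetical helper, not part of the original code):

```python
import numpy as np

def scale_filters(W):
    """Min-max scale each filter of W (n_filters, n_channels, h, w) to [0, 1]."""
    W = np.asarray(W, dtype=float)
    flat = W.reshape(W.shape[0], -1)
    lo = flat.min(axis=1)[:, None, None, None]
    hi = flat.max(axis=1)[:, None, None, None]
    return (W - lo) / (hi - lo)

W = np.random.RandomState(1234).randn(20, 1, 5, 5)   # e.g. 20 random 5x5 filters
scaled = scale_filters(W)
print(scaled.min(), scaled.max())
```

Each filter then spans the full gray-scale range, negative weights included.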
sdpython/pyquickhelper
_unittests/ut_ipythonhelper/data/example_corrplot.ipynb
mit
%pylab inline
import pyensae
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas
import numpy
letters = "ABCDEFGHIJKLM"[0:10]
df = pandas.DataFrame(dict(( (k, numpy.random.random(10)+ord(k)-65) for k in letters)))
df.head()

from pyensae.graph_helper import Corrplot
c = Corrplot(df)
c.plot(figsize=(12,6))

""" Explanation: example of a corrplot Biokit provides nice graphs for correlation: the corrplot function in Python, but it only works with Python 2.7. I took the code out and put a modified version of it in pyensae. End of explanation """

fig = plt.figure(num=None, facecolor='white', figsize=(12,6))
ax = plt.subplot(1, 1, 1, aspect='equal', axisbg='white')
c = Corrplot(df)
c.plot(ax=ax)

""" Explanation: To avoid creating another graph container: End of explanation """

import seaborn as sns
cmap = sns.diverging_palette(h_neg=210, h_pos=350, s=90, l=30, as_cmap=True, center="light")
sns.clustermap(df.corr(), figsize=(10, 10), cmap=cmap)

""" Explanation: We compare it with seaborn and this example, Discovering structure in heatmap data. End of explanation """
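Both Corrplot and seaborn's clustermap are renderings of the same underlying correlation matrix, which can always be computed and inspected directly. A small self-contained NumPy sketch (synthetic data, independent of the pyensae example above):

```python
import numpy as np

rng = np.random.RandomState(42)
data = rng.random_sample((10, 5))        # 10 observations of 5 variables
corr = np.corrcoef(data, rowvar=False)   # 5 x 5 correlation matrix

print(corr.shape)
print(np.diag(corr))                     # each variable correlates 1.0 with itself
```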
anhaidgroup/py_entitymatching
notebooks/guides/step_wise_em_guides/Performing Matching with a Rule-Based Matcher.ipynb
bsd-3-clause
# Import the py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd

""" Explanation: Introduction This IPython notebook illustrates how to perform matching using the rule-based matcher. First, we need to import the py_entitymatching package and other libraries as follows: End of explanation """

# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'

path_A = datasets_dir + os.sep + 'dblp_demo.csv'
path_B = datasets_dir + os.sep + 'acm_demo.csv'
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'

A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')

# Load the pre-labeled data
S = em.read_csv_metadata(path_labeled_data,
                         key='_id',
                         ltable=A, rtable=B,
                         fk_ltable='ltable_id', fk_rtable='rtable_id')
S.head()

""" Explanation: Then, read the (sample) input tables for matching purposes. End of explanation """

# Split S into I and J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']

""" Explanation: Then, split the labeled data into a development set and an evaluation set. Use the development set to select the best matcher. End of explanation """

brm = em.BooleanRuleMatcher()

""" Explanation: Creating and Using a Rule-Based Matcher This typically involves the following steps: 1. Creating the rule-based matcher 2. Creating features 3. Adding Rules 4. Using the Matcher to Predict Results Creating the Rule-Based Matcher End of explanation """

# Generate a set of features
F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)

""" Explanation: Creating Features Next, we need to create a set of features for the development set. Magellan provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
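To make the generated features less of a black box, here is a standalone sketch of the kind of function behind a feature such as title_title_lev_sim: a plain-Python Levenshtein distance and the similarity derived from it (illustrative only; py_entitymatching ships its own feature implementations):

```python
def lev_dist(a, b):
    """Classic dynamic-programming Levenshtein edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lev_sim(a, b):
    """Levenshtein similarity in [0, 1]: 1 - distance / longer length."""
    if not a and not b:
        return 1.0
    return 1.0 - float(lev_dist(a, b)) / max(len(a), len(b))

print(lev_dist('kitten', 'sitting'))   # classic example: 3 edits
print(lev_sim('entity matching', 'entity matching'))
```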
End of explanation """

F.feature_name

""" Explanation: We observe that 20 features were generated. As a first step, let's inspect the names of the generated features. End of explanation """

# Add two rules to the rule-based matcher
# The first rule has two predicates, one comparing the titles and the other looking for an exact match of the years
brm.add_rule(['title_title_lev_sim(ltuple, rtuple) > 0.4', 'year_year_exm(ltuple, rtuple) == 1'], F)
# This second rule compares the authors
brm.add_rule(['authors_authors_lev_sim(ltuple, rtuple) > 0.4'], F)
brm.get_rule_names()

# Rules can also be deleted from the rule-based matcher
brm.delete_rule('_rule_1')

""" Explanation: Adding Rules Before we can use the rule-based matcher, we need to create rules to evaluate tuple pairs. Each rule is a list of strings. Each string specifies a conjunction of predicates. Each predicate has three parts: (1) an expression, (2) a comparison operator, and (3) a value. The expression is evaluated over a tuple pair, producing a numeric value. End of explanation """

brm.predict(S, target_attr='pred_label', append=True)
S

""" Explanation: Using the Matcher to Predict Results Now that our rule-based matcher has some rules, we can use it to predict whether a tuple pair is actually a match. Each rule is a conjunction of predicates and will return True only if all of its predicates return True. The matcher is then a disjunction of rules: if any one of the rules returns True, the tuple pair is predicted to be a match. End of explanation """
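That decision logic, a disjunction of rules where each rule is a conjunction of predicates, can be sketched in a few lines of plain Python (the lambda predicates and the pair dictionaries below are hypothetical stand-ins for the real feature computations):

```python
def rule_matches(rule, pair):
    """A rule fires only if ALL of its predicates hold (conjunction)."""
    return all(pred(pair) for pred in rule)

def matcher_predicts(rules, pair):
    """The matcher predicts a match if ANY rule fires (disjunction)."""
    return any(rule_matches(rule, pair) for rule in rules)

# Hypothetical stand-ins for predicates like title_title_lev_sim(...) > 0.4
rules = [
    [lambda p: p['title_sim'] > 0.4, lambda p: p['year_exact'] == 1],
    [lambda p: p['authors_sim'] > 0.4],
]

pair_match = {'title_sim': 0.9, 'year_exact': 1, 'authors_sim': 0.1}
pair_nonmatch = {'title_sim': 0.9, 'year_exact': 0, 'authors_sim': 0.1}
print(matcher_predicts(rules, pair_match), matcher_predicts(rules, pair_nonmatch))
```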
mne-tools/mne-tools.github.io
0.24/_downloads/93b9388c9b54989a6ee795fd5dedd153/otp.ipynb
bsd-3-clause
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause

import os.path as op

import mne
import numpy as np

from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif

print(__doc__)

""" Explanation: Plot sensor denoising using oversampled temporal projection This demonstrates denoising using the OTP algorithm :footcite:LarsonTaulu2018 on data with sensor artifacts (flux jumps) and random noise. End of explanation """

dipole_number = 1
data_path = bst_phantom_elekta.data_path()
raw = read_raw_fif(
    op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
raw.crop(40., 50.).load_data()
order = list(range(160, 170))
raw.copy().filter(0., 40.).plot(order=order, n_channels=10)

""" Explanation: Plot the phantom data, lowpassed to get rid of high-frequency artifacts. We also crop to a single 10-second segment for speed. Notice that there are two large flux jumps on channel 1522 that could spread to other channels when performing subsequent spatial operations (e.g., Maxwell filtering, SSP, or ICA). End of explanation """

raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)
raw_clean.filter(0., 40.)
raw_clean.plot(order=order, n_channels=10)

""" Explanation: Now we can clean the data with OTP, lowpass, and plot. The flux jumps have been suppressed alongside the random sensor noise. End of explanation """

def compute_bias(raw):
    events = find_events(raw, 'STI201', verbose=False)
    events = events[1:]  # first one has an artifact
    tmin, tmax = -0.2, 0.1
    epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,
                        baseline=(None, -0.01), preload=True, verbose=False)
    sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,
                                   verbose=False)
    cov = mne.compute_covariance(epochs, tmax=0, method='oas',
                                 rank=None, verbose=False)
    idx = epochs.time_as_index(0.036)[0]
    data = epochs.get_data()[:, :, idx].T
    evoked = mne.EvokedArray(data, epochs.info, tmin=0.)
    dip = fit_dipole(evoked, cov, sphere, n_jobs=1, verbose=False)[0]
    actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
    misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)
    return misses

bias = compute_bias(raw)
print('Raw bias: %0.1fmm (worst: %0.1fmm)' % (np.mean(bias), np.max(bias)))
bias_clean = compute_bias(raw_clean)
print('OTP bias: %0.1fmm (worst: %0.1fmm)' % (np.mean(bias_clean), np.max(bias_clean),))

""" Explanation: We can also look at the effect on single-trial phantom localization. See the tut-brainstorm-elekta-phantom for more information. Here we use a version that does single-trial localization across the 17 trials that are in our 10-second window: End of explanation """
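The bias metric computed above is just the Euclidean distance between the estimated and true dipole positions, converted from meters to millimeters. A standalone NumPy sketch with made-up positions (not the phantom data):

```python
import numpy as np

# Hypothetical estimated and true dipole positions, in meters
est_pos = np.array([[0.010, 0.020, 0.030],
                    [0.011, 0.020, 0.030]])
true_pos = np.array([0.010, 0.020, 0.030])

# Same computation as in compute_bias: distance in mm
misses = 1000 * np.linalg.norm(est_pos - true_pos, axis=-1)
print(misses)   # first trial is exact, second is off by 1 mm in x
```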
samuxiii/prototypes
learning/stock/stock.ipynb
mit
from sklearn.linear_model import RidgeCV from sklearn.model_selection import train_test_split from sklearn.externals import joblib import numpy as np import matplotlib.pyplot as plt import os data = np.loadtxt(fname = 'data.txt', delimiter = ',') X, y = data[:,:5], data[:,5] print("Features sample: {}".format(X[1])) print("Result: {}".format(y[1])) m = X.shape[0] #number of samples #training X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) print("Train shape: {}".format(X_train.shape)) print("Test shape: {}".format(X_test.shape)) """ Explanation: Stock Prices End of explanation """ clf = RidgeCV(alphas = [0.1, 1.0, 10.0], normalize=True) clf.fit(X_train, y_train) #predict prediction = clf.predict(X_test); print("Expected is: {}".format(y_test[0])) print("Prediction is: {}".format(prediction[0])) print("Score: {}".format(clf.score(X_test, y_test))) print("Alpha: {}".format(clf.alpha_)) #plotting all data plt.figure(1) real, = plt.plot(np.arange(m), y, 'b-', label='real') predicted, = plt.plot(np.arange(m), clf.predict(X), 'r-', label='predicted') plt.ylabel('Stock') plt.xlabel('Time') plt.legend([real, predicted], ['Real', 'Predicted']) plt.show() #plotting only test mtest = X_test.shape[0] real, = plt.plot(np.arange(mtest), y_test, 'b-', label='real') test, = plt.plot(np.arange(mtest), clf.predict(X_test), 'g-', label='test') plt.ylabel('Stock') plt.xlabel('Time') plt.legend([real, test], ['Real', 'Test']) plt.show() """ Explanation: Ridge as Linear Regressor End of explanation """
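RidgeCV selects the penalty alpha by cross-validation, but the underlying ridge estimate has a simple closed form, $w = (X^{T}X + \alpha I)^{-1}X^{T}y$. A NumPy sketch on synthetic data (a minimal version with no intercept handling, which sklearn does separately; with alpha = 0 it reduces to ordinary least squares):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution w = (X'X + alpha*I)^-1 X'y."""
    n_features = X.shape[1]
    A = X.T.dot(X) + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T.dot(y))

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
w_true = np.array([1.0, -2.0, 3.0, 0.5, 0.0])
y = X.dot(w_true)                      # noise-free, so alpha=0 recovers w_true

w0 = ridge_fit(X, y, alpha=0.0)
w10 = ridge_fit(X, y, alpha=10.0)      # the penalty shrinks the coefficients
print(w0, np.linalg.norm(w10) < np.linalg.norm(w0))
```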
gaufung/Data_Analytics_Learning_Note
DesignPattern/AdapterPattern.ipynb
mit
class ACpnStaff(object):
    name=""
    id=""
    phone=""
    def __init__(self,id):
        self.id=id
    def getName(self):
        print ("A protocol getName method...id:%s"%self.id)
        return self.name
    def setName(self,name):
        print ("A protocol setName method...id:%s"%self.id)
        self.name=name
    def getPhone(self):
        print ("A protocol getPhone method...id:%s"%self.id)
        return self.phone
    def setPhone(self,phone):
        print ("A protocol setPhone method...id:%s"%self.id)
        self.phone=phone

class BCpnStaff(object):
    name=""
    id=""
    telephone=""
    def __init__(self,id):
        self.id=id
    def get_name(self):
        print ("B protocol get_name method...id:%s"%self.id)
        return self.name
    def set_name(self,name):
        print ("B protocol set_name method...id:%s"%self.id)
        self.name=name
    def get_telephone(self):
        print ("B protocol get_telephone method...id:%s"%self.id)
        return self.telephone
    def set_telephone(self,telephone):
        print ("B protocol set_telephone method...id:%s"%self.id)
        self.telephone=telephone

""" Explanation: Adapter Pattern 1 Code Suppose company A and company B need to cooperate: company A needs to access company B's staff information, but the two companies' interfaces follow different protocols. How should this be handled? First, we wrap each company's staff-information access system in its own class. End of explanation """

class CpnStaffAdapter(object):
    b_cpn=""
    def __init__(self,id):
        self.b_cpn=BCpnStaff(id)
    def getName(self):
        return self.b_cpn.get_name()
    def getPhone(self):
        return self.b_cpn.get_telephone()
    def setName(self,name):
        self.b_cpn.set_name(name)
    def setPhone(self,phone):
        self.b_cpn.set_telephone(phone)

""" Explanation: To reuse company B's interface on company A's platform, calling B's staff interface directly is one option, but it would introduce uncertain risk into the existing business workflow. To reduce coupling and avoid that risk, we need a helper, much like the adapter that converts an appliance's voltage; this helper is an adapter that translates between the two protocols and interfaces. The adapter is constructed as follows: End of explanation """

acpn_staff=ACpnStaff("123")
acpn_staff.setName("X-A")
acpn_staff.setPhone("10012345678")
print ("A Staff Name:%s"%acpn_staff.getName())
print ("A Staff Phone:%s"%acpn_staff.getPhone())

bcpn_staff=CpnStaffAdapter("456")
bcpn_staff.setName("Y-B")
bcpn_staff.setPhone("99987654321")
print ("B Staff Name:%s"%bcpn_staff.getName())
print ("B Staff Phone:%s"%bcpn_staff.getPhone())

""" Explanation: The adapter wraps company B's staff interface while exposing the same method names as company A's interface, achieving the effect of accessing company B's staff information through company A's calling conventions. A business usage example is as follows: End
of explanation """
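A more generic variant of the same idea, not in the original notebook, delegates through `__getattr__` with a method-name mapping, so a single adapter class can translate any number of A-style calls into B-style calls:

```python
class GenericAdapter(object):
    """Adapts an object by translating method names via a mapping."""
    def __init__(self, adaptee, name_map):
        self._adaptee = adaptee
        self._name_map = name_map

    def __getattr__(self, name):
        # Translate an A-style name to the B-style name, then delegate
        return getattr(self._adaptee, self._name_map.get(name, name))

class BStaff(object):
    def __init__(self):
        self.name = ""
    def set_name(self, name):
        self.name = name
    def get_name(self):
        return self.name

staff = GenericAdapter(BStaff(), {'setName': 'set_name', 'getName': 'get_name'})
staff.setName('Y-B')
print(staff.getName())   # -> 'Y-B'
```

Because `__getattr__` is only invoked for attributes not found normally, the adapter's own fields are unaffected, and unmapped names simply pass through unchanged.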
ual/hedonic-models
sales-hedonic-output.ipynb
bsd-3-clause
# Startup steps
import pandas as pd, numpy as np, statsmodels.api as sm
import matplotlib.pyplot as plt, matplotlib.cm as cm, matplotlib.font_manager as fm
import matplotlib.mlab as mlab
from scipy.stats import pearsonr, ttest_rel
%matplotlib inline

""" Explanation: Tutorial on Hedonic Regression This material uses Python to demonstrate some aspects of hedonic regression. The objective here is not to learn to program, but to understand the hedonic regression methodology. We begin with an example in which we generate some synthetic data using a set of coefficients and a mathematical model, and learn those coefficients using a statistical method called multiple regression. The first part of this notebook follows closely the content in the Statsmodels online documentation. We won't try to provide the theoretical basis for regression models here. You can consult any number of online resources for this, including Wikipedia's explanation of Ordinary Least Squares Regression. We will be using the Statsmodels library for this (documentation here). The statistical model is assumed to be $Y = X\beta + \epsilon$, where $\epsilon\sim N\left(0,\sigma^{2}\Sigma\right)$ Depending on the assumption on $\Sigma$, statsmodels currently has four classes available: GLS : generalized least squares for arbitrary covariance $\Sigma$ OLS : ordinary least squares for i.i.d. errors $\Sigma=\textbf{I}$ WLS : weighted least squares for heteroskedastic errors $\text{diag}\left (\Sigma\right)$ GLSAR : feasible generalized least squares with autocorrelated AR(p) errors $\Sigma=\Sigma\left(\rho\right)$ We focus here on the simple Ordinary Least Squares (OLS) model, which is the most widely used but makes strong assumptions about the errors being independently and identically distributed (i.i.d.). When these conditions are met, the OLS parameter estimates are the Best Linear Unbiased Estimates (BLUE).
More intuitively (perhaps), what linear regression using the OLS estimator attempts to do is find the vector of parameters ($\beta$) such that, when you compute the linear function $X\beta$ to generate the predicted values $\hat{y}$, the sum of the squared errors across all observations, $\sum_{i}\epsilon_{i}^{2}$ where $\epsilon_{i} = \hat{y}_{i} - y_{i}$, is minimized relative to the observed $y$. End of explanation """

plt.figure(1, figsize=(10,8), )
plt.plot([0, 10], [0, 20])
plt.axis([0, 10, 0, 20])
plt.show();

""" Explanation: To introduce concepts, let's begin by defining a hypothetical relationship between a dependent variable $y$ and an explanatory, or independent, variable $x$. We are only going to explore correlation, but there is an implicit causal story that x is influencing y, and not the other way around. Let's say we have a relationship in which y is expected to be twice the value of x. A pretty simple model: $y = 2x$ Another way to describe this is as a line with an intercept of zero: $y = 0 + 2x$ We will look at it initially with no intercept and then add that in, before going to more than one independent variable. If we plot the 'model', we can see that at a value of x=0, the intercept is 0, so the value of y will be zero. And at a value of x=10, the value of y is 2x = 2*10 = 20. End of explanation """
We can use the equation for the model to generate points that would fall on the line above if there were no error, but we will add random errors to it to demonstrate how regression models work. Generate data using a model we define: End of explanation """ plt.figure(1, figsize=(10,8), ) plt.scatter(x, y, marker=0, s=10, c='g') plt.axis([0, 10, 0, 20]) plt.show(); """ Explanation: Plot the data as a scatterplot. End of explanation """ plt.figure(1, figsize=(10,8), ) plt.plot([0, 10], [0, 20]) plt.scatter(x, y, marker=0, s=10, c='g') plt.axis([0, 10, 0, 20]) plt.show(); """ Explanation: Add line based on Intercept = 0, beta = slope of line = 2 Now we can see the original 'model' and the generated observations. What regression analysis enables is to 'learn' the parameters of a model that most closely approximates the process that generated a set of observations. In this case, we have a controlled setting, because we generated the data and know the 'true' values of the parameters of the model: intercept = 0, and slope of the line = beta = 2. Look at the plot below with the line superimposed on the 'observed' (generated) data. We can intuit that if we tilted the line or shifted it up or down, and calculated the 'errors', or the distance of each point to the line, that the square of their sum would get bigger. The reason for squaring is that if we didn't, the negative and positive errors would more or less cancel out. So we can infer that the 'best fit' model parameters would be ones that minimize the sum of the squares errors between the observed data and the y values predicted by the model. 
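With the noise stripped away, that intuition can be verified directly: for data generated exactly as y = 2x, the sum of squared errors is zero at slope 2 and strictly positive for any tilted line.

```python
import numpy as np

x = np.linspace(0, 10, 50)
y = 2 * x                                # noise-free version of our model

def sse(slope):
    """Sum of squared errors for a line through the origin."""
    return np.sum((y - slope * x) ** 2)

print(sse(2.0), sse(1.9), sse(2.1))      # 0 at the true slope, positive elsewhere
```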
End of explanation """

nsample = 20
x = np.linspace(0, 10, 20)
X = x
beta = np.array([2, 2])
e = np.random.normal(size=nsample)
X = sm.add_constant(X)
y = np.dot(X, beta) + e

""" Explanation: Regenerate the data using an intercept = 2 End of explanation """

model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
print('Parameters: ', results.params)
inter = results.params[0]
beta = results.params[1]
print('Intercept =', inter)
print('Beta = ', beta)
print('Rsquared = ', results.rsquared)

""" Explanation: Specifying regression models using design matrices (dmatrices) in statsmodels To fit most of the models covered by statsmodels, you will need to create two design matrices. The first is a matrix of endogenous variable(s) (i.e. dependent, response, regressand, etc.). The second is a matrix of exogenous variable(s) (i.e. independent, predictor, regressor, etc.). The OLS coefficient estimates are calculated using linear algebra to find the parameters that minimize the sum of the squared errors: $$\hat{\beta} = (X'X)^{-1} X'y$$ where $y$ is an $N \times 1$ column of observations on the dependent variable (here, our generated y values; later, sales prices), and $X$ is $N \times 2$ with an intercept column and the x variable. Run a simple linear regression and compare the coefficients to the ones used to generate the data. End of explanation """
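The estimator formula above can be checked directly in NumPy: on noise-free data built from intercept 2 and slope 2, solving the normal equations returns exactly those parameters.

```python
import numpy as np

x = np.linspace(0, 10, 20)
X = np.column_stack([np.ones_like(x), x])   # intercept column + x, like sm.add_constant
beta_true = np.array([2.0, 2.0])
y = X.dot(beta_true)                        # no error term this time

# beta_hat = (X'X)^-1 X'y, computed via a linear solve
beta_hat = np.linalg.solve(X.T.dot(X), X.T.dot(y))
print(beta_hat)
```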
End of explanation """ sf = pd.read_csv('data/redfin_2017-03-05-17-45-34-san-francisco-county-1-month.csv') sf.columns sf1 = sf.rename(index=str, columns={'SALE TYPE': 'saletype', 'SOLD DATE': 'solddate', 'PROPERTY TYPE': 'proptype', 'ADDRESS': 'address', 'CITY': 'city', 'STATE': 'state', 'ZIP': 'zip', 'PRICE': 'price', 'BEDS': 'beds', 'BATHS': 'baths', 'LOCATION': 'location', 'SQUARE FEET': 'sqft', 'LOT SIZE': 'lotsize', 'YEAR BUILT': 'yrbuilt', 'DAYS ON MARKET': 'daysonmkt', '$/SQUARE FEET': 'pricesqft', 'LATITUDE': 'latitude', 'LONGITUDE': 'longitude', 'HOA/MONTH': 'hoamonth', 'URL (SEE http://www.redfin.com/buy-a-home/comparative-market-analysis FOR INFO ON PRICING)': 'url', 'STATUS': 'status', 'NEXT OPEN HOUSE START TIME': 'nextopenstart', 'NEXT OPEN HOUSE END TIME': 'nextopenend', 'SOURCE': 'source', 'MLS#': 'mls', 'FAVORITE': 'favorite', 'INTERESTED': 'interested' }) sf1.head() sf1.describe() """ Explanation: Now on to some real data and Hedonic Regression We will use a large sample of single family housing sales from the San Francisco Bay Area to estimate a linear regression model in which the dependent variable $y$ is the price of a house at the time of sale, and $X$ is a set of exogenous, or explanatory variables. What exactly does this give us? A statistical way to figure out what the component amenities in a house are worth, if you could buy them a la carte. Another way to think of it is, how much do house buyers in the Bay Area during this period pay, on average, for an additional unit of each amenity: square foot of living space, bedroom, bathroom, etc. Here we use the sales transactions in San Francisco over a month from early-February through early-March. First we load the data from a csv file. Then we rename columns to make the data easier to work with. 
End of explanation """ plt.figure(1, figsize=(10,8), ) plt.scatter(sf1['sqft'], sf1['price'], marker=0, s=10, c='g') #plt.axis([12, 16, 12, 16]) plt.show(); """ Explanation: Here is a scatterplot of sqft and price End of explanation """ plt.figure(1, figsize=(10,8), ) plt.scatter(sf1['beds'], sf1['price'], marker=0, s=10, c='g') #plt.axis([12, 16, 12, 16]) plt.show(); """ Explanation: Here is a scatterplot of beds and price End of explanation """ sf1['beds4'] = sf1['beds'] sf1['baths4'] = sf1['baths'] sf1.describe() """ Explanation: Evaluating correlations among multiple variables and price What if we want to know how price is affected by both sqft and beds, and other variables as well? We would generally use multiple regression. Recoding variables Sometimes variables have larger values than you intend to use. You can either drop those records, or recode the data so that values above some limit are capped at that limit. End of explanation """ #sf1.loc[:,'beds4'][sf1['beds']>3] = 4 #sf1.loc[:,'baths4'][sf1['baths']>3] = 4 sf1.loc[sf1.beds > 3, 'beds4'] = 4 sf1.loc[sf1.baths > 3, 'baths4'] = 4 sf1.describe() """ Explanation: Since the maximum bedrooms is 18 and the maximum bathrooms is 5.5, let's create a recoded version of these to cap the maximum value at 4 for each. End of explanation """ import statsmodels.api as sm import numpy as np from patsy import dmatrices y, X = dmatrices('np.log(price) ~ np.log(sqft) + beds + baths', data=sf1, return_type='dataframe') mod = sm.OLS(y, X) res = mod.fit() residuals = res.resid predicted = res.fittedvalues observed = y print(res.summary()) """ Explanation: Now let's estimate a series of models using the sales data. Here we specify models using R syntax. 
This uses the patsy language. See http://patsy.readthedocs.org/en/latest/ for complete documentation.
End of explanation """
import statsmodels.api as sm
import numpy as np
from patsy import dmatrices
y, X = dmatrices('np.log(price) ~ np.log(sqft) + beds + baths', data=sf1, return_type='dataframe')
mod = sm.OLS(y, X)
res = mod.fit()
residuals = res.resid
predicted = res.fittedvalues
observed = y
print(res.summary())
"""
Explanation: Experiment with the log transformations and practice interpretation
Most hedonic regression models use a log-transformation of the dependent variable (price), by taking the logarithm of the price of each sale and using it as the dependent variable. It changes the interpretation of the coefficients. If the variable on the right hand side is untransformed (in its original scale) and the dependent variable is log-transformed, then a one unit increase in the right hand side variable is predicted to increase the price of a house by the percentage indicated by the coefficient. If the right hand side variable is also log transformed, then the interpretation is that a one percent change in the independent variable is associated with the percentage change in the dependent variable indicated by the coefficient. If neither is transformed, then the coefficient indicates the dollar amount of change in price expected from a one unit change in the independent variable.
How well does the model fit the data? The errors appear to be normally distributed - with half having positive errors and half having negative errors, and the mean value being zero. This is one indicator of whether the model is inaccurate (statistically biased).
End of explanation """
plt.hist(residuals, bins=25, density=True, alpha=.5)
mu = residuals.mean()
variance = residuals.var()
sigma = residuals.std()
x = np.linspace(-3, 3, 100)
# overlay the fitted normal density (mlab.normpdf has been removed from matplotlib)
plt.plot(x, np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi)));
"""
Explanation: Another way to look at these results is to plot the errors against the range of the y variable, to see if the errors appear to be higher at one end of the range of y or the other. It seems to be fairly uniform across the scale of y. 
End of explanation """ plt.figure(1, figsize=(10,8), ) plt.plot([12, 16], [12, 16]) plt.scatter(observed, predicted, marker=0, s=10, c='g') plt.axis([12, 16, 12, 16]) plt.show(); """ Explanation: The next plot compares observed values on the x axis to predicted values from the model on the y axis. End of explanation """
robblack007/clase-cinematica-robot
Practicas/practica4/Practica.ipynb
mit
# This library has the main functions we will use
from sympy import var, Matrix, Function, sin, cos, pi, trigsimp
# This library contains a function that gives a "pretty" format to our equations
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
τ = 2*pi
"""
Explanation: Denavit - Hartenberg algorithm
Symbolic computation
Inside a Jupyter notebook we have the advantage of being able to use the different kinds of elements that can be displayed in a web page, such as images, graphical interfaces, etc. In this practical we will use a new type of element that helps us display mathematics in a way that is both good-looking and functional.
As some of you will already have noticed, inside a notebook we have different types of cells; the ones that appear by default when you create a new cell are code cells, but if we want a comment cell, we just have to change it to the Markdown type in the menu above. In this type of cell we can even add images like the one above (if you want to see how it is done, just double-click on the cell to display the code used).
In particular, we are interested in writing mathematical equations such as $x_1 = 10$, or $$ E = m c^2 $$
Note that depending on the number of $ signs, the equation will appear either inline with the text, or on its own line and centred. But so far we have not done any computation with these equations; we have only written them, and although they look good, we cannot manipulate them in any way. For this, the sympy library will help us, since it is designed specifically for symbolic computation.
To begin, we have to import some functions from the library.
End of explanation """
var("t q1 q2 q3")
var("l1:4")
"""
Explanation: Once the functions of our library have been imported, we can start by declaring variables (or constants) that are known to us.
End of explanation """
# Define the origin point, and the rotation and translation to apply
p0 = Matrix([[2], [3], [0]])
R1 = Matrix([[cos(q1), -sin(q1), 0], [sin(q1), cos(q1), 0], [0, 0, 1]])
d1 = Matrix([[l1], [0], [0]])
p1 = R1*p0 + d1
p1
"""
Explanation: Note that either of these two notations is valid for declaring sympy variables.
If we now create a matrix with our data, we can use the matrix operations we already know; for example, let's apply a rigid-body transformation, defined by the following equation:
$$ p_1^0 = R_1 p_0^0 + d_1^0 $$
End of explanation """
a = 1.2
d = 0
α = τ/4
θ = q1
A1 = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)], [sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)], [0, sin(α), cos(α), d], [0, 0, 0, 1]])
A1
"""
Explanation: Denavit - Hartenberg convention
We know that the Denavit - Hartenberg convention is a way of obtaining the homogeneous transformations needed for each link of a manipulator; however, the matrix multiplications that result from this algorithm usually take a long time to do by hand, so in this practical we will learn to define a function that computes each transformation matrix, so that we can multiply them with the computer.
The first thing we need is the general matrix of the Denavit - Hartenberg algorithm:
$$ A_i = \begin{pmatrix} c_{\theta_i} & -s_{\theta_i} c_{\alpha_i} & s_{\theta_i} s_{\alpha_i} & a_i c_{\theta_i} \\ s_{\theta_i} & c_{\theta_i} c_{\alpha_i} & -c_{\theta_i} s_{\alpha_i} & a_i s_{\theta_i} \\ 0 & s_{\alpha_i} & c_{\alpha_i} & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix} $$
If we write this matrix, giving arbitrary values to $\theta$, $\alpha$, $d$ and $a$, we get:
End of explanation """
a = 0.8
d = 0
α = 0
θ = q2
A2 = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)], [sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)], [0, sin(α), cos(α), d], [0, 0, 0, 1]])
A2
"""
Explanation: Giving other arbitrary values to the same matrix, we get:
End of explanation """
A1*A2
"""
Explanation: And just as in any algebra engine, if we want to multiply, we simply do it:
End of explanation """
A1.inv()
"""
Explanation: or obtain the inverse matrix:
End of explanation """
trigsimp(A1.inv())
"""
Explanation: which can sometimes be simplified:
End of explanation """
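The same general matrix can also be evaluated numerically without sympy. A plain-Python sketch (the function names and the argument order theta, d, a, alpha are our own choices, not part of the notebook):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Numeric version of the general Denavit-Hartenberg link matrix A_i."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0, sa,       ca,      d],
            [0.0, 0.0,      0.0,     1.0]]

def mat_mul(A, B):
    """4x4 matrix product, used to chain the link transforms A_1 * A_2 * ..."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining two pure translations along x of lengths 1 and 2 gives a transform that moves the origin by 3, which is a quick sanity check on both functions.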
GEMScienceTools/rmtk
notebooks/vulnerability/derivation_fragility/NLTHA_on_SDOF/MSA_on_SDOF.ipynb
agpl-3.0
import numpy as np from rmtk.vulnerability.common import utils from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_on_SDOF from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_utils from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters %matplotlib inline """ Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model. The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built. <img src="../../../../figures/MSA_example.jpg" width="500" align="middle"> Note: To run the code in a cell: Click on the cell to select it. Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above. End of explanation """ capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/capacity_curves_sdof_first_mode.csv' sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv" capacity_curves = utils.read_capacity_curves(capacity_curves_file) capacity_curves = utils.check_SDOF_curves(capacity_curves) utils.plot_capacity_curves(capacity_curves) hysteresis = read_parameters(sdof_hysteresis) """ Explanation: Load capacity curves In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. 
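The counting step behind the damage probability matrix can be sketched in a few lines. This is a simplified stand-in of our own, not part of RMTK: given the peak SDOF displacement of every record in each intensity bin, it returns the fraction of records reaching each damage-state threshold.

```python
def damage_probability_matrix(peak_disps_per_bin, ds_thresholds):
    """Fraction of records per IM bin whose peak displacement reaches
    each damage-state threshold (one row per bin, one column per state)."""
    pdm = []
    for disps in peak_disps_per_bin:
        n = float(len(disps))
        pdm.append([sum(1 for d in disps if d >= t) / n for t in ds_thresholds])
    return pdm
```

The RMTK functions used below do the full bookkeeping (time-history analysis, record scaling, damage model parsing); this sketch only illustrates how exceedance fractions per stripe arise.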
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file. If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default" End of explanation """ gmrs_folder = "../../../../../rmtk_data/MSA_records" minT, maxT = 0.1, 2.0 no_bins = 2 no_rec_bin = 10 record_scaled_folder = "../../../../../rmtk_data/Scaling_factors" gmrs = utils.read_gmrs(gmrs_folder) #utils.plot_response_spectra(gmrs, minT, maxT) """ Explanation: Load ground motion records For what concerns the ground motions to be used in th Multiple Stripe Analysis the following inputs are required: 1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual. 2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual. 3. no_bins: number of Intensity Measure bins. 4. no_rec_bin: number of records per bin If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields. 
End of explanation """ damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv" damage_model = utils.read_damage_model(damage_model_file) """ Explanation: Load damage state thresholds Please provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type. If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed. End of explanation """ damping_ratio = 0.05 degradation = False msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs, damage_model, damping_ratio, degradation) """ Explanation: Obtain the damage probability matrix The following parameters need to be defined in the cell below in order to calculate the damage probability matrix: 1. damping_ratio: This parameter defines the damping ratio for the structure. 2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not. End of explanation """ IMT = "Sa" T = 0.47 #T = np.arange(0.4,1.91,0.01) regression_method = "least squares" fragility_model = MSA_utils.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model, T,damping_ratio, regression_method) """ Explanation: Fit lognormal CDF fragility curves The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above: 1. 
IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity). 2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)). 3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood". End of explanation """ minIML, maxIML = 0.01, 4 utils.plot_fragility_model(fragility_model, minIML, maxIML) print fragility_model['damage_states'][0:] """ Explanation: Plot fragility functions The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above: * minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions End of explanation """ taxonomy = "HI_Intact_v4_lq" minIML, maxIML = 0.01, 3.00 output_type = "csv" output_path = "../../../../../phd_thesis/results/damping_0.39/" utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path) """ Explanation: Save fragility functions The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above: 1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions. 2. minIML and maxIML: These parameters define the bounds of applicability of the functions. 3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml". 
End of explanation """ cons_model_file = "../../../../../rmtk_data/cons_model.csv" imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00, 2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00] distribution_type = "lognormal" cons_model = utils.read_consequence_model(cons_model_file) vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, imls, distribution_type) utils.plot_vulnerability_model(vulnerability_model) """ Explanation: Obtain vulnerability function A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions: 1. cons_model_file: This parameter specifies the path of the consequence model file. 2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated. 3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF". End of explanation """ taxonomy = "RC" output_type = "csv" output_path = "../../../../../rmtk_data/output/" utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path) """ Explanation: Save vulnerability function The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. 
The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above: 1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions. 3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml". End of explanation """
PyDataMadrid2016/Conference-Info
workshops_materials/20160408_1100_Pandas_for_beginners/tutorial/EN - Tutorial 03 - Basic operations with pandas data structures.ipynb
mit
# first, the imports import os import datetime as dt import pandas as pd import numpy as np import matplotlib.pyplot as plt np.random.seed(19760812) %matplotlib inline # we read data from file 'mast.txt' ipath = os.path.join('Datos', 'mast.txt') # Now, we define a function to parse the dates def dateparse(date, time): YY = 2000 + int(date[:2]) MM = int(date[2:4]) DD = int(date[4:]) hh = int(time[:2]) mm = int(time[2:]) return dt.datetime(YY, MM, DD, hh, mm, 0) cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir', 'x1', 'x2', 'x3', 'x4', 'x5', 'wspd_std'] wind = pd.read_csv(ipath, sep = "\s*", names = cols, parse_dates = [[0, 1]], index_col = 0, date_parser = dateparse) """ Explanation: Let's start from what we have seen in the previous notebook... Let's read some wind data End of explanation """ wind.info() wind.describe() """ Explanation: Basic information (in this case from a DataFrame) End of explanation """ wind.index wind.values wind.values.shape wind.columns """ Explanation: Access the índexes, values, columns (Series doesn't have this attribute): End of explanation """ # We remove a column using 'del' keyword del wind['x1'] wind.head(3) # We extract a column using 'pop' method s = wind.pop('x2') wind.head(3) del wind['x3'] del wind['x4'] del wind['x5'] wind.info() """ Explanation: Removing/extracting columns There are some columns that are not interesting (named 'x1', 'x2', 'x3', 'x4' and 'x5'). We will remove these columns from our DataFrame. We have several options to do so: End of explanation """ type(s) s.head(3) s.info() """ Explanation: One of the columns, that extracted using the pop method, is referenced using the s variable, and it is a Series: End of explanation """ # s.is_time_series deprecated s.index.is_all_dates s.describe() s.dtype s.values s.index s.columns """ Explanation: Is it a TimeSeries?, i.e., Are all the indexes dates? 
End of explanation """ # We create a DataFrame df = pd.DataFrame(np.array([['a','b','c','d','e'], [10,20,30,40,50]]).T, columns = ['col1', 'col2']) df """ Explanation: Working with the indexes End of explanation """ df.index = np.arange(1,6) * 100 df """ Explanation: We can re-write the indexes at any time: End of explanation """ df.set_index('col1', inplace = True) df """ Explanation: We can use a column to define our indexes: End of explanation """ df.reset_index(inplace = True) df """ Explanation: We can undo the set_index action using: End of explanation """ df.columns = ['column1', 'column2'] df """ Explanation: As with indexes, we can change the name of the columns: End of explanation """ df.index.name = 'indices' df """ Explanation: The indexes 'column' can have a name (already seen before): End of explanation """ numpy_attrs = dir(s.values) series_attrs = dir(s) for attr in numpy_attrs: if attr not in series_attrs: print('NOOOOOOOOOOOOOOOOOOOOOO', attr) else: print(attr) """ Explanation: pandas data structures are numpy arrays on steroids Don't forget that behind the scenes we have numpy arrays and pandas exposes much of the numpy arrays functionality directly from their data structures. We can see, for instance, what attributes of a numpy array have an equivalent directly in a Series (or DataFrame): End of explanation """ s.mean() s.min() s.max() s[0:10].tolist() """ Explanation: So, a lot of operations we do with a numpy array can be made directly from a pandas data structure: End of explanation """ %%timeit s.mean() s.min() s.max() %%timeit s.values.mean() s.values.min() s.values.max() """ Explanation: ... 
<div class="alert alert-danger"> <p><b>Note:</b></p> <p>Sometimes could be convenient to use directly the numpy arrays method when performance is an issue.</p> </div> End of explanation """ numpy_attrs = dir(s.values) series_attrs = dir(s) for attr in series_attrs: if attr not in numpy_attrs: print('NOOOOOOOOOOOOOOOOOOOOOO', attr) else: print(attr) """ Explanation: And where are the steroids? Be patient!!!!!! 'Stuff' that are in a Series but not in a numpy array End of explanation """ numpy_attrs = dir(s.values) dataframe_attrs = dir(wind) for attr in dataframe_attrs: if attr not in numpy_attrs: print('NOOOOOOOOOOOOOOOOOOOOOO', attr) else: print(attr) """ Explanation: 'Stuff' that are in a DataFrame but not in a numpy array End of explanation """ wind['wspd'].apply(lambda x: str(x) + ' m/s') wind.corr() wind.cumsum() wind.diff() """ Explanation: Examples of some useful operations. We will see some of this in a more detailed manner and with examples in the next notebooks. End of explanation """ # Calculate the mean wind speed (column 'wspd'): # Calculate the median of the wind direction (column 'wdir'): # Obtain the maximum difference between two time steps # (column 'wspd_std') """ Explanation: Now, we are skimming this. Later we will see it in a more detailed way: Let's do some simple examples: End of explanation """ pd.rolling_mean(wind, 5, center = True).head(10) """ Explanation: Other interesting methods are the pd.rolling_*: End of explanation """ wind.rolling(5, center = True).mean().head(10) """ Explanation: As you can read in the previous warning message the rolling_* functions are deprecated and will not be available in the near future. In the previous text cell I wrote explicitly 'methods' because all the rolling_* functions now are grouped in the rolling method. 
How we can do it with the rolling method: End of explanation """ import inspect info = inspect.getmembers(wind, predicate=inspect.ismethod) for stuff in info: print(stuff[0]) """ Explanation: Other interesting 'stuff' in a DataFrame (change DataFrame with Series or other data structures): End of explanation """ index = pd.date_range('2000/01/01', freq = '12H', periods = 10) index = index.append(pd.date_range('2000/01/10', freq = '1D', periods = 3)) df = pd.DataFrame(np.random.randint(1, 100, size = (13, 3)), index = index, columns = ['col1', 'col2', 'col3']) df # Let's fill some values with NaN df[df > 70] = np.nan df """ Explanation: Working with missing data It is quite usual that our datasets have missing data. End of explanation """ df['col1'].sum() df['col1'].values.sum() df['col1'].sum(skipna = False) """ Explanation: As opposed to what happens with a numpy array, in pandas, operations ignore NaN values unless we explicitly state the opposite. Let's see this in action: End of explanation """ df.isnull() """ Explanation: We can detect 'null' values (NaN) using isnull: End of explanation """ df.notnull() """ Explanation: Or not null using notnull: End of explanation """ # Let's remember how is our DataFrame df df.ffill() df.bfill() df.fillna(value = 'Kiko') """ Explanation: We can see that we have NaN values. We can fill them using ffill or bfill (similar to fillna(method = 'ffill') and to fillna(method = 'bfill'), respectively): End of explanation """ df = pd.DataFrame(np.random.randint(1, 100, size = (15, 3)), index = pd.date_range('2015/01/01', freq = '12H', periods = 15)) df df[df > 70] = 'Kiko' df """ Explanation: Let's create a new DataFrame with indexes with 12H frequency. End of explanation """ df[df == 'Kiko'] = np.nan df # We remove the rows where any value of the row is NaN # axis = 0 would be equivalent to axis = 'rows' or axis = 'index' # Later we will see more about the axis keyword... 
df.dropna(axis = 'rows') # Let's remove the rows where all the values in the row are NaN df.iloc[2, :] = np.nan df.dropna(axis = 'rows', how = 'all') # We can remove columns where any valu in the column is a NaN df.dropna(axis = 'columns', how = 'any') # axis = 1 is equivalent to axis = 'columns'. More on this later. # how = 'any' is he default value so we don't need to add it. # Let's add a column only with not null values and let's repeat the operation df['col4'] = 9999 df.dropna(axis = 'columns', how = 'any') # Now let's add a column where all the values are NaN df['col5'] = np.nan df.dropna(axis = 'columns', how = 'all') """ Explanation: We can remove rows or columns that have a NaN value, all NaN values,... End of explanation """ df.interpolate() """ Explanation: We can also fill NaN values using interpolate: End of explanation """ df.info() """ Explanation: But, what is happening here!!! Why null values are not being interpolated? Let's see how are the columns. End of explanation """ df[[0, 1, 2]] = df[[0, 1, 2]].astype(np.float) df.interpolate() # Have a look to the docs of the 'interpolate' method to know how to use it """ Explanation: We can see columns 0, 1 y 2 are of type object and this type is not a number. On the other hand, in the column col4 there isn't any value to interpolate. Last, in column col5 all the values are NaN. Let's convert the first three columns to interpolate: End of explanation """
MIT-LCP/mimic-workshop
intro_to_mimic/01-example-patient-heart-failure.ipynb
mit
import numpy as np import pandas as pd import matplotlib.pyplot as plt import sqlite3 %matplotlib inline """ Explanation: Exploring the trajectory of a single patient Import Python libraries We first need to import some tools for working with data in Python. - NumPy is for working with numbers - Pandas is for analysing data - MatPlotLib is for making plots - Sqlite3 to connect to the database End of explanation """ # Connect to the MIMIC database conn = sqlite3.connect('data/mimicdata.sqlite') # Create our test query test_query = """ SELECT subject_id, hadm_id, admittime, dischtime, admission_type, diagnosis FROM admissions """ # Run the query and assign the results to a variable test = pd.read_sql_query(test_query,conn) # Display the first few rows test.head() """ Explanation: Connect to the database We can use the sqlite3 library to connect to the MIMIC database Once the connection is established, we'll run a simple SQL query. End of explanation """ query = """ SELECT de.icustay_id , (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS , di.label , de.value , de.valuenum , de.uom FROM chartevents de INNER join d_items di ON de.itemid = di.itemid INNER join icustays ie ON de.icustay_id = ie.icustay_id WHERE de.icustay_id = 252522 ORDER BY charttime; """ ce = pd.read_sql_query(query,conn) # OPTION 2: load chartevents from a CSV file # ce = pd.read_csv('data/example_chartevents.csv', index_col='HOURSSINCEADMISSION') # Preview the data # Use 'head' to limit the number of rows returned ce.head() """ Explanation: Load the chartevents data The chartevents table contains data charted at the patient bedside. It includes variables such as heart rate, respiratory rate, temperature, and so on. We'll begin by loading the chartevents data for a single patient. End of explanation """ # Select a single column ce['LABEL'] """ Explanation: Review the patient's heart rate We can select individual columns using the column name. 
For example, if we want to select just the label column, we write ce.LABEL or alternatively ce['LABEL'] End of explanation """ # Select just the heart rate rows using an index ce[ce.LABEL=='Heart Rate'] """ Explanation: In a similar way, we can select rows from data using indexes. For example, to select rows where the label is equal to 'Heart Rate', we would create an index using [ce.LABEL=='Heart Rate'] End of explanation """ # Which time stamps have a corresponding heart rate measurement? print ce.index[ce.LABEL=='Heart Rate'] # Set x equal to the times x_hr = ce.HOURS[ce.LABEL=='Heart Rate'] # Set y equal to the heart rates y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate'] # Plot time against heart rate plt.figure(figsize=(14, 6)) plt.plot(x_hr,y_hr) plt.xlabel('Time',fontsize=16) plt.ylabel('Heart rate',fontsize=16) plt.title('Heart rate over time from admission to the intensive care unit') """ Explanation: Plot 1: How did the patients heart rate change over time? Using the methods described above to select our data of interest, we can create our x and y axis values to create a time series plot of heart rate. End of explanation """ # Exercise 1 here """ Explanation: Task 1 What is happening to this patient's heart rate? Plot respiratory rate over time for the patient. Is there anything unusual about the patient's respiratory rate? End of explanation """ plt.figure(figsize=(14, 6)) plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k+', markersize=10, linewidth=4) plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - High'], ce.VALUENUM[ce.LABEL=='Resp Alarm - High'], 'm--') plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - Low'], ce.VALUENUM[ce.LABEL=='Resp Alarm - Low'], 'm--') plt.xlabel('Time',fontsize=16) plt.ylabel('Respiratory rate',fontsize=16) plt.title('Respiratory rate over time from admission, with upper and lower alarm thresholds') plt.ylim(0,55) """ Explanation: Plot 2: Did the patient's vital signs breach any alarm thresholds? 
Alarm systems in the intensive care unit are commonly based on high and low thresholds defined by the carer. False alarms are often a problem and so thresholds may be set arbitrarily to reduce alarms. As a result, alarm settings carry limited information. End of explanation """ # Display the first few rows of the GCS eye response data ce[ce.LABEL=='GCS - Eye Opening'].head() # Prepare the size of the figure plt.figure(figsize=(18, 10)) # Set x equal to the times x_hr = ce.HOURS[ce.LABEL=='Heart Rate'] # Set y equal to the heart rates y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate'] plt.plot(x_hr,y_hr) plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k', markersize=6) # Add a text label to the y-axis plt.text(-20,155,'GCS - Eye Opening',fontsize=14) plt.text(-20,150,'GCS - Motor Response',fontsize=14) plt.text(-20,145,'GCS - Verbal Response',fontsize=14) # Iterate over list of GCS labels, plotting around 1 in 10 to avoid overlap for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Eye Opening'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Eye Opening'].values[i],155),fontsize=14) for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Motor Response'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Motor Response'].values[i],150),fontsize=14) for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Verbal Response'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Verbal Response'].values[i],145),fontsize=14) plt.title('Vital signs and Glasgow Coma Scale over time from admission',fontsize=16) plt.xlabel('Time (hours)',fontsize=16) plt.ylabel('Heart rate or GCS',fontsize=16) plt.ylim(10,165) """ Explanation: Task 2 Based on the data, does it look like the alarms would have triggered for this patient? Plot 3: What is patient's level of consciousness? Glasgow Coma Scale (GCS) is a measure of consciousness. 
It is commonly used for monitoring patients in the intensive care unit. It consists of three components: eye response; verbal response; motor response. End of explanation """ # OPTION 1: load outputs from the patient query = """ select de.icustay_id , (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS , di.label , de.value , de.valueuom from outputevents de inner join icustays ie on de.icustay_id = ie.icustay_id inner join d_items di on de.itemid = di.itemid where de.subject_id = 40080 order by charttime; """ oe = pd.read_sql_query(query,conn) oe.head() plt.figure(figsize=(14, 10)) plt.figure(figsize=(14, 6)) plt.title('Fluid output over time') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') plt.xlim(0,72) plt.ylim(0,10) plt.legend() """ Explanation: Task 3 How is the patient's consciousness changing over time? Stop here... Plot 4: What other data do we have on the patient? Using Pandas 'read_csv function' again, we'll now load the outputevents data - this table contains all information about patient outputs (urine output, drains, dialysis). 
End of explanation """ # OPTION 1: load inputs given to the patient (usually intravenously) using the database connection query = """ select de.icustay_id , (strftime('%s',de.starttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_START , (strftime('%s',de.endtime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_END , de.linkorderid , di.label , de.amount , de.amountuom , de.rate , de.rateuom from inputevents_mv de inner join icustays ie on de.icustay_id = ie.icustay_id inner join d_items di on de.itemid = di.itemid where de.subject_id = 40080 order by endtime; """ ie = pd.read_sql_query(query,conn) # # OPTION 2: load ioevents using the CSV file with endtime as the index # ioe = pd.read_csv('inputevents.csv' # ,header=None # ,names=['subject_id','itemid','label','starttime','endtime','amount','amountuom','rate','rateuom'] # ,parse_dates=True) ie.head() """ Explanation: To provide necessary context to this plot, it would help to include patient input data. This provides the necessary context to determine a patient's fluid balance - a key indicator in patient health. End of explanation """ ie['LABEL'].unique() plt.figure(figsize=(14, 10)) # Plot the cumulative input against the cumulative output plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'], ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000, 'go', markersize=8, label='Intake volume, L') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') plt.title('Fluid balance over time',fontsize=16) plt.xlabel('Hours',fontsize=16) plt.ylabel('Volume (litres)',fontsize=16) # plt.ylim(0,38) plt.legend() """ Explanation: Note that the column headers are different: we have "HOURS_START" and "HOURS_END". This is because inputs are administered over a fixed period of time. 
End of explanation """ plt.figure(figsize=(14, 10)) # Plot the cumulative input against the cumulative output plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'], ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000, 'go', markersize=8, label='Intake volume, L') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') # example on getting two columns from a dataframe: ie[['HOURS_START','HOURS_END']].head() for i, idx in enumerate(ie.index[ie.LABEL=='Furosemide (Lasix)']): plt.plot([ie.HOURS_START[ie.LABEL=='Furosemide (Lasix)'][idx], ie.HOURS_END[ie.LABEL=='Furosemide (Lasix)'][idx]], [ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx], ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx]], 'b-',linewidth=4) plt.title('Fluid balance over time',fontsize=16) plt.xlabel('Hours',fontsize=16) plt.ylabel('Volume (litres)',fontsize=16) # plt.ylim(0,38) plt.legend() ie['LABEL'].unique() """ Explanation: As the plot shows, the patient's intake tends to be above their output (as one would expect!) - but there are periods where they are almost one to one. One of the biggest challenges of working with ICU data is that context is everything - let's look at a treatment (lasix) that we know will affect this graph. End of explanation """ # Exercise 2 here """ Explanation: Exercise 2 Plot the alarms for the mean arterial pressure ('Arterial Blood Pressure mean') HINT: you can use ce.LABEL.unique() to find a list of variable names Were the alarm thresholds breached? 
End of explanation """ plt.figure(figsize=(14, 10)) plt.plot(ce.index[ce.LABEL=='Heart Rate'], ce.VALUENUM[ce.LABEL=='Heart Rate'], 'rx', markersize=8, label='HR') plt.plot(ce.index[ce.LABEL=='O2 saturation pulseoxymetry'], ce.VALUENUM[ce.LABEL=='O2 saturation pulseoxymetry'], 'g.', markersize=8, label='O2') plt.plot(ce.index[ce.LABEL=='Arterial Blood Pressure mean'], ce.VALUENUM[ce.LABEL=='Arterial Blood Pressure mean'], 'bv', markersize=8, label='MAP') plt.plot(ce.index[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k+', markersize=8, label='RR') plt.title('Vital signs over time from admission') plt.ylim(0,130) plt.legend() """ Explanation: Plot 3: Were the patient's other vital signs stable? End of explanation """ # OPTION 1: load labevents data using the database connection query = """ SELECT de.subject_id , de.charttime , di.label, de.value, de.valuenum , de.uom FROM labevents de INNER JOIN d_labitems di ON de.itemid = di.itemid where de.subject_id = 40080 """ le = pd.read_sql_query(query,conn) # OPTION 2: load labevents from the CSV file # le = pd.read_csv('data/example_labevents.csv', index_col='HOURSSINCEADMISSION') # preview the labevents data le.head() # preview the ioevents data le[le.LABEL=='HEMOGLOBIN'] plt.figure(figsize=(14, 10)) plt.plot(le.index[le.LABEL=='HEMATOCRIT'], le.VALUENUM[le.LABEL=='HEMATOCRIT'], 'go', markersize=6, label='Haematocrit') plt.plot(le.index[le.LABEL=='HEMOGLOBIN'], le.VALUENUM[le.LABEL=='HEMOGLOBIN'], 'bv', markersize=8, label='Hemoglobin') plt.title('Laboratory measurements over time from admission') plt.ylim(0,38) plt.legend() """ Explanation: Plot 5: Laboratory measurements Using Pandas 'read_csv function' again, we'll now load the labevents data. This data corresponds to measurements made in a laboratory - usually on a sample of patient blood. End of explanation """
MBARIMike/stoqs
stoqs/loaders/CANON/toNetCDF/notebooks/lrauv_nav_adjust.ipynb
gpl-3.0
from netCDF4 import Dataset
import numpy as np

# 1. Initial daphne file from the https://stoqs.mbari.org/stoqs_canon_may2018 campaign, wget'ted from:
# http://dods.mbari.org/data/lrauv/daphne/missionlogs/2018/20180603_20180611/20180608T003220/201806080032_201806090421.nc4
#df = '/vagrant/dev/stoqsgit/201806080032_201806090421.nc4'

# 2. This file has dead reckoned positions before the first GPS fix
# http://dods.mbari.org/data/lrauv/daphne/missionlogs/2018/20180603_20180611/20180605T183835/201806051838_201806070507.nc4
#df = '/vagrant/dev/stoqsgit/201806051838_201806070507.nc4'

# 3. This file has a bogus coordinate _time value at index 832 that should be removed to improve performance
# http://dods.mbari.org/data/lrauv/daphne/missionlogs/2018/20180227_20180301/20180301T095515/201803010955_201803011725.nc4
df = '/vagrant/dev/stoqsgit/201803010955_201803011725.nc4'

ds = Dataset(df)

# Default is to not create the interactive data plots, which add a lot of data to the Notebook
bokeh_plots = False
""" Explanation: Develop Algorithm to Nudge Dead Reckoned LRAUV Segments to Match GPS Fixes
Read data from .nc4 files and examine the performance of the nudging method, which needs to work for all cases.
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system; this will take a few hours and depends on a good connection to the Internet. 
Once your server is up, log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands, e.g.:
vagrant ssh -- -X
export STOQS_HOME=/vagrant/dev/stoqsgit  # Use STOQS_HOME=/home/vagrant/dev/stoqsgit if not using NFS mount
cd $STOQS_HOME && source venv-stoqs/bin/activate
export DATABASE_URL=postgis://stoqsadm:CHANGEME@127.0.0.1:5438/stoqs
Launch Jupyter Notebook from this directory on your system with:
cd $STOQS_HOME/stoqs/loaders/CANON/toNetCDF/notebooks
../../../../manage.py shell_plus --notebook
A Firefox window should appear where you can open this file and execute it.
Test with various files that have special circumstances encountered while reprocessing the archive.
End of explanation """
import pandas as pd
from datetime import datetime
from time import time

def var_series(data_array, time_array, tmin=0, tmax=time(), angle=False, verbose=False):
    '''Return a Pandas series of the coordinate with invalid and out of range time values removed'''
    mt = np.ma.masked_invalid(time_array)
    mt = np.ma.masked_outside(mt, tmin, tmax)
    bad_times = [str(datetime.utcfromtimestamp(es)) for es in time_array[:][mt.mask]]
    if verbose and bad_times:
        print(f"Removing bad times in {data_array.name} ([index], [values]): {np.where(mt.mask)[0]}, {bad_times}")
    v_time = pd.to_datetime(mt.compressed(), unit='s', errors='coerce')
    da = pd.Series(data_array[:][~mt.mask], index=v_time)

    rad_to_deg = False
    if angle:
        # Some universal positions are in degrees, some are in radians - make a guess based on the maximum value
        if np.max(np.abs(da)) <= np.pi:
            rad_to_deg = True
        if verbose:
            print(f"{data_array.name}: rad_to_deg = {rad_to_deg}")
    if rad_to_deg:
        da = da * 180.0 / np.pi

    return da
""" Explanation: Define function to remove bad time axis values and convert angles to degrees
End of explanation """
%matplotlib inline
from pylab import plt
import numpy as np

# Make Pandas Series of the coordinate and yaw and depth 
variables lon_fix = var_series(ds['longitude_fix'], ds['longitude_fix_time'], angle=True) lat_fix = var_series(ds['latitude_fix'], ds['latitude_fix_time'], angle=True) lon = var_series(ds['longitude'], ds['longitude_time'], angle=True) lat = var_series(ds['latitude'], ds['latitude_time'], angle=True) yaw = var_series(ds['platform_orientation'], ds['platform_orientation_time'], angle=True) depth = var_series(ds['depth'], ds['depth_time'], angle=False) # Make plots of the original data plt.rcParams['figure.figsize'] = (15, 8); fig, ax = plt.subplots(3,1) ax[0].set_title('Dead Reckoned and GPS Navigation Time Series') ax[1].set_xlabel('Time (GMT)') ax[0].set_ylabel('Latitude (degrees_north)') ax[0].plot(lat.index, lat, '-x', label='Dead Reckoned') ax[0].plot(lat_fix.index, lat_fix, 'o', label='GPS') ax[0].grid(True) ax[1].set_ylabel('Longitude (degrees_east)') ax[1].plot(lon.index, lon, '-x', label='Dead Reckoned') ax[1].plot(lon_fix.index, lon_fix, 'o', label='GPS') ax[1].grid(True) ax[2].set_ylabel('Yaw (degrees)') ax[2].plot(yaw.index, yaw) ax[2].grid(True) ax[0].legend() _ = ax[1].legend() """ Explanation: Examine the data from the file End of explanation """ def plot_vars(var_time, var, var_fix_time, var_fix, y_label, plot_depth=False): from bokeh.plotting import figure from bokeh.io import output_notebook, show from bokeh.models import LinearAxis, Range1d from bokeh.resources import INLINE output_notebook(resources=INLINE, hide_banner=True) p = figure(width = 900, height = 300, title = 'Dead Reckoned and GPS Fix Positions', x_axis_type="datetime", x_axis_label='Time (GMT)', y_range=(np.min(var), np.max(var)), y_axis_label = y_label) p.extra_y_ranges = {"depth_axis": Range1d(start=2, end=-0.2)} p.add_layout(LinearAxis(y_range_name="depth_axis"), 'right') p.line(var_time, var, line_width=1) p.cross(var_time, var) p.square(var_fix_time, var_fix, color="orange") if plot_depth: p.line(depth.index, depth, y_range_name="depth_axis", line_color="yellow") 
p.cross(depth.index, depth, y_range_name="depth_axis", line_color="yellow") _ = show(p) """ Explanation: Files 1 and 2 have interesting left turns during the underwater dead reckoned segments in the Latitude plots. These are correlated with significant changes in the Yaw (heading) data, so they appear to be real. The goal with the algorithm is to elimate the jumps in position when the LRAUV surfaces and receives new GPS positions. Define function to make interactive Bokeh plot of dead reckoned and GPS latitude or longitude variables End of explanation """ if bokeh_plots: plot_vars(lat.index, lat, lat_fix.index, lat_fix, 'Latitude (degrees_north)', plot_depth=True) if bokeh_plots: plot_vars(lon.index, lon, lon_fix.index, lon_fix, 'Longitude (degrees_east)', plot_depth=True) """ Explanation: Create interactive plots where we can zoom into areas of interest. (plot_vars() adds a lot of data to the Notebook, so don't commit rendered.) End of explanation """ def nudge_coords(ds, verbose=False): '''Given a ds object to an LRAUV .nc4 file return adjusted longitude and latitude arrays that reconstruct the trajectory so that the dead reckoned positions are nudged so that they match the GPS fixes ''' from math import cos # Produce Pandas time series from the NetCDF variables lon_fix = var_series(ds['longitude_fix'], ds['longitude_fix_time'], angle=True, verbose=verbose) lat_fix = var_series(ds['latitude_fix'], ds['latitude_fix_time'], angle=True, verbose=verbose) lon = var_series(ds['longitude'], ds['longitude_time'], angle=True, verbose=verbose) lat = var_series(ds['latitude'], ds['latitude_time'], angle=True, verbose=verbose) max_sec_diff_at_end = 10 print(f"{'seg#':4s} {'end_sec_diff':12s} {'end_lon_diff':12s} {'end_lat_diff':12s}", end='') print(f" {'len(segi)':9s} {'seg_min':7s} {'u_drift (cm/s)':14s} {'v_drift (cm/s)':14s}") # Any dead reckoned points before first GPS fix - usually empty as GPS fix happens before dive segi = np.where(lat.index < lat_fix.index[0])[0] if 
lon[:][segi].any(): lon_nudged = lon[segi] lat_nudged = lat[segi] dt_nudged = lon.index[segi] print(f"{' ':4} {'-':>12} {'-':>12} {'-':>12}", end='') else: lon_nudged = np.array([]) lat_nudged = np.array([]) dt_nudged = np.array([], dtype='datetime64[ns]') if segi.any(): print(f"{' ':4} {'nan':>12} {'nan':>12} {'nan':>12}", end='') if segi.any(): seg_min = (lat.index[segi][-1] - lat.index[segi][0]).total_seconds() / 60 print(f" {len(segi):-9d} {seg_min:7.2f} {'-':>14} {'-':>14}") else: seg_min = 0 for i in range(len(lat_fix) - 1): # Segment of dead reckoned (under water) positions, each surrounded by GPS fixes segi = np.where(np.logical_and(lat.index > lat_fix.index[i], lat.index < lat_fix.index[i+1]))[0] end_sec_diff = (lat_fix.index[i+1] - lat.index[segi[-1]]).total_seconds() assert(end_sec_diff < max_sec_diff_at_end) end_lon_diff = lon_fix[i+1] - lon[segi[-1]] end_lat_diff = lat_fix[i+1] - lat[segi[-1]] seg_min = (lat.index[segi][-1] - lat.index[segi][0]).total_seconds() / 60 # Compute approximate horizontal drift rate as a sanity check u_drift = (end_lat_diff * cos(lat_fix[i+1]) * 60 * 185300 / (lat.index[segi][-1] - lat.index[segi][0]).total_seconds()) v_drift = (end_lat_diff * 60 * 185300 / (lat.index[segi][-1] - lat.index[segi][0]).total_seconds()) print(f"{i:4d}: {end_sec_diff:12.3f} {end_lon_diff:12.7f} {end_lat_diff:12.7f}", end='') print(f" {len(segi):-9d} {seg_min:7.2f} {u_drift:14.2f} {v_drift:14.2f}") # Start with zero adjustment at begining and linearly ramp up to the diff at the end lon_nudge = np.interp( lon.index[segi].astype(np.int64), [lon.index[segi].astype(np.int64)[0], lon.index[segi].astype(np.int64)[-1]], [0, end_lon_diff] ) lat_nudge = np.interp( lat.index[segi].astype(np.int64), [lat.index[segi].astype(np.int64)[0], lat.index[segi].astype(np.int64)[-1]], [0, end_lat_diff] ) lon_nudged = np.append(lon_nudged, lon[segi] + lon_nudge) lat_nudged = np.append(lat_nudged, lat[segi] + lat_nudge) dt_nudged = np.append(dt_nudged, lon.index[segi]) # 
Any dead reckoned points after first GPS fix - not possible to nudge, just copy in segi = np.where(lat.index > lat_fix.index[-1])[0] seg_min = 0 if segi.any(): lon_nudged = np.append(lon_nudged, lon[segi]) lat_nudged = np.append(lat_nudged, lat[segi]) dt_nudged = np.append(dt_nudged, lon.index[segi]) seg_min = (lat.index[segi][-1] - lat.index[segi][0]).total_seconds() / 60 print(f"{i:4d}: {'-':>12} {'-':>12} {'-':>12}", end='') print(f" {len(segi):-9d} {seg_min:7.2f} {'-':>14} {'-':>14}") return pd.Series(lon_nudged, index=dt_nudged), pd.Series(lat_nudged, index=dt_nudged) lon_nudged, lat_nudged = nudge_coords(ds, verbose=True) if bokeh_plots: plot_vars(lat_nudged.index, lat_nudged, lat_fix.index, lat_fix, 'Latitude (degrees_north)') if bokeh_plots: plot_vars(lon_nudged.index, lon_nudged, lon_fix.index, lon_fix, 'Longitude (degrees_east)') plt.rcParams['figure.figsize'] = (15, 8); fig, ax = plt.subplots(2,1) ax[0].set_title('Dead Reckoned and GPS Navigation Positions After Nudging') ax[1].set_xlabel('Time (GMT)') ax[0].set_ylabel('Latitude (degrees_north)') ax[0].plot(lat_nudged.index, lat_nudged, '-x', label='Nudged Dead Reckoned') ax[0].plot(lat_fix.index, lat_fix, 'o', label='GPS') ax[0].grid(True) ax[1].set_ylabel('Longitude (degrees_east)') ax[1].plot(lon_nudged.index, lon_nudged, '-x', label='Nudged Dead Reckoned') ax[1].plot(lon_fix.index, lon_fix, 'o', label='GPS') ax[1].grid(True) ax[0].legend() _ = ax[1].legend() """ Explanation: After exploring the data, it appears that we can safely adjust all dead reckoned positions that are in between the GPS fixes. Define a function to loop though pairs of GPS fixes and "nudge" the dead reckoned positions so that they match the position of the second GPS fix (acquired after surfacing) in the pair. End of explanation """
VUInformationRetrieval/IR2016_2017
02_building.ipynb
gpl-2.0
Summaries_file = 'data/malaria__Summaries.pkl.bz2' Abstracts_file = 'data/malaria__Abstracts.pkl.bz2' import pickle, bz2 from collections import namedtuple Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) ) paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] ) for (id, paper_info) in Summaries.items(): Summaries[id] = paper( *paper_info ) Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) ) """ Explanation: Mini-Assignment 2: Building a Simple Search Index In this mini-assignment, we will build a simple search index, which we will use later for Boolean retrieval. The assignment tasks are again at the bottom of this document. Loading the Data End of explanation """ Summaries[24130474] Abstracts[24130474] """ Explanation: Let's have a look at what the data looks like for our example paper: End of explanation """ def tokenize(text): """ Function that tokenizes a string in a rather naive way. Can be extended later. """ return text.split(' ') def preprocess(tokens): """ Perform linguistic preprocessing on a list of tokens. Can be extended later. """ result = [] for token in tokens: result.append(token.lower()) return result print(preprocess(tokenize("Lorem ipsum dolor sit AMET"))) from IPython.display import display, HTML import re def display_summary( id, show_abstract=False, show_id=True, extra_text='' ): """ Function for printing a paper's summary through IPython's Rich Display System. Trims long author lists, and adds a link to the paper's DOI (when available). """ s = Summaries[id] lines = [] title = s.title if s.doi != '': title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title) title = '<strong>' + title + '</strong>' lines.append(title) authors = ', '.join( s.authors[:20] ) + ('' if len(s.authors) <= 20 else ', ...') lines.append(str(s.year) + '. 
' + authors) if (show_abstract): lines.append('<small><strong>Abstract:</strong> <em>%s</em></small>' % Abstracts[id]) if (show_id): lines.append('[ID: %d]' % id) if (extra_text != ''): lines.append(extra_text) display( HTML('<br>'.join(lines)) ) display_summary(22433778) display_summary(24130474, show_abstract=True) """ Explanation: Some Utility Functions We'll define some utility functions that allow us to tokenize a string into terms, perform linguistic preprocessing on a list of terms, as well as a function to display information about a paper in a nice way. Note that these tokenization and preprocessing functions are rather naive - you may have to make them smarter in a later assignment. End of explanation """ from collections import defaultdict inverted_index = defaultdict(set) # This may take a while: for (id, abstract) in Abstracts.items(): for term in preprocess(tokenize(abstract)): inverted_index[term].add(id) """ Explanation: Creating our first index We will now create an inverted index based on the words in the abstracts of the papers in our dataset. We will implement our inverted index as a Python dictionary with terms as keys and posting lists as values. For the posting lists, instead of using Python lists and then implementing the different operations on them ourselves, we will use Python sets and use the predefined set operations to process these posting "lists". This will also ensure that each document is added at most once per term. The use of Python sets is not the most efficient solution but will work for our purposes. (As an optional additional exercise, you can try to implement the posting lists as Python lists for this and the following mini-assignments.) Not every paper in our dataset has an abstract; we will only index papers for which an abstract is available. 
End of explanation """ print(inverted_index['network']) """ Explanation: Let's see what's in the index for the example term 'network': End of explanation """ query_word = 'amsterdam' for i in inverted_index[query_word]: display_summary(i) """ Explanation: We can now use this inverted index to answer simple one-word queries, for example to show all papers that contain the word 'amsterdam': End of explanation """ # Add your code here """ Explanation: Assignments Your name: ... Task 1 Construct a function called and_query that takes as input a single string, consisting of one or more words, and returns a list of matching documents. and_query, as its name suggests, should require that all query terms are present in the documents of the result list. Demonstrate the working of your function with an example (choose one that leads to fewer than 100 hits to not overblow this notebook file). (You can use the tokenize and preprocess functions we defined above to tokenize and preprocess your query. You can also exploit the fact that the posting lists are sets, which means you can easily perform set operations such as union, difference and intersect on them.) End of explanation """ # Add your code here """ Explanation: Task 2 Construct a second function called or_query that works in the same way as and_query you just implemented, but returns documents that contain at least one of the words in the query. Demonstrate the working of this second function also with an example (again, choose one that leads to fewer than 100 hits). End of explanation """ # Add your code here """ Explanation: Task 3 Show how many hits the query "the who" returns for your two query functions (and_query and or_query). End of explanation """
miaecle/deepchem
examples/tutorials/15_Synthetic_Feasibility_Scoring.ipynb
mit
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
import deepchem as dc

# Let's get some molecules to play with
from deepchem.molnet.load_function import tox21_datasets
tasks, datasets, transformers = tox21_datasets.load_tox21(featurizer='Raw', split=None, reload=False)
molecules = datasets[0].X
""" Explanation: Tutorial Part 15: Synthetic Feasibility
Synthetic feasibility is a problem when running large scale enumerations. Often molecules that are enumerated are very difficult to make and thus not worth inspection, even if their other chemical properties are good in silico. This tutorial goes through how to train the ScScore model [1].
The idea of the model is to train on pairs of molecules where one molecule is "more complex" than the other. The neural network can then produce scores which attempt to preserve this pairwise ordering of molecules. The final result is a model which can give a relative complexity of a molecule.
The paper trains on every reaction in Reaxys, declaring products more complex than reactants. Since this training set is prohibitively expensive, we will instead train on arbitrary molecules, declaring one more complex if its SMILES string is longer. In the real world you can use whatever measure of complexity makes sense for the project.
In this tutorial, we'll use the Tox21 dataset to train our simple synthetic feasibility model.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
We recommend you run this tutorial on Google colab. You'll need to run the following commands to set up your colab environment to run the notebook.
End of explanation """
from rdkit import Chem
import random
from deepchem.feat import CircularFingerprint
import deepchem as dc
import numpy as np

def create_dataset(fingerprints, smiles_lens, ds_size=100000):
    """
    fingerprints: list of np.Array fingerprints for molecules
    smiles_lens: list of int lengths of the molecules' SMILES strings
    returns:
      dc.data.Dataset for input into ScScore Model

    Dataset.X shape is (sample_id, molecule_id, features)
    Dataset.y shape is (sample_id,)
      values is 1 if the 0th index molecule is more complex
                0 if the 1st index molecule is more complex
    """
    X, y = [], []
    all_data = list(zip(fingerprints, smiles_lens))
    while len(y) < ds_size:
        i1 = random.randrange(0, len(smiles_lens))
        i2 = random.randrange(0, len(smiles_lens))
        m1 = all_data[i1]
        m2 = all_data[i2]
        if m1[1] == m2[1]:
            continue
        if m1[1] > m2[1]:
            y.append(1.0)
        else:
            y.append(0.0)
        X.append([m1[0], m2[0]])
    return dc.data.NumpyDataset(np.array(X), np.expand_dims(np.array(y), axis=1))
""" Explanation: Make The Datasets
Because ScScore is trained on relative complexities, the X tensor in our dataset has 3 dimensions (sample_id, molecule_id, features). The 1st dimension molecule_id is in [0, 1], because a sample is a pair of molecules. The label is 1 if the zeroth molecule is more complex than the first molecule.
The function create_dataset we introduce below pulls random pairs of smiles strings out of a given list and ranks them according to this complexity measure. In the real world you could use purchase cost, or number of reaction steps required as your complexity score.
End of explanation """
End of explanation """ from rdkit import Chem import random from deepchem.feat import CircularFingerprint import deepchem as dc import numpy as np def create_dataset(fingerprints, smiles_lens, ds_size=100000): """ m1: list of np.Array fingerprints for molecules m2: list of int length of a molecules SMILES string returns: dc.data.Dataset for input into ScScore Model Dataset.X shape is (sample_id, molecule_id, features) Dataset.y shape is (sample_id,) values is 1 if the 0th index molecule is more complex 0 if the 1st index molecule is more complex """ X, y = [], [] all_data = list(zip(fingerprints, smiles_lens)) while len(y) < ds_size: i1 = random.randrange(0, len(smiles_lens)) i2 = random.randrange(0, len(smiles_lens)) m1 = all_data[i1] m2 = all_data[i2] if m1[1] == m2[1]: continue if m1[1] > m2[1]: y.append(1.0) else: y.append(0.0) X.append([m1[0], m2[0]]) return dc.data.NumpyDataset(np.array(X), np.expand_dims(np.array(y), axis=1)) """ Explanation: Make The Datasets Because ScScore is trained on relative complexities we have our X tensor in our dataset has 3 dimensions (sample_id, molecule_id, features). the 1st dimension molecule_id is in [0, 1], because a sample is a pair of molecules. The label is 1 if the zeroth molecule is more complex than the first molecule. The function create_dataset we introduce below pulls random pairs of smiles strings out of a given list and ranks them according to this complexity measure. In the real world you could use purchase cost, or number of reaction steps required as your complexity score. End of explanation """ # Lets split our dataset into a train set and a test set molecule_ds = dc.data.NumpyDataset(np.array(molecules)) splitter = dc.splits.RandomSplitter() train_mols, test_mols = splitter.train_test_split(molecule_ds) """ Explanation: With our complexity ranker in place we can now construct our dataset. Let's start by loading the molecules in the Tox21 dataset into memory. 
We split the dataset at this stage to ensure that the training and test sets have non-overlapping sets of molecules.
End of explanation """
# In the paper they used 1024 bit fingerprints with chirality
n_features = 1024
featurizer = dc.feat.CircularFingerprint(size=n_features, radius=2, chiral=True)
train_features = featurizer.featurize(train_mols.X)
train_smileslen = [len(Chem.MolToSmiles(x)) for x in train_mols.X]
train_dataset = create_dataset(train_features, train_smileslen)
""" Explanation: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the code from above.
End of explanation """
from deepchem.models import ScScoreModel

# Now to create the model and train it
model = ScScoreModel(n_features=n_features)
model.fit(train_dataset, nb_epoch=20)
""" Explanation: Now that we have our dataset created, let's train a ScScoreModel on this dataset.
End of explanation """
import matplotlib.pyplot as plt
%matplotlib inline

mol_scores = model.predict_mols(test_mols.X)
smiles_lengths = [len(Chem.MolToSmiles(x)) for x in test_mols.X]
""" Explanation: Model Performance
Let's evaluate how well the model does on our holdout molecules. The ScScores should track the length of SMILES strings from never-before-seen molecules.
End of explanation """
plt.figure(figsize=(20,16))
plt.scatter(smiles_lengths, mol_scores)
plt.xlim(0,80)
plt.xlabel("SMILES length")
plt.ylabel("ScScore")
plt.show()
""" Explanation: Let's now plot the length of the SMILES string of each molecule against its ScScore using matplotlib.
End of explanation """
chbrandt/pynotes
SS82_filtering/.ipynb_checkpoints/Untitled-checkpoint.ipynb
gpl-2.0
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
 if (code_show){
 $('div.input').hide();
 } else {
 $('div.input').show();
 }
 code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')

from IPython.display import HTML
HTML('''
<figure>
<img src="Stripe82_gal_projection.png" alt="Swift observations over Stripe82">
<figcaption>Figure 1: Swift observations over Stripe82</figcaption>
</figure>
''')
""" Explanation: SS82
Talking to Bruno about his project on stacking Swift observations and my project on the Stripe82 SED, we started to think about a collaboration to create a set of deep observations from Swift data. Deep Swift data would be a great addition to the high-energy end of the Stripe82 data collection (see, for instance, LaMassa, 2016).
Bruno then searched for all Swift-XRT observations inside Stripe-82:
* RA: 310 : 60
* Dec: -1.25 : 1.25
Over the Stripe, Bruno found ~3000 observations. See Figure 1 and tables Table 1 and Table 2.
Here, I'll do the filtering of the observations to keep only those useful for Paolo's stacking. The selection looks for observations done within a time range of a few days; for instance, 20 days is the window size I'll use here.
If all you want is a look at the final/filtered catalog, go straight to the final section. Otherwise, if the code used in this filtering matters to you, you can reveal it by clicking the button below.
End of explanation """
import pandas

cat = pandas.read_csv('Swift_Master_Stripe82_groups.ascii', delim_whitespace=True)

print "Table 1: Sample of the catalog"
pandas.concat([cat.head(5), cat.tail(5)])

print "Table 2: Summary of the catalog columns"
cat.describe(include='all')
""" Explanation: The base catalog
Right below, in Table 1, we can see a sample of the catalog (the first/last five lines).
In Table 2, we see a brief description of the catalog.
End of explanation """
cat['start_time'] = pandas.to_datetime(cat['start_time'])
cat_grouped_by_target = cat[['Target_Name', 'start_time']].groupby(['Target_Name'])
cat_descr = cat_grouped_by_target.describe().unstack()
cat_time = cat_descr.sort_values([('start_time', 'count')], ascending=False)
del cat_descr
""" Explanation: Target_Name is the name of the (central) object of each observation; from it we see that we have 681 unique sources out of the 3035 observations. GroupSize is the number of overlapping observations; the average number is ~54.
Let's see how sparse the observations are in time and how they are distributed for each source.
End of explanation """
title = "Figure 2: Number of sources (Y axis) observed a given number of times (X axis)"

%matplotlib inline
from matplotlib import pyplot as plt

width = 16
height = 4
plt.figure(figsize=(width, height))

yticks = [2, 10, 50, 100, 200, 300]
xticks = range(51)
ax = cat_time[('start_time', 'count')].plot.hist(bins=xticks, xlim=(0, 50), title=title, grid=True,
                                                 xticks=xticks, yticks=yticks, align='left')
ax.set_xlabel('Number of observations (per source)')

print "Table 3: Number counts and dates (first/last) of the observations (per object)"
cat_time
""" Explanation: Number of observations
To have a clue about the number of observations done over each object we can look at the counts shown in Table 3 and the histogram below (Figure 2).
End of explanation """
print "Table 4: Observations carried out for source 'V1647ORI', sorted in time"
g = cat_grouped_by_target.get_group('V1647ORI')
g_sorted = g.sort_values('start_time')
g_sorted
""" Explanation: Filtering the data
First, a closer look at an example
To get a better idea of what we should expect regarding the observation times of these sources, I'll take a particular one -- V1647ORI -- and see what we have for it.
End of explanation """
def find_clustered_observations(sorted_target_observations, time_range=10):
    # Let's select a 'time_range' days window to select valid observations
    window_size = time_range
    g_sorted = sorted_target_observations

    # an ordered dictionary works as a 'set' structure
    from collections import OrderedDict
    selected_allObs = OrderedDict()

    # define an identifier for each cluster of observations, to ease future filtering
    group_obs = 1

    _last_time = None
    _last_id = None
    for _row in g_sorted.iterrows():
        ind, row = _row
        if _last_time is None:
            _last_time = row.start_time
            _last_id = ind
            continue
        _delta = row.start_time - _last_time
        if _delta.days <= window_size:
            selected_allObs[_last_id] = group_obs
            selected_allObs[ind] = group_obs
        else:
            if len(selected_allObs):
                group_obs = selected_allObs.values()[-1] + 1
        _last_time = row.start_time
        _last_id = ind
    return selected_allObs

from collections import OrderedDict
obs_indx = OrderedDict()
for name, group in cat_grouped_by_target:
    g_sorted = group.sort_values('start_time')
    filtered_indxs = find_clustered_observations(g_sorted, time_range=20)
    obs_indx.update(filtered_indxs)

import pandas
obsChunks_forFilteringCat = pandas.DataFrame(obs_indx.values(), columns=['obs_chunk'], index=obs_indx.keys())
# obsChunks_forFilteringCat.sort_index()

print "Table 5: original catalog with column 'obs_chunk' to flag which rows passed the filtering (non-NA values)."
cat_with_obsChunksFlag = cat.join(obsChunks_forFilteringCat)
cols = list(cat_with_obsChunksFlag.columns)
cols.insert(2, cols.pop(-1))
cat_with_obsChunksFlag = cat_with_obsChunksFlag.ix[:, cols]
cat_with_obsChunksFlag
""" Explanation: If we consider each group of observations of interest -- let me call them "chunks" -- to be observations that are no more than "X" days apart (for example, X=20 days), we see from this example that more than one "chunk" of observations can exist per object.
Here, for instance, rows 347,344,343,346 and 338,339,336,335,341 form the clusters of observations of our interest, "chunk-1" and "chunk-2", respectively.
To select the candidates we need to run a window function over the 'start_time' sorted list, where the function takes two elements (i.e., observations) and asks their distance in time. If the pair of observations is less than, say, 20 days apart, they are selected for future processing.
Applying the filter to all objects
Now, applying a 20-day window as the selection criterion to all objects in our catalog, we end up with 2254 observations, done over 320 objects. Table 5 adds this information through the column "obs_chunk", where a "Not-Available" value marks the observations that did not pass the filtering.
Note: obs_chunk values identify the groupings -- "chunks" -- formed within each object's set of observations. They are unique among each object's observations, but not across the entire catalog.
End of explanation """
cat_filtered = cat_with_obsChunksFlag.dropna(subset=['obs_chunk'])
cat_filtered

cat_filtered.describe(include='all')

cat_filtered.to_csv('Swift_Master_Stripe82_groups_filtered.csv')
""" Explanation: Filtered catalog
And here is the final catalog, where the rows (i.e., observations) out of our interest (i.e., with "obs_chunk == NaN") were removed. This catalog is written to 'Swift_Master_Stripe82_groups_filtered.csv'.
End of explanation """
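The windowing idea above -- keep every pair of consecutive, time-sorted observations that are at most `time_range` days apart, and start a new chunk id otherwise -- can be sketched on toy timestamps. This is a minimal Python 3 sketch, independent of the catalog's DataFrame; the name `cluster_dates` and the sample dates are hypothetical, introduced only for illustration:

```python
from datetime import date


def cluster_dates(sorted_dates, window_days=20):
    """Assign a chunk id to every date within `window_days` of its
    predecessor; isolated dates get no id (like the NA-flagged rows)."""
    chunks = {}          # date -> chunk id
    chunk_id = 1
    last = None
    for d in sorted_dates:
        if last is not None and (d - last).days <= window_days:
            chunks.setdefault(last, chunk_id)  # predecessor joins the chunk too
            chunks[d] = chunk_id
        elif chunks:
            chunk_id = max(chunks.values()) + 1  # gap too large: open a new chunk
        last = d
    return chunks


obs = [date(2010, 1, 1), date(2010, 1, 5), date(2010, 3, 1), date(2010, 3, 10)]
print(cluster_dates(obs))  # first two dates form chunk 1, last two form chunk 2
```

In the notebook itself the same pairwise comparison runs over each source's sorted `start_time` column; the sketch only mirrors the chunk-numbering behaviour.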
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
Sequence Models/Operations+on+word+vectors+-+v1.ipynb
mit
import numpy as np
from w2v_utils import *
""" Explanation: Operations on word vectors
Welcome to your first assignment of this week! Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
After this assignment you will be able to:

Load pre-trained word vectors, and measure similarity using cosine similarity
Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
Modify word embeddings to reduce their gender bias

Let's get started! Run the following cell to load the packages you will need.
End of explanation """
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
""" Explanation: Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.
End of explanation """
# GRADED FUNCTION: cosine_similarity

def cosine_similarity(u, v):
    """
    Cosine similarity reflects the degree of similarity between u and v

    Arguments:
        u -- a word vector of shape (n,)
        v -- a word vector of shape (n,)

    Returns:
        cosine_similarity -- the cosine similarity between u and v defined by the formula above.
    """

    distance = 0.0

    ### START CODE HERE ###
    # Compute the dot product between u and v (≈1 line)
    dot = np.dot(u, v)
    # Compute the L2 norm of u (≈1 line)
    norm_u = np.sqrt(np.sum(u**2))
    # Compute the L2 norm of v (≈1 line)
    norm_v = np.sqrt(np.sum(v**2))
    # Compute the cosine similarity defined by formula (1) (≈1 line)
    cosine_similarity = dot / (norm_u * norm_v)
    ### END CODE HERE ###

    return cosine_similarity

father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]

print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ", cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ", cosine_similarity(france - paris, rome - italy))
""" Explanation: You've loaded:
- words: set of words in the vocabulary.
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.

You've seen that one-hot vectors do not do a good job capturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$.
If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> Figure 1: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
Exercise: Implement the function cosine_similarity() to evaluate similarity between word vectors.
Reminder: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
End of explanation """
# GRADED FUNCTION: complete_analogy

def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
    """
    Performs the word analogy task as explained above: a is to b as c is to ____.

    Arguments:
    word_a -- a word, string
    word_b -- a word, string
    word_c -- a word, string
    word_to_vec_map -- dictionary that maps words to their corresponding vectors.

    Returns:
    best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
    """

    # convert words to lower case
    word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()

    ### START CODE HERE ###
    # Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
    e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
    ### END CODE HERE ###

    words = word_to_vec_map.keys()
    max_cosine_sim = -100    # Initialize max_cosine_sim to a large negative number
    best_word = None         # Initialize best_word with None, it will help keep track of the word to output

    # loop over the whole word vector set
    for w in words:
        # to avoid best_word being one of the input words, pass on them.
        if w in [word_a, word_b, word_c]:
            continue

        ### START CODE HERE ###
        # Compute cosine similarity between the combined_vector and the current word (≈1 line)
        cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)

        # If the cosine_sim is more than the max_cosine_sim seen so far,
        # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
        if cosine_sim > max_cosine_sim:
            max_cosine_sim = cosine_sim
            best_word = w
        ### END CODE HERE ###

    return best_word
""" Explanation: Expected Output:
<table>
<tr> <td> **cosine_similarity(father, mother)** = </td> <td> 0.890903844289 </td> </tr>
<tr> <td> **cosine_similarity(ball, crocodile)** = </td> <td> 0.274392462614 </td> </tr>
<tr> <td> **cosine_similarity(france - paris, rome - italy)** = </td> <td> -0.675147930817 </td> </tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"a is to b as c is to ____"</font>. An example is <font color='brown'> 'man is to woman as king is to queen' </font>. In detail, we are trying to find a word d, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
Exercise: Complete the code below to be able to perform word analogies!
End of explanation """
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
    print('{} -> {} :: {} -> {}'.format(*triad, complete_analogy(*triad, word_to_vec_map)))
""" Explanation: Run the cell below to test your code, this may take 1-2 minutes.
End of explanation """
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
""" Explanation: Expected Output:
<table>
<tr> <td> **italy -> italian** :: </td> <td> spain -> spanish </td> </tr>
<tr> <td> **india -> delhi** :: </td> <td> japan -> tokyo </td> </tr>
<tr> <td> **man -> woman ** :: </td> <td> boy -> girl </td> </tr>
<tr> <td> **small -> smaller ** :: </td> <td> large -> larger </td> </tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:

Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.

Even though you have finished the graded portions, we recommend you take a look at the rest of this notebook too.
Congratulations on finishing the graded portions of this notebook!
3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender.
You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word woman, and $e_{man}$ corresponds to the word vector corresponding to the word man. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
End of explanation """
print('List of names and their similarities with constructed vector:')

# girls and boys names
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']

for w in name_list:
    print(w, cosine_similarity(word_to_vec_map[w], g))
""" Explanation: Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
End of explanation """
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior', 'doctor', 'tree', 'receptionist',
             'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
    print(w, cosine_similarity(word_to_vec_map[w], g))
""" Explanation: As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
End of explanation """
def neutralize(word, g, word_to_vec_map):
    """
    Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
    This function ensures that gender neutral words are zero in the gender subspace.

    Arguments:
        word -- string indicating the word to debias
        g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
        word_to_vec_map -- dictionary mapping words to their corresponding vectors.

    Returns:
        e_debiased -- neutralized word vector representation of the input "word"
    """

    ### START CODE HERE ###
    # Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
    e = word_to_vec_map[word]

    # Compute e_biascomponent using the formula given above. (≈ 1 line)
    e_biascomponent = np.dot(e, g) / np.sum(g**2) * g

    # Neutralize e by subtracting e_biascomponent from it
    # e_debiased should be equal to its orthogonal projection. (≈ 1 line)
    e_debiased = e - e_biascomponent
    ### END CODE HERE ###

    return e_debiased

e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))

e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
""" Explanation: Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to Bolukbasi et al., 2016. Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$.
In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> Figure 2: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
Exercise: Implement neutralize() to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!-- **Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$: $$u = u_B + u_{\perp}$$ where : $u_B = $ and $ u_{\perp} = u - u_B $ !-->
End of explanation """
def equalize(pair, bias_axis, word_to_vec_map):
    """
    Debias gender specific words by following the equalize method described in the figure above.

    Arguments:
    pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
    bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
    word_to_vec_map -- dictionary mapping words to their corresponding vectors

    Returns
    e_1 -- word vector corresponding to the first word
    e_2 -- word vector corresponding to the second word
    """

    ### START CODE HERE ###
    # Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
    w1, w2 = pair
    e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]

    # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
    mu = (e_w1 + e_w2) / 2

    # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
    mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis**2) * bias_axis
    mu_orth = mu - mu_B

    # Step 4: Set e1_orth and e2_orth to be equal to mu_orth (≈2 lines)
    e1_orth = mu_orth
    e2_orth = mu_orth

    # Step 5: Adjust the Bias part of u1 and u2 using the formulas given in the figure above (≈2 lines)
    e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth**2))) * (e_w1 - mu_orth - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
    e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth**2))) * (e_w2 - mu_orth - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)

    # Step 6: Debias by equalizing u1 and u2 to the sum of their projections (≈2 lines)
    e1 = e_w1B + e1_orth
    e2 = e_w2B + e2_orth
    ### END CODE HERE ###

    return e1, e2

print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
""" Explanation: Expected Output: The second result is essentially 0, up to numerical roundoff (on the order of $10^{-17}$).
<table>
<tr> <td> **cosine similarity between receptionist and g, before neutralizing:** : </td> <td> 0.330779417506 </td> </tr>
<tr> <td> **cosine similarity between receptionist and g, after neutralizing:** : </td> <td> -3.26732746085e-17 </td> </tr>
</table>
3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor."
Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equi-distant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized steps are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis} \tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$e_{w1B} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{(e_{\text{w1}} - \mu_{\perp}) - \mu_B} {||(e_{\text{w1}} - \mu_{\perp}) - \mu_B||_2} \tag{7}$$
$$e_{w2B} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{(e_{\text{w2}} - \mu_{\perp}) - \mu_B} {||(e_{\text{w2}} - \mu_{\perp}) - \mu_B||_2} \tag{8}$$
$$e_1 = e_{w1B} + \mu_{\perp} \tag{9}$$
$$e_2 = e_{w2B} + \mu_{\perp} \tag{10}$$
Exercise: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
End of explanation """
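As a quick numeric sanity check of formulas (2)-(3) above, the debiased vector should be exactly orthogonal to the bias axis. Here is a plain-Python sketch on a made-up 2-dimensional toy vector instead of 50-dimensional GloVe embeddings; the names `dot` and `neutralize_toy` and the toy numbers are hypothetical, for illustration only:

```python
def dot(a, b):
    # plain dot product over python lists
    return sum(x * y for x, y in zip(a, b))


def neutralize_toy(e, g):
    # e_bias = (e . g) / ||g||^2 * g    (formula 2)
    scale = dot(e, g) / dot(g, g)
    e_bias = [scale * gi for gi in g]
    # e_debiased = e - e_bias           (formula 3)
    return [ei - bi for ei, bi in zip(e, e_bias)]


g = [3.0, 4.0]   # toy "gender" direction
e = [2.0, 1.0]   # toy word vector
e_debiased = neutralize_toy(e, g)
print(dot(e_debiased, g))  # essentially 0: the bias component is gone
```

The same orthogonality is what the notebook verifies when the cosine similarity of the neutralized "receptionist" vector with $g$ comes out on the order of $10^{-17}$.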
iglpdc/comp-phys
01_01_euler.ipynb
mit
t0 = 10.      # initial temperature
ts = 83.      # temp. of the environment
r = 0.1       # cooling rate
dt = 0.05     # time step
tmax = 60.    # maximum time
nsteps = int(tmax/dt)  # number of steps

t = t0
for i in range(1,nsteps+1):
    new_t = t - r*(t-ts)*dt
    t = new_t
    print i,i*dt, t
    # we can also do t = t - r*(t-ts)*dt
""" Explanation: author:
- 'Adrian E. Feiguin'
title: 'Computational Physics'
...
Ordinary differential equations
Let's consider a simple 1st order equation:
$$\frac{dy}{dx}=f(x,y)$$
To solve this equation with a computer we need to discretize the differences: we have to convert the differential equation into a "finite differences" equation. The simplest solution is Euler's method.
Euler's method
Suppose that at a point $x_0$, the function $y$ has a value $y_0$. We want to find the approximate value of $y$ at a point $x_1$ close to $x_0$, $x_1=x_0+\Delta x$, with $\Delta x$ small. We assume that $f$, the rate of change of $y$, is constant in this interval $\Delta x$. Therefore we find:
$$\begin{eqnarray} dx \approx \Delta x &=& x_1-x_0, \\ dy \approx \Delta y &=& y_1-y_0, \end{eqnarray}$$
with $y_1=y(x_1)=y(x_0+\Delta x)$. Then we re-write the differential equation in terms of discrete differences as:
$$\frac{\Delta y}{\Delta x}=f(x,y)$$
or
$$\Delta y = f(x,y)\Delta x$$
and approximate the value of $y_1$ as
$$y_1=y_0+f(x_0,y_0)(x_1-x_0)$$
We can generalize this formula to find the value of $y$ at $x_2=x_1+\Delta x$ as
$$y_{2}=y_1+f(x_1,y_1)\Delta x,$$
or in the general case:
$$y_{n+1}=y_n+f(x_n,y_n)\Delta x$$
This is a good approximation as long as $\Delta x$ is "small". What is small? It depends on the problem, but it is basically set by the "rate of change", or "smoothness", of $f$: $f(x)$ has to behave smoothly and without rapid variations in the interval $\Delta x$.
Notice that Euler's method is equivalent to a 1st order Taylor expansion about the point $x_0$. The "local error" in calculating $y_1$ is then $O(\Delta x^2)$.
If we use the method $N$ times to calculate $N$ consecutive points, the propagated "global" error will be $NO(\Delta x^2)\approx O(\Delta x)$. This error decreases linearly with decreasing step, so we need to halve the step size to reduce the error in half. The numerical work for each step consists of a single evaluation of $f$.
Exercise 1.1: Newton's law of cooling
If the temperature difference between an object and its surroundings is small, the rate of change of the temperature of the object is proportional to the temperature difference:
$$\frac{dT}{dt}=-r(T-T_s),$$
where $T$ is the temperature of the body, $T_s$ is the temperature of the environment, and $r$ is a "cooling constant" that depends on the heat transfer mechanism, the contact area with the environment and the thermal properties of the body. The minus sign appears because if $T>T_s$, the temperature must decrease.
Write a program to calculate the temperature of a body at a time $t$, given the cooling constant $r$ and the temperature of the body at time $t=0$. Plot the results for $r=0.1\frac{1}{min}$; $T_0=83^{\circ} C$ using different intervals $\Delta t$ and compare with exact (analytical) results.
End of explanation """
%matplotlib inline
import numpy as np
from matplotlib import pyplot
""" Explanation: Let's try plotting the results. We first need to import the required libraries and methods.
End of explanation """
my_time = np.zeros(nsteps)
my_temp = np.zeros(nsteps)
""" Explanation: Next, we create numpy arrays to store the (x,y) values.
End of explanation """
t = t0
my_temp[0] = t0
for i in range(1,nsteps):
    t = t - r*(t-ts)*dt
    my_time[i] = i*dt
    my_temp[i] = t

pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
""" Explanation: We have to rewrite the loop to store the values in the arrays. Remember that numpy arrays start from 0.
End of explanation """
my_time = np.linspace(0.,tmax,nsteps)

pyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)
pyplot.xlabel('time')
pyplot.ylabel('temperature');
""" Explanation: We could have saved effort by defining
End of explanation """
def euler(y, f, dx):
    """Computes y_new = y + f*dx

    Parameters
    ----------
    y : float
        old value of y_n at x_n
    f : float
        first derivative f(x,y) evaluated at (x_n,y_n)
    dx : float
        x step
    """
    return y + f*dx

t = t0
for i in range(1,nsteps):
    t = euler(t, -r*(t-ts), dt)
    my_temp[i] = t
""" Explanation: Alternatively, and in order to reuse code in future problems, we could have created a function.
End of explanation """
euler = lambda y, f, dx: y + f*dx
""" Explanation: Actually, for this particularly simple case, calling a function may introduce unnecessary overhead, but it is an example that we will find useful for future applications. For a simple function like this we could have used a "lambda" function (more about lambda functions <a href="http://www.secnetix.de/olli/Python/lambda_functions.hawk">here</a>).
End of explanation """
dt = 1.
my_color = ['#003366','#663300','#660033','#330066']
for j in range(0,4):
    nsteps = int(tmax/dt)  # the arrays will have different size for different time steps
    my_time = np.linspace(dt,tmax,nsteps)
    my_temp = np.zeros(nsteps)
    t = t0
    for i in range(1,nsteps):
        t = euler(t, -r*(t-ts), dt)
        my_temp[i] = t
    pyplot.plot(my_time, my_temp, color=my_color[j], ls='-', lw=3)
    dt = dt/2.

pyplot.xlabel('time');
pyplot.ylabel('temperature');
pyplot.xlim(8,10)
pyplot.ylim(50,58);
""" Explanation: Now, let's study the effects of different time steps on the convergence:
End of explanation """
gonzmg88/cnn_basic_course
transfer_learning.ipynb
gpl-3.0
import dogs_vs_cats as dvc
all_files = dvc.image_files()
""" Explanation: Pretrained CNN: transfer learning
Nature article: Dermatologist-level classification of skin cancer with deep neural networks
End of explanation """
from keras.applications.nasnet import NASNetMobile
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
import numpy as np

# https://keras.io/applications/#vgg16
model = NASNetMobile(weights='imagenet')

input_image_shape = (224, 224, 3)

img_path = all_files[10]
img = image.load_img(img_path, target_size=input_image_shape[:2])  # (height, width) = (224, 224)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
print('Predicted:', decode_predictions(preds, top=3)[0])

model.summary()
""" Explanation: Imagenet pretrained models
Documentation from: https://keras.io/applications/. In the keras.applications namespace we have the top-accuracy solutions from the ImageNet 2012 classification contest.
End of explanation """
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=5))

from IPython.display import Image
Image(img_path)

print(preds.shape)

from keras.applications.imagenet_utils import CLASS_INDEX
# Imagenet 1000 classes
CLASS_INDEX

# predict a set of images
n_images = 10
x = np.zeros((n_images, 224, 224, 3))  # channels-last, matching img_to_array
for i, img_path in enumerate(all_files[0:n_images]):
    img = image.load_img(img_path, target_size=(224, 224))
    x[i] = image.img_to_array(img)

# preprocess and predict all together
x_preprocessed = preprocess_input(x)
preds = model.predict(x_preprocessed, verbose=1)
print("")
print(preds.shape)

dec_preds = decode_predictions(preds, top=5)
dec_preds

from IPython.display import Image, display
for img_path, dec_pred in zip(all_files[0:n_images], dec_preds):
    display(Image(img_path, width="120px", height="120px"))
    print(" ".join(["%s (prob: %.3f)" % (elm[1], elm[2]) for elm in dec_pred]))
""" Explanation: Imagenet 1000 classes:
* http://image-net.org/explore
End of explanation """
# load model without top layer
n_images_train = 500
n_images_test = 500
input_image_shape = (3, 224, 224)
train_features, train_labels, train_files, \
    test_features, test_labels, test_files = dvc.training_test_datasets(all_files,
                                                                        n_images_train, n_images_test,
                                                                        input_image_shape)

# load_img from keras.preprocessing loads the images in [0,255] scale
train_features = preprocess_input(train_features)
test_features = preprocess_input(test_features)

from keras.applications.vgg16 import VGG16
from keras.models import Model

base_model = VGG16(weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('fc2').output)

print("Predicting train images")
train_features_cnn = model.predict(train_features, verbose=1)
print("Predicting test images")
test_features_cnn = model.predict(test_features, verbose=1)

train_features_cnn.shape

from sklearn import svm
from sklearn.model_selection import GridSearchCV

tuned_parameters = {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}
clf = GridSearchCV(svm.SVC(C=1), tuned_parameters, cv=5, n_jobs=7)
clf.fit(train_features_cnn, train_labels)

clf.best_estimator_

print("Train score: {}".format(clf.score(train_features_cnn, train_labels)))
print("Test score: {}".format(clf.score(test_features_cnn, test_labels)))
""" Explanation: Using pretrained CNN as feature extractors
End of explanation """
# Source notebook: kmunve/APS, aps/notebooks/meps_det_pp_1km.ipynb (MIT license)
# ensure loading of APS modules
import sys, os
sys.path.append(r'C:\Users\kmu\PycharmProjects\APS')
print(sys.path)

# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('seaborn-notebook')
import matplotlib.patches as patches
plt.rcParams['figure.figsize'] = (14, 6)
%matplotlib inline

import datetime
import numpy as np
import netCDF4

import warnings
warnings.filterwarnings("ignore")

from aps.load_region import load_region, clip_region
from aps.analysis import describe

# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Numpy version: ', np.__version__)
print('Matplotlib version: ', matplotlib.__version__)
print('Today: ', datetime.date.today())

# Load region mask - only for data on 1km xgeo-grid
region_mask, y_min, y_max, x_min, x_max = load_region(3007)
"""
Explanation: Analysis of Meps data
Data is provided by MET Norway through thredds.met.no. The spatial resolution is either 2.5 km or 0.5 km, which is regridded to 1 km using Fimex.
Imports and setup
End of explanation
"""
nc_kmu = netCDF4.Dataset(r"\\hdata\grid\tmp\kmu\meps\meps_det_pp_1km_2018012406.nc", "r")

time_v = nc_kmu.variables['time']

# Choose a time-step
t_index = 6
# Choose a pressure level (if applicable)
p_index = 12 # 12=1000hPa, 11=925hPa, 10=850hPa, ..., 7=500hPa, ..., 0=50hPa in arome_metcoop_test

ts = netCDF4.num2date(time_v[t_index], time_v.units)
print(ts)

# clouds
cloud_cover = clip_region(nc_kmu.variables['cloud_area_fraction'], region_mask, t_index, y_min, y_max, x_min, x_max)
low_clouds = clip_region(nc_kmu.variables['low_type_cloud_area_fraction'], region_mask, t_index, y_min, y_max, x_min, x_max)
medium_clouds = clip_region(nc_kmu.variables['medium_type_cloud_area_fraction'], region_mask, t_index, y_min, y_max, x_min, x_max)
high_clouds = clip_region(nc_kmu.variables['high_type_cloud_area_fraction'], region_mask, t_index, y_min, y_max, x_min, x_max)

print(cloud_cover.shape, nc_kmu.variables['cloud_area_fraction'].shape)

plt.imshow(nc_kmu.variables['air_temperature_2m'][t_index, 0,:,:])

f, axes = plt.subplots(nrows=2, ncols=2, figsize=(12,7))
colormap = plt.cm.PuBuGn
for ax, data, tle in zip(axes.flat, [cloud_cover, low_clouds, medium_clouds, high_clouds], ["total", "low", "med", "high"]):
    im = ax.imshow(data, cmap=colormap, vmin=0, vmax=100)
    ax.set_title(tle)

f.subplots_adjust(right=0.8)
cbar_ax = f.add_axes([0.85, 0.15, 0.05, 0.7])
cb = f.colorbar(im, cax=cbar_ax)
cb.set_label('Cloud fraction (%)')
plt.show()

# mean cloud fraction over the clipped region (cloud_area_fraction is given in %)
a = np.sum(cloud_cover) / cloud_cover.size
print("{0:.2f} % cloud cover at {1}".format(a, ts))
"""
Explanation: Cloud area fractions from meps_det_pp
Only variable "cloud_fraction" is contained in the default NVE extraction meps_det_pp...nc.
End of explanation
"""
air_temperature = clip_region(nc_kmu.variables['air_temperature_2m'], region_mask, t_index, y_min, y_max, x_min, x_max)
altitude = clip_region(nc_kmu.variables['altitude'], region_mask, t_index, y_min, y_max, x_min, x_max)

f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,7))
im1 = ax1.imshow(air_temperature-273.15, cmap=plt.cm.seismic, vmin=-10, vmax=10)
im2 = ax2.imshow(altitude, cmap=plt.cm.Greys, vmin=0, vmax=2500)
f.subplots_adjust(right=0.8)
cbar_ax = f.add_axes([0.85, 0.15, 0.05, 0.7])
cb = f.colorbar(im1, cax=cbar_ax)
cb.set_label('Temperature (C)')
plt.show()

# first attempt: boolean indexing flattens the grid and only applies the lower bound
fl = altitude[air_temperature>272.65]
# masked version: keeps the 2-D shape and applies both bounds
# (272.65 K to 273.65 K, i.e. 0 C +/- 0.5 K)
alt = np.ma.masked_where(air_temperature<272.65, altitude)
fl = np.ma.masked_where(air_temperature>273.65, alt)
print(fl, fl.shape, type(fl))

plt.imshow(fl, vmin=0, vmax=1500)

plt.plot(fl.flatten())
print(np.mean(fl.flatten()))

for p in [0,5,25,50,75,95,100]:
    print(p, ": ", np.percentile(fl.flatten(), p))
"""
Explanation: APS freezing level
End of explanation
"""
region_mask, y_min, y_max, x_min, x_max = load_region(4001, local=True)

TA = clip_region(nc_kmu.variables['air_temperature_2m'], region_mask, t_index, y_min, y_max, x_min, x_max)
alt = clip_region(nc_kmu.variables['altitude'], region_mask, t_index, y_min, y_max, x_min, x_max)

print(describe(TA))
print(describe(alt))

f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,7))
im1 = ax1.imshow(TA-273.15, cmap=plt.cm.seismic, vmin=-10, vmax=10)
im2 = ax2.imshow(alt, cmap=plt.cm.Greys, vmin=0, vmax=2500)
f.subplots_adjust(right=0.8)
cbar_ax = f.add_axes([0.85, 0.15, 0.05, 0.7])
cb = f.colorbar(im1, cax=cbar_ax)
cb.set_label('Temperature (C)')
plt.show()

nc_kmu.close()
"""
Explanation: Test local monitoring regions
End of explanation
"""
nc_ex = netCDF4.Dataset(r"\\hdata\grid\tmp\kmu\meps\meps_det_extracted_1km_latest.nc", "r")

time_v_ex = nc_ex.variables['time']

# Choose a time-step
t_index = 6
# Choose a pressure level (if applicable)
p_index = 12 # 12=1000hPa, 11=925hPa, 10=850hPa, ..., 7=500hPa, ..., 0=50hPa in arome_metcoop_test

y_dim = nc_ex.dimensions['y'].size

ts = netCDF4.num2date(time_v_ex[t_index], time_v_ex.units)
print(ts)

isot = clip_region(nc_ex.variables['altitude_of_0_degree_isotherm'], region_mask, t_index, y_min, y_max, x_min, x_max)
print(np.median(isot),np.mean(isot),np.min(isot),np.max(isot), np.nanpercentile(isot, 95))

isot = nc_ex.variables['altitude_of_0_degree_isotherm'][t_index, 0,:,:]
print(np.median(isot),np.mean(isot),np.min(isot),np.max(isot), np.nanpercentile(isot, 95))

wetb = nc_ex.variables['altitude_of_isoTprimW_equal_0'][t_index, 0, :, :]
print(np.median(wetb),np.mean(wetb),np.min(wetb),np.max(wetb), np.percentile(wetb, 50), np.nanpercentile(wetb, 95))

plt.imshow(wetb, vmin=-1500, vmax=1500)
plt.colorbar()

plt.imshow(np.flipud(wetb), vmin=-1500, vmax=1500)
plt.colorbar()

# Load region mask - only for data on 1km xgeo-grid
region_mask, y_min, y_max, x_min, x_max = load_region(3034)

#wetb_clip = clip_region(nc_ex.variables['altitude_of_isoTprimW_equal_0'], region_mask, t_index, y_min, y_max, x_min, x_max)
wetb_clip = region_mask * np.flipud(nc_ex.variables['altitude_of_isoTprimW_equal_0'][t_index, 0, (y_dim-y_max):(y_dim-y_min), x_min:x_max])
print(np.nanmedian(wetb_clip),np.nanmean(wetb_clip),np.nanmin(wetb_clip),np.nanmax(wetb_clip), np.nanpercentile(wetb_clip, 50), np.nanpercentile(wetb_clip, 95))

plt.imshow(wetb_clip, vmin=-1500, vmax=1500)
plt.colorbar();
#TODO: Currently wrong - 3007 Vest-Finnmark should be blue - flip up-down
# I updated with y_dim-y_max:y_dim-y_min - should be correct now, but check

# Load region mask - only for data on 1km xgeo-grid
region_mask, y_min, y_max, x_min, x_max = load_region(3024)

for k, f in enumerate(range(0, 60, 6), start=1):
    t = []
    sl = []
    for i, d in enumerate(range(f, f+6), start=1):
        wetb_clip = region_mask * np.flipud(nc_ex.variables['altitude_of_isoTprimW_equal_0'][d, 0, (y_dim-y_max):(y_dim-y_min), x_min:x_max])
        _t = netCDF4.num2date(time_v_ex[d],
time_v_ex.units)
        _sl = np.nanmedian(wetb_clip)
        print("\t", i, f, d, _t, _sl, np.nanmean(wetb_clip))
        t.append(i)
        sl.append(_sl)
    sl = np.array(sl)
    print(np.mean(sl))
    plt.plot(t, sl);

plt.imshow(isot-wetb)
plt.colorbar()

plt.imshow(region_mask)

_vr = netCDF4.Dataset(r"../data/terrain_parameters/VarslingsOmr_2017.nc", "r")
_regions = _vr.variables["VarslingsOmr_2017"][:]

plt.imshow(_regions, vmin=3000, vmax=3048, cmap=plt.get_cmap('gist_ncar'))
"""
Explanation: Test alternative data on freezing level from meps_det_extracted
We can use the 0-degree isotherm layer, but it has "no data" where the 0-degree altitude is below the terrain elevation.
We can use the wet-bulb temperature instead, which should give the correct temperature where the relative humidity is 100% (say, during precipitation). However, where the relative humidity is low the values can diverge substantially: the wet-bulb temperature can be 5 to 10 K lower, corresponding to a 500 to 1500 meter difference in the estimated level :-(
End of explanation
"""
_local = _vr.variables["LokalOmr_2018"][:]
plt.imshow(_local)
"""
Explanation: We do state the temperature, so it is really only the snowfall level that is relevant; the 0-degree level does not need to be included. So instead of "5 mm per day, up to 12 mm in the most exposed area. Strong breeze (liten kuling) from the west. -8 °C to -2 °C at 1100 m a.s.l. Above-freezing temperatures up to 500 m a.s.l. in the afternoon. Cloudy." we can write "5 mm per day, up to 12 mm in the most exposed area. Rain up to 500 m a.s.l. in the afternoon. Strong breeze (liten kuling) from the west. -8 °C to -2 °C at 1100 m a.s.l. Cloudy."
End of explanation
"""
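The per-region statistic used throughout this notebook (median wet-bulb 0-degree altitude inside a forecast-region mask, with NaN outside the region) reduces to `np.nanmedian(mask * field)`. A minimal sketch with a made-up 2x2 grid, purely for illustration:

```python
import numpy as np

def regional_median(field, mask):
    """Median of a gridded field inside a region mask (mask: 1 inside, NaN outside)."""
    return float(np.nanmedian(field * mask))

# toy grid of wet-bulb 0-degree altitudes (m); the region covers the left column only
field = np.array([[400.0, 900.0],
                  [600.0, 1200.0]])
mask = np.array([[1.0, np.nan],
                 [1.0, np.nan]])
print(regional_median(field, mask))  # 500.0
```

Multiplying by a NaN mask propagates NaN into the out-of-region cells, so `np.nanmedian` (and the `np.nanmean`/`np.nanpercentile` calls above) see only the cells inside the region.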