hiteshagrawal/python | udacity/nano-degree/ipython_notebook_tutorial (1).ipynb | gpl-2.0 | # Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
"""
Explanation: Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
End of explanation
"""
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
"""
Explanation: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
End of explanation
"""
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
"""
Explanation: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
"""
|
megatharun/basic-python-for-researcher | .ipynb_checkpoints/Tutorial 7 - Data Visualization and Plotting-checkpoint.ipynb | artistic-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: <span style="color: #B40486">BASIC PYTHON FOR RESEARCHERS</span>
by Megat Harun Al Rashid bin Megat Ahmad
last updated: April 14, 2016
<span style="color: #29088A">7. Data Visualization and Plotting</span>
The <span style="color: #0000FF">$Matplotlib$</span> library can be considered the default data visualization and plotting tool for Python, even though there are other libraries that can be used, e.g. <span style="color: #0000FF">$Chaco$</span>, <span style="color: #0000FF">$PyX$</span>, <span style="color: #0000FF">$Bokeh$</span> and <span style="color: #0000FF">$Lightning$</span>. In this tutorial, we will focus exclusively on <span style="color: #0000FF">$Matplotlib$</span> and explore the basics and a few of its many advanced features.
7.1 Basic Plotting
Users can start plotting with a minimal amount of code to create a simple line plot. In a 2-dimensional plot, data for the x and y coordinates can be represented by lists or <span style="color: #0000FF">$NumPy$</span> arrays. Users can import the plotting function from the <span style="color: #0000FF">$Matplotlib$</span> library as well as specify how to display it.
End of explanation
"""
# Data in lists
x = [1,2,3,4,5]
y = [21,34,78,12,9]
# Plot x against y
plt.plot(x, y)
"""
Explanation: Here the line (<span style="color: #B404AE">%</span>matplotlib inline) means that the plot, when created, will be displayed in the working document. The second line imports the plotting function of the <span style="color: #0000FF">$Matplotlib$</span> library.
End of explanation
"""
%matplotlib qt
plt.plot(x, y)
"""
Explanation: Replacing the first line with <span style="color: #B404AE">%</span>matplotlib qt, an interactive plotting graphical user interface (GUI) will be displayed instead in a separate window (similar to MATLAB).
End of explanation
"""
%matplotlib inline
plt.plot(x, y)
plt.title('Plot Title')
plt.xlabel("label x") # naming the x-axis label
plt.ylabel("label y") # naming the y-axis label
plt.legend(['data'], loc='upper right') # displaying legend and its position
# saving the image plot with specific resolution
plt.savefig("Tutorial7/First_Image.png", dpi = 300)
"""
Explanation: <img src="Tutorial7/MatplotlibQt.png" width="500" >
This GUI allows users to interactively inspect the plot, e.g. enlarging a specific plot region for details, moving the plot on the canvas, changing the axis ranges and colors, etc.
However, we will only use (<span style="color: #B404AE">%</span>matplotlib inline) for the remainder of this tutorial. Let us see some other basic code needed to prepare more than a simple plot.
End of explanation
"""
import numpy as np
x = [1.0,2.0,3.0,4.0,5.0]
y1 = [21,34,78,12,9]
y2 = [10,25,63,26,15]
plt.figure(figsize=(12, 6)) # set the figure canvas size before plotting
# Two plots here in the same graph, each on top of the other going down the list
plt.plot(x,y2,'b--s', ms = 8, mfc = 'r', label='data 1')
plt.plot(x,y1,'g-o', linewidth = 4, ms = 12, mfc = 'magenta', alpha = 0.5, label='data 2')
plt.xlim(0.5,5.5) # setting x-axis range
plt.ylim(0,100) # setting y-axis range
plt.xticks(np.linspace(0.5,5.5,11)) # creating x ticks using numpy array
plt.yticks(np.linspace(0,100,11)) # creating y ticks using numpy array
plt.grid() # showing grid (according to ticks)
plt.xlabel("label x") # naming the x-axis label
plt.ylabel("label y") # naming the y-axis label
plt.legend(loc='upper right') # displaying legend and its position
# saving the image plot with specific resolution
plt.savefig("Tutorial7/Second_Image.eps", dpi = 100)
"""
Explanation: In the above example, the same x and y data are used (both of which have been declared beforehand); instead of just plotting, users can add a title to the graph, names for the x and y axes, insert a legend and save the file at a specific resolution (in this case as a .png file). It is not necessary to declare <span style="color: #B404AE">%</span>matplotlib inline in a cell when it is already declared in a previous cell (but if <span style="color: #B404AE">%</span>matplotlib qt was declared in some previous cell, then <span style="color: #B404AE">%</span>matplotlib inline needs to be re-declared). Many more features can be added (like below) and we will see these one by one.
End of explanation
"""
x_data = np.linspace(0.5,2.5,20) # Creating data
x_data
# y data are calculated inside plot() function
fig = plt.figure(figsize=(8, 6)) # set the figure canvas size before plotting
axes1 = fig.add_axes([0.1, 0.1, 0.9, 0.9]) # creating first plot and its positions on canvas
axes2 = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # creating second plot and its position on canvas
# main figure
axes1.plot(x_data,np.exp(x_data),'go-') # green with circle markers and line
axes1.set_xlabel('x main')
axes1.set_ylabel('y main')
axes1.set_title('main title')
# embedded figure
axes2.plot(x_data,x_data**(-1),'r--') # just a red dashed line
axes2.set_xlabel('x embedded')
axes2.set_ylabel('y embedded')
axes2.set_title('embedded title')
"""
Explanation: 7.2 Multiple Plots
Multiple plots can be created in many ways. One way is to specify the positions of the plots on the whole canvas.
End of explanation
"""
# y data are calculated inside plot() function
plt.subplot(1,2,1) # graph with one row, two columns at position 1
plt.plot(x_data,np.exp(x_data),'go-') # green with circle markers and line
plt.subplot(1,2,2) # graph with one row, two columns at position 2
plt.plot(x_data,x_data**(-1),'r--') # just a red dashed line
plt.subplot(2,2,1)
plt.plot(x_data,np.exp(x_data),'go-')
plt.subplot(2,2,2)
plt.plot(x_data,x_data**(-1),'r--')
plt.subplot(2,2,4)
plt.plot(x_data,x_data,'bs-')
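The subplot(i, j, k) index k used in the cells above counts across columns first, then down rows. A small helper (a sketch for illustration, not part of Matplotlib) makes the mapping explicit:

```python
def subplot_position(k, ncols):
    """Return the 0-based (row, col) of subplot index k in a grid with ncols columns."""
    return (k - 1) // ncols, (k - 1) % ncols

# For a 2 x 2 grid (i = 2, j = 2):
print(subplot_position(1, 2))  # (0, 0) -> top-left
print(subplot_position(2, 2))  # (0, 1) -> top-right
print(subplot_position(4, 2))  # (1, 1) -> bottom-right
```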
"""
Explanation: The input cell above shows the user how to embed a plot within a plot by specifying the initial coordinates and sizes of the plots. A slightly different syntax is used here, with the sub plot properties assigned to the variables axes1 and axes2. From here on it is easy to add features to each plot by operating on each plot variable.
It is possibly much better to use sub plotting functions (depending on the user's taste) like the <span style="color: #0000FF">$subplot(i,j,k)$</span> function, where <span style="color: #0000FF">$i$</span> represents the number of rows, <span style="color: #0000FF">$j$</span> represents the number of columns and <span style="color: #0000FF">$k$</span> is the position of the plot (moving horizontally to the right, then from top to bottom) as shown for <span style="color: #0000FF">$i$</span> = 2 and <span style="color: #0000FF">$j$</span> = 2 below.
<img src="Tutorial7/Grid1.png" width="200" >
End of explanation
"""
# A canvas of 2 x 3 graph
plot1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
plot2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
plot3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
plot4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
"""
Explanation: Other subplot functions like <span style="color: #0000FF">$subplot2grid()$</span> and <span style="color: #0000FF">$gridspec()$</span> allow more control over the produced multi-plots. In the case of <span style="color: #0000FF">$subplot2grid()$</span>, the location of a plot in a 2-dimensional graphical grid can be specified using the coordinate (i,j), with both i and j starting at $0$.
End of explanation
"""
# A canvas of 2 x 3 graph
p1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
p2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
p3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
p4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
plt.tight_layout()
"""
Explanation: The axis numbers are quite messy because of overlapping. Users can use the <span style="color: #0000FF">$tight{_}layout()$</span> function to automatically tidy up the whole graph.
End of explanation
"""
# A canvas of 2 x 3 graph
plt.figure(figsize=(8,4))
p1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
p2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
p3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
p4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
plt.tight_layout()
p1.plot(x_data,np.exp(x_data),'go-')
p2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-') # Gaussian function for y value
p3.plot(x_data,x_data,'bs-')
p4.plot(x_data,x_data**(-1),'r--')
"""
Explanation: We can now add some data inside these plots using data in x_data (and also increase the canvas size).
End of explanation
"""
# a 2 x 2 grid plot with specified relative width and height
plt.figure(figsize=(8,4))
gs = plt.GridSpec(2, 2, width_ratios=[1,2],height_ratios=[2.5,1.5]) # a 2 x 2 grid plot
gp1 = plt.subplot(gs[0])
gp2 = plt.subplot(gs[1])
gp3 = plt.subplot(gs[2])
gp4 = plt.subplot(gs[3])
plt.tight_layout()
gp1.plot(x_data,np.exp(x_data),'go-')
gp2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
gp3.plot(x_data,x_data,'bs-')
gp4.plot(x_data,x_data**(-1),'r--')
gp4.set_yticks(np.linspace(0.4,2.0,5)) # setting the y ticks for plot 4
"""
Explanation: Relative sizes of rows and columns can be specified using <span style="color: #0000FF">$GridSpec()$</span>:
End of explanation
"""
plt.figure(figsize=(12,4))
gs1 = plt.GridSpec(3, 3, width_ratios=[1.5,1,1.5],height_ratios=[1.5,1,1.5])
gs1.update(left=0.05, right=0.48, wspace=0.3, hspace=0.4) # size on canvas
fp1 = plt.subplot(gs1[:2, :2])
fp2 = plt.subplot(gs1[0, 2])
fp3 = plt.subplot(gs1[2, :2])
fp4 = plt.subplot(gs1[1:, 2])
gs2 = plt.GridSpec(3, 3, width_ratios=[1.6,1,1.4],height_ratios=[1.3,1,1.7])
gs2.update(left=0.55, right=0.98, wspace=0.3, hspace=0.4) # size on canvas
afp1 = plt.subplot(gs2[:2,0])
afp2 = plt.subplot(gs2[:2, 1:])
afp3 = plt.subplot(gs2[2, :-1])
afp4 = plt.subplot(gs2[2, -1])
"""
Explanation: <span style="color: #0000FF">$GridSpec()$</span> also allows users to mix relative row and column sizes with plots that span multiple rows and columns (much like <span style="color: #0000FF">$subplot2grid()$</span>).
End of explanation
"""
plt.figure(figsize=(12,4))
gs1 = plt.GridSpec(3, 3, width_ratios=[1.5,1,1.5],height_ratios=[1.5,1,1.5])
gs1.update(left=0.05, right=0.48, wspace=0.3, hspace=0.4) # size on canvas
fp1 = plt.subplot(gs1[:2, :2])
fp2 = plt.subplot(gs1[0, 2])
fp3 = plt.subplot(gs1[2, :2])
fp4 = plt.subplot(gs1[1:, 2])
gs2 = plt.GridSpec(3, 3, width_ratios=[1.6,1,1.4],height_ratios=[1.3,1,1.7])
gs2.update(left=0.55, right=0.98, wspace=0.3, hspace=0.4) # size on canvas
afp1 = plt.subplot(gs2[:2,0])
afp2 = plt.subplot(gs2[:2, 1:])
afp3 = plt.subplot(gs2[2, :-1])
afp4 = plt.subplot(gs2[2, -1])
fp1.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
fp2.plot(x_data,np.exp(x_data),'go-')
fp2.set_yticks(np.linspace(0,14,5))
fp3.plot(x_data,x_data,'bs-')
fp4.plot(x_data,x_data**(-1),'r--')
afp1.plot(x_data,x_data**(-1),'r--')
afp2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
afp3.plot(x_data,np.exp(x_data),'go-')
afp4.plot(x_data,x_data,'bs-')
afp4.set_yticks(np.linspace(0,3,6))
"""
Explanation: Similarly, let us add some data:
End of explanation
"""
xl = np.linspace(4,6,31)
Lf = 5*(0.02/((xl-5)**2 + 0.02)) # Cauchy or Lorentz distribution
# Setting the canvas size and also resolution
plt.figure(figsize=(14,5), dpi=100)
# Assigning plotting function to variable, here subplot() is used
# even though there is only one plot
# and no ticks created on the frame (ticks are specified later)
Lp = plt.subplot(xticks=[], yticks=[])
# Plot certain borders with width specified
Lp.spines['bottom'].set_linewidth(2)
Lp.spines['left'].set_linewidth(2)
plt.subplots_adjust(left=0.1, right=0.9, top=0.75, bottom=0.25) # size on canvas
# Plotting
Lp.plot(xl, Lf, color="r", linewidth=2.0, linestyle="-")
# Title and Axes and Legend
Lp.set_title('Cauchy \nDistribution', fontsize=18, color='blue',\
horizontalalignment='left', fontweight='bold', x=0.05, y=1.05)
Lp.set_xlabel(r'$x$', fontsize=20, fontweight='bold', color='g')
Lp.set_ylabel(r'$L(x) = A\left[\frac{\gamma}{(x - x_{o})^{2} + \gamma}\right]$', \
fontsize=20, fontweight='bold', color='#DF0101')
# Legend
## Anchoring legend on the canvas and made it translucent
Lp.legend([r'$Lorentz$'], fontsize=18, bbox_to_anchor=[0.2, 0.98]).get_frame().set_alpha(0.5)
# Axes ticks and ranges
ytick_list = np.linspace(0,6,7)
Lp.set_xticks(np.linspace(0,10,11))
Lp.set_yticks(ytick_list)
Lp.set_xticklabels(["$%.1f$" % xt for xt in (np.linspace(0,10,11))], fontsize = 14)
Lp.set_yticklabels(["$%d$" % yt for yt in ytick_list], fontsize = 14)
## Major ticks
Lp.tick_params(axis='x', which = 'major', direction='out', top = 'off', width=2, length=10, pad=15)
Lp.tick_params(axis='y', which = 'major', direction='in', right = 'off', width=2, length=10, pad=5)
## Minor ticks
Lp.set_xticks(np.linspace(-0.2,10.2,53), minor=True)
Lp.tick_params(axis='x', which = 'minor', direction='out', top = 'off', width=1, length=5)
# Grid
Lp.grid(which='major', color='#0B3B0B', alpha=0.75, linestyle='dashed', linewidth=1.2)
Lp.grid(which='minor', color='#0B3B0B', linewidth=1.2)
# Save the plot in many formats (compare the differences)
plt.savefig("Tutorial7/Third_Image.eps", dpi = 500)
plt.savefig("Tutorial7/Third_Image.jpeg", dpi = 75)
"""
Explanation: 7.3 Setting the features of the plot
Features of a plot like label font size, legend position, number of ticks etc. can be specified. There are many ways to do all of these, and one of them (possibly with the easiest syntax to understand) is shown in the code below. It is good practice to assign the plotting function to a variable before setting these features. This is very useful when creating multiple plots.
End of explanation
"""
xv = np.linspace(0,10,11)
yv = 5*xv
# Setting the canvas size and also resolution
plt.figure(figsize=(14,4), dpi=100)
plt.xlim(0,50)
plt.ylim(0,50)
Lp = plt.subplot()
# Plot without top and right borders
Lp.spines['top'].set_visible(False)
Lp.spines['right'].set_visible(False)
# Show ticks only on left and bottom spines
Lp.yaxis.set_ticks_position('left')
Lp.xaxis.set_ticks_position('bottom')
# Title and axes labels
Lp.set_title('Variety of Plot')
Lp.set_xlabel(r'$x$')
Lp.set_ylabel(r'$y$')
# Plotting
## Line widths and colors
Lp.plot(xv+2, yv, color="blue", linewidth=0.25)
Lp.plot(xv+4, yv, color="red", linewidth=0.50)
Lp.plot(xv+6, yv, color="m", linewidth=1.00)
Lp.plot(xv+8, yv, color="blue", linewidth=2.00)
## Linestyle options: '-', '--', '-.', ':', 'steps'
Lp.plot(xv+12, yv, color="red", lw=2, linestyle='-')
Lp.plot(xv+14, yv, color="#08088A", lw=2, ls='-.')
Lp.plot(xv+16, yv, color="red", lw=2, ls=':')
## Dash lines can be customized
line, = Lp.plot(xv+20, yv, color="blue", lw=2)
line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...
## Possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...
Lp.plot(xv+24, yv, color="#0B3B0B", lw=2, ls='', marker='+', markersize = 12)
Lp.errorbar(xv+26, yv, color="green", lw=2, ls='', marker='o', yerr=5)
Lp.plot(xv+28, yv, color="green", lw=2, ls='', marker='s')
Lp.plot(xv+30, yv, color="#0B3B0B", lw=2, ls='', marker='1', ms = 12)
# Marker sizes and colors
Lp.plot(xv+34, yv, color="r", lw=1, ls='-', marker='o', markersize=3)
Lp.plot(xv+36, yv, color="g", lw=1, ls='-', marker='o', markersize=5)
Lp.plot(xv+38, yv, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
Lp.plot(xv+40, yv, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=2, markeredgecolor="blue");
"""
Explanation: The <span style="color: #0000FF">$r\${text}\$$</span> notation allows the writing of mathematical equations using <span style="color: #0000FF">$LaTeX$</span> syntax.
Let us now see a variety of plot lines that can be created.
End of explanation
"""
people = ['Rechtschaffen', 'Tadashi', 'Vertueux', 'Justo', 'Salleh']
num_of_publ = [20,34,51,18,46]
y_pos = np.arange(len(people))
plt.title('Academic Output')
plt.barh(y_pos, num_of_publ, align='center', color = '#DF0101', alpha=0.5)
plt.yticks(y_pos, people)
plt.xlabel('Number of Publications')
plt.ylabel('Academicians')
"""
Explanation: I suppose most aspects of the line plot have been covered, but please remember that there are many ways in which a line plot can be created.
Histograms and bar charts can be plotted with similar syntax but will be explored further when we learn <span style="color: #0000FF">$Pandas$</span> in the next tutorial. <span style="color: #0000FF">$Pandas$</span> has a rich approach to producing histograms/bar charts with <span style="color: #0000FF">$Matplotlib$</span>.
Some examples of histograms/bar charts from <span style="color: #0000FF">$Matplotlib$</span>:
End of explanation
"""
n = np.random.randn(100000) # a 1D array of 100,000 samples from a standard normal distribution
fig, Hp = plt.subplots(1,2,figsize=(12,4))
# Each bin represents number of element (y-axis values) with
# certain values in the bin range (x-axis)
Hp[0].hist(n, bins = 50, color = 'red', alpha = 0.25)
Hp[0].set_title("Normal Distribution with bins = 50")
Hp[0].set_xlim((min(n), max(n)))
Hp[1].hist(n, bins = 25, color = 'g')
ni = plt.hist(n, cumulative=True, bins=25, visible=False) # Extract the data
Hp[1].set_title("Normal Distribution with bins = 25")
Hp[1].set_xlim((min(n), max(n)))
Hp[1].set_ylim(0, 15000)
ni
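The same counts and bin edges that hist() draws can be computed without any plotting via NumPy's histogram function; a short sketch:

```python
import numpy as np

samples = np.random.randn(1000)
counts, edges = np.histogram(samples, bins=25)
print(counts.sum())  # 1000: every sample lands in exactly one bin
print(len(edges))    # 26: one more edge than the number of bins
```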
"""
Explanation: It is possible to extract the data from a histogram/bar chart.
End of explanation
"""
'''Plot histogram with multiple sample sets and demonstrate:
* Use of legend with multiple sample sets
* Stacked bars
* Step curve with a color fill
* Data sets of different sample sizes
'''
n_bins = 10 # Number of bins to be displayed
x = np.random.randn(1000, 3) # a 1000 x 3 array of standard normal samples
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10,8))
ax0, ax1, ax2, ax3 = axes.flat # Assigning the axes list to different variables
colors = ['red', 'tan', 'lime']
# {normed = 1} means that y-axis values are normalized
# (note: in newer Matplotlib versions this keyword was replaced by density=True)
ax0.hist(x, n_bins, normed=1, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')
ax1.hist(x, n_bins, normed=1, histtype='bar', stacked=True)
ax1.set_title('stacked bar')
ax2.hist(x, n_bins, histtype='step', stacked=True, fill=True)
ax2.set_title('stepfilled')
# Make a multiple-histogram of data-sets with different length
# or inhomogeneous 2D normalized random array
x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]
ax3.hist(x_multi, n_bins, histtype='bar')
ax3.set_title('different sample sizes')
plt.tight_layout()
"""
Explanation: The histogram plots below are adapted from one of the <span style="color: #0000FF">$Matplotlib$</span> gallery examples, with some slight modifications and added comments.
End of explanation
"""
from mpl_toolkits.mplot3d.axes3d import Axes3D
# 2D Gaussian Plot (3D image)
import matplotlib.cm as cm
import matplotlib.ticker as ticker
a = 12.0
b = 5.0
c = 0.3
fig = plt.figure(figsize=(6,4))
ax = fig.gca(projection='3d') # Passing a projection='3d' argument
X = np.linspace(4, 6, 100)
Y = np.linspace(4, 6, 100)
X, Y = np.meshgrid(X, Y)
R = (X-b)**2/(2*c**2) + (Y-b)**2/(2*c**2) # Use the declared a,b,c values in 1D-Gaussian
Z = a*np.exp(-R)
# Applying the plot features to variable surf
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.hsv,
linewidth=0, antialiased=True, alpha=0.3)
ax.set_zlim(-1.01, 14)
ax.zaxis.set_major_locator(ticker.LinearLocator(8))
ax.zaxis.set_major_formatter(ticker.FormatStrFormatter('%.1f'))
fig.colorbar(surf, shrink=0.4, aspect=10)
ax.view_init(30, 40) # Angles of viewing
plt.tight_layout()
"""
Explanation: 7.4 3D Plotting
A 3D plot can be created by importing the Axes3D class and passing a projection='3d' argument.
End of explanation
"""
fig_wire = plt.figure(figsize=(12,6))
# Wireframe plot
ax1 = fig_wire.add_subplot(1,2,1, projection='3d')
ax1.plot_wireframe(X, Y, Z, rstride=4, cstride=4, alpha=0.75)
ax1.view_init(55, 30)
ax1.set_zlim(-2, 14)
ax1.set_title('Wireframe')
# Surface plot
ax2 = fig_wire.add_subplot(1,2,2, projection='3d')
ax2.plot_surface(X, Y, Z, rstride=6, cstride=6, alpha=0.45)
ax2.view_init(30, 30)
ax2.set_title('Surface and Contour')
# Different color maps used for each contours
# the offset argument refers to the position of contour
ax2.contour(X, Y, Z, zdir='x', offset=3, cmap=cm.hsv)
ax2.contour(X, Y, Z, zdir='y', offset=3, cmap=cm.prism)
ax2.contour(X, Y, Z, zdir='z', offset=-8, cmap=cm.coolwarm)
# Axes range for contours
ax2.set_xlim3d(3, 7)
ax2.set_ylim3d(3, 7)
ax2.set_zlim3d(-8, 20)
fig_wire.tight_layout()
"""
Explanation: Using the data X, Y and Z still in memory from the cell above, we create a wireframe plot and a surface plot with contour projections.
End of explanation
"""
# Brownian motion of particle suspended in liquid
nu = 0.7978E-3 # Pa s
kB = 1.3806488E-23 # m^2 kg s^-2 K^-1
d = 0.001 # 1 micron
T = 30+273.15 # Kelvin
D = kB*T/(3*np.pi*nu*d)
dt = 0.00001
dl = np.sqrt(2*D*dt)
xp = np.random.uniform(0,0.0001,20)
yp = np.random.uniform(-0.00000005,0.000000005,20)
for value in range(0,xp.size,1):
    angle1 = np.random.normal(0,np.pi,500)
    xb = xp[value]+np.cumsum(dl*np.cos(angle1))
    yb = yp[value]+np.cumsum(dl*np.sin(angle1))
plt.figure(figsize=(6,4))
plt.plot(xb,yb)
from matplotlib import animation # Importing the animation module
fig, ax = plt.subplots(figsize=(6,4))
xr = np.sqrt((xb.max()-xb.min())**2)/20.0
yr = np.sqrt((yb.max()-yb.min())**2)/20.0
ax.set_xlim([(xb.min()-xr), (xb.max()+xr)])
ax.set_ylim([(yb.min()-yr), (yb.max()+yr)])
line1, = ax.plot([], [], 'ro-')
line2, = ax.plot([], [], color="blue", lw=1)
x2 = np.array([])
y2 = np.array([])
def init():
    line1.set_data([], [])
    line2.set_data([], [])
def update(n):
    # n = frame counter
    global x2, y2
    x2 = np.hstack((x2,xb[n]))
    y2 = np.hstack((y2,yb[n]))
    line1.set_data([xb[n],xb[n+1]],[yb[n],yb[n+1]])
    line2.set_data(x2, y2)
# Creating the animation
# anim = animation.FuncAnimation(fig, update, init_func=init, frames=len(xb)-1, blit=True)
anim = animation.FuncAnimation(fig, update, init_func=init, frames=len(xb)-1)
# Saving the animation film in .mp4 format
anim.save('Tutorial7/Brownian.mp4', fps=10, writer="avconv", codec="libx264")
plt.close(fig)
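The step length dl in the simulation above comes from the Stokes–Einstein diffusion coefficient, D = kB·T/(3π·ν·d), with the root-mean-square step dl = √(2·D·dt). A standalone check of that arithmetic, reusing the same constants (units as chosen in the cell above):

```python
import math

nu = 0.7978E-3      # viscosity, Pa s
kB = 1.3806488E-23  # Boltzmann constant, J/K
d = 0.001           # particle diameter, same value as above
T = 30 + 273.15     # temperature, K
dt = 0.00001        # time step

D = kB * T / (3 * math.pi * nu * d)  # Stokes-Einstein diffusion coefficient
dl = math.sqrt(2 * D * dt)           # rms displacement per time step

print(D)   # diffusion coefficient
print(dl)  # step length fed into the random walk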
"""
Explanation: 7.5 Animated Plotting
<span style="color: #0000FF">$Matplotlib$</span> has the ability to generate a movie file from sequences of figures. Let see an example of the syntax needed to capture a Brownian motion in a video (it is necessary to explicitly import the animation module from <span style="color: #0000FF">$Matplotlib$</span>).
End of explanation
"""
from IPython.display import HTML
import base64
video = open("Tutorial7/Brownian.mp4", "rb").read()
video_encoded = base64.b64encode(video).decode("ascii")  # encode the bytes as base64 text (Python 3)
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(video_tag)
"""
Explanation: Creating the video in a format that can be played by your browser may require the installation of these tools (in my case with Ubuntu 14.04LTS):
sudo apt-get install ffmpeg libav-tools
sudo apt-get install libavcodec-extra-*
Different Linux variants and other operating systems may require further tuning.
The video can then be displayed:
End of explanation
"""
|
materialsproject/MPContribs | mpcontribs-portal/notebooks/contribs.materialsproject.org/get_started.ipynb | mit | name = "your-project-name"
apikey = "your-api-key" # profile.materialsproject.org
"""
Explanation: MPContribs
Walkthrough
start with a materials detail page on MP with user contributions
navigate to https://mpcontribs.org and explore
apply for a project on https://workshop-contribs.materialsproject.org/contribute (wait for approval by admins)
End of explanation
"""
from mpcontribs.client import Client
from mp_api.matproj import MPRester
from refractivesqlite import dboperations as DB
from pandas import DataFrame
"""
Explanation: Contribute data on Refractive Index
We'll prepare refractive index data for contribution to https://workshop-contribs.materialsproject.org using the mpcontribs-client.
End of explanation
"""
db = DB.Database("refractive.db")
#db.create_database_from_url()
db.search_pages("Au", exact=True)
materials = db.search_custom(
'select * from pages where book="Au" and hasrefractive=1 and hasextinction=1'
)
"""
Explanation: Explore and extract refractive index data
https://github.com/HugoGuillen/refractiveindex.info-sqlite/blob/master/Tutorial.ipynb
End of explanation
"""
page_id = materials[0][0]
mat = db.get_material(page_id)
mpr = MPRester(apikey)
def get_contrib(mat):
    info = mat.get_page_info()
    formula = info["book"]
    mpid = mpr.get_materials_ids(formula)[0]
    rmin, rmax = info['rangeMin']*1000, info['rangeMax']*1000
    mid = (rmin + rmax) / 2
    n = mat.get_refractiveindex(mid)
    k = mat.get_extinctioncoefficient(mid)
    x = "wavelength λ [μm]"
    refrac = DataFrame(mat.get_complete_refractive(), columns=[x, "n"])
    refrac.set_index(x, inplace=True)
    extinct = DataFrame(mat.get_complete_extinction(), columns=[x, "k"])
    extinct.set_index(x, inplace=True)
    df = refrac.join(extinct["k"])
    df.attrs["title"] = f"Complex refractive index (n+ik) for {formula}"
    df.attrs["labels"] = {
        "value": "n, k",     # y-axis label
        "variable": "Re/Im"  # legend name (= df.columns.name)
    }
    df.plot(**df.attrs)#.show()
    df.attrs["name"] = "n,k(λ)"
    return {
        "project": name,
        "identifier": str(mpid),
        "data": {
            "n": float(n),
            "k": float(k),
            "range.min": f"{rmin} nm",
            "range.mid": f"{mid} nm",
            "range.max": f"{rmax} nm",
            "points": info["points"],
            "page": info["page"]
        },
        "tables": [df]
    }
contrib = get_contrib(mat)
"""
Explanation: Prepare a single contribution for testing
Note that a contribution to a specific MP material contains 4 optional components:
simple (potentially nested) "key-value" data
tables as Pandas DataFrame objects (think spreadsheets and csv files)
structures as Pymatgen Structure objects (think CIF, POSCAR, ...)
attachments (think gzipped text files, PNG/JPG..)
Example for a single contribution dictionary:
{
"project": "sandbox",
"identifier": "mp-4",
"data": {
"a": "3 eV",
"b": {"c": "hello", "d": 3},
"d.e.f": "nest via dot-notation"
},
"structures": [<pymatgen Structure>, ...],
"tables": [<pandas DataFrame>, ...],
"attachments": [<pathlib.Path>, <mpcontribs.client.Attachment>, ...]
}
End of explanation
"""
client = Client(
host="workshop-contribs-api.materialsproject.org",
apikey=apikey
)
db.check_url_version()
update = {
'unique_identifiers': False,
'references': [
{'label': 'website', 'url': 'https://refractiveindex.info'},
{'label': 'source', 'url': "https://refractiveindex.info/download/database/rii-database-2019-02-11.zip"}
],
"other": { # describe the root fields here to automatically include tooltips on MP
"n": "real part of complex refractive index",
"k": "imaginary part of complex refractive index",
"range": "wavelength range for n,k in nm",
"points": "number of λ points in range",
"page": "reference to data source/publication"
}
}
# could also update authors, title, long_title, description
client.projects.update_entry(pk=name, project=update).result()
"""
Explanation: Retrieve and update project info
Let's add the URL for the DB and also set unique_identifiers to False. This flag indicates that this project can contain multiple contributions (rows in the landing page's overview table) for the same MP material (mp-id). We also want to include descriptions of the data columns in the project's other field.
End of explanation
"""
client.init_columns(name, {
"n": "", # dimensionless
"k": "",
"range.min": "nm",
"range.mid": "nm",
"range.max": "nm",
"points": "",
"page": None # text
})
"""
Explanation: Try searching for refractive in MPContribs browse page now
Initialize data columns
End of explanation
"""
client.submit_contributions([contrib])
"""
Explanation: Submit test contribution
Simply provide your list of contributions as the argument to the client's submit_contributions function to prepare and upload them to MPContribs. Duplicate checking will happen automatically if unique_identifiers is set to True for the project (the default). If successful, the client will return the number of contributions submitted.
End of explanation
"""
contributions = []
for material in materials:
    page_id = material[0]
    mat = db.get_material(page_id)
    contrib = get_contrib(mat)
    contributions.append(contrib)
contributions[10]
client.delete_contributions(name)
client.submit_contributions(contributions, ignore_dupes=True)
"""
Explanation: Your first contribution should now show up in your project on https://workshop-contribs.materialsproject.org :)
Prepare and upload all contributions
End of explanation
"""
client.make_public(name)
"""
Explanation: Publish contributions
End of explanation
"""
all_ids = client.get_all_ids(
{"project": name},
include=["tables"],
data_id_fields={name: "page"},
fmt="map"
)
"""
Explanation: Retrieve and query contributions
End of explanation
"""
tid = all_ids[name]["mp-81"]["Johnson"]["tables"]["n,k(λ)"]["id"]
client.get_table(tid).display()
"""
Explanation: Grab the table ID and retrieve it as a Pandas DataFrame. Show a graph.
End of explanation
"""
query = {
"project": name,
"formula__contains": "Au",
#"identifier__in": []
"data__n__value__lt": 200,
"data__k__value__gte": 7,
"_sort": "-data__range__mid__value",
"_fields": [
"id", "identifier", "formula",
"data.range.mid.value",
"data.n.value",
"data.k.value"
]
}
print(client.get_totals(query))
client.contributions.get_entries(**query).result()
"""
Explanation: Finally, let's build up a more complicated query to reduce the list of contributions to the ones we might be interested in.
End of explanation
"""
|
axbaretto/beam | examples/notebooks/documentation/transforms/python/elementwise/map-py.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/map-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/map"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
"""
!pip install --quiet -U apache-beam
"""
Explanation: Map
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Map"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Applies a simple 1-to-1 mapping function over each element in the collection.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
' 🍓Strawberry \n',
' 🥕Carrot \n',
' 🍆Eggplant \n',
' 🍅Tomato \n',
' 🥔Potato \n',
])
| 'Strip' >> beam.Map(str.strip)
| beam.Map(print))
"""
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Map in multiple ways to transform every element in the PCollection.
Map accepts a function that returns a single element for every input element in the PCollection.
Example 1: Map with a predefined function
We use the function str.strip which takes a single str element and outputs a str.
It strips the input element's whitespaces, including newlines and tabs.
End of explanation
"""
import apache_beam as beam
def strip_header_and_newline(text):
return text.strip('# \n')
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(strip_header_and_newline)
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Map with a function
We define a function strip_header_and_newline which strips any '#', ' ', and '\n' characters from each element.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(lambda text: text.strip('# \n'))
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Map with a lambda function
We can also use lambda functions to simplify Example 2.
End of explanation
"""
import apache_beam as beam
def strip(text, chars=None):
return text.strip(chars)
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(strip, chars='# \n')
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: Map with multiple arguments
You can pass functions with multiple arguments to Map.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, strip takes text and chars as arguments.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Format' >>
beam.MapTuple(lambda icon, plant: '{}{}'.format(icon, plant))
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: MapTuple for key-value pairs
If your PCollection consists of (key, value) pairs,
you can use MapTuple to unpack them into different function arguments.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
chars = pipeline | 'Create chars' >> beam.Create(['# \n'])
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(
lambda text,
chars: text.strip(chars),
chars=beam.pvalue.AsSingleton(chars),
)
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 6: Map with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection containing the single value '# \n' as a singleton.
We then use that value as the characters for the str.strip method.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
chars = pipeline | 'Create chars' >> beam.Create(['#', ' ', '\n'])
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(
lambda text,
chars: text.strip(''.join(chars)),
chars=beam.pvalue.AsIter(chars),
)
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 7: Map with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
"""
import apache_beam as beam
def replace_duration(plant, durations):
plant['duration'] = durations[plant['duration']]
return plant
with beam.Pipeline() as pipeline:
durations = pipeline | 'Durations' >> beam.Create([
(0, 'annual'),
(1, 'biennial'),
(2, 'perennial'),
])
plant_details = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 2
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 1
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 2
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 0
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 2
},
])
| 'Replace duration' >> beam.Map(
replace_duration,
durations=beam.pvalue.AsDict(durations),
)
| beam.Map(print))
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 8: Map with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/SimilarMovies.ipynb | mit | import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3), encoding="ISO-8859-1")
m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2), encoding="ISO-8859-1")
ratings = pd.merge(movies, ratings)
ratings.head()
"""
Explanation: Finding Similar Movies
We'll start by loading up the MovieLens dataset. Using Pandas, we can very quickly load the rows of the u.data and u.item files that we care about, and merge them together so we can work with movie names instead of IDs. (In a real production job, you'd stick with IDs and worry about the names at the display layer to make things more efficient. But this lets us understand what's going on better for now.)
End of explanation
"""
movieRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
movieRatings.head()
"""
Explanation: Now the amazing pivot_table function on a DataFrame will construct a user / movie rating matrix. Note how NaN indicates missing data - movies that specific users didn't rate.
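As a minimal, self-contained sketch (using an invented mini-ratings frame, not the MovieLens data), this is the shape of the transformation pivot_table performs here:

```python
import pandas as pd

# Hypothetical mini ratings frame in the same long format as our merged data
mini = pd.DataFrame({
    'user_id': [1, 1, 2],
    'title': ['Star Wars (1977)', 'Fargo (1996)', 'Star Wars (1977)'],
    'rating': [5, 4, 3],
})

# Rows become users, columns become movie titles;
# combinations the user never rated show up as NaN
matrix = mini.pivot_table(index=['user_id'], columns=['title'], values='rating')
print(matrix)
```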
End of explanation
"""
starWarsRatings = movieRatings['Star Wars (1977)']
starWarsRatings.head()
"""
Explanation: Let's extract a Series of users who rated Star Wars:
End of explanation
"""
similarMovies = movieRatings.corrwith(starWarsRatings)
similarMovies = similarMovies.dropna()
df = pd.DataFrame(similarMovies)
df.head(10)
"""
Explanation: Pandas' corrwith function makes it really easy to compute the pairwise correlation of Star Wars' vector of user rating with every other movie! After that, we'll drop any results that have no data, and construct a new DataFrame of movies and their correlation score (similarity) to Star Wars:
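To isolate what corrwith does, here is a tiny synthetic example (made-up columns, unrelated to the movie data): every column of the frame is correlated against the given Series.

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1.0, 2.0, 3.0, 4.0],
    'b': [2.0, 4.0, 6.0, 8.0],  # moves exactly with 'a'
    'c': [4.0, 3.0, 2.0, 1.0],  # moves exactly opposite to 'a'
})

# Pairwise Pearson correlation of every column with the Series df['a']
sims = df.corrwith(df['a'])
print(sims)
```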
End of explanation
"""
similarMovies.sort_values(ascending=False)
"""
Explanation: (That warning is safe to ignore.) Let's sort the results by similarity score, and we should have the movies most similar to Star Wars! Except... we don't. These results make no sense at all! This is why it's important to know your data - clearly we missed something important.
End of explanation
"""
import numpy as np
movieStats = ratings.groupby('title').agg({'rating': [np.size, np.mean]})
movieStats.head()
"""
Explanation: Our results are probably getting messed up by movies that have only been viewed by a handful of people who also happened to like Star Wars. So we need to get rid of movies that were only watched by a few people, since they produce spurious results. Let's construct a new DataFrame that counts up how many ratings exist for each movie, and also the average rating while we're at it - that could also come in handy later.
End of explanation
"""
popularMovies = movieStats['rating']['size'] >= 100
movieStats[popularMovies].sort_values([('rating', 'mean')], ascending=False)[:15]
"""
Explanation: Let's get rid of any movies rated by fewer than 100 people, and check the top-rated ones that are left:
End of explanation
"""
df = movieStats[popularMovies].join(pd.DataFrame(similarMovies, columns=['similarity']))
df.head()
"""
Explanation: 100 might still be too low, but these results look pretty good as far as "well rated movies that people have heard of." Let's join this data with our original set of similar movies to Star Wars:
End of explanation
"""
df.sort_values(['similarity'], ascending=False)[:15]
"""
Explanation: And, sort these new results by similarity score. That's more like it!
End of explanation
"""
|
postBG/DL_project | intro-to-rnns/Anna_KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based on Andrej Karpathy's post on RNNs and his implementation in Torch, along with some information from r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
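The target-shifting line on its own can be sanity-checked with a small NumPy array (a standalone sketch, separate from get_batches):

```python
import numpy as np

x = np.array([[0, 1, 2, 3],
              [4, 5, 6, 7]])

# Targets are the inputs shifted left by one step, with each sequence's
# first input wrapped around to fill the final target position
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
print(y)  # first row becomes [1 2 3 0]
```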
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
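A plain-NumPy sketch (toy sizes, no TensorFlow) of the reshape described above:

```python
import numpy as np

N, M, L = 2, 3, 4  # batch size, sequence steps, hidden units (toy values)
lstm_output = np.arange(N * M * L, dtype=float).reshape(N, M, L)

# Flatten to one row per step of each sequence: (N*M) x L
flat = lstm_output.reshape(-1, L)
print(flat.shape)  # (6, 4)
```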
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets; we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
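For intuition, here is a rough NumPy sketch of the same computation (one-hot encoding followed by the mean softmax cross-entropy); it illustrates the math, not TensorFlow's exact implementation:

```python
import numpy as np

def softmax_cross_entropy(logits, targets, num_classes):
    # One-hot encode the integer targets: shape (batch, num_classes)
    y = np.eye(num_classes)[targets]
    # Numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Mean negative log-probability of the true classes
    return -(y * np.log(probs)).sum(axis=1).mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.3]])
loss = softmax_cross_entropy(logits, np.array([0, 1]), num_classes=3)
print(loss)
```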
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
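The clipping idea can be sketched in NumPy; this illustrates the behavior of global-norm clipping, not tf.clip_by_global_norm's exact implementation:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Norm of all gradients taken together, as if concatenated into one vector
    global_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    if global_norm > clip_norm:
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm is 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)  # 13.0
```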
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than your validation loss, then the network might be overfitting. Solutions to this are to decrease your network size or to increase dropout. For example, you could try a dropout of 0.5, and so on.
If your training and validation losses are about equal, then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer).
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We then feed that new character back in to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
probml/pyprobml | notebooks/book1/13/mlp_cifar_pytorch.ipynb | mit | import sklearn
import scipy
import scipy.optimize
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import itertools
import time
from functools import partial
import os
import numpy as np
from scipy.special import logsumexp
np.set_printoptions(precision=3)
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
print("torch version {}".format(torch.__version__))
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
print("current device {}".format(torch.cuda.current_device()))
else:
print("Torch cannot find GPU")
def set_seed(seed):
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
# torch.backends.cudnn.benchmark = True
"""
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/supplements/mlp_cifar_pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
MLP for image classification using PyTorch
In this section, we follow Chap. 7 of the Deep Learning With PyTorch book, and illustrate how to fit an MLP to a two-class version of CIFAR. (We modify the code from here.)
End of explanation
"""
from torchvision import datasets
folder = "data"
cifar10 = datasets.CIFAR10(folder, train=True, download=True)
cifar10_val = datasets.CIFAR10(folder, train=False, download=True)
print(type(cifar10))
print(type(cifar10).__mro__)  # method resolution order shows class hierarchy
print(len(cifar10))
img, label = cifar10[99]
print(type(img))
print(img)
plt.imshow(img)
plt.show()
class_names = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
fig = plt.figure(figsize=(8, 3))
num_classes = 10
for i in range(num_classes):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.set_title(class_names[i])
img = next(img for img, label in cifar10 if label == i)
plt.imshow(img)
plt.show()
"""
Explanation: Get the CIFAR dataset
End of explanation
"""
# Now we want to convert this to a tensor
from torchvision import transforms
to_tensor = transforms.ToTensor()
img, label = cifar10[99]
img_t = to_tensor(img)
print(type(img))
# print(img.shape)
print(type(img_t))
print(img_t.shape) # channels * height * width, here channels=3 (RGB)
print(img_t.min(), img_t.max()) # pixel values are rescaled to 0..1
# transform the whole dataset to tensors
cifar10 = datasets.CIFAR10(folder, train=True, download=False, transform=transforms.ToTensor())
img, label = cifar10[99]
print(type(img))
plt.imshow(img.permute(1, 2, 0)) # matplotlib expects H*W*C
plt.show()
"""
Explanation: Convert to tensors
End of explanation
"""
# we load the whole training set as a batch, of size 3*H*W*N
imgs = torch.stack([img for img, _ in cifar10], dim=3)
print(imgs.shape)
imgs_flat = imgs.view(3, -1)  # keep the 3-channel dimension, flatten all the others
print(imgs_flat.shape)
mu = imgs_flat.mean(dim=1) # average over second dimension (H*W*N) to get one mean per channel
sigma = imgs_flat.std(dim=1)
print(mu)
print(sigma)
cifar10 = datasets.CIFAR10(
folder,
train=True,
download=False,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize(mu, sigma)]),
)
cifar10_val = datasets.CIFAR10(
folder,
train=False,
download=False,
transform=transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mu, sigma),
]
),
)
# rescaled data is harder to visualize
img, _ = cifar10[99]
plt.imshow(img.permute(1, 2, 0))
plt.show()
"""
Explanation: Standardize the inputs
We standardize the features by computing the mean and std of each channel, averaging across all pixels and all images. This will help optimization.
End of explanation
"""
class_names = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
label_map = {0: 0, 2: 1} # 0(airplane)->0, 2(bird)->1
cifar2 = [(img, label_map[label]) for img, label in cifar10 if label in [0, 2]]
cifar2_val = [(img, label_map[label]) for img, label in cifar10_val if label in [0, 2]]
print(len(cifar2))
print(len(cifar2_val))
"""
Explanation: Create two-class version of dataset
We extract data which correspond to airplane or bird.
The result object is a list of pairs.
This "acts like" an object of type torch.utilts.data.dataset.Dataset, since it implements the len() and item index methods.
End of explanation
"""
img, label = cifar10[0]
img = img.view(-1)
ninputs = len(img)
nhidden = 512
nclasses = 2
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(ninputs, nhidden), nn.Tanh(), nn.Linear(nhidden, nclasses), nn.LogSoftmax(dim=1))
print(model)
"""
Explanation: A shallow, fully connected model
End of explanation
"""
torch.manual_seed(0)
from collections import OrderedDict
model = nn.Sequential(
OrderedDict(
[
("hidden_linear", nn.Linear(ninputs, nhidden)),
("activation", nn.Tanh()),
("output_linear", nn.Linear(nhidden, nclasses)),
("softmax", nn.LogSoftmax(dim=1)),
]
)
)
print(model)
"""
Explanation: We can name the layers so we can access their activations and/or parameters more easily.
End of explanation
"""
img, label = cifar2[0]
img_batch = img.view(-1).unsqueeze(0)
print(img_batch.shape)
logprobs = model(img_batch)
print(logprobs.shape)
print(logprobs)
probs = torch.exp(logprobs) # elementwise
print(probs)
print(probs.sum(1))
"""
Explanation: Let's test the model.
End of explanation
"""
loss_fn = nn.NLLLoss()
loss = loss_fn(logprobs, torch.tensor([label]))
print(loss)
"""
Explanation: Negative log likelihood loss.
End of explanation
"""
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
model.output_linear.register_forward_hook(get_activation("output_linear"))
logprobs = model(img_batch).detach().numpy()
logits = activation["output_linear"]
logprobs2 = F.log_softmax(logits, dim=1).detach().numpy()
print(logprobs)
print(logprobs2)
assert np.allclose(logprobs, logprobs2)
"""
Explanation: Let's access the output of the logit layer directly, bypassing the final log softmax.
(We borrow a trick from here).
End of explanation
"""
torch.manual_seed(0)
model_logits = nn.Sequential(nn.Linear(ninputs, nhidden), nn.Tanh(), nn.Linear(nhidden, nclasses))
logits2 = model_logits(img_batch)
print(logits)
print(logits2)
torch.testing.assert_allclose(logits, logits2)
"""
Explanation: We can also modify the model to return logits.
End of explanation
"""
logprobs = model(img_batch)
loss = nn.NLLLoss()(logprobs, torch.tensor([label]))
logits = model_logits(img_batch)
loss2 = nn.CrossEntropyLoss()(logits, torch.tensor([label]))
print(loss)
print(loss2)
torch.testing.assert_allclose(loss, loss2)
"""
Explanation: In this case, we need to modify the loss to take in logits.
End of explanation
"""
class MLP(nn.Module):
def __init__(self, ninputs, nhidden, nclasses):
super().__init__()
self.fc1 = nn.Linear(ninputs, nhidden)
self.fc2 = nn.Linear(nhidden, nclasses)
def forward(self, x):
out = F.tanh(self.fc1(x))
out = self.fc2(out)
return out # logits
torch.manual_seed(0)
model = MLP(ninputs, nhidden, nclasses)
logits = model(img_batch)
logits2 = model_logits(img_batch)
print(logits)
print(logits2)
torch.testing.assert_allclose(logits, logits2)
# print(list(model.named_parameters()))
nparams = [p.numel() for p in model.parameters() if p.requires_grad == True]
print(nparams)
# weights1, bias1, weights2, bias2
print([ninputs * nhidden, nhidden, nhidden * nclasses, nclasses])
"""
Explanation: We can also use the functional API to specify the model. This avoids having to create stateless layers (i.e., layers with no adjustable parameters), such as the tanh or softmax layers.
End of explanation
"""
def compute_accuracy(model, loader):
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in loader:
outputs = model(imgs.view(imgs.shape[0], -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
return correct / total
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
torch.manual_seed(0)
model = MLP(ninputs, nhidden, nclasses)
acc_train = compute_accuracy(model, train_loader)
acc_val = compute_accuracy(model, val_loader)
print([acc_train, acc_val])
"""
Explanation: Evaluation pre-training
End of explanation
"""
torch.manual_seed(0)
model = MLP(ninputs, nhidden, nclasses)
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.CrossEntropyLoss()
n_epochs = 20
for epoch in range(n_epochs):
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# At end of each epoch
acc_val = compute_accuracy(model, val_loader)
loss_train_batch = float(loss)
print(f"Epoch {epoch}, Batch Loss {loss_train_batch}, Val acc {acc_val}")
"""
Explanation: Training loop
End of explanation
"""
|
compsocialscience/summer-institute | 2018/materials/boulder/day2-digital-trace-data/BoulderSICSS.ipynb | mit | # Install tweepy
# !pip install tweepy
# Import the libraries we need
import tweepy
import json
import time
import networkx
import os
import matplotlib.pyplot as plt
from collections import Counter
# Authenticate!
auth = tweepy.OAuthHandler("Consumer Key", "Consumer Secret")
auth.set_access_token("Access Token", "Access Token Secret")
api = tweepy.API(auth)
# Check working directory
os.getcwd()
# Set working directory
os.chdir('FOLDER FOR SAVING FILES')
# Check working directory
os.getcwd()
"""
Explanation: <h1>Constructing Ego Networks from Retweets</h1>
Yotam Shmargad<br>
University of Arizona<br>
Email: yotam@email.arizona.edu<br>
Web: www.yotamshmargad.com
<h2>Introduction</h2>
<br>
Twitter has become a prominent online social network, playing a major role in how people all over the world share and consume information. Moreover, while some social networks have made it difficult for researchers to extract data from their servers, Twitter remains relatively open for now. This tutorial will go through the details of how to construct a Twitter user’s ego network from retweets they have received on their tweets. Instead of focusing on who follows who on Twitter, the method instead conceptualizes edges as existing between users if they have recently retweeted each other.<br><br>
Conceptualizing edges as retweets has two primary benefits. First, it captures recent interactions between users rather than decisions that they may have made long ago (i.e. following each other) that may not translate into meaningful interaction today. Second, users often have many more followers than they do retweeters. The method proposed can thus be used to analyze even relatively popular users. The code goes through obtaining authorization from Twitter, taking into account the limits that Twitter imposes on data extraction, and handling errors generated from deleted tweets or users.
<h2>1. Importing libraries and getting Twitter's approval</h2>
End of explanation
"""
# Keep track of API calls
# User timeline
callsUT = 0
# Retweeters
callsRT = 0
# Number of tweets to be pulled
# Ego
E = 10
# Alter
A = 10
# Existing user with tweets
ego = api.user_timeline(screen_name = "CUBoulder", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
len(ego)
# Existing user with no tweets
ego = api.user_timeline(screen_name = "DeveloperYotam", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
len(ego)
# Non-existing user
ego = api.user_timeline(screen_name = "fakeuserq4587937045", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
# Handling errors
ego = []
egosn = "CUBoulder"
try:
ego_raw = api.user_timeline(screen_name = egosn, count = E, include_rts = False, exclude_replies = True)
except tweepy.TweepError:
print("fail!")
callsUT += 1
# Converting results to a list of json objects
ego = [egotweet._json for egotweet in ego_raw]
# Writing ego tweets to a json file
with open('egotweet.json', 'w') as file:
json.dump(ego, file)
callsUT
# Looking at a json object
ego[0]
# Accessing an element of ego tweets
ego[0]["id_str"]
# Storing one of ego's tweet id
egoid = ego[0]["id_str"]
# Storing and printing ego tweet ids and retweet counts
tweetids = []
retweets = []
if len(ego) != 0:
for egotweet in ego:
tweetids.append(egotweet["id_str"])
retweets.append(egotweet["retweet_count"])
print(egotweet["id_str"],egotweet["retweet_count"])
"""
Explanation: <h2>2. Pulling ego tweets</h2>
End of explanation
"""
# Collecting Retweets
egort = api.retweets(ego[0]["id_str"])
callsRT += 1
len(egort)
callsRT
# Non-existing tweet
egort = api.retweets("garblegarble")
callsRT += 1
# Note: callsRT did not increase in the last command
callsRT
callsRT += 1
# Sleep for 10 seconds
time.sleep(10)
# Collecting retweeters of ego tweets
allretweeters = []
self = []
check = []
for egotweet in ego:
retweeters = []
try:
selftweet = 0
if callsRT >= 75:
time.sleep(900)
egort_raw = api.retweets(egotweet["id_str"])
egort = [egoretweet._json for egoretweet in egort_raw]
for retweet in egort:
if retweet["user"]["id_str"]!=egoid:
allretweeters.append((egoid,retweet["user"]["id_str"]))
retweeters.append(retweet["user"]["id_str"])
else:
selftweet = 1
check.append(len(retweeters))
self.append(selftweet)
except tweepy.TweepError:
check.append(0)
self.append(0)
callsRT += 1
# Writing results to files
with open('check.json', 'w') as file:
json.dump(check, file)
with open('self.json', 'w') as file:
json.dump(self, file)
with open('allretweeters.json', 'w') as file:
json.dump(allretweeters, file)
# Printing tweet ids, retweet counts,
# retweeters obtained, and whether a self tweet is included
for a, b, c, d in zip(tweetids,retweets,check,self):
print(a, b, c, d)
len(allretweeters)
allretweeters
"""
Explanation: <h2>3. Pulling retweeters</h2>
End of explanation
"""
# Assigning edge weight to be number of tweets retweeted
weight = Counter()
for (i, j) in allretweeters:
weight[(i, j)] +=1
weight
# Defining weighted edges
weighted_edges = list(weight.items())
weighted_edges
# Defining the network object
G = networkx.Graph()
G.add_edges_from([x[0] for x in weighted_edges])
# Visualizing the network
networkx.draw(G, width=[x[1] for x in weighted_edges])
"""
Explanation: <h2>4. Visualizing the network of retweeters</h2>
End of explanation
"""
# Defining the set of unique retweeters
unique = [x[0][1] for x in weighted_edges]
len(unique)
unique
callsUT
# Collecting and storing the tweets of retweeters
alter = []
alters = []
for retweeter in unique:
try:
if callsUT >= 900:
time.sleep(900)
alter_raw = api.user_timeline(retweeter, count = A, include_rts = False, exclude_replies = True)
alter = [altertweet._json for altertweet in alter_raw]
alters.append(alter)
except tweepy.TweepError:
print("fail!")
callsUT += 1
with open('alters.json', 'w') as file:
json.dump(alters, file)
callsUT
len(alters)
# Printing the number of tweets pulled for each retweeter
for alt in alters:
print(len(alt))
# Storing and printing alter ids, tweet ids, and retweet counts
altids = []
alttweetids = []
altretweets = []
for alt in alters:
for alttweet in alt:
altids.append(alttweet["user"]["id_str"])
alttweetids.append(alttweet["id_str"])
altretweets.append(alttweet["retweet_count"])
print(alttweet["user"]["id_str"],alttweet["id_str"],alttweet["retweet_count"])
"""
Explanation: <h2>5. Pulling retweeter tweets</h2>
End of explanation
"""
# Collecting retweeters of alter tweets
allalt = []
altself = []
altcheck = []
for alt in alters:
for alttweet in alt:
altid = alttweet["user"]["id_str"]
altretweeters = []
try:
selftweet = 0
if callsRT >= 75:
time.sleep(900)
altrt_raw = api.retweets(alttweet["id_str"])
altrt = [altretweet._json for altretweet in altrt_raw]
for retweet in altrt:
if retweet["user"]["id_str"]!=altid:
allalt.append((altid,retweet["user"]["id_str"]))
altretweeters.append(retweet["user"]["id_str"])
else:
selftweet = 1
altcheck.append(len(altretweeters))
altself.append(selftweet)
except tweepy.TweepError:
altcheck.append(0)
altself.append(0)
callsRT += 1
# Writing results to files
with open('altcheck.json', 'w') as file:
json.dump(altcheck, file)
with open('altself.json', 'w') as file:
json.dump(altself, file)
with open('altretweeters.json', 'w') as file:
json.dump(altretweeters, file)
with open('allalt.json', 'w') as file:
json.dump(allalt, file)
# Printing alter user ids, tweet ids, retweet counts,
# retweeters obtained, and whether a self tweet is included
for a, b, c, d, e in zip(altids,alttweetids,altretweets,altcheck,altself):
print(a, b, c, d, e)
len(allalt)
allalt
"""
Explanation: <h2>6. Pulling retweeters of retweeters</h2>
End of explanation
"""
weight = Counter()
for (i, j) in allalt:
weight[(i, j)] +=1
weight
all_edges = weighted_edges + list(weight.items())
all_edges
# Defining the full network object
G = networkx.Graph()
G.add_edges_from([x[0] for x in all_edges])
# Visualizing the full network
networkx.draw(G, width=[x[1] for x in all_edges])
"""
Explanation: <h2>7. Visualizing the full network of retweeters</h2>
End of explanation
"""
|
dvklopfenstein/PrincetonAlgorithms | notebooks/ElemSymbolTbls.ipynb | gpl-2.0 | # Setup for running examples
import sys
import os
sys.path.insert(0, '{GIT}/PrincetonAlgorithms/py'.format(GIT=os.environ['GIT']))
from AlgsSedgewickWayne.BST import BST
# Function to convert keys to key-value pairs where
# 1. the key is the letter and
# 2. the value is the index into the key list
get_kv = lambda keys: [(k, v) for v, k in enumerate(keys)]
"""
Explanation: Elementary Symbol Tables
Python Code
Symbol Table API
A Symbol Table is a collection of key-value pairs, where the key is the Symbol.
1.1) Date.py is an example of an user-created immutable type which can be used as a key
1.2) Client for ST.py: FrequencyCounter.py
Elementary Symbol Table Implementations
2.1) SequentialSearchST.py, an unordered linked-list
2.2) BinarySearchST.py, ordered array. Fast lookup (slow insert)
Ordered Operations: get, put, delete, size, min, max, floor, ceiling, rank, etc.
3.1) ST.py
Binary Search Trees A binary tree in symmetric order
A classic data structure that enables us to provide efficient
implementations of Symbol Table algorithms
4.1) BST.py
Ordered Operations in BSTs
Deletion in BSTs
Table of Contents for Examples
EX1 Order of "put"s determine tree shape
Examples
EX1 Order of "put"s determine tree shape
14:30 There are many different BSTs that correspond to the same set of keys.
The number of compares depends on the order in which the keys come in.
End of explanation
"""
# All of these will create the same best case tree shape
# Each example has the same keys, but different values
BST(get_kv(['H', 'C', 'S', 'A', 'E', 'R', 'X'])).wr_png("BST_bc0.png")
BST(get_kv(['H', 'S', 'X', 'R', 'C', 'E', 'A'])).wr_png("BST_bc1.png")
BST(get_kv(['H', 'C', 'A', 'E', 'S', 'R', 'X'])).wr_png("BST_bc2.png")
"""
Explanation: Tree shape: Best case
End of explanation
"""
# These will create worst case tree shapes
BST(get_kv(['A', 'C', 'E', 'H', 'R', 'S', 'X'])).wr_png("BST_wc_fwd.png")
BST(get_kv(['X', 'S', 'R', 'H', 'E', 'C', 'A'])).wr_png("BST_wc_rev.png")
"""
Explanation: Best Case Tree Shape
Tree shape: Worst case
End of explanation
"""
|
opengeostat/pygslib | pygslib/Ipython_templates/backtr_raw.ipynb | mit | #general imports
import matplotlib.pyplot as plt
import pygslib
from matplotlib.patches import Ellipse
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
"""
Explanation: Testing the back normalscore transformation
End of explanation
"""
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat')
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
"""
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
"""
print (pygslib.gslib.__dist_transf.backtr.__doc__)
"""
Explanation: The nscore transformation table function
End of explanation
"""
transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])
print ('there was any error?: ', error!=0)
"""
Explanation: Get the transformation table
End of explanation
"""
mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False)
mydata['NS_Primary'].hist(bins=30)
"""
Explanation: Get the normal score transformation
Note that the declustering is applied on the transformation tables
End of explanation
"""
mydata['NS_Primary_BT'],error = pygslib.gslib.__dist_transf.backtr(mydata['NS_Primary'],
transin,transout,
ltail=1,utail=1,ltpar=0,utpar=60,
zmin=0,zmax=60,getrank=False)
print ('there was any error?: ', error!=0, error)
mydata[['Primary','NS_Primary_BT']].hist(bins=30)
mydata[['Primary','NS_Primary_BT', 'NS_Primary']].head()
"""
Explanation: Doing the back transformation
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/hadgem3-gc31-hm/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NERC
Source ID: HADGEM3-GC31-HM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (e.g. CAM 4.0, ARPEGE 3.2, ...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high top? High-top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISCCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISCCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observation Simulator Package (COSP) attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
kimkipyo/dss_git_kkp | 통계, 머신러닝 복습/160607화_12일차_(확률론적)선형 회귀 분석 Linear Regression Analysis/5.patsy 패키지 소개.ipynb | mit | from patsy import dmatrix, dmatrices
np.random.rand(5)
np.random.seed(0)
x1 = np.random.rand(5) + 10
x2 = np.random.rand(5) * 10
x1, x2
dmatrix("x1")
"""
Explanation: Introduction to the patsy Package
A preprocessing package for regression analysis
Provides encoding/transform/design-matrix features
Supports R-style formula strings
design matrix
dmatrix(formula[, data])
Takes an R-style formula string and builds the X (design) matrix
Automatically adds an intercept (bias) column
Looks up variables in the local namespace
If a pandas DataFrame is passed as the data parameter, variables are looked up among its column labels
End of explanation
"""
dmatrix("x1 - 1")
dmatrix("x1 + 0")
dmatrix("x1 + x2")
dmatrix("x1 + x2 - 1")
df = pd.DataFrame(np.array([x1, x2]).T, columns=["x1", "x2"])
df
dmatrix("x1 + x2 - 1", data=df)
"""
Explanation: R-style formula
| Symbol | Description |
|-|-|
|+| add an explanatory variable |
|-| remove an explanatory variable |
|1, 0| intercept (use to remove it) |
|:| interaction (product) |
|*| a*b = a + b + a:b |
|/| a/b = a + a:b |
|~| dependent ~ independent relationship |
End of explanation
"""
dmatrix("x1 + np.log(np.abs(x2))", data=df)
def doubleit(x):
return 2 * x
dmatrix("doubleit(x1)", data=df)
dmatrix("center(x1) + standardize(x2)", data=df)
"""
Explanation: Transforms
numpy function names can be used
user-defined functions can be used
patsy's built-in function names can be used
center(x)
standardize(x)
scale(x)
End of explanation
"""
dmatrix("x1 + x2", data=df)
dmatrix("I(x1 + x2)", data=df)
"""
Explanation: Protecting expressions with I()
Protects an expression from being interpreted with other formula operators
End of explanation
"""
dmatrix("x1 + I(x1**2) + I(x1**3) + I(x1**4)", data=df)
"""
Explanation: Polynomial linear regression
End of explanation
"""
df["a1"] = pd.Series(["a1", "a1", "a2", "a2", "a3", "a5"])
df["a2"] = pd.Series([1, 4, 5, 6, 8, 9])
df
dmatrix("a1", data=df)
dmatrix("a2", data=df)
dmatrix("C(a2)", data=df)
"""
Explanation: Categorical variables
End of explanation
"""
|
enakai00/jupyter_NikkeiLinux | No4/Figure3 - Basic Animations.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from numpy.random import randint
%matplotlib nbagg
"""
Explanation: [1-1] Import the modules used to create animations and set the mode that allows animations to be displayed.
End of explanation
"""
fig = plt.figure(figsize=(6,2))
subplot = fig.add_subplot(1,1,1)
subplot.set_xlim(0,50)
subplot.set_ylim(-1,1)
x = 0
images = []
for _ in range(50):
image = subplot.scatter([x],[0])
images.append([image])
x += 1
ani = animation.ArtistAnimation(fig, images, interval=100)
ani.save('animation01.gif', writer='imagemagick', fps=10)
"""
Explanation: [1-2] Draw an animation of a ball moving along the x-axis at a constant speed.
A GIF file of the animation, animation01.gif, is also created.
End of explanation
"""
fig = plt.figure(figsize=(6,4))
subplot = fig.add_subplot(1,1,1)
y1s, y2s, y3s = [], [], []
y1, y2, y3 = 0, 0, 0
images = []
for t in range(100):
y1s.append(y1)
y2s.append(y2)
y3s.append(y3)
image1, = subplot.plot(range(t+1), y1s, color='blue')
image2, = subplot.plot(range(t+1), y2s, color='green')
image3, = subplot.plot(range(t+1), y3s, color='red')
images.append([image1, image2, image3])
y1 += randint(-1,2)
y2 += randint(-1,2)
y3 += randint(-1,2)
ani = animation.ArtistAnimation(fig, images, interval=100)
ani.save('animation02.gif', writer='imagemagick', fps=10)
"""
Explanation: [1-3] Draw an animation of three random walks.
A GIF file of the animation, animation02.gif, is also created.
End of explanation
"""
|
batfish/pybatfish | docs/source/notebooks/interacting.ipynb | apache-2.0 | import pandas as pd
from pybatfish.client.session import Session
from pybatfish.datamodel import *
from pybatfish.datamodel.answer import *
from pybatfish.datamodel.flow import *
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
# Prevent rendering text between '$' as MathJax expressions
pd.set_option('display.html.use_mathjax', False)
# Configure all pybatfish loggers to use WARN level
import logging
logging.getLogger('pybatfish').setLevel(logging.WARN)
"""
Explanation: Interacting with the Batfish service
Python Imports
In your Python program (or shell) you will need to import Pybatfish modules.
The most common imports are shown below. Depending on your needs, this list may vary.
End of explanation
"""
bf = Session(host="localhost")
"""
Explanation: Sessions
The Batfish service may be running locally on your machine, or on a remote server.
The first step to analyzing your configurations is setting up the connection to the Batfish service.
End of explanation
"""
bf.set_network('example_dc')
"""
Explanation: Uploading configurations
Batfish is designed to analyze a series of snapshots of a network.
A network is a logical grouping of devices -- it may mean all of the devices in your network,
or a subset (e.g., all devices in a single datacenter.)
A snapshot is a state of the network at a given time.
A network may contain many snapshots, allowing you to understand the evolution of your network.
Let's say we will be working with our example datacenter:
End of explanation
"""
SNAPSHOT_DIR = '../../networks/example'
bf.init_snapshot(SNAPSHOT_DIR, name='snapshot-2020-01-01', overwrite=True)
"""
Explanation: Now you are ready to create your first snapshot. Batfish can ingest a variety of data in order to model your network,
so let's look at how you can package it as a snapshot.
Packaging snapshot data
Batfish expects snapshot data to be organized in a specific folder structure.
snapshot [top-level folder]
configs [folder with configurations files of network devices]
router1.cfg
router2.cfg
...
batfish [supplemental information (not device configurations)]
isp_config.json
...
See this snapshot for an example. For illustration, it contains some files that are not used by Batfish, e.g., example-network.png (network diagrams are not needed). It also contains information for host modeling, which need not be provided if you are not modeling hosts.
When you supply the snapshot as a zipped file, the top-level folder (called "snapshot" above) should be part of the zip archive.
Details on the format of configuration files and supplemental information are described here
Initializing a new snapshot
End of explanation
"""
bf.set_network('example_dc')
bf.set_snapshot('snapshot-2020-01-01')
"""
Explanation: Analyzing an existing snapshot
If you would like to analyze a previously-initialized snapshot, you do not need to re-initialize it.
Simply set the network and snapshot by name:
End of explanation
"""
bf.q.initIssues().answer()
"""
Explanation: Running Questions
After initializing (or setting) a snapshot,
you can query the Batfish service to retrieve information about the snapshot.
Batfish exposes a series of questions to users.
With the help of these questions you can examine data about you network as a whole,
or individual devices, in a vendor-agnostic way.
The general pattern for Batfish questions is:
bf.q.<question_name>() Creates a question (with parameters, if applicable).
bf.q.<question_name>().answer() sends the question to the Batfish service and returns the answer
bf.q.<question_name>().answer().frame() converts the answer into a Pandas dataframe for easy data manipulation
This pattern is demonstrated via the initIssues question below.
Initialization issues
While Batfish supports a wide variety of vendors and configuration constructs,
it may not fully support your configuration files.
We recommend checking the status of the snapshot you just initialized by running bf.q.initIssues:
End of explanation
"""
issues = bf.q.initIssues().answer().frame()
issues[issues['Details'].apply(lambda x: "Could not determine update source for BGP neighbor:" not in x)]
"""
Explanation: Given the answer of a question, you may want to focus on certain rows/columns or ignore certain rows. This is easy via Pandas dataframe manipulation. For instance, if you want to ignore all rows that warn about BGP update source, you may do the following.
End of explanation
"""
import logging
logging.getLogger("pybatfish").setLevel(logging.WARN)
"""
Explanation: Now that you know the basics of interacting with the Batfish service, you can 1) explore a variety of questions that enable you to analyze your network in great detail; and 2) check out code examples for a range of use cases.
Logging
The server-side logs are accessible via Docker. Assuming your container is named "batfish", run docker logs batfish to view the logs. See the documentation for the docker logs command for helpful command-line options.
The default client-side logging (by pybatfish) is verbose to inform new users about what is happening. To control logging verbosity, use the following snippet toward the top of your Python script. Replace logging.WARN with your preferred logging level.
End of explanation
"""
|
santipuch590/deeplearning-tf | dl_tf_BDU/3.RNN/ML0120EN-3.2-Review-LSTM-basics.ipynb | mit | import numpy as np
import tensorflow as tf
tf.reset_default_graph()
sess = tf.InteractiveSession()
"""
Explanation: <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>RECURRENT NETWORKS IN DEEP LEARNING</font></h1>
The Long Short-Term Memory Model
Hello and welcome to this notebook. In this notebook, we will go over concepts of the Long Short-Term Memory (LSTM) model, a refinement of the original Recurrent Neural Network model. By the end of this notebook, you should be able to understand the Long Short-Term Memory model, the problems it solves and the benefits it brings, and its inner workings and calculations.
The Problem to be Solved
Long Short-Term Memory, or LSTM for short, is one of the proposed solutions or upgrades to the Recurrent Neural Network model. The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of maintaining context for Sequential data -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is re-fed into the network.
<img src=https://ibm.box.com/shared/static/v7p90neiaqghmpwawpiecmz9n7080m59.png width="720"/>
<center>Representation of a Recurrent Neural Network</center>
However, this model has some problems. It's very computationally expensive to maintain the state for a large number of units, even more so over long periods of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to problems with their Gradient Descent optimizer -- their gradients either grow exponentially (Exploding Gradient) or shrink to near zero and stabilize (Vanishing Gradient), both of which greatly harm a model's learning capability.
Long Short-Term Memory: What is it?
To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable.
The Long Short-Term Memory, as it was called, was an abstraction of how computer memory works. It is "bundled" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for inputting data into the information cell, one is for outputting data from the input cell, and the last one is to keep or forget data depending on the needs of the network.
Thanks to that, it not only solves the problem of keeping states, because the network can choose to forget data whenever information is not needed, it also solves the gradient problems, since the Logistic Gates have a very nice derivative.
Long Short-Term Memory Architecture
As seen before, the Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. The name for these gates vary from place to place, but the most usual names for them are the "Input" or "Write" Gate, which handles the writing of data into the information cell, the "Output" or "Read" Gate, which handles the sending of data back onto the Recurrent Network, and the "Keep" or "Forget" Gate, which handles the maintaining and modification of the data stored in the information cell.
<img src=https://ibm.box.com/shared/static/zx10duv5egw0baw6gh2hzsgr8ex45gsg.png width="720"/>
<center>Diagram of the Long Short-Term Memory Unit</center>
The three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed onto the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.
For example, a typical flow of operations for the LSTM unit is as such: First, the Keep Gate has to decide whether to keep or forget the data currently stored in memory. It receives both the input and the state of the Recurrent Network, and passes them through its Sigmoid activation. A value of 1 means that the LSTM unit should keep the data stored perfectly and a value of 0 means that it should forget it entirely. Consider $S_{t-1}$ as the incoming (previous) state, $x_t$ as the incoming input, and $W_k$, $B_k$ as the weight and bias for the Keep Gate. Additionally, consider $Old_{t-1}$ as the data previously in memory. What happens can be summarized by this equation:
<br/>
<font size = 4><strong>
$$K_t = \sigma(W_k \times [S_{t-1},x_t] + B_k)$$
$$Old_t = K_t \times Old_{t-1}$$
</strong></font>
<br/>
As you can see, $Old_{t-1}$ was multiplied by value was returned by the Keep Gate -- this value is written in the memory cell. Then, the input and state are passed on to the Input Gate, in which there is another Sigmoid activation applied. Concurrently, the input is processed as normal by whatever processing unit is implemented in the network, and then multiplied by the Sigmoid activation's result, much like the Keep Gate. Consider $W_i$ and $B_i$ as the weight and bias for the Input Gate, and $C_t$ the result of the processing of the inputs by the Recurrent Network.
<br/>
<font size = 4><strong>
$$I_t = \sigma(W_i\times[S_{t-1},x_t]+B_i)$$
$$New_t = I_t \times C_t$$
</strong></font>
<br/>
$New_t$ is the new data to be input into the memory cell. This is then added to whatever value is still stored in memory.
<br/>
<font size = 4><strong>
$$Cell_t = Old_t + New_t$$
</strong></font>
<br/>
We now have the candidate data which is to be kept in the memory cell. The conjunction of the Keep and Input gates work in an analog manner, making it so that it is possible to keep part of the old data and add only part of the new data. Consider however, what would happen if the Forget Gate was set to 0 and the Input Gate was set to 1:
<br/>
<font size = 4><strong>
$$Old_t = 0 \times Old_{t-1}$$
$$New_t = 1 \times C_t$$
$$Cell_t = C_t$$
</strong></font>
<br/>
The old data would be totally forgotten and the new data would overwrite it completely.
The Output Gate functions in a similar manner. To decide what we should output, we take the input data and state and pass it through a Sigmoid function as usual. The contents of our memory cell, however, are pushed onto a Tanh function to bind them between a value of -1 to 1. Consider $W_o$ and $B_o$ as the weight and bias for the Output Gate.
<br/>
<font size = 4><strong>
$$O_t = \sigma(W_o \times [S_{t-1},x_t] + B_o)$$
$$Output_t = O_t \times tanh(Cell_t)$$
</strong></font>
<br/>
And that $Output_t$ is what is output into the Recurrent Network.
<br/>
<img width="384" src="https://ibm.box.com/shared/static/rkr60528r3mz2fmtlpah8lqpg7mcsy0g.png">
<center>The Logistic Function plotted</center>
As mentioned, all three gates are logistic. The reason is that it is very easy to backpropagate through them, and as such, it is possible for the model to learn exactly how it is supposed to use this structure. This is one of the reasons LSTM is such a strong structure. Additionally, this solves the gradient problems by being able to manipulate values through the gates themselves -- by passing the inputs and outputs through the gates, we now have an easily differentiable function modifying our inputs.
In regards to the problem of storing many states over a long period of time, LSTM handles this perfectly by only keeping whatever information is necessary and forgetting it whenever it is not needed anymore. Therefore, LSTMs are a very elegant solution to both problems.
LSTM basics
Let's first create a tiny LSTM network sample to understand the architecture of LSTM networks.
We need to import the necessary modules for our code. We need numpy and tensorflow, obviously. Additionally, we could directly import the tensorflow.models.rnn.rnn module, which includes the function for building RNNs, and tensorflow.models.rnn.ptb.reader, which is the helper module for getting the input data from the dataset we just downloaded.
If you want to learn more, take a look at https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/
End of explanation
"""
LSTM_CELL_SIZE = 3
with tf.variable_scope('basic_lstm_cell'):
try:
lstm_cell = tf.contrib.rnn.LSTMCell(LSTM_CELL_SIZE, state_is_tuple=True, reuse=False)
except:
print('LSTM already exists in the current scope. Reset the TF graph to re-create it.')
sample_input = tf.constant([[1,2,3,4,3,2],[3,2,2,2,2,2]],dtype=tf.float32)
state = (tf.zeros([2,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
sess.run(tf.global_variables_initializer())
print (sess.run(sample_input))
"""
Explanation: We want to create a network that has only one LSTM cell. The cell has 3 hidden units and processes a batch of 2 samples, so state is a tuple with 2 elements, each of size [2 x 3] (batch size x cell size): one carries information to the next time step (the cell state), and the other passes the state to the next layer/output (the hidden state).
End of explanation
"""
cell_state, h_state = sess.run(state)
print('Cell state shape: ', cell_state.shape)
print('Hidden state shape: ', h_state.shape)
print(cell_state)
print(h_state)
"""
Explanation: As we can see, the states are initialized with zeros:
End of explanation
"""
print (sess.run(output))
c_new, h_new = sess.run(state_new)
print(c_new)
print(h_new)
"""
Explanation: Let's look at the output and state of the network:
End of explanation
"""
sample_LSTM_CELL_SIZE = 3  # 3 hidden units (here coincidentally equal to the number of time steps)
sample_batch_size = 2
sample_input = tf.constant([
[[1,2,3,4,3,2],
[1,2,1,1,1,2],
[1,2,2,2,2,2]],
[[1,2,3,4,3,2],
[3,2,2,1,1,2],
[0,0,0,0,3,2]]
],dtype=tf.float32)
num_layers = 3
with tf.variable_scope("stacked_lstm"):
try:
multi_lstm_cell = tf.contrib.rnn.MultiRNNCell(
[tf.contrib.rnn.BasicLSTMCell(sample_LSTM_CELL_SIZE, state_is_tuple=True) for _ in range(num_layers)]
)
_initial_state = multi_lstm_cell.zero_state(sample_batch_size, tf.float32)
outputs, new_state = tf.nn.dynamic_rnn(multi_lstm_cell,
sample_input,
dtype=tf.float32,
initial_state=_initial_state)
except ValueError:
print('Stacked LSTM already exists in the current scope. Reset the TF graph to re-create it.')
sess.run(tf.global_variables_initializer())
print (sess.run(sample_input))
sess.run(_initial_state)
print (sess.run(new_state))
print (sess.run(outputs))
"""
Explanation: Stacked LSTM basics
What if we want to stack several LSTM layers to build a deeper RNN?
The input should be a Tensor of shape: [batch_size, max_time, ...], which in our case would be (2, 3, 6)
End of explanation
"""
|
anthonyng2/FX-Trading-with-Python-and-Oanda | Oanda v1 REST-oandapy/05.00 Trade Management.ipynb | mit | from datetime import datetime, timedelta
import pandas as pd
import oandapy
import configparser
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
trade_expire = datetime.now() + timedelta(days=1)
trade_expire = trade_expire.isoformat("T") + "Z"
fx_list = ["EUR_USD", "USD_CHF", "GBP_USD"]
for oo in fx_list:
response = oanda.create_order(account_id,
instrument = oo,
units=1000,
side="buy",
type="market",
expiry=trade_expire)
response = oanda.get_trades(account_id)
pd.DataFrame((response['trades']))
trade_id = response['trades'][0]['id']
"""
Explanation: <!--NAVIGATION-->
< Order Management | Contents | Position Management >
Trade Management
Trades
Getting a List of all Open Trades
get_trades(self, account_id, **params)
End of explanation
"""
response = oanda.get_trade(account_id,trade_id=trade_id)
print(response)
"""
Explanation: Get Specific Trade Information
get_trade(self, account_id, trade_id, **params)
End of explanation
"""
response = oanda.modify_trade(account_id,trade_id=trade_id, stopLoss=1.15)
print(response)
"""
Explanation: You can also modify or close a trade by calling the following APIs:
modify_trade(self, account_id, trade_id, **params)
close_trade(self, account_id, trade_id, **params)
Modify Trade
modify_trade(self, account_id, trade_id, **params)
End of explanation
"""
response = oanda.close_trade(account_id, instrument='EUR_USD',
trade_id=trade_id)
print(response)
response = oanda.get_trades(account_id)
pd.DataFrame(response['trades'])
"""
Explanation: Close An Open Trade
close_trade(self, account_id, trade_id, **params)
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Marks/Object Model/Pie.ipynb | apache-2.0 | data = np.random.rand(3)
pie = Pie(sizes=data, display_labels='outside', labels=list(string.ascii_uppercase))
fig = Figure(marks=[pie], animation_duration=1000)
fig
"""
Explanation: Basic Pie Chart
End of explanation
"""
n = np.random.randint(1, 10)
pie.sizes = np.random.rand(n)
"""
Explanation: Update Data
End of explanation
"""
with pie.hold_sync():
pie.display_values = True
pie.values_format = '.1f'
"""
Explanation: Display Values
End of explanation
"""
pie.sort = True
"""
Explanation: Enable sort
End of explanation
"""
pie.selected_style = {'opacity': 1, 'stroke': 'white', 'stroke-width': 2}
pie.unselected_style = {'opacity': 0.2}
pie.selected = [1]
pie.selected = None
"""
Explanation: Set different styles for selected slices
End of explanation
"""
pie.label_color = 'Red'
pie.font_size = '20px'
pie.font_weight = 'bold'
"""
Explanation: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
End of explanation
"""
pie1 = Pie(sizes=np.random.rand(6), inner_radius=0.05)
fig1 = Figure(marks=[pie1], animation_duration=1000)
fig1
"""
Explanation: Update pie shape and style
End of explanation
"""
# As of now, the radius sizes are absolute, in pixels
with pie1.hold_sync():
pie1.radius = 150
pie1.inner_radius = 100
# Angles are in degrees, 0 being the top vertical
with pie1.hold_sync():
pie1.start_angle = -90
pie1.end_angle = 90
"""
Explanation: Change pie dimensions
End of explanation
"""
pie1.y = 0.1
pie1.x = 0.6
pie1.radius = 180
"""
Explanation: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
End of explanation
"""
pie1.stroke = 'brown'
pie1.colors = ['orange', 'darkviolet']
pie1.opacities = [.1, 1]
fig1
"""
Explanation: Change slice styles
Pie slice colors cycle through the colors and opacities attributes, as with the Lines mark.
End of explanation
"""
from bqplot import ColorScale, ColorAxis
Nslices = 7
size_data = np.random.rand(Nslices)
color_data = np.random.randn(Nslices)
sc = ColorScale(scheme='Reds')
# The ColorAxis gives a visual representation of its ColorScale
ax = ColorAxis(scale=sc)
pie2 = Pie(sizes=size_data, scales={'color': sc}, color=color_data)
Figure(marks=[pie2], axes=[ax])
"""
Explanation: Represent an additional dimension using Color
The Pie mark's colors can be determined by data passed to the color attribute.
A ColorScale with the desired color scheme must also be passed.
End of explanation
"""
from datetime import datetime
from bqplot.traits import convert_to_date
from bqplot import DateScale, LinearScale, Axis
avg_precipitation_days = [(d/30., 1-d/30.) for d in [2, 3, 4, 6, 12, 17, 23, 22, 15, 4, 1, 1]]
temperatures = [9, 12, 16, 20, 22, 23, 22, 22, 22, 20, 15, 11]
dates = [datetime(2010, k, 1) for k in range(1, 13)]
sc_x = DateScale()
sc_y = LinearScale()
ax_x = Axis(scale=sc_x, label='Month', tick_format='%b')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Average Temperature')
pies = [Pie(sizes=precipit, x=date, y=temp,display_labels='none',
scales={'x': sc_x, 'y': sc_y}, radius=30., stroke='navy',
apply_clip=False, colors=['navy', 'navy'], opacities=[1, .1])
for precipit, date, temp in zip(avg_precipitation_days, dates, temperatures)]
Figure(title='Kathmandu Precipitation', marks=pies, axes=[ax_x, ax_y],
padding_x=.05, padding_y=.1)
"""
Explanation: Position the Pie using custom scales
Pies can be positioned, via the x and y attributes,
using either absolute figure scales or custom 'x' or 'y' scales
End of explanation
"""
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install -q -U tensorflow-text
!pip install -q tensorflow_datasets
import collections
import os
import pathlib
import re
import string
import sys
import tempfile
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import tensorflow_text as text
import tensorflow as tf
tf.get_logger().setLevel('ERROR')
pwd = pathlib.Path.cwd()
"""
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/subwords_tokenizer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Subword tokenizers
This tutorial demonstrates how to generate a subword vocabulary from a dataset, and use it to build a text.BertTokenizer from the vocabulary.
The main advantage of a subword tokenizer is that it interpolates between word-based and character-based tokenization. Common words get a slot in the vocabulary, but the tokenizer can fall back to word pieces and individual characters for unknown words.
Objective: At the end of this tutorial you'll have built a complete end-to-end wordpiece tokenizer and detokenizer from scratch, and saved it as a saved_model that you can load and use in this translation tutorial.
Overview
The tensorflow_text package includes TensorFlow implementations of many common tokenizers. This includes three subword-style tokenizers:
text.BertTokenizer - The BertTokenizer class is a higher level interface. It includes BERT's token splitting algorithm and a WordPieceTokenizer. It takes sentences as input and returns token-IDs.
text.WordpieceTokenizer - The WordPieceTokenizer class is a lower level interface. It only implements the WordPiece algorithm. You must standardize and split the text into words before calling it. It takes words as input and returns token-IDs.
text.SentencepieceTokenizer - The SentencepieceTokenizer requires a more complex setup. Its initializer requires a pre-trained sentencepiece model. See the google/sentencepiece repository for instructions on how to build one of these models. It can accept sentences as input when tokenizing.
This tutorial builds a Wordpiece vocabulary in a top-down manner, starting from existing words. This process doesn't work for Japanese, Chinese, or Korean since these languages don't have clear multi-character units. To tokenize these languages, consider using text.SentencepieceTokenizer, text.UnicodeCharTokenizer or this approach.
Setup
End of explanation
"""
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
"""
Explanation: Download the dataset
Fetch the Portuguese/English translation dataset from tfds:
End of explanation
"""
for pt, en in train_examples.take(1):
print("Portuguese: ", pt.numpy().decode('utf-8'))
print("English: ", en.numpy().decode('utf-8'))
"""
Explanation: This dataset produces Portuguese/English sentence pairs:
End of explanation
"""
train_en = train_examples.map(lambda pt, en: en)
train_pt = train_examples.map(lambda pt, en: pt)
"""
Explanation: Note a few things about the example sentences above:
* They're lower case.
* There are spaces around the punctuation.
* It's not clear whether (or which) Unicode normalization is being used.
End of explanation
"""
from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab
"""
Explanation: Generate the vocabulary
This section generates a wordpiece vocabulary from a dataset. If you already have a vocabulary file and just want to see how to build a text.BertTokenizer or text.Wordpiece tokenizer with it then you can skip ahead to the Build the tokenizer section.
Note: The vocabulary generation code used in this tutorial is optimized for simplicity. If you need a more scalable solution consider using the Apache Beam implementation available in tools/wordpiece_vocab/generate_vocab.py
The vocabulary generation code is included in the tensorflow_text pip package. It is not imported by default, so you need to import it manually:
End of explanation
"""
bert_tokenizer_params=dict(lower_case=True)
reserved_tokens=["[PAD]", "[UNK]", "[START]", "[END]"]
bert_vocab_args = dict(
# The target vocabulary size
vocab_size = 8000,
# Reserved tokens that must be included in the vocabulary
reserved_tokens=reserved_tokens,
# Arguments for `text.BertTokenizer`
bert_tokenizer_params=bert_tokenizer_params,
# Arguments for `wordpiece_vocab.wordpiece_tokenizer_learner_lib.learn`
learn_params={},
)
%%time
pt_vocab = bert_vocab.bert_vocab_from_dataset(
train_pt.batch(1000).prefetch(2),
**bert_vocab_args
)
"""
Explanation: The bert_vocab.bert_vocab_from_dataset function will generate the vocabulary.
There are many arguments you can set to adjust its behavior. For this tutorial, you'll mostly use the defaults. If you want to learn more about the options, first read about the algorithm, and then have a look at the code.
This takes about 2 minutes.
End of explanation
"""
print(pt_vocab[:10])
print(pt_vocab[100:110])
print(pt_vocab[1000:1010])
print(pt_vocab[-10:])
"""
Explanation: Here are some slices of the resulting vocabulary.
End of explanation
"""
def write_vocab_file(filepath, vocab):
with open(filepath, 'w') as f:
for token in vocab:
print(token, file=f)
write_vocab_file('pt_vocab.txt', pt_vocab)
"""
Explanation: Write a vocabulary file:
End of explanation
"""
%%time
en_vocab = bert_vocab.bert_vocab_from_dataset(
train_en.batch(1000).prefetch(2),
**bert_vocab_args
)
print(en_vocab[:10])
print(en_vocab[100:110])
print(en_vocab[1000:1010])
print(en_vocab[-10:])
"""
Explanation: Use that function to generate a vocabulary from the English data:
End of explanation
"""
write_vocab_file('en_vocab.txt', en_vocab)
!ls *.txt
"""
Explanation: Here are the two vocabulary files:
End of explanation
"""
pt_tokenizer = text.BertTokenizer('pt_vocab.txt', **bert_tokenizer_params)
en_tokenizer = text.BertTokenizer('en_vocab.txt', **bert_tokenizer_params)
"""
Explanation: Build the tokenizer
<a id="build_the_tokenizer"></a>
The text.BertTokenizer can be initialized by passing the vocabulary file's path as the first argument (see the section on tf.lookup for other options):
End of explanation
"""
for pt_examples, en_examples in train_examples.batch(3).take(1):
for ex in en_examples:
print(ex.numpy())
"""
Explanation: Now you can use it to encode some text. Take a batch of 3 examples from the English data:
End of explanation
"""
# Tokenize the examples -> (batch, word, word-piece)
token_batch = en_tokenizer.tokenize(en_examples)
# Merge the word and word-piece axes -> (batch, tokens)
token_batch = token_batch.merge_dims(-2,-1)
for ex in token_batch.to_list():
print(ex)
"""
Explanation: Run it through the BertTokenizer.tokenize method. Initially, this returns a tf.RaggedTensor with axes (batch, word, word-piece):
End of explanation
"""
# Lookup each token id in the vocabulary.
txt_tokens = tf.gather(en_vocab, token_batch)
# Join with spaces.
tf.strings.reduce_join(txt_tokens, separator=' ', axis=-1)
"""
Explanation: If you replace the token IDs with their text representations (using tf.gather) you can see that in the first example the words "searchability" and "serendipity" have been decomposed into "search ##ability" and "s ##ere ##nd ##ip ##ity":
End of explanation
"""
words = en_tokenizer.detokenize(token_batch)
tf.strings.reduce_join(words, separator=' ', axis=-1)
"""
Explanation: To re-assemble words from the extracted tokens, use the BertTokenizer.detokenize method:
End of explanation
"""
START = tf.argmax(tf.constant(reserved_tokens) == "[START]")
END = tf.argmax(tf.constant(reserved_tokens) == "[END]")
def add_start_end(ragged):
count = ragged.bounding_shape()[0]
starts = tf.fill([count,1], START)
ends = tf.fill([count,1], END)
return tf.concat([starts, ragged, ends], axis=1)
words = en_tokenizer.detokenize(add_start_end(token_batch))
tf.strings.reduce_join(words, separator=' ', axis=-1)
"""
Explanation: Note: BertTokenizer.tokenize/BertTokenizer.detokenize does not round
trip losslessly. The result of detokenize will not, in general, have the
same content or offsets as the input to tokenize. This is because the
"basic tokenization" step, which splits the strings into words before
applying the WordpieceTokenizer, includes irreversible
steps like lower-casing and splitting on punctuation. WordpieceTokenizer,
on the other hand, is reversible.
Customization and export
This tutorial builds the text tokenizer and detokenizer used by the Transformer tutorial. This section adds methods and processing steps to simplify that tutorial, and exports the tokenizers using tf.saved_model so they can be imported by the other tutorials.
Custom tokenization
The downstream tutorials both expect the tokenized text to include [START] and [END] tokens.
The reserved_tokens reserve space at the beginning of the vocabulary, so [START] and [END] have the same indexes for both languages:
End of explanation
"""
def cleanup_text(reserved_tokens, token_txt):
# Drop the reserved tokens, except for "[UNK]".
bad_tokens = [re.escape(tok) for tok in reserved_tokens if tok != "[UNK]"]
bad_token_re = "|".join(bad_tokens)
bad_cells = tf.strings.regex_full_match(token_txt, bad_token_re)
result = tf.ragged.boolean_mask(token_txt, ~bad_cells)
# Join them into strings.
result = tf.strings.reduce_join(result, separator=' ', axis=-1)
return result
en_examples.numpy()
token_batch = en_tokenizer.tokenize(en_examples).merge_dims(-2,-1)
words = en_tokenizer.detokenize(token_batch)
words
cleanup_text(reserved_tokens, words).numpy()
"""
Explanation: Custom detokenization
Before exporting the tokenizers there are a couple of things you can clean up for the downstream tutorials:
They want to generate clean text output, so drop reserved tokens like [START], [END] and [PAD].
They're interested in complete strings, so apply a string join along the words axis of the result.
End of explanation
"""
class CustomTokenizer(tf.Module):
def __init__(self, reserved_tokens, vocab_path):
self.tokenizer = text.BertTokenizer(vocab_path, lower_case=True)
self._reserved_tokens = reserved_tokens
self._vocab_path = tf.saved_model.Asset(vocab_path)
vocab = pathlib.Path(vocab_path).read_text().splitlines()
self.vocab = tf.Variable(vocab)
## Create the signatures for export:
# Include a tokenize signature for a batch of strings.
self.tokenize.get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string))
# Include `detokenize` and `lookup` signatures for:
# * `Tensors` with shapes [tokens] and [batch, tokens]
# * `RaggedTensors` with shape [batch, tokens]
self.detokenize.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.detokenize.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
# These `get_*` methods take no arguments
self.get_vocab_size.get_concrete_function()
self.get_vocab_path.get_concrete_function()
self.get_reserved_tokens.get_concrete_function()
@tf.function
def tokenize(self, strings):
enc = self.tokenizer.tokenize(strings)
# Merge the `word` and `word-piece` axes.
enc = enc.merge_dims(-2,-1)
enc = add_start_end(enc)
return enc
@tf.function
def detokenize(self, tokenized):
words = self.tokenizer.detokenize(tokenized)
return cleanup_text(self._reserved_tokens, words)
@tf.function
def lookup(self, token_ids):
return tf.gather(self.vocab, token_ids)
@tf.function
def get_vocab_size(self):
return tf.shape(self.vocab)[0]
@tf.function
def get_vocab_path(self):
return self._vocab_path
@tf.function
def get_reserved_tokens(self):
return tf.constant(self._reserved_tokens)
"""
Explanation: Export
The following code block builds a CustomTokenizer class to contain the text.BertTokenizer instances, the custom logic, and the @tf.function wrappers required for export.
End of explanation
"""
tokenizers = tf.Module()
tokenizers.pt = CustomTokenizer(reserved_tokens, 'pt_vocab.txt')
tokenizers.en = CustomTokenizer(reserved_tokens, 'en_vocab.txt')
"""
Explanation: Build a CustomTokenizer for each language:
End of explanation
"""
model_name = 'ted_hrlr_translate_pt_en_converter'
tf.saved_model.save(tokenizers, model_name)
"""
Explanation: Export the tokenizers as a saved_model:
End of explanation
"""
reloaded_tokenizers = tf.saved_model.load(model_name)
reloaded_tokenizers.en.get_vocab_size().numpy()
tokens = reloaded_tokenizers.en.tokenize(['Hello TensorFlow!'])
tokens.numpy()
text_tokens = reloaded_tokenizers.en.lookup(tokens)
text_tokens
round_trip = reloaded_tokenizers.en.detokenize(tokens)
print(round_trip.numpy()[0].decode('utf-8'))
"""
Explanation: Reload the saved_model and test the methods:
End of explanation
"""
!zip -r {model_name}.zip {model_name}
!du -h *.zip
"""
Explanation: Archive it for the translation tutorials:
End of explanation
"""
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.TextFileInitializer(
filename='pt_vocab.txt',
key_dtype=tf.string,
key_index = tf.lookup.TextFileIndex.WHOLE_LINE,
value_dtype = tf.int64,
value_index=tf.lookup.TextFileIndex.LINE_NUMBER))
pt_tokenizer = text.BertTokenizer(pt_lookup)
"""
Explanation: <a id="algorithm"></a>
Optional: The algorithm
It's worth noting here that there are two versions of the WordPiece algorithm: bottom-up and top-down. In both cases the goal is the same: "Given a training corpus and a number of desired
tokens D, the optimization problem is to select D wordpieces such that the resulting corpus is minimal in the
number of wordpieces when segmented according to the chosen wordpiece model."
The original bottom-up WordPiece algorithm is based on byte-pair encoding. Like BPE, it starts with the alphabet and iteratively combines common bigrams to form word pieces and words.
TensorFlow Text's vocabulary generator follows the top-down implementation from BERT. It starts with words and breaks them down into smaller components until they hit the frequency threshold, or can't be broken down further. The next section describes this in detail. For Japanese, Chinese and Korean this top-down approach doesn't work, since there are no explicit word units to start with. For those languages you need a different approach.
Choosing the vocabulary
The top-down WordPiece generation algorithm takes in a set of (word, count) pairs and a threshold T, and returns a vocabulary V.
The algorithm is iterative. It is run for k iterations, where typically k = 4, but only the first two are really important. The third and fourth (and beyond) are just identical to the second. Note that each step of the binary search (used to tune the count threshold so that the final vocabulary hits the desired size) runs the algorithm from scratch for k iterations.
The iterations are described below:
First iteration
Iterate over every word and count pair in the input, denoted as (w, c).
For each word w, generate every substring, denoted as s. E.g., for the
word human, we generate {h, hu, hum, huma,
human, ##u, ##um, ##uma, ##uman, ##m, ##ma, ##man, ##a, ##an, ##n}.
Maintain a substring-to-count hash map, and increment the count of each s
by c. E.g., if we have (human, 113) and (humas, 3) in our input, the
count of s = huma will be 113+3=116.
Once we've collected the counts of every substring, iterate over the (s,
c) pairs starting with the longest s first.
Keep any s that has a c > T. E.g., if T = 100 and we have (pers,
231); (dogs, 259); (##rint, 76), then we would keep pers and dogs.
When an s is kept, subtract off its count from all of its prefixes. This
is the reason for sorting all of the s by length in step 4. This is a
critical part of the algorithm, because otherwise words would be double
counted. For example, let's say that we've kept human and we get to
(huma, 116). We know that 113 of those 116 came from human, and 3
came from humas. However, now that human is in our vocabulary, we know
we will never segment human into huma ##n. So once human has been
kept, then huma only has an effective count of 3.
This algorithm will generate a set of word pieces s (many of which will be
whole words w), which we could use as our WordPiece vocabulary.
However, there is a problem: This algorithm will severely overgenerate word
pieces. The reason is that we only subtract off counts of prefix tokens.
Therefore, if we keep the word human, we will subtract off the count for h,
hu, hum, huma, but not for ##u, ##um, ##uma, ##uman and so on. So we might
generate both human and ##uman as word pieces, even though ##uman will
never be applied.
So why not subtract off the counts for every substring, not just every
prefix? Because then we could end up subtracting off the counts multiple
times. Let's say that we're processing s of length 5 and we keep both
(##denia, 129) and (##eniab, 137), where 65 of those counts came from the
word undeniable. If we subtract off from every substring, we would subtract
65 from the substring ##enia twice, even though we should only subtract
once. However, if we only subtract off from prefixes, it will correctly only be
subtracted once.
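The first iteration can be sketched in plain Python. This is a simplified toy illustration of the steps above (the function name is invented for this example; it is not the tensorflow_text implementation):

```python
from collections import defaultdict

def first_iteration(word_counts, threshold):
    # Steps 1-3: count every substring of every (word, count) pair,
    # marking substrings that don't start the word with '##'.
    counts = defaultdict(int)
    for word, count in word_counts.items():
        for start in range(len(word)):
            marker = '' if start == 0 else '##'
            for end in range(start + 1, len(word) + 1):
                counts[marker + word[start:end]] += count

    # Steps 4-6: visit substrings longest first; keep those whose
    # (adjusted) count exceeds the threshold, then subtract the kept
    # count from all of the substring's prefixes.
    vocab = []
    for s in sorted(counts, key=len, reverse=True):
        if counts[s] > threshold:
            vocab.append(s)
            marker, body = ('##', s[2:]) if s.startswith('##') else ('', s)
            for end in range(1, len(body)):
                counts[marker + body[:end]] -= counts[s]
    return vocab

# With (human, 113) and (humas, 3) and T = 100, 'human' is kept, and
# 'huma' is left with an effective count of only 3, as described above.
vocab = first_iteration({'human': 113, 'humas': 3}, threshold=100)
```

Note that, exactly as discussed above, this single pass overgenerates: it keeps both human and ##uman, which is what the later iterations fix.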
Second (and third ...) iteration
To solve the overgeneration issue mentioned above, we perform multiple
iterations of the algorithm.
Subsequent iterations are identical to the first, with one important
distinction: In step 2, instead of considering every substring, we apply the
WordPiece tokenization algorithm using the vocabulary from the previous
iteration, and only consider substrings which start on a split point.
For example, let's say that we're performing step 2 of the algorithm and
encounter the word undeniable. In the first iteration, we would consider every
substring, e.g., {u, un, und, ..., undeniable, ##n, ##nd, ..., ##ndeniable,
...}.
Now, for the second iteration, we will only consider a subset of these. Let's
say that after the first iteration, the relevant word pieces are:
un, ##deni, ##able, ##ndeni, ##iable
The WordPiece algorithm will segment this into un ##deni ##able (see the
section Applying WordPiece for more information). In this
case, we will only consider substrings that start at a segmentation point. We
will still consider every possible end position. So during the second
iteration, the set of s for undeniable is:
{u, un, und, unden, undeni, undenia, undeniab, undeniabl,
undeniable, ##d, ##de, ##den, ##deni, ##denia, ##deniab, ##deniabl
, ##deniable, ##a, ##ab, ##abl, ##able}
The algorithm is otherwise identical. In this example, in the first iteration,
the algorithm produces the spurious tokens ##ndeni and ##iable. Now, these
tokens are never considered, so they will not be generated by the second
iteration. We perform several iterations just to make sure the results converge
(although there is no literal convergence guarantee).
Applying WordPiece
<a id="applying_wordpiece"></a>
Once a WordPiece vocabulary has been generated, we need to be able to apply it
to new data. The algorithm is a simple greedy longest-match-first application.
For example, consider segmenting the word undeniable.
We first lookup undeniable in our WordPiece dictionary, and if it's present,
we're done. If not, we decrement the end point by one character, and repeat,
e.g., undeniabl.
Eventually, we will either find a subtoken in our vocabulary, or get down to a
single character subtoken. (In general, we assume that every character is in our
vocabulary, although this might not be the case for rare Unicode characters. If
we encounter a rare Unicode character that's not in the vocabulary we simply map
the entire word to <unk>).
In this case, we find un in our vocabulary. So that's our first word piece.
Then we jump to the end of un and repeat the processing, e.g., try to find
##deniable, then ##deniabl, etc. This is repeated until we've segmented the
entire word.
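This greedy longest-match-first procedure fits in a few lines of Python (a toy sketch using a plain set as the vocabulary; not the tensorflow_text implementation):

```python
def wordpiece_segment(word, vocab, unk='[UNK]'):
    # Greedy longest-match-first: repeatedly take the longest piece in
    # the vocabulary starting at the current position.
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = '##' + piece  # mark non-initial pieces
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            # A character missing from the vocabulary: map the whole
            # word to the unknown token.
            return [unk]
        pieces.append(match)
        start = end
    return pieces

pieces = wordpiece_segment('undeniable', {'un', '##deni', '##able'})
```

With the example vocabulary above, undeniable is segmented into un ##deni ##able, matching the walkthrough.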
Intuition
Intuitively, WordPiece tokenization is trying to satisfy two different
objectives:
Tokenize the data into the least number of pieces as possible. It is
important to keep in mind that the WordPiece algorithm does not "want" to
split words. Otherwise, it would just split every word into its characters,
e.g., human -> {h, ##u, ##m, ##a, ##n}. This is one critical thing that
makes WordPiece different from morphological splitters, which will split
linguistic morphemes even for common words (e.g., unwanted -> {un, want,
ed}).
When a word does have to be split into pieces, split it into pieces that
have maximal counts in the training data. For example, the reason why the
word undeniable would be split into {un, ##deni, ##able} rather than
alternatives like {unde, ##niab, ##le} is that the counts for un and
##able in particular will be very high, since these are common prefixes
and suffixes. Even though the count for ##le must be higher than ##able,
the low counts of unde and ##niab will make this a less "desirable"
tokenization to the algorithm.
Optional: tf.lookup
<a id="tf.lookup"></a>
If you need access to, or more control over, the vocabulary, it's worth noting that you can build the lookup table yourself and pass that to BertTokenizer.
When you pass a string, BertTokenizer does the following:
End of explanation
"""
pt_lookup.lookup(tf.constant(['é', 'um', 'uma', 'para', 'não']))
"""
Explanation: Now you have direct access to the lookup table used in the tokenizer.
End of explanation
"""
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.KeyValueTensorInitializer(
keys=pt_vocab,
values=tf.range(len(pt_vocab), dtype=tf.int64)))
pt_tokenizer = text.BertTokenizer(pt_lookup)
"""
Explanation: You don't need to use a vocabulary file; tf.lookup has other initializer options. If you have the vocabulary in memory, you can use lookup.KeyValueTensorInitializer:
End of explanation
"""
__author__ = 'Adam Foster and Nick Dingwall'
"""
Explanation: Translation and scaling invariance in regression models
End of explanation
"""
from centering_and_scaling import *
%matplotlib inline
# A dataset:
data = np.random.multivariate_normal(
mean=[4, 0], cov=[[5, 2], [2, 3]], size=250)
X, y = data[:, 0], data[:, 1]
# Subtract the mean from the features:
empirical_mean = X.mean()
Z = X - empirical_mean
# Before and after comparison:
compare_normalizations(
X, Z, y, "Before mean-centering", "After mean-centering")
"""
Explanation: TL;DR: In regularized regression models, which matters more: scaling (forcing unit variance) or mean centering? Take our pop quiz and see whether your answer is supported by the maths!
Introduction: a quick pop quiz for the data scientists
Mean-centering has no effect
Scaling does not affect unregularized regression
Scaling affects regularized regression
Conclusion
Introduction: a quick pop quiz for the data scientists
When you're training a regularized regression model, which preprocessing step on the features is more important: scaling (forcing unit variance) or mean centering (forcing zero mean, a special case of translating)?
Scaling is always more important.
Mean centering is always more important.
They're both as important as each other.
It depends on the dataset.
The short answer
When we popped this quiz in the office, 4 was the most popular answer. Perhaps you'll be surprised to learn that the answer is actually 1: mean-centering the features doesn't make any difference to model predictions.
The long answer
If model predictions are what you care about, 1 is the right answer. Mean-centering the features makes no difference as long as you don't penalize the intercept term, whereas scaling does impact predictions in regularized models \cite{the_elements}, pp. 63-64. More generally, regularized regression models show translation invariance: you can add or subtract constants from your features without affecting model predictions. However, they do not show scaling invariance: if you multiply or divide your features by constants, the predictions may change.
If you answered 2, you have a tough argument to make.
If you answered 3, you are half right. 3 is the right answer for unregularized regression models because neither scaling nor mean-centering affects the predictions of unregularized models \cite{gelmanhill}, p. 53. In more formal language, unregularized regression models are translation and scaling invariant.
If you answered 4, maybe you care about interpreting the model coefficients themselves and not just about model predictions. In this case, 4 could be the right answer. The intercept from a model with mean-centered data equals the prediction when all features take their mean value, which could be important for interpretation. On the other hand, we'll show that you can easily recover the intercept you would have found from centered data after you've run your regression with uncentered data.
Practical benefits of translation invariance
Your first reaction might be, "Well, I'll just mean-center the features anyway, since it doesn't matter and might even help with interpretation." That is likely to be reasonable for datasets of modest size. However, it's disastrous if you were counting on sparse input representations to help manage a massive dataset. Consider all those zeros in your sparse array. When you subtract the mean, they shift to $-\bar{x}$. A matrix that was 1% dense is suddenly nearly 100% dense and your machine has run out of memory.
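The density blow-up is easy to demonstrate (a NumPy sketch with made-up dimensions, using a dense array to stand in for a sparse matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
# A 500 x 500 matrix that is roughly 1% dense.
X = np.where(rng.random((500, 500)) < 0.01, rng.normal(size=(500, 500)), 0.0)
density_before = np.mean(X != 0)

# Subtracting the column means shifts every structural zero to minus
# that column's mean, destroying the sparsity.
Z = X - X.mean(axis=0)
density_after = np.mean(Z != 0)

print(density_before, density_after)  # roughly 0.01 versus nearly 1.0
```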
Post structure
This difference between scaling and mean-centering is initially unintuitive, so we'll spend a bit of time proving the following two claims:
Mean-centering never affects model predictions in regularized or unregularized models.
Scaling never affects unregularized models but does affect regularized ones.
We also discuss the effect of rescaling in regularized regression models, showing that, since regularization shrinks model coefficients towards 0, a rescaling which made one feature very small could lose the signal coming from that predictor.
$\newcommand{\VEC}[1]{\boldsymbol{#1}}$
Mean-centering has no effect
Formal argument
Regularized regression \cite{the_elements}, pp. 61-83, solves the following minimization problem
\begin{equation}
\min_{\mu, \VEC{\beta}}\; \|Y - \mu {\VEC{1}} - X\VEC{\beta} \|_2^2 + \lambda R(\VEC{\beta})
\end{equation}
where $R$ is the penalization function and ${\VEC{1}}$ is a $n\times 1$ vector of 1's. (As usual, $Y$ is a $n\times 1$ response vector, $X$ is a $n\times p$ feature matrix, $\mu$ is a scalar, $\VEC{\beta}$ is a $p\times 1$ vector of regression coefficients and $\lambda > 0$ is the regularization parameter. Equivalent variants using a column of $1$'s instead of a separate intercept are common \cite{the_elements}, p. 45.)
We've separated out the intercept, $\mu$, to emphasize the fact that it is not penalized. Now let's translate the features by subtracting out the mean of each column (we'll denote these by $\bar{x}_1, \bar{x}_2, \ldots$)
\begin{align}
\begin{split}
\text{for } i=1, \ldots, n:& \\
Z_{i1} &= X_{i1} - \bar{x}_1\\
Z_{i2} &= X_{i2} - \bar{x}_2\\
&\text{etc.}
\end{split}
\end{align}
This can be succinctly written
\begin{equation}
Z = X- {\VEC{1}}\bar{\VEC{x}}^\top
\end{equation}
where $\bar{\VEC{x}}$ is a $p\times 1$ vector whose components are the column means $\bar{x}_1, \bar{x}_2, ...$
Now let's examine the objective function for regression with $Z$ to be minimized over some new variables $\nu$ (intercept) and $\VEC{\theta}$ (coefficients)
\begin{align}
\|Y - \nu {\VEC{1}} - Z\VEC{\theta} \|_2^2 + \lambda R(\VEC{\theta})
&=\|Y - \nu {\VEC{1}} - (X - {\VEC{1}}\bar{\VEC{x}}^\top)\VEC{\theta} \|_2^2 + \lambda R(\VEC{\theta}) \\
&=\|Y - (\nu - \bar{\VEC{x}}^\top\VEC{\theta}) {\VEC{1}} - X\VEC{\theta} \|_2^2 + \lambda R(\VEC{\theta})
\end{align}
Now set $\mu = \nu - \bar{\VEC{x}}^\top\VEC{\theta}$ and $\VEC{\beta} = \VEC{\theta}$ and we're back to the original objective with $X$. That means the minimizers must be related simply by $\hat{\mu} = \hat{\nu} - \bar{\VEC{x}}^\top \hat{\VEC{\theta}}$ and $\hat{\VEC{\beta}} = \hat{\VEC{\theta}}$.
Aside: The hats differentiate between variables (no hats, can take any value) and minimizers (with hats, take a certain value for fixed $X$ and $Y$).
Predictions stay the same
What about predictions? Suppose we introduce a new 'test' point $\VEC{x}^\star$ and its corresponding $\VEC{z}$-value, which is $\VEC{z}^\star = \VEC{x}^\star - \bar{\VEC{x}}$. To make predictions with the original (not mean-centered) model we use $\hat{\mu} + \VEC{x}^{\star\top}\hat{\VEC{\beta}}$ and for the centered model, we use $\hat{\nu} + \VEC{z}^{\star\top} \hat{\VEC{\theta}}$. But then
\begin{align}
\hat{\mu} + \VEC{x}^{\star\top} \hat{\VEC{\beta}} &= \left( \hat{\nu} - \bar{\VEC{x}}^\top\hat{\VEC{\theta}} \right) + \VEC{x}^{\star\top} \hat{\VEC{\theta}} \\
&= \hat{\nu} + \left(\VEC{x}^\star - \bar{\VEC{x}}\right)^\top \hat{\VEC{\theta}} \\
&= \hat{\nu} + \VEC{z}^{\star\top} \hat{\VEC{\theta}}
\end{align}
So the predictions from the two models are identical.
The important thing to note is that $\hat{\VEC{\beta}} = \hat{\VEC{\theta}}$ so all the regression coefficients stay the same. Only the intercept needs to change. We defined $Z$ by mean-centering the columns. Actually, our proof works for any translation that we apply to the columns of $X$. This is what we mean by translation invariance.
Suppose you fit a model without mean-centering (in our notation, that means using $X$). How could you recover the intercept you would have got using centered data (in our notation, using $Z$)? It's already there for us. The intercept using centered data was called $\hat{\nu}$ and we know $\hat{\nu} = \hat{\mu} + \bar{\VEC{x}}^\top\hat{\VEC{\theta}} = \hat{\mu} + \bar{\VEC{x}}^\top\hat{\VEC{\beta}}$. So the mean-centered intercept is equivalent to
\begin{equation}
\hat{\nu} = \hat{\mu} + \sum_{i=1}^p \bar{x}_i \hat{\beta}_i
\end{equation}
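We can check this relation numerically. The following is a sketch of my own (not from the original text), using scikit-learn's `Ridge`, whose unpenalized intercept matches the setup above:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 3) + np.array([2.0, -1.0, 4.0])  # features with nonzero means
y = np.dot(X, [1.5, -0.7, 2.0]) + rng.randn(100)

xbar = X.mean(axis=0)
m_x = Ridge(alpha=1.0).fit(X, y)           # raw features
m_z = Ridge(alpha=1.0).fit(X - xbar, y)    # mean-centered features

# coefficients are identical; only the intercept shifts by xbar . beta
same_coefs = np.allclose(m_x.coef_, m_z.coef_)
intercept_relation = np.allclose(m_z.intercept_,
                                 m_x.intercept_ + np.dot(xbar, m_x.coef_))
```

Both checks should pass: translation changes only the intercept, and by exactly the amount derived above.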
An example
That derivation was a bit heavy, so let's work through an example. The code is tucked away in centering_and_scaling.py:
End of explanation
"""
# The case of `x` (X, y, empirical_mean and LinearRegression come from centering_and_scaling.py):
x_star = 10
lr = LinearRegression()
lr.fit(X.reshape(-1, 1), y)
x_prediction = lr.predict([[x_star]])[0]   # predict expects a 2D array
print("The prediction at x = {} is {:.4}".format(
    x_star, x_prediction))
# The case of `z`:
z_star = x_star - empirical_mean
lr.fit(Z.reshape(-1, 1), y)
z_prediction = lr.predict([[z_star]])[0]
print("The prediction at z = {} - {:.4} = {:.4} is {:.4}".format(
    x_star, empirical_mean, z_star, z_prediction))
"""
Explanation: Notice that we have a different intercept, but the same slope. The predictions from these two models will be identical. For instance, the prediction at $x = 10$ is around $2.5$ on the upper graph. The corresponding $z$ is $z = 10 - \bar{x} \approx 6$ with precisely the same prediction on the lower graph. The following code works through this in detail:
End of explanation
"""
Z = X / X.std()
compare_normalizations(
X, Z, y,
title1="Before rescaling",
title2="After rescaling")
"""
Explanation: Ignoring the intercept
It follows from this result that we can ignore $\mu$ in the regression model. Why is this? We know for sure that we can mean-center $X$ without changing the model. If $X$ is mean-centered, then the optimal value of $\mu$ is the mean of $Y$, written $\bar{Y}$. (This requires just a bit of calculus: set the derivative with respect to $\mu$ to zero and use the fact that ${\VEC{1}}^{\top}X = 0$ when you've mean-centered $X$.) We might as well mean-center $Y$ as well, leaving $\mu = 0$, so we can ignore it from now on.
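A quick numerical sanity check of this (a toy sketch of my own; the names and numbers are illustrative): with mean-centered features, the residual sum of squares, viewed as a function of $\mu$ alone, bottoms out at $\bar{Y}$ for any fixed coefficient vector.

```python
import numpy as np

rng = np.random.RandomState(3)
X = rng.randn(50, 2)
Xc = X - X.mean(axis=0)               # mean-centered features
y = np.dot(Xc, [1.0, 2.0]) + 0.5 + rng.randn(50)

beta = np.array([0.3, -1.2])          # any fixed coefficient vector
mus = np.linspace(y.mean() - 1.0, y.mean() + 1.0, 201)
rss = [np.sum((y - m - np.dot(Xc, beta)) ** 2) for m in mus]
best_mu = mus[np.argmin(rss)]         # should sit at the mean of y
```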
Scaling does not affect unregularized regression
Formal argument
Let's now look at scaling in the case of unregularized regression. We've ignored $\mu$ and there is no $R$ either for an unregularized model, so we have the following minimization problem:
\begin{equation}
\min_\VEC{\beta} \; \|Y - X\VEC{\beta} \|_2^2
\end{equation}
Let's rescale the features by dividing by the standard deviation of each column (we'll denote these by $\sigma_1, \sigma_2, \ldots $)
\begin{align}
\begin{split}
\text{for } i=1, \ldots, n:& \\
Z_{i1} &= \frac{X_{i1}}{\sigma_1}\\
Z_{i2} &= \frac{X_{i2}}{\sigma_2}\\
&\text{etc.}
\end{split}
\end{align}
which we can succinctly write as
\begin{equation}
Z = XD
\end{equation}
where $D$ is a diagonal matrix with $D_{ii} = \sigma_i^{-1}$.
The objective function for regression with $Z$ is
\begin{align}
\|Y - Z\VEC{\theta} \|_2^2
= \|Y - XD\VEC{\theta} \|^2_2
\end{align}
Now set $\VEC{\beta} = D\VEC{\theta}$ and we're back to the original objective for $X$. That means the minimizers are related simply by $\hat{\VEC{\beta}} = D\hat{\VEC{\theta}}$. Just like with mean-centering, this result actually applies to any rescaling, not just dividing by the standard deviations.
Predictions stay the same
What about predictions? Suppose we introduce a test point $\VEC{x}^\star$ and its corresponding $\VEC{z}$-value, which is $\VEC{z}^\star = D^\top \VEC{x}^\star$ (equal to $D\VEC{x}^\star$, since $D$ is diagonal). To make predictions with the original (not unit variance) model we use $\VEC{x}^{\star\top}\hat{\VEC{\beta}}$ and for the unit variance model, we use $\VEC{z}^{\star\top}\hat{\VEC{\theta}}$. But then
\begin{align}
\VEC{x}^{\star\top}\hat{\VEC{\beta}}
&= \VEC{x}^{\star\top}D \hat{\VEC{\theta}} \\
&= \left(D^\top \VEC{x}^\star \right)^\top \hat{\VEC{\theta}} \\
&= \VEC{z}^{\star\top}\hat{\VEC{\theta}}
\end{align}
So the predictions from the two models are identical.
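A quick check with scikit-learn's `LinearRegression` (my own sketch, not part of the original notebook) confirms both the coefficient relation $\hat{\VEC{\beta}} = D\hat{\VEC{\theta}}$ and the equality of predictions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(1)
X = rng.randn(200, 2) * np.array([3.0, 0.5])      # columns on different scales
y = np.dot(X, [1.0, -2.0]) + rng.randn(200)

sigma = X.std(axis=0)
m_x = LinearRegression().fit(X, y)                # original features
m_z = LinearRegression().fit(X / sigma, y)        # unit-variance features

x_new = rng.randn(5, 2)
coef_relation = np.allclose(m_x.coef_, m_z.coef_ / sigma)   # beta = D theta
same_preds = np.allclose(m_x.predict(x_new), m_z.predict(x_new / sigma))
```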
An example
This example is exactly like the previous one, but now we divide by the standard deviation rather than subtracting the mean:
End of explanation
"""
standard_deviation = X.std()
compare_transformed_normalizations(
X, y,
transform=(lambda x : x/standard_deviation),
model=LinearRegression(),
title1="Trained using original data",
title2="Trained using rescaled data")
"""
Explanation: These two models certainly look different. It's important to remember that the input to the two models are different, though. For the top model, we input $x = 10$ and get a prediction about $2.5$. For the bottom model we need to input $z = 10/\sigma \approx 4.5$ which then gives a prediction somewhere between $2$ and $3$.
A clearer way to compare these graphs would be:
1. Fit two models: one using $X$ (not unit variance) and one using $Z$ (columns rescaled to unit variance).
2. Plot the $X$ model.
3. Rescale the $Z$ model back so that it accepts inputs in $x$-space and plot this.
4. Look to see if the two graphs are the same.
The function compare_transformed_normalizations in centering_and_scaling.py does just this.
End of explanation
"""
from sklearn.linear_model import Lasso
compare_transformed_normalizations(
X, y,
transform=lambda x : x/3,
model=Lasso(),
title1="Trained using original data",
title2="Trained using scaled data")
"""
Explanation: It should be completely unsurprising that the lower scatterplot now matches the upper scatterplot. The more notable thing is that the regression line, which was fitted in $z$-space and then transformed, is the same as the line fitted in $x$-space to start with. Thus, we see clearly that scaling doesn't affect the unregularized model.
When we look at regularized regression, these lines will not match.
Scaling affects regularized regression
Formal argument
Let's try to redo the proof for regularized regression (that nothing changes) and see where it breaks.
Recall the objective
\begin{equation}
\min_\VEC{\beta} \; \|Y - X\VEC{\beta} \|_2^2 + \lambda R(\VEC{\beta})
\end{equation}
and the rescaling
\begin{equation}
Z = XD
\end{equation}
We'll consider a general $D$ from now on (so need not have $D_{ii} = \sigma_i^{-1}$).
For regression with $Z$ the objective with our new coefficient variable $\VEC{\theta}$ is
\begin{align}
\|Y - Z\VEC{\theta}\|_2^2 + \lambda R(\VEC{\theta})
&= \|Y - XD\VEC{\theta} \|_2^2 + \lambda R(\VEC{\theta})
\end{align}
Now set $\VEC{\beta} = D\VEC{\theta}$
\begin{align}
\|Y - XD\VEC{\theta} \|_2^2 + \lambda R(\VEC{\theta})
&= \|Y - X\VEC{\beta} \|_2^2 + \lambda R(D^{-1}\VEC{\beta}) \\
&\neq \|Y - X\VEC{\beta} \|_2^2 + \lambda R(\VEC{\beta})
\end{align}
Unsurprisingly, the problem is in the regularization term. Instead of regularizing $\VEC{\beta}$ we end up regularizing $D^{-1}\VEC{\beta}$. That means it's unlikely the two problems will have the same minimizers. Although we set $\VEC{\beta} = D\VEC{\theta}$, we can't conclude that $\hat{\VEC{\beta}} = D\hat{\VEC{\theta}}$.
Quantifying the change
So we know that in regularized regression, the models might not be equivalent. Can we be more precise? Let's examine one particular example of regularized regression, the Lasso \cite{tibshirani1996}. For the Lasso we have
\begin{align}
R(\VEC{\beta})&= \sum_{i=1}^p |\beta_i| \\
R(D^{-1}\VEC{\beta}) &= \sum_{i=1}^p \frac{|\beta_i|}{D_{ii}} \label{eq:lasso}
\end{align}
This tells us that we will strongly penalize coefficients which have a small value of $D_{ii}$. That is, if we scale down a certain predictor, the Lasso will more aggressively shrink the corresponding $\beta_i$. Conversely, we will not apply much shrinkage to $\beta_i$ if the $i$th column of $X$ was scaled up a lot.
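We can verify this with a self-contained one-feature check (a sketch of my own using scikit-learn's `Lasso`; it does not use the notebook's helper functions). If scaling were harmless, shrinking the feature by a factor of 3 would simply triple the fitted coefficient; instead the scaled-down feature is penalized harder:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(2)
X = rng.randn(100, 1)
y = X[:, 0] * 0.8 + rng.randn(100) * 0.1

m_orig = Lasso(alpha=0.1).fit(X, y)
m_small = Lasso(alpha=0.1).fit(X / 3.0, y)        # same data, feature scaled down

# without regularization we would have m_small.coef_ == 3 * m_orig.coef_;
# the Lasso instead shrinks the scaled-down feature's coefficient harder
extra_shrinkage = m_small.coef_[0] < 3.0 * m_orig.coef_[0]
```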
Let's see this in an example:
End of explanation
"""
compare_transformed_normalizations(
X, y,
transform=(lambda x : 3*x),
model=Lasso(),
title1="Trained using original data",
title2="Trained using scaled data")
"""
Explanation: We're seeing exactly what we expected. By making everything three times as small, we caused the model to penalize the coefficient much more strongly, shrinking it exactly to zero.
End of explanation
"""
|
jon-young/cell-line-clust | doc/Biclustering.ipynb | gpl-2.0 | dfFile = os.path.join('..', 'data', 'siRNA_dataframe.csv')
RNAiDf = pd.read_csv(dfFile, index_col=0)
RNAiDf.tail()
"""
Explanation: 2015 December 4-6
Loading and exploring UTSW RNAi dataset...
End of explanation
"""
rplVals = np.nanmedian(RNAiDf.values, axis=0)
for i,col in enumerate(RNAiDf.columns):
RNAiDf[col].replace(np.nan, rplVals[i], inplace=True)
"""
Explanation: Impute NaN values by replacing with the column median. We choose the column median because each column has more points, giving better statistics, and because it amounts to assuming that the gene deletion in question has little effect on the cell line.
End of explanation
"""
modelSC = bicluster.SpectralCoclustering()
modelSC.fit(RNAiDf.values)
"""
Explanation: Spectral Co-Clustering
End of explanation
"""
modelSB = bicluster.SpectralBiclustering()
modelSB.fit(RNAiDf.values)
"""
Explanation: Spectral Biclustering
End of explanation
"""
|
dereneaton/RADmissing | sims_nb_simulations.ipynb | mit | ## standard Python imports
import glob
import itertools
from collections import OrderedDict, Counter
## extra Python imports
import rpy2 ## required for tree plotting
import ete2 ## used for tree manipulation
import egglib ## used for coalescent simulations
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
## print versions
for pkg in [matplotlib, np, pd, ete2, rpy2]:
print "{:<10}\t{:<10}".\
format(pkg.__name__, pkg.__version__)
"""
Explanation: RADseq data simulations
I simulated two trees to work with. One that is completely imbalanced (ladder-like) and one that is balanced (equal number tips descended from each node). I'm using the Python package ete2 for most of the tree manipulations. This notebook was run in Python 2.7. You will also need the package rpy2 installed, as well as a working version of R with the package 'ape' to make tree plots later in the notebook.
End of explanation
"""
## check simrrls package and requirements
import egglib
import simrrls
## print versions
print 'egglib', egglib.version
print 'simrrls', simrrls.__version__
"""
Explanation: Simulation software
I wrote a program to simulate RAD-seq like sequence data which uses the python package egglib for coalescent simulations. Below will check that you have the relevant software installed. See here for simrrls installation: https://github.com/dereneaton/simrrls
End of explanation
"""
## base tree
Tbal = ete2.Tree()
## branch lengths
bls = 1.
## namer
n = iter(('s'+str(i) for i in xrange(1,1500)))
## first nodes
n1 = Tbal.add_child(name=n.next(), dist=bls)
n2 = Tbal.add_child(name=n.next(), dist=bls)
## make balanced tree
while len(Tbal.get_leaves()) < 64:
thisrep = Tbal.get_descendants()
for node in thisrep:
if len(node.get_children()) < 1:
node.add_child(name=n.next(), dist=bls)
node.add_child(name=n.next(), dist=bls)
## Save newick string to file
Tbal.write(outfile="Tbal.tre", format=3)
"""
Explanation: Generate trees for simulations
Make a balanced tree with 64 tips and tree length = 6
End of explanation
"""
## newick string
! cat Tbal.tre
## show tree, remove node circles
#for node in Tbal.traverse():
# node.img_style["size"] = 0
#Tbal.render("%%inline", h=500)
"""
Explanation: String representation
End of explanation
"""
## base tree
Timb = ete2.Tree()
## namer
n = iter(('s'+str(i) for i in range(1,5000)))
## scale branches to match balanced treelength
brlen = (bls*6.)/63
## first nodes
n1 = Timb.add_child(name=n.next(), dist=brlen)
n2 = Timb.add_child(name=n.next(), dist=brlen)
while len(Timb.get_leaves()) < 64:
## extend others
for tip in Timb.get_leaves()[:-1]:
tip.dist += brlen
## extend the last node
Timb.get_leaves()[-1].add_child(name=n.next(), dist=brlen)
Timb.get_leaves()[-1].add_sister(name=n.next(), dist=brlen)
## write to file
Timb.write(outfile="Timb.tre", format=3)
"""
Explanation: Make an imbalanced tree of same treelength with 64 tips
End of explanation
"""
! cat Timb.tre
## show tree, remove node circles
#for node in Timb.traverse():
# node.img_style["size"] = 0
#Timb.render("%%inline", h=500)
"""
Explanation: Or copy the following string to a file:
End of explanation
"""
print set([i.get_distance(Tbal) for i in Tbal]), 'treelength'
print len(Tbal), 'tips'
print set([i.get_distance(Timb) for i in Timb]), 'treelength'
print len(Timb), 'tips'
"""
Explanation: Check that the trees are the same length (close enough).
End of explanation
"""
%load_ext rpy2.ipython
%%R -w 400 -h 600
library(ape)
## make tree ultrametric using penalized likelihood
Vtree <- read.tree("~/Dropbox/RAxML_bestTree.VIB_small_c85d6m4p99")
Utree <- drop.tip(Vtree, "clemensiae_DRY6_PWS_2135")
Utree <- ladderize(chronopl(Utree, 0.5))
## multiply bls so tree length=6 after dropping outgroup
Utree$edge.length <- Utree$edge.length*6
## save the new tree
write.tree(Utree, "Tvib.tre")
plot(Utree, cex=0.7, edge.width=2)
add.scale.bar()
#edgelabels(round(Utree$edge.length,3))
#### load TVib tree into Python and print newick string
Tvib = ete2.Tree("Tvib.tre")
! cat Tvib.tre
"""
Explanation: An ultrametric topology of Viburnum w/ 64 tips
This tree is inferred in notebook 3, and here it is scaled with penalized likelihood to be ultrametric.
End of explanation
"""
%%bash
## balanced tree
mkdir -p Tbal_rad_drop/
mkdir -p Tbal_ddrad_drop/
mkdir -p Tbal_rad_covfull/
mkdir -p Tbal_rad_covlow/
mkdir -p Tbal_rad_covmed/
## imbalanced tree
mkdir -p Timb_rad_drop/
mkdir -p Timb_ddrad_drop/
mkdir -p Timb_rad_covfull/
mkdir -p Timb_rad_covlow/
mkdir -p Timb_rad_covmed/
## sims on empirical Viburnum topo
mkdir -p Tvib_rad_drop/
mkdir -p Tvib_ddrad_drop/
mkdir -p Tvib_rad_covfull/
mkdir -p Tvib_rad_covlow/
mkdir -p Tvib_rad_covmed
"""
Explanation: Simulate sequence data on each tree
Here I use the simrrls program to simulate RADseq data on each input topology with locus dropout occurring with respect to phylogenetic distances. Find simrrls in my github profile.
Comparing tree shapes and sources of missing data
End of explanation
"""
%%bash
simrrls -h
"""
Explanation: Show simrrls options
End of explanation
"""
%%bash
for tree in Tbal Timb Tvib;
do
simrrls -mc 1 -ms 1 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-o $tree\_rad_drop/$tree
simrrls -mc 1 -ms 1 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f ddrad -c1 CTGCAG -c2 AATT \
-o $tree\_ddrad_drop/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-o $tree\_rad_covfull/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-dm 5 -ds 5 \
-o $tree\_rad_covmed/$tree
simrrls -mc 0 -ms 0 -t $tree.tre \
-L 1000 -l 100 \
-u 1e-9 -N 1e6 \
-f rad -c1 CTGCAG \
-dm 2 -ds 5 \
-o $tree\_rad_covlow/$tree
done
"""
Explanation: Simulate RAD data on different trees and with sampling
Here I simulate 1000 loci on each tree. For each tree, data are simulated in five ways: with and without data loss from mutation-disruption or low sequencing coverage, and as a rad data type (one cutter) or ddrad (two cutters). This will probably take about 10 minutes to run.
End of explanation
"""
%%bash
pyrad --version
%%bash
## new params file (remove existing file if present)
rm params.txt
pyrad -n
%%bash
## enter parameters into params file using sed
sed -i '/## 1. /c\Tbal_rad_drop ## 1. working dir ' params.txt
sed -i '/## 2. /c\Tbal_rad_drop/*.gz ## 2. data loc ' params.txt
sed -i '/## 3. /c\Tbal_rad_drop/*barcodes.txt ## 3. Bcode ' params.txt
sed -i '/## 6. /c\TGCAG,AATT ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt
sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt
sed -i '/## 11. /c\rad ## 11. datatype ' params.txt
sed -i '/## 12. /c\2 ## 12. minCov ' params.txt
sed -i '/## 13. /c\10 ## 13. maxSH' params.txt
sed -i '/## 14. /c\Tbal ## 14. outname' params.txt
sed -i '/## 24./c\99 ## 24. maxH' params.txt
sed -i '/## 30./c\n,p,s ## 30. out format' params.txt
## IPython code to iterate over trees and coverages and run pyrad
## sometimes this freezes when run in a jupyter notebook due
## to problems with multiprocessing in notebooks. This is why my new
## work with ipyrad uses ipyparallel instead of multiprocessing.
for tree in ['Tbal', 'Timb', 'Tvib']:
for dtype in ['rad', 'ddrad']:
with open('params.txt', 'rb') as params:
pp = params.readlines()
pp[1] = "{}_{}_drop ## 1. \n".format(tree, dtype)
pp[2] = "{}_{}_drop/*.gz ## 2. \n".format(tree, dtype)
pp[3] = "{}_{}_drop/*barcodes.txt ## 3. \n".format(tree, dtype)
pp[14] = "{} ## 14. \n".format(tree)
with open('params.txt', 'wb') as params:
params.write("".join(pp))
## this calls pyrad as a bash script
! pyrad -p params.txt >> log.txt 2>&1
for cov in ['full', 'med', 'low']:
with open('params.txt', 'rb') as params:
pp = params.readlines()
pp[1] = "{}_rad_cov{} ## 1. \n".format(tree, cov)
pp[2] = "{}_rad_cov{}/*.gz ## 2. \n".format(tree, cov)
pp[3] = "{}_rad_cov{}/*barcodes.txt ## 3. \n".format(tree, cov)
pp[14] = "{} ## 14. \n".format(tree)
with open('params.txt', 'wb') as params:
params.write("".join(pp))
## this calls pyrad as a bash script
! pyrad -p params.txt >> log.txt 2>&1
"""
Explanation: Assemble data sets in pyRAD
End of explanation
"""
def getarray(locifile, tree):
""" parse the loci list and return a
presence/absence matrix ordered by
the tips on the tree"""
## parse the loci file
loci = open(locifile).read().split("\n//")[:-1]
## order (ladderize) the tree
tree.ladderize()
## get tip names
names = tree.get_leaf_names()
## make empty matrix
lxs = np.zeros((len(names), len(loci)))
## fill the matrix
for loc in xrange(len(loci)):
for seq in loci[loc].split("\n"):
if ">" in seq:
lxs[names.index(seq.split()[0][1:].rsplit("_", 1)[0]),loc] += 1
return lxs
def countmatrix(lxsabove, lxsbelow, max=0):
""" fill a matrix with pairwise data sharing
between each pair of samples. You could put
in two different 'share' matrices to have
different results above and below the diagonal.
Can enter a max value to limit fill along diagonal.
"""
share = np.zeros((lxsabove.shape[0],
lxsbelow.shape[0]))
## fill above
names = range(lxsabove.shape[0])
for row in lxsabove:
for samp1,samp2 in itertools.combinations(names,2):
shared = lxsabove[samp1, lxsabove[samp2,]>0].sum()
share[samp1,samp2] = shared
## fill below
for row in lxsbelow:
for samp2,samp1 in itertools.combinations(names,2):
            shared = lxsbelow[samp1, lxsbelow[samp2,]>0].sum()
share[samp1,samp2] = shared
## fill diagonal
if not max:
for row in range(len(names)):
share[row,row] = lxsabove[row,].sum()
else:
for row in range(len(names)):
share[row,row] = max
return share
def plotSVGmatrix(share, outname):
surf = plt.pcolormesh(share, cmap="gist_yarg")
dims = plt.axis('image')
surf.axes.get_xaxis().set_ticklabels([])
surf.axes.get_xaxis().set_ticks([])
surf.axes.get_yaxis().set_ticklabels([])
surf.axes.get_yaxis().set_ticks([])
ax = plt.gca()
ax.invert_yaxis()
plt.colorbar(surf, aspect=15)
if outname:
plt.savefig(outname+".svg")
def fullplot(locifile, tree, outname=None):
lxsB = getarray(locifile, tree)
share = countmatrix(lxsB, lxsB)
plotSVGmatrix(share, outname)
fullplot('Tbal_rad_drop/outfiles/Tbal.loci', Tbal, 'Tbal_drop')
fullplot('Tbal_rad_covlow/outfiles/Tbal.loci', Tbal, 'Tbal_covlow')
fullplot('Tbal_rad_covmed/outfiles/Tbal.loci', Tbal, 'Tbal_covmed')
fullplot('Tbal_rad_covfull/outfiles/Tbal.loci', Tbal, 'Tbal_covfull')
fullplot('Timb_rad_drop/outfiles/Timb.loci', Timb, 'Timb_drop')
fullplot('Tvib_rad_drop/outfiles/Tvib.loci', Tvib, 'Tvib_drop')
fullplot('Tvib_rad_covlow/outfiles/Tvib.loci', Tvib, 'Tvib_covlow')
"""
Explanation: Visualize data sharing on these trees
End of explanation
"""
lxs_Tbal_droprad = getarray("Tbal_rad_drop/outfiles/Tbal.loci", Tbal)
lxs_Tbal_dropddrad = getarray("Tbal_ddrad_drop/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covlow = getarray("Tbal_rad_covlow/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covmed = getarray("Tbal_rad_covmed/outfiles/Tbal.loci", Tbal)
lxs_Tbal_covfull = getarray("Tbal_rad_covfull/outfiles/Tbal.loci", Tbal)
lxs_Timb_droprad = getarray("Timb_rad_drop/outfiles/Timb.loci", Timb)
lxs_Timb_dropddrad = getarray("Timb_ddrad_drop/outfiles/Timb.loci", Timb)
lxs_Timb_covlow = getarray("Timb_rad_covlow/outfiles/Timb.loci", Timb)
lxs_Timb_covmed = getarray("Timb_rad_covmed/outfiles/Timb.loci", Timb)
lxs_Timb_covfull = getarray("Timb_rad_covfull/outfiles/Timb.loci", Timb)
lxs_Tvib_droprad = getarray("Tvib_rad_drop/outfiles/Tvib.loci", Tvib)
lxs_Tvib_dropddrad = getarray("Tvib_ddrad_drop/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covlow = getarray("Tvib_rad_covlow/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covmed = getarray("Tvib_rad_covmed/outfiles/Tvib.loci", Tvib)
lxs_Tvib_covfull = getarray("Tvib_rad_covfull/outfiles/Tvib.loci", Tvib)
"""
Explanation: The hierarchical distribution of informative sites
First we re-calculate the pair-wise data sharing matrices for all species in each data set.
End of explanation
"""
def count_inf4(tree, matrix, node):
""" count the number of loci with data spanning
a given node in the tree """
## get children of selected node
a, b = node.get_children()
## get tip descendents of a and b
tips_a = set(a.get_leaf_names())
tips_b = set(b.get_leaf_names())
## get every other tip (outgroups)
upone = node.up
if upone.is_root():
ch = upone.children
sis = [i for i in ch if i != node][0]
if sis.children:
tips_c = sis.children[0].get_leaf_names()
tips_d = sis.children[1].get_leaf_names()
else:
return 0
else:
upone = set(node.up.get_leaf_names())
tips_c = upone - tips_a - tips_b
tips_all = set(tree.get_leaf_names())
tips_d = tips_all - tips_a - tips_b - tips_c
## get indices in matrix for leaf tips
names = tree.get_leaf_names()
index_a = [names.index(i) for i in tips_a]
index_b = [names.index(i) for i in tips_b]
index_c = [names.index(i) for i in tips_c]
index_d = [names.index(i) for i in tips_d]
    ## how many loci are "informative"
inf = 0
for col in matrix.T:
hits_a = sum([col[i] for i in index_a])
hits_b = sum([col[i] for i in index_b])
hits_c = sum([col[i] for i in index_c])
hits_d = sum([col[i] for i in index_d])
if all([hits_a, hits_b, hits_c, hits_d]):
inf += 1
return inf
"""
Explanation: A function to count loci for each bipartition (quartet-style)
End of explanation
"""
def nodes_dat(tree, lxs, datfilename):
dat = []
for node in tree.traverse():
if not (node.is_leaf() or node.is_root()):
loci = count_inf4(tree, lxs, node)
dist = round(tree.get_distance(node),2)
dat.append([dist, loci])
node.name = "%d" % loci
## print tree with bls & node labels
tree.write(format=3,outfile=datfilename+".tre")
## print data to file
with open(datfilename, 'w') as outfile:
np.savetxt(outfile, np.array(dat), fmt="%.2f")
"""
Explanation: A function to write data to file for plotting
Here I iterate over each node and apply count_inf4, which returns the number of loci that are informative for the subtending bipartition. This takes a few minutes to run.
End of explanation
"""
%%bash
## a new directory to store the data in
mkdir -p analysis_counts2
nodes_dat(Tbal, lxs_Tbal_droprad,
"analysis_counts2/Tbal_droprad.dat3")
nodes_dat(Tbal, lxs_Tbal_dropddrad,
"analysis_counts2/Tbal_dropddrad.dat3")
nodes_dat(Tbal, lxs_Tbal_covlow,
"analysis_counts2/Tbal_covlow.dat3")
nodes_dat(Tbal, lxs_Tbal_covmed,
"analysis_counts2/Tbal_covmed.dat3")
nodes_dat(Tbal, lxs_Tbal_covfull,
"analysis_counts2/Tbal_covfull.dat3")
nodes_dat(Timb, lxs_Timb_droprad,
"analysis_counts2/Timb_droprad.dat3")
nodes_dat(Timb, lxs_Timb_dropddrad,
"analysis_counts2/Timb_dropddrad.dat3")
nodes_dat(Timb, lxs_Timb_covlow,
"analysis_counts2/Timb_covlow.dat3")
nodes_dat(Timb, lxs_Timb_covmed,
"analysis_counts2/Timb_covmed.dat3")
nodes_dat(Timb, lxs_Timb_covfull,
"analysis_counts2/Timb_covfull.dat3")
nodes_dat(Tvib, lxs_Tvib_droprad,
"analysis_counts2/Tvib_droprad.dat3")
nodes_dat(Tvib, lxs_Tvib_dropddrad,
"analysis_counts2/Tvib_dropddrad.dat3")
nodes_dat(Tvib, lxs_Tvib_covlow,
"analysis_counts2/Tvib_covlow.dat3")
nodes_dat(Tvib, lxs_Tvib_covmed,
"analysis_counts2/Tvib_covmed.dat3")
nodes_dat(Tvib, lxs_Tvib_covfull,
"analysis_counts2/Tvib_covfull.dat3")
"""
Explanation: Make data files
End of explanation
"""
%load_ext rpy2.ipython
%%R
library(ape)
"""
Explanation: Plot the hierarchical distribution with the trees
End of explanation
"""
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Tbal_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tbal_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tbal_covfull_Lme <- with(dat, tapply(loci, depth, mean))
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Timb_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Timb_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Timb_covfull_Lme <- with(dat, tapply(loci, depth, mean))
%%R
## read in the data and factor results
dat <- read.table("analysis_counts2/Tvib_droprad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_droprad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_dropddrad.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_dropddrad_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covlow.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covlow_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covmed.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covmed_Lme <- with(dat, tapply(loci, depth, mean))
dat <- read.table("analysis_counts2/Tvib_covfull.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
Tvib_covfull_Lme <- with(dat, tapply(loci, depth, mean))
"""
Explanation: Load in the data to R
End of explanation
"""
%%R -w 400 -h 400
#svg("box1.svg", width=4, height=5)
L = Tbal_droprad_Lme
plot(L, xlim=c(0,6), ylim=c(575,1025),
cex.axis=1.25, type='n', xaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#6E6E6E", bg="#6E6E6E", lwd=0.75)
#L = Tbal_dropddrad_Lme
#df1 = data.frame(as.numeric(names(L)),as.numeric(L))
#points(df1, cex=3.5, pch=21, col="#6E6E6E", bg="#D3D3D3", lwd=0.75)
box()
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
#svg("box2.svg", width=4, height=5)
L = Tbal_covlow_Lme
plot(L, xlim=c(0,6), ylim=c(575,1025),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
#L = Tbal_covlow_Lme
#df1 = data.frame(as.numeric(names(L)),as.numeric(L))
#points(df1, cex=3.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
#svg("box3.svg", width=4, height=5)
## samples every 6th to make plot more readable
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(575, 1025),
cex.axis=1.25, type='n', xaxt="n")#, yaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_droprad_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
box()
axis(side=1, at=seq(0,6, 0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 400
## samples every 6th to make plot more readable
#svg("box4.svg", width=4, height=5)
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(575, 1025),
cex.axis=1.25, type='n', xaxt="n")#, yaxt="n")
#abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_covmed_Lme[seq(3, 65, 6)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
lines(df1, type='l', lwd=2, lty=2)
points(df1, cex=2.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
box()
axis(side=1, at=seq(0,6, 0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#dev.off()
%%R -w 400 -h 500
#svg("Sourcemissing.svg", height=10, width=8)
#svg("Sourcemissing.svg", height=7, width=5.33)
mat2 = matrix(c(1,1,1,4,4,4,7,7,7,
1,1,1,4,4,4,7,7,7,
2,2,2,5,5,5,8,8,8,
3,3,3,6,6,6,9,9,9),
4,9, byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
#########################################################
tre <- read.tree("Tbal.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Tbal_droprad_Lme
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tbal_dropddrad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
L = Tbal_covlow_Lme
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tbal_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tbal_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tbal_covlow_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
box()
##########################################################
tre <- read.tree("Timb.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Timb_droprad_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_droprad_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Timb_dropddrad_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
L = Timb_covlow_Lme[c(3:62)]
plot(L, xlim=c(0,6), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Timb_covfull_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Timb_covmed_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Timb_covlow_Lme[c(3:62)]
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
##
axis(side=1, at=seq(0,6,0.5),
labels=as.character(seq(6,0,by=-0.5)), cex.axis=1.25)
#########################################################
tre <- read.tree("Tvib.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(0,6))
####
L = Tvib_droprad_Lme
plot(L, xlim=c(0,6.25), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tvib_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tvib_droprad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tvib_dropddrad_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
####
plot(L, xlim=c(0,6.25), ylim=c(-25,1200),
cex.axis=1.25, type='n', xaxt="n", yaxt="n")
abline(h=1000, lwd=2, col="#6E6E6E", lty="dotted")
L = Tvib_covfull_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#262626", lwd=0.75)
L = Tvib_covmed_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#6E6E6E", lwd=0.75)
L = Tvib_covlow_Lme
df1 = data.frame(as.numeric(names(L)),as.numeric(L))
points(df1, cex=1.5, pch=21, col="#262626", bg="#D3D3D3", lwd=0.75)
box()
##
axis(side=1, at=seq(0,6,1),
labels=as.character(seq(6,0,by=-1)), cex.axis=1.25)
#dev.off()
"""
Explanation: Plots
End of explanation
"""
Tvib2 = Tvib.copy()
for node in Tvib2:
node.name = node.name+"_0"
## full size data
lxs_EmpVib_full = getarray("/home/deren/Dropbox/RADexplore/EmpVib/vib_full_64tip_c85d6m4p99.loci", Tvib)#, dropind=1)
lxs_EmpVib_half = getarray("/home/deren/Dropbox/RADexplore/EmpVib/vib_half_64tip_c85d6m4p99.loci", Tvib)#, dropind=1)
share_full = countmatrix(lxs_EmpVib_full,lxs_EmpVib_full)
plotSVGmatrix(share_full, "EmpVib_full")
share_half = countmatrix(lxs_EmpVib_half,lxs_EmpVib_half)
plotSVGmatrix(share_half, "EmpVib_half")
nodes_dat(Tvib, lxs_EmpVib_half,
"analysis_counts/Tvib_Emp_half.dat3")
nodes_dat(Tvib, lxs_EmpVib_full,
"analysis_counts/Tvib_Emp_full.dat3")
%%R
## read in the data and factor results
dat <- read.table("analysis_counts/Tvib_Emp_full.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
EmpVib_full_Lme <- with(dat, tapply(loci, depth, mean))
## read in the data and factor results
dat <- read.table("analysis_counts/Tvib_Emp_half.dat3",
header=F, col.names=c('depth','loci'))
dat[,1] <- as.factor(dat[,1])
EmpVib_half_Lme <- with(dat, tapply(loci, depth, mean))
%%R
EmpVib_half_Lme
%%R -w 200 -h 400
#svg("EmpVib_fullvhalf3.svg", height=6, width=2.5)
mat2 <- matrix(c(1,1,1,2),byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
#########################################################
#tre <- read.tree("Tvib.tre")
#plot(tre, show.tip.label=F,
# edge.width=2.5, type='p',
# x.lim=c(0,2.25))
Vtre <- read.tree("analysis_counts/EmpVib_full.dat3.tre")
plot(Vtre, cex=0.6, adj=0.05, x.lim=c(0,2.25),
edge.width=2.5, type='p',show.tip.label=FALSE)
nodelabels(pch=20, col="black",
cex=as.integer(Vtre$node.label)/7500)
####
L = EmpVib_half_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="red", bg="red")
####
L = EmpVib_full_Lme
#plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
# cex.axis=1.25, type='n', xaxt="n")
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
#dev.off()
%%R -w 200 -h 400
#svg("fullvhalf3.svg", height=4.5, width=4)
####
L = EmpVib_full_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
#dev.off()
#svg("fullonly.svg", height=4.5, width=4)
####
L = EmpVib_half_Lme
plot(L, xlim=c(0,2.25), ylim=c(-25,50200),
cex.axis=1.25, type='n', xaxt="n")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="red", bg="red")
####
L = EmpVib_full_Lme
abline(h=0, lwd=2, col="gray", lty="dotted")
df1 = data.frame(as.numeric(names(L)),
as.numeric(L))
points(df1, cex=1.25, pch=21, col="#262626", bg="#262626")
#dev.off()
%%R
data.frame(cbind(median(EmpVib_full_Lme),
median(EmpVib_half_Lme)),
cbind(mean(EmpVib_full_Lme),
mean(EmpVib_half_Lme)),
col.names=c("full","half"))
%%R
svg('hist.svg', width=4.25, height=4)
hist(rnorm(10000), col="grey")
dev.off()
%%R -w 300 -h 600
Vtre <- read.tree("analysis_counts/EmpVib_full.dat3.tre")
svg("EmpVib_full_nodes.svg", height=6, width=3)
plot(Vtre, cex=0.6, adj=0.05,
edge.width=3, show.tip.label=FALSE)
nodelabels(pch=20, col="black",
cex=as.integer(Vtre$node.label)/10000)
dev.off()
"""
Explanation: Empirical data (full & half depth)
Here I am grabbing the assembled empirical data from notebook_1 (Viburnum) to compare the effect of sequencing coverage with the results we see when simulating data on that tree.
End of explanation
"""
%%R -w 300 -h 600
#svg("locisnpsdepth.svg", height=8, width=4)
#pdf("locisnpsdepth.pdf", height=8, width=4)
mat2 <- matrix(c(1,1,1,5,5,5,
1,1,1,5,5,5,
2,2,2,6,6,6,
3,3,3,7,7,7,
4,4,4,8,8,8),
5,6, byrow=TRUE)
layout(mat2)
par(mar=c(1,1,0,1),
oma=c(2,2,1,0))
tre <- read.tree("Tbal.tre")
plot(tre, show.tip.label=F,
edge.width=2.5, type='p',
x.lim=c(-0.25,2.75))
##-------------------------------
## Plot full data locus sharing
x = seq(1.5,5.5)
y = Tbal_full_Lme#[1:6]
s = Tbal_full_Lsd#[1:6]
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_full_Sme#[2:6]
s = Tbal_full_Ssd#[2:6]
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##--------------------------------
## Plot drop data locus sharing
x = seq(1.5,5.5)
y = Tbal_drop_Lme
s = Tbal_drop_Lsd
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_drop_Sme
s = Tbal_drop_Ssd
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##------------------------------
## Plot cov data locus sharing
x = seq(1.5,5.5)
y = Tbal_cov_Lme
s = Tbal_cov_Lsd
plot(x, y, xlim=c(1,7.2), ylim=c(-25,3100),
cex.axis=1.25, type='n', xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1.25, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
x = seq(1.5,5.5)
y = Tbal_cov_Sme
s = Tbal_cov_Ssd
lines(x, y, lwd=2, col="darkgrey")
points(x, y, cex=1.25, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
axis(side=1, at=seq(1.1,8.1,1),
labels=as.character(seq(3,-0.5,by=-0.5)), cex.axis=1.25)
###########################################
###########################################
tre <- read.tree("Timb.tre")
plot(tre, show.tip.label=F,
edge.width=2, type='p')
##------------------------------------
## Plot full data locus sharing
x = seq(2,62)
y = Timb_full_Lme[2:62]
s = Timb_full_Lsd[2:62]
plot(x, y, xlim=c(1,65), ylim=c(-25,3100),
cex.axis=1.25, type='n', yaxt="n", xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_full_Sme[2:62]
s = Timb_full_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##----------------------------------
## Plot full data locus sharing
x = seq(2,62)
y = Timb_drop_Lme[2:62]
s = Timb_drop_Lsd[2:62]
plot(x, y, xlim=c(1,65), ylim=c(-25,3100),
cex.axis=1.25, type='n', yaxt="n", xaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_drop_Sme[2:62]
s = Timb_drop_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s)
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
##-----------------------------------
## Plot drop data locus sharing
x = seq(2,62)
y = Timb_cov_Lme[2:62]
s = Timb_cov_Lsd[2:62]
plot(x, y,
xlim=c(1,65), ylim=c(-20,3100),
cex=1, cex.axis=1.25,
pch=21, bg="#262626", xaxt="n", yaxt="n")
abline(h=(seq(0,1000,200)), lwd=1.5, col="gray", lty="dotted")
points(x,y, cex=1, pch=21, col="#262626", bg="#262626")
lines(x,y, lwd=2, col="#262626")
segments(x, y-s, x, y+s, col="#262626")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
y = Timb_cov_Sme[2:62]
s = Timb_cov_Ssd[2:62]
lines(x, y, lwd=2, col="darkgrey")
#points(x, y, cex=1, pch=21, col="darkgrey", bg="darkgrey")
segments(x, y-s, x, y+s, col="darkgrey")
epsilon = 0.02
segments(x-epsilon,y-s,x+epsilon,y-s)
segments(x-epsilon,y+s,x+epsilon,y+s)
axis(side=1, at=seq(2.1,72,10),
labels=as.character(seq(3,0,by=-0.5)), cex.axis=1.25)
#dev.off()
"""
Explanation: Plot
End of explanation
"""
lxs_EmpVib_full
def write_nodes_to_tree(tree, lxs, treefilename):
    for node in tree.traverse():
        if not node.is_leaf():
            inf = count_inf4(tree, lxs, node)
            node.name = "%d" % inf
    ## print tree with bls & node labels
    tree.write(format=3, outfile=treefilename)
write_nodes_to_tree(Tvib, lxs_EmpVib_full, "Tvib_full_nodes.tre")
%%R -w 400 -h 500
tre <- read.tree("loci_Tbal_cov")
plot(tre)#, show.tip.label=F, edge.width=2.5)
#nodelabels(pch=21,
# bg="#262626",
# cex=as.integer(tre$node.label)/150)
nodelabels(tre$node.label, bg='grey', cex=1.5)
"""
Explanation: Plot nodes on tree
End of explanation
"""
def counts(lxs, minr, maxr, maxi):
    ## data store
    data = np.zeros((maxr+1-minr, maxi))
    for sample in range(minr, maxr+1):
        g = itertools.combinations(range(maxr), sample)
        i = 0
        while i < maxi:
            try:
                gsamp = next(g)
            except StopIteration:
                break
            shared = sum(lxs[gsamp, :].sum(axis=0) == len(gsamp))
            data[sample-minr, i] = shared
            i += 1
    return data
Dbal = counts(lxs_Tbal_drop, 4, 32, 1000)
Dimb = counts(lxs_Timb_drop, 4, 32, 1000)
Dbal
def counts2(lxs, minr, maxr, maxi):
    ## data store
    data = np.zeros(((maxr+1-minr)*maxi, 3))
    count = 0
    for sample in range(minr, maxr+1):
        g = itertools.combinations(range(maxr), sample)
        i = 0
        while i < maxi:
            try:
                gsamp = next(g)
            except StopIteration:
                break
            shared = sum(lxs[gsamp, :].sum(axis=0) == len(gsamp))
            datum = [sample, float(shared), i+1]
            data[count] = datum
            i += 1
            count += 1
    return data
Dimb1 = counts2(lxs_Timb_full, 4, 32, 100)
Dimb2 = counts2(lxs_Timb_drop, 4, 32, 100)
Dimb3 = counts2(lxs_Timb_cov, 4, 32, 100)
Dbal1 = counts2(lxs_Tbal_full, 4, 32, 100)
Dbal2 = counts2(lxs_Tbal_drop, 4, 32, 100)
Dbal3 = counts2(lxs_Tbal_cov, 4, 32, 100)
def saveto(D, outname, tree):
    dd = pd.DataFrame({"time": [i[0] for i in D],
                       "loci": [i[1] for i in D],
                       "idx": [i[2] for i in D],
                       "tree": [tree for _ in D]})
    dd.to_csv(outname)
saveto(Dimb1, "Dimb1.dat", "Timb")
saveto(Dimb2, "Dimb2.dat", "Timb")
saveto(Dimb3, "Dimb3.dat", "Timb")
saveto(Dbal1, "Dbal1.dat", "Tbal")
saveto(Dbal2, "Dbal2.dat", "Tbal")
saveto(Dbal3, "Dbal3.dat", "Tbal")
"""
Explanation: Data sharing by sub-sampling
How much data are shared by a random set of N samples, and how much data are shared across the deepest bipartition for 2+N samples? Also, how many SNPs?
End of explanation
"""
|
nicoguaro/AdvancedMath | notebooks/pde.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import rcParams
"""
Explanation: Partial differential equations
End of explanation
"""
%matplotlib notebook
rcParams['mathtext.fontset'] = 'cm'
rcParams['font.size'] = 14
red = "#e41a1c"
blue = "#377eb8"
gray = "#eeeeee"
"""
Explanation: Set up ...
End of explanation
"""
def heat_update(n):
    ax0.cla()
    for cont in range(ntime):
        u_heat[1:N-1] = diff*dt/dx**2*(u_heat[2:N] + u_heat[0:N-2] -
                                       2*u_heat[1:N-1]) + u_heat[1:N-1]
    ax0.plot(x, u_heat)
    ax0.set_ylim(-1.2, 1.2)
    ax0.set_title("$t = {:.2f}$".format(n*ntime*dt))
N = 2001
x = np.linspace(-1, 1, N)
dx = x[1] - x[0]
diff = 1
factor = 10
dt = dx**2/(factor*diff)
ntime = 10
u_heat = 1 - abs(4*x)
u_heat[x < -0.25] = 0
u_heat[x > 0.25] = 0
fig0 = plt.figure(figsize=(5, 5))
ax0 = fig0.add_subplot(111)
ani0 = animation.FuncAnimation(fig0, heat_update, range(100), blit=False, repeat=False)
plt.show()
"""
Explanation: Heat equation
$$\alpha \frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}$$
End of explanation
"""
def wave_update(n):
    ax1.cla()
    for cont in range(ntime):
        u_aux = u_w.copy()
        u_aux2 = u_old.copy()
        u_old[:] = u_w[:]
        u_w[1:N-1] = alpha**2*(u_aux[2:N] + u_aux[0:N-2] - 2*u_aux[1:N-1]) \
                     + 2*u_aux[1:N-1] - u_aux2[1:N-1]
    ax1.plot(x, u_w)
    ax1.set_ylim(-1.2, 1.2)
    ax1.set_title("$t = {:.2f}$".format(n*ntime*dt))
N = 2001
x = np.linspace(-1, 1, N)
dx = x[1] - x[0]
vel = 1
factor = 10
dt = dx/(factor*vel)
alpha = vel*dt/dx
ntime = 1000
u_old = 1 - abs(4*x)
u_old[x < -0.25] = 0
u_old[x > 0.25] = 0
u_w = u_old.copy()
fig1 = plt.figure(figsize=(5, 5))
ax1 = fig1.add_subplot(111)
ani1 = animation.FuncAnimation(fig1, wave_update, range(100), blit=False, repeat=False)
from IPython.core.display import HTML
def css_styling():
    styles = open('./styles/custom_barba.css', 'r').read()
    return HTML(styles)
css_styling()
"""
Explanation: Wave equation
$$c^2 \frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 u}{\partial t^2}$$
End of explanation
"""
|
satishgoda/learning | python/jupyter/tutorial/jupyter_notebook.ipynb | mit | from IPython.display import FileLink, FileLinks
"""
Explanation: About
This Jupyter notebook demonstrates the features of the Notebook!!
http://jupyter-notebook.readthedocs.io
End of explanation
"""
!ls -1rt *.png
"""
Explanation: Notebook Format
JSON
Viewing Notebooks
http://nbviewer.jupyter.org
Github supports preliminary viewing of ipynb files.
Converting Notebooks
jupyter-nbconvert.exe --to slides ipython_gui.ipynb --reveal=reveal.js --post serve
Shell commands
End of explanation
"""
from IPython.display import Image
Image("jupyter_notebook_login.png")
"""
Explanation: Tab completion
Can be accessed by pressing the TAB key (In a code cell)
Loading Images
Python
End of explanation
"""
from IPython.display import Image, display
import os
import path
cwdpath = path.Path(os.getcwd())
for pngfile in sorted(cwdpath.listdir("*.png")):
    print(pngfile)
    display(Image(pngfile))
    print("\n")
"""
Explanation: Loading a bunch of images from the current directory
Note: The following example assumes that that path.py module is installed
End of explanation
"""
|
kratzert/RRMPG | examples/model_api_example.ipynb | mit | # Imports and Notebook setup
from timeit import timeit
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from rrmpg.models import CemaneigeGR4J
from rrmpg.data import CAMELSLoader
from rrmpg.tools.monte_carlo import monte_carlo
from rrmpg.utils.metrics import calc_nse
"""
Explanation: Model API Example
In this notebook, we'll explore some functionality of the models of this package. We'll work with the coupled CemaneigeGR4J model that is implemented in the rrmpg.models module. The data we'll use comes from the CAMELS [1] data set. For some basins, the data is provided within this Python library and can be easily imported using the CAMELSLoader class implemented in the rrmpg.data module.
In summary we'll look at:
- How you can create a model instance.
- How we can use the CAMELSLoader.
- How you can fit the model parameters to observed discharge by:
- Using one of SciPy's global optimizer
- Monte-Carlo-Simulation
- How you can use a fitted model to calculate the simulated discharge.
[1] Addor, N., A.J. Newman, N. Mizukami, and M.P. Clark, 2017: The CAMELS data set: catchment attributes and meteorology for large-sample studies. version 2.0. Boulder, CO: UCAR/NCAR. doi:10.5065/D6G73C3Q
End of explanation
"""
model = CemaneigeGR4J()
model.get_params()
"""
Explanation: Create a model
As a first step, let us have a look at how we can create one of the models implemented in rrmpg.models. Basically, for all models we have two different options:
1. Initialize a model without specific model parameters.
2. Initialize a model with specific model parameters.
The documentation provides a list of all model parameters. Alternatively we can look at help() for the model (e.g. help(CemaneigeGR4J)).
If no specific model parameters are provided upon initialization, random parameters will be generated that lie between the default parameter bounds. We can look at these bounds by calling the .get_param_bounds() method on the model object, and check the current parameter values by calling the .get_params() method.
For now we don't know any specific parameter values, so we'll create one with random parameters.
End of explanation
"""
df = CAMELSLoader().load_basin('01031500')
df.head()
"""
Explanation: Here we can see the six model parameters of CemaneigeGR4J model and their current values.
Using the CAMELSLoader
To have data to start with, we can use the CAMELSLoader class to load data of provided basins from the CAMELS dataset. To get a list of all available basins that are provided within this library, we can use the .get_basin_numbers() method. For now we will use the provided basin number 01031500.
End of explanation
"""
# calculate the end date of the calibration period
end_cal = pd.to_datetime(f"{df.index[0].year + 15}/09/30", yearfirst=True)
# validation period starts one day later
start_val = end_cal + pd.DateOffset(days=1)
# split the data into two parts
cal = df[:end_cal].copy()
val = df[start_val:].copy()
"""
Explanation: Next we will split the data into a calibration period, which we will use to find a set of good model parameters, and a validation period, which we will use to see how well the model works on unseen data. As in the CAMELS data set publication, we will use the first 15 hydrological years for calibration. The rest of the data will be used for validation.
Because the index of the dataframe is in pandas Datetime format, we can easily split the dataframe into two parts
End of explanation
"""
help(model.fit)
"""
Explanation: Fit the model to observed discharge
As already said above, we'll look at two different methods implemented in this library:
1. Using one of SciPy's global optimizer
2. Monte-Carlo-Simulation
Using one of SciPy's global optimizer
Each model has a .fit() method. This function uses the global optimizer differential evolution from the scipy package to find the set of model parameters that produces the best simulation with respect to the provided observed discharge array.
The inputs for this function can be found in the documentation or the help().
End of explanation
"""
# calculate mean temp for calibration and validation period
cal['tmean'] = (cal['tmin(C)'] + cal['tmax(C)']) / 2
val['tmean'] = (val['tmin(C)'] + val['tmax(C)']) / 2
# load the gauge station height
height = CAMELSLoader().get_station_height('01031500')
"""
Explanation: We don't know any values for the initial states of the storages, so we will ignore them for now. For the missing mean temperature, we calculate a proxy from the minimum and maximum daily temperature. The station height can be retrieved from the CAMELSLoader class via the .get_station_height() method.
End of explanation
"""
# We don't have an initial value for the snow storage, so we omit this input
result = model.fit(cal['QObs(mm/d)'], cal['prcp(mm/day)'], cal['tmean'],
cal['tmin(C)'], cal['tmax(C)'], cal['PET'], height)
"""
Explanation: Now we are ready to fit the model and retrieve a good set of model parameters from the optimizer. Again, this will be done with the calibration data. Because the model methods also except pandas Series, we can call the function as follows.
End of explanation
"""
result
"""
Explanation: result is an object defined by the scipy library and contains the optimized model parameters, as well as some more information on the optimization process. Let us have a look at this object:
End of explanation
"""
params = {}
param_names = model.get_parameter_names()
for i, param in enumerate(param_names):
    params[param] = result.x[i]
# This line set the model parameters to the ones specified in the dict
model.set_params(params)
# To be sure, let's look at the current model parameters
model.get_params()
"""
Explanation: The relevant information here is:
- fun is the final value of our optimization criterion (the mean-squared-error in this case)
- message describes the cause of the optimization termination
- nfev is the number of model simulations
- success is a flag indicating whether or not the optimization was successful
- x are the optimized model parameters
Next, let us set the model parameters to the optimized ones found by the search. Therefore we need to create a dictionary containing one key for each model parameter, with the corresponding optimized value. As mentioned before, the list of model parameter names can be retrieved with the model.get_parameter_names() function. We can then create the needed dictionary with the following lines of code:
End of explanation
"""
help(monte_carlo)
"""
Explanation: Although it might not be clear at first glance, these are the same parameters as the ones in result.x. In result.x they are ordered according to the _param_list specified in each model class, whereas the dictionary output here is sorted alphabetically.
Monte-Carlo-Simulation
Now let us have a look at how we can use the Monte-Carlo simulation implemented in rrmpg.tools.monte_carlo.
End of explanation
"""
model2 = CemaneigeGR4J()
# Let us run the Monte-Carlo simulation for 10,000 runs
result_mc = monte_carlo(model2, num=10000, qobs=cal['QObs(mm/d)'],
prec=cal['prcp(mm/day)'], mean_temp=cal['tmean'],
min_temp=cal['tmin(C)'], max_temp=cal['tmax(C)'],
etp=cal['PET'], met_station_height=height)
# Get the index of the best fit (smallest mean squared error), ignoring NaNs
idx = np.nanargmin(result_mc['mse'])
# Get the optimal parameters and set them as model parameters
optim_params = result_mc['params'][idx]
params = {}
for i, param in enumerate(param_names):
    params[param] = optim_params[i]
# This line set the model parameters to the ones specified in the dict
model2.set_params(params)
"""
Explanation: As specified in the help text, all model inputs needed for a simulation must be provided as keyword arguments. The keywords need to match the names specified in the model.simulate() function. Let us create a new model instance and see how this works for the CemaneigeGR4J model.
End of explanation
"""
# simulated discharge of the model optimized by the .fit() function
val['qsim_fit'] = model.simulate(val['prcp(mm/day)'], val['tmean'],
val['tmin(C)'], val['tmax(C)'],
val['PET'], height)
# simulated discharge of the model optimized by monte-carlo-sim
val['qsim_mc'] = model2.simulate(val['prcp(mm/day)'], val['tmean'],
val['tmin(C)'], val['tmax(C)'],
val['PET'], height)
# Calculate and print the Nash-Sutcliffe efficiency for both simulations
nse_fit = calc_nse(val['QObs(mm/d)'], val['qsim_fit'])
nse_mc = calc_nse(val['QObs(mm/d)'], val['qsim_mc'])
print("NSE of the .fit() optimization: {:.4f}".format(nse_fit))
print("NSE of the Monte-Carlo-Simulation: {:.4f}".format(nse_mc))
"""
Explanation: Calculate simulated discharge
We now have two models, optimized by different methods. Let's calculate the simulated streamflow of each model and compare the results! Each model has a .simulate() method that returns the simulated discharge for the inputs we provide to this function.
End of explanation
"""
# Plot last full hydrological year of the simulation
%matplotlib notebook
start_date = pd.to_datetime("2013/10/01", yearfirst=True)
end_date = pd.to_datetime("2014/09/30", yearfirst=True)
plt.plot(val.loc[start_date:end_date, 'QObs(mm/d)'], label='Qobs')
plt.plot(val.loc[start_date:end_date, 'qsim_fit'], label='Qsim .fit()')
plt.plot(val.loc[start_date:end_date, 'qsim_mc'], label='Qsim mc')
plt.legend()
"""
Explanation: What do these numbers mean? Let us look at a window of the simulated time series and compare it to the observed discharge:
End of explanation
"""
%%timeit
model.simulate(val['prcp(mm/day)'], val['tmean'],
val['tmin(C)'], val['tmax(C)'],
val['PET'], height)
"""
Explanation: The result is not perfect, but it is not bad either! And since this package is also about speed, let us check how long it takes to simulate the discharge for the entire validation period (19 years of data).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/2212671cb1d04d466a35eb15470863da/plot_forward_sensitivity_maps.ipynb | bsd-3-clause | # Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname)
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
"""
Explanation: Display sensitivity maps for EEG and MEG sensors
Sensitivity maps can be produced from forward operators that
indicate how well different sensor types will be able to detect
neural currents from different regions of the brain.
To get started with forward modeling see tut-forward.
End of explanation
"""
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
"""
Explanation: Compute sensitivity maps
End of explanation
"""
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
    im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
                   cmap='RdBu_r')
    ax.set_title(ch_type.upper())
    ax.set_xlabel('sources')
    ax.set_ylabel('sensors')
    fig.colorbar(im, ax=ax)
fig_2, ax = plt.subplots()
ax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
fig_2.legend()
ax.set(title='Normal orientation sensitivity',
xlabel='sensitivity', ylabel='count')
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
"""
Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map
End of explanation
"""
|
Cristianobam/UFABC | Unidade5-Atividades.ipynb | mit | # Write the program here using for
n = int(input("Enter the value of n: "))
total = 0
for _ in range(1, n + 1):
    num = int(input('Number to be summed: '))
    total = total + (num**2)
print(total)
# Write the program here using a while loop
teto = int(input('How many numbers to sum: '))
total1 = 0
r = 0
while(r < teto):
    nume = int(input('Number to be summed: '))
    total1 = total1 + (nume**2)
    r = r + 1
print(total1)
"""
Explanation: Exercise 1: sum of squares
Ask the user for a number $n$. Then ask for $n$ integers and, at the end, show the sum of the squares of the $n$ numbers the user provided. For example, if the user's answers are: 3, 4, 2, 2, your answer should be $4^2 + 2^2 + 2^2 = 24$. Write one program using the for statement and another using the while statement.
End of explanation
"""
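The reading loops above can also be separated from the arithmetic, which makes the computation testable on its own. A minimal sketch (the function name `sum_of_squares` is just an illustration, not part of the exercise):

```python
def sum_of_squares(numbers):
    """Return the sum of the squares of the given numbers."""
    return sum(x ** 2 for x in numbers)


# With the I/O factored out, the for-loop version reduces to:
# n = int(input("Enter the value of n: "))
# print(sum_of_squares(int(input('Number to be summed: ')) for _ in range(n)))
```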
# Solution analogous to the 1-to-n sum example in the file Unidade5.ipynb (but now with a product)
n = int(input('Number: '))
fac = 1
while (n > 0):
    fac = n * fac
    n = n - 1
print(fac)
"""
Explanation: Exercise 2: computing the factorial
Some of you may know that Python's math module has a function called factorial that computes the factorial of a number. In this exercise, we will write a program to compute the factorial without using the math module.
You know that the factorial of a positive integer $n$ is defined as
$$n! = n\times (n-1)\times (n-2)\times\cdots\times 3\times 2\times 1$$
In this exercise, your program should do the following:
* ask the user for the value of $n$,
* compute the factorial of $n$ and show the result to the user.
<u>Do not use the factorial function from the math module!</u>
End of explanation
"""
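The same loop can be wrapped in a function so the arithmetic can be checked independently of the prompts; a sketch (still without math.factorial, per the exercise):

```python
def factorial(n):
    """Iterative factorial, without using math.factorial."""
    fac = 1
    while n > 0:
        fac = n * fac
        n = n - 1
    return fac
```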
# Solution analogous to the sales example in the file Unidade5.ipynb (but now you must compute distances)
n = int(input('Total number of coordinates: '))
xa = 0
ya = 0
dist = 0
while(n > 0):
    x = int(input('x coordinate: '))
    y = int(input('y coordinate: '))
    parc = (((x - xa)**2) + ((y - ya)**2))**(1/2)
    xa = x
    ya = y
    dist = dist + parc
    n = n - 1
print('The total distance traveled is: {:.2f}'.format(dist))
"""
Explanation: Exercise 3: total distance
In this exercise you must write a program that computes the distance traveled by a helicopter. The helicopter departs from position $(0, 0)$, flies in a straight line to position $(x_1, y_1)$ and lands, then flies in a straight line to position $(x_2, y_2)$ and lands, and so on, until landing for good at position $(x_n, y_n)$. Assume these coordinates are all integers. Your program should do the following:
* ask the user to enter the number $n$ of points the helicopter must visit (besides the origin);
* then ask for the coordinates of the points, compute the distance between each two consecutive points, and accumulate that value;
* finally, show the total distance traveled by the helicopter.
See an example run below.
How many helipads will the helicopter visit? 3
Enter the x coordinate of helipad 1: 4
Enter the y coordinate of helipad 1: 0
Enter the x coordinate of helipad 2: 4
Enter the y coordinate of helipad 2: 3
Enter the x coordinate of helipad 3: 0
Enter the y coordinate of helipad 3: 0
The total distance traveled is 12.0
Hint: $\sqrt{x} = x^{1/2}$
End of explanation
"""
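The accumulation loop can also be phrased over a list of points, using math.hypot for the Euclidean distance instead of the `** (1/2)` trick. A sketch (the function name `total_distance` is illustrative):

```python
import math


def total_distance(points):
    """Total straight-line distance from (0, 0) through each point in order."""
    xa, ya = 0, 0
    dist = 0.0
    for x, y in points:
        dist += math.hypot(x - xa, y - ya)  # Euclidean distance between legs
        xa, ya = x, y
    return dist
```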
n = int(input('Enter the number to check: '))
a = 0
for i in range(2, n + 1):
    if n % i == 0:
        a = a + 1
if a == 1:
    print('{} is prime'.format(n))
else:
    print('{} is composite'.format(n))
"""
Explanation: Exercise 4: is $n$ prime?
An integer $n > 1$ is prime if it has exactly two divisors: $1$ and $n$ itself.
Write a program that asks the user for an integer $n > 1$ and decides whether $n$ is prime or composite.
End of explanation
"""
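The divisor count above tries every candidate up to $n$; trial division only needs to go up to $\sqrt{n}$, since any composite $n$ has a divisor no larger than that. A sketch of the faster check:

```python
def is_prime(n):
    """Primality test by trial division up to the square root of n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
```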
n = int(input('Enter the number to check: '))
a = 0
for i in range(2, n + 1):
    for x in range(2, i):
        if i % x == 0:
            break
    else:
        a = a + 1
print('Up to the number {}, there are {} prime(s)'.format(n, a))
"""
Explanation: Exercise 5: primes less than $n$
Write a program that asks the user for an integer $n > 1$ and counts (and then shows) how many prime numbers are less than or equal to $n$.
End of explanation
"""
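For counting all primes up to $n$, the nested trial-division loops above do redundant work; a sieve of Eratosthenes marks all multiples of each prime once and counts the survivors. A sketch (the function name `count_primes` is illustrative):

```python
def count_primes(n):
    """Count how many primes are <= n, using a sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    i = 2
    while i * i <= n:
        if sieve[i]:
            # Mark every multiple of i starting at i*i as composite.
            for j in range(i * i, n + 1, i):
                sieve[j] = False
        i += 1
    return sum(sieve)
```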
x = True
while x:
    n = int(input('Enter a number greater than zero: '))
    if n > 0:
        print('n² is: ', n**2)
    else:
        x = False
"""
Explanation: Exercise 6: squares
Write a program that keeps executing the following procedure:
asks the user for a number $n > 0$;
if $n > 0$, prints $n^2$; otherwise the program ends.
End of explanation
"""
kubeflow/kfserving-lts | docs/samples/explanation/alibi/moviesentiment/movie_review_explanations.ipynb | apache-2.0
!pygmentize moviesentiment.yaml
!kubectl apply -f moviesentiment.yaml
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
SERVICE_HOSTNAMES=!(kubectl get inferenceservice moviesentiment -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME)
import sys
sys.path.append('../')
from alibi_helper import *
from alibi.datasets import fetch_movie_sentiment
movies = fetch_movie_sentiment()
idxNeg = 37
idxPos = 5227
for idx in [idxNeg,idxPos]:
print(movies.data[idx])
show_prediction(predict(movies.data[idx:idx+1],'moviesentiment',movies,SERVICE_HOSTNAME,CLUSTER_IP))
"""
Explanation: Movie Review Explanations
We will use an SKLearn classifier built on movie sentiment data which predicts positive or negative sentiment for review text.
The KFServing resource provides:
* A pretrained sklearn model stored in a Google Cloud Storage bucket
* A Text Seldon Alibi Explainer. See the Alibi Docs for further details.
End of explanation
"""
exp = explain(movies.data[idxNeg:idxNeg+1],"moviesentiment",SERVICE_HOSTNAME,CLUSTER_IP)
show_anchors(exp['data']['anchor'])
"""
Explanation: Get Explanation for Negative Prediction
End of explanation
"""
show_bar([exp['data']['precision']],[''],"Precision")
show_bar([exp['data']['coverage']],[''],"Coverage")
show_feature_coverage(exp['data'])
show_examples(exp['data'],0,movies)
show_examples(exp['data'],0,movies,False)
"""
Explanation: Show the precision: how likely it is that predictions using only the anchor features would produce the same result.
End of explanation
"""
exp = explain(movies.data[idxPos:idxPos+1],"moviesentiment",SERVICE_HOSTNAME,CLUSTER_IP)
show_anchors(exp['data']['anchor'])
"""
Explanation: Get Explanation for Positive Example
End of explanation
"""
show_bar([exp['data']['precision']],[''],"Precision")
show_bar([exp['data']['coverage']],[''],"Coverage")
show_feature_coverage(exp['data'])
show_examples(exp['data'],0,movies)
show_examples(exp['data'],0,movies,False)
"""
Explanation: Show the precision: how likely it is that predictions using only the anchor features would produce the same result.
End of explanation
"""
!kubectl delete -f moviesentiment.yaml
"""
Explanation: Teardown
End of explanation
"""
jepegit/cellpy | dev_utils/easyplot/EasyPlot_Dev.ipynb | mit
files = [f1, f2]
names = [f1.name, f2.name]
ezplt = easyplot.EasyPlot(files, names, figtitle="Test1")
ezplt.plot()
"""
Explanation: Checking standard usage
End of explanation
"""
easyplot.EasyPlot(
files,
names,
figtitle="Test2",
galvanostatic_normalize_capacity=True,
all_in_one=True,
dqdv_plot=True,
).plot()
f1.with_name(f"{f1.stem}_tmp.xlsx")
easyplot.help()
"""
Explanation: NOTE
easyplot crashes if nicknames are not given
End of explanation
"""
journal_file = Path("../../dev_data/db/test_journal.xlsx")
journal_file.is_file()
j = LabJournal(db_reader=None)
j.from_file(journal_file, paginate=False)
j.pages
"""
Explanation: Loading journal
End of explanation
"""
rawfiles = j.pages.raw_file_names.to_list()
names = j.pages.label.to_list()
masses = j.pages.mass.to_list()
easyplot.EasyPlot(
rawfiles,
names,
figtitle="Test3",
galvanostatic_normalize_capacity=True,
all_in_one=True,
dqdv_plot=True,
).plot()
outfile = journal_file
j.to_file(outfile, to_project_folder=False)
"""
Explanation: NOTE
should rename index to cell_name instead of filename
End of explanation
"""
# first draft mixing two layouts; replaced by the cleaner dict below
bad_cycle_numbers = {
    "cell_name": ['20160805_test001_45_cc', '20160805_test001_45_cc', '20160805_test001_45_cc', '20160805_test001_47_cc', '20160805_test001_47_cc'],
    '20160805_test001_45_cc': [4, 337, 338],
    '20160805_test001_47_cc': [7, 8, 9, 33],
}
bad_cycle_numbers = {
"20160805_test001_45_cc": [4, 337, 338],
"20160805_test001_47_cc": [7, 8, 9, 33],
}
bad_cells = ["20160805_test001_45_cc", "another_cell_000_cc"]
notes = ["one comment for the road", "another comment", "a third comment"]
session0 = {
"bad_cycle_numbers": bad_cycle_numbers,
"bad_cells": bad_cells,
"notes": notes,
}
meta = j._prm_packer()
meta
session0
import pandas as pd
l_bad_cycle_numbers = []
for k, v in bad_cycle_numbers.items():
l_bad_cycle_numbers.append(pd.DataFrame(data=v, columns=[k]))
df_bad_cycle_numbers = (
pd.concat(l_bad_cycle_numbers, axis=1)
.melt(var_name="cell_name", value_name="cycle_index")
.dropna()
)
df_bad_cycle_numbers
df_bad_cells = pd.DataFrame(bad_cells, columns=["cell_name"])
df_bad_cells
df_notes = pd.DataFrame(notes, columns=["txt"])
df_notes
len(meta)
df_meta = pd.DataFrame(meta, index=[0]).melt(var_name="parameter", value_name="value")
df_meta
session = pd.concat(
[df_bad_cycle_numbers, df_bad_cells, df_notes],
axis=1,
keys=["bad_cycle_number", "bad_cells", "notes"],
)
session
file_name = Path("out.xlsx")
pages = j.pages
# saving journal to xlsx
try:
with pd.ExcelWriter(file_name, mode="w", engine="openpyxl") as writer:
pages.to_excel(writer, sheet_name="pages", engine="openpyxl")
    # index=False is not yet supported for multi-index columns (switch to index=False when pandas implements it):
session.to_excel(writer, sheet_name="session", engine="openpyxl")
df_meta.to_excel(writer, sheet_name="meta", engine="openpyxl", index=False)
except PermissionError as e:
    print(f"Could not save journal to xlsx ({e})")
"""
Explanation: Note
need to implement a way to export journals to xlsx
maybe we also need to allow for reading meta and session from xlsx journals
figure out or modify easyplot so that mass and other parameters can be given to it
figure out a way to save an easyplot session with session and metadata
Save journal to xlsx
End of explanation
"""
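Underneath the pandas plumbing, packing and unpacking the `bad_cycle_numbers` dict is just a flatten/regroup between a dict of lists and long-format rows. A pandas-free sketch of the same idea (the function names are illustrative, not part of cellpy):

```python
def to_rows(bad_cycle_numbers):
    """Flatten {cell_name: [cycles]} into a list of (cell_name, cycle) rows."""
    return [(cell, cyc)
            for cell, cycles in bad_cycle_numbers.items()
            for cyc in cycles]


def from_rows(rows):
    """Rebuild {cell_name: [cycles]} from (cell_name, cycle) rows."""
    out = {}
    for cell, cyc in rows:
        out.setdefault(cell, []).append(cyc)
    return out
```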
import tempfile
import shutil
# loading journal from xlsx: pages
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
pages2 = pd.read_excel(temporary_file_name, sheet_name="pages", engine="openpyxl")
except PermissionError as e:
    print(f"Could not load journal from xlsx ({e})")
pages2
# loading journal from xlsx: session
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
session2 = pd.read_excel(
temporary_file_name, sheet_name="session", engine="openpyxl", header=[0, 1]
)
except PermissionError as e:
    print(f"Could not load journal from xlsx ({e})")
session2
bcn2 = {
l: list(sb["cycle_index"].values)
for l, sb in session2["bad_cycle_number"].groupby("cell_name")
}
bcn2
bc2 = list(session2["bad_cells"].dropna().values.flatten())
bc2
n2 = list(session2["notes"].dropna().values.flatten())
n2
session3 = {"bad_cycle_numbers": bcn2, "bad_cells": bc2, "notes": n2}
session0
session3
session0 == session3
# loading journal from xlsx: meta
temporary_directory = tempfile.mkdtemp()
temporary_file_name = shutil.copy(file_name, temporary_directory)
try:
df_meta2 = pd.read_excel(
temporary_file_name, sheet_name="meta", engine="openpyxl", index_col="parameter"
)
except PermissionError as e:
    print(f"Could not load journal from xlsx ({e})")
meta2 = df_meta2.to_dict()["value"]
meta2
meta
pages2
session2
df_meta2
j._prm_packer()
"""
Explanation: loading xlsx journal
End of explanation
"""
fortyninemaps/karta | doc/source/geointerface.ipynb | mit
from karta.examples import greenland
from karta.vector.read import from_shape
import shapely.geometry
"""
Explanation: Example using __geo_interface__
The __geo_interface__ specification suggested by Sean Gillies (gist) vastly expands the capabilities of Karta by making data interchange with external modules simple. This example shows how the popular shapely package can be used to apply a buffer to a Karta polygon.
End of explanation
"""
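Nothing in the protocol is specific to Karta or shapely: any object whose `__geo_interface__` property returns a GeoJSON-like mapping can be consumed by `shapely.geometry.shape`. A minimal sketch (the `Triangle` class here is hypothetical, for illustration only):

```python
class Triangle:
    """Any object exposing __geo_interface__ can be passed to shapely.geometry.shape."""

    @property
    def __geo_interface__(self):
        # A GeoJSON-like mapping: type plus coordinates (ring is closed).
        return {
            "type": "Polygon",
            "coordinates": [[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]],
        }

# shapely.geometry.shape(Triangle()) would then build the equivalent shapely Polygon.
```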
shapely_poly = shapely.geometry.shape(greenland)
shapely_poly
"""
Explanation: Pass the data in the greenland Polygon to shapely.
End of explanation
"""
shapely_poly_buffered = shapely_poly.buffer(100e3)
shapely_poly_buffered
"""
Explanation: Use shapely to call the GEOS library's GEOSBuffer function (takes several seconds on my machine).
End of explanation
"""
greenland_buffered = from_shape(shapely_poly_buffered)
greenland_buffered._crs = greenland.crs
"""
Explanation: Convert the data back to a Karta Polygon. Since shapely doesn't handle coordinate systems, we need to set the original coordinate system from the old Polygon.
End of explanation
"""
tensorflow/docs-l10n | site/en-snapshot/guide/keras/save_and_serialize.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import tensorflow as tf
from tensorflow import keras
"""
Explanation: Save and load Keras models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/save_and_serialize"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/save_and_serialize.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/serialization_and_saving.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/save_and_serialize.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
A Keras model consists of multiple components:
The architecture, or configuration, which specifies what layers the model
contains, and how they're connected.
A set of weights values (the "state of the model").
An optimizer (defined by compiling the model).
A set of losses and metrics (defined by compiling the model or calling
add_loss() or add_metric()).
The Keras API makes it possible to save all of these pieces to disk at once,
or to only selectively save some of them:
Saving everything into a single archive in the TensorFlow SavedModel format
(or in the older Keras H5 format). This is the standard practice.
Saving the architecture / configuration only, typically as a JSON file.
Saving the weights values only. This is generally used when training the model.
Let's take a look at each of these options. When would you use one or the other,
and how do they work?
How to save and load a model
If you only have 10 seconds to read this guide, here's what you need to know.
Saving a Keras model:
python
model = ... # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location')
Loading the model back:
python
from tensorflow import keras
model = keras.models.load_model('path/to/location')
Now, let's look at the details.
Setup
End of explanation
"""
def get_model():
# Create a simple model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mean_squared_error")
return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
"""
Explanation: Whole-model saving & loading
You can save an entire model to a single artifact. It will include:
The model's architecture/config
The model's weight values (which were learned during training)
The model's compilation information (if compile() was called)
The optimizer and its state, if any (this enables you to restart training
where you left off)
APIs
model.save() or tf.keras.models.save_model()
tf.keras.models.load_model()
There are two formats you can use to save an entire model to disk:
the TensorFlow SavedModel format, and the older Keras H5 format.
The recommended format is SavedModel. It is the default when you use model.save().
You can switch to the H5 format by:
Passing save_format='h5' to save().
Passing a filename that ends in .h5 or .keras to save().
SavedModel format
SavedModel is the more comprehensive save format that saves the model architecture,
weights, and the traced TensorFlow subgraphs of the call functions. This enables
Keras to restore both built-in layers as well as custom objects.
Example:
End of explanation
"""
!ls my_model
"""
Explanation: What the SavedModel contains
Calling model.save('my_model') creates a folder named my_model,
containing the following:
End of explanation
"""
class CustomModel(keras.Model):
def __init__(self, hidden_units):
super(CustomModel, self).__init__()
self.hidden_units = hidden_units
self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]
def call(self, inputs):
x = inputs
for layer in self.dense_layers:
x = layer(x)
return x
def get_config(self):
return {"hidden_units": self.hidden_units}
@classmethod
def from_config(cls, config):
return cls(**config)
model = CustomModel([16, 16, 10])
# Build the model by calling it
input_arr = tf.random.uniform((1, 5))
outputs = model(input_arr)
model.save("my_model")
# Option 1: Load with the custom_object argument.
loaded_1 = keras.models.load_model(
"my_model", custom_objects={"CustomModel": CustomModel}
)
# Option 2: Load without the CustomModel class.
# Delete the custom-defined model class to ensure that the loader does not have
# access to it.
del CustomModel
loaded_2 = keras.models.load_model("my_model")
np.testing.assert_allclose(loaded_1(input_arr), outputs)
np.testing.assert_allclose(loaded_2(input_arr), outputs)
print("Original model:", model)
print("Model Loaded with custom objects:", loaded_1)
print("Model loaded without the custom object class:", loaded_2)
"""
Explanation: The model architecture and training configuration
(including the optimizer, losses, and metrics) are stored in saved_model.pb.
The weights are saved in the variables/ directory.
For detailed information on the SavedModel format, see the
SavedModel guide (The SavedModel format on disk).
How SavedModel handles custom objects
When saving the model and its layers, the SavedModel format stores the
class name, call function, losses, and weights (and the config, if implemented).
The call function defines the computation graph of the model/layer.
In the absence of the model/layer config, the call function is used to create
a model that exists like the original model which can be trained, evaluated,
and used for inference.
Nevertheless, it is always a good practice to define the get_config
and from_config methods when writing a custom model or layer class.
This allows you to easily update the computation later if needed.
See the section about Custom objects
for more information.
Example:
End of explanation
"""
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.
model.save("my_h5_model.h5")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_h5_model.h5")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
"""
Explanation: The first loaded model is loaded using the config and CustomModel class. The second
model is loaded by dynamically creating the model class that acts like the original model.
Configuring the SavedModel
New in TensorFlow 2.4
The argument save_traces has been added to model.save, which allows you to toggle
SavedModel function tracing. Functions are saved to allow Keras to re-load custom
objects without the original class definitions, so when save_traces=False, all custom
objects must have defined get_config/from_config methods. When loading, the custom
objects must be passed to the custom_objects argument. save_traces=False reduces the
disk space used by the SavedModel and saving time.
Keras H5 format
Keras also supports saving a single HDF5 file containing the model's architecture,
weights values, and compile() information.
It is a light-weight alternative to SavedModel.
Example:
End of explanation
"""
layer = keras.layers.Dense(3, activation="relu")
layer_config = layer.get_config()
new_layer = keras.layers.Dense.from_config(layer_config)
"""
Explanation: Limitations
Compared to the SavedModel format, there are two things that don't
get included in the H5 file:
External losses & metrics added via model.add_loss()
& model.add_metric() are not saved (unlike SavedModel).
If you have such losses & metrics on your model and you want to resume training,
you need to add these losses back yourself after loading the model.
Note that this does not apply to losses/metrics created inside layers via
self.add_loss() & self.add_metric(). As long as the layer gets loaded,
these losses & metrics are kept, since they are part of the call method of the layer.
The computation graph of custom objects such as custom layers
is not included in the saved file. At loading time, Keras will need access
to the Python classes/functions of these objects in order to reconstruct the model.
See Custom objects.
Saving the architecture
The model's configuration (or architecture) specifies what layers the model
contains, and how these layers are connected*. If you have the configuration of a model,
then the model can be created with a freshly initialized state for the weights
and no compilation information.
*Note that this only applies to models defined using the Functional or Sequential APIs,
not subclassed models.
Configuration of a Sequential model or Functional API model
These types of models are explicit graphs of layers: their configuration
is always available in a structured form.
APIs
get_config() and from_config()
tf.keras.models.model_to_json() and tf.keras.models.model_from_json()
get_config() and from_config()
Calling config = model.get_config() will return a Python dict containing
the configuration of the model. The same model can then be reconstructed via
Sequential.from_config(config) (for a Sequential model) or
Model.from_config(config) (for a Functional API model).
The same workflow also works for any serializable layer.
Layer example:
End of explanation
"""
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
config = model.get_config()
new_model = keras.Sequential.from_config(config)
"""
Explanation: Sequential model example:
End of explanation
"""
inputs = keras.Input((32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
"""
Explanation: Functional model example:
End of explanation
"""
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
json_config = model.to_json()
new_model = keras.models.model_from_json(json_config)
"""
Explanation: to_json() and tf.keras.models.model_from_json()
This is similar to get_config / from_config, except it turns the model
into a JSON string, which can then be loaded without the original model class.
It is also specific to models, it isn't meant for layers.
Example:
End of explanation
"""
model.save("my_model")
tensorflow_graph = tf.saved_model.load("my_model")
x = np.random.uniform(size=(4, 32)).astype(np.float32)
predicted = tensorflow_graph(x).numpy()
"""
Explanation: Custom objects
Models and layers
The architecture of subclassed models and layers are defined in the methods
__init__ and call. They are considered Python bytecode,
which cannot be serialized into a JSON-compatible config
-- you could try serializing the bytecode (e.g. via pickle),
but it's completely unsafe and means your model cannot be loaded on a different system.
In order to save/load a model with custom-defined layers, or a subclassed model,
you should overwrite the get_config and optionally from_config methods.
Additionally, you should register the custom object so that Keras is aware of it.
Custom functions
Custom-defined functions (e.g. activation, loss, or initialization) do not need
a get_config method. The function name is sufficient for loading as long
as it is registered as a custom object.
Loading the TensorFlow graph only
It's possible to load the TensorFlow graph generated by the Keras. If you
do so, you won't need to provide any custom_objects. You can do so like
this:
End of explanation
"""
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
def call(self, inputs, training=False):
if training:
return inputs * self.var
else:
return inputs
def get_config(self):
return {"a": self.var.numpy()}
# There's actually no need to define `from_config` here, since returning
# `cls(**config)` is the default behavior.
@classmethod
def from_config(cls, config):
return cls(**config)
layer = CustomLayer(5)
layer.var.assign(2)
serialized_layer = keras.layers.serialize(layer)
new_layer = keras.layers.deserialize(
serialized_layer, custom_objects={"CustomLayer": CustomLayer}
)
"""
Explanation: Note that this method has several drawbacks:
* For traceability reasons, you should always have access to the custom
objects that were used. You wouldn't want to put in production a model
that you cannot re-create.
* The object returned by tf.saved_model.load isn't a Keras model. So it's
not as easy to use. For example, you won't have access to .predict() or .fit().
Even if its use is discouraged, it can help you if you're in a tight spot,
for example, if you lost the code of your custom objects or have issues
loading the model with tf.keras.models.load_model().
You can find out more in
the page about tf.saved_model.load
Defining the config methods
Specifications:
get_config should return a JSON-serializable dictionary in order to be
compatible with the Keras architecture- and model-saving APIs.
from_config(config) (classmethod) should return a new layer or model
object that is created from the config.
The default implementation returns cls(**config).
Example:
End of explanation
"""
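The contract itself is framework-agnostic: stripped of Keras, it is just a JSON-serializable dict plus a `cls(**config)` round trip. A plain-Python sketch of the idea (the `Scale` class is illustrative, not a Keras API):

```python
import json


class Scale:
    """Toy object following the get_config / from_config contract."""

    def __init__(self, factor=1.0):
        self.factor = factor

    def get_config(self):
        # Must be JSON-serializable.
        return {"factor": self.factor}

    @classmethod
    def from_config(cls, config):
        # Default behavior: rebuild from keyword arguments.
        return cls(**config)


# Round trip through JSON, as the architecture-saving APIs effectively do:
config = json.loads(json.dumps(Scale(2.5).get_config()))
clone = Scale.from_config(config)
```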
class CustomLayer(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(CustomLayer, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(CustomLayer, self).get_config()
config.update({"units": self.units})
return config
def custom_activation(x):
return tf.nn.tanh(x) ** 2
# Make a model with the CustomLayer and custom_activation
inputs = keras.Input((32,))
x = CustomLayer(32)(inputs)
outputs = keras.layers.Activation(custom_activation)(x)
model = keras.Model(inputs, outputs)
# Retrieve the config
config = model.get_config()
# At loading time, register the custom objects with a `custom_object_scope`:
custom_objects = {"CustomLayer": CustomLayer, "custom_activation": custom_activation}
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.Model.from_config(config)
"""
Explanation: Registering the custom object
Keras keeps a note of which class generated the config.
From the example above, tf.keras.layers.serialize
generates a serialized form of the custom layer:
{'class_name': 'CustomLayer', 'config': {'a': 2}}
Keras keeps a master list of all built-in layer, model, optimizer,
and metric classes, which is used to find the correct class to call from_config.
If the class can't be found, then an error is raised (Value Error: Unknown layer).
There are a few ways to register custom classes to this list:
Setting custom_objects argument in the loading function. (see the example
in section above "Defining the config methods")
tf.keras.utils.custom_object_scope or tf.keras.utils.CustomObjectScope
tf.keras.utils.register_keras_serializable
Custom layer and function example
End of explanation
"""
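The lookup behind deserialization can be pictured as a name-to-class table consulted at load time, with custom_objects layered on top. A toy sketch of that idea (this is not Keras's actual implementation; the names are illustrative):

```python
# Toy registry mapping class names to classes, mimicking the master list idea.
_REGISTRY = {}


def register_serializable(cls):
    """Decorator sketching the effect of register_keras_serializable."""
    _REGISTRY[cls.__name__] = cls
    return cls


def deserialize(spec, custom_objects=None):
    """Rebuild an object from {'class_name': ..., 'config': ...}."""
    table = {**_REGISTRY, **(custom_objects or {})}
    if spec["class_name"] not in table:
        raise ValueError("Unknown layer: " + spec["class_name"])
    return table[spec["class_name"]](**spec["config"])


@register_serializable
class CustomLayer:
    def __init__(self, a):
        self.a = a
```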
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.models.clone_model(model)
"""
Explanation: In-memory model cloning
You can also do in-memory cloning of a model via tf.keras.models.clone_model().
This is equivalent to getting the config then recreating the model from its config
(so it does not preserve compilation information or layer weights values).
Example:
End of explanation
"""
def create_layer():
layer = keras.layers.Dense(64, activation="relu", name="dense_2")
layer.build((None, 784))
return layer
layer_1 = create_layer()
layer_2 = create_layer()
# Copy weights from layer 1 to layer 2
layer_2.set_weights(layer_1.get_weights())
"""
Explanation: Saving & loading only the model's weights values
You can choose to only save & load a model's weights. This can be useful if:
You only need the model for inference: in this case you won't need to
restart training, so you don't need the compilation information or optimizer state.
You are doing transfer learning: in this case you will be training a new model
reusing the state of a prior model, so you don't need the compilation
information of the prior model.
APIs for in-memory weight transfer
Weights can be copied between different objects by using get_weights
and set_weights:
tf.keras.layers.Layer.get_weights(): Returns a list of numpy arrays.
tf.keras.layers.Layer.set_weights(): Sets the model weights to the values
in the weights argument.
Examples below.
Transferring weights from one layer to another, in memory
End of explanation
"""
# Create a simple functional model
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Define a subclassed model with the same architecture
class SubclassedModel(keras.Model):
def __init__(self, output_dim, name=None):
super(SubclassedModel, self).__init__(name=name)
self.output_dim = output_dim
self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1")
self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2")
self.dense_3 = keras.layers.Dense(output_dim, name="predictions")
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
x = self.dense_3(x)
return x
def get_config(self):
return {"output_dim": self.output_dim, "name": self.name}
subclassed_model = SubclassedModel(10)
# Call the subclassed model once to create the weights.
subclassed_model(tf.ones((1, 784)))
# Copy weights from functional_model to subclassed_model.
subclassed_model.set_weights(functional_model.get_weights())
assert len(functional_model.weights) == len(subclassed_model.weights)
for a, b in zip(functional_model.weights, subclassed_model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
"""
Explanation: Transferring weights from one model to another model with a
compatible architecture, in memory
End of explanation
"""
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
# Add a dropout layer, which does not contain any weights.
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model_with_dropout = keras.Model(
inputs=inputs, outputs=outputs, name="3_layer_mlp"
)
functional_model_with_dropout.set_weights(functional_model.get_weights())
"""
Explanation: The case of stateless layers
Because stateless layers do not change the order or number of weights,
models can have compatible architectures even if there are extra/missing
stateless layers.
End of explanation
"""
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("ckpt")
load_status = sequential_model.load_weights("ckpt")
# `assert_consumed` can be used as validation that all variable values have been
# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other
# methods in the Status object.
load_status.assert_consumed()
"""
Explanation: APIs for saving weights to disk & loading them back
Weights can be saved to disk by calling model.save_weights
in the following formats:
TensorFlow Checkpoint
HDF5
The default format for model.save_weights is TensorFlow checkpoint.
There are two ways to specify the save format:
save_format argument: Set the value to save_format="tf" or save_format="h5".
path argument: If the path ends with .h5 or .hdf5,
then the HDF5 format is used. Other suffixes will result in a TensorFlow
checkpoint unless save_format is set.
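As a quick sketch of the two selection mechanisms (the model, directory, and file names here are illustrative only):

```python
import os
import tempfile

from tensorflow import keras

model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(2)])
out = tempfile.mkdtemp()

# Default: TensorFlow checkpoint format (writes .index/.data-* files).
model.save_weights(os.path.join(out, "ckpt"))

# Suffix-based: a path ending in .h5 selects the HDF5 format automatically.
model.save_weights(os.path.join(out, "weights.h5"))

# Explicit: save_format overrides whatever the suffix would imply.
model.save_weights(os.path.join(out, "weights.bin"), save_format="h5")

print(os.path.exists(os.path.join(out, "ckpt.index")))
print(os.path.exists(os.path.join(out, "weights.h5")))
```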
There is also an option of retrieving weights as in-memory numpy arrays.
Each API has its pros and cons which are detailed below.
TF Checkpoint format
Example:
End of explanation
"""
class CustomLayer(keras.layers.Layer):
    def __init__(self, a):
        super(CustomLayer, self).__init__()
        self.var = tf.Variable(a, name="var_a")
layer = CustomLayer(5)
layer_ckpt = tf.train.Checkpoint(layer=layer).save("custom_layer")
ckpt_reader = tf.train.load_checkpoint(layer_ckpt)
ckpt_reader.get_variable_to_dtype_map()
"""
Explanation: Format details
The TensorFlow Checkpoint format saves and restores the weights using
object attribute names. For instance, consider the tf.keras.layers.Dense layer.
The layer contains two weights: dense.kernel and dense.bias.
When the layer is saved to the tf format, the resulting checkpoint contains the keys
"kernel" and "bias" and their corresponding weight values.
For more information see
"Loading mechanics" in the TF Checkpoint guide.
Note that the attribute/graph edge is named after the name used in the parent
object, not the name of the variable. Consider the CustomLayer in the example below.
The variable CustomLayer.var is saved with "var" as part of the key, not "var_a".
End of explanation
"""
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Extract a portion of the functional model defined in the Setup section.
# The following lines produce a new model that excludes the final output
# layer of the functional model.
pretrained = keras.Model(
functional_model.inputs, functional_model.layers[-1].input, name="pretrained_model"
)
# Randomly assign "trained" weights.
for w in pretrained.weights:
w.assign(tf.random.normal(w.shape))
pretrained.save_weights("pretrained_ckpt")
pretrained.summary()
# Assume this is a separate program where only 'pretrained_ckpt' exists.
# Create a new functional model with a different output dimension.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(5, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="new_model")
# Load the weights from pretrained_ckpt into model.
model.load_weights("pretrained_ckpt")
# Check that all of the pretrained weights have been loaded.
for a, b in zip(pretrained.weights, model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
print("\n", "-" * 50)
model.summary()
# Example 2: Sequential model
# Recreate the pretrained model, and load the saved weights.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
pretrained_model = keras.Model(inputs=inputs, outputs=x, name="pretrained")
# Sequential example:
model = keras.Sequential([pretrained_model, keras.layers.Dense(5, name="predictions")])
model.summary()
pretrained_model.load_weights("pretrained_ckpt")
# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,
# but will *not* work as expected. If you inspect the weights, you'll see that
# none of the weights will have loaded. `pretrained_model.load_weights()` is the
# correct method to call.
"""
Explanation: Transfer learning example
Essentially, as long as two models have the same architecture,
they are able to share the same checkpoint.
Example:
End of explanation
"""
# Create a subclassed model that essentially uses functional_model's first
# and last layers.
# First, save the weights of functional_model's first and last dense layers.
first_dense = functional_model.layers[1]
last_dense = functional_model.layers[-1]
ckpt_path = tf.train.Checkpoint(
dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias
).save("ckpt")
# Define the subclassed model.
class ContrivedModel(keras.Model):
def __init__(self):
super(ContrivedModel, self).__init__()
self.first_dense = keras.layers.Dense(64)
        self.kernel = self.add_weight("kernel", shape=(64, 10))
        self.bias = self.add_weight("bias", shape=(10,))
def call(self, inputs):
x = self.first_dense(inputs)
return tf.matmul(x, self.kernel) + self.bias
model = ContrivedModel()
# Call model on inputs to create the variables of the dense layer.
_ = model(tf.ones((1, 784)))
# Create a Checkpoint with the same structure as before, and load the weights.
tf.train.Checkpoint(
dense=model.first_dense, kernel=model.kernel, bias=model.bias
).restore(ckpt_path).assert_consumed()
"""
Explanation: It is generally recommended to stick to the same API for building models. If you
switch between Sequential and Functional, or Functional and subclassed,
etc., then always rebuild the pre-trained model and load the pre-trained
weights to that model.
The next question is, how can weights be saved and loaded to different models
if the model architectures are quite different?
The solution is to use tf.train.Checkpoint to save and restore the exact layers/variables.
Example:
End of explanation
"""
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("weights.h5")
sequential_model.load_weights("weights.h5")
"""
Explanation: HDF5 format
The HDF5 format contains weights grouped by layer names.
The weights are lists ordered by concatenating the list of trainable weights
to the list of non-trainable weights (same as layer.weights).
Thus, a model can use an HDF5 checkpoint if it has the same layers and trainable
statuses as saved in the checkpoint.
Example:
End of explanation
"""
class NestedDenseLayer(keras.layers.Layer):
def __init__(self, units, name=None):
super(NestedDenseLayer, self).__init__(name=name)
self.dense_1 = keras.layers.Dense(units, name="dense_1")
self.dense_2 = keras.layers.Dense(units, name="dense_2")
def call(self, inputs):
return self.dense_2(self.dense_1(inputs))
nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))
print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)
"""
Explanation: Note that changing layer.trainable may result in a different
layer.weights ordering when the model contains nested layers.
End of explanation
"""
def create_functional_model():
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
functional_model = create_functional_model()
functional_model.save_weights("pretrained_weights.h5")
# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained_weights.h5")
# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()
"""
Explanation: Transfer learning example
When loading pretrained weights from HDF5, it is recommended to load
the weights into the original checkpointed model, and then extract
the desired weights/layers into a new model.
Example:
End of explanation
"""
|
xmnlab/notebooks | DSP/phase/Phase-Difference.ipynb | mit | from matplotlib import pyplot as plt
from numpy.fft import fft, ifft
import numpy as np
import pandas as pd
%matplotlib inline
def sine_signal(
    t: np.ndarray, A: float, f: float, φ: float
) -> pd.Series:
    """
    Generate a sine wave sampled at the times in ``t``.

    :param t: sample times in seconds
    :type t: np.ndarray
    :param A: amplitude (peak deviation from zero)
    :type A: float
    :param f: ordinary frequency in Hz
    :type f: float
    :param φ: phase in degrees
    :type φ: float
    :return: the sampled signal, indexed by ``t``
    :rtype: pd.Series
    """
ω = 2*np.pi*f
return pd.Series(A*np.sin(ω*t + np.deg2rad(φ)), index=t)
"""
Explanation: Phase difference methods
Sine wave
$y(t) = A\sin(2 \pi f t + \varphi) = A\sin(\omega t + \varphi)$
where:
* $A$ = the amplitude, the peak deviation of the function from zero.
* $f$ = the ordinary frequency, the number of oscillations (cycles) that occur each second of time.
* $\omega$ = $2\pi f$, the angular frequency, the rate of change of the function argument in radians per second.
* $\varphi$ = the phase, which specifies (in radians) where in its cycle the oscillation is at $t = 0$.
  When $\varphi$ is non-zero, the entire waveform appears shifted in time by $\varphi/\omega$ seconds. A negative value represents a delay, and a positive value an advance (https://en.wikipedia.org/wiki/Sine_wave).
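As a quick numerical check of the delay interpretation (the amplitude, frequency, and phase values here are arbitrary):

```python
import numpy as np

A, f = 1.0, 25.0                       # amplitude, ordinary frequency (Hz)
omega = 2 * np.pi * f                  # angular frequency (rad/s)
phi = np.deg2rad(30)                   # phase (rad)
t = np.linspace(0, 0.2, 1000)

y_phase = A * np.sin(omega * t + phi)              # phase-shifted wave
y_delay = A * np.sin(omega * (t + phi / omega))    # same wave, advanced by phi/omega seconds

print(np.allclose(y_phase, y_delay))  # True
```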
End of explanation
"""
# GENERAL INFO
π = np.pi
channels = 2
φs = [3, 95] # phase shift of each channel
f = 25
ns = 680
fs = 680
t = np.linspace(0, ns//fs, ns)
df = pd.DataFrame(np.zeros((ns, channels)), index=t)
for i in range(channels):
φ = φs[i]
title = 'Phase of channel %s: %s (deg)' % (i, φ)
df[i] = sine_signal(t, A=1, f=f, φ=φ)
df[i].plot(title=title, figsize=(12, 2))
plt.show()
"""
Explanation: Method 1
End of explanation
"""
_df = df.iloc[1:, :].copy()
phase_diff = np.arccos(
np.dot(_df[0], _df[1])/(
np.linalg.norm(_df[0])*np.linalg.norm(_df[1])
)
)*180/π
if np.isnan(phase_diff):
phase_diff = 0.0
print(phase_diff)
"""
Explanation: $\varphi = \arccos(\frac{a \cdot b}{\left\|a\right\| * \left\|b\right\|}) \text{rad} $
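A minimal standalone check of this formula on two synthetic channels (the 5 Hz frequency and 90° shift are arbitrary; sampling an integer number of periods keeps the dot product clean):

```python
import numpy as np

t = np.linspace(0, 1, 10_000, endpoint=False)   # exactly 5 full periods below
a = np.sin(2 * np.pi * 5 * t)                   # reference channel
b = np.sin(2 * np.pi * 5 * t + np.pi / 2)       # shifted by 90 degrees

cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
phase_deg = np.degrees(np.arccos(cos_angle))
print(round(phase_deg))  # 90
```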
End of explanation
"""
status_success = 0
status_fail = 0
# print('ID', 'NOMINAL', 'ESTIMATED', 'STATUS', sep='\t')
for k in range(1000):
φs = [
np.random.randint(0, 180, 1)[0],
np.random.randint(0, 180, 1)[0]
] # phase shift of each channel
df = pd.DataFrame(np.zeros((ns, channels)), index=t)
# signal generation
for i in range(channels):
φ = φs[i]
title = 'Phase of channel %s: %s (deg)' % (i, φ)
df[i] = sine_signal(t, A=1, f=f, φ=φ)
    nominal_diff = float(np.abs(φs[0] - φs[1]))
_df = df.iloc[1:, :].copy()
phase_diff = np.round(np.arccos(
np.dot(_df[0], _df[1])/(
np.linalg.norm(_df[0])*np.linalg.norm(_df[1])
)
)*180/π)
if np.isnan(phase_diff):
phase_diff = 0.0
# print(k, nominal_diff, phase_diff, '\t', sep='\t', end='')
try:
np.testing.assert_almost_equal(nominal_diff, phase_diff)
status_success += 1
    except AssertionError:
        status_fail += 1
# print('OK')
print('SUCCESS: ', status_success, sep='\t')
print('FAIL: ', status_fail, sep='\t')
"""
Explanation: Test Method 1
End of explanation
"""
# GENERAL INFO
π = np.pi
channels = 2
φs = [3, 95] # phase shift of each channel
f = 25
ns = 680
fs = 680
t = np.linspace(0, ns//fs, ns)
df = pd.DataFrame(np.zeros((ns, channels)), index=t)
for i in range(channels):
φ = φs[i]
title = 'Phase of channel %s: %s (deg)' % (i, φ)
df[i] = sine_signal(t, A=1, f=f, φ=φ)
df[i].plot(title=title, figsize=(12, 2))
plt.show()
# Drop the final sample so the window spans an integer number of periods.
# The absolute phase of each channel may look odd on its own, but the
# phase difference between the two channels still comes out right.
_df = df.iloc[:-1].copy()
xdft_1 = fft(_df[0])
xdft_2 = fft(_df[1])
phase_shift_1 = np.angle(xdft_1[f])+π/2
phase_shift_2 = np.angle(xdft_2[f])+π/2
phase_shift_1*180/π, phase_shift_2*180/π
"""
Explanation: Method 2
Using FFT
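The idea behind the FFT approach, on a single synthetic channel (the sampling rate, tone frequency, and 40° phase are arbitrary; one full second of signal makes the FFT bin index equal the frequency):

```python
import numpy as np
from numpy.fft import fft

fs = 1000                          # sampling rate (Hz), 1 second of signal
f = 50                             # tone frequency (Hz) -> FFT bin index f
t = np.arange(fs) / fs
phi = np.deg2rad(40)               # phase to recover
x = np.sin(2 * np.pi * f * t + phi)

X = fft(x)
# sin(wt + phi) = cos(wt + phi - pi/2), and the FFT bin of a cosine
# carries its phase, so add pi/2 back to recover phi.
est = np.angle(X[f]) + np.pi / 2
print(round(float(np.degrees(est))))  # 40
```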
End of explanation
"""
status_success = 0
status_fail = 0
# print('ID', 'NOMINAL', 'ESTIMATED', 'STATUS', sep='\t')
for k in range(1000):
φs = [
np.random.randint(0, 180, 1)[0],
np.random.randint(0, 180, 1)[0]
] # phase shift of each channel
df = pd.DataFrame(np.zeros((ns, channels)), index=t)
# signal generation
for i in range(channels):
φ = φs[i]
title = 'Phase of channel %s: %s (deg)' % (i, φ)
df[i] = sine_signal(t, A=1, f=f, φ=φ)
    nominal_diff = float(np.abs(φs[0] - φs[1]))
_df = df.iloc[:-1, :].copy()
xdft_1 = fft(_df[0])
xdft_2 = fft(_df[1])
phase_shift_1 = np.angle(xdft_1[f])+π/2
phase_shift_2 = np.angle(xdft_2[f])+π/2
phase_diff = np.abs(phase_shift_1-phase_shift_2)*180/π
if np.isnan(phase_diff):
phase_diff = 0.0
# print(k, nominal_diff, phase_diff, '\t', sep='\t', end='')
try:
np.testing.assert_almost_equal(nominal_diff, phase_diff)
status_success += 1
    except AssertionError:
        status_fail += 1
# print('OK')
print('SUCCESS: ', status_success, sep='\t')
print('FAIL: ', status_fail, sep='\t')
"""
Explanation: Test Method 2
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/dwd/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/4_clustering_and_retrieval/assigment/week3/.ipynb_checkpoints/module-5-decision-tree-assignment-1-blank-Graphlab-checkpoint.ipynb | mit | import graphlab
graphlab.canvas.set_target('ipynb')
"""
Explanation: Identifying safe loans with decision trees
The LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https://en.wikipedia.org/wiki/Default_(finance)).
In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be charged off and possibly go into default. In this assignment you will:
Use SFrames to do some feature engineering.
Train a decision-tree on the LendingClub dataset.
Visualize the tree.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Train a complex tree model and compare it to the simple tree model.
Let's get started!
Fire up Graphlab Create
Make sure you have the latest version of GraphLab Create. If you don't find the decision tree module, then you would need to upgrade GraphLab Create using
pip install graphlab-create --upgrade
End of explanation
"""
loans = graphlab.SFrame('lending-club-data.gl/')
"""
Explanation: Load LendingClub dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is available here. Make sure you download the dataset before running the following command.
End of explanation
"""
loans.column_names()
"""
Explanation: Exploring some features
Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
End of explanation
"""
loans['grade'].show()
"""
Explanation: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
End of explanation
"""
loans['home_ownership'].show()
"""
Explanation: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called subgrade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here.
Now, let's look at a different feature.
End of explanation
"""
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
"""
Explanation: This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
Exploring the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column, 1 means a risky (bad) loan and 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
"""
loans['safe_loans'].show(view = 'Categorical')
"""
Explanation: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset.
End of explanation
"""
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
            'last_delinq_none',          # has borrower had a delinquency
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
            'total_rec_late_fee',        # total late fees received to date
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
"""
Explanation: You should have:
* Around 81% safe loans
* Around 19% risky loans
It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging.
Features for the classification algorithm
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
End of explanation
"""
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print "Number of safe loans : %s" % len(safe_loans_raw)
print "Number of risky loans : %s" % len(risky_loans_raw)
"""
Explanation: What remains now is a subset of features and the target that we will use for the rest of this notebook.
Sample data to balance classes
As we explored above, our data is disproportionally full of safe loans. Let's create two datasets: one with just the safe loans (safe_loans_raw) and one with just the risky loans (risky_loans_raw).
End of explanation
"""
print "Percentage of safe loans :", (len(safe_loans_raw)/float(len(safe_loans_raw) + len(risky_loans_raw)))
print "Percentage of risky loans :", (len(risky_loans_raw)/float(len(safe_loans_raw) + len(risky_loans_raw)))
"""
Explanation: Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment:
End of explanation
"""
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
"""
Explanation: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results.
End of explanation
"""
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
"""
Explanation: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
End of explanation
"""
train_data, validation_data = loans_data.random_split(.8, seed=1)
"""
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Split data into training and validation sets
We split the data into training and validation sets using an 80/20 split and specifying seed=1 so everyone gets the same results.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters (this is known as model selection). Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that.
End of explanation
"""
decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features)
decision_tree_model.show(view="Tree")
"""
Explanation: Use decision tree to build a classifier
Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use validation_set=None to get the same results as everyone else.
End of explanation
"""
small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 2)
"""
Explanation: Visualizing a learned model
As noted in the documentation, typically the the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with max depth of 2 to gain some intuition by visualizing the learned tree.
End of explanation
"""
small_model.show(view="Tree")
"""
Explanation: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
Note: To better understand this visual:
* The root node is represented using pink.
* Intermediate nodes are in green.
* Leaf nodes in blue and orange.
End of explanation
"""
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
"""
Explanation: Making predictions
Let's consider two positive and two negative examples from the validation set and see what the model predicts. We will do the following:
* Predict whether or not a loan is safe.
* Predict the probability that a loan is safe.
End of explanation
"""
decision_tree_model.predict(sample_validation_data)
(sample_validation_data['safe_loans'] == decision_tree_model.predict(sample_validation_data)).sum()/float(len(sample_validation_data))
"""
Explanation: Explore label predictions
Now, we will use our model to predict whether or not a loan is likely to default. For each row in the sample_validation_data, use the decision_tree_model to predict whether or not the loan is classified as a safe loan.
Hint: Be sure to use the .predict() method.
End of explanation
"""
decision_tree_model.predict(sample_validation_data, output_type='probability')
"""
Explanation: Quiz Question: What percentage of the predictions on sample_validation_data did decision_tree_model get correct?
Explore probability predictions
For each row in the sample_validation_data, what is the probability (according decision_tree_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using decision_tree_model on sample_validation_data:
End of explanation
"""
small_model.predict(sample_validation_data, output_type='probability')
"""
Explanation: Quiz Question: Which loan has the highest probability of being classified as a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
Tricky predictions!
Now, we will explore something pretty interesting. For each row in the sample_validation_data, what is the probability (according to small_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using small_model on sample_validation_data:
End of explanation
"""
sample_validation_data[1]
"""
Explanation: Quiz Question: Notice that the probability predictions are the exact same for the 2nd and 3rd loans. Why would this happen?
Visualize the prediction on a tree
Note that you should be able to look at the small tree, traverse it yourself, and visualize the prediction being made. Consider the following point in the sample_validation_data:
End of explanation
"""
small_model.show(view="Tree")
"""
Explanation: Let's visualize the small tree here to do the traversing for this data point.
End of explanation
"""
small_model.predict(sample_validation_data[1])
"""
Explanation: Note: In the tree visualization above, the values at the leaf nodes are not class predictions but scores (a slightly advanced concept that is out of the scope of this course). You can read more about this here. If the score is $\geq$ 0, the class +1 is predicted. Otherwise, if the score < 0, we predict class -1.
Quiz Question: Based on the visualized tree, what prediction would you make for this data point?
Now, let's verify your prediction by examining the prediction made using GraphLab Create. Use the .predict function on small_model.
End of explanation
"""
print small_model.evaluate(train_data)['accuracy']
print decision_tree_model.evaluate(train_data)['accuracy']
"""
Explanation: Evaluating accuracy of the decision tree model
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
Let us start by evaluating the accuracy of the small_model and decision_tree_model on the training data
End of explanation
"""
print small_model.evaluate(validation_data)['accuracy']
print decision_tree_model.evaluate(validation_data)['accuracy']
"""
Explanation: Checkpoint: You should see that the small_model performs worse than the decision_tree_model on the training data.
Now, let us evaluate the accuracy of the small_model and decision_tree_model on the entire validation_data, not just the subsample considered above.
End of explanation
"""
big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
"""
Explanation: Quiz Question: What is the accuracy of decision_tree_model on the validation set, rounded to the nearest .01?
Evaluating accuracy of a complex decision tree model
Here, we will train a large decision tree with max_depth=10. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want.
End of explanation
"""
print big_model.evaluate(train_data)['accuracy']
print big_model.evaluate(validation_data)['accuracy']
"""
Explanation: Now, let us evaluate big_model on the training set and validation set.
End of explanation
"""
predictions = decision_tree_model.predict(validation_data)
decision_tree_model.show(view='Evaluation')
"""
Explanation: Checkpoint: We should see that big_model has even better performance on the training set than decision_tree_model did on the training set.
Quiz Question: How does the performance of big_model on the validation set compare to decision_tree_model on the validation set? Is this a sign of overfitting?
Quantifying the cost of mistakes
Every mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model.
Assume the following:
False negatives: Loans that were actually safe but were predicted to be risky. This results in an opportunity cost of losing a loan that would have otherwise been accepted.
False positives: Loans that were actually risky but were predicted to be safe. These are much more expensive because it results in a risky loan being given.
Correct predictions: All correct predictions don't typically incur any cost.
Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps:
1. First, let us compute the predictions made by the model.
2. Second, compute the number of false positives.
3. Third, compute the number of false negatives.
4. Finally, compute the cost of mistakes made by the model by adding up the costs of false negatives and false positives.
First, let us make predictions on validation_data using the decision_tree_model:
End of explanation
"""
len(predictions)
false_positives = (validation_data[validation_data['safe_loans'] != predictions]['safe_loans'] == -1).sum()
print false_positives
"""
Explanation: False positives are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives:
End of explanation
"""
false_negatives = (validation_data[validation_data['safe_loans'] != predictions]['safe_loans'] == +1).sum()
print false_negatives
"""
Explanation: False negatives are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives:
End of explanation
"""
cost_of_mistakes = (false_negatives * 10000) + (false_positives * 20000)
print cost_of_mistakes
"""
Explanation: Quiz Question: Let us assume that each mistake costs money:
* Assume a cost of \$10,000 per false negative.
* Assume a cost of \$20,000 per false positive.
What is the total cost of mistakes made by decision_tree_model on validation_data?
End of explanation
"""
|
hanhanwu/Hanhan_Data_Science_Practice | AI_Experiments/digit_recognition_Pytorch.ipynb | mit | %pylab inline
import os
import numpy as np
import pandas as pd
import imageio as io
from sklearn.metrics import accuracy_score
import torch
# Get data from here: https://datahack.analyticsvidhya.com/contest/practice-problem-identify-the-digits/
seed = 10
rng = np.random.RandomState(seed)
train = pd.read_csv('Train_digits/train.csv')
train.head()
# randomly display an image
img_name = rng.choice(train.filename)
training_image_path = 'Train_digits/Images/train/' + img_name
training_img = io.imread(training_image_path, as_gray=True)
pylab.imshow(training_img, cmap='gray')
pylab.axis('off')
pylab.show()
# This is just 1 image
print(training_img.shape)
training_img[0] # each image has 28x28 pixel square, 784 pixels in total
# store all images as numpy arrays, to make data manipulation easier
temp = []
for img_name in train.filename:
training_image_path = 'Train_digits/Images/train/' + img_name
training_img = io.imread(training_image_path, as_gray=True) # !!! as_gray param makes a difference here!!
img = training_img.astype('float32')
temp.append(img)
train_x = np.stack(temp)
train_x /= 255.0
train_x = train_x.reshape(-1, 784).astype('float32') # 784 pixels per image
train_y = train.label.values
"""
Explanation: Try Pytorch with Digit Recognition
Reference: https://www.analyticsvidhya.com/blog/2018/02/pytorch-tutorial/
<b>Some functions are wrong or out of date, which will cause a lot of trouble. Better to try my code below</b>.
However, since these deep learning open-source frameworks keep updating and deprecating functions, you never know whether an older tutorial will still work by the time you try it. This is one of the reasons I hate deep learning.
To compare with Keras code in Digit Recognition: https://github.com/hanhanwu/Hanhan_Data_Science_Practice/blob/master/AI_Experiments/digital_recognition_Keras.ipynb
End of explanation
"""
print(train_x.shape)
train_x
print(train_y.shape) # 49000 images in total
train_y
# create validation set
split_size = int(train_x.shape[0]*0.7)
train_x, val_x = train_x[:split_size], train_x[split_size:]
train_y, val_y = train_y[:split_size], train_y[split_size:]
print(train_x.shape, train_y.shape)
print(train_x)
print(train_y)
# Using Pytorch to build the model
from torch.autograd import Variable
## number of neurons in each layer
input_num_units = 28*28 # 784 pixels per image
hidden_num_units = 500
output_num_units = 10 # 0 - 9, 10 digits
## set variables used in NN
epochs = 5
batch_size = 128
learning_rate = 0.001
# define model
model = torch.nn.Sequential(
torch.nn.Linear(input_num_units, hidden_num_units),
torch.nn.ReLU(),
torch.nn.Linear(hidden_num_units, output_num_units),
)
loss_fn = torch.nn.CrossEntropyLoss()
# define optimization algorithm
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# preprocess a batch of dataset
def preproc(unclean_batch_x):
"""Convert values to range 0-1"""
temp_batch = unclean_batch_x / unclean_batch_x.max()
return temp_batch
# create a batch
def batch_creator(batch_size):
dataset_name = 'train'
dataset_length = eval(dataset_name+'_x').shape[0]
batch_mask = rng.choice(dataset_length, batch_size)
batch_x = eval(dataset_name+'_x')[batch_mask]
batch_x = preproc(batch_x)
batch_y = eval(dataset_name+'_y')[batch_mask] # train_x, train_y has the same length
return batch_x, batch_y
# train network
total_batch = int(train_x.shape[0]/batch_size) # batches per epoch over the training split
for epoch in range(epochs):
avg_cost = 0
for i in range(total_batch):
# create batch
batch_x, batch_y = batch_creator(batch_size)
# pass that batch for training
x, y = Variable(torch.from_numpy(batch_x)), Variable(torch.from_numpy(batch_y), requires_grad=False)
pred = model(x)
# get loss
loss = loss_fn(pred, y)
        # perform backpropagation
        optimizer.zero_grad() # clear gradients accumulated from the previous batch
        loss.backward()
        optimizer.step()
avg_cost += loss.data/total_batch
print(epoch, avg_cost)
# get training accuracy
from sklearn.metrics import accuracy_score
x, y = Variable(torch.from_numpy(preproc(train_x))), Variable(torch.from_numpy(train_y), requires_grad=False)
pred = model(x)
final_pred = np.argmax(pred.data.numpy(), axis=1)
accuracy_score(train_y, final_pred)
# get validation accuracy
x, y = Variable(torch.from_numpy(preproc(val_x))), Variable(torch.from_numpy(val_y), requires_grad=False)
pred = model(x)
final_pred = np.argmax(pred.data.numpy(), axis=1)
accuracy_score(val_y, final_pred)
"""
Explanation: Passing the as_gray parameter to io.imread() ensures that train_x and train_y end up with the same length. Otherwise there will be a lot of trouble when creating batches later.
End of explanation
"""
|
dusenberrymw/incubator-systemml | samples/jupyter-notebooks/DML Tips and Tricks (aka Fun With DML).ipynb | apache-2.0 | from systemml import MLContext, dml, jvm_stdout
ml = MLContext(sc)
print (ml.buildTime())
"""
Explanation: Replace NaN with mode
Use sample builtin function to create sample from matrix
Count of Matching Values in two Matrices/Vectors
Cross Validation
Value-based join of two Matrices
Filter Matrix to include only Frequent Column Values
Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets
Find and remove duplicates in columns or rows
Set based Indexing
Group by Aggregate using Linear Algebra
Cumulative Summation with Decay Multiplier
Invert Lower Triangular Matrix
End of explanation
"""
prog="""
# Function for NaN-aware replacement with mode
replaceNaNwithMode = function (matrix[double] X, integer colId)
return (matrix[double] X)
{
Xi = replace (target=X[,colId], pattern=0/0, replacement=max(X[,colId])+1) # replace NaN with largest value + 1
agg = aggregate (target=Xi, groups=Xi, fn="count") # count each distinct value
mode = as.scalar (rowIndexMax(t(agg[1:nrow(agg)-1, ]))) # mode is max frequent value except last value
X[,colId] = replace (target=Xi, pattern=max(Xi), replacement=mode) # fill in mode
}
X = matrix('1 NaN 1 NaN 1 2 2 1 1 2', rows = 5, cols = 2)
Y = replaceNaNwithMode (X, 2)
print ("Before: \n" + toString(X))
print ("After: \n" + toString(Y))
"""
with jvm_stdout(True):
ml.execute(dml(prog))
"""
Explanation: Replace NaN with mode<a id="NaN2Mode" />
This function replaces the NaN values in a column with the mode of that column.
End of explanation
"""
prog="""
X = matrix ('2 1 8 3 5 6 7 9 4 4', rows = 5, cols = 2 )
nbrSamples = 2
sv = order (target = sample (nrow (X), nbrSamples, FALSE)) # samples w/o replacement, and order
P = table (seq (1, nbrSamples), sv, nbrSamples, nrow(X)) # permutation matrix
samples = P %*% X; # apply P to perform selection
print ("X: \n" + toString(X))
print ("sv: \n" + toString(sv))
print ("samples: \n" + toString(samples))
"""
with jvm_stdout(True):
ml.execute(dml(prog))
"""
Explanation: Use sample builtin function to create sample from matrix<a id="sample" />
Use the sample() function, create a permutation matrix using table(), and pull the sample from X.
End of explanation
"""
prog="""
X = matrix('8 4 5 4 9 10', rows = 6, cols = 1)
Y = matrix('4 9 5 1 9 7 ', rows = 6, cols = 1)
matches = sum (X == Y)
print ("t(X): " + toString(t(X)))
print ("t(Y): " + toString(t(Y)))
print ("Number of Matches: " + matches + "\n")
"""
with jvm_stdout(True):
ml.execute(dml(prog))
"""
Explanation: Count of Matching Values in two Matrices/Vectors<a id="MatchingRows" />
Given two matrices/vectors X and Y, get a count of the rows where X and Y have the same value.
End of explanation
"""
prog = """
holdOut = 1/3
kFolds = 1/holdOut
nRows = 6; nCols = 3;
X = matrix(seq(1, nRows * nCols), rows = nRows, cols = nCols) # X data
y = matrix(seq(1, nRows), rows = nRows, cols = 1) # y label data
Xy = cbind (X,y) # Xy Data for CV
sv = rand (rows = nRows, cols = 1, min = 0.0, max = 1.0, pdf = "uniform") # sv selection vector for fold creation
sv = (order(target=sv, by=1, index.return=TRUE)) %% kFolds + 1 # with numbers between 1 .. kFolds
stats = matrix(0, rows=kFolds, cols=1) # stats per kFolds model on test data
parfor (i in 1:kFolds)
{
# Skip empty training data or test data.
if ( sum (sv == i) > 0 & sum (sv == i) < nrow(X) )
{
Xyi = removeEmpty(target = Xy, margin = "rows", select = (sv == i)) # Xyi fold, i.e. 1/k of rows (test data)
Xyni = removeEmpty(target = Xy, margin = "rows", select = (sv != i)) # Xyni data, i.e. (k-1)/k of rows (train data)
    # Skip extreme label imbalance
distinctLabels = aggregate( target = Xyni[,1], groups = Xyni[,1], fn = "count")
if ( nrow(distinctLabels) > 1)
{
wi = trainAlg (Xyni[ ,1:ncol(Xy)-1], Xyni[ ,ncol(Xy)]) # wi Model for i-th training data
pi = testAlg (Xyi [ ,1:ncol(Xy)-1], wi) # pi Prediction for i-th test data
ei = evalPrediction (pi, Xyi[ ,ncol(Xy)]) # stats[i,] evaluation of prediction of i-th fold
stats[i,] = ei
print ( "Test data Xyi" + i + "\n" + toString(Xyi)
+ "\nTrain data Xyni" + i + "\n" + toString(Xyni)
+ "\nw" + i + "\n" + toString(wi)
+ "\nstats" + i + "\n" + toString(stats[i,])
+ "\n")
}
else
{
print ("Training data for fold " + i + " has only " + nrow(distinctLabels) + " distinct labels. Needs to be > 1.")
}
}
else
{
print ("Training data or test data for fold " + i + " is empty. Fold not validated.")
}
}
print ("SV selection vector:\n" + toString(sv))
trainAlg = function (matrix[double] X, matrix[double] y)
return (matrix[double] w)
{
w = t(X) %*% y
}
testAlg = function (matrix[double] X, matrix[double] w)
return (matrix[double] p)
{
p = X %*% w
}
evalPrediction = function (matrix[double] p, matrix[double] y)
return (matrix[double] e)
{
e = as.matrix(sum (p - y))
}
"""
with jvm_stdout(True):
ml.execute(dml(prog))
"""
Explanation: Cross Validation<a id="CrossValidation" />
Perform k-fold cross validation by running fold creation, the training algorithm, the test algorithm, and the evaluation in parallel.
End of explanation
"""
prog = """
M1 = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
M2 = matrix ('1 1 2 8 3 3 4 3 5 1', rows = 5, cols = 2)
I = rowSums (outer (M1[,2], t(M2[,2]), "==")) # I : indicator matrix for M1
M12 = removeEmpty (target = M1, margin = "rows", select = I) # apply filter to retrieve join result
print ("M1 \n" + toString(M1))
print ("M2 \n" + toString(M2))
print ("M1[,2] joined with M2[,2], and return matching M1 rows\n" + toString(M12))
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: Value-based join of two Matrices<a id="JoinMatrices"/>
Given matrix M1 and M2, join M1 on column 2 with M2 on column 2, and return matching rows of M1.
End of explanation
"""
prog = """
MinFreq = 3 # minimum frequency of tokens
M = matrix ('1 1 2 3 3 3 4 4 5 3 6 4 7 1 8 2 9 1', rows = 9, cols = 2)
gM = aggregate (target = M[,2], groups = M[,2], fn = "count") # gM: group by and count (grouped matrix)
gv = cbind (seq(1,nrow(gM)), gM) # gv: add group values to counts (group values)
fg = removeEmpty (target = gv * (gv[,2] >= MinFreq), margin = "rows") # fg: filtered groups
I = rowSums (outer (M[,2] ,t(fg[,1]), "==")) # I : indicator of size M with filtered groups
fM = removeEmpty (target = M, margin = "rows", select = I) # FM: filter matrix
print (toString(M))
print (toString(fM))
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: Filter Matrix to include only Frequent Column Values <a id="FilterMatrix"/>
Given a matrix, filter the matrix to only include rows with column values that appear more often than MinFreq.
End of explanation
"""
prog = """
I = matrix ("1 3 3 4 5", rows = 5, cols = 1)
J = matrix ("2 3 4 1 6", rows = 5, cols = 1)
V = matrix ("10 20 30 40 50", rows = 5, cols = 1)
M = table (I, J, V)
print (toString (M))
"""
ml.execute(dml(prog).output('M')).get('M').toNumPy()
"""
Explanation: Construct (sparse) Matrix from (rowIndex, colIndex, values) triplets<a id="Construct_sparse_Matrix"></a>
Given rowIndex, colIndex, and values as column vectors, construct (sparse) matrix.
End of explanation
"""
prog = """
X = matrix ("1 2 3 3 3 4 5 10", rows = 8, cols = 1)
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),])); # compare current with next value
res = removeEmpty (target = X, margin = "rows", select = I); # select where different
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
"""
Explanation: Find and remove duplicates in columns or rows<a id="Find_and_remove_duplicates"></a>
Assuming values are sorted.
End of explanation
"""
prog = """
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
I = aggregate (target = X, groups = X[,1], fn = "count") # group and count duplicates
res = removeEmpty (target = seq (1, max (X[,1])), margin = "rows", select = (I != 0)); # select groups
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
"""
Explanation: No assumptions on values.
End of explanation
"""
prog = """
X = matrix ("3 2 1 3 3 4 5 10", rows = 8, cols = 1)
X = order (target = X, by = 1) # order values
I = rbind (matrix (1,1,1), (X[1:nrow (X)-1,] != X[2:nrow (X),]));
res = removeEmpty (target = X, margin = "rows", select = I);
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
"""
Explanation: Order the values and then remove duplicates.
End of explanation
"""
prog = """
X = matrix (1, rows = 1, cols = 100)
J = matrix ("10 20 25 26 28 31 50 67 79", rows = 1, cols = 9)
res = X + table (matrix (1, rows = 1, cols = ncol (J)), J, 10)
print (toString (res))
"""
ml.execute(dml(prog).output('res')).get('res').toNumPy()
"""
Explanation: Set based Indexing<a id="Set_based_Indexing"></a>
Given a matrix X and an indicator matrix J with indices into X,
use J to perform an operation on X, e.g. add the value 10 to the cells in X indicated by J.
End of explanation
"""
prog = """
C = matrix ('50 40 20 10 30 20 40 20 30', rows = 9, cols = 1) # category data
V = matrix ('20 11 49 33 94 29 48 74 57', rows = 9, cols = 1) # value data
PCV = cbind (cbind (seq (1, nrow (C), 1), C), V); # PCV representation
PCV = order (target = PCV, by = 3, decreasing = TRUE, index.return = FALSE);
PCV = order (target = PCV, by = 2, decreasing = FALSE, index.return = FALSE);
# Find all rows of PCV where the category has a new value, in comparison to the previous row
is_new_C = matrix (1, rows = 1, cols = 1);
if (nrow (C) > 1) {
is_new_C = rbind (is_new_C, (PCV [1:nrow(C) - 1, 2] < PCV [2:nrow(C), 2]));
}
# Associate each category with its index
index_C = cumsum (is_new_C); # cumsum
# For each category, compute:
# - the list of distinct categories
# - the maximum value for each category
# - 0-1 aggregation matrix that adds records of the same category
distinct_C = removeEmpty (target = PCV [, 2], margin = "rows", select = is_new_C);
max_V_per_C = removeEmpty (target = PCV [, 3], margin = "rows", select = is_new_C);
C_indicator = table (index_C, PCV [, 1], max (index_C), nrow (C)); # table
sum_V_per_C = C_indicator %*% V
"""
res = ml.execute(dml(prog).output('PCV','distinct_C', 'max_V_per_C', 'C_indicator', 'sum_V_per_C'))
print (res.get('PCV').toNumPy())
print (res.get('distinct_C').toNumPy())
print (res.get('max_V_per_C').toNumPy())
print (res.get('C_indicator').toNumPy())
print (res.get('sum_V_per_C').toNumPy())
"""
Explanation: Group by Aggregate using Linear Algebra<a id="Multi_column_Sorting"></a>
Given a matrix PCV as (Position, Category, Value), sort PCV by category, and within each category by value in descending order. Create indicator vector for category changes, create distinct categories, and perform linear algebra operations.
End of explanation
"""
cumsum_prod_def = """
cumsum_prod = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
# Computes the following recurrence in log-number of steps:
# Y [1, ] = X [1, ] + C [1, ] * start;
# Y [i+1, ] = X [i+1, ] + C [i+1, ] * Y [i, ]
{
Y = X; P = C; m = nrow(X); k = 1;
Y [1,] = Y [1,] + C [1,] * start;
while (k < m) {
Y [k + 1:m,] = Y [k + 1:m,] + Y [1:m - k,] * P [k + 1:m,];
P [k + 1:m,] = P [1:m - k,] * P [k + 1:m,];
k = 2 * k;
}
}
"""
"""
Explanation: Cumulative Summation with Decay Multiplier<a id="CumSum_Product"></a>
Given matrix X, compute:
Y[i] = X[i]
+ X[i-1] * C[i]
+ X[i-2] * C[i] * C[i-1]
+ X[i-3] * C[i] * C[i-1] * C[i-2]
+ ...
End of explanation
"""
prog = cumsum_prod_def + """
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
#Zeros in C cause "breaks" that restart the cumulative summation from 0
C = matrix ("0 1 1 0 1 1 1 0 1", rows = 9, cols = 1);
Y = cumsum_prod (X, C, 0);
print (toString(Y))
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: In this example we use cumsum_prod for cumulative summation with "breaks", that is, multiple cumulative summations in one.
End of explanation
"""
prog = cumsum_prod_def + """
X = matrix ("1 2 3 4 5 6 7 8 9", rows = 9, cols = 1);
# Ones in S represent selected rows to be copied, zeros represent non-selected rows
S = matrix ("1 0 0 1 0 0 0 1 0", rows = 9, cols = 1);
Y = cumsum_prod (X * S, 1 - S, 0);
print (toString(Y))
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: In this example, we copy selected rows downward to all consecutive non-selected rows.
End of explanation
"""
cumsum_prod_naive_def = """
cumsum_prod_naive = function (Matrix[double] X, Matrix[double] C, double start)
return (Matrix[double] Y)
{
Y = matrix (0, rows = nrow(X), cols = ncol(X));
Y [1,] = X [1,] + C [1,] * start;
for (i in 2:nrow(X))
{
Y [i,] = X [i,] + C [i,] * Y [i - 1,]
}
}
"""
"""
Explanation: This is a naive implementation of cumulative summation with decay multiplier.
End of explanation
"""
prog = cumsum_prod_def + cumsum_prod_naive_def + """
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y1 = cumsum_prod_naive (X, C, 0.123);
"""
with jvm_stdout():
ml.execute(dml(prog))
prog = cumsum_prod_def + cumsum_prod_naive_def + """
X = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
C = rand (rows = 20000, cols = 10, min = 0, max = 1, pdf = "uniform", sparsity = 1.0);
Y2 = cumsum_prod (X, C, 0.123);
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: There is a significant performance difference between the <b>naive</b> implementation and the <b>tricky</b> implementation.
End of explanation
"""
invert_lower_triangular_def = """
invert_lower_triangular = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = matrix (0, rows = n, cols = n);
LO = LO + diag (1 / diag (LI));
k = 1;
while (k < n)
{
LPF = matrix (0, rows = n, cols = n);
parfor (p in 0:((n - 1) / (2 * k)), check = 0)
{
i = 2 * k * p;
j = i + k;
q = min (n, j + k);
if (j + 1 <= q) {
L1 = LO [i + 1:j, i + 1:j];
L2 = LI [j + 1:q, i + 1:j];
L3 = LO [j + 1:q, j + 1:q];
LPF [j + 1:q, i + 1:j] = -L3 %*% L2 %*% L1;
}
}
LO = LO + LPF;
k = 2 * k;
}
}
"""
prog = invert_lower_triangular_def + """
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: Invert Lower Triangular Matrix<a id="Invert_Lower_Triangular_Matrix"></a>
In this example, we invert a lower triangular matrix using the following divide-and-conquer approach. Given a lower triangular matrix L, we compute its inverse X, which is also lower triangular, by splitting both matrices in the middle into 4 blocks (in a 2x2 fashion) and multiplying them together to get the identity matrix:
\begin{equation}
L \text{ \%*\% } X = \left(\begin{matrix} L_1 & 0 \\ L_2 & L_3 \end{matrix}\right)
\text{ \%*\% } \left(\begin{matrix} X_1 & 0 \\ X_2 & X_3 \end{matrix}\right)
= \left(\begin{matrix} L_1 X_1 & 0 \\ L_2 X_1 + L_3 X_2 & L_3 X_3 \end{matrix}\right)
= \left(\begin{matrix} I & 0 \\ 0 & I \end{matrix}\right)
\nonumber
\end{equation}
If we multiply blockwise, we get three equations:
$$
\begin{equation}
L_1 \text{ \%*\% } X_1 = I \\
L_3 \text{ \%*\% } X_3 = I \\
L_2 \text{ \%*\% } X_1 + L_3 \text{ \%*\% } X_2 = 0
\end{equation}
$$
Solving these equations gives the following formulas for X:
$$
\begin{equation}
X_1 = \mathrm{inv}(L_1) \\
X_3 = \mathrm{inv}(L_3) \\
X_2 = - X_3 \text{ \%*\% } L_2 \text{ \%*\% } X_1
\end{equation}
$$
Once we have recursively inverted L1 and L3, we can compute X2. This suggests an algorithm that starts at the diagonal and iterates away from the diagonal, involving bigger and bigger blocks (of size 1, 2, 4, 8, etc.). There is a logarithmic number of steps, and inside each step, the inversions can be performed in parallel using a parfor-loop.
Function "invert_lower_triangular" occurs within more general inverse operations and matrix decompositions. The divide-and-conquer idea allows to derive more efficient algorithms for other matrix decompositions.
End of explanation
"""
invert_lower_triangular_naive_def = """
invert_lower_triangular_naive = function (Matrix[double] LI)
return (Matrix[double] LO)
{
n = nrow (LI);
LO = diag (matrix (1, rows = n, cols = 1));
for (i in 1:n - 1)
{
LO [i,] = LO [i,] / LI [i, i];
LO [i + 1:n,] = LO [i + 1:n,] - LI [i + 1:n, i] %*% LO [i,];
}
LO [n,] = LO [n,] / LI [n, n];
}
"""
"""
Explanation: This is a naive implementation of inverting a lower triangular matrix.
End of explanation
"""
prog = invert_lower_triangular_naive_def + """
n = 1000;
A = rand (rows = n, cols = n, min = -1, max = 1, pdf = "uniform", sparsity = 1.0);
Mask = cumsum (diag (matrix (1, rows = n, cols = 1)));
L = (A %*% t(A)) * Mask; # Generate L for stability of the inverse
X = invert_lower_triangular_naive (L);
print ("Maximum difference between X %*% L and Identity = " + max (abs (X %*% L - diag (matrix (1, rows = n, cols = 1)))));
"""
with jvm_stdout():
ml.execute(dml(prog))
"""
Explanation: The naive implementation is significantly slower than the divide-and-conquer implementation.
End of explanation
"""
|
schatzlab/biomedicalresearch | lectures/05.BinomialExponential/BinomialDistribution.ipynb | mit | import random
results = []
for trial in xrange(10000):
heads = 0
for i in xrange(100):
flip = random.randint(0,1)
if (flip == 0):
heads += 1
results.append(heads)
print results[1:10]
import matplotlib.pyplot as plt
plt.figure()
plt.hist(results)
plt.show()
## Plot the histogram using integer values by creating more bins
plt.figure()
plt.hist(results, bins=range(100))
plt.title("Using integer values")
plt.show()
## Plot the density function, notice bars sum to exactly 1
## Also make the plot bigger
plt.figure(figsize=(15,6))
plt.hist(results, bins=range(100), normed=True)
plt.title("coin flip densities")
plt.show()
"""
Explanation: Adventures in coin flipping
AKA Introduction to the Binomial Distribution
End of explanation
"""
flips_mean = float(sum(results)) / len(results)
print flips_mean
## the numpy package has lots of useful routines: http://www.numpy.org/
import numpy as np
mean = np.mean(results)
print mean
## we could code standard deviation by hand, but numpy makes it easier
stdev=np.std(results)
print stdev
## Overlay a normal distribution on top of the coin flip data
plt.figure(figsize=(15,6))
count, bins, patches = plt.hist(results, bins=range(100), normed=True, label="coin flip histogram")
plt.plot(bins, 1/(stdev * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mean)**2 / (2 * stdev**2) ),
linewidth=3, color='red', label="normal distribution")
plt.title("Coin flip densities with normal distribution overlay")
plt.legend()
plt.show()
"""
Explanation: The binomial distribution is closely related to the normal distribution (aka Gaussian distribution)
The probability density for the Gaussian distribution is
$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$
which is exactly what the overlay code computes from the sample mean and standard deviation.
End of explanation
"""
prob_heads = .5
num_flips = 100
num_heads = 25
prob_flips = np.math.factorial(num_flips) / \
(np.math.factorial(num_heads) * np.math.factorial(num_flips-num_heads)) * \
(prob_heads**num_heads) * ((1-prob_heads)**(num_flips-num_heads))
print "The probability of seeing %d heads in %d flips is %.015f" % (num_heads, num_flips, prob_flips)
## Another super useful package is scipy
import scipy.stats
sp_prob = scipy.stats.binom.pmf(num_heads, num_flips, prob_heads)
print "scipy computed it as %0.15f" % sp_prob
## normal approximatation
print scipy.stats.norm(50, 5).pdf(25)
## Overlay a normal distribution on top of the coin flip data
plt.figure(figsize=(15,6))
count, bins, patches = plt.hist(results, bins=range(100), normed=True, label="coin flip histogram")
plt.plot(bins, scipy.stats.binom.pmf(bins, num_flips, prob_heads),linewidth=3, color='red', label="binomial distribution")
plt.plot(bins, scipy.stats.norm(50,5).pdf(bins),linewidth=3, color='green', linestyle='--', label="normal distribution")
plt.title("Coin flip densities with normal distribution overlay")
plt.legend()
plt.show()
"""
Explanation: Could we figure this out analytically?
General Form
$$
p(\text{k heads in n flips}) = (\text{prob. of this many heads}) * \\
(\text{prob. of this many tails}) * \\
(\text{how many possible orderings?})
$$
Specifics
$$
\begin{array}{c}
p(\text{k heads in n flips}) \leftarrow {n \choose k} * p^{k} * (1-p)^{(n-k)} \\
p \leftarrow \text{probability of heads in a single flip} \\
p^k \leftarrow \text{total probability of k heads} \\
1-p \leftarrow \text{probabilty of one tails} \\
n-k \leftarrow \text{number of tails} \\
(1-p)^{(n-k)} \leftarrow \text{ probability of all the tails} \\
{n \choose k} \leftarrow \text{ all the possible orderings of k heads in n flips} \\
\end{array}
$$
Reminder:
$$
{n \choose k} = \frac{n!}{k!(n-k)!}
$$
End of explanation
"""
expected_mean = num_flips * prob_heads
expected_stdev = np.math.sqrt(num_flips * prob_heads * (1 - prob_heads))
print "In %d flips, with a probability %.02f" % (num_flips, prob_heads)
print "The expected frequency is %.02f +/- %.02f" % (expected_mean, expected_stdev)
print "The observed frequency was %0.2f +/- %0.2f" % (mean, stdev)
"""
Explanation: How can we use the mean and standard deviation to estimate the probability?
FACT: The mean of the binomial distribution is
$$
mean = n * p
$$
FACT: The standard deviation of the binomial distribution is
$$
stdev = \sqrt{np(1-p)}
$$
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/78ad76ea5b03c29b4b851b8b64f74b68/linear_model_patterns.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Romain Trachel <trachelr@gmail.com>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD-3-Clause
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
sample_path = data_path + '/MEG/sample'
"""
Explanation: Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable :footcite:HaufeEtAl2014 than the classifier filters (weight
vectors). The patterns explain how the MEG and EEG data were generated from
the discriminant neural sources which are extracted by the filters.
Note that patterns/filters in MEG data are more similar to each other than in EEG data
because the noise is less spatially correlated in MEG than in EEG.
End of explanation
"""
raw_fname = sample_path + '/sample_audvis_filt-0-40_raw.fif'
event_fname = sample_path + '/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25, fir_design='firwin')
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=2, baseline=None, preload=True)
del raw
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
"""
Explanation: Set parameters
End of explanation
"""
clf = LogisticRegression(solver='lbfgs')
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name, time_unit='s')
"""
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
"""
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel(
LogisticRegression(solver='lbfgs'))) # 3) fits a logistic regression
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')
"""
Explanation: Let's do the same on EEG data using a scikit-learn pipeline
End of explanation
"""
|
nilmtk/nilmtk | docs/manual/user_guide/nilmtk_api_tutorial.ipynb | apache-2.0 | from nilmtk.api import API
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: NILMTK Rapid Experimentation API
This notebook demonstrates the use of NILMTK's ExperimentAPI - a new NILMTK interface which allows NILMTK users to focus on which experiments to run rather than on the code required to run such experiments.
It is important to note that handing over so much flexibility to the user does require the user to be somewhat familiar with the data set, but this part of the process is supported by NILMTK as data exploration is simple and well documented.
Let us start with a very simple experiment to demonstrate the use of the API for multiple appliances in a minimal use case. This experiment shows how the user can select the appliances in the dataset on which disaggregation is to be performed.
Importing the API.
End of explanation
"""
from nilmtk.disaggregate import CO
"""
Explanation: Next, we import the required algorithms on which we wish to run the experiments
End of explanation
"""
experiment1 = {
'power': {'mains': ['apparent','active'],'appliance': ['apparent','active']},
'sample_rate': 60,
'appliances': ['fridge','air conditioner', 'microwave'],
'methods': {"CO":CO({})},
'train': {
'datasets': {
'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
10: {
'start_time': '2015-04-04',
'end_time': '2015-04-06'
}
}
}
}
},
'test': {
'datasets': {
'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
10: {
'start_time': '2015-04-25',
'end_time': '2015-04-26'
}
}
}
},
'metrics':['rmse']
}
}
"""
Explanation: Next, we enter the values for the different parameters in the dictionary. Since we need multiple appliances, we enter the names of all the required appliances in the 'appliances' parameter.
End of explanation
"""
api_results_experiment_1 = API(experiment1)
"""
Explanation: In this example experimental setup, we set the sample rate to one reading every 60 seconds and use Combinatorial Optimisation to
disaggregate the required appliances from building 10 in the Dataport dataset, with the RMSE metric to measure the accuracy. We also specify the date ranges for training and testing.
Next we provide this experiment dictionary as input to the API.
End of explanation
"""
errors_keys = api_results_experiment_1.errors_keys
errors = api_results_experiment_1.errors
for i in range(len(errors)):
print (errors_keys[i])
print (errors[i])
print ("\n\n")
"""
Explanation: We can observe the prediction vs. truth graphs in the above cell. The accuracy metrics can be accessed using the following commands:
End of explanation
"""
from nilmtk.disaggregate import FHMM_EXACT, Mean
experiment2 = {
'power': {'mains': ['apparent','active'],'appliance': ['apparent','active']},
'sample_rate': 60,
'appliances': ['fridge','air conditioner', 'microwave'],
'methods': {"Mean":Mean({}),"FHMM_EXACT":FHMM_EXACT({'num_of_states':2}), "CombinatorialOptimisation":CO({})},
'train': {
'datasets': {
'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
10: {
'start_time': '2015-04-04',
'end_time': '2015-04-06'
}
}
}
}
},
'test': {
'datasets': {
        'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
10: {
'start_time': '2015-04-25',
'end_time': '2015-04-26'
}
}
}
},
'metrics':['mae', 'rmse']
}
}
api_results_experiment_2 = API(experiment2)
api_results_experiment_2.errors
errors_keys = api_results_experiment_2.errors_keys
errors = api_results_experiment_2.errors
for i in range(len(errors)):
print (errors_keys[i])
print (errors[i])
print ("\n\n")
"""
Explanation: This was a trivial experiment that only scratches the surface of the true potential of this API.
In the next experiment we will run an incrementally more complex version of the above experiment. Here we will use multiple models to disaggregate the appliance readings; each model has its own set of parameters, which the user can set within the experiment dictionary in order to fine-tune experiments.
We also import the required algorithms for the next experiments
End of explanation
"""
from nilmtk.disaggregate import Hart85
experiment3 = {
'power': {'mains': ['apparent','active'],'appliance': ['apparent','active']},
'sample_rate': 60,
'appliances': ['fridge','air conditioner','electric furnace','washing machine'],
'artificial_aggregate': True,
'chunksize': 20000,
'DROP_ALL_NANS': False,
'methods': {"Mean":Mean({}),"Hart85":Hart85({}), "FHMM_EXACT":FHMM_EXACT({'num_of_states':2}), "CO":CO({})},
'train': {
'datasets': {
'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
54: {
'start_time': '2015-01-28',
'end_time': '2015-02-12'
},
56: {
'start_time': '2015-01-28',
'end_time': '2015-02-12'
},
57: {
'start_time': '2015-04-30',
'end_time': '2015-05-14'
},
}
}
}
},
'test': {
'datasets': {
'Dataport': {
'path': 'data/dataport.hdf5',
'buildings': {
94: {
'start_time': '2015-04-30',
'end_time': '2015-05-07'
},
103: {
'start_time': '2014-01-26',
'end_time': '2014-02-03'
},
113: {
'start_time': '2015-04-30',
'end_time': '2015-05-07'
},
}
}
},
'metrics':['mae', 'rmse']
}
}
api_results_experiment_3 = API(experiment3)
errors_keys = api_results_experiment_3.errors_keys
errors = api_results_experiment_3.errors
for i in range(len(errors)):
print (errors_keys[i])
print (errors[i])
print ("\n\n")
"""
Explanation: The API makes running experiments extremely quick and efficient, with an emphasis on creating finely tuned, reproducible experiments in which model and parameter performance can be evaluated at a glance.
In the next iteration of this experiment, we introduce three more parameters (chunksize, DROP_ALL_NANS, and artificial_aggregate) and add another disaggregation algorithm (Hart85). We also train and test on data from multiple buildings of the same dataset.
We also import the Hart algorithm for the next experiment
End of explanation
"""
experiment4 = {
'power': {'mains': ['apparent','active'],'appliance': ['apparent','active']},
'sample_rate': 60,
'appliances': ['washing machine','fridge'],
'artificial_aggregate': True,
'chunksize': 20000,
'DROP_ALL_NANS': False,
'methods': {"Mean":Mean({}),"Hart85":Hart85({}), "FHMM_EXACT":FHMM_EXACT({'num_of_states':2}), 'CO':CO({})},
'train': {
'datasets': {
'UKDALE': {
'path': 'C:/Users/Hp/Desktop/nilmtk-contrib/ukdale.h5',
'buildings': {
1: {
'start_time': '2017-01-05',
'end_time': '2017-03-05'
},
}
},
}
},
'test': {
'datasets': {
'DRED': {
'path': 'C:/Users/Hp/Desktop/nilmtk-contrib/dred.h5',
'buildings': {
1: {
'start_time': '2015-09-21',
'end_time': '2015-10-01'
}
}
},
'REDD': {
'path': 'C:/Users/Hp/Desktop/nilmtk-contrib/redd.h5',
'buildings': {
1: {
'start_time': '2011-04-17',
'end_time': '2011-04-27'
}
}
}
},
'metrics':['mae', 'rmse']
}
}
api_results_experiment_4 = API(experiment4)
errors_keys = api_results_experiment_4.errors_keys
errors = api_results_experiment_4.errors
for i in range(len(errors)):
print (errors_keys[i])
print (errors[i])
print ("\n\n")
"""
Explanation: The results of the above experiment are presented for every chunk per building in the test set.
The following experiment demonstrates how to run experiments across datasets, which was previously not possible. Note that datasets can be trained and tested together only when they share common appliances in homes with common ac_types.
End of explanation
"""
|
jonasluz/mia-cg | Exercises/Exercícios#2.ipynb | unlicense | # Algebraic demonstration, no code needed.
# Drawing the parabola.
#
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid.axislines import SubplotZero
def prep_axis():
"""
Prepare the plot axes.
"""
fig = plt.figure(1)
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
for direction in ["xzero", "yzero"]:
ax.axis[direction].set_axisline_style("-|>")
ax.axis[direction].set_visible(True)
for direction in ["left", "right", "bottom", "top"]:
ax.axis[direction].set_visible(False)
return ax
ax = prep_axis()
# Domain of the function (in a and x).
a = np.linspace(-4, 4, 9)
x = np.linspace(-1, 1, 100)
# Draw the plot
for ai in [int(af) for af in a]:
# plot:
y = np.sqrt(4 * ai * x)
ax.plot(x, y)
# annotate the value of a:
notex = -1 if ai < 0 else 1
notey = np.abs(ai)
if int(ai) != -10:
dist = 15 * ai
ax.annotate("$a={:d}$".format(int(ai)),
xy=(notex, notey), xycoords='data',
xytext=(-dist, 40-abs(dist) if ai != 0 else 10),
textcoords='offset points')
plt.show()
"""
Explanation: Computer Graphics Exercise List #2
<hr>
<center>Jonas de Araújo Luz Jr.</center>
<center>unifor@jonasluz.com</center>
Question #1
Eliminate the parameter u from the following vector equation: r(u) = au<sup>2</sup> i + 2au j, where a is a scalar value.<br>
Verify that this vector equation corresponds to the parabola whose Cartesian equation is y<sup>2</sup> = 4ax.
End of explanation
"""
INF = float('Inf')
# Values of u
u_table = [np.linspace(-10, 10, 21), np.linspace(-1, 1, 11)]
# Print the tables
#
for u in u_table:
print('\nValues for u = {} ...'.format(u))
# Corresponding parametric equations
x = -u
y = [1/ui for ui in u - [0]]
# Equation of y as a function of x
yx = [-1/xi for xi in x - [0]];
# The vectors must all have the same length.
assert len(u) == len(x) == len(y) == len(yx)
print('Generated {:d} points.'.format(len(u)))
# Generate the LaTeX code for the table of values.
print('$u$\t&$x$\t&$y$\t&$r(u)$\\\\ \midrule')
for k, ui in enumerate(u):
if y[k] == INF:
continue
print('{u:6.2}\t&{x:6.2}\t&{y:6.2f}\t&${x:6.2}\\mathbf{{i}} {signal} {yabs:6.2f}\\mathbf{{j}}$\\\\'.
format(
k=k+1, u=ui,
x=x[k],
y=y[k],
signal=('-' if y[k] < 0 else '+'),
yabs=np.abs(y[k])
)
)
## Plot of the function.
#
# Values of u
u_plot = [np.linspace(-10, 10, 200), np.linspace(-1, 1, 200)]
u = u_plot[0] # Change the 0 to 1 here to plot the graph between -10 and 10 or between -1 and 1, respectively.
# Parametric equations of the curve
x = -u
y = [1/ui for ui in u - [0]]
# Equation of y as a function of x
yx = [-1/xi for xi in x - [0]]
# Draw the plot.
ax = prep_axis()
ax.plot(x, yx)
plt.show()
"""
Explanation: Question #2
Consider the planar curve given by the vector equation: r(u) = - u i + 1/u j, u ≠ 0.<br>
Build a table of values for u and then sketch this curve for -10 ≤ u ≤ 10.<br>
Include considerations about -1 < u < 0 and 0 < u < 1.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
circle = 2 * np.pi
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
x = theta / circle
y = 5 * np.cos(theta)
z = 3 * np.sin(theta)
plot_directors = True # Set to True to plot the director lines.
if plot_directors:
ax.plot(x, y, z, label='parametric curve')
ax.plot(x, [5 for xi in x], [0 for xi in x], label='$\mathbf{D}_y$')
ax.plot(x, [-5 for xi in x], [0 for xi in x], label='$\mathbf{D\'}_y$')
ax.plot(x, [0 for xi in x], [3 for xi in x], label='$\mathbf{D}_z$')
ax.plot(x, [0 for xi in x], [-3 for xi in x], label='$\mathbf{D\'}_z$')
ax.legend()
else:
ax.plot(x, y, z)
fig = plt.figure()
plt.show()
"""
Explanation: Question #3
The 3D curve below has the following vector equation: r(θ) = θ / 360 i + 5 cos θ j + 3 sin θ k, where θ is measured in degrees.<br>
Identify this curve and describe its main characteristics.
End of explanation
"""
|
sassoftware/sas-viya-machine-learning | image_recognition/car-damage-analysis/Car-Damage-Image-Analysis-for-Insurance.ipynb | apache-2.0 | # import the required packages
from swat import *
from pprint import pprint
import numpy as np
import matplotlib.pyplot as plt
import cv2
# define the function to display the processed image files.
def imageShow(session, casTable, imageId, nimages):
a = session.table.fetch(sastypes=False,sortby=[{'name':'_id_'}],table=casTable,to=nimages)
fig = plt.figure(figsize=(20, 20))
for i in range(nimages):
imageData = a['Fetch'].iloc[i, imageId]
img_np = cv2.imdecode(np.frombuffer(imageData, np.uint8), 1)
img_np[:,:,[0,2]] = img_np[:,:,[2,0]]  # convert BGR to RGB before display
fig.add_subplot(1,nimages,i+1)
plt.imshow(img_np)
plt.xticks([]), plt.yticks([])
"""
Explanation: Automated Assessment of Car Damage Photos using SAS
Import the required packages including swat for accessing CAS actions
https://github.com/sassoftware/python-swat
End of explanation
"""
# define the host machine and port for CAS connection: port is 5570 from Linux client and 8777 from Windows client.
hostmachine = 'my-viya-server.my-domain.com'
port = 8777
# authentication file on the client machine with user name and password (see the link above).
_authinfo = r"my-local-path\_authinfo"
# path on the Viya server where the image files to be processed are stored.
path_source_images = "my-host-path-for-sources"
path_reference_images = "my-host-path-for-references"
# set up a CAS session.
conn = CAS(hostmachine, port, authinfo = _authinfo)
# load CAS image action set for processing images.
conn.loadactionset('image')
"""
Explanation: Set up the environment and Connect to SAS from Python
Creating an Authinfo file:
http://go.documentation.sas.com/?docsetId=caspg&docsetTarget=n0i9fvsmff624fn1nh29sa6cx6lk.htm&docsetVersion=3.2&locale=en
End of explanation
"""
conn.image.loadImages(casout={'name':'inputTable', 'replace':True}, path= path_source_images)
conn.image.processimages(casout={'name':'inputTable_resized', 'replace':True},
imagefunctions=[{'functionoptions':{'width':1000,'functiontype':'RESIZE','height':600}}],
imagetable={'name':'inputTable'})
imageTable = conn.CASTable('inputTable_resized')
imageShow(conn, imageTable, 0, 4)
"""
Explanation: Load images and resize
End of explanation
"""
r = conn.image.processImages(casout={'name':'resultingImages','replace':True},
imagetable={'name':'inputTable_resized'},
imagefunctions=[
{'options':{'functiontype':'CONVERT_COLOR'}} #change color space
])
print(r)
outTable = conn.CASTable('resultingImages')
type(outTable)
imageShow(conn, outTable, 0, 4)
"""
Explanation: Convert colours
End of explanation
"""
r = conn.image.processImages(casout={'name':'resultingImages','replace':True},
imagetable={'name':'inputTable_resized'},
imagefunctions=[
{'options':{'functiontype':'CONVERT_COLOR'}}, #change color space
{'options':{'functiontype':'BILATERAL_FILTER', #noise reduction
'diameter':13,'sigmacolor':30,'sigmaspace':30}},
{'options':{'functiontype':'THRESHOLD', #image binarization
'type':'OTSU','value':125}},
{'options':{'functiontype':'LAPLACIAN', #edge detection with the Laplace operator
'kernelsize':12}}
])
print(r)
outTable = conn.CASTable('resultingImages')
imageShow(conn, outTable, 0, 4)
outTable.head(4)
"""
Explanation: Apply noise reduction and binarization
End of explanation
"""
# Process reference files to compare.
conn.image.loadImages(casout={'name':'inTable', 'replace':True}, path=path_reference_images)
conn.image.processImages(casout={'name':'refTable','replace':True},
imagetable={'name':'inTable'},
imagefunctions=[{'functionoptions':{'width':1000,'functiontype':'RESIZE','height':600}}, # resize
{'options':{'functiontype':'CONVERT_COLOR'}}, #change color space
{'options':{'functiontype':'BILATERAL_FILTER', #noise reduction
'diameter':13,'sigmacolor':30,'sigmaspace':30}},
{'options':{'functiontype':'THRESHOLD', #image binarization
'type':'OTSU','value':125}}
])
# Compare reference and source images to find the similarity index.
results = conn.image.compareImages(
casOut={
"name":"output",
"replace":True
},
pairAll=True,
referenceImages={
"table":{
"name":'refTable'
}},
sourceImages={
"table":{
"name":'resultingImages'
}}
)
scoreTable = conn.CASTable("output")
del scoreTable['_channel4_']
del scoreTable['_channel3_']
print(results)
print(scoreTable.head())
# end the CAS session.
conn.session.endsession()
"""
Explanation: Compare images with the labeled images in the historical data
Using the similarity index for decision making
End of explanation
"""
|
jphall663/GWU_data_mining | 10_model_interpretability/src/dt_surrogate.ipynb | apache-2.0 | # imports
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.backend import H2OLocalServer
from IPython.display import Image
from IPython.display import display
import os
import re
import subprocess
from subprocess import CalledProcessError
import time
# start and connect to h2o server
h2o.init(max_mem_size='12G')
# load clean data
path = '../../03_regression/data/train.csv'
frame = h2o.import_file(path=path)
"""
Explanation: License
Copyright 2017 J. Patrick Hall, jphall@gwu.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Decision Tree Surrogates (Global Surrogates)
Based on: Craven, Mark W. and Shavlik, Jude W. Extracting tree structured representations of trained networks. Advances in Neural Information Processing Systems, pp. 24–30, 1996.
https://papers.nips.cc/paper/1152-extracting-tree-structured-representations-of-trained-networks.pdf
Requires GraphViz
For MacOS: brew install graphviz
http://www.graphviz.org/
Preliminaries: imports, start H2O, load data
End of explanation
"""
# assign target and inputs
y = 'SalePrice'
X = [name for name in frame.columns if name not in [y, 'Id']]
# determine column types
# impute
reals, enums = [], []
for key, val in frame.types.items():
if key in X:
if val == 'enum':
enums.append(key)
else:
reals.append(key)
_ = frame[reals].impute(method='median')
_ = frame[enums].impute(method='mode')
# split into 70% training and 30% validation
train, valid = frame.split_frame([0.7])
"""
Explanation: Clean and prepare data for modeling
End of explanation
"""
# initialize pre-tuned nn model
nn_model = H2ODeepLearningEstimator(
epochs=50, # read over the data 50 times, but in mini-batches
hidden=[170, 320], # two hidden layers with 170 and 320 units
activation='Tanh', # bounded activation function that allows for dropout, tanh
l2=0.007, # L2 penalty can increase stability in presence of highly correlated inputs
adaptive_rate=True, # adjust magnitude of weight updates automatically (+stability, +accuracy)
stopping_rounds=2, # stop after validation error does not decrease for 2 scoring rounds
score_each_iteration=True, # score validation error on every iteration
reproducible=True,
seed=12345)
# train nn model
nn_model.train(
x=X,
y=y,
training_frame=train,
validation_frame=valid)
# measure nn RMSLE (root mean squared log error)
print(nn_model.rmsle(train=True))
print(nn_model.rmsle(valid=True))
"""
Explanation: Train a "black box" model
End of explanation
"""
# cbind predictions to training frame
# give them a nice name
preds = nn_model.predict(frame)
preds.columns = ['predicted_SalePrice']
frame_yhat = frame.cbind(preds)
"""
Explanation: Use a decision tree surrogate to generate explanations of the "black box" model
First bind the "black box" model predictions onto the training frame
End of explanation
"""
yhat = 'predicted_SalePrice'
model_id = 'dt_surrogate_mojo'
# train single tree surrogate model
surrogate = H2OGradientBoostingEstimator(ntrees=1,
sample_rate=1,
col_sample_rate=1,
max_depth=3,
seed=12345,
model_id=model_id)
_ = surrogate.train(x=X, y=yhat, training_frame=frame_yhat)
# persist MOJO (compiled, representation of trained model)
# from which to generate plot of surrogate
mojo_path = surrogate.download_mojo(path='.')
print(surrogate)
print('Generated MOJO path:\n', mojo_path)
"""
Explanation: Train decision tree surrogate model
End of explanation
"""
details = False # print more info on tree, details = True
title = 'Home Prices Decision Tree Surrogate'
hs = H2OLocalServer()
h2o_jar_path = hs._find_jar()
print('Discovered H2O jar path:\n', h2o_jar_path)
gv_file_name = model_id + '.gv'
gv_args = str('-cp ' + h2o_jar_path +
' hex.genmodel.tools.PrintMojo --tree 0 -i '
+ mojo_path + ' -o').split()
gv_args.insert(0, 'java')
gv_args.append(gv_file_name)
if details:
gv_args.append('--detail')
if title is not None:
gv_args = gv_args + ['--title', title]
print()
print('Calling external process ...')
print(' '.join(gv_args))
_ = subprocess.call(gv_args)
"""
Explanation: Generate GraphViz representation of MOJO
End of explanation
"""
png_file_name = model_id + '.png'
png_args = str('dot -Tpng ' + gv_file_name + ' -o ' + png_file_name)
png_args = png_args.split()
print('Calling external process ...')
print(' '.join(png_args))
_ = subprocess.call(png_args)
"""
Explanation: Generate PNG from GraphViz representation
End of explanation
"""
display(Image((png_file_name)))
"""
Explanation: Display decision tree surrogate
End of explanation
"""
# shutdown h2o
h2o.cluster().shutdown(prompt=True)
"""
Explanation: From this surrogate model we can see ...
Some of the most important variables
Important interactions
The decision path for the most expensive houses
The decision path for the least expensive houses
End of explanation
"""
|
metpy/MetPy | v0.12/_downloads/8c91fa5ab51e12860cfa1e679eaa746d/xarray_tutorial.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.units import units
"""
Explanation: xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional
labeled arrays and datasets following the Common Data Model. While the process of integrating
xarray features into MetPy is ongoing, this tutorial demonstrates how xarray can be used
within the current version of MetPy. MetPy's integration primarily works through accessors
which allow simplified projection handling and coordinate identification. Unit and calculation
support is currently available in a limited fashion, but should be improved in future
versions.
End of explanation
"""
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
print(data)
"""
Explanation: Getting Data
While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with model output. Such model
data can be obtained from a THREDDS Data Server using the siphon package, but for this
tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th,
2017).
End of explanation
"""
# To parse the full dataset, we can call parse_cf without an argument, and assign the returned
# Dataset.
data = data.metpy.parse_cf()
# If we instead want just a single variable, we can pass that variable name to parse_cf and
# it will return just that data variable as a DataArray.
data_var = data.metpy.parse_cf('Temperature_isobaric')
# If we want only a subset of variables, we can pass a list of variable names as well.
data_subset = data.metpy.parse_cf(['u-component_of_wind_isobaric',
'v-component_of_wind_isobaric'])
# To rename variables, supply a dictionary between old and new names to the rename method
data = data.rename({
'Vertical_velocity_pressure_isobaric': 'omega',
'Relative_humidity_isobaric': 'relative_humidity',
'Temperature_isobaric': 'temperature',
'u-component_of_wind_isobaric': 'u',
'v-component_of_wind_isobaric': 'v',
'Geopotential_height_isobaric': 'height'
})
"""
Explanation: Preparing Data
To make use of the data within MetPy, we need to parse the dataset for projection
information following the CF conventions. For this, we use the
data.metpy.parse_cf() method, which will return a new, parsed DataArray or
Dataset.
Additionally, we rename our data variables for easier reference.
End of explanation
"""
data['temperature'].metpy.convert_units('degC')
"""
Explanation: Units
MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array
of just the data from the DataArray (metadata other than units is removed) and a
convert_units method to convert the data from one unit to another (keeping it as a
DataArray). For now, we'll just use convert_units to convert our temperature to
degC.
End of explanation
"""
# Get multiple coordinates (for example, in just the x and y direction)
x, y = data['temperature'].metpy.coordinates('x', 'y')
# If we want to get just a single coordinate from the coordinates method, we have to use
# tuple unpacking because the coordinates method returns a generator
vertical, = data['temperature'].metpy.coordinates('vertical')
# Or, we can just get a coordinate from the property
time = data['temperature'].metpy.time
# To verify, we can inspect all their names
print([coord.name for coord in (x, y, vertical, time)])
"""
Explanation: WARNING: Versions of MetPy prior to 1.0 (including 0.12) require units to be in the
attributes of your xarray DataArray. If you attempt to use a DataArray containing a
pint.Quantity instead, incorrect results are likely to occur, since these earlier
versions of MetPy will ignore the pint.Quantity and still just rely upon the units
attribute. See GitHub Issue #1358 <https://github.com/Unidata/MetPy/issues/1358>_ for more
details.
Note that this changes in newer versions of MetPy as of 1.0, when Quantities-in-xarray
became the default behavior.
Coordinates
You may have noticed how we directly accessed the vertical coordinates above using their
names. However, in general, if we are working with a particular DataArray, we don't have to
worry about that since MetPy is able to parse the coordinates and so obtain a particular
coordinate type directly. There are two ways to do this:
Use the data_var.metpy.coordinates method
Use the data_var.metpy.x, data_var.metpy.y, data_var.metpy.longitude,
data_var.metpy.latitude, data_var.metpy.vertical, data_var.metpy.time
properties
The valid coordinate types are:
x
y
longitude
latitude
vertical
time
(Both approaches are shown below)
The x, y, vertical, and time coordinates cannot be multidimensional,
however, the longitude and latitude coordinates can (which is often the case for
gridded weather data in its native projection). Note that for gridded data on an
equirectangular projection, such as the GFS data in this example, x and longitude
will be identical (as will y and latitude).
End of explanation
"""
print(data['height'].metpy.sel(vertical=850 * units.hPa))
"""
Explanation: Indexing and Selecting Data
MetPy provides wrappers for the usual xarray indexing and selection routines that can handle
quantities with units. For DataArrays, MetPy also allows using the coordinate axis types
mentioned above as aliases for the coordinates. And so, if we wanted 850 hPa heights,
we would take:
End of explanation
"""
data_crs = data['temperature'].metpy.cartopy_crs
print(data_crs)
"""
Explanation: For full details on xarray indexing/selection, see
xarray's documentation <http://xarray.pydata.org/en/stable/indexing.html>_.
Projections
Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as
straightforward as using the data_var.metpy.cartopy_crs property:
End of explanation
"""
data_globe = data['temperature'].metpy.cartopy_globe
print(data_globe)
"""
Explanation: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
property:
End of explanation
"""
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Calculations
Most of the calculations in metpy.calc will accept DataArrays by converting them
into their corresponding unit arrays. While this may often work without any issues, we must
keep in mind that because the calculations are working with unit arrays and not DataArrays:
The calculations will return unit arrays rather than DataArrays
Broadcasting must be taken care of outside of the calculation, as it would only recognize
dimensions by order, not name
As an example, we calculate geostrophic wind at 500 hPa below:
End of explanation
"""
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.grid_deltas_from_dataarray(heights)
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
can accept and return xarray objects). Right now, this includes
Derivative functions
first_derivative
second_derivative
gradient
laplacian
Cross-section functions
cross_section_components
normal_component
tangential_component
absolute_momentum
More details can be found by looking at the documentation for the specific function of
interest.
There is also the special case of the helper function, grid_deltas_from_dataarray, which
takes a DataArray input, but returns unit arrays for use in other calculations. We could
rewrite the above geostrophic wind example using this helper function as follows:
End of explanation
"""
# A very simple example of a plot of 500 hPa heights
data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}].plot()
plt.show()
# Let's add a projection and coastlines to it
ax = plt.axes(projection=ccrs.LambertConformal())
data['height'].metpy.loc[{'time': time[0],
'vertical': 500. * units.hPa}].plot(ax=ax, transform=data_crs)
ax.coastlines()
plt.show()
# Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity
# Select the data for this time and level
data_level = data.metpy.loc[{time.name: time[0], vertical.name: 500. * units.hPa}]
# Create the matplotlib figure and axis
fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs})
# Plot RH as filled contours
rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100],
colors=['#99ff00', '#00ff00', '#00cc00'])
# Plot wind barbs, but not all of them
wind_slice = slice(5, -5, 5)
ax.barbs(x[wind_slice], y[wind_slice],
data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
length=6)
# Plot heights and temperature as contours
h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60))
h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue',
levels=range(-26, 4, 2), alpha=0.8, linestyles='--')
t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Add geographic features
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land'])
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water'])
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0)
ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'],
edgecolor='#c7c783', zorder=0)
# Set a title and show the plot
ax.set_title('500 hPa Heights (m), Temperature (\u00B0C), Humidity (%) at '
+ time[0].dt.strftime('%Y-%m-%d %H:%MZ').item())
plt.show()
"""
Explanation: Plotting
Like most meteorological data, we want to be able to plot these data. DataArrays can be used
like normal numpy arrays in plotting code, which is the recommended process at the current
point in time, or we can use some of xarray's plotting functionality for quick inspection of
the data.
(More detail beyond the following can be found at xarray's plotting reference
<http://xarray.pydata.org/en/stable/plotting.html>_.)
End of explanation
"""
|
psas/liquid-engine-analysis | archive/aerobee-150-reconstruction/AJ11-26.ipynb | gpl-3.0 | from math import pi, log
# Physics
g_0 = 9.80665 # m/s^2 Standard gravity
# Chemistry
rho_rfna = 1500.0 # kg/m^3 Density of IRFNA
rho_fa = 1130.0 # kg/m^3 Density of Furfuryl Alcohol
rho_an = 1021.0 # kg/m^3 Density of Aniline
# Data
Isp = 209.0 # s Average Specific Impulse accounting for underexpansion[1]
r = 0.190 # m Radius of the tanks (OD of rocket)[2]
Burn_time = 52.0 # s Duration of the burn[2]
Mass_Fuel = 134.4 # kg Mass of the fuel burnt[1]
Mass_Ox = 343.9 # kg Mass of the oxidizer burnt[1]
"""
Explanation: Aerobee 150 Engine
The Aerobee 150 flew on an AJ11-26 IRFNA and ANFA hypergolic pressure-fed liquid motor.
We have some information to start with.
End of explanation
"""
rho_fuel = rho_an*0.65 + rho_fa*0.35
OF = Mass_Ox / Mass_Fuel
mdot = (Mass_Fuel+Mass_Ox) / Burn_time
Thrust = mdot*g_0*Isp
print "O/F ratio: %6.1f" % OF
print "mdot: %7.2f [kg/s]" % mdot
print "Thrust: %6.1f [kN]" % (Thrust/1000.0)
# Mass flow for each propellant
mdot_o = mdot / (1 + (1/OF))
mdot_f = mdot / (1 + OF)
print "Ox flow: %7.2f kg/s" % mdot_o
print "Fuel flow: %7.2f kg/s" % mdot_f
def tank_length(m, rho):
l = m / (rho*pi*r*r)
return l
l_o = tank_length(Mass_Ox, rho_rfna)
l_o += l_o*0.1 # add 10% for ullage
l_f = tank_length(Mass_Fuel, rho_fuel)
l_f += l_f*0.1 # add 10% for ullage
print "Ox tank length: . . . .%7.3f m" % l_o
print "Fuel tank length: %7.3f m" % l_f
"""
Explanation: First let's compute the fuel density, O/F ratio, mass flow rate, and thrust
End of explanation
"""
|
DS-100/sp17-materials | sp17/labs/lab11/lab11.ipynb | gpl-3.0 | !pip install -U sklearn
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn as skl
import sklearn.linear_model as lm
import scipy.io as sio
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab11.ok')
"""
Explanation: Lab 11: Regularization and Cross-Validation
End of explanation
"""
!head train.csv
"""
Explanation: Today's lab covers:
How to use regularization to avoid overfitting
How to use cross-validation to find the amount of regularization that produces a model with the least error for new data
Dammed Data
We've put our data into two files: train.csv and test.csv which contain
the training and test data, respectively. You are not allowed to train
on the test data.
The y values in the training data correspond to the amount of water that flows out of the dam on a particular day. There is only 1 feature: the increase in water level for the dam's reservoir on that day, which we'll call x.
End of explanation
"""
data = pd.read_csv('train.csv')
X = data[['X']].as_matrix()
y = data['y'].as_matrix()
X.shape, y.shape
_ = plt.plot(X[:, 0], y, '.')
plt.xlabel('Change in water level (X)')
plt.ylabel('Water flow out of dam (y)')
def plot_data_and_curve(curve_x, curve_y):
plt.plot(X[:, 0], y, '.')
plt.plot(curve_x, curve_y, '-')
plt.ylim(-20, 60)
plt.xlabel('Change in water level (X)')
plt.ylabel('Water flow out of dam (y)')
"""
Explanation: Let's load in the data:
End of explanation
"""
linear_clf = ...
# Fit your classifier
linear_clf.fit(X, y)
# Predict a bunch of points to draw best fit line
all_x = np.linspace(-55, 55, 200).reshape(-1, 1)
line = linear_clf.predict(all_x)
plot_data_and_curve(all_x, line)
"""
Explanation: Question 1: As a warmup, let's fit a line to this data using sklearn.
We've imported sklearn.linear_model as lm, so you can use that instead of
typing out the whole module name. Running the cell should show the data
with your best fit line.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
X_poly = ...
X_poly.shape
"""
Explanation: Question 2: If you had to guess, which has a larger effect on error for this dataset: bias or variance?
Explain briefly.
Write your answer here, replacing this text.
Question 3: Let's now add some complexity to our model by adding polynomial features.
Reference the sklearn docs on the PolynomialFeatures class. You should use this class to add polynomial features to X up to degree 8 using the fit_transform method.
The final X_poly data matrix should have shape (33, 9). Think about and discuss why.
End of explanation
"""
poly_clf = ...
# Fit your classifier
...
# Set curve to your model's predictions on all_x
curve = ...
plot_data_and_curve(all_x, curve)
"""
Explanation: Question 4: Now, fit a linear regression to the data, using polynomial features.
Then, follow the code in Question 1 to make predictions for the values in all_x. You'll have to add polynomial features to all_x in order to make predictions.
Then, running this cell should plot the best fit curve using a degree 8 polynomial.
End of explanation
"""
def mse(predicted_y, actual_y):
return np.mean((predicted_y - actual_y) ** 2)
line_training_error = ...
poly_training_error = ...
line_training_error, poly_training_error
"""
Explanation: Question 5: Think about and discuss what you notice in the model's predictions.
Now, compute the mean squared training error for both the best fit line and polynomial. Again, you'll have to transform the training data for the polynomial regression before you can make predictions.
You should get training errors of around 52.8 and 5.23 for the linear and polynomial models, respectively. Why does the polynomial model get a lower training error than the linear model?
End of explanation
"""
from sklearn.pipeline import make_pipeline
poly_pipeline = ...
# Fit the pipeline on X and y
...
# Compute the training error
pipeline_training_error = ...
pipeline_training_error
"""
Explanation: Question 6: It's annoying to have to transform the data every time we want to use polynomial features. We can use a Pipeline to let us do both transformation and regression in one step.
Read the docs for make_pipeline and create a pipeline for polynomial regression called poly_pipeline. Then, fit it on X and y and compute the training error as in Question 5. The training errors should match.
End of explanation
"""
from sklearn.model_selection import train_test_split
np.random.seed(42)
...
X_train.shape, X_valid.shape
"""
Explanation: Nice! With pipelines, we can combine any number of transformations and treat the whole thing as a single classifier.
Question 7: Now, we know that a low training error doesn't necessarily mean your model is good. So, we'll hold out some points from the training data for a validation set. We'll use these held-out points to choose the best model.
Use the train_test_split function to split out one third of the training data for validation. Call the resulting datasets X_train, X_valid, y_train, y_valid.
X_train should have shape (22, 1). X_valid should have shape (11, 1).
End of explanation
"""
# Fit the linear classifier
...
# Fit the polynomial pipeline
...
X_train_line_error = ...
X_valid_line_error = ...
X_train_poly_error = ...
X_valid_poly_error = ...
X_train_line_error, X_valid_line_error, X_train_poly_error, X_valid_poly_error
"""
Explanation: Question 8: Now, set X_train_line_error, X_valid_line_error,
X_train_poly_error, X_valid_poly_error to the training and validation
errors for both linear and polynomial regression.
You'll have to call .fit on your classifiers/pipelines again since we're using
X_train and y_train instead of X and y.
You should see that the validation error for the polynomial fit is significantly
higher than the linear fit (152.6 vs 115.2).
End of explanation
"""
ridge_pipeline = ...
# Fit your classifier
...
# Set curve to your model's predictions on all_x
ridge_curve = ...
plot_data_and_curve(all_x, ridge_curve)
"""
Explanation: Question 9: Our 8 degree polynomial is overfitting our data.
To reduce overfitting, we can use regularization.
The usual cost function for linear regression is:
$$J(\theta) = (Y - X \theta)^T (Y - X \theta)$$
Edit the cell below to show the cost function of linear regression with L2 regularization. Use
$\lambda $ as your regularization parameter.
$$J(\theta) = (Y - X \theta)^T (Y - X \theta)$$
Now, explain why this cost function helps reduce overfitting.
Write your answer here, replacing this text.
Question 10: L2 regularization for linear regression is also known as
Ridge regression. Create a pipeline called ridge_pipeline that again
creates polynomial features with degree 8 and then uses the Ridge sklearn
classifier.
The alpha argument is the same as our $\lambda$. Leave it as the default (1.0). You should set normalize=True to normalize your data before fitting. Why do we have to do this?
Then, fit your pipeline on the data. The cell will then plot the curve of your
regularized classifier. You should notice the curve is significantly
smoother.
Then, fiddle around with the alpha value. What do you notice as you
increase alpha? Decrease alpha?
End of explanation
"""
ridge_train_error = ...
ridge_valid_error = ...
ridge_train_error, ridge_valid_error
"""
Explanation: Question 11: Compute the training and validation error for the ridge_pipeline.
How do the errors compare to the errors for the unregularized model? Why did each one go up/down?
End of explanation
"""
alphas = [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 10.0]
# Your code to find the best alpha
...
best_alpha
"""
Explanation: Question 12: Now we want to know: how do we choose the best alpha value?
This is where we use our validation set. We can try out a bunch of alphas and pick the one that gives us the least error on the validation set. Why can't we use the one that gives us the least error on the training set? The test set?
For each alpha in the given alphas list, fit a Ridge regression model to the training set and check its accuracy on the validation set.
Finally, set best_alpha to the best value. You should get a best alpha of 0.01 with a validation error of 15.7.
End of explanation
"""
best_pipeline = ...
best_pipeline.fit(X_train, y_train)
best_curve = best_pipeline.predict(all_x)
plot_data_and_curve(all_x, best_curve)
"""
Explanation: Question 13: Now, set best_pipeline to the pipeline with the degree 8 polynomial transform and the ridge regression model with the best value of alpha.
End of explanation
"""
test_data = pd.read_csv('test.csv')
X_test = test_data[['X']].as_matrix()
y_test = test_data['y'].as_matrix()
line_test_error = mse(linear_clf.predict(X_test), y_test)
poly_test_error = mse(poly_pipeline.predict(X_test), y_test)
best_test_error = mse(best_pipeline.predict(X_test), y_test)
line_test_error, poly_test_error, best_test_error
"""
Explanation: Now, run the cell below to find the test error of your simple linear model, your polynomial model, and your regularized polynomial model.
End of explanation
"""
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
"""
Explanation: Nice! You've used regularization and cross-validation to fit an accurate polynomial model to the dataset.
In the future, you'd probably want to use something like RidgeCV to automatically perform cross-validation, but it's instructive to do it yourself at least once.
Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.
End of explanation
"""
|
quoniammm/mine-tensorflow-examples | fastAI/deeplearning1/nbs/statefarm.ipynb | mit | from theano.sandbox import cuda
cuda.use('gpu0')
%matplotlib inline
from __future__ import print_function, division
path = "data/state/"
#path = "data/state/sample/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64
"""
Explanation: Enter State Farm
End of explanation
"""
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
"""
Explanation: Setup batches
End of explanation
"""
trn = get_data(path+'train')
val = get_data(path+'valid')
save_array(path+'results/val.dat', val)
save_array(path+'results/trn.dat', trn)
val = load_array(path+'results/val.dat')
trn = load_array(path+'results/trn.dat')
"""
Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
End of explanation
"""
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model = conv1(batches)
"""
Explanation: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
"""
Explanation: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Data augmentation
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.00001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
"""
Explanation: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.
Four conv/pooling pairs + dropout
Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
End of explanation
"""
vgg = Vgg16()
model=vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# batches shuffle must be set to False when pre-computing features
batches = get_batches(path+'train', batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
# test_batches wasn't defined above, so create it before predicting
test_batches = get_batches(path+'test', batch_size=batch_size, shuffle=False)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_val_feat.shape
"""
Explanation: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
End of explanation
"""
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/conv8.h5')
"""
Explanation: Batchnorm dense layers on pretrained conv layers
Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
"""
Explanation: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Pre-computed data augmentation + dropout
We'll use our usual data augmentation parameters:
End of explanation
"""
da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)
save_array(path+'results/da_conv_feat2.dat', da_conv_feat)
da_conv_feat = load_array(path+'results/da_conv_feat2.dat')
"""
Explanation: We use those to create a dataset of convolutional features 5x bigger than the training set.
End of explanation
"""
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
"""
Explanation: Let's include the real training data as well in its non-augmented form.
End of explanation
"""
da_trn_labels = np.concatenate([trn_labels]*6)
"""
Explanation: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
End of explanation
"""
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: Based on some experiments, the previous model works well with bigger dense layers.
End of explanation
"""
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
"""
Explanation: Now we can train the model as usual, with pre-computed augmented data.
End of explanation
"""
bn_model.save_weights(path+'models/da_conv8_1.h5')
"""
Explanation: Looks good - let's save those weights.
End of explanation
"""
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
"""
Explanation: Pseudo labeling
We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.
To do this, we simply calculate the predictions of our model...
End of explanation
"""
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
"""
Explanation: ...concatenate them with our training labels...
End of explanation
"""
bn_model.load_weights(path+'models/da_conv8_1.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
"""
Explanation: ...and fine-tune our model using that data.
End of explanation
"""
bn_model.save_weights(path+'models/bn-ps8.h5')
"""
Explanation: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
End of explanation
"""
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
# val_preds wasn't defined above; compute validation predictions before clipping
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2)
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.gz'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
"""
Explanation: Submit
We'll find a good clipping amount using the validation set, prior to submitting.
End of explanation
"""
for l in get_bn_layers(p): conv_model.add(l)
for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):
l2.set_weights(l1.get_weights())
for l in conv_model.layers: l.trainable =False
for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True
comb = np.concatenate([trn, val])
gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04,
shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)
batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, batches.N, nb_epoch=1, validation_data=val_batches,
nb_val_samples=val_batches.N)
conv_model.optimizer.lr = 0.0001
conv_model.fit_generator(batches, batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
for l in conv_model.layers[16:]: l.trainable =True
conv_model.optimizer.lr = 0.00001
conv_model.fit_generator(batches, batches.N, nb_epoch=8, validation_data=val_batches,
nb_val_samples=val_batches.N)
conv_model.save_weights(path+'models/conv8_ps.h5')
conv_model.load_weights(path+'models/conv8_da.h5')
val_pseudo = conv_model.predict(val, batch_size=batch_size*2)
save_array(path+'models/pseudo8_da.dat', val_pseudo)
"""
Explanation: This gets 0.534 on the leaderboard.
The "things that didn't really work" section
You can safely ignore everything from here on, because they didn't really help.
Finetune some conv layers too
End of explanation
"""
drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')
drivers_ds.head()
img2driver = drivers_ds.set_index('img')['subject'].to_dict()
driver2imgs = {k: g["img"].tolist()
for k,g in drivers_ds[['subject', 'img']].groupby("subject")}
def get_idx(driver_list):
return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]
drivers = driver2imgs.keys()
rnd_drivers = np.random.permutation(drivers)
ds1 = rnd_drivers[:len(rnd_drivers)//2]
ds2 = rnd_drivers[len(rnd_drivers)//2:]
models=[fit_conv([d]) for d in drivers]
models=[m for m in models if m is not None]
all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])
avg_preds = all_preds.mean(axis=0)
avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)
keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
"""
Explanation: Ensembling
End of explanation
"""
|
astarostin/MachineLearningSpecializationCoursera | course4/week1 - Биномиальный критерий для доли - demo.ipynb | apache-2.0 | import numpy as np
from scipy import stats
%pylab inline
"""
Explanation: Binomial test for a proportion
End of explanation
"""
n = 16
n_samples = 1000
samples = np.random.randint(2, size = (n_samples, n))
t_stat = map(sum, samples)
pylab.hist(t_stat, bins = 16, color = 'b', range = (0, 16), label = 't_stat')
pylab.legend()
"""
Explanation: Shaken, not stirred
James Bond says he prefers his martini stirred, not shaken. Let's run a blind test: n times we offer him a pair of drinks and find out which of the two he prefers:
the sample is a binary vector of length $n$, where 1 means James Bond preferred the stirred drink and 0 the shaken one;
hypothesis $H_0$: James Bond cannot tell the two drinks apart and chooses at random;
statistic $t$: the number of ones in the sample.
End of explanation
"""
stats.binom_test(12, 16, 0.5, alternative = 'two-sided')
stats.binom_test(13, 16, 0.5, alternative = 'two-sided')
stats.binom_test(67, 100, 0.75, alternative='two-sided')
stats.binom_test(22, 50, 0.75, alternative='two-sided')
"""
Explanation: The null distribution of the statistic is binomial, $Bin(n, 0.5)$
Two-sided alternative
hypothesis $H_1$: James Bond prefers one particular kind of martini.
End of explanation
"""
stats.binom_test(12, 16, 0.5, alternative = 'greater')
stats.binom_test(11, 16, 0.5, alternative = 'greater')
"""
Explanation: One-sided alternative
hypothesis $H_1$: James Bond prefers the stirred drink.
End of explanation
"""
|
arborh/tensorflow | tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb | apache-2.0 | import os
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All other words will be used to train an "unknown" category.
os.environ["WANTED_WORDS"] = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=15000,3000 and LEARNING_RATE=0.001,0.0001
# will run 18,000 training loops in total, with a rate of 0.001 for the first
# 15,000, and 0.0001 for the final 3,000.
os.environ["TRAINING_STEPS"]="15000,3000"
os.environ["LEARNING_RATE"]="0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
total_steps = sum(int(steps) for steps in os.environ["TRAINING_STEPS"].split(","))
os.environ["TOTAL_STEPS"] = str(total_steps)
# Print the configuration to confirm it
!echo "Training these words: ${WANTED_WORDS}"
!echo "Training steps in each stage: ${TRAINING_STEPS}"
!echo "Learning rate in each stage: ${LEARNING_RATE}"
!echo "Total number of training steps: ${TOTAL_STEPS}"
"""
Explanation: Train a Simple Audio Recognition model for microcontroller use
This notebook demonstrates how to train a 20kb Simple Audio Recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the micro_speech example application.
The model is designed to be used with Google Colaboratory.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
The notebook runs Python scripts to train and freeze the model, and uses the TensorFlow Lite converter to convert it for use with TensorFlow Lite for Microcontrollers.
Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and selecting GPU. Training 18,000 iterations will take 1.5-2 hours on a GPU runtime.
Configure training
The following os.environ lines can be customized to set the words that will be trained for, and the steps and learning rate of the training. The default values will result in the same model that is used in the micro_speech example. Run the cell to set the configuration:
End of explanation
"""
# Replace Colab's default TensorFlow install with a more recent
# build that contains the operations that are needed for training
!pip uninstall -y tensorflow tensorflow_estimator tensorboard
!pip install -q tf-estimator-nightly==1.14.0.dev2019072901 tf-nightly-gpu==1.15.0.dev20190729
"""
Explanation: Install dependencies
Next, we'll install a GPU build of TensorFlow, so we can use GPU acceleration for training.
End of explanation
"""
# Clone the repository from GitHub
!git clone -q https://github.com/tensorflow/tensorflow
# Check out a commit that has been tested to work
# with the build of TensorFlow we're using
!git -c advice.detachedHead=false -C tensorflow checkout 17ce384df70
"""
Explanation: We'll also clone the TensorFlow repository, which contains the scripts that train and freeze the model.
End of explanation
"""
# Delete any old logs from previous runs
!rm -rf /content/retrain_logs
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir /content/retrain_logs
"""
Explanation: Load TensorBoard
Now, set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
End of explanation
"""
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --silence_percentage=25 --unknown_percentage=25 \
--quantize=1 --verbosity=WARN --how_many_training_steps=${TRAINING_STEPS} \
--learning_rate=${LEARNING_RATE} --summaries_dir=/content/retrain_logs \
--data_dir=/content/speech_dataset --train_dir=/content/speech_commands_train
"""
Explanation: Begin training
Next, run the following script to begin training. The script will first download the training data:
End of explanation
"""
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --quantize=1 --output_file=/content/tiny_conv.pb \
--start_checkpoint=/content/speech_commands_train/tiny_conv.ckpt-${TOTAL_STEPS}
"""
Explanation: Freeze the graph
Once training is complete, run the following cell to freeze the graph.
End of explanation
"""
!toco \
--graph_def_file=/content/tiny_conv.pb --output_file=/content/tiny_conv.tflite \
--input_shapes=1,49,40,1 --input_arrays=Reshape_2 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=0 --std_dev_values=9.8077
"""
Explanation: Convert the model
Run this cell to use the TensorFlow Lite converter to convert the frozen graph into the TensorFlow Lite format, fully quantized for use with embedded devices.
End of explanation
"""
import os
model_size = os.path.getsize("/content/tiny_conv.tflite")
print("Model is %d bytes" % model_size)
"""
Explanation: The following cell will print the model size, which will be under 20 kilobytes.
End of explanation
"""
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i /content/tiny_conv.tflite > /content/tiny_conv.cc
# Print the source file
!cat /content/tiny_conv.cc
"""
Explanation: Finally, we use xxd to transform the model into a source file that can be included in a C++ project and loaded by TensorFlow Lite for Microcontrollers.
End of explanation
"""
|
phenology/infrastructure | applications/notebooks/stable/plot_kmeans_clusters-Light.ipynb | apache-2.0 | import sys
sys.path.append("/usr/lib/spark/python")
sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip")
sys.path.append("/usr/lib/python3/dist-packages")
import os
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"
import os
os.environ["PYSPARK_PYTHON"] = "python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython"
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark import SparkConf, SparkContext
from osgeo import gdal
from io import BytesIO
import matplotlib.pyplot as plt
import rasterio
from rasterio import plot
from rasterio.io import MemoryFile
"""
Explanation: Plot Kmeans clusters stored in a GeoTiff
This notebook plots the GeoTiffs created by kmeans; each GeoTiff contains the kmeans cluster IDs.
Dependencies
End of explanation
"""
appName = "plot_kmeans_clusters"
masterURL="spark://pheno0.phenovari-utwente.surf-hosted.nl:7077"
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
"""
Explanation: Spark Context
End of explanation
"""
#GeoTiffs to be read from "hdfs:///user/hadoop/spring-index/"
dir_path = "hdfs:///user/hadoop/spring-index/"
offline_dir_path = "hdfs:///user/pheno/spring-index/"
geoTiff_dir = "BloomFinal"
#Kmeans number of iterations and clusters
numIterations = 35
minClusters = 2
maxClusters = 15
stepClusters = 1
"""
Explanation: Mode of Operation setup
The user should modify the following variables to define which GeoTiffs should be loaded. In case it (s)he wants to visualize results that just came out of kmeans laste execution, just copy the values set at its Mode of Operation Setup.
End of explanation
"""
geotiff_hdfs_paths = []
if minClusters > maxClusters:
maxClusters = minClusters
stepClusters = 1
if stepClusters < 1:
stepClusters = 1
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
path = offline_dir_path + geoTiff_dir + '/clusters_' + str(numClusters) + '_' + str(numIterations) + '.tif'
geotiff_hdfs_paths.append(path)
numClusters_id += 1
numClusters += stepClusters
"""
Explanation: Mode of Operation verification
End of explanation
"""
clusters_dataByteArrays = []
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
clusters_data = sc.binaryFiles(geotiff_hdfs_paths[numClusters_id]).take(1)
clusters_dataByteArrays.append(bytearray(clusters_data[0][1]))
numClusters_id += 1
numClusters += stepClusters
"""
Explanation: Load GeoTiffs
Load the GeoTiffs into MemoryFiles.
End of explanation
"""
for val in clusters_dataByteArrays:
#Create a Memory File
memFile = MemoryFile(val).open()
print(memFile.profile)
memFile.close()
"""
Explanation: Check GeoTiffs metadata
End of explanation
"""
%matplotlib inline
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print ("Plot for " + str(numClusters) + " clusters!!!")
memFile = MemoryFile(clusters_dataByteArrays[numClusters_id]).open()
plot.show((memFile,1))
if (numClusters < maxClusters) :
_ = input("Press [enter] to continue.")
memFile.close()
numClusters_id += 1
numClusters += stepClusters
"""
Explanation: Plot GeoTiffs
End of explanation
"""
|
ecell/ecell4-notebooks | en/tests/Reversible.ipynb | gpl-2.0 | %matplotlib inline
from ecell4.prelude import *
"""
Explanation: Reversible
This is for an integrated test of E-Cell4. Here, we test a simple reversible association/dissociation model in volume.
End of explanation
"""
D = 1
radius = 0.005
N_A = 60
U = 0.5
ka_factor = 0.1 # 0.1 is for reaction-limited
N = 20 # a number of samples
"""
Explanation: Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and collision rate defined as ka and kD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.
End of explanation
"""
import numpy
kD = 4 * numpy.pi * (radius * 2) * (D * 2)
ka = kD * ka_factor
kd = ka * N_A * U * U / (1 - U)
kon = ka * kD / (ka + kD)
koff = kd * kon / ka
"""
Explanation: Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.
End of explanation
"""
y0 = {'A': N_A, 'B': N_A}
duration = 3
opt_kwargs = {'legend': True}
"""
Explanation: Start with no C molecules, and simulate 3 seconds.
End of explanation
"""
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (kon, koff)
m = get_model()
"""
Explanation: Make a model with effective rates. This model is for macroscopic simulation algorithms.
End of explanation
"""
ret1 = run_simulation(duration, y0=y0, model=m)
ret1.plot(**opt_kwargs)
"""
Explanation: Save the result of the ode solver as ret1, and plot it:
End of explanation
"""
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver='gillespie', repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
"""
Explanation: Simulating with gillespie (Bars represent standard error of the mean):
End of explanation
"""
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('meso', Integer3(4, 4, 4)), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
"""
Explanation: Simulating with meso:
End of explanation
"""
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (ka, kd)
m = get_model()
"""
Explanation: Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.
End of explanation
"""
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
"""
Explanation: Simulating with spatiocyte. voxel_radius is given as radius:
End of explanation
"""
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
"""
Explanation: Simulating with egfrd:
End of explanation
"""
|
eds-uga/csci1360e-su16 | lectures/L3 - Python Variables and Syntax.ipynb | mit | x = 2
"""
Explanation: Lecture 3: Python Variables and Syntax
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
In this lecture, we'll get into more detail on Python variables, as well as language syntax. By the end, you should be able to:
Define variables of string and numerical types, convert between them, and use them in basic operations
Explain the different variants of typing in programming languages, and what "duck-typing" in Python does
Understand how Python uses whitespace in its syntax
Demonstrate how smart variable-naming and proper use of comments can effectively document your code
This week will effectively be a "crash-course" in Python basics; there's a lot of ground to cover!
Part 1: Variables and Types
We saw in the last lecture how to define variables, as well as a few of the basic variable "types" available in Python. It's important to keep in mind that each variable you define has a "type", and this type will dictate much (if not all) of the operations you can perform on and with that variable.
To recap: a variable in Python is a sort of placeholder that stores a value. Critically, a variable has both a name and a type. For example:
End of explanation
"""
y = 2.0
"""
Explanation: It's easy to determine the name of the variable; in this case, the name is $x$. It can be a bit more complicated to determine the type of the variable, as it depends on the value the variable is storing. In this case, it's storing the number 2. Since there's no decimal point on the number, we call this number an integer, or int for short.
Thus, in this case, the name of the variable is x and the type is int.
Numerical types
What other types of variables are there?
End of explanation
"""
x = 2
y = 3
z = x / y
"""
Explanation: In this example, since y is assigned a value of 2.0, it is referred to as a floating-point variable, or float for short. It doesn't matter that the decimal is 0; internally, Python sees the explicit presence of a decimal and treats the variable y as having type float.
Floats do the heavy-lifting of much of the computation in data science. Whenever you're computing probabilities or fractions or normalizations, floats are the types of variables you're using. In general, you tend to use floats for heavy computation, and ints for counting things.
There is an explicit connection between ints and floats. Let's illustrate with an example:
End of explanation
"""
type(z)
"""
Explanation: In this case, we've defined two variables x and y and assigned them integer values, so they are both of type int. However, we've used them both in a division operation and assigned the result to a variable named z. If we were to check the type of z, what type do you think it would be?
z is a float!
End of explanation
"""
x = 2
y = 3
z = x * y
type(z)
x = 2.5
y = 3.5
z = x * y
type(z)
"""
Explanation: How does that happen? Shouldn't an operation involving two ints produce an int? In general, yes it does. However, in cases where a decimal number is outputted, Python implicitly "promotes" the variable storing the result. This is known as casting, and it can take two forms: implicit casting (as we just saw), or explicit casting.
Casting
Implicit casting is done in such a way as to try to abide by "common sense": if you're dividing two numbers, you would all but expect to receive a fraction, or decimal, on the other end. If you're multiplying two numbers, the type of the output depends on the types of the inputs--two floats multiplied will likely produce a float, while two ints multiplied will produce an int.
End of explanation
"""
x = 2.5
y = 3.5
z = x * y
print("Float z:\t{}\nInteger z:\t{}".format(z, int(z)))
"""
Explanation: Explicit casting, on the other hand, is a little trickier. In this case, it's you, the programmer, who makes explicit (hence the name) what type you want your variables to be. Python has a couple of special built-in functions for performing explicit casting on variables, and they're named what you would expect: int() for casting a variable as an int, and float() for casting it as a float.
End of explanation
"""
x = "this is a string"
type(x)
"""
Explanation: Whoa! What's going on here?
With explicit casting, you are telling Python to override its default behavior. In doing so, it has to make some decisions as to how to do so in a way that still makes sense. When you cast a float to an int, some information is lost; namely, the decimal. So the way Python handles this is by quite literally discarding the entire decimal portion.
In this way, even if your number was 9.999999999 and you performed an explicit cast to int(), Python would hand you back a 9.
Language typing mechanisms
Python as a language is known as dynamically typed. This means you don't have to specify the type of the variable when you define it; rather, Python infers the type based on how you've defined it and how you use it. As we've already seen, Python creates a variable of type int when you assign it an integer number like 5, and it automatically converts the type to a float whenever the operations produce decimals.
Other languages, like C++ and Java, are statically typed, meaning in addition to naming a variable when it is declared, the programmer must also explicitly state the type of the variable.
There are pros and cons to both dynamic and static typing; in particular, some would argue that it's easier to make mistakes in dynamically typed languages, as one isn't always 100% certain of what Python's underlying type representation is without explicitly checking the type of the variable. On the other hand, not having to declare types every time you define a new variable can eliminate a lot of boilerplate code.
In fact, when checking for the type of the variable, Python implements what is known as duck typing: if it walks like a duck and quacks like a duck, it's a duck. As such, Python checks the properties of the variables you've defined and treats each variable as the type it most resembles.
This brings us to a concept known as type safety. This is an important point, especially in dynamically typed languages where the type is not explicitly set by the programmer: there are countless examples of nefarious hacking that has exploited a lack of type safety in certain applications in order to execute malicious code.
A particularly fun example is known as a roundoff error, or more specifically to our case, a representation error. This occurs when we are attempting to represent a value for which we simply don't have enough precision to accurately store. This gets a little technical, but basically there are a certain number of bits allocated that represent the whole number part of a float, and a certain number that represent the decimal part of a float.
When there are too many decimal values to represent (usually because the number we're trying to store is very, very small), we get an underflow error.
For example, let's say we wanted to build an algorithm that automatically predicts whether an email we received is spam or not. To do this, we have to multiply a lot of probabilities together. Probabilities are floats between 0 and 1. Now imagine multiplying several hundreds to thousands of these values together; we'll very likely end up with tiny, tiny numbers. In the case of an underflow error, Python may very well set our total probability to 0 as it won't have enough bits to represent the full decimal.
When there are too many whole numbers to represent (usually because the number we're trying to store is very, very large), we get an overflow error.
One of the most popular examples of an overflow error was the Y2K bug. In this case, most Windows machines internally stored the year as simply the last two digits. Thus, when the year 2000 rolled around, the two numbers representing the year overflowed and reset to 00. A similar problem is anticipated for 2038, when 32-bit Unix machines will also see their internal date representations overflow to 0.
In these cases, and especially in dynamically typed languages like Python, it is very important to know what types of variables you're working with and what the limitations of those types are.
String types
Strings, as we've also seen previously, are the variable types used in Python to represent text.
End of explanation
"""
x = "some string"
y = "another string"
z = x + " " + y
print(z)
"""
Explanation: Unlike numerical types like ints and floats, you can't really perform arithmetic operations on strings, with one exception:
End of explanation
"""
s = "2"
t = "divisor"
x = s / t
"""
Explanation: The + operator, when applied to strings, is called string concatenation. This means, quite literally, that it glues or concatenates two strings together to create a new string. In this case, we took the string in x, concatenated it to an empty space " ", and concatenated that again to the string in y, storing the whole thing in a final string z.
Other than the + operator, the other arithmetic operations aren't defined for strings, so I wouldn't recommend trying them...
End of explanation
"""
s = "2"
x = int(s)
print("x = {} and has type {}.".format(x, type(x)))
"""
Explanation: Casting, however, is alive and well with strings. In particular, if you know the string you're working with is a string representation of a number, you can cast it from a string to a numeric type:
End of explanation
"""
x = 2
s = str(x)
print("s = {} and has type {}.".format(s, type(s)))
"""
Explanation: And back again:
End of explanation
"""
s = "Some string with WORDS"
print(s.upper()) # make all the letters uppercase
print(s.lower()) # make all the letters lowercase
"""
Explanation: Strings also have some useful methods that numeric types don't for doing some basic text processing.
End of explanation
"""
s1 = " python "
s2 = " python"
s3 = "python "
"""
Explanation: A very useful method that will come in handy later in the course when we do some text processing is strip(). Often when you're reading text from a file and splitting it into tokens, you're left with strings that have leading or trailing whitespace:
End of explanation
"""
print("|" + s1.strip() + "|")
print("|" + s2.strip() + "|")
print("|" + s3.strip() + "|")
"""
Explanation: Anyone who looked at these three strings would say they're the same, but the whitespace before and after the word python in each of them results in Python treating them each as unique. Thankfully, we can use the strip method:
End of explanation
"""
s = "some string"
t = 'this also works'
"""
Explanation: You can also delimit strings using either single-quotes or double-quotes. Either is fine and largely depends on your preference.
End of explanation
"""
s = "some string"
len(s)
"""
Explanation: Python also has a built-in function len() that can be used to return the length of a string. The length is simply the number of individual characters (including any whitespace) in the string.
End of explanation
"""
x = 2
y = 2
x == y
"""
Explanation: Variable comparisons and Boolean types
We can also compare variables! By comparing variables, we can ask whether two things are equal, or greater than or less than some other value. This sort of true-or-false comparison gives rise to yet another type in Python: the boolean type. A variable of this type takes only two possible values: True or False.
Let's say we have two numeric variables, x and y, and want to check if they're equal. To do this, we use a variation of the assignment operator:
End of explanation
"""
s1 = "a string"
s2 = "a string"
s1 == s2
s3 = "another string"
s1 == s3
"""
Explanation: Hooray! The == sign is the equality comparison operator, and it will return True or False depending on whether or not the two values are exactly equal. This works for strings as well:
End of explanation
"""
x = 1
y = 2
x < y
x > y
"""
Explanation: We can also ask if variables are less than or greater than each other, using the < and > operators, respectively.
End of explanation
"""
x = 2
y = 3
x <= y
x = 3
x <= y
x = 3.00001
x <= y
"""
Explanation: In a small twist of relative magnitude comparisons, we can also ask if something is less than or equal to or greater than or equal to some other value. To do this, in addition to the comparison operators < or >, we also add an equal sign:
End of explanation
"""
s1 = "some string"
s2 = "another string"
s1 > s2
s1 = "Some string"
s1 > s2
"""
Explanation: Interestingly, these operators also work for strings. Be careful, though: their behavior may be somewhat unexpected until you figure out what trick is actually happening:
End of explanation
"""
# Adds two numbers that are initially strings by converting them to an int and a float,
# then converting the final result to an int and storing it in the variable x.
x = int(int("1345") + float("31.5"))
print(x)
"""
Explanation: Part 2: Variable naming conventions and documentation
There are some rules regarding what can and cannot be used as a variable name. Beyond those rules, there are guidelines.
Variable naming rules
Names can contain only letters, numbers, and underscores.
All the letters a-z (upper and lowercase), the numbers 0-9, and underscores are at your disposal. Anything else is illegal. No special characters like pound signs, dollar signs, or percents are allowed. Hashtag alphanumerics only.
Variable names can only start with letters or underscores.
Numbers cannot be the first character of a variable name. message_1 is a perfectly valid variable name; however, 1_message is not and will throw an error.
Spaces are not allowed in variable names.
Underscores are how Python programmers tend to "simulate" spaces in variable names, but simply put there's no way to name a variable with multiple words separated by spaces.
Avoid using Python keywords or function names as variables.
This might take some trial-and-error. Basically, if you try to name a variable print or float or str, you'll run into a lot of problems down the road. Technically this isn't outlawed in Python, but it will cause a lot of headaches later in your program.
Variable naming conventions
These are not hard-and-fast rules, but rather suggestions to help "standardize" code and make it easier to read by people who aren't necessarily familiar with the code you've written.
Make variable names short, but descriptive.
I've been giving a lot of examples using variables named x, s, and so forth. This is bad. Don't do it--unless, for example, you're defining x and y to be points in a 2D coordinate axis, or as a counter; one-letter variable names for counters are quite common.
Outside of those narrow use-cases, the variable names should constitute a pithy description that reflects their function in your program. A variable storing a name, for example, could be name or even student_name, but don't go as far as to use the_name_of_the_student.
Be careful with the lowercase l or uppercase O.
This is one of those annoying rules that largely only applies to one-letter variables: stay away from using letters that also bear striking resemblance to numbers. Naming your variable l or O may confuse downstream readers of your code, making them think you're sprinkling 1s and 0s throughout your code.
Variable names should be all lowercase, using underscores for multiple words.
Java programmers may take umbrage with this point: the convention there is to use camelCase for multi-word variable names. Since Python takes quite a bit from the C language (and its back-end is implemented in C), it also borrows a lot of C conventions, one of which is to use underscores and all lowercase letters in variable names.
The one exception to this rule is when you define variables that are constant; that is, their values don't change. In this case, the variable name is usually in all-caps. For example: PI = 3.14159.
Self-documenting code
The practice of pithy but precise variable naming strategies is known as "self-documenting code."
We've learned before that we can insert comments into our code to explain things that might otherwise be confusing:
End of explanation
"""
str_length = len("some string")
"""
Explanation: Comments are important to good coding style and should be used often for clarification. However, even more preferable to the liberal use of comments is a good variable naming convention. For instance, instead of naming a variable "x" or "y" or "c", give it a name that describes its purpose.
End of explanation
"""
x = 5
 x += 10  # the stray leading space makes this cell fail with an IndentationError
"""
Explanation: I could've used a comment to explain how this variable was storing the length of the string, but by naming the variable itself in terms of what it was doing, I don't even need such a comment. It's self-evident from the name itself what this variable is doing.
Part 3: Whitespace in Python
Whitespace (no, not that Whitespace) is important in the Python language. Some languages like C++ and Java use semi-colons to delineate the end of a single statement. Python, however, does not, but still needs some way to identify when we've reached the end of a statement.
In Python, it's the return key that denotes the end of a statement. Returns, tabs, and spaces are all collectively known as "whitespace", and each can drastically change how your Python program runs. Especially when we get into loops, conditionals, and functions, this will become critical and may be the source of many insidious bugs.
For example, the following code won't run:
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/ItemBasedCF.ipynb | mit | import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3), encoding="ISO-8859-1")
m_cols = ['movie_id', 'title']
movies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2), encoding="ISO-8859-1")
ratings = pd.merge(movies, ratings)
ratings.head()
"""
Explanation: Item-Based Collaborative Filtering
As before, we'll start by importing the MovieLens 100K data set into a pandas DataFrame:
End of explanation
"""
userRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')
userRatings.head()
"""
Explanation: Now we'll pivot this table to construct a nice matrix of users and the movies they rated. NaN indicates missing data, or movies that a given user did not watch:
End of explanation
"""
corrMatrix = userRatings.corr()
corrMatrix.head()
"""
Explanation: Now the magic happens - pandas has a built-in corr() method that will compute a correlation score for every column pair in the matrix! This gives us a correlation score between every pair of movies (where at least one user rated both movies - otherwise NaN's will show up.) That's amazing!
End of explanation
"""
corrMatrix = userRatings.corr(method='pearson', min_periods=100)
corrMatrix.head()
"""
Explanation: However, we want to avoid spurious results that happened from just a handful of users that happened to rate the same pair of movies. In order to restrict our results to movies that lots of people rated together - and also give us more popular results that are more easily recongnizable - we'll use the min_periods argument to throw out results where fewer than 100 users rated a given movie pair:
End of explanation
"""
myRatings = userRatings.loc[0].dropna()
myRatings
"""
Explanation: Now let's produce some movie recommendations for user ID 0, who I manually added to the data set as a test case. This guy really likes Star Wars and The Empire Strikes Back, but hated Gone with the Wind. I'll extract his ratings from the userRatings DataFrame, and use dropna() to get rid of missing data (leaving me only with a Series of the movies I actually rated:)
End of explanation
"""
simCandidates = pd.Series(dtype='float64')
for i in range(0, len(myRatings.index)):
    print ("Adding sims for " + myRatings.index[i] + "...")
    # Retrieve similar movies to this one that I rated
    sims = corrMatrix[myRatings.index[i]].dropna()
    # Now scale its similarity by how well I rated this movie
    sims = sims.map(lambda x: x * myRatings.iloc[i])
    # Add the score to the list of similarity candidates
    simCandidates = pd.concat([simCandidates, sims])
#Glance at our results so far:
print ("sorting...")
simCandidates.sort_values(inplace = True, ascending = False)
print (simCandidates.head(10))
"""
Explanation: Now, let's go through each movie I rated one at a time, and build up a list of possible recommendations based on the movies similar to the ones I rated.
So for each movie I rated, I'll retrieve the list of similar movies from our correlation matrix. I'll then scale those correlation scores by how well I rated the movie they are similar to, so movies similar to ones I liked count more than movies similar to ones I hated:
End of explanation
"""
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace = True, ascending = False)
simCandidates.head(10)
"""
Explanation: This is starting to look like something useful! Note that some of the same movies came up more than once, because they were similar to more than one movie I rated. We'll use groupby() to add together the scores from movies that show up more than once, so they'll count more:
End of explanation
"""
filteredSims = simCandidates.drop(myRatings.index)
filteredSims.head(10)
"""
Explanation: The last thing we have to do is filter out movies I've already rated, as recommending a movie I've already watched isn't helpful:
End of explanation
"""
|
olivierverdier/demo-notebooks | PageRank.ipynb | mit | A1 = array([
[0, 1, 0, 0, 0, 0 ],
[1, 0, 0, 0, 0, 1 ],
[0, 0, 0, 1/3, 1/2, 0 ],
[0, 0, 0, 0, 0, 0 ],
[0, 0, 0, 1/3, 0, 0 ],
[0, 0, 1, 1/3, 1/2, 0 ] ])
brus = 1/6*array([
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[1,1,1,1,1,1],
[1,1,1,1,1,1]])
A = .85*A1 + .15*brus
import networkx
G = networkx.from_numpy_matrix(A1.T, create_using=networkx.MultiDiGraph())
networkx.draw_networkx(G)
feil = 0.0001  # error tolerance
x = rand(6) # a randomly chosen vector x
x = x/norm(x) # now x has length 1
for i in range(200):
    Ax = dot(A,x) # because we need to keep two vectors around
    Ax = Ax/norm(Ax) # length 1 again
    if norm(Ax-x) < feil: # if we are close enough to an eigenvector
        break
    x = Ax
"""
Explanation: The method used to find an eigenvector is the power method.
The example matrix from the PageRank example; see section 13.3 of the compendium
End of explanation
"""
x = x/sum(x)
# now the values in x represent probabilities.
# These are the numbers PageRank uses to rank the web pages,
# from largest to smallest value.
bar(arange(len(x)),x)
"""
Explanation: Now x is (approximately) an eigenvector.
End of explanation
"""
|
amirfz/pinder | exploration/exploring_the_idea.ipynb | gpl-3.0 | client = MongoClient('localhost:27017')
db = client.arXivDB
db.arXivfeeds.count()
"""
Explanation: connecting to mongodb
End of explanation
"""
print(db.arXivfeeds.find_one().keys())
for item in db.arXivfeeds.find({'published_parsed': 2016}).sort('_id', pymongo.DESCENDING).limit(5):
print(item['title'])
#db.arXivfeeds.delete_many({})
"""
Explanation: retrieving the available fields as a reference
End of explanation
"""
def cleaner(doc, stem=False):
'''Function to clean the text data and prep for further analysis'''
doc = doc.lower() # turn text to lowercase
stops = set(stopwords.words("english")) # Creating a set of Stopwords
p_stemmer = PorterStemmer() # Creating the stemmer model
doc = re.sub(r"quantum", '', doc) # removing the word quantum (duh)
doc = re.sub(r"physics", '', doc) # removing the word physics (duh)
doc = re.sub(r"state", '', doc) # removing the word state (duh)
doc = re.sub(r'\$.*?\$', 'latexinlineformula', doc) # replacing latex inline formula
doc = re.sub(r'\\n', ' ', doc) # removing new line character
doc = re.sub(r'\\\\\"', '', doc) # removing german double dotted letters
doc = re.sub(r"</?\w+[^>]*>", '', doc) # removing html tags
doc = re.sub("[^a-zA-Z]", ' ', doc) # removing anythin other alpha-numerical char's and @ and !
doc = doc.split() # Splits the data into individual words
doc = [w for w in doc if not w in stops and len(w) > 3] # Removes stopwords and short length words
if stem:
doc = [p_stemmer.stem(i) for i in doc] # Stemming (reducing words to their root)
if not len(doc): # dealing with comments that are all emojis, stop words or other languages
doc = ['emptystring']
# print('text cleaning done!')
return ' '.join(doc)
cleaner(db.arXivfeeds.find_one({'published_parsed': 2016})['summary'])
"""
Explanation: build a field-specific stop word list
function for cleaning text
End of explanation
"""
def plot_abstract_and_title_wordcloud(arXivfeed_query_result):
arXivfeed_2015_text = cleaner(' '.join([' '.join(list(d.values())) for d in arXivfeed_query_result]))
# Generate a word cloud image
wordcloud_arXivfeed_2015 = WordCloud().generate(arXivfeed_2015_text)
# Display the generated image:
plt.imshow(wordcloud_arXivfeed_2015)
plt.axis("off")
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 1995}, {'_id':0,'title':1})))
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 2002}, {'_id':0,'title':1})))
plot_abstract_and_title_wordcloud(list(db.arXivfeeds.find({'published_parsed': 2015}, {'_id':0,'title':1})))
"""
Explanation: plotting the wordcloud for abstracts and titles from various years
End of explanation
"""
years = range(1994,2016,1)
num_publications_per_year = [db.arXivfeeds.find({'published_parsed': y}).count() for y in years]
plt.plot(years, num_publications_per_year)
"""
Explanation: plotting the number of publications per year
End of explanation
"""
pattern1 = r'[Pp]hoton\w*'
pattern2 = r'[Oo]ptic\w*'
set(re.findall(pattern2,
' '.join([' '.join(list(d.values())) for d in db.arXivfeeds.find({}, {'_id':0,'summary':1})])))
num_ph_papers = np.zeros(len(years))
for i, y in enumerate(years):
num_ph_papers[i] = db.arXivfeeds.find({'$and':[{'published_parsed': y},
{'$or':[
{'summary': {'$regex': pattern1}},
{'title': {'$regex': pattern1}},
{'summary': {'$regex': pattern2}},
{'title': {'$regex': pattern2}}
]}
]}).count()
plt.plot(years, num_ph_papers/num_publications_per_year)
"""
Explanation: This shows that the database is missing entries; judging from the total number of downloaded items, around 30,000 papers must be missing.
plotting the relative appearance of a few terms
End of explanation
"""
list(db.arXivfeeds.find({'published_parsed': 2016}).limit(1))[0]
"""
Explanation: plotting the world map of author affiliations
there is no affiliation in the parsed data
End of explanation
"""
import nltk
import gensim
import pyLDAvis
import pyLDAvis.gensim
documents = [cleaner(d['summary']) for d in db.arXivfeeds.find({'published_parsed': 2010}, {'_id':0, 'summary':1})]
# documents = [cleaner(d['summary']) for d in db.arXivfeeds.find({}, {'_id':0, 'summary':1})]
train_set = []
for j in range(len(documents)):
train_set.append(nltk.word_tokenize(documents[j]))
dic = gensim.corpora.Dictionary(train_set)
print(len(dic))
dic.filter_extremes(no_below=20, no_above=0.1)
print(len(dic))
corpus = [dic.doc2bow(text) for text in train_set] # transform every token into BOW
tfidf = gensim.models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
lda = gensim.models.LdaModel(corpus_tfidf, id2word = dic, num_topics = 10, iterations=20, passes = 10)
corpus_lda = lda[corpus_tfidf]
vis_data = pyLDAvis.gensim.prepare(lda, corpus, dic)
pyLDAvis.display(vis_data)
"""
Explanation: using LDA to map out the topics
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nasa-giss/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? Horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
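# The fill-in pattern used throughout these cells can be sketched with a
# minimal stand-in for the ES-DOC "DOC" helper. This is illustrative only:
# the real object is provided by pyesdoc and validates entries against the
# CMIP6 controlled vocabulary; MockDoc and the sample values are assumptions.

```python
# Illustrative stand-in for the ES-DOC "DOC" helper -- NOT the real pyesdoc API.
# It only mimics the set_id / set_value pattern shown in these cells.
class MockDoc:
    def __init__(self):
        self.values = {}       # property id -> list of entered values
        self._current = None   # property id selected by the last set_id call

    def set_id(self, property_id):
        self._current = property_id
        self.values.setdefault(property_id, [])

    def set_value(self, value):
        # Properties with Cardinality 1.N take repeated set_value calls
        self.values[self._current].append(value)

DOC = MockDoc()
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
DOC.set_value("primitive equations")   # sample choices from the list above
DOC.set_value("hydrostatic")
```

# For a Cardinality 1.1 property, a single set_value call would be made instead.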
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
anonyXmous/CapstoneProject | sliderule_dsi_xml_exercise.ipynb | unlicense | from xml.etree import ElementTree as ET
"""
Explanation: XML example and exercise
study examples of accessing nodes in XML tree structure
work on exercise to be completed and submitted
reference: https://docs.python.org/2.7/library/xml.etree.elementtree.html
data source: http://www.dbis.informatik.uni-goettingen.de/Mondial
End of explanation
"""
document_tree = ET.parse( 'data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print (child.find('name').text)
# print names of all countries and their cities
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
    print('* ' + element.find('name').text + ':')
    capitals_string = ''
    # getiterator() was deprecated and later removed; iter() is the supported API
    for subelement in element.iter('city'):
        capitals_string += subelement.find('name').text + ', '
    print(capitals_string[:-2])
"""
Explanation: XML example
for details about tree traversal and iterators, see https://docs.python.org/2.7/library/xml.etree.elementtree.html
End of explanation
"""
document = ET.parse( './data/mondial_database.xml' )
"""
Explanation: XML exercise
Using data in 'data/mondial_database.xml', the examples above, and refering to https://docs.python.org/2.7/library/xml.etree.elementtree.html, find
10 countries with the lowest infant mortality rates
10 cities with the largest population
10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
name and country of a) longest river, b) largest lake and c) airport at highest elevation
End of explanation
"""
import pandas as pd
# gather all data into a list
root = document.getroot()
df = []
for country in root.findall('country'):
infmort = country.find('infant_mortality')
cname = country.find('name')
    if infmort is not None:
        df.append([cname.text, pd.to_numeric(infmort.text)])
# convert into a dataframe
df = pd.DataFrame(df)
# assign column names
df.columns = ['name', 'infant_mortality']
# sort by infant mortality, ascending=True
pd.options.display.float_format = '{:,.2f}'.format
df = df.sort_values(['infant_mortality'], ascending=[1]).head(10).reset_index(drop=True)
df
"""
Explanation: 10 countries with the lowest infant mortality rates
End of explanation
"""
# gather population tags per country (excluding per city/province)
df = []
for country_elem in document.getroot():
    country = country_elem.find('name').text
    for child in country_elem:
        if country_elem.tag == 'country' and child.tag == 'population':
            yr = child.attrib['year']
            pop = child.text
            df.append([country, yr, pd.to_numeric(pop)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country', 'year', 'population'])
# sort by population, ascending=False
df.population = df.population.astype(float)
pd.options.display.float_format = '{:,.0f}'.format
print(df.sort_values(['population'], ascending=[0]).head(10).reset_index(drop=True))
"""
Explanation: 10 cities with the largest population
End of explanation
"""
df_eth_cnt = []
for country_elem in document.getroot():
    country = country_elem.find('name').text
    df_eth = []
    for child in country_elem:
        if country_elem.tag == 'country' and child.tag == 'population':
            # population tags are ordered by year, so this keeps the latest one
            yr = child.attrib['year']
            pop = child.text
        if country_elem.tag == 'country' and child.tag == 'ethnicgroup':
            percent = pd.to_numeric(child.attrib['percentage'])
            ethnic = child.text
            df_eth.append([percent, ethnic])
    # scale each ethnic group's percentage by the country's latest population
    for k in range(len(df_eth)):
        df_eth_cnt.append([df_eth[k][1], df_eth[k][0] * (pd.to_numeric(pop) / 100)])
df = pd.DataFrame(df_eth_cnt, columns = ['ethnicgroup', 'population'])
print(df.sort_values(['population'], ascending=[0]).head(10).reset_index(drop=True))
"""
Explanation: 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
End of explanation
"""
# gather all data into a list
root = document.getroot()
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
df = []
for river in root.findall('river'):
    length = river.find('length')
    rivername = river.find('name')
    if length is not None:
        df.append([river.attrib['country'], rivername.text, pd.to_numeric(length.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country(ies)', 'river_name', 'length'])
# print answer
df = df.sort_values(['length'], ascending=[0]).head(1).reset_index(drop=True)
# extract country names
cntry = []
for i in df['country(ies)'].str.split():
for j in range(len(i)):
cntry.append(dfUniq.loc[dfUniq['country_code'] == i[j]])
# concatenate names into a single space-separated string
# (use a descriptive variable rather than shadowing the built-in str)
names = ''
for k in range(len(cntry)):
    names += cntry[k]['country_name'].values[0] + ' '
# remove trailing space
names = names.rstrip()
# update country name
df['country(ies)'] = names
# display answer
df
"""
Explanation: name and country of a) longest river
End of explanation
"""
# gather all data into a list
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
root = document.getroot()
df = []
for lake in root.findall('lake'):
    area = lake.find('area')
    lakename = lake.find('name')
    if area is not None:
        df.append([lake.attrib['country'], lakename.text, pd.to_numeric(area.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country(ies)', 'lake_name', 'area'])
# get largest lake by area
df = df.sort_values(['area'], ascending=[0]).head(1).reset_index(drop=True)
# extract country names
cntry = []
for i in df['country(ies)'].str.split():
for j in range(len(i)):
cntry.append(dfUniq.loc[dfUniq['country_code'] == i[j]])
# concatenate names into a single space-separated string
# (use a descriptive variable rather than shadowing the built-in str)
names = ''
for k in range(len(cntry)):
    names += cntry[k]['country_name'].values[0] + ' '
# remove trailing space
names = names.rstrip()
# update country name
df['country(ies)'] = names
# display answer
df
"""
Explanation: name and country of b) largest lake
End of explanation
"""
# create a new dataframe with unique country code and name
root = document.getroot()
dfcntry = []
for country in root.findall('country'):
dfcntry.append([country.attrib['car_code'], country.find('name').text])
dfcntry = pd.DataFrame(dfcntry, columns=['country_code','country_name'])
dfUniq = dfcntry[(dfcntry.country_name != '')].drop_duplicates('country_code')
dfUniq = pd.DataFrame(dfUniq, columns=['country_code','country_name'])
# gather all data into a list
df = []
for airport in root.findall('airport'):
    elev = airport.find('elevation')
    airportname = airport.find('name')
    # guard against airports with no elevation element or an empty one
    if elev is not None and elev.text is not None:
        df.append([airport.attrib['country'], airportname.text, pd.to_numeric(elev.text)])
# convert into a dataframe
df = pd.DataFrame(df, columns = ['country_code', 'airport_name', 'elevation'])
# get country with highest elevation
df = df.sort_values(['elevation'], ascending=[0]).head(1).reset_index(drop=True)
# locate the country names from list of unique country codes
dfCntry = dfUniq.loc[dfUniq['country_code'] == df['country_code'].values[0]]
# merge two dataframes, print answer
df = pd.merge(dfCntry, df, on='country_code', how='outer')
# i don't want to display country code
del df['country_code']
df
"""
Explanation: name and country of c) airport at highest elevation
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/fio-ronm/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, specify the functions on which snow albedo depends
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
ray-project/ray | doc/source/tune/examples/hebo_example.ipynb | apache-2.0
# !pip install ray[tune]
!pip install HEBO==0.3.2
"""
Explanation: Running Tune experiments with HEBOSearch
In this tutorial we introduce HEBO while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with HEBO and, as a result, allow you to seamlessly scale up a HEBO optimization process - without sacrificing performance.
Heteroscedastic Evolutionary Bayesian Optimization (HEBO) does not rely on the gradient of the objective function; instead, it learns from samples of the search space. It is suitable for optimizing functions that are non-differentiable, have many local minima, or are even unknown but testable. The algorithm therefore belongs to the domain of "derivative-free optimization" and "black-box optimization".
In this example we minimize a simple objective to briefly demonstrate the usage of HEBO with Ray Tune via HEBOSearch. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the HEBO==0.3.2 library is installed. To learn more, please refer to the HEBO website.
End of explanation
"""
import time
import ray
from ray import tune
from ray.tune.suggest.hebo import HEBOSearch
"""
Explanation: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
End of explanation
"""
def evaluate(step, width, height, activation):
time.sleep(0.1)
activation_boost = 10 if activation=="relu" else 1
return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
"""
Explanation: Let's start by defining a simple evaluation function.
We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment.
This setup assumes that we're running multiple steps of an experiment and try to tune two hyperparameters,
namely width and height, and activation.
End of explanation
"""
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"], config["activation"])
tune.report(iterations=step, mean_loss=score)
ray.init(configure_logging=False)
"""
Explanation: Next, our objective function takes a Tune config, evaluates the score of your experiment in a training loop,
and uses tune.report to report the score back to Tune.
End of explanation
"""
previously_run_params = [
{"width": 10, "height": 0, "activation": "relu"},
{"width": 15, "height": -20, "activation": "tanh"},
]
known_rewards = [-189, -1144]
max_concurrent = 8
algo = HEBOSearch(
metric="mean_loss",
mode="min",
points_to_evaluate=previously_run_params,
evaluated_rewards=known_rewards,
random_state_seed=123,
max_concurrent=max_concurrent,
)
"""
Explanation: While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and
pass this information as a helpful starting point for the HEBOSearch object.
We also set the maximum concurrent trials to 8.
End of explanation
"""
num_samples = 1000
# If 1000 samples take too long, you can reduce this number.
# We override this number here for our smoke tests.
num_samples = 10
"""
Explanation: The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples.
(you can decrease this if it takes too long on your machine).
End of explanation
"""
search_config = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu, tanh"])
}
"""
Explanation: Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
End of explanation
"""
analysis = tune.run(
objective,
metric="mean_loss",
mode="min",
name="hebo_exp_with_warmstart",
search_alg=algo,
num_samples=num_samples,
config=search_config
)
"""
Explanation: Finally, we run the experiment to "min"imize the "mean_loss" of the objective by searching search_config via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().
End of explanation
"""
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()
"""
Explanation: Here are the hyperparameters found to minimize the mean loss of the defined objective.
End of explanation
"""
|
irazhur/StatisticalMethods | examples/SDSScatalog/CorrFunc.ipynb | gpl-2.0
%load_ext autoreload
%autoreload 2
import numpy as np
import SDSS
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import copy
# We want to select galaxies, and then are only interested in their positions on the sky.
data = pd.read_csv("downloads/SDSSobjects.csv",usecols=['ra','dec','u','g',\
'r','i','size'])
# Filter out objects with bad magnitude or size measurements:
data = data[(data['u'] > 0) & (data['g'] > 0) & (data['r'] > 0) & (data['i'] > 0) & (data['size'] > 0)]
# Make size cuts, to exclude stars and nearby galaxies, and magnitude cuts, to get good galaxy detections:
data = data[(data['size'] > 0.8) & (data['size'] < 10.0) & (data['i'] > 17) & (data['i'] < 22)]
# Drop the things we're not so interested in:
del data['u'], data['g'], data['r'], data['i'],data['size']
data.head()
Ngals = len(data)
ramin,ramax = np.min(data['ra']),np.max(data['ra'])
decmin,decmax = np.min(data['dec']),np.max(data['dec'])
print Ngals,"galaxy-like objects in (ra,dec) range (",ramin,":",ramax,",",decmin,":",decmax,")"
"""
Explanation: "Spatial Clustering" - the Galaxy Correlation Function
The degree to which objects' positions are correlated with each other - "clustered" - is of great interest in astronomy.
We expect galaxies to appear in groups and clusters, as they fall together under gravity: the statistics of galaxy clustering should contain information about galaxy evolution during hierarchical structure formation.
Let's try and measure a clustering signal in our SDSS photometric object catalog.
End of explanation
"""
# !pip install --upgrade TreeCorr
"""
Explanation: The Correlation Function
The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies.
The simplest possible estimator for this excess probability is just
$\hat{\xi}(\theta) = \frac{DD - RR}{RR}$,
where $DD(\theta) = N_{\rm pairs}(\theta) / [N_D(N_D-1)/2]$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\rm pairs}(\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\theta$. $RR(\theta)$ is the same quantity computed in a "random catalog," covering the same field of view but with uniformly randomly distributed positions.
Correlations between mock galaxies distributed uniformly at random over the survey "footprint" help account for spurious effects in the correlation function that might arise from weird survey area design.
We'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.
End of explanation
"""
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : (180./np.pi)*np.arcsin(np.random.uniform(np.sin(decmin*np.pi/180.0), np.sin(decmax*np.pi/180.),Ngals))})
print len(random), type(random)
"""
Explanation: Random Catalogs
First we'll need a random catalog. Let's make it the same size as the data one.
While this may not be needed for the small field in this example, let's generate random points that are uniformly distributed on a patch of the sphere.
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')
ax[0].set_xlabel('RA / deg')
ax[0].set_ylabel('Dec. / deg')
data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')
ax[1].set_xlabel('RA / deg')
ax[1].set_ylabel('Dec. / deg')
"""
Explanation: Now let's plot both catalogs, and compare.
End of explanation
"""
import treecorr
random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')
data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')
# Set up some correlation function estimator objects:
sep_units='arcmin'
min_sep=0.5
max_sep=10.0
N = 7
bin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N)
dd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
rr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
# Process the data:
dd.process(data_cat)
rr.process(random_cat)
# Combine into a correlation function and its variance:
xi, varxi = dd.calculateXi(rr)
plt.figure(figsize=(15,8))
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2)
# plt.xscale('log')
plt.xlabel('$\\theta / {\\rm arcmin}$',fontsize=20)
plt.ylabel('$\\xi(\\theta)$',fontsize=20)
plt.ylim([-0.1,0.2])
plt.grid(True)
"""
Explanation: Estimating $\xi(\theta)$
End of explanation
"""
|
ajdawson/python_for_climate_scientists | course_content/f2py-example/f2py_example.ipynb | gpl-3.0
import lanczos1
print(dir(lanczos1))
lanczos1.dfiltrq?
"""
Explanation: Calling Fortran code from Python: f2py
f2py
The program f2py is supplied with numpy. It wraps Fortran code into an extension module, allowing the Fortran code to be called directly from Python.
The quick way
First we'll build a Python module from our Fortran code in the simplest way possible using
f2py -c lanczos1.f -m lanczos1
This command will create a Python extension module named lanczos1 giving us access to all the subroutines defined in the fortran source file lanczos1.f.
Let's check this worked by importing the generated module and looking inside it to see what it contains:
End of explanation
"""
import numpy as np
# Filter parameters:
nwt = 115
fca = 0.1
fcb = 0.3
nsigma = 1
ihp = 2
# Output arrays:
wt = np.empty([nwt])
resp = np.empty([2 * nwt - 1])
freq = np.empty([2 * nwt - 1])
ier = 0
# Call the Fortran routine:
lanczos1.dfiltrq(fca, fcb, nsigma, ihp, wt, resp, freq, ier)
"""
Explanation: We'll define some basic filter parameters to test this code; note that we have to pre-allocate the arrays that will store the output of dfiltrq:
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot(freq, resp)
plt.show()
"""
Explanation: Let's plot the frequency response of the filter, to check that the code is actually working:
End of explanation
"""
import lanczos2
print(dir(lanczos2))
lanczos2.dfiltrq?
"""
Explanation: Using this procedure it was simple to generate the Python code from the Fortran, but the result is somewhat difficult to use. We are having to use the Fortran style of pre-allocating our output arrays and passing them into the function, and having it populate them. We are using Python, and we can do better!
The smart way
We'll take a slightly different approach now and split the process up into two stages. First we'll generate an interface file using f2py:
f2py lanczos1.f -m lanczos2 -h lanczos2.pyf
This will generate a text file lanczos2.pyf that we can review and edit before continuing to compile the module.
We can edit this file to simplify the Python module that will be generated by:
removing the subroutine dfilwtq which we don't want
specifying the intent of the arguments correctly
We can generate the Python module from the interface using the command:
f2py -c lanczos2.pyf lanczos1.f
This will generate a Python module named lanczos2.
Let's check this worked by importing the generated module and looking inside it to see what it contains:
End of explanation
"""
wt, resp, freq, ier = lanczos2.dfiltrq(nwt, fca, fcb, nsigma, ihp)
"""
Explanation: Now that we have simplified the interface we can call the function in a nicer (more Pythonic) way where we pass inputs as arguments and receive outputs as the function's return value:
End of explanation
"""
plt.plot(freq, resp)
plt.show()
"""
Explanation: We can quickly check if the output is as expected by plotting the frequency response:
End of explanation
"""
import lanczos3
print(dir(lanczos3))
lanczos3.dfiltrq?
"""
Explanation: The result of this method is much more pleasing to use, although it required some awkward fiddling with the interface file. Finally we'll demonstrate a method that combines the simplicity of the first, with the desirable result of the second.
The quick and smart way
For the third method we'll edit the source code by inserting special comments that f2py will read and understand:
cf2py intent(in) nwt, fca, fcb, nsigma, ihp
cf2py intent(out) wt, resp, freq, ier
We can generate an interface file from the modified source code in the same way as before. This time we'll include the extra option only: dfiltrq to tell f2py to only generate an interface for the procedure we want:
f2py lanczos3.f -m lanczos3 -h lanczos3.pyf \
only: dfiltrq
Note how this interface file looks the same as our hand-modified one from method 2 (lanczos2.pyf), but we didn't have to edit it at all, the addition of 2 comments to the source code allowed the interface to be generated as we wanted.
In fact, we only generated the interface file here to demonstrate that it is correct. Now that we have declared our argument intents in the source we could have used the quick method to generate a Python module straight away:
f2py -c lanczos3.f -m lanczos3 only: dfiltrq
Finally we can verify that the generated module is as we expect:
End of explanation
"""
wt, resp, freq, ier = lanczos3.dfiltrq(nwt, fca, fcb, nsigma, ihp)
plt.plot(freq, resp)
plt.show()
"""
Explanation: Plot the frequency response one last time to verify the solution is still correct:
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_algo/td1a_correction_session7.ipynb | mit
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.algo - Dynamic programming and shortest path (answer key)
Answer key.
End of explanation
"""
import pyensae.datasource
pyensae.datasource.download_data("matrix_distance_7398.zip", website = "xd")
import pandas
df = pandas.read_csv("matrix_distance_7398.txt", sep="\t", header=None, names=["v1","v2","distance"])
matrice = df.values
matrice[:5]
"""
Explanation: We retrieve the file matrix_distance_7398.txt from matrix_distance_7398.zip, which contains distances between various cities (though not all pairs).
End of explanation
"""
vil = { }
for row in matrice :
vil [row[0]] = 0
vil [row[1]] = 1
vil = list(vil.keys())
print (len(vil))
"""
Explanation: Exercise 1
End of explanation
"""
dist = { }
for row in matrice :
a = row[0]
b = row[1]
dist[a,b] = dist[b,a] = row[2]
print (len(dist))
print ( dist["Charleville-Mezieres","Bordeaux"] ) # elle n'existe pas encore
"""
Explanation: Exercise 2
The distance does not exist yet, as the exception raised by the short program shows: reaching Bordeaux from Charleville requires several stages.
End of explanation
"""
d = { }
d['Charleville-Mezieres'] = 0
for v in vil : d[v] = 1e10
for v,w in dist :
if v == 'Charleville-Mezieres':
d[w] = dist[v,w]
print(len(d))
"""
Explanation: Exercise 3
We can easily fill in all the entries corresponding to cities directly linked to Charleville-Mézières, that is, all the cities reachable in one step.
End of explanation
"""
for v,w in dist :
d2 = d[v] + dist[v,w]
if d2 < d[w] :
d[w] = d2
print ( d["Bordeaux"] )
"""
Explanation: Exercise 4
If we discover that $d[w] > d[v] + dist[v,w]$, it means the table $d$ must be updated because it does not yet contain the optimal distance. We repeat this for every pair $(v,w)$.
End of explanation
"""
for i in range(0,len(d)) :
for v,w in dist :
d2 = d[v] + dist[v,w]
if d2 < d[w] :
d[w] = d2
print ( d["Bordeaux"] )
"""
Explanation: We find 813197 metres for the distance (Charleville-Mezieres, Bordeaux). This is not necessarily the best value. To be sure, the same iteration must be repeated as many times as there are cities (since the longest possible path contains as many steps as there are cities).
End of explanation
"""
import random
skieurs = [ random.gauss(1.75, 0.1) for i in range(0,10) ]
paires = [ random.gauss(1.75, 0.1) for i in range(0,15) ]
skieurs.sort()
paires.sort()
p = { }
p [-1,-1] = 0
for n,taille in enumerate(skieurs) : p[n,-1] = p[n-1,-1] + taille
for m,paire in enumerate(paires ) : p[-1,m] = 0
for n,taille in enumerate(skieurs) :
for m,paire in enumerate(paires) :
p1 = p.get ( (n ,m-1), 1e10 )
p2 = p.get ( (n-1,m-1), 1e10 ) + abs(taille - paire)
p[n,m] = min(p1,p2)
print (p[len(skieurs)-1,len(paires)-1])
"""
Explanation: Optional exercise
To show that the suggested algorithm produces the optimal solution, we must show that no order other than skiers and pairs sorted by increasing height needs to be considered. This does not mean that another order cannot be optimal; it means that, in order to obtain the pairing of optimal cost, there exists a solution in which both skiers and skis are arranged in that order.
We therefore consider a pairing $\sigma$ that assigns skier $t_i$ to the pair $s_{\sigma(i)}$. It suffices to show that:
$\forall i,j, \; t_i \leqslant t_j \Longleftrightarrow s_{\sigma(i)} \leqslant s_{\sigma(j)}$
To show this, we reason by contradiction: for arbitrary $i$ and $j$, suppose there exists an optimal pairing such that $t_i \geqslant t_j$ and $s_{\sigma(i)} < s_{\sigma(j)}$. The cost $C(\sigma)$ of this pairing is:
$C(\sigma) = \sum_{k=1}^{N} \left| t_k - s_{\sigma(k)} \right| = \alpha + \left| t_i - s_{\sigma(i)} \right| + \left| t_j - s_{\sigma(j)} \right|$
The cost of the pairing obtained by swapping skiers $i$ and $j$ (that is, putting them back in increasing order) is:
$C(\sigma') = \alpha + \left| t_j - s_{\sigma(i)} \right| + \left| t_i - s_{\sigma(j)} \right|$
We compute:
$C(\sigma) - C(\sigma') = \left| t_i - s_{\sigma(i)} \right| + \left| t_j - s_{\sigma(j)} \right| - \left| t_j - s_{\sigma(i)} \right| - \left| t_i - s_{\sigma(j)} \right|$
First case: $t_j \geqslant s_{\sigma(i)}$, hence $t_i > t_j \geqslant s_{\sigma(i)}$, and:
$\begin{array}{rcl} C(\sigma) - C(\sigma') &=& \left| t_i - s_{\sigma(i)} \right| + \left| t_j - s_{\sigma(j)} \right| - \left( t_j - s_{\sigma(j)} + s_{\sigma(j)} - s_{\sigma(i)} \right) - \left| t_i - s_{\sigma(j)} \right| \\ &=& t_i - s_{\sigma(i)} + \left| t_j - s_{\sigma(j)} \right| - \left( t_j - s_{\sigma(j)} \right) - \left( s_{\sigma(j)} - s_{\sigma(i)} \right) - \left| t_i - s_{\sigma(j)} \right| \\ &=& t_i - s_{\sigma(j)} - \left| t_i - s_{\sigma(j)} \right| + \left| t_j - s_{\sigma(j)} \right| - \left( t_j - s_{\sigma(j)} \right) \\ &=& \left| t_j - s_{\sigma(j)} \right| - \left( t_j - s_{\sigma(j)} \right) \\ &\geqslant& 0 \end{array}$
Second case: $t_j \leqslant s_{\sigma(i)}$, hence $t_j \leqslant s_{\sigma(i)} \leqslant s_{\sigma(j)}$, and:
$\begin{array}{rcl} C(\sigma) - C(\sigma') &=& \left| t_i - s_{\sigma(i)} \right| + s_{\sigma(j)} - t_j - \left( s_{\sigma(i)} - t_j \right) - \left| t_i - s_{\sigma(j)} \right| \\ &=& \left| t_i - s_{\sigma(i)} \right| + s_{\sigma(j)} - s_{\sigma(i)} - \left| t_i - s_{\sigma(j)} \right| \\ &\geqslant& \left| t_i - s_{\sigma(j)}\right| - \left| s_{\sigma(j)} - s_{\sigma(i)} \right| + s_{\sigma(j)} - s_{\sigma(i)} - \left| t_i - s_{\sigma(j)} \right| \\ &\geqslant& 0 \end{array}$
In both cases we show that there exists a better or equivalent pairing obtained by swapping the two skiers $i$ and $j$, i.e. by sorting them in increasing order of height. We have therefore shown that, if the pairs of skis are sorted in increasing order of size, there necessarily exists an optimal pairing in which the skiers are also sorted in increasing order. When searching for this optimal pairing, we can restrict ourselves to these configurations.
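This exchange inequality is also easy to check numerically. A quick brute-force sketch (the helper name is ours, not part of the exercise):

```python
import random

def exchange_gap(ti, tj, a, b):
    # cost of the crossed pairing minus cost of the sorted pairing,
    # for skier heights ti >= tj and ski sizes a <= b
    crossed = abs(ti - a) + abs(tj - b)   # taller skier on the smaller pair
    sorted_ = abs(ti - b) + abs(tj - a)   # taller skier on the larger pair
    return crossed - sorted_

random.seed(0)
for _ in range(10000):
    tj, ti = sorted(random.uniform(1.5, 2.0) for _ in range(2))
    a, b = sorted(random.uniform(1.5, 2.0) for _ in range(2))
    assert exchange_gap(ti, tj, a, b) >= 0   # swapping towards sorted order never hurts
```

The gap is always non-negative because $t \mapsto |t-a| - |t-b|$ is non-decreasing, which is exactly the case analysis carried out above.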
Exercise 5
$p(n,m) = \min \left\{ p(n-1,m-1) + \left| t_n - s_m \right|, \; p(n,m-1) \right\}$
When considering the best pairing of the pairs $1..m$ with the skiers $1..n$, there are only two possible choices for pair $m$:
either it is assigned to no skier, in which case: $p(n,m) = p(n,m-1)$,
or it is assigned to skier $n$ (and to no other): $p(n,m) = p(n-1,m-1) + \left| t_n - s_m \right|$.
Exercise 6
End of explanation
"""
from pyquickhelper.helpgen import NbImage
NbImage('graph_notebook_ski.png')
"""
Explanation: Exercise 7
Think of $p$ as a matrix in which, at each step, we take the better of two paths:
Horizontal path: pair $m$ is not chosen.
Diagonal path: pair $m$ is chosen for skier $n$.
End of explanation
"""
p = { }
p [-1,-1] = 0
best = { }
for n,taille in enumerate(skieurs) : p[n,-1] = p[n-1,-1] + taille
for m,paire in enumerate(paires ) : p[-1,m] = 0
for n,taille in enumerate(skieurs) :
    for m,paire in enumerate(paires) :
        p1 = p.get ( (n ,m-1), 1e10 )
        p2 = p.get ( (n-1,m-1), 1e10 ) + abs(taille - paire)
        p[n,m] = min(p1,p2)
        if p[n,m] == p1 : best [n,m] = n,m-1
        else : best [n,m] = n-1,m-1
print (p[len(skieurs)-1,len(paires)-1])
chemin = [ ]
pos = len(skieurs)-1,len(paires)-1
while pos in best :
    print (pos)
    chemin.append(pos)
    pos = best[pos]
chemin.reverse()
print (chemin)
"""
Explanation: \xymatrix{ & m-1 & m \\ n-1 & p(n-1,m-1) \ar[dr]^{ + \left| t_n - s_m \right|} & p(n-1,m) \\ n & p(n,m-1) \ar[r] & p(n,m) \\ }
End of explanation
"""
import pyensae.datasource  # requires pyensae >= 0.8
files = pyensae.datasource.download_data("facebook.tar.gz",website="http://snap.stanford.edu/data/")
import pandas
df = pandas.read_csv("facebook/1912.edges", sep=" ", names=["v1","v2"])
print(df.shape)
df.head()
"""
Explanation: Exercise 8
Both algorithms have quadratic cost.
Extensions: degrees of separation on Facebook
End of explanation
"""
|
Atzingen/curso-IoT-2017 | aula-03-python/Introducao-Python-01.ipynb | mit | print "Hello Python 2.7 !"
"""
Explanation: Introduction to the Python language (part 1)
Notebook for the IoT course - IFSP Piracicaba
Gustavo Voltani von Atzingen
Python - version 2.7
This notebook contains an introduction to the basic Python commands.
The following topics will be covered:
Print
Comments
Variable assignment and types
Working with strings (part 1)
Lists (part 1)
Control structures (if-elif-else, for, while)
Functions
Using modules (import)
<i>Print</i>
To "print" some text on the screen, Python (version 2.7) has a reserved word called <i>print</i>.
End of explanation
"""
print 'Part 1 - ', ' The answer is: ', 42
"""
Explanation: We can also print several strings or numbers, separating them with ','
End of explanation
"""
print 'The sensor readings are {} Volts and {} Volts'.format(4.2, 1.68)
"""
Explanation: We can insert (numeric) variables in the middle of the text using the <i>.format</i> method
End of explanation
"""
# this is a comment line
"""
Explanation: <i>Comments</i>
Comments are inserted into the program using the '#' character, and the whole line is then ignored by the interpreter
End of explanation
"""
''' This is a comment block
All lines in this block are ignored
by the interpreter
'''
"""
Explanation: To make a comment block (several lines), use ''' at the beginning and ''' at the end of the block
End of explanation
"""
a = 42      # The variable a receives an integer
b = 1.68    # Real (floating point) variable
c = 'texto' # Text (string)
print a, b, c
"""
Explanation: <i> Variable assignment </i>
In Python, variables are not explicitly declared. The interpreter performs the assignment at runtime.
The types of structure used by the interpreter are:
* Numbers (number) - integer or real
* Strings (string)
* Lists (list)
* Tuples (tuple)
* Dictionaries (dictionary)
End of explanation
"""
a = 1.3
print 'value of a before: ', a
a = 'texto'
print 'value of a after: ', a
"""
Explanation: Variables can change their type during execution (at runtime)
End of explanation
"""
a, b = 1, 1
print a, b
a, b = b, a + b
print a, b
"""
Explanation: Variables can be assigned simultaneously. This can be done to simplify the code and to avoid creating temporary variables
End of explanation
"""
nome = 'Gustavo'  # This is a string
nome = "Joao"     # This is also a string
letra = 'a'       # Strings can also hold a single character
"""
Explanation: <i> Strings </i>
Strings can be created using ' or " (single or double quotes)
End of explanation
"""
nome = 'Gustavo Voltani von Atzingen'
print nome[0], nome[1], nome[8]  # Indexing starts at zero and goes up to the last value
nome = 'Gustavo Voltani von Atzingen'
print nome[-1], nome[-2]  # There is also indexing from the end to the beginning,
# with negative numbers starting at 1
nome = 'Gustavo Voltani von Atzingen'
print nome[8:15]  # We can take part of the string (a slice) this way
print nome[20:]   # From position 20 to the end
print nome[:7]    # From the beginning up to position 6
"""
Explanation: We can use indexing to access elements of the string or parts of it
End of explanation
"""
nome = 'Gustavo Voltani von Atzingen'
print nome.split(' ')  # splitting the name on the blank space
"""
Explanation: There are several methods that can be applied to a string. The <i>split</i> method splits the string on a specified character. Other methods will be covered in later lessons.
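A quick preview of a few of those other methods (illustrative values only):

```python
texto = 'internet of things'
print(texto.upper())              # INTERNET OF THINGS
print(texto.replace(' ', '_'))    # internet_of_things
print('-'.join(['a', 'b', 'c']))  # a-b-c
print(texto.find('of'))           # 9 -> index where 'of' starts
```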
End of explanation
"""
lista = ['texto1', 'texto2', 'texto3', 'texto4']
print lista
# we can also have several types in the same list
lista = [42, 'texto2', 1.68, 'texto4']
print lista
# we can also have a list inside another list
lista = [ [42, 54, 1.7], 'texto2', 1.68, 'texto4']
print lista
# The list is also indexed and can be sliced in the same way
# as was done with strings
lista = [42, 34, 78, 1, 91, 1, 34]
print lista[0], lista[-1], lista[2:5]
"""
Explanation: <i> Lists </i>
Lists are ordered sequences of objects (which can be strings, numbers, lists or other objects)
End of explanation
"""
a = 4
if a < 1:
    print 'a is less than 1'
elif a < 3:
    print 'a is less than 3 and greater than or equal to 1'
elif a < 5:
    print 'a is less than 5 and greater than or equal to 3'
else:
    print 'a is greater than or equal to 5'
"""
Explanation: <i> Control structures: if </i>
End of explanation
"""
nome = 'gustavo'
for letra in nome:
    print letra
lista = ['texto1', 'texto2', 'texto3', 'texto4']
for item in lista:
    print item
# If we want a repetition with a numeric count, we can
# use the range() function or others that will be shown later
# Prints the numbers from 0 to 9
for i in range(10):
    print i
# if we want to number the elements of a list we can use the enumerate function
lista = ['texto1', 'texto2', 'texto3', 'texto4']
for indice, item in enumerate(lista):
    print indice, item
"""
Explanation: <i> Control structures: for </i>
for is a control structure that iterates over a list or a string
End of explanation
"""
contador = 0
while contador < 5:
    print contador
    contador += 1
"""
Explanation: <i> Control structures: while </i>
Repeats until the condition becomes false
End of explanation
"""
def somador(a, b):
    return a + b
somador(1, 2)
def separa_por_espao(texto):
    if ' ' in texto:
        return texto.split(' ')
    else:
        return None
nome1, nome2 = separa_por_espao('nome1 nome2')
print nome1, nome2
# functions can have keyword arguments
def soma(a, b=1):
    return a + b
print soma(1,2)
print soma(1)
"""
Explanation: <i> Functions </i>
Functions are written with the word <i> def </i> and the function name, together with
the arguments.
A function may (or may not) return one or more objects.
End of explanation
"""
import datetime
tempo_atual = datetime.datetime.now()
print tempo_atual.hour, tempo_atual.minute, tempo_atual.second
from datetime import datetime as d
tempo_atual = d.now()
print tempo_atual.hour, tempo_atual.minute, tempo_atual.second
"""
Explanation: <i> Modules and importing </i>
End of explanation
"""
|
smousavi05/EQTransformer | docs/source/downloading.ipynb | mit | from EQTransformer.utils.downloader import makeStationList, downloadMseeds
"""
Explanation: Downloading Continuous Data
This notebook demonstrates the use of EQTransformer for downloading continuous data from seismic networks.
End of explanation
"""
help(makeStationList)
"""
Explanation: You can use help() to learn about the input parameters of each function. For instance:
End of explanation
"""
MINLAT=35.50
MAXLAT=35.60
MINLON=-117.80
MAXLON=-117.40
STIME="2019-09-01 00:00:00.00"
ETIME="2019-09-02 00:00:00.00"
"""
Explanation: 1) Finding the available stations
Defining the location and time period of interest:
End of explanation
"""
CHANLIST=["HH[ZNE]", "HH[Z21]", "BH[ZNE]", "EH[ZNE]", "SH[ZNE]", "HN[ZNE]", "HN[Z21]", "DP[ZNE]"]
"""
Explanation: You can limit the data types of interest (e.g. broadband, short period, or strong motion):
End of explanation
"""
makeStationList(client_list=["SCEDC"],
min_lat=MINLAT,
max_lat=MAXLAT,
min_lon=MINLON,
max_lon=MAXLON,
start_time=STIME,
end_time=ETIME,
channel_list=CHANLIST,
filter_network=["SY"],
filter_station=[])
"""
Explanation: This will download the information on the stations that are available based on your search criteria. You can filter out the networks or stations that you are not interested in; the name of the appropriate client for your request can be found here:
End of explanation
"""
downloadMseeds(client_list=["SCEDC", "IRIS"],
stations_json='station_list.json',
output_dir="downloads_mseeds",
start_time=STIME,
end_time=ETIME,
min_lat=MINLAT,
max_lat=MAXLAT,
min_lon=MINLON,
max_lon=MAXLON,
chunk_size=1,
channel_list=[],
n_processor=2)
"""
Explanation: A JSON file ("station_list.json") should have been created in your current directory. This contains information for the available stations (i.e. 4 stations in this case). Next, you can download the data for the available stations using the following function. This may take a few minutes.
2) Downloading the data
You can define multiple clients as the source:
End of explanation
"""
|
simpeg/tutorials | notebooks/fundamentals/pixels_and_neighbors/mesh.ipynb | mit | %matplotlib inline
import numpy as np
from SimPEG import Mesh, Utils
import matplotlib.pyplot as plt
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
"""
Explanation: The Mesh: Where do things live?
<img src="images/FiniteVolume.png" width=70% align="center">
<h4 align="center">Figure 3. Anatomy of a finite volume cell.</h4>
To bring our continuous equations into the computer, we need to discretize the earth and represent it using a finite(!) set of numbers. In this tutorial we will explain the discretization in 2D and generalize to 3D in later notebooks. A 2D (or 3D!) mesh is used to divide up space, and we can represent functions (fields, parameters, etc.) on this mesh at a few discrete places: the nodes, edges, faces, or cell centers. For consistency between 2D and 3D we refer to faces having area and cells having volume, regardless of their dimensionality. Nodes and cell centers naturally hold scalar quantities while edges and faces have implied directionality and therefore naturally describe vectors. The conductivity, $\sigma$, changes as a function of space, and is likely to have discontinuities (e.g. if we cross a geologic boundary). As such, we will represent the conductivity as a constant over each cell, and discretize it at the center of the cell. The electrical current density, $\vec{j}$, will be continuous across conductivity interfaces, and therefore, we will represent it on the faces of each cell. Remember that $\vec{j}$ is a vector; the direction of it is implied by the mesh definition (i.e. in $x$, $y$ or $z$), so we can store the array $\bf{j}$ as scalars that live on the face and inherit the face's normal. When $\vec{j}$ is defined on the faces of a cell the potential, $\phi$, will be put on the cell centers (since $\vec{j}$ is related to $\phi$ through spatial derivatives, it allows us to approximate centered derivatives leading to a staggered, second-order discretization).
Implementation
End of explanation
"""
# Plot a simple tensor mesh
hx = np.r_[2., 1., 1., 2.] # cell widths in the x-direction
hy = np.r_[2., 1., 1., 1., 2.] # cell widths in the y-direction
mesh2D = Mesh.TensorMesh([hx,hy]) # construct a simple SimPEG mesh
mesh2D.plotGrid(nodes=True, faces=True, centers=True) # plot it!
# This can similarly be extended to 3D (this is a simple 2-cell mesh)
hx = np.r_[2., 2.] # cell widths in the x-direction
hy = np.r_[2.] # cell widths in the y-direction
hz = np.r_[1.] # cell widths in the z-direction
mesh3D = Mesh.TensorMesh([hx,hy,hz]) # construct a simple SimPEG mesh
mesh3D.plotGrid(nodes=True, faces=True, centers=True) # plot it!
"""
Explanation: Create a Mesh
A mesh is used to divide up space, here we will use SimPEG's mesh class to define a simple tensor mesh. By "Tensor Mesh" we mean that the mesh can be completely defined by the tensor products of vectors in each dimension; for a 2D mesh, we require one vector describing the cell widths in the x-direction and another describing the cell widths in the y-direction.
Here, we define and plot a simple 2D mesh using SimPEG's mesh class. The cell boundaries are shown in blue, cell centers as red dots and cell faces as green arrows (pointing in the positive x, y directions). Cell nodes are plotted as blue squares.
End of explanation
"""
# Construct a simple 2D, uniform mesh on a unit square
mesh = Mesh.TensorMesh([10, 8])
mesh.plotGrid()
"The mesh has {nC} cells and {nF} faces".format(nC=mesh.nC, nF=mesh.nF)
# Sometimes you need properties in each dimension
("In the x dimension we have {vnCx} cells. This is because our mesh is {vnCx} x {vnCy}.").format(
vnCx=mesh.vnC[0],
vnCy=mesh.vnC[1]
)
# Similarly, we need to keep track of the faces, we have face grids in both the x, and y
# directions.
("Faces are vectors so the number of faces pointing in the x direction is {nFx} = {vnFx0} x {vnFx1} "
"In the y direction we have {nFy} = {vnFy0} x {vnFy1} faces").format(
nFx=mesh.nFx,
vnFx0=mesh.vnFx[0],
vnFx1=mesh.vnFx[1],
nFy=mesh.nFy,
vnFy0=mesh.vnFy[0],
vnFy1=mesh.vnFy[1]
)
"""
Explanation: Counting things on the Mesh
Once we have defined the vectors necessary for constructing the mesh, there are a number of properties that are often useful, including keeping track of the
- number of cells: mesh.nC
- number of cells in each dimension: mesh.vnC
- number of faces: mesh.nF
- number of x-faces: mesh.nFx (and in each dimension mesh.vnFx ...)
and the list goes on. Check out SimPEG's mesh documentation for more.
End of explanation
"""
# On a uniform mesh, not surprisingly, the cell volumes are all the same
plt.colorbar(mesh.plotImage(mesh.vol, grid=True)[0])
plt.title('Cell Volumes');
# All cell volumes are defined by the product of the cell widths
assert (np.all(mesh.vol == 1./mesh.vnC[0] * 1./mesh.vnC[1])) # all cells have the same volume on a uniform, unit cell mesh
print("The cell volume is the product of the cell widths in the x and y dimensions: "
"{hx} x {hy} = {vol} ".format(
hx = 1./mesh.vnC[0], # we are using a uniform, unit square mesh
hy = 1./mesh.vnC[1],
vol = mesh.vol[0]
)
)
# Similarly, all x-faces should have the same area, equal to that of the length in the y-direction
assert np.all(mesh.area[:mesh.nFx] == 1.0/mesh.nCy) # because our domain is a unit square
# and all y-faces have an "area" equal to the length in the x-dimension
assert np.all(mesh.area[mesh.nFx:] == 1.0/mesh.nCx)
print(
"The area of the x-faces is {xFaceArea} and the area of the y-faces is {yFaceArea}".format(
xFaceArea=mesh.area[0],
yFaceArea=mesh.area[mesh.nFx]
)
)
mesh.plotGrid(faces=True)
# On a non-uniform tensor mesh, the first mesh we defined, the cell volumes vary
# hx = np.r_[2., 1., 1., 2.] # cell widths in the x-direction
# hy = np.r_[2., 1., 1., 1., 2.] # cell widths in the y-direction
# mesh2D = Mesh.TensorMesh([hx,hy]) # construct a simple SimPEG mesh
plt.colorbar(mesh2D.plotImage(mesh2D.vol, grid=True)[0])
plt.title('Cell Volumes');
"""
Explanation: Simple properties of the mesh
There are a few things that we will need to know about the mesh and each of it's cells, including the
- cell volume: mesh.vol,
- face area: mesh.area.
For consistency between 2D and 3D we refer to faces having area and cells having volume, regardless of their dimensionality.
End of explanation
"""
from SimPEG.Utils import mkvc
mesh = Mesh.TensorMesh([3,4])
vec = np.arange(mesh.nC)
row_major = vec.reshape(mesh.vnC, order='C')
print('Row major ordering (standard python)')
print(row_major)
col_major = vec.reshape(mesh.vnC, order='F')
print('\nColumn major ordering (what we want!)')
print(col_major)
# mkvc unwraps using column major ordering, so we expect
assert np.all(mkvc(col_major) == vec)
print('\nWe get back the expected vector using mkvc: {vec}'.format(vec=mkvc(col_major)))
"""
Explanation: Grids and Putting things on a mesh
When storing and working with features of the mesh such as cell volumes and face areas in a linear algebra sense, it is useful to think of them as vectors... so the way we unwrap them is super important.
Most importantly we want some compatibility with <a href="https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Compatibility_with_Kronecker_products">Kronecker products</a> as we will see later! This actually leads to us thinking about unwrapping our vectors column first. This column major ordering is inspired by linear algebra conventions which are the standard in Matlab, Fortran, Julia, but sadly not Python. To make your life a bit easier, you can use our MakeVector mkvc function from Utils.
End of explanation
"""
# gridCC
"The cell centered grid is {gridCCshape0} x {gridCCshape1} since we have {nC} cells in the mesh and it is {dim} dimensions".format(
gridCCshape0=mesh.gridCC.shape[0],
gridCCshape1=mesh.gridCC.shape[1],
nC=mesh.nC,
dim=mesh.dim
)
# The first column is the x-locations, and the second the y-locations
mesh.plotGrid()
plt.plot(mesh.gridCC[:,0], mesh.gridCC[:,1],'ro')
# gridFx
"Similarly, the x-Face grid is {gridFxshape0} x {gridFxshape1} since we have {nFx} x-faces in the mesh and it is {dim} dimensions".format(
gridFxshape0=mesh.gridFx.shape[0],
gridFxshape1=mesh.gridFx.shape[1],
nFx=mesh.nFx,
dim=mesh.dim
)
mesh.plotGrid()
plt.plot(mesh.gridCC[:,0], mesh.gridCC[:,1],'ro')
plt.plot(mesh.gridFx[:,0], mesh.gridFx[:,1],'g>')
"""
Explanation: Grids on the Mesh
When defining where things are located, we need the spatial locations of where we are discretizing different aspects of the mesh. A SimPEG Mesh has several grids. In particular, here it is handy to look at the
- Cell centered grid: mesh.gridCC
- x-Face grid: mesh.gridFx
- y-Face grid: mesh.gridFy
End of explanation
"""
mesh = Mesh.TensorMesh([100, 80]) # setup a mesh on which to solve
# model parameters
sigma_background = 1. # Conductivity of the background, S/m
sigma_block = 10. # Conductivity of the block, S/m
# add a block to our model
x_block = np.r_[0.4, 0.6]
y_block = np.r_[0.4, 0.6]
# assign them on the mesh
sigma = sigma_background * np.ones(mesh.nC) # create a physical property model
block_indices = ((mesh.gridCC[:,0] >= x_block[0]) & # left boundary
(mesh.gridCC[:,0] <= x_block[1]) & # right boudary
(mesh.gridCC[:,1] >= y_block[0]) & # bottom boundary
(mesh.gridCC[:,1] <= y_block[1])) # top boundary
# add the block to the physical property model
sigma[block_indices] = sigma_block
# plot it!
plt.colorbar(mesh.plotImage(sigma)[0])
plt.title('electrical conductivity, $\sigma$')
"""
Explanation: Putting a Model on a Mesh
In index.ipynb, we constructed a model of a block in a whole-space, here we revisit it having defined the elements of the mesh we are using.
End of explanation
"""
|
scraperwiki/databaker | databaker/tutorial/Finding_your_way.ipynb | agpl-3.0 |
# Load in the functions
from databaker.framework import *
# Load the spreadsheet
tabs = loadxlstabs("example1.xls")
# Select the first table
tab = tabs[0]
print("The unordered bag of cells for this table looks like:")
print(tab)
"""
Explanation: Opening and previewing
This uses the tiny excel spreadsheet example1.xls. It is small enough to preview inline in this notebook. But for bigger spreadsheet tables you will want to open them up in a separate window.
End of explanation
"""
# Preview the table as a table inline
savepreviewhtml(tab)
bb = tab.is_bold()
print("The cells with bold font are", bb)
print("The", len(bb), "cells immediately below these bold font cells are", bb.shift(DOWN))
cc = tab.filter("Cars")
print("The single cell with the text 'Cars' is", cc)
cc.assert_one() # proves there is only one cell in this bag
print("Everything in the column below the 'Cars' cell is", cc.fill(DOWN))
hcc = tab.filter("Cars").expand(DOWN)
print("If you wanted to include the 'Cars' heading, then use expand", hcc)
print("You can print the cells in row-column order if you don't mind unfriendly code")
shcc = sorted(hcc.unordered_cells, key=lambda Cell:(Cell.y, Cell.x))
print(shcc)
print("It can be easier to see the set of cells coloured within the table")
savepreviewhtml(hcc)
"""
Explanation: Selecting cell bags
A table is also "bag of cells", which just so happens to be a set of all the cells in the table.
A "bag of cells" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table.
We will learn these as we go along, but you can see the full list on the tutorial_reference notebook.
End of explanation
"""
"All the cells that have an 'o' in them:", tab.regex(".*?o")
"""
Explanation: Note: As you work through this tutorial, do please feel free to temporarily insert new Jupyter-Cells in order to give yourself a place to experiment with any of the functions that are available. (Remember, the value of the last line in a Jupyter-Cell is always printed out -- in addition to any earlier print-statements.)
End of explanation
"""
# We get the array of observations by selecting its corner and expanding down and to the right
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)
savepreviewhtml(obs)
# the two main headings are in a row and a column
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
# here we pass in a list containing two cell bags and get two colours
savepreviewhtml([r1, r2])
# HDim is made from a bag of cells, a name, and an instruction on how to look it up
# from an observation cell.
h1 = HDim(r1, "Vehicles", DIRECTLY, ABOVE)
# Here is an example cell
cc = tab.excel_ref('C5')
# You can preview a dimension as well as just a cell bag
savepreviewhtml([h1, cc])
# !!! This is the important look-up stage from a cell into a dimension
print("Cell", cc, "matches", h1.cellvalobs(cc), "in dimension", h1.label)
# You can start to see through to the final result of all this work when you
# print out the lookup values for every observation in the table at once.
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
"""
Explanation: Observations and dimensions
Let's get on with some actual work. In our terminology, an "Observation" is a numerical measure (eg anything in the 3x4 array of numbers in the example table), and a "Dimension" is one of the headings.
Both are made up of a bag of cells, however a Dimension also needs to know how to "look up" from the Observation to its dimensional value.
End of explanation
"""
# You can change an output value like this:
h1.AddCellValueOverride("Cars", "Horses")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# Alternatively, you can override by the reference to a single cell to a value
# (This will work even if the cell C3 is empty, which helps with filling in blank headings)
h1.AddCellValueOverride(tab.excel_ref('C3'), "Submarines")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# You can override the header value for an individual observation element.
b4cell = tab.excel_ref('B4')
h1.AddCellValueOverride(b4cell, "Clouds")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# The preview table shows how things have changed
savepreviewhtml([h1, obs])
wob = tab.excel_ref('A1')
print("Wrong-Obs", wob, "maps to", h1.cellvalobs(wob), " <--- ie Nothing")
h1.AddCellValueOverride(None, "Who knows?")
print("After giving a default value Wrong-Obs", wob, "now maps to", h1.cellvalobs(wob))
# The default even works if the cell bag set is empty. In which case we have a special
# constant case that maps every observation to the same value
h3 = HDimConst("Category", "Beatles")
for ob in obs:
print("Obs", ob, "maps to", h3.cellvalobs(ob))
"""
Explanation: Note that the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is because we can over-ride its output value without actually rewriting the original table, as we shall see.
End of explanation
"""
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)
savepreviewhtml(c1)
# If the table is too big, we can preview it in another file that can be opened in another browser window.
# (It's very useful if you are using two computer screens.)
savepreviewhtml(c1, "preview.html", verbose=False)
print("Looking up all the observations against all the dimensions and print them out")
for ob in c1.segment:
print(c1.lookupobs(ob))
df = c1.topandas()
df
"""
Explanation: Conversion segments and output
A ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once.
You can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show how they look up.
End of explanation
"""
print(writetechnicalCSV(None, c1))
# This is how to write to a file
writetechnicalCSV("exampleWDA.csv", c1)
# We can read this file back in to a list of pandas dataframes
dfs = readtechnicalCSV("exampleWDA.csv")
print(dfs[0])
"""
Explanation: WDA Technical CSV
The ONS uses their own data system for publishing their time-series data known as WDA.
If you need to output to it, then this next section is for you.
The function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]). The format is very verbose because it repeats each dimension name and its value twice in each row, and every row begins with the following list of column entries, whether or not they exist.
observation, data_marking, statistical_unit_eng, statistical_unit_cym, measure_type_eng, measure_type_cym, observation_type, obs_type_value, unit_multiplier, unit_of_measure_eng, unit_of_measure_cym, confidentuality, geographic_area
The writetechnicalCSV() function accepts a single conversion segment, a list of conversion segments, or equivalently a pandas dataframe.
End of explanation
"""
# See that the `2014` no longer ends with `.0`
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
c1.topandas()
"""
Explanation: Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.
You will note that the TIME column above is 2014.0 when it really should be 2014 with the TIMEUNIT set to Year.
By setting it to True the ConversionSegment object will identify the timeunit from the value of the TIME column and then force its format to conform.
End of explanation
"""
|
udibr/flavours-of-physics | sPlot.ipynb | mit | import numpy as np
%matplotlib inline
from matplotlib import pylab as plt
import pandas as pd
import evaluation
folder = '../inputs/'
agreement = pd.read_csv(folder + 'check_agreement.csv', index_col='id')
"""
Explanation: In the kaggle flavours of physics competition the admins wanted to test
if the predictions that are originally made for $\tau$ decay are the same for simulated decay events and real decay events. This test was performed on decays of D particles, for which both kinds of samples exist.
The plan is to generate a histogram of the predictions for the real and the simulated events and see if they match using the KS score.
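That matching step can be sketched as a weighted two-sample KS distance, i.e. the maximum gap between two weighted empirical CDFs (a minimal reimplementation of ours; the competition's evaluation module may differ in its details):

```python
import numpy as np

def weighted_ks(x1, w1, x2, w2):
    # maximum distance between two weighted empirical CDFs
    grid = np.sort(np.concatenate([x1, x2]))
    def ecdf(x, w):
        order = np.argsort(x)
        cum = np.concatenate([[0.0], np.cumsum(w[order]) / np.sum(w)])
        return cum[np.searchsorted(x[order], grid, side='right')]
    return np.max(np.abs(ecdf(x1, w1) - ecdf(x2, w2)))
```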
The only problem is that the real decay events are mixed with background events. A method called sPlot is used to give each real event a different weight that measures how likely the event is to be a decay, based on the mass measurement of the particle in the event. This method only works if the predictions are not correlated with the mass, and a separate test is performed for that.
This notebook attempts to reproduce how the weights were computed.
For more info visit the forum.
End of explanation
"""
agreement_mc = agreement[agreement.signal == 1]
agreement_real = agreement[agreement.signal == 0]
"""
Explanation: The agreement set is made from Monte Carlo D decays and from real data, which contains both signal and background D decays
End of explanation
"""
assert np.all(agreement_mc.weight == 1);
"""
Explanation: We want to take our control variable $x$, which in our case is the prediction probability for $\tau \rightarrow \mu \mu \mu$ decay, and accumulate it into two histograms: one for real D decays and one for simulated D decays.
For the agreement_mc we know that all events are signal and we should use the same weight for all of them when building the histogram. Therefore they all have the same weight.
End of explanation
"""
agreement_real.weight.hist(bins=50)
"""
Explanation: But we don't know exactly which real events are D decays and which are background. Instead we will use a different weight for each event.
A high value indicates a D decay and a low value a background event.
End of explanation
"""
N = len(agreement_real)
N1f = 0.85
N1 = int(N1f*N)
N2 = N - N1
N,N1,N2
"""
Explanation: The weights are computed by using the mass of the particle as the discriminating variable to tell whether an event is a D decay or background. The weights are computed using sPlot:
for $N$ real events, with mass $y_e$ ($e=1 \ldots N$),
assume you know the mass distribution for background $f_1$ and D decay $f_2$.
Use maximium liklihood to find the yield or the expected number of background $N_1$ and D decay $N_2$ events.
$\mathcal{L} = \sum_{e=1}^N \log{{\sum_{i=1}^{N_s} N_i f_i(y_e)}} - \sum_{i=1}^{N_s} N_i$
(there are only two species, $N_s = 2$, in our case.)
The naive weight would have been:
$\mathcal{P}n(y_e) = \frac{N_n f_n(y_e)}{\sum{k=1}^{N_s} N_k f_k(y_e)}$
(in our case we are interested in building a histogram for the D decay events, so $n = 2$.)
But the correct weight uses the expected covariance matrix $\textbf{V}$
$\textbf{V}{nj}^{-1} = \sum{e=1}^N \frac{f_n(y_e) f_j(y_e)}{(\sum_{k=1}^{N_s} {N_k f_k(y_e)})^2}$
and the correct weights (called sWeight) are computed as follows
${s}\mathcal{P}_n(y_e) = \frac{\sum{j=1}^{N_s} \textbf{V}{nj} f_j(y_e)}{\sum{k=1}^{N_s} N_k f_k(y_e)}$
Example
As an example showing how the weights were computed, let's use a set of imaginary values that looks more or less like what the admins used to compute their weights.
Let's skip the maximum likelihood step and assume we know how many decay (N2) and background (N1) events are in the real data.
End of explanation
"""
D_mean = 3852*0.511
D_std = 8.233
"""
Explanation: We need to know in advance the mass distribution of decay (f2) and background (f1) events.
The decay has a normal mass distribution with a mean and std:
End of explanation
"""
R = 4
xmin = D_mean - R* D_std
xmax = D_mean + R*D_std
Range = xmax-xmin
BW = 2.43 * D_std
pdip = 0.22
"""
Explanation: The background is uniformly distributed over 4 sigma on either side of the mean, with a dip over the central 2.43 sigma. The dip falls by 22%.
End of explanation
"""
Dsignal = np.random.normal(D_mean,D_std,N2)
"""
Explanation: simulate decay events
End of explanation
"""
Dbackground_above = (Range/2. - BW) * np.random.random(N1) + BW + D_mean
Dbackground_below = D_mean - (Range/2. - BW) * np.random.random(N1) - BW
Dbackground_out = np.choose(np.random.random(N1) < 0.5,[Dbackground_below, Dbackground_above])
# Dbackground_in = BW * (2. * (np.random.random(N1)-0.5)) + D_mean
# Dbackground = np.choose(np.random.random(N1) < pdip,[Dbackground_out, Dbackground_in])
Dbackground_norm = np.random.normal(0.,D_std,N1)
Dbackground_norm = np.choose(Dbackground_norm < 0.,
[Dbackground_norm + D_mean - BW,
D_mean + BW + Dbackground_norm])
Dbackground_flat = 2.*BW * np.random.random(N1) - BW + D_mean
Dbackground_in = np.choose(np.random.random(N1) < pdip,[Dbackground_flat, Dbackground_norm])
# at +/- BW the pdf from the left is S=1./(Range - 2.*BW)
# from the right it is U = (1-pdip)/(2.*BW) + pdip*stats.norm(0, D_std).pdf(0)
# We want S*Q = U*(1-Q)
U = (1-pdip)/(2.*BW) + pdip / (np.sqrt(2.*np.pi)*D_std)
S = 1./(Range - 2.*BW)
Q = U / (S + U)
Dbackground = np.choose(np.random.random(N1) > Q ,[Dbackground_out, Dbackground_in])
def f_f1(y):
n = len(y)
r = (1.-pdip)/(2.*BW)
r += pdip * np.choose(y < D_mean + BW, [0.,stats.norm(D_mean+BW, D_std).pdf(y)])
r += pdip * np.choose(y > D_mean - BW, [0.,stats.norm(D_mean-BW, D_std).pdf(y)])
r = np.choose(np.abs(y - D_mean) > BW, [(1-Q)*r, Q/(Range - 2.*BW)])
return r
from scipy import stats
B = 200
support = np.linspace(xmin, xmax, B)
plt.hist(Dsignal, bins=B, label='f2')
plt.hist(Dbackground, bins=B, label='f1',alpha=0.5)
plt.plot(support,Range/B * N1*f_f1(support))
plt.legend(loc='upper left')
plt.gca().set_xlim((xmin, xmax));
"""
Explanation: simulate background events
End of explanation
"""
Vinv = 0
y = np.concatenate((Dbackground, Dsignal))
f1 = f_f1(y)
f2 = stats.norm(D_mean, D_std).pdf(y)
Z = N1 * f1 + N2 * f2
Z.shape,f1.shape,f2.shape
Vinv = np.array([[f1/Z * f1/Z, f1/Z * f2/Z],[f1/Z * f2/Z, f2/Z*f2/Z]]).sum(axis=-1)
V = np.linalg.inv(Vinv)
V
sW = np.dot(V, np.array([f1, f2])) / Z
sW.shape
"""
Explanation: compute weights of the simulated decay and background events
End of explanation
"""
agreement_real.weight.hist(bins=50, label='real')
plt.hist(sW[1,:], label='example',bins=50,alpha=0.5)
plt.legend(loc='upper right')
"""
Explanation: generate a histogram of the decay weights from the simulated decay and background events and compare it with the histogram of the weights given by the admin
End of explanation
"""
N_support = len(support)
f1_support = f_f1(support)
f2_support = stats.norm(D_mean, D_std).pdf(support)
Z_support = N1 * f1_support + N2 * f2_support
Z_support.shape
sW_support = np.dot(V, np.array([f1_support, f2_support])) / Z_support
sW_support.shape
plt.plot(support,sW_support[0,:], label='background')
plt.plot(support,sW_support[1,:], label='signal')
plt.legend(loc='upper left')
plt.title('sPlot weights')
plt.gca().set_xlim((xmin, xmax))
"""
Explanation: compute the sPlot weights from f1 and f2 for signal and background over a linearly spaced range of masses (support)
End of explanation
"""
|
paolorivas/homeworkfoundations | homeworkdata/Homework_3_Paolo_Rivas_Legua.ipynb | mit | from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
"""
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
"""
h3_tags = document.find_all('h3')
print(type(h3_tags))
print([tag.string for tag in h3_tags])
len([tag.string for tag in h3_tags])
"""
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
"""
telephone_checkup = document.find('a', attrs={'class': 'tel'})
[tag.string for tag in telephone_checkup]
"""
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
"""
all_widget = document.find_all('td', attrs={'class': 'wname'})
#USING FOR LOOP
for widget in all_widget:
widget2 = widget.string
print(widget2)
print("##########")
# Using a list comprehension
raw_widget = [tag.string for tag in all_widget]
raw_widget
"""
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
"""
widgets = []
# your code here
search_table = document.find_all('tr', attrs={'class': 'winfo'})
for new_key in search_table:
diccionaries = {}
partno_tag = new_key.find('td', attrs={'class': 'partno'})
price_tag = new_key.find('td', attrs={'class': 'price'})
quantity_tag = new_key.find('td', attrs={'class': 'quantity'})
widget_tag = new_key.find('td', attrs={'class': 'wname'})
diccionaries['partno'] = partno_tag.string
diccionaries['price'] = price_tag.string
diccionaries['quantity'] = quantity_tag.string
diccionaries['widget'] = widget_tag.string
widgets.append(diccionaries)
widgets
# end your code
#test
widgets[5]['partno']
"""
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
"""
widgets = []
# your code here
search_table = document.find_all('tr', attrs={'class': 'winfo'})
for new_key in search_table:
diccionaries = {}
partno_tag = new_key.find('td', attrs={'class': 'partno'})
price_tag = new_key.find('td', attrs={'class': 'price'})
quantity_tag = new_key.find('td', attrs={'class': 'quantity'})
widget_tag = new_key.find('td', attrs={'class': 'wname'})
diccionaries['partno'] = partno_tag.string
diccionaries['price'] = float(price_tag.string[1:])
diccionaries['quantity'] = int(quantity_tag.string)
diccionaries['widget'] = widget_tag.string
widgets.append(diccionaries)
widgets
#widgets
# end your code
"""
Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
[{'partno': 'C1-9476',
'price': 2.7,
'quantity': 512,
'widgetname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': 9.36,
'quantity': 967,
'widgetname': 'Widget For Furtiveness'},
... some items omitted ...
{'partno': '5B-941/F',
'price': 13.26,
'quantity': 919,
'widgetname': 'Widget For Cinema'}]
(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)
End of explanation
"""
new_list = []
for items in widgets:
new_list.append(items['quantity'])
sum(new_list)
"""
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
"""
for widget in widgets:
if widget['price'] > 9.30:
print(widget['widget'])
"""
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
"""
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
"""
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
"""
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
"""
for h3_tag in document.find_all('h3'):
if "Hallowed widgets" in h3_tag:
table = h3_tag.find_next_sibling('table', {'class': 'widgetlist'})
partno = table.find_all('td', {'class': 'partno'})
for x in partno:
print(x.string)
"""
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
"""
category_counts = {}
doc = document.find_all('h3')
for h3_tag in doc:
    category_name = h3_tag.string
    table = h3_tag.find_next_sibling('table', {'class': 'widgetlist'})
    partno = table.find_all('td', {'class': 'partno'})
    category_counts[category_name] = len(partno)
category_counts
"""
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation
"""
|
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/Sampling and Labeling.ipynb | bsd-3-clause | # Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
path_A = datasets_dir + os.sep + 'DBLP.csv'
path_B = datasets_dir + os.sep + 'ACM.csv'
path_C = datasets_dir + os.sep + 'tableC.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
C = em.read_csv_metadata(path_C, key='_id',
fk_ltable='ltable_id', fk_rtable='rtable_id',
ltable=A, rtable=B)
C.head()
len(C)
"""
Explanation: Introduction
This IPython notebook illustrates how to sample and label a table (candidate set).
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
S = em.sample_table(C, 450)
"""
Explanation: Sample Candidate Set
From the candidate set, a sample (for labeling purposes) can be obtained like this:
End of explanation
"""
# Label the sampled set
# Specify the name for the label column
G = em.label_table(S, 'gold_label')
"""
Explanation: Label the Sampled Set
End of explanation
"""
# Assume that we have labeled the data and stored it in
# labeled_data_demo.csv
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'
G = em.read_csv_metadata(path_labeled_data, key='_id',
fk_ltable='ltable_id', fk_rtable='rtable_id',
ltable=A, rtable=B)
G.head()
"""
Explanation: The user must specify 0 for non-match and 1 for match. Typically, the sampling and labeling steps are done in iterations (until we get a sufficient density of matches). Once labeled, the labeled data set will look like this:
End of explanation
"""
|
kirichoi/tellurium | examples/notebooks/core/tesedmlExample.ipynb | apache-2.0 | from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
import phrasedml
antimony_str = '''
model myModel
S1 -> S2; k1*S1
S1 = 10; S2 = 0
k1 = 1
end
'''
phrasedml_str = '''
model1 = model "myModel"
sim1 = simulate uniform(0, 5, 100)
task1 = run sim1 on model1
plot "Figure 1" time vs S1, S2
'''
# create the sedml xml string from the phrasedml
sbml_str = te.antimonyToSBML(antimony_str)
phrasedml.setReferencedSBML("myModel", sbml_str)
sedml_str = phrasedml.convertString(phrasedml_str)
if sedml_str is None:
print(phrasedml.getLastPhrasedError())
print(sedml_str)
"""
Explanation: Back to the main Index
Working with SED-ML
SED-ML describes how to run a set of simulations on a model encoded in SBML or CellML through specifying tasks, algorithm parameters, and post-processing. SED-ML has a limited vocabulary of simulation types (timecourse and steady state) is not designed to replace scripting with Python or other general-purpose languages. Instead, SED-ML is designed to provide a rudimentary way to reproduce the dynamics of a model across different tools. This process would otherwise require human intervention and becomes very laborious when thousands of models are involved.
The basic elements of a SED-ML document are:
Models, which reference external SBML/CellML files or other previously defined models within the same SED-ML document,
Simulations, which reference specific numerical solvers from the KiSAO ontology,
Tasks, which apply a simulation to a model, and
Outputs, which can be plots or reports.
Models in SED-ML essentially create instances of SBML/CellML models, and each instance can have different parameters.
Tellurium's approach to handling SED-ML is to first convert the SED-ML document to a Python script, which contains all the Tellurium-specific function calls to run all tasks described in the SED-ML. For authoring SED-ML, Tellurium uses PhraSEDML, a human-readable analog of SED-ML. Example syntax is shown below.
SED-ML files are not very useful in isolation. Since SED-ML always references external SBML and CellML files, software which supports exchanging SED-ML files should use COMBINE archives, which package all related standards-encoded files together.
Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language. Waltemath D., Adams R., Bergmann F.T., Hucka M., Kolpakov F., Miller A.K., Moraru I.I., Nickerson D., Snoep J.L.,Le Novère, N. BMC Systems Biology 2011, 5:198 (http://www.pubmed.org/22172142)
Creating a SED-ML file
This example shows how to use PhraSEDML to author SED-ML files. Whenever a PhraSEDML script references an external model, you should use phrasedml.setReferencedSBML to ensure that the PhraSEDML script can be properly converted into a SED-ML file.
End of explanation
"""
import tempfile, os, shutil
workingDir = tempfile.mkdtemp(suffix="_sedml")
sbml_file = os.path.join(workingDir, 'myModel')
sedml_file = os.path.join(workingDir, 'sed_main.xml')
with open(sbml_file, 'wb') as f:
f.write(sbml_str.encode('utf-8'))
f.flush()
print('SBML written to temporary file')
with open(sedml_file, 'wb') as f:
f.write(sedml_str.encode('utf-8'))
f.flush()
print('SED-ML written to temporary file')
# For technical reasons, any software which uses libSEDML
# must provide a custom build - Tellurium uses tesedml
import tesedml as libsedml
sedml_doc = libsedml.readSedML(sedml_file)
n_errors = sedml_doc.getErrorLog().getNumFailsWithSeverity(libsedml.LIBSEDML_SEV_ERROR)
print('Read SED-ML file, number of errors: {}'.format(n_errors))
if n_errors > 0:
print(sedml_doc.getErrorLog().toString())
# execute SED-ML using Tellurium
te.executeSEDML(sedml_str, workingDir=workingDir)
# clean up
#shutil.rmtree(workingDir)
"""
Explanation: Reading / Executing SED-ML
After converting PhraSEDML to SED-ML, you can call te.executeSEDML to use Tellurium to execute all simulations in the SED-ML. This example also shows how to use libSEDML (used by Tellurium and PhraSEDML internally) for reading SED-ML files.
End of explanation
"""
from __future__ import print_function
import tellurium as te, tellurium.temiriam as temiriam
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
import phrasedml
# Get SBML from URN and set for phrasedml
urn = "urn:miriam:biomodels.db:BIOMD0000000012"
sbml_str = temiriam.getSBMLFromBiomodelsURN(urn=urn)
phrasedml.setReferencedSBML('BIOMD0000000012', sbml_str)
# <SBML species>
# PX - LacI protein
# PY - TetR protein
# PZ - cI protein
# X - LacI mRNA
# Y - TetR mRNA
# Z - cI mRNA
# <SBML parameters>
# ps_a - tps_active: Transcrition from free promotor in transcripts per second and promotor
# ps_0 - tps_repr: Transcrition from fully repressed promotor in transcripts per second and promotor
phrasedml_str = """
model1 = model "{}"
model2 = model model1 with ps_0=1.3E-5, ps_a=0.013
sim1 = simulate uniform(0, 1000, 1000)
task1 = run sim1 on model1
task2 = run sim1 on model2
# A simple timecourse simulation
plot "Figure 1.1 Timecourse of repressilator" task1.time vs task1.PX, task1.PZ, task1.PY
# Applying preprocessing
plot "Figure 1.2 Timecourse after pre-processing" task2.time vs task2.PX, task2.PZ, task2.PY
# Applying postprocessing
plot "Figure 1.3 Timecourse after post-processing" task1.PX/max(task1.PX) vs task1.PZ/max(task1.PZ), \
task1.PY/max(task1.PY) vs task1.PX/max(task1.PX), \
task1.PZ/max(task1.PZ) vs task1.PY/max(task1.PY)
""".format('BIOMD0000000012')
# convert to SED-ML
sedml_str = phrasedml.convertString(phrasedml_str)
if sedml_str is None:
raise RuntimeError(phrasedml.getLastError())
# Run the SED-ML file with results written in workingDir
import tempfile, shutil, os
workingDir = tempfile.mkdtemp(suffix="_sedml")
# write out SBML
with open(os.path.join(workingDir, 'BIOMD0000000012'), 'wb') as f:
f.write(sbml_str.encode('utf-8'))
te.executeSEDML(sedml_str, workingDir=workingDir)
shutil.rmtree(workingDir)
"""
Explanation: SED-ML L1V2 specification example
This example uses the celebrated repressilator model to demonstrate how to 1) download a model from the BioModels database, 2) create a PhraSEDML string to simulate the model, 3) convert the PhraSEDML to SED-ML, and 4) use Tellurium to execute the resulting SED-ML.
This and other examples here are the SED-ML reference specification (Introduction section).
End of explanation
"""
import tellurium as te
from tellurium.tests.testdata import sedxDir
import os
omexPath = os.path.join(sedxDir, "BIOMD0000000003.sedx")
print('Loading SED-ML archive from path: {}'.format(omexPath))
print('Using {} as a working directory'.format(os.path.join(os.path.split(omexPath)[0], '_te_BIOMD0000000003')))
# execute the SED-ML archive
te.executeSEDML(omexPath)
"""
Explanation: Execute SED-ML Archive
Tellurium can read and execute the SED-ML from a SED-ML archive. This is not the same as a COMBINE archive (see below for COMBINE archive examples).
End of explanation
"""
|
olivertomic/hoggorm | examples/PCA/PCA_on_cancer_data.ipynb | bsd-2-clause | import hoggorm as ho
import hoggormplot as hop
import pandas as pd
import numpy as np
"""
Explanation: Principal component analysis (PCA) on cancer data
This notebook illustrates how to use the hoggorm package to carry out principal component analysis (PCA) on a multivariate data set on cancer in men across OECD countries. Furthermore, we will learn how to visualize the results of the PCA using the hoggormPlot package.
Import packages and prepare data
First import hoggorm for analysis of the data and hoggormPlot for plotting of the analysis results. We'll also import pandas such that we can read the data into a data frame. numpy is needed for checking dimensions of the data.
End of explanation
"""
# Load OECD data for cancer in men
# Insert code for reading data from other folder in repository instead of directly from same repository.
data_df = pd.read_csv('Cancer_men_perc.txt', index_col=0, sep='\t')
data_df
"""
Explanation: Next, load the cancer data that we are going to analyse using hoggorm. The data can be acquired from the OECD (The Organisation for Economic Co-operation and Development) and holds the percentages of various cancer types in men. After the data has been loaded into the pandas data frame, we'll display it in the notebook.
End of explanation
"""
np.shape(data_df)
"""
Explanation: Let's have a look at the dimensions of the data frame.
End of explanation
"""
# Get the values from the data frame
data = data_df.values
# Get the variable or columns names
data_varNames = list(data_df.columns)
# Get the object or row names
data_objNames = list(data_df.index)
"""
Explanation: There are observations for 34 countries as well as all OECD countries together, which results in 35 rows. Furthermore, there are 10 columns where each column represents one type of cancer in men.
The nipalsPCA class in hoggorm accepts only numpy arrays with numerical values and not pandas data frames. Therefore, the pandas data frame holding the imported data needs to be "taken apart" into three parts:
* a numpy array holding the numeric values
* a Python list holding variable (column) names
* a Python list holding object (row) names.
The array with values will be used as input for the nipalsPCA class for analysis. The Python lists holding the variable and row names will be used later in the plotting function from the hoggormPlot package when visualizing the results of the analysis. Below is the code needed to access both data, variable names and object names.
End of explanation
"""
data_varNames
"""
Explanation: Let's have a quick look at the column or variable names.
End of explanation
"""
data_objNames
"""
Explanation: Now show the object or row names.
End of explanation
"""
model = ho.nipalsPCA(arrX=data, Xstand=False, cvType=["loo"], numComp=4)
"""
Explanation: Apply PCA to our data
Now, let's run PCA on the data using the nipalsPCA class. The documentation provides a description of the input parameters. Using input parameter arrX we define which numpy array we would like to analyse. By setting input parameter Xstand=False we make sure that the variables are only mean centered, not scaled to unit variance. This is the default setting and actually doesn't need to be expressed explicitly. Setting parameter cvType=["loo"] we make sure that we compute the PCA model using full cross validation. "loo" means "Leave One Out". By setting parameter numComp=4 we ask for four principal components (PC) to be computed.
End of explanation
"""
hop.plot(model, comp=[1, 2],
plots=[1, 2, 3, 4, 6],
objNames=data_objNames,
XvarNames=data_varNames)
"""
Explanation: That's it, the PCA model has been computed. Now we would like to inspect the results by visualising them. We can do this using the tailor-made plotting function for PCA from the separate hoggormPlot package. If we wish to plot the results for component 1 and component 2, we can do this by setting the input argument comp=[1, 2]. The input argument plots=[1, 2, 3, 4, 6] lets the user define which plots are to be plotted. If this list for example contains value 1, the function will generate the scores plot for the model. If the list contains value 2, then the loadings plot will be plotted. Value 3 stands for the correlation loadings plot, value 4 for the bi-plot, and value 6 for the explained variance plot. The hoggormPlot documentation provides a description of input parameters.
End of explanation
"""
# Get scores and store in numpy array
scores = model.X_scores()
# Get scores and store in pandas dataframe with row and column names
scores_df = pd.DataFrame(model.X_scores())
scores_df.index = data_objNames
scores_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_scores().shape[1])]
scores_df
help(ho.nipalsPCA.X_scores)
# Dimension of the scores
np.shape(model.X_scores())
"""
Explanation: Accessing numerical results
Now that we have visualized the PCA results, we may also want to access the numerical results. Below are some examples. For a complete list of accessible results, please see this part of the documentation.
End of explanation
"""
# Get loadings and store in numpy array
loadings = model.X_loadings()
# Get loadings and store in pandas dataframe with row and column names
loadings_df = pd.DataFrame(model.X_loadings())
loadings_df.index = data_varNames
loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
loadings_df
help(ho.nipalsPCA.X_loadings)
np.shape(model.X_loadings())
"""
Explanation: We see that the numpy array holds the scores for all countries and OECD (35 in total) for four components as required when computing the PCA model.
End of explanation
"""
# Get loadings and store in numpy array
loadings = model.X_corrLoadings()
# Get loadings and store in pandas dataframe with row and column names
loadings_df = pd.DataFrame(model.X_corrLoadings())
loadings_df.index = data_varNames
loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_corrLoadings().shape[1])]
loadings_df
help(ho.nipalsPCA.X_corrLoadings)
# Get calibrated explained variance of each component
calExplVar = model.X_calExplVar()
# Get calibrated explained variance and store in pandas dataframe with row and column names
calExplVar_df = pd.DataFrame(model.X_calExplVar())
calExplVar_df.columns = ['calibrated explained variance']
calExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
calExplVar_df
help(ho.nipalsPCA.X_calExplVar)
# Get cumulative calibrated explained variance
cumCalExplVar = model.X_cumCalExplVar()
# Get cumulative calibrated explained variance and store in pandas dataframe with row and column names
cumCalExplVar_df = pd.DataFrame(model.X_cumCalExplVar())
cumCalExplVar_df.columns = ['cumulative calibrated explained variance']
cumCalExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumCalExplVar_df
help(ho.nipalsPCA.X_cumCalExplVar)
# Get cumulative calibrated explained variance for each variable
cumCalExplVar_ind = model.X_cumCalExplVar_indVar()
# Get cumulative calibrated explained variance for each variable and store in pandas dataframe with row and column names
cumCalExplVar_ind_df = pd.DataFrame(model.X_cumCalExplVar_indVar())
cumCalExplVar_ind_df.columns = data_varNames
cumCalExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumCalExplVar_ind_df
help(ho.nipalsPCA.X_cumCalExplVar_indVar)
# Get calibrated predicted X for a given number of components
# Predicted X from calibration using 1 component
X_from_1_component = model.X_predCal()[1]
# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names
X_from_1_component_df = pd.DataFrame(model.X_predCal()[1])
X_from_1_component_df.index = data_objNames
X_from_1_component_df.columns = data_varNames
X_from_1_component_df
# Get predicted X for a given number of components
# Predicted X from calibration using 4 components
X_from_4_component = model.X_predCal()[4]
# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names
X_from_4_component_df = pd.DataFrame(model.X_predCal()[4])
X_from_4_component_df.index = data_objNames
X_from_4_component_df.columns = data_varNames
X_from_4_component_df
help(ho.nipalsPCA.X_predCal)
# Get validated explained variance of each component
valExplVar = model.X_valExplVar()
# Get calibrated explained variance and store in pandas dataframe with row and column names
valExplVar_df = pd.DataFrame(model.X_valExplVar())
valExplVar_df.columns = ['validated explained variance']
valExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]
valExplVar_df
help(ho.nipalsPCA.X_valExplVar)
# Get cumulative validated explained variance
cumValExplVar = model.X_cumValExplVar()
# Get cumulative validated explained variance and store in pandas dataframe with row and column names
cumValExplVar_df = pd.DataFrame(model.X_cumValExplVar())
cumValExplVar_df.columns = ['cumulative validated explained variance']
cumValExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumValExplVar_df
help(ho.nipalsPCA.X_cumValExplVar)
# Get cumulative validated explained variance for each variable
cumValExplVar_ind = model.X_cumValExplVar_indVar()
# Get cumulative validated explained variance for each variable and store in pandas dataframe with row and column names
cumValExplVar_ind_df = pd.DataFrame(model.X_cumValExplVar_indVar())
cumValExplVar_ind_df.columns = data_varNames
cumValExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]
cumValExplVar_ind_df
help(ho.nipalsPCA.X_cumValExplVar_indVar)
# Get validated predicted X for a given number of components
# Predicted X from validation using 1 component
X_from_1_component_val = model.X_predVal()[1]
# Predicted X from validation using 1 component stored in pandas data frame with row and column names
X_from_1_component_val_df = pd.DataFrame(model.X_predVal()[1])
X_from_1_component_val_df.index = data_objNames
X_from_1_component_val_df.columns = data_varNames
X_from_1_component_val_df
# Get validated predicted X for a given number of components
# Predicted X from validation using 3 components
X_from_3_component_val = model.X_predVal()[3]
# Predicted X from validation using 3 components stored in pandas data frame with row and column names
X_from_3_component_val_df = pd.DataFrame(model.X_predVal()[3])
X_from_3_component_val_df.index = data_objNames
X_from_3_component_val_df.columns = data_varNames
X_from_3_component_val_df
help(ho.nipalsPCA.X_predVal)
# Get predicted scores for new measurements (objects) of X
# First pretend that we acquired new X data by using part of the existing data and overlaying some noise
import numpy.random as npr
new_data = data[0:4, :] + npr.rand(4, np.shape(data)[1])
np.shape(new_data)
# Now insert the new data into the existing model and compute scores for two components (numComp=2)
pred_scores = model.X_scores_predict(new_data, numComp=2)
# Same as above, but results stored in a pandas dataframe with row names and column names
pred_scores_df = pd.DataFrame(model.X_scores_predict(new_data, numComp=2))
pred_scores_df.columns = ['PC{0}'.format(x+1) for x in range(2)]
pred_scores_df.index = ['new object {0}'.format(x) for x in range(np.shape(new_data)[0])]
pred_scores_df
help(ho.nipalsPCA.X_scores_predict)
"""
Explanation: Here we see that the array holds the loadings for the 10 variables in the data across four components.
End of explanation
"""
|
google-research/google-research | yoto/colabs/plot_yoto_vae.ipynb | apache-2.0 | import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from io import StringIO, BytesIO
import numpy as np
import IPython.display
import PIL.Image
tf.compat.v1.enable_eager_execution()
tf.compat.v1.enable_v2_behavior()
#@title Plotting utilities
# Plotting utils, taken from
# https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb
def imgrid(imarray, cols=5, pad=1):
pad = int(pad)
assert pad >= 0
cols = int(cols)
assert cols >= 1
N, H, W, C = imarray.shape
rows = int(np.ceil(N / float(cols)))
batch_pad = rows * cols - N
assert batch_pad >= 0
post_pad = [batch_pad, pad, pad, 0]
pad_arg = [[0, p] for p in post_pad]
imarray = np.pad(imarray, pad_arg, 'constant')
H += pad
W += pad
grid = (imarray
.reshape(rows, cols, H, W, C)
.transpose(0, 2, 1, 3, 4)
.reshape(rows*H, cols*W, C))
  return grid[:-pad, :-pad] if pad else grid
def imshow(a, format='png', jpeg_fallback=True):
a = np.asarray(a, dtype=np.uint8)
str_file = BytesIO()
PIL.Image.fromarray(a).save(str_file, format)
png_data = str_file.getvalue()
try:
disp = IPython.display.display(IPython.display.Image(png_data))
except IOError:
if jpeg_fallback and format != 'jpeg':
      print('Warning: image was too large to display in format "{}"; '
            'trying jpeg instead.'.format(format))
return imshow(a, format='jpeg')
else:
raise
return disp
"""
Explanation: Imports and utils
End of explanation
"""
#@title Module path and constants
HUB_PATH = None # Set this variable, see comment above.
DATASET_NAME = "shapes3d" # Either shapes3d or cifar10.
BATCH_SIZE = 10
# The configs distributed with this package differ in that for cifar10 the input
# is scaled to [-1, 1] and there is no sigmoid transform, while for shapes3d the
# range is scaled to [0, 1] and a logit transform is used.
if DATASET_NAME == "cifar10":
RANGE = "-1_1"
SIGMOID = False
else:
RANGE = "0_1"
SIGMOID = True
def postprocess(images):
if SIGMOID:
images = tf.nn.sigmoid(images)
if RANGE == "-1_1":
images = (1 + images) / 2 # [-1, 1] to [0, 1]
images = tf.clip_by_value(images * 255, 0, 255)
return tf.cast(images, tf.uint8)
#@title Load the data form tfds
def data_reader():
dataset = tfds.load(DATASET_NAME, split="train")
for data in dataset.batch(BATCH_SIZE):
images_np_int = data["image"].numpy()
    images_np = images_np_int.astype(np.float64) / 255
if RANGE == "-1_1":
images_np = 2 * images_np - 1
yield images_np, images_np_int
#@title Load the VAE.
spec = hub.load_module_spec(HUB_PATH)
noise_dim = spec.get_input_info_dict("sample")["noise"].get_shape()[1]
imsize, channels = spec.get_output_info_dict(
"sample")["default"].get_shape()[-2:]
sampler = hub.KerasLayer(HUB_PATH, signature="sample")
def sample(noise_np, beta=0.):
inputs_extra_np = np.log(
np.stack([np.ones(BATCH_SIZE) * beta, np.ones(BATCH_SIZE) * 1.],
               axis=1)).astype(np.float64)
return postprocess(
sampler({"noise": noise_np, "inputs_extra": inputs_extra_np}))
reconstructor = hub.KerasLayer(HUB_PATH, signature="reconstruct")
def reconstruct(images_np, beta=0.):
inputs_extra_np = np.log(
np.stack([np.ones(BATCH_SIZE) * beta, np.ones(BATCH_SIZE) * 1.],
               axis=1)).astype(np.float64)
return postprocess(
reconstructor({"image": images_np, "inputs_extra": inputs_extra_np}))
"""
Explanation: Setup: Model and data
Set the HUB_PATH variable to point to the hub module, e.g. /home/username/yoto_xyz/hub_modules/step-000550000.
End of explanation
"""
reader = data_reader() # Initialize the data reader.
np.random.seed(18)
noise_np = np.random.randn(BATCH_SIZE, noise_dim) * 1.
images_np, images_np_int = next(reader) # Fetch the first batch.
beta_values = [1, 128, 512]
print("Reconstructions:")
input_grid = imgrid(images_np_int, cols=BATCH_SIZE)
imshow(input_grid)
for beta in beta_values:
reconstructions = reconstruct(images_np, beta)
recon_grid = imgrid(reconstructions.numpy(), cols=BATCH_SIZE)
imshow(recon_grid)
print("Samples:")
for beta in beta_values:
sample_grid = imgrid(sample(noise_np, beta).numpy(), cols=BATCH_SIZE)
imshow(sample_grid)
"""
Explanation: Plotting
End of explanation
"""
|
Juanlu001/Charla-PyConES15-poliastro | Going to Mars with Python in 5 minutes.ipynb | mit | %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import astropy.units as u
from astropy import time
from poliastro import iod
from poliastro.plotting import plot
from poliastro.bodies import Sun, Earth
from poliastro.twobody import State
from poliastro import ephem
from jplephem.spk import SPK
ephem.download_kernel("de421")
"""
Explanation: To Mars with Python using poliastro
<img src="http://poliastro.github.io/_images/logo_text.svg" width="70%" />
Juan Luis Cano Rodríguez juanlu@pybonacci.org
2016-04-09 PyData Madrid 2016
...in 5 minutes :)
Warning: This is rocket science!
What is Astrodynamics?
A branch of Mechanics (itself a branch of Physics) that studies practical problems about the motion of rockets and other vehicles in space
What is poliastro?
A pure Python library for Astrodynamics
http://poliastro.github.io/
Let's go to Mars!
End of explanation
"""
r = [-6045, -3490, 2500] * u.km
v = [-3.457, 6.618, 2.533] * u.km / u.s
ss = State.from_vectors(Earth, r, v)
with plt.style.context('pybonacci'):
plot(ss)
"""
Explanation: First: define the orbit
End of explanation
"""
epoch = time.Time("2015-06-21 16:35")
r_, v_ = ephem.planet_ephem(ephem.EARTH, epoch)
r_
v_.to(u.km / u.s)
"""
Explanation: Second: locate the planets
End of explanation
"""
date_launch = time.Time('2011-11-26 15:02', scale='utc')
date_arrival = time.Time('2012-08-06 05:17', scale='utc')
tof = date_arrival - date_launch
r0, _ = ephem.planet_ephem(ephem.EARTH, date_launch)
r, _ = ephem.planet_ephem(ephem.MARS, date_arrival)
(v0, v), = iod.lambert(Sun.k, r0, r, tof)
v0
v
"""
Explanation: Third: compute the trajectory
End of explanation
"""
def go_to_mars(offset=500., tof_=6000.):
# Initial data
N = 50
date_launch = time.Time('2016-03-14 09:31', scale='utc') + ((offset - 500.) * u.day)
date_arrival = time.Time('2016-10-19 16:00', scale='utc') + ((offset - 500.) * u.day)
tof = tof_ * u.h
# Calculate vector of times from launch and arrival Julian days
jd_launch = date_launch.jd
jd_arrival = jd_launch + tof.to(u.day).value
jd_vec = np.linspace(jd_launch, jd_arrival, num=N)
times_vector = time.Time(jd_vec, format='jd')
rr_earth, vv_earth = ephem.planet_ephem(ephem.EARTH, times_vector)
rr_mars, vv_mars = ephem.planet_ephem(ephem.MARS, times_vector)
# Compute the transfer orbit!
r0 = rr_earth[:, 0]
rf = rr_mars[:, -1]
(va, vb), = iod.lambert(Sun.k, r0, rf, tof)
ss0_trans = State.from_vectors(Sun, r0, va, date_launch)
ssf_trans = State.from_vectors(Sun, rf, vb, date_arrival)
# Extract whole orbit of Earth, Mars and transfer (for plotting)
rr_trans = np.zeros_like(rr_earth)
rr_trans[:, 0] = r0
for ii in range(1, len(jd_vec)):
tof = (jd_vec[ii] - jd_vec[0]) * u.day
rr_trans[:, ii] = ss0_trans.propagate(tof).r
# Better compute backwards
jd_init = (date_arrival - 1 * u.year).jd
jd_vec_rest = np.linspace(jd_init, jd_launch, num=N)
times_rest = time.Time(jd_vec_rest, format='jd')
rr_earth_rest, _ = ephem.planet_ephem(ephem.EARTH, times_rest)
rr_mars_rest, _ = ephem.planet_ephem(ephem.MARS, times_rest)
# Plot figure
# To add arrows:
# https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/streamplot.py#L140
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
def plot_body(ax, r, color, size, border=False, **kwargs):
"""Plots body in axes object.
"""
return ax.plot(*r[:, None], marker='o', color=color, ms=size, mew=int(border), **kwargs)
# I like color
color_earth0 = '#3d4cd5'
color_earthf = '#525fd5'
color_mars0 = '#ec3941'
color_marsf = '#ec1f28'
color_sun = '#ffcc00'
color_orbit = '#888888'
color_trans = '#444444'
# Plotting orbits is easy!
ax.plot(*rr_earth.to(u.km).value, color=color_earth0)
ax.plot(*rr_mars.to(u.km).value, color=color_mars0)
ax.plot(*rr_trans.to(u.km).value, color=color_trans)
ax.plot(*rr_earth_rest.to(u.km).value, ls='--', color=color_orbit)
ax.plot(*rr_mars_rest.to(u.km).value, ls='--', color=color_orbit)
# But plotting planets feels even magical!
plot_body(ax, np.zeros(3), color_sun, 16)
plot_body(ax, r0.to(u.km).value, color_earth0, 8)
plot_body(ax, rr_earth[:, -1].to(u.km).value, color_earthf, 8)
plot_body(ax, rr_mars[:, 0].to(u.km).value, color_mars0, 8)
plot_body(ax, rf.to(u.km).value, color_marsf, 8)
# Add some text
ax.text(-0.75e8, -3.5e8, -1.5e8, "ExoMars mission:\nfrom Earth to Mars",
size=20, ha='center', va='center', bbox={"pad": 30, "lw": 0, "fc": "w"})
ax.text(r0[0].to(u.km).value * 2.4, r0[1].to(u.km).value * 0.4, r0[2].to(u.km).value * 1.25,
"Earth at launch\n({})".format(date_launch.to_datetime().strftime("%d %b")),
ha="left", va="bottom", backgroundcolor='#ffffff')
ax.text(rf[0].to(u.km).value * 1.1, rf[1].to(u.km).value * 1.1, rf[2].to(u.km).value,
"Mars at arrival\n({})".format(date_arrival.to_datetime().strftime("%d %b")),
ha="left", va="top", backgroundcolor='#ffffff')
ax.text(-1.9e8, 8e7, 1e8, "Transfer\norbit", ha="right", va="center", backgroundcolor='#ffffff')
# Tune axes
ax.set_xlim(-3e8, 3e8)
ax.set_ylim(-3e8, 3e8)
ax.set_zlim(-3e8, 3e8)
# And finally!
ax.view_init(30, 260)
plt.show()
#fig.savefig("trans_30_260.png", bbox_inches='tight')
#return fig, ax
go_to_mars()
"""
Explanation: ...and it's pure Python!
Trick: numba
Fourth: let's go to Mars!
End of explanation
"""
%matplotlib inline
from ipywidgets import interactive
from IPython.display import display
w = interactive(go_to_mars, offset=(0., 1000.), tof_=(100., 12000.))
display(w)
"""
Explanation: Fifth: let's make it interactive!!!1!
End of explanation
"""
|
egentry/dwarf_photo-z | dwarfz/data/get_data.ipynb | mit | from __future__ import division, print_function
# give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
from dwarfz.hsc_credentials import credential
from dwarfz.hsc_release_query import query_wrapper
# back to regular import statements
import os, sys
import shutil
import glob
import pandas as pd
import numpy as np
import pathlib
"""
Explanation: Explanation
The HSC data is too large to store as one sqlite database file using github. So instead, it needs to be fetched by the user, separately from cloning the repository. This notebook is a work-in-progress to help automate that process, and make sure that the final schema is correct.
Sending the query
The HSC data release site provides a command line tool for querying the database; I've adapted it to run programmatically from within a python session. Check it out; it's the file hsc_release_query.py. There's a working example of a simple query in sql_tester.ipynb. This notebook rolls everything together: querying the server, and combining the subsets into one table.
What gets saved?
This comes in two parts:
1) Get the main HSC table (position, fluxes, flags for each object)
2) Get a list of matched spec-z's
Code
Remember to set your credentials within hsc_credentials.py !
End of explanation
"""
sql_base = """
SELECT
object_id,
ra, dec,
detect_is_patch_inner, detect_is_tract_inner, detect_is_primary,
gcmodel_flux, gcmodel_flux_err, gcmodel_flux_flags, gcmodel_mag,
rcmodel_flux, rcmodel_flux_err, rcmodel_flux_flags, rcmodel_mag,
icmodel_flux, icmodel_flux_err, icmodel_flux_flags, icmodel_mag,
zcmodel_flux, zcmodel_flux_err, zcmodel_flux_flags, zcmodel_mag,
ycmodel_flux, ycmodel_flux_err, ycmodel_flux_flags, ycmodel_mag
FROM
pdr1_cosmos_widedepth_median.forced
LIMIT
{}
OFFSET
{}
"""
"""
Explanation: Get HSC Fluxes
Build the query
Gets both the fluxes and the magnitudes. The difference shouldn't matter, but now you have both, depending on what's more convenient. In general, using the flux flags with the magnitude values is what I usually do.
End of explanation
"""
n_objects = 1263503
block_size = 250000
n_blocks = (n_objects // block_size) + 1
temp_hsc_table_dir = pathlib.Path("partial_hsc_tables")
if not temp_hsc_table_dir.is_dir():
temp_hsc_table_dir.mkdir()
limit = block_size
preview_results = False
delete_job = True
out_format = "sqlite3"
for i in range(n_blocks):
offset = i*block_size
sql = sql_base.format(limit, offset)
output_filename = temp_hsc_table_dir / "tmp_{}.sqlite3".format(i)
print(" ---------------- QUERY {} -------------------- ".format(i+1))
print(sql)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, sql, preview_results, delete_job,
out_format, output_file,
nomail=True)
"""
Explanation: Make the query
The total number of objects is currently hardcoded! Make sure this hasn't changed!
The cleaner way to do this would be to make a simple query to the database, then count the number of records. But for now, hardcoding it is simpler.
End of explanation
"""
database_filenames = sorted(temp_hsc_table_dir.glob("tmp_*.sqlite3"))
database_filenames
"""
Explanation: Check if it worked
End of explanation
"""
dfs = [pd.read_sql_table("table_1", "sqlite:///{}".format(database_filename),
index_col="object_id")
for database_filename in database_filenames]
assert(sum(df.shape[0] for df in dfs) == n_objects)
combined = pd.concat(dfs)
assert(combined.shape[0] == n_objects)
del dfs
combined.head()
for filename in database_filenames:
os.remove(filename)
if len(list(temp_hsc_table_dir.glob("*")))==0:
temp_hsc_table_dir.rmdir()
combined.keys()
hsc_database_filename = "HSC_COSMOS_median_forced.sqlite3"
hsc_database_filename_old = hsc_database_filename + ".old"
if os.path.exists(hsc_database_filename):
try:
shutil.move(hsc_database_filename, hsc_database_filename_old)
combined.to_sql("hsc", "sqlite:///{}".format(hsc_database_filename))
except:
# in case there's an error during writing, don't overwrite/delete the existing database
shutil.move(hsc_database_filename_old, hsc_database_filename)
raise
else:
# only delete if combining went successfully
os.remove(hsc_database_filename + ".old")
else:
combined.to_sql("hsc", "sqlite:///{}".format(hsc_database_filename))
"""
Explanation: Combine databases
End of explanation
"""
COSMOS_filename = pathlib.Path(dwarfz.data_dir_default) / "COSMOS_reference.sqlite"
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
COSMOS.df.head()
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
HSC.df.head()
matches = dwarfz.matching.Matches(COSMOS.df, HSC.df)
matches_filename = pathlib.Path(dwarfz.data_dir_default) / "matches.sqlite3"
if not matches_filename.exists():
matches.save_to_filename(matches_filename)
"""
Explanation: Match HSC objects to COSMOS objects
Every COSMOS galaxy will be in 1 pair. HSC galaxies can be in 0, 1 or more pairs.
End of explanation
"""
print("threshold (error) : {:>5.2f}".format(matches.threshold_error))
print("threshold (match) : {:>5.2f}".format(matches.threshold_match))
print("overall completeness : {:.2f} %".format(100*np.mean(matches.df.match[~matches.df.error])))
print("min separation: {:.4f} [arcsec]".format(min(matches.df.sep)))
print("max separation: {:.4f} [arcsec]".format(max(matches.df.sep)))
"""
Explanation: Check matches
End of explanation
"""
redshifts_sql = """
SELECT
object_id, specz_id,
d_pos,
specz_ra, specz_dec,
specz_redshift, specz_redshift_err, specz_flag_homogeneous
FROM
pdr1_cosmos_widedepth_median.specz
"""
"""
Explanation: Get spec-z's matched to HSC objects
Build the query
End of explanation
"""
preview_results = False
delete_job = True
out_format = "sqlite3"
output_filename = "specz.{}".format(out_format)
print(output_filename)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, redshifts_sql, preview_results, delete_job,
out_format, output_file,
nomail=True,
)
"""
Explanation: Make the query
End of explanation
"""
!ls -lh specz.sqlite3
df = pd.read_sql_table("table_1",
"sqlite:///{}".format("specz.sqlite3"),
index_col="object_id")
df = df[df.specz_flag_homogeneous]
df.head()
"""
Explanation: Check if it worked
End of explanation
"""
photoz_sql = """
SELECT
pdr1_deep.forced.object_id,
pdr1_deep.forced.ra,
pdr1_deep.forced.dec,
pdr1_deep.photoz_frankenz.photoz_best,
pdr1_deep.photoz_frankenz.photoz_risk_best
FROM
pdr1_deep.forced
INNER JOIN pdr1_deep.photoz_frankenz
ON pdr1_deep.photoz_frankenz.object_id=pdr1_deep.forced.object_id
WHERE (ra BETWEEN 149.25 AND 151.25) AND (dec BETWEEN 1.4 AND 3);
"""
"""
Explanation: Get FRANKEN-Z photo-z's, and then match to HSC
Build the query
There are no photo-z's with the "fake" COSMOS-field Wide images. That catalog was originally UltraDeep, degraded to being Wide-like. To most-closely match the photo-z catalogs, I'd then want to look in the UltraDeep dataset; but to most-correctly prepare for running on the true-Wide data, I'll pull my photo-z's from the Deep later. (Note: no photo-z's have been publicly released for the Wide data within the COSMOS field, circa 8 June 2017)
End of explanation
"""
preview_results = False
delete_job = True
out_format = "sqlite3"
output_filename = "photoz_tmp.{}".format(out_format)
print(output_filename)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, photoz_sql, preview_results, delete_job,
out_format, output_file,
nomail=True,
)
"""
Explanation: Make the query
End of explanation
"""
!ls -lh photoz_tmp.sqlite3
df = pd.read_sql_table("table_1",
"sqlite:///{}".format("photoz_tmp.sqlite3"),
index_col="object_id")
df.head()
df.to_sql("FRANKENZ", "sqlite:///franken_z-DEEP-COSMOS.sqlite3",
if_exists="replace")
os.remove("photoz_tmp.sqlite3")
"""
Explanation: Check if it worked
End of explanation
"""
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
matches = dwarfz.matching.Matches(HSC.df, df )
matches.df["HSC_ids"] = matches.df.index
matches.df["FRANKENZ_ids"] = matches.df.catalog_2_ids
matches.df.head()
HSC.df.join(matches.df).join(df[["photoz_best",
"photoz_risk_best"]],
on="FRANKENZ_ids").head()
"""
Explanation: Cross reference FRANKENZ ids to general HSC ids
End of explanation
"""
HSC_photo_zs = HSC.df.copy()[[]] # only copy index column
HSC_photo_zs = HSC_photo_zs.join(matches.df[["FRANKENZ_ids"]])
HSC_photo_zs = HSC_photo_zs.join(df[["photoz_best", "photoz_risk_best"]],
on="FRANKENZ_ids")
HSC_photo_zs.head()
HSC_photo_zs.to_sql("photo_z",
"sqlite:///HSC_matched_to_FRANKENZ.sqlite",
if_exists="replace",
)
"""
Explanation: Copy index column to a new data frame, then only add desired columns
End of explanation
"""
|
metpy/MetPy | v0.7/_downloads/sigma_to_pressure_interpolation.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import Dataset, num2date
import metpy.calc as mcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo
from metpy.units import units
"""
Explanation: Sigma to Pressure Interpolation
By using metpy.calc.log_interp, data with sigma as the vertical coordinate can be
interpolated to isobaric coordinates.
End of explanation
"""
data = Dataset(get_test_data('wrf_example.nc', False))
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time']
vtimes = num2date(time[:], time.units)
temperature = data.variables['temperature'][:] * units(data.variables['temperature'].units)
pres = data.variables['pressure'][:] * units(data.variables['pressure'].units)
hgt = data.variables['height'][:] * units(data.variables['height'].units)
"""
Explanation: Data
The data for this example comes from the outer domain of a WRF-ARW model forecast
initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso
University Department of Geography and Meteorology.
End of explanation
"""
plevs = [700.] * units.hPa
"""
Explanation: Array of desired pressure levels
End of explanation
"""
isobaric_levels = mcalc.log_interp(plevs, pres, hgt, temperature, axis=1)
"""
Explanation: Interpolate The Data
Now that the data is ready, we can interpolate to the new isobaric levels. The data is
interpolated from the irregular pressure values for each sigma level to the new input
mandatory isobaric levels. mcalc.log_interp will interpolate over a specified dimension
with the axis argument. In this case, axis=1 will correspond to interpolation on the
vertical axis.
End of explanation
"""
height = isobaric_levels[0]
temp = isobaric_levels[1]
"""
Explanation: The interpolated data is output in a list, so we will pull out each variable for plotting.
End of explanation
"""
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Set up our array of latitude and longitude values and transform to
# the desired projection.
tlatlons = crs.transform_points(ccrs.PlateCarree(), lon, lat)
tlons = tlatlons[:, :, 0]
tlats = tlatlons[:, :, 1]
# Get data to plot state and province boundaries
states_provinces = cfeature.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lakes',
scale='50m',
facecolor='none')
# Set the forecast hour
FH = 1
# Create the figure and grid for subplots
fig = plt.figure(figsize=(17, 12))
add_metpy_logo(fig, 470, 320, size='large')
# Plot 700 hPa
ax = plt.subplot(111, projection=crs)
ax.coastlines('50m', edgecolor='black', linewidth=0.75)
ax.add_feature(states_provinces, edgecolor='black', linewidth=0.5)
# Plot the heights
cs = ax.contour(tlons, tlats, height[FH, 0, :, :],
colors='k', linewidths=1.0, linestyles='solid')
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=7,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour the temperature
cf = ax.contourf(tlons, tlats, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.jet)
cb = plt.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5, pad=0.05,
                  extendrect=True)
cb.set_label('Celsius', size='x-large')
ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree())
# Make the axis title
ax.set_title('{:.0f} hPa Heights (m) and Temperature (C)'.format(plevs[0].m), loc='center',
fontsize=10)
# Set the figure title
fig.suptitle('WRF-ARW Forecast VALID: {:s} UTC'.format(str(vtimes[FH])), fontsize=14)
plt.show()
"""
Explanation: Plotting the Data for 700 hPa.
End of explanation
"""
|
MasterRobotica-UVic/Control-and-Actuators | proportional_control.ipynb | gpl-3.0 | def carSys(n, kp, d0):
return d0*pow(1-0.1*kp,n)
# The system starts at 11m from the wall
d0 = 11
# optimal values of kp [0,10]:
# 0 < kp < 10
# try different cases
kp = 5.0
def interactiveCar(n):
print("Total time: ", n*0.01, " seconds")
print("Distance to wall: ", carSys(n, kp, d0) )
return
interact(interactiveCar, n=(0,100,1))
"""
Explanation: Proportional control
We have a car robot equipped with a distance sensor. The robot is programmed to move with a velocity negatively proportional to the distance from the wall. That is:
$v[n] = -k_p d[n]$
And the distance measures are taken 10 times per second, that is, the sampling period is $\tau = 0.1$s.
The discrete motion model follows the kinematic expression (to be aware that no dynamics, forces, masses, inertias are considered):
$d[n] = d[n-1] + \tau v[n-1]$
If we substitute the control law, we get:
$\begin{matrix}
d[n] &=& d[n-1] + \tau (- k_p d[n-1])\
&=& d[n-1] - 0.1\tau k_p d[n-1]\
&=& (1-0.1 k_p) d[n-1]
\end{matrix}$
This is a first order linear difference equation of the form $y[n] = \lambda y[n-1]$. We know that depending on the value that $\lambda$ takes, the system will have a certain dynamics or the other.
What behavior do we want for our robot system that we want to reach the wall? We don't want the robot to oscillate around the wall, and we don't want to go away, so we want to set it to $0< \lambda < 1$. This translate in setting the $k_p$ such that
$\begin{matrix}
0 &<& 1-0.1k_p &<& 1 \
-1 &<& -0.1 k_p &<& 0 \
10 &<& k_p &<& 0
\end{matrix}$
Next, we define the carSys() model and we let you play with different values of $k_p$ and use a slider to interact with the sequence steps. Try to answer the following questions:
What happens if $k_p = 0$?
What happens if $k_p \approx 0$, but still $k_p > 0$?
What happens if $k_p < 0$?
What happens if $k_p = 10$?
What happens if $k_p = 11$?
Time to get to the wall using $k_p = 0.1$?
Now, answer 6, considering a zero threshold of $0.001$ ($1$ milimeter)
Time to get to the wall using $k_p = 9.0$ using the same threshold?
What are the units of $k_p$?
Is the maximum velocity that the robot can make considered in the control law? How would you add implement it?
End of explanation
"""
def carSys_unitSampleResponse(n):
# start with the robot already at the wall
# but try other initial distance after understanding the system
d0 = 0
# This is the moment where we have the unit sample
nd = 10
# Let's use 5.0, but you can try other values later
kp = 5.0
# This represent the stability of the system when n > nd +1
lmbda = 1 - 0.1*kp
# THINK OF THE MEANING OF THIS VALUE
b = 0.1*kp
# This initializes the sequence
d = np.zeros(n)
for i in range(n):
if i <= nd:
d[i] = d0
if i == nd+1:
d[i] = b + d0
if i > nd + 1:
d[i] = lmbda*d[i-1]
# Plot the sequence
plt.figure()
plotSequence(d)
return
# This slider sets the number of samples for the system
interact(carSys_unitSampleResponse, n=(2,100,1))
"""
Explanation: Proportional control with input
Consider now that you want to stop the robot car at any desired position $d_{des}[n]$. What would change in all the previous section?
The discrete motion model remains, since the robot is the same: $d[n] = d[n-1] + \tau v[n-1]$
Now, the control law looks like:
$v[n] = k_p (d_{des}[n] - d[n])$
In fact, if we set $d_{des} = 0$ (that is, the position of the wall), we end up with the same control law as before, that is $v[n] = -k_p d[n]$. Note that, we are commanding a velocity, $v[n]$ proportional to the error between our desired and measured position, $e[n] = d_{des}[n] - d[n]$.
Substituting the new control law into the motion model, we get:
$\begin{matrix}
d[n] &=& d[n-1] + \tau k_p(d_{des}[n-1]- d[n-1])\
&=& d[n-1] + k_p d_{des}[n-1] - 0.1\tau k_p d[n-1] \
&=& (1-0.1 k_p) d[n-1] + 0.1k_p d_{des}[n-1]
\end{matrix}$
Which is a control system of the form:
$y[n] = \lambda y[n-1] + b x[n-1]$
The way to study these kind of system, and in general, all systems, is to obtain their response to unit sample and unit step sequences. Let's check what form the solution will have for both cases.
Unit sample response
Consider the unit sample sequence defined as
$x[n] = \delta[n- n_d] = \left{ \begin{matrix}
1, &\text{if} \; n = n_d \
0, & \text{if} \; n \neq n_d
\end{matrix} \right.$
We analyze by intervals, assuming that the initial state is $y[0] = 0$
Interval $n \leq n_d$:
$x[n-1] = 0$, hence $y[n] = 0$
When $n = n_d +1$:
$\begin{matrix}
y[n_d +1] &=& \lambda y[n_d] + b x[n_d] \
&=& \lambda 0 + b 1 \
&=& b
\end{matrix}$
Finally, $n > n_d + 1$:
$x[n] = 0$, hence $y[n] = \lambda y[n-1]$, which we know is a first order linear difference equation.
So the solution would be somthing like:
$y[n] = \left{ \begin{matrix}
0, & \text{if} \; n \leq n_d \
b, & \text{if} \; n = n_d+1 \
\lambda y[n-1] & \text{if} \; n > n_d +1
\end{matrix} \right.$
or in a more compact way
$y[n] = \lambda^{n-n_d} \frac{b}{\lambda}, \text{for} \; n > n_d$
A feature that it is usually studied is the value of the system when times is very large, that is, when $n \rightarrow \infty$. In the stable case, $\|\lambda\| < 1$, we can say that
$y[\infty] = 0$
Since it falls in the last interval where we have a first order linear difference equation.
End of explanation
"""
def carSys_unitStepResponse(n):
# start with the robot already at the wall
# but try other initial distance after understanding the system
d0 = 0
# This is the moment where we have the unit step
nd = 10
# Let's use 5.0, but you can try other values later
kp = 5
# This represent the stability of the system when n > nd +1
lmbda = 1 - 0.1*kp
print("lambda: ", lmbda)
# THINK OF THE MEANING OF THIS VALUE
b = 0.1*kp
print("b: ", b)
print("d[inf]: ", b/(1-lmbda))
# This initializes the sequence
d = np.zeros(n)
for i in range(n):
if i <= nd:
d[i] = d0
if i == nd+1:
d[i] = d0 + b
if i > nd + 1:
d[i] = lmbda*d[i-1] + b
# Plot the sequence
plt.figure()
plotSequence(d)
return
# This slider sets the number of samples for the system
interact(carSys_unitStepResponse, n=(2,100,1))
"""
Explanation: Unit step response
Consider the unit step sequence defined as
$x[n] = u[n- n_d] = \left{ \begin{matrix}
0, &\text{if} \; n < n_d \
1, & \text{if} \; n \geq n_d
\end{matrix} \right.$
We analyze by intervals, assuming that the initial state is $y[0] = 0$
Interval $n \leq n_d$:
$x[n-1] = 0$, hence $y[n] = 0$
When $n = n_d+1$:
$\begin{matrix}
y[n_d +1] &=& \lambda y[n_d] + b x[n_d] \
&=& \lambda 0 + b 1 \
&=& b
\end{matrix}$
Finally, $n > n_d + 1$:
$x[n] = 1$, hence $y[n] = \lambda y[n-1] + b$, which can be solved iteratively
$\begin{matrix}
y[n_d] &=& 0\
y[n_d + 1] &=& b \
y[n_d +2] &=& \lambda b + b \
y[n_d + 3] &=& \lambda (\lambda b + b) + b = \lambda^2 b + \lambda b + b\
y[n_d +4] &=& \lambda (\lambda^2 b + \lambda b + b) + b = \lambda^3 b + \lambda^2 b + \lambda b + b\
\vdots
\end{matrix}$
which can be written in a compact way as
$y[n] = \sum_{m=n_d+1}^{n} \lambda^{m-n_d} \frac{b}{\lambda}, \text{for} \; n > n_d$
<!-- Comment, this is the homework -->
<!-- A feature that it is usually studied is the value of the system when times is very large, that is, when $n \rightarrow \infty$. In the stable case, $\|\lambda\| < 1$, we can write:
$y[\infty] = \lambda y[\infty -1] + b x[\infty -1]$
We can say that $\infty = \infty -1$, since a $1$ makes no difference when we are talking about a large value, so we use the trick of
$y[\infty] = y[\infty - 1]$
and we know that $x[\infty] = 1$, so
$\begin{matrix}
y[\infty] &=& \lambda y[\infty] + b \\
&=& \frac{b}{1-\lambda}
\end{matrix}$
Since it falls in the last interval where we have a first order linear difference equation.
-->
End of explanation
"""
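The derivation above can be checked numerically. A minimal sketch (the parameter values here are made up for illustration): iterate the first-order difference equation after the step at $n_d$ and compare against the closed-form geometric-series solution.

```python
import numpy as np

# Iterate y[n] = lambda*y[n-1] + b after the step at n_d (d0 = 0),
# exactly as in the carSys_unitStepResponse loop above.
lmbda, b, nd, n = 0.5, 0.5, 10, 60
y = np.zeros(n)
for i in range(nd + 1, n):
    y[i] = b if i == nd + 1 else lmbda * y[i - 1] + b

# Closed form: y[n] = sum over m = nd+1 .. n of lambda^(m-nd-1) * b
closed = np.array([
    sum(lmbda ** (m - nd - 1) * b for m in range(nd + 1, i + 1))
    for i in range(n)
])
assert np.allclose(y, closed)
print(y[-1], b / (1 - lmbda))  # the sequence approaches b / (1 - lambda)
```

For a stable system ($|\lambda| < 1$) the last value is essentially the steady state $b/(1-\lambda)$.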
|
mehmetcanbudak/JupyterWorkflow | JupyterWorkflow.ipynb | mit | URL = "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
from urllib.request import urlretrieve
urlretrieve(URL, "Fremont.csv")
!head Fremont.csv
import pandas as pd
data = pd.read_csv("Fremont.csv")
data.head()
data = pd.read_csv("Fremont.csv", index_col="Date", parse_dates=True)
data.head()
%matplotlib inline
data.index = pd.to_datetime(data.index)
data.plot()
data.resample('W').sum().plot()
import matplotlib.pyplot as plt
plt.style.use("seaborn")
data.resample("W").sum().plot()
data.columns = ["West", "East"]
data.resample("W").sum().plot()
"""
Explanation: JupyterWorkflow
From exploratory analysis to reproducible research
Mehmetcan Budak
End of explanation
"""
data.resample("D").sum().rolling(365).sum().plot()
"""
Explanation: Look for an annual trend: growth or decline in ridership
Let's try a rolling window: a 365-day rolling sum.
End of explanation
"""
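The effect of the rolling sum can be seen on a tiny made-up series. A 365-sample rolling sum replaces each value with the sum of the previous 365 daily totals, which smooths out weekly and seasonal cycles so the annual trend stands out:

```python
import pandas as pd

# Toy daily totals: 0, 1, 2, ... so the rolling sums are easy to verify.
s = pd.Series(range(400))
rolled = s.rolling(365).sum()
print(rolled.iloc[364])  # sum of 0..364 = 66430.0; earlier entries are NaN
```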
ax = data.resample("D").sum().rolling(365).sum().plot()
ax.set_ylim(0, None)
"""
Explanation: The curves don't go all the way down to zero, so let's set the y-axis limits from zero to None (the current maximum).
End of explanation
"""
data["Total"] = data["West"] + data["East"]
ax = data.resample("D").sum().rolling(365).sum().plot()
ax.set_ylim(0, None)
"""
Explanation: There seems to be an offset between the left and right sidewalks. Let's add their total and plot them together to compare trends.
End of explanation
"""
data.groupby(data.index.time).mean().plot()
"""
Explanation: The east and west side trends mirror each other, so the total bike rides across the bridge hover around 1 million per year, staying consistent over the last couple of years to within a couple of percent.
Let's group by time of day, take the mean, and plot it.
End of explanation
"""
pivoted = data.pivot_table("Total", index=data.index.time, columns=data.index.date)
pivoted.iloc[:5, :5]
"""
Explanation: Let's see the whole data set in this way not just this average. We will do a pivot table.
End of explanation
"""
pivoted.plot(legend=False)
"""
Explanation: We now have a 2d data set. Each column is a day and each row is an hour during that day.
Let's take legend off and plot it.
End of explanation
"""
pivoted.plot(legend=False,alpha=0.01)
"""
Explanation: Let's lower the opacity so the overlapping daily curves are easier to see.
End of explanation
"""
|
jtwhite79/pyemu | examples/working_stack_demo.ipynb | bsd-3-clause | %matplotlib inline
import os
import shutil
import platform
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import flopy
import pyemu
"""
Explanation: Current working stack for setting up PEST interface
End of explanation
"""
nam_file = "freyberg.nam"
org_model_ws = "freyberg_sfr_update"
m = flopy.modflow.Modflow.load(nam_file,model_ws=org_model_ws,check=False)
m.dis.nper #stress periods
m.dis.botm[2].plot()
m.export("shape.shp")
"""
Explanation: load the existing model just to see what is going on
End of explanation
"""
m.change_model_ws("temp",reset_external=True)
m.external_path = '.'
m.exe_name="mfnwt"
m.write_input()
EXE_DIR = os.path.join("..","bin")
if "window" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"win")
elif "darwin" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"mac")
else:
EXE_DIR = os.path.join(EXE_DIR,"linux")
[shutil.copy2(os.path.join(EXE_DIR,f),os.path.join('temp',f)) for f in os.listdir(EXE_DIR)]
m.run_model()
"""
Explanation: change the working dir and write a new copy of model files to keep the others safe
End of explanation
"""
props,hds = [],[]
for k in range(m.nlay):
props.append(["upw.hk",k])
props.append(["upw.vka",k])
for kper in range(m.nper):
hds.append([kper,k])
props.append(["rch.rech",0])
props.append(["rch.rech",1])
props
"""
Explanation: build up args for which properties and outputs we want to include in the interface
End of explanation
"""
ph = pyemu.helpers.PstFromFlopyModel(nam_file,org_model_ws="temp",
new_model_ws="template",grid_props=props,pp_space=3,
const_props=props,spatial_list_props=[["wel.flux",2]],
sfr_pars=True,hds_kperk=hds,
remove_existing=True, sfr_obs=True,
model_exe_name="mfnwt",build_prior=False)
EXE_DIR = os.path.join("..","bin")
if "window" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"win")
elif "darwin" in platform.platform().lower():
EXE_DIR = os.path.join(EXE_DIR,"mac")
else:
EXE_DIR = os.path.join(EXE_DIR,"linux")
[shutil.copy2(os.path.join(EXE_DIR,f),os.path.join('template',f)) for f in os.listdir(EXE_DIR)]
"""
Explanation: Here's where the cool stuff happens: this call will build a pest interface entirely using multiplier parameters - a mixture of uniform (constant) and grid-scale parameters for all props listed above, plus multipliers for all wells in all stress periods and SFR components.
For observations, we will get the MODFLOW list file budget values, sfr outputs and headsave file array values (all of them!). All observations will be given simulated outputs - this is very useful for error checking...
The interface package will be written to "template" and includes a python forward run script - "template" is the whole package....
End of explanation
"""
pst = ph.pst
pst.npar,pst.nobs
"""
Explanation: let's inspect what just happened...
End of explanation
"""
pst.parameter_data.head()
pst.observation_data.tail() # the last few observations
"""
Explanation: WAT!
End of explanation
"""
pst.write_par_summary_table()
pst.write_obs_summary_table()
"""
Explanation: write parameter and observation summary LaTeX tables
End of explanation
"""
figs = pst.plot(kind="prior",unique_only=True,echo=False)
"""
Explanation: plot the prior distributions of the parameters implied by the parameter bounds (assuming the bounds represent 95% credible limits)
End of explanation
"""
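A back-of-the-envelope sketch of that assumption (this is an illustration, not pyemu's internals, and the bounds below are made up): if the bounds of a log-transformed parameter are read as a ~95% credible interval of a Gaussian, the implied standard deviation is roughly a quarter of the bound width, since $\pm 2\sigma$ covers about 95% of the mass.

```python
import numpy as np

# Made-up bounds for a log-transformed parameter.
lb, ub = 0.1, 10.0
sigma = (np.log10(ub) - np.log10(lb)) / 4.0
print(sigma)  # 0.5 -> prior standard deviation in log10 space
```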
cov = ph.build_prior(fmt="none")
plt.imshow(np.ma.masked_where(cov.x==0,cov.x))
plt.show()
"""
Explanation: but we can do better! We can use geostatistics to build a prior parameter covariance matrix for spatially distributed parameters...
End of explanation
"""
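A minimal sketch of the geostatistical idea (not pyemu's actual routine; the coordinates and variogram parameters are assumed for illustration): an exponential variogram implies covariance $C(h) = \mathrm{sill} \cdot e^{-h/a}$ between parameters a distance $h$ apart, so nearby pilot points are strongly correlated and distant ones nearly independent.

```python
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])  # made-up pilot points
sill, a = 1.0, 2.0  # assumed variance and correlation length
# Pairwise distances, then exponential covariance.
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sill * np.exp(-h / a)
print(np.round(cov, 3))
```

The diagonal holds each parameter's variance (the sill), and off-diagonal entries decay with distance.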
pyemu.helpers.run("pestpp-glm freyberg.pst",cwd="template")
"""
Explanation: Let's run pestpp once to get residuals...which should be nearly zero since all observations are set to current simulated outputs and all parameters are multipliers - this is a good check!
End of explanation
"""
pst = pyemu.Pst(os.path.join("template","freyberg.pst"))
pst.phi_components
"""
Explanation: Reload the pst instance to update the residuals
End of explanation
"""
pst.plot(kind='phi_pie')
"""
Explanation: all residuals are nearly zero - this is good!
End of explanation
"""
pe = pyemu.ParameterEnsemble.from_gaussian_draw(pst=ph.pst,num_reals=100,
cov=cov)
# save to a csv file
pe.to_csv(os.path.join("template","sweep_in.csv"))
# run with sweep using 20 workers
pyemu.helpers.start_workers("template","pestpp-swp","freyberg.pst",num_workers=20,
master_dir="sweep_master")
"""
Explanation: Let's do some Monte Carlo!
Generate a parameter ensemble and run it in parallel
End of explanation
"""
pyemu.plot_utils.ensemble_helper(pe.iloc[:,::50],facecolor='b',
deter_vals=pst.parameter_data.parval1.to_dict(),
filename=None) # you can also pass pdf filename
"""
Explanation: more eye candy using the plot helpers...
Every 50th parameter
End of explanation
"""
df = pd.read_csv(os.path.join("sweep_master","sweep_out.csv"))
df.columns = df.columns.map(str.lower)
df = df.loc[:,[o for o in pst.obs_names if "hds_00" in o ]]
"""
Explanation: load the output csv file....
End of explanation
"""
pyemu.plot_utils.ensemble_helper(df.iloc[:,:20],filename=None)
"""
Explanation: and plot...
the first 20 observations
End of explanation
"""
|
afunTW/dsc-crawling | appendix_ptt/00_parse_article.ipynb | apache-2.0 | import requests
import re
import json
from bs4 import BeautifulSoup, NavigableString
from pprint import pprint
ARTICLE_URL = 'https://www.ptt.cc/bbs/Gossiping/M.1537847530.A.E12.html'
"""
Explanation: Crawling a single article
You may run into an "Are you over 18?" confirmation page
Parse the structure of articles under ptt.cc/bbs
Crawl the article body
Crawl the comments
URL https://www.ptt.cc/bbs/Gossiping/M.1537847530.A.E12.html
BACKUP https://afuntw.github.io/Test-Crawling-Website/pages/ptt/M.1537847530.A.E12.html
End of explanation
"""
resp = requests.get(ARTICLE_URL)
if resp.status_code == 200:
print(resp.text)
cookies = {'over18': '1'}
resp = requests.get(ARTICLE_URL, cookies=cookies)
if resp.status_code == 200:
print(resp.text)
soup = BeautifulSoup(resp.text, 'lxml')
"""
Explanation: Bypass the age check with cookies
Inspect Developer Tools > Network > request headers
End of explanation
"""
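A small variation on the same trick (a sketch; no request is actually sent here): a requests.Session stores the over18 cookie once and attaches it automatically to every subsequent request, so you don't have to pass cookies= each time.

```python
import requests

session = requests.Session()
session.cookies.set('over18', '1')
# resp = session.get(ARTICLE_URL)  # would now pass the age check
print(session.cookies.get('over18'))  # 1
```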
article = {
'author_id': '',
'author_nickname': '',
'title': '',
'timestamp': '',
'contents': '',
'ip': ''
}
article_body = soup.find(id='main-content')
# article header
article_head = article_body.findAll('div', class_='article-metaline')
for metaline in article_head:
meta_tag = metaline.find(class_='article-meta-tag').text
meta_value = metaline.find(class_='article-meta-value').text
if meta_tag == '作者':
compile_nickname = re.compile('\((.*)\)').search(meta_value)
article['author_id'] = meta_value.split('(')[0].strip(' ')
article['author_nickname'] = compile_nickname.group(1) if compile_nickname else ''
elif meta_tag == '標題':
article['title'] = meta_value
elif meta_tag == '時間':
article['timestamp'] = meta_value
# article content
contents = [expr for expr in article_body.contents if isinstance(expr, NavigableString)]
contents = [re.sub('\n', '', expr) for expr in contents]
contents = [i for i in contents if i]
contents = '\n'.join(contents)
article['contents'] = contents
# article publish ip
article_ip = article_body.find(class_='f2').text
compile_ip = re.compile('[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}').search(article_ip)
article['ip'] = compile_ip.group(0) if compile_ip else ''
pprint(article)
"""
Explanation: Crawl the article
Author id
Author nickname
Article title
Publish time
Article content
Posting ip
End of explanation
"""
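A mini walkthrough of the nickname regex used above, on a made-up author line of the form "id (nickname)":

```python
import re

meta_value = 'someid (Some Nickname)'  # sample line, not real PTT data
m = re.compile(r'\((.*)\)').search(meta_value)
author_id = meta_value.split('(')[0].strip()
nickname = m.group(1) if m else ''
print(author_id, nickname)  # someid Some Nickname
```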
comments = []
for comment in article_body.findAll('div', class_='push'):
tag = comment.find(class_='push-tag').text
guest_id = comment.find(class_='push-userid').text
guest_content = comment.find(class_='push-content').text
guest_ipdatetime = comment.find(class_='push-ipdatetime').text
compile_ip = re.compile('[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}').search(guest_ipdatetime)
guest_ip = compile_ip.group(0) if compile_ip else ''
guest_timestamp = re.sub(guest_ip, '', guest_ipdatetime).strip()
comments.append({
'tag': tag,
'id': guest_id,
'content': guest_content,
'ip': guest_ip,
'timestamp': guest_timestamp
})
pprint(comments)
"""
Explanation: Crawl the comments
Push/boo tag
Commenter id
Comment content
Comment ip
Comment time
End of explanation
"""
article['comments'] = comments
data = [article]
with open('M.1537847530.A.E12.json', 'w+', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
"""
Explanation: Save the data as a JSON file
End of explanation
"""
|
roatienza/Deep-Learning-Experiments | versions/2020/cnn/code/cnn-siamese.ipynb | mit | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.layers import concatenate
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
# load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# from sparse label to categorical
num_labels = len(np.unique(y_train))
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# reshape and normalize input images
image_size = x_train.shape[1]
x_train = np.reshape(x_train,[-1, image_size, image_size, 1])
x_test = np.reshape(x_test,[-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
# network parameters
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
filters = 64
"""
Explanation: Implements a Siamese/Y-Network using Functional API
This is our first example of a network with a more complex graph. We call it Y-Network because it has a shape that is similar to the letter Y. There are two branches, left and right. Each one gets the same copy of the input. Each branch processes the input and produces a different set of features. The left and right feature maps are then combined and passed to a head Dense layer for logistic regression.
We use the same optimizer (sgd) and loss function (categorical_crossentropy). We train the network for 20 epochs.
~98.6% test accuracy
End of explanation
"""
# left branch of Y network
left_inputs = Input(shape=input_shape)
x = left_inputs
# 3 layers of Conv2D-MaxPooling2D
depth = 3
for i in range(depth):
x = Conv2D(filters=filters,
kernel_size=kernel_size,
padding='same',
activation='relu')(x)
#x = Dropout(dropout)(x)
if i < (depth - 1):
x = MaxPooling2D()(x)
"""
Explanation: Left Branch of a Y-Network
The left branch is made of 3 CNN layers with the same configuration as the single-branch CNN model example. To save space, the left branch is constructed using a for loop. This technique is used in constructing bigger models such as ResNet.
End of explanation
"""
# right branch of Y network
right_inputs = Input(shape=input_shape)
y = right_inputs
# 3 layers of Conv2D-Dropout-MaxPooling2D
for i in range(depth):
y = Conv2D(filters=filters,
kernel_size=kernel_size,
padding='same',
activation='relu',
dilation_rate=2)(y)
#y = Dropout(dropout)(y)
if i < (depth - 1):
y = MaxPooling2D()(y)
"""
Explanation: Right Branch of a Y-Network
The right branch is an exact mirror of the left branch. To ensure that it learns a different set of features, we use dilation_rate = 2 to approximate a kernel with twice the receptive field of the left branch.
End of explanation
"""
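A quick sanity check on that approximation: the effective span of a dilated kernel is k_eff = k + (k - 1)(d - 1), so a 3-tap kernel with dilation_rate=2 covers the span of a 5-tap kernel while keeping only 3 weights.

```python
def effective_kernel(k, d):
    # Effective span of a k-tap kernel with dilation rate d.
    return k + (k - 1) * (d - 1)

print(effective_kernel(3, 1), effective_kernel(3, 2))  # 3 5
```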
# merge left and right branches outputs
y = concatenate([x, y])
# feature maps to vector in preparation to connecting to Dense layer
y = Flatten()(y)
# y = Dropout(dropout)(y)
outputs = Dense(num_labels, activation='softmax')(y)
# build the model in functional API
model = Model([left_inputs, right_inputs], outputs, name='Y_Network')
# verify the model using graph
# plot_model(model, to_file='cnn-y-network.png', show_shapes=True)
# verify the model using layer text description
model.summary()
"""
Explanation: Merging the 2 Branches
To complete the Y-Network, let us merge the outputs of the left and right branches. We use concatenate(), which produces feature maps with the same spatial dimensions as either branch but with twice the number of channels. There are other merging functions in Keras, such as add and multiply.
End of explanation
"""
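The shape intuition behind the merge can be checked with plain numpy (this is a sketch, not Keras): concatenating two feature maps along the channel axis doubles the channel count, while an element-wise add keeps it unchanged.

```python
import numpy as np

x = np.ones((1, 4, 4, 8))   # left-branch feature map (batch, h, w, channels)
y = np.ones((1, 4, 4, 8))   # right-branch feature map
merged_concat = np.concatenate([x, y], axis=-1)
merged_add = x + y
print(merged_concat.shape)  # (1, 4, 4, 16)
print(merged_add.shape)     # (1, 4, 4, 8)
```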
# classifier loss, Adam optimizer, classifier accuracy
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
# train the model with input images and labels
model.fit([x_train, x_train],
y_train,
validation_data=([x_test, x_test], y_test),
epochs=20,
batch_size=batch_size)
# model accuracy on test dataset
score = model.evaluate([x_test, x_test], y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))
"""
Explanation: Model Training and Validation
This is just our usual model training and validation. Similar to our previous examples.
End of explanation
"""
|
mlamoureux/PIMS_YRC | P_Data_analysis.ipynb | mit | # Get some basic tools
%pylab inline
from pandas import Series, DataFrame
import pandas as pd
#import pandas.io.data as web
#from pandas_datareader import data, web
#import pandas_datareader as pdr
from pandas_datareader import data as pdr
import fix_yahoo_finance
# Here are apple and microsoft closing prices since 2016
import datetime
start = datetime.datetime(2016,1,1)
end = datetime.date.today()
data = pdr.get_data_yahoo(["SPY", "IWM"], start="2017-01-01", end="2017-04-30")
# aapl = pdr.get_data_yahoo('AAPL')
#apple = pdr.DataReader('AAPL', 'yahoo', start, end)
aapl = pdr.get_data_yahoo('AAPL','2001-01-01')['Adj Close']
msft = pdr.get_data_yahoo('MSFT','2001-01-01')['Adj Close']
subplot(2,1,1)
plot(aapl)
subplot(2,1,2)
plot(msft)
aapl
# Let's look at the changes in the stock prices, normalized as a percentage
aapl_rets = aapl.pct_change()
msft_rets = msft.pct_change()
subplot(2,1,1)
plot(aapl_rets)
subplot(2,1,2)
plot(msft_rets)
# Let's look at the correlation between these two series
pd.rolling_corr(aapl_rets, msft_rets, 250).plot()
"""
Explanation: Data Analysis
Warning: this doesn't work! The yahoo website doesn't seem to work anymore.
A simple demo with pandas in Python
This notebook is based on course notes from Lamoureux's course Math 651 at the University of Calgary, Winter 2016.
This was an exercise to try out some resourse in Python. Specifically, we want to scrape some data from the web concerning stock prices, and display in a Panda. Then do some basic data analysis on the information.
We take advantage of the fact that there is a lot of financial data freely accessible on the web, and lots of people post information about how to use it.
Pandas in Python
How to access real data from the web and apply data analysis tools.
I am using the book Python for Data Analysis by Wes McKinney as a reference for this section.
The point of using Python for this is that a lot of people have created good code to do this.
The pandas name comes from Panel Data, an econometrics term for multidimensional structured data sets, as well as from Python Data Analysis.
The dataframe objects that appear in pandas originated in R. But apparently they have more functionality in Python than in R.
I will be using PYLAB as well in this section, so we can make use of NUMPY and MATPLOTLIB.
Accessing financial data
For free, historical data on commodities like Oil, you can try this site: http://www.databank.rbs.com
This site will download data directly into spreadsheets for you, plot graphs of historical data, etc. Here is an example of oil prices (West Texas Intermdiate), over the last 15 years. Look how low it goes...
Yahoo supplies current stock and commodity prices. Here is an intereting site that tells you how to download loads of data into a csv file.
http://www.financialwisdomforum.org/gummy-stuff/Yahoo-data.htm
Here is another site that discusses accessing various financial data sources. http://quant.stackexchange.com/questions/141/what-data-sources-are-available-online
Loading data off the web
To get away from the highly contentious issues of oil prices and political parties, let's look at some simple stock prices -- say Apple and Microsoft. We can import some basic webtools to get prices directly from Yahoo.
End of explanation
"""
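Since the Yahoo endpoint is unreliable, the same analysis can be rehearsed offline on made-up data (a sketch; the series below are simulated, not real prices): generate two correlated price series, then compute percent changes and a rolling correlation just like above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range('2016-01-01', periods=500)
common = rng.normal(0, 0.01, 500)  # shared market factor -> correlation
aapl = pd.Series(100 * np.exp(np.cumsum(common + rng.normal(0, 0.005, 500))), index=idx)
msft = pd.Series(100 * np.exp(np.cumsum(common + rng.normal(0, 0.005, 500))), index=idx)
roll_corr = aapl.pct_change().rolling(250).corr(msft.pct_change())
print(roll_corr.dropna().iloc[-1])
```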
# We may also try a least square regression, also built in as a panda function
model = pd.ols(y=aapl_rets, x={'MSFT': msft_rets},window=256)
model.beta
model.beta['MSFT'].plot()
# Those two graphs looked similar. Let's plot them together
subplot(2,1,1)
pd.rolling_corr(aapl_rets, msft_rets, 250).plot()
title('Rolling correlations')
subplot(2,1,2)
model.beta['MSFT'].plot()
title('Least squares model')
"""
Explanation: Getting fancy.
Now, we can use some more sophisticated statistical tools, like least squares regression. However, I had to do some work to get Python to recognize these items. But I didn't work too hard, I just followed the error messages.
It became clear that I needed to go back to a terminal window to load in some packages. The two commands I had to type in were
- pip install statsmodels
- pip install patsy
'pip' is a 'python installer package' that installs packages of code onto your computer (or whatever machine is running your Python). I don't know much about them, but they are easy to find on the web.
End of explanation
"""
px = pdr.get_data_yahoo('SPY')['Adj Close']*10
px
plot(px)
"""
Explanation: more stocks
There are all kinds of neat info on the web. Here is the SPY exchange-traded fund, which tracks the S&P 500 index.
End of explanation
"""
|
mfouesneau/pyphot | examples/astropy_Sun_Vega.ipynb | mit | %matplotlib inline
import pylab as plt
import numpy as np
import sys
sys.path.append('../')
from pyphot import astropy as pyphot
from pyphot.astropy import Vega, Sun
"""
Explanation: pyphot - A tool for computing photometry from spectra
Some examples are provided in this notebook
Full documentation available at http://mfouesneau.github.io/docs/pyphot/
End of explanation
"""
# get the internal default library of passbands filters
lib = pyphot.get_library()
print("Library contains: ", len(lib), " filters")
"""
Explanation: The Sun and Vega
pyphot provides convenient interfaces to a spectral representation of the Sun and Vega.
In this notebook, we show how they can be used.
End of explanation
"""
# convert to magnitudes
import numpy as np
# We'll use Vega spectrum as example
vega = Vega()
f = lib['HST_WFC3_F110W']
# compute the integrated flux through the filter f
# note that it work on many spectra at once
fluxes = f.get_flux(vega.wavelength, vega.flux, axis=-1)
# convert to vega magnitudes
mags = -2.5 * np.log10(fluxes.value) - f.Vega_zero_mag
print("Vega magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes.value) - f.AB_zero_mag
print("AB magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes.value) - f.ST_zero_mag
print("ST magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
f.AB_zero_Jy, f.Vega_zero_Jy, f.ST_zero_Jy
"""
Explanation: Vega
Suppose one has a calibrated spectrum and wants to compute the vega magnitude through the HST WFC3 F110W passband,
End of explanation
"""
from pyphot.astropy import Unit
sun_obs = Sun(flavor='observed')
sun_th = Sun() # default is the theoretical spectrum
sun_th_10pc = Sun(distance=10 * Unit('pc'))
float((sun_th.distance / sun_th_10pc.distance) ** 2)
plt.loglog(sun_obs.wavelength.value, sun_obs.flux.value, label='observed')
plt.loglog(sun_th.wavelength.value, sun_th.flux.value, label='theoretical')
plt.legend();
plt.xlabel('wavelength [{0:s}]'.format(str(sun_obs.wavelength.unit)));
plt.ylabel('flux [{0:s}]'.format(str(sun_obs.flux.unit)));
"""
Explanation: The Sun
The internal reference to the Solar spectrum comes in two flavors: an observed one and a theoretical one.
By default, the interface is set to theoretical.
In addition, the Sun is at $1\,au$ but can be set to any distance. Below we instantiate the two flavors, plus the Sun as if it were at $10\, pc$ (absolute flux units).
End of explanation
"""
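The inverse-square dimming behind the 10 pc version can be sketched directly (a rough check, using an assumed au-to-parsec conversion): moving the Sun from 1 au to 10 pc reduces its flux by $(d_2/d_1)^2$, about 31.57 magnitudes, consistent with the Sun's apparent V of -26.76 mag and an absolute magnitude near +4.8.

```python
import math

au_in_pc = 1.0 / 206264.806  # 1 au expressed in parsecs
dmag = 2.5 * math.log10((10.0 / au_in_pc) ** 2)
print(round(dmag, 2))  # 31.57
```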
f = lib['GROUND_JOHNSON_V']
for name, sun in zip(('observed', 'theoretical', 'th. 10pc'), (sun_obs,sun_th, sun_th_10pc)):
flux = f.get_flux(sun.wavelength, sun.flux)
vegamag = f.Vega_zero_mag
print('{0:12s} {1:0.5e} {2:+3.4f}'.format(name, flux.value, -2.5 * np.log10(flux.value) - vegamag))
filter_names = ['GROUND_JOHNSON_B', 'GROUND_JOHNSON_V', 'GROUND_BESSELL_J', 'GROUND_BESSELL_K']
filter_names += lib.find('GaiaDR2')
filters = lib.load_filters(filter_names, lamb=sun_th.wavelength)
mags = {}
for name, fn in zip(filter_names, filters):
flux = fn.get_flux(sun_th.wavelength, sun_th.flux)
vegamag = fn.Vega_zero_mag
mag = -2.5 * np.log10(flux.value) - vegamag
mags[name] = mag
print('{0:>25s} {1:+3.4f} mag'.format(name, mag))
colors = (('GROUND_JOHNSON_B', 'GROUND_JOHNSON_V'),
('GROUND_JOHNSON_V', 'GROUND_BESSELL_K'),
('GROUND_BESSELL_J', 'GROUND_BESSELL_K'),
('GaiaDR2_BP', 'GaiaDR2_RP'),
('GaiaDR2_BP', 'GaiaDR2_G'),
('GaiaDR2_G', 'GaiaDR2_RP'),
('GaiaDR2v2_BP', 'GaiaDR2v2_RP'),
('GaiaDR2v2_BP', 'GaiaDR2v2_G'),
('GaiaDR2v2_G', 'GaiaDR2v2_RP'),
('GaiaDR2_weiler_BPbright', 'GaiaDR2_weiler_RP'),
('GaiaDR2_weiler_BPfaint', 'GaiaDR2_weiler_RP'),
('GaiaDR2_weiler_BPbright', 'GaiaDR2_weiler_G'),
('GaiaDR2_weiler_BPfaint', 'GaiaDR2_weiler_G'),
('GaiaDR2_weiler_G', 'GaiaDR2_weiler_RP'))
color_values = {}
for color in colors:
color_values[color] = mags[color[0]] - mags[color[1]]
print('{0:>25s} - {1:<25s} = {2:3.4f} mag'.format(color[0], color[1], mags[color[0]] - mags[color[1]]))
"""
Explanation: One can see the differences between the two flavors.
The theoretical spectrum is scaled to match the observed spectrum from 1.5 - 2.5 microns, and then it is used where the observed spectrum ends. The theoretical model of the Sun from the Kurucz '93 atlas uses the following parameters when the Sun is at 1 au.
|log_Z | T_eff | log_g | V$_{Johnson}$ |
|------|-------|-------|---------------|
|+0.0 | 5777 |+4.44 | -26.76 |
The Sun is also known to have a Johnson V (vega-)magnitude of -26.76 mag.
Let's verify this.
End of explanation
"""
|
ddtm/dl-course | Seminar9/Bonus-seminar.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Deep learning for Natural Language Processing
Simple text representations, bag of words
Word embedding and... not just another word2vec this time
1-dimensional convolutions for text
Aggregating several data sources "the hard way"
Solving a ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with the technical part.
NLTK
You will require nltk v3.2 to solve this assignment
It is really important that the version is 3.2, otherwize russian tokenizer might not work
Install/update
* sudo pip install --upgrade nltk==3.2
* If you don't remember when your last pip upgrade was, sudo pip install --upgrade pip
If for some reason you can't or won't switch to nltk v3.2, just make sure that Russian words are tokenized properly with RegexpTokenizer.
End of explanation
"""
df = pd.read_csv("./Train_rev1.csv",sep=',')
print df.shape, df.SalaryNormalized.mean()
df[:5]
"""
Explanation: Dataset
Ex-kaggle-competition on job salary prediction
Original contest - https://www.kaggle.com/c/job-salary-prediction
Download
Go here and download as usual
CSC cloud: data should already be here somewhere, just poke the nearest instructor.
What's inside
Different kinds of features:
* 2 text fields - title and description
* Categorical fields - contract type, location
Only 1 binary target whether or not such advertisement contains prohibited materials
* criminal, misleading, human reproduction-related, etc
* diving into the data may result in prolonged sleep disorders
End of explanation
"""
from nltk.tokenize import RegexpTokenizer
from collections import Counter,defaultdict
tokenizer = RegexpTokenizer(r"\w+")
#Dictionary of tokens
token_counts = Counter()
#All texts
all_texts = np.hstack([df.FullDescription.values,df.Title.values])
#Compute token frequencies
for s in all_texts:
if type(s) is not str:
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
for token in tokens:
token_counts[token] +=1
"""
Explanation: Tokenizing
First, we create a dictionary of all existing words.
Assign each word a number - its ID
End of explanation
"""
#Word frequency distribution, just for kicks
_=plt.hist(token_counts.values(),range=[0,50],bins=50)
#Select only the tokens that had at least min_count occurrences in the corpus.
#Use token_counts.
min_count = 5
tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset>
token_to_id = {t:i+1 for i,t in enumerate(tokens)}
null_token = "NULL"
token_to_id[null_token] = 0
print "# Tokens:",len(token_to_id)
if len(token_to_id) < 10000:
print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you know what you're doing, ofc"
if len(token_to_id) > 100000:
print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc"
"""
Explanation: Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora.
Again, if you want to beat Kaggle competition metrics, consider doing something better.
End of explanation
"""
def vectorize(strings, token_to_id, max_len=150):
token_matrix = []
for s in strings:
if type(s) is not str:
token_matrix.append([0]*max_len)
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len]
token_ids += [0]*(max_len - len(token_ids))
token_matrix.append(token_ids)
return np.array(token_matrix)
desc_tokens = vectorize(df.FullDescription.values,token_to_id,max_len = 500)
title_tokens = vectorize(df.Title.values,token_to_id,max_len = 15)
"""
Explanation: Replace words with IDs
Set a maximum length for titles and descriptions.
* If a string is longer than that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j - is an identifier of word j within sample i
End of explanation
"""
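A tiny illustration with a toy vocabulary (not the real token_to_id): each string becomes a fixed-length row of token ids, cropped if too long and padded with 0 (the NULL token) if too short.

```python
toy_vocab = {'senior': 1, 'python': 2, 'developer': 3}

def to_ids(words, max_len=5):
    # Map words to ids (0 for unknown), crop to max_len, pad with 0.
    ids = [toy_vocab.get(w, 0) for w in words][:max_len]
    return ids + [0] * (max_len - len(ids))

print(to_ids(['senior', 'python', 'developer']))  # [1, 2, 3, 0, 0]
print(len(to_ids(['python'] * 10)))               # 5 (cropped)
```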
print "Matrix size:",title_tokens.shape
for title, tokens in zip(df.Title.values[:3],title_tokens[:3]):
print title,'->', tokens[:10],'...'
"""
Explanation: Data format examples
End of explanation
"""
#One-hot-encoded category and subcategory
from sklearn.feature_extraction import DictVectorizer
categories = []
data_cat = df[["Category","LocationNormalized","ContractType","ContractTime"]]
categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample]
vectorizer = DictVectorizer(sparse=False)
df_non_text = vectorizer.fit_transform(categories)
df_non_text = pd.DataFrame(df_non_text,columns=vectorizer.feature_names_)
"""
Explanation: As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network
Non-sequences
Some data features are categorical data. E.g. location, contract type, company
They require a separate preprocessing step.
End of explanation
"""
#Target variable - whether or not sample contains prohibited material
target = df.is_blocked.values.astype('int32')
#Preprocessed titles
title_tokens = title_tokens.astype('int32')
#Preprocessed tokens
desc_tokens = desc_tokens.astype('int32')
#Non-sequences
df_non_text = df_non_text.astype('float32')
#Split into training and test set.
#Difficulty selector:
#Easy: split randomly
#Medium: split by companies, make sure no company is in both train and test set
#Hard: do whatever you want, but score yourself using kaggle private leaderboard
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables>
"""
Explanation: Split data into training and test
End of explanation
"""
save_prepared_data = True #save
read_prepared_data = False #load
#but not both at once
assert not (save_prepared_data and read_prepared_data)
if save_prepared_data:
print "Saving preprocessed data (may take up to 3 minutes)"
import pickle
data_tuple = (title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts)
with open("preprocessed_data.pcl",'w') as fout:
pickle.dump(data_tuple,fout)
with open("token_to_id.pcl",'w') as fout:
pickle.dump(token_to_id,fout)
print "done"
elif read_prepared_data:
print "Reading saved data..."
import pickle
with open("preprocessed_data.pcl",'r') as fin:
data_tuple = pickle.load(fin)
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple
with open("token_to_id.pcl",'r') as fin:
token_to_id = pickle.load(fin)
#Re-importing libraries to allow staring noteboook from here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print "done"
"""
Explanation: Save preprocessed data [optional]
The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
Highly recommended if you have less than 1.5GB RAM left
To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
End of explanation
"""
#libraries
import lasagne
from theano import tensor as T
import theano
#3 inputs and a reference output
title_token_ids = T.matrix("title_token_ids",dtype='int32')
desc_token_ids = T.matrix("desc_token_ids",dtype='int32')
categories = T.matrix("categories",dtype='float32')
target_y = T.vector("is_blocked",dtype='float32')
"""
Explanation: Train the monster
Since we have several data sources, our neural network may differ from what you are used to working with.
Separate input for titles
cnn+global max or RNN
Separate input for description
cnn+global max or RNN
Separate input for categorical features
Few dense layers + some black magic if you want
These three inputs must be blended somehow - concatenated or added.
Output: a simple regression task
End of explanation
"""
title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids)
descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids)
cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories)
# Descriptions
#word-wise embedding. We recommend starting from around 64 and increasing it once you are certain everything works.
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp,
input_size=len(token_to_id)+1,
output_size=?)
#reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
descr_nn = <1D convolution over embedding, maybe several ones in a stack>
#pool over time
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max)
#Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way
#1dconv -> 1d max pool ->1dconv and finally global pool
# Titles
title_nn = <Process titles somehow (title_inp)>
# Non-sequences
cat_nn = <Process non-sequences(cat_inp)>
nn = <merge three layers into one (e.g. lasagne.layers.concat) >
nn = lasagne.layers.DenseLayer(nn,your_lucky_number)
nn = lasagne.layers.DropoutLayer(nn,p=maybe_use_me)
nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear)
"""
Explanation: NN architecture
End of explanation
"""
#All trainable params
weights = lasagne.layers.get_all_params(nn,trainable=True)
#Simple NN prediction
prediction = lasagne.layers.get_output(nn)[:,0]
#loss function
loss = lasagne.objectives.squared_error(prediction,target_y).mean()
#Weight optimization step
updates = <your favorite optimizer>
"""
Explanation: Loss function
The standard way:
prediction
loss
updates
training and evaluation functions
End of explanation
"""
#deterministic version
det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0]
#equivalent loss function
det_loss = <an exercise in copy-pasting and editing>
"""
Explanation: Deterministic prediction
In case we use stochastic elements, e.g. dropout or noise,
compile a separate set of functions with deterministic prediction (deterministic=True).
Unless you think there's no need for dropout there, of course. By the way, is there?
End of explanation
"""
train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates)
eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction])
"""
Explanation: Coffee-lation
End of explanation
"""
# Our good old minibatch iterator now supports an arbitrary number of arrays (X,y,z)
def iterate_minibatches(*arrays,**kwargs):
batchsize=kwargs.get("batchsize",100)
shuffle = kwargs.get("shuffle",True)
if shuffle:
indices = np.arange(len(arrays[0]))
np.random.shuffle(indices)
for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield [arr[excerpt] for arr in arrays]
"""
Explanation: Training loop
The regular way with loops over minibatches
Since the dataset is huge, we define an epoch as some fixed number of samples instead of the whole dataset
End of explanation
"""
from sklearn.metrics import mean_squared_error,mean_absolute_error
n_epochs = 100
batch_size = 100
minibatches_per_epoch = 100
for i in range(n_epochs):
#training
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch:break
loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Train:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch: break
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Val:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
print "If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. "
"""
Explanation: Tweaking guide
batch_size - how many samples are processed per function call
optimization gets slower, but more stable, as you increase it.
May consider increasing it halfway through training
minibatches_per_epoch - max amount of minibatches per epoch
Does not affect training. A smaller value means more frequent and less stable printing.
Setting it to less than 10 is only meaningful if you want to make sure your NN does not break down after one epoch
n_epochs - total amount of epochs to train for
n_epochs = 10**10 and manual interrupting is still an option
Tips:
With small minibatches_per_epoch, network quality may jump up and down for several epochs
Plotting metrics over training time may be a good way to analyze which architectures work better.
Once you are sure your network isn't going to crash, it's worth letting it train for a few hours of an average laptop's time to see its true potential
End of explanation
"""
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Scores:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
"""
Explanation: Final evaluation
Evaluate network over the entire test set
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a/texte_langue.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.2 - Guessing the language of a text
How can you guess the language of a text without being able to read that language? This notebook covers dictionaries, files and plots.
End of explanation
"""
def read_file(filename):
# ...
return something
"""
Explanation: The goal is to tell an English text apart from a French one without having to read it. The first reflex would be to look for the presence of typically English or French words. That direction is probably a good choice when the text in question is a literary work. But on the Internet, content frequently mixes the two languages: the presence of a given English word is no longer very discriminating. It is no longer so obvious to label a document as English just because English words show up everywhere.
We are no longer trying to determine the language of a text, but rather its majority language. It would still be possible to count the words of each language using a small dictionary of English and French words. The majority language would be the one whose words are the most frequent. But building a dictionary is tedious, for a start. Next, it would have to contain words present in most texts. We would also have to deal with words common to both languages. For these reasons, it seems preferable to try a simpler direction first, even if it means coming back to this one later.
The simpler idea consists in counting letter frequencies. We expect some letters to be more frequent in an English text than in a French one.
Q1: reading a file
We start by downloading a text from the Gutenberg site and writing a program to read it.
End of explanation
"""
def histogram(texte):
# ...
return something
"""
Explanation: Q2: histogram
Build a histogram counting the occurrences of each letter in this text. That is, write a function that takes a string as its argument and returns a dictionary; it is up to you what the keys and values will be.
End of explanation
"""
def normalize(hist):
# ...
return something
"""
Explanation: Q3: normalization
An unknown text contains 10 occurrences of the letter I. What can you conclude from that? Do you think the frequencies of the letter I in a long text and in a short one are comparable? Write a function that normalizes all the values of the dictionary to one.
End of explanation
"""
from pyensae.datasource import download_data
texts = download_data("articles.zip")
texts[:5]
"""
Explanation: Q4: computation
Apply your function to an English text and then to a French one. What would you suggest as an indicator to distinguish a French text from an English one? Compute your indicator for ten texts in each language. You can use the following ten texts: articles.zip.
End of explanation
"""
|
shareactorIO/pipeline | source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 1 - Hello Spark/Lab 1 - Hello Spark - Instructor.ipynb | apache-2.0 | #Step 1 - sc is Spark Context, Execute Spark Context to see if its active in cluster
#Note: Notice the programming language used
sc
#Step 1 - The spark context has a .version available to return the version of the spark driver application
#Note: Different versions of spark application support additional functionality such as DataFrames, Streaming and Machine Learning.
#Invoke the spark context version command
sc.version
"""
Explanation: Lab 1 - Hello Spark
This Lab will show you how to work with Apache Spark using Python
Step 1 - Working with Spark Context
Check what version of Apache Spark is setup within this lab notebook.
Apache Spark has a driver application that launches various parallel applications on a cluster. The driver application uses a Spark context to provide a programming interface for interacting with the driver. This is known as a Spark Context, and it supports multiple programming languages: Python, Scala and Java.
In step 1 - Invoke the spark context and extract what version of the spark driver application is running.
Note: In the cloud-based notebook environment used for this lab, the Spark context is predefined
End of explanation
"""
#Step 2 - Create RDD Numbers
#Create numbers 1 to 10 into a variable
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
#Place the numbers into an rdd called x_nbr_rdd
#Note: the spark context has a parallelize command to create RDD from data
#Hint... sc.parallelize(Data here, pass in variable)
x_nbr_rdd = sc.parallelize(x)
#Step 2 - Extract first line
#Notice: after running the code you did not receive a result.
#This is because you haven't yet invoked an Action command.
#Note: RDDs support first() command which returns the first object and is also an action command
#Invoke first on your RDD - Action
x_nbr_rdd.first()
#Step 2 - Extract first 5 lines
#Note: RDDs support take() command which returns x number of objects that you pass into the command
#Invoke take() and extract first 5 lines
x_nbr_rdd.take(5)
#Step 2 - Create RDD String, Extract first line
#Create a string "Hello Spark!"
y = ["Hello Spark!"]
#Place the string value into an rdd called y_str_rdd
y_str_rdd = sc.parallelize(y)
#Return the first value in your RDD - Action
y_str_rdd.first()
"""
Explanation: Step 2 - Working with Resilient Distributed Datasets
Create multiple RDDs and return results
Apache Spark uses an abstraction for working with data called RDDs - Resilient Distributed Datasets. An RDD is simply an immutable distributed collection of objects. In Apache Spark all work is expressed by either creating new RDDs, transforming existing RDDs or using RDDs to compute results. When working with RDDs, the Spark driver application automatically distributes the work across your cluster.
In Step 2 - Create RDD with numbers 1 to 10,
Extract first line,
Extract first 5 lines,
Create RDD with string "Hello Spark",
Extract first line.
Invoke an Action to return data from your RDDs.
Note: RDD commands are either transformations or actions. Transformations are commands that do not initiate a computation requiring parallel application execution on a Spark cluster.
End of explanation
"""
#Step 3 - Create RDD String, Extract first line
#Create a string with many words including "Hello" and "Spark"
z = ["Hello World!, Hello Universe!, I love Spark"]
#Place the string value into an rdd called z_str_rdd
z_str_rdd = sc.parallelize(z)
#Extract first line
z_str_rdd.first()
#Step 3 - Create RDD with object for each word, Extract first 7 words
#Note: To analyze your string you have to break it down into multiple objects.
#One way to do that is to map each word into elements/lines in your RDD
#An RDD transformation called map takes in an RDD and runs a command on it, for example a split command to split elements by a value
#An RDD transformation called flatMap does the same as map but returns a list of elements (0 or more) as an iterator
#Hint... flatMap(lambda line: line.split(" "))
#Create a new RDD z_str2_rdd using this transformation
z_str2_rdd = z_str_rdd.flatMap(lambda line: line.split(" "))
#Extract first 7 words - Action
z_str2_rdd.take(7)
#Step 3 - Count of "Hello" words
#Note: You can use filter command to create a new RDD from another RDD on a filter criteria
#Hint... filter syntax is .filter(lambda line: "Filter Criteria Value" in line)
#Create a new RDD z_str3_rdd for all "Hello" words in corpus of words
z_str3_rdd = z_str2_rdd.filter(lambda line: "Hello" in line)
#Note: Use a simple python print command to add string to your spark results
#Hint... repr() will represent a number as string
#Hint... Syntax: print "Text:" + repr(Spark commands)
#Extract count of values in the new RDD which represents number of "Hello" words in corpus
print "The count of words 'Hello' in: " + repr(z_str_rdd.first())
print "Is: " + repr(z_str3_rdd.count())
#Step 3 - Count of "Spark" words
#Create a new RDD z_str4_rdd for all "Hello" words in corpus of words
z_str4_rdd = z_str2_rdd.filter(lambda line: "Spark" in line)
#Extract count of values in the new RDD which represents number of "Spark" words in corpus
print "The count of words 'Spark' in: " + repr(z_str_rdd.first())
print "Is: " + repr(z_str4_rdd.count())
"""
Explanation: Step 3 - Working with Strings
As you can see, you created a single string "Hello Spark!" and the returned value of that single object is "Hello Spark!". If we wanted to work with a string corpus of words and run analysis on each of the words, we would need to map each word into many objects (or lines) in an RDD.
In Step 3 - Create a larger string of words that include "Hello" and "Spark",
Map the string into an RDD as a collection of words,
extract the count of words "Hello" and "Spark" found in your RDD.
End of explanation
"""
|
calroc/joypy | docs/Zipper.ipynb | gpl-3.0 | from notebook_preamble import J, V, define
"""
Explanation: This notebook is about using the "zipper" with joy datastructures. See the Zipper wikipedia entry or the original paper: "FUNCTIONAL PEARL The Zipper" by Gérard Huet
Given a datastructure on the stack we can navigate through it, modify it, and rebuild it using the "zipper" technique.
Preamble
End of explanation
"""
J('[1 [2 [3 4 25 6] 7] 8]')
"""
Explanation: Trees
In Joypy there aren't any complex datastructures, just ints, floats, strings, Symbols (strings that are names of functions) and sequences (aka lists, aka quoted literals, aka aggregates, etc...), but we can build trees out of sequences.
End of explanation
"""
define('z-down == [] swap uncons swap')
define('z-up == swons swap shunt')
define('z-right == [swons] cons dip uncons swap')
define('z-left == swons [uncons swap] dip swap')
V('[1 [2 [3 4 25 6] 7] 8] z-down')
V('[] [[2 [3 4 25 6] 7] 8] 1 z-right')
J('[1] [8] [2 [3 4 25 6] 7] z-down')
J('[1] [8] [] [[3 4 25 6] 7] 2 z-right')
J('[1] [8] [2] [7] [3 4 25 6] z-down')
J('[1] [8] [2] [7] [] [4 25 6] 3 z-right')
J('[1] [8] [2] [7] [3] [25 6] 4 z-right')
J('[1] [8] [2] [7] [4 3] [6] 25 sqr')
V('[1] [8] [2] [7] [4 3] [6] 625 z-up')
J('[1] [8] [2] [7] [3 4 625 6] z-up')
J('[1] [8] [2 [3 4 625 6] 7] z-up')
"""
Explanation: Zipper in Joy
Zippers work by keeping track of the current item, the already-seen items, and the yet-to-be seen items as you traverse a datastructure (the datastructure used to keep track of these items is the zipper.)
In Joy we can do this with the following words:
z-down == [] swap uncons swap
z-up == swons swap shunt
z-right == [swons] cons dip uncons swap
z-left == swons [uncons swap] dip swap
Let's use them to change 25 into 625. The first time a word is used I show the trace so you can see how it works. If we were going to use these a lot it would make sense to write Python versions for efficiency, but see below.
End of explanation
"""
V('[1 [2 [3 4 25 6] 7] 8] [[[[[[sqr] dipd] infra] dip] infra] dip] infra')
"""
Explanation: dip and infra
In Joy we have the dip and infra combinators which can "target" or "address" any particular item in a Joy tree structure.
End of explanation
"""
define('Z == [[] cons cons] step i')
"""
Explanation: If you read the trace carefully you'll see that about half of it is the dip and infra combinators de-quoting programs and "digging" into the subject datastructure. Instead of maintaining temporary results on the stack, they are pushed into the pending expression (continuation). When sqr has run, the rest of the pending expression rebuilds the datastructure.
Z
Imagine a function Z that accepts a sequence of dip and infra combinators, a quoted program [Q], and a datastructure to work on. It would effectively execute the quoted program as if it had been embedded in a nested series of quoted programs, e.g.:
[...] [Q] [dip dip infra dip infra dip infra] Z
-------------------------------------------------------------
[...] [[[[[[[Q] dip] dip] infra] dip] infra] dip] infra
The Z function isn't hard to make.
End of explanation
"""
V('1 [2 3 4] Z')
"""
Explanation: Here it is in action in a simplified scenario.
End of explanation
"""
J('[1 [2 [3 4 25 6] 7] 8] [sqr] [dip dip infra dip infra dip infra] Z')
"""
Explanation: And here it is doing the main thing.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_dipole_fit.ipynb | bsd-3-clause | from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
"""
Explanation: ============================================================
Source localization with equivalent current dipole (ECD) fit
============================================================
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see:
https://gist.github.com/Eric89GXL/ca55f791200fe1dc3dd2
End of explanation
"""
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain with the MRI image.
dip.plot_locations(fname_trans, 'sample', subjects_dir, mode='orthoview')
"""
Explanation: Let's localize the N100m (using MEG only)
End of explanation
"""
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, cov=None, nave=np.inf)
# find time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
print('Highest GOF %0.1f%% at t=%0.1f ms with confidence volume %0.1f cm^3'
% (dip.gof[best_idx], best_time * 1000,
dip.conf['vol'][best_idx] * 100 ** 3))
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4])
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False, time_unit='s')
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, -pred_evoked], weights='equal')
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2], **plot_params)
plt.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
"""
Explanation: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
End of explanation
"""
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot(time_unit='s')
"""
Explanation: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval
End of explanation
"""
|
caseresearch/code-review | tutorials/jupyter_notebook_emcee/emcee_notebook.ipynb | mit | %matplotlib inline
"""
Explanation: The rad-ness of notebooks
I use notebooks more often than I use an executable .py script. This is partially because notebooks were my first major introduction to python, but my continued use relates back to the fact that it allows me to break up problems I'm solving into different blocks.
Either install Anaconda (which comes with jupyter notebooks), or use pip install jupyter for Python 2, or pip3 install jupyter for Python 3
Then it's as simple as typing jupyter notebook into your terminal to launch the application!
What problem are you trying to solve?
A major advantage of notebooks is that you can utilise Markdown and $\LaTeX$ to incorperate discussion and directions into your code. This will likely make it more readable for another user (or even your future self!)
In this example, we're going to work through an emcee example, where we fit a model to some data. The most common example is a linear fit to data with errors, but I'm going to cahnge it up a little to prove you can use it for models other than straight lines. Let's examine how well we can fit the general sinusiod:
$$y(t) = A\sin(2\pi ft+ \phi),$$
where $A$ is the amplitude, $f$ is the frequency, $t$ is time, and $\phi$ is the phase.
Setting up your notebook
Commands beginning with '%' are known as magic commands in IPython. Depending on what you'd like to do, there are any number of useful magic commands. By far the most common command I use is %matplotlib inline which incorporates plots directly into the notebook, rather than opening them in a new window:
End of explanation
"""
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import emcee
from __future__ import division
matplotlib.rcParams.update({'font.size': 16,
'xtick.major.size': 12,
'ytick.major.size': 12,
'xtick.major.width': 1,
'ytick.major.width': 1,
'ytick.minor.size': 5,
'xtick.minor.size': 5,
'axes.linewidth': 1,
'font.family': 'serif',
'font.serif': 'Times New Roman',
'text.usetex': True})
"""
Explanation: Often I reserve a single notebook cell for all of my imports, similar to how you would normally import all of your libraries at the beginning of the program. I can always come back to add more. This is also usually where I configure the general look I want my plots to have, using the matplotlib.rcParams.update() function.
End of explanation
"""
A_true = 2
f_true = 0.2
phi_true = np.pi/4.
param_names = ["$A$", "$f$","$\phi$"]
"""
Explanation: Generating data
Often, I would read in data from a file but I'll just generate some here for simplicity. Let's define the "true" parameters of our model and attempt to recover these from data with random errors.
End of explanation
"""
N = 50
t = np.sort(10*np.random.rand(N))
t_lin = np.linspace(0,10,100)
y_true = A_true * np.sin(2*np.pi*f_true*t + phi_true)
displacement_err = 0.5*np.random.rand(N)*np.sqrt(max(abs(y_true)))
y_alt = y_true + displacement_err*np.random.randn(N)
plt.errorbar(t, y_alt, yerr=displacement_err, fmt=".k")
plt.plot(t_lin, A_true * np.sin(2*np.pi*f_true*t_lin + phi_true), "-k", lw=1, alpha=0.6)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.tight_layout()
plt.show()
"""
Explanation: Now we can generate some synthetic data. We'll need to add some noise as well.
End of explanation
"""
def lnlike(theta, t, y, yerr):
A, f, phi = theta
model = A * np.sin(2*np.pi*f*t + phi)
inv_sigma2 = 1.0/(yerr**2)
return -0.5*(np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2) + np.log(2.*np.pi)))
"""
Explanation: Establishing the model
Now define the likelihood function. This is the probability of our data given our model (including its free parameters). Traditionally, we define the logarithm of the likelihood function.
$$ L_i = \frac{1}{\sqrt{2\pi \sigma_i^2}} \exp \left( -\frac{1}{2} \frac{(x_i - \mu_i)^2}{\sigma_i^2} \right)$$
Likelihood values must then be multiplied together; alternatively, their logarithms may be summed:
$$ \ln(L) = -\frac{1}{2} \sum_i \left( \ln(2\pi) + \ln(\sigma_i^2) + \frac{(x_i - \mu_i)^2}{\sigma_i^2} \right) $$
End of explanation
"""
def lnprior(theta):
A, f, phi = theta
if (0 < phi) and (phi < 2.*np.pi) and (0 < A) and (0 < f):
return 0.0
else:
return -np.inf
"""
Explanation: We may wish to impose priors on our parameters. The prior represents any information you already know about them. For example, perhaps we are modelling some sort of physical system, and we know that the amplitude cannot be negative. Thus, we would wish to exclude any evaluations of the likelihood there, since we know this would give an unphysical result. Let us impose the condition $A>0$. We also know that the phase must satisfy $0 \le \phi < 2\pi$.
End of explanation
"""
def lnpost(theta, t, y, yerr):
lnp = lnprior(theta)
if not np.isfinite(lnp):
return -np.inf
return lnp + lnlike(theta, t, y, yerr)
"""
Explanation: We can now define the posterior distribution, which is proportional to the product of the likelihood and the prior
End of explanation
"""
ndim = 3
nwalkers = 100
initial_guess = [2, 0.2, np.pi/4.]
pos = [initial_guess + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnpost, args=(t, y_alt, displacement_err))
"""
Explanation: Initialising emcee
The key parameters for emcee (aside from the likelihood defined earlier) are the number of dimensions and number of walkers. $n_{dim}$ is given by the number of free parameters (in our case, 3), and $n_{walkers}$ is the number of chains we'd like to generate. We'll also need to specify an initial starting position for every walker. A good approach is to pick a sensible estimate and distribute the walkers randomly around this point
End of explanation
"""
sampler.run_mcmc(pos, 5000)
burnin = 1000
samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))
"""
Explanation: Run emcee
End of explanation
"""
import corner
fig = corner.corner(samples, truths=initial_guess, labels=param_names, verbose=True)
"""
Explanation: Plotting options
My favourite thing about notebooks is that I no longer have to run any of the previous cells in order to change the plots that I'm making. Let's examine two plotting approaches:
Corner
End of explanation
"""
from chainconsumer import ChainConsumer
c = ChainConsumer()
c.add_chain(samples, parameters=param_names, name="samples", walkers=nwalkers)
c.configure(statistics='cumulative', flip=True, diagonal_tick_labels=False)
fig = c.plot(figsize=2.5, truth=initial_guess)
"""
Explanation: ChainConsumer
End of explanation
"""
print c.diagnostic_gelman_rubin()
"""
Explanation: Similarly, now that I've defined the instance of chain consumer, I can ask for statistics in a new cell without rerunning the plot!
Gelman-Rubin Statistic
This is a measure of chain convergence
End of explanation
"""
pythonarr = c.get_correlations()
latextab = c.get_correlation_table()
print pythonarr
print latextab
"""
Explanation: Parameter correlation
We can ask for a Python array of the correlations, or a LaTeX table that could go straight into a paper!
End of explanation
"""
|
kdmurray91/kwip-experiments | writeups/misc/sklearn-rice/clustering.ipynb | mit | cl = AgglomerativeClustering(16, compute_full_tree=True, affinity='precomputed', linkage='complete')
np.unique(cl.fit_predict(wip_0.data), return_counts=True)
"""
Explanation: sklearn Agglomerative
I can't get this to work properly. The values returned by fit_predict below should essentially be
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]]
i.e. clusters are samples 0:15, with 6 runs each
End of explanation
"""
def does_cluster(distmat, ranges):
hcl = hierarchy.complete(distmat.condensed_form())
ids = distmat.ids
tree = skbio.TreeNode.from_linkage_matrix(hcl, ids)
tree = ete3.Tree.from_skbio(tree)
mono = 0
counts = Counter()
for start, stop in ranges:
m, phy, _ = tree.check_monophyly(ids[start:stop], target_attr="name", unrooted=True)
mono += 1 if m else 0
#print(start, '-', stop, "---", phy)
counts[phy] += 1
return mono, counts
def count_clustering(groups):
wipc = Counter()
ipc = Counter()
for wip, ip in pairs:
_, c = does_cluster(wip, groups)
wipc.update(c)
_, c = does_cluster(ip, groups)
ipc.update(c)
print(sum(wipc.values()), "groups")
return wipc, ipc
print(count_clustering(indjap))
# capture the counters from the last grouping so the totals below are defined
wipc, ipc = count_clustering(samples)
print((wipc, ipc))
print(sum(wipc.values()))
print(sum(ipc.values()))
"""
Explanation: Scipy Hclust
End of explanation
"""
|
gururajl/deep-learning | dcgan-svhn/DCGAN.ipynb | mit | %matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
    mn, mx = feature_range  # avoid shadowing the builtins min/max
    x = x * (mx - mn) + mn
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
"""
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers, I suggest starting with 16, 32, or 64 filters in the first layer, then doubling the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
        ax.set_adjustable('box')  # 'box-forced' was removed in newer matplotlib; 'box' is the replacement
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
                    # Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
|
gregcaporaso/short-read-tax-assignment | ipynb/mock-community/taxonomy-assignment-template.ipynb | bsd-3-clause | from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
project_dir = expandvars("$HOME/Desktop/projects/tax-credit")
analysis_name= "mock-community"
data_dir = join(project_dir, "data", analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/projects/tax-credit/data/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
"""
Explanation: Data generation: using python to sweep over methods and parameters
This notebook serves as a template for using python to generate and run a list of commands. To use, follow these instructions:
1) select File -> Make a Copy... from the toolbar above to copy this notebook and provide a new name describing the method(s) that you are testing.
2) Modify file paths in cell 2 of Environment preparation to match the directory structure on your system.
3) Select the datasets you wish to test under Preparing data set sweep; choose from the list of datasets included in tax-credit, or add your own.
4) Prepare methods and command template. Enter your method / parameter combinations as a dictionary to method_parameters_combinations in cell 1, then provide a command_template in cell 2. This notebook example assumes that the method commands are passed to the command line, but the command list generated by parameter_sweep() can also be directed to the python interpreter, as shown in this example. Check command list in cell 3, and set number of jobs and joblib parameters in cell 4.
5) Run all cells and hold onto your hat.
For an example of how to test classification methods in this notebook, see taxonomy assignment with Qiime 1.
Environment preparation
End of explanation
"""
dataset_reference_combinations = [
('mock-1', 'gg_13_8_otus'), # formerly S16S-1
('mock-2', 'gg_13_8_otus'), # formerly S16S-2
('mock-3', 'gg_13_8_otus'), # formerly Broad-1
('mock-4', 'gg_13_8_otus'), # formerly Broad-2
('mock-5', 'gg_13_8_otus'), # formerly Broad-3
('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1
('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2
('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3
('mock-9', 'unite_20.11.2016'), # formerly ITS1
('mock-10', 'unite_20.11.2016'), # formerly ITS2-SAG
('mock-12', 'gg_13_8_otus'), # Extreme
('mock-13', 'gg_13_8_otus_full16S'), # kozich-1
('mock-14', 'gg_13_8_otus_full16S'), # kozich-2
('mock-15', 'gg_13_8_otus_full16S'), # kozich-3
('mock-16', 'gg_13_8_otus'), # schirmer-1
]
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016' : (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim250.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
"""
Explanation: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep.
End of explanation
"""
method_parameters_combinations = {
'awesome-method-number-1': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0]},
}
"""
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
End of explanation
"""
command_template = "command_line_assignment -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.fna', output_name='rep_seqs_tax_assignments.txt')
"""
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
End of explanation
"""
print(len(commands))
commands[0]
"""
Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated.
End of explanation
"""
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
"""
Explanation: Finally, we run our commands.
End of explanation
"""
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
"""
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
"""
# precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
# method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
# move_results_to_repository(method_dirs, precomputed_results_dir)
"""
Explanation: Move result files to repository
Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells.
Uncomment and run when (and if) you want to move your new results to the tax-credit directory. Note that results needn't be in tax-credit to compare using the evaluation notebooks.
End of explanation
"""
|
nonmean/nonmean.github.io | _notebooks/2020-09-01-fastcore.ipynb | mit | #hide
! pip install -U git+git://github.com/fastai/fastcore@master
! pip install -U git+git://github.com/fastai/nbdev@master
! pip install -U numpy
from fastcore.foundation import *
from fastcore.meta import *
from fastcore.utils import *
from fastcore.test import *
from nbdev.showdoc import *
from fastcore.dispatch import typedispatch
from functools import partial
import numpy as np
import inspect
"""
Explanation: "fastcore: An Underrated Python Library"
A unique python library that extends the python programming language and provides utilities that enhance productivity.
- author: "<a href='https://twitter.com/HamelHusain'>Hamel Husain</a>"
- toc: false
- image: images/copied_from_nb/fastcore_imgs/td.png
- comments: true
- categories: [fastcore, fastai]
- permalink: /fastcore/
- badges: true
Background
I recently embarked on a journey to sharpen my python skills: I wanted to learn advanced patterns, idioms, and techniques. I started by reading books on advanced Python; however, the information didn't seem to stick without having somewhere to apply it. I also wanted the ability to ask questions of an expert while I was learning -- which is an arrangement that is hard to find! That's when it occurred to me: What if I could find an open source project that has fairly advanced python code and write documentation and tests? I made a bet that if I did this it would force me to learn everything very deeply, and the maintainers would be appreciative of my work and be willing to answer my questions.
And that's exactly what I did over the past month! I'm pleased to report that it has been the most efficient learning experience I've ever experienced. I've discovered that writing documentation forced me to deeply understand not just what the code does but also why the code works the way it does, and to explore edge cases while writing tests. Most importantly, I was able to ask questions when I was stuck, and maintainers were willing to devote extra time knowing that their mentorship was in service of making their code more accessible! It turns out the library I choose, fastcore is some of the most fascinating Python I have ever encountered as its purpose and goals are fairly unique.
For the uninitiated, fastcore is a library on top of which many fast.ai projects are built on. Most importantly, fastcore extends the python programming language and strives to eliminate boilerplate and add useful functionality for common tasks. In this blog post, I'm going to highlight some of my favorite tools that fastcore provides, rather than sharing what I learned about python. My goal is to pique your interest in this library, and hopefully motivate you to check out the documentation after you are done to learn more!
Why fastcore is interesting
Get exposed to ideas from other languages without leaving python: I’ve always heard that it is beneficial to learn other languages in order to become a better programmer. From a pragmatic point of view, I’ve found it difficult to learn other languages because I could never use them at work. Fastcore extends python to include patterns found in languages as diverse as Julia, Ruby and Haskell. Now that I understand these tools I am motivated to learn other languages.
You get a new set of pragmatic tools: fastcore includes utilities that will allow you to write more concise expressive code, and perhaps solve new problems.
Learn more about the Python programming language: Because fastcore extends the python programming language, many advanced concepts are exposed during the process. For the motivated, this is a great way to see how many of the internals of python work.
A whirlwind tour through fastcore
Here are some things you can do with fastcore that immediately caught my attention.
End of explanation
"""
def baz(a, b=2, c =3, d=4): return a + b + c
def foo(c, a, **kwargs):
return c + baz(a, **kwargs)
inspect.signature(foo)
"""
Explanation: Making **kwargs transparent
Whenever I see a function that has the argument `**kwargs`, I cringe a little. This is because it means the API is obfuscated and I have to read the source code to figure out what valid parameters might be. Consider the below example:
End of explanation
"""
def baz(a, b=2, c =3, d=4): return a + b + c
@delegates(baz) # this decorator will pass down keyword arguments from baz
def foo(c, a, **kwargs):
return c + baz(a, **kwargs)
inspect.signature(foo)
"""
Explanation: Without reading the source code, it might be hard for me to know that foo also accepts and additional parameters b and d. We can fix this with delegates:
End of explanation
"""
@delegates(baz, keep=True)
def foo(c, a, **kwargs):
return c + baz(a, **kwargs)
inspect.signature(foo)
"""
Explanation: You can customize the behavior of this decorator. For example, you can have your cake and eat it too by passing down your arguments and also keeping **kwargs:
End of explanation
"""
def basefoo(a, b=2, c =3, d=4): pass
@delegates(basefoo, but= ['d']) # exclude `d`
def foo(c, a, **kwargs): pass
inspect.signature(foo)
"""
Explanation: You can also exclude arguments. For example, we exclude argument d from delegation:
End of explanation
"""
class BaseFoo:
def __init__(self, e, c=2): pass
@delegates()  # since no argument was passed here, we delegate to the superclass
class Foo(BaseFoo):
def __init__(self, a, b=1, **kwargs): super().__init__(**kwargs)
inspect.signature(Foo)
"""
Explanation: You can also delegate between classes:
End of explanation
"""
class Test:
def __init__(self, a, b ,c):
self.a, self.b, self.c = a, b, c
"""
Explanation: For more information, read the docs on delegates.
Avoid boilerplate when setting instance attributes
Have you ever wondered if it was possible to avoid the boilerplate involved with setting attributes in __init__?
End of explanation
"""
class Test:
def __init__(self, a, b, c):
store_attr()
t = Test(5,4,3)
assert t.b == 4
"""
Explanation: Ouch! That was painful. Look at all the repeated variable names. Do I really have to repeat myself like this when defining a class? Not Anymore! Checkout store_attr:
End of explanation
"""
class Test:
def __init__(self, a, b, c):
store_attr(but=['c'])
t = Test(5,4,3)
assert t.b == 4
assert not hasattr(t, 'c')
"""
Explanation: You can also exclude certain attributes:
End of explanation
"""
class ParentClass:
def __init__(self): self.some_attr = 'hello'
class ChildClass(ParentClass):
def __init__(self):
super().__init__()
cc = ChildClass()
assert cc.some_attr == 'hello' # only accessible b/c you used super
"""
Explanation: There are many more ways of customizing and using store_attr than I highlighted here. Check out the docs for more detail.
P.S. you might be thinking that Python dataclasses also allow you to avoid this boilerplate. While true in some cases, store_attr is more flexible.{% fn 1 %}
{{ "For example, store_attr does not rely on inheritance, which means you won't get stuck using multiple inheritance when using this with your own classes. Also, unlike dataclasses, store_attr does not require python 3.7 or higher. Furthermore, you can use store_attr anytime in the object lifecycle, and in any location in your class to customize the behavior of how and when variables are stored." | fndetail: 1 }}
Avoiding subclassing boilerplate
One thing I hate about python is the super().__init__() boilerplate associated with subclassing. For example:
End of explanation
"""
class NewParent(ParentClass, metaclass=PrePostInitMeta):
def __pre_init__(self, *args, **kwargs): super().__init__()
class ChildClass(NewParent):
def __init__(self):pass
sc = ChildClass()
assert sc.some_attr == 'hello'
"""
Explanation: We can avoid this boilerplate by using the metaclass PrePostInitMeta. We define a new class called NewParent that is a wrapper around the ParentClass:
End of explanation
"""
@typedispatch
def f(x:str, y:str): return f'{x}{y}'
@typedispatch
def f(x:np.ndarray): return x.sum()
@typedispatch
def f(x:int, y:int): return x+y
"""
Explanation: Type Dispatch
Type dispatch, or multiple dispatch, allows you to change the way a function behaves based upon the input types it receives. This is a prominent feature in some programming languages like Julia. For example, here is a conceptual sketch of how multiple dispatch works in Julia, returning different values depending on the input types of x and y:
```julia
collide_with(x::Asteroid, y::Asteroid) = ...
# deal with asteroid hitting asteroid
collide_with(x::Asteroid, y::Spaceship) = ...
# deal with asteroid hitting spaceship
collide_with(x::Spaceship, y::Asteroid) = ...
# deal with spaceship hitting asteroid
collide_with(x::Spaceship, y::Spaceship) = ...
# deal with spaceship hitting spaceship
```
Type dispatch can be especially useful in data science, where you might allow different input types (i.e. Numpy arrays and Pandas dataframes) to a function that processes data. Type dispatch allows you to have a common API for functions that do similar tasks.
Unfortunately, Python does not support this out of the box. Fortunately, there is the @typedispatch decorator to the rescue. This decorator relies upon type hints in order to route inputs to the correct version of the function:
End of explanation
"""
f('Hello ', 'World!')
f(2,3)
f(np.array([5,5,5,5]))
"""
Explanation: Below is a demonstration of type dispatch at work for the function f:
End of explanation
"""
test_input = [1,2,3,4,5,6]
def f(arr, val):
"Filter a list to remove any values that are less than val."
return [x for x in arr if x >= val]
f(test_input, 3)
"""
Explanation: There are limitations of this feature, as well as other ways of using this functionality that you can read about here. In the process of learning about typed dispatch, I also found a python library called multipledispatch made by Mathhew Rocklin (the creator of Dask).
After using this feature, I am now motivated to learn languages like Julia to discover what other paradigms I might be missing.
A better version of functools.partial
functools.partial is a great utility that creates functions from other functions that lets you set default values. Lets take this function for example that filters a list to only contain values >= val:
End of explanation
"""
filter5 = partial(f, val=5)
filter5(test_input)
"""
Explanation: You can create a new function out of this function using partial that sets the default value to 5:
End of explanation
"""
filter5.__doc__
"""
Explanation: One problem with partial is that it removes the original docstring and replaces it with a generic docstring:
End of explanation
"""
filter5 = partialler(f, val=5)
filter5.__doc__
"""
Explanation: fastcore.utils.partialler fixes this, and makes sure the docstring is retained such that the new API is transparent:
End of explanation
"""
def add(arr, val): return [x + val for x in arr]
def arrsum(arr): return sum(arr)
# See the previous section on partialler
add2 = partialler(add, val=2)
transform = compose(filter5, add2, arrsum)
transform([1,2,3,4,5,6])
"""
Explanation: Composition of functions
A technique that is pervasive in functional programming languages is function composition, whereby you chain a bunch of functions together to achieve some kind of result. This is especially useful when applying various data transformations. Consider a toy example where I have three functions: (1) Removes elements of a list less than 5 (from the prior section) (2) adds 2 to each number (3) sums all the numbers:
End of explanation
"""
def fit(x, transforms:list):
"fit a model after performing transformations"
x = compose(*transforms)(x)
y = [np.mean(x)] * len(x) # it's a dumb model. Don't judge me
return y
# filters out elements < 5, adds 2, then predicts the mean
fit(x=[1,2,3,4,5,6], transforms=[filter5, add2])
"""
Explanation: But why is this useful? You might be thinking, I can accomplish the same thing with:
py
arrsum(add2(filter5([1,2,3,4,5,6])))
You are not wrong! However, composition gives you a convenient interface in case you want to do something like the following:
End of explanation
"""
class Test:
def __init__(self, a, b=2, c=3): store_attr() # `store_attr` was discussed previously
Test(1)
"""
Explanation: For more information about compose, read the docs.
A more useful <code>repr</code>
In python, __repr__ helps you get information about an object for logging and debugging. Below is what you get by default when you define a new class. (Note: we are using store_attr, which was discussed earlier).
End of explanation
"""
class Test:
def __init__(self, a, b=2, c=3): store_attr()
__repr__ = basic_repr('a,b,c')
Test(2)
"""
Explanation: We can use basic_repr to quickly give us a more sensible default:
End of explanation
"""
class MyClass(int): pass
@patch
def func(self:MyClass, a): return self+a
mc = MyClass(3)
"""
Explanation: Monkey Patching With A Decorator
It can be convenient to monkey patch with a decorator, which is especially helpful when you want to patch an external library you are importing. We can use the decorator @patch from fastcore.foundation along with type hints like so:
End of explanation
"""
mc.func(10)
"""
Explanation: Now, MyClass has an additional method named func:
End of explanation
"""
from fastcore.utils import *
from pathlib import Path
p = Path('.')
p.ls() # you don't get this with vanilla Pathlib.Path!!
"""
Explanation: Still not convinced? I'll show you another example of this kind of patching in the next section.
A better pathlib.Path
When you see these extensions to pathlib.path you won't ever use vanilla pathlib again! A number of additional methods have been added to pathlib, such as:
Path.readlines: same as with open('somefile', 'r') as f: f.readlines()
Path.read: same as with open('somefile', 'r') as f: f.read()
Path.save: saves file as pickle
Path.load: loads pickle file
Path.ls: shows the contents of the path as a list.
etc.
Read more about this here. Here is a demonstration of ls:
End of explanation
"""
@patch
def fun(self:Path): return "This is fun!"
p.fun()
"""
Explanation: Wait! What's going on here? We just imported pathlib.Path - why are we getting this new functionality? That's because we imported the fastcore.utils module, which patches pathlib.Path via the @patch decorator discussed earlier. Just to drive the point home on why the @patch decorator is useful, I'll go ahead and add another method to Path right now:
End of explanation
"""
arr=np.array([5,4,3,2,1])
f = lambda a: a.sum()
assert f(arr) == 15
"""
Explanation: That is magical, right? I know! That's why I'm writing about it!
An Even More Concise Way To Create Lambdas
Self, with an uppercase S, is an even more concise way to create lambdas that are calling methods on an object. For example, let's create a lambda for taking the sum of a Numpy array:
End of explanation
"""
f = Self.sum()
assert f(arr) == 15
"""
Explanation: You can use Self in the same way:
End of explanation
"""
import pandas as pd
df=pd.DataFrame({'Some Column': ['a', 'a', 'b', 'b', ],
'Another Column': [5, 7, 50, 70]})
f = Self.groupby('Some Column').mean()
f(df)
"""
Explanation: Let's create a lambda that does a groupby and mean of a Pandas dataframe:
End of explanation
"""
from fastcore.imports import in_notebook, in_colab, in_ipython
in_notebook(), in_colab(), in_ipython()
"""
Explanation: Read more about Self in the docs.
Notebook Functions
These are simple but handy, and allow you to know whether or not code is executing in a Jupyter Notebook, Colab, or an IPython shell:
End of explanation
"""
L(1,2,3)
"""
Explanation: This is useful if you are displaying certain types of visualizations, progress bars or animations in your code that you may want to modify or toggle depending on the environment.
A Drop-In Replacement For List
You might be pretty happy with Python's list. This is one of those situations where you don't know you need a better list until someone shows one to you. Enter L, a list-like object with many extra goodies.
The best way I can describe L is to pretend that list and numpy had a pretty baby:
define a list (check out the nice __repr__ that shows the length of the list!)
End of explanation
"""
p = L.range(20).shuffle()
p
"""
Explanation: Shuffle a list:
End of explanation
"""
p[2,4,6]
"""
Explanation: Index into a list:
End of explanation
"""
1 + L(2,3,4)
"""
Explanation: L has sensible defaults, for example adding an element to a list:
End of explanation
"""
|
kdestasio/online_brain_intensive | nipype_tutorial/notebooks/basic_iteration.ipynb | gpl-2.0 | from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth
# Initiate a skull stripping Node with BET
skullstrip = Node(BET(mask=True,
in_file='/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'),
name="skullstrip")
"""
Explanation: <img src="../static/images/iterables.png" width="240">
Iterables
Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has a mechanism called iterables.
The main homepage has a nice section about MapNode and iterables if you want to learn more. Also, if you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out synchronize and intersource.
For example, let's assume we have a node (A) that does simple skull stripping, followed by a node (B) that does isotropic smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 4mm, 8mm and 16mm.
End of explanation
"""
isosmooth = Node(IsotropicSmooth(), name='iso_smooth')
"""
Explanation: Create a smoothing Node with IsotropicSmooth
End of explanation
"""
isosmooth.iterables = ("fwhm", [4, 8, 16])
"""
Explanation: Now, to use iterables and therefore smooth with different fwhm is as simple as that:
End of explanation
"""
# Create the workflow
wf = Workflow(name="smoothflow")
wf.base_dir = "/output"
wf.connect(skullstrip, 'out_file', isosmooth, 'in_file')
# Run it in parallel (one core for each smoothing kernel)
wf.run('MultiProc', plugin_args={'n_procs': 3})
"""
Explanation: And to wrap it up. We need to create a workflow, connect the nodes and finally, can run the workflow in parallel.
End of explanation
"""
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/output/smoothflow/graph_detailed.dot.png')
"""
Explanation: If we visualize the graph with exec, we can see where the parallelization actually takes place.
End of explanation
"""
!tree /output/smoothflow -I '*txt|*pklz|report*|*.json|*js|*.dot|*.html'
"""
Explanation: If you look at the structure in the workflow directory, you can also see that for each smoothing kernel a specific folder was created, e.g. _fwhm_16.
End of explanation
"""
%pylab inline
from nilearn import plotting
plotting.plot_anat(
'/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz', title='original',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/skullstrip/sub-01_ses-test_T1w_brain.nii.gz', title='skullstripped',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_4/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=4',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_8/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=8',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
'/output/smoothflow/_fwhm_16/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=16',
display_mode='z', cut_coords=(-50, -35, -20, -5), annotate=False)
"""
Explanation: Now, let's visualize the results!
End of explanation
"""
# First, let's specify the list of input variables
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-01', 'run-02']
fwhm_widths = [4, 8]
"""
Explanation: IdentityInterface (special use case of iterables)
A special use case of iterables is the IdentityInterface. The IdentityInterface allows you to create Nodes that do simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, let's say you want to run a preprocessing workflow over 5 subjects, each having two runs and applying 2 different smoothing kernels (as is done in the Preprocessing Example). We can do this as follows:
End of explanation
"""
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'session_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('session_id', session_list),
('fwhm_id', fwhm_widths)]
"""
Explanation: Now, we can create the IdentityInterface Node
End of explanation
"""
infosource.outputs
"""
Explanation: That's it. Now, we can connect the output fields of this infosource node like any other node to wherever we want.
End of explanation
"""
|
artfisica/notebooks | september_2018_v-2.0/ATLAS_OpenData_02-simple_python_example_histogram.ipynb | gpl-3.0 | import ROOT
"""
Explanation: <CENTER>
<a href="http://opendata.atlas.cern" class="icons"><img src="../images/opendata-top-transblack.png" style="width:40%"></a>
</CENTER>
A simple introductional notebook to HEP analysis in python
<p> In this notebook you can find an easy set of commands that show the basic computing techniques commonly used in high energy physics (HEP) analyses. It also shows how to create a histogram, fill it and draw it. Moreover, it is an introduction to [ROOT](https://root.cern.ch/) too. At the end you get a plot with the number of leptons.</p>
<CENTER><h1>Simple pyROOT notebook example</h1></CENTER>
The library used is ROOT - a scientific software framework that provides all the functionalities needed to deal with big data processing, statistical analysis, visualisation and storage.
First of all, ROOT is imported to read the files in the .root data format. A .root file consists of a tree having branches and leaves. At this point you could also import further programs that contain other formulas that you may use more often. But here we don't import other programs to keep it simple.
End of explanation
"""
## %jsroot on
"""
Explanation: In order to activate the interactive visualisation of the histogram that is later created we can use the JSROOT magic:
End of explanation
"""
##f = ROOT.TFile.Open("http://opendata.atlas.cern/release/samples/MC/mc_147770.Zee.root")
f = ROOT.TFile.Open("/home/student/datasets/MC/mc_105987.WZ.root")
"""
Explanation: Next we have to open the data that we want to analyze. As described above the data is stored in a *.root file.
End of explanation
"""
canvas = ROOT.TCanvas("Canvas","a first way to plot a variable",800,600)
"""
Explanation: After the data is opened we create a canvas on which we can draw a histogram. If we do not have a canvas we cannot see our histogram at the end. Its name is Canvas and its header is a first way to plot a variable. The two following arguments define the width and the height of the canvas.
End of explanation
"""
tree = f.Get("mini")
"""
Explanation: The next step is to define a tree named tree to get the data out of the *.root file.
End of explanation
"""
hist = ROOT.TH1F("variable","Example plot: Number of leptons",4,0,4)
"""
Explanation: Now we define a histogram that will later be placed on this canvas. Its name is variable and the header of the histogram is Example plot: Number of leptons. The three following arguments indicate that this histogram contains 4 so called bins which have a range from 0 to 4.
End of explanation
"""
for event in tree:
hist.Fill(tree.lep_n)
print "Done!"
"""
Explanation: The following lines are a loop that goes over the data that is stored in the tree and fills the histogram h that we already defined. In this first notebook we don't do any cuts to keep it simple. Accordingly the loop fills the histogram for each event stored in the tree. After the program has looped over all the data it prints the word Done!.
End of explanation
"""
hist.Draw()
canvas.Draw()
scale = hist.Integral()
hist.Scale(1/scale)
hist.Draw()
canvas.Draw()
"""
Explanation: After filling the histogram we want to see the results of the analysis. First we draw the histogram on the canvas and then the canvas on which the histogram lies.
End of explanation
"""
|
turi-code/tutorials | notebooks/datapipeline_recsys_intro.ipynb | apache-2.0 | import graphlab
"""
Explanation: Making batch recommendations using GraphLab Create
In this notebook we will show a complete recommender system implemented using GraphLab's deployment tools. This recommender example is common in many batch scenarios, where a new recommender is trained on a periodic basis, with the generated recommendations persisted to a relational database used by the web application.
The data we will use in this notebook is the same as the Building a Recommender with Ratings Data notebook, but without the exploration and prototyping parts.
The pipeline will contain the following tasks:
Clean and transform data
Train a Recommender model
Generate Recommendations for users
Persist Recommendations to a MySQL database
Each of these tasks will be defined as a function and executed as a Job using GraphLab. And finally, we will cover how to Run and monitor these pipelines. Remember, when using GraphLab Data Pipelines, the Tasks and Jobs created are managed objects, so they must have unique names.
This notebook uses GraphLab Create 1.3.
End of explanation
"""
def clean_data(path):
import graphlab as gl
sf = gl.SFrame.read_csv(path, delimiter='\t')
sf['rating'] = sf['rating'].astype(int)
sf = sf.dropna()
sf.rename({'user':'user_id', 'movie':'movie_id'})
# To simplify this example, only keep 0.1% of the number of rows from the input data
sf = sf.sample(0.001)
return sf
"""
Explanation: Clean the data
The first task in this pipeline will take data, clean it, and transform it into an SFrame. In this task, the raw data is read using graphlab.SFrame.read_csv, with the file path provided as a parameter to the Task. Once the data is loaded into an SFrame, we clean it by calling dropna() on the SFrame. The code that will run when the task is executed is:
End of explanation
"""
def train_model(data):
import graphlab as gl
model = gl.recommender.create(data, user_id='user_id', item_id='movie_id', target='rating')
return model
"""
Explanation: Train the model
Now that the data is cleaned and ready as an SFrame, we need to train a model in this recommendation system. To train the model, we need the SFrame created in the previous Task.
End of explanation
"""
def gen_recs(model, data):
recs = model.recommend(data['user_id'])
return recs
"""
Explanation: Generate Recommendations
With the previous task there is now a trained model that we should use for generating recommendations. With a Task now specified that trains a model, it can be improved independently from the task that generates recommendations from that model. To generate recommendations we need the trained model to use, and the users needing recommendations.
Here is the code for generating recommendations from a trained model:
End of explanation
"""
def my_batch_job(path):
data = clean_data(path)
model = train_model(data)
recs = gen_recs(model, data)
return recs
job = graphlab.deploy.job.create(my_batch_job,
path = 'https://static.turi.com/datasets/movie_ratings/sample.small')
"""
Explanation: Running and Monitoring this recommender
Now that the tasks are defined for this pipeline, let's compose them together to create a Job. Using the late-binding feature of the Data Pipelines framework, the parameters, inputs, and outputs that have not been specified with the Task can be specified at runtime. We will use this feature to specify the database parameters for the 'persist' task, and then raw data location for the 'clean' task.
Create a Job
End of explanation
"""
job.get_status()
"""
Explanation: The job is started asynchronously in the background, and we can query for its status calling get_status on the Job instance returned:
End of explanation
"""
recs = job.get_results() # Blocking call which waits for the job to complete.
"""
Explanation: If you don't want to wait for the job to complete, you can use the get_results function, which waits for the job to complete before you get the results.
End of explanation
"""
print job
"""
Explanation: To see more information about the job, print the job object:
End of explanation
"""
graphlab.canvas.set_target('ipynb') # show Canvas inline to IPython Notebook
recs.show()
"""
Explanation: Let us try and visualize the recommendations.
End of explanation
"""
@graphlab.deploy.required_packages(['mysql-connector-python'])
def persist_to_db(recs, dbhost, dbuser, dbpass, dbport, dbtable, dbname):
import mysql.connector
from mysql.connector import errorcode
conn = mysql.connector.connect(host=dbhost, user=dbuser, password=dbpass, port=dbport)
conn.database = dbname
cursor = conn.cursor()
# this example expects the table to be empty, minor changes here if you want to
# update existing users' recommendations instead.
add_row_sql = ("INSERT INTO " + dbtable + " (user_id, movie_id, score, rank) "
"VALUES (%(user_id)s, %(movie_id)s, %(score)s, %(rank)s)")
print "Begin - Writing recommendations to DB...."
for row in recs:
cursor.execute(add_row_sql, row)
print "End - Writing recommendations to DB...."
# commit recommendations to database
conn.commit()
"""
Explanation: Persist Recommendations
Now that recommendations have been generated, the final step in this pipeline is to save them to a relational database. The main application queries this database for user recommendations as users are interacting with the application. For this task, we will use MySQL as an example, but that can easily be substituted with a different database.
The DB table needed to run this example looks like the following:
+----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| user_id | varchar(50) | NO | | NULL | |
| movie_id | varchar(50) | NO | | NULL | |
| score | float | NO | | NULL | |
| rank | int(8) | NO | | NULL | |
+----------+-------------+------+-----+---------+-------+
To create a table in MySQL with this schema:
CREATE TABLE recommendations (user_id VARCHAR(50),
movie_id VARCHAR(50), score FLOAT, rank INT(8));
End of explanation
"""
# install the mysql-connector-python package locally, if not running from a virtualenv then sudo may be required
!pip install --allow-external mysql-connector-python mysql-connector-python
"""
Explanation: Note: An important note about this Task is that it requires the mysql-connector-python package, which is not in standard Python. Using GraphLab Create, specifying that this package is required is easily done in the Task definition. When running this task in a remote environment (EC2 or Hadoop) the framework will make sure this Python package is installed prior to execution.
In order to run this pipeline locally, please install the mysql-connector-python package on your machine.
End of explanation
"""
job = graphlab.deploy.job.create(persist_to_db,
recs = recs,
dbhost = '10.10.2.2', # change these db params appropriately
dbuser = 'test',
dbpass = 'secret',
dbname = 'users',
dbport = 3306,
dbtable = 'recommendations')
results = job.get_results()
"""
Explanation: Save Recommendations to Database
Note: Obviously change the following database parameters to ones that match the database you are connecting to. Also, remember to install the mysql-python-connector package on your machine before running this job.
End of explanation
"""
ec2 = graphlab.deploy.Ec2Config(aws_access_key_id='<key>',
aws_secret_key='<secret>')
c = graphlab.deploy.ec2_cluster.create(name='ec2cluster',
s3_path='s3://my_bucket',
ec2_config=ec2)
"""
Explanation: The job is now 'Completed'.
Running in EC2 or Hadoop
Data Pipelines also supports running the same pipeline in EC2 or Hadoop YARN clusters (CDH5). In order to run this pipeline in those environments, simply add an environment parameter to graphlab.deploy.job.create API. No code needs to change, and the GraphLab Data Pipelines framework takes care of installing and configuring what is needed to run this pipeline in the specified environment.
To create an EC2 environment:
End of explanation
"""
c = graphlab.deploy.hadoop_cluster.create(name='hd',
turi_dist_path='hdfs://some.domain.com/user/name/dd-deployment',
hadoop_conf_dir='~/yarn-config')
"""
Explanation: To create a Hadoop environment:
End of explanation
"""
|
google-research/google-research | group_agnostic_fairness/data_utils/CreateCompasDatasetFiles.ipynb | apache-2.0 | from __future__ import division
import pandas as pd
import numpy as np
import json
import os,sys
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
"""
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
End of explanation
"""
pd.options.display.float_format = '{:,.2f}'.format
dataset_base_dir = './group_agnostic_fairness/data/compas/'
dataset_file_name = 'compas-scores-two-years.csv'
"""
Explanation: Overview
Pre-processes COMPAS dataset:
Download the COMPAS dataset from:
https://github.com/propublica/compas-analysis/blob/master/compas-scores-two-years.csv
and save it in the ./group_agnostic_fairness/data/compas folder.
Input: ./group_agnostic_fairness/data/compas/compas-scores-two-years.csv
Outputs: train.csv, test.csv, mean_std.json, vocabulary.json, IPS_exampleweights_with_label.json, IPS_exampleweights_without_label.json
End of explanation
"""
file_path = os.path.join(dataset_base_dir,dataset_file_name)
with open(file_path, "r") as file_name:
temp_df = pd.read_csv(file_name)
# Columns of interest
columns = ['juv_fel_count', 'juv_misd_count', 'juv_other_count', 'priors_count',
'age',
'c_charge_degree',
'c_charge_desc',
'age_cat',
'sex', 'race', 'is_recid']
target_variable = 'is_recid'
target_value = 'Yes'
# Drop duplicates
temp_df = temp_df[['id']+columns].drop_duplicates()
df = temp_df[columns].copy()
# Convert columns of type ``object`` to ``category``
df = pd.concat([
df.select_dtypes(include=[], exclude=['object']),
df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category')
], axis=1).reindex(columns=df.columns)  # reindex_axis was removed in newer pandas
# Binarize target_variable
df['is_recid'] = df.apply(lambda x: 'Yes' if x['is_recid']==1.0 else 'No', axis=1).astype('category')
# Process protected-column values
race_dict = {'African-American':'Black','Caucasian':'White'}
df['race'] = df.apply(lambda x: race_dict[x['race']] if x['race'] in race_dict.keys() else 'Other', axis=1).astype('category')
df.head()
"""
Explanation: Processing original dataset
End of explanation
"""
train_df, test_df = train_test_split(df, test_size=0.30, random_state=42)
output_file_path = os.path.join(dataset_base_dir,'train.csv')
with open(output_file_path, mode="w") as output_file:
train_df.to_csv(output_file,index=False,columns=columns,header=False)
output_file.close()
output_file_path = os.path.join(dataset_base_dir,'test.csv')
with open(output_file_path, mode="w") as output_file:
test_df.to_csv(output_file,index=False,columns=columns,header=False)
output_file.close()
"""
Explanation: Shuffle and Split into Train (70%) and Test set (30%)
End of explanation
"""
IPS_example_weights_without_label = {
0: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex != 'Female')])), # 00: White Male
1: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex == 'Female')])), # 01: White Female
2: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex != 'Female')])), # 10: Black Male
3: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex == 'Female')])) # 11: Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_without_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_without_label))
output_file.close()
print(IPS_example_weights_without_label)
IPS_example_weights_with_label = {
0: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 000: Negative White Male
1: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 001: Negative White Female
2: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 010: Negative Black Male
3: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 011: Negative Black Female
4: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 100: Positive White Male
5: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 101: Positive White Female
6: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 110: Positive Black Male
7: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 111: Positive Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_with_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_with_label))
output_file.close()
print(IPS_example_weights_with_label)
"""
Explanation: Compute inverse propensity weights for each subgroup, and write them to the directory.
IPS_example_weights_with_label.json: json dictionary of the format
{subgroup_id : inverse_propensity_score,...}. Used by the IPS_reweighting_model approach.
End of explanation
"""
cat_cols = train_df.select_dtypes(include='category').columns
vocab_dict = {}
for col in cat_cols:
vocab_dict[col] = list(set(train_df[col].cat.categories))
output_file_path = os.path.join(dataset_base_dir,'vocabulary.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(vocab_dict))
output_file.close()
print(vocab_dict)
"""
Explanation: Construct vocabulary.json, and write to directory.
vocabulary.json: json dictionary of the format {feature_name: [feature_vocabulary]}, containing vocabulary for categorical features.
End of explanation
"""
temp_dict = train_df.describe().to_dict()
mean_std_dict = {}
for key, value in temp_dict.items():
mean_std_dict[key] = [value['mean'],value['std']]
output_file_path = os.path.join(dataset_base_dir,'mean_std.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(mean_std_dict))
output_file.close()
print(mean_std_dict)
"""
Explanation: Construct mean_std.json, and write to directory
mean_std.json: json dictionary of the format {feature_name: [mean, std]},
containing mean and std for numerical features.
End of explanation
"""
|
brinkar/real-world-machine-learning | Chapter 2 - Data Processing.ipynb | mit | %pylab inline
"""
Explanation: Chapter 2: Processing data for machine learning
To simplify the code examples in these notebooks, we populate the namespace with functions from numpy and matplotlib:
End of explanation
"""
cat_data = array(['male', 'female', 'male', 'male', 'female', 'male', 'female', 'female'])
def cat_to_num(data):
categories = unique(data)
features = []
for cat in categories:
binary = (data == cat)
features.append(binary.astype("int"))
return features
cat_to_num(cat_data)
"""
Explanation: Converting categorical data to numerical features
End of explanation
"""
cabin_data = array(["C65", "", "E36", "C54", "B57 B59 B63 B66"])
def cabin_features(data):
features = []
for cabin in data:
cabins = cabin.split(" ")
n_cabins = len(cabins)
# First char is the cabin_char
try:
cabin_char = cabins[0][0]
except IndexError:
cabin_char = "X"
n_cabins = 0
# The rest is the cabin number
try:
cabin_num = int(cabins[0][1:])
except:
cabin_num = -1
# Add 3 features for each passenger
features.append( [cabin_char, cabin_num, n_cabins] )
return features
cabin_features(cabin_data)
"""
Explanation: Simple feature engineering of the Titanic dataset
End of explanation
"""
num_data = array([1, 10, 0.5, 43, 0.12, 8])
def normalize_feature(data, f_min=-1, f_max=1):
d_min, d_max = min(data), max(data)
factor = (f_max - f_min) / (d_max - d_min)
normalized = f_min + (data - d_min)*factor  # shift by d_min so the minimum maps to f_min
return normalized, factor
normalize_feature(num_data)
"""
Explanation: Feature normalization
End of explanation
"""
|
calroc/joypy | docs/1. Basic Use of Joy in a Notebook.ipynb | gpl-3.0 | from joy.joy import run
from joy.library import initialize
from joy.utils.stack import stack_to_string
from joy.utils.pretty_print import TracePrinter
"""
Explanation: Preamble
First, import what we need.
End of explanation
"""
D = initialize()
S = ()
def J(text):
print stack_to_string(run(text, S, D)[0])
def V(text):
tp = TracePrinter()
run(text, S, D, tp.viewer)
tp.print_()
"""
Explanation: Define a dictionary, an initial stack, and two helper functions to run Joy code and print results for us.
End of explanation
"""
J('23 18 +')
J('45 30 gcd')
"""
Explanation: Run some simple programs
End of explanation
"""
V('23 18 +')
V('45 30 gcd')
"""
Explanation: With Viewer
A viewer records each step of the evaluation of a Joy program. The TracePrinter has a facility for printing out a trace of the evaluation, one line per step. Each step is aligned to the current interpreter position, signified by a period separating the stack on the left from the pending expression ("continuation") on the right. I find these traces beautiful, like a kind of art.
End of explanation
"""
V('96 27 gcd')
"""
Explanation: Here's a longer trace.
End of explanation
"""
|
msampathkumar/kaggle-quora-tensorflow | references/intro-to-rnns/Anna KaRNNa.ipynb | apache-2.0 | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
chars[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
np.max(chars)+1
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
"""
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
train_x[:,:50]
"""
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
    if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
    # This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
# Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
checkpoint = "checkpoints/____.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
root-mirror/training | SoftwareCarpentry/04-histograms-and-graphs.ipynb | gpl-2.0 | import ROOT
h = ROOT.TH1D(name="h", title="My histo", nbinsx=100, xlow=-5, xup=5)
h.FillRandom("gaus", ntimes=5000)
"""
Explanation: ROOT histograms
Histogram class documentation
ROOT has powerful histogram objects that, among other features, let you produce complex plots and perform fits of arbitrary functions.
The letters encode dimensionality and bin-content type: TH1F is a 1D histogram whose bin contents are stored as floating-point values, TH2I is a 2D histogram with integer bin contents, etc.
<center><img src="images/examplehisto.png"></center>
To have something to play with, let's quickly fill a histogram with 5000 normally distributed values:
End of explanation
"""
%jsroot on
c = ROOT.TCanvas()
#h.SetLineColor(ROOT.kBlue)
#h.SetFillColor(ROOT.kBlue)
#h.GetXaxis().SetTitle("value")
#h.GetYaxis().SetTitle("count")
#h.SetTitle("My histo with latex: p_{t}, #eta, #phi")
h.Draw() # draw the histogram on the canvas
c.Draw() # draw the canvas on the screen
"""
Explanation: To check the full documentation you can always refer to https://root.cern/doc/master (and then switch to the documentation for your particular ROOT version with the drop-down menu at the top of the page).
Drawing a histogram
Drawing options documentation
The link above contains the documentation for the histogram drawing options.
In a notebook, as usual, we want to also use the %jsroot on magic and also explicitly draw a TCanvas.
End of explanation
"""
f2 = ROOT.TF2("f2", "sin(x*x - y*y)", xmin=-2, xmax=2, ymin=-2, ymax=2)
c = ROOT.TCanvas()
f2.Draw("surf1") # to get a surface instead of the default contour plot
c.Draw()
"""
Explanation: ROOT functions
The type that represents an arbitrary one-dimensional mathematical function in ROOT is TF1.<br>
Similarly, TF2 and TF3 represent 2-dimensional and 3-dimensional functions.
As an example, let's define and plot a simple surface:
End of explanation
"""
%%cpp
double gaussian(double *x, double *par) {
return par[0]*TMath::Exp(-TMath::Power(x[0] - par[1], 2.) / 2.)
/ TMath::Sqrt(2 * TMath::Pi());
}
"""
Explanation: Fitting a histogram
Let's see how to perform simple histogram fits of arbitrary functions. We will need a TF1 that represents the function we want to use for the fit.
This time we define our TF1 as a C++ function (note the usage of the %%cpp magic to define some C++ inline). Here we define a simple gaussian with scale and mean parameters (par[0] and par[1] respectively):
End of explanation
"""
fitFunc = ROOT.TF1("fitFunc", ROOT.gaussian, xmin=-5, xmax=5, npar=2)
"""
Explanation: The function signature, that takes an array of coordinates and an array of parameters as inputs, is the generic signature of functions that can be used to construct a TF1 object:
End of explanation
"""
res = h.Fit(fitFunc)
"""
Explanation: Now we fit our h histogram with fitFunc:
End of explanation
"""
c2 = ROOT.TCanvas()
h.Draw()
c2.Draw()
"""
Explanation: Drawing the histogram now automatically also shows the fitted function:
End of explanation
"""
res = h.Fit("gaus")
c3 = ROOT.TCanvas()
h.Draw()
c3.Draw()
"""
Explanation: For the particular case of a gaussian fit, we could also have used the built-in "gaus" function, as we did when we called FillRandom (for the full list of supported expressions see here):
End of explanation
"""
g = ROOT.TGraph()
for x in range(-20, 21):
y = -x*x
g.AddPoint(x, y)
c4 = ROOT.TCanvas()
g.SetMarkerStyle(7)
g.SetLineColor(ROOT.kBlue)
g.SetTitle("My graph")
g.Draw()
c4.Draw()
"""
Explanation: For more complex binned and unbinned likelihood fits, check out RooFit, a powerful data modelling framework integrated in ROOT.
ROOT graphs
TGraph is a type useful for scatter plots.
Their drawing options are documented here.
Like for histograms, the aspect of TGraphs can be greatly customized, they can be fitted with custom functions, etc.
End of explanation
"""
c5 = ROOT.TCanvas()
g.SetTitle("My graph")
g.SetFillColor(ROOT.kOrange + 1) # base colors can be tweaked by adding/subtracting values to them
g.Draw("AB1")
c5.Draw()
"""
Explanation: The same graph can be displayed as a bar plot:
End of explanation
"""
|
samuxiii/notebooks | titanic/Titanic Survival Kaggle.ipynb | apache-2.0 | import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
#load the files
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
data = pd.concat([train, test]).reset_index(drop=True)
#size of training dataset
train_samples = train.shape[0]
#print some of them
data.head()
#show the data types
data.info()
"""
Explanation: Titanic Survival
*Note: the dataset has been obtained from kaggle.com*
Loading Data
To begin with, the train and test datasets are loaded with pandas and concatenated into a single dataset. That will make it easier to work with the information.
End of explanation
"""
#heatmap
#fig = plt.figure(figsize=(9, 7))
sns.heatmap(data.corr(), annot=True);
"""
Explanation: Data Analysis
End of explanation
"""
data.groupby(data.Survived).Survived.count()
"""
Explanation: At first sight, the features most correlated with 'Survived' are 'Pclass' and 'Fare'.
End of explanation
"""
#Dropping useless features
data = data.drop(['Name', 'PassengerId', 'Ticket'], axis=1)
data.head(10)
"""
Explanation: There are more non-survivors than survivors, so the classes are imbalanced and we will have to deal with it.
Data Engineering
Dropping
There are a few features that can be removed: things like the ticket, the passenger ID or the names should not be relevant, at least not for predicting whether a passenger survives.
End of explanation
"""
data.Cabin.head(5)
# Cabin
data.Cabin.fillna('0', inplace=True)
# Map the deck letter (first character of the cabin) to a number;
# unknown cabins (the '0' placeholder) stay 0.
deck_map = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7, 'T': 8}
data.Cabin = data.Cabin.str[0].map(deck_map).fillna(0).astype(int)
"""
Explanation: Cabin
The cabin is formed by a letter with numbers and some cases there are more than one. So we can extract the letter to know the importance and localization of the passenger inside the ship. Here you have the real deck distribution.
Empty values will be set to zero.
End of explanation
"""
data[data.Survived==1].groupby(data.Cabin).Cabin.hist();
data[data.Survived==0].groupby(data.Cabin).Cabin.hist();
data[data.Cabin==0].Cabin.count()/data.Cabin.count()
"""
Explanation: Now we can show how this feature is related to the survival in the train dataset.
End of explanation
"""
data.loc[data.Cabin != 0, 'Cabin'] = 1
sns.heatmap(data.corr(), annot=True);
"""
Explanation: Another problem: 77% of the cabins are unknown. But from the previous graphs we can assume that a passenger was likely to die if the cabin is unknown.
So we can transform Cabin into a binary feature (cabin known or not).
End of explanation
"""
g = sns.factorplot(x="Embarked",y="Survived",kind="bar",data=data)
# Embarked
data.loc[data.Embarked == 'C', 'Embarked'] = 1
data.loc[data.Embarked == 'Q', 'Embarked'] = 2
data.loc[data.Embarked == 'S', 'Embarked'] = 3
data[data.Embarked.isna()]
# They are females of the first class and survivors -> assumption: they embarked in C
data.Embarked.fillna(1, inplace=True)
"""
Explanation: Embarked
End of explanation
"""
#New column to know if the passenger has family on board
def family(size):
if size == 0:
return "alone"
elif size < 5:
return "medium"
else:
return "large"
data['FamilySize'] = (data['SibSp']+data['Parch']).apply(family)
data = data.drop(['SibSp', 'Parch'], axis=1)
data.head()
"""
Explanation: Family Size
Obviously the family size is quite important because it can directly affect whether a person survived or not. The dataset has two features related to this, 'SibSp' and 'Parch'. The first one means 'siblings and spouse' and the second one 'parents and children'. Both features are numeric, so they can be combined into a new one that represents the size of the family: it is only necessary to sum them and assign the corresponding size. The following sizes are used: 'alone', 'medium' (1-4) and 'large' (more than 4).
After that, the original features will be deleted.
End of explanation
"""
cond = (data['Sex']=='female') & (data['Pclass']==3)
data.groupby(['Survived','Sex','Pclass'])['Age'].mean()
data[cond].groupby(['Sex','Pclass'])['Age'].mean()
"""
Explanation: Filling 'Age' NaN with Mean
Train and test datasets have many rows with empty values. In order to avoid that, the mean of every feature will be taken and used to fill in the blanks.
End of explanation
"""
def getAge(row):
surv = row.Survived
sex = row.Sex
pclass = row.Pclass
if surv==0 or surv==1:
condition = (data['Survived']==surv) & (data['Sex']==sex) & (data['Pclass']==pclass)
df_mean = data[condition].groupby(['Survived','Sex','Pclass'])['Age'].mean()
else:
condition = (data['Sex']==sex) & (data['Pclass']==pclass)
df_mean = data[condition].groupby(['Sex','Pclass'])['Age'].mean()
#print("surv: {}, sex: {}, class: {} -> age (mean): {}".format(surv, sex, pclass, df_mean.mean()))
return df_mean.mean()
data['Age'] = data['Age'].fillna(data.apply(getAge, axis=1))
len(data['Age']) - data['Age'].count()
"""
Explanation: In the case of age it is not that easy. There could be some correlation between the class the passenger travels in, their gender, and whether they survived, so it makes sense to take these three features into account when estimating the most likely age.
End of explanation
"""
data.Fare.hist(bins=50)
d1 = data["Fare"].map(lambda i: np.log(i) if i > 0 else 0)
d1.hist(bins=50)
"""
Explanation: 'Fare' Distribution
Plotting the 'Fare' distribution, we observe that the feature is skewed to the right, so we can handle it by applying the log to all the values. This works when the minimum of the range is 0; otherwise the log of negative numbers would be a problem.
End of explanation
"""
data['Fare'] = data['Fare'].fillna(data['Fare'].mean())
"""
Explanation: Filling 'Fare' NaN with Mean
End of explanation
"""
#define age by ranges
def getAgeRange(age):
if age < 5:
return "child"
elif age < 20:
return "young"
elif age < 50:
return "adult"
else:
return "old"
data['Age'] = data['Age'].apply(getAgeRange)
# Age distribution
data.groupby(['Age'])['Survived'].describe()
"""
Explanation: Age Range: Grouping
Similar to the engineered 'family size' feature, the age can be grouped into categories. The possible classes are going to be: child, young, adult and old.
End of explanation
"""
#Transform categorical to dummies
data = pd.get_dummies(data)
"""
Explanation: Getting Feature Dummies
The features that are categorical have to be converted to dummies. In this way we'll have new "numerical" features.
End of explanation
"""
from sklearn.covariance import EllipticEnvelope
def idxAnomalies(X):
ee = EllipticEnvelope(contamination=0.05,
assume_centered=True,
random_state=13)
ee.fit(X)
pred = ee.predict(X)
return [index[0] for index, x in np.ndenumerate(pred) if x != 1]
"""
Explanation: Function to Remove Anomalies
End of explanation
"""
#finding NaN
data.columns[data.isnull().any()].tolist()
"""
Explanation: Checking NaN
Only the test dataset should have NaN values for the 'Survived' column.
End of explanation
"""
data.describe().T
"""
Explanation: Normalizing
The ranges of the numerical features are probably not the same, and this could produce problems in our results. A proper way to get good calculations is to normalize all the features.
In this case we're going to use the minmax scaler because it transforms the range of the features to values between 0 and 1.
End of explanation
"""
import matplotlib.pyplot as plt
data.head(15).plot()
plt.show()
"""
Explanation: Avoiding the dummy features, we see 'pclass' is not moving between a range of (0,1).
End of explanation
"""
#Squeeze the data to [0,1]
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler()
data[['Pclass']] = scaler.fit_transform(data[['Pclass']])
data[['Fare']] = scaler.fit_transform(data[['Fare']])
data[['Cabin']] = scaler.fit_transform(data[['Cabin']])
data[['Embarked']] = scaler.fit_transform(data[['Embarked']])
print("Train shape: {}".format(data.shape))
data.head(15).plot()
plt.show()
data.describe().T
"""
Explanation: After scaling, all the features lie in the same range. This speeds up the calculations, since small numbers of comparable magnitude are easier to work with (better performance).
The picture below shows the feature ranges.
End of explanation
"""
from sklearn.model_selection import StratifiedKFold
y = np.array(data['Survived'])
X = np.array(data.drop('Survived', axis=1))
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
# Remove anomalies only in train dataset
idx_anomalies = idxAnomalies(X_train)
X_train = np.delete(X_train, idx_anomalies, axis=0)
y_train = np.delete(y_train, idx_anomalies, axis=0)
print("Shape train: {}".format(X_train.shape))
print("Shape test: {}".format(X_test.shape))
#print(y_train[0:1])
#print(X_train[0:1].tolist())
kf = StratifiedKFold(n_splits=3, random_state=42, shuffle=True)
print(kf)
"""
Explanation: Splitting the data to train and test
As a good practice, we're going to split the data into two different datasets, training and testing. Using the number of training samples (saved at the beginning), we are able to split it.
Besides, we'll use the k-fold method to get different batches of the data (it's configured with 3 splits).
StratifiedKFold training set
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
# Cross validate model with Kfold stratified cross val
kfold = StratifiedKFold(n_splits=10)
rf = RandomForestClassifier(random_state=42)
rf_param_grid = {"max_depth": [3, 5, 8],
"min_samples_split": [3, 10, 20],
"min_samples_leaf": [3, 10, 20],
"n_estimators" :[10, 20, 30, 40, 50]}
gsrf = GridSearchCV(rf, param_grid = rf_param_grid, cv=kfold, scoring="accuracy", n_jobs=4, verbose=1)
gsrf.fit(X_train, y_train)
# Best score
gsrf.best_score_
brf = gsrf.best_estimator_
gsrf.best_params_
from sklearn.svm import SVC
svc = SVC(random_state=42, probability=True)
Cs = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
gammas = [0.0001, 0.001, 0.01, 0.1, 1]
svc_param_grid = [{'kernel': ['rbf'], 'gamma': gammas, 'C': Cs}]
gssvc = GridSearchCV(svc, param_grid = svc_param_grid, cv=kfold, scoring="accuracy", n_jobs=4, verbose=1)
gssvc.fit(X_train, y_train)
# Best score
gssvc.best_score_
bsvc = gssvc.best_estimator_
gssvc.best_params_
"""
Explanation: Grid Search
End of explanation
"""
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
eclf = VotingClassifier(estimators=[('rf', brf), ('svc', bsvc)], voting='soft')
epoch = 1
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
epoch += 1
eclf.fit(X_t, y_t)
scores = cross_val_score(eclf, X_v, y_v)
print("Scores({}): {}".format(epoch-1, scores))
"""
Explanation: Voting Ensemble
Although an ensemble with several classifiers (all with the same weight) was configured at first, the accuracy turned out to be higher when only the best-tuned estimators were kept.
For this reason, the ensemble combines just the tuned random forest and the SVC, using soft voting.
End of explanation
"""
from sklearn.model_selection import learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# Plot the learning curves for the ensemble and each base model
plot_learning_curve(eclf, "Voting Ensemble", X_train, y_train, cv=kfold, n_jobs=4)
plot_learning_curve(brf, "Random Forest", X_train, y_train, cv=kfold, n_jobs=4)
plot_learning_curve(bsvc, "SVC", X_train, y_train, cv=kfold, n_jobs=4)
"""
Explanation: Learning Curve
End of explanation
"""
from sklearn.metrics import classification_report,confusion_matrix
predictions = eclf.predict(X_train)
cm = confusion_matrix(y_train, predictions)
print(cm)
plt.matshow(cm)
plt.colorbar()
ax = plt.gca()
ax.set_xlabel('Predicted')
ax.set_ylabel('True')
plt.show()
print(classification_report(y_train, predictions))
"""
Explanation: Post-Analysis
Finally, we print the confusion matrix and the classification report to inspect the outcome.
End of explanation
"""
from sklearn.metrics import roc_curve, auc
# calculate the fpr and tpr for all thresholds of the classification
def plot_roc_curve(y_test, preds):
fpr, tpr, threshold = roc_curve(y_test, preds)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'best')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.title('Receiver Operating Characteristic')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# ROC curves need continuous scores: feed predicted probabilities,
# not hard class labels (labels collapse the curve to one threshold).
probas = eclf.predict_proba(X_train)[:, 1]
plot_roc_curve(y_train, probas)
"""
Explanation: Looking at the ROC curve and the AUC, we can evaluate whether the model is a good classifier.
End of explanation
"""
predictions = gsrf.best_estimator_.predict(X_test)
passengerId = 892
lines = ["PassengerId,Survived"]
for i in range(len(X_test)):
    lines.append("{},{}".format(passengerId, int(predictions[i])))
    passengerId += 1
# Save to file; '\n' is translated to the platform line ending in text
# mode, whereas os.linesep would double the carriage return on Windows.
with open('attempt.txt', 'w') as f:
    f.write("\n".join(lines) + "\n")
"""
Explanation: Get Predictions
Note: the following code is not related to the calculations; it shows how to compose the CSV file required by the Kaggle competition.
End of explanation
"""